
While I was working on a prototype yesterday, I found myself writing a small bit of code that turned out to be so useful, I wondered why I had never done it before. I'd like to show it to you. It'll be an opportunity to explore some recent advances in MATLAB that make useful function syntaxes easy to write.

I wrote a simple function called imageSizeInXY. This function takes a MATLAB Graphics Image object and returns its total width and height in the spatial x-y units of the axes.

function out = imageSizeInXY(im)

...

end

(I'll show the full implementation below.) I was using the function like this.

A = imread("peppers.png");

im = imshow(A, XData = [-1 1], YData = [-1 1]);

axis on

imageSizeInXY(im)

But this function only works if I have captured the image object in a variable. Usually, I call imshow with no output argument, and so I don't have the image object:

imshow(A, XData = [-1 1], YData = [-1 1])

I thought to myself, it would sure be nice if I could just call imageSizeInXY with no input arguments, and it would just automatically tell me about the image that's displayed in the current figure. After thinking about this briefly, I wrote the following:

function out = imageSizeInXY(im)

arguments

im (1,1) matlab.graphics.primitive.Image = findimage

end

...

end

function im = findimage

image_objects = imhandles(imgca);

if isempty(image_objects)

error("No image object found.");

end

im = image_objects(1);

end

The first interesting piece here is the use of the arguments block, a MATLAB language syntax that was introduced in R2019b. The arguments block, and more generally the function argument validation features, help you implement function behavior with helpful error messages and useful defaults while writing very little code.

Let's look more closely at the beginning of the function.

function out = imageSizeInXY(im)

arguments

im (1,1) matlab.graphics.primitive.Image = findimage

end

The arguments block lets us specify constraints and default behavior for the function's input arguments. The "(1,1)" part says that the input argument, im, must have size $1 \times 1$. In other words, it must be a scalar. The next part, "matlab.graphics.primitive.Image," indicates that im is required to be that class (or convertible to it).

MATLAB will enforce these size and type constraints automatically, issuing specific error messages if they are not met.
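To see that validation in action, here is a hypothetical call with an invalid input (the exact error text below is paraphrased and may differ by release):

```matlab
% Passing something that is not an Image object fails the
% class constraint declared in the arguments block:
imageSizeInXY("peppers.png")
% Error using imageSizeInXY
% Invalid argument at position 1. Value must be of type
% matlab.graphics.primitive.Image or be convertible to it.
```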

Note that you can get the full class name of MATLAB graphics objects by calling class:

class(im)

The "= findimage" portion is used to automatically provide a default value for the input im. This can be a constant value or an expression. In this case, findimage is an expression that calls the local function that I have written:

function im = findimage

image_objects = imhandles(imgca);

if isempty(image_objects)

error("No image object found.");

end

im = image_objects(1);

end

The functions imgca and imhandles are Image Processing Toolbox utilities. The function imgca returns the most recently active axes object containing one or more images, and imhandles returns the image objects within it.

The last few lines are just me being careful to handle the cases where there are no image objects to be found, or where there are multiple image objects in the axes.

The full implementation, including the simple size calculation, is below.

function out = imageSizeInXY(im)

arguments

im (1,1) matlab.graphics.primitive.Image = findimage

end

pixel_width = extent(im.XData,size(im.CData,2));

pixel_height = extent(im.YData,size(im.CData,1));

x1 = im.XData(1) - (pixel_width / 2);

x2 = im.XData(end) + (pixel_width / 2);

y1 = im.YData(1) - (pixel_height / 2);

y2 = im.YData(end) + (pixel_height / 2);

out = [(x2 - x1) (y2 - y1)];

end

function im = findimage

image_objects = imhandles(imgca);

if isempty(image_objects)

error("No image object found.");

end

im = image_objects(1);

end

function e = extent(data,num_pixels)

if (num_pixels <= 1)

e = 1;

else

e = (data(2) - data(1)) / (num_pixels - 1);

end

end

Now I don't have to remember to call imshow with an output argument in order to use imageSizeInXY.

If you haven't already explored using function argument validation and arguments blocks for writing your own MATLAB functions, I encourage you to give them a try. It's really worth it!


Today's post is by guest blogger Isaac Bruss. Isaac has been a course developer for MathWorks since 2019. His PhD is in computational physics from UMass Amherst. Isaac has helped launch and support several courses on Coursera, on topics such as robotics, machine learning, and computer vision. Isaac's post is based on a lesson from the Image Processing for Engineering and Science Specialization on Coursera.

In this post, I’ll analyze a video of a container filling with liquid. Now, that might not sound like the most exciting thing in the world, but imagine, for example, you're a quality control engineer responsible for ensuring that the rate of filling bottles is consistent. You need to determine if this rate is constant or varies over time. To answer this question, you’d need to process each individual frame from the video file. But where would you start?

Well, in this case, I started out with the Video Viewer app to become more familiar with the video. This app shows the video's size, frame rate, and total number of frames along the bottom bar. By playing the video and navigating between frames, I quickly got a sense of what I was working with.

With the goal of measuring the liquid within the container, I needed a segmentation function that could separate the liquid from the background. To make sure the algorithm would be robust, I exported a few representative frames to use while creating and testing the segmentation function.
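Exporting representative frames can also be done programmatically. Here is a rough sketch; the frame indices are arbitrary picks, and the file name matches the video used later in this post:

```matlab
% Read a few representative frames by index and save them as
% image files for interactive segmentation work:
v = VideoReader("liquidVideo.mp4");
for idx = [1 150 300]
    frame = read(v, idx);                   % read frame by index
    imwrite(frame, "frame" + idx + ".png")  % save for the app
end
```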

The Color Thresholder App is a powerful tool for interactively segmenting images based on color. This app is particularly useful for images that have distinct color differences between the object of interest and the background. In this case, I used the region selection tool to highlight pixels of interest within the L*a*b* color space. This allowed me to isolate the dark liquid from both the purple background and white foam at the top.

Once I'm satisfied with my thresholding results, I export this segmentation function from the app as a “.m” file named “liquidMask”.

This custom function was developed using only the few frames I exported from the video. But to be confident in the results, I need to ensure that it accurately isolates the dark liquid in all the frames. To do this, I first create a video reader object.

v = VideoReader("liquidVideo.mp4")

Then I loop through the frames using the hasFrame function. Inside the loop, each frame is passed into the custom liquidMask function, and the resulting mask is displayed side-by-side using the montage function.

while hasFrame(v) % Loop through all frames

img = readFrame(v); % Read a frame

bw = liquidMask(img); % Apply the custom segmentation function

montage({img,bw},'BorderSize',[20,20],'BackgroundColor',[0.5,0.5,0.5]);

drawnow

end

The resulting segmented video frames look great! Further morphological operations to better segment the region of interest are possible, but to get a quick example up and running they aren’t needed.

Now I can move on to the final part: calculating the percentage of the container that is filled with water at each frame.

To get started, I need to rewind the video to the beginning using the CurrentTime property of the VideoReader.

v.CurrentTime = 0;

Next, I initialize two variables to store values for each frame: the percentage of the container filled, and the time stamp.

nFrames = v.NumFrames;

percents = zeros(nFrames,1);

times = zeros(nFrames,1);

Now I’m ready to use a for-loop to read through the video, segment each image, and calculate the percentage of the container filled at each frame. The percentage calculation is done by counting the number of pixels that are filled with liquid and dividing it by the total number of pixels.

for i = 1:nFrames % Loop through all frames

img = readFrame(v); % Read a frame

bw = liquidMask(img); % Create the binary mask using the custom function

liquidPart = nnz(bw); % Calculate the number of liquid (true) pixels

perc = liquidPart/numel(bw); % Calculate percentage filled with liquid

time = v.CurrentTime; % Mark the current time

percents(i) = perc; % Save the fill-percentage value

times(i) = time; % Save the time stamp

end

Now that the data has been gathered, I plot the percentage of the container filled versus time to see how quickly the liquid fills the container.

figure

plot(times,percents*100)

title('Percentage of Container Filled with Liquid vs Time')

xlabel('Time (seconds)')

ylabel('Percentage Filled')

Look at that! The fill rate is not consistent. From the plot, I see that the flow slowed down around 8 seconds, increased at 15 seconds, and slowed again at 20 seconds.
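If a rate number per frame is more useful than eyeballing the curve, the percentage data can be differentiated numerically. A sketch, reusing the times and percents vectors from the loop above:

```matlab
% Approximate the instantaneous fill rate (percent per second)
% by numerically differentiating the fill-percentage curve:
rates = gradient(percents*100, times);
figure
plot(times, rates)
xlabel("Time (seconds)")
ylabel("Fill rate (percent/second)")
```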

This post is based on a lesson from the Image Processing for Engineering and Science Specialization on Coursera. There you’ll be able to complete many projects like this one, from segmenting cracks in images of concrete to counting cars in a video of traffic.

___

Thanks, Isaac!


For some shapes, especially ones with a small number of pixels, a commonly-used method for computing circularity often results in a value which is biased high, and which can be greater than 1. In releases prior to R2023a, the function regionprops used this common method. In R2023a, the computation method has been modified to correct for the bias.

For the full story, read on!

Our definition of circularity is:

$c = \frac{4\pi a}{p^2}$

where a is the area of a shape and p is its perimeter. It is a unitless quantity that lies in the range $[0,1]$. A true circle has a circularity of 1. Here are the estimated circularity measurements for some simple shapes.

url = "https://blogs.mathworks.com/steve/files/simple-shapes.png";

A = imread(url);

p = regionprops("table",A,["Circularity" "Centroid"])

imshow(A)

for k = 1:height(p)

text(p.Centroid(k,1),p.Centroid(k,2),string(p.Circularity(k)),...

Color = "yellow",...

BackgroundColor = "blue",...

HorizontalAlignment = "center",...

VerticalAlignment = "middle",...

FontSize = 14)

end

title("Circularity measurements for some simple shapes")
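As a quick sanity check of the formula itself, an ideal (continuous) circle of radius r has area $\pi r^2$ and perimeter $2\pi r$, so the formula gives exactly 1 regardless of r:

```matlab
% Circularity of an ideal circle: c = 4*pi*a/p^2 with
% a = pi*r^2 and p = 2*pi*r, which simplifies to 1.
r = 5;
c = 4*pi*(pi*r^2) / (2*pi*r)^2    % returns 1
```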

Some users have been understandably puzzled to find that, in previous releases, regionprops sometimes returned values greater than 1 for circularity. For example:

url = "https://blogs.mathworks.com/steve/files/small-circle.png";

B = imread(url);

imshow(B)

outlinePixels(size(B))

text(7,7,"1.38",...

Color = "yellow",...

BackgroundColor = "blue",...

HorizontalAlignment = "center",...

VerticalAlignment = "middle",...

FontSize = 14)

title("Circularity as measured in previous releases")

Before going on, I want to pause to note that, while circularity can be a useful measurement, it can also be quite tricky. One can get lost in fractal land when trying to measure perimeter precisely; see Benoit Mandelbrot's famous paper, "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension," described on Wikipedia here. Then, add the difficulties introduced by the fact that we are working with digital images, whose pixels we sometimes think of as points on a discrete grid, and sometimes as squares with unit area. See, for example, MATLAB Answers MVP John D'Errico's commentary on this topic.

In fact, measuring perimeter is what gets the common circularity computation method in trouble. Measuring the perimeter of objects in digital images is usually done by tracing along from one pixel center to the next, and then adding up weights that are determined by the angles between successive segments.

b = bwboundaries(B);

imshow(B)

outlinePixels(size(B))

hold on

plot(b{1}(:,1),b{1}(:,2),Color = [0.850 0.325 0.098],...

Marker = "*")

hold off

A few different schemes have been used to assign the weights in order to minimize systematic bias in the perimeter measurement. The weights used by regionprops are those given in Vossepoel and Smeulders, "Vector Code Probability and Metrication Error in the Representation of Straight Lines of Finite Length," Computer Graphics and Image Processing, vol. 20, pp. 347-364, 1982. See the discussion in Luengo, "Measuring boundary length," Cris' Image Analysis Blog: Theory, Methods, Algorithms, Applications, https://www.crisluengo.net/archives/310/, last accessed 20-Mar-2023.

The cause of the circularity computation bias can be seen in the figure above. The perimeter is being calculated based on a path that connects pixel centers. Some of the pixels, though, lie partially outside the red curve. The perimeter, then, is being calculated along a curve that does not completely contain all of the square pixels; the curve is just a bit too small. Relative to the area computation, then, the perimeter computation is being underestimated. Since perimeter is in the denominator of the circularity formula, the resulting value is too high.

Can we fix the problem by tracing the perimeter path differently? Let's try with a digitized square at a 45-degree angle.

mask = [ ...

0 0 0 0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 1 0 0 0 0 0 0

0 0 0 0 0 1 1 1 0 0 0 0 0

0 0 0 0 1 1 1 1 1 0 0 0 0

0 0 0 1 1 1 1 1 1 1 0 0 0

0 0 1 1 1 1 1 1 1 1 1 0 0

0 1 1 1 1 1 1 1 1 1 1 1 0

0 0 1 1 1 1 1 1 1 1 1 0 0

0 0 0 1 1 1 1 1 1 1 0 0 0

0 0 0 0 1 1 1 1 1 0 0 0 0

0 0 0 0 0 1 1 1 0 0 0 0 0

0 0 0 0 0 0 1 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0 0 0 0 ];

Here is the path traced by regionprops for computing the perimeter.

x = [1 6 11 6 1] + 1;

y = [6 1 6 11 6] + 1;

imshow(mask)

outlinePixels(size(mask))

hold on

plot(x,y,Color = [0.850 0.325 0.098])

hold off

The perimeter of the red square is $4 \cdot 5 \sqrt{2} \approx 28.3$. That would correspond to an area of $(5 \sqrt{2})^2 = 50$. However, the number of white squares is 61. The estimated perimeter and area are inconsistent with each other, which will result in a biased circularity measurement.

Can we fix it by tracing the perimeter along the outer points of the pixels, like this?

x = [1.5 6.5 7.5 12.5 12.5 7.5 6.5 1.5 1.5];

y = [6.5 1.5 1.5 6.5 7.5 12.5 12.5 7.5 6.5];

imshow(mask)

outlinePixels(size(mask))

hold on

plot(x,y,Color = [0.850 0.325 0.098])

hold off

Now the calculated perimeter is going to be too high because it corresponds to an area that is much greater than 61, the number of white pixels.

perim = sum(sqrt(sum(diff([x(:) y(:)]).^2,2)))

For a square, that perimeter corresponds to this area:

area = (perim/4)^2

This time, the estimated perimeter is too high with respect to the estimated area, and so the computed circularity will be biased low.

What if we compute the perimeter by tracing along the pixel edges?

x = [1.5 2.5 2.5 3.5 3.5 4.5 4.5 5.5 5.5 6.5 6.5 ...

7.5 7.5 8.5 8.5 9.5 9.5 10.5 10.5 11.5 11.5 12.5 12.5 ...

11.5 11.5 10.5 10.5 9.5 9.5 8.5 8.5 7.5 7.5 6.5 6.5 ...

5.5 5.5 4.5 4.5 3.5 3.5 2.5 2.5 1.5 1.5];

y = [7.5 7.5 8.5 8.5 9.5 9.5 10.5 10.5 11.5 11.5 ...

12.5 12.5 11.5 11.5 10.5 10.5 9.5 9.5 8.5 8.5 ...

7.5 7.5 6.5 6.5 5.5 5.5 4.5 4.5 3.5 3.5 2.5 2.5 1.5 1.5 ...

2.5 2.5 3.5 3.5 4.5 4.5 5.5 5.5 6.5 6.5 7.5];

imshow(mask)

outlinePixels(size(mask))

hold on

plot(x,y,Color = [0.850 0.325 0.098], LineWidth = 5)

hold off

But now the calculated perimeter is WAY too high with respect to the area, so the computed circularity would be very low.

perim = sum(sqrt(sum(diff([x(:) y(:)]).^2,2)))

After all that, we decided to leave the perimeter tracing method alone. Instead, we came up with a simple approximate model for the circularity bias and then used that model to derive a correction factor.

Consider a circle with radius r. We can model the bias by computing circularity using area that is calculated from r and perimeter that is calculated from $r - 1/2$. The result, $c'$, is the model of the biased circularity.

$c' = \frac{1}{1 - \frac{1}{r} + \frac{0.25}{r^2}} = \frac{1}{\left(1 - \frac{0.5}{r}\right)^2}$

How well does $c'$ match what the old regionprops computation was returning? Here is a comparison performed on digitized circles with a range of radii.

But what should we do for general shapes? After all, if we already know it's a circle, then there's no big need to compute the circularity metric, right?

Here's the strategy: Given the computed area, a, determine an equivalent radius, $r_{\text{eq}}$, which is the radius of a circle with the same area. Then, use $r_{\text{eq}}$ to derive a bias correction factor:

$c_{\text{corrected}} = c\left(1 - \frac{0.5}{r_{\text{eq}}}\right)^2 = c\left(1-\frac{0.5}{\sqrt{a/\pi}}\right)^2$
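In code, the correction is a one-liner. A sketch with made-up numbers; a and c here stand for a measured object area and an uncorrected circularity value, not real measurements:

```matlab
% Apply the equivalent-radius bias correction to a raw
% circularity value (example numbers only):
a = 61;                       % object area in pixels
c = 1.38;                     % biased circularity from 4*pi*a/p^2
r_eq = sqrt(a/pi);            % radius of the circle with area a
c_corrected = c * (1 - 0.5/r_eq)^2
```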

We just made a big assumption there, computing $r_{\text{eq}}$ as if our shape is a circle. Can we really do that? Well, we did a bunch of experiments, using various kinds of shapes, scaled over a large range of sizes, and rotated at various angles. Here are the results for one experiment, using an equilateral triangle with a base rotated off the horizontal axis by 20 degrees. The blue curve is the circularity computed in previous releases; the red curve is the corrected circularity as computed in R2023a, and the horizontal orange line is the theoretical value.

We saw similar curves for a broad range of shapes. For some shapes, with a theoretical circularity value below around 0.2, the correction did not improve the result. On balance, however, we decided that the correction method was worthwhile and that regionprops should be changed to incorporate it.

If you need to get the previously computed value in your code, see the Version History section of the regionprops reference page for instructions.

function outlinePixels(sz)

M = sz(1);

N = sz(2);

for k = 1.5:(N-0.5)

xline(k, Color = [.7 .7 .7]);

end

for k = 1.5:(M-0.5)

yline(k, Color = [.7 .7 .7]);

end

end


I have seen some requests and questions related to identifying objects in a binary image that are touching the image border. Sometimes the question relates to the use of imclearborder, and sometimes the question is about regionprops. Today, I'll show you how to tackle the problem both ways.

I'll be using this binary version of the rice.png sample image from the Image Processing Toolbox.

url = "https://blogs.mathworks.com/steve/files/rice-bw-1.png";

A = imread(url);

imshow(A)

The function imclearborder removes all objects touching the border.

B = imclearborder(A);

imshow(B)

That seems to be the opposite of what we want. We can, however, convert this into the desired result by using the MATLAB element-wise logical operators, such as & (and), | (or), and ~ (not). In words, we want the foreground pixels that are in A and are not in B. As a MATLAB expression, it looks like this:

C = A & ~B;

imshow(C)

The function regionprops can compute all sorts of properties of binary image objects. Here is a simple example that computes the area and centroid of each object in our sample image. I'm using the form of regionprops that returns a table.

props = regionprops("table",A,["Area" "Centroid"])

My technique for finding border-touching objects with regionprops uses the BoundingBox property, so include that property along with any other properties that you want to measure.

props = regionprops("table",A,["Area" "Centroid" "BoundingBox"])

For any particular object, BoundingBox is a four-element vector containing the left, top, width, and height of the bounding box. For example, here is the bounding box of the 20th object:

props.BoundingBox(20,:)

By comparing these values to the size of the image, we can identify which objects touch the image border.

Start by determining the objects that touch specific borders.

left_coordinate = props.BoundingBox(:,1);

props.TouchesLeftBorder = (left_coordinate == 0.5);

top_coordinate = props.BoundingBox(:,2);

props.TouchesTopBorder = (top_coordinate == 0.5);

right_coordinate = left_coordinate + props.BoundingBox(:,3);

bottom_coordinate = top_coordinate + props.BoundingBox(:,4);

[M,N] = size(A);

props.TouchesRightBorder = (right_coordinate == (N+0.5));

props.TouchesBottomBorder = (bottom_coordinate == (M+0.5))

Finally, compute whether objects touch any border using |, the element-wise OR operator.

props.TouchesAnyBorder = props.TouchesLeftBorder | ...

props.TouchesTopBorder | ...

props.TouchesRightBorder | ...

props.TouchesBottomBorder

I'll finish with a quick visual sanity check of the results.

L = bwlabel(A);

L_touches_border = ismember(L,find(props.TouchesAnyBorder));

imshowpair(A,L_touches_border)


Someone recently asked me about the order of objects found by the functions bwlabel, bwconncomp, and regionprops. In this binary image, for example, what accounts for the object order?

The functions bwlabel and bwconncomp both scan for objects in the same direction. They look down each column, starting with the left-most column. This animation illustrates the search order:

Because of this search procedure, the object order depends on the top, left-most pixel in each object. Specifically, the order is a lexicographic sort of $(c,r)$ coordinates, where c and r are the column and row coordinates of the top, left-most pixel.

If you pass a binary image to regionprops, the resulting object order is the same as for bwconncomp, and that is because regionprops calls bwconncomp under the hood.

Sometimes, people who are working with images of text ask to change things so that the search proceeds along rows instead of columns. The motivation is to get the object order to be the same as the order of characters on each line. (Assuming a left-to-right language, that is.) That generally doesn't work well, though. Consider this text fragment:

With a row-wise search, the letter "b" would be found first, followed by "L" and then "t." This animation shows why that happens:

If you want a different sort order, I recommend returning the regionprops output as a table, and then you can use sortrows. Here are a couple of examples.

url = "https://blogs.mathworks.com/steve/files/rice-bw-ul.png";

A = imread(url);

imshow(A)

props = regionprops("table",A,["Centroid" "Area"])

You could sort the objects according to their centroids, with the primary sort in the vertical direction and the secondary sort in the horizontal direction.

props.Centroid_x = props.Centroid(:,1);

props.Centroid_y = props.Centroid(:,2);

props_by_row = sortrows(props,["Centroid_y" "Centroid_x"])

imshow(A)

for k = 1:height(props_by_row)

x = props_by_row.Centroid_x(k);

y = props_by_row.Centroid_y(k);

text(x,y,string(k),Color = "yellow", BackgroundColor = "blue", FontSize = 16, ...

FontWeight = "bold", HorizontalAlignment = "center",...

VerticalAlignment = "middle");

end

For another example, you could sort by area.

props_by_area = sortrows(props,"Area","descend")

imshow(A)

for k = 1:height(props_by_area)

x = props_by_area.Centroid_x(k);

y = props_by_area.Centroid_y(k);

text(x,y,string(k),Color = "yellow", BackgroundColor = "blue", FontSize = 16, ...

FontWeight = "bold", HorizontalAlignment = "center", ...

VerticalAlignment = "middle");

end

title("Objects sorted by area (biggest to smallest)")


In May 2006, I wrote about a technique for computing fast local sums of an image. Today, I want to update that post with additional information about integral image and integral box filtering features in the Image Processing Toolbox.

First, let's review the concept of local sums.

A = magic(7)

The local sum of the $3 \times 3$ neighborhood around (2,3) element of A can be computed this way:

submatrix = A(1:3, 2:4)

local_sum_2_3 = sum(submatrix,"all")

In image processing applications, we are often interested in the local sum divided by the number of elements in the neighborhood, which is 9 in this case:

local_sum_2_3/9

Computing all the local sums for the matrix, divided by the neighborhood size, is the same as performing 2-D filtering with a local mean filter, like this:

B = imfilter(A,ones(3,3)/9);

B(2,3)

This is often called a box filter. Computing an $N \times N$ box filter for an entire $P \times Q$ image would seem to require approximately $PQN^2$ operations. In other words, for a fixed image size, we might expect the box filter computational time to increase in proportion to $N^2$, where N is the local sum window size.

Let's see how long imfilter takes with respect to a changing local sum neighborhood size.

I = rand(3000,4000);

nn = 15:10:205;

imfilter_times = zeros(size(nn));

for k = 1:length(nn)

n = nn(k);

f = @() imfilter(I,ones(n,n)/n^2);

imfilter_times(k) = timeit(f);

end

plot(nn,imfilter_times)

title("Box filter execution time using imfilter")

xlabel("window size")

ylabel("time (s)")

The function imfilter is doing better than expected, with an execution time that increases only linearly with the window size. (That's because imfilter is taking advantage of filter separability, which I have written about previously.)

But we can compute local sums even faster, by using something called an integral image, also known as a summed-area table. Let's go back to the magic square example and use the function integralImage.

Ai = integralImage(A)

Each element of Ai is the cumulative sum of all the elements in A that are above and to the left. Ai(4,3), which is 206, is the sum of all the elements of A(1:3,1:2).

A(1:3,1:2)

sum(ans,"all")

Ai(4,3)

Expressed as a general formula, the integral image is $A_i(m,n) = \sum_{p=1}^{m-1} \sum_{q=1}^{n-1} A(p,q)$.
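That formula is just a two-dimensional cumulative sum with a leading row and column of zeros prepended, so the integralImage result can be cross-checked with cumsum. A sketch (padarray is an Image Processing Toolbox function):

```matlab
% Build the integral image by hand with cumsum and compare it
% with the integralImage function:
A = magic(7);
Ai2 = padarray(cumsum(cumsum(A,1),2), [1 1], 0, "pre");
isequal(Ai2, integralImage(A))   % returns logical 1 (true)
```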

The really interesting thing about having the integral image, Ai, is that it lets you compute the sum of any rectangular submatrix of A by an addition and subtraction of just four numbers. Specifically, the sum of all the elements in A(m1:m2,n1:n2) is Ai(m2+1,n2+1) - Ai(m1,n2+1) - Ai(m2+1,n1) + Ai(m1,n1). Consider computing the sum of all elements in A(3:5,2:6).

m1 = 3;

m2 = 5;

n1 = 2;

n2 = 6;

A(m1:m2,n1:n2)

sum(ans,"all")

Ai(m2+1,n2+1) - Ai(m1,n2+1) - Ai(m2+1,n1) + Ai(m1,n1)

If we have the integral image, then, we can compute every element of the box filter output by adding just four terms together. In other words, the box filter computation can be independent of the box filter window size!

The Image Processing Toolbox function integralBoxFilter does this for you. You give it the integral image, and it returns the result of the box filter applied to the original image.

Let's see how long that takes as we vary the box filter window size.

integral_box_filter_times = zeros(1,length(nn));

for k = 1:length(nn)

n = nn(k);

h = @() integralBoxFilter(Ai,n);

integral_box_filter_times(k) = timeit(h);

end

plot(nn,integral_box_filter_times, ...

nn,imfilter_times)

legend(["integralBoxFilter" "imfilter"], ...

Location = "northwest")

title("Box filter execution time")

xlabel("window size")

ylabel("time (s)")

That looks great! As expected, integralBoxFilter is fast, with execution time that is independent of the window size. To be completely fair, though, we should include the time required to compute the integral image.

integral_box_filter_total_times = zeros(1,length(nn));

for k = 1:length(nn)

n = nn(k);

r = @() integralBoxFilter(integralImage(I),n);

integral_box_filter_total_times(k) = timeit(r);

end

plot(nn,imfilter_times,...

nn,integral_box_filter_times,...

nn,integral_box_filter_total_times)

title("Box filter execution time")

xlabel("window size")

ylabel("time (s)")

legend(["imfilter" "integralBoxFilter" "integralImage + integralBoxFilter"],...

Location = "northwest")

From the plot, we can see that using the integral image for box filtering is the way to go, even when you take into consideration the time needed to compute the integral image.


In some of my recent perusal of image processing questions on MATLAB Answers, I have come across several questions involving binary image objects and polygonal boundaries, like this.

The questions vary but are similar:

- How can I determine which objects touch the polygon boundary?
- How can I get rid of objects outside the boundary?
- How can I get rid of objects outside or touching the boundary?
- How can I get rid of objects within a certain distance of the boundary?

Today I want to show you how to answer all these questions and more besides, using these fundamental operations:

- drawpolygon and images.roi.Polygon
- createMask
- bwdist
- imreconstruct
- relational operators
- logical operators on binary images

(Or, as an alternative to drawpolygon, images.roi.Polygon, and createMask, you can use roipoly and poly2mask.)

Here is the image I'll be using today:

image_url = "https://blogs.mathworks.com/steve/files/rice-bw-1.png";

A = imread(image_url);

imshow(A)

And here is my polygon.

xy = [ ...

118 31

35 116

35 219

219 245

161 186

237 56];

I'll be drawing it using a helper function, showpoly, which is at the end of this post. The helper function uses drawpolygon, like this:

imshow(A)

p = drawpolygon(Position = xy, FaceAlpha = 0.5)

Note that drawpolygon creates an images.roi.Polygon (or just Polygon). The Polygon object is one of a variety of region-of-interest (ROI) objects in the Image Processing Toolbox. You can customize many aspects of its appearance and behavior, and it is useful for implementing apps that work with ROIs.

Of particular interest for today's topic is the Polygon function called createMask. This function returns a binary image with white pixels inside the polygon.

mask = createMask(p);

imshow(mask)

Now we can start answering the object-polygon questions. First up: which objects are inside the polygon, either completely or partially?

One of the things that makes MATLAB fun for image processing is using logical operators with binary images. Below, I compute the logical AND of the original binary image with the polygon mask image.

B = A & mask;

imshow(B)

showpoly

Next, I apply morphological reconstruction using imreconstruct. This function takes a marker image and expands its objects outward, while constraining the expansion to include only object pixels in the second input. This is a convenient way to convert partial objects back into their full, original shapes. The result, C, is an image containing objects that are completely or partially within the polygon.

C = imreconstruct(B,A);

imshow(C)

showpoly

Next question: which objects touch the edge, or perimeter, of the polygon? I'll start with bwperim on the polygon mask to compute a perimeter mask.

perimeter_mask = bwperim(mask);

imshow(perimeter_mask)

Going back to the logical & operator gives us the perimeter pixels that are part of any of the objects.

D = A & perimeter_mask;

imshow(D)

Morphological reconstruction is useful again here, to "reconstruct" the full shapes of objects from the object pixels that lie along the polygon perimeter.

J = imreconstruct(D,A);

imshow(J)

showpoly();

Third question: which objects lie completely outside the polygon? Recall that image C, computed above, is the objects that are completely or partially inside the polygon. The objects that we want, then, are the objects that are in A but not in C. You can compute this with the elementwise logical expression A & ~C.

objects_outside = A & ~C;

imshow(objects_outside)

showpoly();

What about the object near the upper right polygon vertex? Is it really completely outside the polygon?

axis([215 230 45 60])

It looks like a sliver of white is inside the polygon. Why is that?

I'll explain by showing the polygon mask, the outside objects, the polygon itself, and the pixel grid. I'll use imshowpair to display the polygon mask and the outside-objects mask together. (The function pixelgrid is available on the File Exchange.)

figure

imshowpair(objects_outside,mask)

showpoly();

axis([215 230 45 60])

pixelgrid

When a polygon only partially covers a pixel, createMask uses certain tie-breaking rules to determine whether that pixel is inside or outside. You can see the tie-breaking rules play out in several of the pixels above.
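You can experiment with these rules using poly2mask, which follows similar pixel-inclusion conventions. This is just a sketch; the coordinates are made up so that the polygon edges run between pixel centers:

```matlab
% A small square whose edges pass exactly halfway between pixel centers.
% Only pixels whose centers fall inside the polygon end up true.
m = poly2mask([1.5 3.5 3.5 1.5], [1.5 1.5 3.5 3.5], 4, 4)
```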

The fourth question: which objects are not only inside the polygon, but also at least 20 pixels away from the polygon boundary? To answer this question, I'll bring in another tool: the distance transform, as computed by bwdist.

First, though, let's start by applying the logical NOT (or complement) operator, ~. Applying it to the polygon mask image gives us a mask showing the pixels that are outside the polygon. I'll turn the axis box on so you can see the extent of the exterior mask.

exterior_mask = ~mask;

imshow(exterior_mask)

axis on

The distance transform computes, for every image pixel, the distance between that pixel and the nearest foreground pixel. If a pixel is itself a foreground pixel, then the distance transform at that pixel is 0. By computing the distance transform of the exterior mask, we can find out how far every pixel is from the closest pixel in the polygon exterior.
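For intuition, here is bwdist applied to a tiny made-up mask (a sketch, just to show what the values mean):

```matlab
% Each output value is the Euclidean distance from that pixel to the
% nearest true pixel in the input mask.
bwdist(logical([0 0 0; 0 1 0; 0 0 0]))
% The center pixel is 0, its edge neighbors are 1, and the corners
% are sqrt(2).
```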

G = bwdist(exterior_mask);

When displaying a distance transform as an image, the autoranging syntax of imshow is useful. Just specify [] as the second argument, and the minimum pixel value is automatically scaled to black and the maximum to white.

imshow(G,[])

Now let's compute a mask of pixels at least 20 pixel units away from the exterior mask by using a relational operator, >=.

interior_mask_with_buffer = (G >= 20);

imshow(interior_mask_with_buffer)

showpoly();

Using the techniques described above, compute an image containing the objects that are completely or partially outside the buffered interior mask.

H = imreconstruct(A & ~interior_mask_with_buffer, A);

imshow(H)

Next, find the objects that are in A but not in H.

I = A & ~H;

imshowpair(I,interior_mask_with_buffer)

showpoly();

The final question for today is which objects are not within 20 pixels of the polygon border. Start by computing the distance transform of the perimeter mask.

J = bwdist(perimeter_mask);

imshow(J,[])

Use a relational operator to compute a buffered perimeter mask.

perimeter_mask_with_buffer = (J <= 20);

imshow(perimeter_mask_with_buffer)

Use the techniques described above to find all of the objects that are completely or partially within the buffered perimeter mask region.

K = A & perimeter_mask_with_buffer;

L = imreconstruct(K,A);

imshow(L)

The final answer is the objects that are in the original image, A, but not in L.

M = A & ~L;

imshowpair(M,perimeter_mask_with_buffer)

showpoly();

This post demonstrated a number of MATLAB and toolbox functions and operators that work well together:

- createMask (or poly2mask) to create a binary "mask" image from polygons
- Logical operators on binary images to compute things like "in this image or mask but not in this other one"
- imreconstruct to reconstruct original binary image shapes from partial ones
- bwdist to compute the distance transform, which tells you how far away every image pixel is from some binary image mask
- Relational operators to form masks from distance transform images

Have fun with these.

And Happy New Year!

function showpoly

% Helper function

xy = [ ...
    118  31
    35  116
    35  219
    219 245
    161 186
    237  56];

drawpolygon(Position = xy, FaceAlpha = 0.5);

end

*Note added 20-Dec-2022: I wrote this post with lossy JPEG in mind. If you work with DICOM data, see Jeff Mather's note in the comments below about the use of lossless JPEG in DICOM.*

I always enjoy the side commentary that I sometimes find in answers on MATLAB Answers. For example, I came across this code comment in one answer:

% This is a horrible image. NEVER use JPG format

% for image analysis. Use PNG, TIFF, or BMP instead.

Today, I want to take the opportunity to endorse this statement and to elaborate on it a bit. Why do we not love JPEG files for image analysis?

The reason is that JPEG is a lossy image compression format. Lossy compression methods achieve substantial reduction in file size by using "inexact approximations and partial data discarding to represent the content." With JPEG compression, roughly speaking, pixels are grouped into blocks, and then the pixel data in each block is quantized and partially discarded. The method is called lossy because you can't get the exact original image pixel data back from a JPEG file; information has been lost.

Usually, this form of compression is acceptable for viewing photographs because the compression method exploits properties of human visual perception so that the information loss is relatively imperceptible. In image analysis applications, though, when you are trying to automatically detect or measure things, the pixel data imperfections in the JPEG file could mess it up.

Let me show you a couple of different versions of this picture:

The original picture was shot in RAW mode (which is lossless) and saved as a 4032x3024 PNG file, with a file size of 23 MB.

Here is a highly magnified view of the center of the picture. You can see the individual pixels.

I created a JPEG version of this file with a file size of only about 1 MB. If you look at the entire picture, the JPEG version looks like the original.

But if you look at a highly magnified view, you can see that pixel data has been partially discarded in a block-wise fashion.

You can probably imagine how this might affect object detection and measurement.
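If you'd like to quantify the loss yourself, a quick round-trip experiment shows it directly. This is just a sketch; the sample image, quality setting, and temporary filename are arbitrary choices for illustration:

```matlab
% Round-trip an image through lossy JPEG and measure the damage.
A = imread("cameraman.tif");           % sample image shipped with the toolbox
fname = [tempname '.jpg'];
imwrite(A, fname, "Quality", 75);      % arbitrary quality setting
B = imread(fname);
isequal(A, B)                          % false: the pixel data has changed
max(abs(double(A(:)) - double(B(:))))  % largest per-pixel error
```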

So, as Image Analyst says, stick to lossless formats such as PNG or TIFF for your image analysis applications.

Last winter, Aktham asked on MATLAB Answers how to find the channels in this image.

url = "https://www.mathworks.com/matlabcentral/answers/uploaded_files/880740/image.jpeg";

A = imread(url);

imshow(A)

Image credit: Aktham Shoukry. Used with permission.

Here is an example of what Aktham meant by "channel."

This post shows one way to accomplish the task, using erosion, morphological reconstruction, and some image arithmetic.

The channels are relatively thin in one dimension, either horizontally or vertically. So, a morphological erosion can be constructed that will keep a portion of each grain, while completely eliminating the channels.
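A tiny made-up 1-D profile shows the idea (a sketch, not the actual image data):

```matlab
% Eroding with a window wider than the thin feature removes it entirely,
% while a portion of the wide feature survives.
profile = logical([1 1 1 1 1 1 0 0 1 1 0 0]);  % wide "grain", thin "channel"
imerode(profile, ones(1,3))
% The thin two-pixel run disappears; part of the wide run survives.
```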

First, a bit of preprocessing.

Ag = im2gray(A);

B = imcomplement(Ag);

imshow(B)

The channels are about 10 pixels thick, while the square-ish "grains" are about 50-by-50. Let's erode with a square structuring element with a size that's in between.

C = imerode(B,ones(25,25));

imshow(C)

Next, use morphological reconstruction to restore the original grain shapes.

D = imreconstruct(C,B);

imshow(D)

Complement the image to make the grains dark again.

E = imcomplement(D);

imshow(E)

Subtracting the original grayscale image from E gets us very close.

F = E - Ag;

imshow(F)

G = imbinarize(F);

imshow(G)

A final cleanup of the small objects, using either bwareafilt or bwareaopen, gives us our result.

H = bwareaopen(G,50);

imshow(H)

The function imoverlay gives us a nice visualization of the result. (You could also use imfuse.)

K = imoverlay(Ag,H,[.8500 .3250 .0980]);

imshow(K)

Note, however, that our result isn't quite perfect. There are partially cropped-off grains along the top border that have been identified as channels. Normally, I would recommend using imclearborder as a preprocessing step, but in this case, imclearborder eliminates too many of the desired channels.

I am open to suggestions about how to refine this method. Leave a comment and let me know what you think.

Woo hoo! The Image Processing Toolbox team has just created a new product: Medical Imaging Toolbox™.

Shipped with the R2022b release a couple of months ago, this product provides apps, functions, and workflows for designing and testing diagnostic imaging applications. You can perform 3D rendering and visualization, multimodal registration, and segmentation and labeling of radiology images. The toolbox also lets you train predefined deep learning networks (with Deep Learning Toolbox). I'm looking forward to writing about this product and its capabilities.

I've been talking recently with Sailesh, one of the product developers, about the Medical Image Labeler. This app is for labeling ground truth data in 2-D and 3-D medical images. With this app, you can:

- Import multiple 2-D images or 3-D image volumes.
- View images as slice planes or volumes with anatomical orientation markers and scale bars.
- Create multiple pixel label definitions to label regions of interest. Label pixels using automatic algorithms such as flood fill, semi-automatic techniques such as interpolation, and manual techniques such as painting by superpixels.
- Write, import, and use your own custom automation algorithm to automatically label ground truth data.
- Export the labeled ground truth data as a groundTruthMedical object. You can use this object to share labels with colleagues or for training semantic segmentation deep learning networks.

The Medical Image Labeler supports 2-D images and image sequences stored in the DICOM and NIfTI file formats. An image sequence is a series of images related by time, such as ultrasound data. The app supports 3-D image volume data stored in the DICOM (single or multifile volume), NIfTI, and NRRD file formats. See Get Started with Medical Image Labeler.

I asked Sailesh for his thoughts about what to tell people in this blog about Medical Image Labeler. He commented that the app is intended for people working towards any kind of AI-assisted CAD (computer-aided diagnosis). It doesn't replace a doctor or diagnostician; rather, it assists with tasks related to their research.

Sailesh also mentioned some specific capabilities that he'd like to highlight. I've prepared a 3-minute video of one of those capabilities: Using active contours to automate object labeling. This tool is one of several in Medical Image Labeler that are intended to save time in the labeling process. Check it out:

There are a few other things about the Medical Image Labeler that I hope to show you soon.

If you have suggestions for the Medical Image Labeler, or for Medical Imaging Toolbox in general, add a comment below.
