Sean's pick this week is Beating (cartoon) Heart by Nathaniel Barlow.

In the spirit of celebrating Valentine's Day with heart-inspired FEX submissions ([1], [2]), we add another to the list: an animated cartoon heart that beats!

Give it a try and let us know what you think here or leave a comment for Nathaniel.

Get the MATLAB code

Published with MATLAB® R2019a

Here's an interesting snippet I learned while working on today's blog post: Mathematician Jacob Bernoulli asked for a *logarithmic spiral* to be engraved on his tombstone. Here's an example of such a spiral.

```
theta = -6*pi:0.01:6*pi;
rho = 1.1.^theta;
polarplot(theta,rho)
rlim([0 max(rho)])
```

Sadly, the engravers of his tombstone apparently did not fully understand his request. They engraved a spiral, but it was not logarithmic.

I came across the concept of logarithmic spirals while looking into something called *pursuit curves*. Here's an example, in the form of what is sometimes called the *mice problem*. Start by constructing `P`, a vector of four complex-valued points, so that they form the corners of a square in the complex plane.

```
P = [0 1 1+1i 1i];
plot(real(P),imag(P),'*')
axis equal
axis([-.5 1.5 -.5 1.5])
```

Now imagine that each of these "mice" is chasing the next one, always running directly toward it as it moves. To illustrate the basic idea, let's have the mice take a few fixed-length steps toward their targets.

```
P = [0 1 1+1i 1i];
step_size = 0.1;

% First step
Pt = P([2 3 4 1]);
V = Pt - P;
P = [P ; P + step_size*V];

% Second step
Pn = P(end,:);
Pt = Pn(end,[2 3 4 1]);
V = Pt - Pn;
P = [P ; Pn + step_size*V];

% Third step
Pn = P(end,:);
Pt = Pn(end,[2 3 4 1]);
V = Pt - Pn;
P = [P ; Pn + step_size*V];

% Fourth step
Pn = P(end,:);
Pt = Pn(end,[2 3 4 1]);
V = Pt - Pn;
P = [P ; Pn + step_size*V];
```

Now plot the original positions and steps taken by each mouse.

```
hold on
for k = 1:4
    plot(real(P(:,k)),imag(P(:,k)))
end
hold off
```

You get the idea, and you can see four spirals start to take shape.

I've written a function, shown at the bottom of this post, that simulates and plots pursuit curves for any collection of starting points (mice). I chose to plot 1000 steps, with each step covering 1/100 of the remaining distance, so that the mice never quite catch their targets. The plotting function also plots the tangent lines every 10 steps. Here's the result for the four-mice-on-a-square starting configuration.

```
P = [0 1 1+1i 1i];
clf
plotPursuitCurves(P)
axis([-0.1 1.1 -0.1 1.1])
```

Logarithmic spiral curves have the interesting property that, when you scale them up by a constant factor, the curve remains the same, but rotated. We can get a sense of this property by zooming in on the plot. Let's zoom in by a factor of 100.

axis([0.495 0.505 0.495 0.505])

You can see that the curve's overall shape has not changed. It's just been rotated.
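As a quick sanity check of this property (a sketch of my own, not from the original post): scaling rho = 1.1.^theta by a constant factor c is the same as evaluating the spiral at theta shifted by log(c)/log(1.1), which on a polar plot is just a rotation.

```
% Scale-invariance check for the logarithmic spiral rho = 1.1.^theta.
% Scaling rho by c equals rotating the spiral by -log(c)/log(1.1).
theta = -6*pi:0.01:6*pi;
rho = 1.1.^theta;
c = 100;
polarplot(theta, c*rho, theta - log(c)/log(1.1), rho)
% Away from the truncated ends, the two traces lie on the same curve.
```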

The function `plotPursuitCurves` lets us explore other starting configurations, such as a triangle.

```
clf
P1 = 0;
P2 = 1;
P3 = exp(1j*pi/3);
plotPursuitCurves([P1 P2 P3])
axis([-0.1 1.1 -0.1 1.0])
```

Or a hexagon.

```
clf
theta = 0:60:330;
P = cosd(theta) + 1i*sind(theta);
plotPursuitCurves(P)
axis([-1.1 1.1 -1.1 1.1])
```

We can even tinker with the selection of targets. In this next example, the mice start at the vertices of a hexagon, but instead of chasing the next mouse over, they start out chasing **other** mice. We do this by reordering the vertices.

```
P2 = P([1 3 5 2 4 6]);
clf
plotPursuitCurves(P2)
```

Those are some odd-looking curves. If we zoom in we can see why the curves evolve the way they do by examining the tangent lines. We can also see that the curves do appear to settle into a set of regular spirals, although I do not know whether this is a general property of such pursuit curves.

```
xlim([-0.147 0.103])
ylim([-0.129 0.069])
```

Do you like to explore mathematical visualizations like this? If you have a favorite, let us know in the comments.

```
function plotPursuitCurves(P)
    P = reshape(P,1,[]);
    N = length(P);
    d = 0.01;
    for k = 1:1000
        V = P(end,[(2:N) 1]) - P(end,:);
        P(end+1,:) = P(end,:) + d*V;
    end
    hold on
    Q = P(:,[(2:N) 1]);
    for k = 1:10:size(P,1)
        for m = 1:N
            T = [P(k,m) Q(k,m)];
            plot(real(T),imag(T),'LineWidth',0.5,'Color',[.8 .8 .8])
        end
    end
    for k = 1:N
        plot(real(P(:,k)),imag(P(:,k)))
    end
    hold off
    axis equal
end
```


Published with MATLAB® R2018b

I have a table with many columns, some of which are strings. I want to set all the string values... read more >>

Happy Valentine's Day! I have been working with the MATLAB community since we launched MATLAB Central wa-aaay back at the... read more >>

```
u = [1 2 3 4]
u(1)
```

ans = 1

```
A = invhilb(4);
A(:,1)
```

```
ans = 4×1

    16
  -120
   240
  -140
```

```
steel = -17;
abs(steel)
```

ans = 17


```
approval = pi;
ceil(approval)
```

ans = 4

```
% ducks in a (Moler) matrix
ducks = gallery('moler',3)
```

```
ducks = 3×3

     1    -1    -1
    -1     2     0
    -1     0     3
```

```
% ducks in a row
ducks(:)'
```

```
ans = 1×9

     1    -1    -1    -1     2     0    -1     0     3
```

```
h = ezplot('(x^2+y^2-1)^3 = x^2*y^3', ...
    [-1.5,1.5,-1.2,1.5]);
h.LineWidth = 5;
colormap([1 0 0])
```

```
% You are the one
u = 1;
% Here is magic for you
magic(4*u)
```

```
ans = 4×4

    16     2     3    13
     5    11    10     8
     9     7     6    12
     4    14    15     1
```

```
% Nothing else matters
close all; clear all
% You are everything
u = inf
```

u = Inf

why

For the love of a smart and terrified mathematician.

Today, let me share why I would model and simulate while designing a race car. Bottom line: It can save you... read more >>

- Evaluate new designs
- Diagnose problems with an existing design
- Test a system under conditions that are hard to reproduce, such as a system failure in a high-velocity situation

*Modeling types*

- Creating and simulating models is less expensive than building and testing hardware prototypes.
- Using simulation software allows them to test different designs before building one in hardware.
- Connecting simulation software with hardware allows for testing the integration of the full design.

- How do you determine the optimal ratio between motor torque and the weight of your car?
- How do you make sure that the control algorithms you develop (perhaps in Simulink) behave the same way on the car's engine control unit (ECU) as they did in your computer simulation?
- What is your strategy to minimize errors during each design step?
- How long does it take you during a test day to get your car back on track after adapting some controller settings?

*Control system with feedback loop*

*Development process V-diagram*

The ThingSpeak team has released an updated version of the ThingSpeak Communication Library for Arduino, ESP8266, and ESP32 devices. The... read more >>

- Arduino or compatible using a **WiFi Shield**
- Arduino or compatible using a **WiFi Shield 101**
- Arduino or compatible using an **Ethernet Shield**
- Arduino or compatible using a **MKR ETH Shield**
- **Arduino MKR1000** (use the WiFi101 library version 0.13.0 or older; WiFi101 library versions 0.14.0 and newer have a bug that stops this ThingSpeak library from working properly)
- **Arduino MKR1010**
- **Arduino VIDOR 4000**
- **Arduino GSM 14000**
- **Arduino Yún** (Rev1 and Rev2)
- **ESP8266** (tested with SparkFun ESP8266 Thing - Dev Board and NodeMCU 1.0 module)
- **ESP32** (tested with SparkFun ESP32 Thing)

- **ReadField**: Reading from a public channel and a private channel on ThingSpeak
- **WriteSingleField**: Writing a sensor value to a single field on ThingSpeak
- **WriteMultipleFields**: Writing sensor values to multiple fields and status in one transaction with ThingSpeak

```
// Read in field 1 of the private channel, which is a counter
long fieldValue = ThingSpeak.readLongField(myChannelNumber, myFieldNumber, myThingSpeakReadAPIKey);

// Check the status of the read operation to see if it was successful
statusCode = ThingSpeak.getLastReadStatus();
if (statusCode == 200) {
  Serial.println("Field Value: " + String(fieldValue));
}
else {
  Serial.println("Problem reading channel. HTTP error code " + String(statusCode));
}
```
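For the write direction, a minimal sketch along the lines of the WriteSingleField example might look like the following; `myChannelNumber`, `myWriteAPIKey`, and the sensor pin are placeholders standing in for your own channel settings.

```cpp
// Hypothetical write sketch: send one sensor reading to field 1.
// myChannelNumber and myWriteAPIKey are placeholders for your channel.
int sensorValue = analogRead(A0);
int statusCode = ThingSpeak.writeField(myChannelNumber, 1, sensorValue, myWriteAPIKey);
if (statusCode == 200) {
  Serial.println("Channel update successful.");
}
else {
  Serial.println("Problem writing to channel. HTTP error code " + String(statusCode));
}
```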

The code behind the ThingSpeak library is available on GitHub. Discover other MathWorks Open Source and Community Projects on The MathWorks GitHub page.

]]>Buckets. Sometimes it rains buckets. Other times we are challenged with ice buckets. When it comes to my code, tests, and development process, I want to put it all in buckets. One bucket for tests that are crucial to my iterative workflow. One bucket for tests that are less crucial on a change by change basis, but still important overall. A bucket for my performance tests and a bucket for my UI tests. It's buckets all the way down.

A long time ago we chatted a couple of times about how we can put some of our tests into different buckets and then select them differently. This is great, but I didn't really complete the story of how this can work in your automation. After all, you are using a CI system to run your tests for you, right?

Well, the trouble here is that I only want to run some of my tests on every commit, and I'd like to defer others. For example, I could see myself running all of my fast unit tests on every change to my code, but running the performance tests perhaps every night. I might also have a set of tests that are expensive and only suited to run periodically. Maybe they take a long time to run, operate on large data or some external resource, require manual verification, or are even a bit flaky. I still want to keep these tests in my arsenal, but I want to defer running them to a weekly or even monthly schedule.

In Jenkins, you can do this with a parameterized build quite easily. First, create a standard freestyle job:

Sitting right there at the top of the build configuration page, right under our collective noses, is the option to mark the build as parameterized. Check that box and we can add our own parameter values, or buckets as one might say, which give us our own bona fide parameters to build against. For our case let's add a standard build for every commit, a build to run our performance tests, and a build to run our more expensive deferred tests:

This now changes our project home page to include "Build with parameters". Plain old "Build" is *so* last week.

Great, our project now runs our standard build to run against every commit, but if we want to inject a bit of flavor into our build we now have that option:

What is this really doing? How is the build different? Well it comes down to the environment. The parameters we have defined now become environment variables during the build so we can process them in our CI script. It might look something like this:

```
testType = string(getenv("TestType"));
if testType == "Performance"
    perfResults = runperf("IncludeSubfolders", true, ...
        "Superclass", "matlab.perftest.TestCase");
    assert(all([perfResults.Valid]), "Some performance tests were invalid!")
    convertToJMeterCSV(perfResults);
else
    results = runtests("IncludeSubfolders", true, "Tag", testType);
    assert(~any([results.Failed]), join([testType, "Tests failed!"]));
end
```

Now all we need to do is add TestTags to the "Standard" and "Deferred" tests and we have test selection at our mercy! The default build, such as we might schedule or run on every commit to source control, will use the first parameter value, in this case "Standard" so that fits nicely.
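For reference, tagging looks something like this; the class and test names here are hypothetical, not taken from a real suite:

```
% A sketch of a tagged test class for the parameterized build.
classdef ExampleTest < matlab.unittest.TestCase
    methods (Test, TestTags = {'Standard'})
        function testQuickPath(testCase)
            testCase.verifyEqual(1 + 1, 2);   % fast check, runs every commit
        end
    end
    methods (Test, TestTags = {'Deferred'})
        function testExpensivePath(testCase)
            testCase.verifyTrue(true);        % stand-in for a slow check
        end
    end
end
```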

Note that we can process the performance tests differently, like we demonstrated here. Also, we select these performance tests via their superclass, since we have derived all of these tests from **matlab.perftest.TestCase**. If we want, we can still run them as normal tests as well by simply adding the "Standard" TestTag to them. This would ensure that if they break on any given change, we find out right away rather than when we next gather performance data.

There we have it. It's pretty easy to get going with a parameterized build, and by leveraging features like test tagging, it can really open up some nice workflows. Have you used parameterized builds? We'd love to hear your stories!


Published with MATLAB® R2018b

Today I want to tell you about two new features in Cody. One of them, Customized Groups, I've been anticipating... read more >>

Jiro's pick this week is `DragDataTip` by Allen.

Here's a simple, but quite useful utility to use in conjunction with MATLAB's data tips. It customizes the data tips so that you can drag the labels to any location for better visibility.

The entry requires you to also download and install `draggable` by Francois Bouffard. Here's an example.

```
hFig = figure;
hLin = plot(peaks);
grid on
title('Draggable Data Tips')
xlabel('Data Index Values')
ylabel('Acceleration (g''s)')
hFig.KeyPressFcn = {@DragDataTip,gca};
```

Then create some data tips using the built-in feature. Once you're done (and turned off the datacursor mode), press Ctrl-M. This converts the standard data tips to Drag Data Tips. Then you can move the data tips to any location.

See this in action.

You can find other examples in the help of `DragDataTip`.

```
help DragDataTip
```


Give it a try and let us know what you think here or leave a comment for Allen.


Published with MATLAB® R2018b

Have you heard of animated PNG (APNG) files? Are you interested in creating them from MATLAB? They are supposed to be an improvement over GIF for making animations that are viewable in a web browser. (If you have an opinion about this, one way or the other, please share your thoughts in the comments.)

Here's a sample image. Do you see the animation in your web browser, or do you see only a still image?

*Public domain image. Credit: Holger Will*

When I look at this image on my Mac, it animates using Safari (version 11.1.2) and Chrome (version 62), but not with the MATLAB browser.

I have to admit that I had never heard of animated PNG files until last week, when I happened to see them mentioned in MATLAB Answers. One of the MATLAB Answers pages where APNG has been discussed is the question, "How can I create animated GIF images in MATLAB?", where Royi suggested using APNG as a superior replacement for GIF.

I was curious, so I started looking into it.

First, a little background. The PNG image format was created in the 1990s, and it was intended as a replacement for GIF. It is an efficient and popular lossless image file format that is universally supported in browsers and other applications that deal with image files. However, GIF has one capability that PNG does not have: animations. For this reason, PNG never replaced GIF for use on web pages, despite some temporary legal restrictions associated with using GIF in the 1990s. (Search for "gif unisys" if you're interested.)

As an image file format, GIF has a couple of significant weaknesses. The first is that a GIF image can have at most 256 different colors. For animated GIFs, this limitation applies to each frame. The second weakness is that saving an image with transparency uses up one of the 256 colors, and the transparency of each image pixel is either fully opaque or fully transparent. This makes it difficult to make images with smoothly antialiased edges, and it also causes artifacts when a GIF still image or animation using transparency is displayed on top of a different background color than what was originally used to render the image.
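You can see the 256-color limit directly in MATLAB: `imwrite` requires an indexed image plus a colormap of at most 256 entries for GIF output, so a truecolor frame has to be quantized first. Here's a small sketch (the file name is arbitrary):

```
% Quantize a truecolor frame to 256 colors and append frames to a GIF.
rgb = rand(64,64,3);                  % truecolor image, many unique colors
[ind,map] = rgb2ind(rgb,256);         % reduce to a 256-entry colormap
imwrite(ind,map,'anim.gif','gif','LoopCount',Inf,'DelayTime',0.1)
imwrite(ind,map,'anim.gif','gif','WriteMode','append','DelayTime',0.1)
```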

The animated PNG format was intended to overcome both of these weaknesses. (Also, unlike GIF, there is no controversy over how to pronounce PNG.)

Here are some key information sources about APNG:

- Wikipedia article
- APNG specification (by Stuart Parmenter, Vladimir Vukicevic, and Andrew Smith)
- APNG web site with history, samples, tutorial, and some code

And here are a few interesting things I've learned:

- An APNG file is a valid PNG file. According to the PNG spec, compliant file readers are required to ignore file chunks that they don't know about. Therefore, a compliant PNG reader that doesn't handle APNG should simply show the first image frame in the file.
- Apparently, the community responsible for maintaining the PNG specification never accepted the APNG spec extension or updated the reference library (libpng) to handle APNG.
- Despite that, APNG has achieved a surprisingly broad level of browser support. According to the Wikipedia page, as of this writing, only Internet Explorer and Microsoft Edge are holdouts.
- Various benchmarks are available online that claim to show that smaller file sizes are achievable with APNG, despite the fact that PNG files are not limited to just 256 colors. I haven't verified it, but I believe this file size improvement is because of the optional interframe differencing feature of APNG.
- There is a fairly straightforward implementation recipe on the Wikipedia page for combining any number of individual PNG files into an APNG file.
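Because an APNG is a valid PNG with extra chunks, you can tell the two apart by scanning for the `acTL` (animation control) chunk. Here's a sketch of that check in MATLAB, assuming a file named `myfile.png` exists; the chunk layout (4-byte big-endian length, 4-byte type, data, 4-byte CRC) comes from the PNG spec.

```
% Walk the PNG chunk list and look for the APNG 'acTL' chunk.
fid = fopen('myfile.png','r','ieee-be');   % PNG lengths are big-endian
fread(fid,8,'uint8');                      % skip the 8-byte PNG signature
isAnimated = false;
while true
    len = fread(fid,1,'uint32');
    if isempty(len), break, end            % end of file
    ctype = char(fread(fid,4,'uint8'))';   % 4-character chunk type
    if strcmp(ctype,'acTL')
        isAnimated = true;
    end
    fseek(fid,len+4,'cof');                % skip chunk data plus CRC
end
fclose(fid);
```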

Finally, I learned (by accident!) that the Preview app on the Mac is capable of converting an animated GIF file to an APNG file. You can accomplish this by viewing an animated GIF file in Preview and then using the "Export" menu to save it as a PNG. It gets saved with the animation intact.

My 20-Apr-2017 post on the "Eight Queens Problem" included an animated GIF. Here it is as an APNG file, as created by the Preview app. (If you are looking at this blog post in a browser that doesn't support APNG, you'll probably just see an empty 5x5 chessboard.)

In this case, the APNG file is about 2.5 times the size of the GIF file, so perhaps the Preview app isn't fully optimizing the APNG output. Also, the APNG file likely would have been smaller if it had been created directly from the original graphic, because the GIF file appears to use dithering in the chessboard squares.

If you have an interest in APNG files, please let us know in the comments.


Published with MATLAB® R2018b

This is a guest post from Paul Pilotte, technical marketing manager for data science and predictive analytics. Gartner recognizes MathWorks as... read more >>

If you've followed this blog, you've seen how MATLAB offers a comprehensive deep learning workflow that simplifies and automates data synthesis, labeling, training, tuning, and deploying deep learning to AI-driven systems, including enterprise applications, embedded functionality, and edge systems. This makes AI accessible to engineers and scientists without previous data science experience. These tools are also broadening the applications of deep learning from image and computer vision to the many applications that use time-series data, such as audio, speech, financial time series, and IoT time-stamped data. AI is also on the minds of executives. The office of the CEO and executive management are increasingly making it a strategic imperative to reap the promise of AI by sponsoring a data-centric culture across their organizations.

- MATLAB for Deep Learning
- MATLAB for Machine Learning
- MATLAB for Data Science
- MATLAB for Enterprise and IT Systems
- MATLAB for Quantitative Finance and Risk Management

Today's guest blogger is Josh Meyer, a Technical Writer for the MATLAB Math and Big Data teams. He is going to discuss a common issue encountered in scattered data interpolation, and how to fix it!

A common issue that can arise while performing interpolation on scattered data is that the resulting functional surface seems to be of lower quality than you would expect; for example, the surface may not seem to pass through all of the sample data points. This post explores how and why the scaling of scattered data can affect the interpolation results.

Let's jump right into an example so you can see what I'm talking about. Imagine you just collected data from sensors to sample some values at several points. Ultimately you want to fit a surface to the data so that you can approximate the value of the underlying function at points where you don't have data. So, you start by plotting.

```
rng default
x = rand(500,1)/100;
y = 2.*(rand(500,1)-0.5).*90;
vals = (x.*100).^2;
ptColor = [.6 .07 .07];
plot3(x,y,vals,'.','Color',ptColor)
grid
xlabel('x')
ylabel('y')
zlabel('v(x,y)')
title('Scattered Data Points')
```

Next, you use `scatteredInterpolant` to create an interpolant for the data. This computes an interpolating function for the observed points, allowing you to query the function anywhere within its convex hull. You create a grid of query points, evaluate the interpolant at those points, and plot the functional surface. All done!

```
F = scatteredInterpolant(x,y,vals);
X = linspace(min(x),max(x),25);
Y = linspace(min(y),max(y),25);
[xq, yq] = meshgrid(X,Y);
zq = F(xq,yq);
hold on
triColor = [0.68 0.88 0.95];
surf(xq,yq,zq,'FaceColor',triColor)
hold off
xlabel('x')
ylabel('y')
zlabel('v(x,y)')
title('Scattered Data with Interpolated Surface')
```

... but wait, the result is definitely not what you expected! What are all of those "folds" in the surface? And why does it look like the surface doesn't pass through all the points? Interpolated surfaces should pass through *all* of the data points!

Let's take a quick detour to talk about scattered data interpolation before coming back to this problematic data.

Unlike gridded interpolation, where the point locations are well-defined, scattered data presents different challenges. To find the interpolated value at a given query point, you need to use the values of nearby points. But how can you do that when the data is scattered around? The location of a point can't be used to predict the location of another point. And comparing the locations of all the points ad nauseam to determine which are close to a given query point is not a very efficient approach.

```
X = [-1.5 3.2; 1.8 3.3; -3.7 1.5;
-1.5 1.3; 0.8 1.2; 3.3 1.5;
-4.0 -1.0; -2.3 -0.7; 0 -0.5;
2.0 -1.5; 3.7 -0.8; -3.5 -2.9;
-0.9 -3.9; 2.0 -3.5; 3.5 -2.25];
V = X(:,1).^2 + X(:,2).^2;
plot(X(:,1),X(:,2),'r*')
```

To solve this, MATLAB first computes a Delaunay triangulation of the scattered data. This creates a series of triangles out of the data points, such that the circumscribed circles created by the vertices of each triangle do not enclose any points. The computed Delaunay triangulation is unique, up to trivial symmetries. And the best part is, the triangulation can be easily queried to determine what points are closest to a given query point.

This plot shows the Delaunay triangulation and the circumscribed circles for the scattered data. Notice that the red data points lie on the boundaries of one or more circumscribed circles, but none of them lie in the interior of one of the circles.

```
tr = delaunayTriangulation(X(:,1),X(:,2));
[C,r] = circumcenter(tr);
a = C(:,1);
b = C(:,2);
pos = [a-r, b-r, 2*r, 2*r];
hold on
triplot(tr)
for k = 1:length(r)
    rectangle('Position',pos(k,:),'Curvature',[1 1],'EdgeColor',triColor)
end
xlabel('x')
ylabel('y')
title('Delaunay Triangulation of Scattered Data')
hold off
```

Armed with the triangulation of the data, finding the value of the interpolated surface at a given query point becomes a problem of querying the triangulation structure to determine which triangle encloses the query point. Then, the data points comprising the vertices of that triangle can be used to calculate the value of the interpolated surface at the query point, depending on which interpolation method is being used (nearest-neighbor, linear, etc...).
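You can perform this lookup yourself with `pointLocation` on the `tr` triangulation built above; the query point here is an arbitrary one inside the convex hull.

```
% Find the triangle enclosing a query point, then its vertices.
ti = pointLocation(tr, -2.6, -2.6);     % row index into tr.ConnectivityList
verts = tr.ConnectivityList(ti,:);      % the three data points whose values
V(verts)                                % are blended at the query point
```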

This plot shows a query point in a 2-D triangulation. To find the interpolated value at this query point, MATLAB uses the vertices of the enclosing triangle. By repeating this calculation at many different query points, a functional surface for the data forms. This functional surface allows you to make predictions at points where you did not collect data.

```
triplot(tr)
hold on
plot(X(:,1),X(:,2),'r*')
trisurf(tr.ConnectivityList,X(:,1),X(:,2),V,...
    'FaceColor',triColor, ...
    'FaceAlpha',0.9)
axis([-4, 4, -4, 4, 0, 25]);
grid on
plot3(-2.6,-2.6,0,'*b','LineWidth', 1.6)
plot3([-2.6 -2.6]',[-2.6 -2.6]',[0 13.52]','-b','LineWidth',1.6)
hold off
view(322.5, 30);
text(-2.0, -2.6,'Xq',...
    'FontWeight','bold', ...
    'HorizontalAlignment','center',...
    'BackgroundColor','none');
xlabel('x')
ylabel('y')
zlabel('v(x,y)')
title('Interpolation of Query Point')
```

Now that we know `scatteredInterpolant` uses a Delaunay triangulation of the data to perform its calculations, let's examine the underlying triangulation of the data we are having trouble interpolating.

```
tri = delaunayTriangulation(x,y);
triplot(tri)
hold on
plot(x,y,'r*')
hold off
xlabel('x')
ylabel('y')
title('Triangulation of Original Data')
```

What a mess! There are only a few small pockets of reasonable triangles, with most of them being long and thin, connecting points that are far away from each other. It looks like the triangulation *does* pass through all of the data points, while the poor interpolated surface we saw earlier does not. We would expect that interpolated surface to look better.

Based on this triangulation, MATLAB has the unenviable job of determining which of those long, thin triangles each query point lies in. And this brings us to ...

The poor interpolation results are caused by the very different scales of the data (the x-axis ranges from [0, 0.01] and the y-axis from [-100, 100]) and the resulting long, thin triangles in the triangulation. After MATLAB classifies the query points into their corresponding triangles, it then uses the triangle vertices to find the value of the interpolated surface. Since the long, thin triangles have vertices that are far away from each other, this results in the local values of the interpolated surface being based on the values of far away points, rather than neighboring points.

Here is a GIF showing what happens to a triangulation as the *x*-data is scaled, while the *y*-data is constant. What starts out as a well-defined triangulation quickly becomes full of many long, thin triangles.

The data is effectively being shoved into a smaller and smaller area along the x-axis. If the axes are kept equal, the above animation looks like this.

Clearly, the triangulation underpinning the scattered interpolation calculation is sensitive to distortions in the data. The algorithm expects the *x,y*-coordinates to be reasonably scaled relative to each other. But, why?

This behavior stems from the use of circumscribed circles in the Delaunay triangulation algorithm. When you squeeze the data in one direction, you are effectively altering the radii of the circumscribed circles. Taken to the extreme, as the points get squeezed together and become a straight line in one dimension, the radii of the circumscribed circles go to infinity, since the locus of points equidistant from the center of the circle becomes a straight line. The distribution of triangles is increasingly determined by the proximity of the points in the *y* direction, which is why points that have close *y* values but very different *x* values become the triangle vertices.

If we add the circumscribed circles to the above animation, you can see that the radius of the circles blows up as the data gets squeezed.

The solution to this issue is typically to remove the distortions in the data by normalizing. Normalization can improve the interpolation results in some cases when the triangulation has many thin triangles, but in others it can compromise the accuracy of the solution. Whether to use normalization is a judgment made based on the nature of the data being interpolated.

**Benefits**: Normalizing your data can potentially improve the interpolation result when the independent variables have different units and substantially different scales. In this case, scaling the inputs to have similar magnitudes might improve the numerical aspects of the interpolation. An example where normalization would be beneficial is if `x` represents engine speed in RPMs from 500 to 3500, and `y` represents engine load from 0 to 1. The scales of `x` and `y` differ by a few orders of magnitude, and they have different units.

**Cautions**: Use caution when normalizing your data if the independent variables have the same units, even if the scales of the variables are different. With data of the same units, normalization distorts the solution by adding a directional bias, which affects the underlying triangulation and ultimately compromises the accuracy of the interpolation. An example where normalization is erroneous is if both `x` and `y` represent locations and have units of meters. Scaling `x` and `y` unequally is not recommended because 10 m due East should be spatially the same as 10 m due North.

For our problem, let's assume `x` and `y` have different units and normalize them so that they have similar magnitudes. You can use the relatively new `normalize` function to do this; by default it uses Z-scores of the data. This transforms the data so that the original mean $\mu$ becomes 0, and the original standard deviation $\sigma$ becomes 1:

$$x' = \frac{\left(x - \mu\right)}{\sigma}.$$

This normalization is very common and is also called *standardization*.
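A quick way to convince yourself of this (toy data, not from the example above):

```
% normalize matches the z-score formula (x - mean(x))/std(x).
xs = [2 4 6 8]';
zByHand = (xs - mean(xs))/std(xs);
max(abs(zByHand - normalize(xs)))   % essentially zero (round-off only)
```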

```
x = normalize(x);
y = normalize(y);
```

Now that the data is normalized, let's take a look at the triangulation.

```
tri = delaunayTriangulation(x,y);
triplot(tri)
hold on
plot(x,y,'r*')
hold off
xlabel('x')
ylabel('y')
title('Triangulation of Normalized Data')
```

This is much better! The triangles are mostly well defined. There are only a few pockets of long, thin triangles near the edges. Let's take a look at the scattered interpolation surface now.

```
X = linspace(min(x),max(x),25);
Y = linspace(min(y),max(y),25);
[xq, yq] = meshgrid(X,Y);
F = scatteredInterpolant(x,y,vals);
zq = F(xq,yq);
plot3(x,y,vals,'.','Color',ptColor)
hold on
surf(xq,yq,zq,'FaceColor',triColor)
hold off
grid on
xlabel('x')
ylabel('y')
zlabel('v(x,y)')
title('Normalized Data with Interpolated Surface')
```

In this case, normalizing the sample points permits `scatteredInterpolant` to compute a better triangulation, and thus a better interpolated surface.

Normalization is just one form of data scaling. With z-score normalization you shift the data so that its mean becomes 0 and scale it so that its standard deviation becomes 1; the shape of the distribution is preserved.

You can also `rescale` the data, changing the extent of its largest and smallest values (for example, making 0 the smallest value and 1 the largest value). When you rescale data you preserve the shape of the distribution but squeeze or expand it along the number line, which changes both the mean and the standard deviation.
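For example (toy data again):

```
% rescale maps the data onto [0, 1] by default.
v = [10 20 40]';
r1 = rescale(v);                         % 0, 1/3, 1
r2 = (v - min(v))/(max(v) - min(v));     % the same result by hand
```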

Data scaling is important to pay attention to. If the data is not properly scaled, it can hide relationships in the data or distort their strength. The results of some algorithms can vary a lot depending on the exact scaling used. We just saw how the triangulation of scattered data is sensitive to scaling, but it can cause problems in other algorithms as well. For example, machine learning algorithms typically depend on the Euclidean distance between points, so they are sensitive to the scaling of the data. Without proper scaling, some features will have too large a contribution compared to others, skewing the results.

But this is a broad topic best covered in another post!

Do you use scattered data interpolation regularly in your work? If so, tell us about it in the comments!

I'd like to thank Cosmin Ionita, Damian Sheehy, and Konstantin Kovalev for their valuable insights while I was writing this post.

Get the MATLAB code

Published with MATLAB® R2018b


Brett's Pick this week is `PIVLab`, by William Thielicke.

Many years ago, as a postdoc at the National Institutes of Health, I was working on a problem with a colleague characterizing the (spiral) shape of the cochlea. Armed with my knowledge of the Image Processing Toolbox, I pretty quickly calculated edges in an image of the structure, and then called `bwtraceboundary` to direct a path along the edge.

*My colleague's eyes glazed over as she informed me that I had just recreated a significant part of her dissertation--one that took her a very long time to implement.*

For *my* dissertation, I implemented from scratch a particle image velocimetry (PIV)-based approach to measuring retinal blood flow by tracking fluorescent particles in the ocular circulation. Coming up with the PIV algorithm constituted a significant portion of my effort. Today, I feel a bit like my colleague did, looking at William's PIVLab! With a very nicely implemented, MATLAB-built app, William facilitated nearly everything I needed to do to track those particles. But his code is light-years nicer and faster than mine was. (This goes back 25 years!)

With PIVLab, I can simply load the images in my video sequence, set some parameters, and start tracking:

In truth, I haven't (yet!) been able to recreate with PIVLab the results from my dissertation. But one thing is sure--this would have been a very useful starting point, and it would have saved me countless weeks of effort! (The flexibility and customizability of MATLAB code--including that on the File Exchange--is truly what makes it so useful.)

I love that William took the trouble of recording a twelve-minute video to get you started using his tool.

The tool is richly featured, and includes many different visualizations and processing functions for looking at particle flows:

I had to convert my video files to sequences of images to get them to work in PIVLab. It's not very difficult to make your app support video files--using VideoReader, for instance! But creating an image sequence from a video is a minor task, too:

```
vid = VideoReader('C:\Sample Data\ONH_PIV_10.avi');
frame = readFrame(vid);
imgh = imshow(frame);
index = 1;
while true
    imwrite(frame,['C:\Sample Data\ONHFramewise\','Frame_',sprintf('%03i',index),'.tif']);
    set(imgh,'CData',frame)
    drawnow
    index = index + 1;
    if ~hasFrame(vid)
        break
    end
    frame = readFrame(vid);
end
```

It's worth noting that William is among the top 100 most-downloaded authors on the File Exchange--largely on the strength of PIVLab. It is current (the last update was January 2019); it has 80 user reviews hovering, on average, around 5/5 stars; and has been downloaded thousands of times. This file is long overdue for Pick-of-the-Week recognition! William, I appreciate the effort it took to create this fine tool. And I greatly appreciate your sharing it with the MATLAB community!

As always, I welcome your thoughts and comments.

Get the MATLAB code

Published with MATLAB® R2018b

I’m hoping by now you’ve heard that MATLAB has great visualizations, which can be helpful in deep learning to help... read more >>

```
netName = 'squeezenet';
net = eval(netName);
```

Then we get our webcam running.

```
cam = webcam;
preview(cam);
```

Here's me running this example.

| Network Name | Activation Layer Name |
| ------------ | --------------------- |
| googlenet    | 'inception_5b-output' |
| squeezenet   | 'relu_conv10'         |
| resnet18     | 'res5b'               |

```
im = imread('peppers.png');
imResized = imresize(im,[inputSize(1),NaN]);
imageActivations = activations(net,imResized,layerName);
```

The class activation map for a specific class is the activation map of the ReLU layer, weighted by how much each activation contributes to the final score of the class. The weights are from the final fully connected layer of the network for that class. SqueezeNet doesn't have a final fully connected layer, so the output of the ReLU layer is already the class activation map.

```
scores = squeeze(mean(imageActivations,[1 2]));
[~,classIds] = maxk(scores,3);
classActivationMap = imageActivations(:,:,classIds(1));
```

If you're using another network (not squeezenet), it looks like this:

```
scores = squeeze(mean(imageActivations,[1 2]));
[~,classIds] = maxk(scores,3);
if ~strcmp(netName,'squeezenet')
    fcWeights = net.Layers(end-2).Weights;
    fcBias = net.Layers(end-2).Bias;
    scores = fcWeights*scores + fcBias;
    [~,classIds] = maxk(scores,3);
    weightVector = shiftdim(fcWeights(classIds(1),:),-1);
    classActivationMap = sum(imageActivations.*weightVector,3);
end
```

Calculate the top class labels and the final normalized class scores.

```
scores = exp(scores)/sum(exp(scores));
maxScores = scores(classIds);
labels = classes(classIds);
```

And visualize the results.

```
subplot(1,2,1);
imshow(im);
subplot(1,2,2);
CAMshow(im,classActivationMap);
title(string(labels) + ", " + string(maxScores));
drawnow;
```

The activations for the top prediction are visualized. The top three predictions and their confidences are displayed in the plot title. Now you can switch from a still image to a webcam with these lines of code:

```
h = figure('Units','normalized','Position',[.05 .05 .9 .8]);
while ishandle(h)
    % im = imread('peppers.png'); <-- remove this line
    im = snapshot(cam);
```

Put an `end` right after `drawnow`, and you're good to run this in a loop now.

```
function CAMshow(im,CAM)
imSize = size(im);
CAM = imresize(CAM,imSize(1:2));
CAM = normalizeImage(CAM);
CAM(CAM < .2) = 0;
cmap = jet(255).*linspace(0,1,255)';
CAM = ind2rgb(uint8(CAM*255),cmap)*255;
combinedImage = double(rgb2gray(im))/2 + CAM;
combinedImage = normalizeImage(combinedImage)*255;
imshow(uint8(combinedImage));
end

function N = normalizeImage(I)
minimum = min(I(:));
maximum = max(I(:));
N = (I-minimum)/(maximum-minimum);
end
```

Grab the entire code with the blue "Get the MATLAB Code" link on the right. Happy visualization! Leave a comment below, or follow me on Twitter! [jpt]

Copyright 2018 The MathWorks, Inc.

Get the MATLAB code

I have a for loop that is transforming my data in a way I did not expect, so I use... read more >>

North American football is the most popular sport in the United States. While popular, it’s also undeniably one of the... read more >>


Sean's pick this week is Distance from points to polyline or polygon by Michael Yoshpe.

I recently had a problem where I was trying to identify which corners of a property lot were abutting a street.

Below is an example; the polygons are stored in the relatively new `polyshape` class.

```
load Polyshapes.mat
p(1) = plot(lot, "FaceColor", "b");
hold on
p(2) = plot(street, "FaceColor", [0.5 0.5 0.5]);
xticks([])
yticks([])
legend(["Lot" "Street"])
```

I looked at the methods for `polyshape` and first thought `nearestvertex` would be the correct way to do it: given the nearest vertex for each point, calculate the distance to it and apply a threshold.

```
nidx = nearestvertex(street, lot.Vertices);
p(3) = plot(street.Vertices(nidx, 1), street.Vertices(nidx, 2), "r*", "DisplayName", "Nearest Vertex on Street");
```

At first this looks good, but let's zoom in:

```
load zoomlimits.mat
xlim(xlimits)
ylim(ylimits)
```

Nearest vertex looks only at the street's vertices. There can be long distances between vertices along a boundary, so it can return vertices that make no sense, e.g., ones on the far side of the street. What I actually need is the distance from the lot vertices to the boundaries of the street.

Unfortunately, there was no obvious `polyshape` method, in a computational geometry sense, to do it. I could use `linspace` to add thousands of points to each boundary, increasing the number of vertices so that `nearestvertex` could get "close enough," but that seemed like a less than optimal way to solve the problem.

I searched MATLAB Answers and came across this answer from MathWorks' Tech Support. This provided a means for finding distance to a line. One could loop over all of the boundaries of the street, calculate the shortest distance, and then keep the minimum at the end. Instead, I searched the File Exchange and found Michael's `p_poly_dist` function. It does exactly what I need (though requires looping over vertices instead of boundaries).
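For reference, the core computation underlying both approaches is the distance from a point to a line segment. A minimal sketch of that computation (my own, not Michael's implementation) looks like this:

```matlab
% Distance from point p to the segment from a to b (all 1x2 row vectors).
% Project p onto the segment's supporting line, clamp the projection to the
% segment, and measure the remaining distance.
function d = pointToSegment(p, a, b)
ab = b - a;
t = dot(p - a, ab) / dot(ab, ab);   % parameter of the projection onto the line
t = max(0, min(1, t));              % clamp to the segment endpoints
d = norm(p - (a + t*ab));
end
```

Looping this over consecutive pairs of boundary vertices and keeping the minimum gives the distance from a point to an entire polygon boundary.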

```
% Grab the boundary from the street.
[xbnd, ybnd] = boundary(street);

% Loop over lot vertices, calculating shortest distance to street boundary.
for ii = size(lot.Vertices, 1):-1:1
    dists(ii, 1) = p_poly_dist(lot.Vertices(ii, 1), lot.Vertices(ii, 2), xbnd, ybnd);
end

% A vertex is on the border if it's within threshold or street interior.
threshold = 2;
onborder = (dists < threshold) | isinterior(street, lot.Vertices);

delete(p(3))
plot(lot.Vertices(onborder, 1), lot.Vertices(onborder, 2), "mp", "DisplayName", "On Street")
xlim auto
ylim auto
```

I'd also like to give a shout out to Point To Line Distance by Rik Wisselink, who added input checking to the tech support answer and extended it to work on multiple points. I didn't use it here because I didn't need 3D, and looping over boundaries would have taken many more iterations than looping over vertices.

If you want this functionality in base MATLAB, please contact MathWorks Tech Support and add your vote by letting them know this answer was useful!

Give it a try and let us know what you think here or leave a comment for Michael.

Get the MATLAB code

Published with MATLAB® R2019a

An impeccably-preserved fossil discovered in Germany was brought back to life with the help of a CT scan, a 3D-printer,... read more >>

“Because the researchers had to decide which of those characteristics were most important, they’ve released their whole data set (with other options) online for anyone who wants to weigh them differently, Nyakatura said. For those who want to try out their own interpretation of Orobates’ locomotion, the team’s interactive simulations are available online to play with.” -PBS News Hour

The online simulation enables users to change variables such as power expenditure and balance and see how the gait is affected. It includes models for the four modern-day reptiles as well.

Reverse Engineering the Locomotion of a Stem Amniote. Image Credit: École Polytechnique Fédérale de Lausanne

The code and data for this paper are available online. This includes the MATLAB code used for the data processing and analysis as well as the Webots model.

I wrote about Football Squares a long time ago. In light of the upcoming Superbowl, a few people have asked... read more >>

```
a = invhilb(10)<0;
```

Why INVHILB? See this Cody problem.

```
tick = 0:9;
imagesc(tick,tick,a)
colormap([1; 0.8]*[1 1 1])
set(gca, ...
    'FontSize',12, ...
    'XAxisLocation','top', ...
    'XTick',tick, ...
    'YTick',tick, ...
    'XTickLabel','?', ...
    'YTickLabel','?')
axis square
xlabel("Last Digit of the Patriots Score")
ylabel("Last Digit of Rams Score")
```

So you can see that we don't actually know which numbers will correspond to each square.

```
url = 'https://uinames.com/api/?amount=12&region=United%20States';
names = webread(url);
```

These are all the people coming to your party. Don't worry, they're a lot of fun.
```
allNames = string({names.name});
disp(allNames.join(', '))
```

Thomas, Kathleen, Matthew, Jean, Christine, Kathy, Martha, Scott, Brian, Olivia, Harry, Ralph

Now we'll simulate people buying up the squares. Say they're paying one dollar for each square, for a nice $100 jackpot.

```
groupSize = 12;
nameIndexGrid = randi(groupSize,[10 10]);
for i = 1:numel(a)
    [row,col] = ind2sub([10,10],i);
    text(col-1,row-1,allNames(nameIndexGrid(col,row)),...
        'FontSize', 6, ...
        'Color','blue', ...
        'HorizontalAlignment','center');
end
```

```
xLabels = randperm(10)-1;
yLabels = randperm(10)-1;
set(gca, ...
    'XTickLabel',string(xLabels), ...
    'YTickLabel',string(yLabels))
```

```
patriotsFinalScore = randi(30);
ramsFinalScore = randi(30);
```

Let's make a little anonymous function here, because anonymous functions are cool.

```
getLastDigit = @(score) score - 10*(floor(score/10));
```

What are the last digits in the score?

patriotsLastDigit = getLastDigit(patriotsFinalScore)

patriotsLastDigit = 6

ramsLastDigit = getLastDigit(ramsFinalScore)

ramsLastDigit = 6
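As an aside, the built-in `mod` function computes the same last digit, so the anonymous function could be written more compactly:

```matlab
getLastDigit = @(score) mod(score,10);
getLastDigit(28)   % returns 8
```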

```
ix1 = find(patriotsLastDigit==xLabels);
ix2 = find(ramsLastDigit==yLabels);
set(findobj('Type','text'),'Color',0.6*[1 1 1])
patch(ix1-1+[0 1 1 0 0]-0.5,ix2-1+[0 0 1 1 0]-0.5,[1 0.7 0.7],'EdgeColor','none')
text(ix1-1,ix2-1,allNames(nameIndexGrid(ix1,ix2))+" wins!",...
    'FontSize',18, ...
    'FontWeight','bold', ...
    'Color','red', ...
    'HorizontalAlignment','center');
```


In a comment following my post about half-precision arithmetic, "Raj C" asked how the parameters for IEEE Standard 754 floating point arithmetic were chosen. I replied that I didn't know but would try to find out. I called emeritus U. C. Berkeley Professor W. (Velvel) Kahan, who was the principal architect of 754. Here is what I learned.

First, some background. George Forsythe, Mike Malcolm and I wrote a textbook published in 1977 that began with a chapter on floating point computation. The chapter includes the following table of the floating point parameters for various computers. The only copy of the original book that I have nearby has some handwritten corrections, so I scanned the table in the Greek translation.

This was before personal computers and IEEE 754. It was a real zoo. There were binary and decimal and even a Russian ternary machine. There were a variety of word lengths. Among the binary machines, there were base 2, base 8 and base 16 normalizations.

Around 1976 Intel began development of the 8086 microprocessor, which would become the CPU chip in the first IBM PCs. John Palmer (who had been a student of mine in New Mexico) was in charge of the design of the 8087 floating point coprocessor. He recommended that Kahan become a consultant. Palmer and Kahan convinced Intel to take the unprecedented step of making their design public so that it could be an industry wide standard. That Intel specification became the basis for IEEE 754.

I wrote a two-part series in this blog about 754 floating point and denormals.

Kahan told me that he chose the design parameters for the 8087 and consequently for the 754 standard. He said he looked at industry practice at the time because he wanted existing software to move to the new processor with a minimum of difficulty. The 32- and 64-bit word lengths were dictated by the x86 design. There were two architectures that came close to his vision: the IBM 360/370 and the DEC PDP-11/VAX.

So, the zoo was reduced to these two. Here are their floating point radix `beta` and exponent and fraction lengths.

```
                    single        double
             beta   exp  frac     exp  frac
 IBM 360      16     7    24       7    56
 DEC VAX F     2     8    23
 DEC VAX D     2                   8    55
 DEC VAX G     2                  11    52
 IEEE 754      2     8    23      11    52
```
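As a quick check of how these field widths translate into floating point range, here are the IEEE single precision values computed from first principles (a sketch of mine, not from the original post):

```matlab
% IEEE 754 single precision: 8 exponent bits, 23 fraction bits.
expBits = 8;
fracBits = 23;
emax = 2^(expBits-1) - 1;              % largest exponent (and the bias): 127
rmax = 2^emax * (2 - 2^(-fracBits));   % matches realmax('single'), about 3.4e38
rmin = 2^(1 - emax);                   % smallest normalized value, realmin('single')
```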

Nobody was happy with the base-16 normalization on the IBM. Argonne's Jim Cody called it "wobbling precision". The DEC VAX had two double precision formats. The D format had the same narrow exponent range as single precision. That was unfortunate. As I remember it, DEC's G format with three more bits in the exponent was introduced later, possibly influenced by the ongoing discussions in the IEEE committee.

The VAX F format for single and G format for double have the same bit lengths as the Palmer/Kahan proposal, but the exponent bias is different, so the resulting `realmax` and `realmin` differ by a factor of four. DEC lobbied for some flexibility in that part of the standard, but eventually they had to settle for not quite conforming.

Looking back after 40 years, IEEE 754 was a remarkable achievement. Today, everybody gratefully accepts it. The zoo has been eliminated. Only the recent introduction of a second format of half precision has us thinking about these questions again.

Thanks, Raj, for asking.

Get the MATLAB code

Published with MATLAB® R2018b