
Do you ever need to explain to people unfamiliar with your work what it's about? One situation I frequently face is explaining machine learning to audiences who want to learn more about it but are not yet particularly conversant in it. This is true regardless of the audience, be they students, professors, researchers, or folks working in government and industry as scientists and engineers. So, what do I do?

A few years ago I wanted to find a way to explain machine learning in a way that would make it understandable and fun. I came up with an explanation that illustrates what's going on in machine learning without any of the mathematical details.

Most people I know learned regression somewhere along the way, often in a stats class, or perhaps they were exposed to clustering and classification using the famous Fisher Iris dataset (here are our classification and clustering examples). I tried this approach a few times and, as much as I like flowers and sepal length and petal width, I thought I could do better to make machine learning concepts easier to grasp.

I came up with the idea of using animals. Who doesn't like dogs, cats, and birds, or at least some of these? It worked out well: over the last couple of years I've shown this over a hundred times and always get positive feedback from the audience. A few months ago some colleagues asked me what I was showing customers these days. When I described my animal story, they were pretty excited and thought I should record a video to help others understand this machine learning area that everyone seems to want to know more about.

Well, to cut to the chase (spoiler alert: a cheetah is the regression winner), you can now watch the video.

**Does This Explanation Help?**

Did you find this explanation useful, either for yourself, or to pass along to others? What other concepts have you needed to "illustrate"? Let me know here.

Get the MATLAB code

Published with MATLAB® R2018b


My guest blogger today is Sebastian Gross. He is an engineer in our academia team in Munich, Germany, and works with our university customers to make them successful in research and teaching. In today's post he introduces TopoToolbox [1,2].

Thank you, Loren, for giving me this opportunity to write about TopoToolbox. This collection of functions offers quantitative methods for analyzing relief and flow pathways in digital elevation models, without requiring a GIS environment. One application is learning about flooding risks in a particular geographical area.

The idea to write a blog post about it came to me while reading news headlines such as 'Flooding has slammed every Iowa county since 1988, some as many as 17 times' [3]. Headlines like these make you think. Even closer to home for me, severe floods were reported in six German federal states and seven other countries in 2013 [4].

Most of us have seen some degree of flooding - a street submerged in water, a flooded field, or a river that has overflowed its usual boundaries. Just this year, on a vacation trip, my wife and I encountered a bridge that had collapsed under a river's force, and we had to continue to an archeological site on foot. Research suggests that the risk is increasing because of climate change [5]. So, I was tempted to take a closer look at flooding.

A while back during the European Geoscience Union General Assembly (EGU) in Vienna, Austria, in April 2018, I met Wolfgang Schwanghart who is a geomorphologist at the University of Potsdam in Germany. Wolfgang is one of the authors of TopoToolbox. I had previously met the second author Dirk Scherler as well. Dirk works at the German Research Center for Geosciences (GFZ) in Potsdam and one of his research fields is geomorphology.

TopoToolbox is available via File Exchange [6] as a zip file, or from the project's website [1] in several formats, including a toolbox file (.mltbx) for simple installation and management. The toolbox comes with an introduction document in Live Script format, which lets you easily move along the code sections, read the rich comments, and view the results inline.

You can import topographic data from opentopography.org with the command:

```
DEM = readopentopo();
```

```
-------------------------------------
readopentopo process:
  DEM type: SRTMGL3
  API url: http://opentopo.sdsc.edu/otr/getdem
  Local file name: C:\Users\loren\AppData\Local\Temp\tp6995bbab_ef43_46a2_bac9_b701b896fb61.tif
  Area: 2.4e+03 sqkm
-------------------------------------
Starting download: 21-Sep-2018 11:27:03
Download finished: 21-Sep-2018 11:27:13
Reading DEM: 21-Sep-2018 11:27:13
GRIDobj cannot derive a map projection structure. This is either because
the grid is in a geographic coordinate system or because geotiff2mstruct
cannot identify the projected coordinate system used. TopoToolbox assumes
that horizontal and vertical units of DEMs are the same. It is recommended
to use a projected coordinate system, preferably UTM WGS84. Use the
function GRIDobj/reproject2utm to reproject your grid.
DEM read: 21-Sep-2018 11:27:14
Temporary file deleted
Done: 21-Sep-2018 11:27:14
-------------------------------------
```

The process also returns a warning with additional information. We will run the suggested reprojection later.

By default, `readopentopo` returns data for an area around Fresno, California. By chance, I passed through that area a few years back, but here I use data from the Bavarian Danube river area highlighted in the article [4].

The dataset of southern Bavaria can be loaded with

```
load('bavaria_dem.mat');
```

To prepare our data for use with TopoToolbox, I follow the warning message's advice and run `reproject2utm`.

```
DEM = reproject2utm(DEM,90);
```

The results so far can be displayed with the simple command `imagesc`.

```
imagesc(DEM);
```

However, we can better highlight ridges, slopes, and mountains (hillshading) with `imageschs`.

```
imageschs(DEM);
```

Now, back to the original idea of looking at water pathways. Using the `fillsinks` command, holes in the elevation model are filled automatically.

```
DEMf = fillsinks(DEM);
```

With the prepared elevation model, we can calculate the flow directions, which are stored in a `FLOWobj`. These flow directions are the basis for deriving river networks, stored in another object, the `STREAMobj`.

```
FD = FLOWobj(DEMf);
S = STREAMobj(FD,'minarea',1000);
```
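TopoToolbox's `FLOWobj` computes this routing efficiently. As a rough illustration of the underlying idea only, here is a minimal steepest-descent ("D8") rule sketched in Python with NumPy; the function and variable names are mine, and this is not TopoToolbox's actual implementation.

```python
import numpy as np

def d8_flow_direction(dem):
    """For each interior cell, return the (row, col) offset of its
    steepest-descent neighbor, the D8 rule used to route flow."""
    nrows, ncols = dem.shape
    # Offsets of the 8 neighbors and their center-to-center distances.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    dists = [np.hypot(dr, dc) for dr, dc in offsets]
    direction = np.zeros((nrows, ncols, 2), dtype=int)
    for r in range(1, nrows - 1):
        for c in range(1, ncols - 1):
            best, best_slope = (0, 0), 0.0
            for (dr, dc), d in zip(offsets, dists):
                slope = (dem[r, c] - dem[r + dr, c + dc]) / d
                if slope > best_slope:
                    best, best_slope = (dr, dc), slope
            direction[r, c] = best  # (0, 0) marks a sink or flat cell
    return direction

# Tiny DEM sloping down toward the lower-right corner:
dem = np.array([[3., 3., 3.],
                [3., 2., 1.],
                [3., 1., 0.]])
print(d8_flow_direction(dem)[1, 1])  # offset of the steepest neighbor
```

Accumulating flow along these directions is what `flowacc` then does for every cell of the grid.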

We can display the network with the `plot` command.

```
plot(S);
```

Finally, the water accumulation in our network can be calculated using the `FLOWobj`.

```
A = flowacc(FD);
```

And the resulting water accumulation can easily be displayed in a graph.

```
imageschs(DEM,dilate(sqrt(A),ones(5)),'colormap',flowcolor, ...
    'colorbarylabel','Flow accumulation [sqrt(# of pixels)]', ...
    'ticklabel','nice');
```

So, why don’t you head over and try the area where you live? The command `readopentopo` can be used with the interactive switch `readopentopo('interactive',true)` to let you choose your area of interest freely.

You get a map window and can select your favorite area. When you finish selecting, you confirm your choice by clicking a button. It works like a charm.

Let us know how this went for you here!

[1] TopoToolbox – MATLAB-based software for topographic analysis (website, accessed: July 18th, 2018)

[4] 2013 European floods, Wikipedia, (article, accessed: July 18th, 2018)

[6] TopoToolbox (File Exchange, accessed: July 18th, 2018)


Published with MATLAB® R2018b


Today I'd like to introduce a guest blogger, Stephen Doe, who works for the MATLAB Documentation team here at MathWorks. In today's post, Stephen shows us new functions for displaying, arranging, and plotting data in tables and timetables.

In R2013b, MATLAB® introduced the `table` data type, as a convenient container for column-oriented data. And in R2016b, MATLAB introduced the `timetable` data type, which is a table that has timestamped rows.

From the beginning, these data types offered advantages over cell arrays and structures. But over the course of several releases, the table and graphics development teams have added many new functions for tables and timetables. These functions add convenient ways to display and arrange tabular data. Also, they offer new ways to make plots or charts directly from tables, without the intermediate step of peeling out variables. As of R2018b, MATLAB boasts many new functions to help you make more effective use of tables and timetables.

To begin, I will use the `readtable` function to read data from a sample file that ships with MATLAB. The file `outages.csv` contains simulated data for electric power outages over a period of 12 years in the United States. The call to `readtable` returns a table, `T`, with six variables and 1468 rows, so I will suppress the output using a semicolon.

```
T = readtable('outages.csv');
```

One typical way to examine the data in a large table is to display the first few rows of the table. You can use indexing to access a subset of rows (and/or a subset of variables, for that matter). For example, this syntax returns the first three rows of `T`.

```
T(1:3,:)
```

```
ans =

  3×6 table

      Region          OutageTime       Loss     Customers     RestorationTime         Cause
    ___________    ________________    ______    __________    ________________    ______________

    'SouthWest'    2002-02-01 12:18    458.98    1.8202e+06    2002-02-07 16:50    'winter storm'
    'SouthEast'    2003-01-23 00:49    530.14    2.1204e+05                 NaT    'winter storm'
    'SouthEast'    2003-02-07 21:15     289.4    1.4294e+05    2003-02-17 08:14    'winter storm'
```

I have a confession to make: I have written many table examples, using that syntax. And occasionally, I **still** catch myself starting with code like `T(3,:)`, which accesses only one row.

Happily, in R2016b we added the `head` function to return the top rows of a table. Here's the call to return the first three rows using the `head` function.

```
head(T,3)
```

```
ans =

  3×6 table

      Region          OutageTime       Loss     Customers     RestorationTime         Cause
    ___________    ________________    ______    __________    ________________    ______________

    'SouthWest'    2002-02-01 12:18    458.98    1.8202e+06    2002-02-07 16:50    'winter storm'
    'SouthEast'    2003-01-23 00:49    530.14    2.1204e+05                 NaT    'winter storm'
    'SouthEast'    2003-02-07 21:15     289.4    1.4294e+05    2003-02-17 08:14    'winter storm'
```

Similarly, `tail` returns the bottom rows of a table. (If you do not specify the number of rows, then `head` and `tail` return eight rows.)

After examining your table, you might find that you want to organize your table by moving related variables next to each other. For example, in `T` you might want to move `Region` and `Cause` so that they are together.

One way to move table variables is by indexing. But if you use indexing and want to keep all the variables, then you must specify them all in order, as shown in this syntax.

```
T = T(:,{'OutageTime','Loss','Customers','RestorationTime','Region','Cause'})
```

You also can use numeric indices. While more compact, this syntax is less readable.

```
T = T(:,[2:5 1 6])
```

When your table has many variables, it is awkward to move variables using indexing. Starting in R2018a, you can use the `movevars` function instead. Using `movevars`, you only have to specify the variables of interest. Move the `Region` variable so it is before `Cause`.

```
T = movevars(T,'Region','Before','Cause');
head(T,3)
```

```
ans =

  3×6 table

       OutageTime       Loss     Customers     RestorationTime       Region          Cause
    ________________    ______    __________    ________________    ___________    ______________

    2002-02-01 12:18    458.98    1.8202e+06    2002-02-07 16:50    'SouthWest'    'winter storm'
    2003-01-23 00:49    530.14    2.1204e+05                 NaT    'SouthEast'    'winter storm'
    2003-02-07 21:15     289.4    1.4294e+05    2003-02-17 08:14    'SouthEast'    'winter storm'
```

It is also likely that you want to add data to your table. For example, let's calculate the duration of the power outages in `T`. Specify the format to display the duration in days.

```
OutageDuration = T.RestorationTime - T.OutageTime;
OutageDuration.Format = 'dd:hh:mm:ss';
```

It is easy to add `OutageDuration` to the end of a table using dot notation.

```
T.OutageDuration = OutageDuration;
```

However, you might want to add it at another location in `T`. In R2018a, you can use the `addvars` function. Add `OutageDuration` so that it is after `OutageTime`.

```
T = addvars(T,OutageDuration,'After','OutageTime');
head(T,3)
```

```
ans =

  3×7 table

       OutageTime       OutageDuration     Loss     Customers     RestorationTime       Region          Cause
    ________________    ______________    ______    __________    ________________    ___________    ______________

    2002-02-01 12:18     06:04:32:00      458.98    1.8202e+06    2002-02-07 16:50    'SouthWest'    'winter storm'
    2003-01-23 00:49             NaN      530.14    2.1204e+05                 NaT    'SouthEast'    'winter storm'
    2003-02-07 21:15     09:10:59:00       289.4    1.4294e+05    2003-02-17 08:14    'SouthEast'    'winter storm'
```

Now, let's remove `RestorationTime`. You can easily remove variables using dot notation and an empty array.

```
T.RestorationTime = [];
```

However, in R2018a there is also a function to remove table variables. To remove `RestorationTime`, use the `removevars` function.

```
T = removevars(T,'RestorationTime');
head(T,3)
```

```
ans =

  3×6 table

       OutageTime       OutageDuration     Loss     Customers       Region          Cause
    ________________    ______________    ______    __________    ___________    ______________

    2002-02-01 12:18     06:04:32:00      458.98    1.8202e+06    'SouthWest'    'winter storm'
    2003-01-23 00:49             NaN      530.14    2.1204e+05    'SouthEast'    'winter storm'
    2003-02-07 21:15     09:10:59:00       289.4    1.4294e+05    'SouthEast'    'winter storm'
```

If your table contains dates and times in a `datetime` array, you can easily convert it to a timetable using the `table2timetable` function. In this example, `table2timetable` converts the values in `OutageTime` to *row times*. Row times are time stamps that label the rows of a timetable.

```
TT = table2timetable(T);
head(TT,3)
```

```
ans =

  3×5 timetable

       OutageTime       OutageDuration     Loss     Customers       Region          Cause
    ________________    ______________    ______    __________    ___________    ______________

    2002-02-01 12:18     06:04:32:00      458.98    1.8202e+06    'SouthWest'    'winter storm'
    2003-01-23 00:49             NaN      530.14    2.1204e+05    'SouthEast'    'winter storm'
    2003-02-07 21:15     09:10:59:00       289.4    1.4294e+05    'SouthEast'    'winter storm'
```

When you display a timetable, it looks very similar to a table. One important difference is that a timetable has fewer variables than you might expect by glancing at the display. `TT` has five variables, not six. The vector of row times, `OutageTime`, is not considered a timetable variable, since its values label the rows. However, you can still access the row times using dot notation, as in `TT.OutageTime`. You can use the vector of row times as an input argument to a function. For example, you can use it as the *x*-axis of a plot.

The row times of a timetable do not have to be ordered. If you want to be sure that the rows of a timetable are sorted by the row times, use the `sortrows` function.

```
TT = sortrows(TT);
head(TT,3)
```

```
ans =

  3×5 timetable

       OutageTime       OutageDuration     Loss     Customers       Region          Cause
    ________________    ______________    ______    __________    ___________    ______________

    2002-02-01 12:18     06:04:32:00      458.98    1.8202e+06    'SouthWest'    'winter storm'
    2002-03-05 17:53     04:20:48:00      96.563    2.8666e+05    'MidWest'      'wind'
    2002-03-16 06:18     02:17:05:00      186.44    2.1275e+05    'MidWest'      'severe storm'
```

Now I will show you why I converted `T` to a timetable. Starting in R2018b, you can plot the variables of a table or timetable in a *stacked plot*. In a stacked plot, the variables are plotted in separate *y*-axes, but using a common *x*-axis. And if you make a stacked plot from a timetable, the *x*-values are the row times.

To plot the variables of `TT`, use the `stackedplot` function. The function plots variables that can be plotted (such as numeric, `datetime`, and `categorical` arrays) and ignores variables that cannot be plotted. `stackedplot` also returns properties of the stacked plot as an object that allows customization of the stacked plot.

```
s = stackedplot(TT)
```

```
s = 
  StackedLineChart with properties:

         SourceTable: [1468×5 timetable]
    DisplayVariables: {'OutageDuration'  'Loss'  'Customers'}
               Color: [0 0.4470 0.7410]
           LineStyle: '-'
           LineWidth: 0.5000
              Marker: 'none'
          MarkerSize: 6

  Use GET to show all properties
```

One thing you can tell right away from this plot is that there must be a few timetable rows with bad data. There is one point for a power outage that supposedly lasted for over 9,000 days (or 24 years), which would mean it ended some time in the 2040s.

The `stackedplot` function ignored the `Region` and `Cause` variables, because these variables are cell arrays of character vectors. You might want to convert these variables to a different, and more useful, data type. While you can convert variables one at a time, there is now a more convenient way to convert all table variables of a specified data type.

Starting in R2018b, you can convert table variables in place using the `convertvars` function. For example, identify all the cell arrays of character vectors in `TT` (using `iscellstr`) and convert them to `categorical` arrays. Now `Region` and `Cause` contain discrete values assigned to categories. Categorical values are displayed without any quotation marks.

```
TT = convertvars(TT,@iscellstr,'categorical');
head(TT,3)
```

```
ans =

  3×5 timetable

       OutageTime       OutageDuration     Loss     Customers     Region        Cause
    ________________    ______________    ______    __________    _________    ____________

    2002-02-01 12:18     06:04:32:00      458.98    1.8202e+06    SouthWest    winter storm
    2002-03-05 17:53     04:20:48:00      96.563    2.8666e+05    MidWest      wind
    2002-03-16 06:18     02:17:05:00      186.44    2.1275e+05    MidWest      severe storm
```

If your table or timetable has variables with values that belong to a finite set of discrete categories, then there are other interesting plots that you can make. Starting in R2017a, you can make a *heat map* of any two variables that contain discrete values using the `heatmap` function. For example, make a heat map of the `Region` and `Cause` variables to visualize where and why outages occur. Again, `heatmap` returns an object so you can customize the plot.

```
h = heatmap(TT,'Region','Cause')
```

```
h = 
  HeatmapChart (Count of Cause vs. Region) with properties:

      SourceTable: [1468×5 timetable]
        XVariable: 'Region'
        YVariable: 'Cause'
    ColorVariable: ''
      ColorMethod: 'count'

  Use GET to show all properties
```

You also can make a pie chart of any `categorical` variable (as of R2014b), using the `pie` function. However, you cannot call `pie` on a table. So, to make a pie chart of the power outages by region, use dot notation to access the `Region` variable.

```
pie(TT.Region)
```

MATLAB also has other functions to reorganize variables in more complex ways, and to join tables. I won't show them all in action, but I will describe some of them briefly. All these functions work with both tables and timetables.

R2018a includes functions to:

- Reorient rows to become variables (`rows2vars`)
- Invert a nested table-in-table (`inner2outer`)

And from the original release of tables in R2013b, there are functions to:

Let's table discussion of these new functions for now. But we are eager to hear about your reactions to them. Do they help you make more effective use of tables and timetables? Please let us know here.


Published with MATLAB® R2018b

I'd like to introduce this week's guest blogger Alan Weiss. Alan writes documentation for mathematical toolboxes here at MathWorks.

Hi, folks. This post is about a new solver in Global Optimization Toolbox, `surrogateopt`.

- It is best suited to optimizing expensive functions, meaning functions that are time-consuming to evaluate. Typically, you use `surrogateopt` to optimize a simulation, an ODE, or some other expensive objective function.
- It is inherently a global solver. The longer you let it run, the more it searches for a global solution.
- It requires no start point, just problem bounds. Of course, you can specify start points if you like.

The solver is closest in concept to a Statistics and Machine Learning Toolbox solver, `bayesopt`.

Let's try to minimize a simple function of two variables:

f(x) = ||x||^2/25 - 4 sech(||x - x0||^2) - 6 sech(||x - x1||^2),

where x0 = (1,2) and x1 = (-3,-5).

```
x0 = [1,2];
x1 = [-3,-5];
fun = @(x)sum((x/5).^2,2)-4*sech(sum((x-x0).^2,2))-6*sech(sum((x-x1).^2,2));
[X,Y] = meshgrid(linspace(-10,10));
Z = fun([X(:),Y(:)]);
Z = reshape(Z,size(X));
surf(X,Y,Z)
view([53,3])
```

You can see that there are three local minima: the global minimum near [-3,-5], one that is nearly as good near [1,2], and a poor one near [0,0]. You can see this because MATLAB® evaluated 10,000 points for the plot.

To search for a global minimum, set finite bounds and call `surrogateopt`. Because `surrogateopt` is designed for expensive objective functions, this example simulates one: `slowfun` evaluates the same objective, then pauses for half a second.

```
type slowfun
```

```
function y = slowfun(x,x0,x1)
y = sum((x/5).^2,2)-4*sech(sum((x-x0).^2,2))-6*sech(sum((x-x1).^2,2));
pause(0.5)
```

Set bounds and run the optimization of `fun`.

```
fun = @(x)slowfun(x,x0,x1);
lb = [-10,-10];
ub = -lb;
rng default % for reproducibility
tic
[xsol,fval] = surrogateopt(fun,lb,ub)
```

Surrogateopt stopped because it exceeded the function evaluation limit set by 'options.MaxFunctionEvaluations'.

xsol = 1×2
    0.8779    1.7580

fval = -3.8348

```
toc
```

Elapsed time is 114.326822 seconds.

To try to find a better solution, run the solver again. To make the run faster, set the `UseParallel` option to `true`.

```
options = optimoptions('surrogateopt','UseParallel',true);
tic
[xsol,fval] = surrogateopt(fun,lb,ub,options)
```

Surrogateopt stopped because it exceeded the function evaluation limit set by 'options.MaxFunctionEvaluations'.

xsol = 1×2
   -2.8278   -4.7159

fval = -4.7542

```
toc
```

Elapsed time is 20.130099 seconds.

This time, `surrogateopt` found the global minimum near [-3,-5], and running the function evaluations in parallel made the optimization much faster.

The `surrogateopt` algorithm alternates between two phases.

**Construct Surrogate**. In this phase, the algorithm takes a fixed number of quasirandom points within the problem bounds and evaluates the objective function at those points. It then interpolates and extrapolates these points to a smooth function called the surrogate in the entire bounded region. In subsequent phases, the surrogate interpolates the evaluated values at all the quasirandom points where the objective has been evaluated. The interpolation function is a radial basis function, because these functions are computationally inexpensive to create and evaluate. Radial basis functions also make it inexpensive to add new points to the surrogate.
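As a rough sketch of this phase only, the following Python snippet uses SciPy's `RBFInterpolator` as a stand-in for the solver's internal radial basis function fit; it is not `surrogateopt`'s actual code, and the objective here is deliberately cheap.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for an expensive objective function.
    return np.sum((x / 5.0) ** 2, axis=-1)

# Phase 1: evaluate the objective at a set of random points inside the
# bounds, then interpolate the values with a radial basis function.
points = rng.uniform(-10, 10, size=(30, 2))
values = objective(points)
surrogate = RBFInterpolator(points, values)

# The surrogate passes through the evaluated points (interpolation)
# and is cheap to evaluate anywhere else in the bounded region.
print(np.allclose(surrogate(points), values))
```

Because adding a point to an RBF interpolation is inexpensive, the surrogate can be updated after every new objective evaluation, which is exactly what the search phase relies on.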

**Search for Minimum**. This phase is more complicated than the **Construct Surrogate** phase, and is detailed below.

In theory, there are two main considerations in searching for a minimum of the objective: refining an existing point to get a better value of the objective function, and searching in places that have not yet been evaluated in hopes of finding a better global minimum.

After constructing a merit function, the solver evaluates it at many sample points in the vicinity of the **incumbent**, meaning the point that has the lowest objective function value among points evaluated since the start of the most recent surrogate construction phase. These sample points are distributed according to a multidimensional normal distribution, centered on the incumbent and truncated to stay within bounds, with standard deviation in each coordinate proportional to the difference between that coordinate's bounds. The standard deviations are multiplied by a scale, `sigma`.

After evaluating the merit function on the sample points, the solver chooses the point with the lowest merit function value, and evaluates the objective function at that point. This point is called an **adaptive point**. The algorithm updates the surrogate (interpolation) to include this point.

- If the objective function value at the adaptive point is sufficiently lower than the value at the incumbent, then the adaptive point becomes the incumbent. This is termed a successful search.
- If the objective function value at the adaptive point is not sufficiently lower than the value at the incumbent, then the search is termed unsuccessful.
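To make the sampling step concrete, here is a simplified Python sketch. It is my own simplification: the merit function here is just the surrogate value, omitting the distance-based exploration term the real algorithm also uses, and the truncation to bounds is done by simple clipping.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_candidates(incumbent, lb, ub, sigma_scale, n=1000):
    """Draw candidate points around the incumbent, truncated to bounds.
    Per-coordinate std dev is sigma_scale * (ub - lb)."""
    std = sigma_scale * (np.asarray(ub, float) - np.asarray(lb, float))
    pts = rng.normal(incumbent, std, size=(n, len(incumbent)))
    return np.clip(pts, lb, ub)  # crude truncation to the bounds

# Stand-in surrogate (would be the RBF interpolation in practice).
surrogate = lambda x: np.sum(x**2, axis=-1)

incumbent = np.array([1.0, 2.0])
cands = sample_candidates(incumbent, [-10, -10], [10, 10], 0.2)
adaptive = cands[np.argmin(surrogate(cands))]  # next point to evaluate
print(adaptive)
```

The objective (not the surrogate) is then evaluated at `adaptive`, and the success or failure of that evaluation drives the scale update described next.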

The scale `sigma` changes as the search proceeds:

- If there have been three successful searches since the last scale change, then the scale is doubled: `sigma` becomes `2*sigma`.
- If there have been `max(5,nvar)` unsuccessful searches since the last scale change, where `nvar` is the number of problem dimensions, then the scale is halved: `sigma` becomes `sigma/2`.

Generally, the scale shrinks as the search closes in on a minimum, so the adaptive samples concentrate near the incumbent.

For details, see the Surrogate Optimization Algorithm section in the documentation.

You can optionally use the `surrogateoptplot` plot function to monitor the progress of the optimization.

Run the same problem as before, this time using the `surrogateoptplot` plot function and serial function evaluation.

```
options.PlotFcn = 'surrogateoptplot';
options.UseParallel = false;
rng(4) % For reproducibility
[xsol,fval] = surrogateopt(fun,lb,ub,options)
```

Surrogateopt stopped because it exceeded the function evaluation limit set by 'options.MaxFunctionEvaluations'.

xsol = 1×2
   -2.7998   -4.7109

fval = -4.7532

Here is how to interpret the plot. Starting from the left, follow the green line, which represents the best function value found. The solver reaches the vicinity of the second-best point at around evaluation number 30. Around evaluation number 70, there is a surrogate reset. After this, follow the dark blue incumbent line, which represents the best objective function value found since the previous surrogate reset. This line approaches the second-best point after evaluation number 100. There is another surrogate reset before evaluation number 140. Just before evaluation number 160, the solver reaches the vicinity of the global optimum. The solver continues to refine this solution until the end of the evaluations, number 200. Notice that after evaluation number 190, all of the adaptive samples are at or near the global optimum, showing the shrinking scale. There is a similar discussion of interpreting the plot in the `surrogateopt` documentation.

I hope that this brief description gives you some understanding of the `surrogateopt` solver and how it works.

Did you enjoy seeing how to use surrogate optimization? Do you have problems that might be addressed by this new solver? Tell us about it here.


Today I’d like to introduce a guest blogger, David Garrison, who is a MATLAB Product Manager here at MathWorks.

Hello everyone. A few months ago I announced the MATLAB Online Live Editor Challenge - a competition for students and faculty to show off their live scripts. We received a lot of great entries from universities all over the world. Loren has been kind enough to let me use her blog to show off the winning entries.

**1st Place: Building a Forest of Trees in MATLAB**

*Ameer Hamza Khan - Hong Kong Polytechnic University*

The Forest of Trees live script demonstrates how to create and visualize a tree data structure in MATLAB. It shows how to use classes and object-oriented programming to construct custom data structures, and uses MATLAB graphics to visualize and explain the tree data structure. The live script also makes use of numeric sliders to let the user customize their tree and see how changing each option affects the resulting tree by watching the graphical output change in response to the slider.

**2nd Place: How biodiversity is maintained in competitive ecosystems**

*Violeta Calleja Solanas - University of Zaragoza*

This MATLAB live script demonstrates two models that explain species coexistence in ecosystems: high-order interactions and short-ranged spatial interactions. The script simulates example ecosystems and uses live controls to let users interact with the models and observe the changes. The assumptions and equations behind each model are explained in rich text. The script also explores the power law in the distribution of community sizes in the simulated ecosystem.

**3rd Place: Neural Networks: The Universality Theorem**

*Mayank Jhamtani - Birla Institute of Technology and Science*

The Neural Networks: Universality Theorem project explores the Universal Approximation Theorem, which states that a single layer of "artificial neurons" can be used to approximate any function, with an arbitrarily small approximation error. This project presents an intuitive proof of the theorem by means of visual aids. The project allows the user to vary the different network parameters to approximate an arbitrary function f(x).

**4th Place: Calculate the Sag of Conductors using MATLAB**

*Timon Viola - Budapest University of Technology and Economics*

This live script shows how to simulate the sag of power line conductors. It allows users to explore the effects of different parameters, such as temperature, conductor type, and tension span, on the calculation of points of the catenary. The results are visualized in a plot.

**5th Place: Digital Processing of Electromyographic Signals for Control**

*David Leonardo Rodriguez Sarmiento - Antonio Nariño University*

This live script shows how complex mathematical calculations of digital signal processing (DSP) can be performed to infer information from biological signals (biosignals) acquired by sensors.

**1st Place: The dynamics of rigid bodies system**

*Anna Sibilska-Mroziewicz - Warsaw University of Technology*

The Dynamics of Rigid Bodies Systems live script uses the power of live scripts to teach students the concept. It provides detailed graphics to illustrate a given system, and allows students to finalize the equations needed to model the system, providing useful, custom error messages if they model the system incorrectly. Students can adjust various values of the system using numeric sliders. The live script also makes use of the power of Symbolic Math Toolbox in its equations, including specifying units for each parameter in the system.

**2nd Place: Influence of Permanent Magnet DC Motor Parameters**

*Alexander Ivanov - Skolkovo Institute of Science and Technology*

This live script explains the working principles of a PMDC motor and how it can be modeled. The transient process is calculated and plotted. The script also explores the influence of the armature resistance, armature inductance, armature inertia, magnetic flux and supply voltage on the transient process.

**3rd Place: Visualization and analysis of an Electrocardiogram Signal**

*Constantino Reyes-Aldasoro - City University of London*

The Electrocardiogram Live Script uses Signal Processing Toolbox to find peaks of data from an EKG and shows how to refine the peaks based on the user's data. The live script also shows how to gather data from various sources, including data from a web site, and provides some tips on visualizing complex data in MATLAB figures to help see critical regions, such as peaks, more clearly. In addition, it illustrates how to infer heart rate from the peaks of the electrocardiogram data.

Let the winners know what you think of their live scripts here.


Published with MATLAB® R2018a


Today's guest blogger is Mary Fenelon, who is the product marketing manager for Optimization and Math here at MathWorks. In today's post she describes how she uses optimization to try to best the rest of the product marketing team.

The National Football League 2018 season will soon be (or already is?) upon us. Several of us on the product marketing team here at MathWorks have played in an office league for the last couple of years. I don't follow the NFL very closely, so I didn't think about participating until one of my fellow product marketing managers challenged me to do so by remarking that this looked like an optimization problem. That's all it took!

- This is a knockout or survivor style pool. Every week, your goal is to pick the winning team for one game. You can only choose a team once for the entire season. If your team wins or ties you move on to the next week and pick from the remaining teams. As long as you haven’t picked a losing team you can choose from the entire group of NFL teams.
- If you pick a losing team you are not out of the knockout pool. You are now restricted to picking from the teams that had a non-winning record last season. You can try to keep your streak alive, but now you have to do it with a more limited set of teams.
- Whoever stays alive for the most weeks wins the knockout pool. The longer you keep the knockout streak alive, the bigger the knockout pool grows. It starts at 50% of the pool and grows from there; the pot grows until the last person has two losses. If two or more people tie, they either split the pool or determine a final winner by a joust in the parking lot.
- After your second loss you can still win the secondary pool, which goes to whoever picks the most winners throughout the season. Once you’ve lost twice you can go back to picking any team as long as you haven’t already picked them. Whoever wins the knockout pool is ineligible for this prize unless there are unforeseen circumstances (if everyone is knocked out in week 2, for example).

I could simply choose a team each week without considering whether it would be better to save that team for a later week. That myopic strategy could leave me with poor choices at the end of the season. Instead, my first strategy decision is to run an optimization each week that chooses teams for all of the remaining weeks of the season, but use only the choice for the current week. That way, I benefit from updated information as the season progresses while still looking at the whole schedule.

Coming up with a set of choices can be modeled as a linear assignment problem. The goal in an assignment problem is to assign workers to tasks so that the cost of completing all the tasks is minimized. Each worker can only do one task. For the NFL pool, the workers are the teams and the tasks are the weeks.
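To make the correspondence concrete, here is a toy version of the assignment idea, solved by brute force over permutations. This is illustrative only; the probabilities are made up, and for real instances you would use a solver rather than enumerating permutations.

```
% Toy assignment: 3 teams, 3 weeks; probs(i,j) is the win probability
% if team i is picked in week j. Find the assignment of one team per
% week (each team used at most once) that maximizes the total.
probs = [0.9 0.2 0.5;
         0.4 0.8 0.3;
         0.6 0.7 0.9];
allPerms = perms(1:3);            % every way to assign weeks to teams
best = -inf;
for k = 1:size(allPerms,1)
    p = allPerms(k,:);            % p(i) = week assigned to team i
    total = sum(probs(sub2ind(size(probs),1:3,p)));
    if total > best
        best = total;
        bestAssign = p;
    end
end
% Here best = 2.6, with bestAssign = [1 2 3]: each team plays the week
% where it is strongest.
```

The number of permutations grows factorially, which is why the real problem uses a linear programming formulation instead.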

What to use for the costs of each team-to-week assignment? This choice is the second place where the art and science of modeling comes into play. Art, to think of measures, and science, to validate that the chosen measure gives the desired result. My choice is to use the win probabilities produced by fivethirtyeight.com. This article explains how they built their predictive model using the entire history of the NFL as data. With that measure, the objective is to maximize the sum of the win probabilities of the chosen teams over the entire season.

Linear assignment problems can be solved by the Hungarian algorithm. This implementation on the MATLAB File Exchange is a popular one. These problems can also be solved by the functions in Optimization Toolbox. Formulating the problem as an optimization problem gives me the option to specify additional constraints, for instance, to follow the rule that after one loss, I'm restricted to choose from the losers bracket.

**Season Data**

For this post, I'll generate some data instead of using actual NFL data. I'll use about half the teams and half as many games as the NFL.

```
nSeasonWeeks = 8;
teams = ["Aardvarks","Bats","Cheetahs","Dragons","Emus","Foxes","Giraffes","Hippos","Ibexes","Jackals",...
    "Koalas","Lemurs","Monkeys","Newts","Otters","Porcupines"]';
nTeams = numel(teams);
```

Designate some teams as losers from the previous season for the losers bracket. Set the random seed to reproduce the same bracket each time I run the script.

```
rng(19);
nLosers = floor(nTeams/2);
idx = randperm(nTeams);
losers = teams(idx(1:nLosers));
```

**Weekly Data**

I'll rerun the model each week, keeping track of the week of the season and which teams I already picked.

Each week I increment the week counter.

```
thisWeek = 2;
```

Each week I add the team that was chosen the previous week:

```
previousPicks = ["Aardvarks"];
```

Generate the labels for the remaining weeks:

```
weeks = "week" + string(thisWeek:nSeasonWeeks)';
nWeeksLeft = nSeasonWeeks - thisWeek + 1;
```

Get the probabilities.

Here's where I would scrape a web page for predictions. Instead, I'll use a random matrix for the win probabilities. The random values won't show the patterns one would see in real data, but they are sufficient for showing how to set up and run the optimization model. One step towards more realistic probabilities is to first generate team pairings for the games and a game schedule. This is an optimization problem in itself; for an example see scheduling the ACC basketball conference.

```
winProbs = rand(nTeams,nWeeksLeft);
```

I will use the problem-based workflow for optimization introduced in R2017b. This workflow simplifies the process of specifying and solving linear and mixed-integer linear programs, that is, optimization problems with linear constraints and objectives and with variables that can take on continuous or discrete values.

**Variables**

First, define a two-dimensional optimvar, `x`, indexed by teams and weeks. The optimization solver will compute values for an `optimvar` included in an optimproblem. I make this a binary variable, that is, a variable that takes on only the values 0 and 1, with this statement:

```
x = optimvar('x',teams,weeks,'LowerBound',0,'UpperBound',1,'Type','integer');
```

In my model, a value of 1 for `x(i,j)` will mean that team `i` is chosen for week `j` and a value of 0 means it is not chosen.

To eliminate a team that's already been picked, set the upper bounds of variables corresponding to those teams to 0.

```
x.UpperBound(previousPicks,weeks) = 0;
```

**Optimization Problem and Objective**

Next, define the optimization problem and the objective to maximize the sum of the win probabilities of the chosen teams:

```
p = optimproblem;
p.ObjectiveSense = 'maximize';
p.Objective = sum(sum(winProbs.*x));
```

**Constraints**

The first constraint statement generates one constraint for each team: a team can be chosen at most once. The result is an optimconstr. Show the first constraint of each set to check the formulation as it's built.

```
p.Constraints.eachTeamAtMostOnce = sum(x,2) <= 1;
showconstr(p.Constraints.eachTeamAtMostOnce(1))
```

x('Aardvarks', 'week2') + x('Aardvarks', 'week3') + x('Aardvarks', 'week4') + x('Aardvarks', 'week5') + x('Aardvarks', 'week6') + x('Aardvarks', 'week7') + x('Aardvarks', 'week8') <= 1

The second constraint statement generates one constraint for each week: a team must be chosen.

```
p.Constraints.mustPickOne = sum(x,1) == 1;
showconstr(p.Constraints.mustPickOne(1))
```

x('Aardvarks', 'week2') + x('Bats', 'week2') + x('Cheetahs', 'week2') + x('Dragons', 'week2') + x('Emus', 'week2') + x('Foxes', 'week2') + x('Giraffes', 'week2') + x('Hippos', 'week2') + x('Ibexes', 'week2') + x('Jackals', 'week2') + x('Koalas', 'week2') + x('Lemurs', 'week2') + x('Monkeys', 'week2') + x('Newts', 'week2') + x('Otters', 'week2') + x('Porcupines', 'week2') == 1

**Solve**

Now solve the optimization problem. The solve function will call the mixed-integer linear programming solver, intlinprog, because some of the optimization variables are of type integer.

```
soln = solve(p);
```

LP: Optimal objective value is -6.661074.

Optimal solution found.

Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 0 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-05 (the default value).

**Tabulate choices**

Optimization Toolbox solvers use floating-point arithmetic, so it's possible that the solution values are not exactly integral. I want to use the values as logical indices, so I will round them just to be sure that they are.

```
picks = round(soln.x);
[teampicks,weekpicks] = find(picks);
probpicks = find(picks);
table(weeks(weekpicks),teams(teampicks),winProbs(probpicks),...
    'VariableNames',{'Week','Team','WinProb'})
```

ans = 7×3 table

     Week        Team      WinProb
    _______    _________   _______
    "week2"    "Foxes"     0.94616
    "week3"    "Lemurs"     0.9943
    "week4"    "Bats"      0.97137
    "week5"    "Koalas"    0.99744
    "week6"    "Dragons"   0.91183
    "week7"    "Ibexes"    0.98344
    "week8"    "Hippos"    0.85654

The win probabilities will be updated on the results of the prior games as the season progresses. Maybe I don't want to value the probabilities for the end of the season as highly as those in the current week. This can be done by applying a discount factor. In the Live Script version of this post, I set the discount factor with a Live Control to make it easy to experiment with different values.

```
discountFactor = 0.05
scalePerWeek = (1-discountFactor).^(0:nWeeksLeft-1);
dProbs = winProbs.*scalePerWeek;
p.Objective = sum(sum(x.*dProbs));
soln = solve(p);
picks = round(soln.x);
[teampicks,weekpicks] = find(picks);
probpicks = find(picks);
table(weeks(weekpicks),teams(teampicks),winProbs(probpicks),...
    'VariableNames',{'Week','Team','WinProb'})
```

discountFactor = 0.05

LP: Optimal objective value is -5.755877.

Optimal solution found.

Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 0 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-05 (the default value).

ans = 7×3 table

     Week        Team      WinProb
    _______    _________   _______
    "week2"    "Foxes"     0.94616
    "week3"    "Lemurs"     0.9943
    "week4"    "Bats"      0.97137
    "week5"    "Koalas"    0.99744
    "week6"    "Dragons"   0.91183
    "week7"    "Ibexes"    0.98344
    "week8"    "Hippos"    0.85654

Applying a discount factor of 5% didn't change the results.

I might want to only choose winning teams when I have no losses so that I can pick the best of the losing teams when I have to pick among them.

```
onlyWinners = false;
if onlyWinners
    p.Constraints.onlyWinners = sum(x(losers,weeks),1) == 0;
end
```

With one loss, I must pick from the losers bracket.

```
onlyLosers = true;
if onlyLosers
    p.Constraints.onlyLosers = sum(x(losers,weeks),1) >= 1;
    showconstr(p.Constraints.onlyLosers(1))
end
```

x('Aardvarks', 'week2') + x('Cheetahs', 'week2') + x('Dragons', 'week2') + x('Emus', 'week2') + x('Foxes', 'week2') + x('Koalas', 'week2') + x('Newts', 'week2') + x('Otters', 'week2') >= 1

```
soln = solve(p);
picks = round(soln.x);
[teampicks,weekpicks] = find(picks);
probpicks = find(picks);
table(weeks(weekpicks),teams(teampicks),winProbs(probpicks),...
    'VariableNames',{'Week','Team','WinProb'})
```

LP: Optimal objective value is -4.838951.

Optimal solution found.

Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 0 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-05 (the default value).

ans = 7×3 table

     Week        Team      WinProb
    _______    _________   _______
    "week2"    "Foxes"     0.94616
    "week3"    "Lemurs"     0.9943
    "week4"    "Bats"      0.97137
    "week5"    "Koalas"    0.99744
    "week6"    "Dragons"   0.91183
    "week7"    "Ibexes"    0.98344
    "week8"    "Hippos"    0.85654

So how did I do? In both years, I was knocked out fairly early but finished at the top of the secondary pool.

Unfortunately, one player had no losses in both years so there was no secondary pool award. That player has not revealed his strategy but there is speculation that he uses an optimization model. A player with just one loss did reveal his strategy: picking the best team from fivethirtyeight.com each week. That's the myopic strategy I didn't want to use! I can see why it makes sense, though. There are about twice as many teams as games so as long as you can pick from the entire league, there should be some good picks available each week. A greedy strategy also makes sense when the predictions are changing during the season.

I'm not willing to give up on placing well in the secondary pool so I'll keep my assignment model. I can make it greedier by using a higher discount factor. Maybe I should also deal with the uncertainty in the probabilities by using data from more than one source or by using a stochastic optimization approach. Now that I have two years' worth of data I can do some model validation before deciding on this year's strategy. It's time to get busy!

Do you have a better approach? Let us know your thoughts here.



In my travels, I meet with many customers and they are always interested in learning about new MATLAB features and capabilities. One area that has a lot of interest is MATLAB app building. Many MATLAB users are interested in App Designer, GUIDE, and the future of the two app building platforms. So for this blog, I thought I would put some of those questions to Chris Portal, Development Manager for MATLAB Graphics & App Building, and David Garrison, MATLAB Product Manager.

- What is the current state of app building in MATLAB?
- Why did MathWorks develop a whole new app building platform?
- How does App Designer compare to GUIDE?
- What does that mean for the future of GUIDE and the apps users have built with it?
- What about users who create their apps programmatically?
- What about apps that use Java extensions?
- How do users take their apps to the Web?
- How do users decide which app building platform is right for them?
- How are you building apps in MATLAB?

**CP:** Currently MATLAB offers two platforms for building apps – GUIDE and App Designer. GUIDE is an older platform that MATLAB users have been using for many years. Although users have been able to build apps of varying levels of sophistication with GUIDE, it has suffered from a number of workflow and usability issues we’ve been wanting to address for our users. Similarly, the component set it supports, which is predominantly the `uicontrol` set, is also very limited and based on some legacy technologies.

**DG:** In R2016a we introduced App Designer as our new app building platform. App Designer integrates the two tasks of building an app – laying out the visual components and programming the behavior. It has a new design canvas that makes it easier to add components and to organize them using tabs and panels. It includes a built-in editor that manages generated code for components in read-only sections and provides editable sections for user-written callback code. It also supports a new family of standard components such as edit fields, buttons, and spinners, as well as gauges, knobs, switches, and lamps for creating instrument panels.

**CP:** A major difference between GUIDE and App Designer is the technology used. GUIDE is based on Java Swing which is no longer being actively developed by Oracle. Building on it would have allowed some short-term wins, but it would not have scaled in the long-term or allowed us to offer web-based workflows for our users. App Designer is built on modern, web-based technologies such as JavaScript, HTML, and CSS, giving us a platform with the flexibility to keep up with the demands of our users and allowing apps to run on the web. Users can keep their existing Java-based apps running and choose to opt into the new platform when the time is right for them.

**DG:** When we introduced App Designer in R2016a, it offered a modern and user-friendly environment for laying out your app, which addressed several usability issues GUIDE has. However, for the first few releases, App Designer had some functional gaps with respect to GUIDE. Common components were missing, MATLAB graphics support was limited, and performance didn’t scale for large apps. With each release, we have been closing these gaps and addressing performance. As of R2018a, App Designer supports nearly all MATLAB 2D and 3D visualizations with pan, zoom, and rotate interactivity; menu support has been added as well as new tree and date picker components; and the code editor is able to scale to build large apps.

**CP:** Another notable difference is the coding model. App Designer generates a MATLAB class for the app, making it easier to program callbacks and share data between different parts of the app in a way that is less error prone than GUIDE. What this means is you no longer need to update a handles structure, or understand the subtleties of when to use guidata vs. appdata vs. UserData. New MATLAB interfaces have also been introduced which are designed specifically for each component. These new interfaces are easier to program to and improve on the older `uicontrol` component used by GUIDE.
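As a rough illustration of that class-based coding model, here is a minimal hand-written app in the same style (this is a sketch, not App Designer-generated code; the class and property names are invented for the example):

```
% SimpleApp: shared state lives in properties, so callbacks read and
% write it directly -- no handles structure or guidata calls needed.
classdef SimpleApp < handle
    properties
        UIFigure  matlab.ui.Figure
        Button    matlab.ui.control.Button
        Count = 0                       % state shared between callbacks
    end
    methods
        function app = SimpleApp
            app.UIFigure = uifigure('Name','Simple App');
            app.Button = uibutton(app.UIFigure, ...
                'Text','Click me', ...
                'ButtonPushedFcn',@(src,evt) app.onClick());
        end
        function onClick(app)
            app.Count = app.Count + 1;  % update shared state in place
            app.Button.Text = sprintf('Clicked %d times',app.Count);
        end
    end
end
```

In App Designer itself, the component-creation code is generated and kept read-only; you write only the callback methods.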

**DG:** We know that many MATLAB users have time and intellectual property invested in GUIDE-based apps or in apps they've created programmatically. We will continue to support GUIDE and its associated components and have no current plans to deprecate or remove any of that functionality. Unlike other MATLAB transitions, GUIDE and App Designer can co-exist, which allows us and our users to work through the transition over a series of releases. Our focus right now is on enhancing App Designer to ensure it can serve the needs of MATLAB app builders and helping GUIDE users adopt it.

**CP:** Towards that end, we have released the GUIDE to App Designer Migration Tool for MATLAB in R2018a which eases the process of converting a GUIDE-based app to App Designer. The tool automatically creates an App Designer app with the same layout as your GUIDE app, applies the necessary component configurations, and copies the GUIDE callback code over. From there, you update the callback code to make it compatible with the new App Designer code format. To help users with this step, the tool generates a report that describes how to do the code updates and provides workarounds for some limitations. You can download the tool from the File Exchange on MATLAB Central or from the Add-On Explorer in the MATLAB desktop.

**DG:** We know some users choose not to use an interactive environment like GUIDE or App Designer – they prefer to create their apps programmatically in MATLAB. You can continue to hand-code your apps regardless of which component set you use, whether it’s the older `uicontrol` function, or the newer component set we’ve been expanding since R2016a.

**CP:** Some of these users have used Java Swing to extend the capabilities of their apps including the use of the `javacomponent` function to add custom components. They needed to do this to integrate components we did not support like tabs, trees, and date pickers, and to customize components beyond what was documented, including richer cell level formatting for tables. Others have used the undocumented `JavaFrame` property to do things like maximize or minimize the figure window. And in a few cases, users have leveraged Java Swing directly in order to take advantage of things like Java layout managers to build IDE-like apps. App Designer gives us a foundation to address these long-standing gaps for our users. App Designer has been adding support for missing components and enhancing existing ones. We have a number of features lined up that will help bridge the Java Swing gap, and enable all of our MATLAB users to build more sophisticated apps.

**DG:** We have been actively surveying users to understand how they are using Java to extend their apps. The feedback we hear directly impacts the team’s work. For example, in our survey on `JavaFrame` use, we discovered the number one reason for its use was to programmatically maximize or minimize the figure window. We added documented support for this in R2018a. We also added `uitree` in response to feedback from our survey on `javacomponent` use. Our plan is to have each release of MATLAB address some gap that has led users to go to Java, so we encourage users to fill out these surveys. Finally, we recognize some users may require specialized components that may be of lower priority for most app builders. In order to address this, we are also investigating ways to provide a documented solution for integrating 3rd party JavaScript components in MATLAB apps.

**DG:** There are a couple of ways to do this. One is by using MATLAB Online. MATLAB Online lets you run MATLAB in a desktop web browser from any computer that has access to the internet. MATLAB Online is available with select licenses. Check your eligibility here. You create an app using the desktop version of MATLAB and save it to your MATLAB Drive. To run the app in a web browser, use your MathWorks account to log onto MATLAB Online at matlab.mathworks.com. You can use MATLAB Drive to easily share your app with anybody else who has access to MATLAB Online.

**CP:** The other way is to use MATLAB Compiler. In R2018a, MATLAB Compiler introduced a new feature that allows you to package App Designer apps as a web app. You create your app on the desktop, package it for the web using MATLAB Compiler, and copy the compiled app to a MATLAB web app server you’ve set up, which is also provided with MATLAB Compiler. This results in a URL that can be accessed in a web browser by anyone who has access to the server. The benefit is anyone can run the app in a browser, even if they aren’t MATLAB users. This approach is ideal for sharing apps on-premise, for co-workers to access via a web browser.

**DG:** We recommend users start with App Designer for all new apps unless the app needs to run in a version of MATLAB prior to R2016a. It is the platform we are continuously enhancing and expanding with each release. We also think many users will benefit from migrating their GUIDE apps to App Designer using the migration tool that Chris mentioned. Migration will give your app a more modern look and will make it possible to deploy your app to the Web.

**CP:** Users might consider continuing to use GUIDE for the following reasons:

- The migration tool highlights a limitation that is critical to the app’s workflow and cannot be worked around.
- The app needs to run in older releases of MATLAB that predate App Designer’s release in R2016a.
- The app relies on the use of undocumented Java functionality that is not supported in App Designer.

**LS: Well, I think that's about it. Thank you both for sharing this valuable information with my readers.**

Have you tried App Designer? Let us know here.


It wasn't until recently that I realized this functionality (memoize) was added to MATLAB in R2017a. Needless to say, the shipping function is different from the solution I presented over 10 years ago, and it doesn't have the limitations that mine had (mine was limited to elementwise functions with a single input).

What is Memoization?

Let's Try It

Is That All?

Do You Use Memoization?

From Reference from My 2006 Post

The idea of memoization is to cache function results from specific inputs so if these same inputs are used again, the function can simply return the values computed earlier, without rerunning the computation. This can be useful if you have a function that is very expensive to compute.
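To show the idea in miniature, here is a bare-bones memoizer built on `containers.Map`. This is an illustrative sketch for scalar numeric inputs only (the function name is invented); the shipping memoize function is far more general and manages the cache size for you.

```
% simpleMemoize: wrap a one-argument function F so repeated calls with
% the same scalar input return the cached result instead of recomputing.
function f = simpleMemoize(F)
    cache = containers.Map('KeyType','double','ValueType','any');
    f = @lookup;
    function out = lookup(in)
        if isKey(cache,in)
            out = cache(in);    % cache hit: return stored result
        else
            out = F(in);        % cache miss: compute...
            cache(in) = out;    % ...and store for next time
        end
    end
end
```

Because `cache` is captured by the nested function, it persists across calls to the returned handle, which is the same closure trick my 2006 solution used.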

Of course, if you run the memoized function a lot, it will take up increasing amounts of memory as unique inputs get added to the list, unless we do something to limit the cache size. That's what MATLAB does now with the memoize function.

As in my earlier post, let's try something simple, the sine function.

fmem = memoize(@sin)

fmem =
  MemoizedFunction with properties:

     Function: @sin
      Enabled: 1
    CacheSize: 10

y = fmem(pi./(1:5)')

y = 5×1

   1.2246e-16
            1
      0.86603
      0.70711
      0.58779

So, we still get the answers we expect.

Now let's compute some more values, some already in the cache and others not.

ymore = fmem(pi./(1:10)')

ymore = 10×1

   1.2246e-16
            1
      0.86603
      0.70711
      0.58779
          0.5
      0.43388
      0.38268
      0.34202
      0.30902

Again, no surprises in the output. The values are the ones we expect. I am not doing enough computation here for you to see the benefit of reduced time from caching, however.

Of course not! There are a bunch of choices you can use to control how much information gets cached, etc. Here are some links for more information.

Now let's see how this works. First, what is fmem?

fmem

fmem =
  MemoizedFunction with properties:

     Function: @sin
      Enabled: 1
    CacheSize: 10

We see what function is being memoized, that caching is enabled, and how many distinct inputs can be cached. Since the inputs are considered collectively, and I have called fmem with only a few distinct input vectors so far, the cache is not yet full.

Let's see what's been cached.

s = stats(fmem)

s = struct with fields:

                    Cache: [1×1 struct]
       MostHitCachedInput: [1×1 struct]
      CacheHitRatePercent: 77.778
    CacheOccupancyPercent: 40

s.Cache

ans = struct with fields:

         Inputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
        Nargout: [1 1 1 1]
        Outputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
       HitCount: [4 9 1 0]
      TotalHits: 14
    TotalMisses: 4

And now let's use another input.

ysomemore = fmem(pi./-(1:12)')

ysomemore = 12×1

  -1.2246e-16
           -1
     -0.86603
     -0.70711
     -0.58779
         -0.5
     -0.43388
     -0.38268
     -0.34202
     -0.30902
            ⋮

snew = stats(fmem)

snew = struct with fields:

                    Cache: [1×1 struct]
       MostHitCachedInput: [1×1 struct]
      CacheHitRatePercent: 78.947
    CacheOccupancyPercent: 40

snew.Cache

ans = struct with fields:

         Inputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
        Nargout: [1 1 1 1]
        Outputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
       HitCount: [4 9 1 1]
      TotalHits: 15
    TotalMisses: 4

Now see what happens to the cache if we repeat an input.

yrepeat = fmem(pi./(1:10)')

yrepeat = 10×1

   1.2246e-16
            1
      0.86603
      0.70711
      0.58779
          0.5
      0.43388
      0.38268
      0.34202
      0.30902

srepeat = stats(fmem)

srepeat = struct with fields:

                    Cache: [1×1 struct]
       MostHitCachedInput: [1×1 struct]
      CacheHitRatePercent: 80
    CacheOccupancyPercent: 40

srepeat.Cache

ans = struct with fields:

         Inputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
        Nargout: [1 1 1 1]
        Outputs: {{1×1 cell}  {1×1 cell}  {1×1 cell}  {1×1 cell}}
       HitCount: [4 10 1 1]
      TotalHits: 16
    TotalMisses: 4

I can also clear the cache for a particular function or clear the caches for all memoized functions:
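The calls look like this (using the fmem memoized function from above):

```
clearCache(fmem)          % clear the cache of this MemoizedFunction
clearAllMemoizedCaches    % clear the caches of all memoized functions
```

Clearing the cache does not disable memoization; subsequent calls simply start repopulating the cache.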

Do you ever use memoization in your code, with or without the MATLAB functions? Let us know how you do this here.

```
function f = memoize2(F) % one-arg F, inputs testable with ==
% allow nonscalar input.
x = [];
y = [];
f = @inner;
    function out = inner(in)
        out = zeros(size(in));       % preallocate output
        [tf,loc] = ismember(in,x);   % find which in's already computed in x
        ft = ~tf;                    % ones to be computed
        out(ft) = F(in(ft));         % get output values for ones not already in
        % place new values in storage
        x = [x in(ft(:).')];
        y = [y reshape(out(ft),1,[])];
        out(tf) = y(loc(tf));        % fill in the rest of the output values
    end
end
```

and

```
function f = memoize1(F) % one-arg F, inputs testable with ==
x = [];
y = [];
f = @inner;
    function out = inner(in)
        ind = find(in == x);
        if isempty(ind)
            out = F(in);
            x(end+1) = in;
            y(end+1) = out;
        else
            out = y(ind);
        end
    end
end
```

I was talking with Mike, my boss, one afternoon. And we had been fiddling with some paper as we spoke. After trimming a page, we ended up with a not skinny strip of stiff paper. As Mike twisted it, we wondered if the envelope we could see was a sine wave.

First thing, take a picture to load into MATLAB.

```
imshow sinWave.png
```

Next do an eyeball experiment. I select a section of the twisted paper. And overlay a sine wave, with guessed amplitude.

```
hold on
axis on
x = [244 329];
y = [170 170];
del = diff(x);
nsteps = 100;
xx = x(1):del/nsteps:x(2);
lenxm1 = length(xx)-1;
scale = 4.0;
yy1 = y(1)+scale*sin((0:lenxm1)/lenxm1*pi);
yy2 = y(1)-scale*sin((0:lenxm1)/lenxm1*pi);
plot(xx,yy1,'m',xx,yy2,'m')
hold off
axis off
```

The problem is, I don't know what the scale factor should be. I could try some values by changing the code and re-running the section, or I can take advantage of one of the new features available in Live Scripts, a numeric slider control. I can find this in the Insert gallery of the MATLAB Toolstrip.

So let's start again and see what this would look like.

```
imshow sinWave.png
hold on
x = [244 329];
y = [170 170];
del = diff(x);
nsteps = 100;
xx = x(1):del/nsteps:x(2);
lenxm1 = length(xx)-1;
scale = 7.5
```

scale = 7.5

```
yy1 = y(1)+scale*sin((0:lenxm1)/lenxm1*pi);
yy2 = y(1)-scale*sin((0:lenxm1)/lenxm1*pi);
plot(xx,yy1,'m',xx,yy2,'m')
hold off
```

When you try the interactive controls in the Live Editor, you will see that the section reruns after the control is updated. A nice way to explore the consequences of specific parameter choices for your work!

Next let's see a zoomed in view.

axis([220, 350, 150 190])

Of course we could zoom interactively but since I don't feel like creating a video, I am doing it programmatically instead. You can use the interactive zoom tools for plots in the Live Editor if you prefer.

My exploration by plotting doesn't prove anything, but I do think it's suggestive that a sine wave is at least a good candidate for the envelope of the twisted strip.

By the way, this is a picture of the Live Editor code with the slider in it.

I have found that the interactive tools in the Live Editor make parts of my exploratory work much more efficient and satisfying. What have you been able to do more easily with the Live Editor? Let me know here.


I recently wrote about the relatively new Pause button on the Editor portion of the MATLAB Toolstrip. Let's pause again to think about how we can exploit the awesome Pause button on the toolbar.

I already told you how you might use this button, of course, while your code is running. But there's even more you can do than I mentioned earlier.

You can also

- do several related tasks **while the code is running** (i.e., a pause is not required):
  - peek and see what's going on,
  - then use `profile on`,
  - continue,
  - wait a bit,
  - then pause again, and, finally,
  - use `profile report`
- open some other file to look at while the code is running
- set some more breakpoints **while code is running**

and the right thing will happen. You can set a breakpoint by

- right-clicking your mouse in the area on the left of the Editor that is green, then
- selecting a breakpoint option

What actions do you take while using the Pause button in MATLAB? Let us know here.
