Hello, this is Doke (道家). Today I'd like to introduce some activities related to STEM education. You probably know that MathWorks products are used at universities, research institutes, and companies, but did you know that they are also used in high schools and below? Especially in Europe and the US, STEM education makes considerable use of MATLAB...

- Record a video of a thrown ball with a camera or smartphone
- Load the video into the app
- Apply image processing to detect the ball in the video (trial and error with the RGB, HSV, and L*a*b* color spaces)
- Convert the scale (pixels → cm)
- Read the initial velocity off the graph
- Plug the initial velocity into the theoretical formula and plot the theoretical values
- Compare the experimental data with the theoretical values and discuss
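The scale-conversion and theory steps above can be sketched in a few lines. This is a hypothetical illustration, not the classroom app: the scale factor and the initial speed "read from the graph" are made-up numbers, and Python stands in for MATLAB.

```python
# Hypothetical numbers for illustration only: the scale factor and the
# initial speed read off the graph are invented, not real classroom data.
g = 980.0                      # gravitational acceleration in cm/s^2
cm_per_pixel = 0.5             # assumed scale factor for the pixel -> cm step
v0 = 600.0 * cm_per_pixel      # initial speed: 600 px/s converted to cm/s

def height_theory(t):
    # theoretical height of a vertical throw: y(t) = v0*t - (g/2)*t^2
    return v0 * t - 0.5 * g * t**2

# theoretical values at a few frame times, ready to plot against the data
times = [0.0, 0.1, 0.2, 0.3]
theory = [height_theory(t) for t in times]
```

Plotting `theory` against the measured positions is the comparison step.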


I have learned a lot more about Kuramoto oscillators since I wrote my blog post three weeks ago. I am working with Indika Rajapakse at the University of Michigan and Stephen Smale at the University of California, Berkeley. They are interested in the Kuramoto model because they are studying the beating of human heart cells. At this point we have some interesting results and some unanswered questions.

Recall that the Kuramoto model is a system of $n$ ordinary differential equations.

$$\dot{\theta_k} = \omega_k + \frac{\kappa}{n}\sum_{j=1}^n {\sin{(\theta_j-\theta_k)}}, \ k = 1,\ldots,n.$$

Here $\theta_k(t)$ is a real-valued function of $t$ describing the phase of the $k$-th oscillator, $\omega_k$ is the intrinsic frequency of the $k$-th oscillator and the real scalar $\kappa$ is the strength of the coupling between the oscillators.

When $\kappa = 0$, the equations are linear and easily solved, and the oscillators are independent.

$$\theta_k(t)= \omega_k t$$

When $\kappa$ is increased beyond a critical point, the nonlinear term forces the phases of all the oscillators to approach a common limit.

Rajapakse and Smale have devised a new measure of synchronicity, the potential.

$$v = \frac{4}{n^2} \sum_k{ \sum_{j>k} \sin^2{ \left(\frac{\theta_j - \theta_k}{2} \right) }}$$

The $4/n^2$ normalizes the potential so that

$$0 \leq v \leq 1$$

The simulation is started with the $\theta_k$ equally spaced over the interval $[0,2\pi]$. This makes $v$ reach its maximum, $v = 1$, and indicates that the oscillators are completely out of sync. On the other hand, if we ever reach a point when all the $\theta_k$ are equal to a common value, then $v = 0$, indicating complete synchronization.
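These two endpoint values are easy to verify numerically. Here is a quick NumPy sketch (mine, not from the post) of the potential; it returns 1 for equally spaced phases and 0 for identical phases:

```python
import numpy as np

def potential(theta):
    # v = (4/n^2) * sum over pairs j > k of sin^2((theta_j - theta_k)/2)
    n = len(theta)
    d = theta[:, None] - theta[None, :]
    return (4 / n**2) * np.sum(np.triu(np.sin(d / 2)**2, k=1))

n = 100
v_start = potential(2 * np.pi * np.arange(n) / n)  # equally spaced phases
v_sync = potential(np.full(n, 0.7))                # all phases identical
```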

You can get to the Kuramoto ode system by taking $\partial v/\partial \theta_k$ and forming the gradient of the potential.

$$ \dot{\theta}-\omega = -\frac{\kappa n}{2}\nabla{v}$$
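A finite-difference check of this gradient relation is straightforward (again a NumPy sketch of my own). Note that for $v$ exactly as defined above, matching the ODE term by term gives the constant $\kappa n/2$:

```python
import numpy as np

def potential(theta):
    # v = (4/n^2) * sum over pairs j > k of sin^2((theta_j - theta_k)/2)
    n = len(theta)
    d = theta[:, None] - theta[None, :]
    return (4 / n**2) * np.sum(np.triu(np.sin(d / 2)**2, k=1))

rng = np.random.default_rng(0)
n, kappa = 7, 0.3
theta = rng.uniform(0, 2 * np.pi, n)
omega = rng.uniform(0.75, 1.25, n)

# right-hand side of the Kuramoto system
d = theta[:, None] - theta[None, :]          # d[j, k] = theta_j - theta_k
rhs = omega + (kappa / n) * np.sin(d).sum(axis=0)

# gradient of v by central differences
eps = 1e-6
grad = np.array([(potential(theta + eps * e) - potential(theta - eps * e)) / (2 * eps)
                 for e in np.eye(n)])

# thetadot - omega should equal -(kappa*n/2) * grad(v)
check = np.allclose(rhs - omega, -(kappa * n / 2) * grad, atol=1e-6)
```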

In my previous post, I described the "order parameter", shown in the simulator by the length of the cyan arrow. This varies between 0 initially and 1 at complete synchronization. One of the unanswered questions we have is exactly how the potential and the order parameter are related. A switch in `kuramoto` allows you to monitor either one.

The current version of my simulator is still called `kuramoto`, even though it replaces, and significantly improves, the version I described three weeks ago. The animation above shows it in action. There are five parameters controlled by sliders. The first three affect the dynamics of the system.

- `n`, number of oscillators
- `kappa`, coupling coefficient
- `beta`, "variability", half-width of the `omega` distribution

The vector `omega` is constant. Its components are randomly distributed uniformly in the interval `[1-beta,1+beta]` . This is the only source of randomness in the simulation. Setting `beta` to zero makes all the components of `omega` equal and eliminates any randomness.

The angular velocities of `exp(i*theta)` are color coded with the `parula` color map, blues for the fastest, yellows for the slowest, and greens in between.

Two more sliders affect the display, but not the dynamics of the system.

- `w`, standard deviation of the width of the ring
- `step`, step size of the display

Since `theta` is real, `exp(i*theta)` lies on the unit circle. Display of a large number of oscillators moving around on the circle with different velocities is crowded. So, we display `r*exp(i*theta)` where `r` is normally distributed with mean one and standard deviation `w`. Setting `w` to zero forces the display to lie exactly on the circle.

Increasing `step` speeds up the simulation, but does not affect the computed solution, as long as it isn't too large. Setting `step` to zero pauses the simulation.

The following figures are static snapshots of the simulator output. All the snapshots are monitoring the potential. All are using the reference frame which rotates with the average `theta`, although this is hard to see in a static snapshot. The first four snapshots, with "many" oscillators, all have `n` = 100.

This first snapshot is the last frame of our animation. It has the default settings,

`kappa` = 0.15, `beta` = 0.25

The oscillators are beginning to synchronize, although the potential has barely dropped below `v` = 0.5.

Increase `kappa` from 0.15 to

`kappa` = 0.20

The oscillators are cooperating, but the potential hovers above `v` = 0.13 .

Set

`kappa` = 0.50, `beta` = 0.20

This produces rapid synchronization and a potential near 0.01.

Set

`kappa` = 0.15 (default), `beta` = 1.00 (maximum), `w` = 0 (minimum)

With `beta` at its maximum, there is lots of random action. The potential never drops below 0.99. But with the display width set to zero, all this turbulence is confined to the circle.

Rajapakse and Smale are particularly interested in proving theorems about the behavior of the Kuramoto system when there are only a small handful of identical oscillators.

`n` = 5, `kappa` = 0, `beta` = 0

The five oscillators are represented initially as the vertices of a regular pentagon and, with both `kappa` and `beta` equal to zero, appear to stay there. It seems to be pretty boring.

Try

`kappa` = 0, `beta` = 0.5

Looks interesting. But, hold it, there is no coupling. As I said at the beginning, the exact solution is

$$\theta_k(t)= \omega_k t$$

with five different, but constant, `omega(k)`. We are just seeing a five-term Fourier series. (I got fooled by this one.)

Try

`kappa` = 0.215, `beta` = 0.5

There are actually five oscillators here. Three of them are on top of each other, but it's too soon to tell about the other two. Look at the potential.

I could go on like this all night, but enough for now. You can try some yourself by downloading the simulator from either its individual spot at MATLAB Central or as part of Cleve's Laboratory.

That snapshot I said was boring isn't so boring after all. How *do* you arrange for a small handful of points to be equally spaced around the unit circle, and what are the consequences of any slight discrepancies? Remember `pi` $\ne \pi$.

Indika Rajapakse and Stephen Smale, "Emergence of function from coordinated cells in a tissue," *Proceedings of the National Academy of Sciences,* 114 (7), 1462-1467, 2017, https://www.pnas.org/content/114/7/1462.short.

Dirk Brockman and Steven Strogatz, "Ride my Kuramotocycle", https://www.complexity-explorables.org/explorables/ride-my-kuramotocycle.

Steven Strogatz, *Sync: The Emerging Science of Spontaneous Order*, Hyperion, New York, 2003.

Steven Strogatz, *On the Science of Sync*. Video, http://www.cornell.edu/video/steven-strogatz-ted-talk-on-sync.

Steven Strogatz, "From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators," *Physica D* 143, 1-20 (2000).

Yoshiki Kuramoto, *Chemical Oscillations, Waves and Turbulence*, Springer, 1984. https://www.springer.com/gp/book/9783642696916.


Published with MATLAB® R2018b


Sean's pick this week is `tiledlayout` by MathWorks's development team.

R2019b shipped on September 12th and is now available for download!

Check the Release Notes for all of the updates.

I imagine most of you have used `subplot` at some point.

Subplot alternatives or extensions are very popular on the File Exchange. In fact, I think it may be the most popular topic of contributions to the FEX. A quick search reveals many of the pains associated with subplots:

- Too much spacing between them
- Too much padding on the edges
- Difficulty in arranging the axes based on figure size
- Scaling issues

Enter R2019b, and `tiledlayout`. Tiled layouts address all of the above. Let's look at two examples: minimizing whitespace, and resizing/rescaling.

Create a `tiledlayout`, and use `nexttile` to traverse it.

```
figure
tlt = tiledlayout(2, 2);
nexttile
plot(sin(1:100));
nexttile
surf(peaks)
nexttile([1 2]) % Spanned
ribbon(sin(1:100), cos(1:100))
```

And let's get rid of that whitespace!

```
tlt.Padding = "none";
tlt.TileSpacing = "none";
```

The above example used a fixed 2x2 grid, like you might use with `subplot`. There's also a "flow" option, which rearranges the plots based on the figure size.

```
figure
tlt = tiledlayout("flow");
nexttile
plot(sin(1:100));
nexttile
surf(peaks)
nexttile
ribbon(sin(1:100), cos(1:100))
```

Today, Jose Avendano Arbelaez shares another guest post with us. Make sure you let us know your thoughts in the comments section.

Figure: *Competition Event for VEX Robotics 2018 game “Turning Point”*

Figure: *Simulations of mobile robots using Simulink*

Figure: *Opening Ceremony for the 2018-2019 VEX Robotics World Championship*

Figure: *Students and Teachers at the MathWorks booth learning how to program robot controllers*


Even if we crunch numbers a lot, there are plenty of times, for example, when reporting, that we need to mix numbers into text. For a very long time, we've been able to create a character vector, and more recently, a `string`, with functions like `sprintf`. Look here for more information on character arrays and strings.

That's fine if we want to create one textual entity, but what if I need an array of them? I can use `sprintf` in a loop, or I can use a newer function, `compose`. Let me show you a bit how they each work.

Let's create some numeric values to use as possible inputs.

nums = [exp(1) pi]

nums = 2.7183 3.1416

Here's character output from `sprintf`:

```
cs1 = sprintf('%d', nums(2))
```

cs1 = '3.141593e+00'

And here's the equivalent `string` output:

```
ss1 = sprintf("%d", nums(2))
```

ss1 = "3.141593e+00"

Now let's try `compose`:

```
cc1 = compose('%d', nums(2))
```

cc1 = 1×1 cell array {'3.141593e+00'}

```
sc1 = compose("%d", nums(2))
```

sc1 = "3.141593e+00"

What do we have in our workspace?

```
whos
```

  Name      Size            Bytes  Class     Attributes
  ans       1x4                 8  char
  cc1       1x1               136  cell
  ccm1      1x2               272  cell
  cs1       1x12               24  char
  csm1      1x26               52  char
  nums      1x2                16  double
  sc1       1x1               174  string
  scm1      1x2               252  string
  ss1       1x1               174  string
  ssm1      1x1               206  string
  tc        1x1               120  cell
  tmpc      1x2               228  cell
  tmps      1x2                 4  char
  ts        1x2                 4  char

What you may notice is that both `sprintf` and `compose` give the same output for the string version. However, for the character output, we get one character array using `sprintf`, but that same array embedded in a cell using `compose`.

clearvars -except nums

Now let's see what happens if we are formatting output with more than a single numeric input.

```
csm1 = sprintf('%d ',nums)
ssm1 = sprintf("%d ",nums)
ccm1 = compose('%d',nums)
scm1 = compose("%d",nums)
```

csm1 = '2.718282e+00 3.141593e+00 '
ssm1 = "2.718282e+00 3.141593e+00 "
ccm1 = 1×2 cell array
    {'2.718282e+00'}    {'3.141593e+00'}
scm1 = 1×2 string array
    "2.718282e+00"    "3.141593e+00"

whos

  Name      Size            Bytes  Class     Attributes
  ccm1      1x2               272  cell
  csm1      1x26               52  char
  nums      1x2                16  double
  scm1      1x2               252  string
  ssm1      1x1               206  string

What we see is that we get exactly the same types of output as before. But for the `string` version with `compose`, we get output matching the dimensions of the input, in this case, `1x2`, whereas with the `sprintf` version, we get a single array for all of the cases. `sprintf` always produces a single array as output and recycles the hole (the formatter) while there is more data to format. `compose` returns arrays of text.

Again, `sprintf` produces one output, regardless of the dimensions of the input.

```
tmps = sprintf('%d',[1 2])
tmpc = compose('%d',[1 2])
```

tmps = '12'
tmpc = 1×2 cell array
    {'1'}    {'2'}
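For readers outside MATLAB, the same contrast exists in Python (an aside, not from the original post): a single joined string versus one formatted string per element.

```python
nums = [1, 2]

# sprintf-like: the format is recycled over the data, one string comes back
single = ''.join('%d' % x for x in nums)

# compose-like: one formatted string per element of the input
per_element = ['%d' % x for x in nums]
```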

Also, `sprintf` truncates a hole if you don’t provide data. `compose` leaves it there in case you want to fill it with a subsequent call:

```
ts = sprintf('\n %s')
tc = compose('\n %s')
```

ts = ' '
tc = 1×1 cell array
    {'↵ %s'}

Do you need to format text/numbers? What techniques do you use to get the results you want? Let us know here.

Get
the MATLAB code

Published with MATLAB® R2019a

I find myself needing to update one of my functions that takes a while to run, in this case, to replace the contents of...

- Debugging, Code Editing
- `regexprep`, `extractBetween`, strings

Someone recently challenged me to convert the curling simulator we published a few years ago (See this post and this post) to take advantage...

As you can guess, this is not the kind of challenge I can say no to!

You can find the final result here: The Curling Game App

**Overview**

Here is a diagram illustrating at a high level how the curling app is implemented:

If you download the app, here is a description of the files you will see:

- **TestCurling.m:** A test created using the App Testing Framework. This allows me to execute my app programmatically and validate that it behaves as expected.
- **curling.mlapp:** The main app.
- **curlingLogic.sfx:** The Stateflow chart where all the logic and actions of the app are defined.
- **curlingSimulator.slx:** A Simulink model simulating the dynamics of the stones on the ice.
- **msfcn_showTrack.m and msfcnsweeper.m:** MATLAB S-Functions used in the Simulink model to animate the curling game in the app.

Let's now look at each part individually

**App Designer**

As I mentioned in a previous post, Stateflow charts executed in MATLAB are very convenient for defining the behavior of MATLAB apps. In such applications, the MATLAB app itself only defines the graphical elements, and maps app callbacks to events in the Stateflow chart.

Here is what the app looks like in design view:

In terms of code, this is very simple. I simply define a startupFcn callback to instantiate the Stateflow chart, passing it a handle to the MATLAB app. Then for each item in the app (buttons, axes, etc.), I define a callback that triggers an event in the Stateflow chart.

**MATLAB Stateflow chart**

This is where all the app functionalities are defined. Here is what it looks like:

This logic divides one throw into four states: waiting, picking a target, specifying speed and spin, and throwing. Transitions between states are triggered by the app, and in each state I can interact with the app's components as I would in MATLAB code. You will see more about how the stones are animated at the end of this post.

Once the user clicks the "Go" button in the app, the Stateflow chart starts the simulation and waits for it to finish. For that, I periodically check the simulation status using the after function.

For a few utilities that I needed in multiple places in the chart, I created MATLAB Functions inside Stateflow that I can call from multiple places in the chart. Here is an example function used every time I need to draw a new stone.

**Simulink Model**

Compared to the version I described in my 2018 post, the Simulink model did not change much. The only item I had to modify was the MATLAB S-Function to animate the stones during the simulation.

For the S-Function to interact with the app, it needs a handle to it. Before starting the simulation, the Stateflow chart stores this handle to the app in the S-Function block userdata. That way, the S-function can retrieve it and set the position of the stones at every time step. Here is what the Output method looks like:

**Testing**

When developing an app like this one, it can be tedious to test and debug if you need to manually click the same series of buttons again and again. This is where the App Testing Framework comes into play.

I created a function that programmatically executes one throw. For that, the press method of the App Testing framework allows me to press a button or a specific location in the App.

Then in a test, I call this method with different targets and velocities. At the end, I verify that all the stones are where they are expected.

Here is what my screen looks like when I run the test. Notice the blue circle where the programmatic clicks are happening, along with the Stateflow animation, making it easy for me to validate that it is doing the right thing:

**Now it's your turn**

Download this latest version of The Curling Game App on MATLAB Central and let us know what you think in the comments below.


Jiro's Pick this week is Deep Learning Hackathon with Transfer Learning by Paola Jaramillo.

Deep learning is becoming more accessible to casual programmers, especially with many pretrained deep neural networks that people can download from the Add-On Explorer. These pretrained networks are trained on some specific set of images, and they can be a great starting point for new tasks. Adapting a pretrained network for a new task is called transfer learning.

This entry by Paola is an example of transfer learning that won her **first place** at a Hackathon on Deep Learning for Agriculture by Itility. The challenge was to classify the species of plants from the images.

She wrote about it in the Deep Learning Blog, so I'll let you read the details there.

**Comments**

Let us know what you think here or leave a comment for Paola.


Published with MATLAB® R2019a

I thought this was worth sharing.

From NASA's hubblesite.org:

This new Hubble Space Telescope view of Jupiter, taken on June 27, 2019, reveals the giant planet's trademark Great Red Spot, and a more intense color palette in the clouds swirling in Jupiter's turbulent atmosphere than seen in previous years. The colors, and their changes, provide important clues to ongoing processes in Jupiter's atmosphere.

You should definitely click the image to see the full resolution.

Image credits: NASA, ESA, A. Simon (Goddard Space Flight Center), and M.H. Wong (University of California, Berkeley)

"I was amazed at content that went far beyond the level of a hobby, from people who love making things and MATLAB."

~ From a Lightning Talk attendee survey ~

We look forward to your entries so that the excitement-packed Lightning Talks can continue next year. And you don't have to wait for the annual Lightning Talks: if you have a project you'd like to show off ("I tried something like this too!"), please get in touch; we would love to feature it on this blog. You can find the previous Lightning Talk @ MATLAB EXPO 2019 speaker features here:

- Part 1: Mr. Nishigaki, "Camera de Mouse"
- Part 2: Mr. Watanabe, "I draw pictures with MATLAB"

This post is from Paola Jaramillo, Application Engineer from the Benelux office. Back in February, I attended a hackathon hosted by Itility:...

- Traditional image processing skills: use pixel values to correctly identify the image
- Using R and Python, based on prior experience with those tools
- A machine learning approach, preprocessing the images to identify features

```
imagepath = fullfile(pwd,'Subset_from_NonsegmentedV2');
imds = imageDatastore(imagepath, 'IncludeSubfolders',true,...
    'LabelSource','FolderNames')
```

The images were not individually labeled, though they were separated into folders named after the specific species. `imageDatastore` can automatically label images based on the folder name, so this saved us quite a lot of effort. We decided that before spending time preprocessing the images, we would explore the results of retraining AlexNet on the raw image data. For this, we only needed to resize the images, which is automated by the `read` function of `imageDatastore`:

```
imagesize = layers(1).InputSize
outputSize = imagesize(1:2);
imds.ReadFcn = @(img)imresize(imread(img),outputSize);
```

```
[trainDS,valDS] = splitEachLabel(imds,0.7,'randomized')
```

Then we ran the training on a simple AlexNet model. This took approximately 7 minutes to train on my laptop with a GPU. MATLAB automatically detected the GPU and used it for training.

```
opts = trainingOptions('sgdm','InitialLearnRate',0.0001,...
    'ValidationData',valDS,...
    'Plots','training-progress',...
    'MiniBatchSize', 8,...             % change according to the memory availability
    'ValidationPatience', 3,...
    'ExecutionEnvironment','auto')     % 'multi-gpu' or 'parallel' for scaling up to HPC
hackathon_net = trainNetwork(trainDS, layers_to_train, opts);
```

Figure: *Training progress plot of the initial model*

The first training produced an accuracy of 92%. Not bad, but was this enough to win it all? I balanced the dataset to use only 100 images of each category.

```
imds = splitEachLabel(imds,100,'randomized');
```

With the balanced dataset, the accuracy became much higher, resulting in:

| Tools | MATLAB | PyTorch | Python | Python | Python | TensorFlow-Keras |
| --- | --- | --- | --- | --- | --- | --- |
| Model | AlexNet | Resnet-50 | VGG-16 | 2-layer CNN | Random Forest | InceptionV3 |
| Techniques | Dataset balancing | Adam optimization | Data augmentation (more data) | By color channel | | |
| Accuracy | 97% | 88% | 80% | 77% | 53% | 22% |

In the end the winning team used a rather simple 8-layer AlexNet model, but managed to reach an accuracy of 97% on the unlabeled dataset! And here is an interesting detail: not only did this team obtain the highest accuracy, they were also the only ones not using R or Python, but MATLAB. It appeared that people were expecting open source to win this challenge, but MATLAB was the winner!
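As an aside (not from the original post), the balancing step that made the difference, `splitEachLabel(imds,100,'randomized')`, amounts to taking a fixed random sample per class. A plain-Python sketch of that idea, with made-up file names:

```python
import random
from collections import defaultdict

def balance_labels(pairs, per_class, seed=0):
    """Keep exactly per_class randomly chosen items for each label,
    loosely mimicking splitEachLabel(imds, per_class, 'randomized')."""
    by_label = defaultdict(list)
    for label, item in pairs:
        by_label[label].append(item)
    rng = random.Random(seed)
    balanced = []
    for label in sorted(by_label):
        balanced.extend((label, f) for f in rng.sample(by_label[label], per_class))
    return balanced

# hypothetical file lists for two species
data = [('daisy', 'd%03d.png' % i) for i in range(150)] + \
       [('rose', 'r%03d.png' % i) for i in range(120)]
subset = balance_labels(data, 100)
```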


I recently saw some code that transformed the RGB pixel values of an image into a Px3 matrix, such that each row contained the red, green, and blue color components of a single pixel. Today I want to show that code fragment, explain it, and then demonstrate what I think is the fastest possible way to perform that transformation in MATLAB.

I tend to use the peppers sample image a lot, so I'll switch things up and use NGC6543 instead:

```
RGB = imread('ngc6543a.jpg');
imshow(RGB)
```

Historical note: This was the first "truecolor" sample image to ship with MATLAB.

`RGB` is a three-dimensional array:

size(RGB)

ans = 650 600 3

The third dimension, which has length three, contains the red, green, and blue component images. Compare, for example, the red and green component images:

imshow(RGB(:,:,1))

imshow(RGB(:,:,2))

Here is the code that I saw for transforming this array into a Px3 matrix, where each row contains the three color component values of one pixel:

```
red = RGB(:,:,1);
green = RGB(:,:,2);
blue = RGB(:,:,3);
A = [red(:) green(:) blue(:)];
```

The first five pixels are along the uppermost left edge of the image, and they are all black:

A(1:5,:)

ans = 5×3 uint8 matrix

   0   0   0
   0   0   0
   0   0   0
   0   0   0
   0   0   0

The first three lines of code did not give me pause:

```
red = RGB(:,:,1);
green = RGB(:,:,2);
blue = RGB(:,:,3);
```

I have written code just like that approximately 2.7183 bazillion times. It was the next line that made me stop and scratch my head for a moment:

A = [red(:) green(:) blue(:)];

Hmm. The array is being split apart into three components, which are then immediately put back together into a matrix. That suggests ... `reshape`, maybe?

Yes, as it turns out, this can all be done with a call to `reshape`. And, there is a big advantage to doing so, as I'll explain in a moment. First, the new code:

B = reshape(RGB,[],3);

That line of code says, "MATLAB, reshape the array `RGB` so that it is Px3, and will you please just go ahead and figure out what P should be to make it work?"

You see, in a call to `reshape`, the number of elements must not change. Here is the number of elements in the array `RGB`:

numel(RGB)

ans = 1170000

To reshape that successfully into a matrix with 3 columns, we can compute how many rows there must be:

P = numel(RGB) / 3

P = 390000

The empty matrix as the second argument to `reshape` signals to MATLAB to compute that size automatically. That works out just fine:

size(B)

ans = 390000 3

So, each column of `B` has 390,000 elements. The first column of `B` contains the first 390,000 elements (in memory order) of `RGB`. The second column contains the second 390,000 elements, and similarly for the third. Because of the way MATLAB arranges elements in memory, these columns are exactly the elements of the red, green, and blue components of `RGB`.
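For NumPy users, an aside not in the original post: NumPy stores arrays row-major by default, so reproducing MATLAB's `reshape(RGB,[],3)` requires column-major ("Fortran") order.

```python
import numpy as np

# A tiny 2x3x3 stand-in for an RGB image.
rgb = np.arange(2 * 3 * 3).reshape(2, 3, 3)
B = rgb.reshape(-1, 3, order='F')    # MATLAB-style reshape(RGB,[],3)

# Each column of B is one full color plane, flattened column-major,
# just like red(:), green(:), blue(:) in the MATLAB code.
col_is_plane = all(
    np.array_equal(B[:, c], rgb[:, :, c].flatten(order='F')) for c in range(3)
)
```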

The big advantage of doing it this way is that MATLAB makes no memory copy of data when calling `reshape`. Because of this optimization, calls to `reshape` are almost instantaneous.

Let `reshape` be your friend.


Published with MATLAB® R2019a


Brett's Pick this week is `Age and Gender Estimation from Face`, by Masayuki Tanaka.

Back in 2014, I "co-selected" one of Masayuki's files as a Pick of the Week when I was looking for ways to quantify blur--I was (rightly) taken to task for calling it "noise"-- in single images. Today, I return to his oeuvre to highlight a fun new submission that estimates age and gender from a photo.

Downloading and installing Masayuki's code will lead you to another of his submissions that detects face parts--eyes, nose, mouth--in images. (That code primarily modifies the defaults of the "face-parts detector" in vision.CascadeObjectDetector, purportedly with better results.) Then his code will automatically install a pre-trained vgg-based neural network that does the classification (gender) and regression (age).

I really like that Masayuki provided sample code for training a network, even though he provided pre-trained models. And I like that his code predicts both age *and* gender. But mostly I just like how much fun I had playing around with it!

Here I was a few years ago; the age-gender predictor got it pretty right:

Apparently, I look a year older if I flip the image left-to-right:

Try it on some of your own images--it's fun! Just be sure to shield it from your partner if it turns out to predict that she is a he, and that "he" is a few years older than she really is!

Oh, and you'll need the Deep Learning Toolbox to run Masayuki's code.

As always, I welcome your thoughts and comments.


Published with MATLAB® R2019a

Mind-controlled robots have long been featured in science fiction, but thanks to new research in brain-computer interfaces (BCIs), they are no longer...

In today’s post Lauren Tabolinsky will share 5 short videos to help you get started using our tools. We want to keep this playlist...


Fireflies on a summer evening, pacemaker cells, neurons in the brain, a flock of starlings in flight, pendulum clocks mounted on a common wall, bizarre chemical reactions, alternating currents in a power grid, oscillations in SQUIDs (superconducting quantum interference devices). These are all examples of synchronized oscillators.

The Kuramoto model is a nonlinear dynamic system of coupled oscillators that initially have random natural frequencies and phases. If the coupling is strong enough, the system will evolve to one with all oscillators in phase.

Yoshiki Kuramoto is a Professor Emeritus of physics at Kyoto University. He was born in 1940 and published the first paper about this model in 1974. He was quite surprised when his model turned out to abstractly describe the dynamics of so many different physical systems. Here is a YouTube video made in 2015 of him reminiscing about the model.

The Kuramoto model is a system of $n$ ordinary differential equations

$$\dot{\theta_k} = \omega_k + \frac{\kappa}{n}\sum_{j=1}^n {\sin{(\theta_j-\theta_k)}}, \ k = 1,\ldots,n$$

Here $\theta_k(t)$ is a real-valued function of $t$ describing the state of the $k$-th oscillator, $\omega_k$ is the natural frequency of the $k$-th oscillator and the real scalar $\kappa$ is the strength of the coupling between the oscillators.

When $\kappa = 0$, the equations are linear and the oscillators are independent.

$$\theta_k(t)= \omega_k t$$

When $\kappa$ is increased beyond a critical point, the nonlinear term forces the phases of all the oscillators to approach a common limit.

Here is the `help` entry for my `kuramoto` program.

```
help kuramoto_
```

```
 Kuramoto.  Kuramoto's model of synchronizing oscillators.

    kuramoto(n) has n oscillators.  Default is n = 100.

    The model is a system of n ordinary differential equations.
    The k-th equation is

       (d/dt) theta_k = omega_k + kappa/n*sum_j(sin(theta_j-theta_k))

    theta_k is the phase of the k-th oscillator,
    omega_k is the intrinsic frequency of the k-th oscillator,
    kappa is a scalar coupling parameter.

    Initially, theta is distributed uniformly in the interval [0,2*pi]
    and omega is distributed uniformly in the interval [1-b,1+b] where
    b is a parameter called breadth.

    The angular velocities of exp(i*theta) are color coded with blues
    for the fastest, yellows for the slowest, and greens in between.

    The display radius of exp(i*theta) is distributed normally with mean 1
    and standard deviation w where w is a parameter called width.  This is
    purely for visual effect and has no influence on the dynamics.
    Setting width to zero puts all the oscillators on the unit circle.

    The interface allows control of n, kappa, the speed of the ode solver,
    breadth and width.

    An alternate display mode, called "rotate", is a frame of reference
    rotating with angular velocity psi, the average of the thetas.
    A rotating arrow has length |z| where
       |z|*exp(i*psi) = 1/n*sum_k(exp(i*theta_k)).
    |z| = 0 indicates no synchronization, |z| = 1 is complete
    synchronization.
```

I have made five animated gifs showing the program in action. Each animation is a slight variation of the others. I hope your browser or viewer is showing them moving together. If that isn't happening, let us know in the comments what system you are using.

Here are the default settings of the controls, including `kappa = 0`. The oscillators are each moving at their natural frequency. There is no synchronization.

Bump the coupling parameter up to `kappa = 0.50`. This forces the oscillators to move to a common frequency. The order parameter arrow grows to its full length. Near the end of this sample the slowest yellow is trying to catch up with the others.

You can see this better in the rotating coordinates that I have called "rotate". The blues, which are faster than the average, are moving in one direction, while the slower yellows are moving in the opposite direction.

Turn the coupling parameter back down to 0.12. This is near the threshold value that initiates synchronization. The arrow is growing slowly, but we stop before it reaches full length.

Set the width to zero. This is the same action as the previous animation but confined to the circle. It's harder to see what's happening.

The code for Kuramoto's system of odes is a MATLAB™ one-liner.

```
function theta_dot = ode(~,theta)
    theta_dot = omega + kappa/n*sum(sin(theta-theta'))';
end
```

`n` and `kappa` are real scalars. `omega` and `theta` are real column vectors of length `n`. Singleton expansion makes `theta-theta'` into an `n` -by- `n` matrix with elements `theta(j)-theta(k)`. The sum produces a row vector and the final transpose makes the result a column vector.
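For comparison (my own sketch, not Cleve's code), the same broadcasting trick works in NumPy, where `theta[:, None] - theta` plays the role of `theta - theta'`. A few forward-Euler steps with identical frequencies and strong coupling show the order parameter growing:

```python
import numpy as np

def ode(theta, omega, kappa):
    # d[j, k] = theta[j] - theta[k], the same pairwise-difference
    # matrix that MATLAB's theta - theta' produces
    d = theta[:, None] - theta[None, :]
    return omega + (kappa / len(theta)) * np.sin(d).sum(axis=0)

rng = np.random.default_rng(1)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)
omega = np.ones(n)
r0 = abs(np.exp(1j * theta).mean())   # order parameter before

dt = 0.05
for _ in range(400):                   # crude forward-Euler integration
    theta = theta + dt * ode(theta, omega, 1.0)
r1 = abs(np.exp(1j * theta).mean())   # order parameter after
```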

I have submitted `kuramoto` to the MATLAB Central File Exchange. Here is the link. I have also included it in version 4.60 of Cleve's Laboratory.

Wikipedia, Kuramoto model, https://en.wikipedia.org/wiki/Kuramoto_model.

Dirk Brockmann and Steven Strogatz, "Ride my Kuramotocycle!", https://www.complexity-explorables.org/explorables/ride-my-kuramotocycle.


Published with MATLAB® R2018b

Sean's pick this week is polygon2voxel by Dirk-Jan Kroon. I was recently asked: "How can you find the intersecting volume of two...

```
xyz = gallery('uniformdata', [30 3], 0)*100;
DT = delaunayTriangulation(xyz);
figure
tetramesh(DT)
```

```
xyz = gallery('uniformdata', [30 3], 1)*100;
AS = alphaShape(xyz, 40);
figure
plot(AS)
```

Now let's convert these to voxels. I will use just the freeBoundary, i.e. the surface, of the Delaunay triangulation.

```
[f, t] = freeBoundary(DT);
ConvexImage = polygon2voxel(struct('faces', f, 'vertices', t), [100 100 100], 'auto');
```

For an alphaShape,

```
[f, t] = boundaryFacets(AS);
ConcaveImage = polygon2voxel(struct('faces', f, 'vertices', t), [100 100 100], 'auto');
```

We can watch both volumes as a movie.

```
implay([ConvexImage ConcaveImage])
```
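To get a feel for what a voxel volume is, here is a much-simplified Python stand-in. It only marks the voxel containing each vertex, whereas polygon2voxel rasterizes the triangulated faces themselves; the random points and grid size mirror the example above.

```python
import numpy as np

rng = np.random.default_rng(0)
xyz = rng.random((30, 3)) * 100       # 30 points in a 100^3 volume

# simplified stand-in for polygon2voxel: mark the voxel containing
# each vertex (true surface voxelization fills in the faces too)
grid = np.zeros((100, 100, 100), dtype=bool)
idx = np.clip(np.floor(xyz).astype(int), 0, 99)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

print(grid.sum())  # at most 30 occupied voxels
```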


I recently attended the ICIAM meeting in Valencia, Spain which meant I got to hang out with my pals Carlos Sanchis and Lucas Garcia :-)! Carlos showed me a problem he was working with Professor Fernando Giménez from UPV regarding an app for estimating $\pi$ using Buffon's method. Here's the problem statement from Wikipedia:

*Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips?*

Interesting that the original intention had nothing to do with computing $\pi$! There's some fun, powerful, yet fairly easy code to demonstrate the algorithm.

How many line segments?

```
N = 1000;
```

Length of each line?

```
L = 0.20;
```

We want the beginning points of the lines to lie between L and 1-L so we don't go outside the unit square.

```
xb = L + rand(1,N)*(1-2*L);
yb = L + rand(1,N)*(1-2*L);
angs = rand(1,N)*360;
xe = xb + L*cosd(angs);
ye = yb + L*sind(angs);
```

```
ax = axes;
plot(ax,[xb;xe],[yb;ye])
axis square
```

```
hold on
glines = 0:L:1;
for i = 1:length(glines)
    xline(ax, glines(i));
end
```

```
n = sum(floor(xb/L) ~= floor(xe/L));
piEstimate = 2 * N / n
```

piEstimate = 3.1153
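One way to sanity-check the estimate is to rerun the same experiment in another language. Here is a minimal Python sketch of the same Buffon estimate; the seed and the larger needle count are arbitrary choices, and only crossings of the vertical grid lines are counted, as in the MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000   # number of needles
L = 0.20      # needle length, equal to the grid spacing

# random starting points kept inside [L, 1-L], plus random orientations
xb = L + rng.random(N) * (1 - 2 * L)
angs = rng.random(N) * 2 * np.pi
xe = xb + L * np.cos(angs)

# a needle crosses a vertical grid line when its endpoints
# fall in different strips of width L
n_cross = np.count_nonzero(np.floor(xb / L) != np.floor(xe / L))
pi_estimate = 2 * N / n_cross
print(pi_estimate)
```

With the needle length equal to the spacing, the crossing probability is $2/\pi$, so $2N/n$ estimates $\pi$; with 100,000 needles the estimate typically lands within about 0.02 of the true value.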

```
title("Estimate of \pi is " + piEstimate)
```

This could be a great exercise for the classroom - seeing how the estimates depend on how many line segments and the spacing of the grid. Not to mention running a bunch of times with different random numbers each time. What simple estimation problems do you like to use? Let me know here.


Published with MATLAB® R2019a

This post is from Oge Marques, PhD and Professor of Engineering and Computer Science at FAU. Oge is an ACM Distinguished Speaker, book author,...

- *offline augmentation*: performing the transformations on the images (potentially using MATLAB's batch image processing capabilities [6]) and saving the results to disk, thereby increasing the size of the dataset by a factor equal to the number of transformations performed. This can be acceptable for smaller datasets.
- *online augmentation*, or *augmentation on the fly*: performing the transformations on the mini-batches that are fed to the model during training. This method is preferred for larger datasets, to avoid a potentially explosive increase in storage requirements.
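The on-the-fly idea is language-agnostic. As a toy illustration (not MATLAB's implementation), here is a minimal Python sketch in which a hypothetical `augment` function transforms each mini-batch as it is drawn, so nothing extra is ever written to disk:

```python
import random

def augment(image):
    """Hypothetical per-image transform: a left-right flip applied
    with probability 0.5 (a stand-in for rotation, scaling, etc.)."""
    if random.random() < 0.5:
        return [row[::-1] for row in image]  # flip each row
    return image

def augmented_minibatches(images, batch_size):
    """On-the-fly augmentation: transform each mini-batch as it is
    drawn, instead of materializing an enlarged dataset on disk."""
    for i in range(0, len(images), batch_size):
        yield [augment(img) for img in images[i:i + batch_size]]

# toy "dataset" of six 2x2 images
data = [[[1, 2], [3, 4]] for _ in range(6)]
batches = list(augmented_minibatches(data, batch_size=2))
print(len(batches))  # 3 batches of 2 images each
```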

- **augmentedImageDatastore**: generates batches of new images, after preprocessing the original training images using operations such as rotation, translation, shearing, resizing, or reflection (flipping).
- **imageDataAugmenter**: used to configure the selected preprocessing operations for image data augmentation.

- Rotation
- Reflection around the X (left-right flip) or Y (upside-down flip) axis
- Horizontal and vertical scaling
- Horizontal and vertical shearing
- Horizontal and vertical translation

- When you initialize the **imageDataAugmenter** variable, you can choose one or more options, e.g., only X and Y reflection and horizontal and vertical scaling, as in the code snippet below.

```
imageAugmenter = imageDataAugmenter( ...
    'RandXReflection', true, ...
    'RandXScale', [1,2], ...
    'RandYReflection', true, ...
    'RandYScale', [1,2]);
```

- The values that you pass as parameters to some of the options (e.g., [1 2] for the X and Y scaling above) represent a *range of values* from which a *random* sample will be picked during the preprocessing step, if that transformation is applied to an image.
- There is also an option in **imageDataAugmenter** for providing a function that determines the range of values for a particular parameter, for example a random rotation between -5 and 5 degrees (see code snippet below).

```
imageAugmenter = imageDataAugmenter('RandRotation',@() -5 + 10 * rand);
```

- There are two ways to access the actual preprocessed images (for inspection and display, for example):

- Starting in R2018a, there are read/preview methods on **augmentedImageDatastore** that allow you to obtain an example batch of images (see the code snippet below, which produces a tiled image such as the one in Fig. 1, using the Flower Classification example [8]).

```
imageAugmenter = imageDataAugmenter('RandRotation',@() -20+40*rand);
augImds = ...
    augmentedImageDatastore(imageSize,imds,'DataAugmentation',imageAugmenter);
% Preview augmentation results
batchedData = preview(augImds);
imshow(imtile(batchedData.input))
```

- Starting in R2018b, a new method (augment) was added to **imageDataAugmenter**, which serves two purposes: it functions as a standalone function-object as well as a configuration object for **augmentedImageDatastore** (see the code snippet below, which might produce a left-right flipped image such as the one in Fig. 2).

```
In = imread(which('peppers.png'));
Aug = imageDataAugmenter('RandXReflection',true);
Out = augment(Aug,In);
figure, montage({In, Out})
```

Fig 1. Preview of augmented images processed with random rotation between -20 and 20 degrees.

Fig 2. Example of random reflection ('RandXReflection') around the vertical axis.

The typical workflow consists of four steps:

1. Choose your training images, which you can store as an **ImageDatastore**, an object used to manage a collection of image files, where each individual image fits in memory, but the entire collection of images does not necessarily fit. This functionality, available since R2015b, was designed to read batches of images for faster processing in machine learning and computer vision applications.
2. Select and configure the desired image preprocessing options (for example, the range of rotation angles, in degrees, or the range of horizontal translation distances, in pixels, from which specific values will be picked randomly) and create an **imageDataAugmenter** object initialized with the proper syntax.
3. Create an **augmentedImageDatastore**, specifying the training images, the size of output images, and the **imageDataAugmenter** to be used. The size of output images must be compatible with the size expected by the input layer of the network.
4. Train the network, specifying the **augmentedImageDatastore** as the data source for the **trainNetwork** function. For each iteration of training, the augmented image datastore generates one mini-batch of training data by applying random transformations to the original images in the underlying data from which the **augmentedImageDatastore** was constructed (see Fig. 3).

- [1] P. Y. Simard, D. Steinkraus, and J. C. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in Seventh International Conference on Document Analysis and Recognition (ICDAR 2003). IEEE Computer Society, 2003, pp. 958-963.
- [2] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, "Deep, big, simple neural nets for handwritten digit recognition," Neural computation, vol. 22, no. 12, pp. 3207-3220, 2010.
- [3] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
- [4] L. Perez and J. Wang, "The Effectiveness of Data Augmentation in Image Classification using Deep Learning", 2017. https://arxiv.org/pdf/1712.04621.pdf
- [5] B. Raj, Data Augmentation | How to use Deep Learning when you have Limited Data - Part 2. https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced
- [6] MathWorks. "Batch Processing Using the Image Batch Processor App". https://www.mathworks.com/help/images/batch-processing-using-the-image-batch-processor-app.html
- [7] MathWorks. "Preprocess Images for Deep Learning". https://www.mathworks.com/help/nnet/ug/preprocess-images-for-deep-learning.html
- [8] O. Marques, "Image classification using data augmentation version 1.1.0", MATLAB Central File Exchange, 2019. https://www.mathworks.com/matlabcentral/fileexchange/68728-image-classification-using-data-augmentation

Hello, this is Sasaki. This summer has been really hot; just walking from the station to the office every morning leaves me drenched in sweat. I was casually browsing Facebook recently when an image caught my eye. It was a post from the English-language MATLAB account, and MATLAB...
