The COVID-19 simulator that I described in a recent blog post is also able to simulate the propagation of fake news by social media on the web.

A simulation begins when a stream of fake news is introduced into the real news on popular social media outlets. The news propagates like a virus epidemic through forwarding, reposting and retweeting. It reaches a peak when the stream stops and there are fewer postings left to influence, and it eventually runs its course.

Here is every tenth frame of a typical simulation.

The simulation involves quantities that I call *postings*. They behave like the *agents* in programming environments such as NetLogo. I am not an expert in agent-oriented computing, but it is certainly possible to develop such simulations using MATLAB.

The simple agents in this simulation move around a two-dimensional square web with constant individual velocities. Postings that are not entirely factual are known as *fake news*. They influence other postings when they come in close proximity. New legitimate postings are born and die at specified rates. Postings have an age measured in a fractional number of time steps.

There are six types of postings.

*New*. Bright green. Legitimate news posted during the simulation.

*Current*. Green. Factual, still relevant.

*Stale*. Dark green. Old news, but still factual.

*Fake*. Magenta. Fake news. Infects real news when it gets too close.

*Debunked*. Magenta circle. Proved fake. No longer influential.

*Removed*. Postings removed from the web. Just counted.

The simulation is initiated when a stream of fake postings from a particular source begins. The simulation is terminated when the stream of fakes has been debunked and stops being reposted.

A filter with an adjustable effectiveness divides your news feed into two halves. Many popular browsers have such filters.

The simulation in the animation runs for 268 time steps. The time is on the x-axis, while the changing number of postings is in the title. The stream enters from the upper right corner and rapidly spreads through the unfiltered news that you see. In this example the filter is only effective on about 50 percent of postings, so the complete news feed is contaminated.

Here is the first frame. There are 100 legitimate postings that have not been infected and one fake new posting entering from the upper right corner.

Here is a frame about halfway through at time 125. All five live types are present, and the filter has had little effect. The total population has risen to 112, which is the maximum seen for this particular run.

Here is the last frame. The stream has stopped and all the fake news has been debunked. This particular episode is over. There is still some current news that has not been affected and a handful of new postings.

Eight parameters are set by the control panel on the left and can be changed during a simulation.

*n*. Population size.

*infect*. Time steps between postings of fake news.

*birth*. Birth rate of real news.

*mortality*. Mortality rate.

*strength*. Radius of effectiveness of fake news.

*duration*. Time steps before a fake posting is busted.

*speed*. Time steps between display updates.

*filter*. Effectiveness of fake news filter.

In this simulator strength is the radius of the effectiveness of fake news. When a fake posting passes within this distance of a legitimate one, the infection is passed on. The default strength of .03 is about twice the dot size.

The frame on the right shows the running count for each type of posting. The first five types are still on the web, and their total number is *n*, the current population.

At any time during the simulation, you can click on the plot button. A second figure window pops open with a graph like this one. It plots the census over the entire history of the simulation. The parameter values are shown in the title.

This is the history for the simulation in the animation. You can see that the current news population decreases from 100 at time zero to around 10 when the episode is over. The number of both new postings and old postings never gets above 10 in this particular setting. The number of fake postings almost reaches 50 before dropping to zero to end the epidemic. About 60 postings are debunked and a roughly equal number are removed from the web.

The simulator will be available in Version 5.2 of Cleve_Lab, at this link, until I get something that is not itself fake news to replace it.

Get the MATLAB code

Published with MATLAB® R2019b

I have completely rewritten the COVID-19 simulator that I described in last week's blog post and I now have a second version.

A simulation begins when a stream of infected individuals is introduced into a group of healthy individuals. The infection rapidly spreads through the population, reaches a peak when the stream stops and there are fewer potential victims, and eventually runs its course.

Here is every fifth frame of a typical simulation.

The simulation involves quantities that I call *individuals*. They are reminiscent of the *turtles* in the programming language Logo, developed by MIT's Seymour Papert in the 1960's. Today there are many programming languages that are descendants of Logo. When I lived in New Mexico several years ago, I was one of the judges in a state-wide competition for high-school students called Supercomputing Challenge. (The name has become a misnomer because most of the participants now use their own personal computers or the machines in a school lab.) A programming environment called NetLogo is a popular choice in the New Mexico Challenge. Papert's turtles have become *agents*. I am certainly not an expert in agent-oriented computing. Most of what I do know about the subject was gleaned from those high-schoolers' presentations.

The simple agents in this simulation move around a two-dimensional square world with constant individual velocities. Some of the individuals are infected with a virus which they pass on to others when they come in close proximity. New individuals are born and die at specified rates. Individuals have an age measured in a fractional number of time steps.
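For readers who want to experiment outside MATLAB, here is a minimal pure-Python sketch of the mechanics just described: agents with constant velocities on a square world, and infection passed by proximity. The data layout and all numeric values here are illustrative assumptions, not Cleve's actual code.

```python
import math
import random

# Minimal illustrative sketch of the agent mechanics described above
# (constant velocities, infection by proximity). All names and values
# are assumptions for illustration only.
random.seed(0)

def step(agents, radius=0.02):
    """Advance one time step: move every agent, then spread infection."""
    for a in agents:
        a['x'] = (a['x'] + a['vx']) % 1.0   # wrap around the unit square
        a['y'] = (a['y'] + a['vy']) % 1.0
    sick = [s for s in agents if s['state'] == 'sick']
    for a in agents:
        if a['state'] == 'healthy':
            for s in sick:
                if math.hypot(a['x'] - s['x'], a['y'] - s['y']) < radius:
                    a['state'] = 'sick'
                    break

agents = [{'x': random.random(), 'y': random.random(),
           'vx': 0.01 * (random.random() - 0.5),
           'vy': 0.01 * (random.random() - 0.5),
           'state': 'healthy'} for _ in range(100)]
agents[0]['state'] = 'sick'          # one infected individual enters
for _ in range(200):
    step(agents)
print(sum(a['state'] == 'sick' for a in agents), 'infected after 200 steps')
```

A full version would also need births, deaths, immunity after a set duration, and the barrier, but the move-then-infect loop above is the core of each time step.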

There are six types of individuals.

*Youth*. Bright blue. Healthy. Age less than 20. New youth are born during the simulation.

*Adult*. Blue. Healthy. Age between 20 and 65.

*Mature*. Dark blue-green. Healthy. Age over 65. They have stopped traveling and remain fixed "at home".

*Sick*. Red. Infected with the virus. Age is irrelevant. They enter the simulation from the upper right at regular time intervals for a specified period of time, then no more enter. They infect others by passing close to them.

*Immune*. Red circles. Formerly infected. After a set length of time, the infection in a sick individual runs its course. That individual then becomes immune and can no longer infect others.

*Dead*. At a specified mortality rate, sick and immune individuals are removed from the simulation. The number of removals is the death count.

The simulation is initiated with one sick individual moving into a group of adults and is terminated when there are no more live sick individuals.

A barrier with an adjustable length divides the square into two halves. Imagine a wall across Italy, from Civitavecchia on the west coast to Civitanova Marche on the east. Or think of a wall between the United States and Mexico.

The simulation in the animation runs for 320 time steps. The time is on the x-axis, while the changing total population is in the title. The infection enters from the northeast and rapidly spreads through the eastern half. A few infected individuals find their way through the narrow gap in the barrier, but the epidemic never gains a foothold in the west.

Here is the first frame. There are 100 adults of various ages that are not yet infected and one infected individual entering from the northeast corner.

Here is a frame about halfway through at time 180. All five live types are present, but there is no active infection in the west. The total population has risen to 107, which is the maximum seen for this particular run.

Here is the last frame. There are no sick individuals present and no more to come. This particular epidemic is over. There are 96 survivors. A little over half of them had the infection and are now immune. The others were never infected; among them are several youngsters born during the run.

Eight parameters are set by the control panel on the left and can be changed during a simulation.

*n*. Population size.

*infect*. Time steps between introduction of new sick adults.

*birth*. Birth rate.

*mortality*. Mortality rate.

*virulence*. Radius of effectiveness of the virus.

*duration*. Time steps before a sick individual becomes immune.

*speed*. Time steps between display updates.

*barrier*. Length of barrier separating two halves of the region.

My dictionary tells me that *virulence* is the severity or harmfulness of a disease. In this simulator virulence is the radius of the effectiveness of the virus. When a sick individual passes within this distance of someone who is not yet infected, the infection is passed on. The default virulence of .02 is about the size of the dots representing the individuals.

The frame on the right shows the running count for each type of individual. The first five types are alive, and their total number is *n*, the current population.

At any time during the simulation, you can click on the plot button. A second figure window pops open with a graph like this one. It plots the census over the entire history of the simulation. The parameter values are shown in the title.

This is the history for the simulation in the animation. You can see that the adult population decreases from 100 at time zero to around 45 when the epidemic is over. The number of both young individuals and mature individuals never gets above 10 in this particular setting. The number of sick individuals peaks near 30 before dropping to zero to end the epidemic. About 40 individuals acquire immunity and a roughly equal number die.

Let's change some of the parameters. First, removing the barrier makes small differences. The infection persists for a great number of time steps, the number of adults drops below 40, the number of deaths rises above 50 and the peak number of sick is below 20.

Increasing the birth rate results in many more youngsters. The number of sick also rises more quickly and peaks at a higher value, but many of the young survive.

Increasing the duration time for the infection causes many more deaths.

Increasing the virulence causes a higher and earlier peak in the number of sick.

Finally, increasing all these values at the same time, and increasing a value that is not in the control panel, namely the length of time that new infected individuals are introduced, results in a simulation that reaches the maximum time limit. After 2000 time steps the number of deaths is over 600 and I have set the limit on the y-axis to 200. The other values are oscillating and have not reached a limit. The situation appears to be unstable and is terminated.

I make no claim that this simple simulation accurately models the crisis that our world is currently facing. The population is too small, the geography is too crude and the time steps are unreal. I have made no attempt to incorporate any actual data. Nevertheless, I think it roughly behaves like the real thing.

The simulator itself is available from the Central File Exchange at this link and is included in Version 5.1 of Cleve_Lab at this link.


```
help covid19
```

Epidemic simulation. Inspired by Washington Post graphic by Harry Stevens, March 14, 2020. https://www.washingtonpost.com/graphics/2020/world/corona-simulator/?itid=sf_

These are the four types of individuals.

* Mobile. Susceptible, on the move. Blue (or green) circle, O.
* Static. Susceptible, stay at home. Smaller, darker blue circle, o.
* Infected. Infects other susceptible individuals within distance "radius". Red-orange asterisk, *.
* Immune. Was infected for "duration" steps. No longer infectious. Darker red x.

These are the seven controllable parameters, with their starting values.

n = 50;          population size
infected = .10;  initial infected fraction
static = .20;    static fraction
radius = .03;    infectious distance
duration = 200;  time of infection
speed = .03;     speed of propagation
barrier = .80;   barrier height

The epidemic starts in the upper right quadrant. The epidemic ends when all infected individuals have become immune.

The animation begins with 5 infections in the upper right quadrant. There are 45 unaffected.

The animation ends when there is no more infection. Three individuals have never been infected.

Final frame with complete barrier. Left half is not infected.

One individual is never infected.

Everybody is eventually infected, and in fewer time steps.

Less infection.

The code is one of the experiments in version 5.0 of Cleve_Lab on the MATLAB Central File Exchange, available here.

The code is on the File Exchange by itself, available here.


Today is Friday, March 13, 2020. In many parts of the world, Friday the 13th is considered unlucky. I've written blog posts about Friday the 13th before, in 2012 and 2018, but I will have something new to say today.

**NOTE: Somehow the figures were being misplaced. It seems to be OK now, but I am retaining the anchors in the text, just in case.**

Is Friday the 13th not only *unlucky*, but is it also *unlikely*? What are the chances that the 13th of any month falls on Friday? Which *months* are more likely to have Friday the 13th? Which *years* are more likely to have Friday the 13th?

Leap years make our calendar a nontrivial mathematical object. The leap year rule can be implemented by this anonymous function.

leapyear = @(y) mod(y,4)==0 & mod(y,100)~=0 | mod(y,400)==0;

This says that leap years happen every four years, except for the turn of centuries not divisible by 400. Let's try a few year numbers.

y = [2020 2021 2000 2100]';
isleap = [y leapyear(y)]

isleap =

        2020           1
        2021           0
        2000           1
        2100           0

So, this year is a leap year, but next year is not. The turn of the century twenty years ago in 2000 was a leap year, but the next turn of the century in 2100 will not be one.

The leap year rule implies that our calendar has a period of 400 years. The calendar for years 2000 to 2399 will be reused for 2400 to 2799. In a 400-year period, there are 97 leap years, 4800 months, 20871 weeks, and 146097 days. So, the average number of days in a calendar year is not 365.25, but

dpy = 365+97/400

dpy = 365.2425
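These 400-year figures are easy to verify with the Python standard library; here is a quick check, as an aside and not part of the MATLAB session above.

```python
import calendar
from datetime import date

# Verify the 400-year calendar facts quoted above with the standard
# library: 97 leap years, 146097 days, exactly 20871 weeks.
leap_years = sum(calendar.isleap(y) for y in range(2000, 2400))
days = (date(2400, 1, 1) - date(2000, 1, 1)).days

print(leap_years)        # 97 leap years
print(days)              # 146097 days = 20871 weeks exactly
print(365 + 97 / 400)    # 365.2425 average days per year
```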

At first glance, you might say that the probability of the 13th of any month being a Friday is 1/7, because there are seven days in a week and they are all equally likely. But that's not quite correct. We can count the number of occurrences of Friday the 13th in the 4800 months in a calendar period. The correct probability is then that count divided by 4800. Since 4800 is not a multiple of 7, the probability does not reduce to 1/7.

The `datetime` object, introduced in MATLAB in R2014b, is the tool we need for this job.

The documentation for `datetime` is available here, or, from within MATLAB, with the command

doc datetime

For example, here is the date and time when I am writing this post.

```
d = datetime('now')
```

d =

  datetime

   13-Mar-2020 21:17:57

I can specify a custom display format that includes the weekday with

d = datetime(d,'Format','eeee, MMMM d, yyyy HH:mm:ss')

d =

  datetime

   Friday, March 13, 2020 21:17:57

The available display formats include support for an international standard, ISO 8601.

Let's get on with Friday the 13th. Like most things in MATLAB `datetime` works with vectors. The statements

y = 2000;
m = 1:4800;
t = 13;
v = datetime(y,m,t);

produce a row vector of 4800 `datetime`'s for the 13th of every month. The first few and last few are

fprintf(' %s',v(1:4))
fprintf(' ...\n')
fprintf(' %s',v(end-3:end))

13-Jan-2000 13-Feb-2000 13-Mar-2000 13-Apr-2000 ... 13-Sep-2399 13-Oct-2399 13-Nov-2399 13-Dec-2399

The statement

w = weekday(v);

produces a row vector of 4800 flints (floating-point numbers with integer values) between 1 (for Sunday) and 7 (for Saturday). The first few and last few are

fprintf('%3d',w(1:4))
fprintf(' ...')
fprintf('%3d',w(end-3:end))

5 1 2 5 ... 2 4 7 2

Now to get the counts of weekdays, all we need is

counts = histcounts(w)

counts = 687 685 685 687 684 688 684

There you have it. The count for Friday, 688, is higher than for any other day of the week. The 13th of any month is slightly more likely to fall on a Friday.
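The same census can be reproduced with Python's standard-library `datetime` module as an independent check; note that Python's `weekday()` numbers Monday as 0, so Friday is 4. This is an aside, not part of the MATLAB session.

```python
from datetime import date
from collections import Counter

# Reproduce the weekday census above: count the weekday of the 13th of
# each of the 4800 months in the 400-year period 2000-2399.
counts13 = Counter(date(y, m, 13).weekday()
                   for y in range(2000, 2400) for m in range(1, 13))

print(counts13[4])              # Friday: 688, the strict maximum
print(sum(counts13.values()))   # 4800 months in the 400-year period
```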

The probabilities are

prob = counts/4800

prob = 0.1431 0.1427 0.1427 0.1431 0.1425 0.1433 0.1425

Compare these values with their mean.

avg = mean(prob)

avg = 0.1429

Four probabilities are below the mean, three are above.

Let's pair the counts with the days of the week, using a `categorical` variable, introduced in R2013b.

cat = categorical(w,1:7,weekday_names);
summary(cat)

     Sun     Mon     Tue     Wed     Thu     Fri     Sat
     687     685     685     687     684     688     684

When we plot a histogram, the appropriate labels are provided on the x-axis. Friday the 13th is the winner.

histogram(cat,'BarWidth',0.5,'FaceColor',colors(2))
set(gca,'ylim',[680 690])
title('Weekday counts, 400 years')

**"Weekday counts, 400 years" should be above here.**

Here are several other histograms of distributions of Friday the 13th. First, over the 400-year period of the calendar, which *months* are more likely to have Friday the 13th? Create a categorial variable by months.

friday13s = v(w==6);
cat_months = categorical(month(friday13s),1:12,month_names);

The histogram shows us that this month, March, and five other months, have a number of Friday the 13ths that is slightly above average. August and October have the fewest.

histogram(cat_months,'BarWidth',0.5,'FaceColor',colors(3))
set(gca,'ylim',[55 59],'ytick',56:58)
title('Friday 13th, month')

**"Friday 13th, month" should be above here.**

Which *years* are more likely to have Friday the 13th? Look at the last digit of the year number.

year_mod10 = mod(year(friday13s),10);
digits = split(string(num2str(0:9)));
cat_mod10 = categorical(year_mod10,0:9,digits);

The histogram reveals that this year, 2020, and any other years that begin a decade, are the least likely to have unlucky Fridays.

histogram(cat_mod10,'BarWidth',0.5,'FaceColor',colors(4))
set(gca,'ylim',[64 73],'ytick',65:72)
title('Friday 13th, last digit of year')

**"Friday 13th, last digit of year" should be above here.**

How many unlucky Fridays are there each year? Here is a rough histogram of the number per year over the entire 400-year period of the calendar. We can see that there are between one and three each year, but the finer details of the distribution are elusive.

years = year(friday13s);
plot(2000:2399,histc(years,2000:2399),'.')
set(gca,'xlim',[1990 2409],'ylim',[0 4],'ytick',1:3)
title('Friday 13th per year, 400 years')

**"Friday 13th per year, 400 years" should be above here.**

Let's concentrate on just this decade, 2020 through 2029. Here is a list of all 16 Friday the 13ths in this range.

decade = (years>=2020 & years<2030);
disp(friday13s(decade)')

   13-Mar-2020   13-Nov-2020   13-Aug-2021   13-May-2022
   13-Jan-2023   13-Oct-2023   13-Sep-2024   13-Dec-2024
   13-Jun-2025   13-Feb-2026   13-Mar-2026   13-Nov-2026
   13-Aug-2027   13-Oct-2028   13-Apr-2029   13-Jul-2029

And here is the histogram of the counts. We see two this year (today and again in November). Then there is only one next year, and only one the year after that. But there will be three in 2026. That's the only three-count this decade.

hist(years(decade),2020:0.5:2029,'binwidth',0.5)
set(get(gca,'child'),'FaceColor',colors(5))
set(gca,'xlim',[2019 2030],'xtick',2020:2029, ...
    'ylim',[0 4],'ytick',1:3)
title('Friday 13th per year, 2020-2029')

**"Friday 13th per year, 2020-2029" should be above here.**

If you have stayed with me this long, maybe you are ready for some homework.

- Explain why there has to be at least one Friday the 13th each year.
- Explain why there can never be more than three in any year.
- How many different ways are there to have three in a year? 2026 is one example.
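The first two claims can at least be checked empirically with a short Python loop over the 400-year period of the calendar; this is a check, not a proof.

```python
from datetime import date
from collections import Counter

# Count the Friday the 13ths in each year of the 400-year period.
# Python's weekday() numbers Monday=0 ... Sunday=6, so Friday is 4.
per_year = Counter(y for y in range(2000, 2400) for m in range(1, 13)
                   if date(y, m, 13).weekday() == 4)

print(len(per_year))                                   # all 400 years appear
print(min(per_year.values()), max(per_year.values()))  # between 1 and 3
```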

Now I better quit and get this posted while it is still Friday the 13th.

*Good luck.*


Tomorrow, February 29, 2020, would be Gene Golub's 22nd birthday.

Gene was born on Leap Day, February 29, 1932. So, when his birthday came around only every fourth year, we made an especially big deal out of it.

Perhaps foremost among Gene's many accomplishments is the algorithm to compute the SVD, the singular value decomposition. His California license plate was legend.

The animation at the beginning of this blog uses the SVD to compute low-rank approximation of an image. X is a three-dimensional array holding a full color photo of Gene.

```
X = imread('GHG.jpg');
[m,n,p] = size(X);
sizeX = [m,n,p]
imshow(X)
```

sizeX = 259 172 3

Reshape `X` into a two-dimensional matrix $A$ formed from the red, green and blue components.

A = reshape(X,m,n*p);
sizeA = [m,n*p]
imshow(A)

sizeA = 259 516

Compute the SVD of this wide matrix.

[U,S,V] = svd(double(A));

Use the SVD to generate low-rank approximations, which are then reshaped to full color.

for r = 2:2:50
    k = 1:r;
    B = U(:,k)*S(k,k)*V(:,k)';
    B = reshape(B,m,n,p);
    imshow(B/256)
end
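In terms of the SVD factors, the rank $r$ approximation formed inside the loop is the truncated outer-product expansion

$$ B_r = \sum_{i=1}^{r} \sigma_i u_i v_i^T $$

where $\sigma_i = S(i,i)$ is the $i$-th singular value and $u_i$ and $v_i$ are the corresponding columns of $U$ and $V$. By the Eckart-Young theorem, this is the closest matrix of rank $r$ to $A$ in the 2-norm.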

This is not a particularly good image compression scheme; nevertheless, I think two aspects are amazing. The approximations retain the RGB components. And relatively low rank produces recognizable images. By rank 50 we can read the white board in the background and see his checkered shirt.

In March, 2007, there was a workshop at Stanford celebrating Gene's 75th birthday, as well as a half-century of Computational Mathematics at Stanford. These coffee mugs, with photos of Golub and George Forsythe, were mementos.

Later that year, we put out a special edition of the NA Digest.

From: Cleve Moler <Cleve.Moler@mathworks.com>
Date: Fri, 16 Nov 2007 17:55:42 -0500
Subject: Gene Golub, 1932 - 2007

Gene Golub, founder of the NA Digest, passed away today, Friday, November 16, at the Stanford University Hospital. He was 75 years old.

Gene returned home to Stanford recently from a trip to Hong Kong. He was planning to leave again Tuesday on another trip, this one to Zurich where the ETH was to honor him with a special degree. Instead, Sunday night he went to the emergency room because he was "feeling lousy". On Tuesday, he was found to have AML, acute myelogenous leukemia, a form of cancer that affects the white blood cells. This is a potentially curable disease and he was expecting to begin chemotherapy today. But serious complications developed suddenly over night.

I was able to see Gene for an hour last night and he was in reasonably good spirits. Mike Saunders was trying to get Gene's laptop to use dial-up over the hospital's phone system because Gene said he was a couple of days behind on his email. I was planning to get a wireless card for his machine today. None of us had any idea how suddenly the situation would worsen.

The Stanford iCME students have created a memorial blog at http://genehgolub.blogspot.com .

Our community has lost its foremost member. He was a valued colleague and friend. Goodbye, Gene.

-- Cleve Moler

I wrote a short bio for the National Academy of Engineering memorials NAE Tribute. After summarizing his accomplishments and honors, it concluded:

Everything I have said thus far, however, pales in comparison to Golub's most important characteristic -- his humanity. The numerical analysis and scientific computing community was his family. The closeness and congeniality of this community is due, in large part, to his influence. Thousands of people in dozens of countries knew him simply as "Gene" and visitors to Stanford, particularly young people, often stayed in his home. He remembered everybody's name and their children's birthdays, and he returned visits, traveling frequently to give lectures, attend workshops, or just to see people. His friendships, visits, and e-mails not only led to important algorithms and research papers, but also made the world a more pleasant place.

Nick Trefethen wrote a beautiful tribute for Nature. Fittingly, the subtitle is "Mathematician and godfather of numerical analysis." Here is Nick's last paragraph.

Gene Golub was restless and never entirely happy. He was a demanding friend; behind his back, we all had Gene stories to tell. It was a huge back: Gene was big, dominating any room he was in, and grew more impressive and imposing with the years. Graduate students around the world admired and loved him, and he bought them all dinner when he got the chance. His unexpected death, in Stanford in between speaking at a conference in Hong Kong and flying to Zurich for his eleventh honorary degree, has left the world of numerical analysis orphaned and reverberating.


While I was working on my posts about Pejorative Manifolds, I was pleased to discover the intriguing patterns created by the roundoff error in the computed eigenvalues of triple Kronecker products.

The Kronecker product of two matrices $A$ and $B$, denoted by $A \otimes B$ and computed by `kron(A,B)`, is the large matrix containing all possible products of the elements of $B$ by those of $A$.

$$A \otimes B = \left( \begin{array}{rrrr} a_{1,1}B & a_{1,2}B & ... & a_{1,n}B \\ a_{2,1}B & a_{2,2}B & ... & a_{2,n}B \\ ... \\ a_{m,1}B & a_{m,2}B & ... & a_{m,n}B \end{array} \right) $$

The two matrices $A \otimes B$ and $B \otimes A$ are not equal, although they do have the same elements in different orders.

The two matrices $A \otimes (B \otimes C)$ and $(A \otimes B) \otimes C$ are equal. This is the *triple Kronecker product*, $A \otimes B \otimes C$.

The eigenvalues of $A \otimes B$ are all possible products of the eigenvalues of $A$ and those of $B$.

For example, suppose $A$ is the magic square.

$$ A = \left( \begin{array}{rrr} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{array} \right) $$

And $I$ is the identity matrix.

$$ I = \left( \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right) $$

Then $I \otimes A$ is block diagonal with copies of $A$ on the diagonal.

$$ I \otimes A = \left( \begin{array}{rrrrrr} 8 & 1 & 6 & 0 & 0 & 0 \\ 3 & 5 & 7 & 0 & 0 & 0 \\ 4 & 9 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 8 & 1 & 6 \\ 0 & 0 & 0 & 3 & 5 & 7 \\ 0 & 0 & 0 & 4 & 9 & 2 \end{array} \right) $$

And $A \otimes I$ has the elements of $A$ distributed along multiple diagonals.

$$ A \otimes I = \left( \begin{array}{rrrrrr} 8 & 0 & 1 & 0 & 6 & 0 \\ 0 & 8 & 0 & 1 & 0 & 6 \\ 3 & 0 & 5 & 0 & 7 & 0 \\ 0 & 3 & 0 & 5 & 0 & 7 \\ 4 & 0 & 9 & 0 & 2 & 0 \\ 0 & 4 & 0 & 9 & 0 & 2 \end{array} \right) $$

The eigenvalues of both $A\otimes I$ and $I \otimes A$ are multiple copies of the eigenvalues of $A$.

A = magic(3);
I = eye(2);
eig_A = eig(A)
eig_IxA = eig(kron(I,A))
eig_AxI = eig(kron(A,I))

eig_A =

  15.000000000000004
   4.898979485566361
  -4.898979485566358

eig_IxA =

  15.000000000000004
   4.898979485566361
  -4.898979485566358
  15.000000000000004
   4.898979485566361
  -4.898979485566358

eig_AxI =

   4.898979485566356
  -4.898979485566355
  15.000000000000002
  14.999999999999998
   4.898979485566359
  -4.898979485566356
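As a quick independent illustration of the eigenvalue product property, here is a pure-Python sketch. The `kron` helper is written out by hand (it is not MATLAB's `kron` or a library function), and diagonal matrices are used so that the eigenvalues are simply the diagonal entries and no eigensolver is needed.

```python
# Illustrate that the eigenvalues of a Kronecker product are all
# possible products of the eigenvalues of the factors, using diagonal
# matrices so the eigenvalues can be read off the diagonal.

def kron(A, B):
    """Kronecker product of matrices stored as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

A = [[2, 0], [0, 3]]        # eigenvalues 2 and 3
B = [[5, 0], [0, 7]]        # eigenvalues 5 and 7
K = kron(A, B)
eigs = [K[i][i] for i in range(4)]
print(eigs)                 # [10, 14, 15, 21], all possible products
```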

Our building blocks are the companion matrices of the polynomials

$$ (s^2-1)^p $$

Expansion involves binomial coefficients. For example, `p` = 4,

$$ (s^2-1)^4 = s^8 - 4 s^6 + 6 s^4 - 4 s^2 + 1 $$

This is the characteristic polynomial of the matrix

A = companion(4)

A =

     0     4     0    -6     0     4     0    -1
     1     0     0     0     0     0     0     0
     0     1     0     0     0     0     0     0
     0     0     1     0     0     0     0     0
     0     0     0     1     0     0     0     0
     0     0     0     0     1     0     0     0
     0     0     0     0     0     1     0     0
     0     0     0     0     0     0     1     0
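The first row of `A` is the negative of the trailing coefficients of $(s^2-1)^4$. Those binomial coefficients can be generated by repeated polynomial multiplication by $s^2-1$, which is what the `conv` loop in the companion code at the end of this post does. Here is a pure-Python sketch of the same idea; the function name `poly_power` is mine, not from the post.

```python
# Build the coefficients of (s^2 - 1)^p by repeatedly multiplying by
# s^2 - 1 (i.e., convolving with [1, 0, -1]), highest power first.

def poly_power(p):
    """Coefficients of (s^2 - 1)^p, highest power first."""
    c = [1]
    for _ in range(p):
        c = [(c[i] if i < len(c) else 0) - (c[i - 2] if i >= 2 else 0)
             for i in range(len(c) + 2)]
    return c

print(poly_power(4))   # [1, 0, -4, 0, 6, 0, -4, 0, 1]
```

Negating everything after the leading 1 recovers the first row of the companion matrix, [0 4 0 -6 0 4 0 -1].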

The eigenvalues of `A` are +1 and -1, each repeated `p` times; that's their *algebraic multiplicity*. Their *geometric multiplicity* is only one; they have only one eigenvector each.

The eigenvalues are very sensitive to any kind of error, including roundoff error.

```
format long
e = eig(A)
```

e =

  -1.000063312800363 + 0.000063312361825i
  -1.000063312800363 - 0.000063312361825i
  -0.999936687199636 + 0.000063313248591i
  -0.999936687199636 - 0.000063313248591i
   1.000078270656946 + 0.000000000000000i
   0.999999997960933 + 0.000078268622898i
   0.999999997960933 - 0.000078268622898i
   0.999921733421185 + 0.000000000000000i

The exact eigenvalues can be recovered with

exact = round(e)

exact =

    -1
    -1
    -1
    -1
     1
     1
     1
     1

The computed eigenvalues lie on circles in the complex plane, centered at the exact values, with radius roughly the `p`-th root of roundoff error. In this example, the values centered at +1 happen to be slightly more accurate than those centered at -1.

half_fig
eigplot(-1,e)
eigplot(+1,e)

An alternative to the traditional companion matrix is the Fiedler companion matrix.

```
A = companion(4,'fiedler')
```

A =

     0     4     1     0     0     0     0     0
     1     0     0     0     0     0     0     0
     0     0     0    -6     1     0     0     0
     0     1     0     0     0     0     0     0
     0     0     0     0     0     4     1     0
     0     0     0     1     0     0     0     0
     0     0     0     0     0     0     0    -1
     0     0     0     0     0     1     0     0

This time the eigenvalues are computed in the opposite order and those at -1 are the more accurate.

e = eig(A)

e =

   1.000107000301506 + 0.000000000000000i
   1.000000000805051 + 0.000107001107366i
   1.000000000805051 - 0.000107001107366i
   0.999892998088390 + 0.000000000000000i
  -1.000062156379228 + 0.000000000000000i
  -0.999999990704846 + 0.000062147083463i
  -0.999999990704846 - 0.000062147083463i
  -0.999937862211083 + 0.000000000000000i

e = flipud(e);
half_fig
eigplot(-1,e)
eigplot(+1,e)

Our graphic `TKP` involves $K$, the triple Kronecker matrix product

$$ K = A \otimes I \otimes B $$

where $A$ and $B$ are companion matrices of the polynomials $(s^2-1)^p$ and $(s^2-1)^r$ and $I$ is the $q$ -by- $q$ identity matrix.

The values of p, q and r are set by controls. Initially they are equal to 4, 3 and 2. The size of $K$ is four times the product of the three parameters, $n = 4pqr$, since $A$ is $2p$-by-$2p$, $I$ is $q$-by-$q$ and $B$ is $2r$-by-$2r$. Computation time is $O(n^3)$. Values of n larger than about 4000 are possible, but slow.

Half of the eigenvalues of $K$ are equal to +1 and the other half are equal to -1, so these eigenvalues have very high multiplicity and are very sensitive to roundoff error. The display shows the differences between the computed and the exact eigenvalues. The structure of $K$ makes these errors produce provocative patterns on many different circles in the complex plane.

Two kinds of companion matrix with different roundoff behavior are possible. The traditional companion matrix has all the polynomial coefficients in the first row of an upper Hessenberg matrix. The Fiedler companion matrix has the coefficients arranged on the super and subdiagonals of a pentadiagonal matrix.

This is the computational core. The complete program with the controls is available here: tkp.m.

function tkp(p,q,r)
    A = companion(p);
    I = eye(q);
    B = companion(r);
    K = kron(A,kron(I,B));
    e = eig(K);
    exact = round(e);
    err = e - exact;
    plot(err,'.')
end

function X = companion(p)
    c = 1;
    for j = 1:p
        c = conv(c,[1 0 -1]);
    end
    m = 2*p;
    X = [-c(2:m+1); eye(m-1,m)];
end


In an unpublished 1972 technical report "Conserving confluence curbs ill-condition," Velvel Kahan coined the descriptive term *pejorative manifold*. In case you don't use it in everyday conversation, *pejorative* means "expressing contempt or disapproval."

Velvel's report concerns polynomials with multiple roots, which are usually regarded with contempt because they are so badly conditioned. But Velvel's key observation is that although multiple roots are sensitive to arbitrary perturbation, they are insensitive to perturbations that preserve multiplicity.

Part 1 was about polynomials. This part is about matrix eigenvalues.

The pejorative manifold $\mathcal{M}$ is now the set of all 6-by-6 matrices which have an eigenvalue of multiplicity three at $\lambda$ = 3. These are severe restrictions, of course, and $\mathcal{M}$ is a tiny subset of the set of all matrices. But if we stay within $\mathcal{M}$, life is not nearly so harsh.

The Jordan Canonical Form of a matrix is bidiagonal, with eigenvalues on the diagonal and 1's and 0's on the super-diagonal. In our situation here, each eigenvalue with multiplicity m has a single m-by-m Jordan block with 1's on the superdiagonal. The Jordan blocks for distinct eigenvalues are separated by a zero on the super-diagonal.
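The recipe in this paragraph can be sketched generically (Python for illustration; `jordan_diags` and its `blocks` argument, a list of (eigenvalue, size) pairs, are hypothetical names, not part of the post's MATLAB code):

```python
def jordan_diags(blocks):
    """Diagonal and superdiagonal of a Jordan matrix from (eigenvalue, size) pairs."""
    d, j = [], []
    for idx, (lam, m) in enumerate(blocks):
        d.extend([lam] * m)
        j.extend([1] * (m - 1))          # 1's inside an m-by-m Jordan block
        if idx < len(blocks) - 1:
            j.append(0)                  # 0 separates distinct blocks
    return d, j

# a 3-by-3 block at lambda = 3, a 1-by-1 at lambda = 2, a 2-by-2 at lambda = 1
d, j = jordan_diags([(3, 3), (2, 1), (1, 2)])
assert d == [3, 3, 3, 2, 1, 1]
assert j == [1, 1, 0, 0, 1]
```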

Our first matrix has a 3-by-3 block with $\lambda$ = 3, then a 1-by-1 block with $\lambda$ = 2, and finally a 2-by-2 block with $\lambda$ = 1, so the diagonal is

d = [3 3 3 2 1 1];

and the superdiagonal is

j = [1 1 0 0 1];

Here it is

```
J1 = diag(j,1)
A1 = diag(d) + J1
```

J1 =
     0     1     0     0     0     0
     0     0     1     0     0     0
     0     0     0     0     0     0
     0     0     0     0     0     0
     0     0     0     0     0     1
     0     0     0     0     0     0
A1 =
     3     1     0     0     0     0
     0     3     1     0     0     0
     0     0     3     0     0     0
     0     0     0     2     0     0
     0     0     0     0     1     1
     0     0     0     0     0     1

Our second matrix moves one super-diagonal element to swap the multiplicities of $\lambda$ = 2 and $\lambda$ = 1.

```
J2 = diag([1 1 0 1 0],1)
A2 = diag([3 3 3 2 2 1]) + J2
```

J2 =
     0     1     0     0     0     0
     0     0     1     0     0     0
     0     0     0     0     0     0
     0     0     0     0     1     0
     0     0     0     0     0     0
     0     0     0     0     0     0
A2 =
     3     1     0     0     0     0
     0     3     1     0     0     0
     0     0     3     0     0     0
     0     0     0     2     1     0
     0     0     0     0     2     0
     0     0     0     0     0     1

These two matrices were constructed to have the two polynomials from my last post as characteristic polynomials. There is no need to compute the zeros; they are on the diagonals.

```
p1 = charpoly(A1,'s');
p2 = charpoly(A2,'s');
```

$$ p1 = s^6-13\,s^5+68\,s^4-182\,s^3+261\,s^2-189\,s+54 $$

$$ p2 = s^6-14\,s^5+80\,s^4-238\,s^3+387\,s^2-324\,s+108 $$

The convex linear combination puts the weights on the superdiagonal and the new eigenvalue on the diagonal.

```
format short
A = 1/3*A1 + 2/3*A2
```

A =
    3.0000    1.0000         0         0         0         0
         0    3.0000    1.0000         0         0         0
         0         0    3.0000         0         0         0
         0         0         0    2.0000    0.6667         0
         0         0         0         0    1.6667    0.3333
         0         0         0         0         0    1.0000

Let's check that the characteristic polynomial is the same as the third polynomial we had last time.

```
p3 = charpoly(A,'s');
```

$$ p3 = s^6-\frac{41\,s^5}{3}+76\,s^4-\frac{658\,s^3}{3}+345\,s^2-279\,s+90 $$

The plots of the three polynomials show how the triple root is more sensitive than either double root, and how the double roots are, in turn, more sensitive than any of the simple roots.

plot_polys(p1,p2,p3)

A similarity transformation preserves, but disguises, the eigenvalues. Because it's handy, I'll use the HPL-AI matrix from my blog post last December, HPL-AI Benchmark.

```
format short
M = HPLAI(6,-1)
```

M =
    1.1667    0.1429    0.1250    0.1111    0.1000    0.0909
    0.2000    1.1667    0.1429    0.1250    0.1111    0.1000
    0.2500    0.2000    1.1667    0.1429    0.1250    0.1111
    0.3333    0.2500    0.2000    1.1667    0.1429    0.1250
    0.5000    0.3333    0.2500    0.2000    1.1667    0.1429
    1.0000    0.5000    0.3333    0.2500    0.2000    1.1667

Here's how we do a similarity transformation in MATLAB.

B = M*A/M

B =
    3.0610    1.1351    0.0899   -0.1695   -0.1178   -0.2053
    0.0405    3.1519    1.1018   -0.1919   -0.1296   -0.2244
    0.1360    0.2922    3.2144   -0.1402   -0.0745   -0.1867
    0.1096    0.3808    0.2786    1.8527    0.6013   -0.1919
    0.2839    0.6709    0.4447   -0.1222    1.6349    0.1467
    1.5590    1.5424    0.7469   -0.1300   -0.0449    0.7517

What did this do to the eigenvalues?

```
format long
e = eig(B)
```

e =
   1.000000000000000 + 0.000000000000000i
   1.666666666666671 + 0.000000000000000i
   1.999999999999999 + 0.000000000000000i
   3.000006294572211 + 0.000000000000000i
   2.999996852713897 + 0.000005451273553i
   2.999996852713897 - 0.000005451273553i

The simple roots have hardly moved. The root with multiplicity three is perturbed by roughly the cube root of precision.

```
format short
e3 = e(4:6) - 3;
r3 = abs(e3)
circle(e3)
```

r3 =
   1.0e-05 *
    0.6295
    0.6295
    0.6295
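That radius is consistent with the cube root of double-precision machine epsilon; a quick check (Python used here just for the arithmetic):

```python
import sys

eps = sys.float_info.epsilon     # 2^-52 for IEEE double precision
radius = eps ** (1 / 3)          # roughly 6.06e-06
# same order of magnitude as the observed r3 = 0.6295e-05
assert 1e-6 < radius < 1e-5
```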

What about the eigenvectors?

```
[V,~] = eig(B);
imagV = imag(V)
realV = real(V)
```

imagV =
   1.0e-05 *
         0         0         0         0         0         0
         0         0         0         0    0.3705   -0.3705
         0         0         0         0    0.0549   -0.0549
         0         0         0         0    0.0678   -0.0678
         0         0         0         0    0.0883   -0.0883
         0         0         0         0    0.1225   -0.1225
realV =
   -0.0601   -0.0519   -0.0904   -0.6942    0.6942    0.6942
   -0.0664   -0.0590   -0.1017   -0.1190    0.1190    0.1190
   -0.0742   -0.0683   -0.1162   -0.1487    0.1487    0.1487
   -0.3413   -0.9310   -0.9488   -0.1983    0.1983    0.1983
    0.2883    0.3258   -0.1627   -0.2975    0.2975    0.2975
   -0.8870   -0.1275   -0.2033   -0.5950    0.5950    0.5950

The last two vectors have small imaginary components and their real components are nearly the same as the fourth vector. So only the first four columns of `V` are good eigenvectors. We see `B`, and hence `A`, is *defective*. It does not have a full set of linearly independent eigenvectors.

```
help condeig
format short e
kappa = condeig(B)
```

CONDEIG Condition number with respect to eigenvalues.
   CONDEIG(A) is a vector of condition numbers for the eigenvalues of A.
   These condition numbers are the reciprocals of the cosines of the
   angles between the left and right eigenvectors.
   [V,D,s] = CONDEIG(A) is equivalent to: [V,D] = EIG(A); s = CONDEIG(A);
   Large condition numbers imply that A is near a matrix with multiple
   eigenvalues.
   Class support for input A: float: double, single
   See also COND.

kappa =
   1.6014e+00
   2.9058e+00
   2.9286e+00
   1.3213e+10
   1.3213e+10
   1.3213e+10

Look carefully at those numbers -- the power of 10 on the first three is zero, while on the last three it is 10. This confirms that the first three eigenvalues are well conditioned, while the last three, corresponding to the triple eigenvalue, are not.
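A first-order rule of thumb says a computed eigenvalue can move by roughly `kappa` times machine epsilon. The values above make that plausible; here is the order-of-magnitude check (plain Python arithmetic, an illustrative estimate rather than a rigorous bound for a defective eigenvalue):

```python
eps = 2.0 ** -52          # IEEE double-precision unit roundoff
kappa = 1.3213e+10        # condeig value for the triple eigenvalue
bound = kappa * eps       # roughly 2.9e-06
# the computed eigenvalues near 3 actually moved by about 6.3e-06,
# so kappa*eps has the right order of magnitude
assert 1e-6 < bound < 1e-5
```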

`A` is almost its own JCF, so this is no surprise.

A_jcf = jordan(A)

A_jcf =
   3.0000e+00   1.0000e+00            0            0            0            0
            0   3.0000e+00   1.0000e+00            0            0            0
            0            0   3.0000e+00            0            0            0
            0            0            0   1.0000e+00            0            0
            0            0            0            0   1.6667e+00            0
            0            0            0            0            0   2.0000e+00

But what about `B`? With exact computation, it would have the same JCF.

```
format short e
B_jcf = jordan(B);
real_B_jcf = real(B_jcf)
imag_B_jcf = imag(B_jcf)
```

real_B_jcf =
   1.0000e+00            0            0            0            0            0
            0   1.6667e+00            0            0            0            0
            0            0   2.0000e+00            0            0            0
            0            0            0   3.0000e+00            0            0
            0            0            0            0   3.0000e+00            0
            0            0            0            0            0   3.0000e+00
imag_B_jcf =
            0            0            0            0            0            0
            0            0            0            0            0            0
            0            0            0            0            0            0
            0            0            0            0            0            0
            0            0            0            0  -2.7873e-06            0
            0            0            0            0            0   2.7873e-06

The computed JCF is diagonal. This is, yet again, an example of the fact that The Jordan Canonical Form Just Doesn't Compute. Also see Golub and Wilkinson.

I was just about to make that the end of the post when I tried this. I didn't realize it at the time, but the use of the HPL-AI matrix for the "random" similarity transformation has some welcome consequences. The elements of this matrix are the ratios of small integers.

M = sym(M)

M =
[ 7/6,  1/7,  1/8,  1/9, 1/10, 1/11]
[ 1/5,  7/6,  1/7,  1/8,  1/9, 1/10]
[ 1/4,  1/5,  7/6,  1/7,  1/8,  1/9]
[ 1/3,  1/4,  1/5,  7/6,  1/7,  1/8]
[ 1/2,  1/3,  1/4,  1/5,  7/6,  1/7]
[   1,  1/2,  1/3,  1/4,  1/5,  7/6]

The elements of `A` are also the ratios of small integers.

sym(A)

ans =
[ 3, 1, 0, 0,   0,   0]
[ 0, 3, 1, 0,   0,   0]
[ 0, 0, 3, 0,   0,   0]
[ 0, 0, 0, 2, 2/3,   0]
[ 0, 0, 0, 0, 5/3, 1/3]
[ 0, 0, 0, 0,   0,   1]

The elements of `inv(M)` are ratios of large integers, but I don't have to show them because we're not going to invert `M`, even symbolically. Instead we use *forward slash* to compute the similarity transformation.

B = M*A/M;

The elements of `B` are also the ratios of large integers. Let's just look at the first column; the other columns are similar.

B1 = B(:,1)

B1 =
 5261534240243927141/1718872313588352715
   69679116377352174/1718872313588352715
    46738873967260860/343774462717670543
      2898457606578534/26444189439820811
    97602262779214116/343774462717670543
 2679724812276211392/1718872313588352715

With this exact symbolic computation, the characteristic polynomial of `B` is `p3`.

```
charpoly(B,'s')
```

ans = s^6 - (41*s^5)/3 + 76*s^4 - (658*s^3)/3 + 345*s^2 - 279*s + 90

As a result, the JCF of `B` is correct.

jordan(B)

ans =
[ 1,   0, 0, 0, 0, 0]
[ 0, 5/3, 0, 0, 0, 0]
[ 0,   0, 2, 0, 0, 0]
[ 0,   0, 0, 3, 1, 0]
[ 0,   0, 0, 0, 3, 1]
[ 0,   0, 0, 0, 0, 3]

The third output argument of the symbolic version of `eig` is a vector whose length is the number of linearly independent eigenvectors. This is the sum of the *geometric multiplicities*. In this case, it is 4.

[V,E,k] = eig(B)

V =
[   11/27,  463/6831,  4/9, 7/6]
[   25/54,    31/414,  1/2, 1/5]
[   15/28,  485/5796,  4/7, 1/4]
[  460/63, 1115/2898, 14/3, 1/3]
[   -23/9,  -157/483,  4/5, 1/2]
[       1,         1,    1,   1]
E =
[ 5/3, 0, 0, 0, 0, 0]
[   0, 1, 0, 0, 0, 0]
[   0, 0, 2, 0, 0, 0]
[   0, 0, 0, 3, 0, 0]
[   0, 0, 0, 0, 3, 0]
[   0, 0, 0, 0, 0, 3]
k =
     1     2     3     4

Verify that the eigenvectors work.

```
BV = B*V
VE_k = V*E(k,k)
```

BV =
[    55/81,  463/6831,  8/9, 7/2]
[  125/162,    31/414,    1, 3/5]
[    25/28,  485/5796,  8/7, 3/4]
[ 2300/189, 1115/2898, 28/3,   1]
[  -115/27,  -157/483,  8/5, 3/2]
[      5/3,         1,    2,   3]
VE_k =
[    55/81,  463/6831,  8/9, 7/2]
[  125/162,    31/414,    1, 3/5]
[    25/28,  485/5796,  8/7, 3/4]
[ 2300/189, 1115/2898, 28/3,   1]
[  -115/27,  -157/483,  8/5, 3/2]
[      5/3,         1,    2,   3]

While preparing these two posts about "pejorative manifolds", I discovered some beautiful patterns of roundoff error generated by Triple Kronecker Products with very high multiplicity. That is going to be my next post. Here is a preview. tkp_preview.gif

close


In an unpublished 1972 technical report "Conserving confluence curbs ill-condition," Velvel Kahan coined the descriptive term *pejorative manifold*. In case you don't use it in everyday conversation, *pejorative* means "expressing contempt or disapproval."

Velvel's report concerns polynomials with multiple roots, which are usually regarded with contempt because they are so badly conditioned. But Velvel's key observation is that although multiple roots are sensitive to arbitrary perturbation, they are insensitive to perturbations that preserve multiplicity. The same observation can be applied to matrix inverses and eigenvalues.

In this example, the pejorative manifold $\mathcal{M}$ is the set of all sixth degree polynomials which have a zero of multiplicity three at x = 3. These are severe restrictions, of course, and $\mathcal{M}$ is a tiny subset of the set of all polynomials. But if we stay within $\mathcal{M}$, life is not nearly so harsh.

First, a pair of polynomials with multiple roots. All of our polynomials have a root of multiplicity three at x = 3 as well as other multiple roots. This one has a double root at x = 1 and a simple root at x = 2.

```
x = sym('x');
p1 = (x-3)^3*(x-2)*(x-1)^2;
```

$$ p1 = {\left(x-1\right)}^2\,\left(x-2\right)\,{\left(x-3\right)}^3 $$

The next one has a double root at x = 2 and a simple root at x = 1.

p2 = (x-3)^3*(x-2)^2*(x-1);

$$ p2 = \left(x-1\right)\,{\left(x-2\right)}^2\,{\left(x-3\right)}^3 $$

To summarize, we have two polynomials, `p1` and `p2`. Both have degree six and have a triple root at x = 3. In addition, `pk` has a double root at x = `k`.

Let's see the coefficients.

p1 = expand(p1);

$$ p1 = x^6-13\,x^5+68\,x^4-182\,x^3+261\,x^2-189\,x+54 $$

p2 = expand(p2);

$$ p2 = x^6-14\,x^5+80\,x^4-238\,x^3+387\,x^2-324\,x+108 $$

We can solve them exactly and verify that they have the desired roots.

```
z1 = solve(p1)'
z2 = solve(p2)'
```

z1 =
[ 1, 1, 2, 3, 3, 3]
z2 =
[ 1, 2, 2, 3, 3, 3]

Now let's take a convex linear combination. I am using 1/3 and 2/3, but I could use any other weights as long as their sum is 1. Any such convex linear combination is still in $\mathcal{M}$.

p3 = 1/3*p1 + 2/3*p2;

$$ p3 = x^6-\frac{41\,x^5}{3}+76\,x^4-\frac{658\,x^3}{3}+345\,x^2-279\,x+90 $$

This still has the triple root at x = 3. The roots at x = 1 and x = 2 are now simple and the root at x = 5/3 is the convex linear combination of 1 and 2 with the same coefficients, 5/3 = 1/3*1 + 2/3*2.

z = solve(p3)'

z = [ 1, 5/3, 2, 3, 3, 3]
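Exact rational arithmetic confirms this. Here is a sketch in Python with the `fractions` module, using the coefficients of `p1` and `p2` shown above:

```python
from fractions import Fraction as F

p1 = [1, -13, 68, -182, 261, -189, 54]
p2 = [1, -14, 80, -238, 387, -324, 108]

# convex combination with weights 1/3 and 2/3
p3 = [F(1, 3) * a + F(2, 3) * b for a, b in zip(p1, p2)]

def horner(coeffs, x):
    """Evaluate a polynomial given by descending coefficients."""
    v = F(0)
    for c in coeffs:
        v = v * x + c
    return v

# 5/3 = 1/3*1 + 2/3*2 is an exact root of p3
assert horner(p3, F(5, 3)) == 0
# the simple roots at 1 and 2 and the triple root at 3 survive, too
assert all(horner(p3, F(z)) == 0 for z in (1, 2, 3))
```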

Looking at plots of the three polynomials, you can appreciate how the triple root at 3 is more sensitive than the blue double root at 1 or the green double root at 2, which are, in turn, more sensitive than any of the simple roots.

plot_polys(p1,p2,p3)

Now a small perturbation. Take a convex linear combination with an irrational weight and use `vpa`, variable-precision floating-point arithmetic. We are in a floating point haze with thickness $10^{-32}$ surrounding $\mathcal{M}$.

```
digits(32)
w = 2/(1+sqrt(vpa(5)))
q = w*p1 + (1-w)*p2;
z = w*1 + (1-w)*2
```

w =
0.61803398874989484820458683436564
z =
1.3819660112501051517954131656344

Here are the coefficients.

disp(coeffs(q)')

  74.626164607505678196952310944256
-240.56541151876419549238077736064
 309.12771741751324912622205886993
-203.39009663000588850054313727552
  72.583592135001261821544957987612
 -13.381966011250105151795413165634
   1.0

Find the roots.

z = solve(q)

z =
1.0000000000000000000000000000003
1.3819660112501051517954131656331
2.0000000000000000000000000000026
3.0000000000990590107064189257617
2.9999999999504704946467905371184 + 0.000000000085787619734391393745538952850968i
2.9999999999504704946467905371184 - 0.000000000085787619734391393745538952850968i

The simple roots have survived to full accuracy. But the triple root at x = 3 has been split by the perturbation into three complex numbers on a circle centered at x = 3 with radius roughly the cube root of the working precision.

circle(double(z(4:6)-3))

Where did the cube root of precision come from? We are trying to solve the equation

$$ {\left(x-3\right)}^3\,\sigma(x)=0 $$

where $\sigma(x)$ is not zero near $x = 3$. Using floating point arithmetic with precision $\epsilon$, we instead solve something like

$$ {\left(x-3\right)}^3\,\sigma(x)=\epsilon $$

The solution is

$$ x = 3 + \left( \frac{\epsilon}{\sigma(x)}\right)^{1/3}$$

with the cube root taking on three different complex values.
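The three values of that cube root are equally spaced on a circle about $x = 3$. A small illustration with Python's `cmath`, taking $\sigma(x) = 1$ and $\epsilon = 2^{-52}$ for concreteness:

```python
import cmath

eps = 2.0 ** -52
r = eps ** (1 / 3)                                  # radius, about 6.06e-06
roots = [3 + r * cmath.exp(2j * cmath.pi * k / 3)   # three cube roots of eps
         for k in range(3)]

# each perturbed root satisfies (x - 3)^3 = eps, up to rounding
for x in roots:
    assert abs((x - 3) ** 3 - eps) < 1e-20
# and all three lie on the same circle centered at x = 3
assert all(abs(abs(x - 3) - r) < 1e-14 for x in roots)
```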

Let's leave the Symbolic Toolbox briefly. Convert the coefficients in our convex combination to double precision and use the traditional MATLAB function `roots`.

```
q = fliplr(double(coeffs(p3)))';
disp('q = ')
fprintf('%25.15f\n',q)
format long
z = roots(q)
```

q =
        1.000000000000000
      -13.666666666666666
       76.000000000000000
     -219.333333333333343
      345.000000000000000
     -279.000000000000000
       90.000000000000000
z =
  3.000044739105571 + 0.000077484446947i
  3.000044739105571 - 0.000077484446947i
  2.999910521787618 + 0.000000000000000i
  2.000000000002179 + 0.000000000000000i
  1.666666666665676 + 0.000000000000000i
  1.000000000000047 + 0.000000000000000i

This is the same behavior we had with 32-digit `vpa`. The simple roots at x = 1, 5/3, and 2 are computed to nearly full double precision accuracy, while the triple roots have accuracy roughly `eps^(1/3)`, which is about five digits.

circle(z(1:3)-3)

Subtract a small quantity from the constant term in the polynomial. This moves us out of $\mathcal{M}$.

p = p1 - sym(1.e-4)

p = x^6 - 13*x^5 + 68*x^4 - 182*x^3 + 261*x^2 - 189*x + 539999/10000

We can try to find the roots exactly, but the modified polynomial does not factor over the rationals.

solve(p)

ans =
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 1)
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 2)
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 3)
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 4)
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 5)
root(z^6 - 13*z^5 + 68*z^4 - 182*z^3 + 261*z^2 - 189*z + 539999/10000, z, 6)

So find the roots numerically with `vpa`. This time all of the roots, even the simple ones, are affected.

```
digits(20)
z = vpa(solve(p))
```

z =
0.99647996935769229447
1.0035512830254182854
1.9999000099960012995
2.9856883694814744941 - 0.025815296081878984073i
2.9856883694814744941 + 0.025815296081878984073i
3.0286919986579391324

We constructed `p1` to have a double root at x = 1, a simple root at x = 2, and a triple root at x = 3. Their multiplicities determine how much the perturbation affects them.

e = abs(double(z - z1'))

e =
   0.003520030642308
   0.003551283025418
   0.000099990003999
   0.029516982906352
   0.029516982906352
   0.028691998657939

I am making this a two-parter. The next post is about matrix eigenvalues.

close

Zhonggang Zeng, The Tale of Polynomials, PowerPoint presentation, <http://homepages.neiu.edu/~zzeng/neiu.ppt>, 2003.

W. Kahan, Conserving confluence curbs ill-condition. Technical Report 6, Computer Science, University of California, Berkeley, 1972.


I have always been fascinated by the names that are used to describe colors. There are dozens of web sites with lists of color names. I was surprised to discover that the shade of blue we use in MathWorks logo is almost the same as the one used by the United States Air Force Academy.

Here is the "official" type font and color for a MathWorks® logo.

```
clear
M = imread('MW_logo.jpeg');
M = imresize(M,1/6);
imshow(M)
```

Let's retrieve the red, green and blue components of one pixel in the logo. This is MathWorks blue.

```
mw_blue = double(squeeze(M(25,168,1:3))')
show_rgb(mw_blue)
```

mw_blue =
     1    86   150

It's a medium blue with a bit of green and hardly any red.

Martin Krzywinski is a biologist, mathematician and artist at Canada's British Columbia Cancer Agency, Genome Sciences Centre. As befits his name, he maintains a crazy web site, chock full of all kinds of science and art. In particular, he has collected a list of 9,284 named colors from dozens of sources. The list includes 940 Pantone Colors, a standard in the printing industry, with "names" like PMS2945-C.

I won't display all 9,284 entries in his collection. Here is a sample of just a few of them from near the middle of the list.

```
load krzy krzy names rgb
k = 5000:5020;
sample = krzy(k,:)
```

sample =
  21×4 table
     r      g      b      name
    ___    ___    ___    ___________________________
    164     66    160    "ugly_purple"
    164     90     82    "redwood"
    164     97     58    "footloose"
    165      0     85    "violet_red"
    165    101     49    "mai_tai"
    165    101     49    "style_pasifika_shore_sand"
    165    105     79    "sepia"
    165    107    109    "PMS4995"
    165     11     94    "jazzberry_jam"
    165    110    117    "turkish_rose"
    165    113    100    "blast_off_bronze"
    165    126     82    "puce"
    165    139    111    "mongoose"
    165    149    120    "triple_sisal"
    165    149    145    "triple_milestone"
    165    150    146    "asteroid"
    165    151    132    "malta"
    165    153    130    "routeburn"
    165    154    168    "siesta"
    165    155    145    "zorba"
    165    156     85    "gingko"

I'm going to compute the $l_1$ vector norm of the differences between all the rgb triples in Krzywinski's list and MathWorks blue. This norm isn't exactly the best way to compute the distance between colors, but it's good enough here. (Notice the singleton expansion, a 9284-by-3 array minus a 1-by-3 array.)

e = sum(abs(rgb - mw_blue),2);

Let's say that two colors are close to each other if this distance is less than 20. Here are the nearby colors.

```
f = (e < 20);
nearby = names(f)
show_rgb([mw_blue; rgb(f,:)])
```

nearby =
  7×1 string array
    "usafa_blue"
    "PMS2945"
    "endeavour"
    "PMS301"
    "peacock_blue"
    "bahama_blue"
    "medium_electric_blue"

The closest is `usafa_blue`, from the United States Air Force Academy in Colorado Springs. I didn't expect this.

```
usafa_blue = rgb(find(f,1),:)
distance = e(find(f,1))
```

usafa_blue =
     0    79   152
distance =
    10
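The distance computation itself is one line; here is a quick check in Python with the two triples from the post:

```python
def l1_distance(c1, c2):
    """Sum of absolute differences of the r, g, b components."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

mw_blue = (1, 86, 150)      # MathWorks blue, from the logo pixel above
usafa_blue = (0, 79, 152)   # the nearest entry in the Krzywinski list
assert l1_distance(mw_blue, usafa_blue) == 10   # matches distance = 10
```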

I actually got started looking for these color matches by using the Krzywinski color lookup web service.

```
query_url = 'http://mkweb.bcgsc.ca/colornames/namethatcolor/?';
query = [query_url 'rgb=' sprintf('%d,', mw_blue)];
web(query)
```

Here are the nearby colors, according to whatever criterion this service uses.

```
type response.txt
```

dodger_blue (3.7)
dusk_blue (4.4)
bedazzled_blue (4.9)
cyan_cobalt_blue (5.0)
medium_electric_blue (5.5)
lapis_lazuli (5.5)
yale_blue (6.3)
azure (6.3)
st_tropaz (6.5)

I was intrigued by the appearance of `dodger_blue` and `yale_blue` on this list. In case you haven't heard of them, the Dodgers are one of the best U. S. baseball teams not in Boston or New York City and Yale is one of the better U.S. universities not located near Boston or in the San Francisco Bay Area.

There are five `dodger_blue` entries on the list,

```
f = (names == 'dodger_blue');
show_rgb([mw_blue; rgb(f,:)])
```

There is only one `yale_blue`.

```
f = (names == 'yale_blue');
show_rgb([mw_blue; rgb(f,:)])
```

xkcd is a brilliant web comic strip that Randall Munroe started **almost 15 years ago**. One of Munroe's more serious projects was the xkcd color model. He surveyed over 220,000 people, asking them to name colors. This produced these 954 named colors.

You can download the list.

load xkcd names rgb

Which colors are close to MathWorks blue?

```
e = sum(abs(rgb - mw_blue),2);
f = find(e < 72);
nearby = names(f)
```

nearby =
  23×1 string array
    "deep_turquoise"
    "deep_aqua"
    "dusk_blue"
    "deep_sea_blue"
    "petrol"
    "twilight_blue"
    "dark_aquamarine"
    "deep_teal"
    "peacock_blue"
    "light_navy"
    "light_navy_blue"
    "prussian_blue"
    "darkish_blue"
    "denim_blue"
    "teal_blue"
    "ocean"
    "bluegreen"
    "dark_aqua"
    "dark_cyan"
    "cobalt"
    "dark_turquoise"
    "ocean_blue"
    "sea_blue"

How close are they?

show_rgb([mw_blue; rgb(f,:)])

Not so close. Actually the xkcd list is already included in the Krzywinski list.

The closest I found was `usafa_blue`.

```
mw_blue
usafa_blue
show_rgb([mw_blue; usafa_blue])
```

mw_blue =
     1    86   150
usafa_blue =
     0    79   152

If you want to do this yourself:

```
type show_rgb
```

function show_rgb(rgb)
   if any(max(rgb) > 1)
      rgb = double(rgb)/255;
   end
   n = size(rgb,1);
   set(gcf,'pos',[500 500 420 100*ceil(n/8)])
   clf
   axis([0 1 0 1/8])
   axis equal
   axis off
   noticks
   for k = 1:n
      patch([0 1 1 0 0]/9 + mod(k-1,8)/8, ...
            [0 0 1 1 0]/9 - .15*floor((k-1)/8), ...
            rgb(k,:))
   end
end



Inspired by Patsy and https://www.amazon.com/gp/product/B075JDVR53.

```
type xmas_2019.m
```

function xmas_2019
   % Christmas greetings.
   set(gcf,'pos',[800 200 420 315])
   clf
   axis([0 1 0 1])
   axis off
   for k = 1:3
      [x,y] = base(k);
      patch(x,y,gray)
      [x,y] = edge(k);
      flame(k) = patch(x,y,'w');
   end
   gif_frame('html/xmas_2019.gif')
   for k = 1:50
      r = randi(2,1,3)-1;
      j = randi(3,1,1);
      if norm(r) > 0
         flame(j).FaceColor = r;
         gif_frame
      end
   end
   gif_frame('reset')
   close

   function [x,y] = base(k)
      xo = [.33 .45 .57];
      ys = [.38 .30 .44];
      x = xo(k) + .10*[0 1 1 0 0];
      y = ys(k)*[0 0 1 1 0];
   end

   function [x,y] = edge(k)
      xo = [.33 .45 .57];
      yo = [.38 .30 .44];
      e = [2 0 0 2 3];
      x = xo(k) + .01*[e 5 10-fliplr(e)];
      y = yo(k) + .08*[0:4 5 4:-1:0];
   end

   function g = gray
      g = [.5 .5 .5];
   end
end


Published with MATLAB® R2018b