
Classical Gram-Schmidt and Modified Gram-Schmidt are two algorithms for orthogonalizing a set of vectors. Householder elementary reflectors can be used for the same task. The three algorithms have very different roundoff error properties.

My colleague and friend G. W. Stewart is a Distinguished University Professor Emeritus at the Department of Computer Science, University of Maryland. Everybody knows him as “Pete”. He has never been able to satisfactorily explain the origins of “Pete” to me. It somehow goes back through his father to his grandfather and maybe great grandfather, who were also nicknamed “Pete”.

Pete has written several books on numerical linear algebra. For my blog today I am going to rely on the descriptions and pseudocode from his book “Matrix Algorithms, Volume I: Basic Decompositions”. His pseudocode is MATLAB ready.

The classic Gram-Schmidt algorithm is the first thing you might think of for producing an orthogonal set of vectors. For each vector in your data set, subtract its projections onto the previous vectors, normalize what is left, and add it to the orthogonal set. Here is the code. `X` is the original set of vectors, `Q` is the resulting set of orthogonal vectors, and `R` is the set of coefficients, organized into an upper triangular matrix.

```
type gs
```

```
function [Q,R] = gs(X)
% Classical Gram-Schmidt.  [Q,R] = gs(X);
% G. W. Stewart, "Matrix Algorithms, Volume 1", SIAM, 1998.
   [n,p] = size(X);
   Q = zeros(n,p);
   R = zeros(p,p);
   for k = 1:p
      Q(:,k) = X(:,k);
      if k ~= 1
         R(1:k-1,k) = Q(:,1:k-1)'*Q(:,k);
         Q(:,k) = Q(:,k) - Q(:,1:k-1)*R(1:k-1,k);
      end
      R(k,k) = norm(Q(:,k));
      Q(:,k) = Q(:,k)/R(k,k);
   end
end
```

The entire process can be expressed by matrix multiplication, with orthogonal $Q$ and upper triangular $R$.

$$X = QR$$

Modified Gram-Schmidt is a rather different algorithm, not just a simple modification of classical Gram-Schmidt. The idea is to orthogonalize each vector against the emerging set of vectors instead of against the original set. There are two variants, a column-oriented one and a row-oriented one. They produce the same results, in a different order. Here is the column-oriented version.

```
type mgs
```

```
function [Q,R] = mgs(X)
% Modified Gram-Schmidt.  [Q,R] = mgs(X);
% G. W. Stewart, "Matrix Algorithms, Volume 1", SIAM, 1998.
   [n,p] = size(X);
   Q = zeros(n,p);
   R = zeros(p,p);
   for k = 1:p
      Q(:,k) = X(:,k);
      for i = 1:k-1
         R(i,k) = Q(:,i)'*Q(:,k);
         Q(:,k) = Q(:,k) - R(i,k)*Q(:,i);
      end
      R(k,k) = norm(Q(:,k));
      Q(:,k) = Q(:,k)/R(k,k);
   end
end
```

A Householder reflection $H$ transforms a given vector $x$ into a multiple of a unit vector $e_k$ while preserving length, so $Hx = \pm \sigma e_k$ where $\sigma = ||x||$.

The matrix $H$ is never formed. The reflection is expressed as a rank-one modification of the identity

$$H = I - uu^T \ \mbox{where} \ ||u|| = \sqrt{2}$$

(The use of $\sqrt{2}$ here is Pete’s signature normalization.)

```
type housegen
```

```
function [u,nu] = housegen(x)
% [u,nu] = housegen(x)
% Generate Householder reflection.
% H = I - uu' with Hx = -+ nu e_1, where nu = norm(x).
% G. W. Stewart, "Matrix Algorithms, Volume 1", SIAM, 1998.
   u = x;
   nu = norm(x);
   if nu == 0
      u(1) = sqrt(2);
      return
   end
   u = x/nu;
   if u(1) >= 0
      u(1) = u(1) + 1;
      nu = -nu;
   else
      u(1) = u(1) - 1;
   end
   u = u/sqrt(abs(u(1)));
end
```

Now to apply $H$ to any other $y$, compute the inner product

$$\tau = u^Ty$$

and then simply subtract

$$Hy = y - \tau u$$
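This pair of formulas is the whole story computationally. Here is a small pure-Python sketch (my own illustration, not Pete's code) that builds a generator `u` the way `housegen` does for a real vector with nonnegative first component, then applies the reflector without ever forming $H$:

```python
import math

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def apply_reflector(u, y):
    # Hy = y - (u'y) u ; the matrix H = I - uu' is never formed
    tau = sum(ui * yi for ui, yi in zip(u, y))
    return [yi - tau * ui for yi, ui in zip(y, u)]

# Build u as housegen would for a real x with x(1) >= 0:
# normalize x, add 1 to the first component, rescale so norm(u) = sqrt(2).
x = [3.0, 4.0]
nu = norm(x)
u = [xi / nu for xi in x]
u[0] = u[0] + 1.0
u = [ui / math.sqrt(abs(u[0])) for ui in u]

Hx = apply_reflector(u, x)           # a multiple of e_1, namely -nu * e_1
Hy = apply_reflector(u, [1.0, 2.0])  # some other vector; length is preserved
```

Applying $H$ this way costs one inner product and one vector update per column, which is why the reflectors are stored as generators rather than as matrices.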

This program does not actually compute the QR orthogonalization, but rather computes `R` and a matrix `U` containing vectors that generate the Householder reflectors whose product is Q.

```
type hqrd
```

```
function [U,R] = hqrd(X)
% Householder triangularization.  [U,R] = hqrd(X);
% Generators of Householder reflections stored in U.
% H_k = I - U(:,k)*U(:,k)'.
% prod(H_m ... H_1)X = [R; 0], where m = min(size(X)).
% G. W. Stewart, "Matrix Algorithms, Volume 1", SIAM, 1998.
   [n,p] = size(X);
   U = zeros(size(X));
   m = min(n,p);
   R = zeros(m,m);
   for k = 1:m
      [U(k:n,k),R(k,k)] = housegen(X(k:n,k));
      v = U(k:n,k)'*X(k:n,k+1:p);
      X(k:n,k+1:p) = X(k:n,k+1:p) - U(k:n,k)*v;
      R(k,k+1:p) = X(k,k+1:p);
   end
end
```

All three of these algorithms compute `Q` and `R` that do a good job of reproducing the data `X`, that is

`Q*R` is always close to `X` for all three algorithms.

On the other hand, their behavior is very different when it comes to producing orthogonality. Is `Q'*Q` close to the identity?

- Classic Gram-Schmidt. Usually very poor orthogonality.
- Modified Gram-Schmidt. Depends upon the condition of `X`. Fails completely when `X` is singular.
- Householder triangularization. Always good orthogonality.
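The contrast is easy to reproduce in any language. Here is a pure-Python sketch (an illustration of the two update orders, not Stewart's code) run on the columns of the 7-by-7 Hilbert matrix, the poorly conditioned example used below:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    s = math.sqrt(dot(v, v))
    return [x / s for x in v]

def cgs(cols):
    # classical: all coefficients come from the original column
    Q = []
    for x in cols:
        r = [dot(q, x) for q in Q]
        v = x[:]
        for ri, q in zip(r, Q):
            v = [vi - ri * qi for vi, qi in zip(v, q)]
        Q.append(normalize(v))
    return Q

def mgs(cols):
    # modified: each coefficient comes from the partially orthogonalized column
    Q = []
    for x in cols:
        v = x[:]
        for q in Q:
            ri = dot(q, v)
            v = [vi - ri * qi for vi, qi in zip(v, q)]
        Q.append(normalize(v))
    return Q

def orth_err(Q):
    # largest entry of |Q'Q - I|
    return max(abs(dot(qi, qj) - (1.0 if i == j else 0.0))
               for i, qi in enumerate(Q) for j, qj in enumerate(Q))

# columns of the 7-by-7 Hilbert matrix, a classically bad example
hilb = [[1.0 / (i + j + 1) for i in range(7)] for j in range(7)]
err_cgs = orth_err(cgs(hilb))
err_mgs = orth_err(mgs(hilb))
```

The only difference between the two functions is whether the coefficients are computed from the original column or from the partially orthogonalized one; that alone accounts for the difference in `Q'*Q`.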

```
type compare.m
```

```
function compare(X)
% compare(X)
% Compare three QR decompositions.
   I = eye(size(X));

   %% Classic Gram-Schmidt
   [Q,R] = gs(X);
   qrerr_gs = norm(Q*R-X,inf)/norm(X,inf);
   ortherr_gs = norm(Q'*Q-I,inf);

   %% Modified Gram-Schmidt
   [Q,R] = mgs(X);
   qrerr_mgs = norm(Q*R-X,inf)/norm(X,inf);
   ortherr_mgs = norm(Q'*Q-I,inf);

   %% Householder QR decomposition
   [U,R] = hqrd(X);
   QR = R;
   E = I;
   for k = size(X,2):-1:1
      uk = U(:,k);
      QR = QR - uk*(uk'*QR);
      E = E - uk*(uk'*E) - (E*uk)*uk' + uk*(uk'*E*uk)*uk';
   end
   qrerr_h = norm(QR-X,inf)/norm(X,inf);
   ortherr_h = norm(E-I,inf);

   %% Report results
   fprintf('QR error\n')
   fprintf('Classic:     %10.3e\n',qrerr_gs)
   fprintf('Modified:    %10.3e\n',qrerr_mgs)
   fprintf('Householder: %10.3e\n',qrerr_h)
   fprintf('\n')
   fprintf('Orthogonality error\n')
   fprintf('Classic:     %10.3e\n',ortherr_gs)
   fprintf('Modified:    %10.3e\n',ortherr_mgs)
   fprintf('Householder: %10.3e\n',ortherr_h)
end
```

Well conditioned.

compare(magic(7))

```
QR error
Classic:      1.726e-16
Modified:     6.090e-17
Householder:  3.654e-16

Orthogonality error
Classic:      3.201e+00
Modified:     1.534e-15
Householder:  1.069e-15
```

Poorly conditioned, nonsingular.

compare(hilb(7))

```
QR error
Classic:      5.352e-17
Modified:     5.352e-17
Householder:  7.172e-16

Orthogonality error
Classic:      5.215e+00
Modified:     1.219e-08
Householder:  1.686e-15
```

Singular.

compare(magic(8))

```
QR error
Classic:      1.435e-16
Modified:     8.540e-17
Householder:  2.460e-16

Orthogonality error
Classic:      5.413e+00
Modified:     2.162e+00
Householder:  2.356e-15
```

G. W. Stewart, “Matrix Algorithms, Volume 1: Basic Decompositions”, SIAM, 1998.

https://www.amazon.com/Matrix-Algorithms-1-Basic-Decompositions/dp/0898714141

Published with MATLAB® R2016a


At a minisymposium honoring Charlie Van Loan this week during the SIAM Annual Meeting, I will describe several dubious methods for computing the zeros of polynomials. One of the methods is the Graeffe Root-squaring method, which I will demonstrate using my favorite cubic, $x^3-2x-5$.

What is today often called the Graeffe Root-Squaring method was discovered independently by Dandelin, Lobacevskii, and Graeffe in 1826, 1834 and 1837. A 1959 article by Alston Householder referenced below straightens out the history.

The idea is to manipulate the coefficients of a polynomial to produce a second polynomial whose roots are the squares of the roots of the first. If the original has a dominant real root, it will become even more dominant. When the process is iterated you eventually reach a point where the dominant root can be read off as the ratio of the first two coefficients. All that remains is to take an n-th root to undo the iteration.
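To make the idea concrete, here is a short Python sketch (my addition, not from the original post) of the root-squaring iteration for a cubic; because Python integers are exact, the coefficient growth discussed below is not a problem here:

```python
def graeffe_step(a):
    # Cubic whose roots are the squares of the roots of
    # a[0]*x^3 + a[1]*x^2 + a[2]*x + a[3].
    return [a[0]**2,
            -a[1]**2 + 2*a[0]*a[2],
            a[2]**2 - 2*a[1]*a[3],
            -a[3]**2]

a = [1, 0, -2, -5]                 # x^3 - 2x - 5
for k in range(1, 9):              # eight root-squaring steps
    a = graeffe_step(a)
root = (-a[1]) ** (1.0 / 2**8)     # the 2^8-th root undoes the squaring
```

Since the coefficient arithmetic is exact integer arithmetic, the only rounding here happens in the final 256th root.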

Here is an elegant bit of code for producing a cubic whose roots are the squares of the roots of a given cubic.

```
type graeffe
```

```
function b = graeffe(a)
% a = a(1)*x^3 + ... + a(4) is a cubic.
% b = graeffe(a) is a cubic whose roots are the squares of the roots of a.
   b = [ a(1)^2
        -a(2)^2 + 2*a(1)*a(3)
         a(3)^2 - 2*a(2)*a(4)
        -a(4)^2 ];
```

I discussed my favorite cubic, $z^3-2z-5$, in a series of posts beginning with a historic cubic last December 21st.

A contour plot of the magnitude of this cubic on a square region in the plane shows the dominant real root at approximately $x=2.09$ and the pair of complex conjugate roots with smaller magnitude approximately $|z|=1.55$.


Repeated application of the transformation essentially squares the coefficients. So the concern is overflow. When I first ran this years ago as a student on the Burroughs B205, I had a limited floating point exponent range and overflow was a severe constraint. Today with IEEE doubles we’re a little better off.

Here is a run on my cubic. I’m just showing a few significant digits of the polynomial coefficients because the important thing is their exponents.

```
a = [1 0 -2 -5];
k = 0;
f = '%5.0f %4.0f %5.0f %8.3e %8.3e %8.3e %20.15f \n';
fprintf(f,k,1,a,Inf);
while all(isfinite(a))
   k = k+1;
   a = graeffe(a);
   n = 2^k;
   r = (-a(2))^(1/n);
   fprintf(f,k,n,a,r)
end
```

```
    0    1     1  0.000e+00 -2.000e+00  -5.000e+00                  Inf
    1    2     1 -4.000e+00  4.000e+00  -2.500e+01    2.000000000000000
    2    4     1 -8.000e+00 -1.840e+02  -6.250e+02    1.681792830507429
    3    8     1 -4.320e+02  2.386e+04  -3.906e+05    2.135184796196703
    4   16     1 -1.389e+05  2.316e+08  -1.526e+11    2.096144583759898
    5   32     1 -1.883e+10  1.125e+16  -2.328e+22    2.094553557477412
    6   64     1 -3.547e+20 -7.504e+32  -5.421e+44    2.094551481347086
    7  128     1 -1.258e+41  1.786e+65  -2.939e+89    2.094551481542327
    8  256     1 -1.582e+82 -4.203e+130 -8.636e+178   2.094551481542327
    9  512     1 -2.504e+164 -9.665e+260       -Inf   2.094551481542327
```

So after seven steps we have computed the dominant root to double precision accuracy, but it takes the eighth step to confirm that. And after nine steps we run out of exponent range.

This is a really easy example. To make a robust polynomial root finder we would have to confront under/overflow, scaling, multiplicities, complex roots, and higher degree. As far as I know, no one has ever made a serious piece of mathematical software out of the Graeffe Root-squaring method. If you know of one, please contribute a comment.

Alston S. Householder, “Dandelin, Lobacevskii, or Graeffe?,” *Amer. Math. Monthly*, 66, 1959, pp. 464–466. <http://www.jstor.org/stable/2310626?seq=1#>

Published with MATLAB® R2016a


During the SIAM Annual Meeting this summer in Boston there will be a special minisymposium Wednesday afternoon, July 13, honoring Charlie Van Loan, who is retiring at Cornell. (I use “at” because he’s not leaving Ithaca.) I will give a talk titled “19 Dubious Ways to Compute the Zeros of a Polynomial”, following in the footsteps of the paper about the matrix exponential that Charlie and I wrote in 1978 and updated 25 years later. I really don’t have 19 ways to compute polynomial zeros, but then I only have a half hour for my talk. Most of the methods have been described previously in this blog. Today’s post is mostly about “roots”.

Almost 40 years ago, in the late 1970s, when I was developing the original Fortran-based MATLAB, I wanted to have a command to find the roots of a polynomial. At the time MATLAB was just a primitive matrix calculator. It was not yet a programming environment; I did not yet have M-files or toolboxes.

I had two objectives. I needed to keep the amount of software to a minimum. Mike Jenkins was a friend of mine who had recently submitted his Ph. D. thesis, under Joe Traub’s direction, to Stanford, with what is now known as the Jenkins-Traub algorithm for polynomial zero finding. But that would require way too much memory for my MATLAB. And I wanted to turn the tables on the traditional role the characteristic polynomial plays in linear algebra. First year students are taught to compute the eigenvalues of a matrix by finding the zeros of its characteristic polynomial, the determinant of $A - \lambda I$. Wilkinson had shown that in the presence of roundoff error this is a bad idea. I wanted to emphasize Wilkinson’s point by using powerful matrix eigenvalue techniques on the companion matrix to compute polynomial roots.

I already had the EISPACK code for the Francis double shift QR iteration to find the eigenvalues of a Hessenberg matrix. It took just a few more lines to find polynomial roots.

For example, MATLAB represents this polynomial $z^5 - 15 z^4 + 85 z^3 - 225 z^2 + 274 z - 120$ by a vector of coefficients.

p = [1 -15 85 -225 274 -120]

p = 1 -15 85 -225 274 -120

The `roots` function finds its roots.

roots(p)

```
ans =
    5.0000
    4.0000
    3.0000
    2.0000
    1.0000
```

MATLAB does this by forming the companion matrix, which is upper Hessenberg, and computing its eigenvalues.

```
A = [15 -85 225 -274 120
     eye(4,5) ]
eig(A)
```

```
A =
    15   -85   225  -274   120
     1     0     0     0     0
     0     1     0     0     0
     0     0     1     0     0
     0     0     0     1     0

ans =
    5.0000
    4.0000
    3.0000
    2.0000
    1.0000
```

In the original documentation for “roots” I wrote

```
It uses order n^2 storage and order n^3 time. An algorithm
designed specifically for polynomial roots might use order n
storage and n^2 time. And the roundoff errors introduced correspond
to perturbations in the elements of the companion matrix,
not the polynomial coefficients.
```

So, at the time I knew that “roots” had impeccable numerical properties from the matrix eigenvalue point of view, but I wasn’t sure about the numerical properties from the polynomial point of view. Were we computing the exact roots of some polynomial near the original one? Maybe not. It depends crucially upon the scaling.

“Roots” is possibly dubious numerically, and certainly dubious from a computation cost point of view.

Almost twenty years later, in 1994 and 1995, a pair of papers, one by Nick Trefethen and K. C. Toh, and one by Alan Edelman and H. Murakami, finally gave “roots” a passing grade numerically, provided the companion matrix is first scaled by balancing.

Balancing was introduced by Parlett and Reinsch in the Algol procedures that spawned EISPACK. Balancing is a diagonal similarity transformation by a matrix of powers of two to attempt to bring a nonsymmetric matrix closer to symmetric by making its row and column norms equal. It is available in MATLAB through the function `balance` and is used by default by `eig`.

To see the importance of balancing, consider the companion matrix of $W_{20}$, the Wilkinson polynomial of degree 20 whose roots are the integers from 1 to 20. The coefficients of this polynomial are huge. The constant term is 20!, which is 2432902008176640000. But that’s not the largest. The coefficient of $z^2$ is 13803759753640704000. Five of the coefficients, those of $z^3$ through $z^7$, are so large that they cannot be exactly represented as double precision floating point numbers. So

roots(poly(1:20))

starts with a companion matrix that has coefficients of magnitude `10^19` along its first row and ones on its subdiagonal. It is seriously nonsymmetric.

The elements of the balanced matrix are much more reasonable. They range from just 1/4 to 256. This is the matrix involved in the actual eigenvalue calculation. All the scaling action is captured in the diagonal similarity, whose elements range from $2^{-35}$ to $2^{27}$. This matrix is only involved in scaling the eigenvectors.
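Python's arbitrary precision integers make these coefficient sizes easy to check exactly. This sketch (my own illustration, not part of the original post) expands the Wilkinson product $\prod_{k=1}^{20}(x-k)$ one linear factor at a time:

```python
import math

# Exact integer coefficients of (x-1)(x-2)...(x-20), highest power first,
# built up by convolving with one linear factor (x - k) at a time.
c = [1]
for k in range(1, 21):
    c = [a - k * b for a, b in zip(c + [0], [0] + c)]

constant_term = c[-1]    # should be 20!
z2_coefficient = c[-3]   # the coefficient of x^2, the largest in magnitude
# powers of x whose coefficients are not exactly representable as doubles
not_representable = [20 - i for i, v in enumerate(c) if float(v) != v]
```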

So I was lucky on the numerical side. Balancing turned out to provide “roots” satisfactory stability.

Let’s measure the execution time and memory requirements for “roots” with polynomials of very large degree — tens of thousands. Here I just wanted to see how bad it is. Short answer: it takes “roots” almost 40 minutes to find the zeros of a polynomial of order sixteen thousand on my Lenovo X250 laptop.

Here is the experiment. Polynomials of degree `1000*n` for `n = 1:16`.

```
T = zeros(16,1);
for n = 1:16
   n
   r = randn(1,1000*n);
   tic
   roots(r);
   t = toc
   T(n) = t;
   !systeminfo | grep 'In Use'
end
```

As expected, the observed execution times are fit nicely by a cubic in the polynomial degree.

time_roots

c = 0 1.8134 0.4632

On the other hand, the measured virtual memory requirements are all over the map. A least squares fit by a quadratic in the polynomial degree $n$ shows very little dependence on the $n^2$ term. It’s fair to say that on today’s machines memory requirements are not a significant restriction for this algorithm.

memory_roots

c = 1.0e+03 * 3.0328 0.0177 0.0002

AMVW are the initials of the last names of Jared Aurentz, Thomas Mach, Raf Vandebril, and David Watkins. It is also the name of their Fortran code for what they claim is a competitor to “roots”. Their recent paper in SIMAX, referenced below, summarizes all the work that has been done on computing the eigenvalues of companion matrices, exploiting their special structure.

They also present their own algorithm where the companion matrix is never formed explicitly, but is represented as a product of Givens rotations. The QR algorithm is carried out on this representation. As a result only $O(n)$ storage and $O(n^2)$ operations are required.

But their software is in Fortran. I was a big Fortran fan for a long time. I used to be considered one of the Fortran gurus at MathWorks. I now don’t even have access to a Fortran compiler. So I can’t try their stuff myself.

So far I’ve been talking about the classic companion matrix, where the polynomial coefficients occupy the entire first or last row or column. In 2003 Miroslav Fiedler introduced another version of a companion matrix: a nonsymmetric pentadiagonal matrix with the polynomial coefficients alternating with zeros on the super- and subdiagonals, and ones and zeros alternating on the adjacent diagonals. I wrote about it in this blog just before last Christmas.

Here’s an example, W_8. First, the coefficients.

c = poly(1:8)'

```
c =
           1
         -36
         546
       -4536
       22449
      -67284
      118124
     -109584
       40320
```

The Fiedler companion matrix.

F = fiedler(c(2:end))

```
F =
     36   -546      1      0      0       0      0       0
      1      0      0      0      0       0      0       0
      0   4536      0 -22449      1       0      0       0
      0      1      0      0      0       0      0       0
      0      0      0  67284      0 -118124      1       0
      0      0      0      1      0       0      0       0
      0      0      0      0      0  109584      0  -40320
      0      0      0      0      0       1      0       0
```

Balance it.

B = balance(F)

```
B =
   36.0000  -17.0625   16.0000         0         0         0         0         0
   32.0000         0         0         0         0         0         0         0
         0    8.8594         0   -5.4807    8.0000         0         0         0
         0    8.0000         0         0         0         0         0         0
         0         0         0    2.0533         0   -0.9012    1.0000         0
         0         0         0    4.0000         0         0         0         0
         0         0         0         0         0    0.8361         0   -0.6152
         0         0         0         0         0    0.5000         0         0
```

Check the eigenvalues

e = eig(B)

```
e =
    8.0000
    7.0000
    6.0000
    5.0000
    4.0000
    3.0000
    2.0000
    1.0000
```

If we could find a way to compute the eigenvalues and eigenvectors of the Fiedler companion matrix while exploiting its structure, then we would have a gorgeous algorithm. Beresford Parlett tells me that he’s working on it.

If you have software that I don’t know about, please comment.

Kim-Chuan Toh and Lloyd N. Trefethen, “Pseudozeros of polynomials and pseudospectra of companion matrices,” *Numer. Math.*, 68, 1994. <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.9072&rep=rep1&type=pdf>

Alan Edelman and H. Murakami, “Polynomial roots from companion matrices,” *Mathematics of Computation*, 64, 1995. <http://www.ams.org/journals/mcom/1995-64-210/S0025-5718-1995-1262279-2/>

Jared L. Aurentz, Thomas Mach, Raf Vandebril, and David S. Watkins, “Fast and backward stable computation of roots of polynomials,” *SIAM J. Matrix Anal. Appl.*, 36, 2015. <http://www.math.wsu.edu/faculty/watkins/pdfiles/AMVW15_SIMAX.pdf>

Miroslav Fiedler, “A note on companion matrices,” *Linear Algebra and its Applications*, 372, 2003, pp. 325-331. <http://citeseerx.ist.psu.edu/viewdoc/download>

Published with MATLAB® R2016a


What does $\sqrt[12]{2}$ have to do with music? What are *equal temperament* and *just intonation*? How can the MATLAB function `rats` help tune a piano? (This post is based in part on the *Music* chapter in my online book, *Experiments in MATLAB*.)

In the theory of music, an *octave* is an interval with frequencies that range over a factor of two. In most Western music, an octave is divided into 12 *semitones* in a geometric progression. In other words the notes have equal frequency ratios. Since 12 semitones comprise a factor of 2, a single semitone is a factor of $\sqrt[12]{2}$. And because this quantity occurs so often in this discussion, let

$$ \sigma = \sqrt[12]{2} $$

Our MATLAB programs use

```
format short
sigma = 2^(1/12)
```

sigma = 1.0595

Think of $\sigma$ as an important mathematical constant, like $\pi$ and $\phi$.

Here is our miniature piano keyboard with 25 keys.

small_keyboard

This keyboard has two octaves, with white keys labeled **C D … G A B**, plus another **C** key. Counting both white and black, there are twelve keys in each octave. The frequency of each key is a semitone above or below its neighbors. Each black key can be regarded as either the *sharp* of the white below it or the *flat* of the white above it. So the black key between **C** and **D** is both **C** $\sharp$ and **D** $\flat$. There are no **E** $\sharp$ / **F** $\flat$ or **B** $\sharp$ / **C** $\flat$.

A conventional full piano keyboard has 88 keys. Seven complete octaves account for $7 \times 12 = 84$ keys. There are three additional keys at the lower left and one additional key at the upper end. If the octaves are numbered 0 through 7, then a key letter followed by an octave number specifies a unique key. In this notation, two important keys are **C4** and **A4**. The **C4** key is near the center of the keyboard and so is also known as *middle C*. A piano is usually tuned so that the frequency of the **A4** key is 440 Hz. **C4** is nine keys to the left of **A4** so its frequency is

$$\mbox{C4} = 440 \sigma^{-9} \approx 261.6256 \mbox{Hz}$$
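In code, equal temperament is nothing more than powers of $\sigma$; here is a minimal Python check (my own illustration, with hypothetical variable names):

```python
SEMITONE = 2 ** (1.0 / 12)        # sigma, the twelfth root of 2

A4 = 440.0                        # Hz, the tuning reference
C4 = A4 * SEMITONE ** (-9)        # middle C, nine semitones below A4
A5 = A4 * SEMITONE ** 12          # an octave above A4 is exactly double
```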

Our miniature keyboard has the **A4** key colored green and the **C4** key colored blue.

A piano is almost always tuned with *equal temperament*, the system used by musical instruments with fixed pitch. Starting with **A4** at 440 Hz the keys are tuned in strict geometric progression with ratio $\sigma$. Once this is done, a piano rarely needs adjustment.

The alternative system, *just intonation*, is defined by a vector of ratios involving small integers.

```
format rat
just = [1 16/15 9/8 6/5 5/4 4/3 7/5 3/2 8/5 5/3 7/4 15/8 2]'
```

```
just =
       1
   16/15
     9/8
     6/5
     5/4
     4/3
     7/5
     3/2
     8/5
     5/3
     7/4
    15/8
       2
```

These ratios allow the generation of overtones, chords and harmony. More about that in a moment. Many musical instruments can be tuned to the key of a particular composition just before playing it. Singers, choirs, and barbershop quartets can naturally use just intonation.

Here is a comparison of equal temperament and just intonation from a strictly numerical point of view. Equal temperament is defined by repeated powers of $\sigma$, while just intonation is defined by a sequence of ratios.

```
sigma = 2^(1/12);
k = (0:12)';
equal = sigma.^k;
num = [1 16 9 6 5 4 7 3 8 5 7 15 2]';
den = [1 15 8 5 4 3 5 2 5 3 4 8 1]';
just = num./den;
delta = (equal - just)./equal;
T = [k equal num den just delta];
fprintf('   k     equal       just       delta\n')
fprintf('%4d %12.6f %7d/%d %11.6f %10.4f\n',T')
```

```
   k     equal       just       delta
   0     1.000000     1/1    1.000000     0.0000
   1     1.059463    16/15   1.066667    -0.0068
   2     1.122462     9/8    1.125000    -0.0023
   3     1.189207     6/5    1.200000    -0.0091
   4     1.259921     5/4    1.250000     0.0079
   5     1.334840     4/3    1.333333     0.0011
   6     1.414214     7/5    1.400000     0.0101
   7     1.498307     3/2    1.500000    -0.0011
   8     1.587401     8/5    1.600000    -0.0079
   9     1.681793     5/3    1.666667     0.0090
  10     1.781797     7/4    1.750000     0.0178
  11     1.887749    15/8    1.875000     0.0068
  12     2.000000     2/1    2.000000     0.0000
```

The last column in the table, `delta`, is the relative difference between the two. We see that `delta` is less than one percent, except for `k = 10`.

This is why the 12-note scale has proved to be so popular. Powers of $\sqrt[12]{2}$ turn out to come very close to important rational numbers, especially 5/4, 4/3, and 3/2. A pair of pitches with a ratio of 3:2 is known as a *perfect fifth*; a ratio of 4:3 is a *perfect fourth*; a ratio of 5:4 is a *major third*.

One of the first songs you learned to sing was

*Do Re Mi Fa So La Ti Do*

If you start at middle C, you will be singing the *major scale* in the key of C. This scale is played on a piano using only the white keys. The steps are not equally spaced. Most of the steps skip over black keys and so are two semitones. But the steps between *Mi* and *Fa* and *Ti* and *Do* are the steps from **E** to **F** and **B** to **C**. There are no intervening black keys and so these steps are only one semitone. In terms of $\sigma$, the C-major scale is

$$\sigma^0 \ \sigma^2 \ \sigma^4 \ \sigma^5 \ \sigma^7 \ \sigma^9 \ \sigma^{11} \ \sigma^{12}$$

The number of semitones between the notes is given by the vector

```
format short
diff([0 2 4 5 7 9 11 12])
```

ans = 2 2 1 2 2 2 1

This sequence of frequencies in our most common scale is surprising.

Like everything else, it all comes down to *eigenvalues*. Musical instruments create sound through the action of vibrating strings or vibrating columns of air that, in turn, produce vibrations in the body of the instrument and the surrounding air. Mathematically, vibrations can be modeled by weighted sums of characteristic functions known as *modes* or *eigenfunctions*. Different modes vibrate at different characteristic frequencies or *eigenvalues*. These frequencies are determined by physical parameters such as the length, thickness and tension in a string, or the geometry of the air cavity. Short, thin, tightly stretched strings have high frequencies, while long, thick, loosely stretched strings have low frequencies.

The simplest model is a one-dimensional vibrating string, held fixed at its ends. The units of the various physical parameters can be chosen so that the length of the string is $2 \pi$. The modes are then simply the functions

$$v_n(x) = \sin{n x}, \ \, n = 1, 2, …$$

Each of these functions satisfies the fixed end point conditions

$$v_n(0) = v_n(2 \pi) = 0$$

The time-dependent modal vibrations are

$$u_n(x,t) = \sin{n x} \sin{n t}, \ \, n = 1, 2, …$$

and the frequency is simply the integer $n$. (Two- and three-dimensional models are much more complicated. This one-dimensional model is all we need here.)

Our Experiments with MATLAB program `vibrating_string` provides a dynamic view. Here is a snapshot showing the first nine modes and the resulting wave traveling along the string.

The ratios in just intonation or equal temperament produce instruments with tunings that generate vibrations with these desired frequencies.

The MATLAB expression `rat(X,tol)` creates a truncated continued fraction approximation of `X` with an accuracy of `tol`. The default `tol` is `1.e-6*norm(X,1)`.

rat(sigma.^(0:12)')

```
ans =
1
1 + 1/(17 + 1/(-5 + 1/(-2)))
1 + 1/(8 + 1/(6))
1 + 1/(5 + 1/(4 + 1/(-2)))
1 + 1/(4 + 1/(-7 + 1/(2 + 1/(5))))
1 + 1/(3 + 1/(-74))
1 + 1/(2 + 1/(2 + 1/(2 + 1/(2 + 1/(2 + 1/(2))))))
1 + 1/(2 + 1/(147))
2 + 1/(-2 + 1/(-2 + 1/(-3 + 1/(4 + 1/(2)))))
2 + 1/(-3 + 1/(-7 + 1/(-81)))
2 + 1/(-5 + 1/(2 + 1/(3 + 1/(-2 + 1/(-15)))))
2 + 1/(-9 + 1/(11))
2
```

These are unconventional continued fractions because they contain negative terms. This algorithm is used for `format rat`. The default tolerance is much too picky. The equal temperament ratios do not give small integers.

```
format rat
sigma.^(0:12)'
```

```
ans =
          1
  1657/1564
  1769/1576
  1785/1501
    635/504
  3544/2655
   1393/985
  2655/1772
   1008/635
  3002/1785
   1527/857
  2943/1559
          2
```

This would not be a satisfactory way to try to tune an instrument. But if I’m much more tolerant and allow a 2 percent error, I get short continued fractions.

rat(sigma.^(0:12)',.02)

```
ans =
1
1 + 1/(17)
1 + 1/(8)
1 + 1/(5)
1 + 1/(4)
1 + 1/(3)
1 + 1/(2 + 1/(2))
1 + 1/(2)
2 + 1/(-2 + 1/(-2))
2 + 1/(-3)
2 + 1/(-5)
2 + 1/(-9)
2
```

I can make `format rat` use these fractions by telling `rats` that it has only 6 columns to work with. Now I get almost the same ratios as just intonation.

rats(sigma.^(0:12)',6)

```
ans =
      1
      1
    9/8
    6/5
    5/4
    4/3
    7/5
    3/2
    8/5
    5/3
    9/5
   17/9
      2
```

If any of the music theorists from ancient Greece or the Renaissance had access to MATLAB, they could have used our rational approximation to generate a tuning strategy.
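Python's standard library can play the same game: `Fraction.limit_denominator` computes exactly this kind of best rational approximation with a bounded denominator (my analogy to `rats`, not part of the original post):

```python
from fractions import Fraction

SEMITONE = 2 ** (1.0 / 12)

# Best rational approximation to each semitone power, with the
# denominator capped at 9, in the spirit of rats with few columns.
approx = [Fraction(SEMITONE ** k).limit_denominator(9) for k in range(13)]
```

With the denominator capped this way, the perfect fifth, perfect fourth, and major third pop out as 3/2, 4/3, and 5/4.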

Published with MATLAB® R2016a


The MATLAB functions for the numerical evaluation of integrals have evolved from `quad`, through `quadl` and `quadgk`, to today's `integral`.

Several years ago I regarded myself as an expert on numerical methods for approximating integrals. But then I stopped paying attention for a while. So this is an opportunity to get reacquainted. The task involves a real-valued function $f(x)$ of a real variable $x$, defined on an interval $a \le x \le b$. We seek to compute the value of the definite integral,

$$ \int_a^b \ f(x) \ dx $$

I wrote a chapter on the subject in my textbook *Numerical Computing with MATLAB*. Here is a link to that chapter. The chapter is titled "Quadrature". One of the first things I have to do in that chapter is explain that *quadrature* is an old-fashioned term derived from the notion that the value of a definite integral can be approximated by plotting the function on graph paper and then counting the number of little squares that fall underneath the graph. Here is a coarse picture.

For most of the time since the beginning of MATLAB, up until when I wrote the chapter on quadrature twelve years ago, the MATLAB function for evaluating definite integrals was one named `quad`. It was based on adaptive Simpson's method, pioneered by my grad school buddy Bill McKeeman to show off recursion in Algol.

These two figures illustrate the basis for adaptive Simpson's method.

The algorithm works recursively on subintervals of the original interval [a,b]. If the values obtained from the 3-point Simpson's rule and the 5-point composite Simpson's rule agree to within a specified tolerance (which they clearly do not for these figures), then an extrapolation taken from the two values is accepted as the value for the integral over the subinterval. If not, the algorithm is recursively applied to the two halves of the subinterval.

I have used a function named `humps` to demonstrate numerical integration for a long time. It has a strong peak at x = 0.3 and a milder peak at x = 0.9. It has found its way into the MATLAB `demos` directory, so you have it on your own machine. A graph appears below.

$$ h(x) = \frac{1}{(x-0.3)^2+.01} + \frac{1}{(x-0.9)^2+.04} - 6 $$
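For readers who want to experiment outside MATLAB, here is a compact pure-Python sketch of recursive adaptive Simpson in the spirit of `quad` (an illustration with my own names and tolerances, not the actual `quad` source):

```python
def humps(x):
    return 1.0/((x - 0.3)**2 + 0.01) + 1.0/((x - 0.9)**2 + 0.04) - 6.0

def adaptive_simpson(f, a, b, tol=1e-6):
    # Recursive adaptive Simpson: compare the 3-point rule on [a,b] with the
    # composite rule on the two halves; accept with extrapolation, or recurse.
    def recurse(a, b, fa, fb, fc, whole, tol):
        c = 0.5 * (a + b)
        d, e = 0.5 * (a + c), 0.5 * (c + b)
        fd, fe = f(d), f(e)
        left = (c - a) / 6.0 * (fa + 4.0*fd + fc)
        right = (b - c) / 6.0 * (fc + 4.0*fe + fb)
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, c, fa, fc, fd, left, tol/2) +
                recurse(c, b, fc, fb, fe, right, tol/2))

    c = 0.5 * (a + b)
    fa, fb, fc = f(a), f(b), f(c)
    whole = (b - a) / 6.0 * (fa + 4.0*fc + fb)
    return recurse(a, b, fa, fb, fc, whole, tol)

Q = adaptive_simpson(humps, 0.0, 1.0)
```

With this tolerance the sketch should reproduce the six-figure value of the integral of `humps` over [0,1] quoted in this post.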

An interactive graphical app named `quadgui` is included with the programs for *Numerical Computing with MATLAB*, available from MATLAB Central File Exchange. It shows `quad` in action. Here is a snapshot of `quad` part way through computing the integral of `humps` over the interval [0,1]. Integration over the interval to the left of 1/4 is finished. The integration over the subinterval [1/4,3/8] has just been successfully completed. A few samples to the right of 3/8 were computed during the initialization. The integrand has been evaluated 19 times.

Here is the final result. The default tolerance for `quad` is $10^{-6}$ and the value of the integral obtained to six significant digits is 29.8583. The integrand has been evaluated 93 times. Most of those evaluations took place near the peaks where the `humps` varies rapidly. This indicates that the adaptivity works as expected.

`Humps` is a rational function. It is possible to use the Symbolic Toolbox to obtain the indefinite and then the definite integral analytically.

```
syms x
h = 1/((x-3/10)^2+1/100) + 1/((x-9/10)^2+4/100) - 6
I = int(h,x)
D = subs(I,1)-subs(I,0)
format long
Q = double(D)
```

```
h =
1/((x - 9/10)^2 + 1/25) + 1/((x - 3/10)^2 + 1/100) - 6

I =
10*atan(10*x - 3) - 6*x + 5*atan(5*x - 9/2)

D =
5*atan(1/2) + 10*atan(3) + 10*atan(7) + 5*atan(9/2) - 6

Q =
  29.858325395498674
```

This confirms the six-figure value shown by `quadgui`.

About the time in 2004 that I was writing the *Quadrature* chapter for *NCM*, the `quadl` function showed up. This uses the *Lobatto* rule.

$$ \int_{-1}^1 \ f(x) \ dx \ \approx \ w_1 f(-1) + w_2 f(-x_1) + w_2 f(x_1) + w_1 f(1) $$

with $x_1 = 1/\sqrt{5}, \ w_1 = 1/6, \ w_2 = 5/6$.
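A quick way to appreciate this rule is to check its degree of exactness. A pure-Python sketch (my own code, just for illustration) confirms that the 4-point Lobatto rule above integrates polynomials of degree up to 5 exactly, and fails at degree 6:

```python
import math

x1 = 1/math.sqrt(5)
w1, w2 = 1/6, 5/6

def lobatto4(f):
    # the 4-point Lobatto rule quoted above, on [-1, 1]
    return w1*f(-1) + w2*f(-x1) + w2*f(x1) + w1*f(1)

# exact integral of x^k over [-1,1] is 0 for odd k, 2/(k+1) for even k
for k in range(6):
    exact = 0.0 if k % 2 else 2/(k + 1)
    assert abs(lobatto4(lambda x: x**k) - exact) < 1e-14

print(lobatto4(lambda x: x**6), 2/7)   # degree 6: no longer exact
```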

`Quadl` also employs the associated *Kronrod* rule, which provides 13 additional irrational abscissae and weights. The Lobatto-Kronrod rule is of much higher order, and consequently more accurate, than Simpson's rule. As a result `quadl` is usually faster and more accurate than `quad`. But I devoted only a couple of exercises in the book to `quadl`, and I think it never became very popular.

In the mathematical software business, when we were developing libraries using Fortran or C, we evaluated the cost of numerical quadrature by counting the number of function evaluations. In 2006 Larry Shampine, the SMU Professor who is a consultant to MathWorks and principal author of our ODE Suite, published the paper referenced below, in which he changed the ground rules for quadrature functions in the MATLAB environment. Since MATLAB function evaluations are vectorized, we should count the number of vectorized evaluations, not the number of individual evaluations. This is a better predictor of execution time.

Shampine's paper includes a MATLAB function that he calls `quadva`, where `va` stands for "vectorized adaptive". His code uses a 15-point Gauss-Kronrod formula. The Gauss formula is not only higher order than the Lobatto formula with the same number of points, it has the additional advantage that it does not require evaluating the integrand at the end points of an interval. This makes it possible to treat infinite intervals and singularities at the end points.

When we added Shampine's `quadva` to MATLAB, we called it `quadgk`, for "Gauss-Kronrod".

Let's instrument `humps` to investigate the behavior of our quadrature routines on just one simple test case. The functions `quad` and `quadl` call the integrand once with a vector argument to initialize the integration and then call it more times with a shorter vector argument during the adaptive process. This is such an easy problem that `quadgk` gets all the information that it needs during the initialization.
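The instrumentation itself amounts to a simple counting wrapper. Here is a pure-Python sketch of the idea (the wrapper and names are hypothetical, not the code used for the experiment): each call to the integrand records the length of its vector argument.

```python
def instrument(f):
    """Wrap f so that each call records the length of its vector argument."""
    def wrapped(x):
        wrapped.calls.append(len(x))
        return f(x)
    wrapped.calls = []
    return wrapped

def humps(xs):
    # vectorized humps: accepts a list of abscissae
    return [1/((x - 0.3)**2 + 0.01) + 1/((x - 0.9)**2 + 0.04) - 6 for x in xs]

h = instrument(humps)
h([0.0, 0.5, 1.0])   # one vectorized call of length 3
h([0.25, 0.75])      # another of length 2
print(h.calls)       # [3, 2]
```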

In the following table, "init" is the length of the vector in the initialization call, "kount" is the number of times that `humps` is called after the initialization, "length" is the length of the vector argument in those calls, "err" is the error in the final result, and "time" is the execution time in milliseconds.

$$\begin{array}{lrrrrr} & \mbox{init} & \mbox{kount} & \mbox{length} & \mbox{err\ \ \ \ } & \mbox{time}\\ \mbox{quad} & 7 & 69 & 2 & 7.3 \cdot 10^{-7} & 0.87 \\ \mbox{quadl} & 13 & 37 & 5 & 1.8 \cdot 10^{-10} & 0.78 \\ \mbox{quadgk} & 150 & 0 & - & 2.5 \cdot 10^{-14} & 0.34 \end{array}$$

We see that the three functions perform very differently in this one experiment. The oldest function, `quad`, actually requires the fewest total function evaluations, if we count them in the traditional scalar sense, but it is the least accurate and the slowest. The newest function, `quadgk`, requires only one vectorized function evaluation, and is the fastest and the most accurate.

In 2012, with release R2012a, we came to the realization that most new MATLAB users who wanted to evaluate an integral had a hard time finding the function named `quadgk` that does the job. For all these years, we numerical analysts have been explaining to the rest of the world our quaint term *quadrature*. As if that weren't enough, we tack on the initials of mathematicians. So, we introduced `integral`. `Quadgk` is still available; `integral` is just an easier-to-find and easier-to-use version of it.

L.F. Shampine, "Vectorized Adaptive Quadrature in MATLAB," *Journal of Computational and Applied Mathematics* 211 (2006), pp. 131–140, available at <http://faculty.smu.edu/shampine/rev2vadapt.pdf>

Get the MATLAB code

Published with MATLAB® R2016a


Gil Strang has produced a MOOC-style video course on Differential Equations and Linear Algebra. I have added some videos about the MATLAB ODE suite. The series is available from the MathWorks Web site, MIT OpenCourseWare and several other popular sources.

MIT's Professor Gilbert Strang has taught 18.06, Linear Algebra, regularly for many years. Over fifteen years ago MIT videotaped the lectures. MIT was also taping 8.01, the first-year physics course, taught by the flamboyant Walter Lewin, in the same lecture room the hour before. They were careful with the videography, with professional camera work, good lighting and good audio. There were no computer graphics and no PowerPoint, just excellent lectures, using big chalk on actual blackboards. You can still see Gil's original lectures by Googling "18.06", or by following this link.

MIT decided to make the lectures freely available on the web, with no strings attached. That was an unusual move at the time. The two courses proved to be very popular. They led to the creation of what MIT now calls OpenCourseWare and were in part responsible for the MOOC, Massive Open Online Courses, revolution in higher education.

Gil told me the experience changed his life. He realized he could reach millions of students instead of just one classroom.

Strang recently published a new book, Differential Equations and Linear Algebra.

He invited MathWorks to join him in producing a series of videos on the subject. I joined the project to describe how MATLAB provides tools for solving ODEs.

Here is a link to the home page for the project at the MathWorks. Learn Differential Equations.

And here is a link to a video where Gil and I have a short chat. Gil and Cleve chat.

Gil produced 55 videos, averaging about 15 minutes each. There are eight topic areas, which map to and complement the textbook, DELA.

- Introduction
- First Order Equations
- Second Order Equations
- Graphical and Numerical Methods
- Vector Spaces and Subspaces
- Eigenvalues and Eigenvectors
- Applied Mathematics and ATA
- Fourier and Laplace Transforms

Here is a screen shot from one of the videos to give you a feel for the style of the presentation.

My part of the series is much smaller, only a dozen videos, also about 15 minutes each. The first video is about Euler's method. My Euler's method code is `ode1.m`. It is 16 lines long; half of the lines are comments. The last three videos are examples. One of them is based on my blog post about the tumbling box ODE. Gil also discusses the tumbling box in one of his videos. I have included several hands-on exercises that you can try out by downloading the MATLAB code available on the MATLAB Central File Exchange, solving ODEs in MATLAB.

Here is a screen shot from my discussion of stiffness.

The home page for the project at the MathWorks is <http://www.mathworks.com/academia/courseware/learn-differential-equations.html>.

The series is available from MIT OpenCourseWare under the title *Learn Differential Equations: Up Close with Gilbert Strang and Cleve Moler*. The OCW course number is 18-009. The URL is <http://ocw.mit.edu/resources/res-18-009-learn-differential-equations-up-close-with-gilbert-strang-and-cleve-moler-fall-2015>.

The videos are also available through the following distribution sites.

YouTube: <http://www.youtube.com/playlist?list=PLUl4u3cNGP63oTpyxCMLKt_JmB0WtSZfG>.

iTunes: <http://itunes.apple.com/us/itunes-u/id1102923623>.

Internet Archive: <http://archive.org/details/MITRES18-009F15>.

Get the MATLAB code

Published with MATLAB® R2016a


The equations generating a `surf` plot of the Moebius strip can be parameterized and the parameters allowed to take on expanded values. The results are a family of surfaces that I have been displaying for as long as I have had computer graphics available.

The animation shows the formation of a classic Moebius strip. Start with a long, thin rectangular strip of material. Bend it into a cylinder. Give the two ends a half twist. Then join them together. The result is a surface with only one side and one boundary. One could traverse the entire length of the surface and return to the starting point without ever crossing the edge.

I am going to investigate the effect of three free parameters:

- $c$: curvature of the cylinder.
- $w$: width of the strip.
- $k$: number of half twists.

These parameters appear in a mapping of variables $s$ and $t$ from the square $-1 \le s,t \le 1$ to a generalized Moebius strip in 3D.

$$ r = (1-w) + w \ s \ \sin{(k \pi t/2)} $$

$$ x = r \ \sin{(c \pi t)}/c $$

$$ y = r \ (1 - (1 - \cos{(c \pi t)})/c) $$

$$ z = w \ s \ \cos{(k \pi t/2)} $$

Here is the code. The singularity at $c = 0$ can be avoided by using `c = eps`.

```
type moebius
```

```
function [x,y,z,t] = moebius(c,w,k)
% [x,y,z,t] = moebius(c,w,k)
% [x,y,z] = surface of moebius strip, use t for color
% c = curvature, 0 = flat, 1 = cylinder.
% w = width of strip
% k = number of half twists
if c == 0
    c = eps;
end
m = 8;
n = 128;
[s,t] = meshgrid(-1:2/m:1, -1:2/n:1);
r = (1-w) + w*s.*sin(k/2*pi*t);
x = r.*sin(c*pi*t)/c;
y = r.*(1 - (1-cos(c*pi*t))/c);
z = w*s.*cos(k/2*pi*t);
end
```

The animation begins with a flat, untwisted strip of width 1/4, that is

$$ c = 0, \ w = 1/4, \ k = 0 $$

The classic Moebius strip is reached with

$$ c = 1, \ w = 1/4, \ k = 1 $$

The final portion of the animation simply changes the viewpoint, not the parameters.

Let's play with colormaps. I'll look at the classic Moebius strip and make the default MATLAB colormap periodic by appending a reversed copy. I think this looks pretty nice.

```
map = [parula(256); flipud(parula(256))];
```

The Hue-Saturation-Value color map has received a lot of criticism in MathWorks blogs recently, but it works nicely in this situation. This figure has $k = 4$, so there are two full twists. That's very hard to see from this view. The colors help, but it's important to be able to rotate the view, which we can't do easily in this blog.

Here is the one with three half twists. Like the classic Moebius strip, this has only one side. I've grown quite fond of this guy, so I'll show two views. The HSV color map, with its six colors, works well.

Source: IEEE

I have written in this blog about the glorious marriage between MATLAB and the Dore graphics system on the ill-fated Ardent Titan computer almost 30 years ago. I somehow generated this image using MATLAB, Dore, these parametrized Moebius equations and the Titan graphics hardware in 1988. This was fancy stuff back then. The editors of IEEE Computer were really excited about this cover.

I've never been able to reproduce it. I don't know what the parameters were.

Kermit Sigmon was a professor at the University of Florida. He was an excellent teacher and writer. He wrote a short *MATLAB Primer* that was enormously popular and widely distributed in the early days of the web. It went through several editions and was translated into a couple of other languages.

For the most part, people had free copies of the *MATLAB Primer* without fancy covers. But CRC Press obtained the rights to produce a reasonably priced bound paperback copy which featured a cover with $k = 5$.

Kermit passed away in 1997 and Tim Davis took over the job of editing the *Primer*. He generated more elaborate graphics for his covers.

Get the MATLAB code

Published with MATLAB® R2016a


A model of the human gait, developed by Nikolaus Troje, is a five-term Fourier series with vector-valued coefficients that are the principal components for data obtained in motion capture experiments involving subjects walking on a treadmill.

Today's post is about work done several years ago by Nikolaus Troje when he was at Ruhr-Universität in Bochum, Germany. He has since moved to Queen's University in Ontario, Canada, where he is head of the Biomotion Lab. Here is their Web page.

Troje studies the human gait. How exactly do we move when we walk? He also studies how we perceive others when they walk. Can we tell if people are happy or sad just by the way they walk? Can we determine if they are male or female?

To get a feeling for his work, take a look at this demo: BMLgender. When I first saw this demo it was being done with MATLAB, so I asked Troje about it and I have written and talked about it many times ever since. (Today, the demo is computed in the web browser with JavaScript.)

Motion capture employs high speed cameras, lights, and reflectors attached to human subjects -- German grad students -- as they walk on a closed track or a treadmill. Once a steady, comfortable pace is established for a single subject, 20 or so steps are recorded. At 120 frames per second, this typically results in about 1440 frames. The data from the reflectors are collated into 15 standard locations on the head, arms, torso and legs. As these points move in three dimensions, the raw data captured is an array of time series with 45 rows and roughly 1440 columns.

Here are four of the subjects, with the data scaled so that they are all walking at the same speed.

The walking motion is periodic so Troje's model is a five-term Fourier series describing the position of the 15 spots in three dimensions.

$$ p(t) = v_1 + v_2 \cos{\omega t} + v_3 \sin{\omega t} + v_4 \cos{2 \omega t} + v_5 \sin{2 \omega t} $$

Here $\omega$, the scalar frequency, is the only nonlinear parameter. It is determined by Fourier analysis of the data. The coefficients $v_1$ through $v_5$ are *vectors* with 45 components. Once $\omega$ is computed the $v$'s are obtained by *Principal Component Analysis* (also known as *Singular Value Decomposition*) of the data. PCA effectively reduces the 45-by-1440 data matrix to a 45-by-5 coefficient matrix.

The frequency $\omega$ and the coefficient matrix $V$ with columns $v_1, \ldots, v_5$ are computed just once. Then the motion can be computed for any time $t$ with a matrix-vector product.

$$ p(t) = V \ [1 \ \cos{\omega t} \ \sin{\omega t} \ \cos{2 \omega t} \ \sin{2 \omega t}]^T $$
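Evaluating the model is just a 45-by-5 matrix times a 5-vector of trigonometric basis functions. A pure-Python sketch (the frequency and the coefficient matrix below are placeholders, not Troje's actual values):

```python
import math

omega = 2*math.pi   # placeholder frequency; the real omega comes from the data
# placeholder 45-by-5 coefficient matrix standing in for the PCA output V
V = [[0.1*(i + j) for j in range(5)] for i in range(45)]

def basis(t):
    # the five basis functions [1, cos wt, sin wt, cos 2wt, sin 2wt]
    return [1.0, math.cos(omega*t), math.sin(omega*t),
            math.cos(2*omega*t), math.sin(2*omega*t)]

def p(t):
    # p(t) = V * basis(t): one matrix-vector product per frame
    b = basis(t)
    return [sum(Vi[j]*b[j] for j in range(5)) for Vi in V]

pos = p(0.25)
print(len(pos))   # 45 coordinates: x, y and z for each of the 15 markers
```

With `omega = 2*pi` the motion has period one, so `p(0)` and `p(1)` agree, as a periodic gait model should.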

This model usually reproduces the raw data with better than 95% accuracy.

There were several dozen subjects, each with his or her characteristic coefficient matrix $V$. Collect all these $V$'s together into one giant set of coefficients. Then use PCA again to reduce this to one average set of coefficients. You get *eigenwalker*, the *everyman*. Or at least the *typisch deutsche junge Akademiker* (the typical young German academic).

The *eigenwalker* has five vector coefficients $v_1, \ldots, v_5$. These components don't have any inherent names -- they're just vectors that come out of the principal component analysis. But when we look at how they affect the motion, we can give them names. The first component, $v_1$, is simply the static position. It determines how *eigenwalker* looks at rest. Here are the other four components, one at a time.

The second coefficient, $v_2$, dominates the forward motion, so we've called it "stride". The third coefficient, $v_3$, controls a sideways motion, so it gets named "sway". The coefficients $v_4$ and $v_5$ drive vertical motions that are in-phase and out-of-phase with the stride; they get the fairly arbitrary names of "hop" and "bounce".

When these four motions are added to the at-rest position, they give *eigenwalker*, which is a fairly realistic picture of the human gait.

The PCA does not pick out gender as one of the dominant components. But half of the subjects were female and half were male. So we can compute a matrix of coefficients $F$ for the females and a matrix of coefficients $M$ for the males. The top two figures below are the female *eigenwalker* and the male *eigenwalker*. Without looking at the bottom two figures, can you tell which is which?

The bottom two figures overemphasize the gender bias. They run the model with the difference between $F$ and $M$ tripled. Now can you see which is which?

Disney and Pixar know this -- they have to exaggerate the motion to get the point across.

This is Troje's research. How do humans determine the gender of other humans just by watching them walk? Even better -- just by watching a stick figure, or 15 dots, walk?

The *Numerical Computing with MATLAB* collection of programs includes an app that lets you see the *eigenwalker* in action. Sliders vary the strength of the five components and the gender bias. You can also rotate the viewpoint. You will need two files, `walker.m` and `walkers.mat`, available from the MATLAB Central File Exchange.

This has been my own description of an approach that Troje took over a decade ago. He has since replaced Principal Component Analysis with Fourier analysis to represent the movements of an individual walker, and has applied PCA to all sorts of other walker properties on the population level. But I like his original approach because I am such a big fan of the SVD. The following references are mostly about Troje's more recent work.

<http://jov.arvojournals.org/article.aspx?articleid=2192503>

<http://www.biomotionlab.ca/walking.php>

<http://www.biomotionlab.ca/Text/WDP2002_Troje.pdf>

<http://www.biomotionlab.ca/Text/trojeChapter07b.pdf>

Get the MATLAB code

Published with MATLAB® R2016a


Recent theoretical, observational and computational results establish the possibility that gravitational waves produced by the dark energy created at the dawn of the universe affect the clock rate of silicon digital processors operating at very low temperatures.

Ed Plum is a professor in the Institute for Theoretical Physics at U.C. Santa Barbara. He is also a long-time friend of mine and an avid MATLAB fan.

Ed and his colleagues are publishing an important paper today. Here is a link to the journal, *Physical Review Communications*. Ed has shared an advanced copy with me. They have found convincing evidence that the gravitational waves predicted by Einstein's General Theory of Relativity and emanating from "Dark Energy" affect the operating frequency of silicon computer chips. The effect can be observed in experiments operating at temperatures below 2 degrees Kelvin.

The announcement on February 11 of the detection of gravitational waves by the LIGO team thrilled scientists and the public at large worldwide. LIGO stands for Laser Interferometer Gravitational-Wave Observatory. Here is the link to the LIGO home page. Here is their scientific paper in *Physical Review Letters*. Excellent survey articles were published in *The New York Times Sunday Review* and *The New Yorker*.

Here is a Wikipedia animated graphic depicting gravitational waves in the article on General Relativity.

The LIGO experiment is designed to detect powerful one-time astrophysical events such as colliding black holes. The gravitational waves involved have wave lengths that require L-shaped detectors with legs four kilometers long. A wave passes through the detector just once, in a fraction of a second. There are two detectors, located in Washington State and Louisiana.

Photo: <http://www.ligo.caltech.edu>

This beautiful graphic from the *Phys. Rev. Letters* paper shows the event detected by the LIGO team. The top two figures show the actual signals recorded at the two labs. The next two figures show the results from supercomputer runs for the numerical solution of Einstein's equations. The small pair of figures show that the difference between the observations and the simulations is noise. The two color figures show the "chirps", spectrograms plotting frequency versus time. These chirps are readily audible to the human ear.

Source: <http://www.ligo.caltech.edu>

The most commonly accepted model of cosmology hypothesizes *dark energy*, formed at the origin of the universe and permeating all of space, but invisible to optical observation. According to Einstein's General Theory of Relativity, this energy constantly produces short wave length, very low energy gravitational waves. These waves continuously interact with all mass in the universe, but on a scale that is almost impossible to detect.

Plum and his colleagues have evidence from both numerical simulations and actual observations that the dark energy gravitational waves affect the operating frequency of commercial microprocessors. Very slight changes in processor speed can be detected, provided the silicon is exceptionally pure and the observations are made at very low temperatures. It's like immersing a high-end laptop in liquid nitrogen. Here is one of the detectors.

Photo: Plum *et al*

Some time ago, Ed gave me access to one of the first signals his group had detected, and we have been quietly including it as one of the sample sounds on recent MATLAB distributions. The file is `chirp.mat`. We have reversed the signal to disguise it until now. You should be able to load a copy on your own machine.

```
clear
which chirp.mat
load chirp.mat
y = flipud(y)';
t = (1:length(y))/Fs;
whos
```

```
C:\Program Files\MATLAB\R2016a\toolbox\matlab\audiovideo\chirp.mat
  Name      Size           Bytes  Class     Attributes
  Fs        1x1                8  double
  t         1x13129       105032  double
  y         1x13129       105032  double
```

The first plot is the entire signal. Since the waves are arriving continuously, we see several peaks. The second plot zooms in on one of the peaks.

```
subplot(2,1,1)
plot(t,y)
axis([0 t(end) -1 1])
subplot(2,1,2)
plot(t,y)
axis([.75 .80 -1 1])
xlabel('Time (secs)')
```

I hope you can run this `sound` yourself and hear the chirps. If you are just reading this on the Web, you will have to imagine what this sounds like.

```
sound(y,Fs)
```

Here is the picture of one chirp, the spectrogram of about one-eighth of the signal.

```
clf reset
spectrogram(y(1:2048),kaiser(128,18),120,128,Fs,'yaxis')
```

This is just the beginning. We expect to learn much more about our universe as we are able to refine the silicon in the microprocessors and lower the temperature of the operating environment.

Get the MATLAB code

Published with MATLAB® R2016a



Let's quote the entry about Piet Hein in David Darling's *Encyclopedia of Science*, http://www.daviddarling.info/encyclopedia/H/Hein.html

An extraordinarily creative Danish mathematician, inventor, and poet who often wrote under the Old Norse pseudonym "Kumbel" meaning "tombstone." A direct descendant of the Dutch naval hero of the 16th century who had the same name, Piet Hein was born in Copenhagen and studied at the Institute for Theoretical Physics of the University of Copenhagen (later the Niels Bohr Institute) and the Technical University of Denmark.

[Piet Hein] is famed for his many mathematical games, including Hex, Tangloids, Polytaire, TacTix, and the Soma Cube, his advocacy of the superellipse curve in applications as diverse as city planning and furniture making, and his thousands of short, aphoristic poems called Grooks.

This is one line from one of the Grooks.

"Problems worthy of attack prove their worth by hitting back!" -- Piet Hein

My wife and I purchased one of these Piet Hein porcelain baking dishes years ago during a visit to Denmark. It's been a mainstay in our kitchen ever since. This photo is not from our kitchen; it's from a page at http://www.piethein.com featuring http://www.piethein.com/category/dishes-white-22.

Hein's design of the baking dish, as well as many other items, is based on what he called the *superellipse*, a curve that lies between old-fashioned ellipses and rectangles. The important parameter $p$ controls the roundness. Additional parameters $a$ and $b$ determine the aspect ratio. A superellipse with these parameters is the set of points $(x,y)$ satisfying

$$ |\frac{x}{a}|^p + |\frac{y}{b}|^p = 1 $$

If $a$ and $b$ are both equal to one, the equation is

$$ |x|^p + |y|^p = 1 $$

And if $p$ is equal to two it becomes

$$ x^2 + y^2= 1 $$

which defines a circle.

Here is the MATLAB definition.

```
F = @(x,y) abs(x/a).^p + abs(y/b).^p - 1;
```
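As a cross-check outside MATLAB, the standard explicit parameterization $x = a \, \mathrm{sign}(\cos u)\,|\cos u|^{2/p}$, $y = b \, \mathrm{sign}(\sin u)\,|\sin u|^{2/p}$ (a well-known identity, not from the post) lands exactly on the zero set of $F$, since $|x/a|^p + |y/b|^p = \cos^2 u + \sin^2 u = 1$. A small Python sketch:

```python
import math

def F(x, y, a, b, p):
    # the implicit superellipse function, zero on the curve
    return abs(x/a)**p + abs(y/b)**p - 1

def on_curve(u, a, b, p):
    # explicit parameterization of the superellipse
    c, s = math.cos(u), math.sin(u)
    x = a*math.copysign(abs(c)**(2/p), c)
    y = b*math.copysign(abs(s)**(2/p), s)
    return x, y

a, b, p = 1.5, 1.0, 2.5
worst = max(abs(F(*on_curve(0.01*k, a, b, p), a, b, p)) for k in range(1, 629))
print(worst)   # essentially zero: every parameterized point lies on the curve
```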

Let's vary `p` between 0.1 and 10 and make an animated gif, pausing briefly as we pass through `p = 1` and `p = 2`.

```
type make_superellipse_movie
```

```
gif_frame('superellipse_movie.gif')
for p = [0.1 0.1 0.1:0.1:1 1 1 1:0.2:2 2 2 2:.5:10]
    F = @(x,y) abs(x).^p + abs(y).^p - 1;
    ezplot(F,[-1.2 1.2])
    axis square
    title(['|x|^p + |y|^p = 1, p = ',sprintf('%.1f',p)])
    gif_frame
end
```

All of Piet's designs have `p = 2.5`.

```
p = 2.5;
a = 1.5;
F = @(x,y) abs(x/a).^p + abs(y).^p - 1;
ezplot(F,[-1.8 1.8])
axis equal
title(sprintf('Superellipse, p = %.1f',p))
```

Different scale factors on `x` and `y` make the shape rectangular, and `p` greater than two makes it not exactly an ellipse.

I could have said that the superellipse is the unit ball in two dimensions of the vector p-norm,

$$ ||v||_p = \big( \sum_i |v_i|^p \big)^{1/p} $$

In MATLAB this is `norm(v,p)`. In other words the function `F` starring in the first animated gif could have been defined by

```
F = @(x,y) norm([x,y],p) - 1;
```

When `p = 1` the unit "ball" becomes the unit diamond. And for `p < 1` the quantity `norm(v,p)` is no longer actually a norm. It fails to satisfy the triangle inequality.
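The failure of the triangle inequality for `p < 1` is easy to see with a small example. A pure-Python sketch (my own illustration): take the unit vectors along the two axes, where each has "norm" 1 for every `p`, but their sum does not.

```python
def pnorm(v, p):
    # the p-norm formula; only a true norm for p >= 1
    return sum(abs(t)**p for t in v)**(1/p)

v, w = (1.0, 0.0), (0.0, 1.0)
p = 0.5
lhs = pnorm((v[0] + w[0], v[1] + w[1]), p)   # ||v+w||_p = 2^(1/p) = 4
rhs = pnorm(v, p) + pnorm(w, p)              # ||v||_p + ||w||_p = 2
print(lhs, rhs)   # 4.0 > 2.0: the triangle inequality fails
```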

Homework: what is `norm(v,p)` for *negative* `p`, and when `p = -Inf`?

Use the Images option in Google to search for all of the different images of "Soma Cube" available on the Internet. It's awesome.

Here is one of those images, available from http://mathcats.org/ideabank/geometrykits.html. It shows all of the shapes with a reentrant corner that can be made out of three or four "cubelets". The seven pieces contain a total of 27 cubelets. The goal is to assemble them into a 3-by-3-by-3 larger cube.

My father's hobby was woodworking. He had a very nice shop. One year when I was a kid, he and I made 12 Soma Cube puzzle sets like this one to give to our family for Christmas. My set is now over 65 years old, but I still have it in my office at home.

The following is the shortest code segment I've ever presented in over 2-1/2 years of Cleve's Corner blog. In the command window of your own copy of MATLAB you can enter.

```
soma
```

This launches an old demo written by Bill McKeeman. Click away to your heart's content. In this animated gif, watch the solution number change. It's looping over only the first 12 of the 240 possible puzzle solutions that McKeeman's algorithm finds.

McKeeman and I were buddies in grad school. We spent too many hours perfecting our Soma Cube skills. In fact, I can still put the puzzle together behind my back.

Get the MATLAB code

Published with MATLAB® R2015b