This episode is a special one on Isaac Newton's Universal Law of Gravity. Discover Newton's backstory and how it influenced his work, the mechanics of the equation in a way that you can understand, and the implications of the equation for our view of the Universe.

In this episode we discuss:

- Newton’s backstory and how it influenced his work
- The mechanics of the equation in a way that you can understand
- The implications of the equation for our view of the Universe

It was a lot of fun -- hope you enjoy it.

The key intuition: **e represents 100% continuous growth**.

Today let's revisit each definition with a colorization viewpoint, describing continuous growth from a few different perspectives.

This definition of e was my starting point on understanding the concept. We start with 1 growing to 2 (100% interest), and then compound that growth more and more frequently.

Eventually, we see that 1 grows to 2.71828..., hitting a speed limit of e.
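Here's a quick numeric sketch of that speed limit (the `compound` helper is my naming, not from the article):

```python
# Compound 100% interest on a starting quantity of 1, n times per period.
def compound(n):
    return (1 + 1 / n) ** n

# More frequent compounding creeps toward e = 2.71828..., never past it.
for n in [1, 12, 365, 1_000_000]:
    print(f"n = {n:>9}: {compound(n):.5f}")
```

Annual compounding gives 2.0, monthly about 2.613, and a million compoundings is already indistinguishable from e at five decimals.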

The trick is distinguishing the role of each "1" in the definition. One is the base quantity, one is the interest, and another is the implicit single unit of time we plan to grow for. Math is so abstract that we don't have these separations labeled individually; they are just quantities interacting.

This definition splits up the compounding process into chunks we can see separately. I like to see each component like a "factory" that is earning money. We start with our initial amount, which builds interest. That interest builds its own interest, which builds its own interest, and so on (read more).

From a calculus perspective, here's what's happening:

- Our initial quantity is 1 (for all time)
- This principal earns 100% continuous interest, and after time x has earned $\int 1 \, dx = x$. (After 1 unit of time, this is 1)
- After time x, that interest ($x$) has earned $\int x \, dx = \frac{1}{2}x^2$. (After 1 unit of time, this is $\frac{1}{2}$)
- After time x, that interest ($\frac{1}{2}x^2$) has earned $\int \frac{1}{2}x^2 \, dx = \frac{1}{2} \cdot \frac{1}{3} x^3 = \frac{1}{3!}x^3$. (After 1 unit of time, this is $\frac{1}{3!} = \frac{1}{6}$)

And so on. Every instant, the entire chain of interest is growing. When learning calculus, you might have repeatedly tried to integrate x just for fun (whatever gets you going). That game is how we end up with e.

Instead of plugging in x=1 to compute $e^1$, we can leave x unspecified to handle any amount of time:
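$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$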

I like seeing how each piece of interest contributes to the whole. Later terms have larger powers, but fight a larger factorial. For very small values of x (like 1%), we can approximate $e^x$ as:
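$$e^x \approx 1 + x$$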

For example, earning 1% continuous interest for a single year is still around 1.01, because there isn't much growth from compounding. (And yep, $e^{0.01} = 1.01005$.)
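A two-line check with Python's standard `math` module:

```python
import math

# Earning 1% continuous interest for one unit of time barely compounds:
x = 0.01
print(math.exp(x))  # 1.01005..., almost exactly 1 + x
print(1 + x)
```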

This is the shortest definition, but relies on calculus machinery. In short, we're saying that $e^x$ always changes by exactly the amount that we have.

The derivative ($\frac{d}{dx}$) measures instantaneous change:
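$$\frac{d}{dx} f(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$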

And we say $e^x$ is the input that makes this machine return the original value. (In a similar way, 0 is the input that makes addition return the original value; 1 is the input that makes multiplication return the original value.)
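In symbols, the defining property is:

$$\frac{d}{dx} e^x = e^x$$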

We can check this works. We saw earlier that $e^x$ is really this equation:
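$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$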

If you take the derivative of each term in the right-hand side, we get:
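$$\frac{d}{dx}\left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right) = 0 + 1 + \frac{2x}{2!} + \frac{3x^2}{3!} + \cdots$$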

which simplifies to
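$$1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = e^x$$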

In other words, every term gets "pulled over" to the left, with the constant 1.0 disappearing (it doesn't change). We have our original pattern, therefore $e^x$ is its own derivative. (There's no +C chicanery here, because $\frac{d}{dx}(e^x + C) = e^x \neq e^x + C$.)

While other functions like $f(x) = x^2$ or $f(x) = \sin(x)$ may *momentarily* equal their derivative at certain instants, they don't keep it up for all values of x. $e^x$ is that Disneyland ride that keeps the magic going forever.

This definition is the most gnarly: instead of talking about e directly, we work backwards.

Define the natural logarithm as the time needed to grow from 1 to a (our desired number), assuming 100% continuous interest. What does that look like?

Let's say we've grown to 4.0. How long to grow to 5.0? Well, assuming 100% interest, we grow 4 units per time period, so it takes us 1/4 of a unit to grow to 5.

And when we're at 5, it'll take us 1/5 of a unit to get to 6.

And so on. The time to grow from 1 to a is the time from 1 to 2 (1 unit), plus 2 to 3 (1/2 unit), plus 3 to 4 (1/3 unit), until we reach a.

(In reality, we need to break time down microscopically, growing from 1, to 1.1, to 1.2, etc. That's what the integral $\int \frac{1}{x}\,dx$ really does.)
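Here's a rough numeric sketch of that microscopic accumulation (the `ln_approx` name and step count are my choices, not from the article):

```python
import math

# Total time to grow from 1 to a at 100% continuous interest,
# accumulated in tiny steps: moving from x to x + dx takes about dx / x.
def ln_approx(a, steps=100_000):
    dx = (a - 1) / steps
    total, x = 0.0, 1.0
    for _ in range(steps):
        total += dx / (x + dx / 2)  # midpoint of the step for accuracy
        x += dx
    return total

print(ln_approx(2))   # approaches ln(2) = 0.6931...
print(math.log(2))
```

Summing the tiny `dx / x` chunks reproduces the natural logarithm, which is exactly what the integral says.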

Phew! Once we have the notion of "time to grow from 1 to a" defined, we say e is the number that takes 1 unit of time to reach. In other words:
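$$\ln(e) = 1$$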

Here, e is the "base of the natural logarithm".

I feel comfortable with an idea when I can hop between definitions and notice similarities. For example, look at the items that show up in each colorization: do you see where "interest" shows up in each definition? The unit quantity? The idea of perfection or infinitely precise change?

It feels great when e becomes a flexible tool on your bat-belt and not an incantation to memorize.

Happy math.


**Argh! Why aren't more math concepts introduced this way?**

Most ideas aren't inherently confusing, but their *technical description* can be (e.g., reading sheet music vs. hearing the song.)

My learning strategy is to find what actually helps when learning a concept, and do more of it. Not the stodgy description in the textbook -- what made it click for *you*?

The checklist of what I need is ADEPT: Analogy, Diagram, Example, Plain-English Definition, and Technical Definition.

Here's a few reasons I like the colorized equations so much:

- The plain-English description forces an analogy for the equation. Concepts like "energy", "path", "spin" aren't directly stated in the equation.
- The colors, text, and equations are themselves a diagram. Our eyes bounce back and forth, reading the equation like a map (not a string of symbols).
- The technical description -- our ultimate goal -- is not hidden. We're not choosing between intuition and the technical; it's intuition *for* the technical.

Of course, we still need examples to check our understanding, but 4/5 ain't bad!

I colorized a few of my favorite math topics below. Making the colorizations was surprisingly fun. Like writing a haiku, there's a game to trimming down a concept to its essence.

The number e (2.718...) is the base of growth, generated from universal ideas. Take unit interest with unit time, and compound it perfectly. Read article.

Euler's Formula is one of the most important in math, linking exponents, imaginary numbers, and circles. The intuition: constant growth in a perpendicular direction traces a circle. Read article.

The Fourier Transform builds on Euler's Identity. Using your circular path, spin a signal at a certain speed to isolate the "recipe" at that speed (like separating a smoothie into its ingredients). Read article.

The Pythagorean Theorem is usually thought to apply to triangles only. In fact, it applies to *any* shape, *any* type of 2d area. Triangles are just a convenient starting point. Read article.

Bayes Theorem has a simple intuition: evidence must be diluted by false positives. (Cry wolf and you won't be trusted.) Read article.

These colorized equations are an experiment in conveying the most intuition in the simplest package possible. We don't need VR/AR, holograms, or brain-computer interfaces to understand math -- have we exhausted the possibilities of crayon on a piece of paper?

My short-term goal is to create colorized equations for the top 25 equations on the site. Then (not trying to look directly at the sun), gather colorizations for the top (100?) topics we're meant to learn in high school and college.

The idea is to find explanation styles that work, and do more of them.

Happy math.

I have a half-built visual tool to make these. For now, here is the LaTeX template I used:

https://www.overleaf.com/read/cvmtqywqgvvw

The idea got a strong reception on Twitter (thanks Jan):

colorized math equations makes it easier to understand. Great idea!! #ux #math https://t.co/8Btua0cpTl pic.twitter.com/2569UG2Zv4

— Jan Willem Tulp (@JanWillemTulp) December 14, 2017

The top piece of feedback was having accessible versions for color blind readers; I plan to make options available here too.

Why can't we treat equations like one of Aesop's fables, with a lesson buried inside?

Take a look at the Pythagorean Theorem. It seems to be about triangles, right?

Well, sure. But is it only that? Seeing the equation:
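$$a^2 + b^2 = c^2$$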

has a literal interpretation: "The sides of a triangle have a specific mathematical relationship."

Ok, fine. That's some nice zoology. Stepping back, what's really happening?

*The sides of a triangle, which point in different directions, have a mathematical relationship.*

Or better yet:

*Two objects, which exist in different dimensions, can still be compared.*

Here's an analogy: Who's the better athlete, Michael Jordan or Muhammad Ali?

We shouldn't ask who has a better jump shot or uppercut -- that comparison stays within a single dimension, clearly favoring one or the other. A better comparison would encompass all dimensions: *Who was more dominant compared to the competition? Who held more championships? Who advanced the sport more?*

Ah. We need a different type of "expansive" comparison to help each component see the other side.

In the triangle case, we have an "A direction" (East/West) and a "B direction" (North/South). Instead of comparing them directly, we square them to make area. Why?

Here's an intuition. Individual directions point in a single direction within our 2d universe. But area spans *every* direction available in our universe. By converting a single direction to its square (which points in all directions), we have a common "all directions" format that can be compared. It becomes "universal area vs. universal area" and not "pointing North vs. pointing East" (aka jump shots vs. uppercuts).

Will any type of universal self-comparison work? (Squaring, cubing, etc.)

No, unfortunately. The Pythagorean Theorem is special because it shows the *specific* comparison of squaring ($a^2 + b^2 = c^2$) keeps a simple relationship between the whole and its parts. There's probably a relationship for cubing, but it's not as clear-cut.

**Pythagorean Theorem Fact:** The sides of a right triangle follow a specific mathematical relationship: $a^2 + b^2 = c^2$.

**Pythagorean Theorem Wisdom:** To compare different things, find a universal way to compare them. (Specifically, square yourself to create area.)

Equations aren't so boring when you look for the moral of the story, right?

In the Pythagorean Theorem, we can imagine spinning our 1d lines into area and comparing that. Here, we can spin each side into a circle:

And yowza, the area matches up!
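A quick numeric check of the circle version, using the classic 3-4-5 right triangle (my example numbers, not the article's):

```python
import math

# Circles drawn on each side of a 3-4-5 right triangle.
def circle_area(r):
    return math.pi * r ** 2

a, b, c = 3, 4, 5
print(circle_area(a) + circle_area(b))  # circles on the two legs
print(circle_area(c))                   # circle on the hypotenuse: same total
```

Since every circle's area is a fixed multiple of its side squared, $a^2 + b^2 = c^2$ guarantees the areas combine the same way.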

This is a demonstration of the theorem; the *proof* shows that area will always combine neatly like this. (Read more.)

We can make our "self comparison" analogy more technical with vectors.

- $\vec{a}$ is the vector $(a, 0)$
- $\vec{b}$ is a vector in a different dimension, $(0, b)$
- $\vec{c}$ is a vector made from both: $\vec{c} = \vec{a} + \vec{b} = (a, 0) + (0, b) = (a, b)$

The Pythagorean Theorem says "If we compare each item to itself, the combined self-comparisons of $\vec{a}$ and $\vec{b}$ equal the self-comparison of $\vec{c}$".

`(c compared to itself) = (a compared to itself) + (b compared to itself)`

In the vector world, what's a self-comparison? A dot product with yourself.

The Pythagorean Theorem is stating:
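$$\vec{c} \cdot \vec{c} = \vec{a} \cdot \vec{a} + \vec{b} \cdot \vec{b}$$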

which works because:
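$$\vec{c} \cdot \vec{c} = (\vec{a} + \vec{b}) \cdot (\vec{a} + \vec{b}) = \vec{a} \cdot \vec{a} + 2\,\vec{a} \cdot \vec{b} + \vec{b} \cdot \vec{b}$$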

(Since $\vec{a}$ and $\vec{b}$ are perpendicular, their dot product is zero.)

And for the parts:
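$$\vec{a} \cdot \vec{a} = (a, 0) \cdot (a, 0) = a^2 \qquad \vec{b} \cdot \vec{b} = (0, b) \cdot (0, b) = b^2$$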

This specific self-comparison maintains a simple relationship between the whole and its parts (addition).
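The whole self-comparison fits in a few lines of Python (`dot` is a throwaway helper standing in for a library dot product):

```python
# a points East, b points North, c is their combination.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a, b = (3, 0), (0, 4)
c = (a[0] + b[0], a[1] + b[1])  # (3, 4)

print(dot(a, a) + dot(b, b))  # 9 + 16 = 25
print(dot(c, c))              # 25, matching a^2 + b^2 = c^2
```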

A simple glance at two vectors doesn't offer a built-in way to compare them; instead, use a derived *scalar* (single number) that allows a comparison to be made.