Multiplication can be repeated addition, scaling, rotating (via imaginary numbers), and more.

What about division? Let's take a look.

The permutation formula lets us pick 3 items out of 10, in a specific order. To order 3 items from 10, we have 10 options for the first choice, 9 options for the second, and 8 for the third, giving us 10 * 9 * 8 = 720 possibilities.

But how is this process written in most math books?

What's going on?

Well, we just want a portion of the factorial. 10! gives the full sequence (10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1), but we want to stop it after we hit 8. That means we divide by 7!. The only part that's not removed is 10 * 9 * 8.

In this case, division is acting like a brake/boundary/filter that stops the factorial from running hog-wild. (They get out of control, you know.)

Ah! If writing a software program, you wouldn't actually compute 10!, 7! and do a division. What if we needed 3 choices from 1000 options? (1000 factorial has 2568 digits and will make your computer cry. *I told you this would happen!*)

If we realize the role of division as a boundary marker, we can just compute 1000 * 999 * 998 = 997,002,000 and call it a day.
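In code, this "boundary marker" view means multiplying only the terms we need. Here's a quick sketch (the name `permutations` is my own; Python 3.8+ also ships this as `math.perm`):

```python
from math import factorial

def permutations(n, k):
    """Ordered picks of k items from n: multiply just k terms,
    no full factorials needed."""
    count = 1
    for i in range(n, n - k, -1):
        count *= i
    return count

print(permutations(10, 3))    # 720
print(permutations(1000, 3))  # 997002000

# Matches the textbook formula n! / (n - k)!
assert permutations(10, 3) == factorial(10) // factorial(7)
```

No thousand-digit intermediate results, no crying computers.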

Let's keep going.

Suppose we don't care about the order of the items we pick: ABC is the same as CBA. What to do?

Well, we can apply another division! This time, we don't want a boundary, but want to merge/consolidate/group up similar items. Everything that looks like ABC (e.g., ACB, BAC, BCA, CAB, CBA) should be counted once.

With 3 items there are 3! rearrangements, so the final count is 720/3! = 720/6 = 120 choices.
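The "group up" division is one extra step in code. A sketch (again, `combinations` is my naming; `math.comb` does this natively in Python 3.8+):

```python
from math import factorial

def combinations(n, k):
    """Unordered picks: count the ordered picks, then merge the
    duplicates by dividing out the k! reorderings."""
    ordered = 1
    for i in range(n, n - k, -1):
        ordered *= i
    return ordered // factorial(k)

print(combinations(10, 3))  # 720 // 6 = 120
```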

As a formula:

Neat, right? The division in the permutation formula acts as a boundary, and the division in the combination formula is a type of "group up". I imagine the variations being merged into a single option:

The words we pick frame how we think about an equation. "Divide" implies we're splitting things apart. If we know alternate meanings (repeated subtraction, boundaries, consolidation), we may pick a better description. Saying "divide by k!" doesn't have the same intuition as "consolidate the reorderings".

I think math concepts are fundamentally simple but their written description may not be. (*Ever try to describe how to put on a shirt?*) The goal is finding the words to make the idea click.

Happy math.

This episode is a special one on Isaac Newton’s Universal Law of Gravity.

In this episode we discuss:

- Newton’s backstory and how it influenced his work
- The mechanics of the equation in a way that you can understand
- The implications of the equation for our view of the Universe

It was a lot of fun -- hope you enjoy it.

The key intuition: **e represents 100% continuous growth**.

Today let's revisit each definition with a colorization viewpoint, describing continuous growth from a few different perspectives.

This definition of e was my starting point on understanding the concept. We start with 1 growing to 2 (100% interest), and then compound that growth more and more frequently.

Eventually, we see that 1 grows to 2.71828..., hitting a speed limit of e.
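You can watch the speed limit appear with a minimal sketch, compounding 100% interest into more and more periods (`compound` is just an illustrative name):

```python
# Compound 100% interest, split into n periods over one unit of time
def compound(n):
    return (1 + 1 / n) ** n

for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, compound(n))
# 1 -> 2.0, 10 -> 2.59..., creeping toward e = 2.71828...
```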

The trick is distinguishing the role of each "1" in the definition. One is the base quantity, one is the interest, and another is the implicit single unit of time we plan to grow for. Math is so abstract that we don't have these separations labeled individually, they are just quantities interacting.

This definition splits up the compounding process into chunks we can see separately. I like to see each component like a "factory" that is earning money. We start with our initial amount, which builds interest. That interest builds its own interest, which builds its own interest, and so on (read more).

From a calculus perspective, here's what's happening:

- Our initial quantity is 1 (for all time)
- This principal earns 100% continuous interest, and after time x has earned $\int 1 \, dx = x$. (After 1 unit of time, this is 1.)
- After time x, that interest ($x$) has earned $\int x \, dx = \frac{1}{2} x^2$. (After 1 unit of time, this is $\frac{1}{2}$.)
- After time x, that interest ($\frac{1}{2} x^2$) has earned $\int \frac{1}{2} x^2 \, dx = \frac{1}{2} \cdot \frac{1}{3} x^3 = \frac{1}{3!} x^3$. (After 1 unit of time, this is $\frac{1}{3!} = \frac{1}{6}$.)

And so on. Every instant, the entire chain of interest is growing. When learning calculus, you might have repeatedly tried to integrate x just for fun (whatever gets you going). That game is how we end up with e.
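That chain of interest-on-interest is the factorial series for e. A quick check, summing the first 20 terms:

```python
from math import e, factorial

# e = 1 + 1 + 1/2! + 1/3! + ... : each term is one "factory's"
# earnings after a single unit of time
total = sum(1 / factorial(n) for n in range(20))
print(total)  # 2.71828..., matching math.e to double precision
```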

Instead of plugging in x=1 to compute e^{1}, we can leave x unspecified to handle any amount of time:

I like seeing how each piece of interest contributes to the whole. Later terms have larger powers, but fight a larger factorial. For very small values of x (like 1%), we can approximate e^{x} as:

For example, earning 1% continuous interest for a single year is still around 1.01, because there isn't much growth from compounding. (And yep, e^.01 = 1.01005.)
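A quick sanity check of that approximation:

```python
from math import exp

x = 0.01  # 1% continuous interest, one unit of time
print(exp(x))            # about 1.01005
print(1 + x)             # 1.01 -- the first-order approximation
print(1 + x + x**2 / 2)  # the next term closes most of the remaining gap
```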

This is the shortest definition, but relies on calculus machinery. In short, we're saying that e^{x} always changes by exactly the amount that we have.

The derivative ($\frac{d}{dx}$) measures instantaneous change:

And we say e^{x} is that input that makes this machine return the original value. (In a similar way, "0" is the input that makes addition return the original; 1.0 is the input that makes multiplication return the original value.)

We can check this works. We saw earlier that e^{x} is really this equation:

If you take the derivative of each term on the right-hand side, we get:

which simplifies to

In other words, every term gets "pulled over" to the left, with the constant 1.0 disappearing (it doesn't change). We have our original pattern, therefore e^{x} is its own derivative. (There's no +C chicanery here, because $\frac{d}{dx}(e^x + C) \neq e^x + C$.)

While other functions like f(x) = x^{2} or f(x) = sin(x) may *momentarily* equal their derivative at certain instants, they don't keep it up for all values of x. e^{x} is that Disneyland ride that keeps the magic going forever.
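We can also check the "own derivative" property numerically: the slope of e^x at any point should equal the value there. A rough sketch using a small forward difference:

```python
from math import exp

# Numerically verify d/dx e^x = e^x: the measured slope at x should
# match the function's own value at x.
h = 1e-6
for x in (0.0, 1.0, 2.0):
    slope = (exp(x + h) - exp(x)) / h
    print(x, slope, exp(x))  # slope and value agree to several decimals
```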

This definition is the most gnarly: instead of talking about e directly, we work backwards.

Define the natural logarithm as the time needed to grow from 1 to a (our desired number), assuming 100% continuous interest. What does that look like?

Let's say we've grown to 4.0. How long to grow to 5.0? Well, assuming 100% interest, we grow 4 units per time period, so it takes us 1/4 of a unit to grow to 5.

And when we're at 5, it'll take us 1/5 of a unit to get to 6.

And so on. The time to grow from 1 to a is the time from 1 to 2 (1 unit), plus 2 to 3 (1/2 unit), plus 3 to 4 (1/3 unit), until we reach a.

(In reality, we need to break time down microscopically, growing from 1, to 1.1, to 1.2, etc. That's what the integral $\int \frac{1}{x}\,dx$ really does.)

Phew! Once we have the notion of "time to grow from 1 to a" defined, we say e is the number that takes 1 unit of time to reach. In other words:

Here, e is the "base of the natural logarithm".
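This "accumulate the time for each tiny growth step" idea translates directly to code. A rough numerical sketch (`time_to_grow` is my hypothetical name; it's just a Riemann sum of that 1/x integral):

```python
from math import e, log

# ln(a) as accumulated time: at amount x (100% continuous interest),
# the growth rate is x, so a small step dx takes dx/x units of time.
def time_to_grow(a, steps=100_000):
    total_time, x = 0.0, 1.0
    dx = (a - 1) / steps
    for _ in range(steps):
        total_time += dx / x  # time to grow the next small slice dx
        x += dx
    return total_time

print(time_to_grow(e))  # close to 1.0: e takes one unit of time to reach
print(time_to_grow(2))  # close to log(2) = 0.693...
```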

I feel comfortable with an idea when I can hop between definitions and notice similarities. For example, look at the items that show up in each colorization: do you see where "interest" shows up in each definition? The unit quantity? The idea of perfection or infinitely precise change?

It feels great when e becomes a flexible tool on your bat-belt and not an incantation to memorize.

Happy math.

**Argh! Why aren't more math concepts introduced this way?**

Most ideas aren't inherently confusing, but their *technical description* can be (e.g., reading sheet music vs. hearing the song.)

My learning strategy is to find what actually helps when learning a concept, and do more of it. Not the stodgy description in the textbook -- what made it click for *you*?

The checklist of what I need is ADEPT: Analogy, Diagram, Example, Plain-English Definition, and Technical Definition.

Here's a few reasons I like the colorized equations so much:

- The plain-English description forces an analogy for the equation. Concepts like "energy", "path", "spin" aren't directly stated in the equation.
- The colors, text, and equations are themselves a diagram. Our eyes bounce back and forth, reading the equation like a map (not a string of symbols).
- The technical description -- our ultimate goal -- is not hidden. We're not choosing between intuition or technical, it's intuition *for* the technical.

Of course, we still need examples to check our understanding, but 4/5 ain't bad!

I colorized a few of my favorite math topics below. Making the colorizations was surprisingly fun. Like writing a haiku, there's a game to trimming down a concept to its essence.

The number e (2.718...) is the base of growth, generated from universal ideas. Take unit interest, with unit time and compound it perfectly. Read article.

Euler's Formula is one of the most important in math, linking exponents, imaginary numbers, and circles. The intuition: constant growth in a perpendicular direction traces a circle. Read article.

The Fourier Transform builds on Euler's Identity. Using your circular path, spin a signal at a certain speed to isolate the "recipe" at that speed (like separating a smoothie into its ingredients). Read article.

The Pythagorean Theorem is usually thought to apply to triangles only. In fact, it applies to *any* shape, *any* type of 2d area. Triangles are just a convenient starting point. Read article.

Bayes Theorem has a simple intuition: evidence must be diluted by false positives. (Cry wolf and you won't be trusted.) Read article.

These colorized equations are an experiment in conveying the most intuition in the simplest package possible. We don't need VR/AR, holograms, or brain-computer interfaces to understand math -- have we exhausted the possibilities of crayon on a piece of paper?

My short-term goal is to create colorized equations for the top 25 equations on the site. Then (not trying to look directly at the sun), gather colorizations for the top (100?) topics we're meant to learn in high school and college.

The idea is to find explanation styles that work, and do more of them.

Happy math.

I have a half-built visual tool to make these. For now, here is the LaTeX template I used:

https://www.overleaf.com/read/cvmtqywqgvvw

The idea got a strong reception on Twitter (thanks Jan):

colorized math equations makes it easier to understand. Great idea!! #ux #math https://t.co/8Btua0cpTl pic.twitter.com/2569UG2Zv4

— Jan Willem Tulp (@JanWillemTulp) December 14, 2017

The top piece of feedback was having accessible versions for color blind readers; I plan to make options available here too.
