<feed xmlns="http://www.w3.org/2005/Atom">
  <title><![CDATA[ Zero Wind :: Jamie Wong ]]></title>
  <link href="http://jamie-wong.com/atom.xml" rel="self"/>
  <link href="http://jamie-wong.com/"/>
  <updated>2024-06-11T22:38:31+00:00</updated>
  <id>http://jamie-wong.com/</id>
  <author>
    <name>Jamie Wong</name>
  </author>
  
  <entry>
    <title><![CDATA[ The Hole in the Sky That We Actually Fixed]]></title>
    <link href="http://jamie-wong.com/post/ozone-hole/"/>
    <updated>2024-05-17T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/ozone-hole/</id>
    <content type="html"><![CDATA[ 

 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ For Sale: A Promise to Remove Invisible Gas]]></title>
    <link href="http://jamie-wong.com/post/carbon-dioxide-removal/"/>
    <updated>2023-05-31T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/carbon-dioxide-removal/</id>
    <content type="html"><![CDATA[ 

<style>
  table#suppliers tbody tr:nth-child(2n+1) {
    background: #eee;
  }
  table#suppliers td {
    padding: 10px;
  }
  .dot {
    width: 1em;
    height: 1em;
    display: inline-block;
    border-radius: 1em;
    vertical-align: sub;
  }
  .dot.grey {
    background-color: #b3b3b3;
  }
  .dot.green {
    background-color: #78A270;
  }
  .dot.blue {
    background-color: #527292;
  }

  .dot.suppliers {
    background-color: #0084BD;
  }
  .dot.capital {
    background-color: #00BD40;
  }
  .dot.trust {
    background-color: #EC8F05;
  }
</style>

<p>In a select few corn fields after a harvest, the unwanted stalks, leaves, and cobs are fed into a storage container on the back of a long-haul truck. These agricultural wastes won’t be hitting the road in their loose, fluffy state, though. The storage container doesn’t start empty. It contains a complex chemical machine fine-tuned to efficiently convert the corn detritus into three chemical compounds: biochar, bio-oil, and syngas. The biochar, a solid, is scattered back onto the farm fields to improve their fertility. The syngas is, in part, used as fuel to keep the high-temperature reaction going. The bio-oil is destined to be injected deep underground. Bizarrely, people will pay for it to be put there, not to save for later use, but with the promise that it will stay there forever.</p>

<p>This whole operation happens under the purview of <a href="https://charmindustrial.com/">Charm Industrial</a>, a leading participant in a growing market of players with a shared goal: suck carbon dioxide directly out of the air and store it away for good. This is carbon dioxide removal, or CDR for short.</p>

<p>The machinery acting as this conceptual vacuum cleaner for the sky is about as diverse as you can imagine for an industrial process. Charm’s bio-oil production is just one of many. Another method starts with trays of powdered rock and ends with cement production. A third involves growing kelp in the open ocean and encouraging it to eventually sink to the bottom.</p>

<p>The market is small for now, but ambitious. To meet the scale of needs for carbon dioxide removal outlined in the Intergovernmental Panel on Climate Change’s (IPCC) 2021 report, our rate of global removal will need to double roughly every 21 months for the next 27 years <sup class="footnote-ref" id="fnref:1"><a rel="footnote" href="#fn:1">1</a></sup>. We’ll need a market with hundreds of billions of dollars in transactions per year. We’ll need an entirely new global industry on the scale of steel or cement.</p>

<p><img src="/images/cdr/CDR_growth_rate.png" alt="Growth rate of needed carbon-dioxide removal from 2000 to 2050" /></p>

<p>This is the rate we need assuming we <em>succeed</em> in dramatically reducing emissions <em>and</em> dramatically scaling up natural climate solutions.</p>

<figure>
<img src="/images/cdr/reductions_and_removals.png">
<figcaption>The most optimistic emission reduction pathway from the IPCC's 2021 Assessment Report. <br/> Original <a href="https://twitter.com/hausfath/status/1424891447652716548">via Zeke Hausfather</a>. Edits in red are my own.</figcaption>
</figure>
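
<p>To get a feel for what that doubling rate implies, here’s a quick back-of-the-envelope check. This is a minimal sketch in Python; the ~0.1 Mt/year figure for today’s removal rate is my own rough assumption, not a measured number.</p>

<pre><code># Sanity check on "double roughly every 21 months for the next 27 years".
# The ~0.1 Mt/year baseline for 2023 is an assumption, not a measured figure.
months_per_doubling = 21
years = 27

doublings = years * 12 / months_per_doubling   # ~15.4 doublings
growth_factor = 2 ** doublings                 # ~44,000x overall

baseline_mt_per_year = 0.1                     # assumed 2023 removal rate, Mt/yr
implied_gt_per_year = baseline_mt_per_year * growth_factor / 1000

print(f"{doublings:.1f} doublings, {growth_factor:,.0f}x growth")
print(f"Implied 2050 rate: ~{implied_gt_per_year:.1f} Gt CO2/year")
# Prints ~4.4 Gt/year, in the ballpark of the ~3.8 Gt/year target cited below.
</code></pre>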

<p>So we’ve got some challenges ahead.</p>

<hr />

<p>Since leaving my job as a software engineer at Figma at the end of 2021, I’ve been learning about frontier problems within climate, and seeing where software can play a role. Unfortunately, my exploration was perilously broad. Climate tech encompasses everything from <a href="https://www.niraenergy.com/">predicting the cost of connecting new solar to the grid</a>, to <a href="https://www.recoolit.com/">chemically transforming refrigerants</a>, to <a href="https://mootral.com/">improved cow feed to reduce methane from their burps</a>, to <a href="https://lowercarboncapital.com/company/electra/">novel steel manufacturing strategies</a>. Each entails a totally different market, with different stakeholders and different problems.</p>

<p>Without choosing a specific problem domain, I found myself forever skimming across the surface: learning small details here and there, but never learning enough to evaluate real hypotheses about where I, personally, could apply leverage.</p>

<p>So lately I’ve been focusing on a narrow but important slice: carbon dioxide removal. I’d like to share what I’ve learned about the CDR market through reading and a couple dozen conversations with founders, investors, engineers, and scientists. This article is for anyone curious about CDR, but has a slight tilt towards areas where software is playing a role.</p>

<p>Here’s a preview of the path we’ll take:</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#orienting-ourselves-in-the-broader-context">Orienting ourselves in the broader context</a>
<ul>
<li><a href="#three-crucial-jobs">Three crucial jobs</a></li>
<li><a href="#the-lens-of-the-day">The lens of the day</a></li>
</ul></li>
<li><a href="#suppliers-the-companies-doing-the-main-thing">Suppliers: the companies doing the main thing</a>
<ul>
<li><a href="#permitting-for-anything-that-touches-air-water-or-dirt">Permitting: for anything that touches air, water, or dirt</a></li>
<li><a href="#industrial-partners-piggy-back-on-a-massive-industry">Industrial partners: piggy-back on a massive industry</a></li>
</ul></li>
<li><a href="#capital-show-me-the-money">Capital: show me the money</a>
<ul>
<li><a href="#carbon-removal-credits">Carbon removal credits</a>
<ul>
<li><a href="#the-corporate-buyers">The corporate buyers</a></li>
<li><a href="#the-curators">The curators</a></li>
<li><a href="#pre-sale-of-credits-for-fledgling-companies">Pre-sale of credits for fledgling companies</a></li>
</ul></li>
<li><a href="#venture-capital">Venture capital</a></li>
<li><a href="#grants">Grants</a></li>
<li><a href="#loans">Loans</a></li>
</ul></li>
<li><a href="#building-trust-where-exactly-did-my-money-go">Building trust: where, exactly, did my money go?</a>
<ul>
<li><a href="#fundamental-research-does-the-process-remove-carbon-at-all">Fundamental research: does the process remove carbon at all?</a></li>
<li><a href="#lca-is-it-carbon-negative">LCA: is it carbon negative?</a></li>
<li><a href="#mrv-what-did-my-money-specifically-do">MRV: what did my money, specifically, do?</a></li>
<li><a href="#mrv-service-companies">MRV service companies</a></li>
</ul></li>
<li><a href="#let-s-recap">Let’s recap</a></li>
<li><a href="#where-to-go-from-here">Where to go from here</a></li>
<li><a href="#further-reading-acknowledgements">Further reading &amp; acknowledgements</a></li>
</ul>
</nav></p>

<p>Before we get started, let’s locate ourselves in the space, and select our lens.</p>

<h1 id="orienting-ourselves-in-the-broader-context">Orienting ourselves in the broader context</h1>

<p>Talking about carbon dioxide removal is a bit of a mess.</p>

<p>I’m afraid readers might leave with misguided perspectives like “new technology is <em>the solution</em> to climate change”, or worse “carbon dioxide removal is <em>the solution</em> to climate change”. New technology broadly, and carbon dioxide removal specifically, are squarely in the camp of “necessary, but not sufficient”.</p>

<p><img src="/images/cdr/solution_tree.png" alt="Tree of climate solutions" /></p>

<p>We’ll be exploring a sub-branch of a sub-branch of what’s needed to stabilize our climate. But remember, we need the whole conceptual tree to win this fight, and watchful arborists to tend to each branch and ensure its growth.</p>

<p>As for <em>real</em> trees, they will not be our focus today. Nor will filters placed atop smoke stacks. We’ll be focused on <em>permanent</em> <em>removal</em>.</p>

<h2 id="three-crucial-jobs">Three crucial jobs</h2>

<figure>
<img src="/images/cdr/reductions_and_removals_no_edits.png">
<figcaption>The most optimistic emission reduction pathway from the IPCC's 2021 Assessment Report. <br/> Original <a href="https://twitter.com/hausfath/status/1424891447652716548">via Zeke Hausfather</a></figcaption>
</figure>

<p>If you examine this graph describing an optimistic path to net-zero emissions, the three colors (<span class="grey dot"></span>, <span class="green dot"></span>, <span class="blue dot"></span>) indicate three crucial jobs needed to reach a stable climate over the coming decades:</p>

<ol>
<li><strong>Reduce new emissions (shrink the grey <span class="grey dot"></span>).</strong> This is where the vast majority of the work lies in fighting climate change. This job requires transitioning our electrical grid to renewable energy, electrifying our transportation and heating, and decarbonizing industrial processes like cement and steel production. This is also where technology like filters on smoke stacks, or <a href="https://remoracarbon.com/">filters placed on the exhaust systems of semi-trucks</a>, belongs. Some of these solutions, like replacing a coal power plant with a solar farm plus storage, might be permanent in the sense that the emissions are permanently reduced. But solutions in this camp can’t remove emissions that entered the atmosphere long ago. So things in this camp are maybe <em>permanent</em>, but not <em>removal</em>.</li>
<li><strong>Use nature to remove carbon from the atmosphere (grow the green <span class="green dot"></span>).</strong> This is where high-quality tree planting initiatives play a role, for example <a href="https://program.tist.org/">TIST</a> and <a href="https://www.terraformation.com/">Terraformation</a>. These do remove carbon dioxide directly from the atmosphere, even if it was emitted long ago. But we can’t be confident how long carbon held in trees will stay there. Forest fires, lumber production, and disease can all result in the carbon returning to the atmosphere. It is <em>removal</em>, but it’s not confidently <em>permanent</em>.</li>
<li><strong>Artificially remove carbon from the atmosphere (grow the blue <span class="blue dot"></span>).</strong> There are a few reasons we need to do this on top of the other two jobs. First, we don’t have credible plans for avoiding certain kinds of emissions, like the partial evaporation of fertilizer from agricultural fields<sup class="footnote-ref" id="fnref:2"><a rel="footnote" href="#fn:2">2</a></sup>. That’s why you never see the grey section go away completely. Second, we want to return our atmosphere to <em>pre-industrial</em> levels of CO2 concentration, not just stop concentrations from rising. That requires removing more carbon dioxide than we’re emitting, which is why you see the black line representing net emissions eventually going negative.</li>
</ol>

<p>Today we’ll only be discussing the third job. I didn’t choose to focus on CDR because it’s the <em>only</em> important job of these three, or even <em>the most important</em> of these three. We need all three. But you can’t develop real insight into any of them if you don’t focus for a while.</p>

<h2 id="the-lens-of-the-day">The lens of the day</h2>

<p>There are many lenses through which to understand CO2 removal. We could ask how we developed consensus on how much we need by 2050, or what spurred the initial research identifying strategies for CO2 removal, or compare different methods by their energy and land area requirements. Those are all fascinating angles! But they’re not the focus of this article. Here, our goal is to build intuition for what the industry looks like in 2023.</p>

<p>In terms of big buckets, we’ll be focusing on three:</p>

<ol>
<li><strong>Suppliers (<span class="suppliers dot"></span>):</strong> the companies doing the actual removal, and the partners they need to make it happen</li>
<li><strong>Capital (<span class="capital dot"></span>):</strong> where suppliers get the money to do their job</li>
<li><strong>Trust (<span class="trust dot"></span>):</strong> how the suppliers and capital build and maintain consensus about how much carbon is being removed, and who paid for it</li>
</ol>

<p>We’ll build up the following map, but do so gradually, so don’t worry about imbibing its contents in a single gulp. We’ll use concrete examples of organizations playing these various roles, and aim to be illustrative, rather than exhaustive.</p>

<p><img src="/images/cdr/money_flows.png" alt="Market map showing the flow of money through the CDR ecosystem" /></p>

<p>With boundaries drawn around the subject of the day, we can start to examine what’s inside those boundaries. Let’s start with the suppliers.</p>

<h1 id="suppliers-the-companies-doing-the-main-thing">Suppliers: the companies doing the main thing</h1>

<p><img srcset="/images/cdr/player_cdr_suppliers.png 2x" alt="CDR Suppliers" /></p>

<p>To be a supplier of permanent CDR, you have two crucial jobs:</p>

<ol>
<li><strong>Capture</strong>: how do you pull CO2 from the atmosphere?</li>
<li><strong>Sequestration</strong>: where do you put the captured CO2, and how do you ensure it stays there for 1000+ years?</li>
</ol>

<p>Let’s see how some leading suppliers answer these two questions:</p>

<table id="suppliers">
  <thead>
    <tr>
      <th>Company</th>
      <th>Capture Method</th>
      <th>Sequestration Method</th>
    </tr>
  </thead>

  <tbody>
    <colgroup>
      <col style="width: 24%;">
      <col style="width: 38%;">
      <col style="width: 38%;">
    </colgroup>
    <tr>
      <td><a href="https://charmindustrial.com/">Charm Industrial</a></td>
      <td><strong>Terrestrial biomass</strong>: Plants grow on land and capture CO2 in the process.</td>
      <td><strong>Bio-oil sequestration</strong>: The plants are pyrolyzed (decomposed at high temperature) and converted into bio-oil. The bio-oil is pumped deep underground into exhausted oil wells.</td>
    </tr>

    <tr>
      <td><a href="https://www.runningtide.com/">Running Tide</a></td>
      <td><strong>Aquatic biomass</strong>: Kelp grows in the open ocean, capturing CO2 in the process.</td>
      <td><strong>Biomass sinking</strong>: Kelp sinks to the bottom of the ocean.</td>
    </tr>

    <tr>
      <td><a href="https://www.heirloomcarbon.com/">Heirloom</a></td>
      <td><strong>Direct air capture</strong>: Atmospheric CO2 reacts with powdered calcium hydroxide to form limestone, then that limestone is heated to extract a pure stream of CO2.</td>
      <td><strong>Various</strong>: Since the output is a pure CO2 stream, the sequestration method is flexible. Early versions sequestered this CO2 stream in concrete via a partnership with <a href="https://www.carboncure.com/">CarbonCure</a>.</td>
    </tr>

    <tr>
      <td><a href="https://www.ebbcarbon.com/">Ebb Carbon</a></td>
      <td><strong>Ocean alkalinity enhancement</strong>: Release an alkaline fluid into the ocean, allowing the ocean to absorb more carbon dioxide from the atmosphere.</td>
      <td><strong>Dissolved inorganic carbon</strong>: The increased alkalinity of the ocean allows it to retain the captured carbon dioxide in the form of dissolved inorganic carbon.</td>
    </tr>

    <tr>
      <td><a href="https://eioncarbon.com/">Eion</a> and <a href="https://www.lithoscarbon.com/">Lithos</a></td>
      <td><strong>Enhanced rock weathering</strong>: Naturally occurring rocks react with carbon dioxide to form stable chemical compounds. Accelerate that process by grinding up the rocks and spreading them on farm fields to increase their reactivity.</td>
      <td><strong>Stable carbon forms</strong>: The compounds resulting from the reaction with atmospheric CO2 accumulate into sedimentary rock or wash out to the ocean via rivers.</td>
    </tr>
  </tbody>
</table>

<p>The problems these companies face in scaling removal are varied, but a few came up repeatedly. One was a problem I never needed to think about for software: permitting.</p>

<h2 id="permitting-for-anything-that-touches-air-water-or-dirt">Permitting: for anything that touches air, water, or dirt</h2>

<p><img srcset="/images/cdr/player_permitting_authorities.png 2x" alt="Permitting Authorities" /></p>

<p>To deploy at scale, removal suppliers need to build industrial infrastructure. Whenever you want to build a big physical <em>anything,</em> permitting law comes into play. Discovering which permits you need, applying for them, and engaging in back-and-forth with their regulating body can be a massive time sink for suppliers.</p>

<p><img srcset="/images/cdr/relationship_cdr_supplier_permiting_authority.png 2x" alt="Suppliers & Permitting Authorities" /></p>

<p>When I visited Charm Industrial, CEO Peter Reinhardt explained that they don’t deploy industrial infrastructure in California, despite company HQ being in San Francisco. Getting permits in California can take years. By contrast, in some more politically right-leaning states, it takes weeks. They’d love to deploy in California, but the timelines make it prohibitive.</p>

<p>When I asked Heirloom’s CEO Shashank Samala about the problems that keep him up at night, permitting was on his list:</p>

<blockquote>
<p>Different jurisdictions have varying rules and requirements for permits such as grading, electrical, building, construction, environmental and other permits. It would be immensely beneficial to have a comprehensive and up-to-date database containing all the necessary information for each permit, in every jurisdiction including contact information and where exceptions can be made. Oftentimes, the challenge lies in the fact that permitting teams are understaffed. We had issues with staff turnover / lost knowledge midway through given how long these permitting timelines are. It is astonishing how outdated some of these procedures can be. For instance, in California, physical signatures on laminated drawings are still required for civil drawing approvals.</p>
</blockquote>

<p>A few early-stage startups have begun working in this space. <a href="https://www.blumensystems.com/">Blumen Systems</a> and <a href="https://www.paces.com/">Paces</a> are both working on automating discovery of permits and siting intelligence for climate tech companies.</p>

<p><img srcset="/images/cdr/player_siting_consultant.png 2x" alt="Siting Consultants" />
<img srcset="/images/cdr/relationship_cdr_supplier_siting_consultant.png 2x" alt="CDR Supplier & Siting Consultants" /></p>

<p>The permitting problem is painful enough that CDR suppliers may design their deployment strategy around it. For some companies, this means deploying in states or countries with less arduous permitting processes. Another strategy is to woo the right industrial partners.</p>

<h2 id="industrial-partners-piggy-back-on-a-massive-industry">Industrial partners: piggy-back on a massive industry</h2>

<p><img srcset="/images/cdr/relationship_cdr_supplier_industry_partner.png 2x" alt="CDR Supplier & Industry Partner" /></p>

<p>Many CDR providers form symbiotic relationships with industry interests. The industrial partners can accelerate CDR scaling by allowing the CDR supplier to operate on their land and use their permits. The CDR companies can save their partners money by turning their partners’ waste products into tools for CDR.</p>

<p>For example:</p>

<ul>
<li>Charm Industrial has agricultural and forestry partners to turn waste streams (corn stover and mechanical forest thinning respectively) into feedstock for its pyrolyzers. The agricultural partners get biochar, a soil-health enhancing material, distributed onto their fields in the process.</li>
<li>Ebb Carbon partners with desalination plants, helping them turn the waste hyper-saline solution extracted from sea water into a chemical input for Ebb. By installing on-site, Ebb gains access to a system for returning large volumes of water to the ocean.</li>
<li>Lithos has mining and agricultural partners. They turn basalt (a waste product of the mining industry) into soil health enhancing minerals for their agricultural partners. The agricultural partners provide them the large land area they need for spreading the basalt to maximize its reactivity.</li>
</ul>

<p>With lab-tested CDR methodologies and industrial partners in hand, suppliers can build out a plan to scale. To put that plan into action, they need cash, and lots of it.</p>

<h1 id="capital-show-me-the-money">Capital: show me the money</h1>

<p>The CDR suppliers have bills to pay, things to buy, and employees that need to eat. CDR suppliers’ capital comes predominantly from selling carbon removal credits, venture capital, and research grants.</p>

<h2 id="carbon-removal-credits">Carbon removal credits</h2>

<p><img srcset="/images/cdr/removal_credit.png 2x" alt="A carbon removal credit" /></p>

<p>Suppliers want to sell the removal of CO2 as a service. To make this a purchasable quantity, they parcel up tons of CO2 removed into units sometimes called “carbon removal credits”. The credit will show, at a minimum, how much CO2 was removed, who did the removal, and when the removal happened. Buyers of these credits can use this information as evidence of fulfilling public-facing climate commitments.</p>
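
<p>As a minimal sketch of what one of these credits carries (the field names below are hypothetical, not any registry’s actual schema):</p>

<pre><code>from dataclasses import dataclass
from datetime import date

# Hypothetical minimal shape of a carbon removal credit.
# Field names are illustrative, not any registry's actual schema.
@dataclass(frozen=True)
class RemovalCredit:
    credit_id: str     # unique ID, so a credit can't be claimed twice
    supplier: str      # who performed the removal
    tons_co2: float    # how much CO2 was removed
    removed_on: date   # when the removal happened
    method: str        # e.g. "bio-oil sequestration"

credit = RemovalCredit("cr-0001", "ExampleSupplier", 55.0,
                       date(2023, 5, 1), "bio-oil sequestration")
</code></pre>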

<p>So who are these buyers? Currently, nobody has a regulatory requirement to buy carbon removal <sup class="footnote-ref" id="fnref:3"><a rel="footnote" href="#fn:3">3</a></sup>, so anyone doing so is doing it voluntarily.</p>

<p>The industry is targeting $100/ton of CO2 removed, and 3.8 billion tons/year by 2050, which would make for a $380 billion/year market. At that scale, private-sector voluntary markets are going to be woefully inadequate, but we need to bootstrap a market to get there. This requires some buyers to act in economically irrational ways.</p>

<h3 id="the-corporate-buyers">The corporate buyers</h3>

<p>Surprisingly, there are a number of companies willing to do it anyway. Some are driven by a sense of moral obligation to build a better future. Some may expect this market to happen with or without them, so they’d like a seat at the table to shape future legislation. Whatever the motive, it’s been enough to kickstart a sizeable market. According to <a href="https://www.cdr.fyi/">cdr.fyi</a>, over 3 million tons in sales have been made as of May 2023, for a total transaction volume of $365 million USD since the industry’s inception.</p>

<p><img srcset="/images/cdr/player_corporate_buyers.png 2x" alt="Corporate buyers" /></p>

<p>One of the earliest buyers was Stripe <sup class="footnote-ref" id="fnref:4"><a rel="footnote" href="#fn:4">4</a></sup>, who announced in 2019 that they’d spend at least <a href="https://stripe.com/blog/negative-emissions-commitment">$1 million USD/year on carbon removal</a>. In 2020, they <a href="https://stripe.com/blog/first-negative-emissions-purchases">fulfilled that promise</a> by making purchases from Climeworks, Project Vesta, Charm Industrial, and Carbon Cure <sup class="footnote-ref" id="fnref:5"><a rel="footnote" href="#fn:5">5</a></sup>. For 3 of these companies, Stripe was their first customer.</p>

<p><img srcset="/images/cdr/relationship_corporate_buyer_cdr_supplier.png 2x" alt="Corporate buyers & CDR Suppliers" /></p>

<p><a href="https://www.linkedin.com/in/nan-ransohoff-50132a21/">Nan Ransohoff</a>, Stripe’s current Head of Climate, now also leads <a href="https://frontierclimate.com/">Frontier</a>, a coalition of Stripe, Alphabet, Shopify, Meta, and McKinsey which have collectively committed $1 billion towards purchasing carbon removal between 2022 and 2030 <sup class="footnote-ref" id="fnref:6"><a rel="footnote" href="#fn:6">6</a></sup>. This kind of advanced market commitment is crucial for building financial security for suppliers, which in turn provides security for investors knowing that these companies will have revenue to bolster their growth.</p>

<p>On May 23, 2023, <a href="https://www.jpmorganchase.com/news-stories/jpmorgan-chase-seeks-to-scale-investment-in-emerging-carbon-removal-technologies#:~:text=JPMorgan%20Chase%20signed%20a%209,CDR%20solution%20provider%20in%20DAC.">JPMorgan Chase signed contracts to purchase $200 million in carbon removal</a> via Climeworks, Charm Industrial, and a contribution to Frontier.</p>

<p>Microsoft <a href="https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/">explicitly committed to carbon negativity</a>, which requires the purchase of carbon removal. They have a <a href="https://app.powerbi.com/view?r=eyJrIjoiZTU5OTYwN2EtOTI3Ni00NGE0LThjNWItZTUzZTFlNWIxNzFhIiwidCI6ImMxMzZlZWMwLWZlOTItNDVlMC1iZWFlLTQ2OTg0OTczZTIzMiIsImMiOjF9">live dashboard of the removals</a> they’ve purchased contracts for, including which vendors provided it.</p>

<h3 id="the-curators">The curators</h3>

<p><img srcset="/images/cdr/player_enterprise_market_curators.png 2x" alt="Enterprise market curators" /></p>

<p>Early in this process, all purchases were made by bilateral agreements between climate-conscious companies and carbon removal providers, with no market curators in the middle. This meant that buyers had to do due diligence from scratch, and providers had to spend a ton of time on sales. This is one of the inefficiencies that companies like <a href="https://watershed.com/">Watershed</a> and <a href="https://www.patch.io/">Patch</a> are addressing.</p>

<p><img srcset="/images/cdr/relationship_corporate_buyer_market_curator_cdr_supplier.png 2x" alt="Corporate Buyer, Enterprise Market Curator, and CDR Supplier" /></p>

<p>The equivalent exists for individuals as well as corporations: <a href="https://www.wren.co/">Wren</a> and <a href="https://www.thecommons.earth/">Commons</a> provide subscriptions to individuals who want to offset their carbon footprint, buying carbon offsets and removals on their behalf. If you’re looking for a tax-deductible option, check out <a href="https://www.terrasetclimate.org/">Terraset</a>.</p>

<p><img srcset="/images/cdr/player_removal_subscriptions_for_individuals.png 2x" alt="Subscription Services for Individuals" />
<img srcset="/images/cdr/relationship_invidual_removal_subscription_cdr_supplier.png 2x" alt="Individual, Subscription Service for Individuals, and CDR Supplier" /></p>

<h3 id="pre-sale-of-credits-for-fledgling-companies">Pre-sale of credits for fledgling companies</h3>

<p>The money available to carbon removal suppliers once they can deliver removal is encouraging! But the market, as it stands today, is heavily supply constrained. While 3 million tons have been purchased, <a href="https://cdr.fyi/">only ~2% of those tons have actually been delivered</a>. Many suppliers are sold out of commitments for the next several years. So at the moment, we need ways of accelerating existing suppliers and of creating new ones. <a href="https://airminers.org/">AirMiners</a> is trying to accelerate the creation of CDR companies by helping them secure early funding through pre-sale of credits, discounted heavily based on the risk of credits never being delivered. <a href="https://twitter.com/nanransohoff/status/1654137081000763393">Frontier also offers pre-purchase agreements for fledgling CDR suppliers</a>.</p>

<p>The more traditional route for pre-revenue companies to get funding is venture capital.</p>

<h2 id="venture-capital">Venture capital</h2>

<p><img srcset="/images/cdr/player_vc.png 2x" alt="Venture Capital Firms" />
<img srcset="/images/cdr/relationship_vc_supplier.png 2x" alt="Venture Capital Firm & CDR Supplier" /></p>

<p>Venture capital funds raise money from wealthy individuals and funds-of-funds to invest in portfolios of high-risk companies. They assume that the majority of their portfolio companies will fail, but that a few of them will yield outsized returns. Making many bets in pure software is relatively cheap, because the cost to evaluate product-market fit is usually the salary of a few software engineers for a year or two. CDR companies, by contrast, typically need millions of dollars just to buy equipment, even for lab-scale prototypes. Because CDR is fundamentally about scaling physical processes, both the cost and the speed of scaling are much worse than in software, making it less appealing to most venture capitalists.</p>

<p>There are a few firms, however, with enough conviction in the space to invest in multiple providers: <sup class="footnote-ref" id="fnref:7"><a rel="footnote" href="#fn:7">7</a></sup></p>

<ul>
<li><a href="https://lowercarboncapital.com/companies/">Lowercarbon Capital</a> invested in Charm Industrial, Heirloom, Running Tide, Verdox, Undo, and Noya. Lowercarbon has a <a href="https://lowercarboncapital.com/2022/04/14/clean-up-on-aisle-earth/">$350 million fund dedicated to carbon removal</a>.</li>
<li><a href="https://www.mcjcollective.com/capital#portfolio">MCJ Collective</a> invested in Charm Industrial, Heirloom, and Noya.</li>
<li><a href="https://www.primeimpactfund.com/">Prime Impact Fund</a> invested in Charm Industrial, Verdox, and Project Vesta.</li>
<li><a href="https://breakthroughenergy.org/our-work/breakthrough-energy-ventures/bev-portfolio/">Breakthrough Energy Ventures</a> invested in Heirloom, and Verdox.</li>
</ul>

<p>Here’s what Yin Lu from MCJ Collective (a purely climate fund) said about how they think about investments into companies with physical assets as part of their portfolio:</p>

<blockquote>
<p>Roughly half of our investments historically have been in pure software companies and we expect that trend to continue.</p>

<p>Roughly half of our investments have some sort of real-world component to them. [&hellip;] Most of our physically oriented companies have created some form of new process such as carbon removal, chemical synthesis, food production, or the like.  What you need to believe for this category is that the industries they are in will be undergoing fundamental change over the next decade due to the move toward decarbonization. This is a once in a lifetime transition that will see new companies emerge to capture large shares of existing markets and achieve scale that was otherwise not possible in previous innovation cycles.</p>
</blockquote>

<p>Each time you raise venture capital, you sell part of the company. Founders looking for additional funding but wary of losing ownership will search for other sources of capital. The most common for early climate tech companies is grants.</p>

<h2 id="grants">Grants</h2>

<p><img srcset="/images/cdr/player_granting_agencies.png 2x" alt="Granting Agencies" /></p>

<p>For novel technology aligned with societal need, there are billions of dollars available in research grants. The government offers a variety of grants that are applicable to some subset of CDR companies:</p>

<ul>
<li>The Office of Clean Air Demonstrations has <a href="https://www.energy.gov/oced/regional-direct-air-capture-hubs">$3.5 billion available in grants to establish direct air capture hubs</a>.</li>
<li>The Advanced Research Projects Agency-Energy (ARPA-E) announced <a href="https://arpa-e.energy.gov/news-and-media/press-releases/us-department-energy-announces-45-million-validate-marine-carbon">$45 million in funding for companies working on validation of ocean-based carbon dioxide removal</a> and also provided <a href="https://arpa-e.energy.gov/technologies/exploratory-topics/direct-air-capture">funding for research into Direct Air Capture (DAC) technology</a>, including <a href="https://arpa-e.energy.gov/technologies/projects/transformative-low-cost-approach-direct-air-mineralization-co2-repeated">a grant to Heirloom for $476,811</a>.</li>
</ul>

<p>Private organizations also offer grants:</p>

<ul>
<li>The Musk Foundation partnered with XPRIZE to offer a <a href="https://www.xprize.org/prizes/carbonremoval">$100 million milestone-based prize for carbon dioxide removal providers</a>.</li>
<li>Additional Ventures created a <a href="https://www.additionalventures.org/initiatives/climate-action/oae-research-award/">$10 million research award to further research into ocean alkalinity enhancement</a>.</li>
</ul>

<p>Unlike venture capital, the money for grants is not exchanged for equity in the company receiving the grant. The granting agency gets nothing financial in return. This makes grants, in a sense, “free money”. But the application process can be time consuming, and grants commonly constrain how the money can be used. Even after grants are issued, they create additional work reporting progress back to the granting agency.</p>

<p><img srcset="/images/cdr/relationship_granting_agency_supplier.png 2x" alt="Granting Agency & CDR Supplier" /></p>

<p>It’s common for CDR companies to have full-time grant writers on staff, or to hire consulting firms to manage the paperwork load. Some companies skip grants altogether because of the time they consume. To help ease this process, companies like <a href="https://www.streamlineclimate.com/">Streamline</a> and <a href="https://www.pioneerclimate.com/">Pioneer</a> are leveraging modern AI (e.g. GPT-4) to gather requirements and fill in grant applications.</p>

<p><img srcset="/images/cdr/player_granting_services.png 2x" alt="Grant Service Companies" />
<img srcset="/images/cdr/relationship_cdr_supplier_grant_services.png 2x" alt="CDR Supplier & Grant Service Company" /></p>

<p>Even after grant approval, the time for the money to hit the company’s bank account can sometimes be upwards of a year. This is one place where loans can help.</p>

<h2 id="loans">Loans</h2>

<p><img srcset="/images/cdr/player_loan_providers.png 2x" alt="Loan Providers" /></p>

<p>If you’re quite confident you’ll have money in the future, but you need it now in order to help your company grow faster, you want some kind of loan. <a href="https://enduringplanet.com/products/climate-grant-advance">Enduring Planet</a> builds financial infrastructure to handle two cases of this situation, specifically for climate entrepreneurs.</p>

<p>The first is when you’ve demonstrated a steady stream of revenue. This is unsurprisingly called “<a href="https://enduringplanet.com/products/revenue-based-financing">Revenue Based Financing</a>”. The second, a “<a href="https://enduringplanet.com/products/climate-grant-advance">Climate Grant Advance</a>”, provides money to climate companies who’ve already been approved for a grant, but haven’t received the money yet.</p>

<p><img srcset="/images/cdr/relationship_loan_provider_cdr_supplier.png 2x" alt="CDR Supplier & Grant Service Company" /></p>

<p>As the market matures and the revenue streams for CDR companies become less risky, traditional finance organizations will play a larger role here.</p>

<hr />

<p>We’ve outlined two crucial blocks of the market: suppliers to remove carbon, and sources of money to make it happen. This system only works, however, if the folks with the money trust that their money has the intended effect.</p>

<h1 id="building-trust-where-exactly-did-my-money-go">Building trust: where, exactly, did my money go?</h1>

<p>If you owned a grocery store and needed to source some tomatoes, you’d try a few suppliers, and evaluate them on price, quality, and reliability of delivery. If the tomatoes taste like feet or always show up late, you’d switch suppliers. In the CDR market, customers are buying a promise that an invisible gas has been moved from one place to another, where the destination is far out of sight of the customer. So not only is it hard for the customer to “taste” the quality of the product, it’s hard to even know when it’s arrived!</p>

<p>To make the system work, we need a group of third parties whose primary function is to facilitate trust between suppliers and sources of capital.</p>

<p>This starts with building a shared understanding of the core mechanism of removal.</p>

<h2 id="fundamental-research-does-the-process-remove-carbon-at-all">Fundamental research: does the process remove carbon at all?</h2>

<p><img srcset="/images/cdr/players_research_labs_journals.png 2x" alt="Research Labs & Journals" /></p>

<p>The discovery and evaluation of methods for carbon dioxide removal originate in fundamental research within universities and government labs. Through peer review and challenges from subsequent papers, we develop increasing confidence in the fundamental principle behind each method.</p>

<p><img srcset="/images/cdr/relationship_research_labs_journals.png 2x" alt="Research Lab & Journal" /></p>

<p>For example, the paper <a href="https://www.researchgate.net/profile/Karl-Littau/publication/241677572_CO2_extraction_from_seawater_using_bipolar_membrane_electrodialysis/links/573ca88e08ae9f741b2eb8f9/CO2-extraction-from-seawater-using-bipolar-membrane-electrodialysis.pdf">CO2 extraction from seawater using bipolar membrane electrodialysis (2012)</a> was peer-reviewed and published in the Royal Society of Chemistry’s journal Energy &amp; Environmental Science. The first author of the paper, Matthew Eisaman, went on to found Ebb Carbon to implement and scale the practice described in the paper.</p>

<p><a href="https://www.nature.com/articles/s41467-020-16510-3">Ambient weathering of magnesium oxide for CO2 removal from air (2020)</a> was peer-reviewed and published in Nature Communications. The first author, Noah McQueen, is now co-founder of Heirloom, which is scaling this methodology.</p>

<p>This kind of research can serve as the catalyst to convince venture capitalists that a company concept isn’t totally nuts. As part of their investment memo for Heirloom, MCJ Collective cited this underlying science as a source of confidence:</p>

<blockquote>
<p>[T]he basis of Heirloom’s technology stems from research led, in part, by Dr. Jennifer Wilcox who now serves at the U.S. Department of Energy. While there inevitably will be a need to bridge the gap between research performed at the lab bench and commercializing the technology in market, we are confident this risk is managed by the encouraging findings of the research and Heirloom’s iterative prototyping.</p>

<p>&ndash; <a href="https://myclimatejourney.substack.com/p/our-investment-in-heirloom">“Our Investment in Heirloom”</a>, MCJ Collective</p>
</blockquote>

<p>These papers describe fundamental mechanisms and estimation protocols, but typically describe isolated steps at small scale. To understand climate impact, we need to look at the process holistically.</p>

<h2 id="lca-is-it-carbon-negative">LCA: is it carbon negative?</h2>

<p>Having a mechanism which removes carbon dioxide from the atmosphere is useless if you emit more carbon than you remove in the process. Emissions can come from, for example, the energy used to manufacture industrial equipment, the operation of that equipment, and transportation. Determining the <em>net</em> removal is the domain of Life Cycle Analysis (LCA).</p>

<figure>
<img src="/images/cdr/lca_math.png">
<figcaption>From <a href="https://d13en5kcqwfled.cloudfront.net/files/Bio-oil-proto-protocol.pdf">“Bio-oil Sequestration: Prototype Protocol for Measurement, Reporting, & Verification”</a></figcaption>
</figure>
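
<p>In spirit, the arithmetic is simple: subtract every emission the process incurs from the gross amount of carbon stored. Here’s a toy sketch; the stage names and numbers are made up, not Charm’s actual figures.</p>

<pre><code># Toy LCA arithmetic: net removal = gross carbon stored minus the
# lifecycle emissions incurred along the way. All numbers are made up.
gross_storage_tons = 100.0  # CO2 locked away, e.g. as injected bio-oil

lifecycle_emissions_tons = {
    "biomass collection and transport": 6.0,
    "pyrolysis energy use": 9.0,
    "injection and equipment": 3.0,
}

net_removal_tons = gross_storage_tons - sum(lifecycle_emissions_tons.values())
print(f"Net removal: {net_removal_tons} t CO2")  # 82.0, so carbon negative
assert net_removal_tons > 0, "the process must remove more than it emits"
</code></pre>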

<p>Absent relevant regulation, the process for establishing consensus on an LCA for a given company’s removal methodology is still up in the air, but it typically involves peer review from industry interests rather than academics. These LCAs are made publicly available for scrutiny.</p>

<p><a href="https://charmindustrial.com/Bio-oil_Sequestration__Protocol_for_Measurement_Reporting_and_Verification.pdf">The framework Charm Industrial uses to evaluate its net emissions</a> was written by <a href="https://www.carbon-direct.com/">Carbon Direct</a> and <a href="https://www.ecoengineers.us/">EcoEngineers</a>, and then reviewed by Charm Industrial, Lowercarbon Capital, Frontier, and others.</p>

<p><a href="https://assets-global.website-files.com/61f2f7381f60618eb5879371/643eff588341be0362c4f557_Running%20Tide%20Framework%20Protocol%202023.pdf">The framework Running Tide uses</a> was written by Running Tide, then reviewed by Lowercarbon Capital, Stripe, Patch, and others.</p>

<p>Once the fundamental removal mechanism is confirmed, the carbon negativity of the entire process is quantified via LCA, and the supplier can run the process outside the lab, the last crucial step is providing an auditing chain for each action in the removal. This is the domain of Measurement, Reporting, and Verification (MRV).</p>

<h2 id="mrv-what-did-my-money-specifically-do">MRV: what did my money, specifically, do?</h2>

<p>To see what MRV looks like in practice, it’s instructive to look at <a href="https://charmindustrial.com/registry">Charm Industrial’s registry</a>. On September 24, 2021, <a href="https://charmindustrial.com/order?orderId=rec1xzlzwBI4vpZ1b">Block purchased 55 tons of CO2 removal from Charm Industrial via Watershed</a>:</p>

<p><img src="/images/cdr/charm_registry_screenshot.png" alt="Screenshot from Charm Industrial's registry" /></p>

<p>To present this information to buyers, you need:</p>

<ul>
<li>Sensors deployed in the field (in this case, probably a scale for bio-oil)</li>
<li>A data collection pipeline to record this data reliably and accurately</li>
<li>Software to convert those raw sensor values into emission and removal numbers using the appropriate method’s LCA process</li>
<li>A way to build out a many-to-many relationship between deliveries and purchases and durably record those purchases to avoid double-counting (sketched below)</li>
</ul>
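
<p>To make the double-counting concern concrete, here’s a toy ledger that allocates delivered tons across purchases at most once. It’s a sketch of the bookkeeping problem, not how any real registry is implemented.</p>

<pre><code># Toy registry ledger: each delivered ton is allocated to a purchase
# exactly once, oldest purchase first. A sketch of the bookkeeping
# problem, not any real registry's implementation.
from collections import deque

purchases = deque([("Buyer A", 55.0), ("Buyer B", 30.0)])  # (buyer, tons ordered)
allocations = []                                           # (buyer, delivery, tons)

def record_delivery(delivery_id: str, tons: float) -> None:
    """Assign a delivery's tons to outstanding purchases, oldest first."""
    while tons > 0 and purchases:
        buyer, remaining = purchases[0]
        used = min(tons, remaining)
        allocations.append((buyer, delivery_id, used))
        tons -= used
        if used == remaining:
            purchases.popleft()   # this purchase is now fully delivered
        else:
            purchases[0] = (buyer, remaining - used)

record_delivery("delivery-1", 40.0)  # partially fills Buyer A's order
record_delivery("delivery-2", 60.0)  # finishes Buyer A, then fills Buyer B
print(allocations)
</code></pre>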

<p>Most steps in Charm’s process lend themselves well to direct measurement. If we trust that Charm reports the scale readings honestly before the bio-oil is injected deep underground, then we can trust the amount of removal being performed. This is also the case for direct air capture systems like Heirloom’s, where the pure CO2 stream captured can be directly measured.</p>

<p>There are still some sources of uncertainty in these processes, however. Using its <a href="https://carbonplan.org/research/cdr-verification">CDR Verification Framework</a>, CarbonPlan is working on mapping out the sources of uncertainty in calculating net emissions, and assigning “Verification Confidence Levels (VCLs)” ranging from 1 to 5 for different pathways.</p>

<p><img src="/images/cdr/carbonplan_screenshot.png" alt="Screenshot from Carbon Plan's CDR Verfication Framework" /></p>

<p>CarbonPlan assigns “Biomass Carbon Removal and Storage”, the pathway used by Charm, a VCL of 3-5. To examine what the remaining sources of uncertainty are, <a href="https://carbonplan.org/research/cdr-verification/biomass-carbon-removal-and-storage">check out the CarbonPlan website</a>.</p>

<p>For CDR methods using the ocean or soil to sequester carbon, measurement is more difficult, which yields higher uncertainty. For example, for ocean-based methods to accurately estimate their net removal, they need:</p>

<ul>
<li>Access to third party meteorological and oceanographic data like weather predictions, ocean currents, seabed topography, and satellite imagery</li>
<li>A much richer variety of sensors deployed into the ocean on buoys: humidity, barometric pressure, temperature, etc.</li>
<li>More resilient systems for sensor data collection, since the sensors will need to be left unattended for long spans of time</li>
<li>Computationally intensive ocean simulations to estimate the timeline and quantity of the removal</li>
</ul>

<p>Unlike with Charm’s or Heirloom’s processes, it’s infeasible to directly measure the amount of CO2 removed from the atmosphere. The removal takes place over a massive surface area of water, as the ocean itself absorbs CO2 from the atmosphere. Even if we all agree on all the sensor inputs, there’s a lot to debate about the methodology for turning those sensor inputs into the number of tons of CO2 removed.</p>
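
<p>A toy illustration of that debate: run the same alkalinity addition through a model whose uptake efficiency is uncertain, and you get a range of tons removed rather than a single number. The parameter ranges below are made up; the ~0.8 mol of CO2 per mol of alkalinity ceiling is an often-cited theoretical figure.</p>

<pre><code># Toy uncertainty propagation for ocean alkalinity enhancement MRV.
# The efficiency range below is made up for illustration.
import random

TONNES_CO2_PER_MOL = 44.01 / 1e6  # molar mass of CO2, in tonnes per mole

def estimated_tons(alkalinity_mol: float, uptake_efficiency: float) -> float:
    # Realized efficiency (mol CO2 absorbed per mol of alkalinity added)
    # is uncertain; ~0.8 is an often-cited theoretical ceiling.
    return alkalinity_mol * uptake_efficiency * TONNES_CO2_PER_MOL

random.seed(0)
runs = sorted(estimated_tons(1e9, random.uniform(0.4, 0.8))
              for _ in range(10_000))
low, high = runs[len(runs) // 20], runs[-(len(runs) // 20)]
print(f"90% of model runs: {low:,.0f} to {high:,.0f} t CO2 removed")
</code></pre>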

<p>These are the kinds of challenges that Running Tide and Ebb Carbon face. CarbonPlan assigns Running Tide’s original method, <a href="https://carbonplan.org/research/cdr-verification/ocean-biomass-sinking-no-harvest">“Ocean Biomass Sinking (No Harvest)”</a>, a VCL of 1-2, and Ebb Carbon&rsquo;s method, <a href="https://carbonplan.org/research/cdr-verification/ocean-alkalinity-enhancement-electrochemical">“Ocean Alkalinity Enhancement (Electrochemical)”</a>, a VCL of 3. We can raise these confidence levels over time by developing better measurement strategies through research and experimentation. This is presumably why, as noted in the grants section above, <a href="https://arpa-e.energy.gov/news-and-media/press-releases/us-department-energy-announces-45-million-validate-marine-carbon">ARPA-E announced $45 million USD in funding to validate marine CDR methods</a>.</p>

<p>If we can control uncertainty, ocean and soil-based systems hold a lot of promise. By piggy-backing off of natural processes already happening at massive scale, they tend to scale more quickly, with lower energy requirements and less novel machinery <sup class="footnote-ref" id="fnref:8"><a rel="footnote" href="#fn:8">8</a></sup>.</p>

<p>At the moment, this MRV process is done in-house by the CDR suppliers. This creates both duplication of effort (e.g. every supplier building their own web interface to expose to buyers) and a perverse incentive to over-estimate removal. For now, the market is small enough (and difficult enough to enter as a supplier) that trusting companies to act in good faith is reasonable. As the market scales, however, this will become less and less true. So we’ll eventually need MRV-specific companies.</p>

<h2 id="mrv-service-companies">MRV service companies</h2>

<p>A tiny minority of funding (especially venture capital) has gone to companies focused on MRV for CDR. But there are a few players emerging.</p>

<p><img srcset="/images/cdr/players_verification_service_modeling_infra.png 2x" alt="Verification Service & Registry, Earth Systems Modeling Data Infrastructure" /></p>

<p><a href="https://isometric.com/">Isometric</a> is building a multi-pathway registry and verification service. To minimize perverse incentives to fill their registry with as many credits as possible, they charge a per-ton verification fee to the buyers of carbon removal rather than accept per ton registration fees from suppliers. The registry portion of the platform would be similar to Charm Industrial’s registry, but supporting many companies and CDR processes, rather than just one.</p>

<p><img srcset="/images/cdr/relationship_carbon_removal_buyer_verification_service.png 2x" alt="Verification Service & Registry, Earth Systems Modeling Data Infrastructure" /></p>

<p>In order to do estimation and verification of ocean-based carbon removal, both CDR suppliers and verifiers will need infrastructure to run earth systems models. The models themselves come from academia. <a href="https://twitter.com/_Cworthy">⟦C⟧worthy</a> is a non-profit working on improving these models for CDR purposes.</p>

<p><a href="https://www.submarine.earth/">Submarine</a> is adapting these models to support the MRV process and building the data infrastructure needed to run them at scale for use by both CDR suppliers and verification services.</p>

<p><img srcset="/images/cdr/relationship_cdr_supplier_earth_systems_modeling.png 2x" alt="Verification Service & Registry, Earth Systems Modeling Data Infrastructure" /></p>

<h1 id="let-s-recap">Let’s recap</h1>

<p>Okay! That was a lot of information to swallow! Let’s do a quick recap.</p>

<p>To stabilize our climate, we need to dramatically reduce our emissions. But even with optimistic estimates for our rate of reduction, we&rsquo;ll still need technology to remove CO2 directly from the atmosphere. To reach the scale of removal needed, we need to double the rate of global CO2 removal every 21 months for the next 27 years.</p>

<p><img src="/images/cdr/CDR_growth_rate.png" alt="Growth rate of needed carbon-dioxide removal from 2000 to 2050" /></p>

<p>To make this happen, we need to accelerate a variety of market players and processes.</p>

<p><img src="/images/cdr/money_flows.png" alt="Market map showing the flow of money through the CDR ecosystem" /></p>

<ul>
<li><strong>Suppliers:</strong> Companies like Charm Industrial, Running Tide, Heirloom, Eion, Lithos, and Ebb Carbon have two main jobs: capturing carbon dioxide directly from the atmosphere, and storing it away so that it never leaks back into the atmosphere.

<ul>
<li><strong>Permitting:</strong> To build industrial infrastructure, suppliers need permits. This process is long and painful right now. Companies like Blumen and Paces want to make this less painful.</li>
<li><strong>Industry Partners:</strong> To gain access to land and permits, CDR suppliers partner with massive industries like agriculture, mining, steel manufacturing, and desalination plants. In exchange, the partners get decreased cost of waste removal, or co-benefits like improved soil quality.</li>
</ul></li>
<li><strong>Capital:</strong> To fund operations, suppliers have a few key sources of capital:

<ul>
<li><strong>Sale of carbon removal credits:</strong> Corporations and individuals voluntarily buy carbon removal credits. Companies like Watershed and Patch provide curated market access to enterprise customers. Companies like Wren and Commons provide access for individuals. Later, there may be regulatory requirements to buy these, or possibly direct procurement by governments.</li>
<li><strong>Venture capital:</strong> Investment funds like Lowercarbon Capital and MCJ Collective buy partial ownership of suppliers’ companies.</li>
<li><strong>Grants:</strong> Organizations (mostly government agencies like the DOE or ARPA-E) provide money for research, which suppliers can apply for. To make this process easier, companies like Streamline and Pioneer are helping write and manage grant applications.</li>
<li><strong>Loans:</strong> If suppliers can demonstrate that they will have money available to them down the road either from revenue or grant approvals, financial institutions like Enduring Planet will give them an advance on that capital.</li>
</ul></li>
<li><strong>Trust:</strong> For the market to work, everyone involved must have consensus on how money is turning into permanently removed carbon dioxide.

<ul>
<li><strong>Research:</strong> Academic, governmental, and industry research organizations publish papers on the fundamentals of a given CO2 removal method. These papers are peer reviewed and distributed by journals.</li>
<li><strong>Life cycle analysis (LCA):</strong> Suppliers, usually in conjunction with a third party consultancy, publish a methodology showing the CO2 emissions and removal at every step in the process, demonstrating that the method creates net negative emissions.</li>
<li><strong>Measurement, reporting, and verification (MRV):</strong> As suppliers run their process for removal in the real world, they carefully record measurements along the way. This can include things like the weight of bio-oil being injected, or the ocean salinity reading from a buoy where a fluid was released into the ocean. The LCA is then applied to determine the net removal from the actions of the company, which is entered into a registry so carbon credit buyers can see what their money was used for. Companies like Isometric and Submarine are working on improving these processes, alongside non-profit organizations like ⟦C⟧worthy.</li>
</ul></li>
</ul>

<p>Now then, what should you do with all this information?</p>

<h1 id="where-to-go-from-here">Where to go from here</h1>

<p>When asked what advice she’d offer to software engineers interested in contributing to the climate fight, here&rsquo;s what the founder of <a href="https://www.sparkclimate.org/">Spark Climate Solutions</a> said:</p>

<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">There are two overarching archetypes of paths I&#39;ve seen here: <br><br>(a) find a software engineering job in a climate-related org or company, or <br><br>(b) go figure out where there are holes in the ecosystem, and use problem-solving, technical, and leadership skills to make a difference. <a href="https://t.co/uo8zqWnHn5">https://t.co/uo8zqWnHn5</a></p>&mdash; Erika Reinhardt (@embrein) <a href="https://twitter.com/embrein/status/1553862254235250691?ref_src=twsrc%5Etfw">July 31, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>

<p>If you’re interested in path (a) and find yourself intrigued by carbon dioxide removal, many of the companies referenced in this article are hiring! Here are some job openings:</p>

<ul>
<li><a href="https://jobs.lever.co/runningtide/060d4bbd-e94a-4b7b-ba99-c55d2158ee4c">Full Stack Developer @ Running Tide (Remote USA)</a>: Running Tide does CDR via marine biomass cultivation.</li>
<li><a href="https://ebb-carbon.breezy.hr/p/262f7057317f-software-engineer-full-stack">Full Stack Software Engineer @ Ebb Carbon (San Carlos, California)</a>: Ebb Carbon does CDR via electrochemical ocean alkalinity enhancement.</li>
<li><a href="https://jobs.lever.co/HeirloomTechnologies/94a122b1-d46c-40bc-bedf-4546767cf1c3">Lead Modeling and Simulation Engineer @ Heirloom (Brisbane, California)</a>: Heirloom does CDR via direct air capture.</li>
<li><a href="https://undo.bamboohr.com/careers/138">Technical Lead @ Undo (London, UK)</a>: Undo does CDR via enhanced rock weathering.</li>
<li><a href="https://isometric.com/careers">Platform &amp; SRE Engineering Roles @ Isometric (London, UK)</a>: Isometric is building a verification and multi-pathway registration service for carbon removal.</li>
<li><a href="https://touchgrass.notion.site/Founding-Engineer-Streamline-06274451855444ab9f7e25a172ae0fd4">Founding Engineer @ Streamline (San Francisco, California)</a>: Streamline helps climate companies apply for grants much faster.</li>
<li><a href="https://blumen-systems.notion.site/Jobs-at-Blumen-81ac1ead879a4b6c9be385dde1194dd8">Founding Engineer @ Blumen (San Francisco, California)</a>: Blumen provides siting intelligence tools for geothermal, hydrogen, carbon sequestration, and minerals project developers.</li>
<li><a href="https://watershed.com/jobs">Multiple Software Engineering Positions @ Watershed (San Francisco, NYC, London, Remote)</a>: Watershed, among many other things, provides a curated marketplace for carbon removal purchases.</li>
<li><a href="https://www.patch.io/careers/eea962a8-4581-4873-8663-908e6d440b88">Software Engineer @ Patch (San Francisco)</a>: Patch also does many things, but one is to provide a curated marketplace for carbon removal purchases.</li>
<li><a href="https://stripe.com/jobs/listing/full-stack-engineer-climate/4877410">Full Stack Engineer, Climate @ Stripe. (Remote)</a>: Stripe helped catalyze the start of the permanent CDR market with early purchases.</li>
</ul>

<p>If you’re interested in path (b), here’s the consistent advice I’ve gotten from people who have walked this path <sup class="footnote-ref" id="fnref:9"><a rel="footnote" href="#fn:9">9</a></sup>:</p>

<ol>
<li>Talk to a bunch of people in a specific industry. If the problems are too diverse, narrow down the company category or job role until they become consistent.</li>
<li>Choose a specific problem. Ask people about the problem to understand if it’s a mild annoyance, or the source of a constant living hell. If you want to build a big venture capital-backed business, make sure solving this problem has a path to building a $1 billion company.</li>
<li>Propose a specific solution.</li>
<li>Try to rapidly figure out why your solution is, in fact, a bad idea. If you fail to find evidence it’s a bad idea, try to implement it. If you find strong evidence, return to step 2.</li>
</ol>

<p>In the course of researching and writing this article, I’ve mostly been doing steps 1 and 2. Here are the problems I’m considering exploring for steps 3 and 4:</p>

<ul>
<li>MRV for ocean-based CDR</li>
<li>MRV for soil-based CDR</li>
<li>Removing duplication of effort in sensor data aggregation for new CDR companies</li>
</ul>

<p>If you’re also interested in path (b), the good news is that finding the frontier problems in the space doesn’t take that long, and people will be happy to see you when you get there:</p>

<p><blockquote class="twitter-tweet"><p lang="en" dir="ltr">A weird property of the frontier: finding the edge forces a realization that there’s very few people there, the others who’ve found it are tightly clustered and therefore quite happy to see you, and you all can’t help but ask “where is everybody?” in escalating confusion</p>&mdash; Ryan Orbuch (@orbuch) <a href="https://twitter.com/orbuch/status/1470052994142121997?ref_src=twsrc%5Etfw">December 12, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>

<h1 id="further-reading-acknowledgements">Further reading &amp; acknowledgements</h1>

<p>This article follows in a short lineage of folks building their understanding of CDR from the ground up and sharing what they learned:</p>

<ul>
<li><a href="https://www.orbuch.com/carbon-removal/">“We Need To Take CO2 Out Of The Sky”</a> by Ryan Orbuch. Ryan is now a Partner at Lowercarbon Capital leading their $350 million carbon removal fund. If you want to stay on top of the space, <a href="https://twitter.com/orbuch">following him on Twitter</a> is a good bet.</li>
<li><a href="https://www.scalingcarbonremoval.com/index.html">“Scaling CDR”</a> by Neil Hacker. Neil is now a Researcher at Isometric, helping to build their verification and registry service for carbon removal.</li>
</ul>

<p>If you enjoyed reading this, here are some other resources you might like:</p>

<ul>
<li>For an alternate take on the current state of CDR, see <a href="https://www.stateofcdr.org/">The State of Carbon Dioxide Removal</a>.</li>
<li>For a crash course in CDR, see AirMiners’ free 6 week cohort-based course called <a href="https://bootup.airminers.org/">Boot Up</a>.</li>
<li>For a video series on the ongoing evolution of the industry, see <a href="https://www.youtube.com/playlist?list=PL1je2pACUAbKdS4529vLLHgZR2MGk9KLm">OpenAir’s THIS IS CDR</a>.</li>
<li>For writing on how to shape the growing CDR market, see <a href="https://greatunwind.substack.com/">The Great Unwind</a>.</li>
</ul>

<p>Thanks to the folks at Lowercarbon Capital, MCJ Collective, Heirloom, Charm Industrial, Eion, Ebb Carbon, Isometric, Submarine, Vesta, Running Tide, Airminers, Blumen, Streamline, Avnos, Lillianah, and Lithos for sharing their knowledge and connections in the ecosystem.</p>

<p>Thanks to Ashley Zhang, Dhen Padilla, JN Fang, Jason Benn, Lyn Stoler, Max Krieger, Mishti Sharma, Neil Hacker, Temina Madon, and Ryan Gomba for reading drafts of this article and providing the invaluable feedback needed to shape it.</p>

<p>Special thanks to Owen Wang for being my exploration buddy throughout this endeavour, bouncing ideas around with me, and always reminding me to write down the questions I want to ask before each new meeting. I wouldn’t have had the energy to kickstart this without you.</p>

<p>If you&rsquo;re interested in commissioning writing like this about a climate-related industry vertical, or about your own early-stage climate tech company, get in touch with me at jamie.lf.wong@gmail.com. If you&rsquo;re building at the frontier of CDR (and especially if you&rsquo;re working on MRV for mCDR), please reach out!</p>
<div class="footnotes">

<hr />

<ol>
<li id="fn:1">​​This assumes 100,000 tons will be removed in 2023, and that we need the capability to remove 3,800,000,000 tons/year by 2050.
 <a class="footnote-return" href="#fnref:1"><sup>[return]</sup></a></li>
<li id="fn:2">See ​​<a href="https://cdrprimer.org/read/chapter-1#sec-1-4">https://cdrprimer.org/read/chapter-1#sec-1-4</a>
 <a class="footnote-return" href="#fnref:2"><sup>[return]</sup></a></li>
<li id="fn:3">At time of writing, <a href="https://trackbill.com/bill/california-senate-bill-308-carbon-dioxide-removal-market-development-act/2353484/#:~:text=This%20bill%20would%20enact%20the,gas%20emissions%2C%20as%20determined%20by">SB308</a> is under debate in the California legislature, which <em>would</em> create a regulatory requirement to buy carbon removal.
 <a class="footnote-return" href="#fnref:3"><sup>[return]</sup></a></li>
<li id="fn:4">​​​​Many members of the climate team at Stripe eventually scattered to build critical infrastructure to support CDR: ​​Taylor Francis, Avi Itskovich, and Christian Anderson left to start Watershed which, among other things, provides climate-conscious corporations curated access to buying carbon removal. ​​Ryan Orbuch left to join Lowercarbon Capital, now the leading venture capital fund in carbon removal.
 <a class="footnote-return" href="#fnref:4"><sup>[return]</sup></a></li>
<li id="fn:5">In the process, they open sourced all of the applications and purchase agreements, which serve as a great summary of core methodologies used by all the applying suppliers, free from marketing fluff. You can see them <a href="https://github.com/stripe/carbon-removal-source-materials">here on GitHub</a>.
 <a class="footnote-return" href="#fnref:5"><sup>[return]</sup></a></li>
<li id="fn:6">​​Similar to Stripe’s internal process, Frontier also maintains <a href="https://github.com/frontierclimate/carbon-removal-source-materials">open source applications and purchase agreements on GitHub</a>.
 <a class="footnote-return" href="#fnref:6"><sup>[return]</sup></a></li>
<li id="fn:7">​​Occasionally, folks worry that carbon-dioxide removal is overallocated. But even within climate tech, CDR represents a relatively small portion of VC dollars. <2% of climate sector venture capital went to carbon removal as of 2022. See <a href="https://www.ctvc.co/40b-and-1-000-deals-in-2022-market-downtick/">&rdquo;$40B and 1,000+ deals in 2022 market downtick&rdquo; from CTVC for details</a>.
 <a class="footnote-return" href="#fnref:7"><sup>[return]</sup></a></li>
<li id="fn:8">For more about the risks of biasing too far towards the easily observed processes, see <a href="https://greatunwind.substack.com/p/leveling-the-playing-field-for-open">“Leveling the Playing Field for Open-System Carbon Removal”</a>.
 <a class="footnote-return" href="#fnref:8"><sup>[return]</sup></a></li>
<li id="fn:9">The most highly recommended guide for holding conversations in this phase of the process is <a href="https://www.momtestbook.com/">&ldquo;The Mom Test&rdquo;</a>. It&rsquo;s about how to ask questions in a way that even your own mother wouldn&rsquo;t lie to you to protect your ego. Many friends have informed me that your mother lying you to protect your ego is a decidedly culturally-specific phenomenon.
 <a class="footnote-return" href="#fnref:9"><sup>[return]</sup></a></li>
</ol>
</div>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Debugging Misadventures: Down the Rabbit Hole]]></title>
    <link href="http://jamie-wong.com/post/debugging-misadventures/"/>
    <updated>2020-10-15T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/debugging-misadventures/</id>
    <content type="html"><![CDATA[ 

<style>

@media only screen and (min-width: 1250px) {
    article > p > img {
        left: 50%;
        margin-left: -30vw;
        margin-right: -30vw;
        max-width: 60vw;
        width: 60vw;
        position: relative;
        right: 50%;
    }
}

article > p > img {
  border: 2px solid rgba(0, 100, 0, 0.3);
}
</style>

<p><img src="/images/debugging-misadventures/bug-in-the-maze.jpg" alt="A bug in the maze" /></p>

<p><em>For the past few years for Christmas, I’ve asked people in my family for stories from their lives rather than a physical gift.</em> <em>Last year, my sister Emma, who works as a dentist, asked me for the same. This is the story I sent my sister, along with some illustrations by my very multitalented coworker <a href="https://www.instagram.com/jessichenliu">Jessica Liu</a>.</em></p>

<p>A lot of my day-to-day work as a software engineer involves debugging. We expect the system to do X but it does Y instead. Why?</p>

<p>One of my coworkers, <a href="https://karljiang.com/">Karl</a>, really enjoys this process. He imagines the bug as a sneaky adversary, running around a maze, trying to evade him as he methodically searches through each turn. In the end, he always finds his hidden foe, wins a round of this game, and rests awaiting his next tour of the maze.</p>

<p>All of my most bizarre debugging adventures come from my time working at <a href="https://www.figma.com/">Figma</a>. I’ve visited other offices, found <a href="https://monorail-prod.appspot.com/p/chromium/issues/detail?id=1018028">bugs</a> <a href="https://monorail-prod.appspot.com/p/chromium/issues/detail?id=959052">in</a> <a href="https://monorail-prod.appspot.com/p/chromium/issues/detail?id=966533">Chrome</a>, and been bedevilled by <a href="https://github.com/npm/npm/pull/20027">a “Back to the Future” easter egg left in third party software</a>, but the most outrageous of these adventures took me on an hour long Lyft ride from San Francisco to Mountain View on a Wednesday afternoon to meet a stranger who’d never heard of Figma.</p>

<p><img src="/images/debugging-misadventures/stuck-outside.jpg" alt="Stuck outside" /></p>

<p>Debugging is often a process of making and testing hypotheses. Since you think the system should be doing one thing but it’s doing another, it must mean that one of your assumptions about how the system works is wrong.</p>

<p>For example, let’s say that your friend is trying to get into your apartment using your electronic keypad but it isn’t working. They might call you and tell you “Hey, I can’t get in.” You told them the code before, so they should be able to get in. Something you’re assuming is true clearly isn’t, so you ask them some questions to test your hypotheses.</p>

<p>“The code is 1499, is that what you’re entering?”<br/>
“Yeah, that’s what I entered. I’ll try it again right now… Nope. Still nothing.”</p>

<p>The first assumption you were testing was that they had the correct code. It turns out that assumption was correct, so that can’t be the problem. Time to test a different hypothesis.</p>

<p>“Do the buttons light up when you press them?”<br/>
“Yep. They light up. When I hit the second 9, the keypad turns red, blinks, and then goes dark.”<br/></p>

<p>Okay, so the batteries aren’t dead. Your assumption that the keypad has power was right too. What could possibly be wrong?<br/></p>

<p>“Uhh, are you sure you’re entering it in right?”<br/>
“Yes. Duh. I’m not an idiot. I press 1, then I press 4, then 9, then 9 again. It goes red. I turn the handle. Door doesn’t budge.”<br/></p>

<p>“Hmm…”</p>

<p>“Are you sure you’re telling me the code right? My hands are freezing out here trying to enter this code, and your fireplace inside is just taunting me.”<br/>
“What fireplace? Wait. What address are you at?”<br/>
“907 Elk St, like you told me.”<br/></p>

<p>You check the last text message you sent your friend. You did indeed send “907 Elk St”. You live at “907 Elm”.</p>

<p><img src="/images/debugging-misadventures/laundry.jpg" alt="The missing sock" /></p>

<p>Different bugs can require wildly different amounts of work to figure out.</p>

<p>One of the key things affecting how long it takes to diagnose and fix a bug is the duration of the feedback cycle. Finding the root cause of a bug can take anywhere from one to hundreds of hypothesis tests. When it takes 100 cycles of “test hypothesis → form new hypothesis”, a feedback cycle of 10 seconds versus 10 minutes makes the difference between finding the cause in under 20 minutes and finding it in two days of full-time work.</p>

<p>All sorts of things can disrupt this feedback cycle.</p>

<p>Some bugs are tricky because they’re non-deterministic, which is a technobabble way of saying “even when you do the exact same things in the exact same order, you get different results.” Every once in a while, the laundry machine seems to swallow a sock, but you don’t know why. You think you might be able to stop it by always tying your socks together before putting them in the laundry machine. On the next load of laundry, you tie all your socks together in pairs and put them in. When you pull them out all your socks are still there! But did you fix the problem, or did you just get lucky this time? You can neither accept nor reject your hypothesis yet.</p>

<p>Another especially frustrating category of bug is nicknamed the “Heisenbug”, after the physicist Werner Heisenberg. Heisenberg was one of the first to assert that the mere act of observing a phenomenon may cause it to change its behaviour. In terms of software, what this means is that as soon as you start trying to change the code to have it tell you more about what it’s doing and what might be going wrong, the bug stops appearing! Imagine if your dentistry patient’s toothache went away when they opened their mouth even slightly.</p>

<p>But most of the gnarliest bugs I’ve had to work on at Figma suffer from a third kind of feedback loop problem: we can’t reproduce the problem, but some of our users can, and they hit it all the time.</p>

<p>This can happen for all sorts of reasons. They might have some Chrome extension installed that none of us do, and not realize that was an important part of the puzzle. Their network administrator might’ve blocked some communication channel that’s essential for Figma’s functioning over the internet. After diagnosing this kind of problem, we can usually either find a way of reproducing it by installing the same software, or ask them to talk to their own coworkers to resolve the problem.</p>

<p>But the most frustrating version of the “no reproduction” issue is when something about their hardware is interacting poorly with our software. And the most fickle variety of hardware Figma deals with more directly than most other companies is graphics cards.</p>

<p>A graphics card is the component in your computer that’s specially designed to efficiently perform the kinds of computations needed to update the millions of pixels on your screen at sixty frames per second.</p>

<p>I’m fond of computer graphics in part because <a href="https://twitter.com/ryanjkaplan/status/1300994564187070466">the bugs can be so</a> <a href="https://twitter.com/ryanjkaplan/status/1300994564187070466">entertaining</a>. Sometimes instead of text being drawn legibly on the screen, it’s upside-down and flickering madly. It’s got a special hall-of-mirrors kind of beauty to it.</p>

<p>These bugs are much less entertaining when they’re causing customers distress and we can’t reproduce the problems on our own hardware.</p>

<hr />

<p>The first time I was involved in debugging a graphics card problem was in October, 2017. A designer at one of our early customers, a recruiting software company called Lever, reported that for some files, their entire canvas went black in Figma. We couldn’t reproduce it, but the designer at Lever consistently could. Thankfully, Lever and Figma’s offices were a short bus ride away, so I went with our CTO <a href="https://twitter.com/evanwallace">Evan Wallace</a> and one other coworker to investigate. They lent us their laptop (a MacBook Pro with an unusual graphics card inside), and I watched Evan test hypothesis after hypothesis about what could be going wrong on the machine using some special diagnostic software he’d written. After about 30 minutes, he was able to find a fix. We thanked our gracious hosts at Lever, and returned to our office, everyone quite happy.</p>

<p>The next time was in June, 2018. This time, a number of our users had written in to us reporting that with a change in Figma’s graphics code, they started seeing patterns of visual noise appear in their design files. After chatting with many of the users through our support system, we were able to deduce that they all had graphics cards from the same series: the GeForce GTX 10 series. This time, none of the affected users were anywhere in San Francisco, so we couldn’t just drop by their office. We could ask them to hop on a phone call, but it would be a lot to ask for them to sit with us for hours, typing what we asked them to into their computer word by word or giving us total remote access. Thankfully, I looked online and saw that Best Buy had laptops with this kind of graphics card in stock. So I took a car to the San Francisco Best Buy location, talked to one of the sales associates to confirm that the laptop I was looking at had the card I needed, charged a few thousand dollars to the company card our CEO <a href="https://twitter.com/zoink">Dylan Field</a> had lent me, and was on my merry way. A few hours later we’d figured out the cause and had a workaround to dodge the problem.</p>

<p>But by far the most outrageous instance of this was in May, 2019. We had just started rolling out changes I was spearheading to a core graphics component of Figma when a small portion of our user base wrote in to tell us something was wrong. Instead of the image taking up the full screen as they expected, it was about an eighth of the expected size, stuck up in the top left corner. After some careful analysis, we realized that they were all on Windows 7, and all using a specific version of a graphics card manufactured by Intel. Now knowing how this dance works, we looked to see if any of them were in San Francisco. Nope. One in Mumbai, one in Indonesia. We actually didn’t find any Figma users reporting this problem anywhere in the United States. Digging further, we discovered that the affected machines were pretty old laptops, now out of vogue in America but relatively common in Russia and South Asia. They were so old, in fact, that nobody sells them any more.</p>

<p>Well, shit.</p>

<p>We knew that to diagnose this sanely, we needed to have the device in hand. My coworker Ryan mentioned that a friend of his worked in the device lab at Dropbox and might be able to lend us a device. Large companies like Dropbox and Google keep extensive repositories of hardware configurations in their device labs to deal with exactly this contingency. But Figma was still under 100 people at the time, so we were far from having a fully stocked warehouse of obscure graphics cards.</p>

<p>While we waited to hear back from Ryan’s contact, the third teammate working with me on this graphics component, Lauren, turned to me and said “Hey Jamie, how many followers do you have on Twitter again?” “Just over 3000”, I responded. “Why don’t you see if any of them can help you?”</p>

<p>I thought there was no way that would work, but I had nothing to lose, so I sent out a tweet saying “Okay, I need to debug on a really specific old laptop configuration. I need a Windows 7 or 8 laptop with an Intel HD Graphics card. If you have one and are in the SF bay area, I&rsquo;ll buy you a meal or a drink if you can let me use it!”</p>

<p>Literally <em>one minute</em> later, a man named Scott replied “How long do you need it for?”</p>

<p>After confirming with him that he could reproduce the problem we’d been seeing on this laptop, <em>and</em> that he’d be willing to lend me the laptop for a few weeks, I asked if he’d be willing to meet me at a coffee shop somewhere convenient for him the next day. He agreed.</p>

<p><img src="/images/debugging-misadventures/at-starbucks.jpg" alt="At the coffee shop" /></p>

<p>The next day I hopped in a Lyft from Figma’s office in San Francisco destined for a Starbucks in Mountain View. Once I arrived at the Starbucks, I did the awkward “waiting for someone you’ve never met before” dance. Maybe you’ve done this before for a date where the profile picture was unclear, or maybe for a networking connection set up through a friend. After a few of the Starbucks patrons assured me that they were, indeed, not Scott, I sat down and waited.</p>

<p>Scott walked in a few minutes later. I stood up to greet him, and we made some idle chatter while I got us both drinks from the cashier. We settled into some tables nearby. As it turns out, the laptop Scott was going to lend me miraculously already had none of his own personal files on it. It was a spare laptop he kept around to drive custom karaoke events at different anime conventions. As long as I could return or replace the laptop before the event, he was happy to lend it to me. Scott works as a Technical Support Engineer at a big company, and just happened to follow me on Twitter after seeing <a href="http://jamie-wong.com/post/color/">my blog post about color</a>. He’d never heard of Figma before, though he was interested in it when I explained a bit more.</p>

<p>After talking for about half an hour, I thanked Scott profusely and returned to San Francisco in another hour long car ride. The day after, I was able to isolate the problem, create a workaround, and unblock our project. Reluctant to give up the laptop since it was the only way we had of debugging similar issues, I asked my manager for permission to expense sending a replacement laptop to Scott. The request was approved, and now Scott is the happy owner of a nicer, newer laptop, paid for by a company he had never heard of, as thanks for helping a person he had never met.</p>

<p>Scott is a great guy.</p>

<p><em>Thanks to <a href="https://nikhilthota.com/">Nikhil Thota</a>, <a href="https://www.spencerchang.me/">Spencer Chang</a>, <a href="https://medium.com/@andeeliao">Andee Liao</a>, and Lauren Budorick for providing feedback on drafts of this post, to <a href="https://www.instagram.com/jessichenliu">Jessica Liu</a> for the illustrations, and to Karl, Evan, Dylan, Ryan, and Lauren for being a part of this story!</em></p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ A potential employee’s guide to Silicon Valley startup equity]]></title>
    <link href="http://jamie-wong.com/post/valley-equity/"/>
    <updated>2020-09-14T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/valley-equity/</id>
    <content type="html"><![CDATA[ 

<style>
li + li {
  margin-top: 0;
}
p + ul, p + ol {
  margin-top: -30px;
}
</style>

<p><img src="/images/valley-equity/Monopoly-Hero.png?2020-09-20" alt="" /></p>

<p>You’re done with your interviews, and you have a few Silicon Valley startup offers in hand. The offers have a bunch of numbers and you’re not sure how to put them all together. You’re bombarded by terms like ISOs and NSOs and vesting period and cliff and strike price and fair market value that you don’t really understand, but owning 0.1% of the company sounds great in theory, so you go to sign on the dotted line.</p>

<p><strong>Pause.</strong> Subtle differences here can have <em>massive</em> downstream consequences. If you’re unaware of some of these policies, you might end up paying $100,000+ more in taxes than you anticipated, or be forced to choose between staying at a job you don’t love and leaving potential millions on the table.</p>

<p>Back in 2012, I was at a conference called <a href="http://cusec.net/">CUSEC</a>. At the post-conference happy hour, an Engineering Manager at Twitter named <a href="https://twitter.com/chanian">Ian Chan</a> took a bunch of us aside. He told us he was going to tell us something that could potentially save us a ton of money if we ever joined a startup. He proceeded to explain what an 83(b) election is, and he was right — it did and will save me a ton of money. I’d like to pass that knowledge on to you.</p>

<p>This post is going to focus on a narrow but common set of circumstances for employees at early startups (<a href="https://carta.com/blog/options-vs-rsus/">usually &lt;$1B valuation</a>). We’ll assume that the equity part of the offer is for incentive stock options (ISOs), rather than restricted stock units (RSUs), or actual stock if it’s a public company. Companies usually move from ISOs → RSUs → publicly tradable stock as the company grows. This guide is exclusively for ISOs. Since tax implications are important here, and taxes are specific to your locale, this post also specifically references American taxes.</p>

<p><strong><em>DISCLAIMER: I have zero professional experience in tax or law. I’m an early software engineer at</em></strong> <a href="http://figma.com/"><strong><em>Figma</em></strong></a> <strong><em>(company was ~20 people when I joined). Do not use this post as a tax calculator! Use this post as a tool to ask better questions of well-informed people about potential outcomes.</em></strong></p>

<hr />

<h1 id="the-standard-deal">The standard deal</h1>

<p><img src="/images/valley-equity/Monopoly-Go.png?2020-09-20" style="max-height:424px"/></p>

<p>A standard deal for an early startup in the valley looks something like this:</p>

<ul>
<li>X shares</li>
<li>Strike price of $Y</li>
<li>4 year vest, 1 year cliff</li>
</ul>

<p>To make calculations easier for the rest of the post, I’m going to use the following semi-realistic numbers:</p>

<ul>
<li>20,000 shares</li>
<li>Strike price of $0.30</li>
<li>4 year vest, 1 year cliff</li>
</ul>

<p>Let’s take a look at what each of these terms means.</p>

<p><strong>20,000 shares:</strong> Your offer letter might say something like “Subject to the approval of the Board, you will be granted an option to purchase 20,000 shares”. This 20,000 is the number of shares from the company you’ll eventually be allowed to buy. This is similar to the # of shares you might buy of a public stock, except that you won’t be able to freely buy and sell these shares whenever you want to.</p>

<p><strong>Strike price of</strong> <strong>$0.30</strong><strong>:</strong> The strike price is the price you’ll pay per share to purchase. So if you want to buy 7,000 of the shares and your strike price is $0.30, then you’ll need to give the company 7,000⨉$0.30=$2,100. Crucially, as the value of the company increases (hopefully), this price does <em>not</em> increase. If the company increases in valuation from $1M to $50M between when you join and when you purchase, you still only pay $2,100, not $105,000. This increase in the value of the shares without an increase in the price to purchase is the whole point of stock options.</p>

<p><strong>4 year vest, 1 year cliff:</strong> This is a standard “vesting schedule”. In simplest terms, this means if you stay for 4 years, you have the right to purchase the full number of shares, and if you stay for less than 1, you leave with nothing.</p>

<p>In more subtlety, it means that at the 1 year mark, you gain the right to purchase <sup>1</sup>&frasl;<sub>4</sub> of the total agreed upon share count (20,000/4=5,000). Every month after that, you’ll be able to purchase an additional <sup>1</sup>&frasl;<sub>48</sub> of the total share count (20,000/48≈417).</p>

<p><img src="/images/valley-equity/vesting-schedule.png?2020-09-20" alt="" /></p>

<p>Once you hit each vesting date, you aren’t <em>required</em> to purchase. As long as you’re employed at the company, you have the right to purchase any shares you’ve already vested.</p>

<p>It’s worth mentioning that many companies have an equity refresher program, where employees are granted additional equity after they’ve been at the company for a while. This helps keep their total compensation from dropping dramatically once their initial grant is fully vested.</p>

<h1 id="time-to-purchase-the-stock">Time to purchase the stock</h1>

<p><img src="/images/valley-equity/Monopoly-AMT.png?2020-09-20" style="max-height: 424px" /></p>

<p>You’ve made it to your 1 year cliff! Congratulations! You’ve earned the right to purchase some of the company stock. Purchasing the stock is called “exercising the option”. Since you have a four year vesting schedule, you now have the right to exercise <sup>1</sup>&frasl;<sub>4</sub> of your options. In the above offer, this allows you to purchase 20,000/4=5,000 shares. The purchase cost here is easy to calculate. It’s just the # of shares times the strike price: 5,000⨉$0.30 = $1,500.</p>

<p><img src="/images/valley-equity/price-to-purchase.png?2020-09-20" alt="" /></p>

<p><strong>But what about the taxes?</strong> Taxes can create an extremely employee-hostile situation here.</p>

<p>Let’s say that your company has been absolutely crushing it, and since you’ve joined, the company has increased its valuation 10x from $1M to $10M. The valuation that’s relevant here is called the “fair market value” or the “<a href="https://carta.com/blog/what-is-a-409a-valuation">409A valuation</a>”, which is sometimes different than the valuation in press releases about venture capital fund raises. Companies are required to get a new 409A valuation at least once every 12 months.</p>

<p><img src="/images/valley-equity/409a-valuation-changes.png?2020-09-20" alt="" /></p>

<p>While you paid $0.30 per share, since the company has increased its valuation 10x, as far as the IRS is concerned, those exercised shares are worth 10x as well. So they’re worth $3.00 per share now. This sounds great until you realize that the IRS wants to tax you on the difference between those two amounts.</p>

<p><img src="/images/valley-equity/409a-tax-liability.png?2020-09-20" alt="" /></p>

<p>The exact tax details here are complicated, but the crucial thing is that you can get taxed by this thing called <a href="https://en.wikipedia.org/wiki/Alternative_minimum_tax#Alternative_minimum_tax_calculation">AMT (Alternative Minimum Tax)</a> on the difference<sup class="footnote-ref" id="fnref:1"><a rel="footnote" href="#fn:1">1</a></sup>. Since we’re mostly interested in ballpark figures here, we’ll assume an AMT rate of 35% (28% Federal + 7% California State).</p>

<p>This means you’ll pay around (35%)⨉(5,000)⨉($3.00-$0.30)=$4,725 in taxes <em>on top</em> of the purchase price you pay to the company.
<img src="/images/valley-equity/tax-liability.png?2020-09-20" alt="" /></p>

<p>This brings your new total cash needed to $1,500+$4,725=$6,225. Notice how the taxes here are way more than the purchase price itself.</p>

<p><img src="/images/valley-equity/total-cash-needed.png?2020-09-20" alt="" /></p>

<p>Now you might think “okay, well, that sucks, but I guess I can sell off some of my shares to cover the taxes on it”. Except you can’t. Even though you’ve earned the right to purchase the shares, there’s no legal market to <em>sell</em> the shares yet. <strong>So you can’t sell them</strong>. The shares are “illiquid”. So you’ve been heavily taxed on an asset that you can’t sell, and that might be worth $0 if the company goes bankrupt before you have any chance to sell.</p>

<p>Given that, a reasonable strategy might be to just hold onto the options until you can convert the shares into cash after you buy them. At that point, you <em>will</em> be able to sell shares to cover the taxes.</p>

<h1 id="time-to-quit">Time to quit</h1>

<p><img src="/images/valley-equity/Monopoly-Golden-Handcuffs.png?2020-09-20" style="max-height: 424px" /></p>

<p>Given the scary tax treatment, you decide to wait to exercise. You just hold onto the options instead. You’ve now been at the company for 3 years, and you just got a <em>really</em> tempting offer from another company, so you’re considering quitting.</p>

<p>You might hope that you can just continue holding on to your stock options after you leave until the stock becomes liquid, but in many cases, you’ll be sorely disappointed.</p>

<p>Every stock option agreement will have what’s called a “post-termination exercise window”. This is the amount of time after you leave in which you’ll still be allowed to exercise your stock options. After this window closes, you forfeit all un-exercised stock options. The standard post-termination exercise window is only <em>90 days</em>.</p>

<p>This leaves you with a choice between three crummy strategies:</p>

<ol>
<li>Forfeit your equity</li>
<li>Pay heavy taxes on an illiquid asset that might ultimately be worth $0</li>
<li>Stay at the company, and turn down the other job offer</li>
</ol>

<p>The same calculation as before applies to the exercise strategy, but just to lay it out, let’s assume the company has continued on a rocket trajectory and now has a fair market valuation of $100M, yielding a new fair market share value of $30.</p>

<p><img src="/images/valley-equity/409a-valuation-changes-year-3.png?2020-09-20" alt="" /></p>

<p>You’ve now vested <sup>3</sup>&frasl;<sub>4</sub> of your stock option agreement, allowing you to purchase (<sup>3</sup>&frasl;<sub>4</sub>)⨉20,000=15,000 shares for 15,000⨉$0.30=$4,500. But the IRS now thinks these shares are worth $30.00 each, for a total value of 15,000⨉$30.00=$450,000.</p>

<p>Your AMT tax liability now becomes ($450,000-$4,500)⨉35% = $155,925. So the total amount of cash you need to put in to claim these shares becomes $4,500+$155,925=$160,425. This might be <em>way</em> more cash than you can afford to put in.</p>

<p><img src="/images/valley-equity/price-to-purchase-year-3.png?2020-09-20" alt="" /></p>

<p>This situation is what people sometimes refer to as “golden handcuffs”. Given that their choices if they quit are “spend a ton of money on something very risky” or “walk away with nothing”, many people will simply stay until the company IPOs, even if they’d rather quit.</p>

<h1 id="avoiding-the-golden-handcuffs">Avoiding the golden handcuffs</h1>

<p><img src="/images/valley-equity/Monopoly-Get-Out-Of-Golden-Handcuffs.png?2020-09-20" alt="" /></p>

<p>There are two main ways I know of which allow employees to avoid this situation: long post-termination exercise windows, and early exercise. These are both policies that companies may or may not offer to employees, so you should ask about them when considering your offer!</p>

<h2 id="long-post-termination-exercise-window">Long post-termination exercise window</h2>

<p>Rather than 90 days, some companies offer much longer exercise windows. Figma has a 5 year window for all employees with a tenure of over 2.5 years. Coinbase has a 7 year post-termination exercise window for all employees with a tenure of over 2 years. Pinterest had a 7 year window before they IPO’d.</p>

<p>A 5 year post-termination exercise window means that you can leave the company and hold onto your options for up to 5 years after you leave. If any time during that 5 years the company’s shares become liquid, you’ll be able to exercise your options, buy the shares, and sell off some of them to cover your taxes. If, any time during that 5 years, the company goes bankrupt, you avoided spending any of your own money purchasing &amp; paying taxes on an asset now worth $0.</p>

<p>There’s some subtlety here where the options change from ISOs to NSOs if you do this, which I won’t go into detail for. See the <a href="https://carta.com/blog/equity-101-exercising-and-taxes/">“ISO and NSO tax treatment” section of this blog post by Carta</a> to learn more. At a high level it sounds like gains from ISOs are taxed as capital gains, and NSOs are taxed as income.</p>

<h2 id="early-exercise">Early exercise</h2>

<p>Another way to avoid the golden handcuffs problem is to allow employees to purchase shares <em>before</em> the options are vested. Under an early exercise program, employees can purchase up to their entire grant immediately. If you leave the company before your vesting is complete, the company has the right to buy back the unvested shares at either the strike price or the current market value, whichever is lower.</p>

<p>So let’s say that you’ve just joined a company with the same stock option agreement as before (20,000 shares, $0.30 strike price, 4 year vest, 1 year cliff), but one that <em>also</em> has early exercise available.</p>

<p>You opt to exercise the entire agreement immediately. You pay 20,000⨉$0.30=$6,000 to the company immediately. To make this beneficial, you also need to file a form with the IRS called an <a href="https://www.cooleygo.com/what-is-a-section-83b-election/">“83(b) election”</a>. This roughly tells the IRS that you want to be taxed on the equity now rather than when the stock options vest. <strong>You have to file your 83(b) within 30 days of exercising. Don’t mess this up.</strong></p>

<p>What about the AMT? Well, you get taxed on the difference between the current fair market value and the strike price. Assuming you exercise as soon as the company board sets your strike price, that difference should be zero, because the strike price is set at the fair market value! So if you get the timing right, you pay $0 in taxes at the point of exercise.</p>

<p><img src="/images/valley-equity/price-to-purchase-early-exercise.png?2020-09-20" alt="" /></p>

<p>You’ll still get taxed on capital gains when you eventually sell the shares, but at least you can avoid getting taxed on the asset while it’s illiquid.</p>

<p>The downside of this is that you’re giving the company money to purchase an asset that might go down to zero at the point in time where you have the least information (the beginning). The upside is that you have no golden handcuffs! Whenever you choose to leave, you walk away with all of the equity you’ve vested to date without paying a dollar more.</p>

<p>Even if your company doesn’t have early exercise available when you join, it’s still worth discussing. Figma did not offer early exercise when I joined, but adopted the policy before the next 409A valuation, so I was still able to take advantage of it.</p>

<p>Even if you’ve had a new 409A valuation since your strike price was set, it’s possible that the difference between the fair market value and your strike price is still small enough that the tax liability is small, making early exercise still worthwhile.</p>

<h3 id="qualified-small-business-stock">Qualified Small Business Stock</h3>

<p>There&rsquo;s a more subtle benefit of exercising early if the company has gross assets under $50M: the stock may be considered <a href="https://www.brownadvisory.com/us/qsbs-tax-exemption-valuable-benefit-startup-founders-and-builders">Qualified Small Business Stock (QSBS)</a>. If it is considered QSBS, and if you’ve held the stock for 5 years or longer at the point of sale, then you’ll pay 0% capital gains to the IRS during that sale.</p>

<p>To see how much this matters, let’s say that the company IPOs 4 years into your tenure. It does well in the market, and 5 years after your start date, you decide you’d like to sell everything at the current price of $40. You have your entire stock option grant of 20,000 shares vested <em>and</em> exercised now, so the gross sales here will be 20,000⨉$40=$800,000, and the price paid was your strike price times the number of shares, so 20,000⨉$0.30=$6,000. For non-QSBS shares, you’ll be taxed federal capital gains, which is going to be at least 15% and possibly more. ($800,000-$6,000)⨉15%=$119,100. For QSBS shares, you pay no federal capital gains<sup class="footnote-ref" id="fnref:2"><a rel="footnote" href="#fn:2">2</a></sup>, so you save that entire $119,100! You&rsquo;ll likely still owe state capital gains taxes, which can still be hefty, but you have to pay that regardless.</p>

<h1 id="questions-you-should-be-asking">Questions you should be asking</h1>

<p><img src="/images/valley-equity/Monopoly-Important-Questions.png?2020-09-20" style="max-height: 424px" /></p>

<p>If you’re weighing your different offers, here are some questions you should be asking about the equity to make sense of this. Each of these things is unlikely to be in the offer letter.</p>

<ol>
<li>How long is the post-termination exercise window?</li>
<li>Is early exercise available? If not, why not? It&rsquo;s possible the founders are unaware of this kind of policy existing, and would be happy to offer it. You can pitch it as a recruiting benefit to you <em>and</em> future potential employees. You can also push for this policy change even if you&rsquo;ve already joined.</li>
<li>What is the strike price?</li>
<li>What do you think the total growth potential of the company is from current value? 10x? 100x?</li>
<li>What is the total number of shares outstanding? This is the “denominator” to think about when trying to understand what your number of shares means. This will let you calculate your % ownership of the company (ignoring dilution), which will help you gut check the maximum value of your shares assuming the company does really well.</li>
</ol>

<p>Before I signed my <a href="http://figma.com/">Figma</a> offer in 2016, Figma&rsquo;s CEO <a href="https://twitter.com/zoink">Dylan Field</a> said something akin to “do your financial planning assuming this equity ends up being worthless”. All of the examples in this post are <em>incredibly</em> optimistic, so keep in mind that everything could still hit zero.</p>

<p>Plan for the worst, hope for the best, and do the math.</p>

<h1 id="further-reading">Further reading</h1>

<p><a href="https://carta.com/">Carta</a>, a company which manages startup equity, has similar guides with more details but less emphasis on how you can get screwed. They include descriptions of ISOs &amp; NSOs and dilution which I skip in this post.</p>

<ul>
<li><a href="https://carta.com/blog/equity-101-stock-option-basics/">Equity 101 part 1: Startup employee stock options</a></li>
<li><a href="https://carta.com/blog/equity-101-stock-economics/">Equity 101 Part 2: Stock option strike prices</a></li>
<li><a href="https://carta.com/blog/equity-101-exercising-and-taxes/">Equity 101 Part 3: How stock options are taxed</a></li>
<li><a href="https://carta.com/blog/what-is-a-409a-valuation/">What is a 409A valuation?</a></li>
</ul>

<p><em>Thanks to <a href="https://digitalfreepen.com/">Rudi Chen</a> and <a href="https://twitter.com/madebyklau">Kevin Lau</a> for feedback on drafts of this post, to <a href="https://twitter.com/chanian">Ian Chan</a> for making me think about this in the first place, and to <a href="https://twitter.com/b0rk">Julia Evans</a> for writing a <a href="https://jvns.ca/blog/2015/12/30/do-the-math-on-your-stock-options/">similar post about stock options</a> which I’ve sent to a ton of people to in the past. <a href="https://www.vecteezy.com/free-vector/handcuffs">Handcuffs Vector by Vecteezy</a>.</em></p>
<div class="footnotes">

<hr />

<ol>
<li id="fn:1">The rough gist of AMT is that it&rsquo;s a completely parallel taxation structure to regular taxation. You calculate your tax liability under the regular rules, then calculate it under AMT, then you have to pay the maximum. As far as I know, the illiquid gains from your stock exercise are taxable only under AMT, so it&rsquo;s possible that the taxation on the stock gains might be enough to make AMT more than your regular taxes. If, however, your gains are smaller, then it&rsquo;s possible you won&rsquo;t get hit by this because your regular tax might still be more than AMT, even after our stock gains are considered. In that situation, you effectively don&rsquo;t pay tax on the stock gains at the point of exercise.
 <a class="footnote-return" href="#fnref:1"><sup>[return]</sup></a></li>
<li id="fn:2">There are limits to the exemptions on your capital gains ($10M or 10x your investment, whichever is greater). If you actually hit this limit, congratulations! Your taxes are probably much more complicated than mine 😅
 <a class="footnote-return" href="#fnref:2"><sup>[return]</sup></a></li>
</ol>
</div>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Tools for Sanity in Isolation]]></title>
    <link href="http://jamie-wong.com/post/tools-for-sanity"/>
    <updated>2020-04-14T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/tools-for-sanity</id>
    <content type="html"><![CDATA[ 

<style>

.follow-along {
  border: 2px dashed #83A4DC;
  margin: 0 -30px 30px -30px;
  padding: 20px 30px 20px 30px;
}

.follow-along h3 {
  color: black;
}
</style>

<figure>
<img src="/images/tools-for-sanity/zoom.png" />
<figcaption>This is what sanity looks like, I swear</figcaption>
</figure>

<p>I live alone at the moment, and the start of quarantine was… rocky.</p>

<p>Here’s a jewel of an entry from my journal, from March 13:</p>

<blockquote>
<p>Fuck, this week has been bad.</p>
</blockquote>

<p>But, I think I’m kind of getting the hang of this. Here’s an entry from March 29:</p>

<blockquote>
<p>I’m in a surprisingly good mood right now. I kind of have a headache, but aside from that I feel pretty good. Mood probably like an 8 / 10?</p>
</blockquote>

<p>So I’d like to share a bit about what I think’s been helping bridge the gap between “Fuck, this week has been bad” and “8/10”. This isn’t a guide on how to be Your Best Self™️, or how to productivity hack your way into emerging from quarantine with a PhD and a six pack. This is a collection of ideas that I think have helped me reset myself to general okay-ness. First and foremost, this is a post I’m writing as guidance for future-me, who has temporarily forgotten all of this and is in need of it again.</p>

<h1 id="getting-out-of-the-pit">Getting out of the pit</h1>

<p><img src="/images/the-pit/transition-graph.png" alt="" /></p>

<p>Earlier this year, I wrote about pulling myself out of the transient depths of despair aka <a href="http://jamie-wong.com/post/the-pit/">the “Everything is Terrible” pit</a>. I find myself looking for my list of tools for exiting the pit frequently. Early on in self-quarantine, I found myself in the same trap of feeling guilty about not being able to do more, but recognizing that I was in the pit helped me.</p>

<p>Here’s a continuation of my journal from the “Fuck, this week has been bad” entry:</p>

<blockquote>
<p>I really want to be a force of positivity. I want to find ways to help my friends connect, and in a way that feels good to me too. To help pull us all out of this anxiety. But joining in a big group call just isn’t the thing that works for me. I think that’s what being introverted really means to me. I need to connect with people 1:1, and I just need to do something with them that isn’t talk about COVID. But I don’t know how to get off the topic. I need to invest my time into something, anything that isn’t that. I definitely did notice myself slowly emerging from the pit after watching some Fullmetal Alchemist. I was able to clean my desk a little and put away dishes. And I guess I was able to pull myself out enough to be able to write this.</p>
</blockquote>

<p>Okay, so I got out of the pit through recognizing I was there and taking the baby steps I needed to climb back into the cozy cabin. Next, let’s consider how to extend our stay in the cabin instead of tripping back into the pit every few steps.</p>

<div class="follow-along">
<h3>Follow along with me</h3>

Try writing down the three things that genuinely help you when you’re in a shitty place. I know <em>lots</em> of things that I gravitate towards when I’m in the pit that <em>don’t</em> help, so having an explicit list to look at when I’m in there really helps me.
</div>

<h1 id="rebuilding-my-daily-habits">Rebuilding my daily habits</h1>

<p>Physical isolation from other people coinciding with the entire worldwide media cycle focusing (correctly) on a pandemic was a glorious kind of multi-targeted assassination of everyone’s emotional regulation toolkit. The exercise I get from going to the gym or dance classes? <em>POOF</em>. The ambient social interaction I get from having lunch with my coworkers? <em>GONE</em>. The warm glow of looking forward to parties or group trips? <em>NOPE</em>.</p>

<p>The removal of all of those things is out of my control. So, once I was back on steady ground momentarily, I started re-examining what <em>was</em> within my control to emotionally regulate. Starting with habits.</p>

<p>There’s a certain class of advice that’s:</p>

<ol>
<li>Insulting cliche</li>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2729718/">Really</a> well <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6795685/">supported</a> by <a href="https://www.sciencedirect.com/science/article/pii/S1469029206000069">science</a></li>
<li>Really easy to ignore, even after you&rsquo;ve followed it in the past</li>
</ol>

<p>For me, four of those standard nuggets of wisdom for emotional regulation are:</p>

<ul>
<li>Exercise regularly</li>
<li>Meditate</li>
<li>Sleep at the same time every day</li>
<li>Talk to friends frequently</li>
</ul>

<p>From the picture of my goal tracker below, you might be able to see some <em>vague</em> sort of pattern for how my mood correlated with my ability to complete my habits.</p>

<figure>
<img src="/images/tools-for-sanity/habit-tracker.png" />
<figcaption>"Sit": meditate, "Lift": strength training, "Dance": take a wild guess, "Connect": connect to a close friend 1:1, "No feeds": don’t spend any time on infinite feed websites, "Early rest": get into bed and away from electronics before midnight.</figcaption>
</figure>

<p>I track my habits on a piece of paper that I have clipped to my bedroom wall. Before you bombard me with your favourite apps for habit tracking, I <em>like</em> having this as a physical piece of paper. First, because I find scribbling in my habit completion way more satisfying than tapping a piece of glass, and second, because it’s an awful lot easier to avoid confronting an empty digital calendar once I start slipping than it is to avoid looking at my own bedroom wall forever.</p>

<p>In case you need to hear it, <em>you can do this</em>. You can build habits. You don’t have to choose any of the same habits as me, but hopefully you have some sense of what things are important for you to be happy, and hopefully <em>some</em> of them are still doable within the confines of your home while by yourself. List them out. Make your own grid on a piece of paper. My pre-printed one filled up, and I don’t own a printer, so I made my own using a ruler and a pen:</p>

<figure>
<img src="/images/tools-for-sanity/handmade-habit-tracker.png">
<figcaption>The ruler was… necessary. The early grids didn’t really live up to their name.</figcaption>
</figure>

<div class="follow-along">
<h3>Follow along with me</h3>
<p>
Write down the 3-5 daily or near-daily habits you want to uphold. Once you pick them, make a grid for each week on a piece of paper like above. There are lots of other cool visual layouts for this from bullet journal aficionados here: <a href="https://www.bulletjournaladdict.com/collections/50-bullet-journal-habit-tracker-ideas/">50 Habit Tracker Ideas for Bullet Journals</a>. If you want some kind of external accountability, ask a friend to check in with you each day about them, or use an app like <a href="https://www.beeminder.com/">Beeminder</a> or <a href="https://getspar.com/">Spar</a>. I used Beeminder for a while, and it’s pretty effective, though I eventually didn’t like how external the motivation felt.
</p>

I haven’t actually read any books on habit building, but I’ve repeatedly heard good things about both <a href="https://jamesclear.com/atomic-habits">“Atomic Habits”</a> and <a href="https://www.amazon.com/Power-Habit-What-Life-Business/dp/081298160X">“The Power of Habit”</a>.
</div>

<h1 id="making-a-schedule">Making a schedule</h1>

<p>Having aspirational habits is all fine and good, but until you have a schedule that’s realistic about how much time each habit takes, it’ll be easy for them to remain lofty aspirations.</p>

<p>My routine every day now looks like this:</p>
<pre><code>7:00-7:30 wakeup, breakfast, watch anime
7:30-8:30 exercise while listening to podcasts
8:30-9:30 shower &amp; meditate (I call a friend to meditate with)
10:00-6:00 work (except weekends)
6:00-7:00 dinner (I usually call a friend to talk while I cook)
7:00-10:00 different stuff every night
10:00-10:30 journal
10:30-11:30 get ready for bed &amp; read
11:30-7:00 sleep
</code></pre>
<p>As it turns out, I almost always wake up 30-60 minutes later than I’m planning on and exercise a little less than I planned on and go to bed a little later than I was planning on, but I still get a fair chunk of this done.</p>

<p>I want to emphasize that these are the things I chose not because this is how I’m going to “win” at life, but because they seem to actually help me feel like me. If your routine looks more like this:</p>
<pre><code>11:00-11:30 wakeup, breakfast, people watch through your window
11:30-12:00 doodle
12:00-1:00 order delivery from local restaurant, eat while watching 90 day fiancé
1:00-6:00 free time! watch movies, game, read, doodle, TikTok
6:00-7:00 group call with friends eating leftovers from lunch
7:00-1:00 play Stardew Valley with friends
1:00-11:00 sleep
</code></pre>
<p>…and that makes you feel great, <em>that’s great. Do that.</em></p>

<p>If you can understand what it is you need to be happy, and can schedule those things so you don’t have to spend the energy <em>every day</em> planning how to do them, then you’ll hopefully have more energy to <em>actually</em> do them.</p>

<p>If you have no idea what you need, experiment! Plan a schedule, do your best to stick to it for a week, and then re-evaluate at the end of the week.</p>

<div class="follow-along">
<h3>Follow along with me</h3>
Take the 3-5 habits you picked and figure out what order (and ideally what time) you’re going to do them in. Put them in your calendar if you use one.
</div>

<h1 id="planning-for-the-week">Planning for the week</h1>

<p>I noticed this really dumb pattern in my pre-shelter-in-place life. It would start when I wake up in the morning and say “Gee, I’d really like to have dinner with a friend tonight”. Then I look at the clock, discover I’m late to work, and bolt to catch the next subway to work. On the way, I open my phone to message friends, get distracted, and end up engaging in a work conversation on Slack. Then I’m at work, and every few hours I have the thought that I should <em>really</em> message a friend to grab dinner, but get pulled into something else then forget. Now it’s 6:00pm. I message a few friends, but they all unsurprisingly have plans for the meal that is now <em>30 minutes away.</em> I grab takeout nearby, go home, and am a little bit sad.</p>

<p>Normally, having dinner by myself when I wasn’t planning on it is a little demoralizing, but I get my social time in at work, so it’s no big deal. During shelter in place, as far as I can tell, whether I had a good conversation with a friend is the single strongest indicator of whether I feel good at the end of the day, so it’s <em>really</em> important that I make this part of my day.</p>

<p>This is where batch planning really helps me out.</p>

<p>Every Sunday, I have a bunch of time blocked off to plan for the coming week.</p>

<p><img src="/images/tools-for-sanity/gcal-planning.png" alt="" /></p>

<p>The social planning part is how I avoid the last-minute-dinner-demoralization. I schedule out the coming week, and think about friends I haven’t caught up with in a while and set up calls over dinner. I’ll talk to them while I’m cooking most of the time.</p>

<p>I think this kind of batch planning has a few benefits over trying to schedule daily.</p>

<p>First, it gives me something to look forward to in the week! One of the most crushing things about Coronavirus for a lot of people has been the <em>horrible lack of things to look forward to.</em> So, make your own things to look forward to!</p>

<p>Second, it protects me against my own anti-social tendencies when I’m in the pit. When I feel like trash, I find it really hard to reach out. But if I batch plan this kind of stuff on Sunday and feel like a dumpster fire on a Tuesday, then Sunday-me’s got my back: friendship call at 6:00pm.</p>

<p>Lastly, I find that I just come up with more interesting ideas when I have explicit time set aside to plan stuff for the week. For example, I had this idea for a silly experiment where I asked a friend to send me a grocery list for a recipe without telling me what the recipe is. Tomorrow I’m going to call her and try to cook the recipe asking her only yes/no and numeric-answer questions and see how hilariously I screw it up. If I was trying to set up calls every day ad-hoc, I don’t think I would’ve stumbled on this kind of idea.</p>

<div class="follow-along">
<h3>Follow along with me</h3>
Think about things you want to happen every week but require advanced planning. For me, figuring out what I wanna cook that week to make a grocery list and figuring out who I want to talk to that week for friendship calls are the main things. Take those things, and put time in your calendar to do them.
</div>

<h1 id="old-nourishing-media">Old, nourishing media</h1>

<p>Beyond the nothing-to-look-forward-to conundrum, the early days of shelter-in-place were hard for me in part because of how overwhelmingly difficult it was to think or talk about <em>anything</em> else. This is true not only of major news outlets, but also of nearly every single content producer I follow. <a href="https://www.youtube.com/watch?v=I5-dI74zxPg">Mark Rober</a>, <a href="https://www.youtube.com/watch?v=cCNW9jO7EyM">vlogbrothers</a>, <a href="https://www.youtube.com/watch?v=Kas0tIxDvrg&amp;t=319s">3blue1brown</a>, <a href="https://www.youtube.com/watch?v=sbEj7M3aZIg">Smarter Every Day</a>, <a href="https://www.npr.org/2020/03/13/815677688/episode-979-medicine-for-the-economy">Planet Money</a>, etc. etc. It makes sense — once something is on everyone’s minds, it’s going to be difficult to <em>produce</em> any content that’s not that. The irony of feeling overwhelmed by this while writing coronavirus-topical content myself isn’t lost on me.</p>

<p>Following the news to make well-informed decisions is a noble goal. But consuming a ton of media about crises tends to give me a load of anxiety without actually compelling me into positive action. In the first few days, I was doing what Hank Green beautifully dubbed <a href="https://www.youtube.com/watch?v=q6xA-oh6xUM">“The Anxious Scroll”</a>:</p>

<blockquote>
<p>When I&rsquo;m doing the anxious scroll I feel as if I&rsquo;m doing something useful, and I&rsquo;m seeing the same 3-5 stories over and over again, so that they seem like 300 - 500 stories, and I want to know more and want to know what it&rsquo;s gonna be like tomorrow, and I want to know what it&rsquo;s gonna be like in 3 weeks, and I feel like I&rsquo;m doing something that&rsquo;s going to uncover that reality and uncover that truth and I’m just not.</p>
</blockquote>

<p>So lately when I do consume media, I’ve been intentionally listening to old media. I finished rewatching <a href="https://www.crunchyroll.com/fullmetal-alchemist-brotherhood">Fullmetal Alchemist: Brotherhood</a>. I’ve been listening to the old Radiolab mini-series G about intelligence and <a href="https://www.wnycstudios.org/podcasts/radiolab/articles/g-relative-genius">the theft of Albert Einstein’s brain</a>. I’ve been rewatching Avatar: The Last Airbender with some friends.</p>

<p>I’m not advocating for sticking your head in the sand and pretending that the world is business as usual. Stay informed enough to make sensible decisions both for yourself and your community, but stay away from the anxious scroll.</p>

<h1 id="laughing-at-myself">Laughing at myself</h1>

<p>I think there’ve been spans of my life where I took myself and my problems altogether too seriously. I’m a strong proponent of confronting your problems rather than minimizing them by brushing them off as inconsequential compared to the problems of others (this helps nobody), but I also think there’s a beautiful light in being able to find humour in dark places.</p>

<p>To that effect, I’ll leave you with a little story.</p>

<p>In a group chat I’m in, a few of my friends started exchanging videos of them with their partners or quarantine buddies attempting to do some TikTok challenges, like this:</p>

<p><blockquote class="tiktok-embed" cite="https://www.tiktok.com/@blake.wood/video/6796745042288577797" data-video-id="6796745042288577797" style="max-width: 605px;min-width: 325px;" > <section> <a target="_blank" title="@blake.wood" href="https://www.tiktok.com/@blake.wood">@blake.wood</a> <p>Seesaw challenge completed ✅ <a title="fyp" target="_blank" href="https://www.tiktok.com/tag/fyp">##fyp</a> <a title="xyzcba" target="_blank" href="https://www.tiktok.com/tag/xyzcba">##xyzcba</a> <a title="seesaw" target="_blank" href="https://www.tiktok.com/tag/seesaw">##seesaw</a> <a title="magicboots" target="_blank" href="https://www.tiktok.com/tag/magicboots">##MagicBoots</a> @lorna.bensted</p> <a target="_blank" title="♬ Lalala - Y2K & bbno$" href="https://www.tiktok.com/music/Lalala-6699935602407639814">♬ Lalala - Y2K &amp; bbno$</a> </section> </blockquote> <script async src="https://www.tiktok.com/embed.js"></script></p>

<p>In watching my friends excitedly exchange their attempt videos, I felt a little sad.</p>

<p>I don’t have a romantic partner to do this with at the moment, and I don’t have a quarantine buddy to attempt it with.</p>

<p>What I <em>do</em> have, however, is Photoshop.</p>

<p><img src="/images/tools-for-sanity/self-support.png" alt="" /></p>

<p>Hang in there, even if the only physical support you have around is yourself.</p>

<p><em>Thanks to <a href="https://medium.com/@andeeliao">Andee Liao</a>, <a href="https://www.spencerchang.me/">Spencer Chang</a>, and <a href="https://shiwolfblog.wordpress.com/">Lilly Shi</a> for reading drafts and encouraging me to post this.</em></p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ The Pit, the Cabin, and the Dance Floor]]></title>
    <link href="http://jamie-wong.com/post/the-pit"/>
    <updated>2020-01-28T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/the-pit</id>
    <content type="html"><![CDATA[ 

<style>
hr.the-pit {
  width: 10%;
}

hr.the-pit-break {
  margin-bottom: 3em;
}
</style>

<p><img src="/images/the-pit/transition-graph.png" alt="" /></p>

<p>Okay, Jamie, you’re in the pit again. It’s far from your first stay here, but let’s not get too comfortable. There’s a distinct lack of roses here to stop and smell. Mostly just thorns.</p>

<p>I know it’s not easy. Your body is lead. Your mind is in a slow, murky swirl. Your emotional energy level is at about a 3%.</p>

<p>Let’s go through your move list and take stock.</p>

<p>Move #277: Reach out to a close friend to talk? Sounds promising in net energy, but looks like the activation energy is more like a 10% if you have to be the one to reach out. Rough. Looks like we’ve got to go <em>way</em> down the activation energy list here to gain some ground.</p>

<p>Move #86: Clean your room maybe? Ah, <em>closer,</em> but that sounds more like a 5% kind of task. There’s just so much <em>stuff</em> to clean up. Seems like a lot of work.</p>

<p>Move #435: Read social media? Only requires 1% activation energy! Great! Start reading, I’ll check back in 30.</p>

<hr class="the-pit">

<p>Hey, Jami — Oh&hellip; I see. You’re still in bed, just… just staring at the ceiling. Cooooool.</p>

<p>Let’s get your energy reading. 2%?! You went <em>down</em> a percent? Huh. Guess I didn’t look at the net energy prediction for that social media binge too carefully. Not great.</p>

<p>I’m not even sure what moves exist down here. Obviously the social media move is still there, but that didn’t go so well, so let’s explore some other options.</p>

<p>Let’s see, let’s see. Damn, most of the standard options are just totally out of range right now. Move #451: Read a book, needs 20% activation. Move #555: Go to the gym, needs 15%.</p>

<p>How ab — oh good you’re listening to music now. Wait. Wait, what music is this. Oh.</p>

<p>Move #393: Listen to music that reminds you of your exes. Requirement: 1%. Well, I guess that’s a marginal improvement from staring at the ceiling.</p>

<p>It’s… it’s working? I wouldn’t say you look <em>happy</em> exactly, <strong>but</strong> the meter says 5%!</p>

<p>Holy shit folks, he’s left the bed! 👏</p>

<hr class="the-pit">

<p>Dang you’ve got your combo rolling now. #86: Clean your room. Done. Yield: +2%. #222: Shower. Done. +1%. #16: Water your plant. +1%. Boom. You’ve got some momentum. Meter reads 9%.</p>

<p>Your general vibe reads melancholy but I can see a dim life-is-worth-living glint in your eyes if I look past the dark circles.</p>

<p>After you make your morning smoothie and rewatch an episode of Fullmetal Alchemist (Brotherhood, of course), you’re at a cool 11% energy. I think I even saw you smile briefly. Time for you to plan ahead a little. Let’s revisit #277: Reach out to a friend to talk.</p>

<p>You can do it Jamie, just pick some people and fire off a couple of messages. “You free for dinner tonight?” That’s it. These are people that’ve known you for <em>years</em>. Nothing to fear here.</p>

<hr class="the-pit">

<p>Ahhh you’re back in bed, huh? Glued to Reddit. 7% and… falling. Friends all already have dinner plans and you’re kicking yourself for not making plans earlier? Yeah, we’ve seen this play-by-play before.</p>

<p>Come on dude, let’s go get some sunlight and some groceries. Maybe try #44: Listen to some comedy. I think Ali Wong’s Baby Cobra is on Spotify now.</p>

<hr class="the-pit">

<p>Okay, okay, now we’re talking! A little bit of laughter, a little bit of Vitamin D glow, and ingredients for a healthy lunch. Let’s get it! This 15% energy reading seems like a great foundation for things to come. You’re a little fragile still, but your options are opening up.</p>

<hr class="the-pit-break">

<p>Alright, I’m going to drop that narrative device before it gets too old. The above second-person narrative fell out of a framework that congealed in my mind today.</p>

<p>In almost all situations, I want to gain emotional energy and spread that to everyone around me. Every action I take has some effect on my energy level. But the effect is also going to depend on what energy level I’m at <em>right now.</em> And some actions are just not reasonably available to me at certain energy levels.</p>

<p>While thinking about specific percentages has a certain charm, spending a lot of time trying to dial in whether something has an expected value of +4% or +6% isn’t really the point of the framework to me. So let’s talk about three ranges of energy I find myself in: the pit, the cozy cabin, and the dance floor.</p>

<p><img src="/images/the-pit/areas.png" alt="Area descriptions" /></p>

<p>The entire story opening this post takes place in the “Everything is Terrible” pit. When I’m in this state, it’s a struggle to do <em>anything.</em> When I’m this low energy, there is zero creative juice flowing. Focus is stolen by mind fog and rumination. Staying in bed feels both inescapable and shameful.</p>

<p>So when I’m in this state, what I need is <em>not</em> something challenging. I need easy wins. I need reminders that I am, in fact, a reasonably competent human capable of enjoyment and basic task completion. For me, this means consuming familiar media like <a href="https://www.wnycstudios.org/podcasts/anthropocene-reviewed">The Anthropocene Reviewed</a>, <a href="https://www.youtube.com/watch?v=-qv7k2_lc0M&amp;list=PL83DDC2327BEB616D">Key &amp; Peele</a>, and <a href="https://open.spotify.com/album/1xzz3nQdtNFf6V5316IDym">Ali Wong</a>, or just doing really basic maintenance tasks around my apartment like doing laundry.</p>

<p><img src="/images/the-pit/transition-graph.png" alt="Transition graph" /></p>

<p>Once I pull myself out of the pit, I’ve hopefully entered the “Life is Pretty Good” cozy cabin. All things considered, this place is nice. I’ve got some hot chocolate spiked with some Bailey’s in my mug, the fire is crackling, some jazzy Christmas music is playing. This chair is SO. SOFT. I could spend a good long time here without getting sick of it.</p>

<p>From the cozy cabin, whole categories of actions that felt insurmountably challenging start to seem pretty straightforward. I might schedule dinners with friends for the rest of the week, or head to the gym, or take a dance class. And those actions will typically prolong my cabin stay.</p>

<p>But <em>hopefully</em> those actions will also start to unblock a further set of actions: the risky ones that pave the path to the dance floor. If the cabin is the place where I feel at peace, the dance floor is the place where I feel <em>alive.</em></p>

<p>Almost by definition, the actions leading to the “I AM THE SHIT” dance floor have to be risky. When I’m on this metaphorical dance floor, I’m in a state of supreme confidence that only comes from overcoming some sort of fear.</p>

<p>And by definition of “risk”, sometimes the gamble won’t pay off. I ask someone out and get ghosted. I go to a house party and don’t connect. I take the challenging dance class and totally blank when it’s my turn to perform. When this happens, if I’m lucky, I just get bounced to the cabin. If it goes particularly poorly, I might find myself back in the pit.</p>

<p>I like this model because it acknowledges that the things I need when I’m low energy are very different from what I need when I’m high energy. From the pit, I frequently feel frustrated that the higher energy activities feel out of reach, and I’m hoping that reminding myself of this model will be calming.</p>

<p>Thinking about this model explicitly forced me to write down what I think I need in each of these places. I’m hoping that next time I’m in the pit, I can look at this and use it to help me crawl out a little bit faster.</p>

<p>This is where I’d normally try to put some pithy quotation or closing thought tying the whole thing together, but I’ve got three other posts that are in 90%-completion hell, and I kind of need an easy win today. So I’ll just leave you with a question: what does your pit → cabin → dance floor transition plan look like? Send me a picture, I wanna see!</p>

<p>Thanks to <a href="http://owenwang.com/">Owen Wang</a> for helping pull me out of the pit tonight and for coaching me how to better season my rice &amp; beans over the phone. I wouldn’t have had the emotional or caloric energy to write this without you.</p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Feel those feels]]></title>
    <link href="http://jamie-wong.com/post/feel-those-feels/"/>
    <updated>2019-01-11T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/feel-those-feels/</id>
    <content type="html"><![CDATA[ 

<p>I’m 27 now, and have been in two long-term relationships for the better part of the last decade. In the grand scheme of a life, this isn’t much, but it’s been enough for my perspectives on conflict to evolve. For the bulk of this relationship history, I thought that it was unambiguously good to wait until the fires of emotion had settled in order to have a calm, collected discussion about whatever was happening. Oftentimes when I did wait like this, the emotional fires would simmer down to a point where I didn’t feel a need to talk about the issue any more, because it was no longer front-of-mind for me. I used to commend myself and others for the ability to stay levelheaded, but I see it differently now.</p>

<hr/>

<p>When we’re little kids, we naturally channel our emotional state as a stream of consciousness. We
cry and scream at a pin drop: when we’re hungry, when we’ve been away from a parent’s embrace for too long, when we’re sleepy, when there’s poop in our diapers, when we have to share our favourite toy but we just don’t wanna. We’re skilled in allowing our emotions to flow, but totally hopeless at guiding the path of the emotional expression. We’re untethered emotional firehoses. This is an ineffective route of expression.</p>

<p>As we mature, we learn that unfettered, uncontrolled expression of emotion is ineffective. We see that people pull away from us when we yell at them. We see that throwing tantrums in public embarrasses the people we’re with. So for many people, like me, we learn to suppress. We turn the emotions inwards, convince ourselves that they’re useless, and believe we can restrain them with the right kind of patience and resolve. In doing so, we avoid the kind of external damage caused by screaming at people, or flying into a violent rage, or breaking our hands punching lockers. But we also compress what should be fleeting moods into something deeper and darker. It might not explode, but that suppression and deeply held hurt pervades our daily experience, taking the form of shame, disconnection, and fear. This is the route of suppression.</p>

<p>So as we mature still, we must learn to find channels to allow that emotion to run its course, but do so constructively. I still struggle a lot with this.</p>

<p>We release through appreciating art which reflects our pain. We release through creating art which reflects our pain. We release through resolving conflict by returning to it with honesty about how we feel and what triggered us to feel that way. This is an effective route of expression.</p>

<hr/>

<p>When I’m upset with my girlfriend, I could take the ineffective route of expression: “You’re 15 minutes late! You always do this. You don’t care about me”.</p>

<p>Or, I can take the route of suppression: proceed with dinner, trying to pretend I was never angry. Telling myself that I had no right to be angry.</p>

<p>Or, I can take a more effective route of expression: “I felt angry when I saw that I had been waiting 15 minutes past the time we agreed upon for you to come to dinner. Next time, would you please either try harder to be on time, or try to let me know how late you’re going to be?” Beginning the dialog this way invites empathy, rather than defensiveness. Once this happens, I might discover the anger relents to reveal the more vulnerable fear that underlies it. And with the empathetic support of a partner, that fear too might be allowed to run its course. And from there, security and playfulness arise.</p>

<hr/>

<p>Going to a therapist by myself, I learned a bit about how to release and integrate past pain by allowing long held, deeply held emotion to flow.</p>

<p>Going to see a couples therapist with my girlfriend, we’ve learned a bit about how to release and integrate past pain we’ve inflicted upon ourselves and each other by allowing each other to see our emotions flow. By providing dedicated space for emotionally charged discussions about our relationship, we see the pain felt by one another. We learn to understand it and empathize with it. And we learn to have emotionally honest conversations outside of the therapy sessions. Since we’ve started going to therapy, I don’t think we have conflicts any less frequently, but we reach resolution much faster when conflict arises. And each time we reach resolution, we’re able to return to warmth and play, uninhibited by the distance unspoken conflict creates.</p>

<p>So if you&rsquo;re curious, and if the resources are available to you as an individual or as a pair, go to a therapist and feel those feels.</p>

<hr/>

<p>Hello little evil<br/>
In the depths of my chest<br/>
You’ve taken shelter too long</p>

<p>How easy for you to hide in the shadows<br/>
Of an unexamined space<br/>
Living in your home<br/>
And giving nothing back</p>

<p>I know you mean no harm<br/>
You are not malicious<br/>
You simply are</p>

<p>How unfortunate for you<br/>
When a stray rock shatters a tinted window<br/>
Of your shrouded home<br/>
And you are brought to light</p>

<p>Blinded<br/>
You thrash, You sob, You scream<br/>
As you are evicted<br/>
And it echoes in your host</p>

<p>I wonder when the next rock<br/>
Will happen upon your now departed home<br/>
And reveal your brethren</p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ A letter to my 18 year old self]]></title>
    <link href="http://jamie-wong.com/post/letter-to-my-18-year-old-self/"/>
    <updated>2018-12-29T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/letter-to-my-18-year-old-self/</id>
    <content type="html"><![CDATA[ 

<p>This is an open letter I wrote to myself in 2014 after completing my undergraduate
degree in Software Engineering from the University of Waterloo. It&rsquo;s
addressed to my 2009 self, to be read just before starting university. Now in
2018, I&rsquo;m surprised how much of this still holds and is still needed advice
for me.</p>

<blockquote>
<p>April 25, 2014</p>

<p>Hey you,</p>

<p>It’s me: you. I made it. I have my iron ring<sup class="footnote-ref" id="fnref:1"><a rel="footnote" href="#fn:1">1</a></sup>. I guess you aren’t worried about failing to get here, but here looks a little different than you probably hoped it would. You’re not going to work at Google, and even more surprisingly, you won’t want to. You’ll discover that in your own time, so let me calm some of your fears and instill in you a few more.</p>

<p>In high school, you were the techie among your friends, and that was cool. It made you different and interesting and helped you create cool things to show your friends. In university, you’ll be surrounded by people with similar skill sets and interests. Your classmates will catch up to you. You will panic a little when this happens. Calm down. You’re not competing with them. Respect them, and learn from them, and never compare your accomplishments with theirs.</p>

<p>Make friends within your program. They’ll help you get through material that doesn’t click in your head as quickly as you’re used to. You won’t get integration, electromagnetics, or feedback control the way you got most things in high school. They’ll also end up being your teachers and your roommates. They’ll keep you motivated to do all your work, even though they won’t motivate you to go to class.</p>

<p>Oh yeah &ndash; you’re not going to go to all your classes. Thinking less of the people who skip in first year is going to make you feel pretty stupid when you almost completely stop going to class in upper years. You’ll stop going initially because you learn the material faster by reading textbooks in the library, but eventually you’ll just focus on learning through assignments and cram when you need to. This is maybe not ideal, but it’ll free you to get more exercise.</p>

<p>Make friends outside your program. As odd as this might sound to you now, you’re going to get sick of being surrounded by people in tech 24 / 7. Talk to a diverse group of people. Socializing in university will become a conscious thing. You won’t make friends during lectures like you did in class during high school.</p>

<p>Never stop playing badminton. Join the executive team as early as possible. This will be one of the most transformative things you do in university. Going out for dinner with them will make you feel more socially confident. The friends you make through badminton club will make you feel like part of a community in a very different way than your class does, and a few of them will be critical to meeting the incredible girl I’m with today.</p>

<p>Get more comfortable talking to strangers. Whenever you go meet a group of people, you’re not going to get along with all of them. That’s okay. Calling for pizza delivery over the phone should not be a source of social anxiety. Traveling will help with this immensely.</p>

<p>Open up to people. To get people to open to you, sometimes you have to go first. Find people where this comes naturally for you. When you do find them, don’t hold back, and make sure to let them know you care. Talking to them in groups of people will be very different than talking to them 1-on-1. This is true of everyone - especially your family. Do what you can to stay in touch with your family and get to know them. There’s a lot more going on in their lives than you think. Schedule Skype dates and dinners with family and friends.</p>

<p>Be humble. People will learn about your achievements in their own way, and if they don’t that’s okay. Being approachable and fun to talk to is far more important than deference. Stop wearing tech shirts and hoodies all the time. It makes you look simultaneously aesthetically oblivious and less approachable.</p>

<p>Don’t stop reading. Read non-fiction, science fiction, and some of the classics. Books will alter your perception of the world, lead to some incredibly engaging conversations, and strengthen future friendships. They’ll also give you something interesting to think about during rough periods in your life. You’ll extract more meaning reading a book for 20 minutes than 2 hours of skimming Hacker News. You should still look up this Hacker News thing though, it’s pretty handy.</p>

<p>Experiment with what makes you productive. You’ll think at first that to-do lists work for you, but they really don’t. You work better on pretty much everything in the morning, before you check your email, go on Facebook, or read Hacker News. This is especially true when you have to read a lot of notes, or when you want to write anything at all. Let yourself take naps while studying.</p>

<p>Keep a journal. It will let you reminisce, maintain your confidence, and organize your thoughts in difficult times. No matter when you started, you’ll wish you started earlier. You’ll find the act of writing itself cathartic, even when you do have people to tell your worries to.</p>

<p>Face your problems head on, but use the support you have. Even if your problems seem insignificant compared to others’, that doesn’t give you an excuse to ignore them. If you’ve been thinking the same thought on and off for years, it’s not going to go away by itself.</p>

<p>Oh, and when you go ATV riding, if you feel like you’re shifting around a lot, you’re doing it wrong. Hug the chassis with your thighs. Trust me on this one, you’ll save your ankle a world of hurt.</p>

<p>— Jamie Wong, University of Waterloo Software 2014 Alumnus</p>
</blockquote>
<div class="footnotes">

<hr />

<ol>
<li id="fn:1">The iron ring is a symbol of completion of some undergraduate engineering programs in Canada, and a reminder of the responsibility we hold to keep people safe:  <a href="https://en.wikipedia.org/wiki/Iron_Ring">https://en.wikipedia.org/wiki/Iron_Ring</a>
 <a class="footnote-return" href="#fnref:1"><sup>[return]</sup></a></li>
</ol>
</div>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ speedscope - Interactive Flamegraph Explorer]]></title>
    <link href="http://jamie-wong.com/post/speedscope/"/>
    <updated>2018-08-23T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/speedscope/</id>
    <content type="html"><![CDATA[ 

<style>li + li { margin-top: 0 }</style>

<p><img src="/images/speedscope/hero.gif" alt="speedscope" /></p>

<p><em>An edited version of this post was written for Mozilla Hacks: <a href="https://hacks.mozilla.org/2018/11/cross-language-performance-profile-exploration-with-speedscope/">Cross-language Performance Profile Exploration with speedscope</a>.</em></p>

<p>For the past 9 months, I&rsquo;ve been working on speedscope: a fast, interactive, web-based viewer for large performance profiles. You can use it live at <a href="https://www.speedscope.app/">www.speedscope.app</a>, and read the code on GitHub at <a href="https://github.com/jlfwong/speedscope">jlfwong/speedscope</a>.</p>

<p>It’s inspired by the <a href="https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference">performance panel of Chrome developer tools</a> and by <a href="https://github.com/BrendanGregg/flamegraph">Brendan Gregg’s FlameGraphs</a>. If you’ve never heard of flamegraphs before or have heard of them but never understood how to read them, the guide <a href="https://rbspy.github.io/using-flamegraphs/">“Using flamegraphs”</a> from <a href="https://rbspy.github.io/">rbspy</a>’s documentation is wonderful.</p>

<p>The goal of speedscope is to provide a 60fps way of interactively exploring large performance profiles from a variety of sources. It runs totally in-browser, and does not send any profiling data to any servers. Because it runs in-browser, it should work in Chrome and Firefox on Mac, Windows, and Linux (though I’ve only actually tested on Mac 😬).</p>

<p>You can use it offline via the command-line if you install it via <code>npm</code>.</p>
<pre><code>npm install -g speedscope
</code></pre>
<p>Or you can just download a zipfile from <a href="https://github.com/jlfwong/speedscope/releases">the GitHub releases page</a> and open the contained <code>index.html</code> directly if you don’t want to deal with <code>node</code> or <code>npm</code>.</p>

<p>Currently supported import sources include:</p>

<ul>
<li>JavaScript

<ul>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-Chrome">Importing from Chrome</a></li>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-Firefox">Importing from Firefox</a></li>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-Node.js">Importing from Node.js</a></li>
</ul></li>
<li>Ruby

<ul>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-stackprof-(ruby)">Importing from stackprof</a></li>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-rbspy-(ruby)">Importing from rbspy</a></li>
</ul></li>
<li>Native code

<ul>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-Instruments.app">Importing from Instruments.app</a> (macOS)</li>
<li><a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-perf-(linux)">Importing from <code>perf</code></a> (linux)</li>
</ul></li>
</ul>

<p>speedscope also provides two ways of importing from custom sources. Check out <a href="https://github.com/jlfwong/speedscope/wiki/Importing-from-custom-sources">Importing from custom sources</a> for the details. If you’d like to see support for an additional format, please <a href="https://github.com/jlfwong/speedscope/issues?q=is%3Aopen+is%3Aissue+label%3A%22import+source%22">submit an issue</a>, and ideally contribute that support!</p>
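
<p>To give a flavor of the custom-source route, speedscope’s own JSON format is simple enough to write by hand or from a script. Below is a minimal sketch of an “evented” profile based on my reading of the file format documentation; treat the exact field names as something to double-check against the wiki page linked above rather than as gospel:</p>
<pre><code>// A minimal speedscope JSON profile with a single stack: main calls work.
// Field names follow my reading of the documented file format; verify
// against the wiki before depending on them.
const profile = {
  $schema: "https://www.speedscope.app/file-format-schema.json",
  shared: {
    frames: [{name: "main"}, {name: "work"}],
  },
  profiles: [
    {
      type: "evented",
      name: "example",
      unit: "milliseconds",
      startValue: 0,
      endValue: 100,
      events: [
        {type: "O", frame: 0, at: 0},   // open main
        {type: "O", frame: 1, at: 10},  // open work
        {type: "C", frame: 1, at: 90},  // close work
        {type: "C", frame: 0, at: 100}, // close main
      ],
    },
  ],
}

require("fs").writeFileSync("example.speedscope.json", JSON.stringify(profile))
</code></pre>
<p>Dragging the resulting file onto <a href="https://www.speedscope.app/">www.speedscope.app</a> should open it like any other profile.</p>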

<p>speedscope is <a href="https://github.com/jlfwong/speedscope/blob/master/LICENSE">MIT licensed</a>.</p>

<h1 id="what-can-it-do">What can it do?</h1>

<p>speedscope is broken down into three primary views: Time Order, Left Heavy, and Sandwich.</p>

<h2 id="time-order">🕰Time Order</h2>

<p><img src="/images/speedscope/time-ordered-view.png" alt="time order view" /></p>

<p>In the &ldquo;Time Order&rdquo; view (the default), call stacks are ordered left-to-right in the same order as they occurred in the input file, which is usually going to be the chronological order they were recorded in. This view is most helpful for understanding the behavior of an application over time, e.g. &ldquo;first the data is fetched from the database, then the data is prepared for serialization, then the data is serialized to JSON&rdquo;.</p>

<p>The horizontal axis represents the &ldquo;weight&rdquo; of each stack (most commonly CPU time), and the vertical axis shows you the stack active at the time of the sample. If you click on one of the frames, you&rsquo;ll be able to see summary statistics about it.</p>

<h2 id="left-heavy">⬅️Left Heavy</h2>

<p><img src="/images/speedscope/left-heavy-view.png" alt="left heavy view" /></p>

<p>In the &ldquo;Left Heavy&rdquo; view, identical stacks are grouped together, regardless of whether they were recorded sequentially. Then, the stacks are sorted so that the heaviest stack for each parent is on the left &ndash; hence &ldquo;left heavy&rdquo;. This view is useful for understanding where all the time is going in situations where there are hundreds or thousands of function calls interleaved between other call stacks.</p>
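
<p>If you want a mental model for the grouping, here’s a sketch of the idea (not speedscope’s actual implementation): merge samples that share a call-path prefix into a single tree, accumulate weights, and order each node’s children heaviest-first.</p>
<pre><code>// Sketch of "left heavy" grouping over (stack, weight) samples.
// Not speedscope's actual code, just the shape of the idea.
interface CallNode {
  name: string
  weight: number
  children: Map&lt;string, CallNode&gt;
}

function buildLeftHeavyTree(samples: {stack: string[]; weight: number}[]): CallNode {
  const root: CallNode = {name: "(root)", weight: 0, children: new Map()}
  for (const {stack, weight} of samples) {
    let node = root
    node.weight += weight
    for (const frame of stack) {
      let child = node.children.get(frame)
      if (!child) {
        child = {name: frame, weight: 0, children: new Map()}
        node.children.set(frame, child)
      }
      child.weight += weight
      node = child
    }
  }
  return root
}

// Rendering order at each level: heaviest subtree first, hence "left heavy".
function childrenLeftToRight(node: CallNode): CallNode[] {
  return [...node.children.values()].sort((a, b) =&gt; b.weight - a.weight)
}
</code></pre>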

<h2 id="sandwich">🥪 Sandwich</h2>

<p><img src="/images/speedscope/sandwich-view.png" alt="sandwich view" /></p>

<p>The &ldquo;Sandwich&rdquo; view is a table view in which you can find a list of all functions and their associated times. You can sort by self time or total time.</p>

<p>It&rsquo;s called the &ldquo;Sandwich&rdquo; view because if you select one of the rows in the table, you can see flamegraphs for all the callers and callees of the selected row.</p>
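
<p>The two columns have simple definitions in terms of the underlying samples. As a sketch (again, not speedscope’s actual code): “self” weight counts samples where the function is on top of the stack, and “total” weight counts samples where it appears anywhere in the stack.</p>
<pre><code>// Sketch of computing the Sandwich view's two columns from raw samples.
function sandwichWeights(samples: {stack: string[]; weight: number}[]) {
  const self = new Map&lt;string, number&gt;()
  const total = new Map&lt;string, number&gt;()
  for (const {stack, weight} of samples) {
    if (stack.length === 0) continue
    // Self weight: the function actually executing when the sample was taken.
    const top = stack[stack.length - 1]
    self.set(top, (self.get(top) || 0) + weight)
    // Total weight: every function somewhere on the stack. The Set guards
    // against double-counting recursive frames within a single sample.
    for (const frame of new Set(stack)) {
      total.set(frame, (total.get(frame) || 0) + weight)
    }
  }
  return {self, total}
}
</code></pre>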

<h1 id="who-s-using-it">Who’s using it?</h1>

<p>I haven’t done too much to advertise speedscope, but it’s already in use in at least three companies!</p>

<h2 id="figma">Figma</h2>

<p>We’ve been using speedscope at <a href="https://www.figma.com/">Figma</a> for all kinds of performance analysis of our user interface design tool. This is a bit of a cheat because I work at Figma, but it spread from a tool that only I used to one used by most people on the teams that work in C++ or TypeScript for performance analysis. One particular superpower that speedscope gives us is the ability to get users to record performance profiles in Chrome containing minified names, then <a href="https://github.com/jlfwong/speedscope/pull/75">remap the stack traces back to development symbols using the <code>.symbols</code> file generated by emscripten</a>. We’ve been using it for months now to help us optimize loading time in particular, and regularly use it to load 100MB+ profiles exported from Chrome.</p>
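
<p>The remapping step itself is conceptually tiny. Here’s a sketch, assuming the <code>.symbols</code> file contains one <code>minifiedName:originalName</code> pair per line (worth verifying against the <code>--emit-symbol-map</code> output of your emscripten version):</p>
<pre><code>// Build a minified-name to original-name lookup from an emscripten
// .symbols file, assuming one "minified:original" pair per line.
function parseSymbolMap(symbolsFileContents: string): Map&lt;string, string&gt; {
  const map = new Map&lt;string, string&gt;()
  for (const line of symbolsFileContents.split("\n")) {
    const colon = line.indexOf(":")
    if (colon === -1) continue
    map.set(line.slice(0, colon), line.slice(colon + 1))
  }
  return map
}

// Frames whose names aren't in the map (e.g. browser-internal frames)
// pass through unchanged.
function remapFrameName(name: string, symbols: Map&lt;string, string&gt;): string {
  return symbols.get(name) || name
}
</code></pre>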

<h2 id="benchling">Benchling</h2>

<p>Engineers at <a href="https://benchling.com/">Benchling</a> have been using it thanks to <a href="http://www.alangpierce.com/">Alan Pierce</a>, who used it to implement a really easy-to-use profiler for their backend Python code.</p>

<blockquote>
<p>We recently worked on a project to make backend profiling easier and more accessible, and speedscope has been an important part of that. Now anyone in the company (not just engineers) can take a profile anytime they see something slow in any production instance, post a speedscope link to Slack, and anyone on engineering can help out in the analysis and share insights or ideas for improvements. As one recent example, speedscope recently made it easy to see that a slow operation spent almost all of its time building and running regexes, and switching to Python&rsquo;s <code>string.find</code> when possible sped the operation up by about 75x.</p>
</blockquote>

<h2 id="uber">Uber</h2>

<p><a href="https://github.com/rudro">Rudro Samanta</a> and teammates have been using speedscope at <a href="https://www.uber.com/">Uber</a> to visualize profiles of iOS applications captured via Xcode &amp; Instruments.</p>

<blockquote>
<p>Instruments is great at finding a specific hotspot function across a long running trace of our app (e.g. Font loading &ndash; UIFont fontWithName is slow). With speedscope we&rsquo;re able to find that during this specific transition from one screen to the other, this specific icon load appears to take up more time than we expected and maybe we should pre-load that one large icon or something.</p>
</blockquote>

<h1 id="architecture">Architecture</h1>

<p>While working on speedscope, I tried to stay generally light on dependencies, opting to implement a lot of stuff from scratch rather than trying to glue a bunch of third party libraries together. This serves two major purposes:</p>

<ol>
<li>Small download size. At time of writing, the total compressed download size of <a href="https://www.speedscope.app/">https://www.speedscope.app/</a> is 131KB, and only the first 63KB of that is needed before the page becomes usefully interactive. This fully loads in just under 2.5s over a Fast 3G connection.</li>
<li>Re-inventing the wheel teaches me stuff. This might not be the best strategy for getting product in front of users as fast as possible, but I certainly learn a lot more doing this. As part of speedscope, I implemented a <a href="https://github.com/jlfwong/speedscope/blob/f60ab630be4a66b6a4b20468144a210d3142fa9a/src/lib/math.ts#L7">basic linear algebra math library</a>, <a href="https://github.com/jlfwong/speedscope/blob/f60ab630be4a66b6a4b20468144a210d3142fa9a/src/views/flamechart-pan-zoom-view.tsx#L462">pan/zoom mechanics</a>, a <a href="https://github.com/jlfwong/speedscope/blob/f60ab630be4a66b6a4b20468144a210d3142fa9a/src/lib/lru-cache.ts#L1">least-recently-used cache</a>, and an <a href="https://github.com/jlfwong/speedscope/blob/f60ab630be4a66b6a4b20468144a210d3142fa9a/src/views/scrollable-list-view.tsx#L1">efficient list view</a>, among other things. I feel much better about how I spent the time designing &amp; implementing those than I would have if I spent that time learning the specific APIs of other people’s implementations.</li>
</ol>

<p>That said, there were a few dependencies that were gnarly enough that I didn’t want to roll my own.</p>

<ul>
<li><a href="https://preactjs.com/">Preact</a> for declaratively defining UI views. I generally really enjoy working with React, but wanted a smaller library in terms of byte size since one of my goals was to make speedscope load really fast even from an empty cache on a slow connection.</li>
<li><a href="https://github.com/Khan/aphrodite">Aphrodite</a> for authoring CSS. This deals with all the auto-prefixing for me, avoids classes of bugs around CSS prioritization conflicts, and works nicely with <a href="https://www.typescriptlang.org/">TypeScript</a>, which I’ve really thoroughly enjoyed using for this project. I have a bias here vs. other CSS solutions since I co-authored the first version of Aphrodite while at Khan Academy, so I understand how it works ☺️.</li>
<li><a href="https://redux.js.org/">Redux</a> and <a href="https://github.com/developit/preact-redux">preact-redux</a> for global in-memory state management. I honestly started trying to implement my own global state management solution for speedscope since redux felt like a lot of unnecessarily complexity for what I was trying to do, but I repeatedly ran into bugs or complications that led me back to redux. I ultimately left with a greater appreciation for redux.</li>
<li><a href="https://github.com/nodeca/pako">Pako</a> for extracting zlib encoded data as part of <a href="http://jamie-wong.com/post/reverse-engineering-instruments-file-format/">importing trace files from Instruments</a>. I probably would’ve learned a lot implementing my own decompressor, but it seems like something pretty easy to get wrong, and it’s a pretty small dependency.</li>
<li><a href="https://github.com/jlfwong/speedscope/blob/f60ab630be4a66b6a4b20468144a210d3142fa9a/src/gl/graphics.ts#L1">graphics.ts</a> is a WebGL abstraction layer that I ported from <a href="https://github.com/evanw/sky">https://github.com/evanw/sky</a>. speedscope uses WebGL for flamegraph rendering, which is how it ends up being so fast even for massive profiles. Before that I was using <a href="http://regl.party/">regl</a>, which was delightful for the most part. I ultimately needed to switch away from it because it uses <code>eval</code>, which would preclude speedscope from being usable in environments with a strict <code>Content-Security-Policy</code> header.  See <a href="https://github.com/regl-project/regl/issues/491#issuecomment-414146457">https://github.com/regl-project/regl/issues/491#issuecomment-414146457</a>.</li>
</ul>

<p>As a quick note about build and test infrastructure, speedscope uses <a href="https://jestjs.io/">Jest</a> for writing tests, <a href="https://travis-ci.org/">Travis CI</a> for continuous integration, <a href="https://coveralls.io/">Coveralls</a> for test code coverage reporting, <a href="https://parceljs.org/">Parcel</a> for source code → build artifact transformation, and <a href="https://prettier.io/">prettier</a> for automatic code formatting.</p>

<h1 id="what-s-next">What&rsquo;s next?</h1>

<p>Working on speedscope has provided me a bounty of blogpost material. It led me to write both <a href="http://jamie-wong.com/post/color/">“Color: From Hexcodes to Eyeballs”</a> and <a href="http://jamie-wong.com/post/reverse-engineering-instruments-file-format/">“Reverse Engineering Instruments’ File Format”</a>, and there’s a lot more material I could extract from this project to write about.</p>

<p>In terms of spreading speedscope to wherever it can be useful, there are two major paths: increasing the number of formats that speedscope can import, and making tighter integrations into speedscope to make the import process seamless. For instance, <a href="https://github.com/tmm1/stackprof/pull/100">I have an open PR on stackprof</a> to make it the default visualization tool that ships with the profiler.</p>

<p>In terms of increasing the capabilities of speedscope itself, one path forward would be to support a wider array of visualizations to e.g. be able to visualize network requests &amp; CPU time on the same time axis, just like Chrome developer tools does. In an extreme version of this, you could imagine importing complex traces from distributed systems and exploring them interactively.</p>

<p>Buuuuuut I think I’m probably going to take a long break before trying to do any of that (though I’m still happy to do code review if someone else wants to!) Let me explain why.</p>

<p>At the beginning of the year, I finished reading <a href="https://www.amazon.com/Habits-Highly-Effective-People-Powerful/dp/0743269519">7 Habits of Highly Effective People</a>, and wrote down my personal mission statement broken down by the different roles I care about: friend, son, brother, student, teacher, etc. Part of what led me to focusing on speedscope for the last 9 months was this part of my mission statement:</p>

<blockquote>
<p>As an <strong>engineer</strong>, I will work to build maintainable, understandable systems and tools which aid people in solving problems of essential complexity in the world.</p>
</blockquote>

<p>I feel confident that speedscope does contribute to that goal in a small way. But I think that I’ve let too much of my time drift towards this. While I like working on speedscope, it’s a mostly solitary endeavor, and even if I was collaborating with other people, I already spend ~40 hours a week doing satisfying engineering work at Figma. I think I’d like to devote more of my time to pursuing creative, collaborative endeavors with other people where most of that <a href="http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/">collaboration doesn’t happen through a sheet of glass</a>. So that’s likely where most of my time is going to go for the rest of this year. This stuff comes in waves, so perhaps I’ll return to sinking a ton more time into this some time in the future.</p>

<p>Part of what I like about side projects and having my own blog is not needing to keep working on stuff when I don’t really feel like it, and not needing to pretend that the work is my lifeblood. While I’m proud of a lot of the technical stuff I’ve written on my blog, I’m ultimately most proud of my <a href="http://jamie-wong.com/post/depression-and-recovery/">post about my experience with depression</a>, because it expresses a distinctly human perspective that I rarely take the time to express.</p>

<p>Welp, this “product launch” post took a weirdly personal turn near the end. If you got this far, here’s a music video from one of my favorite artists 😃.</p>

<iframe width="750" height="420" src="https://www.youtube.com/embed/wEqs91ZCAgc" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Figma, faster 🏎]]></title>
    <link href="http://jamie-wong.com/post/figma-faster/"/>
    <updated>2018-08-13T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/figma-faster/</id>
    <content type="html"><![CDATA[ 

 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Reverse Engineering Instruments’ File Format]]></title>
    <link href="http://jamie-wong.com/post/reverse-engineering-instruments-file-format/"/>
    <updated>2018-06-13T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/reverse-engineering-instruments-file-format/</id>
    <content type="html"><![CDATA[ 

<figure>
<img src="/images/instruments/hero.png">
</figure>

<p>Have you ever wondered how applications store their data? Plenty of file formats like MP3 and JPG are standardized and well documented, but what about custom, proprietary file formats? What do you do when you want to extract data that you know is in a file <em>somewhere</em>, and there are no APIs to extract it?</p>

<p>Over the last few months, I’ve been building a performance visualization tool called <a href="https://www.speedscope.app/">speedscope</a>. It can import CPU profile formats from a variety of sources, like <a href="https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference">Chrome</a>, <a href="https://developer.mozilla.org/en-US/docs/Tools/Performance/Flame_Chart">Firefox</a>, and Brendan Gregg’s <a href="https://github.com/BrendanGregg/flamegraph#2-fold-stacks">stackcollapse</a> format.</p>

<figure>
<img src="/images/instruments/speedscope-demo.gif">
</figure>

<p>At <a href="https://www.figma.com/">Figma</a>, I work in a C++ codebase that cross-compiles to asm.js and WebAssembly to run in the browser. Occasionally, however, it’s helpful to be able to profile the native build we use for development and debugging. The tool of choice to do that on OS X is <a href="https://developer.apple.com/library/content/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/index.html">Instruments</a>. If we can extract the right information from the files Instruments outputs, then we can construct <a href="http://www.brendangregg.com/flamegraphs.html">flamecharts</a> to help us build intuition for what’s happening while our code is executing.</p>

<p>Up until this point, all of the formats I’ve been importing into speedscope have been either plaintext or JSON, which makes them relatively easy to analyze. Instruments’ <code>.trace</code> file format, by contrast, is a complex, multi-encoding format which seems to use several hand-rolled binary formats.</p>

<p>This was my first foray into complex binary file reverse engineering, and I’d like to share my process for doing it, hopefully teaching you about some tools along the way.</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#a-brief-introduction-to-sampling-profilers">A brief introduction to sampling profilers</a></li>
<li><a href="#exploring-with-file-and-tree">Exploring with <code>file</code> and <code>tree</code></a></li>
<li><a href="#finding-strings-with-grep">Finding strings with <code>grep</code></a></li>
<li><a href="#interpreting-the-plist-with-plutil">Interpreting the <code>plist</code> with <code>plutil</code></a></li>
<li><a href="#making-a-binary-plist-parser">Making a binary plist parser</a></li>
<li><a href="#reconstructing-the-object-graph">Reconstructing the object graph</a></li>
<li><a href="#handling-custom-datatypes">Handling custom datatypes</a></li>
<li><a href="#finding-the-list-of-samples-with-find-and-du">Finding the list of samples with <code>find</code> and <code>du</code></a></li>
<li><a href="#exploring-binary-file-contents-with-xxd">Exploring binary file contents with <code>xxd</code></a></li>
<li><a href="#guessing-binary-formats-with-synalyze-it">Guessing binary formats with Synalyze It!</a></li>
<li><a href="#finding-binary-sequences-using-python">Finding binary sequences using python</a></li>
<li><a href="#putting-it-all-together">Putting it all together</a></li>
</ul>
</nav></p>

<p><em>Disclaimer: I got stuck many times trying to understand the file format. For the sake of brevity, what’s presented here is a much smoother process than the one I actually went through. If you get stuck trying to do something similar, don’t be discouraged!</em></p>

<h1 id="a-brief-introduction-to-sampling-profilers">A brief introduction to sampling profilers</h1>

<p>Before we dig into the file format, it will be helpful to understand what kind of data we need to extract. We’re trying to import a CPU time profile, which helps us answer the question “where is all the time going in my program?” There are many different ways to analyze runtime performance of a program, but one of the most common is to use a sampling profiler.</p>

<p>While the program being analyzed is running, a sampling profiler will periodically ask the running program “Hey! What are you doing RIGHT NOW?”. The program will respond with its current call stack (or call stack<em>s</em>, in the case of a multithreaded program), then the profiler will record that call stack along with the current timestamp. A manual way of doing this if you don’t have a profiler is to just <a href="https://stackoverflow.com/a/378024/303911">repeatedly pause the program in a debugger and look at the call stack</a>.</p>
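
<p>To make that concrete, here’s a toy sketch of the data a sampling profiler accumulates: one timestamped call stack per sample. Since JavaScript is single-threaded, this toy has to sample cooperatively from inside the work loop; a real profiler interrupts the program from the outside.</p>
<pre><code>// Toy cooperative sampler: records a timestamped call stack roughly once
// per millisecond. Real profilers interrupt the program instead.
type Sample = {timestampMs: number; stack: string[]}

const samples: Sample[] = []
let lastSampleMs = 0

function maybeSample() {
  const now = Date.now()
  if (now - lastSampleMs &lt; 1) return
  lastSampleMs = now
  // new Error().stack is a newline-separated trace in V8; parsing it this
  // crudely is only fit for a toy.
  const stack = new Error().stack!.split("\n").slice(2)
  samples.push({timestampMs: now, stack})
}

function work() {
  for (let i = 0; i &lt; 1e7; i++) {
    maybeSample()
  }
}

work()
console.log(`recorded ${samples.length} samples`)
</code></pre>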

<p>Instruments’ Time Profiler is a sampling profiler.</p>

<figure>
<img src="/images/instruments/instruments-type-select.png">
</figure>

<p>After you record a time profile in Instruments, you can see a list of samples with their timestamps and associated call stacks.</p>

<figure>
<img src="/images/instruments/instruments-sample-table.png">
</figure>

<p>This is exactly the information we want to extract: timestamps, and call stacks.</p>

<h1 id="exploring-with-file-and-tree">Exploring with <code>file</code> and <code>tree</code></h1>

<p>If you’d like to follow along with these steps, you can find my test file here: <a href="https://github.com/jlfwong/speedscope/raw/f9032f41001f5a0943677ef7b9bd995a0895123c/sample/profiles/Instruments/8.3.3/simple-time-profile.trace.zip"><code>simple-time-profile.trace</code></a>, which is a profile from Instruments 8.3.3. This is a time profile of a simple program I made specifically for analysis without any complex threading or multi-process behaviour: <a href="https://github.com/jlfwong/speedscope/blob/f9032f41001f5a0943677ef7b9bd995a0895123c/sample/programs/cpp/simple.cpp"><code>simple.cpp</code></a>.</p>

<p>A good first step when trying to analyze any file is to use the <a href="https://linux.die.net/man/1/file">unix <code>file</code> program</a>.</p>

<p><code>file</code> will try to guess the type of a file by looking at its bytes. Here are some examples:</p>
<pre><code>$ file favicon-16x16.png
favicon-16x16.png: PNG image data, 16 x 16, 8-bit colormap, non-interlaced
$ file favicon.ico
favicon.ico: MS Windows icon resource - 3 icons, 48x48, 256-colors
$ file README.md
README.md: UTF-8 Unicode English text, with very long lines
$ file /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome
/Applications/Google Chrome.app/Contents/MacOS/Google Chrome: Mach-O 64-bit executable x86_64
</code></pre>
<p>So let’s see what <code>file</code> has to say about our <code>.trace</code> file.</p>
<pre><code>$ file simple-time-profile.trace
simple-time-profile.trace: directory
</code></pre>
<p>Interesting! So the Instruments <code>.trace</code> file format isn’t a single file, but a directory.</p>

<p>macOS has a concept of a <a href="https://en.wikipedia.org/wiki/Bundle_(macOS)">bundle</a>, which is effectively a directory that can act like a file. This allows many different file formats to be packaged together into a single entity. Other file formats like <a href="https://en.wikipedia.org/wiki/JAR_(file_format)">Java’s <code>.jar</code></a> and <a href="https://en.wikipedia.org/wiki/Office_Open_XML">Microsoft Office’s <code>.docx</code></a> accomplish similar goals by grouping many different file formats together in a <a href="https://en.wikipedia.org/wiki/Zip_(file_format)">zip compressed archive</a> (they’re literally just zip archives with different file extensions).</p>

<p>With that in mind, let’s take a look at the directory structure using the <a href="https://linux.die.net/man/1/tree"><code>tree</code> command</a>, installed on my Mac via <code>brew install tree</code>.</p>
<pre><code>$ tree -L 4 simple-time-profile.trace
simple-time-profile.trace
├── Trace1.run
│   ├── RunIssues.storedata
│   ├── RunIssues.storedata-shm
│   └── RunIssues.storedata-wal
├── corespace
│   ├── MANIFEST.plist
│   ├── currentRun
│   │   └── core
│   │       ├── extensions
│   │       ├── stores
│   │       └── uniquing
│   └── run1
│       └── core
│           ├── core-config
│           ├── extensions
│           ├── stores
│           ├── table-manager
│           └── uniquing
├── form.template
├── instrument_data
│   └── 20202640-0B46-4698-ADAD-DF54B3ABE816
│       └── run_data
│           └── 1.run.zip
├── open.creq
└── shared_data
    └── 1.run
</code></pre>
<p>…okay then! There’s a lot going on in here, and it’s not clear where we should be looking for the data we’re interested in.</p>

<h1 id="finding-strings-with-grep">Finding strings with <code>grep</code></h1>

<p>Strings tend to be the easiest kind of data to find. In this case, we expect to find the function names of the program somewhere in the profile. Here’s the main function of the program we profiled:</p>
<div class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">int</span> <span class="nf">main</span><span class="p">(</span><span class="kt">int</span> <span class="n">argc</span><span class="p">,</span> <span class="kt">char</span><span class="o">*</span> <span class="n">argv</span><span class="p">[])</span> <span class="p">{</span>
  <span class="k">while</span> <span class="p">(</span><span class="nb">true</span><span class="p">)</span> <span class="p">{</span>
    <span class="n">alpha</span><span class="p">();</span>
    <span class="n">beta</span><span class="p">();</span>
    <span class="n">gamma</span><span class="p">();</span>
    <span class="n">delta</span><span class="p">();</span>
  <span class="p">}</span>
  <span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>If we’re lucky, we’ll find the string <code>gamma</code> somewhere in plaintext in the <code>.trace</code> bundle. If the data were compressed, we might not be so lucky.</p>
<pre><code>$ grep -r gamma simple-time-profile.trace
Binary file simple-time-profile.trace/form.template matches
</code></pre>
<p>Cool, so <code>form.template</code> contains the string <code>gamma</code> in it somewhere. Let’s see what kind of file this is.</p>
<pre><code>$ file simple-time-profile.trace/form.template
simple-time-profile.trace/form.template: Apple binary property list
</code></pre>
<p>So what’s this <code>Apple binary property list</code> thing?</p>

<h1 id="interpreting-the-plist-with-plutil">Interpreting the <code>plist</code> with <code>plutil</code></h1>

<p>From a Google search, I found an article about <a href="https://forensicswiki.org/wiki/Converting_Binary_Plists">converting binary plists</a>, which references a tool called <code>plutil</code> for analyzing and manipulating the contents of binary plists. <code>plutil -p</code> seems especially promising as a way of printing plists in a human readable format.</p>
<pre><code>$ plutil -p simple-time-profile.trace/form.template
{
  &#34;$version&#34; =&gt; 100000
  &#34;$objects&#34; =&gt; [
    0 =&gt; &#34;$null&#34;
    1 =&gt; &#34;rsrc://Template - samplertemplate&#34;
    2 =&gt; {
      &#34;NSString&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 3}
      &#34;NSDelegate&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 0}
      &#34;NSAttributes&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 5}
      &#34;$class&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 11}
    }
    ...(many more entries here, excluded for brevity)
  ]
  &#34;$archiver&#34; =&gt; &#34;NSKeyedArchiver&#34;
  &#34;$top&#34; =&gt; {
    &#34;com.apple.xray.owner.template&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 12}
    &#34;com.apple.xray.instrument.command&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 234}
    &#34;$1&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 163}
    &#34;cliTargetDevice&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 0}
    &#34;com.apple.xray.owner.template.description&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 2}
    &#34;$2&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 164}
    &#34;com.apple.xray.owner.template.version&#34; =&gt; 2.1
    &#34;com.apple.xray.owner.template.iconURL&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 1}
    &#34;$0&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 141}
    &#34;com.apple.xray.run.data&#34; =&gt; &lt;CFKeyedArchiverUID ...&gt;{value = 247}
  }
}
</code></pre>
<p>I wasn’t familiar with many Mac APIs, so the best I could do was just Google search some of the terms in here. <code>CFKeyedArchiverUID</code> shows up a lot here, and that sounds related to <code>NSKeyedArchiver</code>.</p>

<p>A Google search tells me that <a href="https://developer.apple.com/documentation/foundation/nskeyedarchiver"><code>NSKeyedArchiver</code></a> is an Apple-provided API for serialization and deserialization of object graphs into files. If we can figure out how to reconstruct the object graph that was serialized into this, this might be instrumental in extracting the data we need!</p>

<p>A convenient way to explore Cocoa APIs is inside of an <a href="https://blog.udacity.com/2015/03/learn-swift-tutorial-fundamentals.html">XCode Playground</a>. Inside of a playground, I was able to construct an <a href="https://developer.apple.com/documentation/foundation/nskeyedunarchiver"><code>NSKeyedUnarchiver</code></a> and start to pull data out of it, but I quickly ran into problems:</p>

<figure>
<img src="/images/instruments/xcode-playground.png">
</figure>

<p>In particular, we have this error:</p>
<pre><code>cannot decode object of class (XRInstrumentControlState) for key (NS.objects);
the class may be defined in source code or a library that is not linked
</code></pre>
<p>Unsurprisingly, in order to decode the objects stored within a keyed archive, you need to have access to the classes that were used to encode them. In this case, we don’t have the class <code>XRInstrumentControlState</code>, so the archiver has no idea how to decode it!</p>

<p>We could probably work around this limitation by subclassing <code>NSKeyedUnarchiver</code> and overriding <a href="https://developer.apple.com/documentation/foundation/nskeyedunarchiver/1412476-class">the method which decides which class to decode to based on the class name</a>, but I ultimately want to be able to read these files via JavaScript in the browser where I won’t have access to the Cocoa APIs. Given that, it would be helpful to understand how the serialization format works more directly.</p>

<p>To extract data from this file, we’ll need to do both what <code>plutil -p</code> is doing above, and what an <code>NSKeyedUnarchiver</code> would do to reconstruct the object graph from the <code>plist</code> file.</p>

<h1 id="making-a-binary-plist-parser">Making a binary plist parser</h1>

<p>Thankfully, parsing binary plists is a problem that many others have encountered in the past. Here are some binary plist parsers in a variety of languages:</p>

<ul>
<li>JavaScript: <a href="https://github.com/joeferner/node-bplist-parser">node-bplist-parser</a></li>
<li>Java: <a href="https://github.com/songkick/plist/blob/eb8cdd6ccdbc38c1bd0ce647aa9eb0400f8a3e4e/src/com/dd/plist/BinaryPropertyListParser.java#L30">BinaryPropertyListParser.java</a></li>
<li>Python: <a href="https://github.com/farcaller/bplist-python/blob/33b64b2c45f2a2fdf48cbda1748e21e41ccb4336/bplist/bplist.py#L59">bplist.py</a></li>
<li>Ruby: <a href="https://github.com/ckruse/CFPropertyList/blob/master/lib/cfpropertylist/rbBinaryCFPropertyList.rb">rbBinaryCFPropertyList.rb</a></li>
</ul>

<p>Ultimately, I ended up making minor modifications to a binary plist parser that we use at Figma for Sketch import, which you can now find in the speedscope repository in <a href="https://github.com/jlfwong/speedscope/blob/9edd5ce7ed6aaf9290d57e85f125c648a3b66d1f/import/instruments.ts#L772"><code>instruments.ts</code></a>.</p>
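<p>If you just want to poke at one of these files interactively, Python’s standard library can also do this step. Here’s a minimal sketch, assuming Python 3.8+, whose <code>plistlib</code> understands binary plists and exposes the <code>CFKeyedArchiverUID</code> references we saw in the <code>plutil</code> output as <code>plistlib.UID</code> objects:</p>
<pre><code>import plistlib

# Load the NSKeyedArchiver-encoded binary plist. plistlib auto-detects
# whether the plist is XML or binary.
with open(&#39;simple-time-profile.trace/form.template&#39;, &#39;rb&#39;) as f:
    archive = plistlib.load(f)

print(archive[&#39;$archiver&#39;])      # =&gt; &#39;NSKeyedArchiver&#39;
print(list(archive[&#39;$top&#39;]))     # the roots of the object graph
print(len(archive[&#39;$objects&#39;]))  # the flat object lookup table
</code></pre>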

<h1 id="reconstructing-the-object-graph">Reconstructing the object graph</h1>

<p>Also helpfully, other people have done analysis on how <code>NSKeyedArchiver</code> serializes its data into a property list. <a href="https://www.mac4n6.com/blog/2016/1/1/manual-analysis-of-nskeyedarchiver-formatted-plist-files-a-review-of-the-new-os-x-1011-recent-items">This blogpost by mac4n6</a>, for example, explores an example of how an object graph can be reconstructed from the property list. It ends up being a relatively straightforward process of <a href="https://github.com/jlfwong/speedscope/blob/9edd5ce7ed6aaf9290d57e85f125c648a3b66d1f/import/instruments.ts#L622">replacing numerical IDs with their corresponding entries in the <code>$object</code> lookup table</a>.</p>
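<p>Here’s a minimal Python sketch of that dereferencing step, building on the <code>plistlib</code> sketch above (real object graphs can contain cycles, so this includes a crude guard; a real implementation would need to be more careful):</p>
<pre><code>import plistlib

def unarchive(archive):
    # Recursively replace plistlib.UID references with the entries
    # they point to in the archive&#39;s $objects lookup table.
    objects = archive[&#39;$objects&#39;]

    def resolve(value, seen):
        if isinstance(value, plistlib.UID):
            if value.data in seen:
                return &#39;&lt;cycle&gt;&#39;  # crude guard against cyclic references
            return resolve(objects[value.data], seen | {value.data})
        if isinstance(value, dict):
            return {k: resolve(v, seen) for k, v in value.items()}
        if isinstance(value, list):
            return [resolve(v, seen) for v in value]
        return value

    return resolve(archive[&#39;$top&#39;], set())
</code></pre>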

<p>After these replacements are completed, objects will have a property indicating what class they were serialized from. Many common datatypes have consistent serialization formats that we can use to construct a useful representation of the original object.</p>

<p>This too is a task that surprisingly many people have been interested in solving, and have kindly released source code for:</p>

<ul>
<li>Python: <a href="https://github.com/jorik041/ccl-bplist/blob/423670d84c118f66c9fe79122ba37dd856d23595/ccl_bplist.py#L354"><code>ccl_bplist.py</code></a> and <a href="https://github.com/Marketcircle/bpylist"><code>bpylist.py</code></a></li>
<li>JavaScript: <a href="https://github.com/afiedler/sketch-node-parser/blob/2fc464ec4bcead3cd04182021ef0c65c70557f93/src/msArchiver/msUnarchiver.js#L16"><code>msUnarchiver.js</code></a></li>
</ul>

<p>You can see examples of these in <a href="https://github.com/jlfwong/speedscope/blob/9edd5ce7ed6aaf9290d57e85f125c648a3b66d1f/import/instruments.ts#L648"><code>patternMatchObjectiveC</code></a> in the speedscope source code.</p>

<h1 id="handling-custom-datatypes">Handling custom datatypes</h1>

<p>There are datatypes in <code>simple-time-profile.trace/form.template</code>, however, that are specific to Instruments. When we’re trying to reconstruct an object from an <code>NSKeyedArchive</code>, we’re given a <code>$classname</code> variable. If we collect all the Instruments-specific classnames and print them out, we’re left with this:</p>
<pre><code>[
  &#34;XRRecordingOptions&#34;,
  &#34;XRContext&#34;,
  &#34;XRAnalysisCoreDetailNode&#34;,
  &#34;XRAnalysisCoreTableQuery&#34;,
  &#34;XRMainWindowUIState&#34;,
  &#34;XRInstrumentControlState&#34;,
  &#34;XRRunListData&#34;,
  &#34;XRIntKeyedDictionary&#34;,
  &#34;PFTPersistentSymbols&#34;,
  &#34;XRArchitecture&#34;,
  &#34;PFTSymbolData&#34;,
  &#34;PFTOwnerData&#34;,
  &#34;XRCore&#34;,
  &#34;XRThread&#34;,
  &#34;XRBacktraceTypeAdapter&#34;
]
</code></pre>
<p>Stepping back, what we’re trying to figure out here is where the function names and file locations are stored within this file. From surveying the list of Instruments-specific classes above, <code>PFTSymbolData</code> seems like a good candidate to contain this information.</p>

<p>A Google search of <code>PFTSymbolData</code> yields <a href="https://github.com/mmmulani/class-dump-o-tron/blob/23e965055a75830d905815673e8b533fa08907cb/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/Frameworks/InstrumentsPlugIn.framework/Versions/A/InstrumentsPlugIn/PFTSymbolData.h#L16">this GitHub page</a> showing a reverse-engineered header file from XCode!</p>
<pre><code>@interface PFTSymbolData : NSObject &lt;NSCoding, CommonSymbol, XRUIStackFrame, NSCopying&gt;
{
    NSString *sourcePath;
    struct XRLineNumData *addressData;
    int numAddresses;
    int addressesCapacity;
    BOOL _missingSymbolName;
    struct _CSRange symbolRange;
    unsigned int fTypeFlags;
    PFTOwnerData *ownerData;
    NSMutableArray *inlinedInstances;
    NSString *symbolName;
}
</code></pre>
<p>These headers were extracted using <a href="http://stevenygard.com/projects/class-dump/"><code>class-dump</code></a>. This was pretty lucky — it just so happens that someone has dumped all of the headers in XCode and put them up in a GitHub repository.</p>

<p>Using the header as a reference and inspecting the data, I was able to reconstruct a semantically useful representation of <code>PFTSymbolData</code>.</p>

<figure>
<img src="/images/instruments/pftsymboldata.png">
</figure>

<p>You can see the relevant code in <a href="https://github.com/jlfwong/speedscope/blob/9edd5ce7ed6aaf9290d57e85f125c648a3b66d1f/import/instruments.ts#L504"><code>readInstrumentsKeyedArchive</code></a>.</p>

<p>Now we have the symbol table, but we still need the list of samples!</p>

<h1 id="finding-the-list-of-samples-with-find-and-du">Finding the list of samples with <code>find</code> and <code>du</code></h1>

<p>I was hoping that all of the information I was interested in would be in a single file within the <code>.trace</code> bundle, but it turns out we aren’t so lucky.</p>

<p>The next thing I’m looking for is the list of samples collected during instrumentation. Each sample contains a timestamp, so I expected them to be stored as a table of numbers. But I wasn’t sure which numbers to look for, because I didn’t know how the timestamps were stored. They could be absolute values since the unix epoch, relative to the previous sample, or relative to the start of the profile. They could be stored as floating point values or integers, and those integers might be big endian or little endian.</p>

<p>Overall, I wasn’t sure how to find data when I didn’t know any values that would definitely be in the table, so I tried a different approach: I recorded a longer profile, then went looking for big files! I figured that as profiles got longer, the data storing the list of samples should get bigger.</p>

<p>To find potential files of interest, I ran the following <a href="https://en.wikipedia.org/wiki/Pipeline_(Unix)">unix pipeline</a>:</p>
<pre><code>$ find . -type f | xargs du | sort -n | tail -n 10
152     ./corespace/run1/core/stores/indexed-store-9/spindex.0
168     ./corespace/run1/core/stores/indexed-store-12/spindex.0
296     ./corespace/run1/core/stores/indexed-store-3/bulkstore
600     ./form.template
808     ./corespace/run1/core/stores/indexed-store-9/bulkstore
1064    ./corespace/run1/core/stores/indexed-store-12/bulkstore
2048    ./corespace/run1/core/uniquing/arrayUniquer/integeruniquer.data
2048    ./corespace/run1/core/uniquing/typedArrayUniquer/integeruniquer.data
20480   ./corespace/currentRun/core/uniquing/arrayUniquer/integeruniquer.data
20480   ./corespace/currentRun/core/uniquing/typedArrayUniquer/integeruniquer.data
</code></pre>
<p>Let’s break down this pipeline.</p>

<ul>
<li><code>find . -type f</code> finds all files in the current directory, printing them one per line (<a href="https://linux.die.net/man/1/find"><code>find</code> man page</a>)</li>
<li><code>xargs du</code> runs <code>du</code> to find the size of each file, using the list piped to it as arguments. We could alternatively do <code>du $(find . -type f)</code>. (<a href="https://linux.die.net/man/1/xargs"><code>xargs</code> man page</a>, <a href="https://linux.die.net/man/1/du"><code>du</code> man page</a>)</li>
<li><code>sort -n</code> numerically sorts the results in ascending order (<a href="https://linux.die.net/man/1/sort"><code>sort</code> man page</a>)</li>
<li><code>tail -n 10</code> takes the last 10 lines of output (<a href="https://linux.die.net/man/1/tail"><code>tail</code> man page</a>)</li>
</ul>

<p>We can extend this command to tell us the file type of each of these files:</p>
<pre><code>$ find . -type f | xargs du | sort -n | tail -n 10 | cut -f2 | xargs file
./corespace/run1/core/stores/indexed-store-9/spindex.0:                     data
./corespace/run1/core/stores/indexed-store-12/spindex.0:                    data
./corespace/run1/core/stores/indexed-store-3/bulkstore:                     data
./form.template:                                                            Apple binary property list
./corespace/run1/core/stores/indexed-store-9/bulkstore:                     data
./corespace/run1/core/stores/indexed-store-12/bulkstore:                    data
./corespace/run1/core/uniquing/arrayUniquer/integeruniquer.data:            data
./corespace/run1/core/uniquing/typedArrayUniquer/integeruniquer.data:       data
./corespace/currentRun/core/uniquing/arrayUniquer/integeruniquer.data:      data
./corespace/currentRun/core/uniquing/typedArrayUniquer/integeruniquer.data: data
</code></pre>
<p>The <code>cut</code> command can be used to extract columns of data from a plaintext table. In this case <code>cut -f2</code> selects only the second column of the data. We then run the <code>file</code> command on each resulting file. (<a href="https://linux.die.net/man/1/cut"><code>cut</code> man page</a>)</p>

<p>A file type of <code>data</code> isn’t very informative, so we’ll have to start examining the binary contents to figure out the format ourselves.</p>

<h1 id="exploring-binary-file-contents-with-xxd">Exploring binary file contents with <code>xxd</code></h1>

<p><code>xxd</code> is a tool for taking a “hex dump” of a binary file (<a href="https://linux.die.net/man/1/xxd"><code>xxd</code> man page</a>). A hex dump of a binary file is a representation of a file displaying each byte of the file as a hexadecimal pair.</p>
<pre><code>$ echo &#34;hello&#34; | xxd
00000000: 6865 6c6c 6f0a                           hello.
</code></pre>
<p>The output here shows the offset (<code>00000000:</code>), the hex representation of the bytes in the file (<code>6865 6c6c 6f0a</code>), and the corresponding ASCII interpretation of those bytes (<code>hello.</code>), with <code>.</code> being used in place of unprintable characters. The <code>.</code> in this case corresponds to the byte <code>0a</code>, which in turn corresponds to the ASCII <code>\n</code> character emitted by <code>echo</code>.</p>

<p>Here’s another example using <code>printf</code> to emit 3 bytes with no printable representations.</p>
<pre><code>$ printf &#34;\1\2\3&#34; | xxd
00000000: 0102 03                                  ...
</code></pre>
<p>Let’s use this to explore the biggest file we found.</p>
<pre><code>$ xxd corespace/currentRun/core/uniquing/typedArrayUniquer/integeruniquer.data | head -n 10
00000000: 6745 2301 7e33 0a00 0100 0000 0000 0000  gE#.~3..........
00000010: 0100 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000080: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
</code></pre>
<p>Hmm. It seems like there’s a lot of data in this file that’s all zeroed out. The sample list can’t possibly be all zeros, so we’d rather just look for the parts that aren’t zeroed out. The <code>-a</code> flag of <code>xxd</code> can be of help in this situation.</p>
<pre><code>       -a | -autoskip
              toggle autoskip: A single &#39;*&#39; replaces nul-lines.  Default off.


$ xxd -a ./corespace/currentRun/core/uniquing/typedArrayUniquer/integeruniquer.data
00000000: 6745 2301 7e33 0a00 0100 0000 0000 0000  gE#.~3..........
00000010: 0100 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
009ffff0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
</code></pre>
<p>Welp. It doesn’t seem like this file actually contains much useful data. Let’s re-sort our list of files, this time sorting by the number of non-null lines.</p>
<pre><code>$ for f in $(find . -type f); do echo &#34;$(xxd -a $f | wc -l) $f&#34;; done | sort -n | tail -n 10
    1337 ./corespace/currentRun/core/extensions/com.apple.dt.instruments.ktrace.dtac/knowledge-rules-0.clp
    1337 ./corespace/run1/core/extensions/com.apple.dt.instruments.ktrace.dtac/knowledge-rules-0.clp
    2232 ./corespace/currentRun/core/extensions/com.apple.dt.instruments.poi.dtac/binding-rules.clp
    2232 ./corespace/run1/core/extensions/com.apple.dt.instruments.poi.dtac/binding-rules.clp
    2391 ./corespace/run1/core/uniquing/arrayUniquer/integeruniquer.data
    2524 ./corespace/run1/core/stores/indexed-store-12/spindex.0
    2736 ./corespace/run1/core/stores/indexed-store-9/spindex.0
    5148 ./corespace/run1/core/stores/indexed-store-9/bulkstore
    6793 ./corespace/run1/core/stores/indexed-store-12/bulkstore
   18091 ./form.template
</code></pre>
<p>We already know what <code>form.template</code> is, so we’ll start with the second largest file. If we look at <code>indexed-store-12/bulkstore</code>, it looks like there might be some useful data in there, starting at offset <code>0x1000</code>.</p>
<pre><code>$ xxd -a ./corespace/run1/core/stores/indexed-store-12/bulkstore | head -n 20
00000000: 0a0a 3412 0300 0000 2800 0000 0010 0000  ..4.....(.......
00000010: 2100 0000 0040 0800 0040 0000 0000 0000  !....@...@......
00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
*
00001000: 796c 8f2b 0000 0400 0000 0000 0006 0000  yl.+............
00001010: 0004 0000 0040 420f 0000 0000 00fe 0000  .....@B.........
00001020: 00a4 9ddc 2b00 0004 0000 0000 0000 0200  ....+...........
00001030: 0000 0400 0000 4042 0f00 0000 0000 0001  ......@B........
00001040: 0000 5d7f 0a2c 0000 0400 0000 0000 0002  ..]..,..........
00001050: 0000 0004 0000 0040 420f 0000 0000 0002  .......@B.......
00001060: 0100 0039 fb19 2c00 0004 0000 0000 0000  ...9..,.........
00001070: 0000 0000 0400 0000 4042 0f00 0000 0000  ........@B......
00001080: 0401 0000 336b 292c 0000 0400 0000 0000  ....3k),........
00001090: 0000 0000 0004 0000 0040 420f 0000 0000  .........@B.....
000010a0: 0005 0100 0026 e538 2c00 0004 0000 0000  .....&amp;.8,.......
000010b0: 0000 0000 0000 0400 0000 4042 0f00 0000  ..........@B....
000010c0: 0000 0701 0000 4c5a 482c 0000 0400 0000  ......LZH,......
000010d0: 0000 0000 0000 0004 0000 0040 420f 0000  ...........@B...
000010e0: 0000 0009 0100 0040 ce57 2c00 0004 0000  .......@.W,.....
000010f0: 0000 0000 0000 0000 0400 0000 4042 0f00  ............@B..
</code></pre>
<p>The <code>@B</code> in the right column, while not obviously semantically meaningful, seems to repeat at a regular interval. Maybe if we can figure out that interval, we’ll be able to guess the structure of the data. We can try guessing different intervals by using the <code>-c</code> argument of <code>xxd</code>, and try changing the byte grouping using the <code>-g</code> argument.</p>
<pre><code>      -c cols | -cols cols
              format &lt;cols&gt; octets per line. Default 16 (-i: 12, -ps: 30, -b: 6).

     -g bytes | -groupsize bytes
              separate the output of every &lt;bytes&gt; bytes (two hex characters or eight bit-digits each) by a whitespace.  Specify -g 0 to suppress grouping.
</code></pre>
<p>We seem to get alignment between the repeated values when we group data into chunks of 33:</p>
<pre><code>$ xxd -a -c 33 -g0 ./corespace/run1/core/stores/indexed-store-12/bulkstore | cut -d&#39; &#39; -f2 | head -n 10
0a0a34120300000028000000001000002100000000400800004000000000000000
000000000000000000000000000000000000000000000000000000000000000000
*
00000000796c8f2b000004000000000000060000000400000040420f0000000000
fe000000a49ddc2b000004000000000000020000000400000040420f0000000000
000100005d7f0a2c000004000000000000020000000400000040420f0000000000
0201000039fb192c000004000000000000000000000400000040420f0000000000
04010000336b292c000004000000000000000000000400000040420f0000000000
0501000026e5382c000004000000000000000000000400000040420f0000000000
070100004c5a482c000004000000000000000000000400000040420f0000000000
</code></pre>
<p>Sweet! This suggests that this file format uses 33 bytes per entry, and <em>hopefully</em> each of those entries corresponds to one sample in the profile.</p>

<p>Looking around in the directory which contains this <code>bulkstore</code>, we find a helpful sounding file called <code>schema.xml</code>:</p>
<pre><code>$ cat ./corespace/run1/core/stores/indexed-store-12/schema.xml
&lt;schema name=&#34;time-profile&#34; topology=&#34;XRT50_C22_TypeID&#34;&gt;
    &lt;column engineeringType=&#34;XRSampleTimestampTypeID&#34; engineeringName=&#34;Sample Time&#34; mnemonic=&#34;time&#34; topologyField=&#34;XRTraceRelativeTimestampFieldID&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRThreadTypeID&#34; engineeringName=&#34;Thread&#34; mnemonic=&#34;thread&#34; topologyField=&#34;XRCategory1FieldID&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRProcessTypeID&#34; engineeringName=&#34;Process&#34; mnemonic=&#34;process&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRCPUCoreTypeID&#34; engineeringName=&#34;Core&#34; mnemonic=&#34;core&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRThreadStateTypeID&#34; engineeringName=&#34;State&#34; mnemonic=&#34;thread-state&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRTimeSampleWeightTypeID&#34; engineeringName=&#34;Weight&#34; mnemonic=&#34;weight&#34;&gt;&lt;/column&gt;
    &lt;column engineeringType=&#34;XRBacktraceTypeID&#34; engineeringName=&#34;Stack&#34; mnemonic=&#34;stack&#34;&gt;&lt;/column&gt;
&lt;/schema&gt;
</code></pre>
<p>Alright, this is looking pretty good! <code>XRSampleTimestampTypeID</code> and <code>XRBacktraceTypeID</code> seem particularly relevant.</p>

<p>The next step is to figure out how these 33 byte entries map onto the fields in <code>schema.xml</code>.</p>

<h1 id="guessing-binary-formats-with-synalyze-it">Guessing binary formats with Synalyze It!</h1>

<p>So far in this exploration, all of the tools I’ve been using come standard on most unix installations, and all are free and open source. While I certainly could have figured this out end-to-end using only tools in that category, my friend <a href="https://petersobot.com/">Peter Sobot</a> introduced me to a tool that made this process much easier.</p>

<p><a href="https://www.synalysis.net/">Synalyze It!</a> is a hex editor and binary analysis tool for OS X. There’s a Windows &amp; Linux version called <a href="https://hexinator.com/">Hexinator</a>. These tools let you make guesses about the structure of file formats (e.g. “I think this file is a list of structs, each of which is 20 bytes, where the first 4 bytes are an unsigned int, and the last 16 bytes are a fixed-length ascii string”), then parse the file based on that guess and display the result in both a colorized view of the hex dump and an expandable tree view. This let me guess-and-check several hypotheses about the structure of the file.</p>

<p>Eventually I was able to guess the length and offsets of the fields I was interested in. Synalyze It! helps you visually parse the information by setting colors for different fields. Here, I’ve set the sample timestamp to be green, and the backtrace ID to be red.</p>

<figure>
<img src="/images/instruments/synalyzeit.png">
</figure>

<p>From looking at the values of the sample time and comparing them with what Instruments was displaying, I was able to infer that each value represents the number of nanoseconds since the profile started, stored as a six byte unsigned integer. I was able to verify this by editing the binary file and then re-opening it in Instruments.</p>

<figure>
<img src="/images/instruments/timestamp-found.png">
</figure>

<p>Sweet! So that answers the question of where the sample information is stored, and we know how to interpret the timestamp data. But we still don’t quite know how to turn the backtrace ID into a stack trace.</p>
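<p>To make that concrete, here’s a minimal Python sketch of pulling fixed-width records out of the <code>bulkstore</code>. The 33 byte record size and the <code>0x1000</code> data offset come from the experiments above; treating the first six bytes of each record as the timestamp is a hypothetical layout standing in for whatever your own guess-and-check session turns up:</p>
<pre><code>import struct

RECORD_SIZE = 33      # from the xxd alignment experiment above
HEADER_SIZE = 0x1000  # the non-zero data seemed to start here

def read_timestamps(path):
    with open(path, &#39;rb&#39;) as f:
        data = f.read()[HEADER_SIZE:]
    for i in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        record = data[i:i + RECORD_SIZE]
        # Hypothetical field layout: a six byte little endian unsigned
        # integer holding nanoseconds since the start of the profile.
        # Pad to eight bytes so struct can unpack it as a u64.
        (nanos,) = struct.unpack(&#39;&lt;Q&#39;, record[:6] + b&#39;\0\0&#39;)
        yield nanos
</code></pre>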

<p>To try to find the stacks, we can see if the memory addresses identified as part of the symbol table show up anywhere outside of the <code>form.template</code> binary plist.</p>

<h1 id="finding-binary-sequences-using-python">Finding binary sequences using python</h1>

<p>Here’s the same symbol data from earlier.</p>

<figure>
<img src="/images/instruments/pftsymboldata.png">
</figure>

<p>So let’s see if we can find one of these addresses referenced somewhere else in the <code>.trace</code> bundle. We’ll look for the third address in that list, <code>4536213276</code>.</p>

<p>As a first attempt, let’s check whether the number is written out as a string somewhere.</p>
<pre><code>$ grep -a -R -l &#39;4536213276&#39; .
</code></pre>
<p>No results. Well, that was kind of a long shot. Let’s try the more plausible idea of searching for a binary encoding of this number.</p>

<p>There are two standard ways of encoding multi-byte integers into a byte stream. One is called “little endian” and the other is called “big endian”. In little endian, you place the least significant byte first. In big endian, you place the most significant byte first. Using <a href="https://docs.python.org/2/library/struct.html#format-characters">Python’s struct standard library</a>, we can see what each of these representations looks like.</p>

<p>The number is too big to fit in a 32 bit integer, so it’s probably a 64 bit integer, which would make sense for a memory address on a 64 bit system.</p>
<pre><code>$ python -c &#39;import struct, sys; sys.stdout.write(struct.pack(&#34;&gt;Q&#34;, 4536213276));&#39; | xxd
00000000: 0000 0001 0e61 1f1c                      .....a..
$ python -c &#39;import struct, sys; sys.stdout.write(struct.pack(&#34;&lt;Q&#34;, 4536213276));&#39; | xxd
00000000: 1c1f 610e 0100 0000                      ..a.....
</code></pre>
<p><code>&gt;Q</code> instructs <code>struct.pack</code> to encode the number as a big endian 64 bit unsigned integer, and <code>&lt;Q</code> corresponds to a little endian 64 bit unsigned integer.</p>

<p>If you split up the bytes, you can see it’s the same bytes in both encodings, just in reverse order:</p>

<figure>
<img src="/images/instruments/endianness.png">
</figure>

<p>Now we can use a little python program to search for files with the value we care about.</p>
<pre><code>$ cat search.py
import os, struct

addr = 4536213276
little = struct.pack(&#39;&lt;Q&#39;, addr)  # the 8 little endian bytes from above
big = struct.pack(&#39;&gt;Q&#39;, addr)     # the 8 big endian bytes from above

# Walk every file in the trace bundle, checking for either encoding.
for (dirpath, dirnames, filenames) in os.walk(&#39;.&#39;):
  for f in filenames:
    path = os.path.join(dirpath, f)
    contents = open(path, &#39;rb&#39;).read()
    if little in contents:
      print &#39;Found little in %s&#39; % path
    elif big in contents:
      print &#39;Found big in %s&#39; % path
$ python search.py
Found big in ./form.template
Found little in ./corespace/run1/core/uniquing/arrayUniquer/integeruniquer.data
</code></pre>
<p>Sweet! The value is in two places: one little endian, one big endian. The <code>form.template</code> one we already knew about — that’s where we found this address in the first place. The second location, in <code>integeruniquer.data</code>, is one we haven’t explored. It was also one of the files we found when searching for files with large amounts of non-zero data in them.</p>

<p>After fumbling around in this file with Synalyze It! for a while, I discovered that the file is aptly named: it contains arrays of integers, packed as a 32 bit length followed by a list of 64 bit integers.</p>

<figure>
<img src="/images/instruments/synalyzeit-integeruniquer.png">
</figure>

<p>So <code>integeruniquer.data</code> contains an array of arrays of 64 bit integers. Neat!
It seems like each 64 bit int is either a memory address or an index into the array of arrays. This was the last piece of the puzzle we needed to parse the profiles.</p>
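<p>As a minimal sketch, here’s how you might decode one of those length-prefixed arrays in Python. Little endian is an assumption here, matching the encoding of the address we just found in this file, and the sketch ignores the file header entirely (it assumes you’ve already located the start of an array):</p>
<pre><code>import struct

def read_int_array(buf, offset):
    # One array is stored as a 32 bit length followed by that many
    # 64 bit integers (each either a memory address or an index into
    # the array of arrays).
    (length,) = struct.unpack_from(&#39;&lt;I&#39;, buf, offset)
    values = struct.unpack_from(&#39;&lt;%dQ&#39; % length, buf, offset + 4)
    return list(values), offset + 4 + 8 * length
</code></pre>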

<h1 id="putting-it-all-together">Putting it all together</h1>

<p>So overall, the final process looks like this:</p>

<ol>
<li>Find the list of samples by finding a <code>bulkstore</code> file adjacent to a <code>schema.xml</code> which contains the string <code>&lt;schema name=&#34;time-profile&#34;</code>.</li>
<li>Extract a list of <code>(timestamp, backtraceID)</code> tuples from the <code>bulkstore</code>.</li>
<li>Using the <code>backtraceID</code> as an index into the array represented by <code>arrayUniquer/integeruniquer.data</code>, convert the list of <code>(timestamp, backtraceID)</code> tuples into a list of <code>(timestamp, address[])</code> tuples.</li>
<li>Parse the <code>form.template</code> binary plist and extract the symbol data from <code>PFTSymbolData</code> from the resulting <code>NSKeyedArchive</code>. Convert this into a mapping from <code>address</code> to <code>(function name, file path)</code> pairs.</li>
<li>Using the <code>address → (function name, file path)</code> mapping in conjunction with the <code>(timestamp, address[])</code> tuple list, construct a list of <code>(timestamp, (function name, file path)[])</code> tuples. This is the final information needed to construct a flamegraph! (There’s a compact sketch of this pipeline just after this list.)</li>
</ol>
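<p>In hypothetical Python pseudocode, with each helper standing in for the corresponding step above (none of these are real speedscope functions), the whole pipeline looks something like this:</p>
<pre><code>def import_trace(trace_dir):
    samples = read_bulkstore(trace_dir)       # step 2: (timestamp, backtraceID)
    arrays = read_integer_uniquer(trace_dir)  # step 3: backtraceID -&gt; address[]
    symbols = read_form_template(trace_dir)   # step 4: address -&gt; (name, path)
    # Step 5: resolve each sample&#39;s backtrace into symbolicated frames.
    # (This ignores the case where an entry in arrays is itself an index
    # into the array of arrays rather than an address.)
    return [
        (timestamp, [symbols[addr] for addr in arrays[backtrace_id]])
        for (timestamp, backtrace_id) in samples
    ]
</code></pre>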

<p>Phew! That was a lot of digging for what ultimately ends up being a relatively straightforward data extraction. You can find the implementation in <a href="https://github.com/jlfwong/speedscope/blob/721246752f5e897f9c5a7c8c325fe55a79681ef2/import/instruments.ts#L439"><code>importFromInstrumentsTrace</code></a> in the source for speedscope on GitHub.</p>

<p>If you do get the chance to <a href="https://www.speedscope.app">give speedscope a try</a>, please tweet <a href="https://twitter.com/jlfwong">@jlfwong</a> and let me know what you think 🙂.</p>

<p><em>Thanks to</em> <a href="https://petersobot.com/"><em>Peter Sobot</em></a><em>,</em> <a href="http://rykap.com/"><em>Ryan Kaplan</em></a>, <em>and</em> <a href="http://digitalfreepen.com/"><em>Rudi Chen</em></a> <em>for providing feedback on the draft of this post.</em></p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Color: From Hexcodes to Eyeballs]]></title>
    <link href="http://jamie-wong.com/post/color/"/>
    <updated>2018-04-03T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/color/</id>
    <content type="html"><![CDATA[ 

<p><link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.9.0/katex.min.css" integrity="sha384-TEMocfGvRuD1rIAacqrknm5BQZ7W7uWitoih+jMNFXQIbNl16bO8OZmylH/Vi/Ei" crossorigin="anonymous"></p>

<style>

.datatable td {
  padding: 0 5px 0 5px;
  border: 1px solid black;
  text-align: right;
}
</style>

<p><img src="/images/color/Hero.png" alt="" /></p>

<p><em>This post is also available in Russian: <a href="https://habr.com/post/353582/">Цвет: от шестнадцатеричных кодов до глаза</a>, and Japanese: <a href="https://postd.cc/color/">色：ヘキサコードから眼球まで</a>.</em></p>

<p>Why do we perceive <code>background-color: #9B51E0</code> as this particular purple?</p>

<p><img src="/images/color/Purple.png" alt="" /></p>

<p>This is one of those questions where I thought I’d known the answer for a long time, but as I inspected my understanding, I realized there were pretty significant gaps.</p>

<p>Through an exploration of electromagnetic radiation, optical biology, colorimetry, and display hardware, I hope to start filling in some of these gaps.
If you want to skip ahead, here&rsquo;s the lay of the land we&rsquo;ll be covering:</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#electromagnetic-radiation">Electromagnetic radiation</a></li>
<li><a href="#visible-light">Visible light</a></li>
<li><a href="#human-perceived-brightness">Human perceived brightness</a></li>
<li><a href="#quantifying-color">Quantifying color</a></li>
<li><a href="#optical-biology">Optical biology</a></li>
<li><a href="#color-spaces">Color spaces</a></li>
<li><a href="#wright-guild-s-color-matching-experiments">Wright &amp; Guild’s color matching experiments</a></li>
<li><a href="#visualizing-color-spaces-chromaticity">Visualizing color spaces &amp; chromaticity</a></li>
<li><a href="#gamuts-and-the-spectral-locus">Gamuts and the spectral locus</a></li>
<li><a href="#cie-xyz-color-space">CIE XYZ color space</a></li>
<li><a href="#screen-subpixels">Screen subpixels</a></li>
<li><a href="#srgb">sRGB</a></li>
<li><a href="#srgb-hexcodes">sRGB hexcodes</a></li>
<li><a href="#gamma-correction">Gamma correction</a></li>
<li><a href="#from-hexcodes-to-eyeballs">From Hexcodes to Eyeballs</a></li>
<li><a href="#a-brief-note-about-brightness-setting">A brief note about brightness setting</a></li>
<li><a href="#stuff-i-left-out">Stuff I left out</a></li>
<li><a href="#references">References</a></li>
</ul>
</nav></p>

<p>Otherwise, let’s start with the physics.</p>

<h1 id="electromagnetic-radiation">Electromagnetic radiation</h1>

<p>Radio waves, microwaves, infrared, visible light, ultraviolet, x-rays, and gamma rays are all forms of electromagnetic radiation. While these things all go by different names, these names really only label different ranges of wavelengths within the electromagnetic spectrum.</p>

<figure>
<img src="/images/color/electromagneticSpectrum.png">
<figcaption>The electromagnetic spectrum</figcaption>
</figure>

<p>The smallest unit of electromagnetic radiation is a photon. The energy contained within a photon is proportional to the frequency of its corresponding wave, with high energy photons corresponding with high frequency waves.</p>

<p>To really understand color, we need to first understand radiation. Let’s take a closer look at the radiation of an incandescent light bulb.</p>

<figure>
<img src="/images/color/incandescent.png">
<figcaption>Photo by <a href="https://unsplash.com/photos/HfR0W6HW_Cw">Alex Iby</a></figcaption>
</figure>

<p>We might want to know how much energy the bulb is radiating. The <strong>radiant flux</strong> ($$\Phi_e$$) of an object is the total energy emitted per second, and is measured in Watts. The radiant flux of a 100W incandescent lightbulb is about 80W, with the remaining 20W being converted directly to non-radiated heat.</p>

<p>If we want to know how much of that energy comes from each wavelength, we can look at the <strong>spectral flux</strong>. The spectral flux ($$\Phi_{e,\lambda}$$) of an object is radiant flux per unit wavelength, and is typically measured in Watts/nanometer.</p>

<p>If we were to graph the spectral flux of our incandescent lightbulb as a function of wavelength, it might look something like this:</p>

<figure>
<img src="/images/color/SpectralFlux1.png">
</figure>

<p>The area under this curve will give the radiant flux. As an equation, $$\Phi_e = \int_0^\infty \Phi_{e,\lambda}(\lambda) d\lambda$$.
In this case, the area under the curve will be about 80W.</p>

<figure>
<img src="/images/color/SpectralFlux2.png">
</figure>

<div>$$\Phi_{e}^{\text{bulb}} = \int_0^\infty \Phi_{e,\lambda}^\text{bulb}(\lambda) d\lambda = 80\text{W}$$</div>

<p>Now you might’ve heard from eco-friendly campaigns that incandescent lightbulbs are brutally inefficient, and might be thinking to yourself, “well, 80% doesn’t seem so bad”.</p>

<p>And it’s true — an incandescent lightbulb is a pretty efficient way to convert electricity into radiation. Unfortunately, it’s a terrible way to convert electricity into <em>human visible</em> radiation.</p>

<h1 id="visible-light">Visible light</h1>

<p>Visible light is the wavelength range of electromagnetic radiation from $$\lambda = 380\text{nm}$$ to $$\lambda = 750\text{nm}$$. On our graph of an incandescent bulb, that’s the shaded region below.</p>

<figure>
<img src="/images/color/SpectralFlux3.png">
</figure>

<div>$$\int_{380 \text{nm}}^{750 \text{nm}} \Phi_{e,\lambda}^\text{bulb}(\lambda) d\lambda = 8.7W$$</div>

<p>Okay, so the energy radiated <em>within the visible spectrum</em> is $$8.7W$$ for an efficiency of $$8.7\%$$. That seems pretty awful. But it gets worse.</p>

<p>To understand why, let’s consider why visible light is, well, <em>visible</em>.</p>

<h1 id="human-perceived-brightness">Human perceived brightness</h1>

<figure>
<img src="/images/color/bweye.png">
<figcaption>Photo by <a href="https://unsplash.com/photos/QaGNhezu_5Q">Christopher Burns</a></figcaption>
</figure>

<p>Just as we saw that an incandescent light bulb doesn’t radiate equally at all wavelengths, our eyes aren’t equally sensitive to radiation at all wavelengths. If we measure a human eye’s sensitivity to every wavelength, we get a <a href="https://en.wikipedia.org/wiki/Luminosity_function">luminosity function</a>. The standard luminosity function, $$\bar y(\lambda)$$ looks like this:</p>

<figure>
<img src="/images/color/SpectralFlux4.png">
</figure>

<p>The bounds of this luminosity function <em>define</em> the range of visible light. Anything outside this range isn’t visible because, well, our eyes aren’t sensitive to it!</p>

<p>This curve also shows that our eyes are <em>much</em> more sensitive to radiation at 550nm than they are to radiation at either 650nm or 450nm.</p>

<p>Other animals have eyes that are sensitive to a different range of wavelengths, and therefore different luminosity functions. Birds can see ultraviolet radiation in the range between $$\lambda=300\text{nm}$$ to $$\lambda=400\text{nm}$$, so if scholarly birds had defined the electromagnetic spectrum, that would’ve been part of the “visible light” range for them!</p>

<figure>
<img src="/images/color/owl.png">
<figcaption>Photo by <a href="https://unsplash.com/photos/0J6cTw0V2lE">Timothy Rhyne</a></figcaption>
</figure>

<p>By multiplying the graph of spectral flux with the luminosity function $$\bar y(\lambda)$$, we get a function which describes the contributions to human perceived brightness for each wavelength emitted by a light source.</p>

<figure>
<img src="/images/color/SpectralFlux5.png">
</figure>

<p>This is the <strong>spectral luminous flux</strong> ($$\Phi_{v,\lambda}$$). To acknowledge that this is about human perception rather than objective power, luminous flux is measured in lumens rather than Watts, using a conversion ratio of 683.002 lm/W.</p>

<div>$$\Phi_{v,\lambda}(\lambda) = 683.002 \frac{\text{lm}}{\text{W}} \cdot \bar y(\lambda) \cdot \Phi_{e,\lambda}(\lambda)$$</div>

<p>The <strong>luminous flux</strong> ($$\Phi_v$$) of a light source is the total <em>human perceived</em> power of the light.</p>

<p>Just as we calculated the radiant flux by taking the area under the spectral flux curve, we can find the luminous flux by taking the area under the spectral <em>luminous</em> flux curve, with a constant conversion from perceived watts to lumens:</p>

<figure>
<img src="/images/color/SpectralFlux5.5.png">
</figure>

<div>$$\Phi_{v}^\text{bulb} = 683.002 \frac{\text{lm}}{\text{W}} \int_0^\infty \bar y(\lambda) \cdot \Phi_{e,\lambda}^\text{bulb}(\lambda) d\lambda = 683.002 \frac{\text{lm}}{\text{W}} \cdot 2.4\text{W} \approx 1600 \text{lm}$$</div>
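<p>Numerically, this is just a weighted Riemann sum. Here’s a minimal Python sketch, where <code>phi_e</code> and <code>y_bar</code> are hypothetical arrays holding the bulb’s spectral flux (in W/nm) and the standard luminosity function, both sampled at the same 1nm intervals:</p>
<pre><code>LM_PER_W = 683.002  # lumens per perceived watt

def luminous_flux(phi_e, y_bar, d_lambda=1.0):
    # Approximates 683.002 * the integral of y_bar(l) * phi_e(l) dl.
    return LM_PER_W * sum(y * p for y, p in zip(y_bar, phi_e)) * d_lambda
</code></pre>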

<p>So the luminous flux of our 100W incandescent lightbulb is a measly 2.4W or 1600lm! The bulb has a luminous efficiency of 2.4%, a far cry from the 80% efficiency converting electricity into radiation.</p>

<p>Perhaps if we had a light source that concentrated its emission into the visible range, we’d be able to get more efficient lighting. Let’s compare the spectra of incandescent, fluorescent, and LED bulbs:</p>

<figure>
<img src="/images/color/SpectralFlux6.png">
</figure>

<p>And indeed, we can see that far less of the radiation in fluorescent or LED bulbs is wasted on wavelengths that humans can’t see. Where incandescent bulbs might have an efficiency of 1-3%, fluorescent bulbs can be around 10% efficient, and LED bulbs can achieve up to 20% efficiency!</p>

<p>Enough about brightness, let’s return to the focus of this post: color!</p>

<h1 id="quantifying-color">Quantifying color</h1>

<figure>
<img src="/images/color/lemon.png">
<figcaption>Photo by <a href="https://unsplash.com/photos/sil2Hx4iupI">Lauren Mancke</a></figcaption>
</figure>

<p>How might we identify a given color? If I have a lemon in front of me, how can I tell you over the phone what color it is? I might tell you “the lemon is yellow”, but which yellow? How would you precisely identify each of these yellows?</p>

<p><img src="/images/color/Yellows.png" alt="Shades of yellow" /></p>

<p>Armed with the knowledge that color is humans’ interpretation of electromagnetic radiation, we might be tempted to define color mathematically via spectral flux. Any human visible color will be some weighted combination of the monochromatic (single wavelength) colors. Monochromatic colors are also known as spectral colors.</p>

<figure>
<img src="/images/color/Rainbow.png">
<figcaption>The monochromatic colors by wavelength</figcaption>
</figure>

<p>For any given object, we can measure its emission (or reflectance) spectrum, and use that to precisely identify a color. If we can reproduce the spectrum, we can certainly reproduce the color!</p>

<p>The sunlight reflected from a point on a lemon might have a reflectance spectrum that
looks like this:</p>

<figure>
<img src="/images/color/ReflectanceSpectrum.png">
</figure>

<p><em>Note: the power and spectral distribution of radiation that reaches your eye is going to depend upon</em> <em>the</em> <em>power &amp; emission spectrum of the light source, the distance of the light source from the illuminated object, the size and shape of the object, the absorption spectrum of the object, and your distance from the object. That’s a lot to think about, so let’s focus just on what happens when that light hits your eye. Let’s also disregard units for now to focus on concepts.</em></p>

<p>When energy with this spectral distribution hits our eyes, we perceive it as “yellow”. Now let’s say I take a photo of the lemon and upload it to my computer. Next, I carefully adjust the colors on my screen until a particular point of the on-screen lemon is imperceptibly different from the color of the actual lemon in my actual hand.</p>

<p>If you were to measure the spectral power distribution coming off of the screen, what would you expect the distribution to look like? You might reasonably expect it to look similar to the reflectance spectrum of the lemon above. But it would actually look something like this:</p>

<figure>
<img src="/images/color/EmissionSpectrum.png">
</figure>

<p>Two different spectral power distributions that look the same to human observers are called <a href="https://en.wikipedia.org/wiki/Metamerism_(color)"><strong>metamers</strong></a>.</p>

<figure>
<img src="/images/color/Metamers1.png">
</figure>

<p>To understand how this is possible, let’s take a look at the biology of the eye.</p>

<h1 id="optical-biology">Optical biology</h1>

<figure>
<img src="/images/color/coloreye.png">
<figcaption>Photo by <a href="https://unsplash.com/photos/UbJMy92p8wk">Amanda Dalbjörn </a></figcaption>
</figure>

<p>Our perception of light is the responsibility of specialized cells in our eyes called “rods” and “cones”. Rods are predominantly important in low-light settings, and don’t play much of a role in color vision, so we’ll focus on the cones.</p>

<p>Humans typically have 3 kinds of cones. Having three different kinds of cones makes humans “trichromats”. There is, however, at least one confirmed case of a <a href="http://nymag.com/scienceofus/2015/02/what-like-see-a-hundred-million-colors.html">tetrachromat human</a>! Other animals have even more cone categories. <a href="http://theoatmeal.com/comics/mantis_shrimp">Mantis shrimp</a> have <em>sixteen</em> different kinds of cones.</p>

<p>Each kind of cone is labelled by the range of wavelengths of light it is excited by. The standard labelling is “S”, “M”, and “L” (short, medium, long).</p>

<figure>
<img src="/images/color/Cones.png">
</figure>

<p>These three curves indicate how sensitive the corresponding cone is to each wavelength. The highest point on each curve is called the “peak wavelength”, indicating the wavelength of radiation that the cone is most sensitive to.</p>

<p>Let’s see how our cones process the light bouncing off the lemon in my hand.</p>

<figure>
<img src="/images/color/ConeExcitation1.png">
</figure>

<p>By looking at the normalized areas under the curves, we can see how much the radiation reflected from the real lemon excites each of the cones. In this case, the normalized excitations of the S, M, and L cones are 0.02, 0.12, and 0.16 respectively. Now let’s repeat the process for the on-screen lemon.</p>

<figure>
<img src="/images/color/ConeExcitation2.png">
</figure>

<p>Despite having totally different radiation spectra reaching the eye, the cone <em>excitations</em> are the same (S=0.02, M=0.12, L=0.16). That’s why the point on the real lemon and the point on the digital lemon look the same to us!</p>

<figure>
<img src="/images/color/Metamers2.png">
<figcaption>
The normalized areas under the stimulation curves will always be equal for all 3 cone types in the case of metamers.
</figcaption>
</figure>

<p>Our 3 sets of cones reduce any spectral flux curve $$\Phi_e(\lambda)$$ down to a triplet of numbers $$(S, M, L)$$, and every distinct $$(S, M, L)$$ triplet will be a distinct color! This is pretty convenient, because (0.02, 0.12, 0.16) is much easier to communicate than a complicated continuous function. For the mathematically inclined, our eyes are doing a dimensional reduction from an infinite dimensional space into 3 dimensions, which is a pretty damn cool thing to be able to do subconsciously.</p>
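<p>That dimensional reduction is the same weighted-sum pattern we used for luminous flux, applied three times. Here’s a minimal sketch, with hypothetical arrays standing in for the sampled cone sensitivity curves (the color matching functions later in this post work exactly the same way):</p>
<pre><code>def cone_excitations(spectrum, s_bar, m_bar, l_bar, d_lambda=1.0):
    # Project an arbitrary spectral flux curve down to an (S, M, L)
    # triplet. Metamers are exactly the distinct spectra for which
    # this function returns the same triplet.
    def project(sensitivity):
        return sum(s * p for s, p in zip(sensitivity, spectrum)) * d_lambda
    return (project(s_bar), project(m_bar), project(l_bar))
</code></pre>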

<p>This $$(S, M, L)$$ triplet is, in fact, our first example of a <strong>color space</strong>.</p>

<h1 id="color-spaces">Color spaces</h1>

<p>Color spaces allow us to define with numeric precision what color we’re talking about. In the previous section, we saw that a specific yellow could be represented as (0.02, 0.12, 0.16) in the SML color space, which is more commonly known as the <a href="https://en.wikipedia.org/wiki/LMS_color_space">LMS color space</a>.</p>

<p>Since this color space is describing the stimulation of cones, by definition any human visible color can be represented by positive LMS coordinates (excluding the extremely rare tetrachromat humans, who would need four coordinates instead of three).</p>

<p>But, alas, this color space has some unhelpful properties.</p>

<p>For one, not all triplet values (also called <strong>tristimulus values</strong>) are <em>physically possible</em>. Consider the LMS coordinates (0, 1, 0). To physically achieve this coordinate, we would need to find some way of stimulating the M cones without stimulating the L or S cones <em>at all</em>. Because the M cone’s sensitivity curve significantly overlaps at least one of L or S at all wavelengths, this is impossible!</p>

<figure>
<img src="/images/color/Cones.png">
<figcaption>
Any wavelength which stimulates the M cone will also stimulate either the L or S cone (or both!)
</figcaption>
</figure>

<p>A problematic side effect of this fact is that it’s really difficult to increase stimulation of only one of the cones. This, in particular, makes it a poor candidate for building display hardware around.</p>

<p>Another historical, pragmatic problem was that the cone sensitivities weren’t accurately known until the 1990&rsquo;s, and a need to develop a mathematically precise model of color significantly predates that. The first significant progress on that front came about in the late 1920’s.</p>

<h1 id="wright-guild-s-color-matching-experiments">Wright &amp; Guild’s color matching experiments</h1>

<p>In the late 1920’s, William David Wright and John Guild conducted experiments to precisely define color in terms of contributions from 3 specific wavelengths of light.</p>

<p>Even though they may not have known about the three classes of cones in the eye, the idea that all visible colors could be created as the combination of three colors had been proposed at least a hundred years earlier.</p>

<figure>
<img src="/images/color/tricolor.png">
<figcaption>
An example of a tricolor theory by Charles Hayter, 1826
</figcaption>
</figure>

<p>Wright &amp; Guild had the idea to construct an apparatus that would allow test subjects to reconstruct a test color as the combination of three fixed wavelength light sources. The setup would’ve looked something like this:</p>

<figure>
<img src="/images/color/ColorMatching1.png">
</figure>

<p>The experimenter would set the lamp on the bottom to a target wavelength (for instance, 600nm), then ask the test subject to adjust the three lamp power controls until the colors they were seeing matched.</p>

<figure>
<img src="/images/color/ColorMatching2.png">
</figure>

<p>The power settings of the three dials give us a (red, green, blue) triplet identifying the pure spectral color associated with 600nm. Repeating this process every 5nm with about 10 test subjects, a graph emerges showing the amounts of red (700nm), green (546nm), and blue (435nm) light needed to reconstruct the appearance of a given wavelength. These functions are known as <strong>color matching functions (CMFs)</strong>.</p>

<p>These particular color matching functions are known as $$\bar r(\lambda)$$, $$\bar g(\lambda)$$, and $$\bar b (\lambda)$$.</p>

<figure>
<img src="/images/color/cmfs1.png">
</figure>

<p>This gives the pure spectral color associated with 600nm an $$(R, G, B)$$ coordinate of (0.34, 0.062, 0.00). This is a value in the <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space#CIE_RGB_color_space">CIE 1931 RGB color space</a>.</p>

<p>Hold on though — what does it mean when the functions go negative, like here?</p>

<figure>
<img src="/images/color/cmfs2.png">
</figure>

<p>The pure spectral color associated with 500nm has an $$(R, G, B)$$ coordinate of (-0.72, 0.85, 0.48). So what exactly does that -0.72 mean?</p>

<p>It turns out that no matter what you set the red (700nm) dial to, it will be impossible to match a light outputting at 500nm, regardless of the values of the blue and green dials. You can, however, make the two sides match by adding red light to the <em>bottom</em> side.</p>

<figure>
<img src="/images/color/ColorMatching3.png">
</figure>

<p>The actual setup probably would’ve had a full set of 3 variable power, fixed wavelength lamps on either side of the divider, so that any of the three could effectively be “adjusted to go negative” by adding its light to the test side.</p>

<p>Using our color matching functions, we can match any monochromatic light using a combination of (possibly negative) amounts of red (700nm), green (546nm), and blue (435nm) light.</p>

<p>Just as we were able to use our L, M, and S cone sensitivity functions to determine cone excitation for any spectral distribution, we can do the same thing with our color matching functions. Let’s apply that to the lemon color from before:</p>

<figure>
<img src="/images/color/ColorMatchingLemon.png">
</figure>

<p>By taking the area under the curve of the product of the spectral curve and the color matching functions, we’re left with an $$(R, G, B)$$ triplet (1.0, 0.8, 0.2) uniquely identifying this color.</p>

<p>While the $$(L, M, S)$$ color space gave us a precise way to <em>identify</em> colors, this $$(R, G, B)$$ color space gives us a precise way to <em>reproduce</em> colors. But, as we saw in the color matching functions, any colors with a negative $$(R, G, B)$$ coordinate can’t actually be reproduced.</p>

<figure>
<img src="/images/color/cmfs3.png">
</figure>

<p>But this graph only shows which spectral colors can’t be reproduced. What about non-spectral colors? Can I produce pink with an R, G, B combination? What about teal?</p>

<p>To answer these questions, we’ll need a better way of visualizing color space.</p>

<h1 id="visualizing-color-spaces-chromaticity">Visualizing color spaces &amp; chromaticity</h1>

<p>So far most of our graphs have put wavelength on the horizontal axis, and we’ve plotted multiple series to represent the other values of interest.</p>

<figure>
<img src="/images/color/WavelengthPlots.png">
</figure>

<p>Instead, we could plot color as a function of $$(R, G, B)$$ or $$(L, M, S)$$. Let’s see what color plotted in 3D $$(R, G, B)$$ space looks like.</p>

<figure>
<img src="/images/color/LinearRGBCube.png">
</figure>

<p>Cool! This gives us a visualization of a broader set of colors, not just the spectral colors of the rainbow.</p>

<p>A simple way to reduce this down to two dimensions would be to have a separate plot for each pair of values, like so:</p>

<figure>
<img src="/images/color/RGBPairPlots.png">
<figcaption>Component pairs plotted, holding the third coordinate constant</figcaption>
</figure>

<p>In each of these plots, we discard one dimension by holding one thing constant. Rather than holding one of red, green, and blue constant, it would be really nice to have a plot showing all the colors of the rainbow &amp; their combinations, while holding <em>lightness</em> constant.</p>

<p>Looking at the cube pictures again, we can see that (0, 0, 0) is black, and (1, 1, 1) is white.</p>

<figure>
<img src="/images/color/LinearRGBCube.png">
</figure>

<p>What happens if we slice the cube diagonally across the plane containing $$(1, 0, 0)$$, $$(0, 1, 0)$$, and $$(0, 0, 1)$$?</p>

<figure>
<img src="/images/color/TriangleSliceRGB.png">
</figure>

<p>This triangle slice of the cube has the property that $$R + G + B = 1$$, and we can use $$R + G + B$$ as a crude approximation of lightness. If we take a top-down view of this triangular slice, then we get this:</p>

<figure>
<img src="/images/color/rgChromaticity1.png">
</figure>

<p>This two dimensional representation of color is called <strong>chromaticity</strong>. This particular kind is called <a href="https://en.wikipedia.org/wiki/Rg_chromaticity"><strong>rg chromaticity</strong></a>. Chromaticity gives us information about the <em>ratio</em> of the primary colors independent of the lightness.</p>
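<p>As a quick sketch, the projection from an $$(R, G, B)$$ coordinate down to rg chromaticity is just a normalization:</p>
<pre><code>def rg_chromaticity(r, g, b):
    # Divide out the total so only the ratios between the primaries
    # remain. (r, g) is enough to locate a color in the triangle,
    # since b = 1 - r - g on the R + G + B = 1 slice.
    total = r + g + b
    return (r / total, g / total)
</code></pre>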

<p>This means we can have the same chromaticity at many different intensities.</p>

<figure>
<img src="/images/color/rgChromaticity2.png">
</figure>

<p>We can even make a chromaticity graph where the intensity varies with r &amp; g in order to maximize intensity while preserving the ratio between $$R$$, $$G$$, and $$B$$.</p>

<figure>
<img src="/images/color/rgChromaticity3.png">
</figure>

<p>Chromaticity is a useful property of a color to consider because it stays constant as the intensity of a light source changes, so long as the light source retains the same spectral distribution. As you change the brightness of your screen, chromaticity is the thing that stays constant!</p>

<p>There are many different ways of dividing chromaticity into two dimensions. One of the common methods is used in both the HSL and HSV color spaces. Both color spaces split chromaticity into “hue” and “saturation”, like so:</p>

<figure>
<img src="/images/color/HSL.png">
</figure>

<p>It might appear at a glance that the rg chromaticity triangle and these hue vs. saturation squares contain every color of the rainbow. It’s time to revisit those pesky negative values in our color matching functions.</p>

<h1 id="gamuts-and-the-spectral-locus">Gamuts and the spectral locus</h1>

<p>If we take our color matching functions $$\bar r(\lambda)$$, $$\bar g(\lambda)$$, and $$\bar b(\lambda)$$ and use them to plot the rg chromaticities of the spectral colors, we end up with a plot like this:</p>

<figure>
<img src="/images/color/rgChromaticityPlot1.png">
</figure>

<p>The black curve with the colorful dots on it shows the chromaticities of all the pure spectral colors. The curve is called the <strong>spectral locus</strong>. The stars mark the wavelengths of the variable power test lamps used in the color matching experiments.</p>

<p>If we overlay our previous chromaticity triangles onto this chart, we’re left with this:</p>

<figure>
<img src="/images/color/rgChromaticityPlot2.png">
</figure>

<p>The area inside the spectral locus represents all of the chromaticities that are visible to humans. The checkerboard area represents chromaticities that humans can recognize, but that <em>cannot</em> be reproduced by any positive combination of 435nm, 546nm, and 700nm lights. From this diagram, we can see that we’re unable to reproduce any of the spectral colors between 435nm and 546nm, which includes pure cyan.</p>

<p>The triangle on the right without the checkerboard is all of the chromaticities that <em>can</em> be reproduced by a positive combination. We call the area that can be reproduced the <strong>gamut</strong> of the color space.</p>

<p>Before we can <em>finally</em> return to hexcodes, we have one more color space we need to cover.</p>

<h1 id="cie-xyz-color-space">CIE XYZ color space</h1>

<p>In 1931, the International Commission on Illumination convened and created two color spaces. The first was the RGB color space we’ve already discussed, which was created based on the results of Wright &amp; Guild’s color matching experiments. The second was the XYZ color space.</p>

<p>One of the goals of the XYZ color space was to have positive values for all human visible colors, and therefore have all chromaticities fit in the range [0, 1] on both axes. To achieve this, a linear transformation of RGB space was carefully selected.</p>

<div style="font-size: 80%">
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \frac{1}{b_{21}} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}
= \frac{1}{0.17697} \begin{bmatrix} 0.49000 & 0.31000 & 0.20000 \\ 0.17697 & 0.81240 & 0.010630 \\ 0.0000 & 0.010000 & 0.99000 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}
$$</div>

<p>The analog of rg chromaticity for XYZ space is xy chromaticity, and it’s the more standard coordinate system used for chromaticity diagrams.</p>
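<p>Here’s a sketch of both steps in Python, using the matrix above:</p>
<pre><code>def rgb_to_xyz(r, g, b):
    # The CIE RGB -&gt; XYZ linear transformation, scaled by 1/b21.
    scale = 1 / 0.17697
    x = scale * (0.49000 * r + 0.31000 * g + 0.20000 * b)
    y = scale * (0.17697 * r + 0.81240 * g + 0.01063 * b)
    z = scale * (0.00000 * r + 0.01000 * g + 0.99000 * b)
    return (x, y, z)

def xy_chromaticity(x, y, z):
    # The same normalization trick as rg chromaticity, now in XYZ space.
    total = x + y + z
    return (x / total, y / total)
</code></pre>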

<figure>
<img src="/images/color/xyChromaticityPlot.png">
</figure>

<p>Gamuts are typically represented by a triangle placed into an xy chromaticity diagram. For instance, here’s the gamut of CIE RGB again, this time in xy space.</p>

<figure>
<img src="/images/color/gamut1.png">
</figure>

<p>With an understanding of gamuts &amp; chromaticity, we can finally start to discuss how digital displays are able to display an intended color.</p>

<h1 id="screen-subpixels">Screen subpixels</h1>

<p>Regardless of the manufacturer, if you took a powerful magnifying glass to your display, you would find a grid of pixels, where each pixel is composed of 3 types of subpixels: one type emitting red, one green, and one blue. It might look something like this:</p>

<figure>
<img src="/images/color/Subpixels.png">
</figure>

<p>Unlike the test lamps used in the color matching experiments, the subpixels do not emit monochromatic light. Each type of subpixel has its own spectral distribution, and these will vary from device to device.</p>

<figure>
<img src="/images/color/subpixelSpectra.png">
<figcaption>MacBook Air subpixel spectral data from <a href="https://fluxometer.com/rainbow/">f.luxometer</a></figcaption>
</figure>

<p>Using <a href="https://support.apple.com/guide/colorsync-utility/welcome/mac">ColorSync Utility</a> on my MacBook Pro, I was able to determine the xy space gamut of my screen.</p>

<figure>
<img src="/images/color/gamut3.png">
</figure>

<p>Notice that the corners of the gamut no longer lie along the spectral locus. This makes sense, since the subpixels do not emit pure monochromatic light. This gamut represents the full range of chromaticities that this monitor can faithfully reproduce.</p>

<p>While the gamuts of monitors will vary, modern monitors should aim to enclose one gamut in particular: sRGB.</p>

<h1 id="srgb">sRGB</h1>

<p>sRGB (“standard Red Green Blue”) is a color space created by HP and Microsoft in 1996 to help ensure that color data was being transferred faithfully between mediums.</p>

<p>The standard specifies the chromaticities of the red, green, and blue primaries.</p>

<table class='datatable'>
<thead>
<tr><th>Chromaticity</th> <th>Red</th> <th>Green</th> <th>Blue</th></tr>
</thead>
<tbody>
<tr><td>x</td> <td>0.6400</td> <td>0.3000</td> <td>0.1500</td></tr>
<tr><td>y</td> <td>0.3300</td> <td>0.6000</td> <td>0.0600</td></tr>
<tr><td>Y</td> <td>0.2126</td> <td>0.7152</td> <td>0.0722</td></tr>
</tbody>
</table>

<p>If we plot these, we wind up with a gamut similar to, but slightly smaller than, that of the MacBook LCD screen.</p>

<figure>
<img src="/images/color/gamut2.png">
</figure>

<p>There are parts of the official sRGB gamut that aren’t within the MacBook Pro LCD gamut, meaning that the LCD can’t faithfully reproduce them. To compensate for that, my MacBook seems to use a modified sRGB gamut.</p>

<figure>
<img src="/images/color/gamut4.png">
</figure>

<p>sRGB is the default color space used almost everywhere, and is the standard color space used by browsers (<a href="https://www.w3.org/TR/css-color-4/#color-type">specified in the CSS standard</a>). All of the diagrams in this blog post are in sRGB color space. That means that all colors outside of the sRGB gamut aren&rsquo;t accurately reproduced in the diagrams in this post!</p>

<p>Which brings us, finally, to how colors are specified on the web.</p>

<h1 id="srgb-hexcodes">sRGB hexcodes</h1>

<p><code>#9B51E0</code> specifies a color in sRGB space. To convert it to its associated (R, G, B) coordinate, we divide each of the three components by <code>0xFF</code> aka 255. In this case:</p>
<pre><code>0x9B/0xFF = 0.61
0x51/0xFF = 0.32
0xE0/0xFF = 0.88
</code></pre>
<p>So the coordinate associated with <code>#9B51E0</code> is $$(0.61, 0.32, 0.88)$$.</p>
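
<p>Here’s a minimal JavaScript sketch of that conversion (<code>hexToSrgb</code> is a hypothetical helper, not a standard API):</p>

<pre><code>// Parse a hexcode like "#9B51E0" into normalized sRGB components.
function hexToSrgb(hex) {
  return [
    parseInt(hex.slice(1, 3), 16) / 255, // 0x9B -> 0.61
    parseInt(hex.slice(3, 5), 16) / 255, // 0x51 -> 0.32
    parseInt(hex.slice(5, 7), 16) / 255  // 0xE0 -> 0.88
  ];
}
</code></pre>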

<p>Before we send these values to the display hardware to set subpixel intensities, there’s <em>one</em> more step: gamma correction.</p>

<h1 id="gamma-correction">Gamma correction</h1>

<p>With each coordinate in RGB space being given 256 possible values, we want each adjacent pair of values to be equally different perceptually. For example, we want <code>#030000</code> to be as different from <code>#040000</code> as <code>#F40000</code> is from <code>#F50000</code>.</p>

<p>Human vision is much more sensitive to small changes in low energy lights than to small changes in high energy lights, so we want to allocate more of the 256 values to representing low energy values.</p>

<p>To see how, let’s imagine we wanted to encode greyscale values, and only had 3 bits to do it, giving us 8 possible values.</p>

<p>If we plot grey values as a linear function of energy, it would look something like this:</p>

<figure>
<img src="/images/color/linearEnergy.png">
</figure>

<p>We’ll call our 3 bit encoded value $$Y$$. If our encoding scheme spaces out each value we encode evenly ($$Y = \frac{\left\lfloor8E\right\rfloor}{8}$$), then it would look like this:</p>

<figure>
<img src="/images/color/gamma1.png">
</figure>

<p>You can see that the perceptual difference between $$Y=0$$ and $$Y=1$$ is significantly greater than the difference between $$Y=6$$ and $$Y=7$$.</p>

<p>Now let’s see what happens if we use a power function instead. Let’s try $$Y = \left(\frac{\left\lfloor8E\right\rfloor}{8}\right)^2$$.</p>

<figure>
<img src="/images/color/gamma2.png">
</figure>

<p>We’re getting much closer to perceptual uniformity here, where each adjacent pair of values is as different as any other adjacent pair.</p>
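
<p>In code, the two encoding schemes above differ only in that final exponent (a sketch; the 8s come from our 3 bits):</p>

<pre><code>// Quantize an energy E in [0, 1] to one of 8 evenly spaced values.
function encodeLinear(E) { return Math.floor(8 * E) / 8; }

// Square the quantized value so more of the 8 levels land on low energies.
function encodeGamma(E) { return Math.pow(Math.floor(8 * E) / 8, 2); }
</code></pre>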

<p>This process of taking energy values and mapping them to discrete values is called <strong>gamma encoding</strong>. The inverse operation (converting discrete values back to energy values) is called <strong>gamma decoding</strong>.</p>

<p>In general form, gamma correction has the equation $$V_{out} = A V_{in}^\gamma$$. The exponent is the Greek letter “gamma”, hence the name.</p>

<p>The encoding &amp; decoding rules for sRGB use a similar idea, but are slightly more complex.</p>

<div>$$C_\mathrm{linear}= \begin{cases}\frac{C_\mathrm{sRGB}}{12.92}, & C_\mathrm{sRGB}\le0.04045\\ \left(\frac{C_\mathrm{sRGB}+0.055}{1.055}\right)^{2.4}, & C_\mathrm{sRGB}>0.04045 \end{cases}$$</div>
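
<p>Here’s that decoding rule as a per-channel JavaScript sketch (<code>srgbToLinear</code> is my name for it):</p>

<pre><code>// Convert a gamma-encoded sRGB channel in [0, 1] to a linear value.
function srgbToLinear(c) {
  if (c > 0.04045) {
    return Math.pow((c + 0.055) / 1.055, 2.4);
  }
  return c / 12.92;
}
</code></pre>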

<p>If we plot sRGB values against linear values, it would look like this:</p>

<figure>
<img src="/images/color/gamma3.png">
</figure>

<p>Okay! That was the last piece we needed to understand to see how we get from hex codes to eyeballs! Let’s do the walkthrough 😀</p>

<h1 id="from-hexcodes-to-eyeballs">From Hexcodes to Eyeballs</h1>

<p>First, we take <code>#9B51E0</code>, split it up into its R, G, B components, and normalize those components to be in the range $$[0, 1]$$.</p>

<figure>
<img src="/images/color/summary1.png">
</figure>

<p>This gives us a coordinate of $$(0.61, 0.32, 0.88)$$ in sRGB space. Next, we take our sRGB components and convert them to linear values.</p>

<figure>
<img src="/images/color/summary2.png?v2">
</figure>

<p>This gives us a coordinate $$(0.33, 0.08, 0.75)$$ in linear RGB space. These values are used to set the intensity of the subpixels on the screen.</p>
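
<p>As a sanity check, chaining the two hypothetical sketches from earlier reproduces both coordinates:</p>

<pre><code>var srgb = hexToSrgb("#9B51E0");     // [0.61, 0.32, 0.88]
var linear = srgb.map(srgbToLinear); // [0.33, 0.08, 0.75] (rounded)
</code></pre>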

<figure>
<img src="/images/color/summary3.png?v2">
</figure>

<p>The spectral distributions of the subpixels combine to a single spectral distribution for the whole pixel.</p>

<figure>
<img src="/images/color/summary4.png">
</figure>

<p>The electromagnetic radiation travels from the pixel through your cornea and hits your retina, exciting your 3 kinds of cones.</p>

<figure>
<img src="/images/color/summary5.png">
</figure>

<p>Putting it all together for a different color, we’re left with the image that opens this post!</p>

<figure>
<img src="/images/color/Hero.png">
</figure>

<h1 id="a-brief-note-about-brightness-setting">A brief note about brightness setting</h1>

<figure>
<img src="/images/color/brightness.png">
</figure>

<p>Before sRGB values are converted into subpixel brightness, they’ll be attenuated by the device’s brightness setting. So <code>0xFF0000</code> on a display at 50% brightness might match <code>0x7F0000</code> on the same display at 100% brightness.</p>

<p>In an ideal screen, this would mean that regardless of the brightness setting, black pixels $$(0, 0, 0)$$ would emit no light. Most phone &amp; laptop screens are LCD screens, however, where each subpixel is a filter acting upon white light. This video is a great teardown of how LCDs work:</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/jiejNAUwcQ8" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

<p>The filter is imperfect, so as brightness is increased, black pixels will emit light as the backlight bleeds through. OLED screens (like on the iPhone X and Pixel 2) don’t use a backlight, allowing them to have a consistent black independent of screen brightness.</p>

<h1 id="stuff-i-left-out">Stuff I left out</h1>

<p>This post intentionally glosses over many facets of color reproduction and recognition. For instance, we didn’t talk about what your brain does with the cone excitation information in the <a href="https://psych.ucalgary.ca/PACE/VA-Lab/colourperceptionweb/theories.htm">opponent-process theory</a> or the effects of <a href="https://en.wikipedia.org/wiki/Color_constancy">color constancy</a>. We didn’t talk about <a href="https://en.wikipedia.org/wiki/Additive_color">additive color</a> vs. <a href="https://en.wikipedia.org/wiki/Subtractive_color">subtractive color</a>. We didn’t talk about <a href="http://www.colour-blindness.com/general/how-it-works-science/">color blindness</a>. We didn’t talk about the difference between <a href="https://en.wikipedia.org/wiki/Photometry_(optics)#Photometric_quantities">luminous flux, luminous intensity, luminance, illuminance, and luminous emittance</a>. We didn’t talk about <a href="https://en.wikipedia.org/wiki/ICC_profile">ICC device color profiles</a> or what programs like <a href="https://justgetflux.com/">f.lux</a> do to color perception.</p>

<p>I left them out because this post is already way too long! As a <a href="https://twitter.com/amtinits">friend of mine</a> said: even if you&rsquo;re a person who understands that most things are deeper than they look, color is way deeper than you would reasonably expect.</p>

<h1 id="references">References</h1>

<p>I spent an unusually large portion of the time writing this post just reading because I kept discovering that I was missing something I needed to explain as completely as I’d like.</p>

<p>Here’s a short list of the more helpful ones:</p>

<ul>
<li><a href="https://medium.com/hipster-color-science/a-beginners-guide-to-colorimetry-401f1830b65a">A Beginner’s Guide to Colorimetry</a></li>
<li><a href="https://en.wikipedia.org/wiki/CIE_1931_color_space">CIE 1931 Color Space</a></li>
<li><a href="https://en.wikipedia.org/wiki/HSL_and_HSV">HSL and HSV</a></li>
<li><a href="https://en.wikipedia.org/wiki/Gamma_correction">Gamma correction</a></li>
<li><a href="https://www.amazon.com/Real-Time-Rendering-Third-Tomas-Akenine-Moller/dp/1568814240?ie=UTF8&amp;tag=stackoverfl08-20">Real-Time Rendering, Third Edition</a> p210-217</li>
</ul>

<p>I also needed to draw upon many data tables to produce the charts in this post:</p>

<ul>
<li><a href="http://www.cvrl.org/">University College London Color &amp; Vision Research Laboratory database</a> (XYZ color matching functions, cone fundamentals)</li>
<li><a href="https://fluxometer.com/rainbow/#!id=iPad%20Pro/6500K-iPad%20Pro">fluxometer.com</a> (RGB LCD screen subpixel spectra)</li>
<li><a href="https://archive.org/details/gov.law.cie.15.2004">CIE 15: Technical Report: Colorimetry, 3rd edition</a> (RGB color matching funtions)</li>
</ul>

<p>Special thanks to <a href="http://www.coopernetics.com/blog/">Chris Cooper</a> and <a href="http://rykap.com/">Ryan Kaplan</a> for providing feedback on the draft of this post.</p>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.9.0/katex.min.js" integrity="sha384-jmxIlussZWB7qCuB+PgKG1uLjjxbVVIayPJwi6cG6Zb4YKq0JIw+OMnkkEC7kYCq" crossorigin="anonymous"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.9.0/contrib/auto-render.min.js" integrity="sha384-IiI65aU9ZYub2MY9zhtKd1H2ps7xxf+eb2YFG9lX6uRqpXCvBTOidPRCXCrQ++Uc" crossorigin="anonymous"></script>

<script>
renderMathInElement(document.body, {
  delimiters: [
    {left: "$$", right: "$$", display: false},
    {left: "\\[", right: "\\]", display: false},
  ]
});
</script>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Depression & Recovery]]></title>
    <link href="http://jamie-wong.com/post/depression-and-recovery/"/>
    <updated>2017-05-31T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/depression-and-recovery/</id>
    <content type="html"><![CDATA[ 

<style>
span.date {
    font-size: 75%;
    color: #ccc;
    display: block;
    margin-top: -10px;
}
</style>

<p><img src="/images/depression/Header.svg" /></p>

<p>This is going to be a long post, so here’s the TL;DR that I want you to take away. I started experiencing my first bout of major depressive disorder in 2016, and am recovering now with the aid of meditation and 75mg daily of an anti-depressant called Zoloft (aka Sertraline). Talk therapy, medication, and meditation have all been important parts of my ongoing recovery. I’d love to talk to you if you think you might be in the same boat and want support pursuing help. If you don’t have the same symptoms I described here, please, please don’t take that to suggest you don’t need to seek some variety of help.</p>

<p>I recognize that those options are not available to everyone, and recognize that I’m fortunate to have the socioeconomic freedom to pursue all these options.</p>

<p>If you want something easier to digest, I’d suggest <a href="https://medium.com/@johngreen/my-nerdcon-stories-talk-about-mental-illness-and-creativity-bfac9c29387e">John Green’s post about his Mental Illness</a>, and <a href="http://hyperboleandahalf.blogspot.com/2013/05/depression-part-two.html">Hyperbole and a Half’s “Depression Part 2”</a>. I hope this post can help a few people feel like they’re not alone in their struggle, just like those two posts did for me.</p>

<p>And because it will become <em>blindingly</em> apparent through this post, yes, indeed, I have gone full Valley: I’m an engineer in San Francisco working at a startup, attending meditation retreats, seeing mental health professionals, and I went to Burning Man last year.</p>

<p>With that in mind, let’s go:</p>

<hr />

<p>For most of my life, my emotional life has been pretty healthy. If I think about it in a ridiculously oversimplified good vs. bad feels over time framework, my emotional world might look like this:</p>

<p><img src="/images/depression/One.svg" /></p>

<p>At the beginning of 2016, my subjective world started to change.</p>

<h2 id="the-onset">The Onset</h2>

<p><span class="date">January - March 2016</span></p>

<p>The year started out on good footing, with motivation for work, exercise, living in a house of good friends, and finally having the chance to live with my girlfriend, Bonnie, again instead of mostly co-existing through walls of Facebook messenger chats and video calls across three timezones.</p>

<p>The first change seemed innocuous enough. Standing outside the doorway of one of my housemate’s rooms, I commented that I’d noticed that I didn’t have the same visceral reaction to spending time with friends as I used to. He commented that it sounded pretty weird, and that maybe I should seek out therapy. I brushed off the notion, seeing it as a pretty minor problem, and went about my life. I started to notice the same thing would happen going on hikes. Any visceral sense of serenity was eerily missing.</p>

<p>My subjective world started to look like this:</p>

<p><img src="/images/depression/Two.svg" /></p>

<p>The second change, though more gradual, was more obviously problematic. Over the course of months, I noticed my general mood decline. I’d have days waking up feeling kind of crummy with no obvious cause. As this evolved, my desire to interact with coworkers over lunch all but vanished. There were many days where I couldn’t bring myself to do it at all, so I hid in small single-chair rooms reserved for video calls.</p>

<p>The whole graph shifted down.</p>

<p><img src="/images/depression/Three.svg" /></p>

<p>The last change was downright disturbing. I’d look down at my hands and experience them as just these objects I control. I’d look in the mirror and see a person that I <em>rationally</em> knew was me, but had no visceral sense of recognition. Photos of me in the past felt like looking at a bizarre alternative timeline that I could remember but not feel. From searching around, the best terms I’ve found to describe this are “dissociation” or “depersonalization”.</p>

<p>By the end of February, I knew something had to change.</p>

<p>So I did the first thing that enters any Valley yuppy’s head: quit my job to go travel the world to try to find myself. March 25th, 2016 was my last day at Khan Academy.</p>

<figure>
<img src="/images/depression/ka.jpg">
<figcaption>
A few of the great folks I worked closely with at Khan Academy
</figcaption>
</figure>

<p>By March 31st, I had landed in Hong Kong on a one-way ticket.</p>

<h2 id="travel">Travel</h2>

<p><span class="date">April - June 2016</span></p>

<p>This is not a post about how travel helped me face my inner demons or expanded my comfort zone or created lifelong friendships. Travel has done wonders for me in the past, but it’s not exactly a silver bullet for mental health.</p>

<p>As my general mental health deteriorated further, a perverse reversal took place.</p>

<p><img src="/images/depression/Four.svg" /></p>

<p>See that dip to the right of the green line? Yeah, that’s not normal, and it merits some explanation.</p>

<p>When I went to Hong Kong in 2014 with some friends, I remember a distinct feeling of serenity standing in the Nan Lian Garden.</p>

<p><img src="/images/depression/hk.jpg"></p>

<p>The sky, the trees, and the pagoda all brought me comfort, peace, and a gratitude for the world.</p>

<p>When I returned in 2016, here was my initial feeling: nothing. I might as well have been standing in the parking lot of a Walmart.</p>

<p>So that reaction would explain a flatline, but why is there a dip? Let me show you an excerpt of the internal chatter of a chronic ruminator.</p>

<blockquote>
<p>“Hey, I remember being here, last time it was super great.”</p>

<p>“Hmm, this time I don’t think I really feel the same connection.”</p>

<p>“Maybe I just need to relax more, let me give that a try.”</p>

<p>“Nope, still nothing. Well that’s disappointing.”</p>

<p>“Well if this doesn’t lift my mood, maybe nothing else will.”</p>

<p>“Why am I thinking this way, it’s self destructive — I should stop thinking this way.”</p>

<p>“I’ll bet this is going to happen every time I go anywhere nice that I’ve been before.”</p>

<p>“What’s wrong with me? Am I going to be like this forever?”</p>

<p>“What’s the point of living if I can only experience good things in a superficial way?”</p>
</blockquote>

<p>Positive experiences had now become an exercise in “let’s painstakingly analyze how deeply broken my emotional mind is”. Not the most constructive internal dialogue.</p>

<p>Every time anything good would happen, I would cling on as hard as I could to any positive aspect of it, then be intensely distraught when it would inevitably fade almost immediately after I had left the experience.</p>

<p>Perhaps the <em>most</em> soul-crushing parts of the trip were the periods where I actually did feel better for days at a time. Each time it happened, I had this gloriously optimistic thought: “Maybe it’s over. Maybe I’m going to feel normal again from now on!” And each time I could feel my sensitivity to the world receding, it felt like a force pulling away everything good in my present and future. I could never figure out a way to return to that feeling of normalcy. Once, I felt that lucidity kick in for about 10 minutes during a car ride, then mysteriously vanish right afterwards.</p>

<p><img src="/images/depression/Five.svg" /></p>

<p>Travel had, in previous long trips, been about novel experiences and collecting stories. But now novel experiences were just one more opportunity to scrutinize my reduced sensitivity. And collecting stories isn’t much fun when they don’t <em>feel</em> like your own stories.</p>

<p>So, after hopping on and off planes for a few months, I returned to California to look for work and seek professional help.</p>

<h2 id="therapy-tears">Therapy &amp; Tears</h2>

<p><span class="date">July - September 2016</span></p>

<p>I’m fortunate to have had enough friends, family, and acquaintances share their experiences going to see mental health professionals that I had a significantly reduced mental block for pursuing help. I hope I can take one more brick off that wall for those reading this.</p>

<p>On advice from my friend David, I emailed four different therapists asking for a free 15 minute phone consultation (these consultations are available from most therapists). They replied to set up times, and I found that for the half hour leading up to each call, I was basically non-functional and reverted to lying in bed watching Key &amp; Peele videos on YouTube to curb my nervousness.</p>

<p>During the first of the calls, I described my problems, and the person very politely said that yes, they thought it was something they could probably help me with. That seemed like a positive signal to me, but I figured I’d wait to hear from the other therapists.</p>

<p>On my next call, I spoke to a therapist near Redwood City named <a href="http://www.sandylillie.com/">Sandy Lillie</a>. I described my problems, and she very matter-of-factly told me it sounded like I had been through something traumatic and had emotionally shut down.</p>

<p>Unless you’ve been there yourself, it’s hard to appreciate how validating it is to hear from an authoritative source that yes, you do have a real problem. It’s not just “part of getting older” or a “slump”, it’s a real problem that can and should be addressed.</p>

<p>That validation was so powerful that I spent the next half hour as a teary mass under the blankets in my room. I cancelled the rest of the phone consultations and booked my first 75 minute appointment.</p>

<hr />

<p>During the first appointment, I told Sandy about my relationship with Bonnie, my family, and a bit about how I view myself and work.</p>

<p>I cried. A lot. Like half a waste basket full of tissues a lot. The most surprising part to me was not <em>that</em> I cried, but <em>when</em> I cried. Areas of my life I had never considered as harbors of pain were being poked and prodded and all their juicy bits were spilling out of my eyeballs into the ample supply of Kleenex provided.</p>

<p>I’m grateful at this point that I’ve been keeping journals for years and occasionally taking the time to put a stream-of-consciousness dump in there, because I find it remarkably difficult to remember how I felt at a given time. Thankfully, I can have July 2016-me tell the story of driving home after my first appointment:</p>

<blockquote>
<p>When I got in the car and started driving back, I found myself in this bizarre state between laughing and crying half of the way back. Then I started worrying that we had a flat and were running low on gas, so my brain started focusing on practicalities again. For a little while there, it almost felt like the symptoms of dissociation were fading, and that I could experience the beauty of the landscape I was driving through.</p>

<p>I’m going back again on Monday. That feeling of laughing and crying is incredible.</p>
</blockquote>

<hr />

<p>After every few sessions of therapy, I’d have a huge emotional release. It felt like there were these buried pressurized tanks of feelings that were being dug up and punctured. Whenever it happened, I felt genuinely better in a sustained way for a few days.</p>

<p><img src="/images/depression/Six.svg" /></p>

<p>When I noticed this trend, I started hunting for things in my past that could trigger emotional release outside of therapy sessions. I read journal entries from up to 4 years past to try to extract some nourishing tears. Perversely, whenever I noticed myself feeling especially dead, I’d go hunting for ways to make myself cry. It really was true that feeling something other than “dead” or self-pitying was significant.</p>

<p>Every time I did this though, the relief period got shorter and less intense.</p>

<p><img src="/images/depression/Seven.svg" /></p>

<p>Until I ran out of past sadness to extract.</p>

<p>The only thing I would cry about was how dead I felt, which, unlike digging up my past, did not feel cathartic at all. It felt pathetic. Useless. Shameful.</p>

<h2 id="feeling-supported">Feeling Supported</h2>

<p><span class="date">August 2016</span></p>

<p>At that point, I started feeling deeply disconnected again. I realized that my visceral connection with Bonnie was all but gone, and that she could feel it too. I told her that when I looked into her eyes, I wanted desperately to feel connected.</p>

<p>People talk about eyes acting as the windows into people’s souls. Through them, you see your past experiences with them, your future plans, your shared hopes, and the struggles you’ve overcome together.</p>

<p>But all I saw was glass.</p>

<p>It was the difference between reading a love letter and staring at illegible pen scratches on a piece of dead tree. The difference between your home and a house staged for sale. A baseball caught from a home run vs. one bought at the store. In each case, the two items in the pair are objectively the same, but what they each <em>represent</em> is fundamentally different.</p>

<p>We talked about our future, and she shared her vision for it. I admitted I couldn’t see it. She asked if I wanted to keep trying. I said yes. She asked why. I struggled.</p>

<p>Here’s August 2016-me’s account of what followed:</p>

<blockquote>
<p>I squirmed uncomfortably, feeling dread that I didn’t have an answer, until it rushed back with “I want to go back to Kitchener”. And I sobbed. Hard. [Kitchener was the first place we’d lived together — we shared an apartment for 11 months starting in 2014.]</p>

<p>After I said that, and kept crying, I felt like I could connect to my past. I felt like me. I felt like I found myself underneath this judgmental voice in my head. I talked about good times in Kitchener. Cooking together, going to the grocery store together, rock climbing together. I told her I just wanted the simple life back, and wanted to be a kid.</p>

<p>&hellip;</p>

<p>I could feel it go in and out. I could feel the true part of me come back to me, then recede behind whatever’s blocking it. When it was there, memories of the past flooded back to me, and they felt like mine. I was able to recognize myself in the past. I told Bonnie that everything felt smaller when I felt like myself. Like we were kids.</p>

<p>I told her I was afraid we couldn’t go back. But that I wanted to try. And I was afraid that the morning after, I wouldn’t be able to get back to this. Bonnie told me that she thinks she does get a visceral reaction from looking into my eyes, but hasn’t seen anything there in a while.</p>

<p>After using up nearly all the toilet paper for our tears, and after I felt like the real me had receded back behind the barrier entirely for the night, we crawled into bed and went back to sleep.</p>
</blockquote>

<p>The next day I felt dead again, but from then on I knew that Bonnie understood there really was something wrong with me. I had support.</p>

<p>Someone close to me finally knew I was honestly trying.</p>

<h2 id="therapy-pure-rage">Therapy &amp; Pure Rage</h2>

<p><span class="date">September 2016</span></p>

<p>Returning from the emotional overwhelm that is Burning Man, I had a new insight into my emotional life: I am absolutely awful at productively expressing anger.</p>

<p>Sometime during high school, it became apparent to me that my default reaction mode while angry was to lash out, and that was less than ideal. As the introspective voice in my head expanded its domain over my life, at some point anger came under its purview. The voice’s approach was to try to explain to me why anger was not the productive response to situations, and why my anger was irrational and unfair most of the time.</p>

<p>To some extent, this is a useful skill, but I didn’t realize just how much I had bottled up until my fifth therapy session.</p>

<p>During that session, Sandy asked if I could role play talking to a specific person that had deeply hurt me. I agreed, and as I started to express myself coldly, Sandy pushed me to be more expressive.</p>

<p>Here’s September 2016-me recounting what followed:</p>

<blockquote>
<p>I could feel my whole body trembling with rage, and could feel myself fighting internally to express it. Sandy tried to encourage me to put some volume behind “Fuck You”, but I found it was too hard for me. Instead, I said it quietly between tears. Eventually, I managed to hit a threshold, and started screaming. No words, just raw, vocally expressed anger. I think I did that for about half an hour, almost until the end of the session. When I opened my eyes, the world felt a little bit more vivid again, like it did after the first time I had a long crying session during therapy.</p>

<p>It’s hard to describe the sound I made during that session. I had <em>definitely</em> never heard that sound come out of a human, even in movies. It didn’t feel so much like something I was doing as it did like a compressed demonic gas rapidly exiting my throat, dragging claws along my vocal cords on the way out. It wasn’t the last time I heard that sound.</p>
</blockquote>

<p>The experience of the following 2 weeks or so was predictable:</p>

<p><img src="/images/depression/Eight.svg" /></p>

<p>By November, pretty much nothing seemed to be helping, and I’d begun to experience significant social anxiety even when I was in groups exclusively composed of friends I’d known for years. My general mood decline had also steadily continued, with probably a day or two a week starting out in bed crying for no discernible reason.</p>

<p><img src="/images/depression/Nine.svg" /></p>

<p>When I say “for no discernible reason”, I don’t mean that I had some minor bad experience that I was blowing out of proportion, I mean that between sobs I was contemplating what was externally wrong and couldn’t think of a single thing.</p>

<p>Have you ever tried to comfort someone who feels like crap for <em>no</em> reason? It’s not such an easy task.</p>

<h2 id="meditation">Meditation</h2>

<p><span class="date">November 2016 - Present</span></p>

<p>For American Thanksgiving, I had a friend, also coincidentally named Sandy, over for dinner. Sandy was the second of my friends to become a dedicated meditator, and had spent a lot of time digesting hours and hours and hours of meditation audiobooks. He’d seen such dramatic improvements in his life from meditation that he was getting interested in trying to share some of those benefits with friends (or honestly, with whoever would listen).</p>

<p>When I told him how I was feeling and he suggested meditation, I was initially a little hesitant. I’d meditated fairly regularly using the Headspace app at the end of 2015, but had stopped once depression had set in for two reasons. The first was that I felt even <em>more</em> disconnected from the world after each meditation session than before, and the second was that when my mind felt scattered, the meditation sessions just seemed to repeatedly emphasize to me how little control over my brain I did have.</p>

<p>Sandy helped me a lot with the second objection by reframing meditation for me. He told me that the immediate goal during meditation is not long-held focus; it’s to notice when your attention drifts away. This sounds like a trivial difference, but it was key in how I responded to my mind drifting.</p>

<p>Each time my mind drifted before, I’d feel like I was failing and would be upset by how scattered my brain was. After the reframe, each time I’d notice it drift, I’d have this feeling of “Aha! I noticed! I succeeded!”.</p>

<p>With this framing, the world became just <em>slightly</em> more vivid after each session. It was still frustrating a significant amount of the time both during and after the session, but it paid off in the end.  My mood began to stabilize back to where it was a few months earlier.</p>

<p><img src="/images/depression/Ten.svg" /></p>

<p>After meditating, I still wouldn’t generally feel like seeking out social interaction, but at least I didn’t <em>dread</em> it.</p>

<p>In December 2016, I had what I’d consider my first major mood reversal due to meditation. The context was that I was feeling terrible and having a hard time finding a quiet place to meditate because of roommates. Eventually, I locked myself in the bathroom with the fan on to drown out any sound and sat down to meditate on the toilet.</p>

<p>Here’s my journaled account of what happened that night during meditation:</p>

<blockquote>
<p>Get very frustrated at night and break down and cry, but manage to break the cycle! Another part of my mind broke the downward spiral by repeating “You didn’t give up!” over and over and over again until the tears stopped and I felt happy again. I looked at my face covered in tears and snot in the mirror, and was okay with what I saw :)</p>
</blockquote>

<p><img src="/images/depression/Eleven.svg" /></p>

<p>Despite the occasional boost from meditation, I was still feeling generally horrible on the whole. When I went back home for Christmas, I discovered that my social anxiety extended not only to friends, but also to family. I found myself frequently wanting to exit conversations with my parents and barricade myself in my room. I spent a lot of time using my terrible coping mechanism of burying myself in information, reading page after page of Reddit.</p>

<h2 id="anti-depressants">Anti-Depressants</h2>

<p><span class="date">January 2017 - Present</span></p>

<p>By January 2017, I turned to the option I told myself I would only turn towards if nothing else had worked: medication.</p>

<p>After the last visit to my therapist, she told me that from all the things she had heard from me, she could understand why I felt pain, but not of this magnitude. The struggles I had were real, but not uncommon. She suggested that it might be wise to try medication, and told me I’d need to go see an MD for that since she couldn’t prescribe medication (she has a PhD in Clinical Psychology).</p>

<p>From contemplating it for a year and going through therapy, I was able to precisely articulate what my emotional world looked like, and in two sessions with a psychiatrist, I was diagnosed with Major Depressive Disorder. He offered many potential treatment paths, among them medication.</p>

<p>On January 20, 2017, he prescribed me 50mg daily of Zoloft, with a 25mg daily ramp up period for a week. He told me that I shouldn’t expect to see any benefit in the first two weeks, and that was just a trial period meant to evaluate whether the side effects were tolerable.</p>

<p>Despite seeing medication as a reasonable option for other people, the first day of starting medication still felt incredibly hard. In part, it was because I was afraid of side effects like suicidal ideation. Thankfully, suicidal ideation and urges have not been part of my particular flavor of depression.</p>

<p>The other part of the internal struggle was that it felt, in some ways, like admitting that I was entirely broken, and had failed to heal myself because I was weak. “Weak and useless” were a recurring pair of words in my journals and internal dialogue for most of 2016.</p>

<p>I don’t think I would’ve had the courage to start if it were not for support from family and for reading <a href="https://medium.com/@johngreen/my-nerdcon-stories-talk-about-mental-illness-and-creativity-bfac9c29387e">John Green’s post about his mental illness</a>.</p>

<p>I know almost the exact point where the medication started working. It was late at night on January 26, 2017.</p>

<blockquote>
<p>I was feeling pretty shitty through dance and afterwards, but decided I should meditate at home. After I did, I felt better. Near the end of the meditation, I wanted to give up, but didn’t let myself. Just as I previously had a voice tell me “You didn’t give up”, this time I had a voice tell me “You will get stronger” and repeat that over and over again.</p>
</blockquote>

<p>The day after that, I felt better too. And the day after that.</p>

<p>I’d been burned by previous desperate optimism, so I was hesitant to believe this was truly the beginning to a sustained period of emotional stability.</p>

<p>But now, four months later, I can fairly confidently proclaim: I’m stable.</p>

<p><img src="/images/depression/Twelve.svg" /></p>

<p>Interestingly, one of the first things to vanish as the result of the medication taking effect was any aversion to taking the medicine. Seeing it as a personal failing that I couldn’t work through myself started seeming as ridiculous as having an aversion to eye glasses because it demonstrates a failure on your part to <em>will</em> your eyeballs to reshape themselves.</p>

<p>In 2013, there were more than <a href="https://en.wikipedia.org/wiki/Sertraline">41 million prescriptions of Zoloft alone</a>, and that’s just in the US! That’s more than 1 for every 10 people in the country! What a bizarre thing to feel a social taboo over.</p>

<p>That said, there is something unsettling about the fact that after a year of struggle, tears, disconnection, existential crisis, and generally shitty feelings, the catalyst for recovery is half the size of a penny and costs me about $25/mo after insurance.</p>

<p><img src="/images/depression/zoloft.jpg" /></p>

<p>Brain chemistry is a hell of a thing.</p>

<h2 id="restarting-life">Restarting Life</h2>

<p><span class="date">February 2017</span></p>

<p>Once I became more confident I was emerging from depression, I realized that I had to actually make some <em>choices</em> in life again. Up until that point, my decisions about what to do were largely guided by “what is the least likely to cause catastrophic mental meltdown?”.</p>

<p>Trying to change any of my negative habits was totally out of the question, since failing to do that would’ve been a crushing sign of additional failure in a life that felt full of weakness and uselessness.</p>

<p>But once I started recovering, I had a newfound certainty that for most plausible things that could happen in the external world, my internal world would be okay. So I started to make changes.</p>

<p>The first thing to go was the emotional crutch I’ve held for <em>years</em> of drowning myself in information to plaster over things I don’t want to think about. I quit my Facebook news feed, Twitter, Hacker News, and Reddit cold turkey. More generally, the rule is to stay off sites that are infinite time sinks (which usually have “engagement” as their core metric). <a href="https://journal.thriveglobal.com/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3">“How Technology is Hijacking Your Mind”</a> is a great overview of how horribly misaligned many technology products are with the best interests of their users.</p>

<p>With the exception of a few days checking Twitter while at a conference, I’ve stayed completely off since February 5th. The time was replaced by watching educational content on YouTube, and reading books. When I get stuck at work, I just get up and walk around, maybe grab a healthy snack, maybe chat with coworkers.</p>

<p>It was interesting to see that in the first three weeks or so, I still had a compulsive habit of unlocking my phone and searching for those applications. I would open Chrome and start typing “reddit.com” or “news.ycombinator.com” before catching myself and closing it. I did this several times a day, which demonstrates oh-so-clearly to me how much of a compulsion it was and how little rational decision making went into the use of those websites.</p>

<p>Even now, four months later, I occasionally find myself sitting in front of my computer mentally cycling through all the sites I used to frequent to figure out which one I haven’t looked at in the last hour. Thankfully, I usually now have the sense to just close my laptop when I notice this happening. I suspect I’m much more aware of when this is happening due to the mindfulness meditation practices I’ve been doing.</p>

<p>The second thing was to organize my goals around a monthly theme. Last month’s theme was “friends” — I enjoyed spending time with friends again! I decided to invite friends over to cook with, and reached out to some friends to video chat, one of whom I hadn’t talked to in years. It was great.</p>

<p><img src="/images/depression/goals.png" /></p>

<h2 id="meditation-may">Meditation May</h2>

<p><span class="date">May 2017</span></p>

<p>May’s theme, now just finishing up, was meditation.</p>

<p>You might be wondering why I was still pursuing meditation when it had seemingly fallen far short of the improvement seen via pharmaceuticals.</p>

<p>Well, for starters, I consider the pills to be a catalyst for stability, but I suspect meditation had a lot to do with the rate of improvement. I started seeing benefits in the first two weeks, when I was told not to expect much until weeks 4-6.</p>

<p>Secondly, I don’t want to take pills forever. The recommended period is to stay with the dosage for a year after the depressive symptoms subside, and I’d like to do whatever I can to prevent relapse once I stop medicating. That said, if I had to choose medicating forever vs. dealing with depressive symptoms forever, I’d choose medication hands down. I’d just rather do what I can to deal with neither ☺️. Mindfulness meditation seems to be one of the few things with statistically significant results in preventing relapse. <a href="http://jamanetwork.com/journals/jamapsychiatry/fullarticle/210951">One study</a> found that it was as effective as maintenance-level dosages of anti-depressants for preventing relapse.</p>

<p>Lastly, and perhaps most critically, the pills boosted and stabilized my baseline mood, but things still felt a bit <em>off</em>.</p>

<p><img src="/images/depression/Twelve.svg" /></p>

<p>Before the pills, I would feel shitty for no apparent reason. Afterwards, I would feel pretty-okay for no apparent reason. Pleasant experiences no longer had the perverse negative effect they previously had, but they also didn’t deliver much by way of connection or lasting joy.</p>

<p>But why would I expect meditation to help with this? Isn’t meditation only helpful for de-stressing or halting incessant mental chatter?</p>

<p>To explain, let’s talk a bit about lunchtime.</p>

<h3 id="eating-mindfully">Eating Mindfully</h3>

<p>Let me describe the process of eating as it routinely took place for me in years past.</p>

<p>Once seated to eat, I’d collect a spoonful of food, and, if I’m lucky, stay focused long enough to taste the first bit going into my mouth. For each bite after that, some part of my brain would be scheduling my day or planning for the future or replaying an awkward encounter from the previous day. Another part of my brain would be guiding my eyes to provide enough visual information for my hand to construct a continuous stream of sustenance to be delivered to my mouth, as my teeth and throat worked in tandem to get those nutrients to my stomach. That leaves just about… zero percent of my conscious brain to actually <em>taste</em> the damn food. Unsurprisingly, this meant that most meals felt unremarkable.</p>

<p>Now I try to do the following: collect a spoonful of food, look at it carefully, focus visually on what I’m about to eat. Next, put it in my mouth, and focus on the taste and texture of the food, ideally with my eyes closed. Once I’ve fully tasted it and swallowed, only <em>then</em> do I go in search of my next bit of food to eat from my plate and return to step one.</p>

<p>This might seem silly or trivial, but give it a try on your next meal, even if it’s just your morning toast. You might discover how little you’ve been tasting your food too.</p>

<p>So what does this have to do with meditation? Everything! Meditation can systematically cultivate your capacity to concentrate on the most presently relevant thing. The critical bit needed to enjoy the taste of your food is the ability to repeatedly and gently return your attention to the thing that’s most important in the current moment.</p>

<p>In the book “<a href="https://www.amazon.com/Mindful-Way-Through-Depression-Unhappiness/dp/1593851286">The Mindful Way Through Depression</a>”, the authors describe the experience of one depressed individual mindfully eating a single raisin:</p>

<blockquote>
<p>“I was very aware of what we were doing. I’ve never tasted a raisin like that before. Actually, I’ve never noticed what a raisin looked like. At first it looked dead and crinkly, but then I noticed how the light struck it in different ways, like a jewel. When I first put it in my mouth, it was hard at first to stop myself from instantly chewing it. Then, when I was exploring it with my tongue, I was able to tell which side was which — but there was no taste. Then when eventually I bit down on it — wow, it was absolutely amazing. I’d never tasted anything like it.”</p>
</blockquote>

<p>During his lectures at the meditation retreat I recently returned from, our teacher, <a href="http://www.shinzen.org/">Shinzen Young</a>, recounted the story of how he first knew his meditation was bearing fruit. Every morning at the monastery in Japan, their breakfast would be the same thing: plain rice porridge with no seasoning. One day he woke up and realized he was <em>excited</em> for his morning rice porridge. Shinzen grew up in California, and yet he couldn’t remember being as excited for Belgian waffles with syrup, fresh hash browns, and fresh squeezed orange juice as he was for this bowl of rice and water.</p>

<p>Through meditation, it’s possible to systematically cultivate control over your attention in ways that will allow you to appreciate <em>everything</em> in life in a deeper way.</p>

<h3 id="return-of-sensitivity">Return of Sensitivity</h3>

<p>After meditation, I notice that everything just seems more <em>interesting</em>: the position of birds’ wings as they swoop, the complex cacophony of sounds everywhere I go, the feeling of my bedsheets on my skin, the feeling of my shoe when I first slip it on, even the stucco pattern of the walls in my apartment.</p>

<p><img src="/images/depression/Thirteen.svg" /></p>

<p>As I’ve racked up hours of meditation over the last few months, I’ve noticed that the interesting-ness of everything tends to be present to a small degree even when I haven’t meditated in a while. It’s always subtly there in the background.</p>

<p>An even stranger phenomenon, one that I haven’t quite reached myself but that friends of mine can attest to, is this:</p>

<p><img src="/images/depression/Fourteen.svg" /></p>

<p>I won’t say too much about this, but suffice to say I have a friend that now actively seeks out opportunities to eat uncomfortably hot peppers by themselves, or to sit for a while without a shirt in a snowstorm, and emerge from it quite happy. Personally, I’ve come to see that I can quite enjoy freezing cold showers.</p>

<h2 id="what-didn-t-work-for-me">What Didn’t Work for Me</h2>

<p>I tried a number of things that I didn’t dedicate sections to in this post because they didn’t do much good for me.</p>

<h3 id="exercise">Exercise</h3>

<p>Exercise, while I’m sure good for my muscles, didn’t have a clear positive impact on my mental health. On days where I didn’t perform as well at the gym as I’d hoped I would, it generally felt net-negative. The evidence for exercise combating depression does seem <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC474733/">pretty strong</a> even when compared to Zoloft/Sertraline specifically, but that may be because the study group was starting from a baseline of relative inactivity, whereas I think I was already comfortably getting more than the recommended 30 minutes of exercise 3 times weekly.</p>

<p>The act of scheduling exercise on a regular basis and setting goals was, however, quite helpful. I would plan out my entire day hour by hour to prevent decision fatigue, which could bring my day to a lying-on-the-floor-staring-at-the-ceiling halt.</p>

<p>Here’s one such schedule from my journal:</p>
<pre><code>7:30am wakeup, brush teeth, smoothie
8:00am drive to gym
8:30am gym
9:45am drive home
10:00am shower
10:10am 10min headspace
10:20am setup time machine hard drive downstairs
10:30am watch ML while eating peanut butter toast
12:00pm lunch, eating lentil soup, continue watching ML
12:30pm 10min headspace
12:40pm Work on signed distance fields blogpost
5:00pm Dinner: lazy fried rice
6:00pm Walk to MV caltrain
</code></pre>
<h3 id="supplements">Supplements</h3>

<p>I tried taking <a href="https://examine.com/supplements/fish-oil/">high EPA fish oil</a> and <a href="https://examine.com/supplements/vitamin-d/">vitamin D</a> supplements since they’ve both been somewhat effective for treating depression in studies. I didn’t notice any significant effect, and the fish oil was especially uncomfortable for me since I’m vegetarian, so I stopped. Some of the studies suggested that the fish oil was only effective for people already on anti-depressants, but I had already stopped taking the fish oil by the time I started on Zoloft.</p>

<h3 id="talking-to-friends">Talking to Friends</h3>

<p>I really wish that I found talking to friends about depression helpful, but I can’t honestly say it was. This was really only helpful to the extent that I felt I could help others seek help for their own conditions and to the extent that it encouraged me to seek help for myself.</p>

<p>There’s a special kind of frustration in knowing that you have friends that want to help you and not knowing a single thing they can do to help. I found that talking about my depression generally left me feeling self-pity afterwards, and not at all more understood, despite the earnest efforts of many friends.</p>

<h3 id="learned-optimism">Learned Optimism</h3>

<p>I sought out the book <a href="https://www.amazon.com/Learned-Optimism-Martin-P-Seligman/dp/1442341130">“Learned Optimism”</a> after it was described in <a href="https://www.youtube.com/watch?v=wYPp4nG7qw4">a conference talk by Reginald Braithwaite</a>, but quickly found that the description of depression in that book didn’t align with what I was experiencing. I found the descriptions in <a href="https://www.amazon.com/s/ref=nb_sb_ss_i_1_12?url=search-alias%3Dstripbooks&amp;field-keywords=the+mindful+way+through+depression&amp;sprefix=the+mindful+%2Cstripbooks%2C234&amp;crid=36HF465JM7S00">“The Mindful Way through Depression”</a> much more relatable.</p>

<h2 id="closing-thoughts">Closing Thoughts</h2>

<p>So that’s my journey so far. Come next year as I try to step away from anti-depressants, I might have more to say. I consider myself quite fortunate to have been so responsive to pharmaceutical treatments at relatively low dosages with no noticeable side effects, and recognize that it’s not smooth sailing for everyone that chooses to pursue that path.</p>

<p>If you’re interested in reading more about meditation, I recommend Shinzen Young’s “<a href="http://www.shinzen.org/wp-content/uploads/2016/08/SeeHearFeelIntroduction_ver1.8.pdf">See, Hear, Feel</a>” introduction. It’s <em>long</em>, but pretty information dense. If you can’t be bothered to invest so much time up front, <a href="http://headspace.com/">Headspace</a> has a low initial commitment. If you’re interested in retreats, you can find the schedule of his retreats here: <a href="http://www.shinzen.org/retreat-schedule/">Retreats - Shinzen Young</a>. I appreciate that he doesn’t ask you to dogmatically believe much of anything — he’s largely decoupled meditation practices from their religious doctrines, and hasn’t sacrificed depth of knowledge in the process.</p>

<p>If you want to talk to me about my experiences with medication, therapy, mindfulness, or share your own, you can reach me by email at jamie.lf.wong@gmail.com (or Facebook messenger if we’re friends).</p>

<p>Best of luck out there in the world — it’s a tricky place!</p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Simulating the Physical World]]></title>
    <link href="http://jamie-wong.com/post/simulating-the-physical-world/"/>
    <updated>2017-05-01T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/simulating-the-physical-world/</id>
    <content type="html"><![CDATA[ 

<p><link rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.7.1/katex.min.css"
integrity="sha384-wITovz90syo1dJWVh32uuETPVEtGigN07tkttEqPv+uR2SE/mbQcG7ATL28aI9H0"
crossorigin="anonymous"></p>

<style>
canvas, svg {
    background-size: contain;
    background-repeat: no-repeat;
}
</style>

<figure>
<canvas id="header-sim" width="600" height="600" style="margin: 0 auto;
background-image: url(/images/umbrella.png); user-select: none;"></canvas>
<figcaption>Move your mouse left and right to control the wind.</figcaption>
</figure>

<p><em>This post is also available in Russian: <a href="https://habr.com/post/338992/">Симуляция физического мира</a>.</em></p>

<p>How might you go about simulating rain? Or any physical process over time, for
that matter?</p>

<p>In a simulation, whether it be rain, the airflow over a plane wing, or a slinky
slinking down some stairs, we can frame the entire simulation with two pieces of
information:</p>

<ol>
<li>What is the state of everything at the beginning of the simulation?</li>
<li>How does that state change from one moment of time to the next?</li>
</ol>

<p>By &ldquo;state of everything&rdquo;, I mean any non-constant information needed either to
determine what the scene looks like at a given moment, or about how the scene
will change from one moment to the next. The position of a raindrop, the
direction of the wind, and the velocity of each piece of the slinky are all
examples of state.</p>

<p>If we represent the entire state of our scene with one big vector \( \vec y
\), then we can mathematically reformulate the two pieces of information above
into these two statements:</p>

<ol>
<li>What is a value \(y_0\) that satisfies \( y(t_0) = y_0 \)?</li>
<li>What is a function \(f \) that satisfies \( \frac{dy(t)}{dt} = f(t, y(t))
\)?</li>
</ol>

<p>If you&rsquo;re confused about why you&rsquo;d want to store the whole state in one big
vector, bear with me. This is one of those cases where it might seem we go way
over the top with generality, but I promise there are some interesting tidbits
from looking at a general model for simulation.</p>

<p>If you&rsquo;re confused about <em>how</em> you might store the whole state of a scene in one
big vector, let&rsquo;s look at a simple example. Let&rsquo;s consider a 2D simulation with
2 particles.
Each particle has a position \( \vec x \) and a velocity \( \vec v \).</p>

<script src="/javascripts/rain-simulation.js"></script>

<script>
(function() {
var canvas = document.getElementById("header-sim");
Simulation.main(canvas);

var lastX;
var lastY;

// Convert touch moves to mousemoves to allow wind control on mobile
canvas.addEventListener("touchmove", function(event) {
    var touches = event.changedTouches,
        first = touches[0];

    // This is a silly hack to avoid needing to change any of the rain
    // simulation code because I *really* want this post to be done.
    var bounds = canvas.getBoundingClientRect();
    var simulatedEvent = document.createEvent("MouseEvent");
    simulatedEvent.initMouseEvent("mousemove", true, true, window, 1,
                                  first.screenX * 600 / bounds.width,
                                  first.screenY * 600 / bounds.height,
                                  first.clientX * 600 / bounds.width,
                                  first.clientY * 600 / bounds.height,
                                  false, false, false, false, 0/*left*/, null);

    first.target.dispatchEvent(simulatedEvent);

    // On iOS, if the page is scrolling, requestAnimationFrame stops being
    // called. So if we think the user is trying to drag left or right to
    // control the wind speed, we prevent scrolling.
    if (lastX && lastY) {
        var dx = lastX - first.screenX;
        var dy = lastY - first.screenY;
        if (Math.abs(dx) > Math.abs(dy)) {
            event.preventDefault();
        }
    }
    lastX = first.screenX;
    lastY = first.screenY;
}, true);
})();
</script>

<figure>
<img src="/images/2particles.svg">
</figure>

<p>So to make \( \vec y \), all we have to do is smoosh \( \vec x_1 \),  \(
\vec v_1 \), \( \vec x_2 \), and \( \vec v_2 \) into one 8-element vector,
like this:</p>

<div>$$
\vec y = \begin{bmatrix}
\vec x_1 \\
\vec v_1 \\
\vec x_2 \\
\vec v_2
\end{bmatrix} = \begin{bmatrix}
x_{1x} \\ x_{1y} \\ v_{1x} \\ v_{1y} \\ x_{2x} \\ x_{2y} \\ v_{2x} \\ v_{2y}
\end{bmatrix}
$$</div>
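<p>To make that smooshing concrete in code, here&rsquo;s a minimal sketch of packing
and unpacking (the <code>Particle</code> shape here is hypothetical, just for
illustration):</p>
<pre><code>// A hypothetical minimal particle shape, just for illustration.
interface Particle {
    x: [number, number]  // position
    v: [number, number]  // velocity
}

// Pack two particles into one flat 8-element state vector y.
function pack(p1: Particle, p2: Particle): number[] {
    return [...p1.x, ...p1.v, ...p2.x, ...p2.v]
}

// Unpacking is just the reverse indexing.
function unpack(y: number[]): [Particle, Particle] {
    return [
        {x: [y[0], y[1]], v: [y[2], y[3]]},
        {x: [y[4], y[5]], v: [y[6], y[7]]},
    ]
}
</code></pre>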

<p>And if you&rsquo;re confused why we want to find \( f(t, y(t)) \) instead of any old
definition of \( \frac{dy(t)}{dt} \), the point is to define the derivative only
in terms of our current state, \( y(t) \), constants, and the time itself. If
that&rsquo;s impossible, it&rsquo;s likely that there&rsquo;s some bit of state we forgot to
consider.</p>

<h1 id="the-initial-state">The Initial State</h1>

<p>Specifying \( \vec y_0 \) defines the initial state of the simulation. So if the
initial state of our two-particle simulation looks something like this:</p>

<figure>
<img src="/images/particlesetup.svg">
</figure>

<p>Then our numerical initial conditions might look like this:</p>

<div>$$
\vec x_1(t_0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix},
\vec v_1(t_0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
\vec x_2(t_0) = \begin{bmatrix} 4 \\ 1 \end{bmatrix},
\vec v_2(t_0) = \begin{bmatrix} -1 \\ 0 \end{bmatrix}
$$</div>

<p>Smooshing that together into a single vector, we get our \( \vec y_0 \).</p>

<div>$$
\vec y_0 = \vec y(t_0) = \begin{bmatrix}
x_{1x} \\ x_{1y} \\ v_{1x} \\ v_{1y} \\ x_{2x} \\ x_{2y} \\ v_{2x} \\ v_{2y}
\end{bmatrix} = \begin{bmatrix}
2 \\ 3 \\ 1 \\ 0 \\ 4 \\ 1 \\ -1 \\ 0
\end{bmatrix}
$$</div>

<h1 id="the-derivative-function">The Derivative Function</h1>

<p>\( \vec y_0 \) tells us the initial state, so now all we need is some way of
getting from the initial state to the state a tiny bit into the future, and from
<em>that</em> state a tiny bit further, and so on.</p>

<p>With that in mind, let&rsquo;s solve for \( f \) in the equation \(
\frac{dy(t)}{dt} = f(t, y(t)) \). So let&rsquo;s take the derivative of \( y(t) \).</p>

<div>$$
\frac{dy(t)}{dt} = \frac{d}{dt} \begin{bmatrix}
x_{1x} \\
x_{1y} \\
v_{1x} \\
v_{1y} \\
x_{2x} \\
x_{2y} \\
v_{2x} \\
v_{2y}
\end{bmatrix} = \begin{bmatrix}
\frac{dx_{1x}}{dt} \\ \\
\frac{dx_{1y}}{dt} \\ \\
\frac{dv_{1x}}{dt} \\ \\
\frac{dv_{1y}}{dt} \\ \\
\frac{dx_{2x}}{dt} \\ \\
\frac{dx_{2y}}{dt} \\ \\
\frac{dv_{2x}}{dt} \\ \\
\frac{dv_{2y}}{dt}
\end{bmatrix}
$$</div>

<p>Woah! That&rsquo;s a tall formula! Don&rsquo;t worry, we can hopefully make it feel less
intimidating by breaking \( \vec y \) back out to its constituent parts.</p>

<div>$$\begin{aligned}
\frac{d \vec x_1(t)}{dt} = \begin{bmatrix}
    \frac{dx_{1x}}{dt} \\ \\
    \frac{dx_{1y}}{dt}
\end{bmatrix},

\frac{d \vec v_1(t)}{dt} = \begin{bmatrix}
    \frac{dv_{1x}}{dt} \\ \\
    \frac{dv_{1y}}{dt}
\end{bmatrix} \\ \\

\frac{d \vec x_2(t)}{dt} = \begin{bmatrix}
    \frac{dx_{2x}}{dt} \\ \\
    \frac{dx_{2y}}{dt}
\end{bmatrix},

\frac{d \vec v_2(t)}{dt} = \begin{bmatrix}
    \frac{dv_{2x}}{dt} \\ \\
    \frac{dv_{2y}}{dt}
\end{bmatrix}
\end{aligned}$$</div>

<p>\( \vec x_1 \) and \( \vec x_2 \) are going to follow similar rules to one
another, as will \( \vec v_1 \) and \( \vec v_2 \). So despite the ball of
notation above, all we really want to find are the following two things:</p>

<div>$$
\frac{d \vec x}{dt} \textnormal{  and  } \frac{d \vec v}{dt}
$$</div>

<p>How we define those two derivatives is the real nuts and bolts of the
simulation, and to make this an <em>interesting</em> simulation and not just a program
where random stuff happens, we can look to physics for inspiration.</p>

<h1 id="kinematics-dynamics">Kinematics &amp; Dynamics</h1>

<p>A bit of introductory kinematics and dynamics will go a long way in making our
simulation interesting. Let&rsquo;s start with the real basics.</p>

<p>\( \vec x \) represents position, and the first derivative of position with
respect to time is velocity, \( \vec v \).  In turn, the first derivative of
velocity with respect to time is acceleration, \( \vec a \).</p>

<p>It might seem by now that we&rsquo;ve already answered our question of finding our
derivative function \( f \), since we know the following:</p>

<div>$$
\frac{d \vec x}{dt} = \vec v \textnormal{  and  } \frac{d \vec v}{dt} = \vec a
$$</div>

<p>We have indeed rather nailed down \( \frac{d \vec x}{dt} \) since \( \vec v
\) is part of our state vector \( \vec y(t) \), but we&rsquo;ll need to go a teensy
bit further for the second equation since \( \vec a \) is not.</p>

<p>Newton&rsquo;s second law comes in handy here: \( \vec F = m \vec a \). If we assume
the masses of our particles are known, then we can re-arrange that equation and
we&rsquo;re left with this:</p>

<div>$$
\frac{d \vec v}{dt} = \vec a = \frac{\vec F}{m}
$$</div>

<p>Well hold on: \( \vec a \) isn&rsquo;t part of \( \vec y(t) \), and neither is \(
\vec F \), so this hardly seems like progress (remember, we need our derivative
function to be a function of only \( \vec y(t) \) and \( t \)).  But indeed
it is progress, because we have all sorts of handy formulas that govern forces
in the natural world.</p>

<p>Let&rsquo;s pretend, for our simple example, that the only force acting upon the
particles is the gravitational attraction between them. In that case, we can
determine \( \vec F \) using Newton&rsquo;s law of universal gravitation:</p>

<div>$$
F = G \frac{m_1 m_2}{r^2}
$$</div>

<p>Where \( G \) is the gravitational constant \( 6.67 \times 10^{-11}
\frac{Nm^2}{kg^2} \), and \( m_1 \) and \( m_2 \) are the masses of our
particles (which we assume are constant).</p>

<p>In order to do our simulation, we need a direction too, and we also need to
define \( r \) in terms of some part of \( \vec y(t) \).
If we say that \( \vec F_1 \) is the force acting upon particle 1, then we can
do that like so:</p>

<div>$$\begin{aligned}
\vec F_1 &= G \frac{m_1 m_2}{|\vec x_2 - \vec x_1|^2} \left[
    \frac{\vec x_2 - \vec x_1}{|\vec x_2 - \vec x_1|}
\right]
= G \frac{m_1 m_2(\vec x_2 - \vec x_1)}{|\vec x_2 - \vec x_1|^3} \\ \\
\vec F_2 &= G \frac{m_2 m_1}{|\vec x_1 - \vec x_2|^2} \left[
    \frac{\vec x_1 - \vec x_2}{|\vec x_1 - \vec x_2|}
\right]
= G \frac{m_2 m_1(\vec x_1 - \vec x_2)}{|\vec x_1 - \vec x_2|^3}
\end{aligned}$$</div>

<p>Let&rsquo;s recap. The changing state of our two particle system is entirely described
by \( \vec x_1, \) \( \vec v_1, \) \( \vec x_2, \) and \( \vec v_2 \).
And those change over time like so:</p>

<div>$$\begin{aligned}
\frac{d \vec x_1}{dt} &= \vec v_1 \\ \\
\frac{d \vec x_2}{dt} &= \vec v_2 \\ \\
\frac{d \vec v_1}{dt} &= \vec a_1 = \frac{\vec F_1}{m_1} =
G \frac{m_2 (\vec x_2 - \vec x_1)}{|\vec x_2 - \vec x_1|^3} \\ \\
\frac{d \vec v_2}{dt} &= \vec a_2 = \frac{\vec F_2}{m_2} =
G \frac{m_1 (\vec x_1 - \vec x_2)}{|\vec x_1 - \vec x_2|^3}
\end{aligned}$$</div>

<p>Now we have all the information that makes this simulation different from all
other simulations: \(\vec y_0\) and \(f\).</p>

<div>$$\begin{aligned}
\vec y_0 &= \vec y(0) &= \begin{bmatrix}
\vec x_1(0) \\
\vec v_1(0) \\
\vec x_2(0) \\
\vec v_2(0)
\end{bmatrix} &= \begin{bmatrix}
(2, 3) \\
(1, 0) \\
(4, 1) \\
(-1, 0)
\end{bmatrix} \\ \\

f(t, y(t)) &= \frac{d\vec y}{dt}(t) &= \begin{bmatrix}
\frac{d\vec x_1}{dt}(t) \\ \\
\frac{d\vec v_1}{dt}(t) \\ \\
\frac{d\vec x_2}{dt}(t) \\ \\
\frac{d\vec v_2}{dt}(t)
\end{bmatrix} &= \begin{bmatrix}
\vec v_1(t) \\ \\
G \frac{m_2 \big(\vec x_2(t) - \vec x_1(t)\big)}{|\vec x_2(t) - \vec x_1(t)|^3} \\ \\
\vec v_2(t) \\ \\
G \frac{m_1 \big(\vec x_1(t) - \vec x_2(t)\big)}{|\vec x_1(t) - \vec x_2(t)|^3}
\end{bmatrix}

\end{aligned}$$</div>

<p>Now that we have a rigorously defined simulation, how might we go
about turning it into a mesmerizing animation?</p>

<p>If you&rsquo;ve written simulations or games before, you might jump straight to
something like this:</p>
<pre><code>x += v * delta_t
v += F/m * delta_t
</code></pre>
<p>Let&rsquo;s take a step back and consider <em>why</em> that works.</p>

<h1 id="a-differential-equation">A Differential Equation</h1>

<p>Before diving into implementation, let&rsquo;s step back a second and see what
information we have and what information we need. We have a \( y_0 \) that
satisfies \( y(t_0) = y_0 \), and we have an \( f \) that satisfies \(
\frac{dy}{dt}(t) = f(t, y(t)) \). What we <em>want</em> is a function that can predict
the state of the system at any point in time. Phrased mathematically, we want
\( y(t) \).</p>

<p>With this in mind, if you squint carefully at \( \frac{dy}{dt}(t) = f(t, y(t))
\), you might spy that it&rsquo;s an equation that relates \( y \) with its
derivative \( \frac{dy}{dt} \). That makes it a differential equation! More
specifically, it makes it a <a href="https://www.khanacademy.org/math/differential-equations/first-order-differential-equations">first order, ordinary differential equation</a>. If
we <em>solve</em> this differential equation, we get the function \( y(t) \).</p>

<p>Solving for \( y(t) \) given \( y_0 \) and \( f \) is called an <a href="https://en.wikipedia.org/wiki/Initial_value_problem">initial
value problem</a>.</p>

<h1 id="numerical-integration">Numerical Integration</h1>

<p>Some initial value problems have easily found analytic solutions, but for
complex simulations this tends to be infeasible. So let&rsquo;s try to figure out a
method of approximating the solution.</p>

<p>Let&rsquo;s explore a simpler initial value problem:</p>

<p>Given \( y(0) = 1 \) and \( \frac{dy}{dt}(t) = f(t, y(t)) = y(t) \), find an
approximation of \( y(t) \).</p>

<p>Let&rsquo;s examine the problem from a graphical perspective by considering the value
and tangent line at \( t = 0 \). From our givens, we have \( y(0) = 1 \) and
\( \frac{dy}{dt}(0) = y(0) = 1 \).</p>

<p><svg width="400" viewBox="0 0 1 1"
    style="background-image: url(/images/emptygraph.png);"
    preserveAspectRatio="xMinYMin meet">
    <path d="M0 0.833 L1 0.333" fill="none" stroke="#EB5757"
    stroke-width="0.005" stroke-dasharray="0.01 0.01"/>
    <circle cx="0.333" cy="0.666" r="0.01" fill="#EB5757" />
</svg></p>

<p>We don&rsquo;t know what \( y(t) \) looks like yet, but we do know that it should
follow the tangent line near \( t = 0 \). So let&rsquo;s estimate \( y(0 + h)
\) for some small value of \( h \) by following the tangent line. We&rsquo;ll use
\( h = 0.5 \) for now.</p>

<p><svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/emptygraph.png);" preserveAspectRatio="xMinYMin meet">
    <path d="M0 0.833 L1 0.333" fill="none" stroke="#EB5757"
    stroke-width="0.005" stroke-dasharray="0.01 0.01"/>
    <circle cx="0.333" cy="0.666" r="0.01" fill="#EB5757" />
    <circle cx="0.5" cy="0.583" r="0.01" fill="#EB5757" />
</svg></p>

<p>Symbolically, we&rsquo;re estimating \( y(h) \) like so:</p>

<div>$$\begin{aligned}
y(h) \approx y(0) + h \frac{dy}{dt}(0) &= y(0) + h f(0, y(0)) \\
&= y(0) + h y(0) \\
&= 1 + h
\end{aligned}$$</div>

<p>So for \( h = 0.5 \), \( y(h) \approx 1.5 \).</p>

<p>Now we can repeat this process. Even though we don&rsquo;t know the exact value of \(
y(h) \), as long as we have a pretty good approximation of it, we can figure
out a pretty good approximation of the tangent line at that point too!</p>

<div>$$\begin{aligned}
f(t, y(t)) &= y(t) \\
f(0.5, 1.5) &= 1.5
\end{aligned}$$</div>

<p><svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/emptygraph.png);" preserveAspectRatio="xMinYMin meet">
    <path d="M0 0.95833 L1 0.20833" fill="none" stroke="#EB5757"
    stroke-width="0.005" stroke-dasharray="0.01 0.01"/>
    <circle cx="0.333" cy="0.666" r="0.01" fill="#EB5757" />
    <circle cx="0.5" cy="0.583" r="0.01" fill="#EB5757" />
</svg></p>

<p>Then we can follow this tangent line \( h \) units to the right as well.</p>

<p><svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/emptygraph.png); background-size: contain;"
preserveAspectRatio="xMinYMin meet">
    <path d="M0 0.95833 L1 0.20833" fill="none" stroke="#EB5757"
    stroke-width="0.005" stroke-dasharray="0.01 0.01"/>
    <circle cx="0.333" cy="0.666" r="0.01" fill="#EB5757" />
    <circle cx="0.5" cy="0.583" r="0.01" fill="#EB5757" />
    <circle cx="0.666" cy="0.45833" r="0.01" fill="#EB5757" />
</svg></p>

<p>We can repeat the process again, with a tangent slope of \( f(t, y(t)) = f(1,
2.25) = 2.25\):</p>

<p><svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/emptygraph.png);" preserveAspectRatio="xMinYMin meet">
    <path d="M0 1.2083 L1 0.0833" fill="none" stroke="#EB5757"
    stroke-width="0.005" stroke-dasharray="0.01 0.01"/>
    <circle cx="0.333" cy="0.666" r="0.01" fill="#EB5757" />
    <circle cx="0.5" cy="0.583" r="0.01" fill="#EB5757" />
    <circle cx="0.666" cy="0.45833" r="0.01" fill="#EB5757" />
    <circle cx="0.833" cy="0.270833" r="0.01" fill="#EB5757" />
</svg></p>

<p>Stating the procedure recursively, we have this:</p>

<div>$$\begin{aligned}
t_{i+1} &= t_i + h \\
y_{i+1} &= y_i + h f(t_i, y_i)
\end{aligned}$$</div>

<p>This is called the &ldquo;Forward Euler&rdquo; method of numerical integration. This is the
general case of the step <code>x += v * delta_t</code>!</p>

<p>In this particular initial value problem, our steps look like so:</p>

<div>$$\begin{aligned}
t_{i+1} &= t_i + h \\
y_{i+1} &= y_i + h y_i
\end{aligned}$$</div>

<p>This method lends itself well to representing the computations in a table, like
so:</p>

<div>$$\begin{aligned}
t_0 &= 0, &
y_0 &= 1
&
&
&\\

t_1 &= 0.5, &
y_1 &= y_0 + h y_0
&=& 1 + 0.5 (1)
&=& 1.5 \\

t_2 &= 1, &
y_2 &= y_1 + h y_1
&=& 1.5 + 0.5 (1.5)
&=& 2.25 \\

t_3 &= 1.5, &
y_3 &= y_2 + h y_2
&=& 2.25 + 0.5 (2.25)
&=& 3.375 \\
\end{aligned}$$</div>
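<p>If you want to check those numbers yourself, here&rsquo;s a tiny sketch of the same
computation (my illustration, not code from the demos on this page):</p>
<pre><code>// Forward Euler for dy/dt = y with y(0) = 1 and h = 0.5.
let t = 0
let y = 1
const h = 0.5
for (let i = 0; i &lt; 3; i++) {
    y = y + h * y  // y_{i+1} = y_i + h f(t_i, y_i), where f(t, y) = y
    t = t + h
    console.log(t, y)  // (0.5, 1.5), then (1, 2.25), then (1.5, 3.375)
}
</code></pre>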

<p>It turns out that this particular initial value problem has a nice analytical
solution: \( y(t) = e^t \).</p>

<figure>
<svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/expgraph.png);"
preserveAspectRatio="xMinYMin meet">
</svg>
<figcaption>Graph of \( y(t) = e^t \)</figcaption>
</figure>

<p>When approximating the solution with Forward Euler, what do you think happens as
the time step \( h \) gets smaller?</p>

<figure>
<svg width="400" viewBox="0 0 1 1" style="background-image:
url(/images/expgraph.png);" id="graph1" preserveAspectRatio="xMinYMin
meet"></svg>
<div>
    <span id="hval">\( h=0.5 \)</span>
</div>
<div>
    <input id="hrange" type="range" min="0.1" max="0.5" step="0.01" value="0.5"
/>
</div>
<figcaption>Move the slider left and right to control the value of
h.</figcaption>
</figure>

<script src="https://d3js.org/d3.v4.min.js"></script>

<script>(function() {
var x = d3.scaleLinear().domain([-1, 2]).range([0, 1]);
var y = d3.scaleLinear().domain([-1, 5]).range([1, 0]);

var graph = d3.select("#graph1");

var line = d3.line()
    .x(function(d) { return x(d[0]); })
    .y(function(d) { return y(d[1]); });

var path = graph.append("path")
    .attr("stroke", "#EB5757")
    .attr("stroke-width", "0.005")
    .attr("fill", "none");

function update(circles) {
    circles
        .attr("cx", function(d) { return x(d[0]); })
        .attr("cy", function(d) { return y(d[1]); })
        .attr("r", "0.01")
        .attr("fill", "#EB5757")
}

function render(data) {
    path.datum(data)
        .attr("d", line);

    var circles = graph.selectAll("circle").data(data)

    update(circles)
    update(circles.enter().append("circle"))
    circles.exit().remove()
}

function euler(t0, t1, y0, f, h) {
    var data = [];
    var t_i = t0;
    var y_i = y0;
    data.push([t_i, y_i]);
    while (true) {
        y_i = y_i + h * f(t_i, y_i);
        t_i += h;
        data.push([t_i, y_i]);
        if (t_i > t1) break;
    }
    return data;
}

d3.select("#hrange").on("input change", function(data, index, nodes) {
    katex.render("h=" + nodes[0].value, hval);
    render(euler(0, 2, 1, function(t, y) { return y; }, parseFloat(nodes[0].value, 10)))
})

render(euler(0, 2, 1, function(t, y) { return y; }, 0.5));

})();</script>

<p>The error between the approximate solution and the exact solution decreases as
\( h \) decreases! In addition to decreasing \( h \) to decrease the error,
you could also use an alternative method of numerical integration that may
provide better error bounds, such as the <a href="https://en.wikipedia.org/wiki/Midpoint_method">midpoint method</a>, <a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods">Runge-Kutta
methods</a>, or <a href="https://en.wikipedia.org/wiki/Linear_multistep_method">linear multistep methods</a>.</p>
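<p>To give a taste of what those alternatives look like, here&rsquo;s a sketch of a
single midpoint method step for a scalar \( y \). Rather than following the
tangent at \( (t_i, y_i) \), it follows the slope estimated halfway through the
step (again my illustration, not code from this page&rsquo;s demos):</p>
<pre><code>// One step of the midpoint method for dy/dt = f(t, y).
function midpointStep(
    t: number,
    y: number,
    h: number,
    f: (t: number, y: number) =&gt; number
): number {
    // First take a half-step along the tangent at (t, y)...
    const yMid = y + (h / 2) * f(t, y)
    // ...then use the slope at the midpoint for the full step.
    return y + h * f(t + h / 2, yMid)
}

// For f(t, y) = y with h = 0.5, midpointStep(0, 1, 0.5, (t, y) =&gt; y)
// gives 1.625, closer to e^0.5 ≈ 1.649 than Forward Euler's 1.5.
</code></pre>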

<h1 id="let-s-get-coding">Let&rsquo;s get coding!</h1>

<p>Just as we can generalize the definition of a simulation mathematically, we can
generalize the <em>implementation</em> of a simulation programmatically.</p>

<p>Since I&rsquo;m most familiar with JavaScript but appreciate the clarity type
annotations can give code, all the code samples will be written in
<a href="https://www.typescriptlang.org/index.html">TypeScript</a>.</p>

<p>Let&rsquo;s start with a version that assumes that <code>y</code> is a one-dimensional array of
numbers, just as in the mathematical explorations.</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="kd">function</span> <span class="nx">runSimulation</span><span class="p">(</span>
    <span class="c1">// y(0) = y0
</span><span class="c1"></span>    <span class="nx">y0</span>: <span class="kt">number</span><span class="p">[],</span>
    <span class="c1">// dy/dt(t) = f(t, y(t))
</span><span class="c1"></span>    <span class="nx">f</span><span class="o">:</span> <span class="p">(</span><span class="nx">t</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">y</span>: <span class="kt">number</span><span class="p">[])</span> <span class="o">=&gt;</span> <span class="kt">number</span><span class="p">[],</span>
    <span class="c1">// display the current state of the simulation
</span><span class="c1"></span>    <span class="nx">render</span><span class="o">:</span> <span class="p">(</span><span class="nx">y</span>: <span class="kt">number</span><span class="p">[])</span> <span class="o">=&gt;</span> <span class="k">void</span>
<span class="p">)</span> <span class="p">{</span>
    <span class="c1">// Step forward 1/60th of a second at a time
</span><span class="c1"></span>    <span class="c1">// This results in realtime simulation if playback is 60fps
</span><span class="c1"></span>    <span class="kr">const</span> <span class="nx">h</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">/</span> <span class="mf">60.0</span><span class="p">;</span>

    <span class="kd">function</span> <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">ti</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">yi</span>: <span class="kt">T</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">render</span><span class="p">(</span><span class="nx">yi</span><span class="p">)</span>
        <span class="nx">requestAnimationFrame</span><span class="p">(</span><span class="kd">function</span><span class="p">()</span> <span class="p">{</span>
            <span class="kr">const</span> <span class="nx">fi</span> <span class="o">=</span> <span class="nx">f</span><span class="p">(</span><span class="nx">ti</span><span class="p">,</span> <span class="nx">yi</span><span class="p">)</span>

            <span class="c1">// t_{i+1} = t_i + h
</span><span class="c1"></span>            <span class="kr">const</span> <span class="nx">tNext</span> <span class="o">=</span> <span class="nx">ti</span> <span class="o">+</span> <span class="nx">h</span>

            <span class="c1">// y_{i+1} = y_i + h f(t_i, y_i)
</span><span class="c1"></span>            <span class="kr">const</span> <span class="nx">yNext</span> <span class="o">=</span> <span class="p">[]</span>
            <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">j</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="nx">j</span> <span class="o">&lt;</span> <span class="nx">y</span><span class="p">.</span><span class="nx">length</span><span class="p">;</span> <span class="nx">j</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
                <span class="nx">yNext</span><span class="p">.</span><span class="nx">push</span><span class="p">(</span><span class="nx">yi</span><span class="p">[</span><span class="nx">j</span><span class="p">]</span> <span class="o">+</span> <span class="nx">h</span> <span class="o">*</span> <span class="nx">fi</span><span class="p">[</span><span class="nx">j</span><span class="p">]);</span>
            <span class="p">}</span>

            <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">tNext</span><span class="p">,</span> <span class="nx">yNext</span><span class="p">)</span>
        <span class="p">})</span>
    <span class="p">}</span>
    <span class="nx">simulationStep</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="nx">y0</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div>
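<p>As a quick usage sketch, here&rsquo;s the \( \frac{dy}{dt}(t) = y(t) \) example from
earlier driven through this function, with a stand-in <code>render</code> that
just logs the state:</p>
<pre><code>// Simulate dy/dt = y with y(0) = 1, logging the state once per frame.
runSimulation(
    /* y0 */     [1],
    /* f */      (t: number, y: number[]) =&gt; [y[0]],
    /* render */ (y: number[]) =&gt; { console.log(y[0]) }
)
</code></pre>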
<p>It turns out to be pretty inconvenient to always operate on 1-dimensional arrays
of numbers, so we can abstract away the addition and multiplication operations
on the simulation state into an interface, then define our general simulation
code concisely using <a href="https://www.typescriptlang.org/docs/handbook/generics.html">TypeScript Generics</a><sup class="footnote-ref" id="fnref:1"><a rel="footnote" href="#fn:1">1</a></sup>.</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="kr">interface</span> <span class="nx">Numeric</span><span class="p">&lt;</span><span class="nt">T</span><span class="p">&gt;</span> <span class="p">{</span>
    <span class="nx">plus</span><span class="p">(</span><span class="nx">other</span>: <span class="kt">T</span><span class="p">)</span><span class="o">:</span> <span class="nx">T</span>
    <span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span>: <span class="kt">number</span><span class="p">)</span><span class="o">:</span> <span class="nx">T</span>
<span class="p">}</span>

<span class="kd">function</span> <span class="nx">runSimulation</span><span class="p">&lt;</span><span class="nt">T</span> <span class="na">extends</span> <span class="na">Numeric</span><span class="err">&lt;</span><span class="na">T</span><span class="p">&gt;</span><span class="o">&gt;</span><span class="p">(</span>
  <span class="nx">y0</span>: <span class="kt">T</span><span class="p">,</span>
  <span class="nx">f</span><span class="o">:</span> <span class="p">(</span><span class="nx">t</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">y</span>: <span class="kt">T</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">T</span><span class="p">,</span>
  <span class="nx">render</span><span class="o">:</span> <span class="p">(</span><span class="nx">y</span>: <span class="kt">T</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="k">void</span>
<span class="p">)</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="nx">h</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">/</span> <span class="mf">60.0</span><span class="p">;</span>

    <span class="kd">function</span> <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">ti</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">yi</span>: <span class="kt">T</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">render</span><span class="p">(</span><span class="nx">yi</span><span class="p">)</span>
        <span class="nx">requestAnimationFrame</span><span class="p">(</span><span class="kd">function</span><span class="p">()</span> <span class="p">{</span>
            <span class="c1">// t_{i+1} = t_i + h
</span><span class="c1"></span>            <span class="kr">const</span> <span class="nx">tNext</span> <span class="o">=</span> <span class="nx">ti</span> <span class="o">+</span> <span class="nx">h</span>
            <span class="c1">// y_{i+1} = y_i + h f(t_i, y_i)
</span><span class="c1"></span>            <span class="kr">const</span> <span class="nx">yNext</span> <span class="o">=</span> <span class="nx">yi</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">f</span><span class="p">(</span><span class="nx">ti</span><span class="p">,</span> <span class="nx">yi</span><span class="p">).</span><span class="nx">times</span><span class="p">(</span><span class="nx">h</span><span class="p">))</span>
            <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">yNext</span><span class="p">,</span> <span class="nx">tNext</span><span class="p">)</span>
        <span class="p">})</span>
    <span class="p">}</span>
    <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">y0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div>
<p>The cool thing about having this general code is that it lets us focus on the
core of the simulation: what makes <em>this</em> simulation different from other
simulations. Using the example of our two particle simulation from before:</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="c1">// Represents the state of the entire
</span><span class="c1">// two particle simulation at a point in time.
</span><span class="c1"></span><span class="kr">class</span> <span class="nx">TwoParticles</span> <span class="kr">implements</span> <span class="nx">Numeric</span><span class="p">&lt;</span><span class="nt">TwoParticles</span><span class="p">&gt;</span> <span class="p">{</span>
    <span class="kr">constructor</span><span class="p">(</span>
        <span class="kr">readonly</span> <span class="nx">x1</span>: <span class="kt">Vec2</span><span class="p">,</span> <span class="kr">readonly</span> <span class="nx">v1</span>: <span class="kt">Vec2</span><span class="p">,</span>
        <span class="kr">readonly</span> <span class="nx">x2</span>: <span class="kt">Vec2</span><span class="p">,</span> <span class="kr">readonly</span> <span class="nx">v2</span>: <span class="kt">Vec2</span>
    <span class="p">)</span> <span class="p">{</span> <span class="p">}</span>

    <span class="nx">plus</span><span class="p">(</span><span class="nx">other</span>: <span class="kt">TwoParticles</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">return</span> <span class="k">new</span> <span class="nx">TwoParticles</span><span class="p">(</span>
            <span class="k">this</span><span class="p">.</span><span class="nx">x1</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">other</span><span class="p">.</span><span class="nx">x1</span><span class="p">),</span> <span class="k">this</span><span class="p">.</span><span class="nx">v1</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">other</span><span class="p">.</span><span class="nx">v1</span><span class="p">),</span>
            <span class="k">this</span><span class="p">.</span><span class="nx">x2</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">other</span><span class="p">.</span><span class="nx">x2</span><span class="p">),</span> <span class="k">this</span><span class="p">.</span><span class="nx">v2</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">other</span><span class="p">.</span><span class="nx">v2</span><span class="p">)</span>
        <span class="p">);</span>
    <span class="p">}</span>

    <span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span>: <span class="kt">number</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">return</span> <span class="k">new</span> <span class="nx">TwoParticles</span><span class="p">(</span>
            <span class="k">this</span><span class="p">.</span><span class="nx">x1</span><span class="p">.</span><span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span><span class="p">),</span> <span class="k">this</span><span class="p">.</span><span class="nx">v1</span><span class="p">.</span><span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span><span class="p">),</span>
            <span class="k">this</span><span class="p">.</span><span class="nx">x2</span><span class="p">.</span><span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span><span class="p">),</span> <span class="k">this</span><span class="p">.</span><span class="nx">v2</span><span class="p">.</span><span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span><span class="p">)</span>
        <span class="p">)</span>
    <span class="p">}</span>
<span class="p">}</span>

<span class="c1">// dy/dt (t) = f(t, y(t))
</span><span class="c1"></span><span class="kd">function</span> <span class="nx">f</span><span class="p">(</span><span class="nx">t</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">y</span>: <span class="kt">TwoParticles</span><span class="p">)</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="p">{</span> <span class="nx">x1</span><span class="p">,</span> <span class="nx">v1</span><span class="p">,</span> <span class="nx">x2</span><span class="p">,</span> <span class="nx">v2</span> <span class="p">}</span> <span class="o">=</span> <span class="nx">y</span><span class="p">;</span>
    <span class="k">return</span> <span class="k">new</span> <span class="nx">TwoParticles</span><span class="p">(</span>
        <span class="c1">// dx1/dt = v1
</span><span class="c1"></span>        <span class="nx">v1</span><span class="p">,</span>
        <span class="c1">// dv1/dt = G*m2*(x2-x1)/|x2-x1|^3
</span><span class="c1"></span>        <span class="nx">x2</span><span class="p">.</span><span class="nx">minus</span><span class="p">(</span><span class="nx">x1</span><span class="p">).</span><span class="nx">times</span><span class="p">(</span><span class="nx">G</span> <span class="o">*</span> <span class="nx">m2</span> <span class="o">/</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">pow</span><span class="p">(</span><span class="nx">x2</span><span class="p">.</span><span class="nx">minus</span><span class="p">(</span><span class="nx">x1</span><span class="p">).</span><span class="nx">length</span><span class="p">(),</span> <span class="mi">3</span><span class="p">)),</span>
        <span class="c1">// dx2/dt = v2
</span><span class="c1"></span>        <span class="nx">v2</span><span class="p">,</span>
        <span class="c1">// dv2/dt = G*m1*(x1-x1)/|x1-x2|^3
</span><span class="c1"></span>        <span class="nx">x1</span><span class="p">.</span><span class="nx">minus</span><span class="p">(</span><span class="nx">x2</span><span class="p">).</span><span class="nx">times</span><span class="p">(</span><span class="nx">G</span> <span class="o">*</span> <span class="nx">m1</span> <span class="o">/</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">pow</span><span class="p">(</span><span class="nx">x1</span><span class="p">.</span><span class="nx">minus</span><span class="p">(</span><span class="nx">x2</span><span class="p">).</span><span class="nx">length</span><span class="p">(),</span> <span class="mi">3</span><span class="p">))</span>
    <span class="p">)</span>
<span class="p">}</span>

<span class="c1">// y(0) = y0
</span><span class="c1"></span><span class="kr">const</span> <span class="nx">y0</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">TwoParticles</span><span class="p">(</span>
    <span class="cm">/* x1 */</span> <span class="k">new</span> <span class="nx">Vec2</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span>
    <span class="cm">/* v1 */</span> <span class="k">new</span> <span class="nx">Vec2</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
    <span class="cm">/* x2 */</span> <span class="k">new</span> <span class="nx">Vec2</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span>
    <span class="cm">/* v2 */</span> <span class="k">new</span> <span class="nx">Vec2</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="p">)</span>

<span class="kr">const</span> <span class="nx">canvas</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">createElement</span><span class="p">(</span><span class="s2">&#34;canvas&#34;</span><span class="p">)</span>
<span class="nx">canvas</span><span class="p">.</span><span class="nx">width</span> <span class="o">=</span> <span class="mi">400</span><span class="p">;</span>
<span class="nx">canvas</span><span class="p">.</span><span class="nx">height</span> <span class="o">=</span> <span class="mi">400</span><span class="p">;</span>
<span class="kr">const</span> <span class="nx">ctx</span> <span class="o">=</span> <span class="nx">canvas</span><span class="p">.</span><span class="nx">getContext</span><span class="p">(</span><span class="s2">&#34;2d&#34;</span><span class="p">)</span><span class="o">!</span><span class="p">;</span>
<span class="nb">document</span><span class="p">.</span><span class="nx">body</span><span class="p">.</span><span class="nx">appendChild</span><span class="p">(</span><span class="nx">canvas</span><span class="p">);</span>

<span class="c1">// Display the current state of the simulation
</span><span class="c1"></span><span class="kd">function</span> <span class="nx">render</span><span class="p">(</span><span class="nx">y</span>: <span class="kt">TwoParticles</span><span class="p">)</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="p">{</span> <span class="nx">x1</span><span class="p">,</span> <span class="nx">x2</span> <span class="p">}</span> <span class="o">=</span> <span class="nx">y</span><span class="p">;</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fillStyle</span> <span class="o">=</span> <span class="s2">&#34;white&#34;</span><span class="p">;</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fillRect</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">400</span><span class="p">,</span> <span class="mi">400</span><span class="p">);</span>

    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fillStyle</span> <span class="o">=</span> <span class="s2">&#34;black&#34;</span><span class="p">;</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">beginPath</span><span class="p">();</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">ellipse</span><span class="p">(</span><span class="nx">x1</span><span class="p">.</span><span class="nx">x</span><span class="o">*</span><span class="mi">50</span> <span class="o">+</span> <span class="mi">200</span><span class="p">,</span> <span class="nx">x1</span><span class="p">.</span><span class="nx">y</span><span class="o">*</span><span class="mi">50</span> <span class="o">+</span> <span class="mi">200</span><span class="p">,</span> <span class="mi">15</span><span class="p">,</span> <span class="mi">15</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span> <span class="o">*</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">PI</span><span class="p">);</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fill</span><span class="p">();</span>

    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fillStyle</span> <span class="o">=</span> <span class="s2">&#34;red&#34;</span><span class="p">;</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">beginPath</span><span class="p">();</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">ellipse</span><span class="p">(</span><span class="nx">x2</span><span class="p">.</span><span class="nx">x</span><span class="o">*</span><span class="mi">50</span> <span class="o">+</span> <span class="mi">200</span><span class="p">,</span> <span class="nx">x2</span><span class="p">.</span><span class="nx">y</span><span class="o">*</span><span class="mi">50</span> <span class="o">+</span> <span class="mi">200</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span> <span class="o">*</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">PI</span><span class="p">);</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fill</span><span class="p">();</span>
<span class="p">}</span>

<span class="c1">// Run the thing!
</span><span class="c1"></span><span class="nx">runSimulation</span><span class="p">(</span><span class="nx">y0</span><span class="p">,</span> <span class="nx">f</span><span class="p">,</span> <span class="nx">render</span><span class="p">)</span>
</code></pre></div>
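<p>The code above leans on a small <code>Vec2</code> class that I haven&rsquo;t shown.
Here&rsquo;s a minimal sketch of one with the four operations the simulation needs:</p>
<pre><code>class Vec2 {
    constructor(readonly x: number, readonly y: number) { }
    plus(other: Vec2) { return new Vec2(this.x + other.x, this.y + other.y) }
    minus(other: Vec2) { return new Vec2(this.x - other.x, this.y - other.y) }
    times(scalar: number) { return new Vec2(this.x * scalar, this.y * scalar) }
    length() { return Math.sqrt(this.x * this.x + this.y * this.y) }
}
</code></pre>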
<p>If we tweak the numbers a bit, we can get a simulation of the moon&rsquo;s orbit!</p>

<figure>
<canvas id="earthmoon" width="400" height="400"></canvas>
<figcaption>
Earth-Moon orbital simulation, 1px=2500km. <br/>
1 second of simulation time is 1 day of real-world time. <br/>
Earth &amp; Moon are shown at 10x their correct proportional size.
</figcaption>
</figure>

<p>If you want to play with the code, here it is: <a href="http://codepen.io/jlfwong/pen/mmmXVK">Earth Moon Simulation on
Codepen</a>.</p>

<script>
(function() {
var Vec2 = (function () {
    function Vec2(x, y) {
        if (x === void 0) { x = 0; }
        if (y === void 0) { y = 0; }
        this.x = x;
        this.y = y;
    }
    Vec2.prototype.plus = function (other) { return new Vec2(this.x + other.x, this.y + other.y); };
    Vec2.prototype.times = function (scalar) { return new Vec2(this.x * scalar, this.y * scalar); };
    Vec2.prototype.minus = function (other) { return new Vec2(this.x - other.x, this.y - other.y); };
    Vec2.prototype.length2 = function () { return (this.x * this.x + this.y * this.y); };
    Vec2.prototype.length = function () { return Math.sqrt(this.length2()); };
    return Vec2;
}());
function runSimulation(y0, f, render) {
    var h = (24 * 60 * 60) * 1 / 60.0;
    function simulationStep(yi, ti) {
        render(yi);
        requestAnimationFrame(function () {
            // t_{i+1} = t_i + h
            var tNext = ti + h;
            // y_{i+1} = y_i + h f(t_i, y_i)
            var yNext = yi.plus(f(ti, yi).times(h));
            simulationStep(yNext, tNext);
        });
    }
    simulationStep(y0, 0.0);
}
var TwoParticles = (function () {
    function TwoParticles(x1, v1, x2, v2) {
        this.x1 = x1;
        this.v1 = v1;
        this.x2 = x2;
        this.v2 = v2;
    }
    TwoParticles.prototype.plus = function (other) {
        return new TwoParticles(this.x1.plus(other.x1), this.v1.plus(other.v1), this.x2.plus(other.x2), this.v2.plus(other.v2));
    };
    TwoParticles.prototype.times = function (scalar) {
        return new TwoParticles(this.x1.times(scalar), this.v1.times(scalar), this.x2.times(scalar), this.v2.times(scalar));
    };
    return TwoParticles;
}());

var canvas = document.getElementById("earthmoon");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "rgba(0, 0, 0, 0, 1)";
ctx.fillRect(0, 0, 400, 400);

function render(y) {
    var x1 = y.x1, x2 = y.x2;
    ctx.fillStyle = "rgba(0, 0, 0, 0.05)";
    ctx.fillRect(0, 0, 400, 400);
    var rEarth = 6.371e6 / 1e9;
    var rMoon = 1.73e6 / 1e9;
    ctx.fillStyle = "rgba(45, 66, 143, 1)";
    ctx.beginPath();
    ctx.ellipse((x1.x / 1e9) * 400 + 200, (x1.y / 1e9) * 400 + 200, rEarth * 400 * 10, rEarth * 400 * 10, 0, 0, 2 * Math.PI);
    ctx.fill();
    ctx.fillStyle = "rgba(189, 189, 189, 1)";
    ctx.beginPath();
    ctx.ellipse((x2.x / 1e9) * 400 + 200, (x2.y / 1e9) * 400 + 200, rMoon * 400 * 10, rMoon * 400 * 10, 0, 0, 2 * Math.PI);
    ctx.fill();
}
var G = 6.67e-11;
var m1 = 5.972e24;
var m2 = 7.34e22;
function f(t, y) {
    var x1 = y.x1, v1 = y.v1, x2 = y.x2, v2 = y.v2;
    return new TwoParticles(
    // dx1/dt = v1
    v1,
    // dv1/dt = G*m2*(x2-x1)/|x2-x1|^3
    x2.minus(x1).times(G * m2 / Math.pow(x2.minus(x1).length(), 3)),
    // dx2/dt = v2
    v2,
    // dv2/dt = G*m1*(x1-x2)/|x1-x2|^3
    x1.minus(x2).times(G * m1 / Math.pow(x1.minus(x2).length(), 3)));
}
var y0 = new TwoParticles(
/* x1 */ new Vec2(0, 0),
/* v1 */ new Vec2(0, -13.22),
/* x2 */ new Vec2(3.6e8, 0),
/* v2 */ new Vec2(0, 1.076e3));
runSimulation(y0, f, render);
})();
</script>

<h1 id="collisions-constraints">Collisions &amp; Constraints</h1>

<p>While the mathematical model I presented does describe the physical
world, the numerical integration method sadly falls apart in certain situations.</p>

<p>Consider a simulation of a bouncing ball.</p>

<figure>
<canvas id="bounce1" width="400" height="400"></canvas>
</figure>

<script>(function() {
var canvas = document.getElementById("bounce1"); var ctx =
canvas.getContext("2d");

var y = 0.8; // m
var v = 0;
var a = -9.8; // m/s^2
var secondsPerFrame = 1 / 60.0;
var r = 0.2;
var iterationsPerFrame = 30;

function render(y) {
    ctx.clearRect(0, 0, 400, 400);
    ctx.fillStyle = '#EB5757';
    ctx.beginPath();
    ctx.ellipse(200, 400 - ((y+r) * 300), r * 300, r * 300, 0, 0, 2 * Math.PI);
    ctx.fill();
}

function tick() {
    render(y);

    var h = secondsPerFrame / iterationsPerFrame;
    for (var i = 0; i < iterationsPerFrame; i++) {
        y += v * h;

        if (y <= 0 && v <= 0) {
            // Perfectly elastic collision
            v = -v;
        }

        v += a * h;
    }

    requestAnimationFrame(tick);
}
tick();
})()</script>

<p>The entire state of the simulation is:</p>

<div>$$
\vec y = \begin{bmatrix}
x \\
v
\end{bmatrix}
$$</div>

<p>Where \( x \) is the ball&rsquo;s height off the ground and \( v \) is the
velocity of the ball. If we drop the ball from a height of 0.8 meters, we have:</p>

<div>$$
\vec y_0 = \begin{bmatrix}
0.8 \\
0
\end{bmatrix}
$$</div>

<p>If we plot \( x(t) \), it would look something like this:</p>

<p><img width="400" height="400" src="/images/bouncegraph.png" /></p>

<p>While the ball is falling, we can construct our derivative function \( f \)
fairly easily:</p>

<div>$$
f(t, y(t)) = \frac{dy}{dt}(t) = \begin{bmatrix}
    \frac{dx}{dt}(t) \\ \\
    \frac{dv}{dt}(t)
\end{bmatrix} = \begin{bmatrix}
    v \\
    a
\end{bmatrix}
$$</div>

<p>While accelerating only under the influence of gravity, \( a = -g = -9.8
\frac{m}{s^2} \).</p>

<p>But what happens when the ball hits the ground? We know the ball has hit the
ground when \( x = 0 \). But in our numerical integration, it&rsquo;s possible that
the ball might be above the ground at one time step, and <em>in</em> the ground the
next time step: \( x_i &gt; 0, x_{i+1} &lt; 0 \).</p>

<p>We could solve this by rewinding time to find the time \( t_c \) at which the
collision happens \( (t_i &lt; t_c &lt; t_{i+1}) \). But once we&rsquo;ve found that, we
still don&rsquo;t have a way to define \( \frac{dv}{dt} \) that would result in the
velocity instantaneously changing to be upwards instead of downwards.</p>
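<p>For what it&rsquo;s worth, the rewinding itself is straightforward to approximate,
e.g. by bisecting within the time step. Here&rsquo;s a sketch of the idea, where the
<code>height</code> helper is hypothetical (it would advance the state from \(
t_i \) by \( dt \) and report the ball&rsquo;s height):</p>
<pre><code>// Approximate the collision time t_c in (ti, ti + h), assuming the
// ball is above the ground at ti and in the ground at ti + h.
function findCollisionTime(
    ti: number,
    h: number,
    height: (dt: number) =&gt; number,
    iterations: number = 20
): number {
    let lo = 0, hi = h
    for (let i = 0; i &lt; iterations; i++) {
        const mid = (lo + hi) / 2
        if (height(mid) &gt; 0) {
            lo = mid  // still above the ground: collision happens later
        } else {
            hi = mid  // in the ground: collision happens earlier
        }
    }
    return ti + (lo + hi) / 2
}
</code></pre>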

<p>It&rsquo;s possible to work this all out by having the collision take a finite
amount of time and applying some force \( F \) over that timespan \(
\Delta t \), but it&rsquo;s usually easier to just add support for some kind of
discrete constraints in the simulation.</p>

<p>We can also do more than one iteration of the physics simulation per rendered
frame to reduce the amount by which the ball penetrates the floor before
bouncing.  With that in mind, our core simulation code changes to this:</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="kd">function</span> <span class="nx">runSimulation</span><span class="p">&lt;</span><span class="nt">T</span> <span class="na">extends</span> <span class="na">Numeric</span><span class="err">&lt;</span><span class="na">T</span><span class="p">&gt;</span><span class="o">&gt;</span><span class="p">(</span>
  <span class="nx">y0</span>: <span class="kt">T</span><span class="p">,</span>
  <span class="nx">f</span><span class="o">:</span> <span class="p">(</span><span class="nx">t</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">y</span>: <span class="kt">T</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">T</span><span class="p">,</span>
  <span class="nx">applyConstraints</span><span class="o">:</span> <span class="p">(</span><span class="nx">y</span>: <span class="kt">T</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">T</span><span class="p">,</span>
  <span class="nx">iterationsPerFrame</span>: <span class="kt">number</span><span class="p">,</span>
  <span class="nx">render</span><span class="o">:</span> <span class="p">(</span><span class="nx">y</span>: <span class="kt">T</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="k">void</span>
<span class="p">)</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="nx">frameTime</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">/</span> <span class="mf">60.0</span>
    <span class="kr">const</span> <span class="nx">h</span> <span class="o">=</span> <span class="nx">frameTime</span> <span class="o">/</span> <span class="nx">iterationsPerFrame</span>

    <span class="kd">function</span> <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">yi</span>: <span class="kt">T</span><span class="p">,</span> <span class="nx">ti</span>: <span class="kt">number</span><span class="p">)</span> <span class="p">{</span>
        <span class="nx">render</span><span class="p">(</span><span class="nx">yi</span><span class="p">)</span>
        <span class="nx">requestAnimationFrame</span><span class="p">(</span><span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
            <span class="k">for</span> <span class="p">(</span><span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="nx">i</span> <span class="o">&lt;</span> <span class="nx">iterationsPerFrame</span><span class="p">;</span> <span class="nx">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
                <span class="nx">yi</span> <span class="o">=</span> <span class="nx">yi</span><span class="p">.</span><span class="nx">plus</span><span class="p">(</span><span class="nx">f</span><span class="p">(</span><span class="nx">ti</span><span class="p">,</span> <span class="nx">yi</span><span class="p">).</span><span class="nx">times</span><span class="p">(</span><span class="nx">h</span><span class="p">))</span>
                <span class="nx">yi</span> <span class="o">=</span> <span class="nx">applyConstraints</span><span class="p">(</span><span class="nx">yi</span><span class="p">)</span>
                <span class="nx">ti</span> <span class="o">=</span> <span class="nx">ti</span> <span class="o">+</span> <span class="nx">h</span>
            <span class="p">}</span>
            <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">yi</span><span class="p">,</span> <span class="nx">ti</span><span class="p">)</span>
        <span class="p">})</span>
    <span class="p">}</span>
    <span class="nx">simulationStep</span><span class="p">(</span><span class="nx">y0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div>
<p>And then we can implement our bouncing ball like so:</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="kr">const</span> <span class="nx">g</span> <span class="o">=</span> <span class="o">-</span><span class="mf">9.8</span><span class="p">;</span> <span class="c1">// m / s^2
</span><span class="c1"></span><span class="kr">const</span> <span class="nx">r</span> <span class="o">=</span> <span class="mf">0.2</span><span class="p">;</span> <span class="c1">// m
</span><span class="c1"></span>
<span class="kr">class</span> <span class="nx">Ball</span> <span class="kr">implements</span> <span class="nx">Numeric</span><span class="p">&lt;</span><span class="nt">Ball</span><span class="p">&gt;</span> <span class="p">{</span>
    <span class="kr">constructor</span><span class="p">(</span><span class="kr">readonly</span> <span class="nx">x</span>: <span class="kt">number</span><span class="p">,</span> <span class="kr">readonly</span> <span class="nx">v</span>: <span class="kt">number</span><span class="p">)</span> <span class="p">{</span> <span class="p">}</span>
    <span class="nx">plus</span><span class="p">(</span><span class="nx">other</span>: <span class="kt">Ball</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="k">new</span> <span class="nx">Ball</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">x</span> <span class="o">+</span> <span class="nx">other</span><span class="p">.</span><span class="nx">x</span><span class="p">,</span> <span class="k">this</span><span class="p">.</span><span class="nx">v</span> <span class="o">+</span> <span class="nx">other</span><span class="p">.</span><span class="nx">v</span><span class="p">)</span> <span class="p">}</span>
    <span class="nx">times</span><span class="p">(</span><span class="nx">scalar</span>: <span class="kt">number</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="k">new</span> <span class="nx">Ball</span><span class="p">(</span><span class="k">this</span><span class="p">.</span><span class="nx">x</span> <span class="o">*</span> <span class="nx">scalar</span><span class="p">,</span> <span class="k">this</span><span class="p">.</span><span class="nx">v</span> <span class="o">*</span> <span class="nx">scalar</span><span class="p">)</span> <span class="p">}</span>
<span class="p">}</span>

<span class="kd">function</span> <span class="nx">f</span><span class="p">(</span><span class="nx">t</span>: <span class="kt">number</span><span class="p">,</span> <span class="nx">y</span>: <span class="kt">Ball</span><span class="p">)</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="p">{</span> <span class="nx">x</span><span class="p">,</span> <span class="nx">v</span> <span class="p">}</span> <span class="o">=</span> <span class="nx">y</span>
    <span class="k">return</span> <span class="k">new</span> <span class="nx">Ball</span><span class="p">(</span><span class="nx">v</span><span class="p">,</span> <span class="nx">g</span><span class="p">)</span>
<span class="p">}</span>

<span class="kd">function</span> <span class="nx">applyConstraints</span><span class="p">(</span><span class="nx">y</span>: <span class="kt">Ball</span><span class="p">)</span><span class="o">:</span> <span class="nx">Ball</span> <span class="p">{</span>
    <span class="kr">const</span> <span class="p">{</span> <span class="nx">x</span><span class="p">,</span> <span class="nx">v</span> <span class="p">}</span> <span class="o">=</span> <span class="nx">y</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">x</span> <span class="o">&lt;=</span> <span class="mi">0</span> <span class="o">&amp;&amp;</span> <span class="nx">v</span> <span class="o">&lt;</span> <span class="mi">0</span><span class="p">)</span> <span class="p">{</span>
        <span class="k">return</span> <span class="k">new</span> <span class="nx">Ball</span><span class="p">(</span><span class="nx">x</span><span class="p">,</span> <span class="o">-</span><span class="nx">v</span><span class="p">)</span>
    <span class="p">}</span>
    <span class="k">return</span> <span class="nx">y</span>
<span class="p">}</span>

<span class="kr">const</span> <span class="nx">y0</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">Ball</span><span class="p">(</span>
    <span class="cm">/* x */</span> <span class="mf">0.8</span><span class="p">,</span>
    <span class="cm">/* v */</span> <span class="mi">0</span>
<span class="p">)</span>

<span class="kd">function</span> <span class="nx">render</span><span class="p">(</span><span class="nx">y</span>: <span class="kt">Ball</span><span class="p">)</span> <span class="p">{</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">clearRect</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">400</span><span class="p">,</span> <span class="mi">400</span><span class="p">)</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fillStyle</span> <span class="o">=</span> <span class="s1">&#39;#EB5757&#39;</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">beginPath</span><span class="p">()</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">ellipse</span><span class="p">(</span><span class="mi">200</span><span class="p">,</span> <span class="mi">400</span> <span class="o">-</span> <span class="p">((</span><span class="nx">y</span><span class="p">.</span><span class="nx">x</span> <span class="o">+</span> <span class="nx">r</span><span class="p">)</span> <span class="o">*</span> <span class="mi">300</span><span class="p">),</span> <span class="nx">r</span> <span class="o">*</span> <span class="mi">300</span><span class="p">,</span> <span class="nx">r</span> <span class="o">*</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span> <span class="o">*</span> <span class="nb">Math</span><span class="p">.</span><span class="nx">PI</span><span class="p">)</span>
    <span class="nx">ctx</span><span class="p">.</span><span class="nx">fill</span><span class="p">()</span>
<span class="p">}</span>

<span class="nx">runSimulation</span><span class="p">(</span><span class="nx">y0</span><span class="p">,</span> <span class="nx">f</span><span class="p">,</span> <span class="nx">applyConstraints</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="nx">render</span><span class="p">)</span>
</code></pre></div>
<p>You can play around with this version of the code here: <a href="http://codepen.io/jlfwong/pen/LyyQMr">Bouncing Ball on
Codepen</a>.</p>

<h1 id="implementers-beware">Implementers beware!</h1>

<p>While this general description of simulations has some nice properties, it
doesn&rsquo;t necessarily yield the highest-performance simulations. I find it a
nice framework for thinking about the behavior of simulations, though there&rsquo;s
certainly a lot of unnecessary overhead.</p>

<p>The rain simulation that starts this post is, for instance, not implemented
using the patterns here. Instead, it was an experiment using the <a href="https://en.wikipedia.org/wiki/Entity%E2%80%93component%E2%80%93system">Entity
Component System</a> architectural pattern. You can see the source for the rain
simulation here: <a href="https://github.com/jlfwong/graphics-experiments/blob/master/particles2/particles2.ts">Rain Simulation source on GitHub</a>.</p>

<h1 id="until-next-time">Until next time!</h1>

<p>Something about the intersection of math, physics, and programming really
strikes a chord with me. Getting simulations up and running and rendering makes
for a very special kind of <a href="/2013/05/05/something-out-of-nothing/">something out of nothing</a>.</p>

<p>SIGGRAPH course notes served as a form of inspiration here, just as they did
with the <a href="/2016/08/05/webgl-fluid-simulation/">fluid simulation</a>. Check out the <a href="https://www.cs.cmu.edu/~baraff/sigcourse/">&ldquo;An Introduction to Physically Based
Modelling&rdquo;</a> course notes from SIGGRAPH 1997 if you want a much more rigorous
look into this material. I have the 1997 notes linked because it looks like
Pixar took the 2001 versions down.</p>

<p>Thanks to <a href="http://rmaggiecai.com/">Maggie Cai</a> for the beautiful illustration of the couple with the
umbrella and for having the patience to meticulously choose colors when I can
barely tell the difference between blue and grey.</p>

<p>And in case you were wondering, the illustration was made in <a href="https://www.figma.com">Figma</a>. 😃</p>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.7.1/katex.min.js"
integrity="sha384-/y1Nn9+QQAipbNQWU65krzJralCnuOasHncUFXGkdwntGeSvQicrYkiUBwsgUqc1"
crossorigin="anonymous"></script>

<script
src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.7.1/contrib/auto-render.min.js"
integrity="sha384-dq1/gEHSxPZQ7DdrM82ID4YVol9BYyU7GbWlIwnwyPzotpoc57wDw/guX8EaYGPx"
crossorigin="anonymous"></script>

<script>
renderMathInElement(document.body);
</script>
<div class="footnotes">

<hr />

<ol>
<li id="fn:1">The same pattern would work using duck typing in dynamic languages like Python and JavaScript. This pattern would be nicer in languages supporting operator overloading, such as Python, C++, Scala, etc.
 <a class="footnote-return" href="#fnref:1"><sup>[return]</sup></a></li>
</ol>
</div>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Bezier Curves from the Ground Up]]></title>
    <link href="http://jamie-wong.com/post/bezier-curves/"/>
    <updated>2016-12-29T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/bezier-curves/</id>
    <content type="html"><![CDATA[ 

<p><link rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.css"></p>

<p><em>This post is also available in Japanese: <a href="https://postd.cc/bezier-curves/">一から学ぶベジェ曲線</a>.</em></p>

<p>How do you describe a straight line segment? We might think about a line segment
in terms of its endpoints. Let&rsquo;s call those endpoints \( P_0 \) and \( P_1
\).</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 L550 150" stroke="black" stroke-width="2"/>
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
</svg></p>

<p>To define the line segment rigorously, we might say &ldquo;the set of all points along
the line through \( P_0 \) and \( P_1 \) which lie between \( P_0 \) and
\( P_1 \)&rdquo;, or perhaps this:</p>

<p>$$
L(t) = (1 - t) P_0 + t P_1, 0 \le t \le 1
$$</p>

<p>Conveniently, this definition lets us easily find the coordinates of the point
any portion of the way along that line segment. The midpoint, for instance, lies
at \( L(0.5) \).</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 L550 150" stroke="black" stroke-width="2"/>
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(300, 100)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="-20" y="30" font-family="KaTeX_Math">
            L(0.5)
        </text>
    </g>
</svg></p>

<div>$$
L(0.5) = (1 - 0.5) P_0 + 0.5 P_1 = \begin{bmatrix}
    0.5(P_{0_x} + P_{1_x}) \\
    0.5(P_{0_y} + P_{1_y})
\end{bmatrix}
$$</div>
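
<p>In code, this linear interpolation (often called &ldquo;lerp&rdquo;) is tiny. Here&rsquo;s a
minimal TypeScript sketch using the same endpoint coordinates as the diagram
above (the <code>Point</code> type and function name are just for illustration):</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript">type Point = [number, number];

// L(t) = (1 - t) * P0 + t * P1, applied to each coordinate
function lerp(p0: Point, p1: Point, t: number): Point {
    return [(1 - t) * p0[0] + t * p1[0],
            (1 - t) * p0[1] + t * p1[1]];
}

lerp([50, 50], [550, 150], 0.5); // the midpoint: [300, 100]
</code></pre></div>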

<p>We can, in fact, <em>linearly interpolate</em> to any value we want between the two
points, with arbitrary precision. This allows us to do fancier things, like
trace the line by having the \( t \) in \( L(t) \) be a function of time.</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 L550 150" stroke="black" stroke-width="2"/>
    <path d="M50 50 L50 50" stroke="#EB5757" stroke-width="2" id="p1" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(50, 50)" id="Lg1">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="-20" y="30" font-family="KaTeX_Math" id="Lt1">
            L(0.5)
        </text>
    </g>
</svg></p>

<script>
(function() {
    var t = 0;
    var dt = 0.004;

    var x0 = 50;
    var y0 = 50;
    var x1 = 550;
    var y1 = 150;

    var Lg1 = document.getElementById("Lg1");
    var Lt1 = document.getElementById("Lt1");
    var p1 = document.getElementById("p1");

    (function next() {
        t += dt;
        if (t < 0 || t > 1) {
            dt *= -1;
            t = Math.min(1, Math.max(0, t));
        }

        var x = x0 + t * (x1 - x0);
        var y = y0 + t * (y1 - y0);

        Lg1.setAttribute("transform",  "translate(" + x + "," + y + ")");
        Lt1.innerHTML = "L(" + t.toFixed(2) + ")";
        p1.setAttribute("d", "M50 50 L" + x.toFixed(2) + " " + y.toFixed(2));

        requestAnimationFrame(next);
    })();
})();
</script>

<p>If you got this far, you might now be wondering, &ldquo;What does this have to do with
curves?&rdquo;. Well, it seems quite intuitive that you can precisely describe a line
segment with only two points. How might you go about precisely describing this?</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 Q550 150 550 50" stroke="black" fill="none" stroke-width="2"/>
</svg></p>

<p>It turns out that this <em>particular</em> kind of curve can be described by only 3
points!</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 Q550 150 550 50" stroke="black" fill="none" stroke-width="2"/>
    <path d="M50 50 L550 150" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <path d="M550 150 L 550 50" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
</svg></p>

<p>This is called a <em>Quadratic Bezier Curve</em>. A line segment, donning a fancier
hat, might be called a <em>Linear Bezier Curve</em>. Let&rsquo;s investigate why.</p>

<p>First, let&rsquo;s consider what it looks like when we interpolate between \( P_0 \)
and \( P_1 \) while simultaneously interpolating between \( P_1 \) and \(
P_2 \).</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg" id="quad1">
    <path d="M50 50 L550 150" stroke="black" fill="none" stroke-width="1"
    stroke-dasharray="1, 2" />
    <path d="M550 150 L 550 50" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p01" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p12" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
    <g transform="translate(50, 50)" class="gB01">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="-20" y="30" font-family="KaTeX_Math">
            B<tspan baseline-shift="sub" font-size="70%">0,1</tspan>(<tspan class="t01">0.00</tspan>)
        </text>
    </g>
    <g transform="translate(550, 150)" class="gB12">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="-90" y="0" font-family="KaTeX_Math">
            B<tspan baseline-shift="sub" font-size="70%">1,2</tspan>(<tspan class="t12">0.00</tspan>)
        </text>
    </g>
</svg></p>

<script>
(function() {
    var t = 0;
    var dt = 0.004;

    var x0 = 50;
    var y0 = 50;
    var x1 = 550;
    var y1 = 150;
    var x2 = 550;
    var y2 = 50;

    var quad1 = document.getElementById("quad1");
    var gB01 = quad1.querySelector(".gB01");
    var gB12 = quad1.querySelector(".gB12");
    var t01 = quad1.querySelector(".t01");
    var t12 = quad1.querySelector(".t12");
    var p01 = quad1.querySelector(".p01");
    var p12 = quad1.querySelector(".p12");

    (function next() {
        t += dt;
        if (t < 0 || t > 1) {
            dt *= -1;
            t = Math.min(1, Math.max(0, t));
        }

        var x01 = x0 + t * (x1 - x0);
        var y01 = y0 + t * (y1 - y0);

        gB01.setAttribute("transform",  "translate(" + x01 + "," + y01 + ")");
        t01.innerHTML = t.toFixed(2);
        p01.setAttribute("d", "M" + x0 + " " + y0 + " L" + x01.toFixed(2) + " " + y01.toFixed(2));

        var x12 = x1 + t * (x2 - x1);
        var y12 = y1 + t * (y2 - y1);

        gB12.setAttribute("transform",  "translate(" + x12 + "," + y12 + ")");
        t12.innerHTML = t.toFixed(2);
        p12.setAttribute("d", "M" + x1 + " " + y1 + " L" + x12.toFixed(2) + " " + y12.toFixed(2));

        requestAnimationFrame(next);
    })();
})();
</script>

<div>$$
\begin{aligned}
B_{0,1}(t) = (1 - t) P_0 + t P_1, 0 \le t \le 1 \\
B_{1,2}(t) = (1 - t) P_1 + t P_2, 0 \le t \le 1 \\
\end{aligned}
$$</div>

<p>Now let&rsquo;s linearly interpolate between \( B_{0, 1}(t) \) and \( B_{1, 2}(t)
\)&hellip;</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg" id="quad2">
    <path d="M50 50 L550 150" stroke="black" fill="none" stroke-width="1"
    stroke-dasharray="1, 2" />
    <path d="M550 150 L 550 50" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p01" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p12" />
    <path d="" stroke="black" fill="none" stroke-width="0.5" class="p012" />
    <path d="" stroke="#27AE60" fill="none" stroke-width="1.5" class="pB012" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
    <g transform="translate(50, 50)" class="gB01">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(550, 150)" class="gB12">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(550, 150)" class="gB012">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="-20" y="30" font-family="KaTeX_Math">
            B<tspan baseline-shift="sub" font-size="70%">0,1,2</tspan>(<tspan class="t012">0.00</tspan>)
        </text>
    </g>
</svg></p>

<div>$$
\begin{aligned}
B_{0,1,2}(t) = (1 - t) B_{0,1}(t) + t B_{1,2}(t), 0 \le t \le 1 \\
\end{aligned}
$$</div>

<script>
(function() {
    var t = 0;
    var dt = 0.004;

    var x0 = 50;
    var y0 = 50;
    var x1 = 550;
    var y1 = 150;
    var x2 = 550;
    var y2 = 50;

    var quad = document.getElementById("quad2");
    var gB01 = quad.querySelector(".gB01");
    var gB12 = quad.querySelector(".gB12");
    var gB012 = quad.querySelector(".gB012");
    var p01 = quad.querySelector(".p01");
    var p12 = quad.querySelector(".p12");
    var p012 = quad.querySelector(".p012");
    var pB012 = quad.querySelector(".pB012");
    var t012 = quad.querySelector(".t012");

    (function next() {
        t += dt;
        if (t < 0 || t > 1) {
            dt *= -1;
            t = Math.min(1, Math.max(0, t));
        }

        var x01 = x0 + t * (x1 - x0);
        var y01 = y0 + t * (y1 - y0);

        gB01.setAttribute("transform",  "translate(" + x01 + "," + y01 + ")");
        p01.setAttribute("d", "M" + x0 + " " + y0 + " L" + x01.toFixed(2) + " " + y01.toFixed(2));

        var x12 = x1 + t * (x2 - x1);
        var y12 = y1 + t * (y2 - y1);

        gB12.setAttribute("transform",  "translate(" + x12 + "," + y12 + ")");
        p12.setAttribute("d", "M" + x1 + " " + y1 + " L" + x12.toFixed(2) + " " + y12.toFixed(2));

        var x012 = x01 + t * (x12 - x01);
        var y012 = y01 + t * (y12 - y01);

        t012.innerHTML = t.toFixed(2);

        gB012.setAttribute("transform",  "translate(" + x012 + "," + y012 + ")");

        pB012.setAttribute("d", "M" + x01.toFixed(2) + " " + y01.toFixed(2) + " " +
                                "L" + x012.toFixed(2) + " " + y012.toFixed(2));

        p012.setAttribute("d", "M" + x01.toFixed(2) + " " + y01.toFixed(2) + " " +
                               "L" + x12.toFixed(2) + " " + y12.toFixed(2));

        requestAnimationFrame(next);
    })();
})();
</script>

<p>Notice that the equation for \( B_{0, 1, 2}(t) \) looks remarkably similar to
the equations for \( B_{0, 1} \) and \( B_{1, 2} \). Let&rsquo;s see what
happens when we trace the path of \( B_{0, 1, 2}(t) \).</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet"
    xmlns="http://www.w3.org/2000/svg" id="quad3">
    <path d="M50 50 Q550 150 550 50" stroke="#2F80ED" fill="none"
    stroke-width="2" stroke-dasharray="100%" class="curve"/>
    <path d="M50 50 L550 150" stroke="black" fill="none" stroke-width="1"
    stroke-dasharray="1, 2" />
    <path d="M550 150 L 550 50" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p01" />
    <path d="" stroke="#EB5757" fill="none" stroke-width="1" class="p12" />
    <path d="" stroke="black" fill="none" stroke-width="0.5" class="p012" />
    <path d="" stroke="#27AE60" fill="none" stroke-width="1.5" class="pB012" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
    <g transform="translate(50, 50)" class="gB01">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(550, 150)" class="gB12">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(550, 150)" class="gB012">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
</svg></p>

<script>
(function() {
    var t = 0;
    var dt = 0.004;

    var x0 = 50;
    var y0 = 50;
    var x1 = 550;
    var y1 = 150;
    var x2 = 550;
    var y2 = 50;

    var quad = document.getElementById("quad3");
    var gB01 = quad.querySelector(".gB01");
    var gB12 = quad.querySelector(".gB12");
    var gB012 = quad.querySelector(".gB012");
    var p01 = quad.querySelector(".p01");
    var p12 = quad.querySelector(".p12");
    var p012 = quad.querySelector(".p012");
    var pB012 = quad.querySelector(".pB012");
    var curve = quad.querySelector(".curve");
    var curveLength = curve.getTotalLength();

    curve.setAttribute("stroke-dasharray", curveLength);

    (function next() {
        t += dt;
        if (t < 0 || t > 1) {
            dt *= -1;
            t = Math.min(1, Math.max(0, t));
        }

        var x01 = x0 + t * (x1 - x0);
        var y01 = y0 + t * (y1 - y0);

        gB01.setAttribute("transform",  "translate(" + x01 + "," + y01 + ")");
        p01.setAttribute("d", "M" + x0 + " " + y0 + " L" + x01.toFixed(2) + " " + y01.toFixed(2));

        var x12 = x1 + t * (x2 - x1);
        var y12 = y1 + t * (y2 - y1);

        gB12.setAttribute("transform",  "translate(" + x12 + "," + y12 + ")");
        p12.setAttribute("d", "M" + x1 + " " + y1 + " L" + x12.toFixed(2) + " " + y12.toFixed(2));

        var x012 = x01 + t * (x12 - x01);
        var y012 = y01 + t * (y12 - y01);

        gB012.setAttribute("transform",  "translate(" + x012 + "," + y012 +
        ")");

        pB012.setAttribute("d", "M" + x01.toFixed(2) + " " + y01.toFixed(2) + " " +
                                "L" + x012.toFixed(2) + " " + y012.toFixed(2));

        p012.setAttribute("d", "M" + x01.toFixed(2) + " " + y01.toFixed(2) + " " +
                               "L" + x12.toFixed(2) + " " + y12.toFixed(2));

        curve.setAttribute("d", "M" + x0 + " " + y0 + " " +
                                "Q" + x01 + " " + y01 + " " + x012 + " " + y012);

        requestAnimationFrame(next);
    })();
})();
</script>

<p>We get our curve!</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet"
    xmlns="http://www.w3.org/2000/svg">
    <path d="M50 50 Q550 150 550 50" stroke="black" fill="none" stroke-width="2"/>
    <path d="M50 50 L550 150" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <path d="M550 150 L 550 50" stroke="black" fill="none" stroke-width="1" stroke-dasharray="1, 2" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
</svg></p>
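
<p>Substituting the definitions of \( B_{0,1}(t) \) and \( B_{1,2}(t) \) into
\( B_{0,1,2}(t) \) and collecting terms gives the polynomial form you may have
seen elsewhere:</p>

<div>$$
B_{0,1,2}(t) = (1 - t)^2 P_0 + 2t(1 - t) P_1 + t^2 P_2, 0 \le t \le 1
$$</div>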

<h1 id="higher-order-bezier-curves">Higher Order Bezier Curves</h1>

<p>Just as we get a quadratic bezier by interpolating between two linear bezier
curves, we get a <span style="color:#9B51E0">cubic bezier curve</span> by
interpolating between two <span style="color:#2F80ED">quadratic bezier
curves</span>:</p>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet"
    xmlns="http://www.w3.org/2000/svg" id="cubic1">
    <path d="M50 50" fill="none" stroke="black" stroke-width="1" class="line" />
    <path d="M50 50" fill="none" stroke="#27AE60" stroke-width="1" class="line-prog" />
    <path d="M50 50 Q50 150 550 50" stroke="#2F80ED" fill="none"
    stroke-width="2" class="p012" />
    <path d="M50 150 Q550 50 550 150" stroke="#2F80ED" fill="none"
    stroke-width="2" class="p123" />
    <path d="M50 50 C50 150 550 50 550 150" fill="none" stroke="#9B51E0"
    stroke-width="3" class="p0123" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(50, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">3</tspan>
        </text>
    </g>
    <g transform="translate(50, 50)" class="g012">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(50, 150)" class="g123">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
    <g transform="translate(50, 50)" class="g0123">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
    </g>
</svg></p>

<script>
(function() {
    var t = 0;
    var dt = 0.004;

    function B(P) {
        function _B(P, k, n, t) {
            if (k == n) return P[n];
            var left = _B(P, k, n-1, t);
            var right = _B(P, k+1, n, t);
            return [
                (1-t) * left[0] + t * right[0],
                (1-t) * left[1] + t * right[1]
            ];
        }

        return function(t) {
            return _B(P, 0, P.length-1, t);
        }
    };

    var P = [
        [50, 50],
        [50, 150],
        [550, 50],
        [550, 150]
    ];

    // It's silly and pretty inefficient to do it like this, but *shrug*!
    var B0123 = B(P);

    var B012 = B(P.slice(0, 3));
    var B123 = B(P.slice(1, 4));

    var B01 = B(P.slice(0, 2));
    var B12 = B(P.slice(1, 3));
    var B23 = B(P.slice(2, 4));

    var cubic = document.getElementById("cubic1");
    var p012 = cubic.querySelector(".p012");
    var p123 = cubic.querySelector(".p123");
    var p0123 = cubic.querySelector(".p0123");

    var g012 = cubic.querySelector(".g012");
    var g123 = cubic.querySelector(".g123");
    var g0123 = cubic.querySelector(".g0123");

    var line = cubic.querySelector(".line");
    var lineProg = cubic.querySelector(".line-prog");

    (function next() {
        t += dt;
        if (t < 0 || t > 1) {
            dt *= -1;
            t = Math.min(1, Math.max(0, t));
        }

        var P0123 = B0123(t);

        var P012 = B012(t);
        var P123 = B123(t);

        var P01 = B01(t);
        var P12 = B12(t);
        var P23 = B23(t);

        g0123.setAttribute("transform", "translate(" + P0123[0] + "," + P0123[1] + ")");
        g012.setAttribute("transform", "translate(" + P012[0] + "," + P012[1] + ")");
        g123.setAttribute("transform", "translate(" + P123[0] + "," + P123[1] + ")");

        p012.setAttribute("d", "M" + P[0][0] + " " + P[0][1] +
                              " Q" + P01[0] + " " + P01[1] +
                               " " + P012[0] + " " + P012[1]);
        p123.setAttribute("d", "M" + P[1][0] + " " + P[1][1] +
                              " Q" + P12[0] + " " + P12[1] +
                               " " + P123[0] + " " + P123[1]);
        p0123.setAttribute("d", "M" + P[0][0] + " " + P[0][1] +
                               " C" + P01[0] + " " + P01[1] +
                                " " + P012[0] + " " + P012[1] +
                                " " + P0123[0] + " " + P0123[1]);
        line.setAttribute("d", "M" + P012[0] + " " + P012[1] +
                               " " + P123[0] + " " + P123[1]);
        lineProg.setAttribute("d", "M" + P012[0] + " " + P012[1] +
                                   " " + P0123[0] + " " + P0123[1]);

        requestAnimationFrame(next);
    })();
})();
</script>

<div>$$
\begin{aligned}
B_{0,1,2,3}(t) = (1 - t) B_{0,1,2}(t) + t B_{1,2,3}(t), 0 \le t \le 1 \\
\end{aligned}
$$</div>

<p><svg viewBox="0 0 600 200" preserveAspectRatio="xMinYMin meet"
    xmlns="http://www.w3.org/2000/svg" id="cubic1">
    <path d="M50 50 C50 150 550 50 550 150" fill="none" stroke="black"
    stroke-width="3" class="p0123" />
    <path d="M50 50 L 50 150 M550 50 L 550 150" stroke="black" fill="none"
    stroke-width="1" stroke-dasharray="1, 2" />
    <g transform="translate(50, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">0</tspan>
        </text>
    </g>
    <g transform="translate(50, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">1</tspan>
        </text>
    </g>
    <g transform="translate(550, 50)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">2</tspan>
        </text>
    </g>
    <g transform="translate(550, 150)">
        <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <text x="0" y="-12" font-family="KaTeX_Math">
            P<tspan baseline-shift="sub" font-size="70%">3</tspan>
        </text>
    </g>
</svg></p>

<p>You may have a sneaking suspicion at this point that there&rsquo;s a nice recursive
definition lurking here. And indeed there is:</p>

<div>$$
\begin{aligned}
B_{k,...,n}(t) &= (1 - t) B_{k,...,n-1}(t) + t B_{k+1,...,n}(t), 0 \le t \le 1
\\
B_{i}(t) &= P_{i}
\end{aligned}
$$</div>

<p>Or, expressed (concisely but inefficiently) in TypeScript, it might look like this:</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript"><span class="kr">type</span> <span class="nx">Point</span> <span class="o">=</span> <span class="p">[</span><span class="kt">number</span><span class="p">,</span> <span class="kt">number</span><span class="p">];</span>
<span class="kd">function</span> <span class="nx">B</span><span class="p">(</span><span class="nx">P</span>: <span class="kt">Point</span><span class="p">[],</span> <span class="nx">t</span>: <span class="kt">number</span><span class="p">)</span><span class="o">:</span> <span class="nx">Point</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="nx">P</span><span class="p">.</span><span class="nx">length</span> <span class="o">===</span> <span class="mi">1</span><span class="p">)</span> <span class="k">return</span> <span class="nx">P</span><span class="p">[</span><span class="mi">0</span><span class="p">];</span>
    <span class="kr">const</span> <span class="nx">left</span>: <span class="kt">Point</span> <span class="o">=</span> <span class="nx">B</span><span class="p">(</span><span class="nx">P</span><span class="p">.</span><span class="nx">slice</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="nx">P</span><span class="p">.</span><span class="nx">length</span> <span class="o">-</span> <span class="mi">1</span><span class="p">),</span> <span class="nx">t</span><span class="p">);</span>
    <span class="kr">const</span> <span class="nx">right</span>: <span class="kt">Point</span> <span class="o">=</span> <span class="nx">B</span><span class="p">(</span><span class="nx">P</span><span class="p">.</span><span class="nx">slice</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="nx">P</span><span class="p">.</span><span class="nx">length</span><span class="p">),</span> <span class="nx">t</span><span class="p">);</span>
    <span class="k">return</span> <span class="p">[(</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">t</span><span class="p">)</span> <span class="o">*</span> <span class="nx">left</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="nx">t</span> <span class="o">*</span> <span class="nx">right</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span>
            <span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">t</span><span class="p">)</span> <span class="o">*</span> <span class="nx">left</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">+</span> <span class="nx">t</span> <span class="o">*</span> <span class="nx">right</span><span class="p">[</span><span class="mi">1</span><span class="p">]];</span>
<span class="p">}</span>
<span class="c1">// Evaluate a cubic spline at t=0.7
</span><span class="c1"></span><span class="nx">B</span><span class="p">([[</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">],</span> <span class="p">[</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.42</span><span class="p">],</span> <span class="p">[</span><span class="mf">0.58</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">],</span> <span class="p">[</span><span class="mf">1.0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">]],</span> <span class="mf">0.7</span><span class="p">)</span>
</code></pre></div>
<h1 id="cubic-bezier-curves-in-vector-images">Cubic Bezier Curves in Vector Images</h1>

<p>As it happens, cubic bezier curves seem to strike the right balance between
simplicity and accuracy for many purposes. These are the kind of curves you&rsquo;ll
most often see in vector editing tools like <a href="https://www.figma.com">Figma</a>.</p>

<figure>
    <img src="/images/figmacubic.png" style="width: 500px" />
    <figcaption>A cubic bezier curve in <a href="https://www.figma.com">Figma</a></figcaption>
</figure>

<p>You can think of the two filled in circles <span style="color: #2EC1FF">●
</span> as \( P_0 \) and \( P_3 \), and the two diamonds <span style="color:
#2EC1FF">◇</span> as \( P_1 \) and \( P_2 \). These are the fundamental
building blocks of more complex curved vector constructions.</p>

<p>Font glyphs are specified in terms of bezier curves too: quadratic ones in TrueType (.ttf) fonts, and cubic ones in PostScript-flavored OpenType (.otf) fonts.</p>

<figure>
    <img src="/images/vectore.png" style="width: 400px"/>
    <figcaption>A lower-case "e" in <a
    href="http://www.fonts2u.com/free-serif-italic.font">Free Serif Italic</a>
    shown as a <a
    href="https://medium.com/figma-design/introducing-vector-networks-3b877d2b864f#.95e6iz9he">vector
    network</a> of cubic bezier curves</figcaption>
</figure>

<p>The Scalable Vector Graphics (.svg) file format uses bezier curves as one of its
two <a href="https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Paths#Bezier_Curves">curve primitives</a> (the other being elliptical arcs), which are used extensively in this:</p>

<figure>
    <img src="/images/ghostscripttiger.svg" style="width: 400px" />
    <figcaption>The <a
href="https://en.wikipedia.org/wiki/Talk%3AGhostscript#Origin_of_tiger.eps.3F_.28aka_.22cubic_spline_tiger.22.29">Cubic
Spline Tiger</a> in SVG format.</figcaption>
</figure>

<h1 id="cubic-bezier-curves-in-animation">Cubic Bezier Curves in Animation</h1>

<p>While bezier curves have their most obvious uses in representing spatial curves,
there&rsquo;s no reason why they can&rsquo;t be used to represent curved relationships
between other quantities. For instance, rather than relating \( x \) and \(y
\), <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/single-transition-timing-function#Keywords_for_common_timing-functions">CSS transition timing functions</a> relate a time ratio with an output
value ratio.</p>

<style>
.linear {
    transition: all 1s linear;
}

.ease {
    /* This is ease reversed */
    transition: all 1s cubic-bezier(0.75, 0.0, 0.75, 0.9);
}

.ease.end {
    transition: all 1s ease;
}

.ease-in {
    /* This is ease-in reversed */
    transition: all 1s ease-out;
}

.ease-in.end {
    transition: all 1s ease-in;
}

.ease-in-out {
    transition: all 1s ease-in-out;
}

.ease-out {
    /* This is ease-out reversed */
    transition: all 1s ease-in;
}

.ease-out.end {
    transition: all 1s ease-out;
}

.custom-ease {
    transition: all 1s cubic-bezier(0.5, 1, 0.5, 0);
}

.ball.end {
    transform: translateY(-100px);
}

.timeline {
    transition: transform 1s linear;
}

.timeline.end {
    transform: translateX(100px);
}
</style>

<figure>
<svg viewBox="0 0 600 150" preserveAspectRatio="xMinYMin meet" xmlns="http://www.w3.org/2000/svg" id="anim0">
    <g transform="translate(50, 25)">
        <circle class="ball linear" cx="0" cy="100" r="5" fill="#F2994A"
    stroke-width="0" />
    </g>
    <g transform="translate(150, 25)">
        <circle class="ball ease" cx="0" cy="100" r="5" fill="#F2994A"
    stroke-width="0" />
    </g>
    <g transform="translate(250, 25)">
        <circle class="ball ease-in" cx="0" cy="100" r="5" fill="#F2994A"
    stroke-width="0" />
    </g>
    <g transform="translate(350, 25)">
        <circle class="ball ease-in-out" cx="0" cy="100" r="5" fill="#F2994A" stroke-width="0" />
    </g>
    <g transform="translate(450, 25)">
        <circle class="ball ease-out" cx="0" cy="100" r="5" fill="#F2994A" stroke-width="0" />
    </g>
    <g transform="translate(550, 25)">
        <circle class="ball custom-ease" cx="0" cy="100" r="5" fill="#F2994A" stroke-width="0" />
    </g>
</svg>
<figcaption>Transition timing functions defined by bezier curves</figcaption>
</figure>

<p>Cubic bezier curves are one of two ways of expressing timing functions in CSS
(<a href="https://developer.mozilla.org/en-US/docs/Web/CSS/single-transition-timing-function#The_steps()_class_of_timing-functions"><code>steps()</code></a> being the other).  The <code>cubic-bezier(x1, y1, x2, y2)</code> notation
for CSS timing functions specifies the coordinates of \( P_1 \) and \( P_2
\) of a cubic bezier curve.</p>

<figure>
    <img src="/images/easing-function.svg" width="400">
    <figcaption>Diagram of <code>transition-timing-function: cubic-bezier(x1, y1, x2, y2)</code></figcaption>
</figure>
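
<p>Note that a timing function is evaluated as a <em>function</em> mapping an input
time ratio to an output ratio, not traced parametrically: given a time ratio,
we first need to find the \( t \) at which the curve&rsquo;s x-coordinate equals
that input, then read off the curve&rsquo;s y-coordinate at that \( t \). Here&rsquo;s a
rough TypeScript sketch of that evaluation using bisection (a real
implementation, like the ones in browsers, would be faster and more careful):</p>
<div class="highlight"><pre><code class="language-typescript" data-lang="typescript">// Evaluate cubic-bezier(x1, y1, x2, y2) at a time ratio x in [0, 1].
// P0 = (0, 0) and P3 = (1, 1) are fixed by CSS.
function cubicBezierTiming(x1: number, y1: number,
                           x2: number, y2: number, x: number): number {
    // One coordinate of a cubic bezier whose endpoint coordinates are 0 and 1.
    function bezier(c1: number, c2: number, t: number): number {
        const s = 1 - t;
        return 3 * s * s * t * c1 + 3 * s * t * t * c2 + t * t * t;
    }

    // Solve bezier(x1, x2, t) = x for t by bisection. This works because CSS
    // requires 0 &lt;= x1, x2 &lt;= 1, which makes x(t) non-decreasing in t.
    let lo = 0, hi = 1;
    for (let i = 0; i &lt; 50; i++) {
        const mid = (lo + hi) / 2;
        if (bezier(x1, x2, mid) &lt; x) lo = mid; else hi = mid;
    }
    return bezier(y1, y2, (lo + hi) / 2);
}

cubicBezierTiming(0.42, 0, 0.58, 1, 0.5); // ease-in-out, halfway through: 0.5
</code></pre></div>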

<p>Let&rsquo;s pretend we&rsquo;re trying to animate an orange ball moving. In all of these
diagrams, the <span style="color: #EB5757">red lines representing time</span>
move at a constant speed.</p>

<p><svg viewBox="0 0 600 400" preserveAspectRatio="xMinYMin meet"
    xmlns="http://www.w3.org/2000/svg" id="anim1">
    <g transform="translate(25, 30)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            linear
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 L 100 0" fill="none" stroke-width="2" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(0, 100)">
            <text x="10" y="0" font-size="70%" alignment-baseline="middle">
                (0.00, 0.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(100, 0)">
            <text x="-10" y="0" font-size="70%" text-anchor="end"
                alignment-baseline="middle">
                (1.00, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball linear">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
    <g transform="translate(225, 30)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            ease
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 C25  90 25 0 100 0" fill="none" stroke-width="2" stroke="black" />
        <path d="M0 100 L25  90 M25 0 L100 0" fill="none" stroke-dasharray="1, 2" stroke-width="1" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(25, 90)">
            <text x="10" y="0" font-size="70%" alignment-baseline="middle">
                (0.25, 0.10)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(25, 0)">
            <text x="-10" y="0" font-size="70%" text-anchor="end"
                alignment-baseline="middle">
                (0.25, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball ease">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
    <g transform="translate(425, 30)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            ease-in
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 C42 100 100 0 100 0" fill="none" stroke-width="2" stroke="black" />
        <path d="M0 100 L42 100 M100 0 L100 0" fill="none" stroke-dasharray="1,
        2" stroke-width="1" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(42, 100)">
            <text x="10" y="0" font-size="70%" alignment-baseline="middle">
                (0.42, 0.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(100, 0)">
            <text x="-10" y="0" font-size="70%" text-anchor="end"
                alignment-baseline="middle">
                (1.00, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball ease-in">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
    <g transform="translate(25, 225)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            ease-out
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 C0 100 58 0 100 0" fill="none" stroke-width="2" stroke="black" /> <path d="M0 100 L0 100 M58 0 L100 0" fill="none" stroke-dasharray="1, 2"
        stroke-width="1" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(0, 100)">
            <text x="10" y="0" font-size="70%" alignment-baseline="middle">
                (0.00, 0.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(58, 0)">
            <text x="-10" y="0" font-size="70%" text-anchor="end"
                alignment-baseline="middle">
                (0.58, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball ease-out">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
    <g transform="translate(225, 225)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            ease-in-out
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 C42 100 58 0 100 0" fill="none" stroke-width="2" stroke="black" />
        <path d="M0 100 L42 100 M58 0 L100 0" fill="none" stroke-dasharray="1, 2" stroke-width="1" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(42, 100)">
            <text x="10" y="0" font-size="70%" alignment-baseline="middle">
                (0.42, 0.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(58, 0)">
            <text x="-10" y="0" font-size="70%" text-anchor="end"
                alignment-baseline="middle">
                (0.58, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball ease-in-out">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
    <g transform="translate(425, 225)">
        <text x="50" y="-16" font-size="70%" text-anchor="middle">
            (custom)
        </text>
        <g class="timeline">
            <path d="M0 0 L0 100" fill="none" stroke-width="1" stroke-dasharray="1, 2" stroke="#EB5757" />
        </g>
        <path d="M0 100 C50 0 50 100 100 0" fill="none" stroke-width="2" stroke="black" />
        <path d="M0 100 L50 0 M50 100 L100 0" fill="none" stroke-dasharray="1, 2" stroke-width="1" stroke="black" />
        <circle cx="0" cy="100" r="4" fill="white" stroke="black" stroke-width="2" />
        <g transform="translate(50, 0)">
            <text x="-10" y="0" font-size="70%" alignment-baseline="middle" text-anchor="end">
                (0.50, 1.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <g transform="translate(50, 100)">
            <text x="10" y="0" font-size="70%"
                alignment-baseline="middle">
                (0.50, 0.00)
            </text>
            <circle cx="0" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        </g>
        <circle cx="100" cy="0" r="4" fill="white" stroke="black" stroke-width="2" />
        <g class="ball custom-ease">
            <path d="M130 100 L 0 100" stroke-width="1" stroke-dasharray="1, 2" stroke="#F2994A" />
            <circle cx="130" cy="100" r="5" fill="#F2994A" stroke-width="0" />
        </g>
    </g>
</svg></p>

<script>
(function() {
    var nextVal = 0;
    (function next() {
        Array.prototype.forEach.call(document.querySelectorAll(".ball"),
                                     function(ball) {
            if (nextVal == 0) {
                ball.classList.remove("end");
            } else {
                ball.classList.add("end");
            }
        });
        Array.prototype.forEach.call(document.querySelectorAll(".timeline"),
                                     function(timeline) {
            if (nextVal == 0) {
                timeline.classList.remove("end");
            } else {
                timeline.classList.add("end");
            }
        });
        nextVal = nextVal == 100 ? 0 : 100;
        setTimeout(next, 1000);
    })();
})();
</script>

<h1 id="why-bezier-curves">Why Bezier Curves?</h1>

<p>Bezier curves are a beautiful abstraction for describing curves. The most
commonly used form, cubic bezier curves, reduce the problem of describing and
storing a curve down to storing the coordinates of 4 control points.</p>

<p>Beyond the efficiency benefits, the effect of moving the 4 control points on the
curve shape is intuitive, making them suitable for direct manipulation editors.</p>

<p>Since 2 of the points specify the endpoints of the curve, composing many bezier
curves into more complex structures with precision becomes easy. The exact
specification of endpoints is also what makes it so convenient in the
animation case: the only sensible value of the easing function at \( t = 0\%
\) is the initial value, and the only sensible value at \( t = 100\% \) is
the final value.</p>

<p>A less obvious benefit is that the line from \( P_0 \) to \( P_1 \)
specifies the tangent of the curve leaving \( P_0 \). This means if you have
two joined curves with mirrored control points, the slope at the join point is
guaranteed to be the same on either side of the join.</p>
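
<p>Concretely, differentiating the cubic bezier at its endpoints shows why: the
tangent at each end depends only on the adjacent control point.</p>

<div>$$
\begin{aligned}
B'_{0,1,2,3}(0) &= 3 (P_1 - P_0) \\
B'_{0,1,2,3}(1) &= 3 (P_3 - P_2)
\end{aligned}
$$</div>

<p>So if the last control point of one curve and the first control point of the
next are mirror images across their shared endpoint, the two tangent vectors
(and hence the slopes) agree at the join.</p>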

<figure>
    <img src="/images/jointbezier.png" style="width: 500px" />
    <figcaption>Left: Two joined cubic bezier curves with mirrored control
    points.  Right: control points not mirrored.</figcaption>
</figure>

<p>A major benefit of a mathematical construct like bezier curves is the ability to
leverage decades of mathematical research to solve most problems you might run
into, completely agnostic to the rest of your problem domain.</p>

<p>For instance, to make this post, I had to learn how to split a bezier curve at a
given value of \( t \) in order to animate the curves above. I was quickly
able to find a well written article on the matter: <a href="https://pomax.github.io/bezierinfo/#splitting">A Primer on Bézier Curves:
Splitting Curves</a>.</p>

<h1 id="resources-and-further-reading">Resources and Further Reading</h1>

<ul>
<li><a href="https://pomax.github.io/bezierinfo">A Primer on Bézier Curves</a> in addition to having a description of using
deCasteljau&rsquo;s algorithm to draw and split curves, this free online book seems to
be a pretty comprehensive intro.</li>
<li><a href="https://en.wikipedia.org/wiki/B%C3%A9zier_curve">Bézier curve on Wikipedia</a> shows many different mathematical formulas of
bezier curves beyond the recursive definition shown here. It also contains the
original animations that made bezier curves seem so evidently elegant to me.</li>
</ul>

<p>Also a shoutout to Dudley Storey for his article <a href="http://thenewcode.com/744/Make-SVG-Responsive">Make SVG Responsive</a>, which
allowed all of the inline SVG in this article to work nicely on mobile.</p>

<script
src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.js"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/contrib/auto-render.min.js"></script>

<script>
renderMathInElement(document.body);
</script>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Delete and Heal for Vector Networks]]></title>
    <link href="http://jamie-wong.com/post/delete-and-heal-for-vector-networks/"/>
    <updated>2016-11-17T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/delete-and-heal-for-vector-networks/</id>
    <content type="html"><![CDATA[ 

 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Bending the Dynamic vs Static Language Tradeoff]]></title>
    <link href="http://jamie-wong.com/post/bending-the-pl-curve"/>
    <updated>2016-10-26T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/bending-the-pl-curve</id>
    <content type="html"><![CDATA[ 

<p>Here are some things I think people want out of a programming language.</p>

<ol>
<li><strong>Iteration Speed</strong>: A sub-second edit-run cycle</li>
<li><strong>Correctness Checking</strong>: A compiler that can tell me that my code is
probably wrong without me having to run every single codepath</li>
<li><strong>Concise syntax</strong>: Express intent fluidly without a lot of boilerplate</li>
<li><strong>Editing support</strong>: Autocompletion, go to definition, go to usage,
refactoring support that always works</li>
<li><strong>Debugging support</strong>: Errors that are easy to debug</li>
</ol>

<p>First, I want to talk about the tradeoffs that two historical camps make here,
then about how these tradeoffs are being bent, and finally how people bend them
the opposite way and put themselves in sad, unnecessary hell.</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#the-dynamically-typed-camp">The Dynamically Typed Camp</a>
<ul>
<li><a href="#iteration-speed">Iteration Speed</a></li>
<li><a href="#correctness-checking">Correctness Checking</a></li>
<li><a href="#concise-syntax">Concise Syntax</a></li>
<li><a href="#editing-support">Editing Support</a></li>
<li><a href="#debugging-support">Debugging Support</a></li>
</ul></li>
<li><a href="#the-statically-typed-camp">The Statically Typed Camp</a>
<ul>
<li><a href="#iteration-speed-1">Iteration Speed</a></li>
<li><a href="#correctness-checking-1">Correctness Checking</a></li>
<li><a href="#editing-support-1">Editing Support</a></li>
<li><a href="#debugging-support-1">Debugging Support</a></li>
</ul></li>
<li><a href="#stuck">Stuck</a></li>
<li><a href="#the-best-of-both-worlds">The Best of Both Worlds</a>
<ul>
<li><a href="#type-inference">Type Inference</a></li>
<li><a href="#decoupling-type-checking-from-code-generation">Decoupling Type Checking from Code Generation</a></li>
<li><a href="#linting">Linting</a></li>
<li><a href="#faster-compilers">Faster Compilers</a></li>
<li><a href="#better-compiler-error-messages">Better Compiler Error Messages</a></li>
</ul></li>
<li><a href="#all-the-downside">All the Downside</a>
<ul>
<li><a href="#runtime-failures-in-a-static-language">Runtime Failures in a Static Language</a></li>
<li><a href="#slow-transpiling-bundling-in-a-dynamic-language">Slow Transpiling/Bundling in a Dynamic Language</a></li>
</ul></li>
<li><a href="#closing-thoughts">Closing thoughts</a></li>
</ul>
</nav></p>

<h1 id="the-dynamically-typed-camp">The Dynamically Typed Camp</h1>

<p>The dynamically typed camp of languages is where Python, JavaScript, PHP, Ruby,
and Scheme live. This is where I&rsquo;ve spent most of my professional career,
including nearly all of the time I spent at <a href="https://www.khanacademy.org">Khan Academy</a>.</p>

<h2 id="iteration-speed">Iteration Speed</h2>

<p>Dynamically typed languages do great here. You edit a file, re-run your program
(possibly it automatically restarts itself), and you&rsquo;re probably back in
business in less than a second. This is absolutely wonderful for iterating on
ideas quickly and especially for doing things like pixel pushing in UIs.</p>

<h2 id="correctness-checking">Correctness Checking</h2>

<p>This is where dynamically typed languages fall flat on their faces. Let&rsquo;s say
you have a whole bunch of classes with a method called <code>.render(title)</code>, and
they all currently take one argument. Now you want to change <em>one</em> of those
functions to look like <code>.render(title, description)</code>. Someone sends you a code
review with this in it, and you ask if they updated all the right places.  They
respond &ldquo;I hope so&rdquo;, and if you have a 100KLOC codebase with minimal test
coverage, then that&rsquo;s pretty much as good as you&rsquo;re going to get.</p>

<h2 id="concise-syntax">Concise Syntax</h2>

<p>Dynamic languages are generally pretty solid here. With no type definitions,
there&rsquo;s just generally less to type. Comparing opposite extremes makes the
difference pretty clear.</p>
<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># Python</span>
<span class="n">a</span> <span class="o">=</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">]</span>
</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Old versions of Java
</span><span class="c1"></span><span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="n">list</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;();</span>
<span class="n">list</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">1</span><span class="o">);</span>
<span class="n">list</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">2</span><span class="o">);</span>
<span class="n">list</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">3</span><span class="o">);</span>
</code></pre></div>
<h2 id="editing-support">Editing Support</h2>

<p>For the same reason as correctness checking, refactorings that get more
complex than a global string search and replace start to seriously suck in
dynamic languages. If you want to delete a method with a common name, or get
autocomplete on an object passed into a function, you might be shit out of luck.</p>

<p>You can still get <em>useful</em> autocomplete, but it&rsquo;ll be at least a little crippled
because of the fundamental ambiguity, in dynamically typed languages, of what a
line of code does until it&rsquo;s actually run.</p>

<h2 id="debugging-support">Debugging Support</h2>

<p>The ability to drop down into a REPL with the full power of the language
whenever something crashes is actually one of the major reasons I&rsquo;ve been so
happy in the dynamic camp for so long. In Python this looks like
<code>pdb.set_trace()</code>, in JavaScript, like <code>debugger</code>, in Ruby, like <code>binding.pry</code>.
In the middle of your breakpoint, you can define new functions, invoke arbitrary
functions, write data to files &ndash; whatever you want.</p>

<h1 id="the-statically-typed-camp">The Statically Typed Camp</h1>

<p>Industry staples like C++, Java, and Objective-C live here with their functional
comrades Haskell and OCaml, plus some new company of Go, Swift, Scala, Rust, and
Elm. I&rsquo;m living in this land for the foreseeable future at <a href="https://www.figma.com/">Figma</a>, working in C++
and TypeScript.</p>

<h2 id="iteration-speed-1">Iteration Speed</h2>

<p>This is arguably the biggest downside of statically typed languages. Type
checking, as it turns out, is frequently slow. And since, in most of these
languages, type resolution is a prerequisite to code generation, slow type
checking means slow compiling, and slow compiling means slow iteration.</p>

<p>While I was an intern at Facebook, I needed to make some changes to WebKit. The
compile time on my MacBook was ~15 minutes for every change. Suffice it to say,
this was not a fun experience. On the upside, I did read most of <a href="https://git-scm.com/book/en/v2">Pro Git</a>
while I was waiting for Xcode to build.</p>

<h2 id="correctness-checking-1">Correctness Checking</h2>

<p>If your <code>Post</code> and <code>Picture</code> classes both have a <code>.render</code>, and you want to
change the signature on <code>Post</code> but not on <code>Picture</code>, you&rsquo;ve got no troubles in
static land.  An IDE will make this as easy as right clicking and &ldquo;Change
Signature&rdquo;. And if you do decide to do it manually because you need to pick
that second argument&rsquo;s value at all the new call-sites, no
problem &ndash; your compiler will quite happily tell you if you done goofed or not.</p>
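<p>As a contrived TypeScript sketch of that scenario (the <code>Post</code> class and
call site here are invented for illustration), once <code>render</code> grows a second
parameter, every stale call site fails to compile:</p>
<pre><code>class Post {
  render(title: string, description: string): string {
    return `${title}: ${description}`;
  }
}

// error TS2554: Expected 2 arguments, but got 1.
new Post().render("My title");
</code></pre>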

<p>The level of safety you get here varies wildly by language. Most notably, most
compilers don&rsquo;t work too hard to figure out whether a pointer is <code>null</code> before
telling you your code is A-OK.</p>

<p>In C++, the compiler will quite happily let you do this:</p>
<div class="highlight"><pre><code class="language-c++" data-lang="c++"><span class="n">User</span><span class="o">*</span> <span class="n">a</span> <span class="o">=</span> <span class="k">nullptr</span><span class="p">;</span>
<span class="n">a</span><span class="o">-&gt;</span><span class="n">setName</span><span class="p">(</span><span class="s">&#34;Gertrude&#34;</span><span class="p">)</span>
</code></pre></div>
<p>Haskell and Scala do their best to dodge this problem by not letting you have
<code>null</code>, instead representing optional fields explicitly with a
<code>Maybe User</code>/<code>Option[User]</code>, where it forces you to deal with the fact that it
might be missing, and not just assume it&rsquo;s there.</p>
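<p>TypeScript&rsquo;s <code>--strictNullChecks</code> mode (which comes up again in the closing
thoughts) gets at the same idea by making nullability part of the type. A minimal
sketch, with an invented <code>User</code> type:</p>
<pre><code>type User = { name: string };

function greet(user: User | null): string {
  // Reading user.name directly here would be a compile error:
  // "Object is possibly 'null'."
  if (user === null) {
    return "Hello, stranger!";
  }
  // The check above narrows user to User, so this is fine.
  return `Hello, ${user.name}!`;
}
</code></pre>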

<h2 id="editing-support-1">Editing Support</h2>

<p>Statically typed languages kill it here. Since, by definition, the type of every
variable must be known without needing to execute the code, your editor can be
quite confident which operations are valid on which variables, and helpfully
autocomplete them. It can also facilitate things like field renaming, automatic
documentation lookup, consistently working go-to definition, and go-to usages.</p>

<h2 id="debugging-support-1">Debugging Support</h2>

<p>My experience varies here, but for the most part I&rsquo;ve been displeased by my
debugging experiences in statically typed languages. While gdb and friends will
let you evaluate certain expressions, you lose the ability to do arbitrary
manipulations like defining debugging helper functions, or easily writing
function invocations on anything templated.</p>

<p>A particularly nasty class of problem, where you don&rsquo;t get any interactive
console at all to debug, is complex compile errors. In C++, clang improved this
dramatically, but consider code like this:</p>
<div class="highlight"><pre><code class="language-c++" data-lang="c++"><span class="cp">#include</span> <span class="cpf">&lt;vector&gt;</span><span class="cp">
</span><span class="cp">#include</span> <span class="cpf">&lt;algorithm&gt;</span><span class="cp">
</span><span class="cp"></span><span class="kt">int</span> <span class="nf">main</span><span class="p">()</span>
<span class="p">{</span>
    <span class="kt">int</span> <span class="n">a</span><span class="p">;</span>
    <span class="n">std</span><span class="o">::</span><span class="n">vector</span><span class="o">&lt;</span> <span class="n">std</span><span class="o">::</span><span class="n">vector</span> <span class="o">&lt;</span><span class="kt">int</span><span class="o">&gt;</span> <span class="o">&gt;</span> <span class="n">v</span><span class="p">;</span>
    <span class="n">std</span><span class="o">::</span><span class="n">vector</span><span class="o">&lt;</span> <span class="n">std</span><span class="o">::</span><span class="n">vector</span> <span class="o">&lt;</span><span class="kt">int</span><span class="o">&gt;</span> <span class="o">&gt;::</span><span class="n">const_iterator</span> <span class="n">it</span> <span class="o">=</span> <span class="n">std</span><span class="o">::</span><span class="n">find</span><span class="p">(</span> <span class="n">v</span><span class="p">.</span><span class="n">begin</span><span class="p">(),</span> <span class="n">v</span><span class="p">.</span><span class="n">end</span><span class="p">(),</span> <span class="n">a</span> <span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>gcc used to output &gt; 15000 characters of errors (<a href="http://codegolf.stackexchange.com/questions/1956/generate-the-longest-error-message-in-c">see the rest here</a>), which
starts like this:</p>
<pre><code>/usr/include/c++/4.6/bits/stl_algo.h: In function ‘_RandomAccessIterator
std::__find(_RandomAccessIterator, _RandomAccessIterator, const _Tp&amp;,
std::random_access_iterator_tag) [with _RandomAccessIterator =
__gnu_cxx::__normal_iterator*, std::vector &gt; &gt;, _Tp = int]’:
/usr/include/c++/4.6/bits/stl_algo.h:4403:45:   instantiated from ‘_IIter std::find(_IIter, _IIter, const _Tp&amp;) [with _IIter = __gnu_cxx::__normal_iterator*, std::vector &gt; &gt;, _Tp = int]’
error_code.cpp:8:89:   instantiated from here
/usr/include/c++/4.6/bits/stl_algo.h:162:4: error: no match for ‘operator==’ in ‘__first.__gnu_cxx::__normal_iterator::operator* [with _Iterator = std::vector*, _Container = std::vector &gt;, __gnu_cxx::__normal_iterator::reference = std::vector&amp;]() == __val’
/usr/include/c++/4.6/bits/stl_algo.h:162:4: note: candidates are:
</code></pre>
<p>So that sucks, and I don&rsquo;t even have anything I can play with to introspect what
the issue is.</p>

<h1 id="stuck">Stuck</h1>

<p>So for an era of programming, it felt like you were kind of stuck between two
worlds, each of which had pretty crappy tradeoffs. Then the two camps stopped
yelling across the river at each other and started to recognize that the other
team was maybe onto something. You see a similar middle ground emerging in
the Object Oriented vs. Functional holy war, with languages like Scala and Swift
taking an OO-syntax, functional-thinking approach, and JavaScript being kind of
accidentally multi-paradigm.</p>

<p>But back to types. Let&rsquo;s talk about how people are trying to have their cake and
eat it too.</p>

<h1 id="the-best-of-both-worlds">The Best of Both Worlds</h1>

<h2 id="type-inference">Type Inference</h2>

<p>Something about the following line of Java just feels insulting.</p>
<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="n">list</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;();</span>
</code></pre></div>
<p>Why do I need to specify the type information twice? This feels super dumb. More
generally, if I do:</p>
<pre><code>a = somefunction()
</code></pre>
<p>And the compiler knows the return type of <code>somefunction</code>, I, as the programmer,
shouldn&rsquo;t be forced to tell the computer information it already knows.</p>

<p>The more complete version of this idea is type inference. The first time I
saw it was in Haskell, as concisely explained in <a href="http://learnyouahaskell.com/types-and-typeclasses">Learn You a Haskell for Great
Good!: Types and Typeclasses</a>; it&rsquo;s explained in a bit more detail in
<a href="https://www.typescriptlang.org/docs/handbook/type-inference.html">TypeScript&rsquo;s Type Inference Documentation</a>.</p>

<p>So we get concise syntax without sacrificing type information.</p>

<p>It&rsquo;s also now made its way into C++ via the <a href="http://en.cppreference.com/w/cpp/language/auto">C++11 auto keyword</a>, and is a
feature of most modern statically typed languages like Scala, Swift, Rust, and
Go.</p>
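<p>To make the payoff concrete, here&rsquo;s a small TypeScript sketch: none of these
declarations carries a type annotation, but all of them are fully typed through
inference.</p>
<pre><code>const list = [1, 2, 3];               // inferred as number[]
const doubled = list.map(n =&gt; n * 2); // n and the result are numbers

// error TS2345: Argument of type 'string' is not
// assignable to parameter of type 'number'.
doubled.push("four");
</code></pre>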

<h2 id="decoupling-type-checking-from-code-generation">Decoupling Type Checking from Code Generation</h2>

<p>If you define your language very carefully, you can make the compiler output not
dependent on the types (i.e. ignore the type information completely), and then
run type checking completely separately. The easiest way of defining a language
like this is to start with a dynamically typed language and start adding type
annotations. Facebook&rsquo;s <a href="https://flowtype.org/">Flow</a> does this by adding type annotations to
JavaScript.</p>

<p>For instance, a bit of Flow annotated JavaScript might look like this:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">// @flow
</span><span class="c1"></span><span class="kd">function</span> <span class="nx">bar</span><span class="p">(</span><span class="nx">x</span><span class="p">)</span><span class="o">:</span> <span class="nx">string</span> <span class="p">{</span>
  <span class="k">return</span> <span class="nx">x</span><span class="p">.</span><span class="nx">length</span><span class="p">;</span>
<span class="p">}</span>
<span class="nx">bar</span><span class="p">(</span><span class="s1">&#39;Hello, world!&#39;</span><span class="p">);</span>
</code></pre></div>
<p>Compilation here is incredibly fast, because all it does is strip the type
annotations to produce this:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="kd">function</span> <span class="nx">bar</span><span class="p">(</span><span class="nx">x</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">return</span> <span class="nx">x</span><span class="p">.</span><span class="nx">length</span><span class="p">;</span>
<span class="p">}</span>
<span class="nx">bar</span><span class="p">(</span><span class="s1">&#39;Hello, world!&#39;</span><span class="p">);</span>
</code></pre></div>
<p>The type checker, meanwhile, runs in the background, or possibly only on demand.
With this type information, you get better correctness guarantees and much
better IDE support &ndash; all without the usual increase in iteration time that
comes with a blocking type checker.</p>

<p>Facebook took a similar approach to type annotating PHP with its language
<a href="http://hacklang.org/">Hack</a>. Python 3.5 introduced gradual typing too, as described in the
<a href="https://docs.python.org/3/library/typing.html"><code>typing</code> module</a> documentation.</p>

<p>An interesting side effect of having a compile target that closely resembles the
source is that you get all the benefits of the interactive debugging mentioned
before. Microsoft&rsquo;s <a href="https://www.typescriptlang.org/">TypeScript</a> takes a very similar approach to Flow,
except that you can set a flag to restrict type checking to individual files,
under the assumption that all imports are correctly typed, which speeds up type
checking considerably.</p>

<h2 id="linting">Linting</h2>

<p>Linting is a form of static analysis that happens outside of a compiler, and
typically on a dynamic language. One of the earliest I&rsquo;d heard of was
Douglas Crockford&rsquo;s <a href="http://www.jslint.com/">JSLint</a>, which, despite the dynamic nature of
your program, can still confidently point out mistakes. There are
countless tools that do this for various languages, and I go into more depth
about the value of them in <a href="http://jamie-wong.com/2015/02/02/linters-as-invariants/">Linters as Invariants</a>. This gives you a small
subset of the correctness guarantees that you get from a statically typed
language, like the guarantee that you aren&rsquo;t using a variable that isn&rsquo;t
declared anywhere, but typically isn&rsquo;t very helpful for inter-file analysis.</p>

<p>Good linters, like good IDE support, will allow for automatic fixing of errors,
like with the <a href="http://eslint.org/docs/user-guide/command-line-interface#fix">ESLint <code>--fix</code> flag</a>.</p>

<h2 id="faster-compilers">Faster Compilers</h2>

<p>This one is pretty self-explanatory. If your compiler is super fast, the
edit-run iteration time isn&rsquo;t an issue. Boom.</p>

<p>To do this properly, you need to design the language carefully with that as a
design goal. Go did this.</p>

<blockquote>
<p>Go is an attempt to combine the ease of programming of an interpreted,
dynamically typed language with the efficiency and safety of a statically typed,
compiled language. It also aims to be modern, with support for networked and
multicore computing. Finally, working with Go is intended to be fast: it should
take at most a few seconds to build a large executable on a single computer.</p>

<p>&ndash; <a href="https://golang.org/doc/faq#creating_a_new_language">Go FAQ</a></p>
</blockquote>

<h2 id="better-compiler-error-messages">Better Compiler Error Messages</h2>

<p>There have been numerous attempts to make debugging compilation errors a
non-issue by having sensible human-readable error messages, notably in Elm,
which Evan Czaplicki describes in his post <a href="http://elm-lang.org/blog/compiler-errors-for-humans">Compiler Errors for Humans</a>. The
cool thing about this is that it&rsquo;s seeing adoption by slightly more mainstream
languages like Rust, as Jonathan Turner explains in <a href="https://blog.rust-lang.org/2016/08/10/Shape-of-errors-to-come.html">Shape of errors to
come</a>. Much earlier, improvements to C++ error messages were a selling point
of clang over gcc, as described in <a href="http://clang.llvm.org/diagnostics.html">Expressive Diagnostics</a>. But Elm and Rust
take it a few steps further.</p>

<p>So if you have a smart enough compiler, ideally one that can suggest how to fix
your problem, then some of the frustration of debugging those issues melts
away.</p>

<h1 id="all-the-downside">All the Downside</h1>

<p>Conversely, some very old techniques in the static world, and some very new
techniques in the dynamic world put you in a world of pain by taking downsides
from both camps.</p>

<h2 id="runtime-failures-in-a-static-language">Runtime Failures in a Static Language</h2>

<p>One of my least pleasant recurring memories from interning at Square was
waiting a few minutes for a Java build, only to have it crash on boot
with a runtime error. This happened because of fancy dynamic runtime
dependency injection with <a href="https://github.com/google/guice">Guice</a>, which is why Square later wrote a library
that does injection at compile time, <a href="http://square.github.io/dagger/">Dagger</a>, to fix the problem.</p>

<p>The more general Bad Situation to avoid here is anything that passes type
checking, causes runtime crashes, and requires a recompile to fix.</p>

<p>Examples of this include things like:</p>

<ul>
<li>using runtime dependency injection like Guice</li>
<li>using raw pointers instead of <a href="http://stackoverflow.com/questions/106508/what-is-a-smart-pointer-and-when-should-i-use-one">smart pointers</a> in C++.</li>
<li>liberal use of <code>void*</code> in C/C++ or using the <code>Object</code> type in Java then doing
run-time downcasts all over the place</li>
</ul>

<p>Now you&rsquo;re waiting for 15s for compilation, only to find that your code crashes
on boot repeatedly.</p>

<h2 id="slow-transpiling-bundling-in-a-dynamic-language">Slow Transpiling/Bundling in a Dynamic Language</h2>

<p>Just as the example above takes a static language that should be safe and makes
it unsafe, you can take a dynamic language and make it slow to iterate on! The
most common culprit of this is doing complex transpilation for a lot of code,
and doing that on every code change.</p>

<p>&ldquo;Transpiling&rdquo; started gaining momentum in the web development world when
compile-to-css languages like <a href="http://lesscss.org/">Less</a> and <a href="http://sass-lang.com/">Sass</a>, and compile-to-js
languages like <a href="http://coffeescript.org/">CoffeeScript</a> rolled around, but really blew up when
<a href="https://facebook.github.io/react/">React</a>, <a href="https://webpack.github.io/">webpack</a> and <a href="https://babeljs.io/">babel</a> started becoming a trio of choice.</p>

<p>The idea of using more expressive languages than CSS and JavaScript to write
safer, more readable code is wonderful, but if you&rsquo;re not careful, you&rsquo;ve now
managed to inherit the increased edit-run cycle time of a statically typed
language without inheriting any of the correctness guarantees.</p>

<p>Congratulations, you now have an unresponsive, unmaintainable mess.</p>

<h1 id="closing-thoughts">Closing thoughts</h1>

<p>Overall, I&rsquo;m pretty happy with the direction that things are going in PL world.
I hope much of the near future will be built on TypeScript (with
<a href="https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-0.html"><code>--strictNullChecks</code></a>) instead of JavaScript on the front end, Rust instead
of C++, Scala instead of Java, Go instead of Node.js, and Swift instead of
Objective-C.</p>

<p>I haven&rsquo;t had the chance to play much with Go, Rust, or Swift, but things sound
kinda rosy.</p>

<p>If you like thinking about language tradeoffs and want stories more informed by
experience, you should read through Steve Yegge&rsquo;s <a href="https://sites.google.com/site/steveyegge2/is-weak-typing-strong-enough">Is Weak Typing Strong
Enough?</a>.</p>

<p><strong>EDIT</strong>: My fellow uWaterloo Software Engineering 2014 classmate, <a href="http://mhyee.com/">Ming-Ho
Yee</a>,
is a PhD student in programming language design, so he naturally had some things
to say about this post. You can read his thoughts in <a href="http://mhyee.com/blog/pl_blog_response.html">his response</a>.</p>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Fluid Simulation (with WebGL demo)]]></title>
    <link href="http://jamie-wong.com/2016/08/05/webgl-fluid-simulation/"/>
    <updated>2016-08-05T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/2016/08/05/webgl-fluid-simulation/</id>
    <content type="html"><![CDATA[ 

<p><link rel="stylesheet" 
href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.css"></p>

<figure>
<canvas id="demo" width="600" height="600"></canvas>
<figcaption>Click and drag to change the fluid flow. Double click to 
reset.</figcaption>
</figure>

<p><em>Note: The demos in this post rely on WebGL features that might not be
implemented in mobile browsers.</em></p>

<p>About a year and a half ago, I had a passing interest in trying to figure out
how to make a fluid simulation. At the time, it felt just a bit out of my reach,
requiring knowledge of shaders, vector calculus, and numerical computation that
were all just a little bit past my grasp. Back then, I was working through the
<a href="https://www.cs.ubc.ca/~rbridson/fluidsimulation/fluids_notes.pdf">Fluid Simulation Course Notes from SIGGRAPH 2007</a>, and was struggling with
the math. Now armed with a bit more knowledge and a lot more time, and with the
help of other less dense resources like <a href="http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch38.html">GPU Gems Chapter 38.  Fast Fluid
Dynamics Simulation on the GPU</a>, I was finally able to figure out enough to
get something working. I am still a beginner at simulations like this, and I&rsquo;m
going to brazenly ignore things like numerical stability, but hopefully I can
help leapfrog you past a few places I got stuck.</p>

<p>We&rsquo;re going to work with the simplest 2D fluid simulation, where the entire area
is full of fluid, and we&rsquo;re going to ignore viscosity.</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#the-velocity-field">The Velocity Field</a></li>
<li><a href="#advection">Advection</a></li>
<li><a href="#advecting-the-velocity-field">Advecting the Velocity Field</a></li>
<li><a href="#divergent-fields">Divergent Fields</a></li>
<li><a href="#navier-stokes">Navier-Stokes</a></li>
<li><a href="#solving-for-pressure">Solving for Pressure</a></li>
<li><a href="#iteratively-solving-the-pressure-equation">Iteratively Solving the Pressure Equation</a></li>
<li><a href="#all-together-now">All Together Now!</a></li>
<li><a href="#implementation">Implementation</a></li>
<li><a href="#references">References</a></li>
</ul>
</nav></p>

<h1 id="the-velocity-field">The Velocity Field</h1>

<p>As compared to a rigid, unrotating solid, where every bit of the thing has to be
moving in the same direction at the same speed, each bit of a fluid might be
moving differently. One way to model this is to use a vector field representing
velocity. For any given \( (x, y) \) coordinate, this field will tell you the
velocity of the fluid at that point.</p>

<p>$$
\vec u(x, y) = (u_x, u_y)
$$</p>

<p>A nice way to get an intuition about what a given field looks like is to sample
the function in a grid of points, then draw arrows starting at each grid point
whose size and orientation are dictated by the value of the function at that
point.
For the purposes of this post, we&rsquo;re always going to be working over the domain
\( x \in [-1, 1] \), \( y \in [-1, 1] \).</p>

<p>For instance, here&rsquo;s a very simple field \( \vec u(x, y) = (1, 0) \)
representing everything moving at a constant speed to the right.</p>

<p><img src="/images/16-08-01/vecfield1.png"></p>

<p>And here&rsquo;s a more interesting one \( \vec u(x, y) = (x, y) \) where things
move away from the origin, increasing in speed the farther away from the origin
they are.</p>

<p><img src="/images/16-08-01/vecfield2.png"></p>

<p>We&rsquo;re going to play with this one,
\( \vec u(x, y) = \left( \sin (2 \pi y), \sin (2 \pi x) \right) \), since it
creates some interesting visual results once we start making the fluid move
accordingly.</p>

<p><img src="/images/16-08-01/vecfield3.png"></p>

<p>For a more thorough introduction to vector fields, check out the <a href="https://www.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/visualizing-vector-valued-functions/v/vector-fields-introduction">Introduction
to Vector Fields</a> video on Khan Academy. The rest of the videos on
multivariate calculus might prove useful for understanding concepts in fluid
flow too.</p>

<p>Now then, let&rsquo;s get things moving.</p>

<h1 id="advection">Advection</h1>

<p>Advection is the transfer of a property from one place to another due to the
motion of the fluid. If you&rsquo;ve got some black dye in some water, and the water
is moving to the right, then surprise surprise, the black dye moves right.</p>

<canvas id="advection1" width="400" height="400"></canvas>

<p>If the fluid is moving in a more complex manner, that black dye will get pulled
through the liquid in a more complex manner.</p>

<canvas id="advection2" width="400" height="400"></canvas>

<p>Before we dive into how advection works, we need to talk a bit about the format
of the data underlying these simulations.</p>

<p>The simulation consists of two fields: color and velocity. Each field is
represented by a two dimensional grid. For simplicity, we use the same
dimensions as the output pixel grid.</p>

<p>Previously, I described the velocity field as an analytical function \( \vec
u(x, y) \). In practice, that analytical function is only used to initialize
the grid values.</p>

<p>The simulation runs by stepping forward bit-by-bit in time, with the state of
the color and velocity grids depending only on the state of the color and
velocity grids from the previous time step. We&rsquo;ll use \( \vec u(\vec p, t) \)
to represent the velocity grid at 2d position \( \vec p \) and time \( t \),
and \( \vec c(\vec p, t) \) to represent the color in the same manner.</p>

<p>So how do we move forward in time? Let&rsquo;s just talk about how the color field
changes for now. If we consider each grid point as a little particle in the
fluid, then one approach is to update the color of the fluid where that particle
will be, one time step in the future.</p>

<div>$$
\vec c(\vec p + \vec u(\vec p, t) \Delta t, t + \Delta t) := \vec c(\vec p, t)
$$</div>

<p><img src="/images/16-08-01/advection1.png"></p>

<p>In order to run these simulations in real-time at high resolution, we want to
implement them on the GPU. It turns out that this method of updating the value
at the new location of the particle is difficult to implement on the GPU.</p>

<p>First, the position we want to write to, \( \vec p + \vec u(\vec p, t) \Delta t
\), might not lie on a grid point, so we&rsquo;d have to distribute the impact of the
write across the surrounding grid points. Second, many of our imaginary
particles might end up in the same place, meaning we&rsquo;d need to analyze the
entire grid before deciding on the new value of each grid point.</p>

<p>So, instead of figuring out where our imaginary particles at the grid points <em>go
to</em>, we&rsquo;ll figure out where they <em>came from</em> in order to calculate the next time
step.</p>

<div>$$
\vec c(\vec p, t + \Delta t) := \vec c(\vec p - \vec u(\vec p, t) \Delta t, t)
$$</div>

<p><img src="/images/16-08-01/advection2.png"></p>

<p>With this scheme, we only need to write to a single grid point, and we don&rsquo;t
need to consider the contributions of imaginary particles coming from multiple
different places.</p>

<p>The last teensy hurdle is figuring out the value of \( \vec c(\vec p - \vec
u(\vec p, t) \Delta t, t ) \), since \( \vec p - \vec u(\vec p, t) \Delta t \)
might not be at a grid point. We can hop this hurdle using <a href="https://en.wikipedia.org/wiki/Bilinear_interpolation">bilinear interpolation</a>
on the surrounding 4 grid points (the ones linked by the dashed grey rectangle
above).</p>
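<p>To make the lookup concrete, here&rsquo;s a CPU sketch of the advection step in
TypeScript. It assumes fields are stored as flat w&times;h arrays, with velocities
measured in grid cells per unit time, and it wraps around at the edges (more on
that choice later). The real implementation does the same thing in a fragment
shader.</p>
<pre><code>type Vec2 = { x: number; y: number };

// Backward advection with bilinear interpolation. `field` is the
// quantity being advected (color, or the velocity itself), and `u`
// is the velocity field; both are flat w*h arrays.
function advect(field: Vec2[], u: Vec2[],
                w: number, h: number, dt: number): Vec2[] {
  const wrap = (n: number, m: number) =&gt; ((n % m) + m) % m;
  const at = (f: Vec2[], i: number, j: number) =&gt;
    f[wrap(j, h) * w + wrap(i, w)]; // the grid wraps at the edges
  const lerp = (a: Vec2, b: Vec2, t: number): Vec2 =&gt;
    ({ x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t });

  const out: Vec2[] = new Array(w * h);
  for (let j = 0; j &lt; h; j++) {
    for (let i = 0; i &lt; w; i++) {
      // Where did the particle now at (i, j) come from?
      const v = at(u, i, j);
      const sx = i - v.x * dt;
      const sy = j - v.y * dt;
      // Bilinearly interpolate the 4 surrounding grid points.
      const i0 = Math.floor(sx), j0 = Math.floor(sy);
      const fx = sx - i0, fy = sy - j0;
      const top = lerp(at(field, i0, j0), at(field, i0 + 1, j0), fx);
      const bot = lerp(at(field, i0, j0 + 1), at(field, i0 + 1, j0 + 1), fx);
      out[j * w + i] = lerp(top, bot, fy);
    }
  }
  return out;
}
</code></pre>
<p>The same routine handles the next section&rsquo;s job of advecting the velocity
field through itself: just pass the velocity field as both arguments, as in
<code>advect(u, u, w, h, dt)</code>.</p>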

<h1 id="advecting-the-velocity-field">Advecting the Velocity Field</h1>

<p>Barring a bizarre sequence of perfectly aligned fans underneath the liquid,
there&rsquo;s no reason why the velocity field wouldn&rsquo;t change over time. Just as
black ink would move through the fluid, so too will the velocity field itself!
Just as we can <em>advect</em> \( \vec c \) through \( \vec u \), we can also
<em>advect</em> \( \vec u \) through itself!</p>

<p>Intuitively you can think of it this way: a particle moving in a certain
direction will continue moving in that direction, even after it&rsquo;s moved.</p>

<p>Since we&rsquo;re storing velocity in a grid just like we did with color, we can use
the exact same routine to advect velocity through itself. Below, watch the
velocity change over time, with an initial velocity field of \( \vec u = (1,
\sin(2 \pi y)) \).</p>

<figure>
<canvas id="advectV1" width="400" height="400"></canvas>
<figcaption>See how the changes you make by dragging propagate through space via 
advection.</figcaption>
</figure>

<p>If you tried playing around with this and saw a bunch of weird hard edges, you
might&rsquo;ve thought to yourself &ldquo;I don&rsquo;t think fluids work like that&hellip;&rdquo; &ndash; and you&rsquo;d be
right. We&rsquo;re missing an important ingredient, but before we look at the
solution, let&rsquo;s take a closer look at the problem.</p>

<h1 id="divergent-fields">Divergent Fields</h1>

<p>Something about the velocity field below makes this intuitively not feel like a
fluid.  Fluids just don&rsquo;t <em>behave</em> like this.</p>

<canvas id="divergent1" width="400" height="400"></canvas>

<p>Same problem with this one&hellip;</p>

<canvas id="divergent2" width="400" height="400"></canvas>

<p>If you look at where the arrows are pointing in each of the above 2 simulations,
you&rsquo;ll see that there are spots where all the arrows point away from that
spot, and others where all the arrows point toward it. Assuming the
volume of the liquid is staying constant, the density of the fluid has to be
changing for such a velocity field to be possible.</p>

<p>Water is roughly incompressible. That means that at every spot, you have to have
the same amount of fluid entering that spot as leaving it.</p>

<p>Mathematically, we can represent this fact by saying a field is
<em>divergence-free</em>. The divergence of a velocity field \( \vec u \), indicated
with \( div(\vec u) \) or \( \nabla \cdot \vec u \), is a measure of how
much net stuff is entering or leaving a given spot in the field. For our 2D
velocity field, it&rsquo;s defined like this:</p>

<div>$$\begin{aligned}
\nabla \cdot \vec u &=
    \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right) 
    \cdot
    \left( u_x, u_y \right) \\
&= \frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y}
\end{aligned}$$</div>

<p>The first of the two not-very-fluidy fields above has an equation \( \vec u(x,
y) = (x, y) \). Taking the divergence, we find:</p>

<div>$$\begin{aligned}
\nabla \cdot \vec u &=
    \frac{\partial}{\partial x}(x) + \frac{\partial}{\partial y}(y) \\
&= 1 + 1 \\
&= 2
\end{aligned}$$</div>

<p>This positive value tells us that, in all places, more stuff is leaving that
point than entering it. In physical terms, this means that the density is
decreasing uniformly everywhere.</p>

<p>The other not-very-fluidy field has an equation \( \vec u(x, y) = (\sin(2 \pi
x), 0) \). If we look at its divergence, we see:</p>

<div>$$\begin{aligned}
\nabla \cdot \vec u &=
    \frac{\partial}{\partial x}(\sin(2 \pi x)) + \frac{\partial}{\partial y}(0) 
\\
&= 2 \pi \cos (2 \pi x)
\end{aligned}$$</div>

<p>Which tells us that in some places, density is increasing (where \( \nabla
\cdot \vec u &lt; 0 \)), and in others, density is decreasing (where \( \nabla
\cdot \vec u &gt; 0 \)).</p>

<p>Doing the same operation on the more fluidy looking swirly velocity field \(
\vec u = (\sin ( 2 \pi y), \sin ( 2 \pi x )) \) that you saw in the section
about advection, we discover \( \nabla \cdot \vec u = 0 \).</p>

<p>An incompressible fluid will have a divergence of zero everywhere. So, if we
want our simulated fluid to look kind of like a real fluid, we better make sure
it&rsquo;s divergence-free.</p>

<p>Since our velocity field undergoes advection and can be influenced by clicking
and dragging around the fluid, having an initially divergence-free velocity
field isn&rsquo;t enough to guarantee that the field will continue to be
divergence-free. For example, if we take our swirly simulation and start
advecting the velocity field through itself, we end up with something divergent:</p>

<canvas id="divergent3" width="400" height="400"></canvas>

<p>So we need a way of taking a divergent field and <em>making</em> it divergence-free. To
understand what force makes that happen in the real world, we need to talk about
some honest-to-goodness physics.</p>

<h1 id="navier-stokes">Navier-Stokes</h1>

<p>The Navier-Stokes equations describe the motion of fluids. Here are the
Navier-Stokes equations for incompressible fluid flow:</p>

<div>$$\begin{aligned}
& \frac{\partial \vec u}{\partial t} =
    -\vec u \cdot \nabla \vec u
    -\frac{1}{\rho} \nabla p + \nu \nabla^2 \vec u + \vec F \\
\\
& \nabla \cdot \vec u = 0
\end{aligned}$$</div>

<p>Where \( \vec u \) is the velocity field, \( \rho \) is density, \( p \)
is pressure,
\( \nu \) is the kinematic viscosity, and \( \vec F \) is external forces
acting upon the fluid.</p>

<p>Since we&rsquo;re pretending the viscosity of our fluid is zero, we can drop the \(
\nu \) term in the first equation. In our simple simulation, external forces
are only applied by dragging the mouse, so we&rsquo;ll ignore that term for now,
opting to allow it to influence the velocity field directly.</p>

<p>Dropping those terms, we&rsquo;re left with the following:</p>

<div>$$
\frac{\partial \vec u}{\partial t} =
    -\vec u \cdot \nabla \vec u
    -\frac{1}{\rho} \nabla p
$$</div>

<p>We can expand this into its partial-derivative form, writing out the vector
components to leave ourselves with only scalar variables.</p>

<div>$$\begin{aligned}
\begin{bmatrix}
    \frac{\partial u_x}{\partial t} \\
    \\
    \frac{\partial u_y}{\partial t}
\end{bmatrix} &=
    -
    \begin{bmatrix}
        \frac{\partial u_x}{\partial x} & \frac{\partial u_x}{\partial y} \\
        \\
        \frac{\partial u_y}{\partial x} & \frac{\partial u_y}{\partial y}
    \end{bmatrix}
    \begin{bmatrix}
        u_x \\
        \\
        u_y
    \end{bmatrix}
    -
    \frac{1}{\rho}
    \begin{bmatrix}
        \frac{\partial p}{\partial x} \\
        \\
        \frac{\partial p}{\partial y}
    \end{bmatrix}
    \\
\begin{bmatrix}
    \frac{\partial u_x}{\partial t} \\
    \\
    \frac{\partial u_y}{\partial t}
\end{bmatrix} &=
    \begin{bmatrix}
        - u_x \frac{\partial u_x}{\partial x}
        - u_y \frac{\partial u_x}{\partial y}
        - \frac{1}{\rho} \frac{\partial p}{\partial x} \\
        \\
        - u_x \frac{\partial u_y}{\partial x}
        - u_y \frac{\partial u_y}{\partial y}
        - \frac{1}{\rho} \frac{\partial p}{\partial y}
    \end{bmatrix}
\end{aligned}$$</div>

<p>Remembering that these fields are all functions of \( (x, y, t) \), we can
approximate the partial derivatives with <a href="https://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences">finite differences</a>. For instance,
we can approximate the partial derivative of \( u_x \) with respect to \( t
\) like so:</p>

<div>$$
\frac{\partial u_x}{\partial t} \approx \frac{u_x(x, y, t + \Delta t) - u_x(x, 
y, t)}{\Delta t} $$</div>

<p>Because the procedure ends up being the same for both components, we&rsquo;ll focus on
only the \( x \) component here. Applying finite differences to all of the
partial derivatives, we have this:</p>

<div>$$
\begin{aligned}
\frac{u_x(x, y, t + \Delta t) - u_x(x, y, t)}{\Delta t} =&
    -u_x(x, y, t)
        \frac{u_x(x + \epsilon, y, t) - u_x(x - \epsilon, y, t)}{2 \epsilon}
    \\
    &
    -u_y(x, y, t)
        \frac{u_x(x, y + \epsilon, t) - u_x(x, y - \epsilon, t)}{2 \epsilon}
    \\
    &
    -\frac{1}{\rho}
        \frac{p(x + \epsilon, y, t) - p(x - \epsilon, y, t)}{2 \epsilon}
\end{aligned}
$$</div>

<p>Ultimately what we want is \( \vec u(x, y, t + \Delta t) \), which will tell
us, for a given point, what the velocity will be at the next time step. So let&rsquo;s
solve for that by rearranging the big long formula above:</p>

<div>$$
\begin{aligned}
u_x(x, y, t + \Delta t) = &
    u_x(x, y, t)
    \\
    &

    - u_x(x, y, t) \Delta t
        \frac{u_x(x + \epsilon, y, t) - u_x(x - \epsilon, y, t)}{2 \epsilon}
    \\
    &
    -u_y(x, y, t) \Delta t
        \frac{u_x(x, y + \epsilon, t) - u_x(x, y - \epsilon, t)}{2 \epsilon}
    \\
    &
    -\frac{1}{\rho} \Delta t
        \frac{p(x + \epsilon, y, t) - p(x - \epsilon, y, t)}{2 \epsilon}
\end{aligned}
$$</div>

<p>If you look at the first three terms in this expression, what does it look like
they conceptually represent? It looks like they represent the next velocity
after we&rsquo;ve taken into account changes due to the motion of the fluid itself.
That sounds an awful lot like the advection discussed earlier. In fact, it will work
quite well if we substitute the velocity field after it&rsquo;s undergone advection.
We&rsquo;ll call the advected velocity field \( \vec u ^ a \). So now we have:</p>

<div>$$
\begin{aligned}
u_x(x, y, t + \Delta t) =
    u^a_x(x, y, t)
    -\frac{1}{\rho} \Delta t
        \frac{p(x + \epsilon, y, t) - p(x - \epsilon, y, t)}{2 \epsilon}
\end{aligned}
$$</div>

<p>So after all of that, we have an equation that relates the velocity field at the
next time tick to the current velocity field after it&rsquo;s undergone advection,
followed by application of pressure.</p>

<p>We know that a divergence-free field that undergoes advection isn&rsquo;t necessarily
still divergence-free, and yet we know that the Navier-Stokes equations for
incompressible flow describe divergence-free velocity fields. So we have our
answer about what in nature prevents the velocity field from becoming
divergent: pressure!</p>

<h1 id="solving-for-pressure">Solving for Pressure</h1>

<p>Now we have an equation that relates \( \vec u \) to \( p \). This is
where the math gets messy. We start from the second Navier-Stokes equation for
incompressible flow, applied at time \( t + \Delta t \), and apply finite
differences again:</p>

<div>$$
\begin{aligned}
\nabla \cdot \vec u & = 0
\\

\frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y} & = 0
\\

\frac{
    u_x(x + \epsilon, y, t + \Delta t) - u_x(x - \epsilon, y, t + \Delta t)
}{
    2 \epsilon
} +
\\
\\
\frac{
    u_y(x, y + \epsilon, t + \Delta t) - u_y(x, y - \epsilon, t + \Delta t)
}{
    2 \epsilon
} & = 0
\end{aligned}
$$</div>

<p>Here, we can substitute our equations for \( \vec u \) expressed in terms of
\( \vec u ^ a \) and \( p \) to get this monster:</p>

<div>$$\begin{aligned}
0 = \frac{1}{2 \epsilon} \left(

    \left(
    u^a_x(x + \epsilon, y, t)
    -\frac{1}{\rho} \Delta t
        \frac{p(x + 2\epsilon, y, t) - p(x, y, t)}{2 \epsilon}
    \right)

    \right. \\ \left.

    -
    \left(
    u^a_x(x - \epsilon, y, t)
    -\frac{1}{\rho} \Delta t
        \frac{p(x, y, t) - p(x - 2\epsilon, y, t)}{2 \epsilon}
    \right)

    \right. \\ \left.

    +
    \left(
    u^a_y(x, y + \epsilon, t)
    -\frac{1}{\rho} \Delta t
        \frac{p(x, y + 2\epsilon, t) - p(x, y, t)}{2 \epsilon}
    \right)

    \right. \\ \left.

    -
    \left(
    u^a_y(x, y - \epsilon, t)
    -\frac{1}{\rho} \Delta t
        \frac{p(x, y, t) - p(x, y - 2\epsilon, t)}{2 \epsilon}
    \right)
\right)
\end{aligned}$$</div>

<p>Rearranging to have all of the \( p \) terms on the left and all the \( \vec
u ^ a \) terms on the right, and multiplying both sides by \( 2 \epsilon \),
we have:</p>

<div>$$
-\frac{\Delta t}{2 \epsilon \rho}
\left(

\begin{matrix}
 4 p(x, y, t) \\
-p(x + 2 \epsilon, y, t) \\
-p(x - 2 \epsilon, y, t) \\
-p(x, y + 2 \epsilon, t) \\
-p(x, y - 2 \epsilon, t)
\end{matrix}

\right)
=

\begin{matrix}
 u^a_x(x + \epsilon, y, t) \\
-u^a_x(x - \epsilon, y, t) \\
+u^a_y(x, y + \epsilon, t) \\
-u^a_y(x, y - \epsilon, t)
\end{matrix}

$$</div>

<p><em>Note: the above expression is a scalar expression, despite being laid out in a
somewhat vector-y form.</em></p>

<p>At this point, it&rsquo;s helpful to remember that, for the purposes of the
simulation, we&rsquo;re not interested in knowing the value of \( p \) everywhere:
we only care about knowing its value at enough places to calculate the value of
the velocities at the grid points. To that end, we can similarly calculate
\( p \) on the grid, making \( \epsilon \) the distance between adjacent
grid cells.</p>

<p>The above equation yields a new equation for every \( (x, y) \) of a grid
point we substitute. For the purposes of discussion, let&rsquo;s assume that the gap
between adjacent cells is 0.1 units, so \( \epsilon = 0.1 \). Let&rsquo;s examine
what the equation yields for \( (x, y) = (0.3, 0.7) \).</p>

<div>$$\begin{aligned}
-\frac{\Delta t}{2 \epsilon \rho}
\left(

\begin{matrix}
 4 p(0.3, 0.7, t) \\
-p(0.3 + 2(0.1), 0.7, t) \\
-p(0.3 - 2(0.1), 0.7, t) \\
-p(0.3, 0.7 + 2(0.1), t) \\
-p(0.3, 0.7 - 2(0.1), t)
\end{matrix}

\right)
& =

\begin{matrix}
 u^a_x(0.3 + 0.1, 0.7, t) \\
-u^a_x(0.3 - 0.1, 0.7, t) \\
+u^a_y(0.3, 0.7 + 0.1, t) \\
-u^a_y(0.3, 0.7 - 0.1, t)
\end{matrix}

\\

\frac{\Delta t}{2 \epsilon \rho}
\left(

\begin{matrix}
 4 p(0.3, 0.7, t) \\
-p(0.5, 0.7, t) \\
-p(0.1, 0.7, t) \\
-p(0.3, 0.9, t) \\
-p(0.3, 0.5, t)
\end{matrix}

\right)
& =

\begin{matrix}
 u^a_x(0.4, 0.7, t) \\
-u^a_x(0.2, 0.7, t) \\
+u^a_y(0.3, 0.8, t) \\
-u^a_y(0.3, 0.6, t)
\end{matrix}

\end{aligned}$$</div>

<p>All the values on the right hand side of this equation are known, and on the
left we have 5 unknowns: the value of \( p \) at 5 different grid locations.</p>

<p>If we repeat this process and evaluate \( (x, y) \) at every grid point, we
get one equation with 5 unknowns for each grid location. If our grid has \( n
\times m \) grid locations in it, then we have \( n \times m \) equations,
each with 5 unknowns.</p>

<p>If you&rsquo;re wondering about what&rsquo;s happening at the edges, we&rsquo;re going to lazily
side-step that question by making our grid wrap around: if you ask for the
velocity past the bottom edge, you&rsquo;ll get a value near the top edge.</p>

<p>Before we move on, our notation is getting a bit clunky, so let&rsquo;s clean it up a
tad since we know we&rsquo;re working on a grid. For the next part, we&rsquo;ll say \(
p_{i,j} = p(i \epsilon, j \epsilon, t) \), and we&rsquo;ll stick all the known values
together into a value \( d \) (for <strong>d</strong>ivergence), like so:</p>

<div>$$
d_{i,j} = -\frac{2 \epsilon \rho}{\Delta t}
\begin{pmatrix}
 u^a_x((i + 1) \epsilon, j \epsilon, t) \\
-u^a_x((i - 1) \epsilon, j \epsilon, t) \\
+u^a_y(i \epsilon, (j + 1) \epsilon, t) \\
-u^a_y(i \epsilon, (j - 1) \epsilon, t)
\end{pmatrix}
$$</div>

<p>With this nicer notation, we can express the system of equations on pressure
that we&rsquo;re trying to solve like so:</p>

<div>$$
4p_{i, j} - p_{i+2,j} - p_{i-2,j} - p_{i,j+2} - p_{i,j-2} = d_{i, j}
$$</div>

<h1 id="iteratively-solving-the-pressure-equation">Iteratively Solving the Pressure Equation</h1>

<p>Solving for \( p_{i, j} \) for every grid point analytically would be an
enormous mess. Instead, we&rsquo;re going to use an <em>iterative</em> method of solving this
system of equations, where each iteration provides values closer and closer to a
real solution. We&rsquo;re going to use the <a href="http://college.cengage.com/mathematics/larson/elementary_linear/5e/students/ch08-10/chap_10_2.pdf">Jacobi Method</a>.</p>

<p>In the Jacobi method, we first rearrange our equation to isolate one term, like
so:</p>

<div>$$
p_{i, j} = \frac{
    d_{i, j} + p_{i+2,j} + p_{i-2,j} + p_{i,j+2} + p_{i,j-2}
}{4}
$$</div>

<p>Next, we make an initial guess for all of our unknowns. We&rsquo;ll call this initial
guess \( p_{i,j}^{(0)} \), and just set it to 0 everywhere.</p>

<p>Here&rsquo;s where the iteration comes in: our next guess, \( p_{i, j}^{(1)} \) is
obtained by plugging in our initial guess into the above formula:</p>

<div>$$
p_{i, j}^{(1)} = \frac{
    d_{i, j} + p_{i+2,j}^{(0)} + p_{i-2,j}^{(0)} + p_{i,j+2}^{(0)} + p_{i,j-2}^{(0)}
}{4}
$$</div>

<p>And, more generally, each iteration relies upon the previous one:</p>

<div>$$
p_{i, j}^{(k)} = \frac{
    d_{i, j} + p_{i+2,j}^{(k-1)} + p_{i-2,j}^{(k-1)} + p_{i,j+2}^{(k-1)} + p_{i,j-2}^{(k-1)}
}{4}
$$</div>

<p>You would usually run this until the values of one iteration are equal to the
values from the previous iteration, rounded to a certain accuracy. For our
purposes, we&rsquo;re more interested in this running in a consistent period of time,
so we&rsquo;ll arbitrarily run this for 10 iterations, and hope the result is accurate
enough to look realistic.</p>

<p>For a bit of intuition on <em>why</em> this converges to a solution, check out
<a href="http://math.stackexchange.com/questions/1255790/what-is-the-intuition-behind-matrix-splitting-methods-jacobi-gauss-seidel/1255821#1255821">Algebraic Pavel&rsquo;s answer on Math Exchange</a>.</p>

<h1 id="all-together-now">All Together Now!</h1>

<p>Phew! That was a lot to get through. Now let&rsquo;s put it all together. Roughly, as
pseudo-code, here&rsquo;s our whole simulation:</p>
<pre><code>initialize color field, c
initialize velocity field, u

while(true):
    u_a := advect field u through itself
    d := calculate divergence of u_a
    p := calculate pressure based on d, using jacobi iteration
    u := u_a - gradient of p
    c := advect field c through velocity field u
    draw c
    wait a bit
</code></pre>
<p>Here are the key formulas for those steps on grid coordinates \( (i, j) \),
uncluttered by derivations:</p>

<p>Advecting field \( \vec u \) through itself:</p>

<div>$$
\vec u^a_{i,j} = \vec u^a(x:=i \epsilon, y:= j \epsilon, t + \Delta t) := \vec 
u(x - u_x(x, y) \Delta t, y - u_y(x, y) \Delta t, t)
$$</div>

<p>Divergence of \( \vec u^a \) (multiplied by constant terms):</p>

<div>$$
d_{i,j} = -\frac{2 \epsilon \rho}{\Delta t} (
    u^a_{x_{i+1, j}} - u^a_{x_{i-1, j}} +
    u^a_{y_{i, j+1}} - u^a_{y_{i,j-1}}
)
$$</div>

<p>Pressure calculation Jacobi iteration step, with \( p_{i, j}^{(0)} = 0 \):</p>

<div>$$
p_{i, j}^{(k)} = \frac{
    d_{i, j} + p_{i+2,j}^{(k-1)} + p_{i-2,j}^{(k-1)} + p_{i,j+2}^{(k-1)} + p_{i,j-2}^{(k-1)}
}{4}
$$</div>

<p>Subtracting the pressure gradient from the advected velocity field:</p>

<div>$$
\begin{aligned}
u_{x_{i, j}} &:= u^a_{x_{i,j}}
    -\frac{\Delta t}{2 \rho \epsilon} (p_{i + 1, j} - p_{i - 1, j})
\\
u_{y_{i, j}} &:= u^a_{y_{i, j}}
    -\frac{\Delta t}{2 \rho \epsilon} (p_{i, j + 1} - p_{i, j - 1})
\end{aligned}
$$</div>

<p>Advecting the color field through the final velocity field:</p>

<div>$$
\vec c^a_{i,j} = \vec c^a(x:=i \epsilon, y:= j \epsilon, t + \Delta t) := \vec c(x
- u_x(x, y) \Delta t, y - u_y(x, y) \Delta t, t)
$$</div>
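<p>And here&rsquo;s a matching CPU sketch of the last velocity step, subtracting the
pressure gradient from the advected velocity components (same flat-array,
wrap-around assumptions as before):</p>
<pre><code>// After this, (ux, uy) is (approximately) divergence-free.
function subtractPressureGradient(ux: Float64Array, uy: Float64Array,
                                  p: Float64Array,
                                  w: number, h: number, eps: number,
                                  rho: number, dt: number): void {
  const idx = (i: number, j: number) =&gt;
    (((j % h) + h) % h) * w + (((i % w) + w) % w);
  const s = dt / (2 * rho * eps);
  for (let j = 0; j &lt; h; j++) {
    for (let i = 0; i &lt; w; i++) {
      ux[idx(i, j)] -= s * (p[idx(i + 1, j)] - p[idx(i - 1, j)]);
      uy[idx(i, j)] -= s * (p[idx(i, j + 1)] - p[idx(i, j - 1)]);
    }
  }
}
</code></pre>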

<h1 id="implementation">Implementation</h1>

<canvas id="implementation1" width="400" height="400"></canvas>

<p>Pulling all those steps together, you can make something like this! Woohoo! When
I got this working for the first time, I was pretty ecstatic.</p>

<p>I won&rsquo;t delve too far into the implementation, but you can have a look at it
yourself: <a href="https://github.com/jlfwong/blog/blob/master/static/javascripts/fluid-sim.js">fluid-sim.js</a>. It relies upon the elegant <a href="https://github.com/evanw/lightgl.js">lightgl.js</a>, which
is an abstraction layer on top of WebGL that makes it much nicer to work with.
Unlike THREE.js, it doesn&rsquo;t make any assumptions about you wanting any concept
of a camera or lighting or that you&rsquo;re working in 3D at all.</p>

<p>The key technique for running the simulation efficiently is doing all the hard
work on the GPU. To meet this need, all of the computations are done via the
<a href="http://webglfundamentals.org/webgl/lessons/webgl-image-processing-continued.html">render to texture</a> technique, ping-ponging which texture is being rendered
to facilitate reading and writing to the same conceptual texture (e.g. reading
from the velocity field and writing to the velocity field representing the next
time step).</p>
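<p>Abstracted away from WebGL, the ping-pong pattern looks something like this
sketch (in the actual implementation the two buffers are textures attached to
framebuffers, not arrays):</p>
<pre><code>type Field = Float64Array;

// Two buffers alternate between the "read" and "write" roles, so
// each pass can consume a field and produce that field's next state.
function makePingPong(a: Field, b: Field) {
  let read = a;
  let write = b;
  return {
    step(pass: (input: Field, output: Field) =&gt; void): void {
      pass(read, write);             // e.g. one advection pass
      [read, write] = [write, read]; // the output becomes next input
    },
    current(): Field { return read; },
  };
}
</code></pre>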

<p>Each one of the major components of the algorithm is implemented in a separate
shader. There&rsquo;s a shader for advection, a shader for calculating the divergence,
one for a single iteration of the Jacobi method, and another for subtracting the
pressure gradient from the advected velocity.</p>

<h1 id="references">References</h1>

<p>To make this, I had to draw from a lot of different references, many of which
are linked inline in the post.</p>

<ul>
<li><p><em><a href="https://www.cs.ubc.ca/~rbridson/fluidsimulation/fluids_notes.pdf">Fluid Simulation Course Notes from SIGGRAPH 2007</a></em>: Now a textbook, this
is a pretty mathematically dense tutorial. It took me 4 or 5 times reading
through most sections to make sense of it, and ultimately I only understood
parts of it after I did the derivations myself. It uses the more complex
<a href="https://en.wikipedia.org/wiki/Conjugate_gradient_method">conjugate gradient method</a> instead of the Jacobi method to solve the system
of pressure equations, which I got completely lost in, and abandoned. It delves
into a lot of arguments about numerical accuracy and uses a more complex grid
layout than I did, which I still don&rsquo;t follow fully. It also has resources for
other kinds of fluid simulations, like heightfield simulation, and smoothed
particle hydrodynamics.</p></li>

<li><p><em><a href="http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch38.html">GPU Gems Chapter 38.  Fast Fluid Dynamics Simulation on the GPU</a></em>: This
was the single most useful reference I found, and describes something very
similar to this post.  It walks through specific implementation ideas, and gave
me a much better intuition for advection. Some of the math (or at least the
notation) seems shaky here. I think the Gaussian &ldquo;splat&rdquo; formula is missing a
negative sign inside of the \( exp() \), and I&rsquo;m not sure what the notation
\( (\vec u \cdot \nabla) u_x \) means in the first Navier-Stokes equation,
since \( \nabla \cdot \vec u = 0 \) in the second equation.</p></li>

<li><p><a href="http://college.cengage.com/mathematics/larson/elementary_linear/5e/students/ch08-10/chap_10_2.pdf">&ldquo;Elementary Linear Algebra&rdquo; by Ron Larson, Section 10.2: Iterative Methods
for Solving Linear Systems</a>. This had a much clearer explanation of the
Jacobi method than the GPU Gems chapter that allowed me to derive the pressure
solve iteration myself. The full textbook can be found here: <a href="https://www.amazon.com/Elementary-Linear-Algebra-Ron-Larson/dp/1133110878">&ldquo;Elementary Linear
Algebra&rdquo; on amazon.com</a>.</p></li>

<li><p><a href="https://29a.ch/sandbox/2012/fluidcanvas/">Jonas Wagner&rsquo;s fluid simulation on <code>canvas</code></a>, and particularly the source
for it (<a href="https://29a.ch/sandbox/2012/fluidcanvas/fluid.js">fluid.js</a>) were helpful for understanding what a full solution
actually looks like. It&rsquo;s also how I found the GPU Gems article in the first
place. Jonas went on later to reimplement his solution in WebGL: <a href="https://29a.ch/2012/12/16/webgl-fluid-simulation">WebGL Fluid
Simulation</a>.</p></li>
</ul>

<script 
src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.js"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/contrib/auto-render.min.js"></script>

<script>
renderMathInElement(document.body);
</script>

<script src="/javascripts/lightgl.js"></script>

<script src="/javascripts/fluid-sim.js"></script>

<script>

new FluidSim("demo", {
    threshold: false,
    advectV: true,
    applyPressure: true,
    showArrows: false,
    initCFn: [
        'step(1.0, mod(floor((x + 1.0) / 0.2) + floor((y + 1.0) / 0.2), 2.0))',
        'step(1.0, mod(floor((x + 1.0) / 0.3) + floor((y + 1.0) / 0.3), 2.0))',
        'step(1.0, mod(floor((x + 1.0) / 0.4) + floor((y + 1.0) / 0.4), 2.0))'
    ],
    dyeSpots: true,
    size: 600,
});

new FluidSim("advection1", {
    threshold: false,
    advectV: false,
    initVFn: ['1.0', '0.0']
});

new FluidSim("advection2", {
    threshold: false,
    advectV: false
});

new FluidSim("advectV1", {
    threshold: false,
    advectV: true,
    initVFn: ['1.0', 'sin(2.0 * 3.1415 * x)']
});

new FluidSim("divergent1", {
    threshold: false,
    advectV: false,
    initVFn: ['x', 'y']
});

new FluidSim("divergent2", {
    threshold: false,
    advectV: false,
    initVFn: ['sin(2.0 * 3.1415 * x)', '0.0']
});

new FluidSim("divergent3", {
    threshold: false,
    advectV: true
});

new FluidSim("implementation1", {
    threshold: false,
    advectV: true,
    applyPressure: true,
    showArrows: false
});

</script>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ Ray Marching and Signed Distance Functions]]></title>
    <link href="http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/"/>
    <updated>2016-07-15T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/</id>
    <content type="html"><![CDATA[ 

<p><link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.css"></p>

<iframe width="400" height="400" frameborder="0" 
src="https://www.shadertoy.com/embed/4tcGDr?gui=false&t=3.44&paused=false&muted=false" 
allowfullscreen></iframe>

<p>I&rsquo;ve always been fascinated by the demoscene: short, real-time generated
audiovisual demos, usually packed into very, very small executables. This one, by an
artist named &ldquo;reptile&rdquo;, comes from a 4KB compiled executable. No external assets
(images, sound clips, etc.) are used &ndash; it&rsquo;s <em>all</em> in that 4KB.</p>

<iframe width="560" height="315" 
src="https://www.youtube.com/embed/roZ-Cgxe9bU?list=PLVbS70ERPhCCGKc-MdKsH03R7o6TNbGoZ" 
frameborder="0" allowfullscreen></iframe>

<p>To get an intuitive grip on how small 4KB is: a 1080p video of this demo&rsquo;s
output is 40MB, or ~10,000 times larger than the executable that produces it. Keep in
mind that the executable <em>also</em> contains the code to generate the music.</p>

<p>One of the techniques used in many demos is called ray marching. This
algorithm, used in combination with a special kind of function called a &ldquo;signed
distance function&rdquo;, can create some pretty damn cool things in real time.</p>

<p><nav id="TableOfContents">
<ul>
<li><a href="#signed-distance-functions">Signed Distance Functions</a></li>
<li><a href="#the-raymarching-algorithm">The Raymarching Algorithm</a></li>
<li><a href="#surface-normals-and-lighting">Surface Normals and Lighting</a></li>
<li><a href="#moving-the-camera">Moving the Camera</a></li>
<li><a href="#constructive-solid-geometry">Constructive Solid Geometry</a></li>
<li><a href="#model-transformations">Model Transformations</a>
<ul>
<li><a href="#rotation-and-translation">Rotation and Translation</a></li>
<li><a href="#uniform-scaling">Uniform Scaling</a></li>
<li><a href="#non-uniform-scaling-and-beyond">Non-uniform scaling and beyond</a></li>
</ul></li>
<li><a href="#putting-it-all-together">Putting it all together</a></li>
<li><a href="#references">References</a></li>
</ul>
</nav></p>

<h1 id="signed-distance-functions">Signed Distance Functions</h1>

<p>Signed distance functions, or SDFs for short, when passed the coordinates of a
point in space, return the shortest distance between that point and some
surface. The sign of the return value indicates whether the point is inside that
surface or outside (hence <em>signed</em> distance function). Let&rsquo;s look at an example.</p>

<p>Consider a sphere centered at the origin. Points inside the sphere will have a
distance from the origin less than the radius, points on the sphere will have
distance equal to the radius, and points outside the sphere will have distances
greater than the radius.</p>

<p>So our first SDF, for a sphere centered at the origin with radius 1, looks like
this:</p>

<div>$$ f(x, y, z) = \sqrt{x^2 + y^2 + z^2} - 1 $$</div>

<p>Let&rsquo;s try some points:</p>

<div>$$ \begin{aligned}
f(1, 0, 0) &= 0 \\
f(0, 0, 0.5) &= -0.5 \\
f(0, 3, 0) &= 2
\end{aligned} $$</div>

<p>Great, \( (1, 0, 0) \) is on the surface, \( (0, 0, 0.5) \) is inside the
surface, with the closest point on the surface 0.5 units away, and \( (0, 3, 0)
\) is outside the surface with the closest point on the surface 2 units away.</p>

<p>When we&rsquo;re working in GLSL shader code, formulas like this will be vectorized.
Using the <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm">Euclidean norm</a>, the above SDF looks like this:</p>

<div>$$ f(\vec{p}) = ||\vec{p}|| - 1 $$</div>

<p>Which, in GLSL, translates to this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="k">vec3</span> <span class="n">p</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">return</span> <span class="n">length</span><span class="p">(</span><span class="n">p</span><span class="p">)</span> <span class="o">-</span> <span class="mf">1.0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>For a bunch of other handy SDFs, check out <a href="http://iquilezles.org/www/articles/distfunctions/distfunctions.htm">Modeling with Distance
Functions</a>.</p>

<h1 id="the-raymarching-algorithm">The Raymarching Algorithm</h1>

<p>Once we have something modeled as an SDF, how do we render it? This is where the
ray marching algorithm comes in!</p>

<p>Just as in raytracing, we select a position for the camera, put a grid in front
of it, and send rays from the camera through each point in the grid, with each
grid point corresponding to a pixel in the output image.</p>

<figure>
<img src="/images/16-07-11/raytrace.png">
<figcaption>From "Ray tracing" on Wikipedia</figcaption>
</figure>

<p>The difference comes in how the scene is defined, which in turn changes our
options for finding the intersection between the view ray and the scene.</p>

<p>In raytracing, the scene is typically defined in terms of explicit geometry:
triangles, spheres, etc. To find the intersection between the view ray and the
scene, we do a series of geometric intersection tests: where does this ray
intersect with this triangle, if at all? What about this one? What about this
sphere?</p>

<p><em>Aside: For a tutorial on ray tracing, check out <a href="http://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/how-does-it-work">scratchapixel.com</a>. If
you&rsquo;ve never seen ray tracing before, the rest of this article might be a bit
tricky.</em></p>

<p>In raymarching, the entire scene is defined in terms of a signed distance
function. To find the intersection between the view ray and the scene, we start
at the camera, and move a point along the view ray, bit by bit. At each step, we
ask &ldquo;Is this point inside the scene surface?&rdquo;, or alternately phrased, &ldquo;Does the
SDF evaluate to a negative number at this point?&rdquo; If it does, we&rsquo;re done: we
hit something. If not, we keep going, up to some maximum number of steps
along the ray.</p>

<p>We <em>could</em> just step along a very small increment of the view ray every time,
but we can do much better than this (both in terms of speed and in terms of
accuracy) using &ldquo;sphere tracing&rdquo;.  Instead of taking a tiny step, we take the
maximum step we know is safe without going through the surface: we step by the
distance to the surface, which the SDF provides us!</p>

<figure>
<img src="/images/16-07-11/spheretrace.jpg">
<figcaption>From <a 
href="https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-8-pixel-displacement-mapping-distance-functions">GPU 
Gems 2: Chapter 8.</a></figcaption>
</figure>

<p>In this diagram, \( p_0 \) is the camera. The blue line lies along the ray
direction cast from the camera through the view plane. The first step taken is
quite large: it steps by the shortest distance to the surface. Since the point
on the surface closest to \( p_0 \) doesn&rsquo;t lie along the view ray, we keep
stepping until we eventually get to the surface, at \( p_4 \).</p>

<p>Implemented in GLSL, and wrapped in a function so the pieces it depends on
are explicit, the ray marching algorithm looks like this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">/**
 * Return the shortest distance from the eye to the scene surface along
 * the given ray, or `end` if no surface is found within `end` units.
 */
float shortestDistanceToSurface(vec3 eye, vec3 viewRayDirection, float start, float end) {
    float depth = start;
    for (int i = 0; i &lt; MAX_MARCHING_STEPS; i++) {
        float dist = sceneSDF(eye + depth * viewRayDirection);
        if (dist &lt; EPSILON) {
            // We're inside the scene surface!
            return depth;
        }
        // Move along the view ray by the safe step the SDF gives us
        depth += dist;

        if (depth &gt;= end) {
            // Gone too far; give up
            return end;
        }
    }
    return end;
}
</code></pre></div>
<p>Combining that with a bit of code to select the view ray direction
appropriately and the sphere SDF, then making any part of the surface that gets
hit red, we end up with this:</p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/llt3R4?gui=true&t=10&paused=true&muted=false">
</iframe>

<p>Voila, we have a sphere! (Trust me, it&rsquo;s a sphere, it just has no shading yet.)</p>
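<p>Since I just waved my hands at &ldquo;a bit of code to select the view ray
direction&rdquo;, here&rsquo;s a sketch of one way to do it, assuming Shadertoy&rsquo;s
<code>fragCoord</code> and <code>iResolution</code> conventions (the name
<code>rayDirection</code> and its parameters are mine):</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">vec3 rayDirection(float fieldOfView, vec2 size, vec2 fragCoord) {
    // Re-center pixel coordinates so (0, 0) is the middle of the screen
    vec2 xy = fragCoord - size / 2.0;
    // Distance from the eye to the view plane that yields this vertical FOV
    float z = size.y / tan(radians(fieldOfView) / 2.0);
    // The camera looks down the negative z axis in view space
    return normalize(vec3(xy, -z));
}
</code></pre></div>
<p>Calling <code>rayDirection(45.0, iResolution.xy, fragCoord)</code> then gives
one ray per pixel with a 45&deg; vertical field of view.</p>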

<p>For all of the example code, I&rsquo;m using <a href="http://shadertoy.com/">http://shadertoy.com/</a>. Shadertoy is a
tool that lets you prototype shaders without needing to write any OpenGL/WebGL
boilerplate. You don&rsquo;t even write a vertex shader &ndash; you just write a fragment
shader and watch it go.</p>

<p>The code is commented, so you should go check it out and experiment with it. At
the top of the shader code for each part, I&rsquo;ve left some challenges for you to
try to test your understanding. To get to the code, hover over the image above,
and click on the title.</p>

<h1 id="surface-normals-and-lighting">Surface Normals and Lighting</h1>

<p>Most lighting models in computer graphics use some concept of <a href="https://en.wikipedia.org/wiki/Normal_(geometry)">surface
normals</a> to calculate what color a material should be at a given point on the
surface. When surfaces are defined by explicit geometry, like polygons, the
normals are usually specified for each vertex, and the normal at any given point
on a face can be found by interpolating the surrounding vertex normals.</p>

<p>So how do we find surface normals for a scene defined by a signed distance
function? We take the <a href="https://en.wikipedia.org/wiki/Gradient">gradient</a>! Conceptually, the gradient of a function
\( f \) at point \( (x, y, z) \) tells you what direction to move in from
\( (x, y, z) \) to most rapidly increase the value of \( f \). This will be
our surface normal.</p>

<p>Here&rsquo;s the intuition: for a point on the surface, \( f \) (our SDF) evaluates
to zero. On the inside of that surface, \( f \) goes negative, and on the
outside, it goes positive. So the direction at the surface which will bring you
from negative to positive most rapidly will be orthogonal to the surface.</p>

<p>The gradient of \( f(x, y, z) \) is written as \( \nabla f \). You can
calculate it like so:</p>

<div>$$
\nabla f = \left(
    \frac{\partial f}{\partial x},
    \frac{\partial f}{\partial y},
    \frac{\partial f}{\partial z}
\right)
$$</div>

<p>But there&rsquo;s no need to break out the calculus chops here. Instead of taking the
real derivative of the function, we&rsquo;ll approximate it by sampling points around
the point on the surface, much like how you learned to calculate the slope of a
function as rise-over-run before you learned how to take derivatives.</p>

<div>$$
\vec n = \begin{bmatrix}
    f(x + \varepsilon, y, z) - f(x - \varepsilon, y, z) \\
    f(x, y + \varepsilon, z) - f(x, y - \varepsilon, z) \\
    f(x, y, z + \varepsilon) - f(x, y, z - \varepsilon)
\end{bmatrix}
$$</div>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="cm">/**
</span><span class="cm"> * Using the gradient of the SDF, estimate the normal on the surface at point p.
</span><span class="cm"> */</span>
<span class="k">vec3</span> <span class="n">estimateNormal</span><span class="p">(</span><span class="k">vec3</span> <span class="n">p</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">return</span> <span class="n">normalize</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span>
        <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span> <span class="o">+</span> <span class="n">EPSILON</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span><span class="p">))</span> <span class="o">-</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span> <span class="o">-</span> <span class="n">EPSILON</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span><span class="p">)),</span>
        <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span> <span class="o">+</span> <span class="n">EPSILON</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span><span class="p">))</span> <span class="o">-</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span> <span class="o">-</span> <span class="n">EPSILON</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span><span class="p">)),</span>
        <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span>  <span class="o">+</span> <span class="n">EPSILON</span><span class="p">))</span> <span class="o">-</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span><span class="p">(</span><span class="n">p</span><span class="p">.</span><span class="n">x</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">y</span><span class="p">,</span> <span class="n">p</span><span class="p">.</span><span class="n">z</span> <span class="o">-</span> <span class="n">EPSILON</span><span class="p">))</span>
    <span class="p">));</span>
<span class="p">}</span>
</code></pre></div>
<p>Armed with this knowledge, we can calculate the normal at any point on the
surface, and use it to apply lighting with the <a href="https://en.wikipedia.org/wiki/Phong_reflection_model">Phong reflection model</a>
and two lights. That gives us this:</p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/lt33z7?gui=true&t=7.45&paused=true&muted=false">
</iframe>

<p>By default, all of the animated shaders in this post are paused to prevent them
from making your computer sound like a jet taking off. Hover over a shader and
hit play to see its animated effects.</p>
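<p>If you&rsquo;re curious how a Phong calculation might be structured in this
setting, here&rsquo;s a sketch of the contribution from a single light (the function
and parameter names are mine: <code>k_d</code> and <code>k_s</code> are the
diffuse and specular colors, <code>alpha</code> the shininess; see the
Shadertoy above for the full version):</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">vec3 phongContribForLight(vec3 k_d, vec3 k_s, float alpha, vec3 p, vec3 eye,
                          vec3 lightPos, vec3 lightIntensity) {
    vec3 N = estimateNormal(p);
    vec3 L = normalize(lightPos - p);
    vec3 V = normalize(eye - p);
    vec3 R = normalize(reflect(-L, N));

    float dotLN = dot(L, N);
    float dotRV = dot(R, V);

    if (dotLN &lt; 0.0) {
        // The light is on the other side of the surface: no contribution
        return vec3(0.0, 0.0, 0.0);
    }
    if (dotRV &lt; 0.0) {
        // The reflection points away from the viewer: diffuse term only
        return lightIntensity * (k_d * dotLN);
    }
    return lightIntensity * (k_d * dotLN + k_s * pow(dotRV, alpha));
}
</code></pre></div>
<p>Summing this over each light (plus an ambient term) gives the shaded sphere
above.</p>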

<h1 id="moving-the-camera">Moving the Camera</h1>

<p>I won&rsquo;t dwell on this too long, because this solution isn&rsquo;t unique to
ray marching. Just as in raytracing, for transformations on the camera, you
transform the view ray via transformation matrices to position and rotate the
camera. If that doesn&rsquo;t mean anything to you, you&rsquo;ll want to work through the
<a href="http://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/how-does-it-work">ray tracing tutorial on scratchapixel.com</a>, or perhaps check out <a href="http://www.codinglabs.net/article_world_view_projection_matrix.aspx">this blog
post on codinglabs.net</a>.</p>

<p>Figuring out how to orient the camera based on a series of translations and
rotations isn&rsquo;t always terribly intuitive though. A much nicer way to think
about it is &ldquo;I want the camera to be at this point, looking at this other
point.&rdquo; This is exactly what <a href="https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml"><code>gluLookAt</code></a> is for in OpenGL.</p>

<p>Inside a shader, we can&rsquo;t use that function, but we can look at the <code>man</code> page
by running <code>man gluLookAt</code>, take a peek at how it calculates its own
transformation matrix, and then make our own in GLSL.</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="cm">/**
</span><span class="cm"> * Return a transformation matrix that will transform a ray from view space
</span><span class="cm"> * to world coordinates, given the eye point, the camera target, and an up vector.
</span><span class="cm"> *
</span><span class="cm"> * This assumes that the center of the camera is aligned with the negative z axis in
</span><span class="cm"> * view space when calculating the ray marching direction.
</span><span class="cm"> */</span>
<span class="n">mat4</span> <span class="n">viewMatrix</span><span class="p">(</span><span class="k">vec3</span> <span class="n">eye</span><span class="p">,</span> <span class="k">vec3</span> <span class="n">center</span><span class="p">,</span> <span class="k">vec3</span> <span class="n">up</span><span class="p">)</span> <span class="p">{</span>
	<span class="k">vec3</span> <span class="n">f</span> <span class="o">=</span> <span class="n">normalize</span><span class="p">(</span><span class="n">center</span> <span class="o">-</span> <span class="n">eye</span><span class="p">);</span>
	<span class="k">vec3</span> <span class="n">s</span> <span class="o">=</span> <span class="n">normalize</span><span class="p">(</span><span class="n">cross</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">up</span><span class="p">));</span>
	<span class="k">vec3</span> <span class="n">u</span> <span class="o">=</span> <span class="n">cross</span><span class="p">(</span><span class="n">s</span><span class="p">,</span> <span class="n">f</span><span class="p">);</span>
	<span class="k">return</span> <span class="n">mat4</span><span class="p">(</span>
		<span class="k">vec4</span><span class="p">(</span><span class="n">s</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">),</span>
		<span class="k">vec4</span><span class="p">(</span><span class="n">u</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">),</span>
		<span class="k">vec4</span><span class="p">(</span><span class="o">-</span><span class="n">f</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">),</span>
		<span class="k">vec4</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
	<span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>Since spheres look the same from all angles, I&rsquo;m switching to a cube here.
Placing the camera at \( (8, 5, 7) \) and pointing it at the origin using our
new <code>viewMatrix</code> function, we now have this:</p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/Xtd3z7?gui=true&t=0&paused=true&muted=false">
</iframe>
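
<p>To see how this fits together, here&rsquo;s a sketch of the relevant part of the
main image function (<code>rayDirection</code> is the helper sketched earlier;
<code>MIN_DIST</code> and <code>MAX_DIST</code> are assumed constants bounding
the march):</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">vec3 viewDir = rayDirection(45.0, iResolution.xy, fragCoord);
vec3 eye = vec3(8.0, 5.0, 7.0);

mat4 viewToWorld = viewMatrix(eye, vec3(0.0), vec3(0.0, 1.0, 0.0));

// Transform the ray direction from view space to world space. The w
// component is 0.0 because directions should rotate but not translate.
vec3 worldDir = (viewToWorld * vec4(viewDir, 0.0)).xyz;

float dist = shortestDistanceToSurface(eye, worldDir, MIN_DIST, MAX_DIST);
</code></pre></div>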

<h1 id="constructive-solid-geometry">Constructive Solid Geometry</h1>

<p>Constructive solid geometry, or CSG for short, is a method of creating complex
geometric shapes from simple ones via boolean operations. This diagram from
Wikipedia shows what&rsquo;s possible with the technique:</p>

<figure>
<img src="/images/16-07-11/csg.png">
<figcaption>From "Constructive solid geometry" on Wikipedia</figcaption>
</figure>

<p>CSG is built on 3 primitive operations: intersection ( \( \cap \) ), union (
\( \cup \) ), and difference ( \( - \) ).</p>

<p>It turns out these operations can all be expressed concisely when the two
surfaces being combined are modeled as SDFs.</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">intersectSDF</span><span class="p">(</span><span class="k">float</span> <span class="n">distA</span><span class="p">,</span> <span class="k">float</span> <span class="n">distB</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">return</span> <span class="n">max</span><span class="p">(</span><span class="n">distA</span><span class="p">,</span> <span class="n">distB</span><span class="p">);</span>
<span class="p">}</span>

<span class="k">float</span> <span class="n">unionSDF</span><span class="p">(</span><span class="k">float</span> <span class="n">distA</span><span class="p">,</span> <span class="k">float</span> <span class="n">distB</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">return</span> <span class="n">min</span><span class="p">(</span><span class="n">distA</span><span class="p">,</span> <span class="n">distB</span><span class="p">);</span>
<span class="p">}</span>

<span class="k">float</span> <span class="n">differenceSDF</span><span class="p">(</span><span class="k">float</span> <span class="n">distA</span><span class="p">,</span> <span class="k">float</span> <span class="n">distB</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">return</span> <span class="n">max</span><span class="p">(</span><span class="n">distA</span><span class="p">,</span> <span class="o">-</span><span class="n">distB</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>If you set up a scene like this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span> <span class="n">samplePoint</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">1.2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>
    <span class="k">float</span> <span class="n">cubeDist</span> <span class="o">=</span> <span class="n">cubeSDF</span><span class="p">(</span><span class="n">samplePoint</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>
    <span class="k">return</span> <span class="n">intersectSDF</span><span class="p">(</span><span class="n">cubeDist</span><span class="p">,</span> <span class="n">sphereDist</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>Then you get something like this (see the section below on scaling for where
the division and multiplication by 1.2 on the sphere line comes from).</p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/MttGz7?gui=true&t=0&paused=true&muted=false">
</iframe>

<p>In this same Shadertoy, you can play around with the union and difference
operations too if you edit the code.</p>

<p>It&rsquo;s interesting to consider the SDF produced by these binary operations to try
to build an intuition for why they work.</p>

<div>$$ \begin{aligned}
sceneSDF(\vec p) &= intersectSDF(cube(\vec p), sphere(\vec p)) \\
                 &= max(cube(\vec p), sphere(\vec p))
\end{aligned}
$$</div>

<p>Remember that the region where an SDF is negative represents the interior of the
surface. For the above intersection, the \( sceneSDF \) can only be negative
if <em>both</em> \( cube(\vec p) \) and \( sphere(\vec p) \) are negative, which
means we only consider a point inside the scene surface if it&rsquo;s inside both
the cube and the sphere. That&rsquo;s exactly the definition of CSG intersection!</p>

<p>The same kind of logic applies to union. If either function is negative, the
resulting scene SDF will be negative, and therefore inside the surface.</p>

<div>$$ \begin{aligned}
sceneSDF(\vec p) &= unionSDF(cube(\vec p), sphere(\vec p)) \\
                 &= min(cube(\vec p), sphere(\vec p))
\end{aligned}
$$</div>

<p>The difference operation was the trickiest for me to wrap my head around.</p>

<div>$$ \begin{aligned}
sceneSDF(\vec p) &= differenceSDF(cube(\vec p), sphere(\vec p)) \\
                 &= max(cube(\vec p), -sphere(\vec p))
\end{aligned}
$$</div>

<p>What does the negation of an SDF mean?</p>

<p>If you think again about what the negative and positive region of the SDF mean,
you can see that the negative of an SDF is an inversion of the inside and
outside of a surface. Everything that was considered inside the surface is now
considered outside and vice versa.</p>

<p>This means you can consider the difference to be the intersection of the first
SDF and the <em>inversion</em> of the second SDF. So the resulting scene SDF is only
negative when the first SDF is negative and the second SDF is positive.</p>

<p>Switching back to geometric terms, that means that we&rsquo;re inside the scene
surface if and only if we&rsquo;re inside the first surface and outside the second
surface &ndash; exactly the definition of CSG difference!</p>
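
<p>That reasoning translates directly into code: difference is just intersection
with the second SDF negated. The following reformulation behaves identically to
<code>differenceSDF</code> above, and makes the relationship explicit:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">float differenceSDF(float distA, float distB) {
    // Intersect surface A with the inversion of surface B:
    // max(distA, -distB)
    return intersectSDF(distA, -distB);
}
</code></pre></div>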

<h1 id="model-transformations">Model Transformations</h1>

<p>Being able to move the camera gives us some flexibility, but being able to move
individual parts of the scene independently of one another certainly gives a lot
more. Let&rsquo;s explore how to do that.</p>

<h2 id="rotation-and-translation">Rotation and Translation</h2>

<p>To translate or rotate a surface modeled as an SDF, you can apply the inverse
transformation to the point before evaluating the SDF.</p>
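
<p>For example, to get a unit sphere centered at <code>offset</code> rather than
the origin, evaluate the sphere&rsquo;s SDF at the point translated back by
<code>offset</code> (a sketch; <code>translatedSphereSDF</code> is my name for it):</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">float translatedSphereSDF(vec3 p, vec3 offset) {
    // Apply the inverse of "translate by offset" to the sample point
    return sphereSDF(p - offset);
}
</code></pre></div>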

<p>Just as you can apply different transformations to different meshes, you can
apply different transformations to different parts of the SDF &ndash; just send
the transformed point only to the part of the SDF you&rsquo;re interested in. For instance,
to make the cube bob up and down, leaving the sphere in place, but still taking
the intersection, you can do this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span> <span class="n">samplePoint</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">1.2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>
    <span class="k">float</span> <span class="n">cubeDist</span> <span class="o">=</span> <span class="n">cubeSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">+</span> <span class="k">vec3</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">sin</span><span class="p">(</span><span class="n">iGlobalTime</span><span class="p">),</span> <span class="mf">0.0</span><span class="p">));</span>
    <span class="k">return</span> <span class="n">intersectSDF</span><span class="p">(</span><span class="n">cubeDist</span><span class="p">,</span> <span class="n">sphereDist</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p><em>Shadertoy reference note: iGlobalTime is a uniform variable set by Shadertoy
that contains the number of seconds since playback started.</em></p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/XtcGWn?gui=true&t=17&paused=true&muted=false" 
allowfullscreen></iframe>

<p>If you do transformations like this, is the resulting function still a signed
distance field? For rotation and translation, it is, because they are &ldquo;rigid
body transformations&rdquo;, meaning they preserve the distances between points.</p>

<p>More generally, you can apply any rigid body transformation by multiplying the
sampled point by the inverse of your transformation matrix.</p>

<p>For instance, if I wanted to apply a rotation matrix, I could do this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="n">mat4</span> <span class="n">rotateY</span><span class="p">(</span><span class="k">float</span> <span class="n">theta</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">float</span> <span class="n">c</span> <span class="o">=</span> <span class="n">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">);</span>
    <span class="k">float</span> <span class="n">s</span> <span class="o">=</span> <span class="n">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">);</span>

    <span class="k">return</span> <span class="n">mat4</span><span class="p">(</span>
        <span class="k">vec4</span><span class="p">(</span><span class="n">c</span><span class="p">,</span> <span class="mo">0</span><span class="p">,</span> <span class="n">s</span><span class="p">,</span> <span class="mo">0</span><span class="p">),</span>
        <span class="k">vec4</span><span class="p">(</span><span class="mo">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mo">0</span><span class="p">,</span> <span class="mo">0</span><span class="p">),</span>
        <span class="k">vec4</span><span class="p">(</span><span class="o">-</span><span class="n">s</span><span class="p">,</span> <span class="mo">0</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="mo">0</span><span class="p">),</span>
        <span class="k">vec4</span><span class="p">(</span><span class="mo">0</span><span class="p">,</span> <span class="mo">0</span><span class="p">,</span> <span class="mo">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
    <span class="p">);</span>
<span class="p">}</span>

<span class="k">float</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span> <span class="n">samplePoint</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">1.2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>

    <span class="k">vec3</span> <span class="n">cubePoint</span> <span class="o">=</span> <span class="p">(</span><span class="n">invert</span><span class="p">(</span><span class="n">rotateY</span><span class="p">(</span><span class="n">iGlobalTime</span><span class="p">))</span> <span class="o">*</span> <span class="k">vec4</span><span class="p">(</span><span class="n">samplePoint</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">)).</span><span class="n">xyz</span><span class="p">;</span>

    <span class="k">float</span> <span class="n">cubeDist</span> <span class="o">=</span> <span class="n">cubeSDF</span><span class="p">(</span><span class="n">cubePoint</span><span class="p">);</span>
    <span class="k">return</span> <span class="n">intersectSDF</span><span class="p">(</span><span class="n">cubeDist</span><span class="p">,</span> <span class="n">sphereDist</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>&hellip;but if you&rsquo;re using WebGL, there&rsquo;s no built-in matrix inversion routine
(<code>inverse</code>) in GLSL ES 1.0, so you can instead apply the opposite
transform directly. The above scene function changes to the equivalent:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sceneSDF</span><span class="p">(</span><span class="k">vec3</span> <span class="n">samplePoint</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">1.2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>

    <span class="k">vec3</span> <span class="n">cubePoint</span> <span class="o">=</span> <span class="p">(</span><span class="n">rotateY</span><span class="p">(</span><span class="o">-</span><span class="n">iGlobalTime</span><span class="p">)</span> <span class="o">*</span> <span class="k">vec4</span><span class="p">(</span><span class="n">samplePoint</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">)).</span><span class="n">xyz</span><span class="p">;</span>

    <span class="k">float</span> <span class="n">cubeDist</span> <span class="o">=</span> <span class="n">cubeSDF</span><span class="p">(</span><span class="n">cubePoint</span><span class="p">);</span>
    <span class="k">return</span> <span class="n">intersectSDF</span><span class="p">(</span><span class="n">cubeDist</span><span class="p">,</span> <span class="n">sphereDist</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div>
<p>For more transformation matrices, refer to any intro to graphics textbook, or
check out these slides: <a href="http://web.cs.wpi.edu/~emmanuel/courses/cs543/slides/lecture04_p1.pdf">3D Affine transforms</a>.</p>

<h2 id="uniform-scaling">Uniform Scaling</h2>

<p>Okay, let&rsquo;s get back to this weird scaling trick we glossed over before:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">1.2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">1.2</span><span class="p">;</span>
</code></pre></div>
<p>The division by 1.2 is scaling the sphere up by 1.2x (remember that we apply the
<em>inverse</em> transform to the point before sending it to the SDF). But why do we
multiply by the scaling factor afterwards? Let&rsquo;s examine doubling the size for
the sake of simplicity.</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mi">2</span><span class="p">)</span> <span class="o">*</span> <span class="mi">2</span><span class="p">;</span>
</code></pre></div>
<p>Scaling is not a rigid body transformation &ndash; it doesn&rsquo;t preserve the distance
between points. If we transform \( (0, 0, 1) \) and \( (0, 0, 2) \) by
dividing them by 2 (which results in a uniform upscaling of the model), the
distance between the points shrinks from 1 to 0.5.</p>

<div>$$ \begin{aligned}
||(0, 0, 1) - (0, 0, 2)|| &= 1 \\
||(0, 0, 1) - (0, 0, 0.5)|| &= 0.5
\end{aligned} $$</div>

<p>So when we sample our scaled point in <code>sphereSDF</code>, we&rsquo;d end up getting back
<em>half</em> the distance that the point really is from the surface of our transformed
sphere.  The multiplication at the end is to compensate for this distortion.</p>

<p>Interestingly, if we try this out in a shader, and use no scale correction, or a
smaller value for scale correction, the exact same thing is rendered. Why?</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="c1">// All of the following result in an equivalent image</span>
<span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mi">2</span><span class="p">)</span> <span class="o">*</span> <span class="mi">2</span><span class="p">;</span>
<span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mi">2</span><span class="p">);</span>
<span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mi">2</span><span class="p">)</span> <span class="o">*</span> <span class="mf">0.5</span><span class="p">;</span>
</code></pre></div>
<p>Notice that regardless of how we scale the SDF, the <em>sign</em> of the distance
returned stays the same. The <em>sign</em> part of &ldquo;signed distance field&rdquo; is still
working, but the <em>distance</em> part is now lying.</p>

<p>To see why this is a problem, we need to re-examine how the ray marching
algorithm works.</p>

<p><img src="/images/16-07-11/spheretrace.jpg"></p>

<p>Recall that at every step of the ray marching algorithm, we want to move a
distance along the view ray equal to the shortest distance to the surface. We
predict that shortest distance using the SDF. For the algorithm to be <em>fast</em>, we
want those steps to be as large as possible, but if we undershoot, the
algorithm still <em>works</em>, it just requires more iterations.</p>

<p>But if we <em>overestimate</em> distance, we have a real problem. If we try to scale
down the model without correction, like this:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">sphereDist</span> <span class="o">=</span> <span class="n">sphereSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="mf">0.5</span><span class="p">);</span>
</code></pre></div>
<p>Then the sphere disappears completely. If we overestimate the distance, our
raymarching algorithm might step <em>past</em> the surface, never finding it.</p>

<p>For any SDF, we can safely scale it uniformly like so:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">dist</span> <span class="o">=</span> <span class="n">someSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="n">scalingFactor</span><span class="p">)</span> <span class="o">*</span> <span class="n">scalingFactor</span><span class="p">;</span>
</code></pre></div>
<h2 id="non-uniform-scaling-and-beyond">Non-uniform scaling and beyond</h2>

<p>If we want to scale a model non-uniformly, how can we safely avoid the distance
overestimation problem described in the scaling section above? Unlike in uniform
scaling, we can&rsquo;t exactly compensate for the distance distortion caused by the
transform. It was possible in uniform scaling because all dimensions were scaled
equally, so regardless of where the closest point on the surface to the sampling
point is, the scaling compensation will be the same.</p>

<p>But for non-uniform scaling, we need to know where the closest point on the
surface is to know how much to correct the distance by.</p>

<p>To see why this is the case, consider the SDF for the unit sphere, scaled to
half its size along the x axis, with its other dimensions preserved.</p>

<div>$$
sphereSDF(x, y, z) = \sqrt{(2x)^2 + y^2 + z^2} - 1
$$</div>

<p>If we evaluate the SDF at \( (0, 2, 0) \), we get back a distance of 1 unit.
This is correct: the closest point on the surface of the sphere is \( (0, 1, 0)
\). But if we evaluate at \( (2, 0, 0) \), we get back a distance of 3 units,
which is not right. The closest point on the surface is \( (0.5, 0, 0) \),
yielding a world-coordinate distance of 1.5 units.</p>

<p>So, just as in uniform scaling, we need to correct the distance returned by the
SDF to avoid overestimating the distance, but by how much? The overestimation
factor varies depending on where the point is and where the surface is.</p>

<p>Since it&rsquo;s usually okay to underestimate the distance, we can just multiply by
the smallest scaling factor, like so:</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl"><span class="k">float</span> <span class="n">dist</span> <span class="o">=</span> <span class="n">someSDF</span><span class="p">(</span><span class="n">samplePoint</span> <span class="o">/</span> <span class="k">vec3</span><span class="p">(</span><span class="n">s_x</span><span class="p">,</span> <span class="n">s_y</span><span class="p">,</span> <span class="n">s_z</span><span class="p">))</span> <span class="o">*</span> <span class="n">min</span><span class="p">(</span><span class="n">s_x</span><span class="p">,</span> <span class="n">min</span><span class="p">(</span><span class="n">s_y</span><span class="p">,</span> <span class="n">s_z</span><span class="p">));</span>
</code></pre></div>
<p>The principle for other non-rigid transformations is the same: as long as the
sign is preserved by the transformation, you just need to figure out some
compensation factor to ensure that you&rsquo;re never overestimating the distance to
the surface.</p>
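
<p>As a concrete sketch, here&rsquo;s the unit sphere stretched into an ellipsoid
with per-axis scale factors, using the smallest factor as the conservative
distance correction (the name and the particular factors are mine):</p>
<div class="highlight"><pre><code class="language-glsl" data-lang="glsl">float ellipsoidSDF(vec3 p) {
    // Scale factors along x, y, and z (arbitrary values for illustration)
    vec3 s = vec3(2.0, 1.0, 0.5);
    // Dividing the sample point scales the model; multiplying by the
    // smallest factor guarantees we never overestimate the distance.
    return sphereSDF(p / s) * min(s.x, min(s.y, s.z));
}
</code></pre></div>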

<h1 id="putting-it-all-together">Putting it all together</h1>

<p>With the primitives in this post, you can now create some pretty interesting,
complex scenes. Combine those with the simple trick of using the normal vector
as the ambient/diffuse component of the material, and you can create something
like the shader at the start of the post. Here it is again.</p>

<iframe width="640" height="360" frameborder="0" 
src="https://www.shadertoy.com/embed/4tcGDr?gui=true&t=3.44&paused=true&muted=false" 
allowfullscreen></iframe>

<h1 id="references">References</h1>

<p>There is <em>way</em> more to learn about rendering signed distance functions. One of
the most prolific writers on the subject is <a href="http://iquilezles.org/">Inigo Quilez</a>. I learned most
of the content of this post from reading his website and his shader code. He&rsquo;s
also one of the co-creators of Shadertoy.</p>

<p>Some of the interesting SDF related material from his site that I didn&rsquo;t cover
at all includes <a href="http://www.iquilezles.org/www/articles/smin/smin.htm">smooth blending between surfaces</a> and <a href="http://www.iquilezles.org/www/articles/rmshadows/rmshadows.htm">soft shadows</a>.</p>

<p>Other references:</p>

<ul>
<li><a href="https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-8-pixel-displacement-mapping-distance-functions">GPU Gems 2: Chapter 8. Per-Pixel Displacement Mapping with Distance
Functions</a></li>
<li><a href="https://www.shadertoy.com/view/XsB3Rm">Raymarching Sample Code on Shadertoy</a></li>
<li><a href="http://iquilezles.org/www/articles/distfunctions/distfunctions.htm">Modeling with Distance Functions</a></li>
</ul>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/katex.min.js"></script>

<script src="https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.6.0/contrib/auto-render.min.js"></script>

<script>
renderMathInElement(document.body);
</script>
 ]]></content>
  </entry>
  
  <entry>
    <title><![CDATA[ The Monoculture and Me]]></title>
    <link href="http://jamie-wong.com/post/monoculture-and-me/"/>
    <updated>2016-07-10T00:00:00+00:00</updated>
    <id>http://jamie-wong.com/post/monoculture-and-me/</id>
    <content type="html"><![CDATA[ 

<figure>
<img src="/images/monoculture/haagen-dazs.jpg">
<figcaption>Photo credit <a href="https://www.sfgate.com/author/katie-dowd/">Katie Dowd</a></figcaption>
</figure>

<p><em>Note from Dec, 2019: This is a piece I wrote in 2016 while between jobs, originally posted as a note on Facebook. There&rsquo;s a pretty bitter tone underlying this. I don&rsquo;t feel quite the same way any more, but it was a pretty honest representation of how I felt in 2016.</em></p>

<p>I’m on the Caltrain back to Mountain View following a Friday night karaoke session. I hear a guy and a girl across the aisle two rows up talking the standard Silicon Valley talk. Startup equity, interns, and rent. “I’m just out of college, so I can’t take a job just on equity — what am I going to tell my landlord? I’ll give you 1% of the 20% of $0, ten years from now?” They laugh and talk, and I hear the same conversation topics I’ve heard cycled hundreds of times.
I find myself pulled away from reading my book, unable to stop eavesdropping on this conversation that irritates me so much.</p>

<hr/>

<p>I’m trailing behind my dad’s rattling shopping cart in Grade 11, absorbed in my copy of “The Programming Contest Training Manual”. I’m excited to sponge up the secrets of the book and apply them to the next monthly contest, clawing my way up the ranks of the couple dozen young Ontarians who compete in algorithm contests. When the contest rolls around, I work alongside the only three other people I know with a firm grasp of what “algorithm” means. We rank top in Ottawa, failing only to best our competitors in Toronto.
I’m ecstatic, and slowly come down from the adrenaline high of the timed competition.</p>

<hr/>

<p>I’m sitting at my dining room table in Mountain View, chatting with a friend from out of town. “Why do you all listen to the same music?” he asks. The conversation meanders to hobbies, and onto the standard trio among the techies who make up most of our friends and acquaintances: rock climbing, biking, and weight lifting. “Doesn’t anyone write stories or poetry?” our friend demands. The Bay Area residents at the table ponder the question, and we simultaneously shake our heads.</p>

<hr/>

<p>I gather with friends in the undergraduate software engineering lab, which is a glorified seating area with an abundance of outlets and a distinct lack of natural light. Having been <em>the</em> computer kid among my high school friends, I’m now excited to share my knowledge and code editor configuration with my newfound freshman comrades. I talk at length with a new friend, preaching the virtues of a command line interface over GUIs. I set up a server for my classmates to experiment on. I write frequently on my blog, tailoring each post to the audience of my classmates. I discover a security vulnerability in a friend’s side project and they’re fascinated to hear about it. I spend 9 hours a week training with a classmate to represent my university in an international programming competition (on our C team). I teach a classmate graph algorithms as we walk to and from class.
I’m energized by my peers and savour every technical detail I gather from them.</p>

<hr/>

<p>I’m at a friend’s house party in San Francisco, and catch up with a few classmates I’ve seen every few months since our graduation over a year ago. I introduce myself to a few other party attendees and ask where they work, secretly hoping they aren’t engineers. Infrastructure at Uber. Or maybe it was Machine Learning at Twitter. Or perhaps Web Developer at Square. I try to steer the conversation by asking what they do outside of work, hoping to avoid this conversation ending with commiseration over JavaScript. They list some combination of rock climbing, biking, weight lifting, drinking with friends, and TV.</p>

<hr/>

<p>I’m lounging at the back of a first year calculus lecture, casually checking my application status for my first internship. The previously “pending” status surprisingly now reads “selected”. I nearly jump out of my seat in the middle of our professor’s explanation of something or other in terms of a parachuting grizzly bear. I marvel at the possibility of achieving my goal of interning at Google, and doing it on my first internship! My friends propose that my middle initials LF now stand for “Lucky Fucker”.</p>

<hr/>

<p>I’m walking towards the dull, rising roar of a day concert far enough away to hear but not see. I’m back after a several month break from life as a new grad software engineer. The roar slowly evolves into voices as I approach the entrance to a Google-employee-and-friends-only concert headlined by OK Go. I pass through the gates with a friend, donning a free bracelet yielding free alcohol at the free concert. I see a sea of engineers swaying a little to the music, decorated with t-shirts advertising a plethora of Google projects.
I was looking forward to coming back to California after months abroad, but this was a little too much Valley for me at once. It seemed only fitting that I followed a Tesla with an Apple sticker and the license plate “1984MAC” on the way to the venue.</p>

<hr/>

<p>The shift was slow: from wide-eyed aspiration to learn everything I could about technology, to disdain for those who want to discuss programming at mealtime. Instead of taking pride in my computing abilities, I now shy away from them when talking to friends and acquaintances, and change the subject.</p>

<p>When I complain about the monoculture, when I list the density of engineers as a primary reason for wanting to leave the Bay Area in 3-4 years, when I judge every person I don’t know who talks tech on public transit or in coffee shops, I judge myself. Because I’m that engineer at the gym lifting weights. I’m that stranger at the house party that my conversation partner hopes is not a developer. I’m the guy talking excitedly about the HTC Vive on the Caltrain while others groan because they’re overhearing a conversation they’ve heard before.
I miss being the top ranked competitor in Ottawa in a competition nobody in my school had even heard of. I miss having easy-to-pursue, obvious aspirations like working the Google dream job. I miss being <em>the</em> computer guy among my friends.</p>

<p>So when I see Red Bull ads specifically targeting programmers, <a href="https://www.flickr.com/photos/yourdon/23155535556">billboards for Twilio reading “Ask your developer”</a>, or <a href="https://www.sfgate.com/bayarea/article/Haagen-Dazs-ads-BART-tech-ice-cream-7461906.php">Häagen-Dazs ads featuring JavaScript</a>, I cringe, because it reminds me that I am not a special snowflake.</p>
 ]]></content>
  </entry>
  
</feed>
