Hacker News
https://news.ycombinator.com/
Links for the intellectually curious, ranked by readers.

---

Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares
http://vmls-book.stanford.edu/

This book is used as the textbook for the courses [EE103](http://www.stanford.edu/class/ee103) (Stanford) and [EE133A](http://www.seas.ucla.edu/~vandenbe/ee133a.html) (UCLA), where you will find additional related material.

If you find an error not listed in our [errata list](http://vmls-book.stanford.edu/vmls_errata.html), please do let us know about it.

[Stephen Boyd](http://www.stanford.edu/~boyd/) & [Lieven Vandenberghe](http://www.ee.ucla.edu/~vandenbe/)

## Download

Copyright in this book is held by Cambridge University Press, who have kindly agreed to allow us to keep the book available on the web.

## Additional material

You're welcome to use the lecture slides posted below, but we'd appreciate it if you acknowledge the source.

## Catalog links

Fri, 14 Dec 2018 03:48:32 +0000

---

Virgin Galactic successfully reaches space
https://www.bbc.com/news/business-46550862

[Image: Virgin Galactic's plane will be carried to a height of 12,000m before its rocket ignites. Image copyright: Virgin Galactic]

The latest test flight by Sir Richard Branson's Virgin Galactic successfully rocketed to the edge of space and back.
The firm's SpaceShipTwo passenger rocket ship reached a height of 82.7km, beyond the altitude at which US agencies have awarded astronaut wings. It marked the plane's fourth test flight and followed earlier setbacks in the firm's space programme.

Sir Richard is in a race with Elon Musk and Jeff Bezos to send the first fee-paying passengers into space. He founded the commercial spaceflight company in 2004, shortly after Mr Musk started SpaceX and Jeff Bezos established Blue Origin.

In 2008, Virgin Galactic first promised that sub-orbital spaceflight trips for tourists would be taking place "within 18 months". It has since regularly made similar promises to have space flights airborne in the near future. But delays and a fatal crash in 2014 prevented Sir Richard's original ambitions.

On Thursday, the SpaceShipTwo passenger rocket ship took off from the Mojave Desert in California. The company said the spaceship's motor burned for 60 seconds, travelling at 2.9 times the speed of sound as it gained height. The rocket carried two pilots and a mannequin named Annie as a stand-in passenger, as well as four research experiments for NASA.

"Today we have shown Virgin Galactic can open space to the world," Sir Richard said.

The US government has awarded astronaut wings to pilots who ventured farther than roughly 80km beyond Earth's surface. But Thursday's flight did not breach the 100km Karman Line, which many organisations use to resolve debates about where space begins.

While the trip marked a milestone for Virgin Galactic, the firm's rivals have already ventured farther, albeit without humans on board. SpaceX, in partnership with NASA, is planning crewed missions for early next year. Mr Bezos has also said Blue Origin plans to send its first crew to space in 2019.

Virgin Galactic, which is charging $250,000 for a 90-minute flight, has said more than 600 people have bought tickets or put down deposits for an eventual voyage.

Thu, 13 Dec 2018 17:33:51 +0000

---

RISC-V Will Stop Hackers?
https://hackaday.com/2018/12/13/risc-v-will-stop-hackers-dead-from-getting-into-your-computer/

The greatest hardware hacks of all time were simply the result of finding software keys in memory. The AACS encryption debacle — the 09 F9 key that allowed us to decrypt HD DVDs — was the result of encryption keys just sitting in main memory, where they could be read by any other program. DeCSS, the hack that gave us all access to DVDs, was again the result of encryption keys sitting out in the open.
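To make that failure mode concrete: a key held in ordinary process memory is just bytes, and anything that can read the address space can find it with a brute-force pattern scan. A toy sketch in C# (the buffer, the offset, and the use of the key bytes here are made up for illustration; a real attack would read another process's memory, but the principle is identical):

```csharp
using System;

class KeyScanDemo
{
    static void Main()
    {
        // A toy stand-in for a process's heap.
        byte[] memory = new byte[1 << 20];

        // Some application stashes its decryption key in the clear
        // (these are the first bytes of the famous 09 F9 AACS key).
        byte[] key = { 0x09, 0xF9, 0x11, 0x02, 0x9D, 0x74, 0xE3, 0x5B };
        key.CopyTo(memory, 123_456);

        // A scanner that knows roughly what a key looks like just searches for it.
        for (int i = 0; i + key.Length <= memory.Length; i++)
        {
            if (memory.AsSpan(i, key.Length).SequenceEqual(key))
            {
                Console.WriteLine($"Found key material at offset {i}");
                break;
            }
        }
    }
}
```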
Because encryption doesn't work if your keys are just sitting out in the open, system designers have come up with ingenious solutions to prevent evil hackers from accessing these keys. One of the best solutions is the hardware enclave, a tiny bit of silicon that protects keys and other bits of information. Apple has an entire line of chips, Intel has hardware extensions, and all of these are black box solutions. They do work, but we have no idea if there are any vulnerabilities. If you can't study it, it's just an article of faith that these hardware enclaves will keep working.

Now, there might be another option. RISC-V researchers are [busy creating an Open Source hardware enclave](https://keystone-enclave.org/). This is an Open Source project to build secure hardware enclaves to store cryptographic keys and other secret information, and they're doing it in a way that can be accessed and studied. Trust but verify, yes, and that's why this is the most innovative hardware development in the last decade.

## What is an enclave?

Although they seem like a new technology, processor enclaves have been around for ages. The first one to reach the public consciousness would be the Secure Enclave Processor (SEP) found in the iPhone 5S. This generation of iPhone introduced several important technological advancements, including Touch ID, the innovative and revolutionary M7 motion coprocessor, and the SEP security coprocessor itself. The iPhone 5S was a technological milestone, and the then-new SEP stored fingerprint data and cryptographic keys beyond the reach of the actual SoC found in the iPhone.

The iPhone 5S SEP was designed to perform secure services for the rest of the SoC, primarily relating to the Touch ID functionality. Apple's revolutionary use of a secure enclave processor was extended with the 2016 release of the Touch Bar MacBook Pro and the use of the Apple T1 chip. The T1 chip was again used for Touch ID functionality, and demonstrates that Apple is the king of vertical integration.

But Apple isn't the only company working on secure enclaves for its computing products. [Intel has developed the SGX extension](https://software.intel.com/sgx), which allows for hardware-assisted security enclaves. These enclaves give developers the ability to hide cryptographic keys and the components for digital rights management inside a hardware-protected bit of silicon. AMD, too, has hardware enclaves with Secure Encrypted Virtualization (SEV). ARM has Trusted Execution Environments. While the Intel, AMD, and ARM enclaves are bits of silicon on other bits of silicon — distinct from Apple's approach of putting a hardware enclave on an entirely new chip — the idea remains the same. You want to put secure stuff in secure environments, and enclaves allow you to do that.

Unfortunately, these hardware enclaves are black boxes, and while they do present a small attack surface, there are problems. AMD's SEV [is already known to have serious security weaknesses](https://arxiv.org/pdf/1805.09604.pdf), and it is believed SEV does not offer protection from malicious hypervisors, only from accidental hypervisor vulnerabilities. Intel's Management Engine, while not explicitly a hardware enclave, [has been shown to be vulnerable to attack](https://hackaday.com/2017/05/02/is-intels-management-engine-broken/). The problem is that these hardware enclaves are black boxes, and security through obscurity does not work at all.

## The Open Source Solution

At last week's RISC-V Summit, researchers at UC Berkeley [released their plans for the Keystone Enclave, an Open Source secure enclave based on RISC-V](https://keystone-enclave.org/files/keystone-risc-v-summit.pdf) (PDF).
Keystone is a project to build a Trusted Execution Environment (TEE) with secure hardware enclaves based on the RISC-V architecture, the same architecture that's going into completely Open Source microcontrollers and (soon) Systems on a Chip.

[Figure: Keystone overview]

The goals of the Keystone project are to build a chain of trust, starting from a silicon Root of Trust stored in tamper-proof hardware. This leads to a zeroth-stage bootloader and a tamper-proof platform key store. Defining a hardware Root of Trust (RoT) is exceptionally difficult: you can always decapsulate silicon, you can always perform some sort of analysis on a chip to extract keys, and if your supply chain isn't managed well, you have no idea if the chip you're basing your RoT on is actually the chip in your computer.
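As a rough illustration of what a "chain of trust" means in practice (this is a generic sketch of measured boot, not Keystone's actual design): each boot stage hashes the next stage before handing over control, so the final measurement commits to every link in the chain, and a verifier can compare it against an expected value.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class MeasuredBootSketch
{
    // Extend a running measurement with the hash of the next boot stage,
    // mirroring how a measurement register is extended:
    // newMeasurement = H(oldMeasurement || H(stage)).
    static byte[] Extend(byte[] measurement, byte[] stageImage)
    {
        using var sha = SHA256.Create();
        byte[] stageHash = sha.ComputeHash(stageImage);
        byte[] combined = new byte[measurement.Length + stageHash.Length];
        measurement.CopyTo(combined, 0);
        stageHash.CopyTo(combined, measurement.Length);
        return sha.ComputeHash(combined);
    }

    static void Main()
    {
        byte[] measurement = new byte[32]; // the root of trust starts at a known value

        // Hypothetical stage images; in real hardware these would be the
        // zeroth-stage bootloader, firmware, kernel, and so on.
        foreach (var stage in new[] { "bootloader-0", "firmware", "kernel" })
            measurement = Extend(measurement, Encoding.UTF8.GetBytes(stage));

        // Any change to any stage changes this final value, which is what
        // an attestation scheme reports to a remote verifier.
        Console.WriteLine(Convert.ToHexString(measurement));
    }
}
```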
However, by using RISC-V and its Open Source HDL, this RoT can at least be studied, unlike the black box solutions from Intel, AMD, and ARM.

The current plans for Keystone include memory isolation, an open framework for building on top of this security enclave, and a minimal but Open Source solution for a security enclave.

[Figure: Trusted]

Right now, the Keystone Enclave is testable on various platforms, including QEMU, FireSim, and real hardware with the SiFive Unleashed. There's still much work to do, from formal verification to building out the software stack, libraries, and hardware extensions.

This is a game changer for security. Silicon vendors and designers have been shoehorning hardware enclaves into processors for nearly a decade now, and Apple has gone so far as to create its own enclave chips. All of these solutions are black boxes, and there is no third-party verification that these designs are inherently correct.
The RISC-V project is different, and the Keystone Enclave is the best chance we have for creating a truly open hardware enclave that can be studied and verified independently.

Fri, 14 Dec 2018 02:27:28 +0000 Brian Benchoff

---

DevHub: TweetDeck for GitHub – Android, iOS and Web with 99% Code Sharing
https://github.com/devhubapp/devhub

**DevHub**: TweetDeck for GitHub [BETA]
Android, [iOS](https://itunes.apple.com/br/app/devhub-for-github/id1191864199?l=en&mt=8) & [Web](https://devhubapp.com/) with **99% code sharing** between them
[devhubapp.com](https://devhubapp.com/)

[Screenshots: DevHub Desktop; DevHub Mobile (Events, Event Filters, Notification Filters)]

## Why

DevHub helps you take back control of your GitHub workflow and stay on top of everything important going on.

### Features

- **Columns layout**: Like TweetDeck, you can see at a quick glance everything that is going on; made for power users;
- **Inbox Zero**: Clear all the seen items and keep your mind clean; archived items will be moved to a separate place;
- **Filters**: Apply different filters to each column; remove all the noise and make them show just what you want;
- **Enhanced notifications**: See all the relevant information before opening the notification, like issue/pull request status, comment content, release description, etc.;
- **Sanely watch repositories**: Keep up to date with repositories' activities without using the `watch` feature, so your notifications don't get cluttered;
- **Stalker mode**: Follow user activities without using the `follow` button and see activities that GitHub doesn't show on your feed, like issue comments and pushed commits;
- **Dashboard spier**: See other users' home screen (their GitHub dashboard) so you can discover new interesting people and repositories;
- **Save for later**: Save any activity or notification for later, so you don't forget to get back to them;
- **Theme support**: Choose between 6 light or dark themes;
- **And more!**: Native apps, keyboard shortcuts, open source, modern tech stack, ...

### Next features

> Which one do you want first? Any other recommendations? Search [existing issues](https://github.com/devhubapp/devhub/issues) or open a new one.

## GitHub Enterprise

This is a paid feature. If you are interested, please contact us via e-mail: enterprise@devhubapp.com.
You can also always contact me via Twitter ([@brunolemos](https://twitter.com/brunolemos)) or [open an issue](https://github.com/devhubapp/devhub/issues) here.

## Keyboard shortcuts

| Key | Action | Implemented |
| --- | --- | --- |
| `Esc` | Close current open modal | ✅ |
| `a`, `n` | Add a new column | ✅ |
| `1`...`9` | Go to the `nth` column | ✅ |
| `0` | Go to the last column | ✅ |
| `j`, `k` | Move down/up inside a column | [Contribute](https://github.com/devhubapp/devhub/blob/6157822c7723c85e11bf4bd781656a0204f81ab2/packages/components/src/screens/MainScreen.tsx#L94-L145) |
| `s` | Toggle save item for later | [Contribute](https://github.com/devhubapp/devhub/blob/fbe728fb106712092df1341aba5fdf12807e1f11/packages/components/src/components/cards/partials/NotificationCardHeader.tsx#L125-L133) |
| `Arrow keys` + `Space` | Focus on elements and press things | [Contribute](https://github.com/devhubapp/devhub/pulls) |
| `Alt + Arrow keys` | Move current column | [Contribute](https://github.com/devhubapp/devhub/pulls) |

## Tech Stack
## Contributing

Pull Requests, bug reports and feature requests are more than welcome!

> If the feature is big, please open an issue first for discussion.

### Running it locally

- `git clone git@github.com:devhubapp/devhub.git`
- `yarn`
- `yarn dev`

That's it. It will start three workers: the `TypeScript compilation watcher`, the `Web server` (create-react-app) and the `Mobile server` (react-native packager). The browser will open automatically.

To open the mobile projects, use:

> Note: See License below. For example, you are allowed to use this locally, but not allowed to distribute the changed app to other people or remove its paid features, if any.

## Author

Follow me on Twitter: [@brunolemos](https://twitter.com/brunolemos)
## License

Copyright (c) 2018 [Bruno Lemos](https://twitter.com/brunolemos).

This project is provided as is, without any warranties. By using this app you agree with its [privacy](https://github.com/devhubapp/devhub/blob/master/PRIVACY.md) policy and the [license](https://github.com/devhubapp/devhub/blob/master/LICENSE.md) below:

- ✅ You are encouraged to use, share and submit pull requests with improvements;
- ✅ You are allowed to use the official hosted version ([devhubapp.com](https://devhubapp.com/)) at your company or for commercial projects;
- ✅ You are allowed to use the source code for personal non-commercial purposes only, like studying or contributing;
- 🚫 You are not allowed to distribute this app anywhere, nor changed versions of it, including but not limited to the Apple Store, Google Play or the Web; changes to the source code can only be used locally, taking into consideration the other points of this license;
- 🚫 You are not allowed to charge people for this app, nor bypass its paid features, if any.

Thu, 13 Dec 2018 23:21:35 +0000

---

Exploring the .NET Core Runtime
http://www.mattwarren.org/2018/12/13/Exploring-the-.NET-Core-Runtime/

If you're looking to support a charity this Christmas, consider helping out the Book Trust.

13 Dec 2018 - 1984 words

It seems like this time of year anyone with a blog is doing some sort of 'advent calendar', i.e. 24 posts leading up to Christmas. For instance there's an [F# one](https://sergeytihon.com/2018/10/22/f-advent-calendar-in-english-2018/) which inspired a [C# one](https://crosscuttingconcerns.com/The-Second-Annual-C-Advent) (*C# copying from F#, that never happens* 😉)

However, that's a bit of a problem for me: I struggled to write 24 posts [in my most productive year](http://mattwarren.org/postsByYear/#2016-ref), let alone a single month!
Also, I mostly blog about ['.NET Internals'](http://mattwarren.org/tags/#Internals), a subject which doesn't necessarily lend itself to the more '*light-hearted*' posts you get in these 'advent calendar' blogs.

**Until now!**

Recently I've been giving a talk titled **from 'dotnet run' to 'hello world'**, which attempts to explain everything that the .NET Runtime does from the point you launch your application till "Hello World" is printed on the screen.

But as I was researching and presenting this talk, it made me think about the *.NET Runtime* as a whole: [*what does it contain*](http://mattwarren.org/2017/03/23/Hitchhikers-Guide-to-the-CoreCLR-Source-Code/#high-level-overview) and, most importantly, **what can you do with it**?

**Note:** this is mostly for *informational* purposes; for the *recommended way* of achieving the same thing, take a look at this excellent [Deep-dive into .NET Core primitives](https://natemcmaster.com/blog/2017/12/21/netcore-primitives/) by [Nate McMaster](https://twitter.com/natemcmaster).

In this post I will explore what you can do **using only the code in the [dotnet/coreclr](https://github.com/dotnet/coreclr) repository**, and along the way we'll find out more about how the runtime interacts with the wider [.NET Ecosystem](https://dotnet.microsoft.com/).

To make things clearer, there are **3 challenges** that will need to be solved before a simple "Hello World" application can be run. That's because in the [dotnet/coreclr](https://github.com/dotnet/coreclr) repository there is:

1. No **compiler**; that lives in [dotnet/Roslyn](https://github.com/dotnet/roslyn/)
2. No **Framework Class Library (FCL)**, a.k.a. [dotnet/CoreFX](https://github.com/dotnet/corefx)
3. No `dotnet run`, as it's implemented in the [dotnet/CLI](https://github.com/dotnet/cli/tree/release/2.2.2xx/src/dotnet/commands/dotnet-run) repository

## Building the CoreCLR

But before we even work through these challenges, we need to build the CoreCLR itself. Helpfully there is a really nice guide available in ['Building the Repository'](https://github.com/dotnet/coreclr#building-the-repository):

> The build depends on Git, CMake, Python and of course a C++ compiler. Once these prerequisites are installed, the build is simply a matter of invoking the 'build' script (`build.cmd` or `build.sh`) at the base of the repository.
>
> The details of installing the components differ depending on the operating system. See the following pages based on your OS. There is no cross-building across OS (only for ARM, which is built on X64).
> You have to be on the particular platform to build that platform.

If you follow these steps successfully, you'll end up with the following files (at least on Windows; other OSes may produce something slightly different):

[Image: CoreCLR Build Artifacts]

## No Compiler

First up, how do we get around the fact that we don't have a compiler? After all, we need some way of turning our simple "Hello World" code into a .exe:

```csharp
namespace Hello_World
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```

Fortunately we do have access to the [ILASM tool (IL Assembler)](https://github.com/dotnet/coreclr/tree/master/src/ilasm), which can turn [Common Intermediate Language (CIL)](https://en.wikipedia.org/wiki/Common_Intermediate_Language) into an .exe file. But how do we get the correct IL code? Well, one way is to write it from scratch, maybe after reading [Inside NET IL Assembler](https://amzn.to/2QPpiTY) and [Expert .NET 2.0 IL Assembler](https://amzn.to/2Ca34UI) by Serge Lidin (yes, amazingly, 2 books have been written about IL!)

Another, much easier way is to use the amazing [SharpLab.io site](https://sharplab.io/) to do it for us! If you paste the C# code from above into it, you'll [get the following IL code](https://sharplab.io/#v2:EYLgtghgzgLgpgJwDQxASwDYB8ACAGAAhwEYBuAWACgqA7CMOKABwgGM4CAJODDAewD6AdT4IMAEyoBvKgTlEATEWIB2WfJmV525QDYiAFgIBZCGhoAKEngDaAXQIQEAcygBKdToKavXkgE4LACJuXj4CETFxAEIgtwotXwBfTwIUyiSgA==):

```
.class private auto ansi '<Module>'
{
} // end of class <Module>

.class private auto ansi beforefieldinit Hello_World.Program
    extends [mscorlib]System.Object
{
    // Methods
    .method private hidebysig static void Main (string[] args) cil managed
    {
        // Method begins at RVA 0x2050
        // Code size 11 (0xb)
        .maxstack 8
        IL_0000: ldstr "Hello World!"
        IL_0005: call void [mscorlib]System.Console::WriteLine(string)
        IL_000a: ret
    } // end of method Program::Main

    .method public hidebysig specialname rtspecialname instance void .ctor () cil managed
    {
        // Method begins at RVA 0x205c
        // Code size 7 (0x7)
        .maxstack 8
        IL_0000: ldarg.0
        IL_0001: call instance void [mscorlib]System.Object::.ctor()
        IL_0006: ret
    } // end of method Program::.ctor
} // end of class Hello_World.Program
```

Then, if we save this to a file called 'HelloWorld.il' and run the cmd `ilasm HelloWorld.il /out=HelloWorld.exe`, we get the following output:

```
Microsoft (R) .NET Framework IL Assembler.  Version 4.5.30319.0
Copyright (c) Microsoft Corporation.  All rights reserved.
Assembling 'HelloWorld.il'  to EXE --> 'HelloWorld.exe'
Source file is ANSI

HelloWorld.il(38) : warning : Reference to undeclared extern assembly 'mscorlib'. Attempting autodetect
Assembled method Hello_World.Program::Main
Assembled method Hello_World.Program::.ctor
Creating PE file

Emitting classes:
Class 1:        Hello_World.Program

Emitting fields and methods:
Global
Class 1 Methods: 2;

Emitting events and properties:
Global
Class 1
Writing PE file
Operation completed successfully
```

**Nice, so part 1 is done, we now have our `HelloWorld.exe` file!**

## No Base Class Library

Well, not exactly. One problem is that `System.Console` lives in [dotnet/corefx](https://github.com/dotnet/corefx/tree/release/2.2/src/System.Console/src/System); in there you can see the different files that make up the implementation, such as `Console.cs`, `ConsolePal.Unix.cs`, `ConsolePal.Windows.cs`, etc.

Fortunately, the nice CoreCLR developers included a simple `Console` implementation in `System.Private.CoreLib.dll`, the [managed part of the CoreCLR](https://github.com/dotnet/coreclr/tree/master/src/System.Private.CoreLib), which was previously known as ['mscorlib'](https://github.com/dotnet/coreclr/tree/release/2.2/src/mscorlib) (before it [was renamed](https://github.com/dotnet/coreclr/pull/17926)). This internal version of `Console` is [pretty small and basic](https://github.com/dotnet/coreclr/blob/master/src/System.Private.CoreLib/src/Internal/Console.cs), but it provides enough for what we need.

To use this 'workaround' we need to edit our `HelloWorld.il` to look like this (note the change from `mscorlib` to `System.Private.CoreLib`):

```
.class public auto ansi beforefieldinit C
    extends [System.Private.CoreLib]System.Object
{
    .method public hidebysig static void M () cil managed
    {
        .entrypoint
        // Code size 11 (0xb)
        .maxstack 8
        IL_0000: ldstr "Hello World!"
        IL_0005: call void [System.Private.CoreLib]Internal.Console::WriteLine(string)
        IL_000a: ret
    } // end of method C::M
    ...
}
```

**Note:** You can achieve the same thing with C# code instead of raw IL, by invoking the C# compiler with the following command line:

```
csc -optimize+ -nostdlib -reference:System.Private.Corelib.dll -out:HelloWorld.exe HelloWorld.cs
```

**So we've completed part 2: we are able to at least print "Hello World" to the screen without using the CoreFX repository!**

Now this is a nice little trick, but I wouldn't ever recommend writing real code like this. Compiling against `System.Private.CoreLib` isn't the right way of doing things. What the compiler normally does is compile against the publicly exposed surface area that lives in [dotnet/corefx](https://github.com/dotnet/corefx), but then at run-time a process called ['Type-Forwarding'](https://docs.microsoft.com/en-us/dotnet/framework/app-domains/type-forwarding-in-the-common-language-runtime) is used to make that 'reference' implementation in CoreFX map to the 'real' implementation in the CoreCLR. For more on this entire process see [The Rough History of Referenced Assemblies](https://blog.lextudio.com/the-rough-history-of-referenced-assemblies-7d752d92c18c).

However, only a [small amount of managed code](http://mattwarren.org/2017/03/23/Hitchhikers-Guide-to-the-CoreCLR-Source-Code/#high-level-overview) (i.e. C#) actually exists in the CoreCLR. To show this, the directory tree for [/dotnet/coreclr/src/System.Private.CoreLib](https://github.com/dotnet/coreclr/tree/master/src/System.Private.CoreLib) is [available here](https://gist.github.com/mattwarren/6b36567b51e3adca6c1ca684e72b8f6f) and the tree with all ~1280 .cs files included is [here](https://gist.github.com/mattwarren/abc4e194b71e78eb9fa5a550a379a0a1).

As a concrete example, if you look in CoreFX, you'll see that the [System.Reflection implementation](https://github.com/dotnet/corefx/tree/master/src/System.Reflection/src) is pretty empty! That's because it's a 'partial facade' that is eventually ['type-forwarded' to System.Private.CoreLib](https://github.com/dotnet/corefx/blob/release/2.2/src/System.Reflection.Emit/src/System.Reflection.Emit.csproj#L19).

If you're interested, the entire API that is exposed in CoreFX (but actually lives in CoreCLR) is [contained in System.Runtime.cs](https://github.com/dotnet/corefx/blob/master/src/System.Runtime/ref/System.Runtime.cs).
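The forwarding mechanism itself is exposed to C# as an assembly-level attribute. As a minimal sketch (the `Widget` type here is made up for illustration), a facade assembly that used to define a type can redirect all references to the implementation assembly that now owns it:

```csharp
using System.Runtime.CompilerServices;

// Placed in the facade assembly, which references the implementation assembly.
// Any code compiled against the facade's 'Widget' type is resolved, at
// run-time, to the 'Widget' that actually lives in the implementation assembly.
[assembly: TypeForwardedTo(typeof(Widget))]
```

This is the same trick the CoreFX 'reference' assemblies use to hand types like `System.Object` off to `System.Private.CoreLib` at run-time.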
But back to our example: here is the code that describes all the [`GetMethod(..)` functions](https://github.com/dotnet/corefx/blob/master/src/System.Runtime/ref/System.Runtime.cs#L3035-L3048) in the 'System.Reflection' API.

To learn more about 'type forwarding', I recommend watching ['.NET Standard - Under the Hood'](https://www.youtube.com/watch?v=vg6nR7hS2lI) ([slides](https://www.slideshare.net/terrajobst/net-standard-under-the-hood)) by [Immo Landwerth](https://twitter.com/terrajobst), and there is also some more in-depth information in ['Evolution of design time assemblies'](https://github.com/dotnet/standard/blob/master/docs/history/evolution-of-design-time-assemblies.md).

**But why is this code split useful?** From the [CoreFX README](https://github.com/dotnet/corefx#net-core-libraries-corefx):

> **Runtime-specific library code** ([mscorlib](https://github.com/dotnet/coreclr/tree/master/src/System.Private.CoreLib)) lives in the CoreCLR repo. It needs to be built and versioned in tandem with the runtime. The rest of CoreFX is **agnostic of runtime-implementation and can be run on any compatible .NET runtime** (e.g. [CoreRT](https://github.com/dotnet/corert)).

And from the other point of view, in the [CoreCLR README](https://github.com/dotnet/coreclr#relationship-with-the-corefx-repository):

> By itself, the `Microsoft.NETCore.Runtime.CoreCLR` package is actually not enough to do much. One reason for this is that the CoreCLR package tries to minimize the amount of the class library that it implements. **Only types that have a strong dependency on the internal workings of the runtime are included** (e.g., `System.Object`, `System.String`, `System.Threading.Thread`, `System.Threading.Tasks.Task` and most foundational interfaces).
>
> Instead, most of the class library is implemented as independent NuGet packages that simply use the .NET Core runtime as a dependency. Many of the most familiar classes (`System.Collections`, `System.IO`, `System.Xml` and so on) live in packages defined in the [dotnet/corefx](https://github.com/dotnet/corefx) repository.

One **huge benefit** of this approach is that [Mono](https://www.mono-project.com/) can share [large amounts of the CoreFX code](https://mobile.twitter.com/matthewwarren/status/987292012520067072), as shown in this tweet.

## No Launcher

So far we've 'compiled' our code (well, technically 'assembled' it) and we've been able to access a simple version of `System.Console`, but how do we actually run our `.exe`?
Remember, we can't use the `dotnet run` command because that lives in the [dotnet/CLI](https://github.com/dotnet/cli/tree/release/2.2.2xx/src/dotnet/commands/dotnet-run) repository (and that would be breaking the rules of this *slightly contrived* challenge!!).

Again, fortunately those clever runtime engineers have thought of this exact scenario and built the very helpful `corerun` application. You can read more about it in [Using corerun To Run .NET Core Application](https://github.com/dotnet/coreclr/blob/master/Documentation/workflow/UsingCoreRun.md), but the tl;dr is that it will only look for dependencies in the same folder as your .exe.

So, to complete the challenge, we can now run `CoreRun HelloWorld.exe`:

```
# CoreRun HelloWorld.exe
Hello World!
```

**Yay, the least impressive demo you'll see this year!!**

For more information on how you can 'host' the CLR in your application, I recommend this excellent tutorial: [Write a custom .NET Core host to control the .NET runtime from your native code](https://docs.microsoft.com/en-us/dotnet/core/tutorials/netcore-hosting). In addition, the docs page on ['Runtime Hosts'](https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/a51xd4ze(v=vs.100)) gives a nice overview of the different hosts that are available:

> The .NET Framework ships with a number of different runtime hosts, including the hosts listed in the following table.

| Runtime Host | Description |
| --- | --- |
| ASP.NET | Loads the runtime into the process that is to handle the Web request. ASP.NET also creates an application domain for each Web application that will run on a Web server. |
| Microsoft Internet Explorer | Creates application domains in which to run managed controls. The .NET Framework supports the download and execution of browser-based controls. The runtime interfaces with the extensibility mechanism of Microsoft Internet Explorer through a MIME filter to create application domains in which to run the managed controls. By default, one application domain is created for each Web site. |
| Shell executables | Invokes runtime hosting code to transfer control to the runtime each time an executable is launched from the shell. |

Thu, 13 Dec 2018 20:30:36 +0000

---

FLI: A binary interface to let Scheme use Python, Lua, Ruby etc's Library
https://github.com/guenchi/FLI

A binary interface that lets Scheme use libraries from Python, Lua, Ruby, etc.

This project is inspired by the Julia language. The FFI interface provided by Chez Scheme is used to embed the interpreter or JIT compiler of other languages into the Scheme program (CPython, LuaJIT, etc.) or to link the compiled object code with the C binary interface (OCaml, Golang, etc.).

For now, PyCall is runnable.
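The embedding trick is not Scheme-specific: any language with a C-level FFI can host CPython the same way. A hedged sketch of the idea in C#, using CPython's documented embedding entry points (the library name `python311` is an assumption; it varies by platform and Python version, e.g. `python311.dll` on Windows or `libpython3.11.so` on Linux):

```csharp
using System.Runtime.InteropServices;

class EmbedPython
{
    // Adjust to the Python shared library installed locally (assumption).
    const string PyLib = "python311";

    // CPython's C-API embedding functions, bound via P/Invoke.
    [DllImport(PyLib)] static extern void Py_Initialize();
    [DllImport(PyLib)] static extern int PyRun_SimpleString(string code);
    [DllImport(PyLib)] static extern void Py_Finalize();

    static void Main()
    {
        Py_Initialize();                                      // start the interpreter
        PyRun_SimpleString("import math; print(math.sqrt(2))"); // run Python code in-process
        Py_Finalize();                                        // shut it down
    }
}
```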
The next step is to do further packaging work, making cross-language library calls more convenient and simple.

Implementation priority: Python > Julia > JavaScript > OCaml

Sources:

- https://github.com/JuliaPy/PyCall.jl/blob/master/src/PyCall.jl
- https://docs.python.org/2.5/ext/callingPython.html
- http://www.linux-nantes.org/~fmonnier/OCaml/ocaml-wrapping-c.html
- http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual033.html#htoc281

Fri, 14 Dec 2018 01:42:51 +0000

---

Neural Ordinary Differential Equations
https://arxiv.org/abs/1806.07366

(Submitted on 19 Jun 2018 ([v1](https://arxiv.org/abs/1806.07366v1)), last revised 22 Oct 2018 (this version, v3))

From: Ricky T. Q. Chen
[v1] Tue, 19 Jun 2018 17:50:12 UTC (4,832 KB)
[v2] Wed, 3 Oct 2018 00:13:07 UTC (4,912 KB)
[v3] Mon, 22 Oct 2018 22:06:50 UTC (4,912 KB)

Thu, 13 Dec 2018 23:09:12 +0000

---

Robinhood launches 3% checking account
https://techcrunch.com/2018/12/13/robinhood-free-checking-and-savings-accounts/

Robinhood is undercutting the big banks by forgoing brick-and-mortar branches with its new zero-fee checking and savings account features. With no overdraft or monthly fees, a juicy 3 percent interest rate and a claim of more U.S. ATMs than the five biggest banks combined, Robinhood is using the scalability of software to pass impressive perks on to customers. The free stock trading app already used that approach to attack brokers like E*Trade and Charles Schwab that charge a per-trade fee. Now it's breaking into the larger financial services market with a model that could put the squeeze on Wells Fargo, Chase and Bank of America.

Today Robinhood [launches](https://blog.robinhood.com/news/2018/12/13/introducing-robinhood-checking-amp-savings) checking and savings accounts in the U.S. with a Mastercard debit card issued through Sutton Bank that starts shipping December 18th. Users earn 3 percent on all the dough they keep with Robinhood, yet there's no minimum balance or fees for monthly membership, overdrafts, foreign transactions or card replacements. That's a pretty sweet deal compared to the other leading banks, which all charge for some of that or offer much lower interest rates. The trade-off is that while customers get 24/7 live text chat support, they won't be able to walk into a local bank branch.
Users who want early access can [sign up here](https://checking.robinhood.com/).

Robinhood expects to turn a profit thanks to a lean 300-employee operation, earning a margin on investing your money in U.S. treasuries and a revenue share with Mastercard on interchange fees charged to merchants when you swipe. The launch could be critical to keeping Robinhood worthy of its $5.6 billion valuation from when it took a $363 million Series D in March, just a year after raising at a $1.3 billion valuation. The 6 million-user app invested in launching a free cryptocurrency trading exchange early this year, only to see coin prices plummet and mainstream interest fall off. But with banks hammering users with surprise fees and mediocre user experience, there's a huge opportunity for a mobile-first startup to disrupt how we store money.

"Brick-and-mortar locations are costly. Our goal with this product was to build a completely digital experience so we can reduce our overhead and pass more of the value back to customers," Robinhood co-CEO Baiju Bhatt tells me. [Disclosure: I know Bhatt and co-CEO Vlad Tenev from college.] "Savings accounts in the U.S. pay on average 0.09 percent and we all know the banks are making far more than that from the deposits. With Robinhood you earn 3 percent off all of your money. Mental math is hard, so if you look at the median U.S. household that has about $8,000 in liquid savings, they'd earn $240 a year."

[Image: Robinhood checking & savings fee comparison chart]

Robinhood will be sending invites to users in January for the new feature, which they can use exclusively or alongside their existing bank. Anyone approved to use Robinhood's stock brokerage is eligible, but users can also sign up directly for checking and savings with no obligation to trade stocks. Robinhood claims signing up won't impact your credit score. Users get to customize a Robinhood-branded debit card that's accepted wherever Mastercard is. Because the feature is run within Robinhood's brokerage, it's insured by the SIPC instead of the FDIC, but you still get the same insurance on up to $250,000 cash. [Update: A lot of people are confused and think no FDIC means you're not insured. That's wrong. If a market downturn causes a decline in value of the treasury securities Robinhood invests your checking/savings money in to earn interest, you're insured up to $250,000 and will get your money back. Only money you personally gamble on stocks is at risk, as is standard.]

One of the most appealing features of Robinhood checking and savings is getting access to 75,000 free-to-use ATMs in places like Target, Walgreens and 7-Eleven. Users won't be able to tell just by looking at an ATM whether it's in the network, but the Robinhood app features a map for finding the nearest one. You can deposit checks via Robinhood's app too, and if you need to send a check, you can just tell the startup how much to deliver to whom and it will mail the check for you.

"These fees like overdraft fees — they're not fees millionaires are paying.
It’s ordinary folks paying. It’s actually more expensive for those that have less money and it’s cheaper for those that have more money. We think that isn’t right and we think that’s bad business,” Bhatt gripes. Because Robinhood built its own clearing house for moving money, and it lacks the overhead of traditional banks, it’s able to save enough money to make its no-fee structure work. “We want to build a financial services company that democratizes America’s financial system.”</p> <p><img class="aligncenter size-large wp-image-1758489" src="https://techcrunch.com/wp-content/uploads/2018/12/Robinhood-debit-card.png?w=680" alt="" width="680" height="453"/></p> <p>Robinhood will have to convince users it’s worthy of their trust, as a security breach could be disastrous. There’s also the question of whether people are ready to ditch their bank branch. “Behaviors about and going into a branch are definitely changing,” says Bhatt. My biggest concern was not having any consistency in who I talk to when I need banking help. Bhatt tells me the company plans to roll out more personalized customer service features in the coming months, but there may always be edge cases that make the lack of in-person support annoying.</p> <p>Getting into banking could open a lucrative revenue stream for Robinhood as it charts its path to IPO. The startup <a href="https://techcrunch.com/2018/11/27/robinhood-hires-20-year-amazon-veteran-to-cfo-role-as-high-flying-startup-eyes-ipo/">recently hired Jason Warnick</a>, a 20-year veteran of Amazon, to be its CFO and get it prepped to go public. Wall Street will want to see a more robust business that’s not as vulnerable to foes like stock brokerage Charles Schwab, which is already lowering fees to stay competitive with Robinhood. Not only will checking and savings see users move more money into their Robinhood accounts that it can invest to earn a profit, but it also poises the startup to tackle more financial services in the future. More lucrative products like loans could make paying 3 percent much easier for Robinhood to handle.</p> <span/> Thu, 13 Dec 2018 15:31:09 +0000 https://techcrunch.com/2018/12/13/robinhood-free-checking-and-savings-accounts/ Rate-distortion optimization https://fgiesen.wordpress.com/2018/12/10/rate-distortion-optimization/ https://fgiesen.wordpress.com/2018/12/10/rate-distortion-optimization/ <p>“Rate-distortion optimization” is a term in lossy compression that sounds way more complicated and advanced than it actually is. There’s an excellent chance that by the end of this post you’ll go “wait, that’s it?”. If so, great! My goal here is just to explain the basic idea, show how it plays out in practice, and maybe get you thinking about other applications. So let’s get started!</p> <h3>What does “rate-distortion optimization” actually mean?</h3> <p>The mission statement for lossless data compression is pretty clear. A lossless compressor transforms one blob of data into another one, ideally (but not always) smaller, in a way that is reversible: there’s another transform (the decoder) that, when applied to the second blob, will return an <em>exact</em> copy of the original data.</p> <p>Lossy compression is another matter. 
The output of the decoder is usually at least slightly different from the original data, and sometimes very different; it’s almost always possible to take an existing lossily-compressed piece of data, say a short video or MP3 file, make a slightly smaller copy by literally deleting a hundred bytes from the middle of the file with a hex editor, and get another version of the original video (or audio) file that is still clearly recognizable yet degraded. Seriously, if you’ve never done this before, try it, especially with MP3s: it’s quite hard to mangle an MP3 file in a way that will make it not play back anymore, because MP3s have no required file-level headers, tolerate arbitrary junk in the middle of the bitstream, and are self-synchronizing.</p> <p>With lossless compression, it makes sense to ask “what is the smallest I can make this file?”. With lossy compression, less so; you can generally get files far smaller than you would ever want to use, because the data is degraded so much it’s barely recognizable (if that). Minimizing file size alone isn’t interesting; we want to minimize size while simultaneously maximizing the quality of the result. The way we do this is by coming up with some error metric that measures the distance between the original image and the result the decoder will actually produce given a candidate bitstream. Now our bitstream has two associated values: its length in bits, the (bit) <em>rate</em>, usually called R in formulas, and a measure of how much error was introduced as a result of the lossy coding process, the <em>distortion</em>, or D in formulas. R is almost always measured in bits or bytes; D can be in one of many units, depending on what type of error metric is used.</p> <p>Rate-distortion optimization (RDO for short) then means that an encoder considers several possible candidate bit streams, evaluates their rate and distortion, and tries to make an optimal choice; if possible, we’d like it to be <em>globally</em> optimal (i.e. returning a true best-possible solution), but at the very least optimal among the candidates that were considered. Sounds great, but what does “better” mean here? Given a pair <img src="https://s0.wp.com/latex.php?latex=%28R_1%2CD_1%29&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="(R_1,D_1)" title="(R_1,D_1)" class="latex"/> and another pair <img src="https://s0.wp.com/latex.php?latex=%28R_2%2CD_2%29&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="(R_2,D_2)" title="(R_2,D_2)" class="latex"/>, how do we tell which one is better?</p> <h3>The pareto frontier</h3> <p>Suppose we have 8 possible candidate solutions we are considering, each with their own rate and distortion scores. If we prepare a scatter plot of rate on the x-axis versus distortion on the y-axis, we might get something like this:</p> <p><img src="https://fgiesen.files.wordpress.com/2018/12/rdo_scatter.png?w=497" alt="Rate vs. Distortion scatterplot" class="alignnone size-full wp-image-7110" width="452" height="280"/></p> <p>The thin, light-gray candidates aren’t very interesting, because what they have in common is that there is at least one other candidate solution that is strictly better than them in every way. That is, some other candidate point has the same (or lower) rate, and also the same (or lower) distortion as the grayed-out points. There’s just no reason to ever pick any of those points, based on the criteria we’re considering, anyway. In the scatterplot, this means that there is at least one other point that is both to the left (lower rate) and below (lower distortion).</p> <p>For any of the points in the diagram, imagine a horizontal and a vertical line going through it, partitioning the plane into four quadrants. Any point that ends up in the lower-left quadrant (lower rate and lower distortion) is clearly superior. Likewise, any point in the upper-right quadrant (higher rate and higher distortion) is clearly inferior. The situation with the other two quadrants isn’t as clear.</p> <p>Which brings us to the other four points: the three fat black points, and the red point. These are all points that have no other points to the left and below them, meaning they are <a href="https://en.wikipedia.org/wiki/Pareto_efficiency">pareto efficient</a>. This means that, unlike the situation with the light gray points, we can’t pick another candidate that improves one of the metrics without making the other worse. The set of points that are pareto efficient constitutes the <em>pareto frontier</em>.</p> <p>These points are not all the same, though. The three fat black points are not just pareto efficient, but are also on the convex hull of the point set (the lower left contour of the convex hull here is drawn using blue lines), whereas the red point is not. The points that are both pareto efficient and on the convex hull of the pareto frontier are particularly important, and can be characterized in a different way.</p> <p>Namely, imagine taking a straightedge, angling it so that it’s either horizontal, “descending” (with reference to our graph) from left to right, or vertical, and then sliding it slowly “upwards” from the origin without changing its orientation until it hits one of the points in our set.
It will look something like this:</p> <p><img src="https://fgiesen.files.wordpress.com/2018/12/rdo_scatter2.png?w=497" alt="Rate vs. Distortion scatterplot with lines" class="alignnone size-full wp-image-7111" width="452" height="280"/></p> <p>The shading here is meant to suggest the motion of the green line; we keep sliding it up until it eventually catches on the middle of our three fat black points. If we change the angle of our line to something else, we can manage to hit the other two black points, but not the red one. This has nothing to do with this particular problem and is a general property of convex sets: any vertex of the set must be extremal in some direction.</p> <p>Getting a bit more precise here, the various copies of the green line I’ve drawn correspond to lines</p> <p><img src="https://s0.wp.com/latex.php?latex=w_1+R+%2B+w_2+D+%3D+%5Cmathrm%7Bconst.%7D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="w_1 R + w_2 D = \mathrm{const.}" title="w_1 R + w_2 D = \mathrm{const.}" class="latex"/></p> <p>and the constraints I gave on the orientation of the line boil down to <img src="https://s0.wp.com/latex.php?latex=w_1%2C+w_2+%5Cge+0&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="w_1, w_2 \ge 0" title="w_1, w_2 \ge 0" class="latex"/> (with at least one being nonzero). Sliding our straightedge until we hit a point corresponds to the minimization problem</p> <p><img src="https://s0.wp.com/latex.php?latex=%5Cmin_i+w_1+R_i+%2B+w_2+D_i&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="\min_i w_1 R_i + w_2 D_i" title="\min_i w_1 R_i + w_2 D_i" class="latex"/></p> <p>for a given choice of w<sub>1</sub> and w<sub>2</sub>, and the three black points are the three possible solutions we might get for our set of candidate points.
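</p> <p>To make the geometry concrete, here is a small Python sketch (my own illustration, with made-up (R, D) pairs rather than the values behind the plots): it applies the quadrant test to find the pareto-efficient candidates, and sweeps a few straightedge angles, i.e. choices of (w<sub>1</sub>, w<sub>2</sub>), to recover the hull vertices.</p> <pre>
# Made-up (rate, distortion) candidates; (3, 7) will play the "red point" role.
candidates = [(2, 9), (3, 7), (4, 4), (5, 6), (5, 3), (8, 2), (9, 4)]

def pareto_frontier(points):
    # The quadrant test: p survives unless some other point is at least as
    # good in both rate and distortion.
    return [p for p in points
            if not any(q != p and q[0] &lt;= p[0] and q[1] &lt;= p[1]
                       for q in points)]

def hull_vertices(points, weights=((1, 0.25), (1, 0.8), (1, 2), (1, 4))):
    # Each (w1, w2) pair is one angle of the straightedge; the minimizer of
    # w1*R + w2*D is always a vertex of the lower-left convex hull.
    return {min(points, key=lambda p: w1 * p[0] + w2 * p[1])
            for w1, w2 in weights}

print(pareto_frontier(candidates))        # includes (3, 7)
print(sorted(hull_vertices(candidates)))  # never includes (3, 7)
</pre> <p>With these numbers, (3, 7) is pareto efficient but is never the minimizer for any non-negative weighting, so it is exactly the red-point situation: no sweep of the straightedge can reach it.</p> <p>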
So we’ve switched from purely minimizing rate or purely minimizing distortion (both of which, in general, tend to give somewhat pathological results) towards minimizing some linear combination of the two with non-negative weights; and doing so with various choices of the weights w<sub>1</sub> and w<sub>2</sub> will allow us to sweep out the lower left convex hull of the pareto frontier, which is often the “interesting” part (although, as the red point in our example illustrates, this process excludes some of the points on the pareto frontier).</p> <p>This does not seem particularly impressive so far: we don’t want to purely minimize one quantity or the other, so instead we’re minimizing a linear combination of the two. That seems like it would’ve been the obvious next thing to try. But looking at the characterization above does at least give us some idea of what looking at these linear combinations ends up doing, and exactly what we end up giving up (namely, the pareto points not on the convex hull). And there’s another massively important aspect to consider here.</p> <h3>The Lagrangian connection</h3> <p>If we take our linear combination above and divide through by w<sub>1</sub> or w<sub>2</sub> (assuming they are non-zero; dividing our objective by a scalar constant does not change the results of the optimization problem), respectively, we get:</p> <p><img src="https://s0.wp.com/latex.php?latex=L_1+%3D+R+%2B+%5Cfrac%7Bw_2%7D%7Bw_1%7D+D+%3D%3A+R+%2B+%5Clambda_1+D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="L_1 = R + \frac{w_2}{w_1} D =: R + \lambda_1 D" title="L_1 = R + \frac{w_2}{w_1} D =: R + \lambda_1 D" class="latex"/><br/><img src="https://s0.wp.com/latex.php?latex=L_2+%3D+D+%2B+%5Cfrac%7Bw_1%7D%7Bw_2%7D+R+%3D%3A+D+%2B+%5Clambda_2+R&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="L_2 = D + \frac{w_1}{w_2} R =: D + \lambda_2 R" title="L_2 = D + \frac{w_1}{w_2} R =: D + \lambda_2 R" class="latex"/></p> <p>which are essentially the <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier">Lagrangians</a> we would get for continuous optimization problems of the form “minimize R subject to D=const.” (L<sub>1</sub>) and “minimize D subject to R=const.” (L<sub>2</sub>); that is, if we were in a continuously differentiable setting (for data compression we usually aren’t), trying to solve the problems of either minimizing bit rate while hitting a set distortion target or minimizing distortion within a given bit rate limit would lead us to study the same type of expression. Generalizing one step further, allowing not just equality but also inequality constraints (i.e. rate or distortion <em>at most</em> a certain value, rather than requiring an exact match) still leads to essentially the same functions, this time by way of the <a href="https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions">KKT conditions</a>.</p> <p>In this post, I intentionally went “backwards” by starting with the Lagrangian-esque expressions and then mentioning the connection to continuous optimization problems because I want to emphasize that this type of linear combination of the different metrics arises naturally, even in a fully discrete setting; starting out with Lagrange or KKT multipliers would get us into technical difficulties with discrete decisions that do not necessarily admit continuously differentiable objectives or constraint functions.
Since the whole process makes sense even without explicitly mentioning Lagrange multipliers at all, that seemed like the most natural way to handle it.</p> <h3>What this means in practice</h3> <p>I hopefully now have you convinced that looking at the minima of the linear combination</p> <p><img src="https://s0.wp.com/latex.php?latex=w_1+R+%2B+w_2+D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="w_1 R + w_2 D" title="w_1 R + w_2 D" class="latex"/></p> <p>is a sensible thing to do, and both our direct derivation and the two Lagrange multiplier formulations for continuous problems I mentioned lead us towards it. Neither our direct derivation nor the Lagrange multiplier version tells us what to set our mystery weight parameters to, however. In fact, the Lagrange multiplier method flat-out tells you that for every instance of your optimization problem, there <em>exist</em> the right values for the Lagrange multipliers that correspond to an optimum of the problem you’re interested in, but it doesn’t tell you how to get them.</p> <p>However, what’s nice is that the same thing also works in reverse, as we saw earlier with our line-sweeping procedure: picking the angle of the line we’re sliding around corresponds to picking a Lagrange multiplier. No matter which one we pick, we are going to end up finding an optimal point for <em>some</em> trade-off, namely one that is pareto; it just might not be the exact trade-off we wanted.</p> <p>For example, suppose we decide that a single-variable parameterization like in the Lagrange multiplier scenario is convenient, and we pick something like L<sub>1</sub>, namely w<sub>1</sub> fixed at 1, w<sub>2</sub> = λ. We were assuming from the beginning that we have some method of generating candidate solutions; all that’s left to do is to rank them and pick a winner. Let’s start with λ=1, which leads to a solution with some (R,D) pair that minimizes <img src="https://s0.wp.com/latex.php?latex=R+%2B+D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="R + D" title="R + D" class="latex"/>. Note it’s often useful to think of these quantities with units attached; we typically measure R in bits and [D] is whatever it is, so the unit of λ must be [λ] = bits/[D], i.e. λ is effectively an exchange rate that tells us how many units of distortion are worth as much as a single bit. We can then look at the R and D values of the solution we got back. If say we’re trying to hit a certain bit rate, then if R is close to that target, we might be happy and just stop. If R is way above the target bit rate, we overshot, and clearly need to penalize high distortions less; we might try another round with λ=0.1 next. Conversely, if say R is a few percent below the target rate, we might try another round with slightly higher lambda, say λ=1.02, trying to penalize distortion a bit more so we spend more bits to reduce it, and see if that gets us even closer.</p> <p>With this kind of process, even knowing nothing else about the problem, you can systematically explore the options along the pareto frontier and try to find a better fit. 
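</p> <p>For concreteness, here is what that search loop might look like as code: a toy Python sketch of my own (not from the post), using bisection instead of the manual adjustments described above, and assuming the rate of the R + λD minimizer never decreases as λ grows, since larger λ favors spending bits to cut distortion.</p> <pre>
def search_lambda(candidates, target_rate, rounds=20):
    # Toy bisection for the "exchange rate" lambda over a set of
    # (rate, distortion) candidates.
    lo, hi = 1e-6, 1e6                # assumed bracket for lambda
    best = None                       # best feasible (rate, distortion) so far
    for _ in range(rounds):
        lam = (lo * hi) ** 0.5        # geometric midpoint: lambda spans decades
        rate, dist = min(candidates, key=lambda p: p[0] + lam * p[1])
        if rate &gt; target_rate:
            hi = lam                  # over budget: penalize distortion less
        else:
            lo, best = lam, (rate, dist)  # feasible: remember it, push quality
    return best

# Reusing the made-up candidates from the earlier sketch:
print(search_lambda([(2, 9), (3, 7), (4, 4), (5, 3), (8, 2)], target_rate=6))
</pre> <p>Because the candidate set is discrete, the achievable rates jump; the search settles at the hull breakpoint nearest the budget (here rate 5 against a budget of 6) rather than hitting the target exactly.</p> <p>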
What’s nice is that finding the minimum for a given choice of parameters (λ in our case) doesn’t require us to store all candidate options considered and rank them later; storing all candidates is not a big deal in our original example, where we were only trying to decide between a handful of options, but in practice you often end up with huge search spaces (exponential in the problem size is not unheard of), and being able to bake it down to a single linear function is convenient in other ways; for example, it tends to work well with efficient dynamic programming solutions to problems that would be infeasible to handle with brute force.</p> <h3>Wait, that’s it?</h3> <p>Yes, pretty much. Instead of trying to purely minimize bit rate or distortion, you use a combination <img src="https://s0.wp.com/latex.php?latex=R+%2B+%5Clambda+D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="R + \lambda D" title="R + \lambda D" class="latex"/> and vary λ to taste to hit your targets. Often, you know enough about your problem space to have a pretty good idea of what values λ should have; for example, in video compression, it’s pretty common to tie the λ used when coding residuals to quantizer step size, based on the (known) behavior of the quantizer and the (expected) characteristics of real-world signals. But even when you don’t know anything about your data, you can always use a search process for λ as outlined above (which is, of course, slower).</p> <p>Now the one thing to note in practice is that you rarely use just a single distortion metric; for example, in video coding, it’s pretty common to use one distortion metric when quantizing residuals, a different one for motion search, and a third for overall block mode decision. In general, the more frequently something is done (or the bigger the search space is), the more willing codecs are to make compromises with their distortion measure to enable more efficient searches. In general, good results require both decent metrics and doing a good job exploring the search space, and accepting some defects in the metrics in exchange for a significant increase in search space covered in the same amount of time is often a good deal.</p> <p>But the basic process is just this: measure bit rate and distortion (in your unit of choice) for every option you’re considering, and then rank your options based on their combined <img src="https://s0.wp.com/latex.php?latex=R+%2B+%5Clambda+D&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="R + \lambda D" title="R + \lambda D" class="latex"/> (or <img src="https://s0.wp.com/latex.php?latex=D+%2B+%5Clambda%27+R&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="D + \lambda' R" title="D + \lambda' R" class="latex"/>, which is a different but equivalent parameterization) scores. This gives you points on the lower left convex hull of the pareto frontier, which is what you want.</p> <p>This applies in other settings as well. For example, the various lossless compressors in Oodle are, well, lossless, but they still have a bunch of decisions to make, some of which take more time in the decoder than others. 
For a lossless codec, measuring “distortion” doesn’t make any sense (it’s always 0), but measuring decode time does; so the Oodle encoders optimize for a trade-off between compressed size and decode time.</p> <p>Of course, you can have more parameters too; for example, you might want to combine these two ideas and do joint optimization over bit rate, distortion, and decode time, leading to an expression of the type <img src="https://s0.wp.com/latex.php?latex=R+%2B+%5Clambda+D+%2B+%5Cmu+T&amp;bg=f9f7f5&amp;fg=444444&amp;s=0" alt="R + \lambda D + \mu T" title="R + \lambda D + \mu T" class="latex"/> with two Lagrange multipliers, with λ as before, and a second multiplier μ that encodes the exchange rate from time units into bits.</p> <p>Either way, the details of this quickly get complicated, but the basic idea is really quite simple. I hope this post de-mystifies it a bit.</p> Wed, 12 Dec 2018 06:36:13 +0000 https://fgiesen.wordpress.com/2018/12/10/rate-distortion-optimization/ Mail Loop From Hell (2012) https://blog.dbrgn.ch/2012/7/29/mail-loop-from-hell/ https://blog.dbrgn.ch/2012/7/29/mail-loop-from-hell/ <p>Found in <cite>#django</cite> on freenode, Jul 12, 2012. All names are edited.</p> <pre class="literal-block">
11:16 &lt; abrt&gt; since it's quiet in here I'll tell you a story.
11:16 &lt; abrt&gt; back in 1992, I had just graduated university and was interning at a government facility in newport news
11:16 &lt; abrt&gt; along with some friends from college. We made $7.25/hr and were living large.
11:16 &lt; qns&gt; hahahaha
11:17 &lt; qns&gt; You sound like Kevin Mitinick.
11:17 &lt; abrt&gt; we used to play practical jokes on each other all the time.
11:17 &lt; abrt&gt; mitnick was a pussy compared to us
11:17 &lt; qns&gt; :O
11:17 &lt; abrt&gt; anyway, I managed to break into my friend's university UNIX account. guessed his password. easy.
11:17 &lt; abrt&gt; how well do you know UNIX?
11:18 &lt; qns&gt; not well yet
11:18 &lt; abrt&gt; well, back in the day, they didn't have postfix or qmail any of these fancy mailservers
11:18 &lt; abrt&gt; they ran sendmail
11:18 &lt; abrt&gt; and they allowed individual .forward files
11:19 &lt; abrt&gt; the purpose of the .forward file was to forward your email that came to your account to the address in the .forward file.
11:19 &lt; abrt&gt; anyway, after I broke into my friend Matt's account, I set up his .forward file to be "everyone@***.edu" which I knew was an alias for the entire college.
11:19 &lt; abrt&gt; I had just learned how to forge sendmail headers and was going to send him a very embarrassing email "from his girlfriend"
11:20 &lt; abrt&gt; fortunately for me, I decided to do a test run at 1730 on a Friday. Assuming the test run went well, the embarrassing forged email would go out the following Monday.
11:20 &lt; abrt&gt; so I sent a "this is a test" to Matt.
11:21 &lt; abrt&gt; and went home, drank some beers with Matt and Steve, and had a great weekend
11:21 &lt; abrt&gt; Monday morning I get into the lab and everyone's quiet, sort of whispering, and looking at me
11:21 &lt; abrt&gt; fuck me, right?
11:21 &lt; abrt&gt; I log into the gov UNIX system - and I have 13000 emails
11:22 &lt; abrt&gt; what I had forgotten was that "everyone@***.edu" included Matt.
11:22 &lt; abrt&gt; so the email would get sent to everyone, including him, then he would add 10 lines of header, forward it to everyone, including him, ....
11:22 &lt; abrt&gt; mail loop from hell.
11:22 &lt; qns&gt; Did you get in trouble?
11:22 &lt; abrt&gt; well, here's the thing
11:22 &lt; abrt&gt; this was summer '92
11:22 &lt; abrt&gt; nobody at school, right?
11:23 &lt; abrt&gt; everyone had their email forwarded elsewhere
11:23 &lt; abrt&gt; and the professors got jobs at places like Camp Peary, and FBI, and other research organizations, ....
11:23 &lt; qns&gt; So you help them?
11:23 &lt; abrt&gt; and those systems couldn't handle the volume of mail, and they never thought to put the mail spool on its on separate partition
11:23 &lt; abrt&gt; so their systems crashed.
11:24 &lt; qns&gt; haha
11:24 &lt; qns&gt; So you triggered chaos all over.
11:24 &lt; abrt&gt; I managed to bring down 13 CIA offices, all FBI offices east of the Mississippi, and the entire Southeastern university Research Network.
11:24 &lt; etgr&gt; You can claim to have hacked the FBI
11:24 &lt; qns&gt; using e-mail.
11:24 &lt; abrt&gt; along with various other systems, but those were the biggies
11:24 &lt; qns&gt; I'd have shat myself
11:24 &lt; abrt&gt; I pretty much did.
11:25 &lt; abrt&gt; But back then, like possession of a fake ID, nobody really knew what to do to you for this sort of thing
11:25 &lt; abrt&gt; so I got a slap on the wrist, almost fired, and had to write a letter of apology to the head of the computer lab at university
11:25 &lt; abrt&gt; and I lost my university email account. :(
11:26 &lt; qns&gt; hahahahaha
11:26 &lt; abrt&gt; today I'd probably be sent to Guantanamo
11:26 &lt; qns&gt; Or you'd mysteriously disappear. :P
11:26 &lt; abrt&gt; anyway, that's my story for the evening.
11:26 &lt; qns&gt; I need a story like that on my resume.
11:26 &lt; abrt&gt; nah
11:26 &lt; abrt&gt; here's the thing
11:26 &lt; abrt&gt; that story doesn't go on a resume
11:27 &lt; abrt&gt; but - fast forward 10 years later.
11:27 &lt; qns&gt; Ahh
11:27 &lt; abrt&gt; I'm getting my clearance
11:27 &lt; abrt&gt; being interviewed by the suits from OPM
11:27 &lt; abrt&gt; and they leave the room, come back with a folder, and say, "Tell us about SURANet and the CIA in 1992"
11:27 &lt; abrt&gt; THAT's when I shat myself.
11:28 &lt; abrt&gt; BUT - good news - I got my clearance despite my history :)
11:28 &lt; qns&gt; Were they impressed?
11:28 &lt; abrt&gt; nah, they were laughing
</pre> <p>After reading this story, I started a new bookmark list: <a class="reference external" href="https://dbrgn.ch/stories-from-the-internet.html">Stories from the Internet</a>. Feel free to follow it, and also send me new candidates if you know of any :)</p> Thu, 13 Dec 2018 17:14:42 +0000 Danilo Bargen https://blog.dbrgn.ch/2012/7/29/mail-loop-from-hell/ Google Earth Studio https://www.google.com/earth/studio/ https://www.google.com/earth/studio/ <p class="standalone"> Powerful motion design, all in the browser.
Earth Studio gives you the tools you need to create professional content with Google Earth imagery. </p><p>Read more in the <a href="https://earth.google.com/studio/docs/" target="_blank">Documentation</a>. </p> <div class="video-carousel"> <ul class="titles"><li> <h3>Keyframe Animation</h3> <p>Earth Studio uses keyframes, just like other industry-standard animation tools. Move the globe, set a keyframe, rinse and repeat. It’s that easy.</p> </li> <li> <h3>Quick-Start Projects</h3> <p>Create an orbit, or fly from point to point. Select from up to five templates to get started - no animation experience needed.</p> </li> <li> <h3>Animatable Effects</h3> <p>Animate custom attributes such as the sun's position, the camera's field of view and more.</p> </li> <li> <h3>3D Camera Export</h3> <p>Easily add map labels and pins in post production. Earth Studio supports camera export to Adobe After Effects.</p> </li> </ul></div> Fri, 14 Dec 2018 01:30:12 +0000 https://www.google.com/earth/studio/ Instacart and Amazon-owned Whole Foods are parting ways https://techcrunch.com/2018/12/13/instacart-and-amazon-owned-whole-foods-are-parting-ways/ https://techcrunch.com/2018/12/13/instacart-and-amazon-owned-whole-foods-are-parting-ways/ <p id="speakable-summary">Instacart has announced this morning it will no longer be doing business with Whole Foods, the U.S. organic grocery chain with which it launched a partnership in 2014. This comes roughly one year after Amazon closed its <a href="https://techcrunch.com/2017/06/16/report-amazon-is-gobbling-whole-foods-for-a-reported-13-7-billion/">$13.7 billion acquisition</a> of Whole Foods; Amazon, of course, has its own grocery delivery service, AmazonFresh.</p> <p>Currently, Instacart has 1,415 in-store shoppers, or paid Instacart couriers, at 76 Whole Foods locations; 243 of those couriers, who exclusively deliver groceries from Whole Foods, will no longer be able to make Instacart deliveries beginning February 10, when the company officially winds down its partnership. Instacart says it has already placed 75 percent of those workers in new roles, though 25 percent, or about 60 workers, have been laid off.</p> <p>Instacart added that 75 percent of the 1,415 total shoppers, or 1,016 people, are also expected to be placed in new stores, meaning layoffs could surpass 350.</p> <p><span>A person familiar with the matter told TechCrunch that significant developments over the last 18 months forced Instacart to wind down its relationship earlier than planned. </span>Whole Foods didn’t immediately respond to a request for comment.</p> <p>Whole Foods will fully exit the Instacart marketplace, which allows shoppers to order from more than 300 retailers, including Costco, Walmart and Sam’s Club, in 2019.</p> <p dir="ltr">In a <a href="https://medium.com/shopper-news/update-to-our-partnership-with-whole-foods-market-f114eda737f">blog post</a> this morning, Instacart founder and chief executive officer Apoorva Mehta (pictured above) said the company will be offering transfer bonuses to all current Whole Foods couriers being transitioned to new stores. As for those being laid off as part of the dissolution of the partnership, Instacart will provide a separation package based on a minimum of three months of maximum monthly pay in 2018.</p> <p dir="ltr">Instacart pays 70,000 people to shop for its customers.
The 1,415 affected by the news may seem like a small fraction, but it’s bad news for the business, which has likely been bracing for this since Amazon CEO Jeff Bezos <a href="https://techcrunch.com/2017/06/16/report-amazon-is-gobbling-whole-foods-for-a-reported-13-7-billion/">signed the Whole Foods check</a> in 2017.</p> <p dir="ltr">VCs, however, seem to be confident in Instacart’s ability to compete with Amazon. The company raised <a href="https://techcrunch.com/2018/10/16/instacart-raises-another-600m-at-a-7-6b-valuation/">$600 million</a> at a $7.6 billion valuation in October, just six months after it brought in a <a href="https://techcrunch.com/2018/04/05/instacart-is-reportedly-raising-another-150-million/">$150 million round</a> and roughly eight months after a<a href="https://techcrunch.com/2018/02/12/instacart-has-raised-another-200m-at-a-4-2b-valuation/"> $200 million financing that valued the business at $4.2 billion</a>.</p> Thu, 13 Dec 2018 17:52:54 +0000 https://techcrunch.com/2018/12/13/instacart-and-amazon-owned-whole-foods-are-parting-ways/ Etsy’s experiment with immutable documentation https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/ https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/ <h3>Introduction</h3> <p>Writing documentation is like trying to hit a moving target. The way a system works changes constantly, so as soon as you write a piece of documentation for it, it starts to get stale. And the systems that need docs the most are the ones being actively used and worked on, which are changing the fastest. So <strong>the most important docs go stale the fastest!</strong> <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref1"><sup>1</sup></a></p> <p>Etsy has been experimenting with a radical new approach: <strong>immutable documentation</strong>.</p> <p>Woah, you just got finished talking about how documentation goes stale! So doesn’t that mean you have to update it all the time? How could you make documentation read-only?</p> <h3>How docs go stale</h3> <p>Let’s back up for a sec. When a bit of a documentation page becomes outdated or incorrect, it typically doesn’t invalidate the entire doc (unless the system itself is deprecated). It’s just a part of the doc with a code snippet, say, which is maybe using an outdated syntax for an API.</p> <p>For example, we have a command-line tool called <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> that lets us query the dev and prod databases from our VMs. Our internal wiki has a doc page that discusses various tools that we use to query the dbs. The part that discusses ‘dbconnect’ goes something like:</p> <pre>Querying the database via dbconnect ...

((section 1))
dbconnect is a script to connect to our databases and query them. [...]

((section 2))
The syntax is:

% dbconnect &lt;shard&gt;</pre> <p>Section 1 gives context about <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> and why it exists, and section 2 gives tactical details of how to use it.</p> <p>Now say a switch is added so that <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect --dev &lt;shard&gt;</code> queries the dev db, and <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect --prod &lt;shard&gt;</code> queries the prod db.
Section 2 above now needs to be updated, because it’s using outdated syntax for the <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> command. But the contextual description in section 1 is still completely valid. So this doc page is now technically stale as a whole because of section 2, but the narrative in section 1 is still very helpful!</p> <p>In other words, the parts of the doc that are most likely to go stale are the tactical, operational details of the system. <strong><em>How</em> to use the system is constantly changing. But the narrative of <em>why</em> the system exists and the context around it is less likely to change quite so quickly.</strong></p> <blockquote><p><strong><em>How</em></strong> to use the system is constantly changing. But the narrative of <strong><em>why</em></strong> the system exists and the context around it is less likely to change quite so quickly.</p></blockquote> <h3>Docs can be separated into how-docs and why-docs</h3> <p>Put another way: ‘code tells how, docs tell why’ <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref2"><sup>2</sup></a>. Code is constantly changing, so the more code you put into your docs, the faster they’ll go stale. To codify this further, let’s use the term “<strong>how-doc</strong>” for operational details like code snippets, and “<strong>why-doc</strong>” for narrative, contextual descriptions <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref3"><sup>3</sup></a>. <strong>We can mitigate staleness by limiting the amount we mix the how-docs with the why-docs.</strong></p> <blockquote><p>We can mitigate staleness by limiting the amount we mix the how-docs with the why-docs.</p></blockquote> <h3>Documenting a command using Etsy’s FYI system</h3> <p>At Etsy we’ve developed <strong>a system for adding how-docs directly from Slack</strong>. It’s called “FYI”. The purpose of FYI is to make documenting tactical details — commands to run, syntax details, little helpful tidbits — as frictionless as possible.</p> <blockquote><p>FYI is a system for adding how-docs directly from Slack.</p></blockquote> <p>Here’s how we’d approach documenting <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> using FYIs <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref4"><sup>4</sup></a>:</p> <p>Kaley was searching the wiki for how to connect to the dbs from her VM, to no avail. So she asks about it in a Slack channel:</p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image3-1.png"><img class="aligncenter wp-image-5193 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image3-1.png" alt="hey @here anyone remember how to connect to the dbs in dev? I forget how.
It’s something like dbconnect etsy_shard_001A but that’s not working" width="687" height="80" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image3-1.png 687w, https://codeascraft.com/wp-content/uploads/2018/10/image3-1-300x35.png 300w" sizes="(max-width: 687px) 100vw, 687px"/></a></p> <p>When she finds the answer, she adds an FYI using the <code class="EnlighterJSRAW" data-enlighter-language="null">?fyi</code> command (using our <a href="https://github.com/RJ/irccat">irccat integration</a> in Slack <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref5"><sup>5</sup></a>):</p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image9-1.png"><img class="aligncenter wp-image-5194 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image9-1.png" alt="?fyi connect to dbs with `dbconnect etsy_shard_000_A` (replace `000` with the shard number). `A` or `B` is the side" width="677" height="109" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image9-1.png 677w, https://codeascraft.com/wp-content/uploads/2018/10/image9-1-300x48.png 300w" sizes="(max-width: 677px) 100vw, 677px"/></a></p> <p>Jason sees Kaley add the FYI and mentions you can also use <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> to list the databases:</p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image5-1.png"><img class="aligncenter wp-image-5195 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image5-1.png" alt="you can also do `dbconnect -l` to get a list of all DBs/shards/etc, and it works for dev-proxy on or off" width="613" height="70" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image5-1.png 613w, https://codeascraft.com/wp-content/uploads/2018/10/image5-1-300x34.png 300w" sizes="(max-width: 613px) 100vw, 613px"/></a></p> <p><span>Kaley then adds the <code class="EnlighterJSRAW" data-enlighter-language="null">:fyi:</code> </span><span>Slack reaction (reacji) to his comment to save it as an FYI:</span></p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image13.png"><img class="aligncenter wp-image-5196 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image13.png" alt="you can also do `dbconnect -l` to get a list of all DBs/shards/etc, and it works for dev-proxy on or off" width="687" height="108" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image13.png 687w, https://codeascraft.com/wp-content/uploads/2018/10/image13-300x47.png 300w" sizes="(max-width: 687px) 100vw, 687px"/></a></p> <p>A few weeks later, Paul-Jean uses the FYI query command <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code> to search for info on connecting to the databases, and finds Kaley’s FYI <a href="https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/#ref6"><sup>6</sup></a>:</p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image14.png"><img class="aligncenter wp-image-5197 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image14.png" alt="?how database connect" width="680" height="175" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image14.png 680w, https://codeascraft.com/wp-content/uploads/2018/10/image14-300x77.png 300w" sizes="(max-width: 680px) 100vw, 680px"/></a></p> <p><span>He then looks up FYIs mentioning <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> specifically to discover Jason’s follow-up 
comment:</span></p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image2-1.png"><img class="aligncenter wp-image-5198 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image2-1.png" alt="?how dbconnect" width="695" height="255" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image2-1.png 695w, https://codeascraft.com/wp-content/uploads/2018/10/image2-1-300x110.png 300w" sizes="(max-width: 695px) 100vw, 695px"/></a></p> <p><span>But he notices that the <code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code> command has been changed since Jason’s FYI was added: there is now a switch to specify whether you want dev or prod databases. So he adds another FYI to supplement Jason’s:</span></p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/image10-1.png"><img class="aligncenter wp-image-5199 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/image10-1.png" alt="?fyi to get a list of all DBs/shards/etc in dev, use `dbconnect --dev`, and to list prod DBs, use `dbconnect --prod` (default)" width="808" height="88" srcset="https://codeascraft.com/wp-content/uploads/2018/10/image10-1.png 808w, https://codeascraft.com/wp-content/uploads/2018/10/image10-1-300x33.png 300w, https://codeascraft.com/wp-content/uploads/2018/10/image10-1-768x84.png 768w" sizes="(max-width: 808px) 100vw, 808px"/></a></p> <p>Now <code class="EnlighterJSRAW" data-enlighter-language="null">?how dbconnect</code> returns Paul-Jean’s FYI first, and Jason’s second:</p> <p><a href="https://codeascraft.com/wp-content/uploads/2018/10/how-dbconnect.jpeg"><img class="aligncenter wp-image-5202 size-full" src="https://codeascraft.com/wp-content/uploads/2018/10/how-dbconnect.jpeg" alt="?how dbconnect" width="631" height="230" srcset="https://codeascraft.com/wp-content/uploads/2018/10/how-dbconnect.jpeg 631w, https://codeascraft.com/wp-content/uploads/2018/10/how-dbconnect-300x109.jpeg 300w" sizes="(max-width: 631px) 100vw, 631px"/></a></p> <h3>FYIs trade completeness for freshness</h3> <p>Whenever you do a <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code> query, matching <strong>FYIs are always returned most recent first.</strong> So you can always update how-docs for dbconnect by adding an FYI with the keyword “dbconnect” in it. This is crucial, because it means <strong>the freshest docs always rise to the top of search results</strong>.</p> <p><strong>FYIs are immutable</strong>, so Paul-Jean doesn’t have to worry about changing any FYIs created by Jason. He just adds them as he thinks of them, and the timestamps determine the priority of the results. <strong>How-docs change so quickly, it’s easier to just replace them than try to edit them. So they might as well be immutable.</strong></p> <blockquote readability="7"><p>How-docs change so quickly, it’s easier to just replace them than try to edit them. So they might as well be immutable.</p></blockquote> <p>Since <strong>every FYI has an explicit timestamp</strong>, it’s easy to gauge how current they are relative to API versions, OS updates, and other internal milestones. <strong>How-docs are inherently stale, so they might as well have a timestamp showing exactly how stale they are</strong>.</p> <blockquote readability="7"><p>How-docs are inherently stale, so they might as well have a timestamp showing exactly how stale they are.</p></blockquote> <p>The tradeoff is that FYIs are just short snippets. There’s no room in an FYI to add much context. 
In other words, <strong>FYIs mitigate staleness by trading completeness for freshness</strong>.</p> <blockquote><p>FYIs mitigate staleness by trading completeness for freshness</p></blockquote> <p>Since FYIs lack context, there’s still a need for why-docs (e.g. a wiki page) about connecting to dev/prod dbs, which mentions the <span><code class="EnlighterJSRAW" data-enlighter-language="null">dbconnect</code></span> command along with other relevant resources. But if the how-docs are largely left in FYIs, those why-docs are less likely to go stale.</p> <p>So <strong>FYIs allow us to decouple how-docs from why-docs</strong>. The tactical details are probably what you want in a hurry. The narrative around them is something you sit back and read on a wiki page.</p> <blockquote><p>FYIs allow us to decouple how-docs from why-docs</p></blockquote> <h3>What FYIs are</h3> <p>To summarize, FYIs are:</p> <ul><li><b>How-docs</b><span>: code snippets, API details, or helpful tips that explain </span><i><span>how</span></i><span> to use a system</span></li> <li><b>Fresh</b><span>: searching FYIs gives most recent matches first, and adding them is easy</span></li> <li><b>Time-stamped</b><span>: every FYI has an explicit timestamp, so you know exactly how stale it is</span></li> <li><b>Immutable</b><span>: instead of editing an FYI you just add another one with more info</span></li> <li><b>Discoverable</b><span><span>: using the <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code> </span></span>command</li> <li><b>Short</b><span>: about the length of a sentence</span></li> <li><b>Unstructured</b><span>: just freeform text</span></li> <li><b>Collaborative</b><span>: FYIs quickly share knowledge within or across teams</span></li> <li><b>Immediate</b><span><span>: use <code class="EnlighterJSRAW" data-enlighter-language="null">?fyi</code> </span></span>or just tag a Slack message with the <code class="EnlighterJSRAW" data-enlighter-language="null">:fyi:</code> reaction</li> </ul><h3>What FYIs are NOT</h3> <p><span>Similarly, FYIs are NOT:</span></p> <ul><li><b>Why-docs</b><span>: FYIs are short, helpful tidbits, not overviews, prose or narratives</span></li> <li><b>Wiki pages</b><span> or READMEs: why-docs belong on the wiki or in READMEs</span></li> <li><b>Code comments</b><span>: a code comment explains </span><i>what</i><span> your code does better than any doc</span></li> </ul><h3>Conclusions</h3> <p>Etsy has recognized that technical documentation is a mixture of two distinct types: a narrative that explains <em>why</em> a system exists (“why-docs”), and operational details that describe <em>how</em> to use the system (“how-docs”). In trying to overcome the problem of staleness, the crucial observation is that how-docs typically change faster than why-docs do. Therefore the more how-docs are mixed in with why-docs in a doc page, the more likely the page is to go stale.</p> <p>We’ve leveraged this observation by creating an entirely separate system to hold our how-docs. The FYI system simply allows us to save Slack messages to a persistent data store. When someone posts a useful bit of documentation in a Slack channel, we tag it with the <code class="EnlighterJSRAW" data-enlighter-language="null">:fyi:</code> reacji to save it as a how-doc.
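</p> <p>As a rough sketch of what such a store boils down to (my own toy approximation in Python, assuming an SQLite build with the FTS5 extension; per the implementation notes in the references, the real system is a PHP script behind a Slack webhook), the essentials are an append-only table plus a recency-ordered search:</p> <pre>
import sqlite3, time

# Toy FYI store: immutable rows, full-text search, freshest matches first.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE fyi USING fts5(text, added UNINDEXED)")

def add_fyi(text):
    # FYIs are never edited; an update is simply a newer row.
    db.execute("INSERT INTO fyi VALUES (?, ?)", (text, time.time()))

def how(query):
    # Most recent first, so fresher how-docs outrank stale ones.
    return db.execute(
        "SELECT text FROM fyi WHERE fyi MATCH ? ORDER BY added DESC",
        (query,)).fetchall()

add_fyi("connect to dbs with dbconnect etsy_shard_000_A")
add_fyi("to list dev DBs use dbconnect --dev; prod is dbconnect --prod")
print(how("dbconnect"))  # the newer --dev/--prod tip comes back first
</pre> <p>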
We then search our how-docs directly from Slack using a bot command called <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code>.</p> <p>FYIs are immutable: to update them, we simply add another FYI that is more timely and correct. Since FYIs don’t need to contain narrative, they’re easy to add, and easy to update. The <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code> command always returns more recent FYIs first, so fresher matches always have higher priority. In this way, the FYI system combats documentation staleness by trading completeness for freshness.</p> <p>We believe the separation of operational details from contextual narrative is a useful idea that can be used for documenting all kinds of systems. We’d love to hear how you feel about it! And we’re excited to hear about what tooling you’ve built to make documentation better in your organization. Please get in touch and share what you’ve learned. Documentation is hard! Let’s make it better!</p> <h3>Acknowledgements</h3> <p>The FYI system was designed and implemented by Etsy’s FYI Working Group: Paul-Jean Letourneau, Brad Greenlee, Eleonora Zorzi, Rachel Hsiung, Keyur Govande, and Alec Malstrom. Special thanks to Mike Lang, Rafe Colburn, Sarah Marx, Doug Hudson, and Allison McKnight for their valuable feedback on this post.</p> <h3>References</h3> <ol><li id="ref1">From <a href="https://blog.jooq.org/2013/02/26/the-golden-rules-of-code-documentation/">“The Golden Rules of Code Documentation”</a>: “It is almost impossible without an extreme amount of discipline, to keep external documentation in-sync with the actual code and/or API.”</li> <li id="ref2">Derived from “code tells what, docs tell why” in <a href="https://hackernoon.com/write-good-documentation-6caffb9082b4">this HackerNoon post</a>.</li> <li id="ref3">The similarity of the terms “how-doc” and “why-doc” to the term <a href="https://www.tldp.org/LDP/abs/html/here-docs.html">here-doc</a> is intentional. For any given command, a here-doc is used to send data into the command in-place, how-docs are a way to document how to use the command, and why-docs are a description of why the command exists to begin with.</li> <li id="ref4">You can replicate the FYI system with any method that allows you to save Slack messages to a predefined, searchable location. For example, one could simply install the <a href="https://get.slack.help/hc/en-us/articles/360000482666-Copy-messages-to-another-channel-instantly">Reacji Channeler bot</a>, which lets you assign a Slack reacji of your choosing to cause the message to be copied to a given channel. So you could assign an “fyi” reacji to a new channel called “#fyi”, for example. Then to search your FYIs, you would simply go to the #fyi channel and search the messages there using the Slack search box.</li> <li id="ref5">When the <code class="EnlighterJSRAW" data-enlighter-language="null">:fyi:</code> reacji is added to a Slack message (or the <code class="EnlighterJSRAW" data-enlighter-language="null">?fyi</code> irccat command is used), an <a href="https://api.slack.com/custom-integrations/outgoing-webhooks">outgoing webhook</a> sends a POST request to <code class="EnlighterJSRAW" data-enlighter-language="null">irccat.etsy.com</code> with the message details. This triggers a PHP script to save the message text to a SQLite database, and sends an acknowledgement back to the Slack <a href="https://api.slack.com/custom-integrations/incoming-webhooks">incoming webhook</a> endpoint.
The acknowledgement says “OK! Added your FYI”, so the user knows their FYI has been successfully added to the database.</li> <li id="ref6">Searching FYIs using the <code class="EnlighterJSRAW" data-enlighter-language="null">?how</code> command uses the same architecture as for adding an FYI, except the PHP script queries the SQLite table, which supports full-text search via the <a href="https://www.sqlite.org/fts3.html">FTS plugin</a>.</li> </ol> Thu, 13 Dec 2018 17:31:38 +0000 https://codeascraft.com/2018/10/10/etsys-experiment-with-immutable-documentation/ Bootstrap 3.4.0 released https://blog.getbootstrap.com/2018/12/13/bootstrap-3-4-0/ https://blog.getbootstrap.com/2018/12/13/bootstrap-3-4-0/ <span class="post-date">13 Dec 2018</span> <p>That’s not a typo—today we’re shipping Bootstrap 3.4.0, a long overdue update covering quality of life issues, XSS fixes, and build tooling updates to make it easier for us, and you, to develop.</p> <p>While we’d planned for ages to do a fresh v3.x update, we lost steam as energy was focused on all the work in v4. Early this year, <a href="https://github.com/twbs/bootstrap/issues/25679">one issue in particular</a> gained a ton of momentum from the community and the core team decided to do a huge push to pull together a solid release. I regret the time it took to pull this release together, especially given the security fixes, but with the improvements under the hood, v3 has never been easier to develop and maintain. Thanks for your continued support along the way!</p> <p>Keep reading for what’s changed and a look ahead at what’s coming in v4.2.0.</p> <h2 id="whats-new">What’s new</h2> <p>While we haven’t publicly worked on v3.x in years, we’ve heard from all of you during that time that we needed to do a new release to address the following:</p> <ul><li><strong>New:</strong> Added a <code class="highlighter-rouge">.row-no-gutters</code> class.</li> <li><strong>New:</strong> Added docs searching via Algolia.</li> <li><strong>Fixed:</strong> Resolved an XSS issue in Alert, Carousel, Collapse, Dropdown, Modal, and Tab components. See <a href="https://snyk.io/vuln/npm:bootstrap:20160627">https://snyk.io/vuln/npm:bootstrap:20160627</a> for details.</li> <li><strong>Fixed:</strong> Added padding to <code class="highlighter-rouge">.navbar-fixed-*</code> on modal open.</li> <li><strong>Fixed:</strong> Removed the double border on <code class="highlighter-rouge">&lt;abbr&gt;</code> elements.</li> <li>Removed Gist creation in the web-based Customizer since anonymous gists were disabled long ago by GitHub.</li> <li>Removed drag and drop support from the Customizer since it didn’t work anymore.</li> </ul><p>Our documentation and tooling saw massive updates as well to make it easier to work on v3.x, for ourselves and for you.</p> <ul><li>Added a dropdown to the docs nav for newer and previous versions.</li> <li>Updated the docs to use a new <code class="highlighter-rouge">baseurl</code>, <code class="highlighter-rouge">/docs/3.4/</code>, to version the v3.x documentation like we do with v4.</li> <li>Reorganized the v3 docs CSS to use Less.</li> <li>Switched to BrowserStack for tests.</li> <li>Updated links to always use https and fixed broken URLs.</li> <li>Replaced ZeroClipboard with clipboard.js.</li> </ul><p><a href="https://getbootstrap.com/docs/3.4/">Head to the Bootstrap 3.4 docs</a> to see the latest in action.
<a href="https://github.com/twbs/bootstrap/pull/27288">Check out the v3.4.0 pull request</a> for even more context on what’s changed.</p> <h2 id="upgrading">Upgrading</h2> <p>Upgrade your Bootstrap 3 projects to v3.4.0 with <code class="highlighter-rouge">npm i bootstrap@previous</code> or <code class="highlighter-rouge">npm i bootstrap@3.4.0</code>. This release won’t be available via Bower to start given the package manager was deprecated and has largely been unused by us in v4 for well over a year. Stay tuned for CDN and Rubygem updates.</p> <h2 id="open-collective">Open Collective</h2> <p>Also new with our v3.4 is the creation of an <a href="https://opencollective.com/bootstrap">Open Collective page</a> to help support the maintainers contributing to Bootstrap. The team has been very excited about this as a way to be transparent about maintainer costs (both time and money), as well as recognition of efforts.</p> <h2 id="v42-and-beyond">v4.2 and beyond</h2> <p>We’ve been working on a <a href="https://github.com/twbs/bootstrap/projects/6?fullscreen=true">huge v4.2 update</a> for several months now. Our attention has largely been on advancing the project and simplifying it’s dependencies, namely by removing our jQuery dependency. That work has sparked a keen interest in a moderately scoped v5 release, so we’ve been taking our sweet time with v4.2 to sneak in as many new features as we can.</p> <p>After we ship v4.2, we’ll plan for point releases to address any bugs and improvements as y’all start to use the new version. From there, we’ll start to share more plans on v5 to remove jQuery, drop support for older browsers, and clear up some cruft. This won’t be a sweeping rewrite, but rather an iterative improvement on v4. Stay tuned!</p> <p>&lt;3,<br/><a href="https://twitter.com/mdo">@mdo</a> &amp; <a href="https://github.com/twbs">team</a></p> Fri, 14 Dec 2018 00:39:14 +0000 https://blog.getbootstrap.com/2018/12/13/bootstrap-3-4-0/ Parachute use to prevent death and major trauma when jumping from aircraft https://www.bmj.com/content/363/bmj.k5094 https://www.bmj.com/content/363/bmj.k5094 <div id="abstract-1" readability="20.5"><h2 class="">Abstract</h2><div id="sec-1" class="subsection" readability="8"><p id="p-1"><strong>Objective</strong> To determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft.</p></div><div id="sec-2" class="subsection" readability="7"><p id="p-2"><strong>Design</strong> Randomized controlled trial.</p></div><div id="sec-3" class="subsection" readability="7"><p id="p-3"><strong>Setting</strong> Private or commercial aircraft between September 2017 and August 2018.</p></div><div id="sec-4" class="subsection" readability="8"><p id="p-4"><strong>Participants</strong> 92 aircraft passengers aged 18 and over were screened for participation. 
23 agreed to be enrolled and were randomized.</p></div><div id="sec-5" class="subsection" readability="8"><p id="p-5"><strong>Intervention</strong> Jumping from an aircraft (airplane or helicopter) with a parachute versus an empty backpack (unblinded).</p></div><div id="sec-6" class="subsection" readability="8"><p id="p-6"><strong>Main outcome measures</strong> Composite of death or major traumatic injury (defined by an Injury Severity Score over 15) upon impact with the ground measured immediately after landing.</p></div><div id="sec-7" class="subsection" readability="11"><p id="p-7"><strong>Results</strong> Parachute use did not significantly reduce death or major injury (0% for parachute <em>v</em> 0% for control; P&gt;0.9). This finding was consistent across multiple subgroups. Compared with individuals screened but not enrolled, participants included in the study were on aircraft at significantly lower altitude (mean of 0.6 m for participants <em>v</em> mean of 9146 m for non-participants; P&lt;0.001) and lower velocity (mean of 0 km/h <em>v</em> mean of 800 km/h; P&lt;0.001).</p></div><div id="sec-8" class="subsection" readability="14"><p id="p-8"><strong>Conclusions</strong> Parachute use did not reduce death or major traumatic injury when jumping from aircraft in the first randomized evaluation of this intervention. However, the trial was only able to enroll participants on small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice.</p></div></div><div id="sec-9" readability="23.884773662551"><h2 class="">Introduction</h2><p id="p-9">Parachutes are routinely used to prevent death or major traumatic injury among individuals jumping from aircraft. 
However, evidence supporting the efficacy of parachutes is weak and guideline recommendations for their use are principally based on biological plausibility and expert opinion.<a id="xref-ref-1-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-1">1</a><a id="xref-ref-2-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-2">2</a> Despite this widely held yet unsubstantiated belief of efficacy, many studies of parachutes have suggested injuries related to their use in both military and recreational settings,<a id="xref-ref-3-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-3">3</a><a id="xref-ref-4-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-4">4</a> and parachutist injuries are formally recognized in the World Health Organization’s ICD-10 (international classification of diseases, 10th revision).<a id="xref-ref-5-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-5">5</a> This could raise concerns for supporters of evidence-based medicine, because numerous medical interventions believed to be useful have ultimately failed to show efficacy when subjected to properly executed randomized clinical trials.<a id="xref-ref-6-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-6">6</a><a id="xref-ref-7-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-7">7</a></p><p id="p-10">Previous attempts to evaluate parachute use in a randomized setting have not been undertaken owing to both ethical and practical concerns. Lack of equipoise could inhibit recruitment of participants in such a trial. However, whether pre-existing beliefs about the efficacy of parachutes would, in fact, impair the enrolment of participants in a clinical trial has not been formally evaluated. To address these important gaps in evidence, we conducted the first randomized clinical trial of the efficacy of parachutes in reducing death and major injury when jumping from an aircraft.</p></div><div id="sec-10" readability="66.965253468171"><h2 class="">Methods</h2><div id="sec-11" class="subsection" readability="37"><h3>Study protocol</h3><p id="p-11">Between September 2017 and August 2018, individuals were screened for inclusion in the PArticipation in RAndomized trials Compromised by widely Held beliefs aboUt lack of Treatment Equipoise (PARACHUTE) trial. Prospective participants were approached and screened by study investigators on commercial or private aircraft.</p><p id="p-12">For the commercial aircraft, travel was related to trips the investigators were scheduled to take for business or personal reasons unrelated to the present study. Typically, passengers seated close to the study investigator (typically not known acquaintances) would be approached mid-flight, between the time of initial seating and time of exiting the aircraft. The purpose and design of the study were explained. Owing to difficulty in enrolling patients at several thousand meters above the ground, we expanded our approach to include screening members of the investigative team, friends, and family. For the private aircraft, the boarding of aircraft was done for the explicit purpose of participating in the trial.</p><p id="p-13">All participants were asked whether they would be willing to be randomized to jump from the aircraft at its current altitude and velocity. Potential study participants completed an anonymous survey using a survey app on the screening investigator’s phone or tablet. 
Responses were transmitted to an online database upon landing for later analysis.</p><p id="p-14"> We enrolled individuals willing to participate in the trial and meeting inclusion criteria in the study. We randomized patients (1:1) to the intervention or the control. We obtained written informed consent. Participants were then instructed to jump from the aircraft after being provided their assigned device. Jumps were conducted at two sites in the US: Katama Airfield in Martha’s Vineyard, MA (conducted by investigators from the Beth Israel Deaconess Medical Center), and the Yankee Air Museum in Belleville, MI (conducted by investigators from the University of Michigan). The same protocol was followed at each site, but the type of aircraft (airplane <em>v</em> helicopter) differed between the two sites. </p></div><div id="sec-12" class="subsection" readability="12"><h3>Study population</h3><p id="p-15">Participants aged 18 and over, seated on an aircraft, and deemed to be rational decision makers by the enrolling investigator were eligible. Only participants who were willing to be randomized in the study were ultimately enrolled and randomized. Most of the participants who were randomized were study investigators.</p></div><div id="sec-13" class="subsection" readability="23"><h3>Interventions</h3><p id="p-16">Participants were randomized to wear either a parachute (National 360, National Parachute Industries, Inc, Palenville, NY; or Javelin Odyssey, Sun Path Products, Inc, Raeford, NC; supplementary materials fig 1) or an empty backpack (The North Face, Inc, Alameda, CA; or Javelin Odyssey Gearbag, Sun Path Products, Inc). The interventions were not blinded to either participants or study investigators.</p></div><div id="sec-14" class="subsection" readability="13"><h3>Randomization</h3><p id="p-17">We used block randomization, stratified by site and sex with a block size of two. The trial statistician created the randomization sequence by using the R package blockrand. The research team had previously assigned unique numeric identifiers to each participant. At both sites, only one team member had access to the list of numeric identifiers. Participants were verbally assigned their treatment, which was done by order of enrolment. Allocation was not concealed to the investigator who assigned the treatment.</p></div><div id="sec-15" class="subsection" readability="33.973946360153"><h3>Data collection</h3><p id="p-18">We collected data on basic demographic characteristics during screening by using paper forms or the survey app.<a id="xref-ref-8-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-8">8</a> Characteristics included age, sex, ethnic group, height, and weight. We also collected information on participants’ medical history including a history of broken bones, acrophobia (fear of heights), previous parachute use, family history of parachute use, and frequent flier status. Flight characteristics included carrier, velocity, altitude, make and model of the aircraft, the individual’s seating section, and whether the flight was international or domestic. Velocity and altitude were captured by using flight information provided by aircraft on individual television screens when available, as well as through pilot announcements. 
When neither was directly available, visual estimations were made by the study investigators.</p><p id="p-19">At the time of each jump, researchers recorded the altitude and velocity of the aircraft, and conducted a follow-up interview with each participant to ascertain vital status and to record any injuries sustained from the free fall within five minutes of impact with the ground, and again at 30 days after impact. We collected data electronically or with paper forms and uploaded the data to an online deidentified, password protected database.</p></div><div id="sec-16" class="subsection" readability="13.951105937136"><h3>Outcomes</h3><p id="p-20">The primary outcome was the composite of death and major traumatic injury, defined by an Injury Severity Score greater than 15, within five minutes of impact. The Injury Severity Score is a commonly used anatomical scoring system to grade the severity of traumatic injuries.<a id="xref-ref-9-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-9">9</a> Separate scores are assigned to each of six anatomical regions, and the three most highly injured regions contribute to a final score ranging from 0 to 75. Higher scores indicate a more severe injury. Secondary outcomes included death and major traumatic injury assessed at 30 days after impact using the Injury Severity Score, as well as 30 day quality of life assessed by the Short Form Health Survey. The Short Form Health Survey is a multipurpose questionnaire that measures a patient’s overall health-related quality of life based on mental and physical functioning.<a id="xref-ref-10-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-10">10</a></p></div><div id="sec-17" class="subsection" readability="26"><h3>Statistical analysis</h3><p id="p-21">The primary efficacy analysis tested the hypothesis that parachute use is superior to the control in preventing death and major traumatic injury. Based on an assumption of an average jump altitude of 4000 meters (typical of skydiving) and the anticipated effect of impact with the Earth at terminal velocity on human tissue, we projected that 99% of the control arm would experience the primary outcome at ground impact with a relative risk reduction of 95% in the intervention arm. A sample size of 14 (7 in each arm) would yield 99% power to detect this difference at a two sided α of 0.05. In anticipation of potential withdrawal after enrolment owing to last minute anxieties, a total sample size of 20 participants was targeted. Analysis was performed on an intention-to-treat basis. We performed secondary subgroup analyses stratified by aircraft type (airplane <em>v</em> helicopter) and previous parachute use through formal tests of statistical interaction.</p><p id="p-22">We summarized continuous variables by mean (standard deviation) and categorical variables by frequency and percentage. We tabulated baseline characteristics of the two trial arms to examine for potential imbalance in variables. We tested for differences between the outcomes of the two trial arms by using Student’s t test (continuous variables) and Fisher’s exact test (categorical variables). To better understand what drove the willingness to participate in the trial, we also compared characteristics of individuals who were screened but chose not to enroll with individuals who enrolled. Baseline characteristics between those enrolled and not enrolled were compared using the same statistical tests. 
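<p>As a rough check, the sample size arithmetic above can be reproduced with a standard two-proportion power calculation. The following is a minimal sketch (ours, not the authors' SAS code; it assumes a two-sided two-proportion z-test as a stand-in for the exact calculation):</p> <pre><code># Sketch: power for assumed event rates of 99% (control) v 4.95%
# (parachute; a 95% relative risk reduction) with 7 participants
# per arm at two-sided alpha = 0.05.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = proportion_effectsize(0.99, 0.99 * 0.05)  # Cohen's h
power = NormalIndPower().power(effect_size, nobs1=7, alpha=0.05, ratio=1.0)
print(round(power, 3))  # ~0.997, consistent with the reported 99% power
</code></pre>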
Confidence intervals for the difference in continuous outcomes between the two arms were constructed using t distributions. We could not calculate confidence intervals for the difference between arms (eg, risk difference, odds ratio, or relative risk) because no events were observed for any of the binary outcomes in either arm.</p><p id="p-23">We performed all analyses by using SAS version 9.4 (SAS Institute Inc, Cary, NC). A P value of less than 0.05 was considered statistically significant.</p></div></div><div id="sec-18" readability="20.991661216481"><h2 class="">Results</h2><div id="sec-19" class="subsection" readability="28.423943661972"><h3>Study population</h3><p id="p-24">A total of 92 individuals were screened and surveyed regarding their interest in participating in the PARACHUTE trial. Among those screened, 69 (75%) were unwilling to be randomized or found to be otherwise ineligible by investigators. <a id="xref-fig-1-1" class="xref-fig" href="https://www.bmj.com/content/363/bmj.k5094#F1">Figure 1</a> shows that a total of 23 individuals were deemed eligible for randomization.</p><p id="p-26"><a id="xref-table-wrap-1-1" class="xref-table" href="https://www.bmj.com/content/363/bmj.k5094#T1">Table 1</a> shows that the baseline characteristics of enrolled participants were generally similar between the intervention and control arms. The median age of randomized participants was 38 years and 13 (57%) were male. Three (13%) of the randomized participants had previous parachute use and nine (39%) had a history of acrophobia. <a id="xref-table-wrap-2-1" class="xref-table" href="https://www.bmj.com/content/363/bmj.k5094#T2">Table 2</a> shows that participants in the study were similar to those screened but not enrolled with regard to most demographic and clinical characteristics. However, participants were less likely to be on a jetliner, and instead were on a biplane or helicopter (0% <em>v</em> 100%; P&lt;0.001), were at a lower mean altitude (0.6 m, SD 0.1 <em>v</em> 9146 m, SD 2164; P&lt;0.001), and were traveling at a slower velocity (0 km/h, SD 0 <em>v</em> 800 km/h, SD 124; P&lt;0.001) (<a id="xref-table-wrap-2-2" class="xref-table" href="https://www.bmj.com/content/363/bmj.k5094#T2">table 2</a>).</p><div id="T1" class="table pos-float"><div class="table" readability="5.75"><div class="table-caption" readability="8"><span class="table-label">Table 1</span> <p id="p-27" class="first-child">Baseline characteristics of participants randomized to parachute versus control. Values are numbers (percentages) unless stated otherwise</p></div></div></div><div id="T2" class="table pos-float"><div class="table" readability="5.7017543859649"><div class="table-caption" readability="8"><span class="table-label">Table 2</span> <p id="p-28" class="first-child">Baseline characteristics of participants versus screened individuals. Values are numbers (percentages) unless stated otherwise</p></div></div></div><p id="p-30">Among the 12 participants randomized to the intervention arm, the parachute did not deploy in all 12 (100%) owing to the short duration and altitude of falls. Among the 11 participants randomized to receive an empty backpack, none crossed over to the intervention arm.
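<p>The reported P&gt;0.9 for the 0% <em>v</em> 0% comparison follows directly from the 2×2 table. A minimal sketch (again ours, not the authors' SAS code) using Fisher's exact test, the test the methods specify for categorical variables:</p> <pre><code># Sketch: Fisher's exact test on the observed 2x2 table
# (0 events in 12 parachute jumps, 0 events in 11 control jumps).
from scipy.stats import fisher_exact

table = [[0, 12],  # parachute arm: events, non-events
         [0, 11]]  # control arm:   events, non-events
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # 1.0, consistent with the reported P&gt;0.9
</code></pre>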
<a id="xref-fig-2-1" class="xref-fig" href="https://www.bmj.com/content/363/bmj.k5094#F2">Figure 2</a> shows a representative jump (additional jumps are shown in supplementary materials fig 2).</p><div id="F2" class="fig pos-float type-figure odd"><div class="fig-inline"><div class="highwire-figure" readability="5.025462962963"><div xmlns:xhtml="http://www.w3.org/1999/xhtml" class="fig-caption" readability="8"><span class="fig-label">Fig 2</span> <p id="p-31" class="first-child">Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground </p></div></div></div></div></div><div id="sec-20" class="subsection" readability="13.573913043478"><h3>Outcomes</h3><p id="p-32"><a id="xref-table-wrap-3-1" class="xref-table" href="https://www.bmj.com/content/363/bmj.k5094#T3">Table 3</a> shows the results for the primary and secondary outcomes. There was no significant difference in the rate of death or major traumatic injury between the treatment and control arms within five minutes of ground impact (0% for parachute <em>v</em> 0% for control; P&gt;0.9) or at 30 days after impact (0% for parachute <em>v</em> 0% for control; P&gt;0.9). Health status as measured by the Short Form Health Survey was similar between groups (43.9, SD 1.8 for parachute <em>v</em> 44.0, SD 2.4 for control; P=0.9; mean difference of 0.1, 95% confidence interval −2.0 to 2.2). In subgroup analyses, there were no significant differences in the effect of parachute use on outcomes when stratified by type of aircraft or previous parachute use (P&gt;0.9 for interaction for both comparisons). </p><div id="T3" class="table pos-float"><div class="table" readability="5.59"><div class="table-caption" readability="8"><span class="table-label">Table 3</span> <p id="p-33" class="first-child">Event rates for primary and secondary endpoints. Values are numbers (percentages) unless stated otherwise</p></div></div></div></div></div><div id="sec-21" readability="91.336073348087"><h2 class="">Discussion</h2><p id="p-34">We have performed the first randomized clinical trial evaluating the efficacy of parachutes for preventing death or major traumatic injury among individuals jumping from aircraft. Our groundbreaking study found no statistically significant difference in the primary outcome between the treatment and control arms. Our findings should give momentary pause to experts who advocate for routine use of parachutes for jumps from aircraft in recreational or military settings.</p><p id="p-35">Although decades of anecdotal experience have suggested that parachute use during jumps from aircraft can save lives, these observations are vulnerable to selection bias and confounding. Indeed, in seminal work published in the <em>BMJ</em> in 2003, a systematic search by Smith and Pell for randomized clinical trials evaluating the efficacy of parachutes during gravitational challenge yielded no published studies.<a id="xref-ref-1-2" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-1">1</a> In part, our study was designed as a response to their call to (broken) arms in order to address this critical knowledge gap.</p><p id="p-36">Beliefs about the efficacy of commonly used, but untested, interventions often influence daily clinical decision making. 
These beliefs can expose patients to unnecessary risk without clear benefit and increase healthcare costs.<a id="xref-ref-11-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-11">11</a> Beliefs grounded in biological plausibility and expert opinion have been proven wrong by subsequent rigorous randomized evaluations.<a id="xref-ref-12-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-12">12</a> The PARACHUTE trial represents one more such historic moment.</p><p id="p-37">Should our results be reproduced in future studies, the end of routine parachute use during jumps from aircraft could save the global economy billions of dollars spent annually to prevent injuries related to gravitational challenge.</p><p id="p-38">A minor caveat to our findings is that the rate of the primary outcome was substantially lower in this study than was anticipated at the time of its conception and design, which potentially underpowered our ability to detect clinically meaningful differences, as well as important interactions. Although randomized participants had similar characteristics compared with those who were screened but did not enroll, they could have been at lower risk of death or major trauma because they jumped from an average altitude of 0.6 m (SD 0.1) on aircraft moving at an average of 0 km/h (SD 0). Clinicians will need to consider this information when extrapolating to their own settings of parachute use.</p><p id="p-39">Opponents of evidence-based medicine have frequently argued that no one would perform a randomized trial of parachute use. We have shown this argument to be flawed, having conclusively shown that it is possible to randomize participants to jumping from an aircraft with versus without parachutes (albeit under limited and specific scenarios). In our study, we had to screen many more individuals to identify eligible and willing participants. This is not dissimilar to the experiences of other contemporary trials that frequently enroll only a small fraction of the thousands of patients screened. Previous research has suggested that participants in randomized clinical trials are at lower risk than patients who are treated in routine practice.<a id="xref-ref-13-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-13">13</a><a id="xref-ref-14-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-14">14</a> This is particularly relevant to trials examining interventions that the medical community believes to be effective: lack of equipoise often pushes well meaning but ill-informed doctors or study investigators to withhold patients from study participation, as they might believe it to be unethical to potentially deny their patients a treatment they (wrongly) believe is effective.</p><p id="p-40">Critics of the PARACHUTE trial are likely to make the argument that even the most efficacious of treatments can be shown to have no effect in a randomized trial if individuals who would derive the greatest benefit selectively decline participation. The critics will claim that although few medical treatments are likely to be as effective as parachutes,<a id="xref-ref-15-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-15">15</a> the exclusion of selected patients could result in null trial results, whether or not the intervention being evaluated was truly effective. 
The critics might further argue that although randomized controlled trials are the gold standard for evaluating treatments, their results are not always guaranteed to be relevant for clinicians. It will be up to the reader to determine the relevance of these findings in the real world.</p><div id="sec-22" class="subsection" readability="68.889747003995"><h3>Strengths and weaknesses of this study</h3><p id="p-41">A key strength of the PARACHUTE trial was that it was designed and initially powered to detect differences in the combination of death and major traumatic injury. Although the use of softer endpoints, such as levels of fear before and after jumping, or its surrogates, such as loss of urinary continence, could have yielded more power to detect an effect of parachutes, we believe that our selection of bias-resistant endpoints that are meaningful to all patients increases the clinical relevance of the trial.</p><p id="p-42">The study also has several limitations. First and most importantly, our findings might not be generalizable to the use of parachutes in aircraft traveling at a higher altitude or velocity. Consideration could be made to conduct additional randomized clinical trials in these higher risk settings. However, previous theoretical work supporting the use of parachutes could reduce the feasibility of enrolling participants in such studies.<a id="xref-ref-16-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-16">16</a></p><p id="p-43">Second, our study was not blinded to treatment assignment. We did not anticipate a strong placebo effect for our primary endpoint, but it is possible that other subjective endpoints would have necessitated the use of a blinded sham parachute as a control. </p><p id="p-44">Third, the individuals screened but not enrolled in the study were limited to passengers unfortunate enough to be seated near study investigators during commercial flights, and might not be representative of all aircraft passengers. The participants who did ultimately enroll agreed to do so with the knowledge that the aircraft were stationary and on the ground. </p><p id="p-45">Finally, although all endpoints in the study were prespecified, we were unable to register the PARACHUTE trial prospectively. We attempted to register this study with the Sri Lanka Clinical Trials Registry (application number APPL/2018/040), a member of the World Health Organization’s Registry Network of the International Clinical Trials Registry Platform. After several rounds of discussion, the Registry declined to register the trial because they thought that “the research question lacks scientific validity” and “the trial data cannot be meaningful.” We appreciated their thorough review (and actually agree with their decision).</p><p id="p-46">The PARACHUTE trial satirically highlights some of the limitations of randomized controlled trials. Nevertheless, we believe that such trials remain the gold standard for the evaluation of most new treatments. The PARACHUTE trial does suggest, however, that their accurate interpretation requires more than a cursory reading of the abstract. Rather, interpretation requires a complete and critical appraisal of the study. In addition, our study highlights that studies evaluating devices that are already entrenched in clinical practice face the particularly difficult task of ensuring that patients with the greatest expected benefit from treatment are included during enrolment.
</p><p id="p-47">To safeguard this last issue, we see several solutions. First, overcoming such a hurdle requires extreme commitment on the part of the investigators, clinicians, and patients; thankfully, recent examples of such efforts do exist.<a id="xref-ref-17-1" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-17">17</a> Second, stronger efforts could be made to ensure that definitive trials are conducted before new treatments become inculcated into routine practice, when greater equipoise is likely to exist. Third, the comparison of baseline characteristics and outcomes of study participants and non-participants should be utilized more frequently and reported consistently to facilitate the interpretation of results and the assessment of study generalizability.<a id="xref-ref-14-2" class="xref-bibr" href="https://www.bmj.com/content/363/bmj.k5094#ref-14">14</a> Finally, there could be instances where clinical beliefs justifiably prevent a true randomized evaluation of a treatment from being conducted.</p></div><div id="sec-23" class="subsection" readability="14"><h3>Conclusion</h3><p id="p-48">Parachute use compared with a backpack control did not reduce death or major traumatic injury when used by participants jumping from aircraft in this first randomized evaluation of the intervention. This largely resulted from our ability to only recruit participants jumping from stationary aircraft on the ground. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials evaluating their effectiveness could selectively enroll individuals with a lower likelihood of benefit, thereby diminishing the applicability of trial results to routine practice. Therefore, although we can confidently recommend that individuals jumping from small stationary aircraft on the ground do not require parachutes, individual judgment should be exercised when applying these findings at higher altitudes.</p><div class="boxed-text" id="boxed-text-1"><div id="sec-24" class="subsection"><h4>What is already known on this topic</h4><ul class="list-simple " id="list-1" readability="1"><li id="list-item-1" readability="1"><p id="p-49">Parachutes are routinely used to prevent death or major traumatic injury among individuals jumping from aircraft, but their efficacy is based primarily on biological plausibility and expert opinion</p></li><li id="list-item-2" readability="1"><p id="p-50">No randomized controlled trials of parachute use have yet been attempted, presumably owing to a lack of equipoise</p></li></ul></div><div id="sec-25" class="subsection"><h4>What this study adds</h4><ul class="list-simple " id="list-2" readability="-0.5"><li id="list-item-3" readability="0"><p id="p-51">This randomized trial of parachute use found no reduction in death or major injury compared with individuals jumping from aircraft with an empty backpack</p></li><li id="list-item-4" readability="-1"><p id="p-52">Lack of enrolment of individuals at high risk could have influenced the results of the trial</p></li></ul></div></div></div></div> Fri, 14 Dec 2018 00:49:03 +0000 https://www.bmj.com/content/363/bmj.k5094 Make photomosaics, GIFs, and murals from pictures in Python with ML/OpenCV https://github.com/worldveil/photomosaic https://github.com/worldveil/photomosaic <div class="Box-body p-6"> <article class="markdown-body entry-content" itemprop="text"> <p>Creating fun photomosaics, GIFs, and murals from your family pictures using ML &amp; similarity search.</p> <p align="center"> <a 
target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/side-by-side.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/side-by-side.jpg"/></a> </p> <p>If you'd like to use a cool web interface, upload your photos, and get them printed and mailed to you, try my service: <a href="http://photofun.strikingly.com" rel="nofollow">http://photofun.strikingly.com</a>. (we delete all the photos after.)</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/header.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/header.jpg"/></a> </p> <p>Because I tend to get carried away with things, you can also (unrelated to photomosaics, but related to doing cool things with your photo collection) make facial montages aligned on a particular person's face:</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/face_montage.gif"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/face_montage.gif"/></a> </p> <p>This makes use of an embedding network, a simple linear classifier on top, and a warp matrix for each image to align the eyes and scale it appropriately. You just need to make a folder with a few examples so it can learn which face to include.</p> <h2><a id="user-content-how-does-it-work" class="anchor" aria-hidden="true" href="https://github.com/worldveil/photomosaic#how-does-it-work"><svg class="octicon octicon-link" viewbox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"/></svg></a>How does it work?</h2> <p>We're doing the digital equivalent of a very old technique - creating mosaics:</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/fish.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/fish.jpg" height="200"/></a> </p> <p>except instead of using physical tiles, you can use your photo collection, emojis, or any set of digital images you'd like.</p> <p>Take a target image, say, a family photo. You can recreate that target image as a mosaic using a "codebook" of other images as tiles. 
If you intelligently search through and pick the best "codebook" image in your tileset, you can create arbitrarily good recreations of your target image.</p> <p>This project cuts up the target image into tiles (you control the tile size with the <code>scale</code> parameter), and for each tile patch, uses the L2 similarity metric (with an ultrafast lookup using Facebook's <a href="https://github.com/facebookresearch/faiss">faiss</a> library) to find the closest codebook tile image to replace it with.</p> <p>Since this lookup is quite fast</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/lookup_speed.png"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/lookup_speed.png" height="300"/></a> </p> <p>you can even do this for each frame in a video and create videomosaics (see <code>video.py</code>). You can also run a battery of fun performance metrics with <code>performance.py</code> if you're really curious.</p> <h2>Setup</h2> <p>Ensure you have installed:</p> <ul><li><code>Docker</code></li> <li><code>XQuartz</code> (version 2.7.5 or higher) if you'd like to run the <code>interactive.py</code> OpenCV GUI explorer. Otherwise you don't need it.</li> </ul><p>For <code>XQuartz</code>, <a href="https://blogs.oracle.com/oraclewebcentersuite/running-gui-applications-on-native-docker-containers-for-mac" rel="nofollow">turn on the Remote setting</a>, and quit and restart <code>XQuartz</code> (!).</p> <p>I've only tested this on Mac OS X, but since it's Dockerized it should run anywhere Docker does!</p> <p>Next, build the Docker images and run a container:</p> <div class="highlight highlight-source-shell"><pre><span class="pl-c"><span class="pl-c">#</span> build the Docker image (may take a while!)</span> sh build.sh <span class="pl-c"><span class="pl-c">#</span> launch a Docker container running an IPython notebook server</span> sh launch.sh <span class="pl-c"><span class="pl-c">#</span> then go to http://localhost:8888/</span> <span class="pl-c"><span class="pl-c">#</span> there you'll be able to run scripts and view the GUI </span></pre></div> <p>If you'd like to SSH into the Docker container itself, you can do so after running the above.</p> <p>Finally, and most importantly, get together some photos and videos you'd like to either create images from (use as mosaic tiles) or create mosaics of (turn your photos/videos into mosaics).
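<p>Before getting to the scripts, here is a minimal sketch of the tile-matching step described under "How does it work?" (illustrative only, not this repo's actual code; the patch dimensionality and array shapes are made up):</p> <pre><code># Sketch of L2 tile matching with faiss. Vectors stand in for
# flattened, downsized tile patches.
import numpy as np
import faiss

d = 12 * 16 * 3  # assumed patch dimensionality (height x width x RGB)
codebook = np.random.rand(10000, d).astype("float32")  # stand-in tile vectors
patches = np.random.rand(500, d).astype("float32")     # stand-in target patches

index = faiss.IndexFlatL2(d)  # exact (brute-force) L2 nearest neighbors
index.add(codebook)

distances, ids = index.search(patches, 5)  # 5 nearest tiles per patch
best = ids[:, 0]  # closest codebook tile for each patch

# Something like --best-k: sample among the top matches, weighting
# closer tiles more heavily (inverse-distance weights).
weights = 1.0 / (distances + 1e-8)
weights /= weights.sum(axis=1, keepdims=True)
sampled = np.array([np.random.choice(ids[i], p=weights[i])
                    for i in range(len(ids))])
</code></pre> <p>(<code>IndexFlatL2</code> does exact brute-force search; faiss also offers approximate indexes if the codebook grows huge.)</p>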
I took my iPhone photos/videos from the last few years and threw them all in a folder, and you can see some of the cool results below.</p> <h2>Photomosaic Scripts</h2> <p>Note that the default setting for all of these scripts is to use caching, which means once you've indexed a particular folder of photos at a certain scale (read: tile size), you'll never need to do it again.</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/caching.png"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/caching.png" height="300"/></a> </p> <p>If you add or delete even a single file from the folder, photomosaic is smart enough to know to reindex. Cached index pickle files are stored by default in the <code>cache</code> folder.</p> <h3>1) Creating mosaics from an image</h3> <p>Reconstruct an image using a set of other images, downsized and used as tiles.</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/beach-mosaic-scale-8-small.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/beach-mosaic-scale-8-small.jpg" height="300"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python mosaic.py \ --target <span class="pl-s"><span class="pl-pds">"</span>media/example/beach.jpg<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/%s-mosaic-scale-%d.jpg<span class="pl-pds">"</span></span> \ --codebook-dir <span class="pl-s"><span class="pl-pds">"</span>your/codebook/tiles/directory/<span class="pl-pds">"</span></span> \ --scale 8 \ --height-aspect 4 \ --width-aspect 3 \ --vectorization-factor 1</pre></div> <p>Arguments:</p> <ul><li><code>--savepath</code>: where to save it. %s is the original filename and %d will be the scale</li> <li><code>--target</code>: the image we're trying to reconstruct from other tile images</li> <li><code>--codebook-dir</code>: the images we'll create tiles out of (codebook)</li> <li><code>--scale</code>: how large/small to make the tiles.
Multiplier on the aspect ratio.</li> <li><code>--height-aspect</code>: height aspect</li> <li><code>--width-aspect</code>: width aspect</li> <li><code>--vectorization-factor</code>: if we downsize the feature vector before querying (generally don't need to adjust this)</li> </ul><h3>2) Creating mosaic videos</h3> <p>Do the same, but with every frame of a video!</p> <p>Example (Tipper concert):</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/tipper-video.gif"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/tipper-video.gif"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python video.py \ --target <span class="pl-s"><span class="pl-pds">"</span>path/to/your/video.mov<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/%s-at-scale-%d.mp4<span class="pl-pds">"</span></span> \ --codebook-dir <span class="pl-s"><span class="pl-pds">"</span>your/codebook/tiles/directory/<span class="pl-pds">"</span></span> \ --scale 10 \ --height-aspect 4 \ --width-aspect 3</pre></div> <p>Only use <code>*.mp4</code> for the savepath output; that's all I support for now.</p> <p>Arguments:</p> <ul><li><code>--target</code>: the video we're trying to reconstruct from other tile images</li> <li><code>--codebook-dir</code>: the images we'll create tiles out of (codebook)</li> <li><code>--scale</code>: how large/small to make the tiles. Multiplier on the aspect ratio.</li> <li><code>--height-aspect</code>: height aspect</li> <li><code>--width-aspect</code>: width aspect</li> <li><code>--savepath</code>: save our video as output to here (only tested on .mp4 extensions)</li> </ul><p><code>ffmpeg</code> is used for the audio splicing, since OpenCV can't really handle that.</p> <p>You can adjust the aspect ratio here too, but those and more are optional arguments.</p> <h3>3) Exploring mosaic scales interactively</h3> <p>Not sure which scale will look best? Want to play around with some different settings?
Run this.</p> <p>Then just press the <code>s</code> key and you'll save the selected scale to disk!</p> <p>Alternatively, press <code>ESC</code> to exit the window without saving.</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/interactive.png"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/interactive.png" height="400"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python interactive.py \ --target <span class="pl-s"><span class="pl-pds">"</span>path/to/your/pic.jpg<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/interactive-%s-at-scale-%d.jpg<span class="pl-pds">"</span></span> \ --codebook-dir <span class="pl-s"><span class="pl-pds">"</span>your/codebook/directory/<span class="pl-pds">"</span></span> \ --min-scale 1 \ --max-scale 12</pre></div> <p>Arguments:</p> <ul><li><code>--target</code>: the image we're trying to reconstruct from other tile images</li> <li><code>--codebook-dir</code>: the images we'll create tiles out of (codebook)</li> <li><code>--min-scale</code>: start at this scale value (int)</li> <li><code>--max-scale</code>: let user increase scale up to this value (int)</li> <li><code>--savepath</code>: where to save it. %s is the original filename and %d will be the scale</li> </ul><p>You can adjust the aspect ratio here too, but those and more are optional arguments.</p> <h3>4) Create a GIF from a series of mosaics at varying tile scales</h3> <p>This will create a series of mosaics for a range of scales and then combine them into a GIF with a specified frames per second.
You can adjust the order with <code>--ascending</code>.</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/small.gif"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/small.gif"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python make_gif.py \ --target <span class="pl-s"><span class="pl-pds">"</span>path/to/pic.jpg<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/%s-from-%d-to-%d.gif<span class="pl-pds">"</span></span> \ --codebook-dir <span class="pl-s"><span class="pl-pds">"</span>your/codebook/directory/<span class="pl-pds">"</span></span> \ --min-scale 5 \ --max-scale 25 \ --fps 3 \ --ascending 0</pre></div> <p>If you pick a large range of scales, expect to wait half an hour or so, depending on your machine.</p> <p>Note that the first time you run this on a container you might see an <code>Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.</code> message; that's normal.</p> <h4>Optimizing GIF file size</h4> <p>If you simply run the above, you might get a 200 MB GIF file, which is absurd. The easiest way to remedy this is with a tool like <code>gifsicle</code>.</p> <p>Here's what I'd suggest:</p> <div class="highlight highlight-source-shell"><pre>$ brew install gifsicle $ gifsicle -O3 --resize-height 400 --colors 256 <span class="pl-k">&lt;</span> your/gigantic.gif <span class="pl-k">&gt;</span> totally/reasonable/sized.gif</pre></div> <p>For example, I reduced a 130 MB GIF to a 2 MB one using that command.
<a href="https://ezgif.com" rel="nofollow">EZgif</a> is a surprisingly good online tool for compressing GIFs with different tradeoffs, but they only support GIFs up to 100 MB in size.</p> <h2><a id="user-content-other-settings" class="anchor" aria-hidden="true" href="https://github.com/worldveil/photomosaic#other-settings"><svg class="octicon octicon-link" viewbox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"/></svg></a>Other settings</h2> <p>Here are a few other settings that allow you to tweak the visual output.</p> <h3><a id="user-content-1-randomness---randomness" class="anchor" aria-hidden="true" href="https://github.com/worldveil/photomosaic#1-randomness---randomness"><svg class="octicon octicon-link" viewbox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"/></svg></a>1) Randomness (<code>--randomness</code>)</h3> <p>If you'd like to bring a little chaos into your photomosaics, use the randomness parameter.</p> <p>It's a float in the range <code>[0, 1)</code> that is the probability a given tile will be filled in, not with the closest tile in the codebook, but rather a completely random one.</p> <p>Example (at 0.05):</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/randomness.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/randomness.jpg" height="400"/></a> </p> <h3><a id="user-content-2-stabilization-for-videomosaics---stabilization-threshold" class="anchor" aria-hidden="true" href="https://github.com/worldveil/photomosaic#2-stabilization-for-videomosaics---stabilization-threshold"><svg class="octicon octicon-link" viewbox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"/></svg></a>2) Stabilization for Videomosaics (<code>--stabilization-threshold</code>)</h3> <p>Videomosaics are just a repeated application per frame of the photomosaic functionality. Therefore, tiny changes from frame to frame might cause the same object in the video to be represented with different tiles. This isn't terrible but it gives us less visual stability because it's always changing.</p> <p><code>--stabilization-threshold</code> is a float which represents a fraction of the previous distance for that tile. 
We only replace the tile in that slot if:</p> <pre><code>`current closest tile's distance` &lt; `--stabilization-threshold` * `last frame's distance` </code></pre> <p>Otherwise, we simply keep the tile the same for that frame. This is a crude stability heuristic, and in the future I could certainly do something smarter.</p> <h3>3) Opacity (<code>--opacity</code>)</h3> <p>Some photomosaics "cheat" a bit and just layer on a watered-down version of the original image in a specified ratio along with the mosaic tiles. This is a popular enough technique that I decided to include it. Simply use the <code>--opacity</code> flag:</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/opacity.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/opacity.jpg" height="400"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python mosaic.py \ --target <span class="pl-s"><span class="pl-pds">"</span>media/example/beach.jpg<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/%s-mosaic-scale-%d.jpg<span class="pl-pds">"</span></span> \ --codebook-dir media/pics/ \ --scale 13 \ --height-aspect 4 \ --width-aspect 3 \ --opacity 0.4</pre></div> <h3>4) Best-K (<code>--best-k</code>)</h3> <p>You might notice that many of your photomosaics will have large regions of similar color and so a single image gets tiled over large portions of your image.
If you'd like to throw in a little (sensible) randomness, instead of using the (<code>--randomness</code>) sledgehammer, you can use the <code>--best-k</code> flag.</p> <p>At each tile, with <code>--best-k</code>, the top <code>k</code> matches will be sampled randomly, weighted roughly inversely by distance (so "closer" images are more likely).</p> <p>Here's the same image as above, but with <code>--best-k 5</code>:</p> <p align="center"> <a target="_blank" rel="noopener noreferrer" href="https://github.com/worldveil/photomosaic/blob/master/media/readme/best-k.jpg"><img src="https://github.com/worldveil/photomosaic/raw/master/media/readme/best-k.jpg" height="400"/></a> </p> <div class="highlight highlight-source-shell"><pre>$ python mosaic.py \ --target <span class="pl-s"><span class="pl-pds">"</span>media/example/beach.jpg<span class="pl-pds">"</span></span> \ --savepath <span class="pl-s"><span class="pl-pds">"</span>media/output/%s-mosaic-scale-%d.jpg<span class="pl-pds">"</span></span> \ --codebook-dir media/pics/ \ --scale 13 \ --height-aspect 4 \ --width-aspect 3 \ --opacity 0.4 \ --best-k 5</pre></div> <h3>Face Montages</h3> <p>I really wanted to make face montages, so even though they don't have anything to do with photomosaics, here they are!</p> <p>Basically this means a GIF of a single person from different photos but all aligned on that person's face.</p> <p>The way it works:</p> <ol><li>Put together a folder of photos (<code>--target-face-dir</code>) with ONLY the face you want in the montage (yourself, for example). Selfies are great for this.</li> <li>Put together a folder of photos with ANYONE ELSE's face in them (<code>--other-face-dir</code>). The more the better. Just don't have your face in them. If you're really short on them / have a lot of group photos, crop yourself out.</li> <li>Put together a directory of photos you'd like to draw from to make the montage (<code>--photos-dir</code>).</li> </ol><p>I have included an academic dataset (the <a href="http://www.vision.caltech.edu/html-files/archive.html" rel="nofollow">Caltech Faces Dataset</a>) of 450 faces (that are unlikely to be you) in the <code>media/faces/other_faces</code> folder as a starting point. If you make use of this for some academic reason, please do cite both them and <code>dlib</code>.</p> <p>If you want good accuracy, I'd try to add at least 100 photos to both the <code>--target-face-dir</code> and the <code>--other-face-dir</code>. I added about that many, and as a result the <code>face_montage.py</code> script had about 1 false positive per 300 photos (easily removed before running the <code>create_gif_from_photos_folder.py</code> step).</p> <p>Here's <a href="https://www.kairos.com/blog/60-facial-recognition-databases" rel="nofollow">a place to find many, many more pictures with random faces</a>.</p> <p>Anyway, enough description.
To run the facial embeddings, train the linear classifier, and align the faces:</p> <div class="highlight highlight-source-shell"><pre>$ python face_montage.py \ --target-face-dir media/faces/will \ --other-face-dir media/faces/other_faces \ --photos-dir media/pics \ --output-size 800 \ --savedir media/output/montage_will/ \ --sort-by-photo-age</pre></div> <p>Then, to actually compile them into a GIF, use the <code>--savedir</code> from above and run:</p> <div class="highlight highlight-source-shell"><pre>$ python create_gif_from_photos_folder.py \ --photos-dir media/output/montage_will/ \ --fps 7 \ --fuzz 3 \ --order ascending</pre></div> <p>It's nice to separate these two steps since you might want to remove false positives from the folder created in the first step, remove unflattering pics, or mess around with how many frames per second you'd like in the resulting GIF. I implemented caching on the embedding, but running over a full set of photos (4,000+ for just the segment of my photos library I had the patience to run over) can still take some time.</p> <h3>Using <code>ffprobe</code> / <code>ffmpeg</code></h3> <p>A few of the routines in this project make use of parameters from the video/audio files. I often call the command line utilities directly by spinning up a separate process, which is a little icky, but gets the job done.</p> <p>FFProbe is an excellent tool for this, and the command line interface is quite powerful. I recommend <a href="https://trac.ffmpeg.org/wiki/FFprobeTips" rel="nofollow">this guide</a> for getting a handle on it.</p> <p>Similarly, <code>ffmpeg</code> makes splicing audio/video streams and recombining them easy.
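<p>For example, pulling a file's duration out of <code>ffprobe</code> from Python looks something like this (a sketch of the separate-process approach described above, using standard <code>ffprobe</code> flags rather than this project's exact invocation):</p> <pre><code># Sketch: query ffprobe for a media file's duration in seconds.
import subprocess

def media_duration_seconds(path):
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ])
    return float(out)

print(media_duration_seconds("path/to/your/video.mov"))
</code></pre>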
<p>There are a few good resources for <code>ffmpeg</code> specifically as well; the <a href="https://trac.ffmpeg.org/wiki" rel="nofollow">ffmpeg wiki</a> is a good place to start.</p> <h3>ImageMagick</h3> <p>The <code>convert</code> tool is also nice for making GIFs:</p> <pre><code>$ cd your/cool/folder/with/jpg/images
$ convert -delay 5 -layers optimize *.jpg output.gif</code></pre> <p>Then you may want to apply the gifsicle trick for compressing/resizing that GIF to bring it down to a reasonable size.</p> <h3>Unit tests</h3> <p>There is a small (but embarrassingly incomplete) test suite. Not much coverage at the moment.</p> </article></div> Thu, 13 Dec 2018 18:43:05 +0000 https://github.com/worldveil/photomosaic A ‘Self-Aware’ Fish Raises Doubts About a Cognitive Test https://www.quantamagazine.org/a-self-aware-fish-raises-doubts-about-a-cognitive-test-20181212/ https://www.quantamagazine.org/a-self-aware-fish-raises-doubts-about-a-cognitive-test-20181212/ <p>Next, the researchers marked the fish that seemed to be catching on. They injected a bit of brown material (or clear, for a control) under the skin of each fish’s throat. Afterward, some of the fish seemed to study the marks in front of the mirror. Then they scraped their throats against rocks or the sandy bottom of their tanks — a common fish behavior for removing irritants, Jordan said. The fish often followed this maneuver by swimming back up to the mirror. Three out of the four fish that made it this far in the study passed the mirror test, the authors concluded.</p> <p>The researchers spent more than three years trying to get the paper published. Peer review is a largely cloaked process in which experts in a field respond anonymously to papers that have been submitted to journals. But Gallup signed his reviews of the cleaner wrasse paper, which were “violently anti,” Jordan said.</p> <p>In Albany, Gallup chuckled at the suggestion that the fish had recognized themselves. To him, the demonstrated behavior was too ambiguous. He wrote in one of his reviews that when a wrasse scraped its throat, maybe it was pantomiming an instruction for what the mirrored fish should do — as in “You’ve got some mustard on your chin,” said Jordan, who called this alternate explanation “incredibly far-fetched.”</p> <p>Reiss also reviewed the paper several times for different publications, she said.
She wasn’t convinced that behaviors like swimming upside down showed that fish were testing how the mirror worked. She and Gallup also found it problematic that the brown mark resembled a parasite — to which wrasses instinctively react — unlike the unnatural marks on other animals. “I think for a claim like this, the evidence has to be much stronger,” Reiss said.</p> <p>In response to the reviewers’ objections, Jordan and his co-authors added more control experiments to their study. Now that the paper has finally been accepted for publication, Jordan thinks the grueling revision period made the study stronger. “And, you know, I didn’t die in the process,” he joked.</p> <p><a href="https://psychology.barnard.edu/profiles/alexandra-horowitz">Alexandra Horowitz</a>, a psychologist at Barnard College in New York City who studies dog cognition, called the wrasse study “amazing.” She added, “I think it … challenges our presumptive notions about what fish can or cannot experience.”</p> <p>Jordan wants the world to know how smart fish can be. But, he said, “I am the last to say that fish are as smart as chimpanzees. Or that the cleaner wrasse is equivalent to an 18-month-old baby. It’s not.” Rather, he thinks the main point of his paper has more to do with science than fish: “The mirror test is probably not testing for self-awareness,” he said. The question then is what it is doing, and whether we can do better.</p> <h2>What Is Self-Awareness?</h2> <p>Sometimes it’s easy to tell that an animal really doesn’t understand mirrors. The writer <a href="https://marylauraphilpott.com/about/">Mary Laura Philpott</a> has frequently been awakened in the wee hours of the morning by a loud knocking on her door in Nashville, Tennessee. When she opens the door, she finds only a small turtle. She nicknamed the prankster reptile Frank. Eventually she came to suspect that Frank might be challenging or attacking the strange turtle he sees in the reflective part of her door — night after night after night.</p> <p>But just because one individual animal fails a mirror test doesn’t mean every member of its species would do the same. A positive result is more meaningful than a negative one. And even when animals do recognize themselves in mirrors, researchers are divided about what that implies.</p> <p>“Recognition of one’s own reflection would seem to require a rather advanced form of intellect,” <a href="http://doi.org/10.1126/science.167.3914.86">Gallup wrote</a> in 1970. “These data would seem to qualify as the first experimental demonstration of a self-concept in a subhuman form.”</p> <p>Either a species shows self-awareness or it doesn’t, as Gallup describes it — and most don’t. “And that’s prompted a lot of people to spend a lot of time trying to devise ways to salvage the intellectual integrity of their favorite laboratory animals,” he told me.</p> <p>But Reiss and other researchers think self-awareness is more likely to exist on a continuum. <a href="http://www.pnas.org/cgi/doi/10.1073/pnas.0503935102">In a 2005 study</a>, the Emory University primatologist Frans de Waal and his co-authors showed that capuchin monkeys make more eye contact with a mirror than they do with a strange monkey behind Plexiglas.
This could be a kind of intermediate result between self-awareness and its lack: A capuchin doesn’t seem to understand the reflection is itself, but it also doesn’t treat the reflection as a stranger.</p> <p>Scientists also have mixed feelings about the phrase “self-awareness,” for which they don’t agree on a definition. Reiss thinks the mirror test shows “one aspect of self-awareness,” as opposed to the whole cognitive package a human has. The biologists Marc Bekoff of the University of Colorado, Boulder, and Paul Sherman of Cornell University have <a href="https://doi.org/10.1016/j.tree.2003.12.010">suggested a spectrum of “self-cognizance”</a> that ranges from brainless reflexes to a humanlike understanding of the self.</p> <p>Jordan likes the idea of a spectrum, and thinks cleaner wrasse would fall at the lower end of self-cognizance. He points out that moving your tail before it gets stepped on, or scraping a parasite off your scales, isn’t the same as sitting and pondering your place in the universe. Others in the field have supported his contention that the mirror test doesn’t test for self-awareness, he said. “I think the community wants a revision and a reevaluation of how we understand what animals know,” Jordan said.</p> <p>One thing on which most scientists in the field do agree is that there’s a link between recognizing yourself in a mirror and being social. The species that perform well on mirror tests all live in groups. In <a href="https://doi.org/10.1007/BF03393991">an intriguing 1971 study</a> by Gallup and others, chimpanzees born in captivity and raised in isolation failed the mirror test. The chimps that passed the test had been born in the wild, in social groups. Gallup thought this finding supported the ideas of the philosopher George Herbert Mead of the University of Chicago, who said our sense of self is shaped by our interactions with others. “[T]here could not be an experience of a self simply by itself,” Mead wrote in 1934.</p> <p>Gallup sees a clear connection between recognizing yourself in a mirror, understanding something about others’ states of mind, and even empathizing. “Once you can become the object of your own attention, and you can begin to think about yourself, you can use your experience to infer comparable experiences in others,” Gallup said. No species evolved looking in mirrors, but some of us can see ourselves reflected in our companions.</p> <h2>The Mirror as a Window</h2> <p>The sociality of Asian elephants helped researchers to design a better mirror test in 2006. <a href="http://www.hunter.cuny.edu/psychology/people/faculty/biopsychology/plotnik">Joshua Plotnik</a>, a comparative psychologist now at Hunter College in New York City, worked on the study with de Waal and Reiss. In <a href="http://doi.org/10.1037/0735-7036.103.2.122">an earlier test that elephants failed</a>, the animals had been in an enclosure, looking at a small mirror. For the revised test, the researchers used an eight-foot-by-eight-foot mirror, so the elephants could see their whole bodies at once. They also let the elephants approach the mirror so that they could stand on their back legs to look behind it or kneel to peer beneath it.</p> <p>They also tested elephants in pairs, which “gave them an opportunity to use their partner as a frame of reference,” Plotnik said. When an elephant saw a friend standing in the mirror next to a stranger, she might be able to deduce that the strange elephant was herself.</p> <p>This time, one of three elephants passed the test. 
Plotnik said the researchers have promising results from other elephants that haven’t been published yet.</p> <p>“You have to really try to take the perspective of the animal that you’re working with,” Plotnik said. For example, elephants like being dirty and might not care about marks on their bodies, unlike grooming animals such as chimpanzees. Gorillas groom, but they hate making direct eye contact with others. This might help explain why they haven’t had the same success in the mirror test as chimps or orangutans.</p> <p>Plotnik thinks future experiments should take an animal’s particular motivations and perceptions into account. For example, the mirror test is visual, but elephants are more interested in what they smell and hear. “Is it fair if you test an animal that’s not a primarily visual animal and they fail?” Plotnik said. “You could make that argument for dogs.”</p> <p>Dogs are <a href="https://doi.org/10.1016/j.beproc.2017.08.001">lousy at recognizing themselves</a> in mirrors. But Horowitz recently designed an “olfactory mirror test” for dogs. She found that dogs spent longer sniffing samples of their own urine when it had an extra scent “mark” added to it.</p> <p>“It’s challenging for us as visual creatures to imagine ourselves into the sensory worlds of nonvisual animals,” Horowitz said. But we have to do it, she thinks, if we want to understand how their minds work.</p> <p>Reiss, who calls Horowitz a friend, doesn’t think the olfactory mirror study proves dogs can recognize themselves. But she thinks the experiment is an interesting jumping-off point. “How else can we [design] tests to get glimpses into what animals know about themselves?” she said.</p> <p>As empathetic as <em>Homo sapiens</em> is, we struggle to place ourselves in the viewpoints of other species. Yet this kind of understanding could help us not just to grasp our own place in the world but to protect the world. For example, Plotnik said, a lack of habitat for Asian elephants is driving conflict between the endangered species and humans. “I think a lot of what’s missing from the debate around how to solve this conflict is the elephant’s perspective,” he said. The kind of insight we get from putting pachyderms in front of mirrors might be a helpful window into their minds.</p> <p>Several mirrors decorate the walls of Gallup’s office, partially hidden behind the towers of papers. It’s just a coincidence, he told me — the mirrors were there when he moved in. He got up from his chair to show me another coincidence born of pareidolia, <a href="https://doi.org/10.1016/j.cortex.2014.01.013">our mind’s inclination to look for faces</a>. In the black wood grain of his office door, a student had once pointed out the barely discernible face of a gorilla. Gallup traced it for me: an eye, another eye, two nostrils. He directed me to stand in front of the door and move back and forth until I saw it.</p> <p>Suddenly the light caught the grain in just the right way and the gorilla’s giant face emerged. It stared back at me directly, as a real gorilla never would, like a glimpse straight into the unknowable mind of an animal. “I do see it!” I said. Gallup laughed delightedly. “Isn’t it amazing?” he asked.
Then it was gone.</p> Thu, 13 Dec 2018 13:04:29 +0000 https://www.quantamagazine.org/a-self-aware-fish-raises-doubts-about-a-cognitive-test-20181212/ Amazon uses dummy parcels to catch thieves https://www.bbc.com/news/technology-46552611 https://www.bbc.com/news/technology-46552611 <figure><img alt="Amazon parcels" src="https://ichef.bbci.co.uk/news/320/cpsprodpb/504C/production/_104765502_amazonparcel.gif" width="976" height="549"/><figcaption>Amazon delivers millions of packages every day (Image: Getty Images)</figcaption></figure><p>Amazon has teamed up with police in the US in an effort to stop thieves who steal parcels left outside homes.</p><p>Officers in New Jersey are planting dummy boxes fitted with GPS trackers, coupled with hidden doorbell cameras, at homes around Jersey City.</p><p>The homes selected for the experiment were chosen using the city's own crime statistics combined with mapping data of theft locations supplied by Amazon.</p><p>One box was stolen three minutes after it was "delivered".</p><figure><figcaption>Having parcels left outside is convenient for many but is also a risk (Image: Getty Images)</figcaption></figure><p>Amazon told AP: "We appreciate the increased effort by local law enforcement to tackle package theft and remain committed to assisting however we can."</p><p>The US Postal Service expects to deliver about 900 million packages in the run-up to Christmas.</p><p>Last year, Amazon launched a service called Amazon Key that allowed homeowners with smart locks to let couriers open their doors via an app and leave parcels inside.</p><p>While that may be a step too far for many, there are other ways to protect deliveries:</p><ul><li>Have them delivered to a workplace or a friend who is home during the day</li> <li>Insist that deliveries must be signed for</li> <li>Install cameras that will give police some video evidence</li> <li>Use a service that provides a storage box drivers can unlock by entering a code on a keypad</li> </ul><p>Amazon also provides lockers for people to pick up parcels in locations such as shopping centres, convenience stores, airports, train stations and universities.</p> Thu, 13 Dec 2018 18:49:00 +0000 https://www.bbc.com/news/technology-46552611 Firefighting foam heats up coal fire debate (2010) https://www.earthmagazine.org/article/hot-hell-firefighting-foam-heats-coal-fire-debate-centralia-pa https://www.earthmagazine.org/article/hot-hell-firefighting-foam-heats-coal-fire-debate-centralia-pa <p>By some accounts, Hell on Earth is located directly below Centralia, Pa.: Smoke rises from the cracked ground, smoldering sinkholes open without warning, and what is left of the town’s abandoned houses and surrounding woodlands is scorched and
covered in a layer of smelly sulfur. Once a productive mining town in eastern Pennsylvania’s valuable anthracite coal region, Centralia has been reduced to a smoky ghost town, lacking even a zip code, by an underground coal fire that has been burning for nearly 50 years.</p> <p>In May 1962, a fire set in an unregulated dump spread to an exposed coal seam and soon invaded the maze of mine tunnels and coal seams underlying Centralia, once a close-knit coal-mining town of 2,600 residents, about an hour northeast of Harrisburg. Early attempts to put out the fires failed. In the 1980s, after a number of residents were sickened by carbon monoxide fumes and a 12-year-old child was nearly killed by a sudden ground collapse, the town was deemed uninhabitable and most of the residents were bought out and relocated.</p> <p>More than 20 years after the last attempts to put out the flames, the Centralia inferno still shows no signs of burning out on its own, now burning along multiple coal seams in four different directions. Firefighting technology has come a long way in the past two decades, however, and now an innovative Texas-based company wants to take a crack at putting out Centralia’s fires, hoping to prove once and for all that valuable mines don’t need to be left to burn.</p> <h3>To extinguish, or not to extinguish?</h3> <p>Coal fires are a problem all over the world. Such fires endanger nearby communities, waste precious resources and produce tons of noxious and greenhouse gases. Centralia is not the only coal fire burning in the United States. In fact, it’s just one of 38 burning in Pennsylvania alone. The hundreds of underground fires in the United States, from Pennsylvania to Alabama to Wyoming, combined with the thousands thought to be burning in China, India and elsewhere, are one of the largest sources of carbon dioxide and pollution on Earth.</p> <p>China alone loses between 100 million and 200 million tons of coal each year to mine fires, as much as 20 percent of its annual production, according to the <a href="http://www.itc.nl/external/coalfire/activities/overview.html" target="_blank">International Institute for Geo-Information Science and Earth Observation</a>, based in Enschede, Netherlands. The Institute estimates that carbon dioxide emissions from these fires are as high as 1.1 billion metric tons, more than the total carbon dioxide emissions from automobiles in the United States. Second to China is India, where 10 million tons of coal burns annually in mine fires, contributing a further 51 million metric tons of carbon dioxide to the atmosphere.</p> <p>In addition to carbon dioxide, “coal fires produce as many as 60 different toxic compounds, many of which are carcinogenic,” says Glenn Stracher, an expert on coal fires at East Georgia College in Swainsboro. Such toxins include arsenic, selenium, fluorine, sulfur, lead, copper, bismuth, tin, germanium and mercury, he says. Robert Finkelman, a medical geologist at the University of Texas at Dallas, estimates that 40 tons of mercury are released every year by uncontrolled coal fires, which is on a par with the amount produced by coal-fired power plants in the United States. “If we could extinguish those fires,” he says, “it would make a worthwhile contribution to reducing mercury pollution as well as carbon dioxide and other toxic elements.”</p> <p>Despite their impact, many coal fires are left to burn because they are too dangerous, costly and difficult to deal with.
Part of the problem is that effective firefighting options are limited. Using water can cause steam explosions. Cement and fly ash slurries from the surface can sometimes be used to seal mine openings and cut off the fire’s oxygen and fuel supply, but this technique usually only works on smaller fires — not big blazes like Centralia, which has a cavernous network of oxygen-supplying tunnels. Another technique is to dig out the fire and smother it on the surface, but again, this technique is too dangerous and too costly to use on a fire as large as Centralia. Because of these firefighting problems, fires like Centralia aren’t extinguished unless they directly impact nearby communities or are near a gas line or are otherwise imminently dangerous, Stracher says.</p> <p>The issue of how to deal with Centralia is particularly difficult for a state like Pennsylvania. “Pennsylvania has the largest abandoned mine problem in the country and we’re working with a limited budget,” says Tom Rathbun, a spokesperson for <a href="http://www.portal.state.pa.us/portal/server.pt/community/abandoned_mine_reclamation/13961/centralia/588959" target="_blank">Pennsylvania’s Department of Environmental Protection</a> (PDEP), based in Harrisburg. PDEP works closely with the federal <a href="http://www.osmre.gov/" target="_blank">Office of Surface Mining (OSM)</a> to determine which of Pennsylvania’s many mine reclamation projects are the most pressing, and all funding must be approved by the federal agency.</p> <p>After Congress passed the <a href="http://www.osmre.gov/topic/SMCRA/publiclaw95-87.shtm" target="_blank">Surface Mining Control and Reclamation Act</a> in 1977, OSM inventoried abandoned mines around the country and prioritized them according to danger to the public. Open pits, open mine shafts and exposed coal seams where fires might start were given priority-funding status, Priority 1. Putting out already-existing fires that were not deemed a direct danger to the public was not.</p> <p>“Our first role is to prevent new fires from happening,” says Bill Ehler, a project manager with OSM. The best recourse for dealing with coal fires is to keep them from lighting in the first place, or to put them out early, before they get Centralia-sized, he says. “Centralia is a unique case. We tried to abate it not long after it ignited and found it was already too far out of control. <a href="http://www.pittsburghlive.com/x/pittsburghtrib/news/s_679163.html" target="_blank">Our only option was to move the community</a>, which took an act of Congress,” he says. “Buying out a town is generally not our usual process.”</p> <p>After it was abandoned in the late 1980s, Centralia was classified as a Priority 2 site, where it has remained, as the fires are not “presenting a danger to the public at this time,” Rathbun says. “The problem is that Pennsylvania has about $1.4 billion in Priority 1 sites,” he says. “We need to take care of those before we worry about Centralia. The Office of Surface Mines simply will not approve spending most of our budget on one project with low priority.”</p> <p>But as practical as that assessment might be, Stracher, who has studied Centralia for 25 years, begs to differ. “They aren’t taking into account the environmental consequences of Centralia, which are catastrophic,” he says. “Even from an energy point of view, it’s a catastrophe.
That’s a lot of high-grade coal under there that’s completely going to waste.” And it’s releasing dangerous toxins into the air as well as polluting the groundwater, he says. Those are some good reasons to try to put out the flames.</p> <h3>Previous attempts to extinguish Centralia</h3> <p>“We get an awful lot of calls from people who have lots of ideas about how to put out Centralia,” Rathbun says. “Centralia is a different animal though, a very large and complex system. Somebody would have to do some serious geologic work and make a plan that would have an almost guaranteed chance of working before we would seriously consider it.”</p> <p>Nobody knows how large the area burning under Centralia is today. The last detailed study of the fire was in 1983: PDEP and OSM hired GAI Consultants, an engineering and environmental consulting firm based in Pittsburgh, to map the fire and develop mitigation plans. At that time, GAI found that the fire was spread over almost 80 hectares, and estimated it could eventually burn close to 1,500 hectares. But now, says Stanley Michalski, a geologist on that team, “we really have no idea how big it is or where exactly it’s burning.”</p> <p>At the end of the study, GAI estimated it would cost upward of $600 million to completely dig out the fire, which was the only viable firefighting technique at the time. “Everybody choked on that number,” Michalski says. And so the federal government bought out the town for $42 million (although a few residents refused to leave), and in the more than 20 years since, the fire has continued to burn.</p> <p>Then, two years ago, Michalski drew up a new, detailed plan to cut off one of Centralia’s four main fires. “Centralia’s fire is shaped like the letter H, with four limbs moving off in four different directions,” he says. Michalski, who has drawn up successful coal firefighting plans for several other Pennsylvania fires, proposed cutting off the fire’s northwest limb by filling strategic slices of the mine with coal combustion ash that wouldn’t fuel the advancing fire. “If this method worked on this one limb, we could then build three more barriers and effectively contain the fire in a box where it could eventually burn itself out,” he says.</p> <p>Michalski took his idea to PDEP and was met with a lot of positive feedback, he says. But, to his knowledge, no further action was taken by the agency.</p> <h3>A new approach</h3> <p>Now, a commercial firefighting company, <a href="http://www.cafsco.com/" target="_blank">CAFSCO Fire Control</a>, based in Fort Worth, Texas, has its sights set on Centralia. The company’s compressed nitrogen foam system — originally invented to combat forest fires — has been adapted to fight underground coal fires with much success.</p> <p>“There are no limits to the types of mines or size of fire that we can put out,” says Lisa LaFosse, co-owner of CAFSCO. “We can fill up any mine with foam,” she says. Furthermore, adds Mark Cummins, founder and co-owner of CAFSCO, the company’s system is safer and more cost-effective than the tried-and-true technique of digging out the fires. “We can put out a fire at a tenth of the usual cost, and we don’t even have to see the fire to fight it,” he says. CAFSCO’s method would also allow for future mining of an area’s unburned coal seams, unlike digging out the fire.</p> <p>CAFSCO’s biodegradable fire-suppressing foam works in a number of ways to put out underground coal fires.
Pumped into an underground mine through surface boreholes, the foam quickly expands to fill all the available space, saturating the interior of the mine from floor to ceiling, effectively soaking all the fuel and smothering the fire. The expansion of the foam also creates positive pressure in the underground spaces of the mine, forcing out any unconsumed oxygen that could further feed the fire. CAFSCO’s foam differs from other types of firefighting foam in that it contains no oxygen, only nitrogen, an inert gas that starves the fire rather than feeding it.</p> <p>The system has worked in dozens of coal fires across the United States, according to CAFSCO. In 2007, CAFSCO put out its largest coal fire yet, pumping more than 700 million gallons of foam into the Consolidated Buchanan No. 1 coal mine in Claypool Hill, Va. Cummins says Centralia could be put out in about a month, for about $60 million. “I understand the difficulties of the Centralia fire, but I know what this foam is capable of doing and I really believe we can put it out,” he says.</p> <p>Stracher, for one, thinks CAFSCO can get the Centralia job done. “I’ve seen this foam in action and it’s really unbelievable what it can do,” he says. Rathbun, however, remains wary of the new foam technique because he has not yet seen it in use. Before PDEP would consider approving such a project, “we would need to see more evidence, track records, proof that it works,” he says. “Centralia is a huge project and we don’t have the money to experiment with it.”</p> <p>CAFSCO staff are currently working with PDEP on some smaller fires in Pennsylvania’s anthracite region, hoping to establish a relationship with the agency and work their way up to Centralia. “I’m about ready to retire and I’d love to go out on the Centralia fire,” Cummins says. “Hopefully we can convince them it’s high time to put this thing out.”</p> <p>Budget issues aside, Stracher sees extinguishing Centralia as an opportunity to change how coal fires are dealt with, not just in the United States, but also around the world. “If we can put out Centralia, one of the most problematic and longest-burning fires in the U.S., it could be a turning point,” he says. “Now that we have the technology to deal with this staggering source of pollution, it’s high time we start putting it to use.”</p> Thu, 13 Dec 2018 11:42:35 +0000 https://www.earthmagazine.org/article/hot-hell-firefighting-foam-heats-coal-fire-debate-centralia-pa