<p><em>Mattt Thompson &middot; <a href="http://mattt.me/">mattt.me</a> &middot; m@mattt.me</em></p>
<h1><a href="http://mattt.me/2013/dreamforce-hackathon/">$1 Million</a></h1>
<p><em>2013-11-05</em></p>
<p>Enterprise is a loaded term among hackers. It's time that we started to think bigger about our role in shaping the digital world.</p>
<p><strong>$1 Million.</strong> That's the grand prize for the <a href="http://events.developerforce.com/dreamforce/hackathon">hackathon this year at Dreamforce</a>, the annual Salesforce developer conference.</p>
<p>If that number doesn't make you perk up a little, you've spent too much time around Silicon Valley types. <em>Yeah, a billion dollars is cool, but you know what? A million dollars is pretty cool, too.</em></p>
<p>And if a million dollars seems disproportionate to the iPad you're used to competing for at hackathons, that's because the stakes of enterprise software development are that much higher. We're talking orders of magnitude.</p>
<h2>Enterprise is a <em>Huge</em> Market</h2>
<p>Tech entrepreneurship in San Francisco has devolved into a mass hysteria—hundreds of unbelievably niche, banal startups designed for 20-something upper-middle class white guys attempting to disrupt who knows what for no apparent reason other than to become bait for a quick acquihire by Facebook, Google, or Yahoo.</p>
<p>Meanwhile, enterprise software remains one of the fastest-growing and most profitable markets for software developers. With an install base that's comparable to your favorite mobile platform, Salesforce is a product that millions of business people use every day to do their jobs.</p>
<p>Rather than playing the lottery with yet another consumer-facing social photo gaming app, why not make something that will give you paying customers from day 1? Instead of chasing the dream of a billion dollar IPO, how about the very real chance at building a million dollar business in a few years?</p>
<h2>Enterprise Software is F***ing Hard</h2>
<p>Hackers tend to write off enterprise software as being boring, or too easy. Anyone who thinks that enterprise software is boring lacks imagination. Anyone who thinks that enterprise software is easy has never tried making it before.</p>
<p>Unlike lifestyle apps for well-to-do urbanites, business applications carry with them real financial implications. That alphabet soup of corporate requirements, from NDAs and SLAs to ACLs and VPNs, exists for a reason, and that reason usually involves millions if not billions of dollars in liabilities.</p>
<p>When you develop on Salesforce, however, you're way ahead on this front. Use it as a secure data store, and automatically take advantage of its auditing system, its role-based permissions, and access restriction infrastructure. Not only that, but Salesforce will take care of all of the other boring things, like control panels, charts, and content aggregation.</p>
<h2>It Doesn't Have To Be Boring</h2>
<p>Back to that million dollars up for grabs (not to mention the 50k for 2nd place, and sizable prizes for other runners-up):</p>
<p>As hackers, we've constructed a world of web hooks and deploy scripts and auto-responders that automate away little rough edges of our lives. Salesforce is doing the same thing for businesses.</p>
<p>Most of us don't have the context to understand just how much of business is done manually. Honestly, most of us would be shocked—<em>shocked!</em>—at how much busywork there is in copying information between different forms, or manually sending out reports, or physically counting products to do inventory. As such, it's hard for us to understand why people get so excited about what appears so obvious to us.</p>
<p>There is so much good we can do in the world as programmers. Like the <a href="http://www.nytimes.com/2013/07/27/nyregion/in-lieu-of-money-toyota-donates-efficiency-to-new-york-charity.html">Toyota engineers who improved wait times at The Food Bank for New York City</a>, we have the opportunity to make real change by applying our expertise where it's needed.</p>
<p>Whether it's healthcare or government or manufacturing or logistics, we need to start thinking bigger about our ability to change the world around us. Every aspect of our lives could be improved with an API, so let's start building them.</p>
<h2>It's Not About How Well You Know Salesforce</h2>
<p><strong>It's not about how well you know Salesforce. It's about creating something that solves a real business problem (and there are plenty of those to go around).</strong></p>
<p>You could create a <a href="http://www.heroku.com">Heroku</a> app that augments Force.com customer data with social networks, or performs advanced geospatial queries on companies with <a href="http://postgis.net">PostGIS</a>. Perhaps you could create a low-friction teleconferencing service that combines Salesforce authentication with <a href="http://www.webrtc.org">WebRTC</a>. You could even write a simple iOS app that quickly reads barcodes to look up product information.</p>
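<p>For a flavor of the geospatial idea, here's a minimal sketch of a radius filter over company records. PostGIS would do this in SQL with real spatial indexes; the pure-Python haversine version below is only for illustration, and the sample companies are made up:</p>

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def companies_within(companies, lat, lon, km):
    """Filter (name, lat, lon) records to those within `km` of a point."""
    return [name for name, clat, clon in companies
            if haversine_km(lat, lon, clat, clon) <= km]

# Hypothetical accounts, as might be exported from a CRM
companies = [
    ("Acme SF", 37.7749, -122.4194),
    ("Acme NYC", 40.7128, -74.0060),
]
print(companies_within(companies, 37.79, -122.40, 25))  # SF-area accounts only
```

<p>In PostGIS proper, the equivalent is a single <code>ST_DWithin</code> query against an indexed geography column, which scales to millions of rows.</p>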
<p>Honestly, if you're reading this, your chances are probably pretty darn good. </p>
<p>If I weren't an employee (and thus legally prevented from entering), I'd be all over this. Instead, I'll be helping to pick the winner, and I'd love nothing more than to be able to give one of you a million dollars for making a killer app.</p>
<p><a href="http://dreamforcehackerpass.eventbrite.com">Registration is now free, so go ahead and give it a shot.</a></p>
<h1><a href="http://mattt.me/2012/open-source-and-commercial-support/">Open Source &amp; Commercial Support</a></h1>
<p><em>2012-07-06</em></p>
<blockquote>
<p>Several authors of open source libraries have recently expressed their discouragement about the expectation to provide free support to users. This is written in response to all of this, to fellow open source developers.</p>
</blockquote>
<p>Code is our passion.</p>
<p>We love to solve interesting problems and share our work with others.<br>
We give away our code free of charge, often with no more than the expectation to have our names follow our work.<br>
We are delighted to share this freely with others, but there are ultimately limits to how much we can help.</p>
<p><strong>When developers use our software, it creates an imbalance: they depend on our work, but not the other way around.</strong></p>
<p>GitHub resolves some of that by allowing others to contribute back to the project, but this doesn't solve specific problems one might encounter. Stack Overflow also helps by distributing the support burden, but this too is limited by the availability and patience of the community. Some situations are outside the scope of a <a href="http://en.wikipedia.org/wiki/Gift_economy">gift economy</a>, and this is currently an unsolved problem for us.</p>
<p><strong>Developers want the option to pay for help when they need it, and we should create the infrastructure necessary to make that possible.</strong></p>
<h2>A Proposal</h2>
<p>Develop a set of standard operating practices for open source authors to provide commercial support, including:</p>
<ol>
<li>A standard contract that protects the intellectual property of clients (i.e. allow safe access to source code), as well as that of the developer.</li>
<li>Suggestions on how to form a legal entity (e.g. LLC) to operate under when providing support.</li>
</ol>
<h3>What This Means</h3>
<ul>
<li>Open Source authors have an opportunity to be compensated for their work.</li>
<li>Companies have the ability to get the help they need from those who know the most about their problem.</li>
</ul>
<h3>What This <em>Does Not</em> Mean</h3>
<ul>
<li>Open Source authors have an <em>obligation</em> to be compensated for their work.</li>
<li>Individuals and companies will <em>need</em> to pay to receive any support.</li>
</ul>
<h2>Open Questions</h2>
<p>I understand that socioeconomic structures are complex and fragile, and I do not take any of the possible implications of this proposal lightly. There are still some important questions that need to be resolved. Please consider the following:</p>
<ul>
<li>How can we, as leaders in open source, best communicate the relationship between authors and users?</li>
<li>What is the best pricing structure for support? For those whom this would be a side-project, a flat, hourly rate seems appropriate. But commercial open source companies usually offer monthly or yearly support plans. Which of these (if any) should be encoded in a standard contract?</li>
<li>Who in an open source project can offer support? What are the implications for open source contributors when money is introduced into the equation?</li>
<li>How can we minimize the risk of commercial support stifling innovation? How can we prevent this from becoming a kind of <a href="http://en.wikipedia.org/wiki/Racket_(crime)">racket</a>?</li>
</ul>
<h1><a href="http://mattt.me/2012/creativity-and-terror/">Creativity and Terror</a></h1>
<p><em>2012-01-03</em></p>
<p>Deep within anyone who creates, you will find a lingering terror that one day, they will be outed as a fraud. Or maybe it's just me.</p>
<p>There is a chilling, complicated suspicion that over and over again, people overestimate my ability, and over the years that has compounded into a systematic deception that I cannot possibly escape. It keeps me up on nights after days in which ugly ideas prevailed over my craft–bad days, off days, wasted days. By morning, it has resolved into a droning worry that my best days are behind me.</p>
<p>No amount of success resolves this feeling. In fact, success only deepens the fear.</p>
<p>To be precise, this fear is composed of two related concerns: one that you aren't nearly as capable as your reputation would suggest, and another that the magic propping up the deception will finally dissipate, thereby realizing the former concern.</p>
<p>Both of these concerns ultimately derive from the subjective, ephemeral nature of creativity.</p>
<p>In ancient Greece, Muses were the source of all knowledge and creativity. Harsh mistresses that they were, it would become the perennial occupation of artists, philosophers, and poets alike to court their favor.</p>
<p>In our modern understanding, creativity is but an exercise of reconstituting pieces of our prior experiences. We steal ideas. Nothing is truly original. Divinity is replaced by a chaotic process, albeit equally unexplainable.</p>
<p>Because we are blind to everyone else's creative process, we cannot reconcile the naked ideas behind our work with the unexplained genius we see in others. In reality, it's one and the same. Our creative output merely reflects a function of intellectual diet and chance; we are wholly products of our environments. Everyone else is equally terrified.</p>
<p>Because we are sensitive to the whims of inspiration, we cannot help but despair when it fades. All we can do is resist the temptation of cargo cults, to endlessly attempt to recreate exact conditions in the hopes of having lightning strike twice. Everyone else is equally terrified.</p>
<p>It would seem to be a sucker's game, then, to continue chasing the affection of muses, no? It's fun, certainly, but it's foolish to seek fulfillment in this alone.</p>
<p>Perhaps the way out of this cycle is to enjoy your craft for what it is. Be satisfied with improving your craft, and let what comes of it be a gift to yourself and those you share your work with. Make it your resolution for the year, if you're into that.</p>
<p>If everyone else is equally terrified, there's nothing to be scared about.</p>
"Because We're All Persons": Empathy and Open Sourcehttp://mattt.me/2011/empathy-and-open-source/2011-08-15T00:00:00Z2011-08-15T00:00:00ZMattt Thompsonm@mattt.me<p>By the time you read this, or maybe as I write this now, last week's drama in the Ruby community has likely been, in large part, forgotten. This is not an article about which is better, or what you should use, or why someone is right, or why someone is wrong. This is about something more universal, and infinitely more interesting: us.</p>
<p>By the time you read this, or maybe as I write this now, last week's drama in the Ruby community has likely been forgotten. It's a new week, and <a href="http://news.ycombinator.com/news">Hacker News</a> isn't going to read itself, after all.</p>
<p>In this particular case, the dispute stemmed from <a href="https://github.com/sstephenson">Sam Stephenson's</a> release of <a href="https://github.com/sstephenson/rbenv">rbenv</a>, which bills itself as a replacement to <a href="https://github.com/wayneeseguin/rvm">RVM</a>, a staple of every dutiful Rubyist's setup. On comment threads and Twitter feeds, this release has led to a conversation that has been excessive in rhetoric, and, at times, lacking in decorum (from every side of this issue, mind you). This all came to a head when <a href="https://github.com/wayneeseguin">Wayne Seguin</a> announced <a href="https://twitter.com/#!/wayneeseguin/status/102009061343629312">that he was done with RVM development for now</a>.</p>
<p>This is not an article about which is better, or what you should use, or why someone is right, or why someone is wrong. This is about something more universal, and infinitely more interesting: us. </p>
<h2>We Are Not Scientists</h2>
<p>Open source software incorporates high-minded idealism: we operate in a pure meritocracy, where the best ideas and the best code ultimately rise to the top, on the strength of their intrinsic, objective superiority alone. This should be familiar to anyone in the realm of the sciences, where we throw out incorrect, invalidated theories, and replace them with new ones (which may one day themselves be supplanted).</p>
<p><em>I call bullshit.</em></p>
<p>Our code, though it encodes logic, has no absolute truth. We are not scientists, discovering eternal, universal laws. We are poets turned engineers, bearing our passions in a cold, binary medium. We write code in human languages for a human audience. It is human all the way down, from its flaws to its beauty.</p>
<h2>We Are Mortal</h2>
<p>The great tragedy of ideas is how they inspire and cultivate what will ultimately replace them. Just like the aging Hollywood starlet, who one day is edged out by a perky teenaged up-and-comer from the Midwest (who incidentally grew up watching and adoring this starlet, considering her the reason for getting into show business in the first place).</p>
<p>To take a cue from <a href="http://en.wikipedia.org/wiki/The_Selfish_Gene">Richard Dawkins</a>, it's the <em>gene</em> that's selfish and ultimately propagates, not the organism. This is not Pokémon, after all. </p>
<p>Our projects, like us, are but temporary vessels for ideas. In time, all projects languish and are either replaced or forgotten. Perhaps there are a lot of us who subconsciously got into this gig on the promise that in code we could preserve a lasting digital legacy… how sad to realize how little truth this bears.</p>
<p>Immortality is an empty goal for software, anyway. What really matters is helping each other.</p>
<h2>We Are A Gift Culture</h2>
<p>As <a href="http://blog.steveklabnik.com/">Steve Klabnik</a> said at the <a href="http://lonestarrubyconf.com/">Lone Star Ruby Conf</a>, talking about his work on <a href="http://shoesrb.com/">Shoes</a>: the payoff of open source work is the realization that an hour of your time can save three for everyone else who uses it. </p>
<p>If there is any truth in software, it's that underlying spirit of beneficence. Open source is a gift culture–an economy and society built on how much we can help one another. We compete by cooperating. </p>
<p>Our digital medium allows us infinite abundance. It offers us a unique ability to literally share equally with everyone else. On top of that, we software developers happen to be, on average, pretty well-off doing what we do, which means that we create open source software as leisure, and not out of necessity. Given all of this, don't we owe it to ourselves to rise above the cruelty of survival instincts?</p>
<h2>We Are Not Ideas</h2>
<p>This is not to say that software can't or shouldn't compete to be the best–that's part of the fun! We just need a reminder that all software is written by people. People probably more like you than most people you'll ever meet.</p>
<p>This is especially true–and again, tragic–for those who create libraries with comparable alternatives. By instinct, we magnify our situation into a narcissism of incredibly small differences. There are few people, for instance, who are intimate with the logistics of managing multiple installations of Ruby; one would think that two such individuals would get along pretty well, all things considered.</p>
<h2>We Are All Persons</h2>
<p>If you contribute to an open-source library, I offer a simple challenge. Reach out to the author or maintainer of a “competing” project… and just say hi. Tell them how much you respect their hard work (and mean it). Extend humanity with all sincerity. While you may differ in opinions, you have more in common than you'd expect.</p>
<p>It's humbling, but you'll be glad you did it. And if you do, write about it in the comments–I'd love to hear about your experience.</p>
<p><img alt="@sstephenson: &quot;Why do people take things so personally?&quot; @wycats: &quot;because we're all persons.&quot;" src="http://cdn.matttthompson.com/images/wycats-because-were-all-persons.png" /></p>
<h1><a href="http://mattt.me/2010/lost-recordings-of-_whys-last-lecture/">Lost Recordings of _why's Last Lecture</a></h1>
<p><em>2010-05-26</em></p>
<p><img alt="art&&code Poster" src="http://cdn.matttthompson.com/images/art-and-code-lady.jpg" /></p>
<p>On a rainy, blustery weekend in March of 2009, the <a href="http://www.art.cfa.cmu.edu/">Carnegie Mellon School of Art</a> hosted the
<a href="http://artandcode.ning.com/page/motivation-march-2009">art&&code symposium</a>. It was a low-key event, with maybe one or two hundred in attendance. It gathered together artists, coders, hackers, students and teachers alike. We came to talk about programming environments like <a href="http://processing.org/">Processing</a>, <a href="http://scratch.mit.edu/">Scratch</a>, <a href="http://www.openframeworks.cc/">openFrameworks</a>, and <a href="http://vvvv.org/">vvvv</a>, and how they can be used to make art, and more importantly, how these tools can educate and inspire an interest in programming for young people.</p>
<p>Of <a href="http://artandcode.ning.com/page/presenters-march-2009">all of the presenters</a>, there was one who perfectly captured the spirit of it all. His name was _why. He wore a blue flower on his lapel, and carried an autoharp around with him. He was and remains a hero and source of inspiration for me, and I was lucky enough to be his student (at least for a few hours).</p>
<p>“Drawing Cats with Hackety Hack” was the name of his session. As soon as everyone took their seats, _why started in with a dramatic chord on his autoharp. On instinct, I opened up <a href="http://tapedeckapp.com/">TapeDeck</a> and pressed record.</p>
<p>It was a few months later, in August, that _why made <a href="http://whymirror.github.com/">his sudden disappearance from the Internet</a>. _why existed mostly as an online persona, and made very few personal appearances. This was one of his last. I had completely forgotten that I recorded the lecture, until I stumbled across the files a few weeks back. </p>
<p>The audio quality isn’t great—it’s my computer’s internal mic, facing the wrong way, no less. There’s a good half hour I missed because my laptop ran out of battery. You can hear the strained whirring of my computer fans, along with other strange computer noises.</p>
<p>Yet, these recordings are very special to me, and I thought I might share them with everyone else. Here are the interesting bits with _why talking and singing that I could salvage from my footage.</p>
<ol>
<li>
<a href="http://cdn.matttthompson.com/audio/01-introduction-and-the-jennings-kids-are-here.mp3">Introduction and The Jennings Kids Are Here</a></li>
</ol>
<p>“Drawing Cats with Hackety Hack” is exactly what this class is about. _why introduces the kinds of cats we're about to draw together (recording starts at number 3, though). Special guest appearance by <a href="http://flong.com/">Golan Levin</a>, who organized the conference.</p>
<ol start="2">
<li>
<a href="http://cdn.matttthompson.com/audio/02-hopes-of-hackety-hack-and-the-ridiculousness-of-web-programming.mp3">Hopes of Hackety Hack and the Ridiculousness of Web Programming</a></li>
</ol>
<p>On HTML, CSS, & Javascript: “You're a smart kid, but three languages? Are you kidding me? Are you seriously kidding me? That's ridiculous, I can't accept that… I mean, three languages that are inexplicably combined. Where does HTML begin and Javascript end?”</p>
<ol start="3">
<li>
<a href="http://cdn.matttthompson.com/audio/03-the-keybohohohoard-should-not-be-ignohohohored.mp3">The Keybohohohoard Should Not Be Ignohohohored</a></li>
</ol>
<p>_why breaks out into song as he reminds us of the utility of the keyboard in programming interfaces for teaching kids.</p>
<ol start="4">
<li>
<a href="http://cdn.matttthompson.com/audio/04-fake-viruses.mp3">Fake Viruses</a></li>
</ol>
<p>Everybody's made a fake virus program—even _why! Perhaps this should be part of a young programmer's rites of passage.</p>
<ol start="5">
<li>
<a href="http://cdn.matttthompson.com/audio/05-i-dont-really-care-if-it-crashed-i-just-want-you-to-feel-smart.mp3">I Don't Really Care if it Crashed, I Just Want You to Feel Smart</a></li>
</ol>
<p>A musical conclusion to the class. No matter what happened with programming, it was alright because we were having fun and able to laugh about it.</p>
<p>_why’s mission with <a href="http://hacketyhack.heroku.com/">Hackety Hack</a>, and indeed in much of his work, was to make programming more about whimsy than frustration, more about feeling smart than feeling dumb. That day remains one of my most cherished memories, and I submit these with the utmost respect and gratitude to _why. Wherever you are, I wish you all the best. We all still have a lot to learn from you.</p>
<p>If you haven’t already, you should read his 2003 essay <a href="http://viewsourcecode.org/why/hacking/theLittleCodersPredicament.html">“The Little Coder’s Predicament”</a>. His lecture from art&&code <a href="http://vimeo.com/5047563">is also worth a watch</a>.</p>
<h1><a href="http://mattt.me/2010/achievement-unlocked/">Achievement Unlocked! Human Morality When Life Becomes A Videogame</a></h1>
<p><em>2010-05-17</em></p>
<p><img alt="Achievement Unlocked!" src="http://cdn.matttthompson.com/images/achievement-unlocked.jpg" /></p>
<p>Both commercially and critically, videogames command respect as a part of mainstream culture. This activity once relegated to dank and secluded arcades is now a pastime to be enjoyed by everyone. Mom does yoga with Wii Fit and tends to her virtual crops on Farmville. Dad gets his adrenaline pumping on Modern Warfare or relaxes to a round of Civilization IV. Even jocks—the natural enemy of nerds, the old guard of videogaming—will spend untold hours in front of their latest copy of Madden. And who doesn’t have a pair of plastic guitars tucked away by the TV?</p>
<p>Even with videogames being more diverse in scope and audience than ever before, there is a common thread throughout: achievement systems.</p>
<p>Regardless of age, sex, or religious allegiance to console makers, we are all equally susceptible to the insatiable drive of collectable badges, leader boards, and point systems. We are all human, and thus become slaves to the steady drip of dopamine these systems deliver to the pleasure centers of our brain. It’s an addiction, albeit a virtual one.</p>
<p>With such incredible influence over players, it’s no surprise that this trend is catching on. Xbox LIVE awards Gamerscore points for completing achievements in hundreds of games. Valve builds achievements into games like Team Fortress 2 and Left 4 Dead to create a compelling meta-gaming experience. And lest we forget casual gaming, Zynga was recently given a $5 billion valuation (take that for what you will), following the success of games like FarmVille and Mafia Wars, which are little more than achievement systems.</p>
<p>Loved by some and derided by others, achievement systems are nonetheless as essential to the fabric of videogaming today as power-ups were from the days of Contra, Super Mario Bros., and MegaMan. It is a trend unlikely to go away anytime soon, given not only its commercial viability, but—perhaps more importantly—its grounding in human nature.</p>
<p>It’s not just fun and games, though. I believe that this is a matter of immense significance to the role of technology in our daily lives, with far-reaching implications into human morality in this brave new century.</p>
<h2>Ubiquitous, Disposable Technology</h2>
<p>In his 2010 D.I.C.E. presentation, Jesse Schell asks us to consider the future of achievement systems in conjunction with another trend: disposable technology. As a consequence of Moore’s Law, Schell explains, it becomes less expensive to produce more powerful computers with each passing year. “If anyone here ever bought a Furby, the Furby costs $20, $30. It has more technology in it than they used to put a man on the moon. People have now thrown out their Furbies because it’s kind of dumb. It’s disposable technology.”</p>
<p>If this rate keeps up, we can expect everyday objects to become increasingly self-aware. Sensors, tiny embedded computers, video displays, and touch interfaces; they all exist today, and will be orders of magnitude cheaper in just a decade—cheap enough to put in anything and everything.</p>
<p>For instance, it’s not far-fetched to imagine an internet-enabled toothbrush (an example Schell uses in his talk). Oral hygiene is just one of those things that people don’t think about too often. Even if you brush twice a day, chances are you brush for under a minute, less than the recommended two or three. </p>
<p>However, if your toothbrush were more self-aware…</p>
<ul>
<li>Brush your teeth for 3 minutes? +10 points!</li>
<li>Oh, you brushed and flossed every day that week? +100 points!!</li>
<li>Want to share this on Facebook? Hells yeah!</li>
</ul>
<p>Before you discredit the possibility of imaginary points impacting your brushing habits, consider that virtual crops have <a href="http://www.newser.com/story/85513/12-year-old-blows-1400-on-farmville.html">caused kids to incur massive debt</a>, <a href="http://www.huffingtonpost.com/2010/03/30/dimitar-kerin-fired-over-_n_518635.html">ruined careers</a>, and <a href="http://www.chinasmack.com/2009/stories/happy-farms-popular-online-game.html">ended relationships</a>. All because of virtual crops on virtual farms.</p>
<p>Shoes will award points for regular exercise, pillows for getting a healthy amount of sleep, and reading lights for getting through those books you’d been putting off. Badges will be issued for biking rather than driving to work all week, or eating healthy home-cooked meals instead of fast-food. Toilets will give you a virtual thumbs up for each time you remember to put the seat back down.</p>
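<p>At bottom, every one of these scenarios is the same machine: a rule engine that maps logged events to points. A toy sketch in Python, where the event names, thresholds, and point values are all invented, purely for illustration:</p>

```python
# Each rule pairs an event predicate with the points it awards.
# All thresholds and values here are hypothetical.
RULES = [
    (lambda e: e["kind"] == "brush" and e["minutes"] >= 3, 10),
    (lambda e: e["kind"] == "floss_streak" and e["days"] >= 7, 100),
]

def score(events):
    """Total points earned across a batch of logged events."""
    return sum(points for e in events
               for matches, points in RULES if matches(e))

day = [{"kind": "brush", "minutes": 3},
       {"kind": "floss_streak", "days": 7}]
print(score(day))  # 110
```

<p>Swap in shoe, pillow, or toilet-seat events and the engine is unchanged; only the rule table grows. That generality is exactly why the pattern spreads so easily.</p>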
<p>Yet the implications are far greater than encouraging better habits. Those same ubiquitous sensors afforded by disposable technology and necessitated to keep track of your progress in achievement systems will end up recording your every action. This is Schell’s concluding point in his talk:</p>
<p>“These sensors that we’re going to have on us and all around us and everywhere are going to be tracking, watching what we’re doing forever. Our grandchildren will know every book that we read. That legacy will be there, will be remembered. And you get to thinking about how, wow, is it possible maybe that — since all this stuff is being watched and measured and judged, that maybe I should change my behavior a little bit and be a little better than I would have been? So it could be that these systems are all crass commercialization and it’s terrible. But it’s possible that they will inspire us to be better people, if the game systems are designed right.”</p>
<p>In our post-religious society, achievement systems stand to become a prescriptive moral entity comparable to God. Instead of promises of heaven in exchange for doing good deeds (+10 points!), going to church (+50 points!), or partaking in the sacraments (+1000 points each!)<a href="#footnote-1">1</a>, we are watched, judged, and awarded points by a technological deity.<a href="#footnote-2">2</a></p>
<ol>
<li id="footnote-1">Depending on your particular interpretation of salvation.</li>
<li id="footnote-2">Sacrilege: -100 points!</li>
</ol>
<p>The comparison to religion is apt, I think. When computers gain a certain level of omniscience, people will change the way they behave. It is karma, only undoubtedly real.</p>
<p>Slowly, the line between videogames and life will blur, and then vanish. In a postmodern world, achievement systems will provide the meta-narrative absent in our everyday existence. When life is the ultimate sandbox game, those collectable achievements are what keep you playing. This is all to say, when life becomes a videogame, achievement systems may not seem so gimmicky after all.</p>
<p>Photo Credit: <a href="http://www.flickr.com/photos/kimonomania/2946030842/">Kimonomania</a></p>
<h1><a href="http://mattt.me/2010/designing-an-ipa-keyboard-for-the-ipad/">Designing an IPA keyboard for the iPad</a></h1>
<p><em>2010-04-26</em></p>
<p>With the iPhone, iPad, and similar devices, we are seeing a transition into a new paradigm of touch screen interfaces, wherein the physical interface becomes virtual, able to dynamically adapt as needed to fit any context. Imagine what that could do for a classically difficult problem of Linguistics: typing IPA.</p>
<p>Phonetics will always hold a special place in my heart. It’s what first got me interested in Linguistics. Making exotic sounds and representing them with even more exotic characters—it was love at first <a href="http://en.wikipedia.org/wiki/Phoneme">phoneme</a>.</p>
<p>Anyway, those exotic characters I fell in love with of course make up the <a href="http://en.wikipedia.org/wiki/International_Phonetic_Alphabet">International Phonetic Alphabet</a>, or IPA. IPA is a standard way to represent the spectrum of human speech across virtually every language with a sophisticated level of nuance. It’s as essential to Linguistics, both academically and culturally, as the periodic table is to Chemistry.</p>
<p>Yet despite its immense significance, there is an utter lack of hardware or software to support its transcription onto computers. For anyone not lucky enough to be fluent in <a href="http://en.wikipedia.org/wiki/TIPA">LaTeX</a>, it’s an arduous process of <a href="http://en.wikipedia.org/wiki/Alt_codes">ALT-codes</a> and copy-pasta. This is not just a problem for phonologists. I believe that this technological disconnect is the single greatest barrier to the growth of Linguistics as a field.</p>
<p>At the very least, this is an interesting problem for someone interested in technology, design, and linguistics. As luck would have it, the problem just got a whole lot easier with the release of the iPad.</p>
<h2>You Can't Spell iPad Without IPA</h2>
<p>For the better part of the last century, our lives have been controlled by the push of a button—physical interfaces resulting in electromechanical output. With the iPhone, iPad, and similar devices, we are seeing a transition into a new paradigm of touch screen interfaces, wherein the physical interface becomes virtual, able to dynamically adapt as needed to fit any context.</p>
<p>In these heady times, I thought it’d be cool to employ some Blue Sky Solutioneering™, and see what’s possible. But first, let’s set some design goals to make sure we’re heading in the right direction:</p>
<ul>
<li><p><strong>Efficient</strong>: An input method has one job: to transcribe thought into representation as quickly as possible. A new IPA interface would be successful if and only if it offers an improvement over competing alternatives. Though the bar is set pretty low in this respect, it should be at least comparable to the speed one could achieve writing in, say, <a href="http://en.wikipedia.org/wiki/Kirshenbaum">Kirshenbaum, or ASCII-IPA</a>.</p></li>
<li><p><strong>Intuitive</strong> / Familiar: When one refers to something as being “intuitive”, they usually mean “familiar”. Thus, an ideal interface should capture the best organizing principles for IPA and translate them into an interface with ideas borrowed from various input methods for other orthographies. As ubiquitous as <a href="http://en.wikipedia.org/wiki/File:IPA_chart_2005_png.svg">IPA charts</a> are in understanding phonology, there is much to be gleaned from earlier traditions, such as the foundational work done by <a href="http://en.wikipedia.org/wiki/Jakobson">Jakobson</a>, <a href="http://en.wikipedia.org/wiki/Nikolai_Trubetzkoy">Trubetzkoy</a>, and others from the <a href="http://en.wikipedia.org/wiki/Prague_circle">Prague Circle</a>, or Chomsky and Halle’s seminal <a href="http://en.wikipedia.org/wiki/The_Sound_Pattern_of_English">“The Sound Pattern of English”</a>.</p></li>
<li><p><strong>Completeness</strong>: Given the dozens of symbols and diacritics in its repertoire, representing IPA in its entirety becomes quite a daunting goal. In an ideal interface, one would have access to the full range of the alphabet. However, since usability is the main priority, certain pragmatic affordances might have to be made.</p></li>
</ul>
<h2>Inspiration</h2>
<p>Keeping these goals in mind, I set out to create some initial designs to iterate upon. I drew heavily on the international keyboards available on the iPad. Here are some concepts I found to be informative, and how I translated them into an interface for IPA.</p>
<h3>Romanized Japanese Input</h3>
<p><img alt="Japanese Kana Input" src="http://cdn.matttthompson.com/images/japanese-keyboard-input.jpg" /></p>
<p>Compared to English, Japanese has a fairly <a href="http://en.wikipedia.org/wiki/Japanese_phonology">constrained phonetic inventory</a>. Using the <a href="http://en.wikipedia.org/wiki/Hepburn_romanization">Hepburn romanization system</a>, one can transcribe Japanese syllables by typing how they would be spelled in English. For instance, ma becomes ま, tsu turns into つ, and a is written as あ. Kanji has many homophones, so for example, あお could be written as 青, 碧, 蒼, and 襖. An IME, or <a href="http://en.wikipedia.org/wiki/Input_method_editor">input method editor</a>, suggests characters that correspond to a particular phonetic input, from which a user picks what they meant. In practice, IMEs are orders of magnitude faster than having to write characters by hand. Even in Japan, where keyboards have a direct one-to-one mapping of kana onto the keyboard, most people prefer this romanized method of input.</p>
<p><img alt="IPA Consonant Suggestions" src="http://cdn.matttthompson.com/images/ipa-consonant-suggestions.jpg" /></p>
<p>One could imagine a similar method of text expansion being used for IPA input. For example, if a user types in th, the system could recommend th, θ and tʰ. Taken a step further, suggestions could also include voiced variants (so ð in addition to θ), or in the case of vowels, any related monophthongs or diphthongs. There are several standard ASCII ↔ IPA schemes that could be included as well, such as <a href="http://en.wikipedia.org/wiki/X-SAMPA">X-SAMPA</a>.</p>
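<p>To make the idea concrete, here is a minimal sketch of that kind of ASCII-to-IPA expansion in Python. The table and function names are hypothetical, and the mapping covers only a few entries for illustration; a real keyboard would ship a complete scheme such as X-SAMPA or Kirshenbaum:</p>

```python
# A tiny, kana-IME-style expansion table for IPA input.
# The ASCII -> IPA entries below are illustrative, not a complete scheme.
IPA_TABLE = {
    "th": ["θ", "ð", "tʰ"],  # voiceless, voiced, and aspirated readings
    "t":  ["t", "ʈ", "tʰ"],  # plain, retroflex, aspirated
    "s":  ["s", "ʂ", "ɕ"],
    "ng": ["ŋ"],
}


def suggest(buffer):
    """Return IPA candidates for the current ASCII input buffer.

    Exact matches come first, followed by candidates for longer
    sequences that the buffer prefixes (so typing "t" also surfaces
    the "th" readings, mimicking an IME's live suggestions).
    """
    exact = IPA_TABLE.get(buffer, [])
    partial = [ipa for key, ipas in IPA_TABLE.items()
               if key.startswith(buffer) and key != buffer
               for ipa in ipas]
    return exact + partial
```

For example, <code>suggest("th")</code> would yield θ, ð, and tʰ, matching the expansion described above.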
<h3>Chinese Handwriting Input</h3>
<p><img alt="Chinese Writing Input" src="http://cdn.matttthompson.com/images/chinese-handwriting-input.jpg" /></p>
<p>While phonetic input using Kana for Japanese or Pinyin for Chinese is an effective means of transcribing text, there are some cases in which handwritten input is desirable (for instance, characters that you don’t know how to pronounce). Based on the position of your strokes, a handwriting interface like the one shown above can present you with possible matches. Similar to the romanized Japanese interface, a rough input is refined to an exact match using suggestions.</p>
<p>Borrowing from this concept, I designed a more intuitive system for transcribing vowels. Unlike consonants, which are relatively distinct from one another, vowels are nebulous, flowing one into another. Although the same method of ASCII expansion described above would allow for vowel transcription as well, it’s often hard enough to figure out which vowel is the best fit (let alone what its ASCII representation is).</p>
<p><img alt="Chinese Writing Input" src="http://cdn.matttthompson.com/images/ipa-vowel-input.jpg" /></p>
<p>As such, I employed that <a href="http://en.wikipedia.org/wiki/Vowel#Articulation">familiar trapezoid structure</a>, which spatially maps vowels to regions of the mouth. Just touch the general region of articulation, and select the desired vowel from the list of nearest neighbors provided on the right. As you drag your finger around, the suggestions would dynamically update accordingly. Not only is this a convenient interface, but it serves as a useful reference to enforce the relationships between the vowels and their place of articulation.</p>
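<p>The lookup behind such a trapezoid is essentially a nearest-neighbor search over articulatory space. Here is a rough Python sketch; the (frontness, height) coordinates are eyeballed approximations for illustration, not measured values:</p>

```python
import math

# Approximate (frontness, height) positions on the vowel trapezoid,
# both normalized to [0, 1]. Values are rough and purely illustrative.
VOWEL_SPACE = {
    "i": (0.0, 0.0), "u": (1.0, 0.0),
    "e": (0.1, 0.33), "o": (0.9, 0.33),
    "ɛ": (0.2, 0.67), "ɔ": (0.85, 0.67),
    "a": (0.35, 1.0), "ɑ": (0.9, 1.0),
    "ə": (0.5, 0.5),
}


def nearest_vowels(x, y, n=3):
    """Return the n vowels closest to a touch point, nearest first,
    as the dynamically updating suggestion list would show them."""
    return sorted(VOWEL_SPACE,
                  key=lambda v: math.dist((x, y), VOWEL_SPACE[v]))[:n]
```

Dragging a finger would simply re-run this query with the new touch coordinates to refresh the suggestions.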
<h3>Long Touch Gestures</h3>
<p><img alt="English Vowel Long Touch Gesture Input" src="http://cdn.matttthompson.com/images/english-keyboard-input-detail.jpg" /></p>
<p>One of the interface idioms used on the iPad is to display a contextual menu when you tap and hold certain objects for a second or two. Pictured above, I touch and hold down on the a key to reveal a collection of related characters and diacritical variants.</p>
<p>With so many ways to ornament characters, this method of hiding complexity is nearly essential to keeping the interface clean and functional. For instance, by tapping and holding t, we could expect to see tʰ, tʷ, and tˠ, along with all of its voiced and other coarticulated variants.</p>
<h2>Scratching the Surface</h2>
<p>Of course, this is only the beginning in thinking about the potential for touch-screen IPA interfaces. I just wanted to get these ideas out there, so I could get some initial feedback. If you have any ideas for how to organize or present the alphabet, or how to improve the UI, please put them in the comments!</p>
Tell Me a Storyhttp://mattt.me/2010/tell-me-a-story/2010-02-08T00:00:00Z2010-02-08T00:00:00ZMattt Thompsonm@mattt.me<p>As a product of evolution, we humans are cognitively endowed with the ability to make sense of nature. Yet, we are a pre-historic being in a post-modern world. So how do we make sense of everything? Well, among other things, we tell stories.</p>
<blockquote><p>Modern physics teaches us that there is more to truth than meets the eye; or than meets the all too limited human mind, evolved as it was to cope with medium-sized objects moving at medium speeds through medium distances in Africa.</p><p><cite>Richard Dawkins, “What is True?”</cite></p></blockquote>
<p>As a product of evolution, we humans are cognitively endowed with the ability to make sense of nature. We have an intuitive grasp on shapes and how objects can be manipulated in space. We can track animals and determine patterns in the weather and the seasons. We possess an innate notion of fairness and how to negotiate social interactions. </p>
<p>While these kinds of skills were useful for our ancestors in navigating the world for the last million generations, a lot has changed in the last 10,000 years, and almost as much has changed in the last century alone. We’ve gone from the savanna to the concrete jungle. From loin cloths and wooden clubs to business suits and ballpoint pens. From hunting to channel surfing. (You get the idea.)</p>
<p>We are a pre-historic being in a post-modern world. So how do we make sense of everything? Well, among other things, we tell stories.</p>
<p>An atom wants to be at its lowest energy level, and does so by filling its electron shells in a particular order. Autophagy is a process by which cells eat their own internal components and invading microbes. Genes are selfish, and do everything in the interest of their own self-replication, even at the expense of their organism. </p>
<p>Translating the complexities of chemical or biological systems into a story about actors playing different roles is a tradition as old as civilization itself. Going back to the earliest creation myths across all cultures, humans have told stories of trickster gods stealing the sun, or of sparks that created the planets and the stars, to make sense of the universe. </p>
<p>Science, like mythology, creates an abstraction from the mechanistic underpinnings of the reality it describes. This tradition extends to computer science as well, in how we understand computational systems.</p>
<blockquote><p>Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.</p><p><cite>Donald Knuth, “Literate Programming”</cite></p></blockquote>
<h3>So, with apologies to <a href="http://en.wikipedia.org/wiki/Alan_Kay">Alan Kay</a>, Object-Oriented Programming wasn’t so much invented as discovered.</h3>
<p>Why does OOP work so well as a paradigm? How can programmers identify and debug problems without thinking about the underlying 1’s and 0’s? It’s no coincidence—in fact, it all goes back to our evolutionary heritage. </p>
<p>Seeing as how our ancestors had a lot of practice thinking in terms of people with intentions who act on the world using tools, it comes as no surprise that we have become quite adept at thinking about other things in that way, too. By contrast, our ancestors never had to deal with transistors, logic gates, bits, or bytes, so it takes some mental acrobatics for us to understand computation even at its most basic level.</p>
<p>What I’m trying to get at is that, although we often fancy ourselves scientists, mathematicians, or architects, we should be reminded that at the end of the day, programming is about telling stories. Why? Because that’s the only chance we have at understanding computers.</p>
<p>As storytellers, programmers are tasked with constructing a comprehensive narrative about the way a system works. Using a <a href="http://www.ruby-lang.org/en/">language elegant enough</a>, we can succeed in telling our story without “breaking the spell”. We can refer to libraries to incorporate common functionality in the same way a writer might summon devices from the Greek tragedies. A penchant for pedantic linguistic frivolity can be just as annoying as overly-clever meta-programming hacks.</p>
<p>If we are to become better storytellers, perhaps we should head to the fiction section to find our next programming book.</p>
<p>Photo Credit: <a href="http://www.flickr.com/photos/verbola/4193543895/in/set-72157603938293517/">Francesc Gelonch</a></p>
Forgetting to Rememberhttp://mattt.me/2009/forgetting-to-remember/2009-11-23T00:00:00Z2009-11-23T00:00:00ZMattt Thompsonm@mattt.me<p>The Ebbinghaus Forgetting Curve describes the way humans retain knowledge. Learning is, in a way, just a process of continually not forgetting things.</p>
<p><img alt="Ebbinghaus Forgetting Curve" src="http://cdn.matttthompson.com/images/forgetting-curve.jpg" /></p>
<p>The <a href="http://en.wikipedia.org/wiki/Forgetting_curve">Ebbinghaus Forgetting Curve</a> describes the way humans retain knowledge.</p>
<p>When we learn something for the first time, it stays fresh in our minds for a little while. For instance, if I tell you “the Japanese word for Dog is ‘Inu’ (犬)”, you should have no problem answering when I ask you about it in the next paragraph. Ready?</p>
<p>What’s the Japanese word for Dog?</p>
<p>“Inu”. Right. Easy.</p>
<p>Well, if I ask you again in a few paragraphs, you may well have forgotten it. This is what the Forgetting Curve describes: As time passes, your ability to recall a particular fact diminishes until, after a while, it’s forgotten.</p>
<p>This inescapable decline seems pretty dismal. Don’t worry, though! The fact that you’re able to read and understand this very sentence proves that learning is, in fact, possible. (In case you forgot)</p>
<p>Learning is, in a way, just a process of continually not forgetting things.</p>
<p>We learn and remember facts by refreshing our memory from time to time, before it completely fades away. Doing this enough times makes it stick in our minds, for good.</p>
<p>Dog? “Inu”. Great.</p>
<p>There’s an art to figuring out when to review something—when to “reset” the forgetting curve. Turns out, the optimal time to do this is just as you're about to forget it. This is the premise of a technique called spaced rehearsal.</p>
<p>For nearly a century, psychologists and teachers alike have demonstrated the efficacy of spaced rehearsal in the lab and in the classroom. It’s the invisible thread that connects so much of how we learn today, from flash cards to <a href="http://en.wikipedia.org/wiki/Pimsleur_language_learning_system">Pimsleur</a>.</p>
<p>But for all of its advantages, spaced rehearsal is difficult to do by yourself. Keeping track of what you need to learn next carries a cognitive load, which only increases as the number of things you are studying grows.</p>
<p>Luckily, this is a problem that can be solved by software. Let computers figure out what you need to study instead! That way, you can focus on actually learning.</p>
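<p>As a rough illustration (and emphatically not Smart.fm’s actual algorithm), here is how software might schedule reviews against an Ebbinghaus-style decay curve, under the simplifying assumption that each successful review multiplies the memory’s strength by a constant factor:</p>

```python
import math


def retention(t, strength):
    """Ebbinghaus-style exponential decay: recall probability t days
    after the last review, for a memory of the given strength."""
    return math.exp(-t / strength)


def review_schedule(threshold=0.8, strength=1.0, growth=2.0, reviews=5):
    """Naive spaced-rehearsal sketch: schedule each review for the
    moment retention decays to `threshold`, assuming each review
    multiplies memory strength by `growth`. All numbers illustrative."""
    schedule, t = [], 0.0
    for _ in range(reviews):
        # solve retention(dt, strength) == threshold for dt
        dt = -strength * math.log(threshold)
        t += dt
        schedule.append(t)
        strength *= growth  # a refreshed memory fades more slowly
    return schedule
```

With these assumptions the gaps between reviews grow geometrically, which matches the familiar pattern of spaced-rehearsal systems: review often at first, then less and less.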
<p>That’s the foundation of <a href="http://smart.fm">Smart.fm</a>: a tool to manage your “brain bank”, allowing you to learn faster and retain knowledge longer. We all want to be more intelligent, more sophisticated, more articulate. With Smart.fm, you can get there by studying less, and have fun while you’re at it.</p>
<p>Over the last 6 months of working at Smart.fm, I’ve become fascinated by the way technology can hack our brains, and how it has the potential to completely transform and democratize education. A cognitive revolution is underway, and being on the front lines has been an unbelievable experience.</p>
<p>Some really amazing things are on the way.</p>
<p>Out of curiosity, do you remember <a href="http://smart.fm/items/247851">how to say “dog” in Japanese</a>?</p>
Chroma-Hash, Revisitedhttp://mattt.me/2009/chroma-hash-revisited/2009-11-05T00:00:00Z2009-11-05T00:00:00ZMattt Thompsonm@mattt.me<p>Based on a resurgence of interest in Chroma-Hash, I thought it’d be useful to revisit this oft-misunderstood project.</p>
<p>Based on a resurgence of interest in Chroma-Hash (hi <a href="http://www.reddit.com/r/programming/comments/a150t/a_regular_confirm_password_field_but_100x_cooler/">reddit</a>!), I thought it’d be useful to revisit this <a href="http://mattt.me/2009/07/chroma-hash-a-belated-introduction/">oft-misunderstood project</a>. </p>
<p>If you haven’t seen it already, <a href="http://mattt.github.com/Chroma-Hash/">you should check it out</a>. Even if you have, you might want to <a href="http://mattt.github.com/Chroma-Hash/?again">play around with it again</a> to see if any new insights come to mind (go ahead, it’ll only take a minute). </p>
<p><a href="http://github.com/mattt/Chroma-Hash/">Chroma-Hash</a> has been a particularly interesting project because of the controversy it creates in discussions. Some people can’t see how it could ever be useful, while to others it couldn’t be any clearer. There’s often a back-and-forth about the potential security risks of the system, and the pragmatic counterargument that those risks are insignificant.</p>
<p>I’ve learned a lot from reading through these various threads, and I wanted to share some of my thoughts to help bring light to the discussion. Or perhaps just fuel the fire.</p>
<p>Alright, let’s get right to it then:</p>
<h2>So What’s the Point?</h2>
<p>Good question! It started out as a simple UI experiment, but it soon developed into something that I think could be really useful. Here are some of the use cases that emerged from various feedback and iterations:</p>
<h3>Use Case #1: Password Confirmation</h3>
<p>Perhaps the most obvious use case from the demo, Chroma-Hash allows you to quickly compare the contents of two secure text fields. It’s common for a signup flow to ask you to type your password twice (to make sure you didn’t mistype it). With this visualization, a user can instantly check to see if what she typed was the same each time, without having to submit the form.</p>
<p>In a similar vein to password confirmation at signup, Chroma-Hash can be helpful when logging in from day-to-day. Whenever you log into your webmail or your favorite social network, you could expect to see your signature color combination. If not, you’d know that you somehow fudged it along the way. Especially for sites with 3 strike lockouts, Chroma-Hash could save a lot of needless frustration.</p>
<h3>Use Case #2: Anti-Phishing Mechanism</h3>
<p>As you might have caught on from the last use-case, Chroma-Hash could be effective in mitigating the risk of a phishing attack. Similar to the account-specific images that online banking systems recently added, your password becomes a visual signature that you can look for. Websites can securely serve unique color signatures by issuing a hash salt through a browser cookie, for instance.</p>
<p>Let’s say you go to a site that you think is PayPal. If you start to type your password and you’re getting unfamiliar colors (or no colors show up at all, for that matter), you’ll know something’s fishy. </p>
<h3>Use Case #3: Password Strength Feedback</h3>
<p>Let’s think back to the signup flow: when creating a Google account, for instance, you’ll get visual feedback on the strength of your password as you type it in. Stop after 5 characters, and a partially-filled red bar will accompany a message telling you to pick something stronger.</p>
<p>Similarly, Chroma-Hash has a parameter to specify the minimum number of characters before colors start to display. Until that threshold is reached, all the user will see are boring, gray bars. It’s more implicit than using strong colors and words, but there’s something to be said about ambient feedback, no?</p>
<h3>Use Case #4: Clipped or Constrained Input Feedback</h3>
<p>Let’s say you’re like me, and you go a little over-the-top with passwords. Although it’s a bit of an edge case, Chroma-Hash can be useful for providing visual feedback when a user types beyond the boundaries of a field.</p>
<p>Conversely, if there is a cap on the length of input, the colors will stop changing even as you keep typing; that lack of visual feedback can just as well provide a cue to stop.</p>
<h2>Objections, Concerns, and Questions</h2>
<p>Reading through various threads on <a href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/">reddit</a> and <a href="http://news.ycombinator.com/item?id=729556">Hacker News</a> gave me a much deeper insight into everything from aesthetics and usability to security and information theory. A lot of feedback was immensely useful, and helped take Chroma-Hash to that next level.</p>
<p>There may still be some legitimate usability or security concerns, but after several iterations, I’m confident that Chroma-Hash is a robust UI component. That said, I’d love to be proven wrong so I can continue to improve it even more.</p>
<p>For your consideration, here are some common concerns that are raised, along with my response to them.</p>
<h3>“MD5 Is Weak”</h3>
<p>From my understanding, a weak hash function is exactly what makes something like MD5 well-suited to this application. Usually, a hashing function is rated on its resistance to collisions. For instance, if you are taking a checksum of a file, you’d want to be confident that two files with the same checksum have the same content; in other words, no collisions. </p>
<p>In the case of Chroma-Hash, collisions add security. When many passwords map to the same colors, it becomes that much harder to work backward from a color signature to any single password. Collisions are good for the purposes of the visualization. Collisions are what make security possible with Chroma-Hash.</p>
<h3>“Knowing the Colors is Knowing the Password”</h3>
<p>Although Chroma-Hash seems to cover the entire spectrum, it’s actually constrained to a limited palette. A smaller palette means more collisions, which in turn means more security. Here are two techniques that are used to minimize the use of colors:</p>
<ul>
<li><p>Grayscale Threshold - As mentioned in Use Case #3, colors don’t show up below a specified number of characters (by default, 6). Instead, the bars display in 4-bit monochrome. Not only are the colors difficult to differentiate based on plain eyesight, but within that range, you are nearly guaranteed to have collisions. Bump up the minimum, and it becomes exponentially more difficult to trace through to the final password.</p></li>
<li><p>Least-Place Crushing - Above the grayscale threshold, the range of colors are still constrained. Based on a <a href="http://blog.iangreenleaf.com/2009/08/making-chroma-hash-less-leaky.html">great insight by Ian Young</a>, the least place of a hexadecimal color can be rounded down without compromising aesthetics. For instance, the color #CE2029 is nearly the same as #C02020, but the latter could be any one of 3375 colors. Since humans can’t perceive these slight differences, the extra (leaky) detail is removed without cost. Your password still looks like “Red Green Purple” with or without that extra place.</p></li>
</ul>
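<p>Putting the two techniques together, here is a small Python sketch of how these palette constraints might work. The function name and the exact bit layout are my own assumptions for illustration; the real Chroma-Hash is a jQuery plugin, and its details may differ:</p>

```python
import hashlib


def chroma_colors(password, min_length=6):
    """Illustrative sketch of Chroma-Hash's palette constraints:
    derive three color bars from an MD5 digest, applying the
    grayscale threshold and least-place crushing described above."""
    digest = hashlib.md5(password.encode("utf-8")).hexdigest()
    bars = []
    for i in range(3):
        rgb = int(digest[i * 6:(i + 1) * 6], 16)  # one 24-bit color per bar
        if len(password) < min_length:
            # Grayscale threshold: below the minimum, collapse each
            # bar to 4-bit monochrome (hard to distinguish by eye).
            gray = (rgb >> 20) & 0xF
            bars.append("#{0:x}{0:x}{0:x}".format(gray))
        else:
            # Least-place crushing: zero the low nibble of each channel,
            # e.g. 0xCE2029 -> 0xC02020, discarding the leaky detail
            # that humans can't perceive anyway.
            bars.append("#{:06x}".format(rgb & 0xF0F0F0))
    return bars
```

A five-character password would render as three gray swatches, while a longer one would render as three crushed colors that still read as, say, “red, green, purple”.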
<p>This naturally brings us to…</p>
<h3>“What About Colorblind People?”</h3>
<p>Consider the last points about grayscale and constrained palettes. Although I’m not colorblind myself, I would suppose that having some form of colorblindness is comparable to these two examples. In an extreme case, in which someone could not differentiate any color at all—everything is grayscale—you could still use Chroma-Hash. </p>
<p>The only snag is that without the additional color information, perceived collisions become more likely. Still, since a mistyped password is unlikely to produce a consistently similar sequence of shades, the system remains somewhat useful (expected dark gray, black, white; got light gray, white, black: no match).</p>
<h2>“Chroma-Evangelism” OR “The Many Colors of Chroma-Hash”</h2>
<p>Last but not least, I wanted to point out some of the awesome contributions other developers have made to this whole experiment. They took the ideas behind Chroma-Hash and ported it to their favorite libraries and languages, and added so much more along the way. I’m truly humbled by your contributions.</p>
<ul>
<li><a href="http://github.com/leegao/pyChroma">pyChroma - leegao (Lee Gao)</a></li>
<li><a href="http://github.com/foxxtrot/Chroma-Hash">YUI3 - foxxtrot (Jeff Craig)</a> </li>
<li><a href="http://github.com/wki/Chroma-Hash/">Prototype - wki (Wolfgang Kinkeldei)</a></li>
</ul>
<p>Also, an Objective-C port and sound-based version by me:</p>
<ul>
<li><a href="http://github.com/mattt/CHSecureTextField">CHSecureTextField</a></li>
<li><a href="http://github.com/mattt/sonic-Hash">Sonic-Hash</a></li>
</ul>
<p>Interested? Feel free to <a href="http://github.com/mattt/Chroma-Hash/">fork Chroma-Hash</a> and make something even cooler!</p>
<h2>Finally, a Shout-Out</h2>
<p>I would be remiss without mentioning the main inspiration behind Chroma-Hash, <a href="http://lab.arc90.com/2009/07/09/hashmask-another-more-secure-experiment-in-password-masking/">HashMask</a> by <a href="http://www.umbrae.net/">Chris Dary</a> of <a href="http://lab.arc90.com/">Arc90</a>. Cheers, Chris!</p>
Chroma-Hash: A Belated Introductionhttp://mattt.me/2009/chroma-hash-a-belated-introduction/2009-07-29T00:00:00Z2009-07-29T00:00:00ZMattt Thompsonm@mattt.me<p>Yesterday, I posted <a href="http://mattt.github.com/Chroma-Hash/">Chroma-Hash</a>, an experiment in how to visualize the live-input of secure fields, such as a password on a login screen. So far, I’ve received a lot of great feedback, as well as a number of questions that I thought deserved a proper response.</p>
<p>Hey, go check out the <a href="http://mattt.me/2009/11/chroma-hash-revisited/">more recent blog post about Chroma-Hash</a> for a better and more up-to-date explanation of everything.</p>
<p>Yesterday, I posted <a title="Chroma-Hash Demo" href="http://mattt.github.com/Chroma-Hash/">Chroma-Hash</a>, an experiment in how to visualize the live-input of secure fields, such as a password on a login screen. So far, I’ve received a lot of great feedback, as well as a number of questions that I thought deserved a proper response.</p>
<p>Before I go into any details, I invite you to <a title="Chroma-Hash Demo" href="http://mattt.github.com/Chroma-Hash/">check out the live demo</a>, (if you haven’t seen it already), so you can get a clear idea of what Chroma-Hash does.</p>
<h2>The Concept</h2>
<p>When you type something into a secure field, each character is displayed as a •. Good news for people who don’t want others to see their password; bad news for anyone who has a long or difficult password (or is bad at typing). How could we improve the experience of secure text input so that the user entering information could have an idea of what they entered, without anyone else knowing it?</p>
<p>Chroma-Hash approaches this problem using an ambient color representation of the input as it is being typed. </p>
<h3>Use Case 1: Login Check</h3>
<p>If your password normally is represented as “red, purple, orange”, and after you’ve finished typing you see “pink, green, grey”, you’ll know you mistyped it somewhere along the way. This avoids a potentially long wait for the server to respond with a “failed login” notice.</p>
<h3>Use Case 2: Password Confirmation</h3>
<p>When you sign up for a web service, you often have to type your password twice to make sure that you entered what you wanted correctly. <a title="Chroma-Hash Demo" href="http://mattt.github.com/Chroma-Hash/">As in the demo</a>, a user will be able to confirm that two fields are the same visually. There are, of course, many alternatives for live-input validation of password confirmation, but this is another viable use case for Chroma-Hash.</p>
<h2>Security Concerns</h2>
<p>Under the scrutiny of the sharp minds on <a title="programming.reddit.com" href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/">proggit</a> and <a title="Y Combinator Hacker News" href="http://news.ycombinator.com/item?id=729556">Hacker News</a>, it’s only natural that some really good concerns were raised. As an experiment, this is the best kind of input I could ask for, because it challenges the viability of this as a visual metaphor, and works to improve the usability of the project as a whole.</p>
<h3>“It’s Not Non-Reversible”</h3>
<p>By all accounts, I would not, in fact, bet on Chroma-Hash being unbreakable. At least in its current iteration.</p>
<p>One of the common arguments is that by showing the colors as you type, one could step through and guess along each way. <a title="chaosmachine on Hacker News" href="http://news.ycombinator.com/user?id=chaosmachine">chaosmachine</a> on Hacker News <a title="Comment on Y! Combinator Hacker News" href="http://news.ycombinator.com/item?id=729724">explained it this way</a>:</p>
<blockquote><p>Because you can see the color code at each step, it’s easy to compare results very quickly, even by hand. Did I get letter 1 right? Ok, move on to letter two, try each key until the colors match the recording. Do this for each step. At most, you have to try about 64 key presses to crack each letter.</p></blockquote>
<p>Theoretically, this is definitely a concern. One of the ways I tried to prevent this was to animate transitions between the color sequences as they were being typed, so intermediary colors were never shown. Given a sufficiently slow typist, though, all bets are off in Chroma-Hash’s current state.</p>
<p>Another consideration, however, is how exactly someone would be able to tell what the colors are, at least in common use-cases. Expressed in hex, there are 16,777,216 possible colors for each bar. Eye-balling it wouldn’t be enough to get an exact color value: the difference between #952A08 and #952A09 is nearly imperceptible, but represents a completely different hash input. Unless you get a really good look, it would be pretty hard to tell. And at that point, you might as well be looking at the person typing it instead :)</p>
<h3>MD5 is Weak</h3>
<p>Another concern was the use of MD5 rather than a stronger hashing algorithm, like <a title="SHA Hash Functions on Wikipedia" href="http://en.wikipedia.org/wiki/SHA_hash_functions">SHA-1</a>. For this first release, <a title="MD5 on Wikipedia" href="http://en.wikipedia.org/wiki/Md5">MD5</a> was a choice of convenience. MD5 is marginally faster, and with live-validation, I wanted to make sure that the animation was smooth and didn’t interfere with user input.</p>
<p>One of the ideas behind the project was that by using one hashing algorithm, a user could expect the same color on any website that implemented Chroma-Hash. This is not a central concept to this visualization, and may prove to be a bad idea. For the next iteration, I’m considering adding support for SHA-1 as an alternative hashing algorithm that can be passed in as a parameter.</p>
<h2>Iterative Improvement</h2>
<p>Weighing these concerns, I believe that this kind of visualization can become a viable UI metaphor—all it would take are a few minor improvements. Here are a few suggestions that I am currently considering for the next version:</p>
<h3>Adding A Time Delay To Live Input</h3>
<p>To avoid the possibility of exploiting information as the user types their input, a short time delay could be added before the colors are displayed.<a href="http://news.ycombinator.com/item?id=729622">1</a> <a href="http://news.ycombinator.com/item?id=729629">2</a> <a href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/c0bi0s7">3</a> <a href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/c0bi1g9">4</a> This way, all a potential attacker could know is a portion of the password’s hash, which is not nearly as useful.</p>
<h3>Using A Stronger Hashing Algorithm</h3>
<p>As stated before, I may add an option to use SHA-1 instead of MD5 in the next version. This is pending research into the potential gains and the concerns of maintaining a fluid user experience.<a href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/c0bi24j">1</a> <a href="http://www.reddit.com/r/programming/comments/95hxr/chromahash_a_sexy_nonreversible_live/c0bi497">2</a></p>
<h3>Using a Salt</h3>
<p>A <a href="http://en.wikipedia.org/wiki/Salt_(cryptography)" title="Salt on Wikipedia">salt</a> based on other field inputs or a server-defined constant, for instance, would further increase the security of the hashing scheme. The only downside is that this approach would force the user to remember different color combinations for each site, which, as I mentioned above, may not be such a bad thing.</p>
<p>Once again, I’d like to thank everyone who’s given me so much useful feedback and encouragement. I’m really excited to take <a title="Chroma-Hash" href="http://mattt.github.com/Chroma-Hash/">Chroma-Hash</a> to the next level, and as such, I invite you to continue this conversation about possible uses and security considerations.</p>
<p>Photo Credit: <a title="binarycoco on Flickr" href="http://www.flickr.com/photos/binarycoco/2736362903/">binarycoco</a></p>
Game Over: Learning From Failure In Videogameshttp://mattt.me/2009/game-over-learning-from-failure-in-videogames/2009-07-06T00:00:00Z2009-07-06T00:00:00ZMattt Thompsonm@mattt.me<p>Thinking through the contingencies of failure for an interaction is an exercise of empathy with the user. Whether in videogames or more traditional UIs, framing development in a mindset of failure allows you to get in the head of the typical user and design accordingly.</p>
<p>We tend to think of videogames in terms of success. It’s all about rescuing the princess, getting the high score, beating the final boss, or pwning your friends. The last thing you want to think about is seeing that Game Over screen.
… That is, unless you’re a game designer, in which case the whole “Game Over” thing may be the most important detail to get right.</p>
<p>Consider what Will Wright, creator of such iconic games as Sim City and The Sims, has to say about the matter:</p>
<blockquote>
<p>When we make games now, we very much think in terms of what are the interaction loops, and what are the success and failure sides of those interaction loops. One of the things that’s kind of non-intuitive here is that it’s actually more important to really think through the failure side than the success side. Because, when you think about it, the success side is pretty boring: you want to get to the next level. You could just spend most of your time failing, and it’s important that the failures are interesting, varied and primarily that you understand why the failure occurred.</p>
<p>Will Wright (2003)</p>
</blockquote>
<p>Failure is an inevitable part of the gaming experience, but accounting for it is a detail too often overlooked by developers. And who could blame them? It’s much more interesting to think about cool new game mechanics or engrossing narratives. But as Will Wright points out, failure is every bit as important. Failure closes the interaction loop. No matter how interesting a game mechanic or story you have, it doesn’t matter if the player never experiences it, because they got frustrated and gave up half-way through.</p>
<p>Thinking through the contingencies of failure for an interaction is an exercise of empathy with the user. Whether in videogames or more traditional UIs, framing development in a mindset of failure allows you to get in the head of the typical user and design accordingly.</p>
<p>Below are 2 examples of videogames that each approach failure in a poignant way. Together, they illustrate what I call the 2 core mantras of designing for failure. By exploring the ways that these videogames deal with failure, we can learn a great deal about how we can improve our own user experiences.</p>
<h2>Team Fortress 2</h2>
<p>In the overcrowded space of multiplayer first-person shooters, <a href="http://www.teamfortress.com/">Team Fortress 2</a> really stands out. Between its innovative class system, meticulously-groomed collection of maps, and an unmatched focus on teamwork, TF2 gets a lot of things right. One of those things that I’d like to focus on is the screen you get when you die.</p>
<p>Being blown apart by a rocket, lit up by sub-machine gun fire, bludgeoned by a baseball bat, or being burnt to a crisp isn’t anyone's idea of a good time. It’s a delicate matter to communicate to a user, but TF2 nails it:</p>
<p>There’s actually a lot going on with this screen.</p>
<p>First, the gameplay stops with a distinctive “whoosh” noise. With the frantic pace of a 24-person match, a break in the action affords the player a chance to step back and collect themselves. Beyond a visual cue that action has been suspended, this pause is a mental cue to reflect on what happened, so the player can understand how to not get pøwn'd next time.</p>
<p>Next, the camera zooms in to where your killer was when you bit the dust. This particular detail is something Valve talked about in their developer commentary: testers sometimes had trouble understanding how or why they were killed. In the case of a Sniper’s headshot from the other side of the level, for instance, it would be very frustrating to drop dead out of nowhere without adequate explanation. Thus, Valve settled on the aforementioned solution, to the chagrin of campers and griefers everywhere.</p>
<p>One of the most unique parts of this death screen is the “On the Bright Side” message that pops up from time to time. Just as zooming in to your killer added context of how you were killed, “On the Bright Side” adds context to your death itself. For instance, “You tied your previous record for kills this round of 4”, or “You stayed alive as Scout longer that round than your previous best”. As it were, this message recontextualizes death from failure to a form of success. Rather than be upset that you died again, you get recognized for things you did particularly well this time around.</p>
<p>Our insights from Team Fortress 2 lead us to the first mantra of failure:</p>
<pre><code>Be Informative
</code></pre>
<p>Provide sufficient context for the user to understand what happened, and learn how to improve in the future.</p>
<p>When we fail, much of the pain we feel derives from a lingering sense of confusion. “What the heck? Why did that happen?” Without properly communicating the situation, the player is unable to causally link their actions to the response. This can quickly lead to frustration, and ultimately cause the player to give up. When we fail, it’s not that we’re necessarily upset with failing itself—after all, in videogames, the stakes are pretty low. What’s really upsetting is when, because of a broken feedback cycle, we don’t seem in control of our own destiny.</p>
<h2>World of Goo</h2>
<p>Fueled by its critical reception and widespread distribution, <a href="http://www.2dboy.com/games.php">World of Goo</a> has become an iconic success story for the <a href="http://www.offworld.com/gimme-indie-game/">emerging generation of indie games</a>. But more importantly, it shows how a strong concept combined with creativity and a great attention to detail can produce an amazing experience.</p>
<p>Each level of World of Goo challenges the player to build structures in order to reach a goal. How the player does it is up to them.</p>
<p>Game mechanics are introduced with new varieties of Goo Balls, like Ivy Goo, Balloon Goo, and Bomb Goo. These game mechanics are reinforced through different puzzles that explore the intricate ways that Goo Balls can combine and interact. Levels incorporate obstacles, like spike-lined chasms, mechanical automata, and hurricane-strength winds. All of these elements come together to keep gameplay varied and interesting, and in turn, make the player come up with new and creative strategies.</p>
<p>Sometimes, the solution is obvious, and it’s just a matter of getting there the fastest or with the fewest Goo Balls. Other times, things aren’t as clear, and it may take several iterations just to find something that works at all. Perhaps the worst, though, are the times when it’s clear exactly what needs to be done, but you just don’t have the finesse to get your goo tower to balance the right way.</p>
<p>For how challenging the game is, though, it never seems overly frustrating. That has a lot to do with the way World of Goo mitigates failure.</p>
<p>In World of Goo, there is no Game Over screen. Rather, the notion of failure is recognized as being very distinct from Game Over. This touches on a fundamental truth of problem solving—that it is an iterative process; a process of constant failure, but also of constant learning. To expect the player to get it right every time undermines the spirit of the whole puzzle genre. If the player isn’t failing, it’s not really a puzzle, now is it?</p>
<p>Even though failure is inevitable, and indeed valuable, it’s worth recognizing that not all failures are equal. For instance, certain levels in World of Goo have white “undo bugs” flying around on the screen’s periphery. Activating one reverts the last thing you did, so a particularly disastrous move doesn’t force a full restart. To that point of all failures not being equal, “undo bugs” allow for an acceptable level of inevitable human error without taking away from the core gameplay experience.</p>
<p>All of this brings us to the second mantra of failure:</p>
<pre><code>Be Implicit
</code></pre>
<p>When possible, allow the user to understand that they failed without having to tell them outright.</p>
<p>It’s like that archetypical High School French teacher—the one who corrects every last word you say, then makes you repeat the whole sentence again once you’ve painfully managed to eke it out. Don’t be that High School French teacher. Quite apart from the fact that such an interaction isn’t much fun, it’s also not that effective. Going back to the first mantra of failure: provide the player with enough context to understand what they’re doing wrong, and they’ll often correct themselves.</p>
<p>Of course, there are many more things to be said about failure. This is only the start of a conversation that I hope to extend into the ethos of everyone interested in making user interactions better.</p>
<p>For a deeper appreciation of the role of failure in game design, be sure to check out the work of <a href="http://www.jesperjuul.net/ludologist/">Jesper Juul</a>, particularly his essay, <a href="http://www.jesperjuul.net/text/fearoffailing/">“Fear of Failing? The Many Meanings of Difficulty in Video Games”</a>. Other suggested reading includes <a href="http://www.gamingw.net/item.php?id=77566">a Gaming World article on difficulty and failure (which also cites Juul)</a> and <a href="http://www.wired.com/gaming/gamingreviews/commentary/games/2009/03/gamesfrontiers_0309">A Wired article by Clive Thompson on Peggle</a>.</p>
Commencehttp://mattt.me/2009/commence/2009-05-13T00:00:00Z2009-05-13T00:00:00ZMattt Thompsonm@mattt.me<p>Earlier in the year, as is the tradition at Carnegie Mellon, there was an open contest to be the class speaker at commencement. As someone who never identified strongly as a student qua student, I knew my submission would be a long shot. It was, but how much better to have tried and failed.</p>
<p>Earlier in the year, as is the tradition at Carnegie Mellon, there was an open contest to be the class speaker at commencement. As someone who never identified strongly as a student qua student, I knew my submission would be a long shot. It was. But how much better to have tried and failed.</p>
<p>So for what it’s worth, here’s my parting message to all of you (us) graduating seniors.</p>
<p>Fellow members of the Class of 2009, I stand here before you to talk about a very important subject: hot dogs.</p>
<p>Well, not hot dogs, per se, but a story about hot dogs, which, as you might expect from a graduation speech, has very little to do with hot dogs.</p>
<p>This story takes place at the Vienna Sausage Company’s factory on the North Side of Chicago. In 1970, the company completed construction on this building, to replace the original plant on the South Side: a sprawling mass of inter-connected properties covering an entire city block. It had been built up incrementally over the company’s 70-year history. This new plant was designed from the ground-up to streamline the manufacturing process into a single, state-of-the-art facility.</p>
<p>When the plant finally opened, the first batches of the company’s signature product, their “natural-casing, old-world, hickory-smoked sausages”, weren’t coming out right. They tasted fine, but they didn’t have the right snap when you bit into them. Even worse, the color was wrong. Rather than the distinctive bright red that had defined the brand for so long, these new batches were pink.</p>
<p>So, over the next 2 years, the Vienna Sausage Company did everything they could to figure out what was wrong. The ingredients were all the same. The process was all the same. Maybe the ovens were cooking differently? Maybe the water on the North Side of Chicago wasn’t the same as the South Side? After all this time, no one had any idea what could possibly be missing.</p>
<p>Then, one night, a few workers were out reminiscing about their days in the old plant, when someone mentioned Irving. Irving was the kind of guy who had been there forever. He knew everyone; he had nicknames for everyone. Listen to what he did: his job was to take racks of sausages from refrigeration to the ovens. This could take as long as 30 minutes, as he had to wind through the maze of hallways and buildings of the old plant to get there. He would go through the warm hanging benches for the pastrami, through the boiler-room, next to the tanks where they cooked the corned beef, sometimes even up an elevator, until he finally got to the smokehouse.</p>
<p>In this new plant, there was no Irving—he didn’t want to commute from the South Side. And his long journey to the smokehouse was missing too—in this new facility, there just wasn’t any need for it. As it turns out, Irving’s trip, which gradually warmed the hot dogs before they were cooked in the smokehouse, was the secret ingredient. So secret that not even the company itself knew it.</p>
<p>I originally heard this story from Ira Glass, on an old episode of a radio program called <a href="http://thislife.org/">This American Life</a>. And the reason I found it so compelling was his take on it:</p>
<blockquote>
<p>“What I like about the story is the fact that these guys at the factory had done everything right. Finally built their dream factory, with the best equipment and expertise that money could buy. But you can’t think of everything.”</p>
<p>“Sometimes, you have no idea why you were a success in the first place.”</p>
</blockquote>
<p>That’s something you don’t really hear enough. All too often, we try to simplify things into easy-to-remember formulas that either cast success as a function of how hard you work or, conversely, as a matter of blind luck.</p>
<p>Graduation is a big milestone in life, certainly. It’s one among many that you use to fold up your personal story into neat little episodes.</p>
<p>It’s that interface where one chapter wraps up and another begins. Where our only choice is whether to reflect back or plan ahead.</p>
<p>I ask you today, to consider your own messy factory that you’ve unwittingly built-up over the last 20-some years. At milestones such as this, there is a tendency to want to reinvent yourself; to move across town, into a brand-new facility with everything meticulously designed to absolute perfection. And there’s no way to stop this—it’s human nature. Really, there’s no reason to either: sometimes it’s worth thinking about starting fresh with a clean slate. In fact, probably most of us will do just this to varying degrees, as we move across this country, and around the world, where our new identities await us. But as you stand there, with your clean slate in hand, ready to build “the perfect you”, remember this:</p>
<p>“Sometimes, you have no idea why you were a success in the first place.”</p>
<p>Look deep inside yourself, and you might begin to see where Irving is. He might be in that random elective you took on a whim that turned you onto something completely new and different. He might be in your renewed sense of hygiene after you transferred out of Computer Science. He might be in a very special person you met when you least expected it, who completely changed your life.</p>
<p>I ask you today, to look deep inside yourself, on every level, with the resolve to know yourself as well as you possibly can.</p>
<p>With luck, you may meet a fellow named Irving, who just so happens to be the very reason why you’re here today, graduating from Carnegie Mellon University.</p>
Computational / Poetry Thesis Blog Post / Stanza 三: Haikuhttp://mattt.me/2009/computational-poetry-thesis-blog-post-stanza-3-haiku/2009-03-17T00:00:00Z2009-03-17T00:00:00ZMattt Thompsonm@mattt.me<p>A wise man once said, that if you train an n-gram model with too much data, it will hurt. Bad. We’re talking Kurzweilian singularity ➡ grey goo ➡ ??? ➡ profit! kind of hurt. That’s the way I’ve felt over the last month or so, thinking about my thesis; there were so many directions I could go in, so many theoretically intriguing and clever avenues to venture.</p>
<p><a href="http://www.cs.cmu.edu/~nasmith/">A wise man</a> once told me that if you train an n-gram model for generation with too much data, it will hurt. Bad.</p>
<p>We’re talking Kurzweilian singularity ➡ grey goo ➡ ??? ➡ profit! kind of hurt.</p>
<p>That’s the way I’ve felt over the last month or so, thinking about my thesis; there were so many directions I could go in, so many theoretically intriguing and clever avenues to venture.</p>
<p>In order to head off my progressive channeling of Don Giovanni, I got back to fundamentals, and rediscovered what it really meant to be a haiku.</p>
<h2>俳句 – haiku</h2>
<p>“Ignorance of other cultures is the currency of ours.”</p>
<p>As much a law of the universe as entropy, any genuine artifact of a culture will inevitably become completely bastardized. Much the same fate has befallen the noble haiku, now distilled to a metrical exercise of 5-7-5. (Then again, who am I to judge, as if I of all people could bemoan the loss of 17th-century Japanese culture?) What appealed to me from the traditions lost were two missing pieces—<a href="http://en.wikipedia.org/wiki/Kigo">kigo</a> and <a href="http://en.wikipedia.org/wiki/Caesura">caesura</a>—that come together to further constrain the project, making the problem at hand more well-defined.</p>
<h3>季語 – kigo (season word)</h3>
<p>梅が香に / のっと日の出る / 山路かな</p>
<blockquote>
<p>scent of plum blossoms
on the misty mountain path
a big rising sun</p>
</blockquote>
<p>Traditional Japanese haiku have this remarkable ability to evoke a sense of place, as in this classic example from Bashō. In a post-modern context, though, such naturalistic references might best be supplanted by a pop-culture reference. Here’s my take:</p>
<blockquote>
<p>the earth is crying
our only hope is Al Gore
it’s gettin hott in hrrrr</p>
</blockquote>
<p>Yeah, that definitely speaks to me more than some plum trees and rocks in the fog.</p>
<p>From the start, I imagined that this program would provide a web interface, where people could generate poems about a keyword. Since I suspect most people would like to see poems about people and places, getting relevant text about named entities is actually a major concern.</p>
<h3>休止 – caesura</h3>
<p>春雨や / 小磯の小貝 / ぬるるほど</p>
<blockquote>
<p>spring rain —
small shells on a small beach
glittering</p>
</blockquote>
<p>This example by Buson illustrates the caesura, or break, of traditional haiku. Whether it’s a dash, period, or colon (semi- or otherwise), old-school haiku have a grammatical and indexical turn in them. A good haiku uses the two resultant shards to play off each other through stylistic and symbolic contrasts, embracing a sort of proto-Hegelian dialectic.</p>
<p>Pragmatically, this is pretty good news. Not only does the break provide a pseudo-grammatical structure to rest upon, but it also allows different semantic relationships to be explored. For instance, given a theme word or kigo, the former half could contain synonyms and metonyms, whereas the latter might be all antonyms.</p>
<h2>ボックスの話 – Natural Language Processing</h2>
<p>After a solid week of hacking on the project, a working prototype of the haiku module started to emerge.</p>
<p>Most importantly, I’m happy to report, it kinda works: the system generates valid haiku form just fine, and is cognizant of semantic relationships to the point of only needing parameter tuning.</p>
<p>Allow me to present the most reasonable and least-embarrassing results from initial testing:</p>
<blockquote>
<p>sunny cheerful day -
gloomy nimbus on Wall Street
today as stocks</p>
</blockquote>
<p>Stranger still, after generating this, my program decided to move to New York City to make it big, where it received mixed reviews on its interludes to the oppression of the working-man’s clock cycles.</p>
<h3>Introducing the Cast</h3>
<p>Keats is composed of several different NLP modules, each optimized for different tasks, which are combined using some pretty slick Ruby glue code. Here are the components used at the moment:</p>
<h4>CMU Pronouncing Dictionary</h4>
<p>As I mentioned before, the <a href="http://www.speech.cs.cmu.edu/cgi-bin/cmudict">CMU Pronouncing Dictionary</a> was the technical inspiration for my thesis project in the first place. It offers both phonological and prosodic information on tens of thousands of words. I indexed the flat-file into a MySQL database, which cached calculated information like number of syllables and vowel geometry, along with a delightful IPA string representation.</p>
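<p>As a rough illustration of that pre-computation (a sketch, not the actual indexing script), counting syllables from a dictionary entry is straightforward: in the CMU dictionary’s ARPAbet notation, vowel phones carry a stress digit (0, 1, or 2), so the syllable count is just the number of digit-terminated phones:</p>

```javascript
// Parse one flat-file entry, e.g. "HELLO  HH AH0 L OW1", into the word,
// its phones, and a syllable count derived from the vowel phones
// (the phones ending in a stress digit).
function parseEntry(line) {
  var parts = line.trim().split(/\s+/);
  var word = parts[0];
  var phones = parts.slice(1);
  var syllables = phones.filter(function (p) {
    return /\d$/.test(p);
  }).length;
  return { word: word, phones: phones, syllables: syllables };
}
```

<p>Caching this per-word record in a database means the haiku generator never has to re-derive syllable counts at generation time.</p>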
<h4>WordNet</h4>
<p>Perhaps one of the best-known NLP resources out there, Princeton’s <a href="http://wordnet.princeton.edu/">WordNet</a> is a semantic index of <a href="http://wordnet.princeton.edu/man/wnstats.7WN">staggering breadth and depth</a>. Not only does it have hundreds of thousands of entries, but for each entry there is an extensive relational graph for everything from synonymy to verb frames. In order to integrate it into my project, I wrote a script to transform the Prolog database into a MySQL relational database, and cross-reference it with the existing phonological information from the CMU Dictionary.</p>
<h4>PCFG</h4>
<p>After entertaining some ideas about language model training on a corpus of haiku, I decided the most reasonable way to approach it, at least for now, was with my trusted friend, the <a href="http://en.wikipedia.org/wiki/PCFG">Probabilistic Context-Free Grammar</a>. To get a feel for what it could do, I hand-wrote a series of transformation rules like <code>S ➡ Adj N V</code> or <code>S ➡ Adj Adj N V N</code>. From there, I would partition syllables for the line and find a candidate word from the WordNet SynSets.</p>
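<p>A toy version of that idea might look like the following; the grammar, lexicon, and function names are all illustrative, not Keats’s actual code. Each nonterminal maps to weighted expansions, and any symbol without a rule is treated as a terminal word:</p>

```javascript
// Tiny PCFG sketch: each nonterminal maps to weighted right-hand sides.
var grammar = {
  S:   [{ rhs: ["Adj", "N", "V"], p: 0.6 },
        { rhs: ["Adj", "Adj", "N", "V", "N"], p: 0.4 }],
  Adj: [{ rhs: ["gloomy"], p: 0.5 }, { rhs: ["sunny"], p: 0.5 }],
  N:   [{ rhs: ["nimbus"], p: 0.5 }, { rhs: ["day"], p: 0.5 }],
  V:   [{ rhs: ["rises"], p: 1.0 }]
};

// Recursively expand a symbol; `rand` is injected so runs can be seeded.
function expand(symbol, rand) {
  var rules = grammar[symbol];
  if (!rules) return [symbol];                 // terminal word
  var r = rand(), acc = 0, chosen = rules[rules.length - 1];
  for (var i = 0; i < rules.length; i++) {
    acc += rules[i].p;
    if (r < acc) { chosen = rules[i]; break; } // weighted choice
  }
  return chosen.rhs.reduce(function (out, s) {
    return out.concat(expand(s, rand));
  }, []);
}
```

<p>Injecting the random source makes the sketch testable; the real system would draw candidates from WordNet rather than a hard-coded lexicon.</p>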
<h4>n-Gram Modeling</h4>
<p>Building off a small sample of a New York Times text corpus, I started to work with <a href="http://en.wikipedia.org/wiki/Trigram">trigram</a> and larger <a href="http://en.wikipedia.org/wiki/N-gram">n-gram</a> language models to produce meaningful (or at least reasonable) sequences between target keywords. This is what generated “on Wall Street / today as stocks” in the example poem. There was a rule in the PCFG that replaced M with this Markov model output.</p>
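<p>The mechanics can be sketched in a few lines (the corpus and the walk below are toy stand-ins, not the actual model): count every trigram’s continuations during training, then walk forward from a seed bigram, here greedily taking the most frequent continuation rather than sampling:</p>

```javascript
// Build trigram counts: "w1 w2" -> { w3: count }.
function trainTrigrams(tokens) {
  var model = {};
  for (var i = 0; i + 2 < tokens.length; i++) {
    var key = tokens[i] + " " + tokens[i + 1];
    var next = tokens[i + 2];
    model[key] = model[key] || {};
    model[key][next] = (model[key][next] || 0) + 1;
  }
  return model;
}

// Greedy walk: from a seed bigram, repeatedly append the most
// frequent continuation until the target length (or a dead end).
function generate(model, w1, w2, length) {
  var out = [w1, w2];
  while (out.length < length) {
    var counts = model[out[out.length - 2] + " " + out[out.length - 1]];
    if (!counts) break;
    var best = Object.keys(counts).sort(function (a, b) {
      return counts[b] - counts[a];
    })[0];
    out.push(best);
  }
  return out;
}
```

<p>The real model would sample from the continuation distribution instead of always taking the argmax, which is what keeps generated lines from collapsing into the corpus’s single most common phrase.</p>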
<h3>All Together Now</h3>
<p>Given the kigo, or theme word, I look up all of the semantic pairs associated with it in WordNet, whose correspondent has an entry in the CMU Pronunciation dictionary. After ranking the candidates (now randomly, but later by phonological features), it will partition syllables and generate a PCFG for a sequence before and a sequence after the split. Using the candidate rankings, it will fill in the slots as necessary, until the grammar is satisfied and the poem is complete.</p>
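<p>The slot-filling step can be sketched as a small backtracking search (again, illustrative code, not the actual engine): given ranked candidate words with known syllable counts, find a sequence that lands exactly on the line’s syllable budget of 5 or 7:</p>

```javascript
// Find a sequence of candidate words whose syllable counts sum exactly
// to `budget`. Candidates are [word, syllables] pairs, tried in rank
// order; returns null when no exact fit exists.
function fillLine(candidates, budget) {
  if (budget === 0) return [];
  for (var i = 0; i < candidates.length; i++) {
    var word = candidates[i][0], syl = candidates[i][1];
    if (syl > budget) continue;
    var rest = fillLine(
      candidates.slice(0, i).concat(candidates.slice(i + 1)),
      budget - syl
    );
    if (rest !== null) return [word].concat(rest);
  }
  return null;
}
```

<p>Because candidates are tried in rank order, the first exact fit found is also the highest-ranked one, which is the behavior you want once ranking moves from random to phonological features.</p>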
<p>At this point, I’m feeling pretty good about my progress so far. After some fine-tuning and figuring out better ways to rank candidate words, I should have a pretty robust engine that will produce not only passable haiku, but with slight modification, Fibonacci poems as well. As for the Limerick part of the project, I’m hoping that something will click in the next month, so I’ll have some idea of where to start with that.</p>
Prayer from the SF-bound Caltrain at 15:12 in early Augusthttp://mattt.me/2009/prayer-from-the-sf-bound-caltrain-at-1512-in-early-august/2009-03-14T00:00:00Z2009-03-14T00:00:00ZMattt Thompsonm@mattt.me<p>Riding on the Caltrain enumerates the untraveled possibilities of my former voyeurism.</p>
<p>There’s a sort of listlessness when you exist between lives. I guess you can only live life in one of two states: content with eternity, or bracing for a change that sweeps over you like a lazy, salty tide. You can talk to permanent fixtures of the universe—champions of their own unique set of mannerisms & affectations, or if you’re unlucky (as I am) you’re talking to ghosts, just waiting for the last anything (everything).</p>
<p>Perhaps it’s the presence of disorientation that allows me to explore. I wonder how different that really makes me, anyway. The weeks build upon each other, and the sins of monotony, of convenience, of routine, become unbearable temptations. The air ceases to carry that unfamiliar electricity.
The roads cease to branch
It’s a game of how long you can pretend to embrace the frightening beauty of chance,</p>
<p>Riding on the Caltrain enumerates the untraveled possibilities of my former voyeurism.</p>
<p>Questions of “what if” replaced by “why not”. Yet their vapor trails cast a shadow of phony, constructed illusions. Of what it was actually like to toss caution to the wind and venture to the Hillsdale stop and visit the racetrack. These un-lived carcasses of imagination hurt the most. As if their vacuous lies actualize their emptiness deep in your gut. It teases you with the prospect of making up for missed time. What a sorry trap.</p>
What Can "Left 4 Dead" Teach Us About The Social Web?http://mattt.me/2009/what-can-left-4-dead-teach-us-about-the-social-web/2009-02-22T00:00:00Z2009-02-22T00:00:00ZMattt Thompsonm@mattt.me<p>Valve's implementation of achievements in Left 4 Dead demonstrate 4 primary uses of the achievement framework: to be Instructive, to be Prescriptive, to be Demarcative, and to act as Incentive. Looking at how achievements shape the gameplay experience, there's a lot that can be applied in the context of social web applications, too.</p>
<p>Whether it’s <a href="http://half-life2.com/">Half Life</a>, <a href="http://tf2.com/">Team Fortress 2</a>, <a href="http://www.aperturescience.com/">Portal</a>, or, most recently, <a href="http://l4d.com/">Left 4 Dead</a>, <a href="http://valvesoftware.com/">Valve</a> has an attention to detail that is unsurpassed among game developers. If it wasn’t for their commitment to opening up their development process, such as through their brilliantly-conceived in-game developer commentary, I’d have no recourse but to conclude that these games were delivered on high from <a href="http://www.venganza.org/">His Noodley Goodness</a>.</p>
<p>Thankfully, their efforts have not gone unnoticed, given their commercial success as well as strong critical acclaim from game journalists and bloggers alike. Since the release of Left 4 Dead in November 2008, there’s been a steady stream of articles about the subtle touches that made for such a unique gameplay experience. One particularly awesome piece by <a href="http://www.offworld.com/2008/12/why-left-4-dead-has-the-best-t.html">John Brownlee</a> details how the 5 minute intro video communicates all of the important gameplay dynamics without the player even noticing. Brilliant.</p>
<p>Reading this article got me thinking about what good ideas more conventional software developers could absorb from Valve. What I found particularly intriguing was L4D’s achievement system, and its potential in the context of the social web.</p>
<p>Mind you, achievement systems are not particularly novel. Perhaps the most ubiquitous example is <a href="http://www.xbox.com/en-US/live/">XBox Live</a>, wherein players unlock achievements by meeting conditions in a particular game, thereby earning trophy icons and Gamerscore points. These accomplishments are usually along the lines of “complete the game on hard difficulty” or “score 100,000 points”. Valve’s implementation (viz <a href="http://steampowered.com/">Steam</a>) in particular, though, is the first to really use achievements effectively.</p>
<p>In particular, I’ve identified 4 primary uses of the achievement framework: to be Instructive, to be Prescriptive, to be Demarcative, and to act as Incentive. For each of these, I’ll explore a canonical example from the <a href="http://www.steampowered.com/status/l4d/">Left 4 Dead achievements</a>, and tie it all together with <a href="http://microformats.org/">microformats</a>. Yes, <a href="http://microformats.org/">microformats</a>. Excited yet? Just so you know, there’s an achievement to earn by getting to the end of this article.</p>
<h2>Left 4 Uses of Achievements</h2>
<h3>Instructive</h3>
<p>All FPS’s are pretty much the same on the surface:
W-A-S-D, Mouse to look and aim, Click to shoot, R to reload, Numbers for weapons, Space to jump.
Play one and you’ve played them all.</p>
<p>This may not be a bad thing in itself, but it does present some design issues. For instance, how do you tell players about something new? No one reads instruction manuals, complex controller maps on loading screens are obnoxious, and in-game tutorials are really awkward in a story’s context (“Wow, an upgrade! Now I can press B to fire missiles!”).</p>
<p>One such hidden ability is that you can instantly kill an infected by sneaking up and melee-attacking them from behind. As you might expect, melee-ing an enemy from behind isn’t a very standard idiom. Not like a Head Shot, at least. So how did I learn to do that? Well, there just so happens to be an achievement called “Spinal Tap”, which describes this exact situation.</p>
<p><img alt="Spinal Tap Achievement" src="http://cdn.matttthompson.com/images/l4d-spinal-tap.png" /></p>
<p>As obscure as this might seem, buried in an achievements list and whatnot, <a href="http://www.steampowered.com/status/l4d/">according to Valve’s stat tracking</a>, over 60% of players had discovered this at the time of this writing, perhaps many of them prompted by the achievement itself.</p>
<p>Achievements like “Spinal Tap” promote exploration by creating an invitation for users to try new things. By learning just this one new thing, a user will be more compelled to step out of their common habits to discover something new for themselves.</p>
<h3>Prescriptive</h3>
<p>What separates Left 4 Dead from pretty much any other multiplayer game is the intense focus on small-group cooperation. Valve took a huge risk with this too: if it hadn’t nailed that game dynamic, the whole game would have failed. As the central design philosophy for the game, everything comes back to cooperation in Left 4 Dead. Pull a <a href="http://www.youtube.com/watch?v=LkCNJRfSZBU">Leeroy Jenkins</a> and fight the horde on your own, and don’t be surprised when a hunter starts feasting on your innards.</p>
<p>So consider another achievement in Left 4 Dead: </p>
<p><img alt="Dead Giveaway Achievement" src="http://cdn.matttthompson.com/images/l4d-dead-giveaway.png" /></p>
<p>When you’ve been rattled by zombies to the point of near-collapse, it’s difficult to even consider healing someone else before yourself. However, achievements such as “Dead Giveaway” exist to prescribe such selfless actions, rewarding cooperation and cohesion within the group. Subtle rewards like this stress the importance of teamwork and empathy—just as Valve had in mind.</p>
<h3>Demarcative</h3>
<p>A core facet of our human nature is the importance of understanding one’s place in the world. For platformer and adventure games, like Super Mario World, the world map provides a sense of the vastness of the game, and your progress through it. RPGs use highly-developed plot in conjunction with leveling systems to contrast how far you’ve come since the beginning of the game. Puzzle games taunt you with high-scores.</p>
<p>Without achievements, Left 4 Dead wouldn’t have a strong sense of place on its own. You could spend hundreds of hours blasting through each campaign, and be right back where you started.</p>
<p>Of course, this isn’t the case.</p>
<p><img alt="Toll Collector" src="http://cdn.matttthompson.com/images/l4d-toll-collector.png" /></p>
<p>A strong cast of player achievements serves as a permanent record of where your time has gone, and how far your accomplishments reach. You get an achievement for playing through any of the campaigns once, like “Toll Collector” for completing the Death Toll campaign. You get one for beating them on Expert, too. Even though you spent hours earning each of them, when you certainly had better things to be doing, at least you have something to show for it.</p>
<h3>Incentive</h3>
<p>One thing is for sure, given fads and trends that pop in and out of existence: people love to collect shit.</p>
<p>The whole concept of an achievements system is based upon this core premise. Present a player with an empty trophy case, and they’ll spend hours upon hours scouring the game world to fill it up. A game might only last a few hours, but the meta-game is, or can be, eternal.</p>
<p><img alt="Zombie Genocidest" src="http://cdn.matttthompson.com/images/l4d-zombie-genocidest.png" /></p>
<p>Want to earn all of the achievements in Left 4 Dead? You’ll have to kill 53,595 infected first. To put that in perspective, I average a couple hundred on a normal campaign, each of which takes about an hour. For the average player, this accomplishment might take 100 hours to complete. What’s amazing is that about 3% of players have done it. That’s some dedication.</p>
<h2>Left 4 Relevance?</h2>
<p>So to recap, here’s what Valve’s achievement system does:</p>
<ul>
<li>It encourages players to explore and discover (“Spinal Tap”)</li>
<li>It instructs players on the right way to do things (“Dead Giveaway”)</li>
<li>It provides players a sense of place and accomplishment (“Toll Collector”)</li>
<li>It challenges players to invest more time in the game (“Zombie Genocidest”)</li>
</ul>
<p>This is exactly what you want users to be doing. Not only are achievements effective, but there’s very little technical overhead at all. So long as some real time is put into thinking about how you want to encourage users, the only other things you need are some delicious icons and witty titles. For a fantastic write-up about using these in a web context, check out the <a href="http://developer.yahoo.com/ypatterns/pattern.php?pattern=achievements">Yahoo! Reputation Design Pattern Library</a>.</p>
<h3>Web 4.0 Achievements</h3>
<p>Putting things into context, imagine if popular sites implemented their own achievement system:</p>
<p><img alt="Last.fm Achievement" src="http://cdn.matttthompson.com/images/lastfm-achievement.png" /></p>
<p><img alt="Facebook Achievement" src="http://cdn.matttthompson.com/images/facebook-achievement.png" /></p>
<p><img alt="YouTube Achievement" src="http://cdn.matttthompson.com/images/youtube-achievement.png" /></p>
<h3>microformats 4 the win</h3>
<p>Just Blue Sky Solutioneering™ here, but a portable achievement network like Xbox Live or Steam on the web would be pretty sweet. Call it a spiritual successor to those vBulletin-era forum rankings and titles.</p>
<p>To help you get started, here’s a first conceptualization of a microformat spec. For now, let’s just call it hAchievement:</p>
<pre><code><div id="blogdor" class="achievement">
<img src="images/trophy.png" class="icon" width="60" height="60"/>
<h1 class="title">Blogdor, the Wordenator!</h1>
<p class="description">You read this blog post. Gratz!</p>
</div>
</code></pre>
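<p>To make the spec concrete, here’s a hypothetical Ruby helper that emits hAchievement markup. The element and class names just mirror the draft above; everything else (the method name, keyword arguments) is my own sketch, not part of any real microformat library.</p>

```ruby
# Hypothetical helper that renders an hAchievement block; the element
# structure and class names mirror the draft spec above.
def h_achievement(id:, icon:, title:, description:)
  <<~HTML
    <div id="#{id}" class="achievement">
      <img src="#{icon}" class="icon" width="60" height="60"/>
      <h1 class="title">#{title}</h1>
      <p class="description">#{description}</p>
    </div>
  HTML
end

puts h_achievement(id: "blogdor",
                   icon: "images/trophy.png",
                   title: "Blogdor, the Wordenator!",
                   description: "You read this blog post. Gratz!")
```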
Second Stanza: Of Phones and Phonemeshttp://mattt.me/2009/second-stanza-of-phones-and-phonemes/2009-01-19T00:00:00Z2009-01-19T00:00:00ZMattt Thompsonm@mattt.me<p>That the creative processes of all writers—poets, novelists, academics, and the like—are completely hidden, is what makes NLP so frustrating. Such underdetermination is the reason why <a href="http://www.xkcd.org/114/">XKCD can (justifiably) dish beeves upon computational linguists with such pizazz</a>. There’s just no way to know what the hell is going on underneath the hood with human language, and all we have to go on is what comes out on the other side.</p>
<p>On that note, my Plan B for this project is a magical horse named <a href="http://en.wikipedia.org/wiki/Zellig_Harris">“Zellig Horsis”</a>. Give him any subject matter, anything at all, and that thoroughbred synthesized lyrical genius will tap a Pulitzer-winning poem in morse code for you. Put peanut butter under his lips, and you can even imagine he’s actually saying it!</p>
<p>Horses aside, with all of the million ways I could approach computational poetry, my inner-linguist compels me to seriously consider my formal training. My roots. “Consult the <a href="http://en.wikipedia.org/wiki/Jan_Niecisław_Baudouin_de_Courtenay">Kazan School</a>, my son” suggests my inner-voice, sounding suspiciously like Jeff Goldblum.</p>
<h2>Two Things I Learned in Phonology</h2>
<h3>Feature Analysis of Consonants</h3>
<p>One of the great linguistic traditions is the <a href="http://en.wikipedia.org/wiki/Prague_Linguistic_Circle">Prague Linguistic Circle</a> from the 1930s. Among their many contributions was Feature Analysis, an entirely different way to understand phonetic inventories. It looks inside the phonemes themselves, breaking through their seeming atomicity to understand them as bundles of descriptive features.</p>
<p>For instance, consider the phoneme [d], as in duck:</p>
<p><strong>Articulatory Phonetics</strong>: <em>Voiced Dental or Alveolar Plosive</em></p>
<p><strong>Feature Analysis</strong>: <code>[+Consonantal, -Vocalic, +Voicing, -Continuant, -Strident, -Nasal, -Tenseness, -Rounding]</code></p>
<p>Although the familiar articulatory perspective is more concise, feature analysis can take any two sounds and describe how they differ at a much finer granularity. By breaking phonetic strings down into these features, I could construct a much more complex machine learning strategy: for instance, computationally evaluating how well sounds come together inside words to produce a sort of poetic “score”.</p>
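<p>The feature-bundle idea can be sketched directly: represent each phoneme as a hash of binary features, and measure how two sounds differ by counting disagreements. The inventory below is a toy assumption — three phonemes, six features trimmed from the bundle shown above — not a complete feature system.</p>

```ruby
# Toy feature bundles for three phonemes; values are illustrative
# assumptions, trimmed to six features from the [d] bundle above.
FEATURES = {
  "d" => { consonantal: true, vocalic: false, voicing: true,
           continuant: false, strident: false, nasal: false },
  "t" => { consonantal: true, vocalic: false, voicing: false,
           continuant: false, strident: false, nasal: false },
  "n" => { consonantal: true, vocalic: false, voicing: true,
           continuant: false, strident: false, nasal: true },
}

# Distance between two phonemes: the number of features they disagree on.
def feature_distance(a, b)
  fa, fb = FEATURES.fetch(a), FEATURES.fetch(b)
  fa.keys.count { |k| fa[k] != fb[k] }
end

feature_distance("d", "t")  # => 1 (voicing only)
feature_distance("t", "n")  # => 2 (voicing and nasality)
```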
<p>Much of the delight in poetry comes from the delicious way sounds crash and coalesce to form a unique identity. Poetry, after all, is to be enjoyed out loud; the layering of semantic and prosodic and phonetic imagery is the essence and ultimately the mystique of poetry.</p>
<h3>Articulation of Vowels</h3>
<p>Vowels, as you might expect, are quite a bit different from consonants. Whereas consonants have a more distinct point of articulation (whether bilabial or dental or velar or uvular, placement variation doesn’t matter too much within that location), vowels are quite a bit harder to pin down. With continuous airflow through your vocal tract, you become a musical instrument of sorts. That is, the shape of your mouth changes what frequencies are generated and how it sounds.</p>
<p>If you ever took an undergraduate class in Linguistics, you’ve probably seen the <a href="http://www.youtube.com/watch?v=BH4D9g6D5kY">infamous “tongue video”</a>. It’s rough, but it does a good job of showing just how the tongue affects the shape of the oral tract, and how that corresponds to each phoneme.</p>
<p>Thing is, between languages like <a href="http://en.wikipedia.org/wiki/Hungarian_phonology">Hungarian</a>—which has 14 distinct vowels—and the combination of neighboring consonants, which color a vowel, there are a lot of possible sounds for each “vowel”. That’s when I got this completely random insight: what if I used the same geometric algorithm to calculate vowel proximity that I would use to <a href="http://votermap.us/">calculate cartographic entities</a>? Since I’m already using MySQL, why not add a spatial column to map the geometry of each word? It might be just crazy enough to work.</p>
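<p>The cartographic analogy might look like this: plot each vowel as a point in a two-dimensional backness/height plane and use plain Euclidean distance for proximity — the same math you’d use for points on a map. The coordinates below are rough placements on a 0..1 scale, an assumption for illustration, not measured formant values.</p>

```ruby
# Vowels as points in a (backness, height) plane, scaled 0..1.
# Coordinates are rough placements, not measured formant values.
VOWEL_SPACE = {
  "i" => [0.0, 1.0],  # high front
  "u" => [1.0, 1.0],  # high back
  "a" => [0.4, 0.0],  # low central
}

# Euclidean distance, exactly as for cartographic points.
def vowel_distance(v1, v2)
  x1, y1 = VOWEL_SPACE.fetch(v1)
  x2, y2 = VOWEL_SPACE.fetch(v2)
  Math.sqrt((x1 - x2)**2 + (y1 - y2)**2)
end

vowel_distance("i", "u")  # => 1.0
```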
<h2>Grasping at Straws</h2>
<p>Where either of these approaches gets me remains to be seen. Like most things with NLP, it’s a crap shoot until you actually try it.</p>
<p>In the meantime, I’ve had fun writing Ruby glue code to pretty-print IPA. My next step is to implement a formal subclass, or maybe an independent analog, of the String class that stores IPA strings as an array of phonemes, each with their own feature bundle definition. Yes, it’s nerdy, but damn is Ruby meta-programming fun.</p>
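<p>A rough sketch of that String analog: an IPA string stored as an array of phonemes rather than bytes or characters. The class name and interface here are my own invention, and the feature bundles are stubbed out entirely.</p>

```ruby
# Sketch of a String analog that stores an IPA string as an array of
# phonemes. Feature bundles per phoneme are omitted from this sketch.
class IPAString
  include Enumerable

  attr_reader :phonemes

  def initialize(phonemes)
    @phonemes = phonemes
  end

  def each(&block)
    @phonemes.each(&block)
  end

  def length
    @phonemes.length  # counted in phonemes, not characters
  end

  def to_s
    @phonemes.join
  end
end

duck = IPAString.new(["d", "ʌ", "k"])  # "duck"
duck.length  # => 3
duck.to_s    # => "dʌk"
```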
The Magical Tale of My Computational Poetry Thesis, First Stanzahttp://mattt.me/2009/the-magical-tale-of-my-computational-poetry-thesis-first-stanza/2009-01-11T00:00:00Z2009-01-11T00:00:00ZMattt Thompsonm@mattt.me<p>In order to complete my BA Linguistics, I have to write a Thesis. My goal for this project is to develop a programmatic module that, given a subject — be it love, Paris, or Heisenberg’s Uncertainty Principle—can produce a valid (and ideally tear-jerking or awe-inspiring) poem in the forms of Haiku, Limerick, and Fib.</p>
<p>In order to complete my BA Linguistics, I have to write a Thesis. But not on just anything. No no. I had to come up with a topic that met some level of marginal approval from my advisor, Professor Mandy Simons, a sweet but daunting woman of ~5’2” with a bespectacled glare and a curious, nearly-British accent that escapes all notions of placement like a magical knot that becomes tighter when you try to untie it.</p>
<p>Because OT field research on Cherokee wasn’t going to cut it, I had to soul-search for a topic whose premise would not ostensibly get me thrown out a window.</p>
<p>What I settled on was a combination of my favorite parts of linguistics: probabilistic models and phonology.</p>
<p>You see, what first got me intensely interested in linguistics was <a href="http://en.wikipedia.org/wiki/Markov_chain">Markov Chains</a>. It’s crazy that such a simple mechanism can produce something that has all of the great taste of lexical value, but with none of the calories! A few months after I first discovered them, <a href="http://loremipscream.com/">Lorem Ipscream</a> was born, and the world rejoiced at its creamy goodness. A <a href="http://flolcatr.com/">furrier project</a> of the same Markovian fervor was born a few months later. That, alas, was an insurmountably bad idea.</p>
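<p>A word-level Markov chain of the Lorem Ipscream variety fits in a few lines of Ruby. This is a minimal bigram sketch of the mechanism, seeded for reproducibility — my own illustration, not the actual code behind either project.</p>

```ruby
# Build a bigram Markov chain: each word maps to the words that followed it.
def build_chain(text)
  chain = Hash.new { |h, k| h[k] = [] }
  text.split.each_cons(2) { |a, b| chain[a] << b }
  chain
end

# Walk the chain from a starting word; seeded RNG keeps runs reproducible.
def babble(chain, start, max_words, rng: Random.new(42))
  out = [start]
  (max_words - 1).times do
    successors = chain[out.last]
    break if successors.empty?
    out << successors.sample(random: rng)
  end
  out.join(" ")
end

chain = build_chain("the cat sat on the mat")
chain["the"]  # => ["cat", "mat"]
```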
<p>These days, I’ve been on quite the phonology binge. Ask any of my friends.
For instance, my girlfriend has refused to let me practice my voiced uvular trills, despite my well-argued point that it’s a fun noise to make.</p>
<p>Anyway, combine the two, and you get my thesis project, which I’ve come to call Keats.
Just as you might expect, I chose to honor <a href="http://en.wikipedia.org/wiki/John_Keats">the great poet</a> purely to exude an image of sophistication.
Never read a lick of him in my life, to be honest.</p>
<h2>Thesis Abstract, would I were stedfast as thou art</h2>
<p>My goal for this project is to develop a programmatic module that, given a subject—be it love, Paris, or Heisenberg’s Uncertainty Principle—can produce a valid (and ideally tear-jerking or awe-inspiring) poem in the forms of Haiku, Limerick, and Fib.</p>
<p>These three chosen forms represent three distinct problem spaces, as well as three very different opportunities to investigate the essence of poetic form.</p>
<h3>Haiku</h3>
<blockquote>
<p>Haikus are easy</p>
<p>But sometimes they don’t make sense</p>
<p>Refrigerator
<a href="http://www.typetees.com/product/623/Haikus_are_easy_but_sometimes_they_don_t?=">Threadless T-Shirt</a></p>
</blockquote>
<p>5/7/5. Anybody who’s anybody has written a <a href="http://en.wikipedia.org/wiki/Haiku">Haiku</a> before. What could be simpler?
Without any constraints on rhyme or syntax—just the right meter—Haiku are at once the easiest to implement and the hardest to get right. What is it about a Haiku that makes it so great? Is it all in the content of simple but powerful words in sequence, or is there a hidden prosody that escapes normal conscious detection?</p>
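<p>As a first stab at validating the form, a naive syllable heuristic can check the 5/7/5 meter. Counting runs of vowel letters is a crude assumption — it miscounts plenty of real English words, like “are” — but it shows the shape of the problem before any real phonological analysis enters the picture.</p>

```ruby
# Naive syllable estimate: count runs of vowel letters. A crude
# heuristic standing in for real phonological analysis.
def syllables(word)
  count = word.downcase.scan(/[aeiouy]+/).length
  count.zero? ? 1 : count
end

# A haiku candidate passes if its three lines scan as 5/7/5.
def haiku?(lines)
  lines.map { |line| line.split.sum { |w| syllables(w) } } == [5, 7, 5]
end

syllables("refrigerator")  # => 5
```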
<h3>Limerick</h3>
<blockquote>
<p>There once was a man named Bertold</p>
<p>Who drank beer when the weather grew cold</p>
<p>As he reached for his cup…</p>
<p>“NEEEEVER GONNA GIVE YOU UP!!!”</p>
<p>Oh, snap! You just got limerickrolled!
<a href="http://limerickdb.com/?383">Limerick DB #383</a></p>
</blockquote>
<p>Ah, the lyric form of the Everyman. Filled to the brim with innuendo and wit, it just wouldn’t be any fun to do a project without <a href="http://en.wikipedia.org/wiki/Limerick_(poetry)">Limericks</a>.</p>
<p>As luck would have it, Limericks have a pleasant balance of meter and rhyming constraints to make the problem of weaving sultry narratives interesting, while narrowing the problem into something bite-sized and manageable. Already, I have a sense that this may prove to be a harsh battleground between the forces of linguistic-based models and statistical, machine learning models.</p>
<h3>Fib</h3>
<blockquote>
<p>One</p>
<p>Small,</p>
<p>Precise,</p>
<p>Poetic,</p>
<p>Spiraling mixture:</p>
<p>Math plus poetry yields the Fib.
<a href="http://gottabook.blogspot.com/2006/04/fib.html">Pincus, Gregory K.</a></p>
</blockquote>
<p>And finally, the wildcard: Fibonacci-metered poems—known by poets in the know as, simply, “Fib”. Forged in the crucible of rebellion against the scourge of free-verse poetry in the 1990s, this postmodern construct provides much-needed structure to the syntax-starved poetry slammers while maintaining an open-endedness and irony that resembles Germany’s inexplicable lust for all things cowboys and <a href="http://www.youtube.com/watch?v=QUHomRLop7I">David Hasselhoff</a>.</p>
<p>Like Haiku, meter—not rhyme—is enforced. However, the number of syllables per line is dictated by <a href="http://en.wikipedia.org/wiki/Fibonacci_sequence#Origins">an expression of the number of rabbit pairs per generation over time</a>. Aside from <a href="http://en.wikipedia.org/wiki/Fibonacci_sequence#Fibonacci_numbers_in_nature">showing up everywhere in nature</a>, the imagery of the two previous lines combining to form the next is quite, well, poetic.</p>
<p>As a theoretically open-ended form (although conventional Fib poets stop at line six), it also presents the challenge of producing the longest possible poem. Line 20, for example, would require 6765 syllables, which is just about the length of an SAT writing sample. Clearly, things are going to get pretty freaky if I leave this running overnight.</p>
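<p>The meter itself is trivial to generate: the syllable count for each line is just the Fibonacci sequence. A quick sketch confirms the line-20 arithmetic.</p>

```ruby
# Syllable counts per line of a Fib: the Fibonacci sequence itself.
# Conventional Fibs stop at line six; the form is open-ended in theory.
def fib_meter(lines)
  counts = [1, 1]
  counts << counts[-1] + counts[-2] while counts.length < lines
  counts.first(lines)
end

fib_meter(6)        # => [1, 1, 2, 3, 5, 8]
fib_meter(20).last  # => 6765
```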
<p>So there you have it. The beginning to my semester-long quest to uncover the secret of this timeless form of expression. For posterity, and as a demonstrative means of not putting everything off until the last minute, I’ll be posting updates as the project develops.</p>
10,000 Resolutionshttp://mattt.me/2009/10000-resolutions/2009-01-01T00:00:00Z2009-01-01T00:00:00ZMattt Thompsonm@mattt.me<p>Every January 1st, as is the custom for millions of Americans, we enter into a collective delusion to commit ourselves to vague truisms that we already know to be good for us. This time around, I'm ditching resolutions for simple accounting.</p>
<p>Resolutions, come to think of it, are rather silly.</p>
<p>Every January 1st, as is the custom for millions of Americans, we enter into a collective delusion to commit ourselves to vague truisms that we already know to be good for us.</p>
<ul>
<li>Read more.</li>
<li>Eat healthy.</li>
<li>Work out.</li>
<li>Don’t procrastinate.</li>
<li>Be a better person.</li>
</ul>
<p>Well, duh.</p>
<p>This time around, I’m tired of setting myself up for failure.</p>
<p>This time around, I’m ditching resolutions for simple accounting.</p>
<h2>Life is like World of Warcraft—It all comes down to <a href="http://en.wikipedia.org/wiki/Grind_(gaming)">grinding</a>.</h2>
<p>Among the books I’ve had the chance to finally enjoy over the holiday was <a href="http://www.amazon.com/gp/product/0316017922?ie=UTF8&tag=mattthom-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0316017922">Malcolm Gladwell’s “Outliers”</a>. One of the most striking ideas in the book is <a href="http://www.youtube.com/watch?v=Hz4hPbHIZ6Y">“The 10,000 Hour Rule”</a>. Simply stated, it takes about 10,000 hours for the human mind to fully assimilate a skill to the point of mastery.</p>
<p>It’s an incredibly empowering sentiment: provided that you meet some minimal intelligence or talent requirements (chances are, you’re fine), it’s conceivable that the old adage, “You can do anything you put your mind to”, will come to pass. Even if you don’t completely buy into this magical number, 10,000 hours is a long ass time; if not mastery then, hell, 10,000 hours certainly couldn’t hurt.</p>
<h2>Rather be working for a paycheck than waiting to win the lottery.</h2>
<p>Consider this excerpt from the book <a href="http://www.amazon.com/gp/product/0961454733?ie=UTF8&tag=mattthom-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0961454733">Art & Fear</a> (<a href="http://www.codinghorror.com/blog/archives/001160.html">via Coding Horror</a>):</p>
<blockquote>
<p>The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”.</p>
<p>Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.</p>
</blockquote>
<p>For all that we stress about in our creative processes, it all boils down to the simple tautology—that the only way to get better at something is to do it. Yes, a lot of it’s going to suck. Yes, you’re going to hate when things don’t turn out right. Just learn to get over it. And for god’s sake, don’t get embarrassed about it. You’ve got 10,000 hours to get it right. Don’t worry about getting it wrong a couple times.</p>
<p>Underneath every brilliant work, there was a lot of sweat, love, and determination to get there. Figure out what you love, and start chipping away.</p>
<p>Happy New Year.</p>