<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  
  
  
  
  <title>Bret Taylor's blog</title>
  <updated>2014-04-24T05:48:34Z</updated>
  <id>http://backchannel.org/</id>
  <link rel="alternate" href="http://backchannel.org/" title="Bret Taylor's blog" type="text/html"/>
  <link rel="self" href="http://backchannel.org/blog/feed" title="Bret Taylor's blog" type="application/atom+xml"/>
  
    <entry>
      <id>http://backchannel.org/blog/mobile-multitasking</id>
      <title type="text">We need better mobile multitasking</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/mobile-multitasking" rel="alternate" type="text/html"/>
      <updated>2014-04-24T05:48:34Z</updated>
      <published>2014-04-24T04:36:53Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;The dominant trend in mobile design is to make apps completely immersive and tactile. Almost every new app I see has the same design philosophy: big text and photos, usually one per screen, that you can page through with the flick of a finger.&lt;/p&gt;

&lt;p&gt;These next generation apps are beautiful, but they don’t address my biggest mobile usability concern: &lt;em&gt;multitasking&lt;/em&gt;. In many ways, current design trends are moving in the opposite direction &amp;mdash; every app experience is full screen, completely taking over my device and ignoring the app I was using previously.&lt;/p&gt;

&lt;p&gt;Whenever I need to do research or genuinely use two apps simultaneously, I still have to go back to my laptop (despite being a poster child for &lt;a href=&quot;https://quip.com/&quot;&gt;embracing mobile&lt;/a&gt;). I think the mobile OS that figures out a multitasking interface as efficient as a PC’s will gain much wider adoption on tablets. It may even &lt;a href=&quot;http://www.wired.com/2014/04/apple-q2-earnings-2/&quot;&gt;reinvigorate tablet sales&lt;/a&gt; by expanding the use cases for post-PC devices.&lt;/p&gt;

&lt;p&gt;I personally want a tabbed interface for apps on my tablet like I have for sites in my web browser. I am in the minority based on current mobile design trends, but I care more about the utility of transitioning between apps quickly than I do about the size of my photos.&lt;/p&gt;

&lt;p&gt;As an engineer, I find it interesting to observe the nuanced differences between desktop browsers and mobile apps as they relate to the user interface. I am not talking about HTML vs. native code, but about which parts of the interface are in the developer’s control vs. the end user’s.&lt;/p&gt;

&lt;p&gt;Web browsers are distinctly about user control. As the end user, I control the size of the window. I can right click on a link and control whether it opens in the same tab or a new tab. I can install plugins that directly manipulate apps written by other people. I can highlight anything I see and copy it to the clipboard. As a developer, you have to try really hard to disable these behaviors, and in most cases, it is impossible to do so.&lt;/p&gt;

&lt;p&gt;Mobile apps have taken the opposite approach for a combination of technical and product design reasons. Everything is in the developer’s control. Text isn’t selectable unless the developer decides it is. When a user clicks on a link or a button, the developer gets to choose what happens. Whenever I click on an address on my iPad, it automatically opens in Apple Maps even though I use Google Maps exclusively (I am &lt;a href=&quot;http://googleblog.blogspot.com/2005/02/mapping-your-way.html&quot;&gt;admittedly biased&lt;/a&gt;). As the end user, I have absolutely no control over any of these behaviors.&lt;/p&gt;

&lt;p&gt;I think multitasking is a natural side effect of operating systems that offer greater end user control. PC-like windowed interfaces are clearly not the right choice for tablets, but we need something better than we have now (and no, &lt;a href=&quot;http://developer.android.com/guide/components/intents-filters.html&quot;&gt;Android intents&lt;/a&gt; aren’t even close to enough).&lt;/p&gt;

&lt;p&gt;I think it will require a philosophical shift from where we are now, taking some control out of the hands of app developers and putting it back in the hands of the end user so she can control what apps are on her screen at the same time. I think doing so would put the &lt;a href=&quot;http://www.idc.com/getdoc.jsp?containerId=prUS24129713&quot;&gt;final nail in the coffin&lt;/a&gt; for PCs and make my tablet a true replacement for my laptop.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/ios-vector-graphics</id>
      <title type="text">Vector Graphics in iOS via PDF</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/ios-vector-graphics" rel="alternate" type="text/html"/>
      <updated>2013-11-03T06:44:33Z</updated>
      <published>2013-11-03T06:24:11Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;I make iOS card games in my spare time. I love playing cards on my iPhone, and I love making the AI for the computer player (yes, I am that type of person).&lt;/p&gt;

&lt;p&gt;When I started my first game, I couldn't find any decent library to render playing cards on iOS. There were a bunch of repositories of playing card images (most of them hideous), but none worked for me. I needed multiple resolutions: iPhone, iPad, retina, non-retina, etc. I really wanted to be able to specify and change card sizes in my code.&lt;/p&gt;

&lt;p&gt;I eventually discovered an &lt;a href=&quot;https://code.google.com/p/vectorized-playing-cards/&quot;&gt;awesome vector playing card set&lt;/a&gt; on Google Code, but quickly ran into a different roadblock: iOS does not support rendering vector graphics natively.&lt;/p&gt;

&lt;p&gt;There are a few open source projects like &lt;a href=&quot;https://github.com/SVGKit/SVGKit&quot;&gt;SVGKit&lt;/a&gt; that offer full-stack SVG parsing, but I was reluctant to import such a large library into my weekend project just to render a playing card.&lt;/p&gt;

&lt;p&gt;Knowing iOS and Mac OS support PDF natively, I decided to take a different approach: convert my vector graphics to a PDF document. PDF supports vector drawing, and I could use the &lt;a href=&quot;https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CGPDFPage/Reference/reference.html&quot;&gt;Core Graphics PDF functions&lt;/a&gt; to convert them to UIImages, i.e.,&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;NSURL *url = [NSURL fileURLWithPath:[NSBundle.mainBundle pathForResource:@&quot;Card&quot; ofType:@&quot;pdf&quot;]];
CGSize size = CGSizeMake(100, 200); // Output image size
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url); // __bridge needed under ARC
UIGraphicsBeginImageContextWithOptions(size, NO, 0); // scale 0 = device scale, so retina comes free
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the context: PDF coordinates have their origin at the bottom left
CGContextTranslateCTM(context, 0.0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // PDF pages are 1-indexed
CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(page, kCGPDFBleedBox, CGRectMake(0, 0, size.width, size.height), 0, true));
CGContextDrawPDFPage(context, page);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance UIGraphicsBeginImageContextWithOptions
CGPDFDocumentRelease(document);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In practice, I converted the repository of playing card graphics into a 52-page PDF document (one page per card) and converted the pages to images in a single batch. It isn't particularly fast, but it was still faster than the competing SVG libraries. As long as you cache the UIImages you generate, it is more than fast enough, and it is much easier (and much smaller) than shipping 4 or 8 images per graphic given the proliferation of iOS device screen resolutions.&lt;/p&gt;

&lt;p&gt;I hadn't seen anyone else using this technique (PDF as a vector graphics file format for iOS), so I figured I would blog about it in case it is useful to others.&lt;/p&gt;

&lt;p&gt;I also open sourced the &lt;a href=&quot;https://github.com/finiteloop/ios-cards&quot;&gt;card rendering library&lt;/a&gt; in case it is useful to anyone else working on card games on iOS.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/finiteloop/ios-cards&quot;&gt;Fork/download ios-cards&lt;/a&gt; on GitHub.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/social-cookbook</id>
      <title type="text">Social Cookbook: an open source Open Graph app</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/social-cookbook" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:28:38Z</updated>
      <published>2011-09-27T20:38:19Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;Last week, at Facebook's annual f8 conference in San Francisco, we released &lt;a href=&quot;http://developers.facebook.com/docs/beta/&quot;&gt;Open Graph&lt;/a&gt; along with the new &lt;a href=&quot;http://www.facebook.com/about/timeline&quot;&gt;Facebook Timeline&lt;/a&gt;. &lt;a href=&quot;http://developers.facebook.com/docs/beta/&quot;&gt;Open Graph&lt;/a&gt; is the biggest change to Facebook's Platform since it launched in 2007. If you missed the live stream of the conference, I highly recommend you &lt;a href=&quot;https://www.facebook.com/f8?sk=app_283743208319386&quot;&gt;check out the keynote&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At a recent &lt;a href=&quot;https://www.facebook.com/hackathon&quot;&gt;Facebook Hackathon&lt;/a&gt;, I made a social cookbook app built on Open Graph to test all of our new products. I am open sourcing it in case it is useful for any of you to get started with the product. You can use the app at &lt;a href=&quot;http://socialcookbook.me/&quot;&gt;http://socialcookbook.me/&lt;/a&gt;, but the source code is probably more interesting than the app for most of you. It is available &lt;a href=&quot;https://github.com/finiteloop/socialcookbook&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that in this Developer Beta period, the app probably won't work properly (i.e., it won't add stuff to your Timeline yet). It will start working in its current form as soon as Timeline starts rolling out more broadly and Open Graph is open for business.&lt;/p&gt;

&lt;p&gt;Here's what the app looks like:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;a href=&quot;//d1udwvgzrtavb8.cloudfront.net/d6415cac36a43c864a34316cd10feaf5b16547d2&quot; target=&quot;_blank&quot;&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/0eb4b3c9a4d3f295c56a8b82c9d330237357f247&quot; alt=&quot;Social Cookbook Recipe&quot;/&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;When you add a new recipe or a friend's recipe to your cookbook, that activity also goes to your Facebook Timeline:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/b82e12836a0d8a81dda8f3b874a0122d21792086&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;Once you add a recipe to your cookbook, you can let your friends know you cooked it with the click of a button - and that, of course, is also reflected on your Timeline.&lt;/p&gt;

&lt;p&gt;In the &lt;a href=&quot;https://developers.facebook.com/apps&quot;&gt;Facebook developer app&lt;/a&gt;, I configured those Open Graph actions (I use &quot;clip&quot; for adding a recipe to your cookbook) and the &quot;recipe&quot; object type:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/5d5e44a66e63e2ab97902191623e39a911836dd1&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;
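
&lt;p&gt;Publishing one of these actions is just an HTTP POST against the Graph API. Here is a minimal sketch in Python (standard library only; the &quot;cookbook&quot; namespace, the token, and the recipe URL are illustrative):&lt;/p&gt;

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_clip_request(access_token, recipe_url):
    """Build the POST that publishes a "clip" action on a "recipe"
    object for the logged-in user. Namespace and URLs are illustrative."""
    params = urlencode({
        "recipe": recipe_url,          # the Open Graph object being acted on
        "access_token": access_token,  # the user's token from Facebook auth
    })
    return Request(
        "https://graph.facebook.com/me/cookbook:clip",
        data=params.encode("utf-8"),   # form-encoded POST body
        method="POST",
    )

req = build_clip_request("TOKEN", "http://socialcookbook.me/recipe/42")
```

&lt;p&gt;On success, the Graph API responds with the id of the newly published action.&lt;/p&gt;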

&lt;p&gt;I configured three different aggregations for people's timelines: Cooked Recipes (a reverse chronological list of recipes I've cooked), Most Cooked Recipe (an &quot;item&quot; layout style with the recipe you cooked most often that month or year), and My Recipes (a grid of all the recipes you added that month).&lt;/p&gt;

&lt;p&gt;I also support uploading photos, which are stored in &lt;a href=&quot;http://aws.amazon.com/s3/&quot;&gt;Amazon S3&lt;/a&gt; and served by &lt;a href=&quot;http://aws.amazon.com/cloudfront/&quot;&gt;Amazon CloudFront&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The bulk of the code is in &lt;a href=&quot;https://github.com/finiteloop/socialcookbook/blob/master/cookbook.py&quot;&gt;cookbook.py&lt;/a&gt;. The service is built with &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado&lt;/a&gt; for obvious reasons :)&lt;/p&gt;

&lt;p&gt;Hope this helps you get started with Open Graph.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/web-sockets-tornado</id>
      <title type="text">Web Sockets in Tornado</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/web-sockets-tornado" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:29:17Z</updated>
      <published>2009-12-31T19:28:43Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;I have been playing around with HTML 5 &lt;a href=&quot;http://dev.w3.org/html5/websockets/&quot;&gt;Web Sockets&lt;/a&gt; for a personal project. The Web Sockets API enables web browsers to maintain a bi-directional communication channel to a server, which in turn makes implementing real-time web sites about 1000% easier than it is today.&lt;/p&gt;

&lt;p&gt;Currently, the only reasonable technical facility available to browsers to communicate to web servers is &lt;code&gt;XMLHttpRequest&lt;/code&gt;. Sites that update in real-time like FriendFeed use a number of horrible hacks on top of &lt;code&gt;XMLHttpRequest&lt;/code&gt; like &lt;a href=&quot;http://en.wikipedia.org/wiki/Push_technology#Long_polling&quot;&gt;long-polling&lt;/a&gt; to get data in real-time. (If you are interested, &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado&lt;/a&gt; ships with a chat demo application that uses this long-polling technique - &lt;a href=&quot;http://github.com/facebook/tornado/blob/master/demos/chat/static/chat.js#L87&quot;&gt;here&lt;/a&gt; is the JavaScript in all its hacky glory).&lt;/p&gt;
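
&lt;p&gt;The essence of the long-polling trick is that the server holds each request open until it actually has something to say, and the client immediately re-polls after every response. A toy illustration of the idea in Python (a thread and a queue standing in for held-open HTTP requests; this is not FriendFeed's implementation):&lt;/p&gt;

```python
import queue
import threading

# Pending events for one hanging client; a real server keeps one per waiter.
events = queue.Queue()

def long_poll(timeout=30):
    """Block like a held-open HTTP request; return the next event,
    or None on timeout (the client would simply reconnect)."""
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None

def publish(message):
    """Something happened; wake up the hanging request."""
    events.put(message)

# Simulate an update arriving while a client is waiting.
threading.Timer(0.1, publish, args=["new comment"]).start()
result = long_poll(timeout=5)  # result == "new comment"
```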

&lt;p&gt;Web Sockets support a much simpler interface that enables both the client and the server to send messages to each other asynchronously:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;var ws = new WebSocket(&quot;ws://friendfeed.com/websocket&quot;);
ws.onopen = function() {
    ws.send(&quot;This is a message from the browser to the server&quot;);
};
ws.onmessage = function(event) {
    alert(&quot;The server sent a message: &quot; + event.data);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href=&quot;http://blog.chromium.org/2009/12/web-sockets-now-available-in-google.html&quot;&gt;Google Chrome&lt;/a&gt; just added support for Web Sockets, and most major browsers will deploy Web Socket support in their next major release. Chrome's release inspired me to play around with &lt;a href=&quot;http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-55&quot;&gt;the protocol&lt;/a&gt;, which is extremely compatible with the design and goals of the &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado web server&lt;/a&gt; we released a few months ago.&lt;/p&gt;

&lt;p&gt;I implemented &lt;a href=&quot;http://github.com/finiteloop/tornado/blob/master/tornado/websocket.py&quot;&gt;a Web Socket module for Tornado&lt;/a&gt;. Here is an example handler that echoes back every message the client sends:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        self.receive_message(self.on_message)

    def on_message(self, message):
        self.write_message(u&quot;You said: &quot; + message)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You map &lt;code&gt;WebSocketHandlers&lt;/code&gt; to URLs the same way as all of your other handlers in your application. However, since the Web Socket protocol is message-based and not really related to HTTP, all of the standard Tornado read and write methods are replaced with the two methods &lt;code&gt;write_message()&lt;/code&gt; and &lt;code&gt;receive_message()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can &lt;a href=&quot;http://github.com/finiteloop/tornado/blob/master/tornado/websocket.py&quot;&gt;download the module on Github&lt;/a&gt;. Let me know if you encounter any issues if you start using it.&lt;/p&gt;

&lt;p&gt;With this Tornado module, Web Sockets listen on the same host and port as all of your other request handlers. Once the initial HTTP connection is made, the connection protocol &quot;switches&quot; (or &quot;upgrades&quot; in the language of the Web Socket spec) from HTTP to the Web Socket protocol. I am not sure how most load balancers or proxies would respond to connections like this (most would probably close the connection or puke on the response). I plan on playing around a bit with nginx this weekend, but if you have had any anecdotal experience getting Web Sockets working with production load balancers, I would love to hear about it in the comments.&lt;/p&gt;
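
&lt;p&gt;For the curious, that &quot;upgrade&quot; is just an ordinary HTTP request with a couple of special headers, after which both sides stop speaking HTTP entirely. A rough sketch of the client's opening handshake in Python (header details vary between draft versions of the protocol, so treat this as illustrative):&lt;/p&gt;

```python
def handshake_request(host, path, origin):
    """Build the client side of the Web Socket opening handshake
    (early-draft header set; later drafts add challenge keys)."""
    lines = [
        "GET %s HTTP/1.1" % path,
        "Upgrade: WebSocket",   # ask the server to leave HTTP behind
        "Connection: Upgrade",
        "Host: %s" % host,
        "Origin: %s" % origin,  # lets servers reject foreign pages
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

request = handshake_request("friendfeed.com", "/websocket", "http://friendfeed.com")
```

&lt;p&gt;It is precisely because the handshake starts out as plain HTTP that the module can share a port with regular request handlers - and also why intermediaries that assume a request/response lifecycle get confused.&lt;/p&gt;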
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/oauth-wrap-friendfeed</id>
      <title type="text">OAuth WRAP support in FriendFeed for feedback</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/oauth-wrap-friendfeed" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:29:50Z</updated>
      <published>2009-12-21T19:29:32Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;As David Recordon mentioned on the &lt;a href=&quot;http://developers.facebook.com/news.php&quot;&gt;Facebook Developer Blog&lt;/a&gt;, the Facebook Platform team is working to move &lt;a href=&quot;http://developers.facebook.com/connect.php&quot;&gt;Facebook Connect&lt;/a&gt; over to OAuth over the next year. As a part of that effort, we have been working with Google, Microsoft, Yahoo!, and a number of other engineers on the &lt;a href=&quot;http://wiki.oauth.net/OAuth-WRAP&quot;&gt;OAuth WRAP&lt;/a&gt; standard, which aims to be much simpler to implement than the existing OAuth standard.&lt;/p&gt;

&lt;p&gt;As a part of this effort, I implemented OAuth WRAP support in &lt;a href=&quot;http://friendfeed.com/&quot;&gt;FriendFeed&lt;/a&gt; so we could have a live implementation of the standard to play with. I am really interested in getting feedback about the standard from developers who are currently using OAuth or Facebook Connect or both. If you have never used WRAP before, try it out, and let me know what you think (the comments on this post are a great forum, or feel free to send feedback to the &lt;a href=&quot;http://groups.google.com/group/oauth-wrap-wg&quot;&gt;OAuth WRAP mailing list&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;To get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://friendfeed.com/api/register&quot;&gt;Register your application&lt;/a&gt; on FriendFeed to get an OAuth consumer key and consumer secret&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://oauth-wrap-wg.googlegroups.com/web/WRAP-v0.9.7.2.pdf&quot;&gt;Read the OAuth WRAP specification&lt;/a&gt; and check out the &lt;a href=&quot;http://github.com/finiteloop/friendfeed-wrap-example/blob/master/friendfeedwrap.py&quot;&gt;Google AppEngine example application&lt;/a&gt; that implements WRAP&lt;/li&gt;
&lt;li&gt;Try implementing WRAP! The FriendFeed Authorize URL is &lt;code&gt;https://friendfeed.com/account/wrap/authorize&lt;/code&gt;, and the 
Access Token URL is &lt;code&gt;https://friendfeed.com/account/wrap/access_token&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The FriendFeed WRAP implementation does not support refreshing tokens; the tokens never expire.&lt;/p&gt;

&lt;h2&gt;An overview of OAuth WRAP&lt;/h2&gt;

&lt;p&gt;The main difference between OAuth and OAuth WRAP is that WRAP does not have elaborate token exchanges or signature schemes. Instead, all server-to-server WRAP calls happen via SSL. The &quot;access token,&quot; which grants your client the ability to make API calls on a user's behalf, is protected by SSL rather than by a shared secret and signature scheme.&lt;/p&gt;

&lt;p&gt;The browser-based authorization experience looks exactly the same to an end user. First, you redirect the user to the Authorize URL (&lt;code&gt;https://friendfeed.com/account/wrap/authorize&lt;/code&gt;) with your Consumer Key and callback URL. After the user authorizes, the server redirects the user back to your callback URL with a &lt;em&gt;verification code&lt;/em&gt;. You call the Access Token URL (&lt;code&gt;https://friendfeed.com/account/wrap/access_token&lt;/code&gt;) with that verification code to get back the Access Token.&lt;/p&gt;

&lt;p&gt;After that, you simply need to make all your API calls via HTTPS, and you include the Access Token in the URL or in an HTTP header. There are no signatures, and no additional token exchanges necessary. Your API calls will look like &lt;code&gt;https://friendfeed-api.com/v2/feed/home?wrap_access_token=...&lt;/code&gt;.&lt;/p&gt;
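
&lt;p&gt;Putting the whole flow together, here is a minimal sketch of the client side in Python (standard library only; the helper names are mine, the endpoints are FriendFeed's from above, and the &lt;code&gt;wrap_*&lt;/code&gt; parameter names follow the WRAP draft):&lt;/p&gt;

```python
from urllib.parse import parse_qs, urlencode

AUTHORIZE_URL = "https://friendfeed.com/account/wrap/authorize"
ACCESS_TOKEN_URL = "https://friendfeed.com/account/wrap/access_token"

def authorize_redirect_url(consumer_key, callback):
    """Step 1: send the user's browser here to approve your app."""
    return AUTHORIZE_URL + "?" + urlencode({
        "wrap_client_id": consumer_key,
        "wrap_callback": callback,
    })

def parse_token_response(body):
    """Step 2: after your callback receives a verification code, POST it
    to ACCESS_TOKEN_URL over SSL; the response body is form-encoded
    (e.g. "wrap_access_token=..."), which this parses."""
    return parse_qs(body)["wrap_access_token"][0]

def api_call_url(access_token):
    """Step 3: every API call is plain HTTPS with the token attached."""
    return ("https://friendfeed-api.com/v2/feed/home?" +
            urlencode({"wrap_access_token": access_token}))
```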

&lt;h2&gt;My initial experience with WRAP&lt;/h2&gt;

&lt;p&gt;It was really easy to implement OAuth WRAP support in FriendFeed. I was able to implement WRAP on top of our existing support for OAuth, using the same tokens for both. As a consequence, our existing user interfaces for revoking applications work whether an app is using OAuth or OAuth WRAP. If we hadn't implemented OAuth support, OAuth WRAP would have been much easier to implement on its own because it is stateless; the verification code / access token exchange is so much simpler than the OAuth token exchange protocol.&lt;/p&gt;

&lt;p&gt;On the client side, it was also much easier because of the lack of signatures (HTTPS calls are just as easy as HTTP with most HTTP client libraries). Using HTTPS for all requests seemed a bit weird at first, but in practice I realized I was simply moving signature calculation one level lower in the stack. I am curious how you all feel about it when you try the API out.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/tornado-tech-talk</id>
      <title type="text">My tech talk on Tornado (video and slides)</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/tornado-tech-talk" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:30:21Z</updated>
      <published>2009-09-25T19:30:05Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;I gave a tech talk on &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado&lt;/a&gt; yesterday evening at Facebook's offices. My slides and a video of the talk are below. If you have any questions, feel free to comment below, or join the &lt;a href=&quot;http://groups.google.com/group/python-tornado&quot;&gt;Tornado discussion group&lt;/a&gt; to chat with other Tornado developers.&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;object width=&quot;700&quot; height=&quot;392&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot; /&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot; /&gt;&lt;param name=&quot;movie&quot; value=&quot;//www.facebook.com/v/614004947048&quot; /&gt;&lt;embed src=&quot;https://www.facebook.com/v/614004947048&quot; type=&quot;application/x-shockwave-flash&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot; width=&quot;600&quot; height=&quot;392&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/figure&gt;
&lt;figure&gt;&lt;iframe class=&quot;scribd_iframe_embed&quot; src=&quot;//www.scribd.com/embeds/20174886/content?start_page=1&amp;amp;view_mode=slideshow&amp;amp;access_key=key-s3127ikcg7c3lwwk57s&quot; data-auto-height=&quot;true&quot; data-aspect-ratio=&quot;&quot; scrolling=&quot;no&quot; id=&quot;doc_23407&quot; width=&quot;100%&quot; height=&quot;600&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;&lt;/figure&gt;&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/tornado</id>
      <title type="text">The technology behind Tornado, FriendFeed's web server</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/tornado" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:31:44Z</updated>
      <published>2009-09-10T19:31:19Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;Today, we are open sourcing the non-blocking web server and the tools that
power FriendFeed under the name &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado Web Server&lt;/a&gt;.
We are really excited to open source this project as a part of &lt;a href=&quot;http://developer.facebook.com/opensource.php&quot;&gt;Facebook's
open source initiative&lt;/a&gt;, and
we hope it will be useful to others building real-time web services. Check out
&lt;a href=&quot;http://developers.facebook.com/news.php?blog=1&amp;amp;story=301&quot;&gt;the announcement&lt;/a&gt;
on the Facebook Developer Blog. You can download Tornado at
&lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;tornadoweb.org&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;While there are a number of great Python frameworks available that have been
growing in popularity over the past couple of years (particularly
&lt;a href=&quot;http://www.djangoproject.com/&quot;&gt;Django&lt;/a&gt;), our performance and feature
requirements consistently diverged from these mainstream frameworks. In
particular, as we introduced more real-time features to FriendFeed, we
needed support for a large number of standing connections afforded by
the non-blocking I/O programming style and
&lt;a href=&quot;http://www.kernel.org/doc/man-pages/online/pages/man4/epoll.4.html&quot;&gt;epoll&lt;/a&gt;.&lt;/p&gt;
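
&lt;p&gt;The heart of that style is a single loop that multiplexes many sockets and only touches the ones the kernel reports as ready, so thousands of idle long-polling connections cost almost nothing. A minimal sketch using Python's &lt;code&gt;selectors&lt;/code&gt; module (a portable wrapper over &lt;code&gt;epoll&lt;/code&gt; and its cousins; Tornado's real IOLoop is considerably more involved):&lt;/p&gt;

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/Mac

# A connected pair stands in for a real accepted client connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"hello")  # the kernel now marks server_side readable

received = []
for key, _events in sel.select(timeout=1):
    received.append(key.fileobj.recv(4096))  # only ready sockets are touched

sel.unregister(server_side)
server_side.close()
client_side.close()
```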

&lt;p&gt;We ended up writing our own web server and framework after looking at
existing servers and tools like &lt;a href=&quot;http://twistedmatrix.com/&quot;&gt;Twisted&lt;/a&gt;
because none matched both our performance requirements and our ease-of-use
requirements.&lt;/p&gt;

&lt;p&gt;Tornado looks a bit like &lt;a href=&quot;http://webpy.org/&quot;&gt;web.py&lt;/a&gt; or Google's
&lt;a href=&quot;http://code.google.com/appengine/docs/python/tools/webapp/&quot;&gt;webapp&lt;/a&gt;,
but with additional tools and optimizations that take advantage of the
underlying non-blocking server. Some of the distinctive features of
Tornado:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;All the basic site building blocks&lt;/strong&gt; - Tornado comes with built-in support for a lot of the most difficult and tedious aspects of web development, including &lt;a href=&quot;http://www.tornadoweb.org/documentation#templates&quot;&gt;templates&lt;/a&gt;, &lt;a href=&quot;http://www.tornadoweb.org/documentation#cookies-and-secure-cookies&quot;&gt;signed cookies&lt;/a&gt;, &lt;a href=&quot;http://www.tornadoweb.org/documentation#user-authentication&quot;&gt;user authentication&lt;/a&gt;, &lt;a href=&quot;http://www.tornadoweb.org/documentation#localization&quot;&gt;localization&lt;/a&gt;, &lt;a href=&quot;http://www.tornadoweb.org/documentation#static-files-and-aggressive-file-caching&quot;&gt;aggressive static file caching&lt;/a&gt;, &lt;a href=&quot;http://www.tornadoweb.org/documentation#cross-site-request-forgery-protection&quot;&gt;cross-site request forgery protection&lt;/a&gt;, and &lt;a href=&quot;http://www.tornadoweb.org/documentation#third-party-authentication&quot;&gt;third party authentication&lt;/a&gt; like Facebook Connect. You only need to use the features you want, and it is easy to mix and match Tornado with other frameworks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time services&lt;/strong&gt; - Tornado supports large numbers of concurrent connections. It is easy to write real-time services via &lt;a href=&quot;http://en.wikipedia.org/wiki/Push_technology#Long_polling&quot;&gt;long polling&lt;/a&gt; or HTTP streaming with Tornado. Every active user of FriendFeed maintains an open connection to FriendFeed's servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High performance&lt;/strong&gt; - Tornado is pretty fast relative to most Python web frameworks. We ran some &lt;a href=&quot;http://www.tornadoweb.org/documentation#performance&quot;&gt;simple load tests&lt;/a&gt; against some other popular Python frameworks, and Tornado's baseline throughput was over four times higher than the other frameworks:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/1e9ad02c0860feb8f0caa70b54cb74304961a618&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;h2&gt;Basic usage&lt;/h2&gt;

&lt;p&gt;The main Tornado module is &lt;code&gt;tornado.web&lt;/code&gt;, which implements a lightweight
web development framework. &lt;code&gt;tornado.web&lt;/code&gt; is built on our non-blocking
HTTP server and low-level I/O modules. Here is &quot;Hello, world&quot; in Tornado:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write(&quot;Hello, world&quot;)

application = tornado.web.Application([
    (r&quot;/&quot;, MainHandler),
])

if __name__ == &quot;__main__&quot;:
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A Tornado web application maps URLs or URL patterns to subclasses of
&lt;code&gt;tornado.web.RequestHandler&lt;/code&gt;. Those classes define &lt;code&gt;get()&lt;/code&gt; or &lt;code&gt;post()&lt;/code&gt;
methods to handle HTTP &lt;code&gt;GET&lt;/code&gt; or &lt;code&gt;POST&lt;/code&gt; requests to that URL. The example
above maps the root URL &lt;code&gt;'/'&lt;/code&gt; to the &lt;code&gt;MainHandler&lt;/code&gt; class, which prints
the &quot;Hello, world&quot; message.&lt;/p&gt;

&lt;p&gt;All of the additional features of Tornado mentioned above (like &lt;a href=&quot;http://www.tornadoweb.org/documentation#localization&quot;&gt;localization&lt;/a&gt; and &lt;a href=&quot;http://www.tornadoweb.org/documentation#cookies-and-secure-cookies&quot;&gt;signed cookies&lt;/a&gt;) are designed to be used on an à la carte basis. For example, to use signed cookies in your application, you just need to specify the secret cookie signing key when you create your application:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;application = tornado.web.Application([
    (r&quot;/&quot;, MainHandler),
], cookie_secret=&quot;61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=&quot;)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and then you can call &lt;code&gt;set_secure_cookie()&lt;/code&gt; and &lt;code&gt;get_secure_cookie()&lt;/code&gt; in
your request handlers:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class LoginHandler(tornado.web.RequestHandler):
    def post(self):
        # Process login username and password
        self.set_secure_cookie(&quot;user_id&quot;, user[&quot;id&quot;])
        self.redirect(&quot;/home&quot;)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can find detailed documentation for all of these features at &lt;a href=&quot;http://www.tornadoweb.org/documentation&quot;&gt;tornadoweb.org/documentation&lt;/a&gt;. A few of my
favorite features are discussed in greater detail below.&lt;/p&gt;

&lt;h2&gt;Asynchronous requests&lt;/h2&gt;

&lt;p&gt;Tornado assumes requests are synchronous by default to make writing
simple request handlers easy. When a request handler method returns,
Tornado finishes and closes the request automatically.&lt;/p&gt;

&lt;p&gt;You can override that default behavior to implement streaming or hanging
connections, which are common for real-time services like FriendFeed.
If you want a request to remain open after the main request handler
method returns, use the &lt;code&gt;tornado.web.asynchronous&lt;/code&gt; decorator:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self.write(&quot;Hello, world&quot;)
        self.finish()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When you use this decorator, it is your responsibility to call
&lt;code&gt;self.finish()&lt;/code&gt; to finish the HTTP request, or the user's browser
will simply hang.&lt;/p&gt;

&lt;p&gt;Here is a real example that makes a call to the FriendFeed API using
Tornado's built-in asynchronous HTTP client:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        http = tornado.httpclient.AsyncHTTPClient()
        http.fetch(&quot;http://friendfeed-api.com/v2/feed/bret&quot;,
                   callback=self.async_callback(self.on_response))

    def on_response(self, response):
        if response.error: raise tornado.web.HTTPError(500)
        json = tornado.escape.json_decode(response.body)
        self.write(&quot;Fetched &quot; + str(len(json[&quot;entries&quot;])) + &quot; entries &quot;
                   &quot;from the FriendFeed API&quot;)
        self.finish()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When &lt;code&gt;get()&lt;/code&gt; returns, the request has not finished. When the HTTP client
eventually calls &lt;code&gt;on_response()&lt;/code&gt;, the request is still open, and the response
is finally flushed to the client with the call to &lt;code&gt;self.finish()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For a more advanced asynchronous example, take a look at the &lt;code&gt;chat&lt;/code&gt; demo
application included with &lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado&lt;/a&gt;. The chat demo
uses AJAX and &lt;a href=&quot;http://en.wikipedia.org/wiki/Push_technology#Long_polling&quot;&gt;long polling&lt;/a&gt; to implement a remedial real-time chat room on Tornado. You can also &lt;a href=&quot;http://chan.friendfeed.com:8888/&quot;&gt;see the chat demo in action&lt;/a&gt; on FriendFeed's servers.&lt;/p&gt;

&lt;h2&gt;Third-party authentication&lt;/h2&gt;

&lt;p&gt;Tornado comes with built-in support for authenticating with Facebook Connect,
Twitter, Google, and FriendFeed in addition to OAuth and OpenID. To log
a user in via &lt;a href=&quot;http://developers.facebook.com/connect.php&quot;&gt;Facebook Connect&lt;/a&gt;, you just need to implement a request handler
like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class LoginHandler(tornado.web.RequestHandler, tornado.auth.FacebookMixin):
    @tornado.web.asynchronous
    def get(self):
        if self.get_argument(&quot;session&quot;, None):
            self.get_authenticated_user(self.async_callback(self._on_auth))
            return
        self.authenticate_redirect()

    def _on_auth(self, user):
        if not user: raise tornado.web.HTTPError(500, &quot;Auth failed&quot;)
        self.set_secure_cookie(&quot;uid&quot;, user[&quot;uid&quot;])
        self.set_secure_cookie(&quot;session_key&quot;, user[&quot;session_key&quot;])
        self.redirect(&quot;/home&quot;)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;All of the authentication methods support a relatively uniform interface
so you don't need to understand all of the intricacies of the different
authentication/authorization protocols to leverage them on your site.&lt;/p&gt;

&lt;p&gt;See the &lt;code&gt;auth&lt;/code&gt; and &lt;code&gt;facebook&lt;/code&gt; demo applications included with
&lt;a href=&quot;http://www.tornadoweb.org/&quot;&gt;Tornado&lt;/a&gt; for detailed examples of third
party authentication.&lt;/p&gt;

&lt;h2&gt;And more...&lt;/h2&gt;

&lt;p&gt;Check out the &lt;a href=&quot;http://www.tornadoweb.org/documentation&quot;&gt;Tornado documentation&lt;/a&gt;
for a complete list of features and modules.&lt;/p&gt;

&lt;p&gt;You can discuss the project, send feedback, and report bugs in our &lt;a href=&quot;http://groups.google.com/group/python-tornado&quot;&gt;mailing
list on Google Groups&lt;/a&gt;.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/friendfeed-schemaless-mysql</id>
      <title type="text">How FriendFeed uses MySQL to store schema-less data</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/friendfeed-schemaless-mysql" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:32:30Z</updated>
      <published>2009-02-27T19:32:02Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;We use MySQL for storing all of the data in &lt;a href=&quot;http://friendfeed.com/&quot;&gt;FriendFeed&lt;/a&gt;. Our database has grown a lot as our user base has grown. We now store over 250 million entries and a bunch of other data, from comments and &quot;likes&quot; to friend lists.&lt;/p&gt;

&lt;p&gt;As our database has grown, we have tried to iteratively deal with the scaling issues that come with rapid growth. We did the typical things, like using read slaves and memcache to increase read throughput and sharding our database to improve write throughput. However, as we grew, scaling our existing features to accommodate more traffic turned out to be much less of an issue than adding &lt;em&gt;new&lt;/em&gt; features.&lt;/p&gt;

&lt;p&gt;In particular, making schema changes or adding indexes to a database with more than 10-20 million rows completely locks the database for hours at a time. Removing old indexes takes just as much time, and not removing them hurts performance because the database will continue to read and write to those unused blocks on every &lt;code&gt;INSERT&lt;/code&gt;, pushing important blocks out of memory. There are complex operational procedures you can do to circumvent these problems (like setting up the new index on a slave, and then swapping the slave and the master), but those procedures are so error prone and heavyweight that they implicitly discouraged us from adding features that would require schema/index changes. Since our databases are all heavily sharded, the relational features of MySQL like &lt;code&gt;JOIN&lt;/code&gt; have never been useful to us, so we decided to look outside of the realm of RDBMS.&lt;/p&gt;

&lt;p&gt;Lots of projects exist that are designed to tackle the problem of storing data with flexible schemas and building new indexes on the fly (e.g., &lt;a href=&quot;http://couchdb.apache.org/&quot;&gt;CouchDB&lt;/a&gt;). However, none of them seemed widely used enough by large sites to inspire confidence. In the tests we read about and ran ourselves, none of the projects were stable or battle-tested enough for our needs (see &lt;a href=&quot;http://userprimary.net/user/2007/12/16/a-quick-look-at-couchdb-performance/&quot;&gt;this somewhat outdated article on CouchDB&lt;/a&gt;, for example). MySQL works. It doesn't corrupt data. Replication works. We understand its limitations already. We like MySQL for storage, just not RDBMS usage patterns.&lt;/p&gt;

&lt;p&gt;After some deliberation, we decided to implement a &quot;schema-less&quot; storage system on top of MySQL rather than use a completely new storage system. This post attempts to describe the high-level details of the system. We are curious how other large sites have tackled these problems, and we thought some of the design work we have done might be useful to other developers.&lt;/p&gt;

&lt;h2&gt;Overview&lt;/h2&gt;

&lt;p&gt;Our datastore stores schema-less bags of properties (e.g., JSON objects or Python dictionaries). The only required property of stored entities is &lt;code&gt;id&lt;/code&gt;, a 16-byte UUID. The rest of the entity is opaque as far as the datastore is concerned. We can change the &quot;schema&quot; simply by storing new properties.&lt;/p&gt;

&lt;p&gt;We index data in these entities by storing indexes in separate MySQL tables. If we want to index three properties in each entity, we will have three MySQL tables - one for each index. If we want to stop using an index, we stop writing to that table from our code and, optionally, drop the table from MySQL. If we want a new index, we make a new MySQL table for that index and run a process to asynchronously populate the index without disrupting our live service.&lt;/p&gt;

&lt;p&gt;As a result, we end up having more tables than we had before, but adding and removing indexes is easy. We have heavily optimized the process that populates new indexes (which we call &quot;The Cleaner&quot;) so that it fills new indexes rapidly without disrupting the site. We can store new properties and index them in a day's time rather than a week's time, and we don't need to swap MySQL masters and slaves or do any other scary operational work to make it happen.&lt;/p&gt;

&lt;h2&gt;Details&lt;/h2&gt;

&lt;p&gt;In MySQL, our entities are stored in a table that looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE entities (
    added_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    id BINARY(16) NOT NULL,
    updated TIMESTAMP NOT NULL,
    body MEDIUMBLOB,
    UNIQUE KEY (id),
    KEY (updated)
) ENGINE=InnoDB;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;added_id&lt;/code&gt; column is present because InnoDB stores data rows physically in primary key order. The &lt;code&gt;AUTO_INCREMENT&lt;/code&gt; primary key ensures new entities are written sequentially on disk after old entities, which helps for both read and write locality (new entities tend to be read more frequently than old entities since FriendFeed pages are ordered reverse-chronologically). Entity bodies are stored as zlib-compressed, &lt;a href=&quot;http://docs.python.org/library/pickle.html&quot;&gt;pickled&lt;/a&gt; Python dictionaries.&lt;/p&gt;
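&lt;p&gt;Producing a value for the &lt;code&gt;body&lt;/code&gt; column takes only the standard library. A sketch of the round trip (I am assuming the &lt;code&gt;id&lt;/code&gt; is kept out of the body since it has its own &lt;code&gt;BINARY(16)&lt;/code&gt; column; the helper names are hypothetical):&lt;/p&gt;

```python
import pickle
import uuid
import zlib

def serialize_body(entity):
    # Everything except the id goes into the zlib-compressed,
    # pickled blob stored in the body column.
    body = {k: v for k, v in entity.items() if k != "id"}
    return zlib.compress(pickle.dumps(body))

def deserialize_entity(id_bytes, blob):
    # Reverse the process and reattach the id from its own column.
    entity = pickle.loads(zlib.decompress(blob))
    entity["id"] = id_bytes
    return entity

entity = {"id": uuid.uuid4().bytes, "title": u"Hello", "published": 1235697046}
blob = serialize_body(entity)
assert deserialize_entity(entity["id"], blob) == entity
```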

&lt;p&gt;Indexes are stored in separate tables. To create a new index, we create a new table storing the attributes we want to index on all of our database shards. For example, a typical entity in FriendFeed might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
    &quot;id&quot;: &quot;71f0c4d2291844cca2df6f486e96e37c&quot;,
    &quot;user_id&quot;: &quot;f48b0440ca0c4f66991c4d5f6a078eaf&quot;,
    &quot;feed_id&quot;: &quot;f48b0440ca0c4f66991c4d5f6a078eaf&quot;,
    &quot;title&quot;: &quot;We just launched a new backend system for FriendFeed!&quot;,
    &quot;link&quot;: &quot;http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c&quot;,
    &quot;published&quot;: 1235697046,
    &quot;updated&quot;: 1235697046,
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We want to index the &lt;code&gt;user_id&lt;/code&gt; attribute of these entities so we can render a page of all the entities a given user has posted. Our index table looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE index_user_id (
    user_id BINARY(16) NOT NULL,
    entity_id BINARY(16) NOT NULL UNIQUE,
    PRIMARY KEY (user_id, entity_id)
) ENGINE=InnoDB;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Our datastore automatically maintains indexes on your behalf, so to start an instance of our datastore that stores entities like the structure above with the given indexes, you would write (in Python):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;user_id_index = friendfeed.datastore.Index(
    table=&quot;index_user_id&quot;, properties=[&quot;user_id&quot;], shard_on=&quot;user_id&quot;)
datastore = friendfeed.datastore.DataStore(
    mysql_shards=[&quot;127.0.0.1:3306&quot;, &quot;127.0.0.1:3307&quot;],
    indexes=[user_id_index])

new_entity = {
    &quot;id&quot;: binascii.a2b_hex(&quot;71f0c4d2291844cca2df6f486e96e37c&quot;),
    &quot;user_id&quot;: binascii.a2b_hex(&quot;f48b0440ca0c4f66991c4d5f6a078eaf&quot;),
    &quot;feed_id&quot;: binascii.a2b_hex(&quot;f48b0440ca0c4f66991c4d5f6a078eaf&quot;),
    &quot;title&quot;: u&quot;We just launched a new backend system for FriendFeed!&quot;,
    &quot;link&quot;: u&quot;http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c&quot;,
    &quot;published&quot;: 1235697046,
    &quot;updated&quot;: 1235697046,
}
datastore.put(new_entity)
entity = datastore.get(binascii.a2b_hex(&quot;71f0c4d2291844cca2df6f486e96e37c&quot;))
entity = user_id_index.get_all(datastore, user_id=binascii.a2b_hex(&quot;f48b0440ca0c4f66991c4d5f6a078eaf&quot;))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;Index&lt;/code&gt; class above looks for the &lt;code&gt;user_id&lt;/code&gt; property in all entities and automatically maintains the index in the &lt;code&gt;index_user_id&lt;/code&gt; table. Since our database is sharded, the &lt;code&gt;shard_on&lt;/code&gt; argument is used to determine which shard the index gets stored on (in this case, &lt;code&gt;entity[&quot;user_id&quot;] % num_shards&lt;/code&gt;).&lt;/p&gt;
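&lt;p&gt;Since &lt;code&gt;user_id&lt;/code&gt; is a 16-byte value, one plausible reading of &lt;code&gt;entity[&quot;user_id&quot;] % num_shards&lt;/code&gt; is to interpret the bytes as an integer before taking the modulus. A hypothetical sketch (the real mapping is internal to FriendFeed's datastore code):&lt;/p&gt;

```python
import binascii

NUM_SHARDS = 2  # e.g., 127.0.0.1:3306 and 127.0.0.1:3307

def shard_for(value_bytes, num_shards=NUM_SHARDS):
    # Interpret the 16-byte UUID as a big-endian integer and take it
    # modulo the number of shards; every property value then maps
    # deterministically to one shard.
    return int.from_bytes(value_bytes, "big") % num_shards

user_id = binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf")
assert shard_for(user_id) in range(NUM_SHARDS)
```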

&lt;p&gt;You can query an index using the index instance (see &lt;code&gt;user_id_index.get_all&lt;/code&gt; above). The datastore code does the &quot;join&quot; between the &lt;code&gt;index_user_id&lt;/code&gt; table and the &lt;code&gt;entities&lt;/code&gt; table in Python, by first querying the &lt;code&gt;index_user_id&lt;/code&gt; tables on all database shards to get a list of entity IDs and then fetching those entity IDs from the &lt;code&gt;entities&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;To add a new index, e.g., on the &lt;code&gt;link&lt;/code&gt; property, we would create a new table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE index_link (
    link VARCHAR(735) NOT NULL,
    entity_id BINARY(16) NOT NULL UNIQUE,
    PRIMARY KEY (link, entity_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We would change our datastore initialization code to include this new index:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;user_id_index = friendfeed.datastore.Index(
    table=&quot;index_user_id&quot;, properties=[&quot;user_id&quot;], shard_on=&quot;user_id&quot;)
link_index = friendfeed.datastore.Index(
    table=&quot;index_link&quot;, properties=[&quot;link&quot;], shard_on=&quot;link&quot;)
datastore = friendfeed.datastore.DataStore(
    mysql_shards=[&quot;127.0.0.1:3306&quot;, &quot;127.0.0.1:3307&quot;],
    indexes=[user_id_index, link_index])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And we could populate the index asynchronously (even while serving live traffic) with:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;./rundatastorecleaner.py --index=index_link
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Consistency and Atomicity&lt;/h2&gt;

&lt;p&gt;Since our database is sharded, and indexes for an entity can be stored on different shards than the entities themselves, consistency is an issue. What if the process crashes before it has written to all the index tables?&lt;/p&gt;

&lt;p&gt;Building a transaction protocol was appealing to the most ambitious of FriendFeed engineers, but we wanted to keep the system as simple as possible. We decided to loosen constraints such that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The property bag stored in the main &lt;code&gt;entities&lt;/code&gt; table is canonical&lt;/li&gt;
&lt;li&gt;Indexes may not reflect the actual entity values&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consequently, we write a new entity to the database with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write the entity to the &lt;code&gt;entities&lt;/code&gt; table, using the ACID properties of InnoDB&lt;/li&gt;
&lt;li&gt;Write the indexes to all of the index tables on all of the shards&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When we read from the index tables, we know they may not be accurate (i.e., they may reflect old property values if writing has not finished step 2). To ensure we don't return invalid entities based on the constraints above, we use the index tables to determine which entities to read, but we re-apply the query filters on the entities themselves rather than trusting the integrity of the indexes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the &lt;code&gt;entity_id&lt;/code&gt; from all of the index tables based on the query&lt;/li&gt;
&lt;li&gt;Read the entities from the &lt;code&gt;entities&lt;/code&gt; table from the given entity IDs&lt;/li&gt;
&lt;li&gt;Filter (in Python) all of the entities that do not match the query conditions based on the actual property values&lt;/li&gt;
&lt;/ol&gt;
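&lt;p&gt;The three steps above can be sketched with in-memory dictionaries standing in for the index and entity tables (a hypothetical illustration; the real code queries MySQL across shards):&lt;/p&gt;

```python
# In-memory stand-ins for the MySQL tables. The index row for e2 is
# stale: it still points at an old user_id value.
entities = {
    b"e1": {"id": b"e1", "user_id": b"u1", "title": "first"},
    b"e2": {"id": b"e2", "user_id": b"u2", "title": "second"},
}
index_user_id = {b"u1": [b"e1", b"e2"], b"u2": []}

def get_all_by_user(user_id):
    # 1. Read entity IDs from the index table based on the query.
    candidate_ids = index_user_id.get(user_id, [])
    # 2. Read the entities themselves from the entities table.
    candidates = [entities[eid] for eid in candidate_ids if eid in entities]
    # 3. Re-apply the filter against the canonical property values,
    #    since the index may lag behind the entities table.
    return [e for e in candidates if e["user_id"] == user_id]

result = get_all_by_user(b"u1")
assert [e["id"] for e in result] == [b"e1"]  # stale e2 row filtered out
```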

&lt;p&gt;To ensure that indexes are not missing perpetually and inconsistencies are eventually fixed, the &quot;Cleaner&quot; process I mentioned above runs continuously over the entities table, writing missing indexes and cleaning up old and invalid indexes. It cleans recently updated entities first, so inconsistencies in the indexes get fixed fairly quickly (within a couple of seconds) in practice.&lt;/p&gt;
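&lt;p&gt;The Cleaner's core loop can be sketched the same way: scan the entities, drop index rows whose values no longer match, and add any missing rows (again with in-memory stand-ins; the real process works against the sharded MySQL tables):&lt;/p&gt;

```python
def clean_index(entities, index, prop):
    # Drop index rows whose entity no longer has that property value.
    for value in list(index):
        index[value] = [eid for eid in index[value]
                        if entities.get(eid, {}).get(prop) == value]
        if not index[value]:
            del index[value]
    # Add index rows that are missing for current property values.
    for eid, entity in entities.items():
        index.setdefault(entity[prop], [])
        if eid not in index[entity[prop]]:
            index[entity[prop]].append(eid)

entities = {b"e1": {"user_id": b"u1"}, b"e2": {"user_id": b"u2"}}
index = {b"u1": [b"e1", b"e2"]}  # e2's row is stale and its real row is missing
clean_index(entities, index, "user_id")
assert index == {b"u1": [b"e1"], b"u2": [b"e2"]}
```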

&lt;h2&gt;Performance&lt;/h2&gt;

&lt;p&gt;We have optimized our primary indexes quite a bit in this new system, and we are quite pleased with the results. Here is a graph of FriendFeed page view latency for the past month (we launched the new backend a couple of days ago, as you can tell by the dramatic drop):&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/f066c739eb6ff1a5d4f3d275ac564ce70efccda5&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;In particular, the latency of our system is now remarkably stable, even during peak mid-day hours. Here is a graph of FriendFeed page view latency for the past 24 hours:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/72a319e1cd1c16520e26fa428bed7039ecb67f6d&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;Compare this to one week ago:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/aaf78c3d130196bf0f8863fadd7b7bf41aa04bd3&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;The system has been really easy to work with so far. We have already changed the indexes a couple of times since we deployed the system, and we have started converting some of our biggest MySQL tables to use this new scheme so we can change their structure more liberally going forward.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/friendfeed-presentation</id>
      <title type="text">My FriendFeed presentation from MIT/Stanford Venture Lab</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/friendfeed-presentation" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:33:22Z</updated>
      <published>2008-09-21T19:33:02Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;Last week, I gave a short presentation about &lt;a href=&quot;http://friendfeed.com/&quot;&gt;FriendFeed&lt;/a&gt; before a panel at the MIT/Stanford Venture Lab. The panel, entitled &quot;Lifestreaming: The Real-time Web,&quot; included me, Loic Le Meur (&lt;a href=&quot;http://www.seesmic.com/&quot;&gt;Seesmic&lt;/a&gt;), Leah Culver (&lt;a href=&quot;http://www.pownce.com/&quot;&gt;Pownce&lt;/a&gt;), and Jeff Clavier (&lt;a href=&quot;http://www.softtechvc.com/&quot;&gt;SoftTechVC&lt;/a&gt;), and it was moderated by &lt;a href=&quot;http://allthingsd.com/about/kara-swisher/&quot;&gt;Kara Swisher&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After &lt;a href=&quot;http://www.louisgray.com/live/2008/09/bret-taylor-discusses-friendfeeds-road.html&quot;&gt;Louis Gray's writeup of my presentation&lt;/a&gt; and &lt;a href=&quot;http://kara.allthingsd.com/20080917/debating-the-real-time-web-at-stanford-university/&quot;&gt;Kara Swisher's video&lt;/a&gt; made the rounds on FriendFeed, I have gotten a number of requests for my slides. I uploaded &lt;a href=&quot;http://www.scribd.com/doc/6144012/Friend-Feed-Presentation&quot;&gt;my presentation&lt;/a&gt; to Scribd, and it is embedded below.&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;iframe class=&quot;scribd_iframe_embed&quot; src=&quot;//www.scribd.com/embeds/6144012/content?start_page=1&amp;amp;view_mode=slideshow&amp;amp;access_key=key-7a3b73ph9muit9k4hf4&quot; data-auto-height=&quot;true&quot; data-aspect-ratio=&quot;&quot; scrolling=&quot;no&quot; id=&quot;doc_73696&quot; width=&quot;100%&quot; height=&quot;600&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;The slides that got the most attention were about FriendFeed's traffic. FriendFeed has stored over 100 million entries shared by FriendFeed users. Getting to that point was a bit rocky. This slide shows FriendFeed's traffic growth in its first four months:&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/9054149efc1e19abce6628b66497609838f1fc4b&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;FriendFeed launched in private beta a little less than a year ago with a &lt;a href=&quot;http://www.nytimes.com/2007/10/01/technology/01feed.html&quot;&gt;great article in the New York Times&lt;/a&gt;, but, like most start-ups, that initial attention dropped quickly. By the end of our fourth month, our traffic levels had just crept up to the levels we had at launch. Needless to say, it was a tough and emotional time for all of us at FriendFeed as we grappled with the product, marketing, and PR issues that may have been contributing to our stagnant growth.&lt;/p&gt;

&lt;p&gt;That growth curve quickly changed over the rest of the year. Here is FriendFeed's traffic until August or so (I highlighted the time period from the previous graph):&lt;/p&gt;

&lt;p&gt;&lt;figure&gt;&lt;img src=&quot;//d1udwvgzrtavb8.cloudfront.net/73639746614ff3572fa405b839a2da931caf3f44&quot;/&gt;&lt;/figure&gt;&lt;/p&gt;

&lt;p&gt;We have grown so much since then that those first four months barely register on the y-axis. So what happened around March? Diagnosing success and failure is a favorite pastime of many in Silicon Valley, and I expect this blog post will inspire an analysis or two. I included a few of our theories in the slides. Among the factors that have contributed to our growth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Public launch&lt;/strong&gt; - We launched out of private beta in late February, opening sign-ups and getting a brief surge of press.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Timing&lt;/strong&gt; - Life-streaming, open forms of social networking, and the &quot;real-time web&quot; became popular trends around that time period with the growing popularity of Twitter, FriendFeed, and others. FriendFeed certainly benefited (both in terms of PR and user adoption) from the increased attention on the space.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developer API&lt;/strong&gt; - We launched the &lt;a href=&quot;http://friendfeed.com/api/&quot;&gt;FriendFeed API&lt;/a&gt; in that time period, which inspired a wide range of desktop and mobile applications. Some, like &lt;a href=&quot;http://www.twhirl.org/&quot;&gt;Twhirl&lt;/a&gt;, had established user bases, and the addition of FriendFeed support made using FriendFeed seamless for a large number of new users.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Speed and reliability&lt;/strong&gt; - We have focused quite a bit on making FriendFeed fast, responsive, and reliable. Many of our users started using the site because it is faster and more reliable than sites that serve similar functions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The press uses FriendFeed&lt;/strong&gt; - Since FriendFeed is about content discovery, it has been fairly widely adopted by content producers, from bloggers to professional journalists. We have tried to make the product work well for this unique (but important) subset of our users. As a consequence, I think FriendFeed has gotten more press than many other products of its size.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FriendFeed Rooms&lt;/strong&gt; - We launched &lt;a href=&quot;http://friendfeed.com/rooms/&quot;&gt;FriendFeed Rooms&lt;/a&gt; to make it easier for groups like classes, conference attendees, and companies to adopt FriendFeed. Since then, it has been adopted for an incredibly wide range of uses, from &lt;a href=&quot;http://friendfeed.com/rooms/the-life-scientists&quot;&gt;Life Science discussions&lt;/a&gt; to &lt;a href=&quot;http://friendfeed.com/rooms/venturebeat-wwdc-livestream&quot;&gt;Live-blogging the Apple WWDC&lt;/a&gt;, and many of our users have joined to participate in a particular Room.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whatever the specific reasons, this growth timeline is not atypical for new web services. Check out this &lt;a href=&quot;http://david.weebly.com/1/post/2008/02/the-importance-of-launching-early-and-staying-alive.html&quot;&gt;interesting post about Weebly's growth&lt;/a&gt; from David Rusenko, for example. As David put it, &quot;If you give up within a month or two, your product definitely won't be successful.&quot;&lt;/p&gt;

&lt;p&gt;I also think &lt;a href=&quot;http://en.wikipedia.org/wiki/Crossing_the_Chasm&quot;&gt;growth comes in stages&lt;/a&gt;, and our company has significant challenges ahead of us as our product and user base evolves to become more mainstream. I know there are other great blog posts about this topic from entrepreneurs &amp;mdash; if you know of any good articles, I'd love to see links in the comments.&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/wikipedia-for-data</id>
      <title type="text">We need a Wikipedia for data</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/wikipedia-for-data" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:34:03Z</updated>
      <published>2008-04-09T19:33:43Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;&lt;em&gt;I just started blogging. I am not sure what I want to write about, but I think one theme will be &quot;things I want but want someone else to build.&quot; This article describes one of those things.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At Google, I worked on a number of projects that required data from third party data sources. We licensed mapping data for 100s of countries for &lt;a href=&quot;http://maps.google.com/&quot;&gt;Google Maps&lt;/a&gt;, movie showtimes data for &lt;a href=&quot;http://www.google.com/movies&quot;&gt;Google Movies&lt;/a&gt;, and stock data for &lt;a href=&quot;http://finance.google.com/&quot;&gt;Google Finance&lt;/a&gt;, among many others.&lt;/p&gt;

&lt;p&gt;After leaving Google and the company of the Google BizDev team, I have come to realize how hard it is for an everyday programmer to get access to even the most basic factual data. If you want to experiment with a new driving directions algorithm, getting the data is infinitely more difficult than coming up with the algorithm; you have to hire a lawyer and sign a contract with a company that collects that data in the country you are developing for. If you want to write an open source TiVo competitor, you need television listings data for every cable provider in the country, but your options are tenuous at best. In July, &lt;a href=&quot;http://www.fierceiptv.com/story/tv-listings-data-provider-zap2it-get-bulletproofed/2007-07-17&quot;&gt;the most popular &quot;free&quot; listings service&lt;/a&gt; shut down their site, breaking most &lt;a href=&quot;http://www.mythtv.org/&quot;&gt;MythTV&lt;/a&gt; installations. The &lt;a href=&quot;http://en.wikipedia.org/wiki/CDDB&quot;&gt;CD database&lt;/a&gt; (which is used to recognize CD track names when you rip CDs on your computer) has gone through a number of controversial transitions and license changes for similar reasons.&lt;/p&gt;

&lt;p&gt;Even when data is available under a reasonable license, it often suffers from extremely serious quality or discoverability problems. The US Census Bureau &lt;a href=&quot;http://www.census.gov/geo/www/tiger/index.html&quot;&gt;publishes map data&lt;/a&gt;, but it only includes a small subset of the attributes required for a real mapping product. The &lt;a href=&quot;http://trec.nist.gov/data/reuters/reuters.html&quot;&gt;Reuters corpus&lt;/a&gt;, which is a standard body of text used in data mining and information retrieval research, requires you to sign two agreements, send them to some organization via snail mail, and get the corpus via snail mail on CDs (what century is this, folks?).&lt;/p&gt;

&lt;p&gt;I think all of these barriers to data are holding back innovation at a scale that few people realize.  The most important part of an environment that encourages innovation is low barriers to entry. The moment a contract and lawyers are involved, you inherently restrict the set of people who can work on a problem to well-funded companies with a profitable product. Likewise, companies that sell data have to protect their investments, so permitted uses for the data are almost always explicitly enumerated in contracts.  The entire system is designed to restrict the data to be used in product categories that already exist.&lt;/p&gt;

&lt;p&gt;Imagine what amazing applications would be created if every programmer in the world had free access to all of these data sets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Map data for all countries in a relatively uniform data format&lt;/li&gt;
&lt;li&gt;White pages data (names and addresses) for all cities of the world&lt;/li&gt;
&lt;li&gt;Stock data for all major exchanges for all time&lt;/li&gt;
&lt;li&gt;Movie showtimes data for all cities in the world&lt;/li&gt;
&lt;li&gt;Television schedule data for all cities in the world&lt;/li&gt;
&lt;li&gt;Sports scores and stats for all sports in the world for all time&lt;/li&gt;
&lt;li&gt;Rich meta data for all musical albums and movies from all labels for all time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interesting thing is, almost every internet company would benefit if this data were freely available. Most internet companies have embraced &lt;a href=&quot;http://en.wikipedia.org/wiki/Linux&quot;&gt;open source operating systems&lt;/a&gt; because every company needs an operating system, and no company wants their OS to be a competitive advantage - they just want it to work. I would argue we are all in the same boat with these factual data sources. No one really wants factual data accuracy and completeness to be their competitive advantage; we all want the best data possible to build the best products possible, and discrepancies in data quality are artifacts of the extremely inefficient economy of buying and selling data we currently live in. If everyone had the same, high quality data, all of our products would be better for it.&lt;/p&gt;

&lt;p&gt;To this end, I think we should create a Wikipedia for data: a global database for all of these important data sources to which we all contribute and that anyone can use. When a user reports an inaccurate phone number in your products, save it back to the DataWiki so everyone can benefit, and in return, you get everyone else's improvements as well. If your local movie theater doesn't have listings data in DataWiki, you can type it in yourself, and everyone in your town can benefit, and all the products you use that access movie listings will automatically update. Need better mapping data for a city? Pay to collect it, and upload it to the DataWiki. In return you get all the other cities other companies paid for (sort of like a company contributing device drivers to the Linux kernel).&lt;/p&gt;

&lt;p&gt;DataWiki seems like an extremely hard problem, and I don't think it would work unless some big companies got on board and donated their data sets to bootstrap the process. However, I think all companies would benefit almost immediately from the quality improvements that would come from openness. Some data sets are more expensive to collect than others, and those certainly seem like the hardest data sets to make freely available.&lt;/p&gt;

&lt;p&gt;I have some concrete ideas on how this could work for some data sets, but I will save them for future posts. In the meantime, what are some of the most interesting existing projects attempting to open up these data sources? I only know of a few, and none of them has really taken off.&lt;/p&gt;

&lt;p&gt;&lt;i&gt;Update: Check out this &lt;a href=&quot;http://www.readwriteweb.com/archives/where_to_find_open_data_on_the.php&quot;&gt;great summary of the sites people have mentioned in the comments&lt;/a&gt; on ReadWriteWeb.&lt;/i&gt;&lt;/p&gt;
</content>
    </entry>
  
    <entry>
      <id>http://backchannel.org/blog/google-app-engine</id>
      <title type="text">Experimenting with Google App Engine</title>
      <author>
        <name>Bret Taylor</name>
        <uri>http://www.facebook.com/btaylor</uri>
      </author>
      <link href="http://backchannel.org/blog/google-app-engine" rel="alternate" type="text/html"/>
      <updated>2012-07-27T19:34:36Z</updated>
      <published>2008-04-08T19:34:07Z</published>
      <content type="html" xml:base="http://backchannel.org/">&lt;p&gt;&lt;a href=&quot;http://appengine.google.com/&quot;&gt;Google App Engine&lt;/a&gt; was actually the last project I worked on before I left Google. I was the PM of the project when it started, but has grown quite a bit since I left Google last June, and now it is has many more engineers and a handful of extremely talented PMs. I was fortunate enough to be able to see Kevin Gibbs and crew at Campfire One yesterday, and I could barely sit through the whole talk I was so excited to play around with the system.&lt;/p&gt;

&lt;p&gt;I have been &quot;meaning to&quot; start a blog for months. Blog software is extremely simple to implement, so I figured it would be a great app to test out on the new App Engine infrastructure. This blog runs on the code I wrote this evening.&lt;/p&gt;

&lt;p&gt;The lack of SQL is actually refreshing. As in Django and many other frameworks, you declare your data types in Python:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class Entry(db.Model):
    author = db.UserProperty()
    title = db.StringProperty(required=True)
    slug = db.StringProperty(required=True)
    body = db.TextProperty(required=True)
    published = db.DateTimeProperty(auto_now_add=True)
    updated = db.DateTimeProperty(auto_now=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I used a web framework we use at &lt;a href=&quot;http://friendfeed.com/&quot;&gt;FriendFeed&lt;/a&gt;. It looks a lot like the &lt;a href=&quot;http://code.google.com/appengine/docs/gettingstarted/usingwebapp.html&quot;&gt;webapp&lt;/a&gt; framework that ships with App Engine and &lt;a href=&quot;http://webpy.org/&quot;&gt;web.py&lt;/a&gt; (which inspired both of them). It took virtually no effort to get it to work in App Engine thanks to App Engine's support for &lt;a href=&quot;http://www.python.org/dev/peps/pep-0333/&quot;&gt;WSGI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Running the application looks a lot like the App Engine examples:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;application = web.WSGIApplication([
    (r&quot;/&quot;, MainPageHandler),
    (r&quot;/index&quot;, IndexHandler),
    (r&quot;/feed&quot;, FeedHandler),
    (r&quot;/entry/([^/]+)&quot;, EntryHandler),
])
wsgiref.handlers.CGIHandler().run(application)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Generating the front page is totally easy:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class MainPageHandler(web.RequestHandler):
    def get(self):
        entries = db.Query(Entry).order('-published').fetch(limit=5)
        self.render(&quot;main.html&quot;, entries=entries)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Generating the Atom feed is equally easy:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class FeedHandler(web.RequestHandler):
    def get(self):
        entries = db.Query(Entry).order('-published').fetch(limit=10)
        self.set_header(&quot;Content-Type&quot;, &quot;application/atom+xml&quot;)
        self.render(&quot;atom.xml&quot;, entries=entries)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I wanted to use &lt;a href=&quot;http://en.wikipedia.org/wiki/Slug_(production)&quot;&gt;slugs&lt;/a&gt; in my entry URLs to make them friendlier, so I do a query to look up the entry for a given slug:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class EntryHandler(web.RequestHandler):
    def get(self, slug):
        entry = db.Query(Entry).filter(&quot;slug =&quot;, slug).get()
        if not entry:
            raise web.HTTPError(404)
        self.render(&quot;entry.html&quot;, entry=entry)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I also needed security for adding/editing blog entries. App Engine lets you use Google's account system, which is nice for small apps like this. It also knows which users are &quot;admins&quot; for the app, so I decided to use this built-in role to handle security for the blog: only admins can add/edit entries. First, I wrote a &lt;a href=&quot;http://www.python.org/dev/peps/pep-0318/&quot;&gt;decorator&lt;/a&gt; that automatically adds admin security to any RequestHandler method (redirecting to the login page if the user is not logged in):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def administrator(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        user = users.get_current_user()
        if not user:
            if self.request.method == &quot;GET&quot;:
                self.redirect(users.create_login_url(self.request.uri))
                return
            raise web.HTTPError(403)
        elif not users.is_current_user_admin():
            raise web.HTTPError(403)
        else:
            return method(self, *args, **kwargs)
    return wrapper
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;My edit handler looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class NewEntryHandler(web.RequestHandler):
    @administrator
    def get(self):
        self.render(&quot;new.html&quot;)

    @administrator
    def post(self):
        entry = Entry(
            author=users.get_current_user(),
            title=self.get_argument(&quot;title&quot;),
            slug=self.get_argument(&quot;slug&quot;),
            body=self.get_argument(&quot;body&quot;),
        )
        entry.put()
        self.redirect(&quot;/entry/&quot; + entry.slug)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I don't think this blog will ever get millions of page views, but it is pretty cool that it could in theory :) I didn't have to configure anything. I didn't need to make an account system to make an administrative section of the site. And the entire blog is less than 100 lines of code. I deployed by running a script, and I was done. No machines, no &quot;apt-get install&quot;, no &quot;sudo /etc/init.d/whatever restart&quot;, nothing. &lt;/p&gt;
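&lt;p&gt;For the curious, the deploy script is the SDK's &lt;code&gt;appcfg.py update&lt;/code&gt; command, and the only configuration App Engine needs is a small &lt;code&gt;app.yaml&lt;/code&gt; mapping URLs to your code &amp;mdash; something roughly like this (the application name and script name here are placeholders for whatever you registered):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;application: myblog
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: blog.py
&lt;/code&gt;&lt;/pre&gt;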

&lt;p&gt;I am impressed. The App Engine team has done a fantastic job, and I think they have already changed the way I do hobby projects.&lt;/p&gt;

&lt;p&gt;The next logical question is: would I run a real business on infrastructure that is so different from everyone else's? If I change my mind about App Engine, what are my options? I am hoping a number of open source projects spring up as alternatives to lower the switching costs over the next year. I will be very interested to see how many startups take the leap and run on App Engine entirely in the meantime.&lt;/p&gt;
</content>
    </entry>
  
</feed>
