<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
 
 <title>Cosmin Stejerean (offbytwo)</title>
 <link href="http://offbytwo.com/atom.xml" rel="self"/>
 <link href="http://offbytwo.com/"/>
 <updated>2013-10-06T16:06:49-07:00</updated>
 <id>http://offbytwo.com/</id>
 <author>
   <name>Cosmin Stejerean</name>
   <email>cosmin@offbytwo.com</email>
 </author>

 
 <entry>
   <title>Audit your EC2 infrastructure with source control</title>
   <link href="http://offbytwo.github.com/2012/08/03/audit-ec2-infra-with-scm.html"/>
   <updated>2012-08-03T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2012/08/03/audit-ec2-infra-with-scm</id>
   <content type="html">&lt;h1 id=&#39;audit_your_ec2_infrastructure_with_source_control&#39;&gt;Audit your EC2 infrastructure with source control&lt;/h1&gt;
&lt;p class=&#39;meta&#39;&gt;03 August 2012 - Dallas, TX&lt;/p&gt;
&lt;p&gt;You are performing a routine analysis of request logs on an internal web server when you notice a series of interesting requests from &lt;code&gt;10.191.12.13&lt;/code&gt;. A quick search determines that, as of this moment, this address does not belong to any of your servers. The requests happened 7 days ago, and much has changed during that time. Can you tell which of your instances had that IP address 7 days ago?&lt;/p&gt;

&lt;p&gt;Just to be sure, you review the security groups for this instance to make sure only internal traffic is allowed. You discover a rule that explicitly allows traffic from &lt;code&gt;10.191.12.13&lt;/code&gt;. Can you tell how long this rule has been present? Can you find the time period during which traffic from &lt;code&gt;10.191.12.13&lt;/code&gt; was allowed, and yet that address did not belong to you?&lt;/p&gt;

&lt;p&gt;These questions, and many others about the historic state of your infrastructure, could be answered easily if this information was present in a source control repository. You could then easily see when changes happened, browse to a specific point in time, and even use your source control infrastructure for things like email alerts.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href=&#39;https://github.com/RisingOak/ec2audit&#39;&gt;ec2audit&lt;/a&gt; comes in. It can write the current state of your EC2 instances, security groups, EBS volumes and load balancers to a series of JSON or YAML files that are suitable for version control.&lt;/p&gt;

&lt;p&gt;In order to set up &lt;code&gt;ec2audit&lt;/code&gt; you need IAM credentials, a source control repository, and some way to run it on a schedule. For example you can use Jenkins to schedule the runs and Git for source control. Things like &lt;code&gt;git log -S&lt;/code&gt; make it easy to find when things changed.&lt;/p&gt;
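The pickaxe search deserves a quick illustration. Here is a self-contained sketch (the file layout is hypothetical; ec2audit's actual output naming may differ) showing how git log -S pinpoints the audit run in which an address first appeared:

```shell
# Simulate two audit runs committed to a fresh repository
# (hypothetical layout: one JSON file per security group)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email audit@example.com
git config user.name audit
mkdir security-groups

echo '{"rules": []}' > security-groups/internal.json
git add -A
git commit -qm 'audit run 2012-07-26'

echo '{"rules": ["10.191.12.13/32"]}' > security-groups/internal.json
git add -A
git commit -qm 'audit run 2012-07-27'

# -S (the "pickaxe") lists only commits that added or removed the given string
git log -S '10.191.12.13' --oneline -- security-groups/
```

Only the second commit is listed, which brackets the change to within one audit interval; --since and --until can narrow the window further.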

&lt;p&gt;To install &lt;code&gt;ec2audit&lt;/code&gt;, use &lt;code&gt;pip&lt;/code&gt; or &lt;code&gt;easy_install&lt;/code&gt;. You can also &lt;a href=&#39;http://pypi.python.org/pypi/ec2audit&#39;&gt;download a tarball&lt;/a&gt; and run &lt;code&gt;python setup.py install&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can then run &lt;code&gt;ec2audit&lt;/code&gt; as follows&lt;/p&gt;

&lt;p&gt;&lt;code&gt;
ec2audit -I &amp;lt;access-key&amp;gt; -S &amp;lt;secret-key&amp;gt; us-east-1 -o outputdir
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also supply AWS credentials via the standard environment variables &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The AWS credentials must be granted read access to the EC2 APIs. You should create an IAM user with only the necessary permissions. If you are using the AWS Console, you can use the &lt;code&gt;Amazon EC2 Read Only Access&lt;/code&gt; policy template for convenience. The following policy will also work:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
{
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: &quot;EC2:Describe*&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    },
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: &quot;elasticloadbalancing:Describe*&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    },
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: &quot;autoscaling:Describe*&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remember to run &lt;code&gt;ec2audit&lt;/code&gt; regularly and version control the output. You can create an empty git repository (or use your SCM of choice), and you can run it on a schedule using &lt;code&gt;cron&lt;/code&gt; or your CI server.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Deploying Django applications on Heroku</title>
   <link href="http://offbytwo.github.com/2012/01/18/deploying-django-to-heroku.html"/>
   <updated>2012-01-18T00:00:00-08:00</updated>
   <id>http://offbytwo.com/2012/01/18/deploying-django-to-heroku</id>
   <content type="html">&lt;h1&gt;Deploying Django applications on Heroku&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;18 January 2012 &amp;#8211; Melbourne, Australia&lt;/p&gt;
&lt;p&gt;For a long time Ruby developers enjoyed painless deployment to &lt;a href=&quot;http://www.heroku.com/&quot;&gt;Heroku&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Python landscape was limited to Google App Engine for quite some time (and I do mean &lt;strong&gt;limited&lt;/strong&gt;, but I&amp;#8217;ll save that for another time). Since Heroku was acquired by Salesforce, the market for cloud hosting of Python applications has exploded. Now we have plenty of choices, such as &lt;a href=&quot;http://ep.io&quot;&gt;epio&lt;/a&gt;, &lt;a href=&quot;http://gondor.io&quot;&gt;gondor&lt;/a&gt;, &lt;a href=&quot;http://apphosted.com/&quot;&gt;appHosted&lt;/a&gt; and &lt;a href=&quot;http://djangozoom.com/&quot;&gt;DjangoZoom&lt;/a&gt; (some of these are still in private beta). There is also &lt;a href=&quot;https://www.dotcloud.com/&quot;&gt;dotCloud&lt;/a&gt;, which seems to support just about everything. It&amp;#8217;s a good time to be a Python developer.&lt;/p&gt;
&lt;p&gt;Since the acquisition, Heroku has released new features faster than ever. Their &lt;a href=&quot;http://devcenter.heroku.com/articles/cedar&quot;&gt;Cedar&lt;/a&gt; stack now officially supports Ruby, Node.js, Clojure, Java, Python and Scala. Let&amp;#8217;s take a look at how we can deploy a fairly typical, albeit simple, Django application to Heroku.&lt;/p&gt;
&lt;h3&gt;Prerequisites: pip and virtualenv&lt;/h3&gt;
&lt;p&gt;Before we get started we&amp;#8217;ll need to install &lt;a href=&quot;http://www.pip-installer.org/en/latest/index.html&quot;&gt;pip&lt;/a&gt; and &lt;a href=&quot;http://pypi.python.org/pypi/virtualenv&quot;&gt;virtualenv&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you have setuptools installed you should be able to install both using:&lt;/p&gt;
&lt;pre&gt;sudo easy_install pip
pip install virtualenv
&lt;/pre&gt;
&lt;p&gt;If you need more detailed instructions please take a look at &lt;a href=&quot;http://www.pip-installer.org/en/latest/installing.html&quot;&gt;installing pip&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Create git repository&lt;/h3&gt;
&lt;p&gt;You should already know how to create a new git repository.&lt;/p&gt;
&lt;pre&gt;git init myawesomeproject
cd !$
&lt;/pre&gt;
&lt;p&gt;In case you are wondering, &lt;code&gt;!$&lt;/code&gt; expands to the last argument to the last command (myawesomeproject in this case).&lt;/p&gt;
&lt;h3&gt;Create virtual-environment&lt;/h3&gt;
&lt;p&gt;If you have been using virtualenv for a while you might be used to creating virtual environments in a folder called &lt;em&gt;ve&lt;/em&gt;, &lt;em&gt;env&lt;/em&gt; or similar. For the best experience when working with Heroku, however, you should create the virtual environment directly at the root of your checkout.&lt;/p&gt;
&lt;pre&gt;virtualenv --no-site-packages .
source bin/activate
&lt;/pre&gt;
&lt;p&gt;You have now created and activated your virtual environment. You will need to run &lt;code&gt;source bin/activate&lt;/code&gt; every time you work on this project. While we&amp;#8217;re at it, let&amp;#8217;s also ignore the virtualenv artifacts. Put the following in your .gitignore file:&lt;/p&gt;
&lt;pre&gt;
/bin
/include
/lib
/share
&lt;/pre&gt;
&lt;p&gt;While you&amp;#8217;re at it you should also consider adding &lt;code&gt;*.pyc&lt;/code&gt; to .gitignore.&lt;/p&gt;
&lt;h3&gt;Install dependencies and freeze&lt;/h3&gt;
&lt;p&gt;For a simple Django application you will only need Django and psycopg2 (to talk to Postgres). Install them using pip and then freeze the exact versions used to a file called requirements.txt. Heroku will use requirements.txt to automatically install your dependencies when you push.&lt;/p&gt;
&lt;pre&gt;pip install Django psycopg2
pip freeze &amp;gt; requirements.txt
&lt;/pre&gt;
&lt;p&gt;When you add new requirements to your project you can &lt;code&gt;pip install&lt;/code&gt; them directly and regenerate &lt;em&gt;requirements.txt&lt;/em&gt; with &lt;code&gt;pip freeze&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Create Django application&lt;/h3&gt;
&lt;p&gt;Now you can create a Django project&lt;/p&gt;
&lt;p&gt;&lt;code&gt;django-admin.py startproject myawesomeproject&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;and make it awesome&amp;#8230;&lt;/p&gt;
&lt;h3&gt;Handling database migrations&lt;/h3&gt;
&lt;p&gt;You will quickly need a way to migrate your database schema. Fortunately you can use &lt;a href=&quot;http://south.aeracode.org/&quot;&gt;South&lt;/a&gt; to handle data and schema migrations. Install it with pip&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pip install south&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;and re-create your requirements file.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pip freeze &amp;gt; requirements.txt&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Add it to your &lt;code&gt;INSTALLED_APPS&lt;/code&gt; in &lt;code&gt;settings.py&lt;/code&gt; and start converting your applications using &lt;code&gt;convert_to_south&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;myawesomeproject/manage.py syncdb&lt;/code&gt;&lt;br /&gt;
&lt;code&gt;myawesomeproject/manage.py convert_to_south your_application&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Let&amp;#8217;s also tell South that our current database schema is up to date, by fake applying the initial migration.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;myawesomeproject/manage.py migrate --fake your_application&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now every time you make a change to your Django models, you can create new migrations and apply them.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;myawesomeproject/manage.py schemamigration --auto your_application
myawesomeproject/manage.py migrate your_application
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can learn more about South, including using it for data migrations, by checking out the &lt;a href=&quot;http://south.aeracode.org/docs/tutorial/index.html&quot;&gt;tutorial&lt;/a&gt; and &lt;a href=&quot;http://south.aeracode.org/docs/&quot;&gt;documentation&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Handling static files&lt;/h3&gt;
&lt;p&gt;In a more traditional hosting setup you might use Apache or Nginx to handle serving static files. When deploying to Heroku though you should consider hosting your static files in S3. Luckily Django can easily support a variety of storage backends, and the &lt;a href=&quot;http://django-storages.readthedocs.org/en/latest/index.html&quot;&gt;django-storages&lt;/a&gt; package allows you to easily use S3.&lt;/p&gt;
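One step left implicit here is installing django-storages itself, together with boto, which the s3boto backend uses. A sketch, following the same pip workflow as before:

```shell
pip install django-storages boto
pip freeze > requirements.txt
```

You will also need to add 'storages' to INSTALLED_APPS in settings.py.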
&lt;p&gt;First, create a bucket in S3, using either the &lt;a href=&quot;http://aws.amazon.com/console/&quot;&gt;&lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; Console&lt;/a&gt; or your favorite tool. Then, modify your &lt;code&gt;settings.py&lt;/code&gt; and add the following values:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import os

AWS_ACCESS_KEY_ID = os.environ.get(&#39;AWS_ACCESS_KEY_ID&#39;)
AWS_SECRET_ACCESS_KEY = os.environ.get(&#39;AWS_SECRET_ACCESS_KEY&#39;)
AWS_STORAGE_BUCKET_NAME = &#39;&amp;lt;YOUR BUCKET NAME&amp;gt;&#39;

STATICFILES_STORAGE = &#39;storages.backends.s3boto.S3BotoStorage&#39;
DEFAULT_FILE_STORAGE = &#39;storages.backends.s3boto.S3BotoStorage&#39;

STATIC_URL = &#39;http://&#39; + AWS_STORAGE_BUCKET_NAME + &#39;.s3.amazonaws.com/&#39;
ADMIN_MEDIA_PREFIX = STATIC_URL + &#39;admin/&#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that we are using environment variables to store the &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; access key and secret key. While we are on this topic, if you are planning to open source the Django application you are deploying, consider also storing your &lt;code&gt;SECRET_KEY&lt;/code&gt; in an environment variable.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;SECRET_KEY = os.environ.get(&#39;DJANGO_SECRET_KEY&#39;)&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We are now ready to deploy to Heroku.&lt;/p&gt;
&lt;h3&gt;Creating an environment in Heroku&lt;/h3&gt;
&lt;p&gt;Let&amp;#8217;s start by installing the Heroku gem&lt;/p&gt;
&lt;p&gt;&lt;code&gt;gem install heroku&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;followed by creating a new app on the Cedar stack.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;heroku create --stack cedar&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Optionally, you might want to map your own domain name to your Heroku stack.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;heroku addons:add custom_domains
heroku domains:add www.example.com
heroku domains:add example.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find information on managing custom domains in Heroku &lt;a href=&quot;http://devcenter.heroku.com/articles/custom-domains&quot;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Let&amp;#8217;s add the necessary environment variables (do the same for &lt;code&gt;SECRET_KEY&lt;/code&gt; if necessary)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;heroku config:add AWS_ACCESS_KEY_ID=yourawsaccesskey
heroku config:add AWS_SECRET_ACCESS_KEY=yourawssecretkey
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For extra security you should use the Identity &amp;amp; Access Management (&lt;span class=&quot;caps&quot;&gt;IAM&lt;/span&gt;) service to create a separate user account with the following policy&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; {
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: &quot;s3:*&quot;,
      &quot;Resource&quot;: [
        &quot;arn:aws:s3:::BUCKETNAME&quot;,
        &quot;arn:aws:s3:::BUCKETNAME/*&quot;
      ]
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This way if the credentials stored in Heroku are ever compromised, the attacker will only have access to the files stored in the bucket of this application.&lt;/p&gt;
&lt;h3&gt;Deploying application to Heroku&lt;/h3&gt;
&lt;p&gt;This is as easy as running &lt;code&gt;git push&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git push heroku master&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Your application is now deployed, but you still need to configure the database&lt;/p&gt;
&lt;p&gt;&lt;code&gt;heroku run python myawesomeproject/manage.py syncdb&lt;/code&gt;&lt;br /&gt;
&lt;code&gt;heroku run python myawesomeproject/manage.py migrate&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You should now have a working application, but we have not yet deployed our static files.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;heroku run python myawesomeproject/manage.py collectstatic&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;At this point you should have a fully functional Django application deployed to Heroku with static files hosted in S3. If you are having problems you can investigate the logs with &lt;code&gt;heroku logs&lt;/code&gt;. You can also consider turning on &lt;code&gt;DEBUG&lt;/code&gt; temporarily, but &lt;strong&gt;don&amp;#8217;t&lt;/strong&gt; forget to turn this off. To make it easier to turn &lt;span class=&quot;caps&quot;&gt;DEBUG&lt;/span&gt; on and off consider adding the following to your &lt;code&gt;settings.py&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;DEBUG = bool(os.environ.get(&#39;DJANGO_DEBUG&#39;, &#39;&#39;))
TEMPLATE_DEBUG = DEBUG
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can turn debug on and off using &lt;code&gt;heroku config:add DJANGO_DEBUG=true&lt;/code&gt; and turning it off with &lt;code&gt;heroku config:remove DJANGO_DEBUG&lt;/code&gt;&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Emacs + paredit under terminal</title>
   <link href="http://offbytwo.github.com/2012/01/15/emacs-plus-paredit-under-terminal.html"/>
   <updated>2012-01-15T00:00:00-08:00</updated>
   <id>http://offbytwo.com/2012/01/15/emacs-plus-paredit-under-terminal</id>
   <content type="html">&lt;h1&gt;Emacs + paredit under terminal (Terminal.app, iTerm, iTerm2)&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;15 January 2012 &amp;#8211; Melbourne, Australia&lt;/p&gt;
&lt;p&gt;I prefer to use Emacs in a full-screen terminal window. One problem that has plagued me until today though has been the lack of proper Control and Meta arrow combinations when working at the terminal. Especially when working with &lt;a href=&quot;http://www.emacswiki.org/emacs/ParEdit&quot;&gt;paredit&lt;/a&gt; which frequently involves the use of &lt;code&gt;C-left&lt;/code&gt;, &lt;code&gt;C-right&lt;/code&gt;, &lt;code&gt;C-M-left&lt;/code&gt;, &lt;code&gt;C-M-right&lt;/code&gt; and less frequently of &lt;code&gt;M-up&lt;/code&gt; and &lt;code&gt;M-down&lt;/code&gt;. To get paredit to work properly I kept switching to Cocoa Emacs (in case you didn&amp;#8217;t know, you can install Cocoa emacs with &lt;code&gt;brew install emacs --cocoa&lt;/code&gt; if you are using &lt;a href=&quot;http://mxcl.github.com/homebrew/&quot;&gt;homebrew&lt;/a&gt; )&lt;/p&gt;
&lt;p&gt;Today I decided to get to the bottom of this problem at any cost. First, I suspected that Control arrow combinations were not being sent properly by my terminal, in my case iTerm2. The fixes below also apply to Terminal.app and the original iTerm.&lt;/p&gt;
&lt;h3&gt;iTerm2 key bindings&lt;/h3&gt;
&lt;p&gt;Select Profiles &amp;gt; Open Profiles&amp;#8230; from the menu bar, or press Command-O, and take a look at the default profile. Click on the &lt;strong&gt;Keys&lt;/strong&gt; section. While you are here, verify that &lt;em&gt;Left Option&lt;/em&gt; and &lt;em&gt;Right Option&lt;/em&gt; are set to &lt;code&gt;+Esc&lt;/code&gt;. For the arrow-key fixes you will need to add a series of key shortcuts. The easiest way to get started is to select &lt;em&gt;Load Preset&amp;#8230;&lt;/em&gt; &amp;gt; &lt;em&gt;xterm Defaults&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This will map C-up, C-down, C-right, and C-left to send the following escape sequences:&lt;/p&gt;
&lt;pre&gt;
C-up    : Esc-[1;5A
C-down  : Esc-[1;5B
C-right : Esc-[1;5C
C-left  : Esc-[1;5D
&lt;/pre&gt;
&lt;p&gt;It will also define Shift arrows and Control-Shift arrows, but we don&amp;#8217;t care about those at the moment. These are not quite sufficient, but before we go any further, let&amp;#8217;s make sure we can get these to work in Emacs.&lt;/p&gt;
&lt;h3&gt;Check Control-arrow bindings within Emacs&lt;/h3&gt;
&lt;p&gt;Open up a new terminal window and then open emacs at the terminal with &lt;code&gt;emacs -nw&lt;/code&gt;. Now, with paredit turned off, try &lt;code&gt;C-left&lt;/code&gt; and &lt;code&gt;C-right&lt;/code&gt;, which should most likely move a word at a time left and right respectively. To verify Emacs is picking up the correct keys you can also try &lt;code&gt;C-h k&lt;/code&gt; for &lt;em&gt;Describe key&lt;/em&gt; followed by the key combination. For example, &lt;code&gt;C-h k C-left&lt;/code&gt; should display&lt;/p&gt;
&lt;pre&gt;
&amp;lt;C-left&amp;gt; runs the command backward-word, which is an interactive
compiled Lisp function in `simple.el&#39;.

It is bound to &amp;lt;C-left&amp;gt;, &amp;lt;M-left&amp;gt;, M-b, ESC &amp;lt;left&amp;gt;.

(backward-word &amp;amp;optional ARG)

Move backward until encountering the beginning of a word.
With argument ARG, do this that many times.

[back]
&lt;/pre&gt;
&lt;p&gt;As long as &lt;code&gt;TERM&lt;/code&gt; is set to &lt;code&gt;xterm&lt;/code&gt; the above bindings should work automatically in Emacs. Should you have to define your own bindings for these escape sequences you could do so with&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
(define-key input-decode-map &quot;\e[1;5A&quot; [C-up])
(define-key input-decode-map &quot;\e[1;5B&quot; [C-down])
(define-key input-decode-map &quot;\e[1;5C&quot; [C-right])
(define-key input-decode-map &quot;\e[1;5D&quot; [C-left])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that &lt;em&gt;input-decode-map&lt;/em&gt; is only defined starting with Emacs 23.&lt;/p&gt;
&lt;h3&gt;Paredit&lt;/h3&gt;
&lt;p&gt;At this point you should have &lt;code&gt;C-left&lt;/code&gt; working outside of paredit. Now turn on paredit mode (&lt;code&gt;M-x paredit-mode&lt;/code&gt;) and try C-left and C-right again. Chances are you will see &lt;code&gt;[1;5D&lt;/code&gt; and &lt;code&gt;[1;5C&lt;/code&gt;. If this happens only in paredit mode, then the culprit is most likely the binding of &lt;code&gt;M-[&lt;/code&gt;. You can figure this out by trying &lt;em&gt;describe key&lt;/em&gt; again. If you try &lt;code&gt;C-h k C-left&lt;/code&gt; you will most likely see&lt;/p&gt;
&lt;pre&gt;
M-[ runs the command paredit-bracket-wrap-sexp, which is an
interactive Lisp function in `paredit.el&#39;.

It is bound to M-[.

(paredit-bracket-wrap-sexp &amp;amp;optional N)

Wrap a pair of bracket around a sexp

M-[

(foo |bar baz)
  -&amp;gt;
(foo [|bar] baz)
&lt;/pre&gt;
&lt;p&gt;Huh?&lt;/p&gt;
&lt;h3&gt;Where does M-[ come from?&lt;/h3&gt;
&lt;p&gt;Each time you press Control + left arrow the terminal will send the following sequence as defined above: &lt;code&gt;ESC [ 1 ; 5 D&lt;/code&gt;. Emacs starts interpreting this sequence, but it gets an early match on &lt;code&gt;ESC [&lt;/code&gt; which is the same as &lt;code&gt;M-[&lt;/code&gt; and invokes &lt;code&gt;paredit-bracket-wrap-sexp&lt;/code&gt;. We need to turn off this behavior, which we can do by putting the following in &lt;code&gt;~/.emacs&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
(require &#39;paredit)
(define-key paredit-mode-map (kbd &quot;M-[&quot;) nil)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you load the above code, try &lt;code&gt;C-left&lt;/code&gt; again in paredit. If that works, you are ready for the next step.&lt;/p&gt;
&lt;h3&gt;Add Meta-arrow and Control-Meta-arrow to iTerm2&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;xterm Defaults&lt;/em&gt; only provided us with certain key bindings. Go back to the profile key bindings under iTerm2 and add bindings for the following:&lt;/p&gt;
&lt;pre&gt;
M-up      : Esc-[1;4A
M-down    : Esc-[1;4B
M-right   : Esc-[1;4C
M-left    : Esc-[1;4D

C-M-up    : Esc-[1;8A
C-M-down  : Esc-[1;8B
C-M-right : Esc-[1;8C
C-M-left  : Esc-[1;8D
&lt;/pre&gt;
&lt;p&gt;To do this, click on the + sign, type the key sequence, then under &lt;em&gt;Action:&lt;/em&gt; select &lt;em&gt;Send Escape Sequence&lt;/em&gt; and type in the escape sequence starting with &lt;code&gt;[1;&lt;/code&gt;. As an example, look at the values for Control-left and friends that were added when you loaded the &lt;em&gt;xterm Defaults&lt;/em&gt; map.&lt;/p&gt;
&lt;p&gt;Why these values? I have no idea. I looked at the escape sequences for plain arrow keys, Shift-arrow keys and Control-arrow keys and I decided to experiment a little in the neighboring spaces, using &lt;code&gt;C-h k&lt;/code&gt; to figure out which key sequence is bound to what I want. If you find a better explanation please let me know.&lt;/p&gt;
&lt;h3&gt;iTerm&lt;/h3&gt;
&lt;p&gt;If you are using iTerm you can add key bindings as follows:&lt;/p&gt;
&lt;p&gt;Bookmarks &amp;gt; Manage Profiles &amp;gt; Keyboard Profiles &amp;gt; xterm and under &lt;em&gt;Key map settings:&lt;/em&gt; add (by clicking on the + sign)&lt;/p&gt;
&lt;pre&gt;
Key: cursor left
Modifier: Control
Action: send escape sequence

[1;5D
&lt;/pre&gt;
&lt;p&gt;Add the remaining key bindings using the above format and the values from the iTerm2 table.&lt;/p&gt;
&lt;h3&gt;Terminal.app&lt;/h3&gt;
&lt;p&gt;In Terminal.app you will need to add a few key bindings by going to Preferences &amp;gt; Settings &amp;gt; Keyboard. The end result will be the same as iTerm2 but the interface is slightly different.&lt;/p&gt;
&lt;pre&gt;
Key: cursor left
Modifier: Control
Action: send string to shell

\033[1;5D

&lt;/pre&gt;
&lt;p&gt;where &lt;code&gt;\033&lt;/code&gt; represents Escape. For the other keys refer to the map for iTerm2 above.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Scripted installation of Java on Ubuntu</title>
   <link href="http://offbytwo.github.com/2011/07/20/scripted-installation-java-ubuntu.html"/>
   <updated>2011-07-20T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2011/07/20/scripted-installation-java-ubuntu</id>
   <content type="html">&lt;h1&gt;Scripted installation of Java on Ubuntu (with Bash or Puppet)&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;20 July 2011 &amp;#8211; Dallas&lt;/p&gt;
&lt;p&gt;Every few months I need to script the installation of Java on Ubuntu, and I always seem to forget quite how to do it. I also seem to fail at finding anything useful on Google. Most of the posts either skip critical steps or involve manual steps. So I&amp;#8217;m going to document this here for future reference.&lt;/p&gt;
&lt;p&gt;Bash version (add sudo as necessary):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
add-apt-repository &quot;deb http://archive.canonical.com/ $(lsb_release -s -c) partner&quot;
apt-get update

echo &quot;sun-java6-jdk shared/accepted-sun-dlj-v1-1 select true&quot; | debconf-set-selections
echo &quot;sun-java6-jre shared/accepted-sun-dlj-v1-1 select true&quot; | debconf-set-selections

DEBIAN_FRONTEND=noninteractive aptitude install -y -f sun-java6-jre sun-java6-bin sun-java6-jdk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Given that I do most of my automation with Puppet these days, here is a Puppet class that will accomplish the same.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
class sun_java_6 {

  $release = regsubst(generate(&quot;/usr/bin/lsb_release&quot;, &quot;-s&quot;, &quot;-c&quot;), &#39;(\w+)\s&#39;, &#39;\1&#39;)

  file { &quot;partner.list&quot;:
    path =&amp;gt; &quot;/etc/apt/sources.list.d/partner.list&quot;,
    ensure =&amp;gt; file,
    owner =&amp;gt; &quot;root&quot;,
    group =&amp;gt; &quot;root&quot;,
    content =&amp;gt; &quot;deb http://archive.canonical.com/ $release partner\ndeb-src http://archive.canonical.com/ $release partner\n&quot;,
    notify =&amp;gt; Exec[&quot;apt-get-update&quot;],
  }

  exec { &quot;apt-get-update&quot;:
    command =&amp;gt; &quot;/usr/bin/apt-get update&quot;,
    refreshonly =&amp;gt; true,
  }

  package { &quot;debconf-utils&quot;:
    ensure =&amp;gt; installed
  }

  exec { &quot;agree-to-jdk-license&quot;:
    command =&amp;gt; &quot;/bin/echo -e sun-java6-jdk shared/accepted-sun-dlj-v1-1 select true | debconf-set-selections&quot;,
    unless =&amp;gt; &quot;debconf-get-selections | grep &#39;sun-java6-jdk.*shared/accepted-sun-dlj-v1-1.*true&#39;&quot;,
    path =&amp;gt; [&quot;/bin&quot;, &quot;/usr/bin&quot;], require =&amp;gt; Package[&quot;debconf-utils&quot;],
  }

  exec { &quot;agree-to-jre-license&quot;:
    command =&amp;gt; &quot;/bin/echo -e sun-java6-jre shared/accepted-sun-dlj-v1-1 select true | debconf-set-selections&quot;,
    unless =&amp;gt; &quot;debconf-get-selections | grep &#39;sun-java6-jre.*shared/accepted-sun-dlj-v1-1.*true&#39;&quot;,
    path =&amp;gt; [&quot;/bin&quot;, &quot;/usr/bin&quot;], require =&amp;gt; Package[&quot;debconf-utils&quot;],
  }

  package { &quot;sun-java6-jdk&quot;:
    ensure =&amp;gt; latest,
    require =&amp;gt; [ File[&quot;partner.list&quot;], Exec[&quot;agree-to-jdk-license&quot;], Exec[&quot;apt-get-update&quot;] ],
  }

  package { &quot;sun-java6-jre&quot;:
    ensure =&amp;gt; latest,
    require =&amp;gt; [ File[&quot;partner.list&quot;], Exec[&quot;agree-to-jre-license&quot;], Exec[&quot;apt-get-update&quot;] ],
  }

}

include sun_java_6
&lt;/code&gt;&lt;/pre&gt;</content>
 </entry>
 
 <entry>
   <title>Things you (probably) didn't know about xargs</title>
   <link href="http://offbytwo.github.com/2011/06/26/things-you-didnt-know-about-xargs.html"/>
   <updated>2011-06-26T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2011/06/26/things-you-didnt-know-about-xargs</id>
   <content type="html">&lt;h1&gt;Things you (probably) didn&amp;#8217;t know about xargs&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;26 June 2011 &amp;#8211; Bangalore, India&lt;/p&gt;
&lt;p&gt;If you&amp;#8217;ve spent any amount of time at a Unix command line you&amp;#8217;ve probably already seen &lt;code&gt;xargs&lt;/code&gt;. In case you haven&amp;#8217;t, xargs is a command used to execute commands based on arguments from standard input.&lt;/p&gt;
&lt;h3&gt;Common use cases&lt;/h3&gt;
&lt;p&gt;I often see xargs used in combination with &lt;code&gt;find&lt;/code&gt; in order to do something with the list of files returned by find.&lt;/p&gt;
&lt;p&gt;&lt;i&gt;Pedantic note:&lt;/i&gt; As people have correctly pointed out on Twitter and on Hacker News, find is a very powerful command and it has built-in flags such as &lt;code&gt;-exec&lt;/code&gt; and &lt;code&gt;-delete&lt;/code&gt; that you can often use instead of piping to xargs. However people either don&amp;#8217;t know about the options to find, forget how to invoke -exec with its archaic syntax, or prefer the simplicity of xargs. There are also performance implications to the various choices. I should write a follow-up post on find.&lt;/p&gt;
&lt;p&gt;&lt;i&gt;Contrived examples warning:&lt;/i&gt; I needed simple examples that would not detract from the topic. This is the best I could do given the time I had. &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/offbytwo/offbytwo.github.com&quot;&gt;Patches are welcome&lt;/a&gt; :)&lt;/p&gt;
&lt;p&gt;Recursively find all Python files and count the number of lines&lt;br /&gt;
&lt;code&gt;find . -name &#39;*.py&#39; | xargs wc -l &lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Recursively find all Emacs backup files and remove them&lt;br /&gt;
&lt;code&gt;find . -name &#39;*~&#39; | xargs rm &lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Recursively find all Python files and search them for the word &amp;#8216;import&amp;#8217;&lt;br /&gt;
&lt;code&gt;find . -name &#39;*.py&#39; | xargs grep &#39;import&#39; &lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Handling files or folders with spaces in the name&lt;/h3&gt;
&lt;p&gt;One problem with the above examples is that they do not correctly handle files or directories with a space in the name. This is because xargs by default will split on any white-space character. A quick solution to this is to tell find to delimit results with &lt;span class=&quot;caps&quot;&gt;NUL&lt;/span&gt; (\0) characters (by supplying &lt;code&gt;-print0&lt;/code&gt; to find), and to tell xargs to split the input on &lt;span class=&quot;caps&quot;&gt;NUL&lt;/span&gt; characters as well (&lt;code&gt;-0&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Remove backup files recursively even if they contain spaces&lt;br /&gt;
&lt;code&gt;find . -name &#39;*~&#39; -print0 | xargs -0 rm &lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;i&gt;Security note:&lt;/i&gt; filenames can often contain more than just &lt;a href=&quot;http://www.dwheeler.com/essays/fixing-unix-linux-filenames.html&quot;&gt;spaces&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Placement of the arguments&lt;/h3&gt;
&lt;p&gt;In the examples above xargs reads all non-white-space elements from standard input and concatenates them into the given command line before executing it. This alone is very useful in many circumstances. Sometimes however you might want to insert the arguments into the middle of a command. The &lt;code&gt;-I&lt;/code&gt; flag to xargs takes a string that will be replaced with the supplied input before the command is executed. A common choice is %.&lt;/p&gt;
&lt;p&gt;Copy all backup files somewhere else&lt;br /&gt;
&lt;code&gt;find . -name &#39;*~&#39; -print0 | xargs -0 -I % cp % ~/backups &lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Maximum command length&lt;/h3&gt;
&lt;p&gt;Sometimes the list of arguments piped to xargs would cause the resulting command line to exceed the maximum length allowed by the system. You can find this limit with&lt;/p&gt;
&lt;p&gt;&lt;code&gt;getconf ARG_MAX&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;In order to avoid hitting the system limit, xargs imposes its own limit on the maximum length of the resulting command. If the supplied arguments would cause the invoked command to exceed this built-in limit, xargs will split the input and invoke the command repeatedly. This limit defaults to 4096 in some implementations, which can be significantly lower than ARG_MAX on modern systems. You can override xargs&amp;#8217;s limit with the &lt;code&gt;-s&lt;/code&gt; flag. This is particularly important when you are dealing with a large source tree.&lt;/p&gt;
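&lt;p&gt;To watch this splitting happen you can artificially lower the limit with &lt;code&gt;-s&lt;/code&gt;. A small sketch (the 16-byte limit is contrived, and the exact grouping depends on how your xargs implementation counts bytes):&lt;/p&gt;

```shell
# Ten single-digit arguments forced through a tiny 16-byte command-line
# limit; xargs has to split them across multiple echo invocations.
seq 0 9 | xargs -s 16 echo
```

&lt;p&gt;On a typical system this prints the digits across two or more lines instead of one.&lt;/p&gt;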
&lt;h3&gt;Operating on subset of arguments at a time&lt;/h3&gt;
&lt;p&gt;You might be dealing with commands that can only accept one or two arguments at a time. For example the diff command operates on two files at a time. The &lt;code&gt;-n&lt;/code&gt; flag to xargs specifies how many arguments at a time to supply to the given command. The command will be invoked repeatedly until all input is exhausted. Note that on the last invocation you might get fewer than the desired number of arguments if there is insufficient input. Let&amp;#8217;s simply use xargs to break up the input into 2 arguments per line&lt;/p&gt;
&lt;pre&gt;
$ echo {0..9} | xargs -n 2

0 1
2 3
4 5
6 7
8 9
&lt;/pre&gt;
&lt;p&gt;In addition to running based on a specified number of arguments at a time, you can also invoke a command for each line of input with &lt;code&gt;-L 1&lt;/code&gt;. You can of course use an arbitrary number of lines at a time, but 1 is the most common. Here is how you might diff every git commit against its parent.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git log --format=&quot;%H %P&quot; | xargs -L 1 git diff &lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Executing commands in parallel&lt;/h3&gt;
&lt;p&gt;You might be using xargs to invoke a compute-intensive command for every line of input. Wouldn&amp;#8217;t it be nice if xargs allowed you to take advantage of the multiple cores in your machine? That&amp;#8217;s what &lt;code&gt;-P&lt;/code&gt; is for. It allows xargs to invoke the specified command multiple times in parallel. You might use this, for example, to run multiple &lt;code&gt;ffmpeg&lt;/code&gt; encodes in parallel. However I&amp;#8217;m just going to show you yet another contrived example.&lt;/p&gt;
&lt;p&gt;Parallel sleep&lt;/p&gt;
&lt;pre&gt;
$ time echo {1..5} | xargs -n 1 -P 5 sleep

real    0m5.013s
user    0m0.003s
sys     0m0.014s
&lt;/pre&gt;
&lt;p&gt;Sequential sleep&lt;/p&gt;
&lt;pre&gt;
$ time echo {1..5} | xargs -n 1 sleep

real    0m15.022s
user    0m0.004s
sys     0m0.015s
&lt;/pre&gt;
&lt;p&gt;If you are interested in using xargs for parallel computation also consider &lt;a href=&quot;http://www.gnu.org/software/parallel/&quot;&gt;&lt;span class=&quot;caps&quot;&gt;GNU&lt;/span&gt; parallel&lt;/a&gt;. xargs has the advantage of being installed by default on most systems, and easily available on &lt;span class=&quot;caps&quot;&gt;BSD&lt;/span&gt; and OS X, but parallel has some really nice features.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Backing up MySQL using EBS snapshots</title>
   <link href="http://offbytwo.github.com/2011/03/28/backing-up-mysql-using-ebs-snapshots.html"/>
   <updated>2011-03-28T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2011/03/28/backing-up-mysql-using-ebs-snapshots</id>
   <content type="html">&lt;h1&gt;Backing up MySQL using &lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt; snapshots&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;28 March 2011 &amp;#8211; Dallas&lt;/p&gt;
&lt;p&gt;For running applications that use MySQL on &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; I highly recommend taking a look at Amazon&amp;#8217;s &lt;a href=&quot;http://aws.amazon.com/rds/&quot;&gt;Relational Database Service&lt;/a&gt; (&lt;span class=&quot;caps&quot;&gt;RDS&lt;/span&gt;). The smallest &lt;span class=&quot;caps&quot;&gt;RDS&lt;/span&gt; instance, however, costs around $80 per month, which is prohibitively expensive for small side projects. &lt;span class=&quot;caps&quot;&gt;RDS&lt;/span&gt; might also not offer all the control necessary for more complicated database requirements. Not using &lt;span class=&quot;caps&quot;&gt;RDS&lt;/span&gt;, however, comes with the overhead of having to administer a MySQL installation. Specifically, we are going to take a look at backing up a MySQL server on &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;In addition to MySQL&amp;#8217;s built-in database dump commands, we can also leverage &lt;a href=&quot;http://aws.amazon.com/ebs/&quot;&gt;Elastic Block Store&lt;/a&gt; (&lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt;) snapshots in order to create a copy of the entire data volume. Amazon takes care of doing snapshots incrementally by only copying modified blocks. In this guide we are going to focus on how best to back up a MySQL database using &lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt; snapshots.&lt;/p&gt;
&lt;h3&gt;Consistent Snapshots&lt;/h3&gt;
&lt;p&gt;Before we take a snapshot of an &lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt; drive we need to ensure that we are getting a consistent view of the data at that point in time. In order to achieve this I recommend using the &lt;span class=&quot;caps&quot;&gt;XFS&lt;/span&gt; filesystem, which allows us to freeze writes to the filesystem. We also need to lock the MySQL database and flush all of its data to disk. Eric Hammond has written an excellent tool called &lt;a href=&quot;http://alestic.com/2009/09/ec2-consistent-snapshot&quot;&gt;ec2-consistent-snapshot&lt;/a&gt; that allows you to freeze &lt;span class=&quot;caps&quot;&gt;XFS&lt;/span&gt;, flush MySQL to disk and lock it, take an &lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt; snapshot, and restore writes to &lt;span class=&quot;caps&quot;&gt;XFS&lt;/span&gt; and MySQL.&lt;/p&gt;
&lt;h3&gt;Security considerations&lt;/h3&gt;
&lt;p&gt;We also need to provide ec2-consistent-snapshot with the &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; credentials needed to create the snapshot. Because this tool will be running on the database machine, and because it should run automatically in an unattended fashion, we need to upload our &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; access keys to the database host. This poses a security risk that we can mitigate by placing adequate access control on the file containing our credentials. We can also take advantage of Amazon&amp;#8217;s &lt;a href=&quot;http://aws.amazon.com/iam/&quot;&gt;Identity and Access Management&lt;/a&gt; (&lt;span class=&quot;caps&quot;&gt;IAM&lt;/span&gt;) to create a sub-account with more limited permissions. Specifically, we can create a sub-account that is only allowed to make the CreateSnapshot &lt;span class=&quot;caps&quot;&gt;API&lt;/span&gt; call, and place those credentials on the database host. This way, if the credentials are compromised the attacker is only able to create snapshots rather than having full access to our account.&lt;/p&gt;
&lt;h3&gt;Boto&lt;/h3&gt;
&lt;p&gt;My favorite tool for interacting with Amazon&amp;#8217;s &lt;span class=&quot;caps&quot;&gt;API&lt;/span&gt; programmatically is &lt;a href=&quot;http://boto.cloudhackers.com/&quot;&gt;boto&lt;/a&gt;, a Python interface to &lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; started by Mitch Garnaat. Boto can be installed using pip, easy_install or the OS&amp;#8217;s package manager.&lt;/p&gt;
&lt;h3&gt;Using &lt;span class=&quot;caps&quot;&gt;IAM&lt;/span&gt; policies&lt;/h3&gt;
&lt;p&gt;I recommend creating groups with very narrow permissions and then creating users that can be added to one or more groups based on the capabilities needed for that user. We&amp;#8217;ll start by creating a group for the create snapshot capability.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
import boto

iam = boto.connect_iam(&amp;lt;access key&amp;gt;, &amp;lt;secret key&amp;gt;)
iam.create_group(&#39;snapshoters&#39;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We then need to create a group policy granting access to the CreateSnapshot &lt;span class=&quot;caps&quot;&gt;API&lt;/span&gt; call. Policies are represented as &lt;span class=&quot;caps&quot;&gt;JSON&lt;/span&gt; fragments, which can be generated using Amazon&amp;#8217;s &lt;a href=&quot;http://awspolicygen.s3.amazonaws.com/policygen.html&quot;&gt;&lt;span class=&quot;caps&quot;&gt;AWS&lt;/span&gt; Policy Generator&lt;/a&gt;. Here is what the policy that allows CreateSnapshot looks like.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
{
  &quot;Statement&quot;: [
    {
      &quot;Sid&quot;: &quot;Stmt3121317131060&quot;,
      &quot;Action&quot;: [
        &quot;ec2:CreateSnapshot&quot;
      ],
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the past I have had difficulties copying and pasting this &lt;span class=&quot;caps&quot;&gt;JSON&lt;/span&gt; fragment into a Python interpreter due to formatting issues. I was able to work around these issues by leveraging the fact that &lt;span class=&quot;caps&quot;&gt;JSON&lt;/span&gt; is for the most part also valid Python syntax. This means that we can do the following.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
create_snapshot_policy = {
  &quot;Statement&quot;: [
    {
      &quot;Sid&quot;: &quot;Stmt3121317131060&quot;,
      &quot;Action&quot;: [
        &quot;ec2:CreateSnapshot&quot;
      ],
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can then use Python&amp;#8217;s built-in &lt;span class=&quot;caps&quot;&gt;JSON&lt;/span&gt; library to dump this policy back to &lt;span class=&quot;caps&quot;&gt;JSON&lt;/span&gt; when making the &lt;span class=&quot;caps&quot;&gt;API&lt;/span&gt; call. Here is how to add this new policy to the &amp;#8216;snapshoters&amp;#8217; group.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
import json

policy_txt = json.dumps(create_snapshot_policy)

iam.put_group_policy(&#39;snapshoters&#39;, &#39;CreateSnapshot&#39;, policy_txt)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can then create a new user and add this new user to this group.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
iam.create_user(&#39;dbbackup&#39;)
iam.add_user_to_group(&#39;snapshoters&#39;, &#39;dbbackup&#39;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally we need to generate access keys for this new user that we can feed to ec2-consistent-snapshot.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
iam.create_access_key(&#39;dbbackup&#39;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Putting it all together&lt;/h3&gt;
&lt;p&gt;We can now create the following script at /sbin/backup-database-volume. Replace the values in angle brackets with the access key and secret key from the previous step. Also use the id of the &lt;span class=&quot;caps&quot;&gt;EBS&lt;/span&gt; volume containing MySQL&amp;#8217;s data. This script assumes that this volume is mounted at /vol.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
#!/bin/sh

description=&quot;mysql-data-`date +%Y-%m-%d-%H-%M-%S`&quot;

ec2-consistent-snapshot \
--aws-access-key-id=&amp;lt;ACCESS-KEY&amp;gt; \
--aws-secret-access-key=&amp;lt;SECRET-KEY&amp;gt; \
--description=&quot;$description&quot; \
--mysql --freeze-filesystem=&#39;/vol&#39; &amp;lt;VOLUME-ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We also need to restrict this file so that it can only be viewed, modified or executed by root. Run the following&lt;/p&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;$ sudo chown root /sbin/backup-database-volume&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;$ sudo chmod 700 /sbin/backup-database-volume&lt;/code&gt;&lt;/pre&gt;
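&lt;p&gt;To make the backup unattended you can schedule the script from root&amp;#8217;s crontab (for example via &lt;code&gt;sudo crontab -e&lt;/code&gt;). The nightly 03:30 schedule below is only an example; pick whatever frequency suits your recovery needs.&lt;/p&gt;

```
# m h dom mon dow  command
30 3 * * * /sbin/backup-database-volume
```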
&lt;p&gt;Credentials for connecting to MySQL can be specified either in a .my.cnf file or by adding &lt;code&gt;--mysql-username&lt;/code&gt; and &lt;code&gt;--mysql-password&lt;/code&gt; to the ec2-consistent-snapshot command. These and other options of ec2-consistent-snapshot can be found by running&lt;/p&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;$ man ec2-consistent-snapshot&lt;/code&gt;&lt;/pre&gt;</content>
 </entry>
 
 <entry>
   <title>Getting started (quickly) with OpenStack's Swift</title>
   <link href="http://offbytwo.github.com/2010/11/10/getting-started-with-openstack-swift.html"/>
   <updated>2010-11-10T00:00:00-08:00</updated>
   <id>http://offbytwo.com/2010/11/10/getting-started-with-openstack-swift</id>
   <content type="html">&lt;h1&gt;Getting started (quickly) with OpenStack&amp;#8217;s Swift&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;10 November 2010 &amp;#8211; San Antonio&lt;/p&gt;
&lt;p&gt;I was at the &lt;a href=&quot;http://www.openstack.org/blog/2010/09/the-second-openstack-design-conference/&quot;&gt;OpenStack Design Summit&lt;/a&gt; this week and I wanted to check out the latest release of Swift, OpenStack&amp;#8217;s blob storage technology. The &lt;a href=&quot;http://swift.openstack.org/development_saio.html&quot;&gt;Swift All In One&lt;/a&gt; guide contains everything necessary to get started with Swift on a development machine, but it involves far too much effort for my tastes.&lt;/p&gt;
&lt;p&gt;I decided to quickly automate the entire process with a single bash script. So now you can get started with Swift with two commands on a new Ubuntu 10.10 VM&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wget http://offbytwo.com/scripts/try-swift-on-ubuntu.sh&lt;/code&gt;&lt;br /&gt;
&lt;code&gt;bash try-swift-on-ubuntu.sh devauth&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;There is actually a far better way to get Swift up and running in an automated fashion using Chef (and possibly Vagrant), but I&amp;#8217;ll leave that for another post.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Working securely for multiple clients</title>
   <link href="http://offbytwo.github.com/2010/02/06/working-securely-for-multiple-clients.html"/>
   <updated>2010-02-06T00:00:00-08:00</updated>
   <id>http://offbytwo.com/2010/02/06/working-securely-for-multiple-clients</id>
   <content type="html">&lt;h1&gt;Working securely for multiple clients&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;06 February 2010 &amp;#8211; Dallas&lt;/p&gt;
&lt;p&gt;As a consultant I often end up working with sensitive client information on my laptop. Since laptops have a tendency to get misplaced or stolen, I need to keep this client information encrypted. At the same time, however, I need to be able to work with this data effectively without having to jump through too many hoops.&lt;/p&gt;
&lt;p&gt;I believe that effective security requires a degree of convenience, since most people will inevitably circumvent security measures that interfere with their work. Therefore working securely on a laptop needs to be as convenient and natural as possible. I have a system I have developed over time that allows me to easily work with confidential client information on my laptop. I hope this guide will be useful to anyone in a similar situation. The specific examples in this post involve OS X, TrueCrypt and Maven. It should be possible however to extrapolate and apply the same techniques to other technologies you might be using.&lt;/p&gt;
&lt;h3&gt;The ideal state&lt;/h3&gt;
&lt;p&gt;At this point you might be wondering what I consider to be secure and yet convenient, so I&amp;#8217;ll go ahead and describe my ideal setup. When I fire up a new Terminal I would like to type a single command to start working on a particular client project, such as&lt;/p&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;$ work_on_project_a&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This single command should mount my encrypted volume if not already mounted, change directory to the respective project, and alter my &lt;span class=&quot;caps&quot;&gt;PATH&lt;/span&gt; and environment variables accordingly so that any tools I use will just work as expected for the given project. So let&amp;#8217;s get started.&lt;/p&gt;
&lt;h3&gt;Mounting the encrypted volume on demand&lt;/h3&gt;
&lt;p&gt;For storing information securely I prefer using &lt;a href=&quot;http://www.truecrypt.org&quot;&gt;TrueCrypt&lt;/a&gt; because it is free, reliable and cross platform. Here is an example function that will check if our TrueCrypt volume is mounted, and attempt to mount it if not. In addition to easily mounting the encrypted volume, I want to also be able to quickly unmount it, since leaving a volume mounted unnecessarily increases the risk of compromise. Let&amp;#8217;s also add a function to work_on_project_a that allows us to unmount from the command line.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
function work_on_project_a() {
    TRUECRYPT=&quot;/Applications/TrueCrypt.app/Contents/MacOS/TrueCrypt&quot;
    SOURCE=&quot;/Volumes/someclient.tc&quot;
    DESTINATION=&quot;/Volumes/SomeClient&quot;
    
    function dismount() {
        $TRUECRYPT -d $SOURCE
    }
    
    if [ -z &quot;`ls $DESTINATION`&quot; ]; then
        echo &quot;Trying to mount...&quot;
        $TRUECRYPT --mount $SOURCE
    fi
    
    cd $DESTINATION/project_a
    
    # more stuff will go here
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Configure the environment&lt;/h3&gt;
&lt;p&gt;I want any project specific scripts to automatically be in my &lt;span class=&quot;caps&quot;&gt;PATH&lt;/span&gt; after activating a project. Let&amp;#8217;s make that happen by adding the following to the work_on_project_a function.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
    OLD_PATH=&quot;$PATH&quot;
    export PATH=&quot;$DESTINATION/bin:$PATH&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In addition to configuring &lt;span class=&quot;caps&quot;&gt;PATH&lt;/span&gt;, this might be a good place to configure other environment variables, such as JAVA_HOME if your project requires a specific version of Java, etc.&lt;/p&gt;
&lt;h3&gt;Configure maven to store artifacts securely&lt;/h3&gt;
&lt;p&gt;If you are using Maven, or a similar tool that automatically downloads and installs artifacts, then I recommend configuring it to store all artifacts securely inside of the encrypted volume. To do so, create a maven-settings.xml file under your encrypted volume and override the localRepository setting.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
    &amp;lt;localRepository&amp;gt;/Volumes/SomeClient/mavenRepo&amp;lt;/localRepository&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You might also want to configure the mirrors in case you have a project specific repository.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
    &amp;lt;mirrors&amp;gt;
        &amp;lt;mirror&amp;gt;
            &amp;lt;id&amp;gt;internal&amp;lt;/id&amp;gt;
            &amp;lt;name&amp;gt;Internal Maven Repo&amp;lt;/name&amp;gt;
            &amp;lt;url&amp;gt;http://internal.maven.repo/&amp;lt;/url&amp;gt;
            &amp;lt;mirrorOf&amp;gt;central&amp;lt;/mirrorOf&amp;gt;
        &amp;lt;/mirror&amp;gt;
    &amp;lt;/mirrors&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can override Maven&amp;#8217;s global settings file in order to pick up the appropriate mirrors and local repository. The best way to do this is to create a mvn script inside of your project&amp;#8217;s bin folder that contains the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
#!/bin/sh

/usr/bin/mvn -s /Volumes/SomeClient/maven-settings.xml &quot;$@&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Further configurations&lt;/h3&gt;
&lt;p&gt;If you need to perform further environment configuration you can do so in the work_on_project_a function. For example, if you have Nginx installed you might want to override the default Nginx configuration with the project-specific one. I find this easier than trying to juggle multiple configurations in one Nginx file with vhosts. So for example I would use&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
    sudo rm $NGINX_DESTINATION
    sudo ln -sf $DESTINATION/nginx.conf $NGINX_DESTINATION
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can do something similar with Apache, etc.&lt;/p&gt;
&lt;h3&gt;Full example of work_on_project_a&lt;/h3&gt;
&lt;p&gt;Here is a full example that goes above and beyond what we described so far by adding a function to clean up after ourselves, as well as modifying PS1 to show the project we&amp;#8217;re currently working on.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
function work_on_project_a() {
    TRUECRYPT=&quot;/Applications/TrueCrypt.app/Contents/MacOS/TrueCrypt&quot;
    SOURCE=&quot;/Volumes/someclient.tc&quot;
    DESTINATION=&quot;/Volumes/SomeClient&quot;
    NGINX_DESTINATION=&quot;/usr/local/nginx/conf/nginx.conf&quot;
    
    if [ -z &quot;`ls $DESTINATION`&quot; ]; then
        echo &quot;Trying to mount...&quot;
        $TRUECRYPT --mount $SOURCE
    fi
    
    OLD_PATH=&quot;$PATH&quot;
    OLD_JAVA_HOME=&quot;$JAVA_HOME&quot;
    OLD_PS1=&quot;$PS1&quot;
    
    export PATH=&quot;$DESTINATION/bin:$PATH&quot;
    export JAVA_HOME=&quot;/path/to/java/1.5&quot;
    export PS1=&quot;\[\033[01;32m\]PROJ_A:\[\033[01;34m\]\W$\[\033[0m\] &quot;
    
    sudo rm $NGINX_DESTINATION
    sudo ln -sf $DESTINATION/nginx.conf $NGINX_DESTINATION
    
    cd $DESTINATION/project_a
    
    function deactivate {
        export PATH=$OLD_PATH
        export JAVA_HOME=$OLD_JAVA_HOME
        export PS1=$OLD_PS1
        cd ~
    }
    
    function dismount() {
        $TRUECRYPT -d $SOURCE
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this doesn&amp;#8217;t clean up nginx&amp;#8217;s configuration, but that&amp;#8217;s OK; the next project I activate will configure nginx properly.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;The function we just developed allows me to conveniently start working on a project by mounting the encrypted volume on demand and setting up my environment accordingly. It also provides two functions: one to deactivate this environment and return to the default, and one to unmount the encrypted volume when done. I hope you find this useful.&lt;/p&gt;
 </entry>
 
 <entry>
   <title>Running nosetests as a git pre-commit hook</title>
   <link href="http://offbytwo.github.com/2008/05/22/running-nosetests-as-a-git-pre-commit-hook.html"/>
   <updated>2008-05-22T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2008/05/22/running-nosetests-as-a-git-pre-commit-hook</id>
   <content type="html">&lt;h1&gt;Running nosetests as a git pre-commit hook&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;22 May 2008 &amp;#8211; Chicago&lt;/p&gt;
&lt;p&gt;I&amp;#8217;ve started using git for all my development recently (since it integrates so nicely with svn). I wanted to experiment with running my tests as a pre-commit hook in git. In case you&amp;#8217;re curious, all the hooks in git live in the hooks folder inside of .git&lt;/p&gt;
&lt;p&gt;Inside of this folder you will see various example scripts. The names should make it obvious when each hook is supposed to run. For example the pre-commit file will run before a commit (before you&amp;#8217;re even asked for the commit message). There are also hooks that can intercept the commit message, run after updates happen, etc. By default none of these files are executable, so git doesn&amp;#8217;t actually run them. If you would like to execute a hook simply put your code in the correct file and mark it executable.&lt;/p&gt;
&lt;p&gt;In the case of the pre-commit hook, git will abort the commit if the pre-commit file returns with a status code other than 0. By default this file contains some perl code that checks for lines with trailing spaces and lines that have a space before a tab at the beginning. You can safely remove this code (I found the trailing space check to be annoying).&lt;/p&gt;
&lt;p&gt;So let&amp;#8217;s say you want to run your unit tests before each commit (and abort the commit if they fail). I&amp;#8217;m going to use nose (a Python unit testing framework) as an example. To run your nose tests you can simply issue the nosetests command. This will discover your tests, run them and exit with a status code of 0 if everything passed. So you can simply put&lt;/p&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;
    #!/usr/bin/env bash
    nosetests
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;in your pre-commit file and now your unit tests will run and your commit will abort if there are test failures. This works well, unless you have tests that you expect to fail but still have something you would like to commit. You have two choices: either remove the executable bit from the pre-commit file or adjust your script to give you some options. Here is a little script I put together that prompts you to confirm whether you would like to commit anyway in the event of test failures. Keep in mind I know very little bash scripting, so if there is a better way to do this please let me know.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
    nosetests
    code=$?

    if [ &quot;$code&quot; == &quot;0&quot; ]; then
        exit 0
    fi

    echo -n &quot;Not all tests pass. Commit (y/n): &quot;
    read response
    if [ &quot;$response&quot; == &quot;y&quot; ]; then
        exit 0
    fi

    exit $code
&lt;/code&gt;&lt;/pre&gt;
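&lt;p&gt;Whichever variant you use, remember that git only runs a hook once the file is marked executable. Here is a quick sketch of the setup, using a temporary directory as a stand-in for your repository&amp;#8217;s .git/hooks folder:&lt;/p&gt;

```shell
# Stand-in for your repository's .git/hooks directory
hooks_dir="$(mktemp -d)"

# Write a minimal pre-commit hook that just runs the test suite
printf '#!/bin/sh\nnosetests\n' > "$hooks_dir/pre-commit"

# git silently skips hooks that are not marked executable
chmod +x "$hooks_dir/pre-commit"
```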
&lt;p&gt;Hope this helps.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Show IP address of VM as console pre-login message</title>
   <link href="http://offbytwo.github.com/2008/05/09/show-ip-address-of-vm-as-console-pre-login-message.html"/>
   <updated>2008-05-09T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2008/05/09/show-ip-address-of-vm-as-console-pre-login-message</id>
   <content type="html">&lt;h1&gt;Show IP address of VM as console pre-login message&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;9 May 2008 &amp;#8211; Chicago&lt;/p&gt;
&lt;p&gt;In case you didn&amp;#8217;t know the pre-login message you see at a Linux console typically comes from /etc/issue&lt;/p&gt;
&lt;p&gt;You can customize this file to alter the message with some escape codes that will show things like the current date and time, machine name and domain, kernel version, etc. But one thing you can&amp;#8217;t easily display is the IP address of a machine. Showing the IP address is especially useful when building a virtual machine that will use &lt;span class=&quot;caps&quot;&gt;DHCP&lt;/span&gt;, like the Ubuntu development VM I use on my Macbook Pro. This way I can start VMware Fusion, see the IP address of the VM and then login over &lt;span class=&quot;caps&quot;&gt;SSH&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;In order to get the IP address to show in /etc/issue I needed to write a custom script that will rewrite /etc/issue with the IP address when the network interface is brought up. The first step was writing a simple script that will output the current IP address when run (by looking at the output of ifconfig).&lt;/p&gt;
&lt;pre class=&quot;terminal&quot;&gt;&lt;code&gt;/sbin/ifconfig | grep &quot;inet addr&quot; | grep -v &quot;127.0.0.1&quot; | awk &#39;{ print $2 }&#39; | awk -F: &#39;{ print $2 }&#39;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above script will run ifconfig and print out the IP address (after filtering out the localhost interface). I saved this script to &lt;code&gt;/usr/local/bin/get-ip-address&lt;/code&gt;. In order to get this into /etc/issue I decided to first copy &lt;code&gt;/etc/issue&lt;/code&gt; to &lt;code&gt;/etc/issue-standard&lt;/code&gt;, then create the following script that when run will overwrite /etc/issue with the contents of &lt;code&gt;/etc/issue-standard&lt;/code&gt; + IP address.&lt;/p&gt;
&lt;h3&gt;Debian/Ubuntu&lt;/h3&gt;
&lt;p&gt;Save the following script as &lt;code&gt;/etc/network/if-up.d/show-ip-address&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
#!/bin/sh
if [ &quot;$METHOD&quot; = loopback ]; then
    exit 0
fi

# Only run from ifup.
if [ &quot;$MODE&quot; != start ]; then
    exit 0
fi

cp /etc/issue-standard /etc/issue
/usr/local/bin/get-ip-address &amp;gt;&amp;gt; /etc/issue
echo &quot;&quot; &amp;gt;&amp;gt; /etc/issue
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and don&amp;#8217;t forget to mark it executable.&lt;/p&gt;
&lt;h3&gt;RedHat/CentOS&lt;/h3&gt;
&lt;p&gt;Save the following script as &lt;code&gt;/sbin/ifup-local&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
#!/bin/sh

if [ &quot;$1&quot; = lo ]; then
    exit 0
fi

cp /etc/issue-standard /etc/issue
/usr/local/bin/get-ip-address &amp;gt;&amp;gt; /etc/issue
echo &quot;&quot; &amp;gt;&amp;gt; /etc/issue
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and don&amp;#8217;t forget to mark it executable.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Working with LTPA</title>
   <link href="http://offbytwo.github.com/2007/08/21/working-with-ltpa.html"/>
   <updated>2007-08-21T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2007/08/21/working-with-ltpa</id>
   <content type="html">&lt;h1&gt;Working with Lightweight Third Party Authentication (&lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt;)&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;21 August 2007 &amp;#8211; Chicago&lt;/p&gt;
&lt;p&gt;Lightweight Third-Party Authentication (&lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt;), is an single sign-on technology used in &lt;span class=&quot;caps&quot;&gt;IBM&lt;/span&gt; WebSphere and Lotus Domino products. A server that is configured to use the &lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt; authentication will send a session cookie to the browser after sucessfuly authenticating a user. This cookie is only valid for one browsing session. This cookie contains the &lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt; token.&lt;/p&gt;
&lt;p&gt;A user with a valid &lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt; cookie can access a server that is a member of the same authentication domain as the first server and will be automatically authenticated. The cookies themselves contain information about the user that has been authenticated, the realm the user was authenticated to (such as an &lt;span class=&quot;caps&quot;&gt;LDAP&lt;/span&gt; server) and a timestamp. All of this information bis encrypted with a shared 3DES key and signed by a public/private key pair. This is all fine until you are trying to perform some troubleshooting and you realize you can&amp;#8217;t look inside of these cookies.&lt;/p&gt;
&lt;p&gt;One day I was in need of decrypting some tokens and couldn&amp;#8217;t find any information on the subject so I spent some time studying the format of the cookie and wrote some code to decrypt them. You can check out the code from &lt;a href=&quot;http://github.com/cosmin/samples/tree/master/LTPAUtils/&quot;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The &lt;span class=&quot;caps&quot;&gt;LTPA&lt;/span&gt; cookie is encrypted with a 3DES key in DESede/&lt;span class=&quot;caps&quot;&gt;ECB&lt;/span&gt;/PKCS5Padding mode. If you are extracting the key from a Websphere or other &lt;span class=&quot;caps&quot;&gt;IBM&lt;/span&gt; server the key is likely protected with a password. The real key is encrypted also using 3DES in DESede/&lt;span class=&quot;caps&quot;&gt;ECB&lt;/span&gt;/PKCS5Padding mode with the &lt;span class=&quot;caps&quot;&gt;SHA&lt;/span&gt; hash of the supplied password padded with 0X0 up to 24 bytes. To decrypt the actual token you can take the password, generate a 3DES key, decrypt the encrypted key and then decrypt the cookie data. There is also a public/private key pair being used to sign the cookie. Since I had no intent in validating that the cookie is properly signed or in creating real cookies I did not spend anytime investigating the signature portion of the cookie. Drop me a note if you find the code useful or if you have some improvements you would like to share.&lt;/p&gt;</content>
 </entry>
 
 <entry>
   <title>Keeping SSH sessions alive</title>
   <link href="http://offbytwo.github.com/2007/08/20/keeping-ssh-sessions-alive.html"/>
   <updated>2007-08-20T00:00:00-07:00</updated>
   <id>http://offbytwo.com/2007/08/20/keeping-ssh-sessions-alive</id>
   <content type="html">&lt;h1&gt;Keeping &lt;span class=&quot;caps&quot;&gt;SSH&lt;/span&gt; sessions alive&lt;/h1&gt;
&lt;p class=&quot;meta&quot;&gt;20 August 2007 &amp;#8211; Chicago&lt;/p&gt;
&lt;p&gt;This is one of those things that I set up once a year when I get a new machine and then always seem to forget the next time around, so I&amp;#8217;ll post it here as a reference to myself and perhaps also help the occasional Googler. If you are having problems with your &lt;span class=&quot;caps&quot;&gt;SSH&lt;/span&gt; connection getting dropped after a certain amount of time (usually caused by &lt;span class=&quot;caps&quot;&gt;NAT&lt;/span&gt; firewalls and home routers), you can use the following setting to keep your connection alive&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
Host *
    ServerAliveInterval 180
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can place this either in &lt;code&gt;~/.ssh/config&lt;/code&gt; for user level settings or in &lt;code&gt;/etc/ssh/ssh_config&lt;/code&gt; for machine level settings. You may also replace * with a specific hostname or something like *.example.com to use on all machines within a domain.&lt;br /&gt;
This is the cleanest way of making sure your connections stay up and doesn&amp;#8217;t require changes to the destination servers (over which you may not have control). I am not sure however how this interacts with the IdleTimeout setting on the server. I am guessing that a server should be able to enforce its own policy about how long users are allowed to remain idle for security reasons, so you might still get disconnected after a certain amount of time.&lt;/p&gt;</content>
 </entry>
 
 
</feed>
