<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <atom:link href="http://www.jimmycuadra.com/posts.atom" rel="self" type="application/rss+xml"/>
    <title>Jimmy Cuadra</title>
    <description>Writing and screencasts on Ruby, JavaScript, HTML, and CSS</description>
    <link>http://www.jimmycuadra.com/posts</link>
    <language>en</language>
    <pubDate>Wed, 11 Mar 2015 05:16:48 -0700</pubDate>
    <lastBuildDate>Wed, 11 Mar 2015 05:16:48 -0700</lastBuildDate>
    <item>
      <title>Option types and Ruby</title>
      <description>&lt;p&gt;I've been learning the &lt;a href="http://www.rust-lang.org/"&gt;Rust&lt;/a&gt; programming language over the last several months. One of the great things about learning a new programming language is that it expands your understanding of programming in general by exposing you to new ideas. Sometimes new ideas can result in lightbulb moments for programming in languages you already know. One of the things learning Rust has made me realize is how much I wish Ruby had sum types.&lt;/p&gt;

&lt;p&gt;A sum type is a type that has a number of "variants." These variants are alternate constructors for the type that can be differentiated from each other to confer different meanings, while still being the enclosing type. In Rust, sum types are provided through &lt;code&gt;enum&lt;/code&gt;. An enum value can be destructured using pattern matching via Rust's &lt;code&gt;match&lt;/code&gt; expression.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;enum Fruit {
  Apple,
  Banana,
  Cherry,
}

fn print_fruit_name(fruit: Fruit) {
  match fruit {
    Fruit::Apple =&amp;gt; println!("Found an apple!"),
    Fruit::Banana =&amp;gt; println!("Found a banana!"),
    Fruit::Cherry =&amp;gt; println!("Found a cherry!"),
  }
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;We define an enum, &lt;code&gt;Fruit&lt;/code&gt;, with three variants. The &lt;code&gt;print_fruit_name&lt;/code&gt; function takes a &lt;code&gt;Fruit&lt;/code&gt; value and then matches on it, printing a different message depending on which variant this particular &lt;code&gt;Fruit&lt;/code&gt; is. For our purposes here, the reason we use &lt;code&gt;match&lt;/code&gt; instead of a chain of if/else conditions is that &lt;code&gt;match&lt;/code&gt; guarantees that all variants must be accounted for. If one of the three arms of the match were omitted, the program would not compile, citing a non-exhaustive pattern match.&lt;/p&gt;

&lt;p&gt;Enum variants can also take arguments that allow them to wrap other values. The most common, and probably most useful, example of this is the &lt;code&gt;Option&lt;/code&gt; type. This type represents the idea of an operation that sometimes produces a meaningful value and sometimes produces nothing. The same concept goes by different names in other languages; in Haskell, it's called &lt;code&gt;Maybe&lt;/code&gt;.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;pub enum Option&amp;lt;T&amp;gt; {
  Some(T),
  None,
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;An &lt;code&gt;Option&lt;/code&gt; has two variants: &lt;code&gt;Some&lt;/code&gt;, wrapping an arbitrary value of type &lt;code&gt;T&lt;/code&gt;, or &lt;code&gt;None&lt;/code&gt;, representing nothing. An optional value could then be returned from a function like so:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;fn find(id: u8) -&amp;gt; Option&amp;lt;User&amp;gt; {
  if user_record_for_id_exists(id) {
    Some(load_user(id))
  } else {
    None
  }
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Code calling this method would then have to explicitly account for both possible outcomes:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;match find(1) {
  Some(user) =&amp;gt; user.some_action(),
  None =&amp;gt; return,
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;What you do in the two cases is, of course, up to you and dependent on the situation. The point is that the caller must handle each case explicitly.&lt;/p&gt;

&lt;p&gt;How does this relate to Ruby? Well, how often have you seen this exception when working on a Ruby program?&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;NoMethodError: undefined method `foo' for nil:NilClass
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Chances are, you've seen this a million times, and it's one of the most annoying errors. Part of what makes it so bad is that the associated stack trace may not make it clear where the &lt;code&gt;nil&lt;/code&gt; was originally introduced. Ruby code tends to use &lt;code&gt;nil&lt;/code&gt; quite liberally. Rails frequently follows the convention of methods returning &lt;code&gt;nil&lt;/code&gt; to indicate either the lack of a value or the failure of some operation. Because there are loose &lt;code&gt;nil&lt;/code&gt;s everywhere, they end up in places you don't expect, tripping you up.&lt;/p&gt;
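&lt;p&gt;To make the failure mode concrete, here's a minimal, self-contained illustration (the &lt;code&gt;find_widget&lt;/code&gt; method and its data are invented for this example):&lt;/p&gt;

```ruby
# A made-up lookup following the common Ruby convention of
# returning nil when nothing matches.
def find_widget(id)
  { 1 => "sprocket" }[id]
end

widget = find_widget(42) # no such id, so this silently returns nil
begin
  widget.upcase          # the nil surfaces here, far from where it was produced
rescue NoMethodError => e
  puts e.message         # mentions upcase and nil, not find_widget
end
```

&lt;p&gt;The error points at the call to &lt;code&gt;upcase&lt;/code&gt;, not at the lookup that produced the &lt;code&gt;nil&lt;/code&gt;, which is exactly what makes these failures hard to track down.&lt;/p&gt;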

&lt;p&gt;This problem is not unique to Ruby; it appears in countless other languages. Java programmers rue the &lt;code&gt;NullPointerException&lt;/code&gt;, and &lt;a href="https://en.wikipedia.org/wiki/Tony_Hoare"&gt;Tony Hoare&lt;/a&gt; refers to the issue as his billion-dollar mistake.&lt;/p&gt;

&lt;p&gt;What, then, might we learn from the concept of an option type in regards to Ruby? We could certainly simulate an Option type by creating our own class that wraps another value, but that doesn't really solve anything since it can't force callers to explicitly unwrap it. You'd simply end up with:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;NoMethodError: undefined method `foo' for #&amp;lt;Option:0x007fddcc4c1ab0&amp;gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;But we do have a mechanism in Ruby that will stop a caller cold in its tracks if it doesn't handle a particular case: exceptions. While it's a common adage not to "use exceptions for control flow," let's take a look at how exceptions might be used to bring some of the benefits of avoiding &lt;code&gt;nil&lt;/code&gt; through sum types. Imagine this example using an Active-Record-like &lt;code&gt;User&lt;/code&gt; object:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="keyword"&gt;def&lt;/span&gt; &lt;span class="function"&gt;message_user&lt;/span&gt;(email, message_content)
  user = &lt;span class="constant"&gt;User&lt;/span&gt;.find_by_email(email)
  message = &lt;span class="constant"&gt;Message&lt;/span&gt;.new(message_content)
  message.send_to(user)
&lt;span class="keyword"&gt;end&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;find_by_email&lt;/code&gt; method will try looking up a user from the database by their email address and return either a user object or &lt;code&gt;nil&lt;/code&gt;. It's easy to forget this and move along assuming our &lt;code&gt;user&lt;/code&gt; variable is bound to a user object. In the case where no user is found for the provided email address, we end up passing &lt;code&gt;nil&lt;/code&gt; to &lt;code&gt;Message#send_to&lt;/code&gt;, which will crash our program, because that method always expects a user.&lt;/p&gt;

&lt;p&gt;One way around this is to check whether &lt;code&gt;user&lt;/code&gt; is &lt;code&gt;nil&lt;/code&gt; before proceeding. But again, this is easy to forget. If we control the implementation of the &lt;code&gt;User&lt;/code&gt; class, we can force callers to handle this case explicitly by raising an exception when no user is found instead of simply returning &lt;code&gt;nil&lt;/code&gt;.&lt;/p&gt;
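&lt;p&gt;One way the &lt;code&gt;User&lt;/code&gt; class might do that is sketched below (the &lt;code&gt;UserNotFound&lt;/code&gt; error, the in-memory record store, and the class's shape are all invented here; a real Active Record model would query a database instead):&lt;/p&gt;

```ruby
# Hypothetical error raised when no record matches the given email
# (defined via Class.new just to keep the snippet compact).
UserNotFound = Class.new(StandardError)

class User
  # Toy stand-in for a database table, keyed by email address.
  RECORDS = { "alice@example.com" => "Alice" }

  attr_reader :name

  def initialize(name)
    @name = name
  end

  # Raises instead of returning nil, so callers must handle the
  # missing-user case explicitly.
  def self.find_by_email(email)
    name = RECORDS[email]
    raise UserNotFound, "no user with email #{email}" if name.nil?
    new(name)
  end
end
```

&lt;p&gt;A failed lookup now raises at the lookup itself, and the caller can rescue it:&lt;/p&gt;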

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="keyword"&gt;def&lt;/span&gt; &lt;span class="function"&gt;message_user&lt;/span&gt;(email, message_content)
  user = &lt;span class="constant"&gt;User&lt;/span&gt;.find_by_email(email)
  message = &lt;span class="constant"&gt;Message&lt;/span&gt;.new(message_content)
  message.send_to(user)
&lt;span class="keyword"&gt;rescue&lt;/span&gt; &lt;span class="constant"&gt;UserNotFound&lt;/span&gt;
  logger.warn(&lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;Failed to send message to unknown user with email &lt;/span&gt;&lt;span class="inline"&gt;&lt;span class="inline-delimiter"&gt;#{&lt;/span&gt;email&lt;span class="inline-delimiter"&gt;}&lt;/span&gt;&lt;/span&gt;&lt;span class="content"&gt;.&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;)
&lt;span class="keyword"&gt;end&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Now &lt;code&gt;message_user&lt;/code&gt; explicitly handles the "none" case. Even if it didn't, an exception would be raised right where the &lt;code&gt;nil&lt;/code&gt; would otherwise have been introduced, so the program would crash with a far more useful error than the dreaded &lt;code&gt;NoMethodError&lt;/code&gt; on &lt;code&gt;nil&lt;/code&gt;. Forcing the caller to truly account for all cases is something Rust's pattern matching provides that isn't possible in Ruby, but using exceptions to fail earlier with better error messages gets us much of the practical benefit.&lt;/p&gt;

&lt;p&gt;There are other approaches to dealing with the propagation of &lt;code&gt;nil&lt;/code&gt; values in Ruby. Another well-known one is the null object pattern: returning a "dummy" object (in our example, something user-like) that responds to all the same messages as a real user but simply has no effect. Some people would argue that this is a more object-oriented or Rubyish approach, but I find that it introduces more complexity than it's worth.&lt;/p&gt;
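&lt;p&gt;For comparison, a null object version might look like this sketch (&lt;code&gt;NullUser&lt;/code&gt; and its interface are invented for illustration):&lt;/p&gt;

```ruby
# A "user" that responds to the real user's messages but does nothing.
class NullUser
  def name
    "(unknown user)"
  end

  def deliver(message)
    # Silently discard the message instead of raising.
    nil
  end
end

# A lookup that never returns nil: absent entries yield a NullUser.
def find_user(email, directory)
  directory.fetch(email) { NullUser.new }
end
```

&lt;p&gt;Callers no longer crash, but they also can't easily tell that nothing happened, which is the complexity trade-off mentioned above.&lt;/p&gt;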

&lt;p&gt;Using exceptions as part of your objects' interfaces forces callers to handle failure cases, and produces early, accurate errors when they don't, so you get quick feedback when something goes wrong.&lt;/p&gt;</description>
      <pubDate>Wed, 11 Mar 2015 05:16:48 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/option-types-and-ruby</link>
      <guid>http://www.jimmycuadra.com/posts/option-types-and-ruby</guid>
    </item>
    <item>
      <title>etcd 2.0 static bootstrapping on CoreOS and Vagrant</title>
      <description>&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://coreos.com/"&gt;CoreOS&lt;/a&gt; provides a pretty good setup for running a cluster of machines with &lt;a href="https://www.vagrantup.com/"&gt;Vagrant&lt;/a&gt;. You can find this setup at &lt;a href="https://github.com/coreos/coreos-vagrant"&gt;coreos/coreos-vagrant&lt;/a&gt;. Something I've found annoying, however, is that whenever you start a new cluster, you need to get a new discovery token from CoreOS's &lt;a href="https://coreos.com/docs/cluster-management/setup/cluster-discovery/"&gt;hosted discovery service&lt;/a&gt;. This is necessary for the &lt;a href="https://github.com/coreos/etcd"&gt;etcd&lt;/a&gt; instances running on each machine to find each other and form a quorum. The discovery token is written to the machines on initial boot via the cloud-config file named &lt;code&gt;user-data&lt;/code&gt;. If you destroy the machines and recreate them, you need to use a fresh discovery token. This didn't sit right with me, as I want to check everything into version control, and didn't want to have a lot of useless commits changing the discovery token every time I recreated the cluster.&lt;/p&gt;

&lt;h2&gt;The solution&lt;/h2&gt;

&lt;p&gt;Fortunately, etcd doesn't require the hosted discovery service. You can also bootstrap etcd statically if you know in advance the IPs and ports everything will run on. It turns out that CoreOS's Vagrantfile is already configured to give each machine a static IP, so these IPs can simply be hardcoded into the cloud-config. There's one more snag: etcd 0.4.6 (the version that currently ships with CoreOS) gets confused if the list of IPs you provide when bootstrapping includes the current machine. That would mean the cloud-config for each machine would have to be slightly different, because each would have to include the whole list minus itself. Without introducing an additional layer of abstraction of your own, there isn't an easy way to produce a dynamic cloud-config file that does this. Fortunately, the &lt;a href="https://coreos.com/blog/etcd-2.0-release-first-major-stable-release/"&gt;newly released etcd 2.0.0&lt;/a&gt; improves the static bootstrapping story by allowing you to provide the full list of IPs on every machine. Because etcd 2.0 doesn't ship with CoreOS yet, we'll run it in a container.&lt;/p&gt;

&lt;p&gt;For this example, we'll use a cluster of three machines, just to keep the cloud-config a bit shorter. Five machines is the recommended size for most uses. Assuming you already have Vagrant and VirtualBox installed, clone the &lt;a href="https://github.com/coreos/coreos-vagrant"&gt;coreos/coreos-vagrant&lt;/a&gt; repository and copy &lt;code&gt;config.rb.sample&lt;/code&gt; to &lt;code&gt;config.rb&lt;/code&gt;. Open &lt;code&gt;config.rb&lt;/code&gt; and uncomment &lt;code&gt;$num_instances&lt;/code&gt;, setting its value to &lt;code&gt;3&lt;/code&gt;.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="comment"&gt;# Size of the CoreOS cluster created by Vagrant&lt;/span&gt;
&lt;span class="global-variable"&gt;$num_instances&lt;/span&gt; = &lt;span class="integer"&gt;3&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Next, create a new file called &lt;code&gt;user-data&lt;/code&gt; with the following contents:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="comment"&gt;#cloud-config&lt;/span&gt;

&lt;span class="key"&gt;coreos&lt;/span&gt;:
  &lt;span class="key"&gt;fleet&lt;/span&gt;:
    &lt;span class="key"&gt;etcd-servers&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;http://$private_ipv4:2379&lt;/span&gt;&lt;/span&gt;
    &lt;span class="key"&gt;public-ip&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;$private_ipv4&lt;/span&gt;&lt;/span&gt;
  &lt;span class="key"&gt;units&lt;/span&gt;:
    - &lt;span class="string"&gt;&lt;span class="content"&gt;name: etcd.service&lt;/span&gt;&lt;/span&gt;
      &lt;span class="key"&gt;command&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;start&lt;/span&gt;&lt;/span&gt;
      &lt;span class="key"&gt;content&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="delimiter"&gt;|&lt;/span&gt;&lt;span class="content"&gt;
        [Unit]
        Description=etcd 2.0
        After=docker.service

        [Service]
        EnvironmentFile=/etc/environment
        TimeoutStartSec=0
        SyslogIdentifier=writer_process
        ExecStartPre=-/usr/bin/docker kill etcd
        ExecStartPre=-/usr/bin/docker rm etcd
        ExecStartPre=/usr/bin/docker pull quay.io/coreos/etcd:v2.0.0
        ExecStart=/bin/bash -c "/usr/bin/docker run \
          -p 2379:2379 \
          -p 2380:2380 \
          --name etcd \
          -v /opt/etcd:/opt/etcd \
          -v /usr/share/ca-certificates/:/etc/ssl/certs \
          quay.io/coreos/etcd:v2.0.0 \
          -data-dir /opt/etcd \
          -name %H \
          -listen-client-urls http://0.0.0.0:2379 \
          -advertise-client-urls http://$COREOS_PRIVATE_IPV4:2379 \
          -listen-peer-urls http://0.0.0.0:2380 \
          -initial-advertise-peer-urls http://$COREOS_PRIVATE_IPV4:2380 \
          -initial-cluster core-01=http://172.17.8.101:2380,core-02=http://172.17.8.102:2380,core-03=http://172.17.8.103:2380 \
          -initial-cluster-state new"
        ExecStop=/usr/bin/docker kill etcd

        [X-Fleet]
        Conflicts=etcd*&lt;/span&gt;&lt;/span&gt;
    - &lt;span class="string"&gt;&lt;span class="content"&gt;name: fleet.service&lt;/span&gt;&lt;/span&gt;
      &lt;span class="key"&gt;command&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;start&lt;/span&gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Now just run &lt;code&gt;vagrant up&lt;/code&gt; and in a few minutes you'll have a cluster of three CoreOS machines running etcd 2.0 with no discovery token needed!&lt;/p&gt;

&lt;p&gt;If you want to run &lt;code&gt;fleetctl&lt;/code&gt; inside one of the CoreOS VMs, you'll need to set the default etcd endpoint, because the current fleet still expects etcd to be on port 4001:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;export FLEETCTL_ENDPOINT=http://127.0.0.1:2379
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;If you don't care about how that all works and just want a working cluster, you can stop here. If you want to understand the guts of that cloud-config more, keep reading.&lt;/p&gt;

&lt;h2&gt;The details&lt;/h2&gt;

&lt;p&gt;One of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.4.6, which used 4001 and 7001). The &lt;code&gt;fleet&lt;/code&gt; section of the cloud-config tells fleet how to connect to etcd. This is necessary because the version of fleet currently shipping with CoreOS still defaults to port 4001. Once etcd 2.0 is shipping in CoreOS, I'm sure fleet will be updated to match.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;units&lt;/code&gt; section of the cloud-config creates systemd units that will be placed in &lt;code&gt;/etc/systemd/system/&lt;/code&gt; on each machine. CoreOS ships with a default unit file for etcd, but we overwrite it here (simply by using the same service name, etcd.service) to use etcd 2.0 with our own configuration.&lt;/p&gt;

&lt;p&gt;The bulk of the cloud-config is the etcd.service unit file. Most of it is the same as a standard CoreOS unit file for a Docker container. The interesting bits are the arguments to the etcd process that runs in the container:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-listen-client-urls&lt;/code&gt;: This is the interface and port that the current machine's etcd should bind to for the client API, e.g. etcdctl. It's set to 0.0.0.0 to bind to all interfaces, and it uses port 2379, the standard client port beginning with etcd 2.0.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-advertise-client-urls&lt;/code&gt;: This is the list of URLs etcd will announce as available for clients to contact.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-listen-peer-urls&lt;/code&gt;: Similar to the client URL version, this defines how the peer service should bind to the network. Again, we bind it to all interfaces and use the standard peer port of 2380.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-initial-advertise-peer-urls&lt;/code&gt;: Similar to the client version, this defines how etcd will announce its peer service to other etcd processes on other machines.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-initial-cluster&lt;/code&gt;: This is the secret sauce that keeps us from having to use a discovery token. We provide a list of each etcd service running in our cluster, mapping each machine's hostname to its etcd peer URL. Because we know which IP addresses Vagrant is going to use, we can simply enumerate them here. If you were running a cluster of a different size, this is where you would add or remove machines from the list.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-initial-cluster-state&lt;/code&gt;: A value of &lt;code&gt;new&lt;/code&gt; here tells etcd that it's joining a brand new cluster. If you were to later add another machine to the existing cluster, you'd change the value here in that machine's cloud-config.&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;It's worth noting that the arguments that begin with "initial" are just that: data that etcd needs in order to boot the first time. Once the cluster has booted and formed a quorum, these arguments are ignored on subsequent boots, because everything etcd needs to know is stored in its data directory (&lt;code&gt;/opt/etcd&lt;/code&gt; in this case).&lt;/p&gt;
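&lt;p&gt;As a sketch of that last point: if you later grew this cluster with a fourth machine (following the same hostname and IP pattern as above; the &lt;code&gt;core-04&lt;/code&gt; name and address here are extrapolated, not from the original config), that machine's etcd arguments would list all four members and declare the cluster as existing, along the lines of:&lt;/p&gt;

```
-initial-cluster core-01=http://172.17.8.101:2380,core-02=http://172.17.8.102:2380,core-03=http://172.17.8.103:2380,core-04=http://172.17.8.104:2380 \
-initial-cluster-state existing
```

&lt;p&gt;(Per etcd's runtime reconfiguration process, you'd also register the new member with the running cluster first, e.g. via &lt;code&gt;etcdctl member add&lt;/code&gt;.)&lt;/p&gt;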

&lt;p&gt;This is all pretty complicated, but it will get easier once etcd 2.0 and a compatible fleet are shipped with CoreOS itself. Then the built-in etcd.service unit can be used again, and all the configuration options can be written in YAML format just like the &lt;code&gt;fleet&lt;/code&gt; section in this particular cloud-config.&lt;/p&gt;</description>
      <pubDate>Thu, 05 Feb 2015 22:25:03 -0800</pubDate>
      <link>http://www.jimmycuadra.com/posts/etcd-2-0-static-bootstrapping-on-coreos-and-vagrant</link>
      <guid>http://www.jimmycuadra.com/posts/etcd-2-0-static-bootstrapping-on-coreos-and-vagrant</guid>
    </item>
    <item>
      <title>Securing CoreOS with iptables</title>
      <description>&lt;p&gt;I've been keeping a close eye on &lt;a href="https://coreos.com/"&gt;CoreOS&lt;/a&gt; since it was originally announced, and in the last few months I've actually started using it for a few things. As a young project, CoreOS has lots of rough edges in terms of documentation and usability. One of the issues I ran into was how to secure a CoreOS machine's public network. By default, a fresh CoreOS installation has no firewall rules, allowing all inbound network traffic.&lt;/p&gt;

&lt;p&gt;In order to secure a CoreOS machine, I had to learn how to configure the firewall. I use the common &lt;a href="http://www.netfilter.org/projects/iptables/"&gt;iptables&lt;/a&gt; utility for this purpose. While I was vaguely familiar with iptables, I'd never really had to learn it, so I delved in to get a more thorough understanding of it. There are plenty of resources to learn iptables on the web, so I won't go into that too much here. The issue specific to CoreOS is how to configure iptables when launching a new machine.&lt;/p&gt;

&lt;p&gt;CoreOS is unusual in that it is extremely minimal. It's designed for all programs to run inside Linux containers, so the OS itself contains only the subsystems and tools necessary to achieve that. iptables, however, is one of the programs that does run on the OS itself.&lt;/p&gt;

&lt;p&gt;With a more traditional Linux distribution, it's common to launch a new instance and then provision it with a tool like Chef or Puppet: your configuration lives in a Git repository somewhere, and a program runs on the target machine after boot to converge it into the desired state. CoreOS is missing much of the infrastructure these tools assume is present, so they aren't supported. It is possible to run Ansible, a push-based configuration management tool, against a CoreOS host, but I'm not a fan of Ansible for reasons beyond the scope of this post, and in any case a complex configuration management tool runs against the spirit of CoreOS, where almost everything should happen in containers.&lt;/p&gt;

&lt;p&gt;For very minimal on-boot configuration, CoreOS supports a subset of cloud-config, the YAML-based configuration format from the &lt;a href="http://cloudinit.readthedocs.org/en/latest/index.html"&gt;cloud-init&lt;/a&gt; tool. CoreOS instances can be provided a cloud-config file and will perform certain actions on boot. cloud-config can be used to load iptables with a list of rules for a more secure network.&lt;/p&gt;

&lt;p&gt;I'll provide the relevant portion of the cloud-config I use on &lt;a href="https://www.digitalocean.com/"&gt;DigitalOcean&lt;/a&gt;, then explain each piece:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="comment"&gt;#cloud-config&lt;/span&gt;

&lt;span class="key"&gt;coreos&lt;/span&gt;:
  &lt;span class="key"&gt;units&lt;/span&gt;:
    - &lt;span class="string"&gt;&lt;span class="content"&gt;name: iptables-restore.service&lt;/span&gt;&lt;/span&gt;
      &lt;span class="key"&gt;enable&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;true&lt;/span&gt;&lt;/span&gt;
&lt;span class="key"&gt;write_files&lt;/span&gt;:
  - &lt;span class="string"&gt;&lt;span class="content"&gt;path: /var/lib/iptables/rules-save&lt;/span&gt;&lt;/span&gt;
    &lt;span class="key"&gt;permissions&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;0644&lt;/span&gt;&lt;/span&gt;
    &lt;span class="key"&gt;owner&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="content"&gt;root:root&lt;/span&gt;&lt;/span&gt;
    &lt;span class="key"&gt;content&lt;/span&gt;: &lt;span class="string"&gt;&lt;span class="delimiter"&gt;|&lt;/span&gt;&lt;span class="content"&gt;
      *filter
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i eth1 -j ACCEPT
      -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
      -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
      -A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT
      -A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT
      COMMIT&lt;/span&gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Every cloud-config file must start with &lt;code&gt;#cloud-config&lt;/code&gt; exactly. I learned the hard way that this is not just a comment – it actually tells CoreOS to treat the file as a cloud-config. Otherwise it will assume it's a shell script and just run it as such.&lt;/p&gt;

&lt;p&gt;The following lines are &lt;a href="http://www.yaml.org/"&gt;YAML&lt;/a&gt; syntax. The &lt;code&gt;coreos&lt;/code&gt; section is a CoreOS-specific extension to cloud-init's cloud-config format. The &lt;code&gt;units&lt;/code&gt; section within it will automatically perform the specified action(s) on the specified &lt;a href="http://www.freedesktop.org/wiki/Software/systemd/"&gt;systemd&lt;/a&gt; units. systemd is the init system used by CoreOS, and many of the OS's core operations are tied closely to it. "Units" are essentially processes that are managed by systemd and represented on disk by unit files that define how the unit should behave.&lt;/p&gt;

&lt;p&gt;The systemd unit &lt;code&gt;iptables-restore.service&lt;/code&gt; ships with CoreOS but is not enabled by default. &lt;code&gt;enable: true&lt;/code&gt; turns it on, causing it to run on every boot. Here are the important contents of that unit file:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /var/lib/iptables/rules-save
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;The unit file defines a "oneshot" job, meaning it simply executes and exits and is not intended to stay running permanently. The command run is the &lt;code&gt;iptables-restore&lt;/code&gt; utility, which accepts an iptables script file defining rules to be loaded into iptables. Whenever the system reboots, all iptables rules are flushed and must be reloaded from this script. That's exactly what &lt;code&gt;iptables-restore&lt;/code&gt; does. The script it loads is expected to live at &lt;code&gt;/var/lib/iptables/rules-save&lt;/code&gt;, which brings us to the second section of the cloud-config file.&lt;/p&gt;

&lt;p&gt;cloud-config's &lt;code&gt;write_files&lt;/code&gt; section will, unsurprisingly, write files with the given content to the file system. The content field is the most important part here. This defines the iptables rules to load. The details of this configuration can be fully explained by reading the iptables documentation, but to summarize, these rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow all input to localhost&lt;/li&gt;
&lt;li&gt;Allow all input on the private network interface&lt;/li&gt;
&lt;li&gt;Allow all connections that are currently established, which prevents existing SSH sessions from being suddenly terminated&lt;/li&gt;
&lt;li&gt;Allow incoming TCP traffic on ports 22 (SSH), 80 (HTTP), and 443 (HTTPS)&lt;/li&gt;
&lt;li&gt;Allow incoming ICMP traffic for echo replies, unreachable destination messages, and time exceeded messages&lt;/li&gt;
&lt;li&gt;Drop all other incoming traffic&lt;/li&gt;
&lt;li&gt;Drop all traffic attempting to forward through the network&lt;/li&gt;
&lt;li&gt;Allow all outbound traffic&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;The three TCP ports allowed are pretty standard, but those are the rules you'd be most likely to change or augment depending on what services you'll be running on your CoreOS machine.&lt;/p&gt;

&lt;p&gt;After CoreOS boots, SSH into it, and verify that iptables was configured properly by running &lt;code&gt;sudo iptables -S&lt;/code&gt; (to see it listed in the same format as above) or with &lt;code&gt;sudo iptables -nvL&lt;/code&gt; (for the more standard list format).&lt;/p&gt;

&lt;p&gt;That's pretty much it! As you can see, there are a lot of related technologies to learn when venturing into CoreOS. Several of these were new for me, so there was a lot of learning involved in getting this working. For reference, the entire cloud-config I use for CoreOS on DigitalOcean can be found in &lt;a href="https://gist.github.com/jimmycuadra/fe79ae8857f3f0d0cae1"&gt;this Gist&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Fri, 30 Jan 2015 00:44:42 -0800</pubDate>
      <link>http://www.jimmycuadra.com/posts/securing-coreos-with-iptables</link>
      <guid>http://www.jimmycuadra.com/posts/securing-coreos-with-iptables</guid>
    </item>
    <item>
      <title>How Lita.io uses the RubyGems and rubygems.org APIs</title>
      <description>&lt;p&gt;Today I released a brand new website for Lita at &lt;a href="http://www.lita.io/"&gt;lita.io&lt;/a&gt;. While the site consists primarily of static pages for documentation, it also has a cool feature that takes advantage of a few relatively unknown things in the Ruby ecosystem. That feature is the &lt;a href="http://www.lita.io/plugins"&gt;plugins page&lt;/a&gt;, an automatically updating list of all Lita plugins that have been published to RubyGems.&lt;/p&gt;

&lt;p&gt;Previously, the only directory of Lita plugins was Lita's wiki on GitHub. When someone released a plugin, they'd have to edit the list manually. This was not ideal: it was easy to forget, and it required people to know that the wiki had such a list in the first place. To make an automatically updating list, I had to think about how I could detect Lita plugins out there on the Internet.&lt;/p&gt;

&lt;p&gt;I spent some time digging through the rubygems.org source code to see how I might get the information I wanted out of the API. After experimenting with a few things, I discovered an undocumented API: reverse dependencies. You can hit the endpoint &lt;code&gt;/api/v1/gems/GEM_NAME/reverse_dependencies.json&lt;/code&gt; and you will get back a list of all gems that depend on the given gem. This was great! Now I had a list of names of all the gems that depend on Lita. It's pretty safe to assume those are all Lita plugins.&lt;/p&gt;
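&lt;p&gt;In Ruby, using that endpoint might look like the sketch below. The HTTP request is stubbed with a canned JSON body so the example is self-contained; the gem names in it are only placeholders, and since the API is undocumented, the path could change:&lt;/p&gt;

```ruby
require "json"

# Build the reverse-dependencies URL for a given gem.
def reverse_dependencies_url(gem_name)
  "https://rubygems.org/api/v1/gems/#{gem_name}/reverse_dependencies.json"
end

# In real use you would fetch it over HTTP, e.g.:
#   body = Net::HTTP.get(URI(reverse_dependencies_url("lita")))
# Canned response standing in for the live API:
body = '["lita-karma", "lita-hipchat"]'

# The endpoint returns a flat JSON array of gem names.
plugin_names = JSON.parse(body)
```

&lt;p&gt;Each name in the array is a gem that declares a dependency on &lt;code&gt;lita&lt;/code&gt;, which is the list the plugins page starts from.&lt;/p&gt;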

&lt;p&gt;This API only returns the names of the gems, however. I also wanted to display a description and the authors' names. This data could be gathered from an additional API request, but there was another piece of information I wanted that couldn't be extracted from the API.&lt;/p&gt;

&lt;p&gt;Lita has two types of plugins: adapters and handlers. Adapters allow Lita to connect to a particular chat service, and handlers add new functionality to the robot at runtime; they're the equivalent of Hubot's scripts. I wanted the plugins page to list the plugin type along with the name, author, and link to its page on rubygems.org. To do this, I used another lesser-known feature: RubyGems metadata.&lt;/p&gt;

&lt;p&gt;In RubyGems 2.0 or greater, a gem specification can include arbitrary metadata. The metadata consists of a hash assigned to the &lt;code&gt;metadata&lt;/code&gt; attribute of a &lt;code&gt;Gem::Specification&lt;/code&gt;. The keys and values must all be strings. In Lita 2.7.1, I updated the templates used to generate new Lita plugins so that they automatically included metadata in their gemspecs indicating which type of plugin it is. For example:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="constant"&gt;Gem&lt;/span&gt;::&lt;span class="constant"&gt;Specification&lt;/span&gt;.new &lt;span class="keyword"&gt;do&lt;/span&gt; |spec|
  spec.metadata = { &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;lita_plugin_type&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt; =&amp;gt; &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;handler&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt; }
&lt;span class="keyword"&gt;end&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Because Lita requires Ruby 2.0 or greater, which ships with RubyGems 2.0, any Lita plugin can use the metadata attribute. Plugins created before the generator update in Lita 2.7.1 are still detected and listed on the plugins page; the page just won't show their type.&lt;/p&gt;

&lt;p&gt;Now all I had to do was read this information from each plugin gem. Unfortunately, rubygems.org currently has no API that exposes gem metadata, so things got a little tricky. I wrote a script which called &lt;code&gt;gem fetch&lt;/code&gt; to download the actual gem files for all the Lita plugins. Once downloaded, running &lt;code&gt;gem spec&lt;/code&gt; on the gem file outputs a YAML representation of the gem's specification. In Ruby, loading that YAML with &lt;code&gt;YAML.load&lt;/code&gt; returns a &lt;code&gt;Gem::Specification&lt;/code&gt;. From there I could simply access the fields I wanted to display, including the type of plugin via &lt;code&gt;spec.metadata["lita_plugin_type"]&lt;/code&gt;. This data is then persisted in Postgres. The script runs once a day to get the latest data from RubyGems.&lt;/p&gt;
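The metadata lookup at the end of that pipeline can be sketched as follows. This is a simplified, offline version: the spec is built in memory instead of being downloaded with gem fetch, the gem name is hypothetical, and Gem::Specification.from_yaml stands in for a plain YAML.load (which newer Rubies restrict for safety).

```ruby
require "rubygems"

# Stand-in for a specification extracted from a downloaded gem with
# `gem spec`; a real script would parse that command's YAML output.
spec = Gem::Specification.new do |s|
  s.name     = "lita-example"      # hypothetical plugin name
  s.version  = "1.0.0"
  s.summary  = "An example Lita handler"
  s.authors  = ["Someone"]
  s.metadata = { "lita_plugin_type" => "handler" }
end

yaml   = spec.to_yaml                         # same YAML `gem spec` prints
loaded = Gem::Specification.from_yaml(yaml)   # back to a Gem::Specification

plugin_type = loaded.metadata["lita_plugin_type"]
```

The round trip preserves the metadata hash, so the plugin type survives serialization through YAML.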

&lt;p&gt;This process could be made much easier and less error-prone if rubygems.org added metadata to the information it exposes over its API. Nevertheless, creating the plugins page for Lita.io was a good challenge and gave me a chance to explore some of the pieces of the RubyGems ecosystem I didn't know existed.&lt;/p&gt;</description>
      <pubDate>Thu, 23 Jan 2014 04:09:12 -0800</pubDate>
      <link>http://www.jimmycuadra.com/posts/how-lita-io-uses-the-rubygems-and-rubygems-org-apis</link>
      <guid>http://www.jimmycuadra.com/posts/how-lita-io-uses-the-rubygems-and-rubygems-org-apis</guid>
    </item>
    <item>
      <title>Getting started with Lita</title>
      <description>&lt;p&gt;&lt;a href="http://lita.io/"&gt;Lita&lt;/a&gt; is an extendable chat bot for Ruby programmers that can work with any chat service. If you've used Hubot before, Lita is similar, but written in Ruby instead of JavaScript. It's easy to get started, and you can have your own bot running in minutes.&lt;/p&gt;

&lt;p&gt;Lita uses regular RubyGems as plugins. You'll need at least one "adapter" gem to connect to the chat service of your choice. Add as many "handler" gems as you want to add functionality to your bot.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;em&gt;lita&lt;/em&gt; gem.&lt;/li&gt;
&lt;/ol&gt;&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;  $ gem install lita
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Create a new Lita project.&lt;/li&gt;
&lt;/ol&gt;&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;  $ lita new
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;This will create a new directory called &lt;em&gt;lita&lt;/em&gt; with a Gemfile and Lita configuration file.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Edit the Gemfile, uncommenting or adding the plugins you want to use. There should be an adapter gem (such as &lt;a href="https://github.com/jimmycuadra/lita-hipchat"&gt;lita-hipchat&lt;/a&gt; or &lt;a href="https://github.com/jimmycuadra/lita-irc"&gt;lita-irc&lt;/a&gt;) and as many handler gems as you'd like. For example:&lt;/li&gt;
&lt;/ol&gt;&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;  source &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;https://rubygems.org&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;

  gem &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;lita&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  gem &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;lita-hipchat&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  gem &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;lita-karma&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  gem &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;lita-google-images&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;ol start="4"&gt;
&lt;li&gt;
&lt;p&gt;Install all the gems you specified in the Gemfile:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;$ bundle install
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install Redis. On OS X, you can use Homebrew with &lt;code&gt;brew install redis&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Test Lita out right in your terminal with the built-in shell adapter.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;$ bundle exec lita
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;&lt;p&gt;Type "Lita: help" to get a list of commands available to you.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;
&lt;p&gt;Edit the Lita configuration file to add connection information for the chat service you're using. For example, if you're using the HipChat adapter, it might look something like this:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;&lt;span class="constant"&gt;Lita&lt;/span&gt;.configure &lt;span class="keyword"&gt;do&lt;/span&gt; |config|
  config.robot.name = &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;Lita Bot&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  config.robot.adapter = &lt;span class="symbol"&gt;:hipchat&lt;/span&gt;
  config.adapter.jid = &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;12345_123456@chat.hipchat.com&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  config.adapter.password = &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;secret&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;
  config.adapter.rooms = &lt;span class="symbol"&gt;:all&lt;/span&gt;
&lt;span class="keyword"&gt;end&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;&lt;p&gt;Consult the documentation for whichever adapter you're using for the full list of configuration options. If you're going to deploy Lita to Heroku, you'll also want to add the Redis To Go add-on and set &lt;code&gt;config.redis.url = ENV["REDISTOGO_URL"]&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="8"&gt;
&lt;li&gt;
&lt;p&gt;Deploy your Lita project anywhere you like. If you're deploying to Heroku, you can use a Procfile like this:&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;web: bundle exec lita
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;&lt;p&gt;Lita also has built-in support for daemonization if you want to deploy it to your own server.&lt;/p&gt;

&lt;p&gt;Be sure to visit the &lt;a href="http://lita.io/"&gt;Lita&lt;/a&gt; home page for lots more information on usage, configuration, and adding your own behavior to your robot!&lt;/p&gt;</description>
      <pubDate>Tue, 20 Aug 2013 01:17:32 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/getting-started-with-lita</link>
      <guid>http://www.jimmycuadra.com/posts/getting-started-with-lita</guid>
    </item>
    <item>
      <title>GuerillaPatch: An interface for monkey patching objects for Ruby 1.9 and 2.0</title>
      <description>&lt;p&gt;At RubyConf this week, the first preview of the upcoming Ruby 2.0 was released. One of the new features is &lt;em&gt;refinements&lt;/em&gt;, a way of adding new behavior to an object that only takes place within a certain scope. This allows a safer way to extend existing objects without screwing up code that may be depending on the original behavior. A common example of this is ActiveSupport, which adds extensions to many of the core classes. With refinements, these extensions can be added to a refinement module, which can then be "used" in other namespaces without affecting the object globally.&lt;/p&gt;
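A minimal sketch of a refinement, using the syntax that eventually shipped (the preview builds at the time differed in some details; the module and method names here are illustrative):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

# The refinement is invisible until a scope opts in with `using`.
using Shouting

"hello".shout  # => "HELLO!"
```

Code that never calls using Shouting sees a completely unmodified String, which is the whole point.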

&lt;p&gt;This is a powerful new feature, but I was curious how best to write library code that uses it in a way that is interoperable with Ruby 1.9. I created a gem called &lt;strong&gt;GuerillaPatch&lt;/strong&gt; which provides a single interface that will extend an object with a refinement if run under Ruby 2.0, and fall back to simply modifying the class globally if run under 1.9.&lt;/p&gt;

&lt;p&gt;Install with &lt;code&gt;gem install guerilla_patch&lt;/code&gt;. The &lt;a href="https://github.com/jimmycuadra/guerilla_patch"&gt;source code&lt;/a&gt; and usage examples are available on GitHub.&lt;/p&gt;</description>
      <pubDate>Fri, 02 Nov 2012 19:03:55 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/guerillapatch-an-interface-for-monkey-patching-objects-for-ruby-1-9-and-2-0</link>
      <guid>http://www.jimmycuadra.com/posts/guerillapatch-an-interface-for-monkey-patching-objects-for-ruby-1-9-and-2-0</guid>
    </item>
    <item>
      <title>jQuery is not an architecture</title>
<description>&lt;p&gt;There is no question about the enormous positive impact &lt;a href="http://jquery.com/"&gt;jQuery&lt;/a&gt; has had on front-end web development. It removes all the pain of the terrible DOM API and provides utility functions for just about everything small applications need. But when your application starts to grow, you need to structure your code in a more organized way to make sure things are testable and maintainable, and that features are discoverable to both new members of your team and your future self, who may not be able to discern the business logic of your application from a slew of selectors and anonymous function callbacks.&lt;/p&gt;

&lt;p&gt;Although jQuery has many facilities, its main purpose is as an abstraction layer to create a single, developer-friendly API that hides the ugliness of bad APIs and browser implementation quirks. Because most of its methods involve selecting, traversing, and manipulating DOM elements, the basic unit in jQuery is the selector. While this has the benefit of making DOM interaction simple and easy for beginners to learn, it has the unfortunate side effect of making people think that anything but very trivial amounts of JavaScript should be organized around jQuery selectors.&lt;/p&gt;

&lt;p&gt;In a very simple application or basic web page, e.g. a WordPress blog, tiny snippets of jQuery or a plugin dropped into the page may be appropriate. But if you're building something with an amount of JavaScript even slightly larger than that, things will get difficult to maintain quickly.&lt;/p&gt;

&lt;p&gt;jQuery is no substitute for good application architecture. It's just another utility library in your toolbelt. Think of jQuery simply as an abstraction of the DOM API and a way to think about Internet Explorer less often.&lt;/p&gt;

&lt;p&gt;To illustrate just how backwards the jQuery-selector-as-basic-unit approach is for a non-trivial application, think about how it compares to the structure of an object in an object-oriented paradigm. At the most basic level, objects consist of members and methods: stateful data and functions that perform actions to manipulate that data. Methods take arguments on which they operate. In selector-as-basis jQuery, you're effectively starting with an argument (the selector) and &lt;em&gt;passing it a method&lt;/em&gt; in the form of anonymous functions. This is not to say that object-oriented programming is the only correct approach to writing software. The problem is that jQuery tends to make the target of an action the focus and not the action itself.&lt;/p&gt;

&lt;h2&gt;Ways to improve architecture&lt;/h2&gt;

&lt;p&gt;There are a few simple ways to improve the architecture of your JavaScript code beyond what is shown in most jQuery literature.&lt;/p&gt;

&lt;h3&gt;Namespaces&lt;/h3&gt;

&lt;p&gt;Use a single global object to namespace all your code. Use more objects attached to your global object to separate your modules by their high-level purpose. This protects you from name collisions and organizes the parts of your application in a discoverable way. When your application grows very large, this makes it easier for people less familiar with the system to find where a particular widget or behavior is defined.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;window.FOO = {
  &lt;span class="key"&gt;Widgets&lt;/span&gt;: {},
  &lt;span class="key"&gt;Utilities&lt;/span&gt;: {},
};
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;Modules&lt;/h3&gt;

&lt;p&gt;Use function prototypes or the &lt;a href="http://addyosmani.com/resources/essentialjsdesignpatterns/book/#modulepatternjavascript"&gt;module pattern&lt;/a&gt; to create pseudo-classes for your modules. All the intelligence about what your application does should be encapsulated inside these discoverable modules. Use clear, straightforward method names. By extracting your event handlers and other functions into named methods, they can be unit tested in isolation for a high level of confidence that your system behaves the way you expect. It's also much clearer what the module does when you return to code you haven't seen in a while.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;FOO.Widgets.&lt;span class="function"&gt;CommentBox&lt;/span&gt; = &lt;span class="keyword"&gt;function&lt;/span&gt; (containerSelector) {
  &lt;span class="local-variable"&gt;this&lt;/span&gt;.container = &lt;span class="predefined"&gt;$&lt;/span&gt;(containerSelector);
  &lt;span class="local-variable"&gt;this&lt;/span&gt;.container.on(&lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;submit&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;, &lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;form&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;, &lt;span class="local-variable"&gt;this&lt;/span&gt;.addComment.bind(&lt;span class="local-variable"&gt;this&lt;/span&gt;));
};

FOO.Widgets.CommentBox.prototype.&lt;span class="function"&gt;addComment&lt;/span&gt; = &lt;span class="keyword"&gt;function&lt;/span&gt; (event) {
  &lt;span class="comment"&gt;// More logic here...&lt;/span&gt;
};

&lt;span class="comment"&gt;// Many more methods...&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
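For comparison, the module pattern linked above achieves similar encapsulation with a closure instead of a prototype. This is a sketch; the namespace and names are illustrative.

```javascript
var FOO = { Widgets: {}, Utilities: {} };

// An immediately-invoked function keeps `count` private;
// only the returned methods can touch it.
FOO.Utilities.counter = (function () {
  var count = 0;

  return {
    increment: function () { count += 1; return count; },
    reset: function () { count = 0; }
  };
})();
```

The trade-off versus prototypes: true privacy for internal state, at the cost of one copy of each method per instance.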


&lt;h3&gt;Instantiate your modules&lt;/h3&gt;

&lt;p&gt;Initialize your modules by constructing new objects, passing in the appropriate selectors as arguments. Assign the newly created object to a property on your global object. A good convention is to use properties beginning with capital letters for modules, and properties beginning with lowercase letters for instances of those modules. Using this approach, you can have multiple instances of the same widget on one page with explicit control over each individual instance. This is useful both for interaction between widgets and for development in the web inspector in your browser.&lt;/p&gt;

&lt;p&gt;Deferring initialization until a module is instantiated programmatically gives you great flexibility when testing. You can isolate the effect of your module to a particular subtree of the DOM, which can be cleared out in a teardown phase after each test.&lt;/p&gt;

&lt;div class="CodeRay"&gt;
  &lt;div class="code"&gt;&lt;pre&gt;FOO.comments = &lt;span class="keyword"&gt;new&lt;/span&gt; FOO.Widgets.CommentBox(&lt;span class="string"&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;span class="content"&gt;.comment-box&lt;/span&gt;&lt;span class="delimiter"&gt;"&lt;/span&gt;&lt;/span&gt;);
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;Just the beginning&lt;/h2&gt;

&lt;p&gt;These are just a few very simple examples of ways to structure your code as your application starts to grow. For even better object-oriented abstractions, consider using a library like &lt;a href="http://documentcloud.github.com/backbone/"&gt;Backbone&lt;/a&gt;, which, although usually associated with Gmail-style single-page applications, is also very useful for writing well-organized JavaScript that focuses on behavior and application logic instead of selectors tied to the markup and heavy chains of callbacks.&lt;/p&gt;
      <pubDate>Wed, 11 Jul 2012 01:26:23 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/jquery-is-not-an-architecture</link>
      <guid>http://www.jimmycuadra.com/posts/jquery-is-not-an-architecture</guid>
    </item>
    <item>
      <title>Please don't use Cucumber</title>
      <description>&lt;p&gt;&lt;a href="http://cukes.info/"&gt;Cucumber&lt;/a&gt; is by far my least favorite thing in the Ruby ecosystem, and also the worst example of &lt;a href="http://en.wikipedia.org/wiki/Cargo_cult_programming"&gt;cargo cult programming&lt;/a&gt;. Cucumber has almost no practical benefit over acceptance testing in pure Ruby with &lt;a href="https://github.com/jnicklas/capybara"&gt;Capybara&lt;/a&gt;. I understand the philosophical goals behind behavior driven development, but in the real world, Cucumber is a solution looking for a problem.&lt;/p&gt;

&lt;p&gt;The fact that Cucumber has gained the popularity it has in the Ruby community is outright baffling to me. All the reasons to use it that people give are theoretical, and I have never seen them matter or be remotely applicable in the real world. Cucumber aims to bridge the gap between software developers and non-technical stakeholders, but the reality is that product managers don't really care about Gherkin. Their time is better spent brainstorming all the various use cases for a feature and communicating this either verbally or in free form text. Reading (and especially writing) Gherkin is a waste of their time, because Gherkin is not English. It's extremely inflexible Ruby disguised as English. The more naturally it reads, the more difficult it is to translate it into reusable code via step definitions.&lt;/p&gt;

&lt;p&gt;There are basically two extremes of Cucumber:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Writing Gherkin describing the feature at a very high level, and reusing few of the step definitions between features.&lt;/li&gt;
&lt;li&gt;Reusing step definitions, resulting in a level of detail described in the Gherkin which is not useful for any of the stakeholders.&lt;/li&gt;
&lt;/ol&gt;&lt;p&gt;Everything in between is just a bad compromise of one or the other.&lt;/p&gt;

&lt;p&gt;Gherkin is really just glorified comments. If you simply write free form comments above Capybara scenarios, you can convey the same high level information about what the test is doing and what the acceptance criteria are, without any of the overhead, maintenance cost, and general technical debt of Cucumber. This doesn't allow for the real red-green-refactor cycle from the outside in of the BDD philosophy, but in my experience, developers tend to avoid the test-first approach with Cucumber simply because it's so painful to use. If you're not really following BDD practices, and your non-technical stakeholders are not reading or writing Gherkin, Cucumber is wasting your developers' time and bloating your test suite.&lt;/p&gt;

&lt;p&gt;The one advantage Cucumber offers over simply commenting Capybara scenarios is that, by tying the "English" directly to the implementation, it's impossible for the "comment" to rot. This is certainly a danger, as a misleading comment is worse than no comment at all. However, this benefit comes at an extremely heavy cost. I would argue that it should simply be the discipline of developers to make sure that any time a Capybara scenario is updated, the corresponding comment is read through and updated as necessary.&lt;/p&gt;

&lt;p&gt;Whenever someone writes a criticism of a particular piece of software, there is always a group of people who respond by saying, "It's just a tool. If it works for you, use it. If it doesn't, don't." While I agree in theory, this is where the effect of the cargo cult becomes real and damaging. Some guy somewhere came up with this idea that seemed great in theory, and everyone jumped on the bandwagon doing it because it sounded cool and it seemed like something they &lt;em&gt;should&lt;/em&gt; do. After a while, people choose to use it just because it became the status quo. They don't see that all the reasons a tool like Cucumber seemed like a good idea, based on some blog post they read 3 years ago, are not in tune with the real, practical needs of their project or their organization. And once that choice has been made, everyone has to live with the increasing technical debt and slowed, painful development it creates.&lt;/p&gt;</description>
      <pubDate>Thu, 31 May 2012 19:52:56 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/please-don-t-use-cucumber</link>
      <guid>http://www.jimmycuadra.com/posts/please-don-t-use-cucumber</guid>
    </item>
    <item>
      <title>Constraints and compromises in ECMAScript 6</title>
<description>&lt;p&gt;This past week I attended a talk by &lt;a href="http://calculist.org/"&gt;Dave Herman&lt;/a&gt;, an employee of Mozilla and a member of ECMA TC39, the committee in charge of the standard for JavaScript. Dave talked about a few of the features and improvements coming to JS in ECMAScript 6, including what I think is the most exciting and desperately needed: the new module system.&lt;/p&gt;

&lt;p&gt;I won't describe how the new module system actually works in this post, as that information is already available elsewhere (in particular at the &lt;a href="http://wiki.ecmascript.org/doku.php?id=harmony:modules"&gt;ES Wiki&lt;/a&gt;). While this new module system looks great, I was concerned that it eschews CommonJS, the module system the existing community around Node has built upon. I asked Dave what the reasoning for this was. It boiled down to wanting to provide features for module loading that would not be possible with Node's current CommonJS system. Interoperability between client- and server-side JavaScript programs will eventually be achieved by converting existing code targeted at Node to the new module system.&lt;/p&gt;

&lt;p&gt;While this may be a painful migration due to the amount of existing code using CommonJS, it seems like the best choice, given that the amount of JavaScript code targeting Node is dwarfed by the amount targeting the browser. Node will also be able to begin the conversion process fairly soon, as it only needs to wait for the implementation of ECMAScript 6 modules in V8.&lt;/p&gt;

&lt;p&gt;Another feature Dave talked about was variable interpolation in strings via so-called &lt;a href="http://wiki.ecmascript.org/doku.php?id=harmony:quasis"&gt;quasi-literals&lt;/a&gt;. This feature is very common in other high-level languages, but to date JavaScript has relied on concatenating strings and variables with the &lt;code&gt;+&lt;/code&gt; operator to achieve it. ES6, somewhat confusingly, uses the &lt;code&gt;`&lt;/code&gt; (backtick) to surround these quasi-literals and interpolates variables with &lt;code&gt;${varname}&lt;/code&gt; syntax. I was also curious about the reason for this awkward choice, given Ruby and CoffeeScript's precedent of using double-quoted strings with embedded expressions in &lt;code&gt;#{}&lt;/code&gt;, but Dave had the answer. Backticks were chosen for backwards compatibility: if existing strings were to suddenly gain this ability, existing code on the web that happened to contain a literal &lt;code&gt;${&lt;/code&gt; could become a syntax or reference error. The most important constraint TC39 must embrace, unlike languages that compile to JavaScript, is that changes to the language must not break existing code on the web.&lt;/p&gt;
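A small, self-contained sketch of the syntax in question, runnable in any ES6 environment:

```javascript
var name = "world";

// Backtick-delimited quasi-literal with ${} interpolation.
var greeting = `Hello, ${name}!`;  // "Hello, world!"

// Ordinary strings are untouched, so existing code containing
// a literal "${" keeps its meaning.
var plain = "Hello, ${name}!";     // "Hello, ${name}!"
```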

&lt;p&gt;Backticks, even though they are often used to execute shell commands in other languages, were chosen simply due to the limited set of ASCII characters remaining for new syntax. Again, the syntax is not ideal, but given the constraints necessary for advancing JavaScript without breaking the web as it exists today, it's a pretty decent compromise.&lt;/p&gt;

&lt;p&gt;I'd like to thank Dave again for his talk. I'm looking forward to ES6!&lt;/p&gt;</description>
      <pubDate>Sun, 22 Apr 2012 22:28:59 -0700</pubDate>
      <link>http://www.jimmycuadra.com/posts/constraints-and-compromises-in-ecmascript-6</link>
      <guid>http://www.jimmycuadra.com/posts/constraints-and-compromises-in-ecmascript-6</guid>
    </item>
    <item>
      <title>New city, new job, new life</title>
<description>&lt;p&gt;I haven't posted anything about myself in a while, so I'm happy to present this news: I have moved from San Diego to San Francisco, and am now happily employed by &lt;a href="http://www.change.org" title="Change.org"&gt;Change.org&lt;/a&gt;. This move was a long time coming. Many of my friends in college were from the Bay Area, as well as more friends I met through those friends. Everyone talked it up and seemed eager to return. In addition to the social motivation, San Francisco has a climate and culture much more to my taste. It's also one of the biggest tech hubs in the world, so career-wise it was a great choice as well.&lt;/p&gt;

&lt;p&gt;I just finished my third week as a software engineer at Change.org. I'm happy to be spending my days working with technologies I love (Ruby, Rails, JavaScript, etc.), and even more so to be doing it in support of a company with an awesome mission. Change.org has been receiving a lot of press lately and is growing very fast, and it's exciting to work for a company whose success I have a real interest in.&lt;/p&gt;

&lt;p&gt;Other than &lt;a href="http://sfappeal.com/news/2012/01/sfps-responds-to-two-shootings-friday-night.php" title="Muni shooting"&gt;one scary incident&lt;/a&gt; on a Muni bus, San Francisco has been a blast to live in so far. I'm enjoying the overwhelming amount of places to eat, the ease of getting around, the cool weather, and the cool people I've met so far. Meeting people in the Ruby and open source communities I've only followed online has me a bit star struck.&lt;/p&gt;

&lt;p&gt;I will miss a few friends and the &lt;a href="http://sdruby.org/" title="SD Ruby"&gt;San Diego Ruby community&lt;/a&gt; I left behind, but San Francisco is clearly where I belong. Here's to a fantastic 2012 and beyond.&lt;/p&gt;</description>
      <pubDate>Sat, 11 Feb 2012 22:09:42 -0800</pubDate>
      <link>http://www.jimmycuadra.com/posts/new-city-new-job-new-life</link>
      <guid>http://www.jimmycuadra.com/posts/new-city-new-job-new-life</guid>
    </item>
  </channel>
</rss>
