<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
 <title>Kent R. Spillner's Blog</title>
 <link href="http://kent.spillner.org/blog/atom.xml" rel="self"/>
 <link href="http://kent.spillner.org/blog/"/>
 <updated>2023-07-31T08:07:09+00:00</updated>
 <id>http://kent.spillner.org/blog/</id>
 <author>
   <name>Kent R. Spillner</name>
   <email>kspillner@acm.org</email>
 </author>
 
 <entry>
   <title>watch -g</title>
   <link href="http://kent.spillner.org/blog/work/2012/06/13/watch.html"/>
   <updated>2012-06-13T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2012/06/13/watch</id>
   <content type="html">@watch(1)@, part of &quot;procps&quot;:procps, is one of my favorite command line utilities.  It repeatedly executes a command so you can watch the output change, and if you pass the optional @--diff/-d@ command line argument it highlights the changes between executions (or across every execution with @--difference=cumulative@).  It's great for one-off monitoring uses, like running @/sbin/ifconfig@ to see how frequently your network interfaces are experiencing errors or dropping packets, or running @/bin/ls -l@ to see how quickly some log files are growing, etc.

By default, @watch@ runs until either the command you are watching errors out or you interrupt it, but a colleague suggested an option to exit when the output changes.  This is useful for things like &quot;Makefiles&quot;:make and shell scripts where you want to gate an automated process on an external event or resource.  For example, when integration testing a web application you want to wait for the app to fully start before running any tests, or when upgrading network card drivers you want to wait for the kernel to reinitialize interfaces before restarting your applications.

I wrote a patch for @watch@ that adds an optional @--chg-exit/-g@ command line argument for this behavior, and I'm happy to say it was &quot;merged upstream&quot;:merge and &quot;is available as of procps v3.3.3&quot;:announcement.  The patch makes possible things like @watch -g &quot;curl http://localhost:3000/&quot; &amp;&amp; ./run_integration_tests.sh@ to wait for your webapp to start before running any tests, or @watch -g &quot;nc -w 1 &lt;ip address&gt; 22&quot; &amp;&amp; ssh &lt;ip address&gt; restart_apps.sh@ to wait until &quot;OpenSSH&quot;:openssh is ready before restarting your applications on a remote server.
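
For scripts that can't rely on a new enough procps, the same wait-until-output-changes behavior can be sketched in a few lines of portable shell.  @wait_for_change@ is a hypothetical helper of my own naming, not part of procps:

```shell
# Rerun a command until its output differs from the first run,
# then return.  Approximates watch -g, minus the live display.
wait_for_change() {
  prev=$("$@")
  while cur=$("$@"); [ "$cur" = "$prev" ]; do
    sleep 1
  done
}

# Example: wait_for_change curl -s http://localhost:3000/
```

Like the real @watch -g@, this compares the entire output between runs, so even a timestamp in the output counts as a change; trim volatile fields before watching.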

There are other tools that provide the same functionality, but the advantage of using @watch -g@ is being able to watch what's going on when running in a terminal.  I don't use @watch -g@ in automated processes that run headless on remote machines such as build servers or cronjobs, but it's my preferred tool for interactive use.  Fortunately, it's easy to tell when a process is running interactively (see: &quot;Perl's -t file test operator&quot;:ttyperl, &quot;Python's file#isatty()&quot;:ttypython, &quot;Ruby's IO#tty?&quot;:ttyruby, &quot;/usr/bin/tty&quot;:ttyshell, etc.), so depending on context scripts can automatically decide whether to wait via @watch -g@ or something else.
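
In shell, that decision comes down to the POSIX @-t@ test, which reports whether a file descriptor is attached to a terminal (a sketch; @is_interactive@ is a made-up name):

```shell
# True when stdout (fd 1) is a terminal, i.e. a human is
# probably watching; false under cron, pipes, and build servers.
is_interactive() {
  [ -t 1 ]
}

# A script might branch on it:
#   if is_interactive; then use watch -g; else use a plain polling loop; fi
```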

[announcement]http://www.freelists.org/post/procps/procps-333-released
[make]http://kent.spillner.org/blog/work/2009/12/30/make-all-environments.html
[merge]https://gitorious.org/procps/procps/merge_requests/1
[openssh]http://www.openssh.org/
[procps]http://procps.sf.net/
[ttyperl]http://perldoc.perl.org/functions/-X.html
[ttypython]http://docs.python.org/library/stdtypes.html#file.isatty
[ttyruby]http://www.ruby-doc.org/core-1.9.3/IO.html#method-i-tty-3F
[ttyshell]http://linux.die.net/man/1/tty
</content>
 </entry>
 
 <entry>
   <title>Code Simplicity</title>
   <link href="http://kent.spillner.org/blog/books/2012/04/09/code-simplicity.html"/>
   <updated>2012-04-09T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/books/2012/04/09/code-simplicity</id>
   <content type="html">&quot;Code Simplicity&quot;:book was a terrible disappointment. I was excited by the novel prospect that the author managed to create an original science of software design, but in reality this book is just a vague, rambling argument in favor of &quot;Agile software development&quot;:agile. In fact, every idea in this book has already been presented in far better books by &quot;Kent Beck&quot;:beck, &quot;Martin Fowler&quot;:fowler, &quot;Robert C. Martin&quot;:unclebob, etc.

I applaud the author's ambition in wanting to create a science of software design, but I think he was incredibly naive to think he could do so without more data, evidence, and rigor. The most confusing aspect of the whole book is that he spends several pages in chapter 2 talking about what a science is, and what the necessary characteristics of a science of software design would be, but then throughout the rest of the book he doesn't make any attempt to adhere to this model. Instead, he always proceeds directly from vague generalizations and observations, or &quot;data&quot; from contrived examples, to his &quot;laws&quot; and &quot;facts&quot; about software design. For example, in chapter 4 he argues for optimizing design decisions to reduce the future cost of maintenance at the expense of greater initial implementation cost, and the only evidence he offers in support of this position is a series of tables showing different hypothetical situations with different costs of effort and value. It's not that his conclusion is necessarily flawed or invalid -- indeed, making decisions to reduce the future cost of maintenance is a very reasonable and pragmatic approach -- but that his argument suffers from a lack of evidence, specificity, and rigorous application of the scientific method.

In the whole book the only external evidence offered in support of one of his conclusions is a table in chapter 5 that shows some statistics about how five different files changed over time (in terms of line count, change count, number of lines added and deleted, etc.). But he doesn't identify any of the files, or the project(s) from which they came, or the time period he analyzed when creating that table. He uses this arbitrary collection of information about five random files to build and support his entire case that developers should write code that is easy to change in the future, should not write code that they don't need right now, should not write code that is too abstract, etc. Again, these are all good rules of thumb and useful lessons for every software developer to learn, but it is incredibly naive of him to label this as science given the flimsy evidence used as a basis to support his claims. Laughably, he closes this section by writing &quot;there is a lot more interesting analysis that could be done on these numbers. You're encouraged to dig into this data and see what else you can learn.&quot; Yes, there is a lot more interesting analysis that could be done, but until you're more forthcoming with details about where your data and evidence come from, we can't verify or refute any of your claims!

I think this book was published too early in its development, and would be well served by a major rewrite (or three). The author needs to spend a lot more time and effort building a solid foundation for his &quot;science,&quot; and should spend less time with the hand-wavy, anecdotal summaries from his own personal experience.

[agile]http://agilemanifesto.org/
[beck]http://www.amazon.com/Extreme-Programming-Explained-Embrace-Edition/dp/0321278658
[book]http://shop.oreilly.com/product/0636920022251.do
[fowler]http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672
[unclebob]http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882
</content>
 </entry>
 
 <entry>
   <title>Test-Driven Infrastructure with Chef</title>
   <link href="http://kent.spillner.org/blog/books/2012/03/28/test-driven-infrastructure-with-chef.html"/>
   <updated>2012-03-28T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/books/2012/03/28/test-driven-infrastructure-with-chef</id>
<content type="html">I thought &quot;Test-Driven Infrastructure with Chef&quot;:book was fantastic! It offers a great, no-nonsense introduction to &quot;Chef&quot;:chef and the concept of &quot;Infrastructure as Code.&quot;

I poked around the &quot;Chef wiki&quot;:wiki before reading this book, but I couldn't find a good, concise introduction to the framework. I understand the concepts of &quot;nodes, roles, resources, recipes, cookbooks, etc.&quot;:components but I was missing a guide that would tie it all together for me. Turns out, this book is that guide! And it also introduces the &quot;cucumber-chef tool&quot;:tool and shows how to build recipes and cookbooks test-first.

The book introduces &quot;Infrastructure as Code,&quot; test-driven development, Chef, and cucumber-chef, and then proceeds to a simple example using Chef to provision a shared Linux server. The recipes for the server are developed test-first, demonstrating both the technique and the workflow.

Overall, this book is the best introduction to Chef I've come across, and after reading it I feel confident in my ability to start writing my own recipes test first.

The biggest drawback of the book is that it is short. Although it covers a great deal of material in so few pages, I would have liked a few more chapters at the end that explored more advanced topics, or dug into some of the other topics in more detail. As it is, I can't wait to get my hands on a copy of the author's next book, &quot;Chef: The Definitive Guide&quot;:defguide (perhaps that was his intent all along! ;P).

[book]http://shop.oreilly.com/product/0636920020042.do
[chef]http://www.opscode.com/chef
[components]http://wiki.opscode.com/display/chef/Core+Components
[defguide]http://shop.oreilly.com/product/0636920025146.do
[tool]http://wiki.opscode.com/display/chef/Core+Components
[wiki]http://wiki.opscode.com/
</content>
 </entry>
 
 <entry>
   <title>Book Reviews</title>
   <link href="http://kent.spillner.org/blog/books/life/2010/09/18/books-category.html"/>
   <updated>2010-09-18T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/books/life/2010/09/18/books-category</id>
   <content type="html">I've added a new category to my site, &quot;Books&quot;:http://kent.spillner.org/blog/books/, where I intend to post book reviews.  This new category comes complete with &quot;its very own feed&quot;:http://feeds.feedburner.com/kent-spillner-books.

Once again, you're welcome!
</content>
 </entry>
 
 <entry>
   <title>Make All Environments Consistent</title>
   <link href="http://kent.spillner.org/blog/work/2009/12/30/make-all-environments.html"/>
   <updated>2009-12-30T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2009/12/30/make-all-environments</id>
   <content type="html">Consistent environments are important on software projects.  They increase the team's productivity, make it easier to diagnose, replicate and resolve bugs, make it easier to tune performance and scale the application, and reduce the cost of administration and maintenance.  Building consistent environments is a great candidate for automation; in fact, tools like &quot;Chef&quot;:chef and &quot;Puppet&quot;:puppet were written for exactly this purpose.  But Chef and Puppet are complicated, heavyweight solutions.  They also require a considerable amount of infrastructure themselves (&quot;git&quot;:git, &quot;Ruby&quot;:ruby, etc.)!

For lighter, simpler configuration needs I prefer &quot;Makefiles&quot;:make.  Makefiles are great for this sort of thing because versions of &quot;Make&quot;:make exist for every operating system, and most UNIX systems come with some version already installed.  I use Make to install my personal configuration settings for common utilities on each machine I use (see &quot;http://github.com/sl4mmy/dotfiles&quot;:http://github.com/sl4mmy/dotfiles), for example.

Makefile syntax is familiar to system administrators and easy to learn, and because Makefiles are &quot;shell scripts&quot;:sh (with a few format constraints) they map directly to the sequence of steps you would perform manually to setup your environments.  But arguably the biggest advantage of Makefiles is that they won't do anything when the file about to be installed or modified already exists.  This is critical in shared production environments where services might already be installed and configured for other applications, and you don't want to risk accidentally overwriting something important.

Consider a project dependent on &quot;Java&quot;:java and &quot;JRuby&quot;:jruby, and a convention that third-party applications should be installed under @/opt/apps/&lt;name&gt;/&lt;name&gt;-&lt;version&gt;@.  The following Makefile installs and configures this project's environment accordingly:

&lt;pre&gt;&lt;code&gt;
# Makefile

JAVA_VERSION=1.6.0_18
JRUBY_VERSION=1.4.0

# TARGET 0:
all: java jruby

# TARGET 1:
java: /opt/apps/java/java-${JAVA_VERSION}/

# TARGET 2:
jruby: /opt/apps/jruby/jruby-${JRUBY_VERSION}/

# TARGET 3:
/opt/apps/java/java-${JAVA_VERSION}/: /opt/apps/java/ /tmp/java-${JAVA_VERSION}.tar.gz
	(cd /opt/apps/java; tar xzf /tmp/java-${JAVA_VERSION}.tar.gz)

# TARGET 4:
/opt/apps/jruby/jruby-${JRUBY_VERSION}/: /opt/apps/jruby/ /tmp/jruby-${JRUBY_VERSION}.tar.gz
	(cd /opt/apps/jruby; tar xzf /tmp/jruby-${JRUBY_VERSION}.tar.gz)

# TARGET 5:
/opt/apps/java/: /opt/apps/
	mkdir /opt/apps/java

# TARGET 6:
/opt/apps/jruby/: /opt/apps/
	mkdir /opt/apps/jruby

# TARGET 7:
/tmp/java-${JAVA_VERSION}.tar.gz:
	# Shell commands to download an archive of Java and save it as /tmp/java-${JAVA_VERSION}.tar.gz

# TARGET 8:
/tmp/jruby-${JRUBY_VERSION}.tar.gz:
	# Shell commands to download an archive of JRuby and save it as /tmp/jruby-${JRUBY_VERSION}.tar.gz

# TARGET 9:
/opt/apps/:
	mkdir -p /opt/apps

&lt;/code&gt;&lt;/pre&gt;

&lt;br/&gt;
Starting from the top:

&lt;pre&gt;&lt;code&gt;
# Makefile

JAVA_VERSION=1.6.0_18
JRUBY_VERSION=1.4.0

&lt;/code&gt;&lt;/pre&gt;

You should always name your Makefiles @Makefile@ because Make will look for that file in the current directory by default when it runs.  You _are_ free to name your Makefiles anything you wish, but if they are _not_ named @Makefile@ then Make will require additional configuration (the @-f@ command line argument) telling it which file to load.

Makefile comments start with #, just like shell scripts.  The @# Makefile@ comment in this example is not necessary, it is only for illustrative purposes.

The first two non-comment lines define variables that will be used throughout this Makefile.  You may safely assume the syntax for defining and referencing variables in Makefiles is identical to shell scripting.

&lt;pre&gt;&lt;code&gt;
# TARGET 0:
all: java jruby

&lt;/code&gt;&lt;/pre&gt;

Target 0, named @all@ here (though it could be named anything), is the default target because it is the first target declared in this Makefile. Running @make@ in the directory containing this Makefile is equivalent to running @make all@.

Make is a file-centric build tool; it assumes the purpose of each target is to produce (or &quot;make,&quot; natch) a single file.  By convention, the name of a target is the name of the output file produced by invoking that target.  In our example, Make assumes that the @all@ target will produce a file in the current directory named @all@.  Make uses this convention to determine whether or not it actually needs to execute that target: if a file named @all@ already exists in the current directory, Make won't execute the @all@ target.
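
You can see this up-to-date check for yourself with a throwaway Makefile (a sketch; it assumes GNU make, whose @-C@ flag changes into the given directory before reading the Makefile):

```shell
# The first run creates a file named "all"; the second run finds it
# already present and does nothing.
dir=$(mktemp -d)
printf 'all:\n\ttouch all\n' | tee "$dir/Makefile"
make -C "$dir"   # runs: touch all
make -C "$dir"   # reports that 'all' is up to date
```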

Of course, targets are _not required_ to actually produce a file with the same name as the target, but if they don't Make will never think such targets are up-to-date and will execute them every time.  In this case, we want Make to do exactly that; our goal is not to produce a file named @all@ in the current directory, our goal is to set up a fully working environment on any machine by simply copying this Makefile and running @make@.  We are using @all@ as a convenience target so that we can make _both_ @java@ and @jruby@ by default.  If there were more dependencies, we could create separate targets for each of them and add them as additional prerequisites of @all@.

Our @all@ target is declared with two prerequisites: @java@ and @jruby@.  Make executes all prerequisites before executing the target, so our @java@ and @jruby@ targets will execute before @all@.

&lt;pre&gt;&lt;code&gt;
# TARGET 1:
java: /opt/apps/java/java-${JAVA_VERSION}/

# TARGET 2:
jruby: /opt/apps/jruby/jruby-${JRUBY_VERSION}/

&lt;/code&gt;&lt;/pre&gt;

The @java@ and @jruby@ targets are declared in Target 1 and Target 2, and each has a single prerequisite: the corresponding installation directory defined by our convention.  Absolute paths are valid target names in Makefiles, and since they are used as prerequisites here, Make will search for targets with those names before it executes the @java@ or @jruby@ targets.

&lt;pre&gt;&lt;code&gt;
# TARGET 3:
/opt/apps/java/java-${JAVA_VERSION}/: /opt/apps/java/ /tmp/java-${JAVA_VERSION}.tar.gz
	(cd /opt/apps/java; tar xzf /tmp/java-${JAVA_VERSION}.tar.gz)

# TARGET 4:
/opt/apps/jruby/jruby-${JRUBY_VERSION}/: /opt/apps/jruby/ /tmp/jruby-${JRUBY_VERSION}.tar.gz
	(cd /opt/apps/jruby; tar xzf /tmp/jruby-${JRUBY_VERSION}.tar.gz)

&lt;/code&gt;&lt;/pre&gt;

The @/opt/apps/java/java-${JAVA_VERSION}/@ and @/opt/apps/jruby/jruby-${JRUBY_VERSION}/@ targets are declared in Target 3 and Target 4.  They both have two prerequisites, and one step.  Notice that the trailing slash in each target's name makes it explicit to Make that these targets produce directories.

Steps are shell commands Make runs when it executes a target, and virtually any valid shell script is also a valid step.  Steps are associated with the target immediately above them in the Makefile, and they _must_ begin with a @TAB@ (*no spaces!*).  Multiple steps can be associated with a target and will run in order, but each line _must_ begin with a @TAB@ (*no spaces!*).  Makefiles are very touchy about whitespace: every line beginning with a @TAB@ below a target definition is a step for that target, and the first line that does _not_ begin with a @TAB@ is the separator between targets.

When Target 3 is executed, Make will spawn a temporary subshell, change the current working directory to @/opt/apps/java@ and extract @/tmp/java-${JAVA_VERSION}.tar.gz@ there.  The parentheses around the step are not required by Make; they are used here as in a regular shell script: the enclosed commands run in a subshell, returning to the parent shell when finished.  I did this so that changing directories in this step won't affect the rest of the Makefile.  Target 4 does the same for JRuby.
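
The subshell trick is plain shell behavior, easy to verify interactively (a sketch):

```shell
# Parentheses run their commands in a child process, so the cd
# does not change the parent shell's working directory.
before=$(pwd)
(cd /tmp; pwd)   # the subshell is now in /tmp
after=$(pwd)     # ...but we are still where we started
```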

&lt;pre&gt;&lt;code&gt;
# TARGET 5:
/opt/apps/java/: /opt/apps/
	mkdir /opt/apps/java

# TARGET 6:
/opt/apps/jruby/: /opt/apps/
	mkdir /opt/apps/jruby

...

# TARGET 9:
/opt/apps/:
	mkdir -p /opt/apps

&lt;/code&gt;&lt;/pre&gt;

Target 3 (@/opt/apps/java/java-${JAVA_VERSION}/@) depends on @/opt/apps/java/@, declared as Target 5, and Target 4 (@/opt/apps/jruby/jruby-${JRUBY_VERSION}/@) depends on @/opt/apps/jruby/@, declared as Target 6.  Both targets depend on @/opt/apps/@, declared as Target 9.  These three targets ensure that the correct directory structure exists according to our convention, creating missing directories as necessary.

&lt;pre&gt;&lt;code&gt;
# TARGET 7:
/tmp/java-${JAVA_VERSION}.tar.gz:
	# Shell commands to download an archive of Java and save it as /tmp/java-${JAVA_VERSION}.tar.gz

# TARGET 8:
/tmp/jruby-${JRUBY_VERSION}.tar.gz:
	# Shell commands to download an archive of JRuby and save it as /tmp/jruby-${JRUBY_VERSION}.tar.gz

&lt;/code&gt;&lt;/pre&gt;

Targets 7 and 8 are placeholders showing how to download the necessary files before building an environment.

So, what happens when you run @make@ on a clean environment without any of the directory structure or tarballs necessary for this project?  Follow along by tracing the prerequisites in the Makefile.

Since @all@ is the default target, Make starts by looking for a file named @all@ in the current directory; unable to find that file, Make tries to execute the prerequisites of @all@, @java@ and @jruby@.  Since no file named @java@ exists in the current directory either, Make looks up the target named @java@ and then tries to execute its prerequisite, @/opt/apps/java/java-${JAVA_VERSION}/@.  Since that directory does not exist, Make looks up the target named @/opt/apps/java/java-${JAVA_VERSION}/@ and tries to execute its prerequisites, @/opt/apps/java/@ and @/tmp/java-${JAVA_VERSION}.tar.gz@.  Since the directory @/opt/apps/java/@ does not exist, Make looks up the target named @/opt/apps/java/@ and tries to execute its prerequisite, @/opt/apps/@.  Since that directory does not exist, Make looks up the target named @/opt/apps/@, sees that it has no prerequisites, and executes the steps associated with that target (@mkdir -p /opt/apps@) which produces the directory @/opt/apps/@.

Make then backtracks to the @/opt/apps/java/@ target, determines that its prerequisites were previously executed, and executes its steps (@mkdir /opt/apps/java@) which produces the directory @/opt/apps/java/@.  Make backtracks again to the @/opt/apps/java/java-${JAVA_VERSION}/@ target, sees that it still has one unsatisfied prerequisite, and looks up the target named @/tmp/java-${JAVA_VERSION}.tar.gz@.  Make sees that @/tmp/java-${JAVA_VERSION}.tar.gz@ has no prerequisites, and executes its steps to download an archive of Java and save it as @/tmp/java-${JAVA_VERSION}.tar.gz@.  Make backtracks to the @/opt/apps/java/java-${JAVA_VERSION}/@ target again, sees that all of its prerequisites are now satisfied, and executes its steps which produce the directory @/opt/apps/java/java-${JAVA_VERSION}/@.

Make then backtracks all the way back to the @all@ target, and does the same for its other prerequisite, @jruby@.

Now, consider what happens when you re-run @make@ in the directory containing this Makefile (or run it for the first time in an environment which was previously setup manually).  Make sees that @all@ is the default target, but cannot find a file named @all@ in the current directory, so it tries to execute its prerequisites, @java@ and @jruby@.  There is no file named @java@ in the current directory, either, but the @java@ target's only prerequisite is @/opt/apps/java/java-${JAVA_VERSION}/@, and that directory _does_ exist, so Make skips the prerequisite's steps and goes straight to executing the steps associated with the @java@ target.  Since there are none, nothing happens, and Make backtracks to the @all@ target and tries to execute its other prerequisite, @jruby@.  Similarly, there is no file named @jruby@, but the directory corresponding to the @jruby@ target's only prerequisite _does_ exist, so Make skips executing the prerequisite target, going straight to the steps associated with @jruby@.  Again, there are none so nothing happens, Make backtracks back to the @all@ target, determines its prerequisites were previously executed and so tries to execute the steps associated with @all@.  Since there are no steps associated with @all@, nothing happens, and Make finishes successfully without having done _anything_.  Perfect for production environments!

What happens if you delete @/tmp/java-${JAVA_VERSION}.tar.gz@ and re-run @make@ in the directory containing the Makefile?  Well, nothing.  Make never sees that the file is missing since the directory @/opt/apps/java/java-${JAVA_VERSION}/@ exists.  But if you delete the directory @/opt/apps/java/java-${JAVA_VERSION}/@ and re-run @make@, then @/tmp/java-${JAVA_VERSION}.tar.gz@ will be downloaded again and @/opt/apps/java/java-${JAVA_VERSION}/@ will be re-created.  The point is: you must carefully ensure all of your application's dependencies are properly exposed as prerequisites in your Makefile, not hidden as nested prerequisites of prerequisites.

I love &quot;Make&quot;:make as a low-touch solution for setting up consistent environments!  It's simple to learn and easy to use, and it is low-risk for shared production environments because it reduces the likelihood of modifying or deleting files used by other applications.  And when you outgrow Make's capabilities and need a more powerful tool, &quot;Makefiles&quot;:make are the perfect way to consistently install and configure &quot;Chef&quot;:chef or &quot;Puppet&quot;:puppet across multiple machines.

[ant]http://ant.apache.org
[chef]http://wiki.opscode.com/display/chef/Home
[git]http://www.git-scm.com
[java]http://java.sun.com
[jruby]http://jruby.org
[make]http://www.opengroup.org/onlinepubs/009695399/utilities/make.html
[puppet]http://reductivelabs.com/products/puppet/
[rake]http://rake.rubyforge.org
[ruby]http://www.ruby-lang.org
[sh]http://www.opengroup.org/onlinepubs/009695399/utilities/sh.html
[ubuntu]http://www.ubuntu.com
</content>
 </entry>
 
 <entry>
   <title>Java Build Tools: Ant vs. Maven</title>
   <link href="http://kent.spillner.org/blog/work/2009/11/14/java-build-tools.html"/>
   <updated>2009-11-14T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2009/11/14/java-build-tools</id>
   <content type="html">_*TRANSLATIONS:* &quot;Spanish&quot;:spanish (gracias, Jos&amp;eacute;!)_

_*UPDATED 2010-01-06:* linked to demonstration of the &quot;10 minute mvn clean build&quot; problem, and added notes about: slow build times, excessive memory use, bad test result output, untrusted repository artifacts, and external configuration files._

_*UPDATED 2010-02-21:* linked to Spanish translation (courtesy of Jos&amp;eacute; Manuel Prieto)_

The best build tool is the one you write yourself.  Every project's build process is unique, and often individual projects need to be built multiple different ways.  It is impossible for tool authors to anticipate every build's requirements, and foolhardy to try (&quot;Apache&quot;:apache developers: take note).  The best any tool can do is provide a flexible library of reusable tasks that can easily be adapted to your needs, but even that is insufficient.  Off-the-shelf tasks never suit your project perfectly.  You will waste countless hours struggling to make those tasks do _exactly_ what you need, only to give up and write a plugin instead.  Writing your own custom build tool is quick and easy, and requires less maintenance than you fear.  Don't be afraid: builds should fit your project, not the other way around.

If you don't want to write your own build tool, then you should use &quot;Rake&quot;:rake.  &quot;Rake&quot;:rake is the best existing build tool for Java projects.  &quot;Rake&quot;:rake provides a bunch of standard methods to perform common build tasks, and anything else can be quickly implemented in &quot;Ruby&quot;:ruby.  Writing build scripts in a real programming language gives &quot;Rake&quot;:rake a huge advantage over other tools.  There are other advantages, too, but none are as important.

So, you should write custom build tools for your projects.  If you don't want to, then you should switch to &quot;Rake&quot;:rake.  If you can't switch, you should lobby for the right to switch.  If politics drives technology decisions, if you will never be allowed to switch, then quit your job or leave the project.

If you lack the courage to quit, then use &quot;Ant&quot;:ant.  &quot;Ant&quot;:ant is the second best existing build tool for Java projects.  Although inferior to &quot;Rake&quot;:rake, &quot;Ant&quot;:ant is still a great build tool.  &quot;Ant&quot;:ant is mature and stable, it is fast, and it comes with a rich library of tasks.  &quot;Ant&quot;:ant makes it possible (but &quot;not at all easy&quot;:http://alex-verkhovsky.blogspot.com/2009/03/lets-use-real-languages-for-builds.html) to script rich, complex build processes custom-tailored to your project.

So, write your own build tool, or else switch to &quot;Rake&quot;:rake, or fight to switch to &quot;Rake&quot;:rake, or quit and go some place where you can use &quot;Rake&quot;:rake.  And if all else fails, use &quot;Ant&quot;:ant until you can find a new job somewhere else that uses &quot;Rake&quot;:rake.

That's it!  Those are the *only* choices I can recommend!  Because you never, *ever*, under _any_ circumstances want to use &quot;Maven&quot;:maven!

&quot;Maven&quot;:maven builds are an infinite cycle of despair that will slowly drag you into the deepest, darkest pits of hell (where &quot;Maven&quot;:maven itself was forged).  You will initially only spend ten minutes getting &quot;Maven&quot;:maven up and running, and might even be happy with it for a while.  But as your project evolves, and your build configuration grows, the basic &quot;pom.xml&quot;:pom that you started with will prove inadequate.  You will slowly add more configuration to get things working the way you need, but there's only so much you can configure in &quot;Maven&quot;:maven.  Soon, you will encounter &quot;Maven's&quot;:maven low glass ceiling for the first time.  By &quot;encounter,&quot; I mean &quot;smash your head painfully against.&quot;  By &quot;for the first time,&quot; I mean &quot;you will do this repeatedly and often in the future.&quot;  Eventually, you'll figure out some convoluted &quot;pom.xml&quot;:pom hackery to work around your immediate issue.  You might even be happy with &quot;Maven&quot;:maven again for a while... until another limitation rears its ugly little head.  It's a lot like some tragic Greek myth, only you are the damned soul and the eternity of suffering is your build process.

Seriously.  &quot;Maven&quot;:maven is a _horrible_ implementation of _bad_ ideas.  I believe someone, somewhere had (perhaps still has) a vision for &quot;Maven&quot;:maven that was sensible, if not seductive.  But the actual implementation of &quot;Maven&quot;:maven lacks any trace of such vision.  In fact, everything in &quot;Maven&quot;:maven is so bad that it serves as a valuable example of how _not_ to build software.  You know your build is awesome when it works the opposite of &quot;Maven&quot;:maven.

Consider the test results output from &quot;Maven's&quot;:maven &quot;Surefire plugin&quot;:surefire.  Everything seems fine as long as all of your tests are passing, but &quot;Surefire&quot;:surefire reports are a _nightmare_ to debug when things go wrong!  The only information logged to the console is the name of the failing test class.  You must _manually_ cross-reference that name with a log file written in the @target/surefire-reports/@ directory, but those logs are written _one per test class_!  So, if multiple test classes fail, you must separately check multiple log files.  It seems like a minor thing, but it quickly adds up to a major annoyance and productivity sink.

&quot;Maven&quot;:maven advocates claim their tool embraces the principle of _&quot;Convention Over Configuration&quot;:http://www.sonatype.com/books/maven-book/reference/installation-sect-conventionConfiguration.html_; &quot;Maven&quot;:maven advocates are liars.  The only convention &quot;Maven&quot;:maven supports is: &quot;compile, run unit tests, package .jar file&quot;:http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html.  Getting &quot;Maven&quot;:maven to do _anything_ else requires _configuring_ the conventions.  Want to package a .war file?  &quot;Configure it&quot;:http://maven.apache.org/plugins/maven-war-plugin/index.html.  Want to run your application from the command line?  &quot;Configure it&quot;:http://mojo.codehaus.org/exec-maven-plugin/java-mojo.html.  Want to run acceptance tests or functional tests or performance tests with your build, too?  You can &quot;configure it&quot;:http://docs.codehaus.org/display/MAVENUSER/Maven+and+Integration+Testing, but it involves *not* running your unit tests, or not running them during the _conventional_ unit test phase of your build process, or...  Want to generate code coverage metrics for your project?  You can &quot;configure that&quot;:http://mojo.codehaus.org/cobertura-maven-plugin/index.html, too, but your tests will run _twice_ (or only once, but not during the _conventional_ unit test phase), and sometimes it reports 0% code coverage despite the comprehensive test suite.

Speaking of configuration, &quot;Maven&quot;:maven has the worst configuration syntax since &quot;Sendmail&quot;:http://www.sendmail.org: &quot;alternating normal form&quot;:http://www.ltg.ed.ac.uk/~ht/normalForms.html &quot;XML&quot;:xml.  As a consequence, &quot;Maven&quot;:maven configuration is verbose, difficult to read and difficult to write.  Things you can do in one or two lines of &quot;Ruby&quot;:ruby or &quot;XML&quot;:xml with &quot;Rake&quot;:rake or &quot;Ant&quot;:ant require six, seven, eight lines of &quot;pom.xml&quot;:pom configuration (assuming it's even _possible_ with &quot;Maven&quot;:maven).

There's nothing consistent about &quot;Maven's&quot;:maven configuration, either.  Some things are configured as classpath references to .properties files bundled in .jar files configured as dependencies, some things are configured as absolute or relative paths to files on disk, and some things are configured as system properties in the JVM running &quot;Maven&quot;:maven.  And some of those absolute paths are portable across projects because &quot;Maven&quot;:maven knows how to correctly resolve them, but some are not.  And sometimes &quot;Maven&quot;:maven is smart enough to recursively build projects in the correct order, but sometimes it's not.

And some things aren't even configured in the &quot;pom&quot;:pom!  Some things, like &quot;Maven repositories&quot;:http://maven.apache.org/repository-management.html, servers, and authentication credentials, are configured in &quot;settings.xml&quot;:settings.  It is perfectly reasonable to want to keep users' passwords out of &quot;pom.xml&quot;:pom files which will be checked into the project's version control repository.  But &quot;Maven's&quot;:maven solution is terrible: all this configuration goes in a &quot;settings.xml&quot;:settings file that lives outside of any project's directory.  You can't directly share any of this configuration between your desktop and laptop, or with other developers, or with your project's build servers.  But it _is_ *automatically* shared with _every_ single &quot;Maven&quot;:maven project you work with, and potentially every single &quot;Maven&quot;:maven project _every user_ on that machine works with.  When a new developer joins your project, they must _manually_ merge the necessary configuration into their existing &quot;settings.xml&quot;:settings.  When a new agent is added to your build server farm, the necessary configuration must be _manually_ merged into its existing &quot;settings.xml&quot;:settings.  Ditto for when you migrate to a new machine.  And when _any_ of this configuration needs to be updated, it must be _manually_ updated on _every_ single machine!  This was a solved problem before &quot;Maven&quot;:maven came along, too: properties files.  Project teams can put generic configuration like this in a properties file which is checked in to version control, and individual developers can override that information in a local properties file which is not checked in to version control.
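A minimal sketch of that properties-file pattern, with hypothetical file names: a checked-in @build.properties@ holds the team defaults, and an untracked @local.properties@ wins when present.

```shell
# Sketch of the properties-file pattern described above: shared defaults are
# checked in, a local (untracked) file overrides them.  File names and keys
# are illustrative, not any particular tool's convention.
get_prop() {
  key=$1
  for file in local.properties build.properties; do
    if [ -e "$file" ]; then
      value=$(sed -n "s/^$key=//p" "$file" | head -n 1)
      if [ -n "$value" ]; then
        printf '%s\n' "$value"
        return 0
      fi
    fi
  done
  return 1
}
```

&quot;Ant&quot;:ant even supports this directly: load @local.properties@ before @build.properties@, and because &quot;Ant&quot;:ant properties are immutable, the first definition of each property wins.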

All this stuff in &quot;Maven&quot;:maven -- the conventions, the configuration, the process -- is governed by &quot;The Maven Way&quot;.  Unfortunately, &quot;The Maven Way&quot; is undocumented.  You can catch fleeting glimpses of it by trawling the &quot;Maven documentation&quot;:maven, searching the &quot;Google&quot;:http://www.google.com, or buying books written by &quot;Maven developers&quot;:http://maven.apache.org/team-list.html.  The other way you encounter &quot;The Maven Way&quot; is by tripping over (or smashing against) its invisible boundaries.  &quot;Maven&quot;:maven was not built to be flexible, and it does _not_ support every possible build process.  &quot;Maven&quot;:maven was built for &quot;Apache&quot;:apache projects, and assumes every project's build process mirrors &quot;Apache's&quot;:apache own.  That's great news for open-source library developers who volunteer on their own time and to whom &quot;release&quot; means &quot;upload a new .zip file to your website for others to manually find, download, and add to their own projects.&quot;  It sucks for everyone else.  While &quot;Rake&quot;:rake and &quot;Ant&quot;:ant can accommodate every build process, &quot;Maven&quot;:maven can't; it is possible, and in fact quite likely, that &quot;Maven&quot;:maven just doesn't support the way you want to build your software.

And &quot;Maven's&quot;:maven dependency management is completely, entirely, irrevocably broken.  Actually, I take that back; &quot;Maven's&quot;:maven strategy of downloading &quot;ibiblio&quot;:http://ibiblio.org to the user's home directory and then dumping everything on the classpath is incredibly stupid and wrong and should never be confused with &quot;dependency management.&quot;  I recently worked on a &quot;Maven&quot;:maven project which produced a 51 MB .war file; by switching to &quot;Ant&quot;:ant with hand-rolled dependency management, we shrunk that .war file down to 17 MB.  Hrmmm... 51 - 17 = 34 = 17 x 2, or: 2/3 of the original bulk was useless crap &quot;Maven&quot;:maven dumped on us.

Extraneous dependencies don't just eat up disk space, they eat up precious RAM, too!  &quot;Maven&quot;:maven is an all-around memory hog.  Relatively simple projects, with only a parent &quot;pom&quot;:pom and a few sub-modules, require extensive JVM memory tuning with all those fancy @JAVA_OPTS@ settings you typically only see on production servers.  Things are even worse if your &quot;Maven&quot;:maven build is integrated with your IDE.  It's common to set your JVM's max heap size to several hundred megabytes, the max permgen size to a few hundred megabytes, and enable permgen sweeping so classes themselves are garbage collected.  And all this just to build your project, or work with &quot;Maven&quot;:maven in your IDE!

Funny story: on that same project I once endured a ten minute &quot;mvn clean&quot; build because &quot;Maven&quot;:maven thought it needed yet more crap in order to &quot;rm -rf ./target/&quot; (see a similar example: &quot;http://gist.github.com/267553&quot;:http://gist.github.com/267553).  Actually, there's nothing funny about that story; trust me: you don't want a build tool which automatically downloads unresolved dependencies before cleaning out your build output directories.  You don't want a build tool which automatically downloads unresolved dependencies, *PERIOD*!  Automatically downloading unresolved dependencies makes your build process _nondeterministic_!  Good ol' nondeterminism: loads of fun in school, not so fun at work!

And all that unnecessary, unwanted network chatter takes time.  You pay a performance penalty for &quot;Maven's&quot;:maven broken dependency management on every build.  Ten minute clean builds are horrible, but adding an extra minute to _every_ build is even worse!  I estimate the average additional overhead of &quot;Maven&quot;:maven is about one minute per build, based on the fact that the one time I switched from &quot;Maven&quot;:maven to &quot;Ant&quot;:ant the average build time dropped from two and a half minutes to one and a half.  Similarly, the one time I switched from &quot;Ant&quot;:ant to &quot;Maven&quot;:maven the average build time increased from two minutes to three.

You have no control over, and limited visibility into, the dependencies specified by your dependencies.  Builds _will_ break because different copies of &quot;Maven&quot;:maven _will_ download different artifacts at different times; your local build _will_ break again in the future when the dependencies of your dependencies accidentally release new, non-backwards compatible changes without remembering to bump their version number.  Those are just the innocent failures, too; the far more likely scenario is your project depends on a specific version of some other project which in turn depends on the LATEST version of some other project, so you still get hosed even when downstream providers _do_ remember to bump versions!  Every release of every dependency's dependencies becomes a new opportunity to waste several hours tracking down strange build failures.

But &quot;Maven&quot;:maven is even worse than that: not only does &quot;Maven&quot;:maven automatically resolve your project's dependencies, it automatically resolves its own plugins' dependencies, too!  So now not only do you have to worry about separate instances of &quot;Maven&quot;:maven accidentally downloading incompatible artifacts (or the same instance downloading incompatible artifacts at different times), you also have to worry about your build tool itself behaving differently across different machines at different times!

&quot;Maven's&quot;:maven broken dependency management is also a gaping security hole, since it is currently impossible in &quot;Maven&quot;:maven to determine where artifacts originally came from and whether or not they were tampered with.  Artifacts are automatically checksummed when they are uploaded to a repository, and &quot;Maven&quot;:maven automatically verifies that checksum when it downloads the artifact, but &quot;Maven&quot;:maven implicitly trusts the checksum on the repository it downloaded the artifact from.  The current extent of &quot;Maven&quot;:maven artifact security is that the &quot;Maven&quot;:maven developers control who has write access to the authoritative repository at &quot;ibiblio&quot;:http://www.ibiblio.org/maven/.  But there is no way of knowing if the repository you download all your dependencies from was poisoned, there is no way of knowing if your local repository cache was poisoned, and there is no way of knowing which repository artifacts in your local repository cache came from or who uploaded them there.

These problems are not caused by careless developers, and are not solved by using &quot;repository managers&quot;:http://maven.apache.org/repository-management.html to lock down every artifact &quot;Maven&quot;:maven needs.  &quot;Maven&quot;:maven is broken and wrong if it assumes humans never make mistakes.  &quot;Maven&quot;:maven is broken and wrong if it requires users to explicitly specify every version of every dependency, and every dependency's dependencies, to reduce the likelihood of downloading incompatible artifacts.  &quot;Maven&quot;:maven is broken and wrong if it requires a third-party tool to prevent it connecting to the big, bad internets and automatically downloading random crap.  &quot;Maven&quot;:maven is broken and wrong if it thinks nothing of slowing down _every_ build by connecting to the network and checking _every_ dependency for _any_ updates, and automatically downloading them.  &quot;Maven&quot;:maven is broken and wrong if it behaves differently on my laptop at the office and at home.  &quot;Maven&quot;:maven is broken and wrong if it requires an internet connection to delete a directory.  &quot;Maven&quot;:maven is broken and wrong.

&quot;Save&quot;:rake &quot;yourself&quot;:ant.

[ant]http://ant.apache.org
[apache]http://jakarta.apache.org
[java]http://java.sun.com
[maven]http://maven.apache.org
[pom]http://maven.apache.org/guides/introduction/introduction-to-the-pom.html
[rake]http://rake.rubyforge.org
[ruby]http://www.ruby-lang.org
[settings]http://maven.apache.org/settings.html
[surefire]http://maven.apache.org/plugins/maven-surefire-plugin/
[xml]http://www.xml.com

[spanish]http://prietopa.wordpress.com/2010/01/29/herramientas-de-construccion-de-java-ant-vs-maven/
</content>
 </entry>
 
 <entry>
   <title>And Now for Something Completely Different</title>
   <link href="http://kent.spillner.org/blog/life/work/2009/11/13/something-different.html"/>
   <updated>2009-11-13T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/life/work/2009/11/13/something-different</id>
   <content type="html">Today is my last day working for &quot;ThoughtWorks&quot;:http://www.thoughtworks.com.  After four and a half years, I decided to hang up my consultant's cape and try something new: I accepted an offer from &quot;DRW Trading Group&quot;:http://www.drw.com starting next week.

I am extremely excited about my new job at &quot;DRW&quot;:http://www.drw.com, but the decision to leave &quot;ThoughtWorks&quot;:http://www.thoughtworks.com was difficult and I am sad to say &quot;goodbye!&quot;  &quot;ThoughtWorks&quot;:http://www.thoughtworks.com was an important part of my life these past several years, and I am leaving behind a lot of great friends.  I will miss working with &quot;ThoughtWorkers&quot;:http://blogs.thoughtworks.com every day, and I wish them all the very best.

Ultimately, my decision to leave stems from my frustration with consulting.  Ostensibly, companies hire consultants for their expertise in helping solve difficult problems.  Typically, companies select consultants based on past experience and expertise in successfully solving similar problems.  But, invariably, companies ignore or push back against anything that runs counter to their current way of doing things, no matter how much they paid for the suggestion.

I accept that compromise, education and negotiation are important aspects of consulting, and I agree it is unreasonable - and would be unwise - to expect clients to do everything exactly as they are advised.  I also enjoy helping others solve problems.  I am just tired of arguing about the same things with different clients over and over again.  And hence the change.

I am not bitter or jaded, and I bear no grudge; it is just time for me to try something new, away from client politics.  &quot;ThoughtWorks&quot;:http://www.thoughtworks.com is a great company, and I can't wait to see what lies in store for them in the future!  But I won't miss my cape.</content>
 </entry>
 
 <entry>
   <title>Private Methods are a Code Smell</title>
   <link href="http://kent.spillner.org/blog/work/2009/11/12/private-methods-stink.html"/>
   <updated>2009-11-12T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2009/11/12/private-methods-stink</id>
   <content type="html">Private methods are a code smell.  They should be moved to collaborating classes and made public.

Private helper methods indicate classes are doing too many things.  Moving private helper methods to different classes, including creating new classes if necessary, splits the original responsibilities across multiple classes, leading to simpler, better designs.

Sometimes, private methods are created to split complex logic or processes into small, easily digested pieces.  Often, such private methods have gnarly dependencies because they directly access or modify internal state.  Moving these methods to appropriate collaborators (again, creating new classes as necessary) exposes such dependencies.  Eliminating these dependencies simplifies the new API, which improves readability and understanding.

Sometimes, private methods are created just to give pieces of functionality more descriptive names.  Although descriptive names are desirable, creating private methods to provide descriptive names for things is still a smell.  Moving these methods to collaborators and making them public creates opportunities for future reuse without reducing the clarity of the original code.

Taking small steps to improve design leads to flashes of brilliant design inspiration.  As code slowly evolves into better shape, bits and pieces fall into place until another, superior design becomes clear.  Making private methods public and moving them to (perhaps missing) collaborators is a simple and effective way to quickly improve design.  The resulting code is simpler, more testable, more reusable, more cohesive, and less coupled.  And when a superior design suddenly presents itself, a few public methods on many classes are easier to refactor than a few classes with many private methods.</content>
 </entry>
 
 <entry>
   <title>The Tao of Test-Driven Development</title>
   <link href="http://kent.spillner.org/blog/work/2009/11/11/tao-of-test-driven-development.html"/>
   <updated>2009-11-11T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2009/11/11/tao-of-test-driven-development</id>
   <content type="html">I gave my _Tao of Test-Driven Development_ tutorial at OOPSLA 2009 in Orlando, FL.  The tutorial began with a quick lecture-style introduction to test-driven development, and concluded with a hands-on coding dojo for attendees to practice what they learned.

The slides from my presentation are now &quot;available online&quot;:http://github.com/sl4mmy/presentations/tree/master/tao-of-test-driven-development/ in Keynote, PowerPoint and PDF formats.

I also uploaded &quot;starter code for parts 1 and 2 of the dojo&quot;:http://github.com/sl4mmy/presentations/tree/master/tao-of-test-driven-development/.  Although the dojo was technology-agnostic and attendees were free to use any language, the starter code uses Java, JUnit and Ant.

The starter code for part 2 includes my solution to part 1, but there are no right or wrong solutions.  My solution is illustrative of _my_ approach to solving part 1, but is by no means the only solution.  Indeed, I practiced part 1 three times in preparation for the tutorial and ended up with a different solution each time.</content>
 </entry>
 
 <entry>
   <title>Introducing Hippie</title>
   <link href="http://kent.spillner.org/blog/work/2009/11/10/introducing-hippie.html"/>
   <updated>2009-11-10T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/work/2009/11/10/introducing-hippie</id>
   <content type="html">My current project integrates with a lot of external services.  Unfortunately, our service-level agreements do not extend to our development and testing environments, so our application breaks frequently because of service problems.  Services go down, services have bugs in the latest development versions deployed in our environments, our environments are upgraded to new versions of services that are not backwards compatible with the old versions, etc.

We write &quot;JUnit&quot;:http://www.junit.org tests to verify each service is running correctly in our environments, so we do not waste time debugging problems caused by service failures.  These tests make my team more productive, but unfortunately they do nothing to make the services more reliable.  At least, not by themselves...

Enter &quot;hippie&quot;:http://github.com/sl4mmy/hippie.

&quot;Hippie&quot;:http://github.com/sl4mmy/hippie is an open source tool that automatically sends &quot;Nagios&quot;:http://www.nagios.org &quot;passive checks&quot;:http://nagios.sourceforge.net/docs/3_0/passivechecks.html based on the result of running &quot;JUnit&quot;:http://www.junit.org tests.
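Under the hood there is no magic: a passive service check is just a delimited line of text handed to &quot;Nagios&quot;:http://www.nagios.org via its @send_nsca@ client.  A rough sketch of the plumbing &quot;hippie&quot;:http://github.com/sl4mmy/hippie automates (the host and service names are made up, and I'm assuming @send_nsca@'s default tab delimiter):

```shell
# Sketch of a Nagios passive service check submission.  Fields: host,
# service, return code (0=OK, 1=WARNING, 2=CRITICAL), plugin output;
# send_nsca expects them tab-delimited by default.  Host and service
# names here are made up.
format_passive_check() {
  printf '%s\t%s\t%s\t%s\n' "$1" "$2" "$3" "$4"
}
# e.g. format_passive_check app01 TigerTests 2 'shouldRespondWithOkStatus failed' \
#        | send_nsca -H nagios.example.com
```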

&quot;Hippie&quot;:http://github.com/sl4mmy/hippie bridges the divide between developer tools and system administrator tools.  &quot;JUnit&quot;:http://www.junit.org tests automatically notify &quot;Nagios&quot;:http://www.nagios.org, which automatically notifies other teams that their services are broken.

It is an extension of the automatic build notifications our continuous integration tools provide, only at much, much finer resolution.  &quot;Hippie&quot;:http://github.com/sl4mmy/hippie makes it possible to notify Alice on &quot;Team: Tiger&quot; when TigerTests#shouldRespondWithOkStatus() fails, but notify Bob in production support when LegacyTests#shouldAlwaysBeUpAndRunning() fails.

Those &quot;JUnit&quot;:http://www.junit.org tests we wrote no longer just prevent us from wasting time debugging service problems, they now kick off a chain of events culminating (hopefully) with someone else fixing the problem.  And _that_ is the ultimate goal of every programming endeavor.</content>
 </entry>
 
 <entry>
   <title>GitHub Pages</title>
   <link href="http://kent.spillner.org/blog/life/politics/work/2009/11/09/github-pages.html"/>
   <updated>2009-11-09T00:00:00+00:00</updated>
   <id>http://kent.spillner.org/blog/life/politics/work/2009/11/09/github-pages</id>
   <content type="html">_*UPDATED 2010-09-18:* Added a new category to my site, &quot;Books&quot;:http://kent.spillner.org/blog/books/, with &quot;its own separate feed.&quot;:http://feeds.feedburner.com/kent-spillner-books_

I finally found a blogging platform to love: &quot;GitHub Pages&quot;:http://pages.github.com/.

&quot;GitHub Pages&quot;:http://pages.github.com/ rocks!  It gives me control over every aspect of my site: directory structure, page layout, permalink format.  Everything is stored in plain text, versioned (courtesy of Git, natch), and cloudified.  There is no dumb blog name or stupid subtitle, no biography or mugshot, no comments or tag clouds.  I especially like the lack of comments because the best conversations happen when everyone just listens to me.

The sheer awesomeness of &quot;GitHub Pages&quot;:http://pages.github.com/ and its simplicity of use compel me to blog; they also compelled me to upgrade my GitHub account and wire-up a vanity domain.  I resisted lending my voice to the symphony of idiots blogging on the internets for many years, but no more!  &quot;GitHub Pages&quot;:http://pages.github.com/ makes it so damn easy, I just can't help myself.  If I fail in this endeavour, if I fail to update regularly or at all, it will be for lack of trying, not inadequacy of the tool.

The world suffers for lack of access to my opinions.  To make it easier to learn my opinion on a particular subject, blog entries will be categorized as one of: &quot;life&quot;:http://kent.spillner.org/blog/life/, &quot;politics&quot;:http://kent.spillner.org/blog/politics/, or &quot;work&quot;:http://kent.spillner.org/blog/work/.  This entry itself is a special case and will be categorized as all three to seed my syndication feeds.  There are separate feeds &quot;for&quot;:http://feeds.feedburner.com/kent-spillner-life &quot;each&quot;:http://feeds.feedburner.com/kent-spillner-politics &quot;category&quot;:http://feeds.feedburner.com/kent-spillner-work, and a fourth &quot;feed with everything&quot;:http://feeds.feedburner.com/kent-spillner, to provide the illusion of choice.  In reality, all of my opinions are equally valuable.

You're welcome.
</content>
 </entry>
 
</feed>
