<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Joshua Flanagan&#039;s Blog</title>
	<atom:link href="https://lostechies.com/joshuaflanagan/feed/" rel="self" type="application/rss+xml" />
	<link>https://lostechies.com/joshuaflanagan</link>
	<description></description>
	<lastBuildDate>Mon, 09 Nov 2015 17:09:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.2</generator>
		<item>
		<title>Build your own single function keyboard</title>
		<link>https://lostechies.com/joshuaflanagan/2015/11/09/build-your-own-single-function-keyboard/</link>
		<comments>https://lostechies.com/joshuaflanagan/2015/11/09/build-your-own-single-function-keyboard/#comments</comments>
		<pubDate>Mon, 09 Nov 2015 17:09:03 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[hardware]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://lostechies.com/joshuaflanagan/?p=97</guid>
		<description><![CDATA[I mentioned in my last post that it took months between the time I ordered my Infinity Ergodox keyboard and the time it arrived. In the meantime, I started reading up on it, and learned that the firmware could also&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2015/11/09/build-your-own-single-function-keyboard/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>I mentioned in <a href="http://www.joshuaflanagan.com/blog/2015/11/02/infinity-ergodox.html">my last post</a> that it took<br />
months between the time I ordered my Infinity Ergodox keyboard and the time it arrived. In the meantime, I <a href="http://people.eecs.ku.edu/~tlindsey/ErgoDox_FAQ.html">started reading up on it</a>, and learned that the firmware could also run on a <a href="https://www.pjrc.com/teensy/index.html">Teensy</a>. I didn’t know what a Teensy was, so I started researching. It’s a low-cost development board that is (mostly) Arduino compatible, and excels at building USB input devices. Using the <a href="https://www.pjrc.com/teensy/teensyduino.html">Teensyduino software</a>, you can easily configure the type of device it will appear as to your computer (USB mouse, USB keyboard, etc).</p>
<p>I decided I’d try my hand at building my <em>own</em> keyboard. But this one would be very special. It would have one key, with one function: approve GitHub pull requests! At ShippingEasy, all of our code goes through a Pull Request. Before you can merge the Pull Request, someone else has to approve it. By convention, we indicate approval with the <code>:+1:</code> emoji (also <code>:thumbsup:</code>). So I wanted a big, red button that I could slap to approve a pull request.</p>
<p>Turns out, it’s pretty easy to do. My search for a big red button led me to the <a href="http://www.staples.com/Staples-Easy-Button-/product_606396">Staples Easy Button</a>, which has a good reputation for these types of crafty projects. The button is very sturdy, so it can handle your emphatic code review approvals.</p>
<p><img src="http://www.joshuaflanagan.com/blog/assets/pr_button_complete.jpg" alt="finished button" /></p>
<p>The electronics part of the project was very straightforward. I disassembled the Easy button (<a href="http://www.instructables.com/id/Z-Wave-Easy-Button/step2/Disassemble-The-Easy-Button/">good instructions</a>) and got rid of the speaker. I used some hookup wire to connect the Easy button’s existing button circuitry to one of the input pins (20) and Ground on the <a href="https://www.pjrc.com/store/teensylc.html">Teensy LC</a>. To verify the button was connected properly, I wrote a simple program for the Teensy that lit up the onboard LED when there was input on pin 20.</p>
<p><img src="http://www.joshuaflanagan.com/blog/assets/teensy_connections.jpg" alt="Teensy connections" /></p>
<p>The hardest part, by far, of the entire project was trying to fit and secure the new board inside, and be able to close it back up. I also needed a hole in the case so the USB cable could go from the Teensy to the computer. Luckily I have a <a href="http://www.dremel.com/en-us/Tools/Pages/ToolDetail.aspx?pid=200+Series">Dremel tool</a>, which is perfect for this type of job. I used it to carve out a lot of the internal plastic and to drill a hole in the side for the cable. I got a little creative and glued in a carved-up <a href="http://www.homedepot.com/p/Safety-1st-Ultra-Clear-Plug-Protectors-18-Pack-HS230/205885675">plug protector</a> I found in the junk drawer. This serves as a mount for the Teensy, so it won’t rattle around inside.</p>
<p><img src="http://www.joshuaflanagan.com/blog/assets/teensy_mount.jpg" alt="Teensy mount" /></p>
<p>The final, and easiest, step was to write the software for my new “keyboard”. I used the <a href="https://www.pjrc.com/teensy/td_download.html">Teensyduino add-on</a> for the Arduino IDE, and set the USB Type to Keyboard. All Arduino programs (“sketches”) consist of a <code>setup</code> and a <code>loop</code> function. The <code>setup</code> is run once, and is where I configure the hardware pins. The Teensy LC’s onboard LED is available on pin 13 &#8211; I configure that as an output (to aid debugging). Then I need to configure pin 20 as an input (you’ll recall I soldered a wire from the button to pin 20). The <code>loop</code> function runs repeatedly, forever, while the device has power. I use the <code>Bounce</code> library which nicely encapsulates the logic for detecting button presses on an input pin. When the button is pressed, I turn on the LED and then send the sequence of characters <code>:+1:</code>, followed by Command+Enter to submit the PR comment form (probably needs to be changed to ALT+Enter on Windows). The <code>Keyboard</code> library handles all the details of sending the proper USB HID codes to the computer. Full source code:</p>
<pre>
/* Add +1 comment to Github Pull Request

   You must select Keyboard from the "Tools > USB Type" menu
*/

#include &lt;Bounce.h&gt;

const int ledPin = 13;
const int buttonPin = 20;
const int debounceTime = 10; //ms

Bounce button20 = Bounce(buttonPin, debounceTime);


void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(buttonPin, INPUT_PULLUP);
}

void loop() {
  button20.update();

  if (button20.fallingEdge()) {
    digitalWrite(ledPin, HIGH);
    Keyboard.println(":+1:");

    // submit form (Command+Return)
    Keyboard.press(KEY_LEFT_GUI);
    Keyboard.press(KEY_RETURN);
    delay(100);
    Keyboard.releaseAll();

  } else if (button20.risingEdge()){
    digitalWrite(ledPin, LOW);
  }
}
</pre>
<p>I had a blast with this project, and it’s a great introduction to building a simple device that can talk to a computer. Using the Teensy and a big button, you can send anything that a USB mouse or keyboard can send. Think of the possibilities! Special shout-out to <a href="https://twitter.com/scichelli">Sharon Cichelli</a> for her enthusiasm in showing how accessible and fun these hardware projects can be &#8211; hi Sharon!</p>
<p><img src="http://www.joshuaflanagan.com/blog/assets/pr_button_parts.jpg" alt="Work area with parts" /></p>
<p><em>Originally posted at <a href="http://www.joshuaflanagan.com/blog/2015/11/07/build-your-own-single-function-keyboard.html">http://www.joshuaflanagan.com/blog/2015/11/07/build-your-own-single-function-keyboard.html</a></em></p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2015/11/09/build-your-own-single-function-keyboard/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Infinity ErgoDox</title>
		<link>https://lostechies.com/joshuaflanagan/2015/11/02/infinity-ergodox/</link>
		<comments>https://lostechies.com/joshuaflanagan/2015/11/02/infinity-ergodox/#comments</comments>
		<pubDate>Tue, 03 Nov 2015 05:25:00 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[hardware]]></category>

		<guid isPermaLink="false">https://lostechies.com/joshuaflanagan/?p=86</guid>
		<description><![CDATA[I’ve always been picky about my keyboard, but recently discovered an entirely new world of keyboard enthusiasts. Aaron Patterson was on the Ruby Rogues podcast talking about mechanical keyboard kits. As in, keyboards you build yourself. You pick out the&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2015/11/02/infinity-ergodox/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>I’ve always been picky about my keyboard, but recently discovered an entirely new world of keyboard enthusiasts. <a href="https://devchat.tv/ruby-rogues/200-rr-200th-episode-free-for-all-">Aaron Patterson was on the Ruby Rogues podcast</a> talking about mechanical keyboard kits. As in, keyboards you build yourself. You pick out the key switches (get just the right clicky feel), you pick out the keycaps, you pick a layout, etc. And then you solder all the parts together. That sounded pretty extreme to me! But I was just getting back into hardware hacking (Arduino, etc.) and figured it would make for a fun project.</p>
<p>Intrigued, I did some research, and discovered that Massdrop was just starting an effort to build the <a href="https://www.massdrop.com/buy/infinity-ergodox">Infinity ErgoDox</a> &#8211; an update to the popular ErgoDox. Perfect, I placed an order (I went with Cherry MX Blue switches and blank DCS<br />
keycaps, in case that means anything to you). And waited. That was back in April. I pretty much forgot about it until a box arrived from Massdrop a couple weeks ago.</p>
<p><img src="http://www.joshuaflanagan.com/blog/assets/infinity_ergodox_parts.jpg" alt="parts" /></p>
<p>I finally got a rainy weekend to dedicate some time to it. Reading the forums, people were saying that this was a much easier build than the ErgoDox, with one mentioning a 20 minute build time. I think it took me 20 minutes to get all my tools out and unwrap everything. This was not a cheap keyboard, the parts are currently scarce, and most of the documentation seems to assume more knowledge of the process than I had. But taking my time, soldering the key switches to the PCB, installing the stabilizers, assembling the case, I had a working keyboard about four hours later!<br />
<img src="http://www.joshuaflanagan.com/blog/assets/infinity_ergodox_assembled.jpg" alt="a keyboard" /></p>
<p>Well, kinda. The hardware is only half of the story. The cool thing about the Infinity ErgoDox is that it has a <a href="https://github.com/kiibohd/controller">completely customizable (open source) firmware</a>. Now that I have a device with a bunch of keys on it, I can decide exactly<br />
what all of the keys do. It has an online configurator to build your customized firmware for you, if all you want to do is re-map keys. But if you really want to get fancy, you can build the firmware from source. This exposes all of the capabilities of the device. Specifically, this keyboard kit includes a 128&#215;32 LCD panel, as well as a controller to individually address each key’s backlight LED (not included in the kit &#8211; bring your own fun colors).</p>
<p>The best part is that there is a lot of unexplored territory. The existing firmware code has some very crude support for the LCD and LEDs &#8211; just enough to prove that they work. So there is a lot of room to flesh them out further, to try and support all the silly things people might want to light up while they type. That’s the part I am most excited about. Eventually, I’ll probably enjoy typing on it, too.</p>
<p><em>Originally posted at <a href="http://www.joshuaflanagan.com/blog/2015/11/03/infinity-ergodox.html">http://www.joshuaflanagan.com/blog/2015/11/03/infinity-ergodox.html</a></em></p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2015/11/02/infinity-ergodox/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A smarter Rails url_for helper</title>
		<link>https://lostechies.com/joshuaflanagan/2012/03/27/a-smarter-rails-url_for-helper/</link>
		<comments>https://lostechies.com/joshuaflanagan/2012/03/27/a-smarter-rails-url_for-helper/#comments</comments>
		<pubDate>Wed, 28 Mar 2012 03:36:06 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[conventions]]></category>
		<category><![CDATA[rails]]></category>
		<category><![CDATA[ruby]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2012/03/27/a-smarter-rails-url_for-helper/</guid>
		<description><![CDATA[Problem In my Rails application, I want to be able to determine a URL for any given model, without the calling code having to know the type of the model. Instead of calling post_path(@post) or comment_path(@comment), I want to just&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2012/03/27/a-smarter-rails-url_for-helper/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<h2 id="problem">Problem</h2>
<p>In my Rails application, I want to be able to determine a URL for any given model, without the calling code having to know the type of the model. Instead of calling <code>post_path(@post)</code> or <code>comment_path(@comment)</code>, I want to just say <code>url_for(@post)</code> or <code>url_for(@comment)</code>. This already works in many cases, when Rails can infer the correct route helper method to call. However, Rails cannot always infer the correct helper, and even if it does, it might not be the one you want.</p>
<p>For example, suppose you have comments as a nested resource under posts (config/routes.rb):</p>
<pre><code>resources :posts do
  resources :comments
end
</code></pre>
<p>I have a <code>@post</code> with id 8, which has a <code>@comment</code> with id 4. If I call <code>url_for(@post)</code>, it will correctly resolve to <code>/posts/8</code>. However, if I call <code>url_for(@comment)</code>, I get an exception:</p>
<pre><code>undefined method `comment_path' for #&lt;#&lt;Class:0x007fb58e4dbde0&gt;:0x007fb58d0453e0&gt;
</code></pre>
<p>Rails incorrectly guessed that the route helper method would be <code>comment_path</code> (unfortunately, it knows nothing of your route configuration). The correct call would be <code>post_comment_path(@comment.post, @comment)</code>, which returns <code>/posts/8/comments/4</code>. However, that requires the calling code to <i>know</i> too much about comments.</p>
<h2 id="mysolution">My solution</h2>
<p>I wanted a way to “teach” my application how to resolve my various models into URLs. The idea is inspired by FubuMVC’s UrlRegistry (which was originally inspired by Rails’ url_for functionality…). I came up with <a href="https://gist.github.com/2223161" target="_blank">SmartUrlHelper</a>. It provides a single method: <code>smart_url_for</code>, which is a wrapper around <code>url_for</code>. The difference is that you can register “handlers” which know how to resolve your edge cases.</p>
<p>To solve my example problem above, I’d add the following code to config/initializers/smart_urls.rb:</p>
<pre><code>SmartUrlHelper.configure do |url|
  url.for -&gt;model{ model.is_a?(Comment) } do |helpers, model|
    helpers.post_comment_path(model.post, model)
  end
end
</code></pre>
<p>Now I can call <code>smart_url_for(@post)</code> or <code>smart_url_for(@comment)</code> and get the expected URL. The <code>comment</code> is resolved by the special case handler, and the <code>post</code> just falls through to the default <code>url_for</code> call. Note that in this example, I use instance variables named @post and @comment, which implies I know the type of object stored in the variable. In that case, <code>smart_url_for</code> is just a convenience. However, consider a scenario where you have generic code that needs to build a URL for any model passed to it (like the form_for helper). In that case, something like <code>smart_url_for</code> is a necessity.</p>
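<p>The handler-dispatch idea is simple enough to sketch in plain Ruby. The following is only an illustration of the pattern (the class and method names here are made up; the actual implementation is in the linked gist):</p>

```ruby
# Illustrative sketch of predicate-based URL dispatch.
# Not the actual SmartUrlHelper source; names are hypothetical.
class UrlResolver
  def initialize
    @handlers = []
  end

  # Register a handler: a predicate lambda plus a block that builds the URL.
  def for(predicate, &block)
    @handlers << [predicate, block]
  end

  # Use the first handler whose predicate matches the model;
  # fall back to the default url_for when none does.
  def resolve(helpers, model)
    handler = @handlers.find { |predicate, _| predicate.call(model) }
    handler ? handler[1].call(helpers, model) : helpers.url_for(model)
  end
end
```

<p>The important property is that only the registration site knows about the nested route; every caller just hands over a model.</p>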
<h2 id="feedback">Feedback</h2>
<p>First, does Rails already have this functionality built-in, or is there an accepted solution in the community? If not, what do you think of this approach? I&#8217;d welcome suggestions for improvement. Particularly, I’m not wild about storing the handlers in the Rails.config object, but didn’t know a better way to separate the configuration step (config/initializer) from the consuming step (calls to <code>smart_url_for</code>). So far, it is working out well on my project.</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2012/03/27/a-smarter-rails-url_for-helper/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Powerfully simple persistence: MongoDB</title>
		<link>https://lostechies.com/joshuaflanagan/2012/02/06/easy-persistence-mongodb/</link>
		<comments>https://lostechies.com/joshuaflanagan/2012/02/06/easy-persistence-mongodb/#comments</comments>
		<pubDate>Mon, 06 Feb 2012 16:15:00 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[mongodb]]></category>
		<category><![CDATA[ruby]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2012/02/06/easy-persistence-mongodb/</guid>
		<description><![CDATA[In my post &#8220;Great time to be a developer&#8220;, I listed MongoDB as one of the tools that made my task (track travel times for a given route) easy. This post will show you how. What do I need to&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2012/02/06/easy-persistence-mongodb/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>In my post &#8220;<a href="http://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/" target="_blank">Great time to be a developer</a>&#8220;, I listed <a href="http://www.mongodb.org/" target="_blank">MongoDB</a> as one of the tools that made my task (track travel times for a given route) easy. This post will show you how.</p>
<h2>What do I need to store?</h2>
<p>My travel time data collection job needs the URL for the traffic data endpoint for each route that I&#8217;ll be tracking. I could have just hardcoded the URL in the script, but I knew that my co-workers would be interested in tracking their routes too, so it made sense to store the list of routes in the database.</p>
<p>I need to store the list of &#8216;trips&#8217;. I define a trip as the reported travel details for a given route and departure time (Josh&#8217;s route at 9am, Josh&#8217;s route at 9:10am, Tim&#8217;s route at 9:10am, etc.). I want to capture the date of each trip so that I can chart the trips for a given day, and compare day to day variation. Even though I really only need the total travel time for each trip, I want to capture the entire response from the traffic service (travel times, directions, traffic delay, etc.) so that I could add new visualizations in the future.</p>
<h2>Setup</h2>
<p>First, I had to install mongo on my laptop. I used the <a href="http://www.mongodb.org/downloads" target="_blank">homebrew</a> package manager, but <a href="http://www.mongodb.org/downloads" target="_blank">binary releases are readily available</a>.</p>
<pre>brew install mongodb
</pre>
<p>I need to add the route for my commute. I fire up the mongo console by typing <code>mongo</code>. I&#8217;m automatically connected to the default &#8216;test&#8217; database in my local MongoDB server. I add my route:</p>
<pre class="brush:js; gutter:false; tab-size:2;toolbar: false">&gt; db.routes.save({
  name: 'josh',
  url: 'http://theurlformyroute...'
})</pre>
<p>I verify the route was saved:</p>
<pre class="brush:js; gutter:false; wrap-lines:true; tab-size:2;toolbar: false">&gt; db.routes.find()
{"_id" : ObjectId("4f22434d47dd721cf842bdf6"),
 "name" : "josh",
 "url" : "http://theurlformyroute..." }
</pre>
<p>It is worth noting that I haven&#8217;t skipped any steps. I fired up the mongo console, ran the save command, and now I have the route in my database. I didn&#8217;t need to create a database, since the &#8216;test&#8217; database works for my needs. I didn&#8217;t need to define the routes collection &#8211; it was created as soon as I stored something in it. I didn&#8217;t need to define a schema for the data I&#8217;m storing, because there is no schema. I am now ready to run my data collection script.</p>
<h2>Save some data</h2>
<p>I&#8217;ll use the ruby MongoDB driver (<code>gem install mongo</code>) directly (you can also use something like mongoid or mongomapper for a higher-level abstraction). My update script needs to work with the URL for each route:</p>
<pre class="brush:ruby; gutter:false; wrap-lines:false; tab-size:2;toolbar: false">db = Mongo::Connection.new.db("test")
db["routes"].find({}, :fields =&gt; {"url" =&gt; 1}).each do |route|
  url = route["url"]
  # collect trip data for this route's url
end
</pre>
<p>I want to group related trips for a commute, so I create a &#8216;date_key&#8217; based on the current date/time. A date_key looks like: 2012-01-25_AM, 2012-01-25_PM, or 2012-01-26_AM. Now to store the details returned from the traffic service:</p>
<pre class="brush:ruby; gutter:false; wrap-lines:false; tab-size:2;toolbar: false">trip_details = TrafficSource.get(url)
db["routes"].update({"_id" =&gt; route["_id"]}, {
  "$addToSet" =&gt; {"trip_keys" =&gt; date_key},
  "$push" =&gt; {"trips.#{date_key}" =&gt; trip_details}
})
</pre>
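<p>Building the date_key itself is a one-liner. A helper along these lines (my sketch; the original script&#8217;s version isn&#8217;t shown) produces the keys used above:</p>

```ruby
# Build a grouping key like "2012-01-25_AM" from a timestamp.
# Sketch only: assumes AM before noon, PM otherwise.
def date_key(time = Time.now)
  suffix = time.hour < 12 ? "AM" : "PM"
  "#{time.strftime('%Y-%m-%d')}_#{suffix}"
end
```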
<p>After running for a couple days, this will result in a route document that looks something like:</p>
<pre class="brush:js; gutter:false; wrap-lines:false; tab-size:2;toolbar: false">{
  _id: 1234,
  name: 'josh',
  url: 'http://mytravelurl...',
  trip_keys: ['2012-01-25_AM', '2012-01-25_PM', '2012-01-26_AM',...],
  trips: {
    2012-01-25_AM: [{departure: '9:00', travelTime: 24, ...}, {departure: '9:10', travelTime: 26}, ...],
    2012-01-25_PM: [{departure: '9:00', travelTime: 28, ...}, {departure: '9:10', travelTime: 29}, ...],
    2012-01-26_AM: [{departure: '9:00', travelTime: 25, ...}, {departure: '9:10', travelTime: 25}, ...],
    ...
  }
}
</pre>
<p>That is <em>all</em> of the MongoDB-related code in the data collection script. I haven&#8217;t left out any steps &#8211; programmatic or administrative. None of the structure was defined ahead of time. I just <code>$push</code>ed some trip details into &#8216;trips.2012-01-25_AM&#8217; on the route. It automatically added an object to the &#8216;trips&#8217; field, with a &#8216;2012-01-25_AM&#8217; field, which holds an array of trip details. I also store a list of unique keys in the <code>trip_keys</code> field using <code>$addToSet</code> in the same <code>update</code> statement.</p>
<h2>Show the data</h2>
<p>The web page that charts the travel times makes a single call to MongoDB:</p>
<pre class="brush:ruby; gutter:false; wrap-lines:false; tab-size:2;toolbar: false;">route = db["routes"].find_one(
  {:name =&gt; 'josh'},
  :fields =&gt; {"trips" =&gt; 1}
)</pre>
<p>The entire trips field, containing all of the trips grouped by date_key, is now available in the ruby hash <code>route</code>. With a little help from ruby&#8217;s <a href="http://ruby-doc.org/core-1.9.3/Enumerable.html#method-i-map" target="_blank">Enumerable#map</a>, I transform the data into a format consumable by <a href="http://www.highcharts.com/" target="_blank">Highcharts JS</a>.</p>
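<p>With the document shape shown earlier, that transformation is a short <code>map</code>. A sketch (the <code>chart_series</code> helper name is mine; the field names follow the sample document above):</p>

```ruby
# Turn one day's trips into [departure, travelTime] pairs for a chart
# series. Illustrative sketch; field names follow the sample document.
def chart_series(route, date_key)
  route["trips"][date_key].map do |trip|
    [trip["departure"], trip["travelTime"]]
  end
end
```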
<h2>Production</h2>
<p>Just to be thorough, I&#8217;ll mention that I had to modify the script for production use. I replaced the <code>db</code> local variable with a method that uses the <a href="http://addons.heroku.com/mongolab" target="_blank">mongolab</a> connection when available, or falls back to the local test connection:</p>
<pre class="brush:ruby; gutter:false; wrap-lines:false; tab-size:2;toolbar: false">def db
  @db ||=
  begin
    mongolab_uri = ENV['MONGOLAB_URI']
    return Mongo::Connection.new.db("test") unless mongolab_uri
    uri = URI.parse(mongolab_uri)
    Mongo::Connection.from_uri(mongolab_uri).db(uri.path.gsub(/^\//, ''))
  end
end
</pre>
<h2>Conclusion</h2>
<p>A couple queries, a single, powerful update statement, and no administration or schema preparation. Paired with the <a href="http://api.mongodb.org/ruby/current/file.TUTORIAL.html" target="_blank">ruby driver</a>&#8217;s seamless mapping to native Hash objects, it is hard to imagine a simpler, equally powerful, persistence strategy for this type of project.</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2012/02/06/easy-persistence-mongodb/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Great time to be a developer</title>
		<link>https://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/</link>
		<comments>https://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/#comments</comments>
		<pubDate>Wed, 01 Feb 2012 05:09:58 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[heroku]]></category>
		<category><![CDATA[ruby]]></category>
		<category><![CDATA[sinatra]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/</guid>
		<description><![CDATA[I am in awe of the free tools available to software developers today. It is amazing how fast, and cheaply, you can turn an idea into productive code. I was so pumped by a recent experience, I decided to share.&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>I am in awe of the free tools available to software developers today. It is amazing how fast, and cheaply, you can turn an idea into productive code. I was so pumped by a recent experience, I decided to share.</p>
<h2>The Problem</h2>
<p>My employer is moving to a new location in a part of the city that I&#8217;m not very familiar with. I have no idea what the traffic patterns are like, and I&#8217;m wondering when to leave for work in the morning. I tried looking at various web mapping services. Some factor in <em>current</em> traffic, but I couldn&#8217;t find any that could tell me historical traffic information, making it impossible to make a decision about departure time in advance.</p>
<h2>Idea</h2>
<p>Tracking historical travel times for the world would be a huge task, but what if I could create a small history of my specific route? The TomTom Live Traffic Route Planner site can give me directions to work, and estimate travel time based on current traffic conditions. I discovered that it returns a lot of the trip information from a single AJAX call. A quick copy/paste of the URL to <code>curl</code> confirmed that I could repeat the call and get the same data. I just need to hit that endpoint at various times in the morning and store the results. Later, I&#8217;ll be able to analyze the results and determine the best time to leave for work.</p>
<h2>Keep it Minimal</h2>
<p>Now I have an idea, but I don&#8217;t know if it is worth pursuing. I don&#8217;t know if the data I&#8217;m getting is accurate (it worked once or twice from curl <em>now</em>, but maybe the long URL contains some session id that will expire?). I don&#8217;t want to invest a lot of time or money in building it out. Also, it&#8217;s 6pm Tuesday night, I only have 3 more days left this week before I make the commute to the new office. I need to start collecting data as soon as possible; hopefully Wednesday morning. It&#8217;s time to write some code. Fast.</p>
<p>This is where the quality of available tools really make an impact. To name a few:</p>
<blockquote><p><a href="http://www.ruby-lang.org" target="_blank">Ruby</a> &#8211; low ceremony scripting with a vast ecosystem of libraries to accomplish common tasks.</p>
<p><a href="https://github.com/jnunemaker/httparty" target="_blank">HTTParty</a> &#8211; crazy simple library to call a URL and get a Ruby hash of the response data &#8211; no parsing, no Net:HTTP.</p>
<p><a href="http://www.heroku.com/" target="_blank">Heroku</a> &#8211; There is no better option for hosting applications in the early proving stage. Create and deploy a new app in seconds, for free. The free <a href="http://addons.heroku.com/scheduler" target="_blank">Heroku Scheduler</a> add-on lets me run a script every 10 minutes in my hosted environment &#8212; exactly what I need for my data collection.</p>
<p><a href="http://www.mongodb.org/" target="_blank">MongoDB</a> &#8211; natural fit for persisting an array (the trips calculated every 10 minutes) of ruby hashes (responses from traffic service). No schema, no mapping, no fuss.</p>
<p><a href="http://addons.heroku.com/mongolab" target="_blank">MongoLabs</a> &#8211; free MongoDB hosting on Heroku. One click to add, and I have a connection string for my own 240MB in the cloud. Sweet.</p>
</blockquote>
<p>By 11pm Tuesday night, my script is running in the cloud, ready to start collecting data Wednesday morning. I&#8217;m not going to spend any time on building a UI until I know if the data collection works.</p>
<h2>Checkpoint</h2>
<p>On Wednesday night, I use the mongo console to review the trip data that was collected in the morning. I see that the trip duration changes for each request, which gives me hope that I&#8217;ll have meaningful data to answer my question. However, I also notice that the reported &#8220;traffic delay&#8221; time is always zero. I&#8217;m a little concerned that my data source isn&#8217;t reliable. I&#8217;m glad I haven&#8217;t invested too much yet. At this point, I can just write off the time as a well-spent refresher of MongoDB.</p>
<h2>Further exploration</h2>
<p>I&#8217;m still curious to see a visualization of the data. I decide to spend a couple hours to see if I can build a minimal UI to chart the departure vs. duration times. Again, the available tools gave me fantastic results with minimal effort:</p>
<blockquote><p><a href="http://www.sinatrarb.com/" target="_blank">sinatra</a> &#8211; an incredibly simple DSL for exposing your ruby code to the web. All I needed was a single endpoint that would pull data from mongo and dump it to the client to render a chart. Anything more than sinatra would be overkill, and anything less would be tedious.</p>
<p><a href="http://www.highcharts.com/" target="_blank">Highcharts JS</a> &#8211; amazing javascript library for generating slick client-side charts. A ton of options (including the very helpful <a href="http://www.highcharts.com/demo/spline-irregular-time" target="_blank">datetime x-axis</a>), well-documented, and free for non-commercial use. I didn&#8217;t have a &#8220;go-to&#8221; option for client-side charting, so I had to do a quick survey of what was available. This is the first one I tried, and it left me with absolutely no reason to look at others.</p>
</blockquote>
<p>After a couple hours (mostly to learn Highcharts), I have my chart and a potential answer (leave before 7:45, or after 9):<br /><a href="http://lostechies.com/joshuaflanagan/files/2012/01/traveltime_chart.png"><img style="background-image: none; border-right-width: 0px; margin: 0px 0px 24px; padding-left: 0px; padding-right: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px; padding-top: 0px" title="traveltime_chart" border="0" alt="traveltime_chart" src="http://lostechies.com/joshuaflanagan/files/2012/01/traveltime_chart_thumb.png" width="571" height="349"></a></p>
<h2>Conclusion</h2>
<p>I essentially spent one evening writing the data collection script, and another night building the web page that rendered the chart. I&#8217;ve proven to myself that the idea was sound, if not the data source. I will continue to poke around to see if I can find a more reliable API for travel times, but otherwise consider this project &#8220;done.&#8221; In years past, getting to this point would have meant a <em>lot</em> more effort on my part. It is awesome how much of the hard work was done by the people and companies that support a great developer ecosystem.</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2012/01/31/great-time-to-be-a-developer/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
		<item>
		<title>Coordinating multiple ajax requests with jquery.when</title>
		<link>https://lostechies.com/joshuaflanagan/2011/10/20/coordinating-multiple-ajax-requests-with-jquery-when/</link>
		<comments>https://lostechies.com/joshuaflanagan/2011/10/20/coordinating-multiple-ajax-requests-with-jquery-when/#comments</comments>
		<pubDate>Fri, 21 Oct 2011 03:17:33 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[jquery]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2011/10/20/coordinating-multiple-ajax-requests-with-jquery-when/</guid>
		<description><![CDATA[While building a rich JavaScript application, you may get in a situation where you need to make multiple ajax requests, and it doesn&#8217;t make sense to work with the results until after all of them have returned. For example, suppose&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2011/10/20/coordinating-multiple-ajax-requests-with-jquery-when/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>While building a rich JavaScript application, you may get in a situation where you need to make multiple ajax requests, and it doesn&#8217;t make sense to work with the results until after <em>all</em> of them have returned. For example, suppose you wanted to collect the tweets from three different users, and display the entire set sorted alphabetically (yes, its contrived). To get the tweets for a user via jQuery, you write code like: </p>
<div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:812469c5-0cb0-4c63-8c15-c81123a09de7:bbddfc1a-6180-4391-bcb6-b6b5299f83ba" class="wlWriterEditableSmartContent">
<pre name="code" class="js">$.get("http://twitter.com/status/user_timeline/SCREEN_NAME.json",
  function(tweets){
    // work with tweets
   },
  "jsonp");
</pre>
</div>
<p><em>(For the purposes of this example, I&#8217;m going to assume there is no way to get the tweets for multiple users in a single call)</em></p>
<p>To get the tweets for three users, you would need to make three separate <font face="Courier New">$.get</font> calls to the user_timeline endpoint. Since each call is executed asynchronously, with no guarantee which would return first, the code to coordinate the response for all three users would likely be a mess of shared state and/or nested callbacks.</p>
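<p>To make the pain concrete, here is a rough sketch of that shared-state approach (illustrative only; the <font face="Courier New">getTweets</font> stub stands in for any callback-style ajax call):</p>
<pre><code class="language-javascript javascript">// Simulated async call; stands in for $.get and friends
function getTweets(user, callback) {
  setTimeout(function () { callback([user + ' tweet']); }, 10);
}

// Hand-rolled coordination: a shared counter tracks outstanding
// requests; the final callback fires only when it reaches zero.
function collectTweets(users, onAllDone) {
  var results = [];
  var remaining = users.length;
  users.forEach(function (user, i) {
    getTweets(user, function (tweets) {
      results[i] = tweets;    // store by request order, not arrival order
      remaining -= 1;
      if (remaining === 0) {  // last response has arrived
        onAllDone(results);
      }
    });
  });
}

collectTweets(['austintexasgov', 'greenling_com', 'themomandpops'],
  function (allResults) {
    // allResults[0..2] each hold one user's tweets
  });
</code></pre>
<p>The counter and results array are manageable for three requests, but every request you add means more bookkeeping.</p>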
<p>As of jQuery 1.5, the solution is much simpler. Each of the ajax functions was changed to return a Deferred object which manages the callbacks for a call. (The Deferred object is beyond the scope of this post, but I encourage you to <a href="http://api.jquery.com/category/deferred-object/" target="_blank">read the documentation</a> for a more thorough explanation.) The power of Deferred objects becomes apparent when used with the new <font face="Courier New">jquery.when</font> utility function. <font face="Courier New">jquery.when</font> accepts any number of Deferred objects, and allows you to assign callbacks that will be invoked when all of the Deferred objects have completed. The Deferred objects returned by the ajax functions are complete when the responses are received. This may sound confusing, but it should be much clearer when you see it applied to the example scenario:</p>
<p><script src="https://gist.github.com/1302978.js"></script><noscript>
<pre><code class="language-javascript javascript">$.when( getTweets('austintexasgov'),
        getTweets('greenling_com'),
        getTweets('themomandpops')
      ).done(function(atxArgs, greenlingArgs, momandpopsArgs){
    var allTweets = [].concat(atxArgs[0]).concat(greenlingArgs[0]).concat(momandpopsArgs[0]);
    var sortedTweets = sortTweets(allTweets);
    showTweets(sortedTweets);
      });

function getTweets(user){
    var url='http://twitter.com/status/user_timeline/' + user + '.json';
    return $.get(url, {count:5}, null, 'jsonp');
}

// see it in action at http://jsfiddle.net/94PGy/4/</code></pre>
</noscript></p>
<ul>
<li>I have a helper method, <font face="Courier New">getTweets</font>, which returns the result of a call to $.get: a Deferred object representing that request to the twitter server.
<li>I call <font face="Courier New">$.when</font>, passing it the three Deferred objects from the three ajax calls.
<li>The<font face="Courier New"> done()</font> function is chained off of $.when to declare the code to run when all three ajax calls have completed successfully.
<li>The<font face="Courier New"> done()</font> function receives an argument for each of the ajax calls. Each argument holds an array of the arguments that would be passed to that ajax call&#8217;s <font face="Courier New">success</font> callback. The <font face="Courier New">$.get success</font> callback gets three arguments: <font face="Courier New">data</font>, <font face="Courier New">textStatus</font>, and <font face="Courier New">jqXHR</font>. Therefore, the <font face="Courier New">data</font> argument from the call for @greenling_com tweets is available in <font face="Courier New">greenlingArgs[0]</font>. Similarly, the <font face="Courier New">textStatus</font> argument for the call for @austintexasgov tweets would be in <font face="Courier New">atxArgs[1]</font>.
<li>The fifth line creates the <font face="Courier New">allTweets</font> array combining the tweets (the first, or <font face="Courier New">data</font>, argument) from all three calls to twitter.</li>
</ul>
<p>It is that last point that is interesting to me. I&#8217;m able to work with a single collection containing data from three separate ajax requests, without writing any awkward synchronization code.</p>
<p><a href="http://jsfiddle.net/94PGy/4/" target="_blank">Play with the example</a> on jsFiddle</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2011/10/20/coordinating-multiple-ajax-requests-with-jquery-when/feed/</wfw:commentRss>
		<slash:comments>13</slash:comments>
		</item>
		<item>
		<title>Run QUnit tests under Continuous Integration with NQUnit</title>
		<link>https://lostechies.com/joshuaflanagan/2011/08/09/run-qunit-tests-under-continuous-integration-with-nqunit/</link>
		<comments>https://lostechies.com/joshuaflanagan/2011/08/09/run-qunit-tests-under-continuous-integration-with-nqunit/#comments</comments>
		<pubDate>Tue, 09 Aug 2011 21:20:05 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[jquery]]></category>
		<category><![CDATA[qunit]]></category>
		<category><![CDATA[teamcity]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2011/08/09/run-qunit-tests-under-continuous-integration-with-nqunit/</guid>
		<description><![CDATA[Almost three years ago, I wrote about Running jQuery QUnit tests under Continuous Integration. As you can imagine, a lot has changed in three years. You would now have a lot of trouble following my post if you use the&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2011/08/09/run-qunit-tests-under-continuous-integration-with-nqunit/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>Almost three years ago, I wrote about <a href="http://lostechies.com/joshuaflanagan/2008/09/18/running-jquery-qunit-tests-under-continuous-integration/" target="_blank">Running jQuery QUnit tests under Continuous Integration</a>. As you can imagine, a lot has changed in three years. You would now have a lot of trouble following my post if you use the latest versions of WatiN and NUnit.</p>
<p>Fortunately, Robert Moore and Miguel Madero have already taken the code, fixed it to work with the latest libraries, and released it as <a href="https://github.com/robdmoore/NQUnit" target="_blank">NQUnit</a>. They&#8217;ve even packaged it up for easy consumption via Nuget:</p>
<pre class="brush: shell; gutter:false;wrap-lines:false;tab-size:2">PM&gt; Install-Package NQUnit</pre>
<p>My team recently dumped our implementation in favor of NQUnit. Thanks, and great job guys!</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2011/08/09/run-qunit-tests-under-continuous-integration-with-nqunit/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to use a tool installed by Nuget in your build scripts</title>
		<link>https://lostechies.com/joshuaflanagan/2011/06/24/how-to-use-a-tool-installed-by-nuget-in-your-build-scripts/</link>
		<comments>https://lostechies.com/joshuaflanagan/2011/06/24/how-to-use-a-tool-installed-by-nuget-in-your-build-scripts/#comments</comments>
		<pubDate>Fri, 24 Jun 2011 17:52:50 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2011/06/24/how-to-use-a-tool-installed-by-nuget-in-your-build-scripts/</guid>
		<description><![CDATA[My last post covered tips for people creating Nuget packages. This one is important for people consuming Nuget packages. Some Nuget packages include executables in their tools folder. It is very easy to use these tools within Visual Studio because&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2011/06/24/how-to-use-a-tool-installed-by-nuget-in-your-build-scripts/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[<p>My <a href="http://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/">last post</a> covered tips for people creating Nuget packages. This one is important for people consuming Nuget packages. </p>
<p>Some Nuget packages include executables in their tools folder. It is very easy to use these tools within Visual Studio because Nuget makes them available in the path of the Package Manager Console. However, they are very difficult to use outside of Visual Studio, especially in a build script. The problem is the name of the folder containing the installed package includes the version number of the package. If you install NUnit 2.5.1, the nunit-console.exe will be in packages\NUnit.2.5.1\tools. However, if you later upgrade to NUnit.2.5.2, the path to nunit-console.exe will change to packages\NUnit.2.5.2\tools. You will need to change your build scripts every time you upgrade your version of NUnit. That is unacceptable.</p>
<p>The solution is to create a helper that can figure out where the tool lives. If you are using rake for build automation, it is fairly straightforward:</p>
<p> <script src="https://gist.github.com/1044159.js?file=nuget_tool.rb"></script><noscript>
<pre><code class="language-ruby ruby"># Usage: nunit_path = tool &quot;NUnit&quot;, &quot;nunit-console.exe&quot;

# Assumes a package_root helper that returns your packages folder, e.g.:
def package_root
  File.join(File.dirname(__FILE__), &quot;source&quot;, &quot;packages&quot;)
end

def tool(package, tool)
  # Note: sort is lexicographic, so &quot;2.5.10&quot; sorts before &quot;2.5.9&quot;
  File.join(Dir.glob(File.join(package_root, &quot;#{package}.*&quot;)).sort.last, &quot;tools&quot;, tool)
end</code></pre>
</noscript></p>
<p>If not, you may want to create a batch file in the root of your project that calls your tool. You can create a tool-specific batch file:</p>
<p> <script src="https://gist.github.com/1044159.js?file=nunit.bat"></script><noscript>
<pre><code class="language-batchfile batchfile">@ECHO OFF
SETLOCAL
REM You might prefer specific scripts for each tool. Makes the usage a bit easier. This is an example for NUnit.

FOR /R %~dp0\source\packages %%G IN (nunit-console.exe) DO (
  IF EXIST %%G (
    SET TOOLPATH=%%G
    GOTO FOUND
  )
)
IF '%TOOLPATH%'=='' GOTO NOTFOUND

:FOUND
%TOOLPATH% %*
GOTO :EOF

:NOTFOUND
ECHO nunit-console not found.
EXIT /B 1</code></pre>
</noscript></p>
<p>Or, if you have lots of tools from different packages, you might just want a generic batch file that allows you to specify the executable name:</p>
<p> <script src="https://gist.github.com/1044159.js?file=nuget_tool.bat"></script><noscript>
<pre><code class="language-batchfile batchfile">@ECHO OFF
SETLOCAL
REM This can be used for any .exe installed by a nuget package
REM Example usage: nuget_tool.bat nunit-console.exe myproject.tests.dll
SET TOOL=%1
FOR /R %~dp0\source\packages %%G IN (%TOOL%) DO (
  IF EXIST %%G (
    SET TOOLPATH=%%G
    GOTO FOUND
  )
)
IF '%TOOLPATH%'=='' GOTO NOTFOUND

:FOUND
%TOOLPATH% %2 %3 %4 %5 %6 %7 %8 %9
GOTO :EOF

:NOTFOUND
ECHO %TOOL% not found.
EXIT /B 1</code></pre>
</noscript></p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2011/06/24/how-to-use-a-tool-installed-by-nuget-in-your-build-scripts/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Tips for building Nuget packages</title>
		<link>https://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/</link>
		<comments>https://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/#comments</comments>
		<pubDate>Fri, 24 Jun 2011 03:24:26 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/</guid>
		<description><![CDATA[&#160; I&#8217;ve spent a lot of time over the past couple weeks building and consuming Nuget packages on real projects. I&#8217;ve picked up a few tips that, while not mind-blowing, should help you get beyond the simple scenarios you see&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[
<p>I&#8217;ve spent a lot of time over the past couple weeks building and consuming Nuget packages on real projects. I&#8217;ve picked up a few tips that, while not mind-blowing, should help you get beyond the simple scenarios you see in most demos.&nbsp; I&#8217;ll refer to <a href="https://github.com/DarthFubuMVC/bottles/blob/6d82e063fd889ac1909c98adc369a97b4c1e377e/packaging/nuget/bottles.nuspec" target="_blank">an example .nuspec</a> from one of my projects, so you may want to keep it open in your browser.</p>
<h2>Do not store the version number in your nuspec file</h2>
<p>I assume you already had a build script before Nuget came along. You already have a way to generate and store the version of your build output. For example, I tend to store the first three components of the version in a VERSION.txt file in my code repository. It gets manually incremented when appropriate (see <a href="http://semver.org/" target="_blank">semver</a>). The fourth component is the build number generated by my continuous integration server. This information is already used to generate my AssemblyInfo.cs file, artifact package names, etc. It doesn&#8217;t make any sense for me to repeat that information in my .nuspec file, so set the version tag to 0.0.0 (see line 5) to remind yourself that it is not used. Use the -Version option of the nuget.exe <font face="Courier New">pack</font> command to provide the actual version.</p>
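<p>For example, the pack step in a build script might look like this (the nuspec name and version number are illustrative; the version would come from your VERSION.txt plus the CI build number):</p>
<pre class="brush: shell; gutter:false;wrap-lines:false;tab-size:2">nuget.exe pack bottles.nuspec -Version 0.9.1.204</pre>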
<h2>Prefer the &lt;files&gt; element approach to adding files to your package</h2>
<p>Again, you already have a build script, and it likely copies all of your build output to a known location. It doesn&#8217;t make sense to copy the files around even more to build the structure required by Nuget&#8217;s &#8220;conventional folders&#8221; approach. Instead, use the &lt;files&gt; section of your .nuspec (see line 20) to refer to the files in your build output folder. The powerful wildcard support makes it easy to include everything you want, including source code, as suggested by the next tip&#8230;</p>
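<p>A minimal &lt;files&gt; section along these lines (the paths are illustrative) points straight at your build output:</p>
<pre><code class="language-xml xml">&lt;files&gt;
  &lt;file src=&quot;..\..\build\*.dll&quot; target=&quot;lib\net40&quot; /&gt;
  &lt;file src=&quot;..\..\build\*.pdb&quot; target=&quot;lib\net40&quot; /&gt;
  &lt;file src=&quot;..\..\src\**\*.cs&quot; target=&quot;src&quot; /&gt;
&lt;/files&gt;</code></pre>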
<h2>Publish symbols packages for your Nugets</h2>
<p>Nuget has integrated support for building and publishing symbol packages to <a href="http://www.symbolsource.org/" target="_blank">symbolsource.org</a>. These symbol packages will greatly enhance the debugging experience for anyone who uses your library, and since it&#8217;s so easy, there is no reason not to publish them (assuming your library is open source).</p>
<ol>
<li>Include .pdb files for the files in the package&#8217;s lib folder (see line 22)
<li>Include all of the corresponding source files in the package&#8217;s src folder. Using the filespec wildcard support, this can be one line (see line 23).
<li>Pass the -Symbols flag to the <font face="Courier New">nuget.exe pack</font> command when building your package. It will create two files: yourpackage.nupkg and yourpackage.symbols.nupkg.
<li>(optional, but recommended) Register at symbolsource.org and associate your Nuget.org API key with your account.
<li>A single call to <font face="Courier New">nuget.exe push yourpackage.nupkg</font> will upload yourpackage.nupkg to nuget.org and yourpackage.symbols.nupkg to symbolsource.org. </li>
</ol>
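<p>Putting those steps together, a pack-and-push sequence might look like this (file names and version are illustrative):</p>
<pre class="brush: shell; gutter:false;wrap-lines:false;tab-size:2">nuget.exe pack bottles.nuspec -Version 0.9.1.204 -Symbols
nuget.exe push bottles.0.9.1.204.nupkg</pre>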
<p>SymbolSource has good instructions on how package consumers can <a href="http://www.symbolsource.org/Public/Home/VisualStudio" target="_blank">configure Visual Studio to use the symbol packages</a>.</p>
<h2>Provide multiple packages that target different usage scenarios</h2>
<p>A single source code repository may produce many build artifacts, but if they aren&#8217;t always used together, they shouldn&#8217;t be distributed via Nuget together. Instead, create a single Nuget package that targets each scenario. A perfect example of what <em>not</em> to do is the NUnit.2.5.10.11092 package (I don&#8217;t mean to pick on NUnit, it was probably created quickly by a volunteer just to get it out there. That&#8217;s cool, I get it. If anyone from the NUnit team wants my help fixing the packages, just contact me). When you install NUnit via Nuget, you get nunit.framework.dll, nunit.mocks.dll and pnunit.framework.dll added to your project. I would guess that 90% of users just want nunit.framework.dll. Similarly, the tools folder contains a number of executables, but I bet most people just want nunit-console.exe. I would break this up into an NUnit package (nunit.framework.dll in lib and all files needed to run nunit-console.exe in tools), an NUnit.Mocks package (nunit.mocks.dll in lib and has the NUnit package as a dependency), and an NUnit.Parallel package (all of the pnunit files with a dependency on NUnit if needed). The bottles.nuspec in my example is just <a href="https://github.com/DarthFubuMVC/bottles/tree/d94241ff8ee4582428e73a3e3324a9b5c875f30b/packaging/nuget" target="_blank">one of many packages</a> produced by the bottles source code repository.</p>
<h2><strong>Create <em>core</em> packages that work outside of Visual Studio</strong></h2>
<p>The powershell scripts and web.config modifications are cool, but if they are not absolutely necessary to use your library, provide a &#8220;core&#8221; package that does not include them. You can always provide a &#8220;quickstart&#8221; package with these extras, and have it depend on your core package to bring in the meat.</p>
<h2>Automate building and publishing your packages</h2>
<p>If you follow my previous two tips, you&#8217;ll have multiple .nuspec files to deal with. Fortunately, it is easy to add a single step to your build process to create all of your packages. I have a <a href="https://gist.github.com/1044131" target="_blank">gist with a couple examples</a> to get you started (feel free to fork to improve or add more examples). It should be fairly straightforward to make new scripts to publish your packages based on those examples.</p>
<p>I feel pretty strongly about only pushing &#8220;public CI builds&#8221; to the official Nuget feed, for traceability. This makes the publishing step a little more complex, but nothing that can&#8217;t be handled in a rake script. <a href="https://gist.github.com/1044142" target="_blank">I&#8217;ve got an example</a> that downloads the .nupkg files built on <a href="http://teamcity.codebetter.com">http://teamcity.codebetter.com</a> and then publishes them.</p>
<h2>Create an icon for your package</h2>
<p>The icon will show up on nuget.org and in the Visual Studio Nuget dialog. Not many people do it (right now), so your packages will stand out. Let GitHub serve your icon file &#8211; just browse to the icon in your repo on GitHub, and then copy the link for the &#8220;raw&#8221; url and include it as your &lt;iconUrl&gt; metadata. Might as well do the same for your &lt;licenseUrl&gt;.</p>
<h2>Install <a href="http://nuget.codeplex.com/releases/view/59864" target="_blank">Nuget Package Explorer</a></h2>
<p>This probably should have been the first tip, it&#8217;s that useful. It makes it much easier to view the contents and metadata in your local .nupkg files, and to download packages from nuget.org. Invaluable when designing your packages. It even lets you edit your packages, though I&#8217;ve never tried that.</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>An opportunity for a viable .NET open source ecosystem</title>
		<link>https://lostechies.com/joshuaflanagan/2011/05/27/an-opportunity-for-a-viable-net-open-source-ecosystem/</link>
		<comments>https://lostechies.com/joshuaflanagan/2011/05/27/an-opportunity-for-a-viable-net-open-source-ecosystem/#comments</comments>
		<pubDate>Fri, 27 May 2011 06:23:57 +0000</pubDate>
		<dc:creator>Joshua Flanagan</dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://lostechies.com/joshuaflanagan/2011/05/27/an-opportunity-for-a-viable-net-open-source-ecosystem/</guid>
		<description><![CDATA[I recently started getting to know Microsoft&#8217;s Nuget package management tool. I&#8217;ll admit I was going into it expecting to be disappointed, still annoyed that it had effectively killed the nu project and marginalized OpenWrap &#8211; two projects in the&#160;&#8230; <a href="https://lostechies.com/joshuaflanagan/2011/05/27/an-opportunity-for-a-viable-net-open-source-ecosystem/">Continue&#160;reading&#160;<span class="meta-nav">&#8594;</span></a>]]></description>
			<content:encoded><![CDATA[
<p>I recently started getting to know Microsoft&#8217;s <a href="http://www.nuget.org/" target="_blank">Nuget</a> package management tool. I&#8217;ll admit I was going into it expecting to be disappointed, still annoyed that it had effectively killed the <a href="http://codebetter.com/drusellers/2010/07/17/nu/" target="_blank">nu</a> project and marginalized <a href="http://www.openwrap.org/" target="_blank">OpenWrap</a> &#8211; two projects in the same problem space, but from the community, that I thought had promise. However, after an hour or so of playing with it &#8211; pulling packages into a project, authoring my own package &#8211; I came away impressed. It worked and it was easy.</p>
<p>Of course, it wasn&#8217;t too long before I ran into a pain point. I wanted to create a nuget package as part of my continuous integration build. A nuget package needs a version number that increases with every release. So it stands to reason that every time you build the package, you are going to give it a new version number. The problem was that the version number could only be specified in the .nuspec file. The .nuspec file describes how to build the package, mostly static information that should be checked into source control. The idea that I would have to modify this file every time I built seemed absurd (because it was). I had found it! A reason to complain about nuget! Microsoft just doesn&#8217;t get it! Wait until twitter hears about this!</p>
<h2>A New Hope</h2>
<p>And then I remembered there was something very <em>different</em> about the nuget project: while it is primarily maintained by Microsoft employees, it accepts contributions from the community. Suddenly, the barbs I was about to throw wouldn&#8217;t apply just to the big bully Microsoft, but my own .NET developer community. As a core member of many open source projects over the years, I know how limited resources are, and that feature prioritization is usually based on the needs of the people willing to do the work. If I thought it was important, it was time to step up. So <a href="http://nuget.codeplex.com/workitem/754" target="_blank">I logged an issue</a>. It got a few nods of approval, so <a href="http://nuget.codeplex.com/SourceControl/changeset/changes/3e9aa42bbdd2" target="_blank">I submitted a pull request</a>. And then the strangest, most wonderful, the big-ship-is-turning thing happened: they accepted it. Within two days of discovering this twitter-rant worthy, how-could-they-not-think-of-this-scenario, nuget-is-doomed problem, it was fixed. Huh.</p>
<p>Fast forward a few weeks. Old prejudices die hard. I&#8217;m trying to take advantage of the very cool <a href="http://www.symbolsource.org/" target="_blank">symbolsource.org</a> integration (I think the lack of an approachable public symbol server until now has been as big a hole as not having a package manager). It becomes painfully clear that the nuget support for symbol package creation is only viable for the simplest scenarios. I was annoyed and exhausted from a long night of fighting the tool, so I was a little quicker on the twitter trigger (although still much tamer than I felt):</p>
<p><a href="http://lostechies.com/joshuaflanagan/files/2011/05/image.png"><img style="background-image: none; border-bottom: 0px; border-left: 0px; margin: 0px 0px 8px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="image" border="0" alt="image" src="http://lostechies.com/joshuaflanagan/files/2011/05/image_thumb.png" width="324" height="109"></a></p>
<p>Then, remembering my last experience, I figured I would at least <a href="http://nuget.codeplex.com/discussions/258338" target="_blank">start a discussion</a> before giving up for the night. To my surprise, the next day it was <a href="http://nuget.codeplex.com/workitem/1089" target="_blank">turned into an issue</a> &#8211; this isn&#8217;t just another <a href="http://ayende.com/blog/2667/how-to-kill-the-community-feedback-or-the-uselessness-of-microsoft-connect" target="_blank">Microsoft Connect black hole</a>. After hashing out a few details, I went to work on a solution and submitted a <a href="http://nuget.codeplex.com/SourceControl/changeset/changes/2e7df0e9ae42" target="_blank">pull request</a>. It was accepted within a few days. Aha! This is open source. This is how it&#8217;s supposed to work. This works.</p>
<p>The Nuget project is the most exciting thing to come out of Microsoft DevDiv since the .NET Framework was released. I think it is very important to the .NET open source community that it succeeds. Not just for the obvious benefit of simplifying distribution for all of the non-Microsoft open source solutions. But for the potential it has to change minds internally at Microsoft about how community collaboration can work. And for the potential it has to change minds in the <em>community</em>, about how collaboration with Microsoft can work.</p>
<h2>Epilogue: The alternative</h2>
<p>Ever read the What If&#8230;? comic books? <a href="http://en.wikipedia.org/wiki/What_If_(comics)" target="_blank">From wikipedia</a>:</p>
<blockquote><p><i>What If</i> stories usually began with [the narrator] briefly recapping a notable event in the mainstream Marvel Universe, then indicating a particular point of divergence in that event. He would then demonstrate, by way of looking into a parallel reality, what could have happened if events had taken a different course from that point.</p>
</blockquote>
<p>I&#8217;ve been thinking a lot lately about &#8220;What if&#8230; Microsoft had launched ASP.NET MVC as a collaborative project that accepted community contributions the same way that Nuget does?&#8221;</p>
<p>Would <a href="http://fubumvc.com/" target="_blank">FubuMVC</a> exist? I&#8217;m not sure it would, and I&#8217;m not sure that would be a bad thing. We, the FubuMVC team, started out building a new product on ASP.NET MVC, adopting Microsoft&#8217;s framework before it even went 1.0. We inevitably ran into some pain points that we had to work around. Of course, the workarounds had to be implemented in our application codebase because Microsoft did not want them and/or could not take them. As we built up more and more workarounds, we relied less and less on the out-of-box functionality. It starts with a custom action invoker. Which leads to custom model binding. Then custom HTML helpers. Route creation and URL resolution. After a year and a half of this, it became clear we were building our own framework. During a few days of a long Christmas vacation, <a href="http://lostechies.com/chadmyers/" target="_blank">Chad Myers</a> went the last mile and extracted our changes into a google code project that eventually became FubuMVC.</p>
<p>Now consider what might have happened if Microsoft accepted contributions to MVC. Remember, we didn&#8217;t decide from day 1 to write a new framework &#8211; we had a product to build. As each issue came up, we could have submitted it as a patch. Each patch on its own wouldn&#8217;t have been a revolutionary change. But the motivations behind them may have started to sneak into the consciousness of everyone working on the codebase. You see, we, and anyone else that may have contributed, had a major advantage over the Microsoft contributors: we were building real applications with this framework.</p>
<p>Now, I don&#8217;t think the end result would have been a framework with all of FubuMVC&#8217;s traits (requiring an inversion of control container would have been DOA). But certainly some of the core ideals (composition over inheritance, leveraging the strengths of the type system) could have snuck in. And instead of going our own way, we could have benefited from all of Microsoft&#8217;s work: Visual Studio tooling, razor view engine support, documentation, and an overall thoroughness you get from paid employees working on it full time. And of course it&#8217;s not just us &#8211; ideas and code from projects like <a href="https://github.com/openrasta/openrasta-stable/wiki" target="_blank">OpenRasta</a>, <a href="https://github.com/NancyFx/Nancy" target="_blank">NancyFX</a>, and <a href="http://owin.org/" target="_blank">OWIN</a> could have snuck in too.</p>
<p>I think that was a major missed opportunity. The Nuget project is proving there is a better way.</p>
]]></content:encoded>
			<wfw:commentRss>https://lostechies.com/joshuaflanagan/2011/05/27/an-opportunity-for-a-viable-net-open-source-ecosystem/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
