The Blog of Joe Doyle: Notes of a software developer

<![CDATA[The Next Next Adventure]]> Mon, 29 Dec 2014 08:00:00 GMT After a stretch of radio silence, I'm ready to get back into writing again. We're settled in after a very busy second half of the year.

The JavaScript Capital of the World

We wanted to live in San Francisco but the reality of that just wasn't possible. We did end up in the next best place, Oakland! As you can imagine, moving your family 2000 miles isn't the easiest thing to do. But, we made it and have been settling in. It's been a blast getting to know Oakland. I've really enjoyed being in the heart of the JavaScript community. There are quite a few prominent JavaScript & Node developers that call it home. It's inspiring to be local to so many creative and intelligent folks. And the community is welcoming as I learned by going to JSFest.

Another New Year and Another New Job

At the beginning of December I joined App Press as an employee. I had been working with them as a freelance developer since April under my own company, Doyle Software. The App Press team is great and I think we have a great product. While I'm sad to see the end of Doyle Software, I'm very excited about being a part of App Press. If you're looking for a way to design and deliver beautiful, native apps, be sure to check us out!

I'm responsible for the server development and infrastructure, so expect to see a lot more posts about Node & Angular, nginx, AWS, Docker, and other DevOps-related topics.

<![CDATA[The Next Adventure]]> Tue, 15 Apr 2014 07:00:00 GMT The past 4 years here in Indianapolis have been wonderful and a time of growth for me. I have many happy memories as I look back at our time here. But, it is time for us to move on. This summer my family and I will be moving to San Francisco.

Goodbye Pinnacle of Indiana

Sadly this means that it's time to leave my job at Pinnacle of Indiana. I've really enjoyed the last few years working there. They are a great development team and I will miss working with them.

If you are looking for a team to help with your next .NET project, be sure to give them a call.

Indianapolis Meetups

One of the best choices I made was to go to the JavaScript and Node meetups. Indy.js has been a great source of information on various topics, as well as a great networking group to meet developers from around the city. I've even been lucky enough to present there a few times. Node.Indy has grown from a handful of people to a well-attended meetup. The presentations have ranged from high-speed web scraping to opening garage doors via Arduinos to websockets and WebRTC.

If you are in the Indianapolis area, I highly recommend both of these meetups.

Hello Doyle Software

While we're still in Indianapolis, I'm going to be doing freelance work under my own company, Doyle Software. I currently have work lined up, but if you have a project you're looking for help with, let me know and I'll see if it's a fit for me. My focus is on projects where Node.js and Angular.js make sense to provide an interactive and efficient solution, but I'm not against doing some small, short-term .NET projects as well.

San Francisco!

We're both excited about moving to San Francisco. We're at a point in our lives where we get to choose anywhere we want to live, so why not go somewhere warm. My wife has secured a great job in downtown doing what she loves. I'm not exactly sure what I want to do next, but I'm sure I can find it in the Bay Area. It's also not a bad place to be as a software developer interested in Node!

I'll still be around in Indianapolis until mid-summer if you want to chat or get together!

<![CDATA[Minification and Bundling in ASP.NET MVC]]> Fri, 11 Oct 2013 04:00:00 GMT Show MVC 4

How to use in MVC 3

Show gotchas with .debug and .min files

<![CDATA[Using Karma for JavaScript Testing]]> Mon, 12 Aug 2013 07:00:00 GMT Getting the tooling to do TDD with JavaScript code has been something that I've been struggling with for the last year. There have been lots of tools that can handle one aspect or another of JavaScript testing, but nothing was a complete solution. I thought our needs would be fairly common since we're using pretty standard tool sets.

I wanted:

  • Ability to run JS tests automatically or with a simple key command within Visual Studio (a la ReSharper)
  • The ability to use wildcards for our source and test files. Listing each file out is too painful on a large project.
  • TeamCity integration just like we have for our C# unit tests
  • Code coverage generation, preferably that could also hook into TeamCity

Some Nice To Haves:

  • Allow devs to generate code coverage locally
  • A configuration that could be checked into our source control

I'm finally happy with the setup we're using now. We've set up Karma, which fits our needs and hits just about every point we wanted.

Our Setup

Here's a bit more detail on what we're using and testing against.

Our JS code is mostly using Knockout.js. We try to keep jQuery use to a minimum, and keep it out of our ViewModels completely, with the exception of $.ajax. Knockout makes it very easy to test our client side logic because there is no reliance on the DOM.

On the testing side we use QUnit mainly because it is very close to NUnit which is our testing framework on the C# side of things. We've recently introduced Sinon.js for our mocking/spies/stubbing framework. We had been using one I wrote, but Sinon is just so much better.

A Brief History of Testing

When we started with JavaScript testing, we just used a web page setup from the QUnit tutorials. That was fine for local testing, but didn't work with TeamCity. It didn't take long to get PhantomJS set up and have our tests run in TeamCity that way.

To get code coverage working, we found the YUI-Coverage tool. It's a Java app that instruments your code then parses the output created when the tests run. It worked but was a pain to maintain. Since the files were modified when they were instrumented, we had to make sure we saved off a copy of the originals otherwise we'd see coverage percentages like 56000%. It has no issue instrumenting an already instrumented file for bonus coverage fun.

We were able to get this setup working, but it wasn't quite where we wanted it to be.

Enter Angular.js & Karma

I had seen the limits of Knockout when it came to the very complicated Single Page Apps (SPAs) that we had worked on. Knockout worked, but the code was not as clean and clear as I would have liked it to be. I started reading about Angular.js and its approach as a client-side framework. I came across the test framework that the Angular team had created. At the time it had a rather unfortunate name (which has since been corrected), but it appeared to be everything we were looking for.

Karma is a command line tool that runs in Node.js. It takes a more modern approach to testing by being just a modular test runner. It supports all the major testing libraries, including QUnit. It also has a code coverage module which runs Istanbul, also by the YUI team. Istanbul uses another library called Esprima, which allows the instrumentation to be done in memory, saving us the step of saving off the originals.

How it works is actually really cool. You configure Karma with your source and test files and tell it how you want the results reported back to you. There are a variety of reporters; we just use the progress one. You also tell Karma which browsers you would like your tests run in. It defaults to Chrome, but supports the major browsers and PhantomJS. You can configure as many as you like and have your tests run on each concurrently.

Karma hosts the web server itself and uses websockets to establish a connection to the testing browser. When you update your files, it re-sends them to the browsers and re-runs your tests. This provides instant and automatic feedback. Exactly what we want for doing TDD.

As of 0.10, Karma is plugin-based. The team did a good job of breaking the existing functionality out into Node modules, and the community has filled the gaps. The TeamCity reporter works great, so we're still covered there.

Karma on Windows

For the most part, getting Karma to work on Windows was painless. We're using Node 0.10.15 and all of the Node modules that are used compile just fine. We did run into an issue with how the location of the Chrome and Firefox executables is determined, but I have already submitted pull requests to correct that (Chrome Reporter, Firefox Reporter).

We have two Karma config files set up: one for local development that runs after files are saved, and another for TeamCity with code coverage enabled. This allows us to see the coverage without having to check in, which is actually pretty nice.

My Contribution to the Karma Community

As I was learning how to get Karma going I didn't like how I had to keep the console window visible to know if my tests failed. I wanted to hear that my tests failed.

Introducing Karma-Beep-Reporter. It's a simple reporter that outputs the ASCII character 0x07 (Bell) when you have a failed test or your tests fail to run altogether. It's meant to run alongside one of the other reporters since it only beeps. I've only tested it on Windows so far, but it works great. I welcome comments and feedback!
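The core of the idea is tiny: write the bell character to stdout whenever a result fails. A stripped-down sketch (the real plugin registers through Karma's reporter API; onSpecComplete is the hook Karma calls after each spec, and the beeps counter here is just to make the behavior observable):

```javascript
// A stripped-down sketch of the beep-on-failure idea, not the actual
// plugin source. The real karma-beep-reporter wires this constructor
// into Karma's plugin system.
var BeepReporter = function() {
  this.beeps = 0; // counter so the behavior can be observed in tests

  // Karma calls onSpecComplete(browser, result) after each spec runs.
  this.onSpecComplete = function(browser, result) {
    if (!result.success) {
      process.stdout.write('\x07'); // ASCII 0x07: the bell character
      this.beeps++;
    }
  };
};
```

Because it only emits the bell, it composes cleanly with the progress reporter: you still read the normal output, but a failure is audible even when the console window is hidden.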

<![CDATA[Getting started with Node.js and Nginx]]> Mon, 22 Jul 2013 04:00:00 GMT I've started to move on to the next phase of learning about Node.js. I have a few sites created and for the most part IISNode has done a good job allowing me to run within IIS. Enabling output and kernel level caching gives a nice boost to performance as well. While this is all well and good, it's not how Node.js is generally run in production scenarios. I decided it was time to learn about hosting Node.js sites on Linux behind nginx.

#### The Goal

Here's what I want to accomplish.

  1. Get a Linux VM setup; Ubuntu 13.04 x64
  2. Install Node.js & nginx
  3. Configure nginx to proxy my site with caching enabled for static files
  4. Setup my site to start when the server boots

#### Installing Linux

There's not much exciting here. Just a vanilla Ubuntu server install. I made sure I had OpenSSH installed so I could manage it remotely. I've done this part before.

I am not an experienced Linux administrator. I can get around and do some basics, but Linux is undiscovered country for me. The steps below are what I've been able to scrape together off the internet. It worked for me. If there's something I did wrong or there's a better way, I'd love to hear about it!

#### Installing Node.js & nginx

A little Google magic points out that while Ubuntu has a Node.js package, it's not maintained or up to date. The Node repo has a nice GitHub wiki page covering the steps you need to follow to add a reference to the up-to-date package.

sudo apt-get update
sudo apt-get install python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

This worked like a charm. Now I have Node v0.10.13 running.

I followed a similar process with nginx. They have straightforward documentation for each of the main Linux distros.

The first step is to install the nginx public key. I downloaded it to the server, then ran this command:

sudo apt-key add nginx_signing.key

Next I added these two lines to the end of /etc/apt/sources.list

deb http://nginx.org/packages/ubuntu/ raring nginx
deb-src http://nginx.org/packages/ubuntu/ raring nginx

Now I'm ready to install.

apt-get update
apt-get install nginx

Success! nginx installed.

#### Configure nginx

This is where things got fun. I found a good post on Stack Overflow with an answer that looked like what I needed, so I started at the top and went to create a new file in /etc/nginx/sites-available. Only, I didn't have a sites-available directory. Did I miss a step?

Again, Stack Overflow to the rescue! It turns out that the sites-available/sites-enabled setup is part of the Ubuntu-maintained package, not the main package from the nginx folks. I like the concept of the sites-available/sites-enabled setup, so I decided to implement it. I created the directories, edited the /etc/nginx/nginx.conf file, restarted nginx (sudo service nginx restart), and then went back to getting the site set up.

I used an article I found on the ARG! Team Blog, Hardening Node.js For Production, which looked like just what I wanted. Instead of putting the server configuration directly in nginx.conf, I put mine in the sites-available directory and created a symbolic link to it in the sites-enabled directory. For those that want to see the command:

cd /etc/nginx/sites-enabled
sudo ln -s /etc/nginx/sites-available/test.conf test.conf

Here's the test.conf file:

upstream testsite {
    server 127.0.0.1:3500;
}

server {
    listen 80;
    access_log /var/log/nginx/test.log;

    location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|favicon.ico) {
        root /home/joe/testsite/public;
        access_log off;
        expires max;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://testsite;
        proxy_redirect off;
    }
}
The article from the ARG! Team Blog goes into detail about what's going on in this file. Here are the highlights:

Lines 1-3:
This defines where my Node.js site is. In my case, it's on the same machine on port 3500. This can be another server, or multiple servers to round-robin against.

Lines 9-13:
This defines where the static content is that nginx should serve instead of Node.js. Notice that it points to my public directory inside my site.

Lines 15-23:
This defines the root of the site that nginx should proxy for. We add a bunch of headers to tell Node.js/Express that there's a proxy in front of it.

Line 21:
The URL here isn't the URL used to access the site. Instead, it refers back to line 1, naming the upstream block of backend servers to send requests to.

#### Time to test it!

After I got all this set up, I started up my site. I opened it up in the browser and...

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to
Commercial support is available at

Thank you for using nginx.

Not quite what I was expecting. At least I know nginx is running. But what went wrong? I rechecked everything and I thought it looked right. Then I remembered the instructions for enabling the sites-available/sites-enabled setup. I had added this line as directed:

include /etc/nginx/sites-enabled/*;

What I missed was to remove the line that was already there:

include /etc/nginx/conf.d/*.conf;

I commented it out by putting a # in front of it and restarted nginx again. When I tested this time, success!

Here's my final nginx.conf after adding the rest of the parts from the ARG! Team blog:

user  nginx;
worker_processes  4;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}
http {
    proxy_cache_path  /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
    proxy_temp_path /var/tmp;
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Ok, time for the last step.

#### Start the site when the server boots

I'm used to Windows services, which are compiled programs. Ubuntu has Upstart, which is a nicer script-driven system. It looks like that's the modern approach for what I want to do.

I'm not running a module to restart Node if it goes down! This is just a test. When I move a real production site behind nginx I will use a module like Forever.

I started with this StackOverflow post and couldn't get it to work. I did more searching and ran across the Upstart Cookbook which helped to explain what I was even trying to do, and then I found this post about Node.js and the Forever module. The example they gave was much simpler.

To create an Upstart script create a file in /etc/init. I called mine test.conf for simplicity. Here's what I ended up with in the file:


description "Test Node.js Site"

env FULL_PATH="/home/joe/testsite"
env FILE_NAME="app.js"

start on startup
stop on shutdown

script
    exec node $FULL_PATH/$FILE_NAME > /home/joe/testsite/test.log
end script

I start it up with:

sudo start test

And the site is live!

I reboot the server and... the site is down. Hmm. Back to the Google.

This time it's Ask Ubuntu (a Stack Exchange network site) which has a perfectly named post: Why isn't my upstart service starting on system boot? It led me to try changing my start event from start on startup to:

start on net-device-up IFACE=eth0

I reboot once again... and the site is up!

#### What next?

Now that I have a basic site set up, I want to play around with moving a few other sites onto this server and off of IIS. Since I still have sites that I want to keep on IIS, I'm also planning on having nginx proxy for those as well. If things go well, I'll probably move this site over too.

<![CDATA[A Pattern for Connecting to MongoDB in an Express App]]> Tue, 25 Jun 2013 04:00:00 GMT A common question I've seen on StackOverflow asks for the best way to open a connection to MongoDB when starting up your Express app. Folks generally don't care for just putting all of the Express setup in the callback of the MongoDB connect call, but it seems to be the generally accepted approach. I didn't like it either and felt that there must be a better way. Here's what I came up with.

The Callbacks

You can't really escape the callbacks when dealing with the native MongoDB driver; pretty much every call expects one. The way I deal with that is by using promises via the Q library. Q goes beyond just providing a way to use promises by also providing helper functions for wrapping existing Node.js APIs that use the standard callback pattern of function(err, result).

Promises are a deep topic themselves, so I won't go into them in detail here. Just know that they can help turn the callback "Pyramid of Doom" or "Callback Christmas Tree" into a chained series of function calls, which greatly improves the readability of your code. Google can hook you up if you want to know more.

The Database object

The first step that made the most sense when I started using MongoDB in Node.js was to create my data access object. It's used for creating the connection, holding the references to the collections used, and the methods that perform the specific actions against MongoDB.

So here's what my Database object looks like:

var Q = require('q'),
    MongoClient = require('mongodb').MongoClient,
    ObjectId = require('mongodb').ObjectID,
    Server = require('mongodb').Server,
    ReplSet = require('mongodb').ReplSet,
    _ = require('underscore');

var Database = function(server, database) {
    this.server = server;
    this.database = database;
};

Database.prototype.connect = function(collections) {
    var self = this;
    var connectionString = "mongodb://" + this.server + "/" + this.database + '?replicaSet=cluster&readPreference=secondaryPreferred';
    return Q.nfcall(MongoClient.connect, connectionString)
        .then(function(db) {
            _.each(collections, function(collection) {
                self[collection] = db.collection(collection);
            });
            return db;
        });
};

Database.prototype.findDocs = function(term) {
    return this.mydocs.find({ Title: term }).stream();
};

Database.prototype.saveDoc = function(postData) {
    // Matching on the document's id; assumes postData carries an id property.
    return Q.npost(this.mydocs, "update", [{ id: postData.id }, postData, { w: 1, upsert: true }]);
};

module.exports = Database;

So what's going on here? For the most part, nothing very exciting. We take in the server(s) and database we want to connect to in the constructor. The first interesting part starts on line 16: Q.nfcall is our Q helper function wrapping MongoClient.connect and giving us back a promise. We chain a then() function which is called after the connection to MongoDB is made. That function receives the connected db object, from which we save a reference to each collection we want to use in our app. We then return the db object from the function so we can keep passing it along. The end result, the chain of our two functions, is still a promise which is returned back to the caller.

Just to show a little more detail, you can also see the Q library in use for performing an upsert when we want to save a new document. Again, a promise is returned, which means we don't need to use a callback. The findDocs function also shows that find can utilize streams instead of a callback. I hope that feature gets spread around more!

The Express App Configuration

For the Express configuration, I decided to keep most of it wrapped in a function. Most of it could be pulled out and just run before we initialize the database, but I like that it's wrapped up, personally.

So here's what our app.js looks like:

var database = new Database(settings.databaseServers, settings.database);

function startServer(db) {
    app.set('port', process.env.PORT || 3000);
    app.set('views', __dirname + '/views');
    // The rest of the setup is excluded for brevity...
    console.log('Connected to the database');
    app.locals.database = database;
    http.createServer(app).listen(app.get('port'), function onServerListen(){
        console.log('Express server listening on port ' + app.get('port'));
    });
}

database.connect(['Posts', 'Stats', 'Log'])
    .then(startServer);

The code that really starts things off is at the bottom. We call connect on our Database, passing in the array of collections we want. Since connect returns a promise, we can tack on another function using then(), which will also receive our connected db object. In this case it's our startServer function, which loads up Express and starts our server listening.

Accessing the Database in your Routes

In our app.js snippet, something I do is attach the database to app.locals. I'm not sure if this is the best approach, but it has been working for me so far. Now in my routes, I can access the database through app.locals. It could also be passed in to the registerRoutes function and passed around from there. For my blog, instead of accessing the database directly from the reference, I have another layer which resembles the Repository pattern. For simpler apps I've been OK with the direct reference approach.
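As a hypothetical sketch of the direct-reference approach (the route handler, the findDocs usage, and the JSON response are illustrative, not my actual routes):

```javascript
// A hypothetical route handler using the database hung off app.locals.
// findDocs returns a stream (see the Database object above), so the
// handler collects the documents and responds when the stream ends.
function searchRoute(req, res) {
  var database = req.app.locals.database; // attached during startup
  var stream = database.findDocs(req.params.term);
  var docs = [];
  stream.on('data', function(doc) { docs.push(doc); });
  stream.on('end', function() { res.json(docs); });
}

// Registered in Express as usual:
// app.get('/search/:term', searchRoute);
```

The handler never requires the Database module itself, which keeps the routes easy to test with a fake database object.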

Can it be better?

Like most of the code we write, it looks pretty good today. Much better than how we did it last year. I'm not sure if there's a better way, with better falling into [simpler, more scalable, something I just don't know about]. If you know of or use a better approach, I'd love to hear about it!

<![CDATA[Getting Started With MongoDB and C# Revisited]]> Sun, 26 May 2013 07:00:00 GMT A few years ago I wrote about using the MongoDB driver in C#. It's been one of my most popular posts and it really needs an update! Since 2011, the 10gen driver has become the standard. It's been getting updated on a regular basis with features that bring it in line with what we would expect in C#. I've been using MongoDB for all of my personal projects and have been very happy with it. So here's an update on what it looks like today to use MongoDB driver version 1.8.1.

Getting the Driver

The source is still located on GitHub, but now that we have NuGet, the easiest way to get started is by using the mongocsharpdriver NuGet package. It's everything we need, compiled and ready to go.

To the Code

The best place to get started is still the official 10gen C# driver tutorial. It covers what you need to get started and helps to keep track of what is new in each release.

I'm going to stick with the original app, which was a simple tool that keeps track of passwords and various notes. I've been using it for a few years and have been happy with its simplicity.

The first change is how we connect. It used to look like this:

MongoServer server = MongoServer.Create("mongodb://myserver");
MongoDatabase db = server.GetDatabase("TheDatabase");

That style has been deprecated and instead we should use the new MongoClient class. Here's what that looks like now:

var mongoClient = new MongoClient("mongodb://myserver");
var mongoServer = mongoClient.GetServer();
var database = mongoServer.GetDatabase("TheDatabase");

Not too different, but the naming is definitely clearer. It no longer looks like we're creating a server which is nice.

Getting a reference to the collection is still the same. We specify our CredentialSet class when we get our collection so that we don't need to work with BsonDocuments if we don't want to. Even though MongoDB is a schema-less document store, it does make life easier to have a fixed type to work with.

var passwords = database.GetCollection<CredentialSet>("passwords");

And just as a reminder, our model looks like this:

public class CredentialSet
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string WebSite { get; set; }
    public string Notes { get; set; }
    public int Owner { get; set; }
    public DateTime LastUpdate { get; set; }
}

One thing that did get fixed is the _id issue from last time. The driver will now use Id as the document id automatically. It will also look for _id, but that's not in line with C# standards.

So let's save a new document:

var password = new CredentialSet();
// Set the property values...
passwords.Save(password); // Save performs an insert-or-update by Id

Now that we have a saved document, let's query for it. Here's where we get to my favorite new feature: LINQ support.

Instead of building up a Query object and using Find or FindAll, we can access our collection as an IQueryable and use LINQ against it. Like most custom LINQ providers, not every operation is supported, but it's typically nothing that can't be worked around.

So before we had:

var query = Query.EQ("Title", "A password");
var oneDocument = passwords.FindOne(query);

Now we can do:

var result = passwords.AsQueryable().SingleOrDefault(x => x.Title=="A password");

Of course, the older methods are still available for the operations that don't make sense with LINQ, such as map/reduce.

Next Time: Aggregation Framework, V8, and Full Text Search

Another nice feature in MongoDB 2.1 and later is the Aggregation Framework. It provides an easier alternative to map/reduce. I'm still learning about it, but I am using it on this site to generate some statistics for my dashboard view. As of 2.4, V8 now powers MongoDB and we get a few extra benefits such as multiple scripts executing at the same time. We also got the first version of a full text search engine built into MongoDB. I'll dive into these next time.

<![CDATA[The Creation of My New Blog]]> Sun, 19 May 2013 07:00:00 GMT The World of WordPress

About a year and a half ago I made the switch from Posterous to WordPress for my blog. I figured that I might as well learn how to use the 800-pound gorilla in the room. For the most part, things went well, considering that I wanted to run it on Windows under IIS and use SQL Server as the database. I added some plugins for the basics like commenting, syntax highlighting, and the like. The import from Posterous was smooth, with nothing lost.

And it was good.

I'm not sure if it was the WordPress upgrade or an update to one of the plugins, but one day a few months ago I tried to create a new post only to have 90% of it just disappear upon hitting save. I hit the edit button and retyped a paragraph to see if that would save. It didn't. I typed a little less, then previewed the post this time. Gone. I did a few more experiments with creating new posts and editing them in various stages. They all seemed to auto-save early in the entry and then get locked forever.

A Ghost in the Darkness

I wasn't sure what I wanted to do about my blog. I wasn't in the mood to re-install WordPress. I looked at a few blogs written in .NET, but none of them really appealed to me since most are written using WebForms. Then I saw the Kickstarter for Ghost pop up on Twitter. It's basically the start of a new platform designed to focus on blogs, versus the CMS-style product that WordPress has become. It's written in Node.js with SQLite as the default backend database. Markdown is used as the input language, with a real-time preview as you create a post. It looks to leverage the best of HTML5 to make a state-of-the-art blogging platform.

My initial reaction was probably the same as most developers when they see something cool on the web:

I can build that!

And so I did.

"I see you have constructed a new lightsaber."

There's a bit of me that feels writing your own blog is a rite of passage as a developer. I know most people use existing packages, because why would you want to waste time creating something that has been created hundreds of times before? For me, this is a chance to not only give it my personal touch, but to really experiment with new technologies and practice skills outside of my comfort zone. Some might say it's like a Jedi building his first lightsaber.

At work I almost exclusively use ASP.NET MVC 4. And while I really do like using it, I felt this was the perfect time to try building a website in Node.js and Express. I really liked the idea of using Markdown instead of a WYSIWYG editor or plain HTML. I also liked the idea of having the layout update in real time when writing a post. I'm using MongoDB since it's my go-to datastore due to how easy and fast it is. So far the core is done. It's still mostly MVF (minimum viable functionality), but I'll keep tweaking it as I go.

Here are some of the highlights that I'm proud of or really happy with.

Writing
To get the dual Markdown/HTML rendering I'm using Pagedown, which is from the folks at Stack Exchange; it's the editor they use on their sites. It was really easy to implement, and there's even a third-party add-on (Pagedown.Extra) which extends the Markdown a bit more for things such as tables and code syntax highlighting. For syntax highlighting I'm using SyntaxHighlighter.

For uploading images and files I integrated Dropzone.js by overriding the image dialog in Pagedown. Dropzone is amazingly simple to implement and provides thumbnails of the images as you upload. Just eye candy, I know, but the effect is sweet.

Here's a screenshot of me writing this post: Editor Screen

Design
If there's anything I need more practice at, it's design. Thanks to Twitter Bootstrap, I got a running start. I like the clean and simple look, so I tried to keep plenty of whitespace and let it define the sections. I'm using LESS for the CSS. I'm not yet customizing Bootstrap, but it's on the list. Font Awesome is used for the icons. I went pretty minimalistic on the colors, sticking to really dark grey and black on white. I'm still iterating over the layouts, but I think I'm pretty close.

Hosting
I run my own servers, so I wanted to continue hosting my blog locally. For now I'm using iisnode with Node.js 0.10. One of the benefits is that I can have IIS host all of the static content and only have Node serve the dynamic pages. This is the standard Node configuration I hear about, with the exception that it's usually nginx instead of IIS. The concept is the same.

I have Grunt set up to do my build and deployment so I can test locally, then push out the live site. I really like Grunt and am looking at how feasible it would be to use it in the .NET world for things like project scaffolding.


I wanted the site to be fast. Really fast. I tried to do all that I could to optimize the site. Grunt combines and minifies my JavaScript and CSS. Express is gzipping the content. The slowest part of the site is Disqus, which is used for comments. Without Disqus, page load times are sub-70ms. Someone said on Twitter that a blog without comments is not a blog (and I agree), so it's a price I'm willing to pay. One way I make things fast is loading all posts in memory and keeping them there. I don't have thousands of posts, so I can get away with that. Right now Node is only using ~60MB of memory, so I'm not too concerned.
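The in-memory idea is nothing fancy: load every post once at startup and answer lookups from a plain object. A minimal sketch (the post shape and names here are made up for illustration, not the blog's actual code):

```javascript
// Minimal in-memory post cache. In the real app the posts
// would be loaded from MongoDB once at startup.
function PostCache(posts) {
  this.bySlug = {};
  for (var i = 0; i < posts.length; i++) {
    this.bySlug[posts[i].slug] = posts[i];
  }
}

// Each page view becomes a hash lookup instead of a database round trip.
PostCache.prototype.get = function (slug) {
  return this.bySlug[slug] || null;
};

var cache = new PostCache([
  { slug: 'hello-world', title: 'Hello World', body: '...' }
]);
```

The trade-off is memory for speed, which works as long as the whole post list fits comfortably in RAM.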

Almost there

I still have a few behind-the-scenes sections to create. I want to build up a dashboard for some stats. It probably won't be as amazing as what Ghost will provide, but I'm not sure I need that much. I still have Google Analytics running anyways, and it's not like I'm going to beat that.

I also want to pretty up the Edit page to use auto-completion for the tags and to have the URL get built from the title automatically. Just a bit of extra polish really.

I do have an RSS feed, so if you're interested in .NET and JavaScript posts, please do subscribe.

Until next time...

<![CDATA[And the winner is… Git]]> Tue, 03 Apr 2012 04:00:00 GMT In 2010 I made the choice to use Mercurial instead of Git. That was mostly due to Mercurial having much better Windows support. It had strong tooling for the command line, Windows Explorer, and Visual Studio. It was a simple choice. Ahh, but the times they are a changing…

Here we are, a quarter of the way through 2012. The world has moved forward as it tends to do. GitHub has dominated the open source world and elevated Git to be the de facto winner of the DVCS battle. The final victory for Git was the announcement from CodePlex that they too will support Git. It seems Git is really all you hear about nowadays in the DVCS world.

So now that it’s clear that Git is the tool to know, I guess it’s time to make the switch for my personal projects. I can get by cloning a repo with my rudimentary Git skills; now is the time to get familiar enough with it to use it in my normal workflow. I’m not planning on converting existing repos, just starting new ones on Git.

Another good sign for us Windows Git users is that Phil Haack has moved over to GitHub. I have much faith in Phil as he and the GitHub gang work towards improving the Git experience on Windows.

And I, for one, welcome our new DVCS overlords.

<![CDATA[New Year, New Job, New Stuff!]]> Thu, 09 Feb 2012 05:00:00 GMT Motivation can be a funny thing. Sometimes you have it when you don’t have time for it. Other times it’s nowhere to be found when you are desperately seeking it. My motivation for blogging has waxed and waned over the last year, as is probably apparent by all three of my posts. This year will be better. Hard to be much worse!

I am motivated by moving my blog over to WordPress. I think my layout is cleaner, and my mobile layout is awesome! (Try it!)

New Year!

Happy New Year! Gregorian and Chinese.

New Job!

In November I started a new job as a developer at Pinnacle of Indiana. They found me through Careers 2.0. I had created a profile when it was still 1.0, but I never really expected to get contacted, let alone a job.

I get to work with a great group of people who have a focus on the craftsmanship of programming. I’m excited to learn and hopefully begin to master the Agile methodologies. Most of our projects are .NET, but I’m getting exposure to other Microsoft products like SharePoint, Dynamics GP, and CRM. And of course, lots of web apps. The JavaScript I write today is so much better than just 2 months ago.

So far it’s been a great opportunity for me and I’m super lucky to be working here. Depending on what I come up with, I might even do some blogging on our developer blog.

New Stuff!

In addition to learning more work skills, I have started brewing my own beer! I’m currently in the process of fermenting my second batch. I started out with a Porter which turned out great! My Amber should be ready in about 2 weeks. I would love to blog about it, but I’m not sure what I would say yet.

I’ve played with quite a few of the “cool kid” technologies over the last year and I have a bunch of things I want to cover in future posts. Some of the topics are:

  • Node.js
  • MongoDB
  • JavaScript
  • Git

More posts are on the way!

<![CDATA[CoffeeScript gently reminds me that tabs are not spaces]]> Wed, 05 Oct 2011 04:00:00 GMT I recently picked up Trevor Burnham’s CoffeeScript book. So far it’s a great introduction to CoffeeScript and also Node.js, two topics I wanted to learn more about. I started running through the first examples to see them in action. I downloaded the latest node.exe, and found a way to add the CoffeeScript module without NPM. I wrote up a simple test just to make sure it worked:

console.log "Hello World!"

I ran this command to run it:

node %coffee%

That worked. Node gladly printed my string, passed through CoffeeScript. Of course there isn’t much that CoffeeScript is doing, but there were no errors.

My next step was to try the first full sample in the book. It’s an early piece of the larger app the book builds up to. I use Notepad++ for most of my plain text editing. I typed in the code, saved it, and ran it.

Error: In, Parse error on line 14: Unexpected ‘POST_IF’

The function at line 14 looks like this:

promptForTile2 = ->
  console.log "Please enter coordinates for the second tile."
  inputCallback = (input) ->
    if strToCoordinates input
      console.log "Swapping tiles...done!"

Everything looked correct. I just didn’t get it. Googling for ‘Unexpected POST_IF’ brings up that it’s a parsing error, and most posts have to do with multi-line if statements. I didn’t think that was what I was running into here.

Or was I?

I read through the multi-line if posts and it dawned on me that maybe the error was being more helpful than I thought. I went back through my code and re-counted the spaces just to make sure I was consistent. Turns out I wasn’t, exactly. Notepad++ was helping me out by automatically starting the next line at the same indentation level as the last line. The problem I ran into was that Notepad++ was inserting a tab instead of 4 spaces when the indentation was 4 spaces or more. CoffeeScript didn’t like the tab starting the line after the if statement. It wants spaces, not tabs.

The fix was easy enough. Like all great apps, Notepad++ is flexible. I just had to turn off the option to automatically align the next line. After cleaning out the tabs and changing them to spaces, we were good to go.
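If you’d rather catch the problem before the compiler does, a few lines of script can flag any line that begins with a tab (just an illustrative check, nothing to do with CoffeeScript itself):

```javascript
// Flag lines that start with a tab character - CoffeeScript
// wants consistent space indentation, so these are suspects.
function findTabLines(source) {
  var bad = [];
  var lines = source.split('\n');
  for (var i = 0; i < lines.length; i++) {
    if (/^\t/.test(lines[i])) {
      bad.push(i + 1); // report 1-based line numbers
    }
  }
  return bad;
}

var sample = 'if ok\n\tconsole.log "tab!"\n  console.log "spaces"';
var suspects = findTabLines(sample); // line 2 starts with a tab
```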

Since I didn’t really find anything on Google, I thought it might help someone else that runs into this. I’m pretty sure it’s the kind of thing that only us Windows users will run into, with all of our overly helpful tools.

<![CDATA[ Cutting the Cable – Life after Comcast]]> Thu, 26 May 2011 04:00:00 GMT Like most Americans, our family had cable TV, Comcast in our area. And it was good, just expensive. Having 2 TVs, HD service, and a DVR really adds up. Throw in the sports package to be able to watch NFL Network in the fall and we’re talking a healthy sum of money each month. Now we’re doing fine financially, but we have a house, a toddler, and are planning for another one in the future. The more we can save, the better, right?

Channels, Channels, Channels

Our most common complaint is one you can read about whenever someone talks about cable (or U-verse, or satellite). We had more channels than anyone can watch. We had just about each channel twice, one in standard def, and one in HD. I never did count them, but I would guess we had about 250 HD channels. Of which, I think we watched about 15 on a regular basis. And most of those weren’t watched live, but through the DVR after we recorded the show.

When you compare the total number of channels available against your monthly bill, you’re probably close to $0.25 a channel. Not too bad. But when you compare that bill to our actual usage… Now it’s more like $10 a channel. Throw in that 4 of those channels are the major networks, and that makes you wonder if there is a better way.

I know that the “À La Carte” channel model will never happen even though that is the trend most technologies are headed toward. If that were an option on Comcast, we would have stayed a cable TV customer. We are still an Internet customer, but on the business side.

To the Cloud! – Kind of

So what’s the alternative? We do like to watch some TV. Just not as much any more.

Netflix – Of course! Hulu Plus also helps fill in the gaps. But we have a nice HD TV which is a monster compared to our laptop screens!

Roku to the rescue!

IPTV – The future is here

I don’t remember exactly where I first heard about Roku boxes. I think it was a podcast or maybe Slashdot. Either way, this device is pretty slick. You can connect it to your Netflix, Hulu Plus, and Amazon Instant accounts to get started. Then there are a bunch of free and premium channels available as well. Roku also has an SDK which allows developers to create their own channels from content they may already have available on the web. You can even connect your Pandora account to listen to music through it. A little Googling, and you can even find a channel to watch YouTube.

Roku has 3 different models available, ranging from $60 to $100. This is a one-time fee. You just have to pay for the other services you use, like Netflix or the premium channels. Honestly, there is no reason not to just get the XD|S version and call it a day. Full HD, wireless N, optical output, AND a USB port which can be used to view pictures or play movies. All of that is worth the extra $40 in my opinion.

Overall, the quality is really good. Granted, having a 20MB downstream internet connection sure helps, but we haven’t had any issues with the Roku box itself. It holds the wireless connection well and picture quality is really good for streaming. We even picked up a second Roku for the other TV.

Just about perfect

Overall, Netflix & Hulu Plus cover about 80% of everything I want to watch. We don’t have Amazon Instant right now. Hulu Plus is good, but the commercials have to go. I’d pay another $5 a month to be totally commercial free. Saturday Night Live is pretty much skit, commercial, skit, commercial. Plus, I like Big Bang Theory, so no CBS on Hulu also hurts a bit. The CBS website does cover that though.

The transition was pretty smooth. Our TV bill is now about 1/10th what it was before. We miss the DVR and select shows that aren't on Hulu Plus, but everything else is on demand anyways. There’s also plenty of children’s programming for our little guy. I have thought of getting an antenna and seeing what local channels we can get in HD. We haven’t quite gotten to that point yet, but maybe someday.

<![CDATA[Getting Started With MongoDB and the 10gen C# Driver]]> Tue, 01 Feb 2011 05:00:00 GMT While most of this article still applies to the current version of the C# driver, I have written an updated version: Getting Started With MongoDB and C# Revisited

My main goals are to setup MongoDB for small scale applications that aren’t going to scale up to lots of users and multiple servers. I’ve installed MongoDB as a service and I’ve started to play around with the 10gen C# driver. There are a couple of C# drivers already out there (NoRM and the mongodb-csharp ones being the most popular) and people report varying levels of success. 10gen has also released a driver which has caught up feature-wise to the others. I decided to use this one because it’s easy to use and I do get a warm and fuzzy feeling knowing that it is from the 10gen folks. You can find the repository on GitHub.

Into the code!

The best place to get started is the 10gen C# driver tutorial. It covers what you need to get started and sometimes a bit more.

The app I’m writing is a simple one that stores usernames and passwords along with some other information like the URL of the website to use the password and any notes that we might want to add. It’s just going to be on our internal network and won’t have any interface to the internet. That means I’m not focusing on things like encrypting the passwords, or other security measures if this were to be used anywhere else in the public.

Connecting to your database is pretty straightforward. There are many other options available, but to get started, you just need the server name.

MongoServer server = MongoServer.Create("mongodb://myserver");
MongoDatabase db = server.GetDatabase("TheDatabase");

One of the things I like about MongoDB is that just asking for the database will create it.

So now that we have a reference to our new database, what are we going to do with it? Most of the time, we already have a model ready to be stored. Here’s my class that holds a password set.

public class CredentialSet
{
    public ObjectId _id { get; set; }
    public string Title { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string WebSite { get; set; }
    public string Notes { get; set; }
    public int Owner { get; set; }
    public DateTime LastUpdate { get; set; }
}

It’s a pretty basic class. The only addition from standard C# is the ObjectId class. This class represents the default MongoDB identifier. You can choose to use your own unique identifier, but for now, I’m just going to use the default.

Our next step is to create an instance of our class and save it to the database. But first, we need a place to store it. In the relational world, we would use tables inside our database to store the data. In the MongoDB world, we use the Collection. Just like the database, the act of getting the reference to it will create it if it doesn’t exist.

MongoCollection<CredentialSet> passwords = db.GetCollection<CredentialSet>("passwords");

As you can see, we can specify our CredentialSet class when we get our collection. Even though MongoDB is a schema-less document store, it does make life easier to have a standard, static type to work with. When we specify a class like this, we are telling the driver to use our CredentialSet as the default when pulling our documents from the database. You can still insert any type of document you want, but this style saves us some keystrokes later on.

So now let’s save our document.

var password = new CredentialSet();
// Set the property values...
passwords.Insert(password);

We can now use a tool like MongoVUE to see our record in MongoDB. When we take a look at it, we see something a little unexpected. Our _id is all zeros!

/* 0 */
{
  "_id": "000000000000000000000000",
  "Title": "A password",
  "Username": "username",
  "Password": "password",
  "WebSite": "",
  "Notes": "This is a password!",
  "Owner": "1",
  "LastUpdate": "Tue, 1 Feb 2011 10:47:20 GMT -05:00"
}

Doing some research, I found some references to a known issue with the 10gen client, and a simple fix. We just need to add an attribute to our model’s _id property. Here’s the updated CredentialSet.

public class CredentialSet
{
    [BsonId]
    public ObjectId _id { get; set; }
    public string Title { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string WebSite { get; set; }
    public string Notes { get; set; }
    public int Owner { get; set; }
    public DateTime LastUpdate { get; set; }
}

This tells the driver that we want to use the _id property as the internal MongoDB identifier. After deleting our existing item using MongoVUE, we can run our sample again and examine the record.

/* 0 */
{
  "_id": "4d38833880844214f0a8c60b",
  "Title": "A password",
  "Username": "username",
  "Password": "password",
  "WebSite": "",
  "Notes": "This is a password!",
  "Owner": "1",
  "LastUpdate": "Tue, 1 Feb 2011 10:47:20 GMT -05:00"
}

Much better. Now we can try to pull that document out. There are lots of queries you might want to do. Way more than I can go through. I’m just going to show two simple examples. The first is pulling out all documents, and the second is finding a specific record based on a single field.

Let’s start with all records.

var allPasswords = passwords.FindAll();

It doesn’t get much easier than this! Again, we can use this simple method because we’ve specified a default document class. From here, we have a collection of CredentialSet objects that we can work with using standard methods such as foreach or Linq to Objects. So now let’s get a specific document.

To get a specific document, we need to build up a Query object to tell the driver how to create the JSON that MongoDB will use to find our document. From there, we use the FindOne method on the collection.

var query = Query.EQ("Title", "A password");
var oneDocument = passwords.FindOne(query);

There are lots of options when creating a Query. The one we used here, EQ, does a simple Equality comparison. It finds all documents where the Title field exactly matches ‘A password’. Since this was the Title we put in above, that’s the one we get back. Just about all of the options for querying are available. The 10gen C# driver page does a good job covering them.

Wrapping up

This was my first use of MongoDB. With the basics of saving and retrieval down, I can move forward on getting an app up and running. I know that this is really simple and it doesn’t cover any of the features MongoDB is known for such as master/slave replication or sharding. I also don’t do any error checking.

Something as simple as this can be done with any relational database. But in order to do this, I’d need to hook in an ORM such as nHibernate or EF4. That means extra code. The MongoDB driver handles all of the class to JSON mapping automatically. That’s what I’m looking for with this.

Standard tutorial disclaimer: None of this code is what I consider Production Ready. It did give me a starting point to move forward from. Hopefully it helps someone else as well.

<![CDATA[MongoDB and scaling down?]]> Tue, 14 Dec 2010 05:00:00 GMT I’ve been reading up on MongoDB. I picked up MongoDB: The Definitive Guide. It’s not a bad reference to have. I’m actually surprised how thin it is, yet it answered just about every question I had. It covered everything I wanted to know in getting started with MongoDB when looking at it from a “SQL” perspective. It did a good job explaining de-normalizing your schema because document databases just don’t work that way. It also covered the basics of accessing subdocuments, or arrays in arrays. I pretty much understand its benefits and limits as a database. Rightfully so, it didn't really cover single-instance scenarios to the depth I have been thinking about. Sure, we all know how great all of the NoSQL databases scale when you need to handle 1.21 bajillion requests at once. But what if you want to use the ease and speed of storing and accessing data for a small app? I know it’s not sexy enough to cover in the main stream, but what about the people that want a document database for use in a small environment? That’s what I want to know about.

What if I want to create the back end of a billing system or CRM that isn’t going to scale beyond a single office? What if I’m never going to go past a thousand users?

I think that there can be a great benefit to the small app developer by using a document or key/value database instead of only a traditional ACID compliant, relational database. I’m going to experiment with using MongoDB as a central database for a few small apps. Why shouldn't we use the ease of access provided by these systems for apps that will never hit millions of users? Sure, ORMs are great, but what if we never had to use them at all? My next little hobby project will be to convert a little password app for my wife to sync up to a MongoDB server. It currently uses a local SQLCE database which works just fine. I think by expanding it to also save on one of my servers, it will provide a backup of the data in case her laptop crashes. It will also let her search my passwords, and me, hers.

I want to explore the best settings for MongoDB when you only plan on using it on a single server. I feel that this is an underserved area. There are lots of companies, for better or for worse, that don’t have the ability to scale out across multiple servers for their critical systems. Should they stick with traditional relational databases, or can they too enjoy the performance benefits of a document database? The goal for my next post is to answer this question: what is the best configuration for a single-server environment?

<![CDATA[5 Things to Learn in 2011]]> Fri, 12 Nov 2010 05:00:00 GMT There is so much amazing stuff happening with technology today. I think I’m at a good spot in my career where I need to push my boundaries. I don’t have any development work as part of my job, so I think I need to pick up some hobby projects. My comfort zone is within the Microsoft stack: C#, IIS, and SQL Server really. Below is a list of things that I think I need to learn more about in 2011. It’s more like a list of personal goals, but it’s also here to remind me in case I lose focus as time goes on.

  1. MongoDB
    I have installed MongoDB twice before, but never really did anything with it other than walk through a couple of tutorials, think “That’s pretty neat”, and move on. There’s something about the scalability and ease of use of a NoSQL database that I find interesting.

  2. MonoDroid
    I know this isn’t a large learning curve since I will still be using C#, but I have never done mobile development. I have an HTC Incredible and I love it. I’m sure I can come up with some app that I want and code it up. I just signed up for the Preview. Crossing my fingers that I get accepted.

  3. Clojure
    I walked through a few of the tutorials but I haven’t really dove into it. I really need to figure out a project that could use Clojure. Functional programming really interests me, but I just don’t have the need for it… yet.

  4. Windows Azure
    Yeah, I know that’s really broad and there are a bunch of different parts to it. I have an MSDN subscription so I can play around on a small scale for free. I think it’s going to be important to really understand how it works and what I might be able to use it for.

  5. node.js
    It’s not super Windows friendly, but it sounds like you can get it to run via Cygwin. It sounds like it’s really cool and worth checking out. At a minimum I can pretend to be one of the cool kids that knows its potential.

I’m sure this will change, but it’s a pretty good start I think.

<![CDATA[ A new focus and a new home]]> Tue, 26 Oct 2010 04:00:00 GMT The big change in my job has been the de-emphasis of my development time, with the focus now on the email archive migration side of our business. I was bummed about the change, but it’s the direction we’re headed, and business is booming! I think many people don’t really know what email archiving is, and the need for migrating between email archiving systems is probably even more foreign. But those folks who work for a company that needs to archive understand the severity of not having it. While Bishop still sells and supports a few different email archiving packages, we also have a lot of experience in migrations. And that’s what I’m in charge of now. Maybe someday I’ll get to develop again.

Since the focus of my job has changed, I’m also changing the focus of my blog. I’m making it a little more general by giving it a technology focus instead of a pure development one. I also decided to try out Posterous as a hosting site, and I switched to a new domain name. I picked it up sometime last year when GoDaddy had a special on.

I’m also going to start posting more. For the most part I’ve just been lazy, but I feel renewed seeing the new layout of the site. Now that Microsoft and WordPress are closely aligned, maybe I’ll try that out if I get the motivation. Who knows.

<![CDATA[How I chose between Git and Mercurial]]> Wed, 10 Mar 2010 05:00:00 GMT The current popular topic among developers is distributed version control. The current standard is Subversion, which was a nice improvement over CVS. Like most technology, when there are pain points, someone is going to improve on it. That’s where distributed version control systems come in. The two front runners are Git and Mercurial. There are dozens of blog posts about using a DVCS and what’s good about them over Subversion, so I’m not going to talk about that. Instead, this is about the factors in how I chose between them as I migrate away from Subversion. Here’s my environment and what I needed:

  1. As a .NET developer, I use Windows.
  2. The repository should be easily accessible over the internet from my servers.
  3. The repository needs to be locked down. The code I write isn’t open source.

The two DVCSs do quite well with #2 & #3. But one of them does much better at #1 and makes #2 easier. Anyone who has looked at Git or Mercurial knows that Git wasn’t exactly designed with Windows support in mind. It can be used on Windows, and many people do. If it had a native Windows implementation, it probably would have been my choice. This StackOverflow question made it clear that Git on Windows is not ready for prime time.

Ok, Mercurial it is!

So now to get started. There are tons of good tutorials about getting started. The two places I used were and the TekPub series Mastering Mercurial. Getting up and running locally is pretty simple. TortoiseHg and VisualHg are on par with their Subversion counterparts, although I still learned about the command line options. I did struggle a bit to have Mercurial import my existing Subversion repositories, but again, a quick search brought up a bunch of pages with the fix.

The next step is to get the repositories up online. The TekPub series had a great walkthrough for getting the hgwebdir.cgi setup. Most of the difficulty is making sure that Python is enabled in IIS, after that it’s permissions that you need to get right. In the past, I played around with PHP and Python a little, so I had done most of those steps before. The overall setup in IIS does require basic authentication as the first level of security. That of course means you need to use SSL to keep the username and password from being sent in clear-text. Luckily, I already have an SSL certificate installed and ready.

Tying up the loose ends

So now I’ve gotten everything setup. I can push and pull from my “master” copy. The one thing that is driving me crazy is that I keep getting prompted for my credentials! I was a little surprised that there weren’t more articles about this. I was led down the correct path from an older post about someone trying to do this on Linux. Luckily it translated over directly.

Mercurial on Windows uses INI files for most settings. There are global settings which are stored in the TortoiseHg install folder, and per-user settings in the root of the user’s profile. Here’s the section that needs to be added:

[auth]
group.prefix =
group.username = domainuser
group.password = password

The ‘group’ that prefixes the entries is just a friendly name to group the properties for the authentication. It’s not tied to anything like the server name or repository. These settings will apply to any repository hosted on the server entered in the group.prefix field. After this was saved, I no longer got prompted when I pushed or pulled from the main server.

Moving forward

Now I’m in day-to-day mode. It’s still a little bit of a change in workflow compared to Subversion which I used for the last few years. The branching is really nice and the graph display is a slick feature. I haven’t had to do a complicated merge yet, but I hear it’s a nice experience. I can’t wait.

<![CDATA[Win32_ProcessStopTrace Truncation Follow-up]]> Fri, 19 Feb 2010 05:00:00 GMT Yesterday I wrote about a bug I discovered when using WMI to monitor process start ups and terminations. The bug is that the ProcessName property is truncated to 15 characters with the Win32_ProcessStopTrace object.

Two test scenarios I still needed were on 32bit Windows Server 2008 and 64bit Windows Server 2003. I built up a 32bit Windows Server 2008 and tested using the code I posted yesterday. Sure enough, the ProcessName was truncated. I still want to build up a 64bit Windows Server 2003 box, but it’s not a priority since this appears to be a 2008 issue.

Work-around To Track Process Lifetime

Since we can’t use the ProcessName reliably on Windows Server 2008, I need a better way to link the start-ups and the terminations. A simple solution is to also track the ProcessID which is provided in both the Win32_ProcessStartTrace and Win32_ProcessStopTrace objects. When a Start is triggered, I just keep track of both the ProcessName and ProcessID. When I get the Stop event, I can look back on the cached ProcessName using the ProcessID received in the Stop event.
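The bookkeeping for this work-around is tiny. Sketched here in JavaScript for brevity rather than the C# of my actual code (the names and values are illustrative):

```javascript
// Cache full process names by PID at start; resolve them at stop,
// since the stop event's name may be truncated to 15 characters.
var running = {};

function onProcessStart(pid, name) {
  running[pid] = name; // full name from Win32_ProcessStartTrace
}

function onProcessStop(pid, truncatedName) {
  // Prefer the cached full name over the possibly-truncated one.
  var name = running[pid] || truncatedName;
  delete running[pid];
  return name;
}

onProcessStart(4242, 'ReallyLongProcessName.exe');
var resolved = onProcessStop(4242, 'ReallyLongProce');
```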

I’m planning on posting this bug to Microsoft Connect once I find the right section to do that in.

<![CDATA[Process Name Truncation bug when using WMI to monitor processes on Windows Server 2008]]> Thu, 18 Feb 2010 05:00:00 GMT I’m working on a class that monitors when processes on a machine start up and stop. The easiest way to do this is to use WMI with the Win32_ProcessStartTrace and Win32_ProcessStopTrace classes. I wrote a small class to test this out to make sure it meets my needs. Here is the code I’m using for my ProcessWatcher class:

using System;
using System.Management;

namespace ProcessWatcherTest
{
    class ProcessWatcher
    {
        private ManagementEventWatcher processStartWatcher;
        private ManagementEventWatcher processStopWatcher;

        public void StartMonitoring(string serverName)
        {
            string startQuery = "SELECT * FROM Win32_ProcessStartTrace";
            string stopQuery = "SELECT * FROM Win32_ProcessStopTrace";
            string managementPath = string.Format(@"\\{0}\root\cimv2", serverName);
            processStartWatcher = new ManagementEventWatcher(new WqlEventQuery(startQuery));
            processStopWatcher = new ManagementEventWatcher(new WqlEventQuery(stopQuery));
            ManagementScope scope = new ManagementScope(managementPath);
            processStartWatcher.Scope = scope;
            processStopWatcher.Scope = scope;
            processStartWatcher.EventArrived += processStartWatcher_EventArrived;
            processStopWatcher.EventArrived += processStopWatcher_EventArrived;
            processStartWatcher.Start();
            processStopWatcher.Start();
        }

        public void StopMonitoring()
        {
            processStartWatcher.Stop();
            processStopWatcher.Stop();
            processStartWatcher.EventArrived -= processStartWatcher_EventArrived;
            processStopWatcher.EventArrived -= processStopWatcher_EventArrived;
        }

        void processStartWatcher_EventArrived(object sender, EventArrivedEventArgs e)
        {
            var o = e.NewEvent.Properties["ProcessName"];
            Console.WriteLine("Got Start: {0}", o.Value);
        }

        void processStopWatcher_EventArrived(object sender, EventArrivedEventArgs e)
        {
            var o = e.NewEvent.Properties["ProcessName"];
            Console.WriteLine("Got Stop: {0}", o.Value);
        }
    }
}

What I like about using WMI instead of polling the process list is that we can use events to get notified. This lets us hook into this class to allow for a more flexible design. For this test class, we’re simply writing out the name of the process to the console.

My development machine is running Windows Server 2008 R2 x64. When I ran my test app watching for local processes, it worked great! The console was listing all processes as they start and stop. Then I noticed something strange for the process stop message:

Process Name Truncation Example

Processes with names longer than 15 characters (including the extension) are getting truncated! I did some searches on the web, and didn’t find anything about this. Curious to see if this is only something on my machine, I copied the app over to a Windows Server 2003 x86 server I have running and got the following results:

Process Name Not Truncated

And sure enough, the full process name is displayed. So now I copied the app over to a machine running Windows Server 2008 x64 that runs one of our domain controllers. The process name was truncated again. So what does it mean?

I tested this on a few other Windows Server 2008 machines. They all showed the same truncation. I haven’t been able to test this on a 32-bit Windows Server 2008 box yet, nor on any 64-bit Windows Server 2003 boxes. That means this is either a 64-bit bug or a Windows Server 2008 (& R2) bug. If I get some time, I’ll create VMs of the two other test cases and see what the results are.

<![CDATA[Exchange 2010 Update Rollup 1 OWA Problem & Fix]]> Wed, 30 Dec 2009 05:00:00 GMT This isn’t exactly programming related, but it might trip up other people like it did to us.

I recently installed Update Rollup 1 and ran into a problem where the patch failed to complete. This also left OWA broken, with a JavaScript error in flogon.js preventing the logon screen from showing up.

In searching for a fix for this error, I ran across some good comments on the MS Exchange Team Blog for the rollup announcement. A few people posted the suggestion from MS PSS of running the update from within PowerShell started as an administrator. I tried this, and it allowed the update to run correctly and restored OWA functionality.

I haven’t seen any official comments on whether using PowerShell is the new requirement for installing rollups. It might just be a UAC issue where the update needs to run as an Administrator, which isn’t possible without launching an elevated CMD or PowerShell.

<![CDATA[PDC 2009 – Lots of new stuff to play with!]]> Sat, 21 Nov 2009 05:00:00 GMT Microsoft’s annual Professional Developer Conference was this week. While I wasn’t able to attend, there were plenty of ways to follow what was going on. Twitter was flowing with links and comments about everything announced. I was also able to watch the keynote speeches live off the PDC website. Overall, I was excited by the latest offerings. Here are some of my top picks.

Silverlight 4

A nice demo was done with the new webcam and microphone support. That’s neat, but I personally don’t have much use for it at the moment. I know lots of people have been waiting for printing support, which will finally be added. I could see using that someday, but it’s not needed for anything currently planned. What excites me the most is the new Out of the Browser experience. In Silverlight 3, you were sandboxed in, with very limited interaction with the client OS. Silverlight 4 relaxes that to allow interaction in ways that can be really useful. I need to figure out what exactly is available, but it sounds like a step in the right direction toward simplifying client-side apps.

Windows Azure Features

I love the idea of the cloud and the potential it has. While I currently don’t have any plans for a cloud-based app, I am on the lookout for how it can be used in our projects. The most interesting new feature for me is the Windows Azure Drive, which lets a cloud store be mounted as a disk drive. I also like that Microsoft is running a program to allow for free data uploads during non-peak hours until June 2010. The Azure Platform AppFabric was also announced. It allows companies to run their own private cloud service internally. I could see this becoming quite popular if it’s cheap (or free) and easy to install.

Now I just need some ideas on how to use all this stuff!

<![CDATA[Playing Around with the Web: ASP.NET MVC, jQuery, and Silverlight 3]]> Wed, 04 Nov 2009 05:00:00 GMT A few of the technologies that have been getting lots of attention over the last 6 months are ASP.NET MVC, jQuery, and Silverlight 3. Everyone is talking about them in blog posts, video series, Twitter, podcasts, you name it. There’s plenty of information to be had if you want to learn about them. The only thing really holding me back is time. My current projects are mostly service-based with basic front ends for configuration. Not very sexy when it comes to being able to use all the hyped-up technology out there. At least I can use .NET 3.5 and all its goodness.

My ASP.NET MVC Introduction

I’m working on a new project with a front end that is well suited to being a web site. It’s a simple monitoring application that keeps an eye on a few services distributed across a couple of servers and watches some application log files for error conditions. When a service crashes or errors occur, we’ll send out an email to the administrators so they can take a look. I’m going to have a Windows service monitor everything, and then use WCF to let the web site grab the latest data for viewing. Overall, it’s a simple project. Perfect for learning a new framework.

I must say, so far I really like using the MVC pattern. I haven’t done a ton of WebForms projects in the past, but I can see why people really like using it. I’m not a test-first kind of guy (yet), but the easy unit testing is a plus. The flow feels more natural to me. I’m currently using the default view engine, but would love to give Spark a try since it looks cleaner on the code side.

I have another internal web app I wrote to control our Hyper-V virtual machines that I started moving over to MVC. I think it’s going to be a good fit since it’s data-based and there are discrete actions that can be taken against each VM. I might turn that process into some future posts.

jQuery

Not having done a lot of web development, I never really looked into jQuery to see what it is or what it can do. Its inclusion in ASP.NET MVC by Microsoft was my first time seeing it. I’m at the point where I don’t know what I need it for, but I’m trying to learn it to know for sure. Reading over the jQuery web site, I am amazed at all of the things it can do. I also really like that they have a UI library. I played around with the ProgressBar a bit. It’s total overkill for what I used it for, since I just need to display a constant percentage. Nevertheless, I added it in to play around with it.

I have the 4th edition of JavaScript: The Definitive Guide, which I’ve been using as a reference more often lately. I should probably invest in the 5th edition. I know there are a few jQuery books out there as well if I really want to get up to speed.

Silverlight 3

I’ve wanted to do a Silverlight project for a few years. I never really had the time or an idea for what I should do. So I sat down one afternoon recently and wrote a test app.

Our company has a team of engineers that handles support calls from our customers. We use Salesforce to manage the tickets. As part of the support we offer, we provide service level agreements of various levels to our customers for how prompt a response they will get. We currently have an ASPX page, tied into a tab in Salesforce, that displays data from XML files generated by a Windows service reading from our internal SQL Server, which holds a copy of all existing cases. The ASPX page is really just a couple of DataGrid controls. We kept it simple so that it works on our Windows Mobile phones without any issues. It has met our needs for over 3 years.

I decided that I would make a Silverlight 3 app which would display the cases currently within their SLA. I was really interested in seeing how the Out of Browser feature worked and what the end-user experience was. I used a single DataGrid control with grouping to display the tickets based on their support plan. I haven’t picked up XAML yet, so that part was a little new. There were a couple of good blog posts I found (but didn’t save) on how to do the things I wanted to. I just used the WebClient to download the XML files as strings and then parsed them using the XDocument APIs. The end result was pretty decent for only a couple of hours of work. It would probably make a good series of posts, but I don’t think we’re going to develop it into anything. Everyone liked the web page version better.
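The download-and-parse step was only a few lines. This is a rough sketch, not the original code; the element names ("Case", "Subject") and class name are hypothetical, and Silverlight's WebClient is async-only, so the parsing happens in the completed callback:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Xml.Linq;

public class CaseLoader
{
    // Parse the downloaded XML; element names here are hypothetical.
    public static IEnumerable<string> Subjects(string xml)
    {
        return XDocument.Parse(xml)
                        .Descendants("Case")
                        .Select(c => (string)c.Element("Subject"));
    }

    // Silverlight's WebClient only exposes the async download methods.
    public void Load(Uri xmlUri)
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            foreach (var subject in Subjects(e.Result))
                Console.WriteLine(subject);
        };
        client.DownloadStringAsync(xmlUri);
    }
}
```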

The tough part about Silverlight right now is that about 90% of the blog posts that come up on Google cover the beta versions. It took me 30 minutes to find the correct code to do grouping on the DataGrid control. The way it was done in the beta didn’t make it to RTM. I finally did find one post that showed an RTM example and I was up and running.

What’s Next

I really want to play around with Windows Azure. I thought I read somewhere that it will remain free for CTP users until January. Hopefully they’ll have a free offering that I can develop against if I miss the window before PDC 09. I just need an idea of what to test!

<![CDATA[Can YOU do FizzBuzz?]]> Mon, 24 Aug 2009 04:00:00 GMT I’ve had a busy last few months. Raising an infant is not an easy task by any means. Add in a project that needs to keep moving forward and things get almost crazy. We have been able to bring on another developer. The interviewing process was an interesting one to say the least. I have interviewed people for system administrator positions in the past, but this is the first developer position I’ve done. We had the classic HR style questions ready. We also knew that these would only test for personality compatibility, not technical skills. So where do we go from there?

For this position, we went through a consulting company. They handle some basic technical testing based on our requirements, so we knew that the candidates had coding knowledge and had worked on projects for other companies. I hit StackOverflow and gleaned some technical interview questions that I felt would help us determine who was qualified. A very common tactic is to have candidates do almost a complete day of problem solving and coding. In the future, I’d love to have that luxury when hiring developers. We didn’t really have that kind of time for our interviews, since we were interviewing a group in a single day.

What I decided to do was give the candidates the FizzBuzz test. This first hit the web en masse in 2007. It sparked all sorts of posts saying it was great and terrible, sometimes both at the same time. Most people confirmed that the majority of candidates can’t pass it. I was amazed at this. Of course, the people who posted comments were able to put correct code up, but who knows how long it took them. So, I wondered what we would see 2 years later with our interviews. As a note: yes, I myself had no problem with it, and it took me only a couple of minutes to complete, mostly due to my slow handwriting.

Our results surprised me greatly. We hit an 80% fail rate. Most of the failures weren’t massive failures. They just missed 1 or 2 basic things that they could have fixed after we asked about it. To be fair to them, I did explain the problem to them and let them ask any questions before they began. I let them know that this is just a simple problem to test their basic coding skills. I told them that there aren’t any tricks to this; don’t over think it. The #1 problem? They forgot to print the number when it wasn’t a multiple of 3 or 5 or both. Of course, this is the first sentence of the problem.
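For reference, the whole exercise fits in a handful of lines of C#. This is one of many valid solutions (the structure and names here are mine, not something we required of candidates):

```csharp
using System;

class FizzBuzzTest
{
    // Returns the FizzBuzz output for a single number.
    public static string Line(int n)
    {
        if (n % 15 == 0) return "FizzBuzz"; // multiple of both 3 and 5
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return n.ToString(); // the step most of our candidates missed
    }

    static void Main()
    {
        for (int i = 1; i <= 100; i++)
            Console.WriteLine(Line(i));
    }
}
```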

The second part of the exercise was for them to walk me through the code they had written. It’s just as important to be able to explain their code in English as it is to write it in C#. The worst candidate literally read the C# code he wrote aloud. Semicolons, parentheses, and all. Of course, the stronger people immediately knew how to talk through their code. Their past experience on larger projects clearly showed.

In the end, we’ve been happy with the person we selected. And, yes, he did pass FizzBuzz.

<![CDATA[REST in .NET Revisited]]> Fri, 17 Jul 2009 04:00:00 GMT It’s been two months since I first took a look at implementing a RESTful service using the .NET framework. It’s been a busy time personally with the birth of our first child. During that time, I’ve done a bit of research, read some books, seen some presentations to get a pretty good handle on the options available. The other day I attended the July meeting of the Chicago .NET User’s Group. Scott Seely was the presenter, and he covered the four basic approaches for creating a RESTful .NET service. Scott’s co-authored book (Effective REST Services via .NET: For .NET Framework 3.5) covers these options, but I haven’t had a chance to pick it up yet, so this is pretty much my take on them.

Option 1: Use the ASP.NET MVC Framework

When I first read through Professional ASP.NET MVC 1.0, the thought of implementing a data access service with it popped into my head right away. A quick Google search shows that I’m not alone. Using the MVC framework feels like extra work compared to the other options for a pure dataset-based interface. I could see this being a great option if you already have an MVC site up and running with content and you want to expose data as well. I’m pretty sure people are already doing this to expose RSS feeds and such for their sites. As mentioned at the CNUG meeting, this is probably the best option if you already know ASP.NET MVC to a good degree and want to create a RESTful service very quickly. I would also assume that this isn’t the option with the highest performance. The “official” Microsoft recommendation is that ASP.NET MVC isn’t the proper choice for creating a RESTful data service; WCF or ADO.NET Data Services are.

Option 2: Use ADO.NET Data Services

ADO.NET Data Services sounded like a really good match when I first read its overview. We’re not using the Entity Framework, but that’s OK because it also supports custom data sources. There are also quite a few good posts on how to use various technologies with ADO.NET Data Services. I created a test project based on some sample code which just uses an in-memory data source. Implementing a read-only service is very easy and fast to do. I wasn't thrilled about the default output format of AtomPub, but it’s easy enough to switch to JSON by adding an “Accept: application/json” header to the request on the client side.
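Switching the format is essentially a one-liner on the client. A sketch (the service URL is hypothetical):

```csharp
using System;
using System.Net;

class JsonClient
{
    // Build a WebClient that asks the data service for JSON instead of the default AtomPub.
    public static WebClient Create()
    {
        var client = new WebClient();
        client.Headers[HttpRequestHeader.Accept] = "application/json";
        return client;
    }

    static void Main()
    {
        // Service URL is made up for illustration.
        string json = Create().DownloadString("http://localhost/CaseService.svc/Cases");
        Console.WriteLine(json);
    }
}
```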

Where things got a little messy was implementing the rest of the CRUD operations for the custom data source. Marc Gravell wrote a series of blog posts covering how to write the code required to fill out the rest of the CRUD operations, which really helped me gain an understanding of what is needed. Compared to the automatic implementation when using the Entity Framework, I would have liked to see less reflection involved. Granted, I didn't dig deep enough, or truly know enough, to speak about a “better” way to do it; I just know that reflection isn't the fastest operation in the .NET Framework.

I do like the ability to perform additional filtering and manipulation of the data directly in the URL. I’m not sure it’s a feature I need or would use, but as I read somewhere, that’s kind of the beauty of it; someone else might. If I had a pure “expose a dataset but you can’t use SQL (tcp 1433)” scenario, then I would go with this option. The limiting factor is that you have to return an IQueryable or an IEnumerable for your results. For my project, this isn't an issue for about 80% of the calls that would be made. However, there are some calls where I want to return a stream of binary data.

I do like ADO.NET Data Services. There may be a day where I have a need for it. For now, it’s not the best fit.

Option 3: Use WCF

In .NET 3.5, Microsoft added a lot of functionality to WCF to easily create a RESTful service. I started with a pure WCF setup and was happy with the results. I also tried out the WCF REST Starter Kit. It felt like it complicated things more than I was expecting, so I stopped using it. I prototyped out a portion of our service using WCF and started playing around with it. There is a lot of flexibility in controlling how the results are formatted. There’s also no limitation on what you can output. Just to goof around, I had one URL fire back a Silverlight application to be run. I also found it easy to get information for the times when I got stuck, such as implementing authorization and authentication.

My plan was to implement our service using WCF until I learned about the last option.

Option 4: Use a custom HttpHandler in IIS/ASP.NET

I hadn't run across this option at all when starting my research. Scott presented it at the CNUG meeting and ran through some sample code. The main benefit of this method is performance. Scott commented that the MySpace REST service he wrote was originally implemented using WCF. It was processing about 4,000 requests per second (RPS). When they re-wrote it using a custom HttpHandler, they were able to handle about 14,000 RPS. We didn't get details as to the exact nature of the work being done, but that’s a pretty significant improvement. Our service will never hit that level of traffic since it’s an intranet application, but we do care about good performance and low CPU utilization.

The trade-off for the speed improvement is the extra code that needs to be written. WCF does a lot of tasks automatically for you, such as serializing the responses, setting up the UriTemplateTable, method routing, etc. With a custom HttpHandler, we get to do all of that ourselves. For me personally, the extra code isn't enough to make me want to switch back to WCF. I actually like the flexibility it offers. The core of the code is also common enough with the WCF version that if I run into an unforeseen problem, I can switch back to WCF quickly.
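To give a sense of scale, the skeleton of a custom handler is small; the routing and serialization are what you take on yourself. A minimal sketch (the "/ping" route and JSON payload are made up for illustration):

```csharp
using System.Web;

// Minimal sketch of a custom handler for a RESTful endpoint.
public class RestHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    // Routing pulled out into a plain method so the decision is testable
    // without an HttpContext. Real services would route many verbs/paths.
    public static string Respond(string httpMethod, string path)
    {
        if (httpMethod == "GET" && path.EndsWith("/ping"))
            return "{\"status\":\"ok\"}";
        return null; // caller turns this into a 404
    }

    public void ProcessRequest(HttpContext context)
    {
        string body = Respond(context.Request.HttpMethod, context.Request.Path);
        if (body != null)
        {
            context.Response.ContentType = "application/json";
            context.Response.Write(body);
        }
        else
        {
            context.Response.StatusCode = 404;
        }
    }
}
```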

Final Overview

ASP.NET MVC – It’s already pretty RESTful, but it feels like you need to do too much when providing mostly data access.

ADO.NET Data Services – A powerful way to host data over a service. In a pure data CRUD application, it’s a good way to go, especially if you’re using the Entity Framework.

WCF – The recommended method from Microsoft. Flexible with just enough built-in helper methods and features.

Custom HttpHandler – Very similar to the WCF approach, but closer to the wire. Just about complete control at the cost of more coding.

For now, I’m going to go with the HttpHandler. I think it’s going to best fit our needs and offer the most flexibility. Time to code!

<![CDATA[SQL Server 2008 Full Text Indexing or Lucene.Net?]]> Thu, 11 Jun 2009 04:00:00 GMT I’m at the point where I need to choose how I’m going to implement the search functionality for our project. My first impulse was to use the Full Text Indexing (FTI) built into SQL Server (2005 when we first discussed the project). I’ve seen other projects similar to ours use it, but I haven’t really heard much about the pros/cons in a production environment other than “we’re using it”. I’ve read all about the improvements in SQL Server 2008 and it sounds good. We were on the fence with requiring SQL Server 2008 over 2005, but I think 2008 is the right way to go.

Researching recommendations and pitfalls of FTI in 2008 consistently points to the blog post where Jeff Atwood discussed a problem they ran into with FTI in SQL Server 2008. They got Microsoft involved, and it turned out to be a minor bug that was also fixable by changing the structure of the query. Filtering out all the re-posts about this incident, it looks like there aren’t a lot of articles beyond the “this is how to turn it on” tutorials. Either people aren’t using FTI in 2008, or they just aren’t writing a lot about it. In the end, it sounds like SQL Server 2008 should be fully functional and scalable for our needs.

The other option that gets a lot of praise is Lucene.Net. Like many people, I was unsure about Lucene.Net’s production-readiness while it was in Apache’s Incubator status. Some searching shows that it’s in use in many production environments much larger than my project will ever grow to. I also ran across some good explanations of how the Lucene.Net project is generally more stable than the native Java version due to the delay in porting over to C#. It makes sense to me. They are porting the last released version, not the daily build. You might not have every new feature the Java folks are enjoying, but you get the benefit of some testing before the port. I get the impression that the API itself is good stuff. It’s just up to you to screw it up.

We’re going to be using Oracle’s Outside-In Search Export API to get the text rendering of documents. That removes the pain of trying to find iFilters for document types we might want to search through on the SQL Server 2008 side, and writing my own text extraction app on the Lucene.Net side. From here it really boils down to the amount of work it is to get things running.

For now, we’re going to give SQL Server 2008 FTI a shot. I already know the basics of FTI in SQL Server, so it shouldn’t take much learning to get up and running. It is comforting to know that we have Lucene.Net ready as a replacement if we need it. Maybe we’ll include it as a configurable option in later versions.

<![CDATA[ The Garbage Collector and Unsafe Code Revisited]]> Mon, 18 May 2009 04:00:00 GMT It’s been almost a month since I first thought about using unsafe C# code. The application is still being developed, but the parts using the unsafe code are pretty much finished. Overall, I’m very happy with the results. Performance is more than acceptable and it’s spending less than 1% of the running time in the GC.

Here are a few of the guidelines I tried to follow and are probably common sense for using unsafe code in C#:

  • Limit the number of methods that use unsafe types. The fewer methods that need to be decorated with the unsafe keyword, the better.
  • Limit the lifetime of unsafe types. In my application, I only need the blocks of memory for a single parsing pass, then I release them.
  • Use the correct memory allocation functions. HeapAlloc, HeapFree, and GetProcessHeap are the MSDN-recommended functions to use.

I’m sure others could add to this list. The first two helped keep me focused on separating out my classes in keeping with the SOLID principles (insofar as I understand them). Of course, that’s probably a whole post in itself.
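Put together, the pattern looks something like this sketch. It is my illustration of the guidelines, not the application's actual code; the buffer size is arbitrary, and it is Windows-only since it P/Invokes the Win32 heap functions:

```csharp
using System;
using System.Runtime.InteropServices;

// Allocate a parse buffer off the GC heap using HeapAlloc/HeapFree/GetProcessHeap,
// keeping the unsafe region small and short-lived.
class UnsafeBuffer
{
    [DllImport("kernel32.dll")] static extern IntPtr GetProcessHeap();
    [DllImport("kernel32.dll")] static extern IntPtr HeapAlloc(IntPtr heap, uint flags, UIntPtr bytes);
    [DllImport("kernel32.dll")] static extern bool HeapFree(IntPtr heap, uint flags, IntPtr mem);

    public static unsafe void Run(int size)
    {
        IntPtr heap = GetProcessHeap();
        IntPtr mem = HeapAlloc(heap, 0, (UIntPtr)size);
        if (mem == IntPtr.Zero) throw new OutOfMemoryException();
        try
        {
            // Touch the buffer, do the parsing, get out of the unsafe block quickly.
            byte* p = (byte*)mem;
            p[0] = 0xFF;
            p[size - 1] = 0xFF;
        }
        finally
        {
            HeapFree(heap, 0, mem); // released immediately -- the GC never sees it
        }
    }

    static void Main() { Run(100 * 1024 * 1024); }
}
```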

<![CDATA[VS 2008 SP1 Team System Profiler + Hyper-V = BSOD]]> Sat, 16 May 2009 04:00:00 GMT We recently joined the Microsoft BizSpark program. With that, comes access to the Visual Studio 2008 Team System versions. I decided to upgrade from VS Professional to the TS Developer Edition. I fired up one of our projects and started playing with the new features available to me. The code analysis part was neat. I expected more details and information, but I might just need to read up on what it’s telling me. Then I decided to fire up the Performance Wizard to see what that told me. Just as the app was about to start, boom. My first BSOD in Windows Server 2008.

The crash details:


The computer has rebooted from a bugcheck. The bugcheck was: 0x0000004a (0x0000000077a55aea, 0x0000000000000001, 0x0000000000000000, 0xfffffa6008c7dca0)

After my box came back up I hit the Google to see if this was specific to my machine or a known issue. There weren’t a lot of hits, but I did get a few good ones.

  • This was first discovered as a known issue on the Intel Core i7 line of processors. A patch was released, and about half the people who posted something weren’t able to install it. My box is an AMD64 X2, so not a lot of help there.
  • This is the best report that I found. It matched what I saw exactly.
  • I then found a link to the Microsoft Connect site. It described the exact type of system I am using, with the same crash. In the end, it was reproduced in VS 2008 and fixed in VS 2010. The cause is listed as follows in the report:

I wanted to inform you that we have been able to reproduce the crash you have experienced. The crash occurs on the specified processor when Hyper-V is enabled on the BIOS and Hyper-V role is added to Windows 2008 Server.

We are working towards a solution. I will be in contact with you once a solution is available. Thank you, once again, for taking the time and bringing the issue to our attention.
Daryush Laqab
Program Manager
VSTS Profiler, Code Coverage, and Test Impact Analysis
Posted by Microsoft on 3/19/2009 at 9:03 AM


There isn’t a post about the hotfix being released yet. I’ll take it as good news that there should be one on the way. I just hope we get it before VS 2010 ships.

<![CDATA[What’s the best way to REST in .NET?]]> Fri, 15 May 2009 04:00:00 GMT The more I hear about RESTful implementations, the more I believe it’s the most flexible way to expose the web interface for our project. It also allows for a built-in API of sorts to access our data in case our customers want to act on it in some way. Now the next step. How exactly do I create a RESTful web service in .NET? WCF appears to be Microsoft’s recommended method, but ASP.NET MVC looks like it could also do the job. And then there’s the WCF REST Starter Kit.

I was able to get a test service running using pure WCF and some samples I found on the web. Since our access will always be read-only, things were pretty simple. What I haven’t figured out yet is security. I want to be able to authenticate the user, then based on the username, either allow or deny access to the results. It’s also not immediately apparent how I can return a stream to mimic a file download. I ordered a copy of RESTful .NET which should help to fill in some of the parts I’m missing.

I’ve gone through the ASP.NET MVC 1.0 book and learned a lot. It really looks like a simple way to not only implement a REST service, but also allow for human-readable pages if I want. The book also provided examples for authentication and returning streams/byte arrays.

I’m tempted to prototype it out in ASP.NET MVC to get a feel for the differences compared to my poking and prodding of REST in WCF.

<![CDATA[A Brave New World]]> Sat, 09 May 2009 04:00:00 GMT I downloaded and installed ASP.NET MVC 1.0 today. I’ve been following the press it’s been getting as well as it’s Ruby on Rails cousin. I’ve used the MVC pattern in only one application I’ve ever written. I can think of 3 others that I should have. I get the concept, just moving over to web development is new for me. I’ve only done a few ASP.NET WebForms apps in the past, most of which would have been much easier had the ASP.NET MVC been around.

Part of my current project will involve creating an interface to the objects in our database. The data is hierarchical. We have users, which have “folders” (for lack of a better term), which contain documents. It sounds like a great place to use a RESTful API. The data will be read-only, which should make things even easier. The data also doesn’t get modified, just added to, so it’s also a good candidate for caching (both server-side and client-side). I still need to figure out a few things:

  • How do I implement security?
    • We will want to control who can access sections of the data. User A shouldn’t be able to see User B’s data by default, but it should be configurable.
    • While Windows Integrated Authentication is fine for typical usage, we will need to also support Basic Authentication (yes, over SSL).
  • The current ASP.NET MVC templates use the /controller/action/id format. Can you do more complicated things such as /controller/id/controller/action/id?
  • Instead of returning HTML from every View, can I load up data from the database and cause a file download on the client side? I would think so.
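On the routing question, custom URL patterns can be registered alongside the template's default route. A sketch of what a nested route might look like (the segment, controller, and action names here are hypothetical):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // A nested pattern like /users/5/folders/List/3 -- names are illustrative.
        routes.MapRoute(
            "NestedFolders",
            "users/{userId}/folders/{action}/{folderId}",
            new { controller = "Folders", action = "Index" });

        // The default route from the MVC 1.0 project template.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }
}
```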

I ordered Professional ASP.NET MVC 1.0 on Amazon today. That should help answer many of the questions I have. I’ve been starting to read the tutorials on the web, but many are based on beta and release candidate versions (which apparently have changed a bit over time). They’re better than nothing, right? That’s what I get for being late to the party.

<![CDATA[Garbage Collection, the Large Object Heap, and my Results]]> Thu, 23 Apr 2009 04:00:00 GMT I wrote a test app simulating the creation of large byte arrays I discussed in my last post. It created 6 randomly sized arrays ranging from 20K to 250MB 100,000 times in a loop. I opened up Performance Monitor and watched the Gen-2, % in GC and Large Object Heap size for the app. To my surprise, there was no noticeable delays or pauses during the run. The unexpected result was that the code was in the GC about 50% of the run time. That means half the CPU cycled used were just for cleaning up and moving memory around! That seems like a big waste of cycles.

From there, I decided to re-write the code using the unsafe keyword. I allocated the memory using Marshal.AllocHGlobal with the same random sizing code. I mapped the space to an UnmanagedMemoryStream and wrote out some bytes to it at random points to make sure the OS was really giving me the memory. The CPU utilization was much better. For this test, I simply watched CPU usage in Performance Monitor and used Task Manager to watch the rise and fall of available system memory.
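A sketch of the unmanaged version (again my reconstruction; the buffer size and write count are illustrative):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class UnmanagedTest
{
    public static unsafe void Run(int size)
    {
        // Allocate the block outside the GC heap.
        IntPtr mem = Marshal.AllocHGlobal(size);
        try
        {
            using (var stream = new UnmanagedMemoryStream((byte*)mem, size, size, FileAccess.ReadWrite))
            {
                var rng = new Random();
                for (int i = 0; i < 1000; i++)
                {
                    // Write at random points so the OS actually commits the pages.
                    stream.Position = rng.Next(size);
                    stream.WriteByte(0xFF);
                }
            }
        }
        finally
        {
            Marshal.FreeHGlobal(mem); // freed immediately, no Gen-2 collection involved
        }
    }

    static void Main() { Run(100 * 1024 * 1024); }
}
```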

I was unsure about using the unsafe option in C# at first. Most people just like to talk about how dangerous it is, and say that if you need to use it then you’re probably doing something wrong. I feel that this scenario is a good fit for unsafe C#. I need to read through large blocks of memory, then dump them. Performance of the read is somewhat important, but I don’t want the cost of memory allocation/de-allocation to be a noticeable factor like in the first test. The vast majority of my application will be fine using the GC and will not be in the unsafe blocks, but this one part will probably benefit greatly from direct control of the memory.

Time to get coding!

<![CDATA[Memory Management Woes in .NET]]> Wed, 22 Apr 2009 04:00:00 GMT I had to learn a bit about the GC in .NET for my current project. I’m going to be processing chunks of data that are various sizes ranging from a few KB to a few hundred MB. I’m thinking that I want to keep it in memory because I need to parse it and that would be the fastest way. I could write it to disk and read it via a FileStream, but I’m hoping to avoid the delay of the disk write until I’ve parsed out the chunks I’m interested in saving.

Keeping them in memory concerns me because of the possible performance hit from creating a bunch of objects that are over the 85K threshold, which qualifies them for storage on the large object heap. We’re limiting the code to only run on x64, which gives us some breathing room with the amount of memory we can access, and that should help. I’m not sure how the pause that will occur when the GC does its full Gen-2 collection, of possibly a few gigabytes of memory, will affect us. How long will it take? Will it really matter once I know? A one-second pause might not be too bad, but a 5-second pause every 5 minutes might be a waste of time, especially if there is a better design. I’m sure this falls under premature optimization, and I know the best way to get a handle on this is to build a test app and see for myself. I’m still early in the design, so changing it isn’t a huge deal, and I think I have a possible workaround.

Currently this is being written in C#. I’ve been asking myself if C# is really the best tool for this job. If I just manage the memory myself in C++, then freeing memory happens right away, in a manner that I control. Another option I’ve looked into a little bit is unsafe mode in C#. That would give me the benefit of being able to manage some of the memory myself without running into the Gen-2 clean up. I just don’t know what the negatives of unsafe mode are, other than the full-trust requirement, which doesn’t really affect us.

I guess I really just need to test this out.

<![CDATA[Entering the society of developers]]> Tue, 21 Apr 2009 04:00:00 GMT I guess it’s time to finally join the club that everyone else seems to be apart of. I’ve stayed away from blogging mainly because I didn’t feel I had anything of importance to say. I’ve been primarily a lurker of forums and other’s blogs, participating when it’s met my need or I’ve run across something that I truly felt I had some knowledge in. The main inspiration of starting my own blog is a video of Scott Hanselman talking about how developers can utilize social networking. I really liked the reasons on why a developer should blog and the ways to get started. I highly recommend watching it.

One of the other tools he mentioned in the video is StackOverflow. This site has become my new Slashdot. I’ve contemplated making it my home page, but I really like the dashboard style view of my iGoogle page. So instead, I just check it a bunch of times during the day. The speed at which the development community has embraced this idea is really amazing. The number of new questions and answers makes it an invaluable tool for any developer. My reputation isn’t very high, but that’s mainly because I don’t like repeating the correct answer after 3 people already have. But that’s just me.

Another new app for me is Twitter. I’m still getting used to it. If you want to follow me, you can find me as JoeDoyle23. I’m following a few people just to get started. What really draws me to Twitter are the applications that people are making for it. Here in Chicago, a guy created an app that pulls Metra (train) delays off their site and tweets them out. Just set up your account to text your phone, and bam! You now know of any train delays when they happen! I don’t have a link for that, but I bet the Google can help us out if I ever decide to use it. (I don’t take the train a lot, yet.)