Now that blog posts, metadata, and comments are combined in one file, it's less work to start a new post. I figured I'd reduce friction even further by using a Visual Studio Item Template. It's best to remove every barrier to writing new posts. ;-)
I found a nice article on Visual Studio templates by Eric Sowell which answered the basic questions and another about multiple file templates that happened to get me past a stumbling block when editing a template.
Mostly everything is the same as in those posts, except the directories are different for the newer version of Visual Studio. There are a couple of caveats when making a template for a file that doesn't end in .cs. To make it easier to see what I'm doing, I also made a companion video.
Here's the template I'm using for new posts:
---
title: $safeitemname$
created: $time$
published: $time$
tags:
---
---
# comments begin here
---
To make a template, you simply create a file in Visual Studio, put whatever text you want in it, then go to File | Export Template and follow the prompts.
The words surrounded by dollar signs are template parameters. There's a nice list of template parameters on msdn.
For whatever reason, the template parameters are not always replaced automatically. For .cs files, it worked with no further effort. For my .markdown template, I had to extract the exported template and edit the vstemplate xml file. There is an attribute of the ProjectItem node called ReplaceParameters. Set its value to true.
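For reference, the relevant bit of the .vstemplate ends up looking something like this (the file name here is illustrative; yours will match whatever you exported):

```xml
<TemplateContent>
  <ProjectItem ReplaceParameters="true" TargetFileName="$fileinputname$.markdown">post.markdown</ProjectItem>
</TemplateContent>
```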
Now, here's the stumbling block I mentioned before. You must zip up the files directly. Do not, like I did, zip the folder. That creates an extra level of nesting. There won't be any warning in Visual Studio; the template just won't show up.
After editing the template, I found it necessary to manually move the file from Users\{user}\Documents\Visual Studio 2015\My Exported Templates to Users\{user}\Documents\Visual Studio 2015\Templates\ItemTemplates.
Finally, I needed to restart Visual Studio. Sometimes the common suggestion of issuing devenv /installvstemplates from a Visual Studio command prompt seemed to work. Other times it didn't. Restarting Visual Studio worked consistently. Once it's done, you don't have to think about it again, so I didn't spend more time trying to nail down the fastest way to refresh the template cache.
Hopefully those tips will help you get your own templates up and running smoothly.
If anything is unclear, try watching the companion video on YouTube. Please contact me with any comments, omissions, or clarifications. Making the video was fun. I'll mix in more video content with future posts.
For my contact form, I needed some spam protection from the evil bots roaming the interwebs. I considered recaptcha, but I want to minimize the amount of javascript on this site, plus it's fun to see what I can do on my own.
For inspiration, I remembered a post I read long ago from Sam Saffron on blog spam. In it, he talks about the habits of bots and how they love to fill out form fields. The trick then is to give the bots something to fill out that humans won't. Instead of making humans prove they are not bots, make bots prove they are human.
I recently used formspree, which has a _gotcha field for this purpose. I added a similar field to my contact form and hid it using css, so humans won't be bothered by it. The results were great. The first contact that came in was marked as a bot!
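The idea in markup form, as a rough sketch (the class name and styling are mine; only the _gotcha field name comes from formspree's convention):

```html
<!-- visible form fields omitted -->
<div class="for-robots">
  <label for="_gotcha">This section is for robots. Humans can ignore.</label>
  <input type="text" id="_gotcha" name="_gotcha" tabindex="-1" autocomplete="off">
</div>

<style>
  /* hidden from sighted humans; naive bots still fill it in */
  .for-robots { display: none; }
</style>
```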
After that initial success, I began experimenting a bit more. I currently have three different honeypot fields to see which ones the bots find irresistible. I think the first honeypot will prove the most effective.
There are a few things to note.
At first I had the required attribute on all the non-honeypot fields. I think this is a dead giveaway to the bots about what they can ignore. Also, I realized I didn't really need to require anything. If you want to send me a message without an email address to reply to, so be it.
Second, I'm hiding the honeypot fields via css, which means screen readers will still present them to users. For that case, I made sure the labels on the fields are very clear, e.g. "This section is for robots. Humans can ignore." In the end, it won't matter much since I'm not actually blocking bot posts. I'm simply marking them in the subject so I can quickly sort them in my mail client. You only need to read a few words to identify a marketing email.
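Server side, the check amounts to a couple of lines. A sketch with illustrative field and subject names (only _gotcha is real; the other two stand in for my experimental fields):

```csharp
// If any honeypot field came back non-empty, tag the subject.
// Nothing is blocked; mail-client rules do the sorting.
var honeypots = new[] { "_gotcha", "website", "fax" };
var isBot = honeypots.Any(name => !string.IsNullOrWhiteSpace(Request.Form[name]));
var subject = (isBot ? "[bot] " : "") + "Contact form message";
```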
For now, I'm happy with this approach. I can always make it more robust later if need be. I'm curious as to how it would hold up as protection against comment spam.
TLS Everywhere is gaining traction. I'm not convinced, but I'm not passionate enough about the issue to dig into it. I was intrigued by the Let's Encrypt project and its mission to provide free, pervasive SSL certs, and I wanted to see if I could get it working IRL.
Troy Hunt has a good post on setting up Let's Encrypt on an Azure WebApp. [Side note, Troy has a lot of great articles on his site].
The most painful part of this process was setting up the principal in Active Directory. Dealing with AD always makes my heart sink. There are so many terms and concepts that I have zero interest in learning.
The tricky part was that the Tenant name (whatever.onmicrosoft.com) is no longer in the drop down shown in Troy's post. For me, it was in the url of the old portal. I have no idea how to find it in the new portal, so I'm not sure what to do when the old portal [eventually?] goes away.
The next stumbling block I ran into was that the extension was unable to access the files in the .well-known folder. Pro tip: if google makes it seem like no one else has your problem, you haven't found a new problem; you've done something silly.
In my case, I had introduced a "lower case all the urls" IIS rewrite rule long ago. I won't bore you with how I figured this out, but it also turned out that it was incorrectly translating the url ... in some cases ... such that only the first letter of the filename came through.
I added a condition to the rewrite rule to ignore files in the .well-known folder.
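From memory, the rule ended up shaped roughly like this (rule name and patterns are illustrative, not copied from my web.config):

```xml
<rule name="LowercaseUrls" stopProcessing="true">
  <match url="[A-Z]" ignoreCase="false" />
  <conditions>
    <!-- leave the Let's Encrypt challenge files alone -->
    <add input="{URL}" pattern="^/\.well-known/" negate="true" />
  </conditions>
  <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
</rule>
```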
All is well... except browsers take that 301 Moved Permanently rather seriously. After clearing the browser cache, I was able to see that the new rewrite rule worked as expected.
Finally, the Let's Encrypt extension was able to complete its setup and I was able to surf to my site under https!
Yay! All done?
Not quite. The files from my recently added azure cdn weren't coming through. It turns out that using TLS with custom domains is not yet supported. :-(
Oh well. I could buy a cert for this, or just wait until Q4 and see what happens. Since my domain is cookieless, has no login, and is public information, TLS can wait. Besides, I'll get to see if the Let's Encrypt extension updates the cert as advertised.
Using a CDN is a well known way to increase the performance of your web site. At $0.087 per GB, I figured why not give the Azure CDN a try.
My first concern was I didn't want to have to push assets, especially css, to the CDN as a separate step in my release process. I found a decent guide on setting up origin pull for the Azure CDN. That post shows the old portal. The process in the new portal is a bit nicer, but the concepts are the same.
I did run into a couple of issues which may help you out.
I was confused at first, but the simple choice was setting the Endpoint Type as web app. Once I chose my site, most of the other options were filled in.
For some reason, the Standard Verizon option didn't work well for me. It has a 90 minute propagation time and I couldn't quite seem to get things working consistently. Since the Standard Akamai option has a 1 minute propagation time, I switched to that. The CDN worked quite smoothly from there.
To switch, I had to delete the CDN in the azure portal and set it back up again. Fortunately, the names I used were released and available again the second time.
The other thing I ran into was Font Awesome webfonts were not loading correctly. It turned out to be a CORS issue. After wandering around for a bit trying to figure out how to change the CORS settings for the CDN itself, I found out I could add the settings to the web.config of my site.
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*"/>
    </customHeaders>
  </httpProtocol>
</system.webServer>
I also managed to set up a custom domain, so instead of kijanawoodard.azureedge.net, my static files are served from cdn.kijanawoodard.com.
Everything appears to be serving correctly now. It's amazing how easy doing things like this has become.
At the end of my previous post, I considered merging the comments into post file.
I couldn't help myself. ;-)
With a little C# interactive, I now have half the files. Here's the code I typed into the interactive window:
var files = Directory.EnumerateFiles(@"content/posts/", "*.markdown");
foreach (var file in files)
{
    var post = File.ReadAllText(file).Trim();
    var comments = File.ReadAllText(file.Replace(".markdown", ".comments.yaml")).Trim();
    var output = $"{post}{Environment.NewLine}{Environment.NewLine}---{Environment.NewLine}# comments begin here{Environment.NewLine}{Environment.NewLine}{comments}";
    File.WriteAllText(file, output);
}
The instructions for comments are changed to reflect that there's only one file now.
There's a nice wiki on c# interactive. One thing to note: ReSharper maps the History Navigation keys [Alt+UpArrow] to something else. I decided to map it back, but scope the assignment to the interactive window.
For my blog engine, I had a static C# class that contained a list of metadata about each post. That means when adding a new post, I had to add an element to the Posts list and create the markdown file.
Instead, I've decided to keep the post metadata in the file itself. For the format, I followed the Jekyll Front Matter convention.
I was facing the tedious task of typing all the metadata in the posts when I remembered that VS 2015 comes with C# Interactive.
It was stunningly simple. I put the list of posts into the interactive window. Then iterated the list, formatted the front matter and prepended it to the post files. No assemblies, no compiling, no debugging. Done.
Full disclosure: I didn't write it perfectly the first time, but I just discarded the changes in git and ran the interactive script until I got it right.
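The script was along these lines (the property names on my Post class are from memory, so treat this as a sketch):

```csharp
// C# Interactive: prepend Jekyll-style front matter to each post file
foreach (var post in Posts.List)
{
    var path = $"content/posts/{post.FileName}";
    var body = File.ReadAllText(path);
    var frontMatter =
        "---\n" +
        $"title: {post.Title}\n" +
        $"created: {post.PublishedAtCst:yyyy-MM-dd}\n" +
        $"published: {post.PublishedAtCst:yyyy-MM-dd}\n" +
        "tags:\n" +
        "---\n\n";
    File.WriteAllText(path, frontMatter + body);
}
```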
I can already see an improvement writing this post. Just create the post.markdown file and go.
Hmmmm. Now I'm wondering if I should add the yaml comments at the bottom of the post file. I had thought I would use a library to parse the metadata, but none were quite what I wanted and it turned out to be not that much code. Since I'm doing my own parsing up front, I can snip out the comments section before passing the post text to the markdown processor. Hmmmm.
In my previous post, I wrote about removing Disqus from this blog. One tricky part was dealing with the comment export data. While you certainly can get your comments out of Disqus, they don't come in a great format. You get a lovely chunk of xml where the posts (called threads) are disconnected from the comments (which are called posts).
I needed something that would let me parse the data without too much effort. This code only needed to be run (successfully) one time. I considered using the C# xml classes or doing something dynamic, but I wasn't thrilled at the prospect.
It turns out, FSharp.Data has an xml type provider. It also turns out that F# type providers are amazing.
Check out this bit of code:
type Xml = XmlProvider<"kijanawoodard-2015-03-19T23_28_52.887832-all.xml">
let data = Xml.GetSample()
Just like that, with no class definitions and no tedious xml and string parsing, I have a fully typed model that can be used to access the data. For example:
data.Posts.First().Author.Name
After getting things arranged into yaml format, I wrote out the comment files and was done. I did all the processing and exploration in F# Interactive, so I never compiled an assembly or started the debugger.
F# could grow on me.
I've decided to remove disqus comments from my blog.
I had several issues with disqus:
What really forced the decision was the nature of comments themselves. Are comments a valuable part of the blog? If so, I should own them. If not, I should turn them off.
On taking ownership, disqus has an export feature. It "works", but it's not ideal. I'll talk about how I dealt with that in a future post. Hint: F#.
At the moment, I've migrated comments to yaml and they are stored in the github repo with posts. I'm looking at options for a friendlier comment system, but in the meantime, send me a pull request. ;-)
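For the curious, a migrated comment entry looks roughly like this (the field names are my own shape, not any spec):

```yaml
- author: Jane Reader
  date: 2015-03-20T14:05:00Z
  message: >
    Great post. Have you considered doing the same for drafts?
```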
It's long bothered me that I had a separate endpoint for Atom. After adding the archive endpoint, the absurdity really showed, since it's the same data, just in a different format.
Content negotiation to the rescue.
Web API has a decent conneg system built in. Fortunately, asp.net has enough extension points that we can craft a workable solution.
In order to get the 404 page working, I used a custom action invoker. I figured that could be used as the basis for content negotiation. I leaned on several sources to pull together the implementation.
The code needs a bit of work, but I'm happy with the effects. You can make a request to an endpoint with an appropriate accept header or by adding an extension to the url.
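The heart of the approach is intercepting result creation in the invoker. This is a much-simplified sketch of the idea, not my actual implementation:

```csharp
public class ConnegActionInvoker : ControllerActionInvoker
{
    protected override ActionResult CreateActionResult(
        ControllerContext controllerContext,
        ActionDescriptor actionDescriptor,
        object actionReturnValue)
    {
        var accept = controllerContext.HttpContext.Request.AcceptTypes ?? new string[0];

        // pick a representation based on the accept header;
        // extension-based selection would map ".json" etc. the same way
        if (accept.Contains("application/json"))
            return new JsonResult
            {
                Data = actionReturnValue,
                JsonRequestBehavior = JsonRequestBehavior.AllowGet
            };

        return base.CreateActionResult(controllerContext, actionDescriptor, actionReturnValue);
    }
}
```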
The result is that the atom feed uses the same endpoint as the archive, and you can see this post in json, xml, html, and partial html.
Hat tip to Joey Guerra for the extensions and phtml. He's pushed these ideas for years. I haven't always been receptive to them, but I felt there was enough value to implement them here.
One thing I learned doing this implementation is that content negotiation should affect which action gets called rather than merely reacting to the action result.
For instance, I have code to do csv negotiation but it isn't being used because the current structure for posts isn't "flat" enough for csv. I considered some tricks using the mediator or cooking up some reflection magic to automatically flatten classes, but that seemed time consuming. Besides, what I really wanted was to get a clear opportunity in my controller to shape the output for a given mime type.
I am halfway there as I enable atom and xml to use a razor view to shape the output. For csv though I'd rather have something like:
public class MyController : Controller
{
    ...
    public object Csv(PostRequest request)
    {
        var model = _mediator.Send<PostRequest, PostGetViewModel>(request);
        return new SomeCsvShape
        {
            Title = model.Post.Title,
            ...
        };
    }
}
Similarly rather than have a bunch of interfaces to support something like HAL, an action could decorate the regular model with links, etc.
In the end, I'd like content negotiation to have
Interestingly, I had nearly run out of reasons to have a controller class, other than because c# code has to be in a class. Content negotiation gives new perspective to controller cohesion.
A side note on scope creep. I spent a fair amount of time trying to work out pdf content negotiation. After hunting around, I found Rotativa, which looked promising, but I ran into a bug. It could be my issue, but while I was thinking about how to code my way out of this problem, it finally dawned on me: ctrl-p in chrome, plus a little css, and pdf support is done. :-]
Zach Burke mentioned to me that he wanted to add a 404 page to his new blog. Sounds like a good idea, let's do it.
I assumed I was going to configure httpErrors, but I figured I'd google a bit anyway. Turns out, there is quite a debate about 404 pages with asp.net mvc. I decided I didn't really care about the nuances and I wanted to get the feature done. Good enough. Commit.
Next I decided that the 404 page should display the post archive so that the user can choose a post that exists. Hmmm. Ok, a little scope creep: how about we have an independent archive page. Fine. Using the mediator, it was straightforward to implement. Commit.
So far so good. But the archive is rendered with the full layout on the 404 page. We don't want duplicate headers and sidebars. The easy answer is to write some code like this in the controller.
if (ControllerContext.IsChildAction)
    return PartialView(model);
else
    return View(model);
Yuck. I don't like that. How can we remove this logic from our controller? I need something like FubuMVC Behaviors, but we don't have those in asp.net mvc.
After quite a bit of stumbling around, an IActionInvoker derived from the built-in ControllerActionInvoker seemed to fit the bill pretty well. I'm not happy with the implementation of the class at all. It is a hack and it shows, but we're here to ship features, not build ivory towers. Commit.
I used Vessel to wire the IActionInvoker to the controller. In some sense, using Property Injection this way violates our sensibilities. My view is practical: I don't really want to do it this way, but this is what asp.net mvc gives me. I don't want controllers to have to muck about with setting their own ActionInvoker, and I really don't want a controller base class. Yet, unless I'm ready to switch to FubuMVC or OpenRasta, I'm not going to get a nice pipeline to work with. Using Property Injection seems like a reasonable compromise.
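The wiring amounts to setting the ActionInvoker property when the controller is constructed. A sketch (the invoker class name here is illustrative):

```csharp
// composition root / controller factory
var controller = new PostGetController(mediator)
{
    ActionInvoker = new PartialViewActionInvoker()
};
```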
I've also decided to continue to leave the "pain" of manual controller setup in place. Connecting the action invoker was dead simple since there were no container incantations to consider. I also find that it makes me think about things like request pipelines, behavior chains, and the true responsibilities of a controller.
404 page done.
Side notes.
It turns out that setting the layout in the view overrides the partial view behavior, so I had to remove the layout declaration. That led to adding a ViewStart page. Scope creep.
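The ViewStart page itself is the standard Razor one-liner (adjust the path to your own layout):

```
@* Views/_ViewStart.cshtml *@
@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}
```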
Conceptually, I like the views hierarchy to be composed by "something outside of themselves". If the layout is hard coded in the view, it's hard to reuse in another layout. I think I would like that to be more specific than a ViewStart file, but I don't have bearings on an alternate solution.
It also turns out that Application_EndRequest wasn't getting called when running on Azure Websites, aka production. Found an SO post that solved the problem. Commit. This scenario highlights the value of pushing to production often. The bug simply doesn't happen in dev/test. It only happens in production. Because the prod deploy was so small, it was easy to grok the issue and fix it.
The more I use git, the more I like tiny commits that address a single issue. I don't bother to create tickets for personal projects, but in my head, I try and reason about the simplest way to solve a problem and then only commit code for that problem. Other issues I see along the way, I will either code them, but commit them individually, or make a note and come back to them later. Getting to done is vital.
After implementing Vessel, I was curious what it would be like to add a module system. To do this, I added a RegisterModules method that scans for classes implementing IModule and executes them.
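The scan itself is a short reflection loop. A sketch, assuming IModule exposes a single registration method (the method name is my invention):

```csharp
public interface IModule
{
    void Register(Vessel vessel);
}

public void RegisterModules()
{
    var modules = GetType().Assembly.GetTypes()
        .Where(t => typeof(IModule).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
        .Select(Activator.CreateInstance)
        .Cast<IModule>();

    foreach (var module in modules)
        module.Register(this); // each feature registers its own handlers
}
```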
Doing this allows us to define our mediator functionality in context. I like this because it allows us to add new features without having to modify a central registry.
The down side is we lose some Application-level legibility. We are exchanging that for Feature-level legibility. Vessel allows us to arrange our projects as we see fit.
The central configuration is now fairly minimal with a small amount of duplication to satisfy ISP.
I think Vessel is now complete, perhaps a bit bloated by features. Maybe some day I'll add a way to specify assemblies to scan or a plugin folder if the need arises.
I considered a special hook to register controllers given that we found that Controllers always take exactly one dependency. However, I'd like to live with that "pain" for the moment and see if it can inspire better solutions.
Sorry to frighten you on this Hallow's eve, but this post is not about yet another mediator. :-]
After finishing Liaison, I found myself coding in anger. What else could I cull from my stack? The obvious answer:
Kill the IoC Container.
Inspired by Ayende's IoC container in 15 lines of code, I wrote Vessel as a stripped down IoC on which to hang the mediator and any other singletons. I bloated it up with some extra features, like registering a constructor function, but it's still small enough to read without scrolling. Vessel doesn't even have its own GitHub repo yet.
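At that size, a container is essentially a dictionary of factories. A sketch of the shape, not Vessel's actual source:

```csharp
public class Vessel
{
    private readonly Dictionary<Type, Func<object>> _factories =
        new Dictionary<Type, Func<object>>();

    // register a pre-built singleton
    public void Register<T>(T instance) => _factories[typeof(T)] = () => instance;

    // register a constructor function
    public void Register<T>(Func<T> factory) => _factories[typeof(T)] = () => factory();

    public T Resolve<T>() => (T)_factories[typeof(T)]();
}
```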
For so few lines, I was able to remove both Autofac packages from my project. Right away, I'll stipulate that my container usage on this blog is beyond trivial. If I find myself in trouble, I can install-package my way back home again.
I think I am seeking bedrock. I want to know the pain that causes me to use a framework or library. I am way up the abstraction hierarchy here, but digging.
One interesting discovery I made through this exercise: a controller is a class that has exactly one dependency and that dependency is IMediator. I think this fact can be exploited in the future, but for the moment, I'm happy with this small victory.
Well nimbus, you had a great run, but now it's over. Make room for Liaison.
While I was building nimbus, something was nagging me. It was great and flexible and web scale and all, but...
Nimbus is utter bloatware!
Mike Pennington summed it up in the comments:
I'm somewhat conflicted about this blog post. I like what you're doing, and the code is very clean and concise. And ISP is followed such that, as you mention, units of work are separate and can be tested as actual units. Although, on the other hand, it feels a little bit like magic.
Even with so much magic removed, so much magic remained. There are many subtle "features". You can mix void handlers with result handlers. You can use base types for message handlers to generalize them. You can choose scalar vs class results. There's a lot squeezed in there.
You can also run into trouble. If you use handlers with a mix of result types, or Send using a type not used in Subscribe, you get a RuntimeBinderException. I added comments to give you a heads up, but I found myself lost a couple of times.
I considered making a "strict mode" for nimbus that would throw if you didn't use the same types for handlers and subscribe. You would have to decorate any non-conforming handlers.
Then I started to think: what work is nimbus really doing?
Nimbus is passing the message and result to each handler in the order specified.
The goal is to isolate the units from each other and separate the organization of the units from the units themselves. Ok, what if we just code that?
Now the mediator configuration for Posts on this blog looks like this:
mediator.Subscribe<PostRequest, PostGetViewModel>(message =>
{
    var result = new PostGetViewModel();
    result = new FilteredPostVault().Handle(message, result);
    result = new MarkdownContentStorage(root).Handle(message, result);
    return result;
});
That code could be cut down to two lines, but I found this more readable. It should be clear now how the message is transformed into a result.
If we don't like inline functions, we can do this:
public static void RegisterContainer()
{
    ...
    mediator.Subscribe<PostRequest, PostGetViewModel>(Execute);
    ...
}
...
private static PostGetViewModel Execute(PostRequest message)
{
    var result = new PostGetViewModel();
    result = new FilteredPostVault().Handle(message, result);
    result = new MarkdownContentStorage(root).Handle(message, result);
    return result;
}
If we don't want a bunch of functions, we can use classes:
mediator.Subscribe<PostRequest, PostGetViewModel>(message =>
    new HandlePostGetViewModel().Handle(message));
...
public class HandlePostGetViewModel
{
    public PostGetViewModel Handle(PostRequest message)
    {
        var result = new PostGetViewModel();
        result = new FilteredPostVault().Handle(message, result);
        result = new MarkdownContentStorage(root).Handle(message, result);
        return result;
    }
}
Wait. Wait. Wait! Whoa! Whoa! Whoa! Whoa. Whoa. Whoa. Whoa.
Whoa.
Isn't that where we started??!?
Yes and no.
We've come full circle, but along the way, we've dropped a lot of dead weight and clarified our approach to code considerably.
How do we keep from going off the rails and making spaghetti in our Subscriptions? Discipline: keep infrastructure concerns, things like store.OpenSession(), out of the subscription bodies. I think I prefer either the inline function or methods within the configuration class over classes. I'll try it out in a couple projects and see. As always, copy/paste into your own project and salt to taste.
The Liaison source code is now 60 lines. About half of that is cruft due to the fact that c# has void Actions as opposed to having Func<Unit>. I thought about forcing a result to reduce the LoC, but I'd rather have a nicer api.
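Stripped down, the mediator itself is little more than a dictionary keyed on message type. A sketch of the shape, not Liaison's actual source (the real code also deals with the void-Action case):

```csharp
public class Mediator : IMediator
{
    private readonly Dictionary<Type, Func<object, object>> _handlers =
        new Dictionary<Type, Func<object, object>>();

    public void Subscribe<TMessage, TResult>(Func<TMessage, TResult> handler)
    {
        _handlers[typeof(TMessage)] = m => handler((TMessage)m);
    }

    public TResult Send<TMessage, TResult>(TMessage message)
    {
        return (TResult)_handlers[typeof(TMessage)](message);
    }
}
```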
Another nice side effect of the simpler code is a 3x performance boost vs nimbus. I'm happy with 9M operations per second.
One thing I think I miss is the IHandle interface. Maybe I'm just being sentimental, but it does enforce rules for method names [Handle vs Execute vs ???]. Add the interfaces if it helps keep your codebase consistent.
On a minor note, I named nimbus with the project, solution, folders, etc. all lowercase. It turns out, I prefer being idiomatic for the language in play. Javascript methods should be doSomething() and c# methods should be DoSomething(). Liaison is cased properly.
;-]
Once I started questioning IoC containers, a variety of problems presented themselves.
Near the end of questioning IoC, I posited an escape hatch:
new Mediator(new DoThis(), new DoThat(), new DoTheOther());
Working to achieve this api, I came up with Nimbus. To see how it works, I incorporated nimbus into this blog.
Here is the controller that displays a blog post before:
public class PostGetController : Controller
{
private readonly IPostVault _vault;
private readonly IContentStorage _storage;
public PostGetController(IPostVault vault, IContentStorage storage)
{
_vault = vault;
_storage = storage;
}
public ActionResult Execute(string slug)
{
var posts = _vault.ActivePosts;
var post = posts.FirstOrDefault();
if (slug != null) post = _vault.AllPosts.FirstOrDefault(x => x.Slug.ToLower() == slug.ToLower());
if (post == null) return HttpNotFound();
var content = _storage.GetContent(post.FileName);
var previous = posts.OrderBy(x => x.PublishedAtCst).FirstOrDefault(x => x.PublishedAtCst > post.PublishedAtCst);
var next = posts.FirstOrDefault(x => x.PublishedAtCst < post.PublishedAtCst);
var model = new PostGetViewModel(post, content, previous, next, _vault.ActivePosts, _vault.FuturePosts);
return View(model);
}
}
Here is the same controller after:
public class PostGetController : Controller
{
private readonly IMediator _mediator;
public PostGetController(IMediator mediator)
{
_mediator = mediator;
}
public ActionResult Execute(PostRequest request)
{
var model = _mediator.Send<PostRequest, PostGetViewModel>(request);
if (model.Post == null) return HttpNotFound();
return View(model);
}
}
I'll stipulate that the net effect is that I've simply shifted the code around. My assertion is that I have shifted it to a better place. The implementation gets to decide precisely how to fulfill the message contract and it can optimize aggressively.
Notice that the controller is no longer dependent upon IPostVault and IContentStorage. That isn't superficial. Those interfaces are gone.
I remember struggling with those interfaces. What is a Post Vault anyway? Post Repository? Post Service? Post....Locker? Blech!!! That pain was a sign that it was a superfluous interface created by my desire to "inject an interface" into the controller.
I also kept trying to balance how to leak data from the classes. Should I present the posts as one big list and let the controller figure active vs future? Should I have two lists? Three? Should they be IEnumerable or IReadOnlyCollections?
How about we make them private. Ahhhhh. Debate over. Much better. :-]
Here's the mediator configuration:
var mediator = new Mediator();
mediator.Subscribe<PostRequest, PostGetViewModel>(() =>
new ISubscribeFor<PostRequest>[] {
new FilteredPostVault(),
new MarkdownContentStorage(root) });
This configuration becomes a Rosetta stone for understanding how the application is wired together. If we want to make a change to a particular request, we know exactly what pieces are involved. Even better, if we decide to, say, change persistence technology, we have a project plan laid before us of what pieces need to be implemented with the new technology. It's already separated into pieces to dole out to the team.
I'm passing IMediator into the controller via autofac. See, containers are ok. I also got to drop some esoteric autofac configuration for which I had links to documentation. My abstraction surface is being streamlined. Classes will depend on the messages, IMediator, or some other singleton or derivative such as IDocumentSession.
What does having zero or one dependency get us?
Easy testing for one. We might actually end up with some units to test.
We also get the flexibility we were looking for with the "endless interfaces" approach. Right now, it so happens that view model for displaying posts is created through the cooperation of two classes. If that should need to be one class or seven classes, we can make the change without modifying the controller dependency list. If we decide to keep posts in a database or convert the text to sphinx or razor, the controller doesn't notice: OCP, SRP, ISP, DIP. Can I just throw Liskov in?
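For example, a test for the controller above only has to arrange one stub (a sketch using Moq-style mocks; it assumes the IMediator shape shown in the controller and a parameterless PostGetViewModel for the stubbed return value):

```csharp
// Sketch: one dependency means one stub to arrange.
var mediator = new Mock<IMediator>();
mediator
    .Setup(m => m.Send<PostRequest, PostGetViewModel>(It.IsAny<PostRequest>()))
    .Returns(new PostGetViewModel());

var controller = new PostGetController(mediator.Object);
var result = controller.Execute(new PostRequest());

// No vaults, no storage, no esoteric container setup.
```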
The source file, at 135 LoC, is meant for copy/paste inclusion and modification. For instance, say you want to decorate each handler instance with logging/timing/etc. Salt to taste.
Coding nimbus was an interesting journey that led to other realizations. I noticed how tempting it is to add features.
What kept me in check was ISP. I thought SRP was a stronger check against sloppy design, but I kept finding ways to justify extra responsibilities. Yeah, that feature is close enough. We don't need an entirely different class for just this.
It turned out that ISP kept me on the straight and narrow. After ranting about violating ISP, I could hardly force clients to take dependencies they weren't going to use. That led me to write a bunch of code to cover the signature permutations. The resulting bloat drove me to cut features that weren't truly pertinent.
I really enjoyed this exercise and I'm much happier with the structure of my blog code.
But I think we can do better.
I wanted to get R# to do a little typing for me so that I can more easily add new blog posts.
A new post looks like this:
new Post
{
Title = "Creating a ReSharper Macro",
Slug = "creating-a-resharper-macro",
FileName = "creating-a-resharper-macro.markdown",
PublishedAtCst = DateTime.Parse("October 17, 2013"),
},
The slug and filename are independently adjustable for flexibility, but they usually start out as a derivative of whatever I'm going to name the post.
I created a R# Live Template that looks like this:
new Post
{
Title = "$title$",
Slug = "$slug$",
FileName = "$slug$.markdown",
PublishedAtCst = System.DateTime.Parse("$date$"),
},
This is great. When I activate the template, I get a chance to type for each $variable$. So I can type $title$, then $slug$, which gets used on two lines, and finally $date$.
For $date$, I hooked it to an R# Macro that formats today's date.
For $slug$, I want the title to be lowercased and the spaces replaced with hyphens. Unfortunately, there wasn't a built-in macro that did this.
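The transformation itself is trivial; here's a sketch of the logic a custom macro needs to perform (the macro plumbing is the hard part):

```csharp
using System;

public static class SlugMacro
{
    // Lowercase the title and replace spaces with hyphens.
    public static string Slugify(string title)
    {
        return title.ToLowerInvariant().Replace(' ', '-');
    }

    public static void Main()
    {
        Console.WriteLine(Slugify("Creating a ReSharper Macro"));
        // creating-a-resharper-macro
    }
}
```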
I found a post about extending macros that shows how to create your own macro. The post is a bit confusing and out of date for R# 7+.
Fortunately, I found a couple examples which gave me some much needed hints to write and install the macro.
In addition to what's outlined in the blog post, here are some key steps:
- Reference only JetBrains.ReSharper.Feature.Services.dll from the R# program files bin directory. R# itself will then be able to pull in the rest of the dependencies from the bin folder as needed.
It would be nice if everything needed to write a plugin/macro was available via nuget. It would also be nice if there was an "import plugin" feature in the R# options so you didn't have to find the right directory.
If JetBrains wanted to get really crazy, they could create a Publish Plugin feature that allowed quick and easy social code sharing.
I put the source code for the macro on GitHub.
Looking back on my posts about violating ISP and duck typing, a question emerges: why not declare our dependencies on the methods that need them, rather than at the object level?
For example:
void LoginUser(IAuthentication auth, string userid)
This code would be very explicit and allow us to be more granular with our dependency chain.
However, it would also be a pain in the neck. Specifically, every caller would have to take a dependency on the target's dependencies just to pass them through.
Constructor Injection comes along to save the day. Our dependency arrives with its dependencies already baked in.
In functional languages, dependencies can be "baked in" via partial application.
Constructor injection is partial application for OO.
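A sketch of the analogy (the IAuthentication shape here is a hypothetical stand-in): the constructor closes over the dependency the same way partial application bakes an argument into a function.

```csharp
using System;

// Hypothetical stand-in interface for illustration.
public interface IAuthentication
{
    bool Authenticate(string userId);
}

// OO: the constructor closes over the dependency...
public class LoginHandler
{
    private readonly IAuthentication _auth;
    public LoginHandler(IAuthentication auth) { _auth = auth; }
    public bool LoginUser(string userId) { return _auth.Authenticate(userId); }
}

public static class Functional
{
    // ...which is exactly what partial application does with a function:
    // supply the dependency now, get back a function needing only the rest.
    public static Func<string, bool> MakeLoginUser(IAuthentication auth)
    {
        return userId => auth.Authenticate(userId);
    }
}
```

Either way, callers of LoginUser never see or pass the IAuthentication.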
On my previous post, Joey Guerra asked a question in the comments:
Can we go further? Can't we just say that class Oauth2Authentication is the interface? I mean, why do I have to care if it implements IAuthenticate? Do I care if it implements that "interface" or do I care if it has or doesn't a method called Authenticate?
In other words, isn't the interface declared in the wrong place?
Interfaces, as we typically use them in C#, mean "these are methods and properties supported by this class".
interface IAuthentication { bool Authenticate(); }
class Authentication : IAuthentication
That's fine and dandy, but it's only half of the equation. The problem is that we don't usually know the use cases when we write the interface. We can't necessarily coordinate the interface definition across other potential implementations either.
When we get to usage, say we find that there are two classes we can use:
class SamlAuthentication : IAuthentication
{ bool Authenticate() {...} }
class ClaimsAuthentication : IAuthenticateUsers
{ bool Authenticate() {...} }
It turns out, they have the exact same method signature. But alas, they don't have the same interface. I've been in the maddening circumstance where the two interfaces were named the same, but they came from different namespaces / vendors.
Here we are with two classes that could work for us, but that don't share a common interface. Wouldn't it be nice to just declare this:
interface IAuth { bool Authenticate(); }
//usage
public void LoginUser(IAuth auth) { ... }
Instead of relying on the declared interfaces on the signature, we'll declare what our method needs and let the compiler sort out if the class matches the signature. A sort of interface inversion.
There's no need to make up a name for this. It's called duck typing.
Unfortunately, in C#, the way to do duck typing is to use dynamic. That's ok in some circumstances, but why can't the compiler recognize the signature compatibility? In addition, it would be really nice to easily marshal a class into the shape required by an interface.
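A sketch of the dynamic route, using the two authentication classes above as stand-ins (minimal hypothetical bodies):

```csharp
using System;

// Minimal stand-in bodies for illustration.
public class SamlAuthentication   { public bool Authenticate() { return true; } }
public class ClaimsAuthentication { public bool Authenticate() { return true; } }

public static class DuckTyping
{
    // dynamic defers the "does it have Authenticate()?" check to runtime,
    // so any class with a matching method works -- no shared interface needed.
    public static bool LoginUser(dynamic auth)
    {
        return auth.Authenticate();
    }

    public static void Main()
    {
        Console.WriteLine(LoginUser(new SamlAuthentication()));   // True
        Console.WriteLine(LoginUser(new ClaimsAuthentication())); // True
    }
}
```

The cost: a typo in the method name becomes a runtime exception instead of a compile error.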
If we're not willing to use dynamic and we don't control the source of our implementation classes, we're stuck using the adapter pattern and writing a bunch of boilerplate code.
There's an unintended consequence of this aspect of C#. Architects want to avoid these "wasteful adapters". They also want to avoid "changes to the core". To compensate, they try to imagine a variety of ways their interfaces could be used and define those use cases as part of the interface. When we don't know, we guess and we try to "plan for the future". This leads us to write broader interfaces than we probably need.
Isn't Big Design Up Front what we were trying to avoid when we decided to make our code "loosely coupled" by adding interfaces?
This simple code, used in so many examples, has always bothered me.
class Foo : IFoo
The idea is that we have abstracted an interface so we can be solid and our IoC container will even make this easier. We must be on the right path. Right?
Wrong.
One goal of dependency inversion is that we can swap out implementations. Take this example.
class Authentication : IAuthentication
We're basically declaring that we haven't thought about this very much and we're just typing away, brain off. What would another implementation even be called?
class Authentication2 : IAuthentication //???????????
We can do better.
class Oauth2Authentication : IAuthentication
Immediately, we get the idea that other implementations might be:
class ActiveDirectoryAuthentication : IAuthentication
class LdapAuthentication : IAuthentication
class SamlAuthentication: IAuthentication
Foo : IFoo is a give up.
Ever think that your young kids are sleeping too much at night or during naps? Feel that you aren't spending enough "bonding time" with them because they just sleep sleep sleep?
Here are a few tips to wake up your kids based on personal observation.
Lay your head down and close your eyes.
Apparently, there is an alarm clock, that only kids can hear, which is triggered when your head touches a pillow.
Sit down on a toilet seat.
Blares like a fire bell to a two year old.
Wrap your head around that work project you've got to get done.
For pure magic, it helps if your boss is expecting the project to be done in the morning. If a promotion is on the line, all the better.
Make eye contact with your spouse and say something pleasant.
There is nothing like a moment of marital bliss to shake kids out of a sleepy stupor.
Sit at a table and move a forkful of food toward your mouth.
Note: shoveling leftovers into your mouth directly from the fridge does not work! You really need to cook something, set the table, and sit down. There is extra potency if your spouse sits with you [see above]. The moment you exhale and think "ahhhhh, this is nice", your kid will bolt upright in bed.
Have some friends over for dinner and open a bottle of wine.
The whine will flow freely.
Ok, any of those methods should get your kids to wake up, but every now and then, you actually want them to sleep a little longer. No problem.
Schedule an early morning appointment for them.
Financial penalties or extreme embarrassment increase the effectiveness of this approach.
Plan an outing right after nap time.
As a bonus, you'll get extra time to bond precisely at the time naps normally start.
Go on a trip.
Air travel helps here since you have to be there by a certain time. Cruises are good too since there's not another one you can get on in a couple hours.
Pro tip - Most of these sleep tricks will also work if your kid has been constipated and you need them to move their bowels.
I doubt these techniques will work once kids get to teenage years. I'll write a follow up post in 12 years to let you know.
In the previous post, we explored how constructor injection can be abused to violate ISP. At the end, I mentioned possible SRP violations as well.
Let's look at the same class:
public class CustomerService : ICustomerService
{
private readonly IRepository<Customer> _repository;
private readonly IEmailService _email;
public CustomerService(IRepository<Customer> repository, IEmailService email)
{
_repository = repository;
_email = email;
}
...
public void CreateCustomer(Customer customer)
{
_repository.Add(customer);
_email.SendWelcomeEmail(customer);
}
}
CreateCustomer calls the repository to add the customer and instructs the email service to send the welcome email.
Unfortunately, this is pretty standard fare.
The only way to expose the problem is to explore the boundaries of this approach as new requirements come in.
This "create customer" feature starts getting complex in a hurry. Thankfully, we have R#, our trusty IoC container and a good mocking framework. We can get all these features coded up and tested in no time.
Except, SRP has silently disappeared. The CustomerService now has tentacles throughout the system. Nearly any change could affect CustomerService, and changing CustomerService could affect the reliability of the entire application.
I suppose you noticed the OCP violations here as well.
As it turns out, "loose coupling" is still coupling. Our class here has to know about the email, swag, and stats concepts.
So what to do instead? Messaging.
interface ICustomerCreated
class EmailService: IHandle<ICustomerCreated>
class SwagService: IHandle<ICustomerCreated>
class StatsService: IHandle<ICustomerCreated>
Now the customer service only cares about customer issues. Reasons to change: customer reasons. Clean.
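A sketch of the publishing side (the mediator and message shapes here are simplified assumptions for illustration, not a specific library's API):

```csharp
// Assumed minimal shapes for a self-contained sketch.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IRepository<T> { void Add(T entity); }
public interface IMediator { void Publish<T>(T message); }

public class CustomerCreated
{
    public CustomerCreated(Customer customer) { Customer = customer; }
    public Customer Customer { get; private set; }
}

public class CustomerService
{
    private readonly IRepository<Customer> _repository;
    private readonly IMediator _mediator;

    public CustomerService(IRepository<Customer> repository, IMediator mediator)
    {
        _repository = repository;
        _mediator = mediator;
    }

    public void CreateCustomer(Customer customer)
    {
        _repository.Add(customer);
        // Email, swag, and stats handlers subscribe to CustomerCreated;
        // CustomerService knows nothing about any of them.
        _mediator.Publish(new CustomerCreated(customer));
    }
}
```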
Notice a secondary benefit? You don't need the IEmailService interface any longer because nothing depends on it. The message is the interface. Testing CustomerService just got much easier.
Decoupled is better than loosely coupled.
One problem with IoC containers is that they facilitate ISP violations through constructor injection.
The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.
Let's take a look at some typical, and very terrible, code. I'm actually astonished at how many anti-patterns can be put into so few lines of code (AP/LOC?). It makes my eyes bleed.
public interface ICustomerService
{
Customer GetCustomer(int id);
void CreateCustomer(Customer customer);
}
public interface IRepository<T>
{
T GetById(int id);
void Add(T entity);
}
public interface IEmailService
{
void SendWelcomeEmail(Customer customer);
void SendDailyAppStatusToOperations(Customer customer);
}
public class Customer
{
public int Id { get; set; }
public string Name { get; set; }
}
public class CustomerService : ICustomerService
{
private readonly IRepository<Customer> _repository;
private readonly IEmailService _email;
public CustomerService(
IRepository<Customer> repository,
IEmailService email)
{
_repository = repository;
_email = email;
}
public Customer GetCustomer(int id)
{
return _repository.GetById(id);
}
public void CreateCustomer(Customer customer)
{
_repository.Add(customer);
_email.SendWelcomeEmail(customer);
}
}
About the only thing that code is missing is an ICustomer interface. Don't laugh. Interfaces on POCOs/DTOs have been spotted in the wild. Let's not dwell on this code or how it can be changed.
How does it violate ISP?
CustomerService never uses SendDailyAppStatusToOperations, and yet it's called out as a dependency.
To be clear, this isn't caused by IoC containers. We programmers tend to have a false sense of security that if we have interfaces, we're following best practices. We're coding to the interface! Our blind usage of layered architectures, "noun services", and endless abstractions are more to blame.
The IEmailService is very typical in systems I run across. Why aren't these methods on separate interfaces? My guess is that the tools (containers, R#, moq, scm, etc) are all subtly pushing us in this direction.
Uggggh. I could create another interface, but then I'd have to go wire it up/mock it/inject it when I have this interface that already makes sense. I mean, it's all about emailing, right? And there's this other service that uses both methods. How many different interfaces am I going to have to create here?!? I'll have to add more files. Gah, I just know there will be merge conflicts on the project file! How about I just add this one method here and type alt-enter (R# ftw!).
Now, let's go a step farther. Let's expand the definition of ISP to the entire contract of an object, including its constructor.
I've seen, and written, the following test many times: null is passed for a "dependency" and the test still passes.
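Reconstructed as a sketch (the original test listing was lost in formatting; Moq and NUnit assumed, types as defined above): the email "dependency" can be null and GetCustomer passes anyway.

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void GetCustomer_returns_the_customer()
    {
        var repository = new Mock<IRepository<Customer>>();
        repository.Setup(r => r.GetById(42)).Returns(new Customer { Id = 42 });

        // null for IEmailService -- and the test still passes.
        var service = new CustomerService(repository.Object, null);

        Assert.AreEqual(42, service.GetCustomer(42).Id);
    }
}
```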
Before you dismiss this as a straw man argument, how did I know I could pass null for IEmailService? Isn't that secret knowledge of the internal workings of the class? If I need to change IEmailService, does it affect CustomerService?
If IEmailService isn't required for "getting a customer", when is it required? Why is it a dependency for "getting a customer"?
This is where I lay some blame on the IoC container. If we were "new-ing" up classes manually, this would obviously be silly. Someone working on the "displaying customer info" feature would balk at having to construct an IEmailService class to pass. You can immediately see that you need a different construct for dealing with "displaying customer info" as opposed to "creating a new customer". In anger, the programmer will probably supply null and commit (hey, didn't break the build!). You could argue you should have another constructor that only takes one arg, but that's not what your container is going to use. If you have a constructor with one arg, then you need some guard clauses on methods that use IEmailService to tell the caller to use the correct constructor.
The container makes this pain disappear, and that is A Bad Thing.
Imagine the business wants to "Send an email when the customer is accessed on Tuesdays". Someone goes and codes it. Wait, why are all these "Get Customer" tests failing [every Tuesday]? You mean I have to go fix all those? Yes. Yes you do.
One could argue that this was bad test writing. You should always supply a mock of all dependencies!
Yet isn't part of TDD writing the minimum code to make the test pass? Besides, "should" doesn't make it so.
So, while we're going through our test suite creating mocks where we didn't need them before, let's think about this a bit.
Is the "Customer Service class" dependent on IEmailService
or is the "Create Customer method" dependent on IEmailService
?
I'd say the latter, but that leads us back to 8 lines of code.
As further evidence, a thought experiment: why not inject every possible dependency and then it will already be there if we ever need it? We can use better tooling to auto-mock them for easy testing.
Pretty horrible idea, right? Why is a dependency we only need some of the time better?
Too many dependencies contribute to SRP violations as well, but I'll save that for a future post. As a preview, Jimmy Bogard pointed out on twitter that you can't really "unit test" classes written in this manner.
Last night I awoke at 3am with a thought: maybe Greg Young has a point.
The other day, I watched Greg's 8 lines of code video. I found myself agreeing, out loud, with the presentation, which is somewhat startling when you're sitting by yourself. One thing I couldn't quite swallow was his "no ioc" stance.
But then, I woke up with that thought.
Here's the funny bit: I didn't know why my brain decided Greg had a point. So I had to back-solve the result of my own subconscious mental processing.
Now, let me state right away. I don't have any intention of convincing you of anything. I'm not convinced myself. Let's just explore a bit.
Here's a command handler interface from ShortBus.
public interface ICommandHandler<in TMessage>
{
void Handle(TMessage message);
}
It's called from a mediator like this:
public virtual Response Send<TMessage>(TMessage message)
{
var allInstances =
_container
.GetAllInstances<ICommandHandler<TMessage>>();
...
foreach (var handler in allInstances)
...
handler.Handle(message);
I can write up my system like so:
class DoSomething
class DoThis : ICommandHandler<DoSomething>
class DoThat : ICommandHandler<DoSomething>
Beautiful. This gives us a simple way to do in-memory messaging.
To invoke all the handlers:
mediator.Send(doSomething);
To add some new functionality:
class DoSomething
class DoThis : ICommandHandler<DoSomething>
class DoThat : ICommandHandler<DoSomething>
class DoTheOther : ICommandHandler<DoSomething>
To replace functionality:
class DoSomething
class DoThis : ICommandHandler<DoSomething>
class DoThat : ICommandHandler<DoSomething>
class DoAnother : ICommandHandler<DoSomething>
SRP? Check. OCP? Check. Decoupled, flexible, testable? Check. Check. Check.
I like this.
Now, let's imagine that DoThat must follow DoThis. We need ordering. Hmmm. Well, if we need ordering, we really need a chain of events. Ok, let's sketch something.
class DoSomething
class DoThis : ICommandHandler<DoSomething>
class DoThisCompleted
class DoThat : ICommandHandler<DoThisCompleted>
Not too bad.
What if each handler needs to occur in a certain order?
I guess we just create all the intermediate events. Hmmm. If each class is working from the same data and we're using an identity map, this seems like wasted effort. But we get the benefits of messaging, so we can live with it.
Ok. Above we replaced DoTheOther with DoAnother. What if we want to write and test DoAnother, but we don't want it running in our production system quite yet?
Hmmmmmmm. Well, if our intention is to eventually replace DoTheOther, we can make a feature branch in our source control, delete DoTheOther, write DoAnother, and then deploy that feature branch to our testing environments.
I guess another option would be some kind of marker interface or attribute that tells the mediator to skip the handler. But what if someone forgets to add the marker?
Well, we'd better do some kind of assertion on our container that it has only registered the desired handlers. But at that point, we pretty much lose the benefit of our automatic wire up.
In the example code here, everything is declared together. But those handlers could be in any file. They could be in any assembly. How do we tell what handlers are going to run?
I suppose our container might have a feature that would dump all the found instances or we could write something in the mediator that would dump that information. Then we could....inspect that manually on every build to check if.....
This is getting complicated.
What if we want some things to happen only on certain days?
DoAnother is only relevant on Wednesdays. Wait, no, Thursdays. We could put that logic in DoAnother, but then it has too many responsibilities and needs to be altered when the business changes its mind. We could create a handler scheduler that manages that, I guess.
What if the app is multi-tenant and which handlers are invoked depend on the tenant involved? What if the tenants share instances of the app?
Hopefully, our container has an ITenantIdentificationStrategy.
Oh boy.
So, yes. Nearly any case can be handled by carefully studying and utilizing your container of choice. And explaining all of that to someone new to the project will be....fun.
But let's back up to where we first ran into trouble: we wanted to control which handlers run and in what order.
Consider this:
new Mediator(new DoThis(), new DoThat(), new DoTheOther());
Done. All the use cases are satisfied. And we have a new instance per request. If I'm honest with myself about autofac expressions, I'm pretty much writing this code in the app bootstrap routines already.
The other thorny use cases can also be solved with pretty straightforward code.
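For instance, here's a minimal, non-generic sketch of the hand-wired mediator (the real thing would generalize over message types):

```csharp
public interface ICommandHandler<in TMessage>
{
    void Handle(TMessage message);
}

public class DoSomething { public string Payload { get; set; } }

// Hand-wired sketch: handlers arrive through the constructor.
// Ordering is explicit, nothing runs that wasn't passed in, and
// "what handlers are going to run?" is answered by reading one line.
public class Mediator
{
    private readonly ICommandHandler<DoSomething>[] _handlers;

    public Mediator(params ICommandHandler<DoSomething>[] handlers)
    {
        _handlers = handlers;
    }

    public void Send(DoSomething message)
    {
        // Handlers run in the order they were passed in.
        foreach (var handler in _handlers)
            handler.Handle(message);
    }
}
```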
Go check out the 8 lines of code video and check out the EventStore repo to see Greg's ideas in action. I found the code very discoverable.
My fear is this style involves lots of boring typing. But in exchange for a few minutes of boredom, I get a crystal clear high level overview of the entire system, all the components, and how they fit together.
As programmers, sometimes our desire to automate overcomes our judgment and solutions take more time and effort than the original problem.
I'm not quite ready to give up my container and start using Func<Unit> everywhere, but I sure am starting to think a lot more about whether the external tools I'm using are pulling their weight.
And I think that was Greg Young's point.
Sometimes it's the little things.
A while ago, I installed .net demon and ran the trial. I was instantly impressed.
The trial expired during a period when I wasn't coding at home very often. I kind of forgot about it, except for occasionally noticing the trial expiration reminder at the bottom of visual studio. Too many other more pressing things to do...blah blah blah.
As I was working today, I suddenly noticed how often I was pressing ctrl-s (save), ctrl-shift-b (build), alt-tab (switch windows), and F5 (refresh browser).
I went to the .net demon site to purchase my license.
$30.
Thirty dollars. I've been expending all those keystrokes to save thirty dollars?
In addition to auto-save and compile, .net demon integrates with live reload so Chrome auto-refreshes as I code. I just glance at my browser to see changes.
I find I'm using the mouse a lot less as well.
I think I've gotten my money's worth typing up this post. :-]
I was contacted by Packt Publishing to review Learning NServiceBus by David Boike. David is active in the NServiceBus community, so I was eager to see what he had put together.
Using a messaging framework can be a daunting prospect for someone who is used to typical RPC / web service programming. In Learning NServiceBus, David has distilled the essential parts of getting messaging to work on NServiceBus into easily digestible bits. I consider myself a slow reader yet I was able to get through the book in a few hours. Also, as someone who is familiar with NServiceBus, I thought I might find the material boring. Instead, David presents it in a fun, engaging manner that had me happily swiping through the pages [kindle reader on iPad].
The book is divided into 8 chapters. One nice feature is that the first 6 chapters, which deal with code, have downloadable code samples that you can run on your own machine. There's no better way to understand code than to compile and run it.
Chapter 1, "Getting on the IBus", goes over the basics of getting NServiceBus up and running. For many frameworks and platforms, getting started is make or break. NServiceBus has always impressed me as easy to get going. David takes us step by step through the process and explains what all the "bits" you download to your machine are and why they are there. He covers writing a basic message flow and starting it up in a console app. I like that the console app is pictured in color and the output messages are explained.
All the code in the book is nicely formatted and easy to read. There is a great balance of showing you everything, explaining the relevant pieces, and putting you at ease that things that were not explained will be elaborated upon later in the book. In this way, each chapter builds upon the last.
I also felt there was a good balance of explaining just enough messaging theory to give context without dwelling on the ins and outs of SOA/DDD/TDD/ETC. There were a few points in the book where David simply referred to existing resources on the web to get further information. This is a good approach. While those things are essential to designing a message based architecture, they are not essential to understand how to use NServiceBus. The distinctions are not always clear, but David did a good job picking and choosing what to include. By pointing us to other resources, we're not left with that uneasy feeling that "there's something we don't know". Instead, we feel we have a good basic grasp of the necessary concepts and path to deepen our knowledge when necessary.
After each chapter I felt that I had received "just enough" information to start using the features described. While easy to digest, the book covers a lot of ground. Encryption, fault tolerance, logging, virtualization, monitoring, and scaling are all explored.
Sagas, also known as long running processes or Process Managers, can be a challenge to understand at first glance. Chapter 6 focuses on Sagas and covers how you approach them from a technical perspective and from a business perspective. David reminds us that part of our jobs as software developers is understanding and educating the business, not simply typing code.
A lot of tech books tend to be focused on programmers. Chapter 7 is focused on actually running NServiceBus from an administrator's perspective. NServiceBus is opinionated about which things should be configured by programmers and which should be under the administrator's control. Keeping this distinction front and center at all times helps projects run successfully in production. It's easy to say "make everything configurable", but that often leads to unmaintainable software and strange bugs when no one can remember why a certain series of config incantations produces some unexpected behavior. Again, as programmers, our job is not to write code and "throw it over the wall". We should be cognizant of how the code will be run in production and give the right levers to allow the operations folks to make informed decisions about the runtime environment without needing to call a developer.
The only annoyance I had with the book was the use of exclamation points for emphasis. The first few times were fine, but after a while, it just pulled my focus out of the book!
All in all, as someone who has explained basic NServiceBus and Messaging on numerous occasions, I can now refer people to Learning NServiceBus as a primer. Deep knowledge of a tool comes through usage. This book is a great way to start using a great tool.
Why not? I took a day and wrote a blog engine. I had a few goals in mind. I started by stealing, er, uh, learning from the blog of Tim G Thomas whose source code is conveniently on GitHub.
I didn't go bare minimum, but I feel like I got pretty close.
I cheated a bit. I didn't want a db, but I figured a blog has to have comments, so I went with Disqus. So, technically, Disqus has a db on my behalf and loads through js. But those things aren't in my code base which means I don't have to maintain them. Win.
I'm still deciding whether I should go with Gists for code or plain html code blocks. Gists look a bit nicer, can be forked and downloaded, but they introduce more javascript and the code is not in the actual markdown. If the gist goes away, that code is gone. What do you think?
For CSS, I could have gone with Twitter Bootstrap or Foundation, but when I was making my decision, they seemed pretty heavyweight for a blog. Kube plus Font Awesome seem to be doing quite well.
I considered adding an archive page, like on Tim Thomas' blog. Since I only have 20 posts as of now, I'll just list all the posts. Once I have a hundred posts, I can come back and add that feature.
I added an Atom feed just to see what that is like. It's trivial. Now, do I need it?
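For what it's worth, the framework ships syndication types that make the feed nearly free. A sketch with placeholder titles and URLs (not the blog's real ones), using System.ServiceModel.Syndication, which needs a reference to the System.ServiceModel assembly:

```csharp
using System;
using System.Text;
using System.ServiceModel.Syndication;
using System.Xml;

class AtomDemo
{
    // Build a minimal Atom feed with the framework's built-in syndication types.
    public static string BuildFeed()
    {
        var feed = new SyndicationFeed(
            "My Blog", "Latest posts", new Uri("http://example.com/"),
            new[]
            {
                new SyndicationItem("A post", "Post summary",
                    new Uri("http://example.com/posts/a-post/"))
            });

        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
        {
            // Atom10FeedFormatter handles all the XML details.
            new Atom10FeedFormatter(feed).WriteTo(writer);
        }
        return sb.ToString();
    }
}
```

The "trivial" part is that SyndicationFeed plus Atom10FeedFormatter produce the whole document; you just hand the result to an action that returns application/atom+xml.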
I was going for speed as well. I wanted page onload to be under 250ms. I was stoked when it was clocking in around 50ms.....until I added Disqus and Gist. That pops me to ~400ms, but I'll live with that for the features of Disqus. That fact may kill gists for me though. ;-)
I've wondered what it would be like to write in Markdown. I have to say, having once written these posts in HTML and fought with the editor, writing in Markdown is very nice. It flows quite naturally. I like the use of labels for links. It makes it easy to refer to the same link many times in a document and you can have a nice bibliography. Check out the raw source of this post.
The posts are Markdown, so they are in the content folder. The metadata for posts is in classes. I have just enough infrastructure there to post into the future. I nixed some code about putting posts in "active status". Pure YAGNI.
I also didn't want a formula for the Slug. I wanted to tweak the slug, title, and file name without having to think about the output of a method somewhere. I also avoided a base class since I just describe the shape of the class. I created a ReSharper live template to output a new class and I fill in the details. Works well.
I'm giving up the ability to "blog on the go". In reality, that never happened with my WordPress blog. Writing blog posts takes hours, for me anyway. Also, I could always edit directly on GitHub and push to production if I wanted.
Yes. I actually typed IoC. I've been so negative on IoC lately, I wanted to give it a try again. I wanted to use it in a minimal way where it could provide value rather than blindly using it everywhere.
I actually set up some interfaces in the project. gasp
I wanted posts to follow the Open/Closed principle and have the ability to create a new blog post and have it be picked up by the infrastructure automatically without modifying some particular class. When FilteredPostVault.cs gets instantiated by the container, all the posts are there. That bit of magic is accomplished by this line of code.
I decided to use Autofac since I hadn't tried that one before. It works fine IMO and there are nuget packages to get all the bits you need.
MVC 4 doesn't add trailing slashes to routes. For consistency, your routes should always end the same way. On my old WordPress blog, there was a trailing slash. I added a URL rewrite rule to the web.config to redirect to the canonical form. I also normalized to lower case and to a non-www host name.
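The rules look roughly like this, using the IIS URL Rewrite module (a sketch; the rule names and exact patterns are my own, not copied from the blog's config):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Redirect www to the bare host name -->
      <rule name="Remove www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^www\.(.+)$" />
        </conditions>
        <action type="Redirect" url="http://{C:1}/{R:1}" redirectType="Permanent" />
      </rule>
      <!-- Lower-case the path -->
      <rule name="Lower case" stopProcessing="true">
        <match url="[A-Z]" ignoreCase="false" />
        <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
      </rule>
      <!-- Add the trailing slash -->
      <rule name="Trailing slash" stopProcessing="true">
        <match url="(.*[^/])$" />
        <conditions>
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        </conditions>
        <action type="Redirect" url="{R:1}/" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Each rule issues a 301, so search engines consolidate on the canonical form.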
I wanted to see what it is like to put the model, view, and controller together in a folder instead of spreading them out across the project structure. I think I am striving for organization by feature, but this blog has too few features to know if that's working. :-D
In order to get this structure to work, I had to tweak the view engine. It's shockingly straightforward. This is customized exactly to the needs of this project.
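The usual way to do this in ASP.NET MVC is to subclass RazorViewEngine and swap in your own location formats. A sketch under assumed folder names (not the blog's actual layout):

```csharp
using System.Web.Mvc;

// A view engine that looks for views inside their feature folder.
// In the format strings, {1} is the controller name and {0} is the view name.
public class FeatureFolderViewEngine : RazorViewEngine
{
    public FeatureFolderViewEngine()
    {
        ViewLocationFormats = new[] { "~/Features/{1}/{0}.cshtml" };
        PartialViewLocationFormats = ViewLocationFormats;
        MasterLocationFormats = new[] { "~/Features/Shared/{0}.cshtml" };
    }
}

// Registered at startup in Global.asax Application_Start:
// ViewEngines.Engines.Clear();
// ViewEngines.Engines.Add(new FeatureFolderViewEngine());
```

Clearing the default engines first keeps MVC from probing the conventional ~/Views folders on every miss.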
I used GitHub for source control, of course. I decided to see how easy it would be to launch a site on azure from github. I logged into Azure. Clicked on New -> Website. On the landing screen it asked me if I wanted to deploy from GitHub (amongst other choices). I picked the repo. Deployed. Wow.
I'm pretty happy with this all in all. There's not a whole lot of code. The idea that I spent as much time working on the posts as on the blog engine tells me I'm on the right track.
Tell me what you think.
I created a date in js like this: new Date(2012, 0, 1, 8, 15, 0).
When I console.log it, I get this: 2012-01-01T14:15:00.000Z
(My timezone is CST).
The date is already converted to UTC and will convert back to local time for display. If you ship the date to js as UTC, it just works.
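On the .NET side, the counterpart is to serialize the date as UTC in ISO 8601 before handing it to the page. A minimal sketch (not the blog's actual code):

```csharp
using System;

class UtcDateDemo
{
    static void Main()
    {
        // The same instant the post logs: 8:15 CST is 14:15 UTC.
        var utc = new DateTime(2012, 1, 1, 14, 15, 0, DateTimeKind.Utc);

        // The round-trip ("o") format emits ISO 8601 with a trailing Z,
        // which new Date(...) in the browser parses back as UTC.
        Console.WriteLine(utc.ToString("o")); // 2012-01-01T14:15:00.0000000Z
    }
}
```

Because the string carries the Z suffix, the browser converts it to the viewer's local time for display with no extra code.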
Once again, if you think you need to write a bunch of code to solve a common business problem, you're thinking incorrectly about the problem, the solution, or both.
I tried to fight the framework, but it isn't worth it. With RavenDB, the default expectation is to use string for Id properties and they will get generated to look like this: "posts/1". The slash causes a problem for MVC routing and it doesn't look all that great "posts/details/posts/1".
You can overcome the MVC issues pretty quickly just by making your Id an int property. Blam. It works and your route looks like "posts/details/1".
It turns out though that once you get into more interesting RavenDB features, notably indexes, using int for Id is a real PITA. Indexes run on the server, and the server still sees all the documents as having string Ids like "posts/1". Your queries with int Id properties won't match and you'll be frustrated.
So I decided to switch back to string Id properties and then convert them to int for the routing. For Load<Post>(id), using the int works great. However, as soon as that id is used in a where clause, forget it; you have to figure out how to get the proper string representation again.
Here are two choices for what to do with your id properties.
I decided on the first option: changing the parts separator so my id looks like "posts-1". Now my routing works fine. The SEO friendliness is an issue, but I'm working on apps not websites so it's ok. The urls end up being more "hackable". Say you end up with a route like "blogs/edit/1/1"; I think it's easier to hack/read "blogs/edit/posts-1/comment-1".
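With the RavenDB client of that era, the switch is a one-line convention on the document store. A sketch (the URL is a placeholder, and I believe IdentityPartsSeparator is the convention involved):

```csharp
using Raven.Client.Document;

// Configure the store so generated ids look like "posts-1" instead of "posts/1",
// which drops straight into MVC routes without escaping trouble.
var store = new DocumentStore { Url = "http://localhost:8080" };
store.Conventions.IdentityPartsSeparator = "-";
store.Initialize();
```

Set it before Initialize() so every generated id, and every index query, agrees on the separator.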
Rule of life: Don't Fight the Framework. If you find yourself writing a lot of code to get around a framework/tool issue, either drop the framework/tool or drop your hack code. It's more trouble than it's worth.
I've found myself making mental leaps about coding more quickly by cross-pollinating the input data.
Take architectural abstractions. They've always grated on me. The better I got at writing code, the more I thought they were a waste of time...most of the time.
Yesterday I was reading the latest in a series of Ayende posts about Limiting your abstractions.
Today I had time to kill during jury duty and read a Ribbon Farm post about Dense Writing.
Click.
What's always bothered me about architectural abstractions is that they tend to become brain-off copy/paste "best practices" that add more noise than value to the code base.
I like the idea that you should limit your abstractions in your code base. Oren says "you get a dozen" - tops!
The point is that you cannot abstract everything. You actually need to make fact-informed decisions and iterate to new decisions.
Simply declaring IDataAbstraction<T> doesn't make it so.
If you try to hide EF and NHibernate behind your abstraction, you will be unable to optimize. For example, should you eager load complex properties of an entity or not? Sometimes you should, sometimes you should not. The only code that knows when to do the right thing is the calling code. Your abstraction prevents you from optimizing when and where necessary.
Finding yourself in this predicament, you have a few options.
You could write an IFetchingStrategy<T> and map that against EF and NHibernate, but you're wasting your life. You've got an app to build.
Similarly, swapping out SQL Server for CouchDB for RavenDB for HyperGraphDB is not going to be trivial simply because you whipped together some IDataBase<T>. These technologies have subtle, and not so subtle, differences that contribute to a decision about whether or not to use them in your project. You can't hide them behind an abstraction "in case you were wrong".
Either you are castrating the tool, meaning you might as well have chosen something else, or your abstraction is an illusion and you're wasting time with Empty Calorie Abstractions.
Now all that sounds awful unless you get the odd idea in your head that you can have more than one database in your system. Then all these decisions are much less important. But that's another story.
An aside, the writing on RibbonFarm demonstrates that I need to work on my writing. The entire site is worth reading if only for the mind-expanding properties of the dense writing.
Last week, I started experimenting with FubuMVC. About two months ago, I met three of the Fubu guys down in Austin and they sparked my curiosity about FubuMVC. Last month I took Udi Dahan’s excellent SOA course and asked him about FubuMVC in light of his views on SOA. His response was to challenge me to give FubuMVC a try and find out.
I started working my way through the beginner material on FubuMVC when I struck upon an issue present in any web framework: when you POST data and hit a problem, how do you re-hydrate the view with the data the user entered and show the user what went wrong?
I recently came up with a solution that I was happy with in ASP.Net MVC3. However, the solution relied on a base class and FubuMVC steers us toward a compositional approach to development. I ran across these stack posts and they meshed very well with my solution for ASP.Net MVC3.
The crux of the idea is that the Input Model for the POST is symmetrical with the View Model for the GET. I took the idea just a bit further.
public class FooEditRequestModel
{
public int FooId { get; set; }
}
public class FooEditViewModel : IRedirectable
{
public int FooId { get; set; }
[Required]
public int BarId { get; set; }
public IEnumerable<Bar> Bars { get; set; } //reference/lookup data
public FubuContinuation RedirectTo { get; set; }
}
public class FooEditInputModel
{
public int FooId { get; set; }
[Required]
public int BarId { get; set; }
}
I defined a RequestModel for the GET, a ViewModel which might contain reference data, and an InputModel which is POSTed back to the server.
The reference data is the kicker. I really don’t want to post all the reference data back to the server when most of my posts will be fine. On more complex pages, it’s just difficult to do. However, once on the server, it’s tricky to marshal the user’s input data over to the GET method. I also don’t want my GET method to have to deal with the possibility of Input model data showing up as parameters.
Here’s some non working code I threw together to see how I liked my idea.
public FooEditViewModel Execute(FooEditRequestModel request)
{
var foo = db.GetFooById(request.FooId);
var bars = db.GetBars();
var model = new FooEditViewModel() {FooId = foo.FooId, BarId = foo.BarId, Bars = bars};
return model;
}
public FooEditViewModel Execute(FooEditInputModel input)
{
var model = new FooEditViewModel();
try
{
db.UpdateBarId(input.FooId, input.BarId);
model.RedirectTo = FubuContinuation.RedirectTo<SomeOtherRequestModel>();
}
catch (MyValidationBusinessRuleOrDBExceptions e)
{
//Auto Mapper the properties needed for the request
var request = input.MapTo<FooEditRequestModel>();
model = Execute(request);
model = input.MapTo(model); //Auto Mapper the input data to rehydrate the view
}
return model;
}
I’m using some FubuMVC patterns here, so you’ll kinda have to accept that this is possible if you’re coming from another web server stack. Both my GET and POST methods return the view model. I taught FubuMVC that any method that takes a class with "Request" in the name is a GET and any method that takes a class with "Input" in the name is a POST.
The GET is pretty standard. The POST is where the fun begins.
First, I’m not happy with the Try/Catch. I’d rather wrap that up with a Fubu Behavior. But let’s move on for a second.
If all goes well, I’m going to update my Foo and redirect to wherever using the IRedirectable interface. A lot of FubuMVC POST examples return FubuContinuation to handle the redirection, but I wanted to be able to return my view model directly from the POST to avoid having to find a way to get the data over to the GET method.
If my request goes sideways, I should be able to get all the data I need from my InputModel in order to build a RequestModel. With that RequestModel, I can simply execute the GET and obtain a ViewModel. Now I can just use AutoMapper to copy over the InputModel properties over the ViewModel and I should be golden. FubuMVC really shines here. In my ASP.Net version of this, the action method returns a ViewResult which I had to crack open in order to find and modify the view model within.
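The MapTo helpers above aren't stock AutoMapper calls; in the post they'd presumably be thin wrappers over it. Here's a self-contained, reflection-based sketch of the same call shape (the demo types are invented for illustration):

```csharp
using System;
using System.Linq;

// Hypothetical stand-ins for the MapTo extension methods used in the POST
// handler. These copy same-named, assignable properties via reflection;
// a real implementation would likely delegate to AutoMapper instead.
public static class MappingExtensions
{
    public static TDest MapTo<TDest>(this object source) where TDest : new()
    {
        return source.MapTo(new TDest());
    }

    public static TDest MapTo<TDest>(this object source, TDest destination)
    {
        foreach (var destProp in typeof(TDest).GetProperties().Where(p => p.CanWrite))
        {
            var srcProp = source.GetType().GetProperty(destProp.Name);
            if (srcProp != null && srcProp.CanRead &&
                destProp.PropertyType.IsAssignableFrom(srcProp.PropertyType))
            {
                destProp.SetValue(destination, srcProp.GetValue(source, null), null);
            }
        }
        return destination;
    }
}

// Invented demo types mirroring the call shape from the POST handler.
public class DemoInputModel { public int FooId { get; set; } }
public class DemoRequestModel { public int FooId { get; set; } }
```

The two overloads cover both uses in the handler: building a fresh RequestModel from the InputModel, and overlaying the InputModel's values onto an already-built ViewModel.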
Now back to that try/catch I don’t like. The FubuMVC examples I’ve seen show how to set up validation failure handlers to do something when the InputModel doesn’t validate.
It seems to me there are 4 general concepts of correctness for POSTing an InputModel.
In the last case, I prefer to send the user to a "Fail Whale" type of generic error page and issue a priority 1 alert to OPS. Either, someone is hacking the system, or we have a serious bug in our UI that is presenting invalid options to users. Either way, someone should deal with the problem quickly.
In the other cases, it’s nice to present the form back to the user telling them what’s wrong and letting them make a choice. The PRG pattern above allows for this.
Using FubuMVC, I can imagine ways to get this into the behavior chain based on our conventions and have it working seamlessly. I’ve been growing increasingly wary of inheritance, but I can see defining our classes like this.
public class FooEditRequestModel
...
public class FooEditInputModel : FooEditRequestModel
....
public class FooEditViewModel : FooEditInputModel, IRedirectable
....
Now our behaviors could know exactly which methods to execute and which data to copy when doing their work.
Boy…This was a lot of work.
It looks like the Fubu team is working hard to get some similar behavior using symmetrical models baked into FubuMVC. For "completeness", they probably should.
After going through this exercise, I had to ask myself how the Fubu team themselves have managed to get by without all this. The answer is that they mostly POST via ajax.
I read about AjaxContinuations when I was first starting to dig into FubuMVC, but I ignored it as an oddity. Of course I want to POST my entire page back to the server, because…well, because….because that’s what’s done.
Now imagine my form POSTed via ajax. Immediately, my "re-hydrate the ViewModel" problem evaporates. All my thought on symmetrical models seems meaningless. My question of how to wrap all this in a Behavior/try/catch simplifies as well.
In addition I gain some interesting choices for the Concurrency Violation / DB Down case. I can auto-repost or I can tell the user who changed what data or I can just do the normal "submit again" messaging.
Of course I have some work to do on the client like tie error messages back to fields and follow urls that come in via the ajax response. It turns out the Fubu team has been working on all that stuff, but that wouldn’t be too hard to cook up yourself either.
For the last couple of hours I’ve been trying to come up with a scenario where it was unacceptable to POST via ajax. For web "applications", dependence on javascript is standard. I wouldn’t want to try and build a complex app without it. For e-commerce, I can see not forcing javascript on users. However, it turns out that they are mostly posting data that can’t fail an authorization check. By that I mean if they spoof another valid ProductId on their form, so what, they just found a new way to fill out their shopping cart. The only places you may have an issue is when they submit CC info or Address information. But these places are limited compared to a web "application" where most of the pages involve modifying data in some way.
I’m going to try to POST via ajax most of the time.
One recurrent theme in business is the disconnect between the “product team” and the Sales/Marketing team. I’ve repeatedly seen product teams working long hours to make insane deadlines because the Sales team has undersold their value.
Sane rules of sales go out the window when it comes to selling software, and probably any kind of service business that primarily involves brain work like advertising or design.
The salesperson goes out and makes a deal in which the customer gets 100 hours worth of value for 40 hours worth of dollars. The salesperson celebrates another successful closing and moves on.
Congratulations, you’ve just brought the business one step closer to bankruptcy.
Worse still, management will look at the situation and think the salesperson is great because they brought in revenue and the product team is terrible because they are complaining about doing a little work.
I suppose the idea is that an FTE is “free” after 40 hours so it’s a good deal. “We’ll make Joe FTE work 100 hours in one week and the customer gets the project on budget. Win-Win.” Except Joe is talented and has options. If Joe walks, how much does it cost to recruit and train a new Joe? How’s the deal looking now?
Worse yet, the business missed out on 60 hours worth of revenue. Not only have you lost money now, you’ve also told the market that your services aren’t worth much. Often the idea of the first deal is that “next time” we’ll make up for it. Good luck selling 100 hours of work for 160 hours of revenue. More likely, the customer will balk and point out that last time “it only cost 40 hours”, so they probably won’t even want to pay 100 hours.
Whoops. Your business is now in a death spiral.
And yet, for real goods, it’s so much more obvious this is a bad deal. I could be the #1 Lexus salesman in the country if I sold any vehicle for $20,000. That will never happen because the sales manager won’t approve the deal.
I wish more managers would veto bad deals in service based companies.
I find myself writing code like this a lot:
public static void DoSomething(Foo foo)
{
var thing = foo == null ? null : foo.Thing;
}
I thought about adding an operator like ??? to go with ?? and ?, but you can’t do that in C# and it would probably be confusing to the next programmer anyway.
So how about an extension method to wrap that up:
public static class ObjectExtensionMethods
{
public static TResult NullOr<T, TResult>(this T foo, Func<T, TResult> func)
{
if (foo == null) return default(TResult);
return func(foo);
}
}
//usage
public static void DoSomething(Foo foo)
{
var value = foo.NullOr(f => f.Property);
}
Not a lot less typing, but a bit clearer and you’re less likely to screw up.
I mentioned that I got an idea while writing the post on extension methods. I realized that you can null check using this technique.
It gets annoying writing this code over and over:
public static void ImportantMethod(string value)
{
if (value == null)
throw new ArgumentNullException();
}
I had considered using a NotNull<T> to take care of null checking.
But with extension methods like this:
public static class ObjectExtensionMethods
{
public static void NullCheck<T>(this T foo)
{
NullCheck(foo, string.Empty);
}
public static void NullCheck<T>(this T foo, string variableName)
{
if (foo == null)
throw new ArgumentNullException(variableName);
}
public static void NullCheck<T>(this T foo, string variableName, string message)
{
if (foo == null)
throw new ArgumentNullException(variableName, message);
}
}
//usage
public static void ImportantMethod(string value)
{
value.NullCheck();
}
Much nicer. The overloads can facilitate whatever messaging level you desire.
This is all probably a moot point with Code Contracts in .net 4.0. To get Code Contracts working in VS2010, you have to download the code from DevLabs. That caught me off guard because the code contracts namespace is available by default in VS2010, but the actual code analysis was not.
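For reference, the Code Contracts version of the null check looks something like this (a sketch; the checks only fire once the DevLabs rewriter is installed and enabled):

```csharp
using System.Diagnostics.Contracts;

public static class ContractsDemo
{
    public static void ImportantMethod(string value)
    {
        // The precondition replaces the manual ArgumentNullException throw.
        // The binary rewriter enforces it at runtime; the static checker
        // can flag violating callers at build time.
        Contract.Requires(value != null);
        // ... use value ...
    }
}
```

Without the rewriter, the call compiles but does nothing, which is exactly the gotcha mentioned above.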
Still, in the right situations I’d like to work on avoiding null altogether with the Null Object Pattern or immutable classes.
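As a sketch of the Null Object route (the logger types here are invented for illustration, not from this post):

```csharp
public interface ILogger
{
    void Log(string message);
}

// The Null Object: a safe, do-nothing implementation, so callers never
// need a null check before logging.
public sealed class NullLogger : ILogger
{
    public static readonly NullLogger Instance = new NullLogger();
    private NullLogger() { }
    public void Log(string message) { /* intentionally a no-op */ }
}

public class Worker
{
    private readonly ILogger logger;

    // Substitute the Null Object up front instead of letting null leak in.
    public Worker(ILogger logger)
    {
        this.logger = logger ?? NullLogger.Instance;
    }

    public void DoWork()
    {
        logger.Log("working"); // never throws, even when no logger was supplied
    }
}
```

The null check happens once, at the boundary, and the rest of the class stays free of defensive code.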
I try to avoid ! tests in “if” blocks if there is a clearer way to express the idea in positive manner. Thanks to Larry McNutt for turning me on to this concept.
In my post on extension methods, I had a string extension called HasValue. What’s the use of this?
//negative logic:
if (!string.IsNullOrWhiteSpace(s))
return;
//becomes:
if (s.HasValue())
return;
I think the second form is much more readable in that you have one less “twist” to think about.
I’ve encountered if checks in code that were just tortuous:
if (!foo.IsAlreadyUndone())
return;
That sort of thing just makes my brain hurt and ensures that maintenance will introduce bugs.
To get radical on my first example, I’ll sometimes go as far as removing any function calls from inside if blocks:
//function call in the if:
if (s.HasValue())
return;
//becomes:
bool ok = s.HasValue();
if (ok)
return;
And yes, I will use simple variables like “bool ok;” instead of “bool stringHasAValue;” because the clarity of intent is there. If this thing is ok, get out of the function. I can use this all over the code and the reader knows that nothing interesting is happening. We’ve done a check and determined we can short circuit this method. Now we can look below and determine what is interesting about this method.
One cool and useful feature of extension methods is the fact that a null instance can call the method.
So say you write some code like this:
using System;
namespace FizzBuzz
{
public class ExtensionsDemo
{
public static void TestString()
{
var s = "hello";
var ok = s.HasValue();
s = null;
ok = s.HasValue();
}
}
public static class StringExtensionMethods
{
public static bool HasValue(this string value)
{
return !string.IsNullOrWhiteSpace(value);
}
}
}
You would expect the second call to HasValue would blow up because the string is null. But the extension method is on the class not the instance so it goes through with no problem. Very handy. In fact, while typing this post I just thought of a very good use for this…coming soon.
On a side note, I think string.IsNullOrWhiteSpace is new in .NET 4.0. I just found it writing the code sample for this post. Otherwise, I had to do a null check before doing a trim and then checking the length of the string.
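For anyone still on .NET 3.5, that pre-4.0 check can be wrapped the same way (a sketch):

```csharp
public static class LegacyStringExtensions
{
    // Pre-.NET 4 equivalent of !string.IsNullOrWhiteSpace(value):
    // null check first, then trim, then test the length.
    public static bool HasValue(this string value)
    {
        return value != null && value.Trim().Length > 0;
    }
}
```

Like the version above, it's an extension method, so calling it on a null string is safe.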
I’m still crying over this. Just go into VS2008, type “for”, and hit Tab twice. Then try Ctrl-K, Ctrl-X and pick anything under the C# menu.
If you knew this before and didn’t tell me, you’re probably the same person who laughed while I manually typed out using directives.
Switch on the code has a code snippet tutorial.
If you’re not watching TekPub videos, you’re doing yourself a disservice.
I was watching an awesome video from the Mastering ASP.NET MVC 2 series, and I noticed the presenter doing something interesting.
I haven’t been too big on Visual Studio shortcut keys, so when I saw this it just made me cringe. If you type a class name that you don’t have a using directive for, but Visual Studio knows about the class, it will suggest using directives to you and let you easily add them to the top of the class file.
Say you want to copy a file and you type the word "File" and realize you don’t have a reference.
Now press Ctrl-. (period). You get a pop-up like this:
Now cringe like I did and think about all the time you wasted scrolling to the top of the file and typing the using directive manually. Oh, but wait, you already know that File is in System.IO. What about all the times you didn’t know what namespace a class was in and had to go to MSDN or Google to figure it out?
I always thought people I saw doing this were using ReSharper and I didn’t want to "get addicted" to a third party tool I might not be able to use at the office.
Breathe deep and move on.
I had an email discussion about Script# the other day. I didn’t like it. I admit I looked at the website for less time than it took me to write this post.
After sleeping on it, I figured out what bothers me about it. It reminds me of WebForms – let’s pretend we’re not in a browser.
Better approaches if you’re uncomfortable working in a browser:
WPF/Click Once seems like a good choice. jQuery seems like a good choice. Compiling a high level language into a scripting language seems like a recipe for headaches.
A while back, I read quite a bit of Reginald Braithwaite’s excellent blog. If you care at all about programming, reading his blog will intrigue, mystify, depress, and inspire you.
I particularly liked his post about everyone being a Blub programmer. To quote a Paul Graham quote from Reg’s blog:
Blub falls right in the middle of the abstractness continuum… As long as our hypothetical Blub programmer is looking down the power continuum, he knows he’s looking down. Languages less powerful than Blub are obviously less powerful, because they’re missing some feature he’s used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn’t realize he’s looking up. What he sees are merely weird languages… Blub is good enough for him, because he thinks in Blub.
Most people take their language of choice being called Blub with offense. Being called a Blub programmer is even more offensive.
In defiance of Blub, I can see that languages other than my language of choice (C#) offer useful idioms and abilities that it does not. So I can postulate using other languages for what they do best.
On the other hand, the pragmatist in me sees that doing my BDD specs in Ruby with Cucumber, the multi-threaded code in F#, the UI in VB.net, and the BLL in C# will never get past the PM/executives and will confuse the hell out of the maintenance dev. And that’s staying 75% within Blub.Net.
In defense of Blub, Blub is generally good enough for the task at hand and using another language will simply trade deficiencies.
In defense of Blub programmers, it must be said that any programmer who has looked “down the power continuum”, is already above average. Your average programmer has never even considered whether Blub is any better or not. Blub was in use when they were put on the project and they hack away in Blub. More to the point, they suck at Blub and real Blub programmers cry when they have to deal with their code.
I’ll take a team full of Blub programmers over a team of average programmers any day.
I inherited some code at work that made use of enums. I happily continued the pattern in order to get the job done considering the tight deadline. My spidey sense kept tingling telling me something was wrong, but I couldn’t quite put my finger on it.
The code I started with was pretty standard stuff.
There was an enum:
enum CarType
{
    Slow,
    Fast,
    Lightning
}
There was a class that used the enum:
class Car
{
    public Guid Id { get; private set; }
    public string Name { get; set; }
    public CarType CarType { get; set; }

    public Car() { Id = Guid.NewGuid(); }
}
There was some List creation:
var cars = new List<Car>()
{
    new Car() { Name = "Yugo", CarType = CarType.Slow },
    new Car() { Name = "M3", CarType = CarType.Fast },
    new Car() { Name = "Tesla Roadster", CarType = CarType.Lightning }
};
And then there was branching logic on the enum. This is where the trouble began:
foreach (var car in cars)
{
    switch (car.CarType)
    {
        case CarType.Slow:
            DoSlowCarStuff();
            break;
        case CarType.Fast:
            DoFastCarStuff();
            break;
        case CarType.Lightning:
            DoLightningCarStuff();
            break;
        default:
            break;
    }
}
This code was smelly and I was adding more of it. I really didn’t understand what I didn’t like, but I wanted to do something different.
I decided to use extension methods.
public static class CarTypeExtensionMethods
{
    public static void DoCarStuff(this CarType type)
    {
        switch (type)
        {
            case CarType.Slow:
                DoSlowCarStuff();
                break;
            case CarType.Fast:
                DoFastCarStuff();
                break;
            case CarType.Lightning:
                DoLightningCarStuff();
                break;
            default:
                break;
        }
    }

    static void DoSlowCarStuff() { }
    static void DoFastCarStuff() { }
    static void DoLightningCarStuff() { }
}
So now my consuming code looked like this:
foreach (var car in cars)
    car.CarType.DoCarStuff();
Ahhhh. Now that’s bliss. All the car type junk was packaged together and the calling code is dead simple.
But something still felt…wrong. The big “switches” were all gone, but I still had some “if (carType ==” statements lying around. I could put those in the extension methods, but that wasn’t really the root issue.
I went to the Big G and typed something like “c# alternatives to enums”. Somewhere along the line I stumbled on this post comparing C# enums to Java enums.
At first, I thought, this looks like a ton more code to write for little gain. But it felt like the right direction. I decided to just write it and see what happened.
class CarType
{
    public static readonly CarType Slow = new CarType()
    {
        _display = "Slow",
        _dostuff = () =>
        {
            //do slow car stuff
        }
    };

    public static readonly CarType Fast = new CarType()
    {
        _display = "Fast",
        _dostuff = () =>
        {
            //do fast car stuff
        }
    };

    public static readonly CarType Lightning = new CarType()
    {
        _display = "Lightning",
        _dostuff = () =>
        {
            //do lightning car stuff
        }
    };

    public override string ToString()
    {
        return _display;
    }

    public void DoCarStuff()
    {
        _dostuff();
    }

    private CarType() { }

    private string _display;
    private Action _dostuff;
}
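For illustration, the consuming code stays exactly the same shape as with the real enum (this assumes the same Car class and list from above, with CarType now being the class):

```csharp
// Car still declares: public CarType CarType { get; set; }
var cars = new List<Car>()
{
    new Car() { Name = "Yugo", CarType = CarType.Slow },
    new Car() { Name = "Tesla Roadster", CarType = CarType.Lightning }
};

foreach (var car in cars)
    car.CarType.DoCarStuff(); // an instance method now, not an extension
```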
And suddenly the real problem with the original code was obvious. Switch/If blocks were littered everywhere through the program. If you added a new CarType, you’d have to hunt through the entire application updating the switch/if logic.
The extension method class was better in that the code was all in one class, but you still had to go through and update it all.
Now, when you create a new “enum” type, all the logic is done right there. Even typing up this blog post I smiled when I didn’t have to change my Car class or the consuming code that called DoCarStuff(). I can add CarTypes at will, knowing I don’t have to change any other code.
Enums are still useful when all they do is identity. As soon as you start branching on enums, switch to a class. You’ll thank me later.
So by now, you might be thinking, congratulations, you’ve discovered the strategy pattern. I get that. However, I find it useful to think about solving concrete problems like getting rid of branching code on enums by using proper classes. It’s the same thing, but if I just said, “use the strategy pattern”, a lot of people, myself included, would leave the blog post less informed.
Finally, I know some of you might think that this code is terrible:
car.CarType.DoCarStuff();
I realize that Car should probably define DoCarStuff and not expose its CarType, but this was the first example I could think of, and I figured I’d write the post instead of trying to think of the perfect example.
Thoughts?
Getting hired for a programming job means interviewing. This process is utterly necessary, but often tedious.
Interviewers have to be able to weed out the “no hopers”. The problem is, decent candidates are put off being asked “the basics”.
For me, the problem is time. We only have 45 minutes to an hour for the interview. I’d rather spend the time talking about our respective views on SOLID, Agile, IoC/DI, CI, etc. The really important point of an interview is determining whether there is a "fit" between myself and the company.
If there is a fit, then the fact that I haven’t been doing C# 4.0 for 5 years is irrelevant (ahem). If there’s no fit, the fact that you have a good dental plan is irrelevant.
So in the interest of saving time and skipping ahead to what matters, I decided to post my implementation of FizzBuzz. I used LINQ since I hadn’t seen that implementation, though I’m sure it’s out there.
var numbers = from num in Enumerable.Range(1, 100)
              select num % 15 == 0 ? "FizzBuzz"
                   : num % 5 == 0 ? "Buzz"
                   : num % 3 == 0 ? "Fizz"
                   : num.ToString();

foreach (var num in numbers)
    Console.WriteLine(num);
That took about two minutes with a decent chunk of that spent firing up VS2010. Yes, I could have used a lambda for the Console.WriteLine, but I think the foreach is still more readable.
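For completeness, the lambda version I passed on would look something like this (using List<T>.ForEach, since IEnumerable<T> has no built-in ForEach):

```csharp
// Method group Console.WriteLine stands in for the lambda num => Console.WriteLine(num)
numbers.ToList().ForEach(Console.WriteLine);
```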
Great. I can write FizzBuzz. I also know all the access modifiers and I can use encapsulation and polymorphism in a sentence.
Now let’s move on. :-D
Thanks to Matt Taylor sending me email comments about my Avoiding FizzBuzz post, I decided to jazz it up a bit.
This whole blog is dedicated to avoiding FizzBuzz type questions in general, so I figure the more code posts the better.
Using my IEnumerable Each extension method, I tried out this version of FizzBuzz:
Action<int> printnum = num =>
{
    var value = num % 15 == 0 ? "FizzBuzz"
              : num % 5 == 0 ? "Buzz"
              : num % 3 == 0 ? "Fizz"
              : num.ToString();
    Console.WriteLine(value);
};

Enumerable.Range(1, 100).Each(printnum);
Notice I’m still trying to decompose statements along rough single-responsibility lines. I’m still going for readability/maintainability over minimizing statement count. At the same time, I always strive to reduce typing by staying pretty DRY. It’s a balancing act.
Thoughts?
I’ve wanted to write an each statement for IEnumerable for a while, but haven’t bothered, mostly because other devs had decided to translate everything to List anyway so I just used .ForEach or a straight foreach as appropriate. A comment on my Avoiding FizzBuzz post by Matt Taylor spurred me to do an implementation.
Anyway, an Each() extension method on IEnumerable is trivial:
public static class IEnumerableExtensionMethods
{
    public static void Each<T>(
        this IEnumerable<T> list,
        Action<T> action)
    {
        foreach (var item in list)
            action(item);
    }
}
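Usage is just what you’d expect (a throwaway example):

```csharp
using System;
using System.Collections.Generic;

class EachDemo
{
    static void Main()
    {
        // Works on any IEnumerable<T>, not just List<T>
        new[] { "one", "two", "three" }.Each(Console.WriteLine);
    }
}
```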
Gmail doesn’t mess around with spam. Even messages from Gmail are not above suspicion.
I finally got around to getting a subscription for TekPub. I watched three of the jQuery videos, two of the MVC videos and one Linq video.
The thing is, I use jQuery and Linq all the time. I thought I “knew” these technologies. I’ve watched a slew of the MVC videos on asp.net. I’ve read articles all over the web. I’ve read the docs for these products.
I learned more in these six videos in a day than I had in a year of fumbling around. The problem is that you have a project to get done. With jQuery, things were so much easier than writing raw js that I thought it was great already. I see now that I was missing quite a bit. For Linq, I got the clearest explanation of how we moved from .net 2.0 to 3.5, and of what lambdas and delegates are and how they are constructed. I was decent at using Linq, but I always found myself struggling to explain it. Now I can say – Goto TekPub.
Rob was already my hero because of SubSonic; now I have TekPub to be thankful for as well.
Thanks Rob and James.