I recently started working on a new project, and as we were blessed with a build server and a mostly working build, I was reminded of this project that had been put on the back burner. This time I approached it with a far more pragmatic mindset: I just wanted to get something working. I wasn’t looking to win any UI awards, and I didn’t want to architect it any further than I needed to get the job done.
This is what the light can do:
I still haven’t quite worked out exactly how the button on the light works, but thanks to a built-in setting I was easily able to make it beep when pushed. This wasn’t something I especially needed anyway.
My needs were simple: three colours to choose from (red, green and yellow), plus the ability to make the light flash while a build was running while continuing to show the colour of the previous build result.
Another technology I have been looking into recently is Redis. It’s a key-value store that lets you hold some simple data structures. It also has an excellent publish/subscribe model which allows messages to be broadcast to interested clients.
After using this to store some configuration information for some tests to great success I thought about using Redis to provide an interface into the inner workings of my build light. By holding the desired state of the light in Redis all I would need is to signal that a change had been made and all would be well.
Rather than starting from scratch I chose to extend the existing example code, as it was already working well. The work involved automating interaction with the UI, a task I was already familiar with, and doing it within the application itself made it a breeze. Working this way also gave me full visibility into the settings being sent to the light.
The changes I needed to make were straightforward.
I decided to leave the buzzer and switch control as an exercise for another day as these were not essential to my plans (and the buzzer would likely annoy me).
I decided to use a Redis hash (a Dictionary) to store information about the desired state of the light. In Redis, hashes can have individual fields changed, making them fairly easy to work with. I chose to set up the following fields for each coloured light:
- state: 0 representing off, 1 on, and 2 flashing
- power: the power level, or intensity, of the light
- onduty: a time interval for the light to be on when flashing
- offduty: a time interval for the light to be off when flashing
- offset: an offset when flashing so that the flashing of lights could be synchronised (e.g. flashing between red and green)

These all corresponded to appropriate controls in the example application.
The code for the solution can be found in my HIDVIWINCS repository on GitHub.
Of course, this was only part of the battle. Fortunately, obtaining the status of the latest build was reasonably straightforward. First, I set up an enum to keep track of the light’s current colour.
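A minimal sketch of what that enum might have looked like (member names are illustrative, not the original code):

```csharp
// Illustrative sketch, not the original code.
public enum LightColour
{
    None,
    Red,
    Yellow,
    Green
}
```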
With this simple task out of the way I could set up individual methods to update the appropriate hashes in Redis. The library I used in the build monitor was Booksleeve, a slightly older client than the one I used in HIDVIWINCS. Setting the colour of the light came down to one small method.
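A sketch of what such a method might look like. The Booksleeve calls are from memory, and the key, field and channel names are invented for illustration:

```csharp
// Illustrative sketch; key, field and channel names are invented.
private void SetColour(RedisConnection redis, string colour)
{
    foreach (var c in new[] { "red", "yellow", "green" })
    {
        // state: 0 = off, 1 = on, 2 = flashing
        redis.Hashes.Set(0, "light:" + c, "state", c == colour ? "1" : "0");
    }
    // The message content is irrelevant; receipt alone triggers the update.
    redis.Publish("light:update", "changed");
}
```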
Then to make the light flash I simply needed to make sure that the state was set appropriately for the current light.
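A sketch of the flashing case, with invented key and channel names and assuming Booksleeve’s RedisConnection:

```csharp
// Illustrative sketch; names and durations are invented.
private void FlashColour(RedisConnection redis, string colour)
{
    redis.Hashes.Set(0, "light:" + colour, "state", "2");     // 2 = flashing
    redis.Hashes.Set(0, "light:" + colour, "onduty", "500");  // ms on
    redis.Hashes.Set(0, "light:" + colour, "offduty", "500"); // ms off
    redis.Publish("light:update", "changed");
}
```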
It is worth noting that the message sent on the update channel doesn’t actually matter at this point in time; it is simply the receipt of the message that triggers an update to the light’s status.
Finally, the code required to check the build status was reasonably straightforward.
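Boiled down, the idea was a loop along these lines; every name here is hypothetical, standing in for whatever the build server’s API actually exposes:

```csharp
// Hypothetical sketch of the monitoring loop. GetLatestBuild and the
// SetColour/FlashColour helpers stand in for the real implementations.
while (running)
{
    var build = buildServer.GetLatestBuild();
    if (build.InProgress)
    {
        // Flash while a build runs, keeping the previous result's colour.
        FlashColour(redis, lastResultColour);
    }
    else
    {
        lastResultColour = build.Succeeded ? "green" : "red";
        SetColour(redis, lastResultColour);
    }
    Thread.Sleep(TimeSpan.FromSeconds(10));
}
```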
The code required to integrate the light with the build monitor was actually the simplest bit of code in the whole thing. Indeed I haven’t shown the most complex part of the code because it still needs work, but it handles restarting the loop if it fails and shutting the whole thing down cleanly.
Importantly it means that the monitoring tools are well separated from the actual control of the light. They no longer need to even exist on the same computer. Indeed during my initial testing I had Redis hosted on one computer, the light on another and I was using a Redis client on a third.
Photos and videos of the build light in action!
I have been using a mixture of C# and F# to solve the problems. I have been wanting to expand my F# knowledge for some time now, so my strategy so far is to re-implement re-usable components that I had written in C# into F#. Because I’m a huge fan of recursion many of these re-implementations were extremely straightforward as I was already following a common functional pattern.
Eventually I came across Problem 57. The problem involves evaluating an expanding formula into the appropriate fractional result. NB: code found in this post provides only a partial solution to this problem.
These are the first four iterations as described in the problem:
1 + 1/2 = 3/2 = 1.5
1 + 1/(2 + 1/2) = 7/5 = 1.4
1 + 1/(2 + 1/(2 + 1/2)) = 17/12 = 1.41666...
1 + 1/(2 + 1/(2 + 1/(2 + 1/2))) = 41/29 = 1.41379...
So I am essentially working with a tree of operations, where each iteration can be based on the one preceding it. The problem requires that each of these is able to be reduced to a single fraction. Therefore the pieces that I need are:

- a data structure to represent the expression tree
- a function to reduce an expression down to a single fraction
- a way to build each iteration and test the resulting fraction
I won’t be looking at the last of these three pieces here (I’ll leave that as an exercise for you).
Also, it would be nice to have a function that gives me a string representation of my expression so that I can easily check I’m on the right track (that is generally easier than trying to understand an object graph, plus I already had a string representation to target).
So when it came to picking the data structure I was reminded of discriminated unions in F#. These let you define different but related structures, and the relationship can then be used with pattern matching to deal with each specific case. The expressions that I needed to support were quite simple: add, divide and, of course, a constant value. In this case I’ll throw caution to the wind and use an integer.
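Based on the description that follows, a sketch of the type:

```fsharp
type Expression =
    | Constant of int
    | Add of Expression * Expression
    | Divide of Expression * Expression
```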
Here my constant values are stored in a Constant object, which is itself an expression. Add and Divide are both comprised of a pair of expressions. Expression is a recursive type and can have either constant leaf nodes or branches which perform an operation. I could have made a simplification, since in the values I was constructing the left operand of Add and Divide would always be a constant, but this more flexible structure generally felt better.
I could now easily construct the first iteration by nesting these constructors.
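A sketch of that construction for 1 + 1/2 (the name root is referred to later in the post):

```fsharp
let root = Add (Constant 1, Divide (Constant 1, Constant 2))
```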
This is rather verbose and difficult to read, and it is only going to become more so as the iterations grow. So the next thing I’m going to do is create a function which builds a string version of the expression so that I can review it by hand.
Now when I am thinking about the string function (I’m going to call it stringify) I can think of the various types:

- Constant will just be a ToString() call on the value.
- Add and Divide will be recursive, calling the stringify function on both expressions and separating the results with the appropriate operator.

Putting these into practice becomes extremely straightforward (ignoring that I haven’t used a StringBuilder).
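A minimal sketch of such a function; the bracketing rule (wrap a Divide’s right operand in brackets unless it is a constant) is my reading of the special case described below:

```fsharp
// Sketch: produces strings like "1 + 1/(2 + 1/2)" for the iterations above.
let rec stringify expr =
    match expr with
    | Constant v -> v.ToString()
    | Add (l, r) -> stringify l + " + " + stringify r
    | Divide (l, (Constant _ as r)) -> stringify l + "/" + stringify r
    | Divide (l, r) -> stringify l + "/(" + stringify r + ")"
```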
What struck me was the simplicity of this code. Using pattern matching I was able to easily address my special case for including brackets and handle the other cases in a straightforward manner as well. The F# compiler even helped by ensuring that I matched all possible patterns (it even used type inference wizardry to work out the types being matched).
I could then execute my stringify function on root to check that everything worked as intended. The delightful part was that it did. The next step was to reduce one of these expressions down to its smallest possible form. This too proved to be reasonably simple.
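Here is a sketch of the shape of such a function (my reconstruction, not the original; as noted at the end of the post, the published code was also deliberately altered so it can’t be used verbatim for the Euler problem). A fraction is represented as a Divide of two constants, and reduce is applied repeatedly until the expression settles into that form:

```fsharp
// Sketch: treat Divide (Constant n, Constant d) as the fraction n/d.
// Apply repeatedly until the whole expression is a single fraction.
let rec reduce expr =
    match expr with
    | Add (Constant a, Constant b) -> Constant (a + b)
    | Add (Constant a, Divide (Constant n, Constant d)) ->
        Divide (Constant (a * d + n), Constant d)   // a + n/d = (ad + n)/d
    | Divide (Constant a, Divide (Constant n, Constant d)) ->
        Divide (Constant (a * d), Constant n)       // a / (n/d) = ad/n
    | Add (l, r) -> Add (reduce l, reduce r)
    | Divide (l, r) -> Divide (reduce l, reduce r)
    | c -> c
```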
Here I looked at the sort of smaller patterns I was dealing with and repeatedly called reduce until I achieved a desirable pattern. Again, the simplicity of each pattern and its resulting reduction really drove home the incredible benefit provided by pattern matching.
Pattern matching is a technique that I really miss when I use languages like C#. For this reason alone I would really like to see F# more widely adopted, so it may finally be possible to use it in projects where we are currently confined to a single language.
I have written a number of code generating tools over the years, especially focussing on the generation of SQL from some sort of structure. Looking at how easily I was able to solve the problem above using F# I wish that I had some of these techniques at my disposal then.
Finally, I made a slight modification in the code above which will prevent it from being used verbatim to solve the Euler problem.
I have also added to my path a large number of the GnuWin32 tools which
are sometimes better at dealing with raw text like sed
and occasionally
grep
instead of PowerShell’s Select-String
. By having these utilities in my
path I can better deal with being stuck in pure cmd.exe
.
Anyway, one of my favourite utilities is Clip.exe
. It comes out of the box in
Windows 7 and presumably Windows Vista. I rolled my own for Windows XP although
I believe that it may be available as part of a resource kit. In case it isn’t
immediately obvious, what Clip.exe
does is take whatever is fed to Standard Input and save it to the clipboard. Very useful indeed.
The clipboard is a great place for storing some data temporarily and I quite
frequently find that I want to process it in various ways. I have a bunch of
command line utilities that are perfect at this, but I have to save the
contents to a file and pass that file to the utility. That sounds like busy
work. Fortunately the GnuWin32 utilities and PowerShell both read from
standard input. What I need is a tool that does the opposite of Clip.exe
and
writes the contents of the clipboard to standard output. What I need is
Paste.exe
. So I wrote it.
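The core of such a tool is small; here is a sketch of the idea (not the original Paste.cs):

```csharp
// Sketch of the idea, not the original source. Clipboard access
// requires an STA thread, hence the [STAThread] attribute.
using System;
using System.Windows.Forms;

static class Paste
{
    [STAThread]
    static void Main(string[] args)
    {
        bool raw = Array.Exists(args, a => a == "-r");
        if (Clipboard.ContainsFileDropList())
        {
            // Files copied in Explorer come back as a list of filenames.
            foreach (string file in Clipboard.GetFileDropList())
                Console.WriteLine(file);
        }
        else
        {
            Console.Write(Clipboard.GetText());
            if (!raw)
                Console.Write(Environment.NewLine);
        }
    }
}
```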
A simple compile…
$ csc .\Paste.cs
And after copying to my Utilities folder (which is in my path) I’m good to go.
An added bonus is that I handle files copied in Windows Explorer. These will be returned as a list of filenames.
By default I add an extra Environment.NewLine
at the end of the content as
it tends to make the whole thing neater in the console. If this is causing you
hassles use the -r
switch which will paste the contents in their raw form.
I use paste to quickly view the clipboard contents:
$ paste
To strip formatting in the clipboard:
$ paste | clip
To filter clipboard contents:
$ paste | grep -i batman
As subjects of a PowerShell ForEach-Object:
$ paste | % { $_.Length }
If you find a common pattern, share it in the comments below.
Both Game of Thrones and Mad Men have season passes available for just over $30. Oh, and that’s HD. A quick look shows about $50 for a Blu-ray of season 2.
So suddenly we have an influx of shows that are timely and cheap (relatively speaking). Sure, they’re encumbered with iTunes DRM and you have to use iTunes, but I like to think that this is a good sign. It seems like Australia has become a test bed for online distribution.
Aside from the DRM (and the fact that you are restricted to Apple devices) there is one glaring omission: communication. The schedule of when shows will be available is largely hidden until they are released. So right now I’m going on faith that the prompt deliveries will continue for the remainder of the season, but I obviously have no guarantee that they will. Even some insight into when the next episode will be released would be an excellent start.
The next issue is the overall experience. I have a setup with a Mac Mini and an Apple TV connected to my television. It’s a shame that I kind of need both. The Mac Mini has been delegated as the storage server and it can hold all my iTunes downloads. However the full screen experience is somewhat lacking and generally requires fiddling around with a trackpad and keyboard. The Apple TV on the other hand delivers a fantastic (if somewhat limited) full screen experience as you would expect. However there is no way to reliably cache your shows on it so as to avoid disturbances due to issues involved in communicating with Apple’s servers. The device itself has a substantial buffer, but to buffer an entire show generally involves starting it, hitting pause and waiting for it to download completely without looking at anything else in the meantime.
So this is where the Mac Mini comes into its own. I can open up iTunes and download the shows I have purchased and share them with my Apple TV through the Home Sharing feature. Unfortunately the Apple TV interface for Home Sharing isn’t as slick as the regular TV interface, but once you get going it doesn’t really matter. The Apple TV also serves as a great way to browse the iTunes store and purchasing items is really easy.
And with this ease comes a small snag. It appears (and has been confirmed by a support request with Apple) that when you purchase a season pass on an Apple TV you cannot set up iTunes to automatically download new episodes as they become available. You can still download these episodes manually, so the limitation won’t stop you from downloading your purchased shows, but it does make it just a little bit harder.
The workaround of course is to perform the actual purchase via iTunes on the machine that you want to download the episodes to. So unfortunately for the moment the keyboard and trackpad attached to my Mac Mini are there to stay.
Fortunately iTunes does send an email when a new episode of a show you’ve subscribed to is released, so this acts as a prompt to kick off the manual download.
But what about the times when I want to review the changes that I’ve made? This is especially important when I’m about to push changes and inflict my work on other developers. It’s also a great opportunity to spot issues which may result in broken builds.
For small changes I still stick with the console and run:
$ hg diff
This gives me a fairly simple diff of my changes. I can make this even better
by enabling the color extension which will spruce up my command line
output by adding colour to a myriad of different commands, but importantly for
this case, diff. By marking insertions in green and deletions in red I can get
a very quick overview of the changes I have made. To enable this extension I
just add an entry to my mercurial.ini file.
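The entry in question (the color extension ships with Mercurial, so there is nothing to install):

```ini
[extensions]
color =
```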
However a basic diff is somewhat limited. It generally won’t take the filetype into consideration and a small change to a big line can be difficult to spot. Fortunately there are a number of great diff tools out there. The one I use is Beyond Compare 3 (you’ll need the Professional version if you want its excellent three way merge feature). It isn’t free, but for a tool I use every day I feel that it has been worth it many times over.
These instructions should work with any diff tool capable of understanding command line parameters and able to compare two folders.
To integrate our graphical diff tool with Mercurial we’ll turn to another extension, Extdiff. This extension helps you use a graphical diff tool to compare two different changesets (or a changeset with the current work in progress). Importantly it also makes sure that if you make any changes to the working copy they will be propagated back when you are done. So if your diff tool is good at moving changes from one file to another and editing files in place you can very easily clean up your changes. Enabling the extension is straightforward.
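Like other bundled extensions, it just needs an entry in mercurial.ini:

```ini
[extensions]
extdiff =
```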
This has however only enabled the extdiff
command in Mercurial. If we want
to configure our diff tool of choice we have to make one final change.
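Something along the following lines. The path is illustrative; BComp.exe is the Beyond Compare executable that waits for the comparison to finish, which is the behaviour extdiff needs:

```ini
[extdiff]
; adjust the path for your own install
cmd.vdiff = C:\Program Files (x86)\Beyond Compare 3\BComp.exe
```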
Here I’ve added the extdiff
section and created a new command vdiff
which
will use Beyond Compare as my diff tool. So now to use Beyond Compare to view
my changes I just need to run the following:
$ hg vdiff
I can create as many of these commands as I like if I want to enable different diff tools. It is also possible to pass arguments to the diff tool. For instructions check out the Extdiff documentation page on the Mercurial wiki.
$ hg init .
That one line changed my life forever. Pretty soon I had just about everything I was actively working on managed by source control. It truly was a golden age. At this time I was generally working inside my own repositories sheltered from the rest of the world. Even for a single user Mercurial came to my aid by protecting me from myself. Coupled with an awesome diff tool I felt unstoppable. I was able to better break down problems and experiment more knowing that I could easily get back to a known good state and easily examine exactly what I’d just changed.
Of course, as these things go, it was only a matter of time before I was able to really dive into a repository actively maintained by other users. Here I was finally able to see if Mercurial was worthy of all the hype I had read, having consumed plenty of documentation (and forgotten all the things I wasn’t using on a regular basis).
And I was generally happy. But one thing continued to niggle at me: the number of “Merge” commits seemed extremely unnecessary. So I looked into the rebase extension. Now I was able to graft my changes onto the end of everyone else’s history. Most of my changes were fairly isolated and there were rarely any conflicts (and when there were, they were easily resolved with a simple three-way merge).
And so again, life was good. But as time moved on I became wary of all this rewriting of history. One instance of sending duplicate changesets to the ‘master’ repository will quickly identify how easy it is to do the wrong thing. Wasn’t Mercurial supposed to save me from all this? I lost faith and began to look elsewhere, learnt the ins and outs of Git and really came to like the light weight branching it offered. This seemed to be perfect for what I wanted. I could have feature branches on bits of work that I might do and then merge them into the trunk when I’m done. Suddenly the merges didn’t seem so bad as each served a specific purpose of bringing a feature to the main line.
So I looked into Mercurial branches. Branches in Mercurial aren’t as lightweight as in Git, and even when they are closed, traces of them seem to last forever. That might be fine for some cases, but for the work I was doing I didn’t want to worry about coming up with unique names for my branches and having those choices persisted for all eternity.
So I went searching for how other Mercurial users tackle this apparent shortfall. The answer was to maintain multiple heads and to use bookmarks to manage each head.
When I was first learning about Mercurial I suppose I gave myself the impression that multiple heads were bad and that whenever there was more than one head you needed to merge. This is of course completely wrong.
Mercurial quite happily chugs along with multiple heads and indeed it is fairly fundamental to how it works. When it comes time to push your changes you might run into issues, but even here we can work around them.
When you have multiple heads to work with it becomes evident fairly quickly that you will need to have a good way to switch between these heads. This is where bookmarks come into play. So let’s look at an example of how this works:
I’ve opened up my trusty console window and have updated my repository.
$ hg pull -u
I’ve used the -u
switch here to update the repository at the same time as
pulling the latest changes. I don’t have any bookmarks set yet so I’m going
to create one now so that I can easily track the ‘tip’ from the ‘master’
repository. I’m going to call it ‘master’ because that’s easy to remember.
$ hg book master
I’ve shortened the bookmarks
command to book
because book
is easier to
type and still gets the point across without being too cryptic. Now I’m going
to start working on a fairly major change. I want to be able to commit often as
I know that I will get things into a partially working state frequently but
don’t want to share my changes until I’m completely done. So I’ll create a new
bookmark to keep track of these changes:
$ hg book batman
So now I have two bookmarks that both point to the same changeset. Importantly
the active bookmark is batman
because it’s the last bookmark I used. You
can only have one active bookmark at a time. To see the bookmarks that I have I
can just run the book
command with no parameters.
$ hg book
* batman 34:7f6c4f9e45fb
master 34:7f6c4f9e45fb
Looks good. The *
indicates which bookmark is currently active. Note that
they both point to the same changeset. Now I’m going to make some changes and
commit:
$ hg commit -m "Improved grapple."
Now if I look at my bookmarks I’ll see that batman
has moved with my new
changeset and master
has stayed put.
$ hg book
* batman 35:63a4549bc962
master 34:7f6c4f9e45fb
This is generally the point where someone might interrupt me with an urgent
change. This time I know it is a simple change and we need to get it into the
master
ASAP. I don’t want to risk my improved grapple though and this change
is unrelated anyway so I’ll start back at the master bookmark.
$ hg update master
1 file updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg book robin
$ hg book
batman 35:63a4549bc962
master 34:7f6c4f9e45fb
* robin 34:7f6c4f9e45fb
Now I can make my change and commit.
$ hg commit -m "Reduce brightness of Robin costume."
$ hg book
batman 35:63a4549bc962
master 34:7f6c4f9e45fb
* robin 36:9832ab432fec
Now I can see that the robin
bookmark has moved, the batman
bookmark
stays in place and the master
bookmark still points to the last changeset I
pulled from the master repository. At this point I technically have two heads.
$ hg heads .
changeset: 35:63a4549bc962
bookmark: batman
user: Rhys Parry <rhys@example.com>
date: Wed Apr 3 20:40:12 2013 +1000
summary: Improved grapple.
changeset: 36:9832ab432fec
bookmark: robin
tag: tip
user: Rhys Parry <rhys@example.com>
date: Wed Apr 3 20:52:19 2013 +1000
summary: Reduce brightness of Robin costume.
Because we have bookmarks for these changesets we can quickly switch between them. Here you can also see that the robin changeset is regarded as the tip of the repository. Because this change is urgent I want to push it to the master repository ASAP. So first I’ll check if there is anything new in the master repository.
master repository ASAP. So first I’ll check if there is anything new in the
master repository.
$ hg in
comparing with B:\Batcave
searching for changes
changeset: 37:f33b6a00e172
tag: tip
user: Alfred Pennyworth <alfred@example.com>
date: Wed Apr 3 20:38:19 2013 +1000
summary: Improve delivery of hot cocoa.
Here we can see that there has been a change since I started working. If I want to push this change I’ll need to merge my changes. But first I’ll make updating as simple as possible.
$ hg update master
1 file updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg pull -u
This will pull in Alfred’s changes and leave us with three heads. Importantly
when we used hg update master
Mercurial knew we were switching to a bookmark
so it set it as the active bookmark. When we pulled in Alfred’s changes the
bookmark followed the update so now master
is pointing where we were
expecting it to.
$ hg book
batman 35:63a4549bc962
* master 37:f33b6a00e172
robin 36:9832ab432fec
Now we just need to merge the robin
branch into master
and we’ll be in a
position to start pushing our changes. Again, this is straightforward.
$ hg merge robin
$ hg commit -m "Merge Robin costume improvements."
This time I can easily come up with a merge message because I know that there
is a common theme to what I am merging. The same would apply if I was merging
one hundred changesets. I can also now think of the merge as a true change and
not just a nasty side effect of my choice of version control system. Finally I
am ready to push my changes to the master repository. Because I don’t want the
changes from my batman
bookmark going up I’ll take a bit more care and do the
following:
$ hg push -r master
This will push the master
bookmark and all its descendants (which now includes
the robin
changes as well but not the batman
changes). If you want to
preview what changesets you will be sending to the master repository you can
always run the following command first:
$ hg out -r master
Sometimes you might forget to include the -r
flag. If you do Mercurial will
kindly refuse your push and none of your changes will be pushed. By default
Mercurial isn’t keen on letting you push extra heads to other repositories.
With this in mind we can feel a little safer knowing that Mercurial will prevent us from inflicting our unfinished changes on others before their time. We can of course force these changes if we want:
$ hg push --force -B master -B batman backup
Here I’ve forced these extra heads to be pushed to my own backup repository so that if my machine fails, all my changes are safe, including the bookmarks (which are not pushed by default).
So that’s a peek at one of the ways I use Mercurial on a daily basis. I’m sure there are many improvements to the way I am currently working and I relish the opportunity to explore more of Mercurial’s functionality.
While you can create new circles there doesn’t appear to be any way to sort them. Unfortunately this means that my extreme circles (close friends and fringe) are right next to each other, and that’s just not right. The problem really becomes evident with the menu list of your streams: only your first custom circle is shown in the list (when you have added more than two custom circles). These are also sorted alphabetically after the predefined circles, which is a little odd, but I can see why it might be the case.
I think that at least in my case, Google+ is more likely to displace Twitter than Facebook. That said I’m not overly active on Facebook, but I find using twitter is a great way to keep up to date with the people I actually care about and a little industry stuff as well. I think Google+ is well targeted for that particular purpose. Like Twitter though you only know who is actually listening, not how much they care about what you say.
Well, this is all pretty standard. So all the things from your circles that your circle friends have deemed you worthy to see appear here. What I would like to see here is the ability to exclude feeds from certain circles from appearing in the stream. This would allow your default view to be void of noise but still provide ready access to these people.
I have no doubt that this is a feature the guys at Google are keen to push out (the math nerds that they are). Basically what is needed is the ability to define composite circles, that is circles that use standard set operations (union, intersection and subtraction) to define their members. Of course they would have to explain it better than me, but some Venn diagrams should make it clear enough to just about anybody.
Because the suggestions are based partly on the entire contents of your Gmail address book, I’ve found that I get suggestions for people I’ve already added (but who have multiple email accounts). One suggestion was even to add myself. Where I’ve told Google that I have multiple addresses I would hope that it could prune some of that for me.
I do like the multiple personality approach that Google+ gives you. I’m glad that even though set functionality isn’t available yet, by using circles you can model some pretty complex social hierarchies. Whether enough people will make the switch remains to be seen. I’m not holding my breath, because unless Google rapidly expands their trial, interest may just fizzle. So, hopefully Google can bring wave after wave of improvements while the buzz is still in the air.
After holding down what felt like every button (on the English (UK) keyboard) to see what special characters they revealed (like the iPhone, the ° symbol is hidden under the ‘0’ key), the pipe character still eluded me. However Windows Phone 7 also comes with a special smiley keyboard which has a wide array of smileys to choose from, including the flat :| smiley. Knowing the pipe character was right there, it became easy: simply insert the :| smiley, move the cursor between the colon and the pipe, hit the backspace key and move the cursor back to the end of the line.
It couldn’t be simpler…
Actually, maybe it could. Pipe symbol please!?!
However I soon ran into a problem: when I called msiexec.exe it would not block the script, so it would try to run multiple instances of Windows Installer, and if you’ve had any experience with Windows Installer you know that just doesn’t work (and for good reason).
A quick search on the interwebs revealed that I could simply wait for the
msiexec.exe process to finish. Rather than doing some sort of convoluted
monitoring of the process inside PowerShell I decided to use the Start-Process
commandlet (inspired by Heath Stewart’s post ‘Waiting for msiexec to Finish’).
Start-Process
is a little bit different from ‘start’ especially in how it
passes the parameters (through the -ArgumentList
argument/parameter). But
fortunately the -Wait
parameter was exactly what I was looking for. Here’s
the final line:
Start-Process -FilePath msiexec -ArgumentList /i, $installer, /quiet -Wait
This let everything nicely chain together and now deployments are super easy, as they should be.
But the test details page has always looked a little verbose. The default Debug trace is full of so much stuff that I find it completely unusable, and what it does have is barely legible anyway. So usually I just collapse the section and move on to my own less verbose logging and the stack trace.
Unfortunately all this information comes at a bit of a price and I have seen
machines run into the good ol' OutOfMemoryException
more than once while
working with the log. So to eliminate that cause from the list of possible
culprits I hit the net and went searching for how to disable the debug trace
in Coded UI Tests.
Unfortunately I didn’t find much (other than how to enable it, usually by
editing the test agent configuration). I wanted to find a solution that I
could add to my solution (damn overloaded words) that would just work no
matter what machine I ran the tests on. Fortunately it was really easy. I just
added an App.config file containing a single diagnostics switch.
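A sketch of such an App.config; to the best of my knowledge the EqtTraceLevel switch is the one the test framework’s logging reads, with 0 turning it off entirely:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.diagnostics>
    <switches>
      <!-- 0 = off; higher values re-enable progressively more tracing -->
      <add name="EqtTraceLevel" value="0" />
    </switches>
  </system.diagnostics>
</configuration>
```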
It did the trick and it is making my life a whole lot easier. Hopefully it helps someone else too. (You could also try other values for the level, but I’m an all or nothing kind of guy).
That’s right. I’ve said it. I didn’t really enjoy Avatar. It went on and on and frankly I’m kind of annoyed that the humans didn’t wipe out the blue guys. Sure, the movie was visually very fancy and there was lots of oohing and aahing about the 3D visuals (which I agree were spectacular), but in adding a third dimension to the visuals the movie lost one of the most important dimensions: substance.
And that’s where the problem really lies. The addition of 3D to the movie world has just given the movie makers of today yet another distraction from actually making a good movie. You know, one where you actually care about the characters.
The other thing that bugs me about movies in 3D is that it can make it very difficult to focus on what is going on. Although I will give credit to The Last Airbender which made only very subtle use of 3D and I could actually read the text when it appeared without straining. Piranha on the other hand made the text almost impossible to read.
I seriously question the need for presenting films in three dimensions. Movie makers have been using a simple two dimensional screen for years and doing just fine. They use lighting, shadows and other fancy tricks to provide the illusion of depth and when you are caught up in the movie you don’t even really notice. So I think that’s where the problem really is. In 3D movies all I notice is that it is in 3D and it becomes harder to recognise and interpret the story that is actually happening on the screen. It sounds odd, but I find it harder to actually immerse myself in a 3D film.
I suppose one big reason the studios might be pushing for more 3D film releases apart from the increased ticket prices is that it might be a way to thwart some camcorder piracy of their movies. Although It would probably be fairly simple to put a filter on the camera so that’s probably a stupid reason.
And if the kids out there want to really play with the whole three dimensional thing, I suggest placing two objects one behind the other. Close one eye and line up your sight so that the back object is obscured. Now alternately close and open each eye. It’s like magic.
Photo courtesy of William Denniss. Used with permission.
Now the feature has hit the main release. If you still aren’t sure what I’m talking about, here’s a screenshot:
Personally I like the change, and here’s why:
Of course, HTTPS URLs display differently.
So this could be a little confusing, but it does further highlight the fact that the connection is secured.
Nevertheless one of the reasons I do include Chrome as part of my browser cycle is because it is different. This change is different from the other browsers, but it is exactly this difference that I like.
It works by sorting your test lists and tests by name, creating a consistent ordering, allowing better merging and comparison of test lists. It’s a command line tool so it can integrate with automated processes really well.
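The underlying idea — re-ordering XML child elements by a name attribute so that two files diff and merge cleanly — can be sketched like this. Note this is only an illustration of the technique, not the tool’s actual code, and the element names here are made up rather than taken from the real .vsmdi schema:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

var doc = XElement.Parse(
    "<TestLists><TestList name='Smoke'/><TestList name='All'/><TestList name='Nightly'/></TestLists>");

// Re-add the children in ordinal name order so the output is deterministic.
var sorted = doc.Elements("TestList")
                .OrderBy(e => (string)e.Attribute("name"), StringComparer.Ordinal)
                .ToList();
doc.RemoveNodes();
doc.Add(sorted);

Console.WriteLine(doc.ToString(SaveOptions.DisableFormatting));
```

With a stable ordering like this, a text diff tool only highlights real changes to the test lists, not incidental reshuffling.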
It works in two modes:
For support, visit our support page.
VSMDI Normalizer is free for personal and commercial use. It comes with no warranty, explicit or implicit.
To use VSMDI Normalizer with Beyond Compare:
Working from home the last few weeks has put more stress on my laptop than it has previously and I was constantly hitting my 2GiB limit leaving my hard drive thrashing as Windows struggled to swap pages in and out of memory.
I was surprised when I installed Windows 7 that I didn’t need to download any drivers from my manufacturer (Asus). My graphics drivers were installed through Windows Update and everything else worked out of the box. (Unfortunately this didn’t include my Bluetooth drivers, but as I am now using the Microsoft Explorer Mouse this doesn’t seem like such a loss.)
So, after some encouragement from a friend on twitter I decided to try installing the 64-bit version of Windows 7 and if it worked, move up to 4GiB of RAM.
I already had the 64-bit image ready to go on Windows Deployment Services, but I had just recently finished setting up my machine perfectly. I was particularly worried about having to reconfigure Outlook and set up a new PST file. Now I could have tried copying my user profile and transferring that way, but instead I figured that I’d give Windows Easy Transfer a try. Once I passed the initial welcome screen I was confronted with the following options:
I guess this is a good idea for people who don’t have a wired home network. I didn’t have one of these cables (and I don’t think looping it back to the same computer would work right) so I moved to the next option.
Surely this was the option I wanted. After all, I wanted to copy the files to my network server. Unfortunately, no. This option migrates directly to the new computer. This wasn’t right either.
An external hard disk or USB flash drive? That sounds very specific. Fortunately this includes network drives too. In fact, it just brings up a standard file dialog so you could likely store the migration file anywhere you want.
Then it was just a case of following the on-screen directions. It not only backed up the Documents folder, but it grabbed other folders on the disk and on different partitions. Unfortunately it doesn’t grab the settings for all applications, but it covered enough for my needs.
Once you’ve migrated back you get this handy migration report which you can use as a guide to see what applications you still have to install:
It’s actually quite obvious when you look into what is happening. To do an update you would usually do something like this:
// (context and entity names here are illustrative)
var db = new BlogDataContext();
var entry = db.Entries.First();
entry.Title = "Updated title";
db.SubmitChanges();
Of course, nothing happens. Here’s the code (slightly edited for readability) that is generated by the LINQ to SQL classes:
[The generated entity class is omitted here — it is roughly forty lines of plain properties and backing fields.]
Without a primary key, two interfaces aren’t implemented on the generated class: INotifyPropertyChanging and INotifyPropertyChanged. Without them, LINQ to SQL doesn’t know that your record has changed (and so can’t warn you that it can’t update it).
Now that you understand the problem the solution is simple: Define a primary key in your table.
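If you are mapping the table with attributes rather than the designer, the fix looks something like this minimal sketch (the table and property names are illustrative, not from the original post):

```csharp
using System.Data.Linq.Mapping;

// Without IsPrimaryKey = true on at least one column, LINQ to SQL cannot
// identify which row to update, and SubmitChanges() silently does nothing.
[Table(Name = "Entries")]
public class Entry
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Title { get; set; }
}
```

The same applies when using the designer: mark the key column as a primary key in the database (or in the DBML) and regenerate the classes.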
1. If you want something or are asking a question, put it in your first message
Every time I am interrupted by an instant message that just says “Hi” or “Rhys” I scream a little inside. This “handshaking protocol” has broken my concentration and I am now trying to work out what the person wants. I can even see that they are feverishly trying to type their actual message. Why waste my precious cycles by forcing me to process a single useless “header” and wait for the actual body? Send the header and the body at the same time! As an example:
Hi Rhys, do you have time for a quick test review?
This message is concise, expresses the point and can easily be responded to, like so:
I’m busy. Go away.
Ok, in reality it would probably be more like this:
Sure
Or if I really am busy:
Can it wait? I am in the middle of something and should be ready in about 20 minutes.
2. Send complete messages
The last example leads us into the next rule, send complete messages. Don’t leave the recipient of your message guessing. Sure, you can’t answer all possible questions at once, but at least answer the most obvious ones. Empower the person you are communicating with by giving them the information they need to make a decision so that the conversation can end quickly.
3. Don’t let conversations drag on
If an Instant Messaging conversation is going on too long it is a good indication that the process has broken down. If possible it may be time to get up and speak to the person the old fashioned way. You’ll be able to get more information processed more quickly. If you can’t speak in person, use a telephone or if there is just a lot of information that you need to pass, write an email.
Final words
I’m sure there are more rules that could be applied, but I know that if everyone could follow the first rule I’d be much much happier.
LINQ: Powerful Stuff (QMSDNUG)
You may need to skip the first 5 minutes.
Slides are available here: http://linq.i-think22.net/LinqApril2009.pdf
Demos will be available soon.
Let’s start by looking at how we might add a new entry to our blog. Here is the XML file again:
<?xml version="1.0" encoding="utf-8"?>
<Blog>
  <Entries>
    <Entry Archived="true">
      <Title>My first post</Title>
      <Body>Welcome to my blog!</Body>
      <!-- TODO: Add authors to the comments -->
      <Comments>
        <Comment>Great post!</Comment>
      </Comments>
    </Entry>
  </Entries>
</Blog>
So we want to add a new Entry under the Entries element. We’ll also assume that our XML file has been parsed into an XElement variable named blog.
We’ll start by creating our entry first:
var entry = new XElement("Entry",
    new XAttribute("Archived", false),
    new XElement("Title", "A new post"),
    new XElement("Body", "Some post content."),
    new XElement("Comments"));
We started by creating the element, set the “Archived” attribute, then added the other necessary elements. I’ve still added the Comments element even though it will be empty. Depending on the rules that have been set about how I should layout the XML it might be optional.
To check that my code worked I plugged it into LINQPad and dumped the value of entry like so:
entry.ToString().Dump();
The results showed me the following:
<Entry Archived="false">
  <Title>A new post</Title>
  <Body>Some post content.</Body>
  <Comments />
</Entry>
Wow, that’s exactly what we want. Even though we used a Boolean
value
instead of a String
for the attribute, XElement
was smart enough to
display its value as a human readable string. The XML is also nicely formatted
and readable. I added the call to ToString()
to emphasise that it wasn’t
LINQPad that was responsible for the improved formatting.
What we have done here is generate an XML fragment. Sometimes it is easier to think of large XML files as smaller fragments that can be handled independently.
So now all we have to do is find the Entries element and add our entry
XElement
to it like so.
blog.Element("Entries").Add(entry);
This will leave us with the final XML looking like this:
<?xml version="1.0" encoding="utf-8"?>
<Blog>
  <Entries>
    <Entry Archived="true">
      <Title>My first post</Title>
      <Body>Welcome to my blog!</Body>
      <!-- TODO: Add authors to the comments -->
      <Comments>
        <Comment>Great post!</Comment>
      </Comments>
    </Entry>
    <Entry Archived="false">
      <Title>A new post</Title>
      <Body>Some post content.</Body>
      <Comments />
    </Entry>
  </Entries>
</Blog>
You might be wondering why the ToString() method of XElement doesn’t include the XML declaration. This is because XElement represents a fragment of XML which could appear anywhere in an XML document; if it included the XML declaration it would lose this flexibility. However, there is a workaround if you are outputting to a final file.
var builder = new StringBuilder();
blog.Save(new StringWriter(builder));
The Save()
method on XElement
automatically adds an appropriate XML
declaration, which is probably a good idea as it sorts out the complicated
things like the encoding and XML version (which I’ve never seen as anything
other than 1.0 to date). The Save()
method can take either the name of a
file (as a String
), an XmlWriter
or TextWriter
. In the example above
I’ve used a StringWriter
(which is a subclass of TextWriter
) to save XML
to a StringBuilder
object which I could then use to build a string
containing the XML. Save()
also takes a second parameter, SaveOptions
which allows you to save your XML file without the extra whitespace that I’ve
shown above. If you want to save those bytes it might be worth looking at this
option.
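As a rough sketch of that whitespace option (the element names here are just illustrative):

```csharp
using System;
using System.Xml.Linq;

var entry = new XElement("Entry",
    new XAttribute("Archived", false),
    new XElement("Title", "A new post"));

// Default ToString() indents the output across multiple lines.
Console.WriteLine(entry);

// SaveOptions.DisableFormatting drops the indentation and newlines.
var compact = entry.ToString(SaveOptions.DisableFormatting);
Console.WriteLine(compact);
```

The same SaveOptions value can be passed as the second parameter to Save() when writing to a file or writer.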
I haven’t yet decided what my next LINQ post will cover (although LINQ to Entities is high on the agenda), so I won’t promise anything here now. I have much more to say still about LINQ, so feel free to post in the comments suggestions for areas to cover in future posts and the areas you would like to see covered in more detail. So far this has been fairly introductory and we’ll be building towards more advanced topics over the coming weeks.
Projects can certainly suffer from too much XML, or from XML being used when a better option exists. Once your XML files become too difficult to read in a text editor it may be better to look at another option (or better design your XML schema).
Skip this section if you already know XML, but take time to look at this XML sample as it will be used throughout the article.
<?xml version="1.0" encoding="utf-8"?>
<Blog>
  <Entries>
    <Entry Archived="true">
      <Title>My first post</Title>
      <Body>Welcome to my blog!</Body>
      <!-- TODO: Add authors to the comments -->
      <Comments>
        <Comment>Great post!</Comment>
      </Comments>
    </Entry>
  </Entries>
</Blog>
Above is an example of a simple XML file. XML files follow a structured pattern called a schema. The schema defines the rules for what is allowed where and generally defines the structure of your file. Fortunately you don’t need to write a formal schema to get started with XML. Instead you can just start laying out your data. That’s where the “X” in XML comes from, because it is eXtensible.
So the sample XML above is being used to store the contents of a simple blog. XML isn’t the best way to do this, but a blog is a simple well understood concept. If you read my article on LINQ to SQL you might notice that this is very similar to the database example I used there.
Every XML document should start with what is known as an XML declaration. It’s in the first line of the XML and defines the version of the XML as well as the encoding of the file. If you are using notepad you can select the encoding when you save the file. The topic of encodings is out of the scope of this article.
The next important element that all XML files need is a root node. In this example our root node is called “Blog” and it holds all of our other elements. There can only be one root node in an XML document so if we wanted another blog we would have to put it in another XML file or redesign our XML to have a new root node (such as BlogCollection).
From there we can see that our XML document is made up of two key parts, elements and attributes. Elements are the things in angle brackets (called tags) and an element continues until it is closed with a matching closing tag. Closing tags are different from regular tags as they have a forward slash (/) before the name of the tag. We will use the term element to describe everything from the opening tag (a regular tag) to the closing tag, and a tag as the bit with the angle brackets.
There is also a special kind of tag called a self-closing tag that is both an opening tag and a closing tag. These tags have a forward slash before the closing angle bracket. For example:
<Comments />
The space before the forward slash is optional (and stems back to compatibility with HTML). Personally I like keeping the space there, but your project may have different rules.
The other important concept is attributes. Attributes go inside the tag to provide more information about a tag. Attributes can only be used once per element (but one element can have multiple attributes). In the example above, we have given the entry tag the Archived attribute.
Sometimes it can be difficult to determine whether data should be expressed as an attribute or as a child element (an element inside another element). Typically the rule of thumb is that an attribute should be describing metadata, that is extra information about the element itself and how it might be interpreted. Occasionally this doesn’t clear things up at all. If you are still confused, consider the complexity of the data and whether multiple instances of the data will be required. Complex and repeating data is a sure sign that you want to use an element.
Importantly elements can contain other elements which can in turn contain more elements (and so on). XML follows a very strict hierarchy (which makes it easy to navigate) so an element must be closed inside the element that it was opened in. This means that any element (except the root node of course) has one and only one parent element. If you are modelling structured data it is unlikely you’ll run into troubles.
Finally I’ve also added a comment to remind me to add authors to the comments.
We won’t actually be doing this, it was merely there to demonstrate how you
can include comments in your XML documents. Comments should be ignored
when parsing an XML file as they are unrelated to the data. Comments begin
with <!--
and end with -->
.
Ok, so by now you should know enough about XML to understand how we can parse this XML file and pull the necessary elements.
LINQ to XML is a set of classes designed to work well with LINQ. It provides a very simple API that allows XML to be read and written with ease.
The centre of your LINQ to XML world is XElement
. Through XElement
we can
access all of the important information in the sample above. Let’s start by
writing a query that can help us get the Blog entries to display on the front
page. We’ll assume I’ve loaded the XML as a string into a variable called
blogXml
.
var blog = XElement.Parse(blogXml);
var entries = from entry in blog.Descendants("Entry")
              select entry;

foreach (var entry in entries)
{
    var comments = entry.Element("Comments").Elements("Comment");
    Console.WriteLine("{0} ({1} comments)",
        entry.Element("Title").Value, comments.Count());
}
This example does absolutely no error checking (something you’ll definitely
want to do if you are working with real XML) but demonstrates how simple it is
to find particular elements inside XML. Additionally you can use XElement
objects to pass XML fragments around your application. We could have made our
LINQ query return an anonymous type that pulled out the Title, Body and
Comment count for each entry, but instead we just pulled out the XElement
itself. From there we were able to count the comments inside our loop.
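That anonymous-type alternative might look something like this sketch (a small sample of the blog XML is inlined here purely for illustration):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

var blog = XElement.Parse(
    @"<Blog><Entries>
        <Entry Archived='false'>
          <Title>My first post</Title>
          <Body>Welcome to my blog!</Body>
          <Comments><Comment>Great post!</Comment></Comments>
        </Entry>
      </Entries></Blog>");

// Project each Entry into an anonymous type instead of keeping the XElement.
var summaries = from entry in blog.Descendants("Entry")
                select new
                {
                    Title = (string)entry.Element("Title"),
                    CommentCount = entry.Element("Comments").Elements("Comment").Count()
                };

foreach (var summary in summaries)
    Console.WriteLine("{0} ({1} comments)", summary.Title, summary.CommentCount);
```

The trade-off is that the anonymous type is easier to consume downstream, but you lose access to the rest of the element if you need it later.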
There is nothing preventing you from using these fantastic classes without using LINQ queries as well. In fact, most of the XML parsing code I’ve written lately doesn’t use LINQ queries at all to find elements, just the methods of the XElement class. Let’s look at the ones you’ll likely use most. Don’t worry that these methods take an XName as their parameter; strings are implicitly converted to an XName. You’ll only need to work with XName directly if you are dealing with namespaces (which I’ll discuss in a future post).
Element(XName name) returns the first immediate child element with the given name. If the element does not exist it returns null.
Elements() returns an IEnumerable<XElement> of all the immediate child elements. So against Blog the enumeration would yield a single “Entries” XElement. If there are no child elements the enumeration will be empty.
Elements(XName name) returns an IEnumerable<XElement> of all the immediate child elements with the given name. If no elements with the name exist it will return an empty enumeration.
Attribute(XName name) returns an XAttribute that is the attribute with the specified name. If the attribute does not exist it returns null.
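A few examples of these lookups against a cut-down version of the sample blog XML (inlined here for illustration):

```csharp
using System;
using System.Xml.Linq;

var blog = XElement.Parse(
    "<Blog><Entries><Entry Archived='true'><Title>Hello</Title></Entry></Entries></Blog>");

var entries = blog.Element("Entries");              // first child named Entries
var first = entries.Element("Entry");               // first Entry beneath it
bool archived = (bool)first.Attribute("Archived");  // attributes convert to simple types
var body = first.Element("Body");                   // no Body element here, so null

Console.WriteLine(archived);      // True
Console.WriteLine(body == null);  // True
```

The explicit casts on XAttribute (and XElement) are a handy shortcut for converting values without parsing strings yourself.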
and Elements()
methods there are also a set of
Descendant()
and Descendants()
methods. These work in the same way except
that they return all elements under the node. We used this method when we were
finding the Entry element as we didn’t care about the rest of the document’s
hierarchy.
Because these methods return null if the element (or attribute) is not found
it is important to check that the value is not null unless you are using a
method which returns an IEnumerable<T>
object.
You now know all the important classes needed to parse XML files (perhaps to
load up some strongly typed objects). In my next post I’ll be discussing how
you can use this same class to build complex XML structures. In the meantime,
check out the MSDN documentation for XElement
.