Friday, 10 May 2013

Namespace prefixes in Microsoft Xml Document Transformation's SetAttributes and RemoveAttributes transforms

The CodePlex fork I mention in this article is available as a NuGet package - get it using the package ID Microsoft.Web.XdtEx.

Microsoft recently open-sourced the XDT library that's at the heart of web.config transformations.  Partner this with a new feature of NuGet 2.5 that auto-imports MSBuild .props and .targets files into a project file, and I was able to try something out with XML files in a Xamarin.Android project I'm working on at the moment.

I'm building an Android app (obviously) and, due to our company's needs, we want to build a vanilla app which can then be re-used for other brands within the same group.  I had the idea that I could create the whole thing as a NuGet package, now that NuGet supports the 'MonoAndroid' platform.

Deploying code to projects with NuGet is easy - and with the partial class model in C# it's simple to deploy a core platform to a project with one set of code files that never change (thus letting us manage them with NuGet's install/upgrade workflow), while adding extensibility points with partial methods (in addition to virtual methods etc).
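As a sketch of what I mean by those extensibility points (the class and method names here are purely illustrative, not from the real package):

public partial class CoreActivity
    public void Initialise()
        // Partial method call: if no other part of the class implements
        // OnInitialising, the compiler removes this call entirely.

    // Declared, but deliberately not implemented, by the core.
    partial void OnInitialising();

// CoreActivity.custom.cs - owned by the consuming project; never
// touched by package upgrades.
public partial class CoreActivity
    partial void OnInitialising()
        // Brand-specific customisation goes here.

The core file can be upgraded freely precisely because consumers never edit it; all the customisation lives in the other half of the partial class.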

However, I also needed to be able to do the same for android resource files.  Consider the default main.axml from a new Xamarin.Android project:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <Button
        android:id="@+id/MyButton"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/Hello" />
</LinearLayout>

Now, in my scenario, I'd want to deploy this core layout as content in my nuget package, but then I want people to be able to customise it without touching the original.

Let's say, for example, that I want to change the text on the <Button /> element there from '@string/Hello' (an Android resource identifier) to the literal string 'Hello World'.  This is how we'd want to do it in XDT:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
              xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <Button xdt:Locator="Match(android:id)"
            android:id="@+id/MyButton"
            xdt:Transform="SetAttributes(android:text)"
            android:text="Hello World" />
</LinearLayout>

See the 'xdt:Transform="SetAttributes(android:text)"' attribute there?  If it worked in the base version, that would instruct XDT to replace the android:text attribute of our input file with "Hello World", because that's the content used here.

There is a slight snag, however: the current (as of 10th May 2013) version of XDT (available on CodePlex, or as a package on nuget.org) does not support namespace prefixes in xdt:Transform operations.  It does support them in the xdt:Locator attribute, however.

Again, since XDT is now open-source (although not yet accepting pull requests), I decided to fork it and have a go at adding the feature - and an hour or so later I had it working.  You can see the diff on CodePlex: the primary changes are in XmlElementContext.cs, XmlTransform.cs and XmlAttributeTransform.cs; the rest are tests.  Note that in the next commit I renamed the test file from LocatorNamespaceTests.cs to NamespacePrefixTests.cs.

If you're currently using the XDT library (either directly, from a CodePlex build, or via the official NuGet package) you will be able to switch over to using this version without issues.  If you're coming from an official release, then one thing to note is that this DLL is not signed - just like the original CodePlex release.

So what about the Android transform?

If you're wondering - yes, I've managed to get the build integration working from a NuGet package - and to have my files added to the project like this:


The xtransform is the file that is edited, the xbase is the file that contains the default content, and main.axml is generated by the build task before build (and before Xamarin.Android compiles it).

However, it's not been easy at all to do this, as it involves writing an install.ps1 to add the <DependentUpon /> metadata to the .axml.xbase and .axml files - which unfortunately requires a VS reload of the project in order to show the hierarchy in the IDE.  I use Microsoft.Build.Evaluation.ProjectCollection.GlobalProjectCollection to get the MSBuild project (I'm not including the PowerShell scripts because they detract from the main point of this post) so that I can add the metadata more easily, but I then have to save the project through that as well in order to trigger a reload.
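For reference, the kind of metadata the install.ps1 has to end up producing in the .csproj looks something like this (the item types and file paths are illustrative - the real ones depend on the hierarchy you choose):

<AndroidResource Include="Resources\layout\main.axml" />
  <None Include="Resources\layout\main.axml.xbase">
    <DependentUpon>main.axml</DependentUpon>
  </None>
  <None Include="Resources\layout\main.axml.xtransform">
    <DependentUpon>main.axml</DependentUpon>
  </None>

It's this <DependentUpon /> nesting that Visual Studio only picks up after a project reload.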

If you search around, you'll find the official line is to save a project in a NuGet package install/uninstall script through the $project parameter that is passed to your script, as this is a 'friendly' save which doesn't reload the environment (and is therefore almost always what you want to do).  The problem is that VS will only update the file hierarchy of dependent files when a project is reloaded - and there's no command in VS you can execute to perform this grouping (unless VSCommands is installed). Bah!

And then on uninstall/upgrade, the file hierarchy causes big issues, because if NuGet removes a file that is a parent of others, then Visual Studio will automatically remove the children.  This means that my initial hierarchical attempt - which was to have xbase -> xtransform -> output as the hierarchy - kept dropping both the xtransform and output files even though they'd changed - because they were deleted when the xbase was deleted.

Furthermore, flattening the hierarchy again in the uninstall.ps1 script didn't make any difference, because the underlying change is not reflected in the environment unless a project reload occurs.

I'm not really sure if there's any way around this - so I went for putting the editable file at the root instead.

UPDATE - This doesn't work either - because nuget doesn't then find the xbase file when it goes looking for it.  I've had to write an Uninstall.ps1 script for the package that does nuget's job, finding the item and deleting it pseudo-manually.

What about a nuget package?

Yup - I've updated the post to include the package ID and link - but here it is again for completeness: Microsoft.Web.XdtEx.

Friday, 19 April 2013

ICM and further follow-up - Still getting spam!

As my last two posts (here and here) have documented, I've had a really tough time trying to stop newvistalive.com from sending me email surveys, having mistakenly allowed myself to be registered with them following an ICM telephone survey.

Most recently I'd received an email from a guy called Keith Bates of Creston Insight (a division of Creston to which ICM and newvista Research belong) assuring me that I will no longer receive email surveys.

Well - today I got another survey from newvistalive.com.

So off I go to the Information Commissioner's Office to report this organisation.

Tuesday, 16 April 2013

ICM and follow-up - Well, that was quick

Update (19th April 2013) - I'm still getting spam despite being assured that it will stop - read more.

Yesterday I posted about my experiences with ICM and an affiliated website, newvistalive.com.  In it I explained how I had repeatedly tried to unsubscribe from their email survey service to no effect.  I also included the text of an email I'd written to Keith Bates, apparently the head of the 'Insight Division' of Creston, the overarching company that controls ICM, newvista Research and many others.

I waxed lyrical about how I didn't expect to receive any response to this email, but I was wrong.

Last night I received this response:

Dear Mr Zoltan,
I was very concerned to receive your email and immediately launched an investigation into your case. My team have been working on this matter for the past few hours and I am now in a position to respond to you
Firstly, let me apologize for the experience you have suffered. This is far below our usual standard. Secondly, I would like to reassure you that normally panellists cannot receive a survey until they have logged in, with login and password details being issued upon first registration. This evening we have double checked our procedures and are satisfied that all are operating properly.
Finally, I can confirm that you have been unsubscribed from our panel and that you will not hear from us again.
I am sorry that you have experienced a less than satisfactory level of service and hope that this will not affect your future participation in market research surveys.
Keith Bates

So it would appear that my ranting might just have achieved something for once.

It's just a shame that it took such drastic action to get the desired result!

Monday, 15 April 2013

newvistalive.com is SPAM! Say no to ICM re email surveys.

Update (19th April 2013) - I'm still getting spam despite being assured that it will stop - read more.
Update (16th April 2013) - I have received a response to the email I sent below - read my next post for more.

Like many people in the UK, I've been called by ICM and completed one or two of their telephone polls (often conducted for organisations such as the BBC, the Guardian and many other media organisations).

Following one of these completed polls, I was asked if I would like to receive polls by email, too, from 'one of our partners' - with the potential to earn some money.  Assuming that I would be able to 'suck it and see', and unsubscribe if I got bored of receiving these emails, I said yes.

It was definitely one of the stupidest mistakes I've ever made.  The surveys come from newvistalive.com, who proudly report on their website how much money their members have earned today.  I've no doubt that some people are completing their surveys and 'earning' money - but I seriously doubt it's as much as they say (close to £3mill today, apparently).  But anyway, I digress - that lack of trust on my part is most probably due to the experience I've had with this company.

Despite numerous attempts at unsubscribing, including sending direct emails informing them that I would be taking the matter further (to the Information Commissioner's Office by the way, if you need to report spam), I am still receiving these surveys.  It wasn't spam to start with - as I'd said yes - but now that I have unsubscribed it most definitely IS spam, and the fact that the company involved is supposedly one of the most respected polling organisations in the world is even worse.

After a bit of research, I discovered that newvistalive.com is run by newvista Research.  Trying to find a postal address or head office for this company is nigh-on impossible, with the only means of contacting them being an email address on their website.  That's just fucking shoddy - not to mention deeply dubious.

A deeper search on Google yields a page on the website for a company called Creston, which mentions the word 'newvista' a couple of times, and I satisfied myself that there is a direct relationship here between the two.  Surprise surprise, this same company also lists two divisions of ICM in a list on their website.  Some more clicking led me to the 'Insight' division of Creston, which explicitly states that newvista Research is one of its companies.

So my next step?  To email Keith Bates - apparently the head of Creston Insight - with a particularly snotty email about this inability to unsubscribe from newvistalive.com.  The text of this email follows:


My name is Andras Zoltan. I am on the list of contacts for ICM research following some telephone polling that I have done in the past. Following one phone-call, I was asked if I’d be interested in receiving polls by email, with the potential to earn money.

A day later I received my first email from newvistalive.com. I understand that this website is part of newvista Research which, in turn, is part of Creston Insight. Since you are the head of Creston Insight, I am emailing you. This is my work email address – the email address that is ‘registered’ with newvistalive.com is [omitted]. I am happy for you to email me at that address to confirm that I am who I say I am.

After a few weeks of receiving such emails, and ignoring them, I decided it was time to unsubscribe. So I did this, discovering first that the ‘click here’ link that is seemingly helpfully appended to the end of all emails doesn’t actually do anything except take you to the homepage for the website – this is bad. Had I been a less-internet-savvy person I would have assumed that the unsubscribe was complete at this stage.

So I read the FAQ, which informs you that, to unsubscribe, you have to login. However, I’ve never received any login details for the website – so I was curious as to exactly how I would do this. This is also very bad. Many normal people would have given up at this point.

So I went to the ‘forgotten password’ page and entered my email address. Expecting to receive a ‘reset password’ link, I was absolutely astonished to receive an email soon after containing my password.  I hope I don't need to explain to you that having users' passwords stored either in clear text or in any easily reversible encryption form within your assumedly massive database leaves you absolutely wide open to hacking attacks.

If I were a shady Eastern European (or indeed any other stereotyped geographically regional) organisation looking for lists of email addresses and passwords I would be making a bee-line to your website right about now and trying every penetration attack known to man to try and gain access to what could be a goldmine (my assumption being that if your developers are so blasé about security as to do this, then I'm sure your website is probably vulnerable to something like a SQL injection attack or similar).  I am a software developer so I know exactly the kind of thing of which I speak.  I'm sure you have people you can ask that can verify what I say.

For those people who are getting paid by your website, who are the people most likely to have registered a 'real' password instead of the default that is setup for new accounts, you are guilty of the most heinous crime of security irresponsibility and all your customers need to know this - if nothing else so that you enact a change within this company.

Not only that, however, but can you wholeheartedly trust everyone that works for your company?  I'm sure more than one person has at least read-only access to your live database.  If, as is most likely, your users' passwords are stored in the clear, then you're providing one hell of a temptation for someone looking to make some ugly cash on the side by selling a few database records…

Anyway - I proceeded to login and click the unsubscribe link as directed by the aforementioned FAQ.  There's some flannel on that page about some surveys still being sent because 'there may be some surveys still holding in our system for surveys that are already running at the time you have unsubscribed, if you do receive an invite for these please ignore it.'  What utter codswallop - it's just a cover-story for the fact that your unsubscribe process doesn't actually do anything does it!?

…Because, within a couple of days I was still receiving surveys from your bloody website - and a month later, in fact.  I then sent an email about 3 weeks ago directly back to your helpful 'contact us' email address, explaining that if I was not removed from the system completely then I would be taking the matter further.  3 weeks later, have these emails stopped? Well, clearly that's rhetorical given that I'm now emailing you, but even so.

So here we are.  I am not under the illusion that this email will be going to you directly, but I sincerely hope that it does reach you soon and that I am removed from this sham organisation's database within 48 hours.  If not, I will be contacting the Information Commissioner's Office and reporting the website as sending spam (since I have repeatedly asked to unsubscribe and have been ignored, it now becomes something they will deal with).  That won't be all I do, but it's a good start I'm sure you'll agree.

I have also posted this email, with context, on my own personal blog.  I don't have a massive readership (in fact my regular reader count probably stands at a fat zero), but I reckon there's enough links in the content, and with 'newvistalive', 'ICM' and 'spam' being mentioned in the title, that it might slowly creep up the search engine rankings enough to show up in search results for the inevitable thousands of other people that have been/are/will be frustrated by the activities of your organisation.

I look forward to hearing from you in due course,

Andras Zoltan

So there we have it - I have absolutely no faith whatsoever that this will change anything.  But you never know, it just might.  Judging by the situation thus far, however, my guess is that this 'Keith Bates' isn't even real.

By the way, for the non-technical among you, the whole security issue I mention in the email is a really big concern and one that you should definitely be worried about if you have registered on this website with a legitimate password.  Any website that is able to send you your password has got their security massively fucked up and absolutely cannot be trusted with your data.  The issue is not that someone could see your password in your email, it's that anybody who can gain access to their database can see your password.  And believe me, that's not only hackers, but probably any developer that works for them, or has been employed by them - directly or indirectly.  If they use outsourcing for their development then the risk is much higher, too, as someone working for a company temporarily will have much less chance of getting found out if they steal a bit of information here or there.

Friday, 21 December 2012

MSDN and MS Fail - Developing Store Apps? You're on your own

One of the reasons I often say to people that the Microsoft stack is the best to work on is the quality of the documentation.  For most APIs you get decent documentation as to what the method/class does, what kind of parameters to pass and, most crucially, why the operation will throw an exception (and what to do about it).  Across most of the BCL, too, you get lots of helpful exceptions when things go wrong - so you can address any issues and try again.

For this post I'm going to be working primarily with a random pair of examples (and they're by no means isolated cases - throughout the Windows Store MSDN you'll see stuff like I'm about to show all over the place):

From the BCL MSDN: the AesManaged class (from the core BCL) and the documentation for its CreateEncryptor(byte[], byte[]) method.

With those two opened in tabs, let's now open the Windows Store MSDN topics DataProtectionProvider and the DataProtectionProvider.ProtectStreamAsync(IInputStream, IOutputStream) method.

Font Size and Page Width

The first thing to note between the two different types of documentation is that whilst both are fixed width, the BCL pages are slightly wider.  'Why is that a problem?' you ask - just look at the size of that text on the Windows Store pages!  The following image shows what the class name code block looks like in my browser (first) and what it would look like if it used the same font size as the BCL documentation:


It's about 55px difference in width there, for less than 50 characters.  So consider that for the code examples this means a lot more broken lines and wrapping (screenshots from the DataProtectionProvider topic):



That second one is actually almost laughable (and I'll be getting to it in a minute).

Now of course MSDN also does have a general design rule that code samples shouldn't wrap.  The fact that these do means someone in the QA team there either should be given a stern talking to - or the sack.

But of course this gripe doesn't just stand for code samples - the text in general is all larger.  The difference is that the BCL stylesheet uses 12px for the font size, and the new one uses 0.87em.  Okay, so using ems is better; but you should try to get the value right.  Sure, I can change my font size to 'smaller' in IE and get the desired effect - but I just shouldn't have to.  At least start with an em value that is equivalent, for most people, to what the original MSDN uses - 0.75em (12px at the browser-default 16px base size) is a good one for most cases.

The overall effect of the larger text, in my opinion, is to make it all look unprofessional and to make me feel like I'm working with a toy framework.

The page width has been reduced to 985px vs 1220px.  So on my 1680x1050 desktop (I'm sure a minimum for most developers) I end up with a preponderance of whitespace in the gutters of my browser.  Okay, so it stops people with 1024px screens having to pan - that's great.  But for people with bigger screens it just looks crap.  This is what the max-width CSS rule is for, people!  We obviously don't want our lines too long (re-pagination of code samples would be a step too far, and over-long lines of text get hard to read), so a liquid layout that grows to a max of 1220px would seem to make the most sense here.
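The sort of rule I mean is just a few lines of CSS (the selector here is hypothetical - I haven't dug out the real one from the MSDN stylesheet; the px values are the two widths discussed above):

```css
/* Liquid layout: fill the window, but cap at the old BCL page width. */
#mainContent {
    width: 100%;
    min-width: 985px;   /* the current fixed width becomes the floor   */
    max-width: 1220px;  /* the old BCL width becomes the ceiling       */
}
```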

On to the content - the code samples

Why oh why oh why are the code samples for the DataProtectionProvider, which are written in C#, not highlighted, and why oh why does the language tab at the top say 'None'?  I assume the two are related, but either way it's immensely difficult to read those walls of code.  In fact, because they're not coloured they're actually just walls of fixed-width text, which is horrible.  It's also especially difficult when we have seriously ugly code wrapping as shown in the screenshots above.

Clearly, whoever wrote the samples was expecting the available width to be the same as it is for the 'proper' MSDN - but nobody in the design department told him or her that it isn't.  Left-hand, right-hand…

Also - where are the samples for other developers?  Now in some other classes and APIs you will find multiple-language examples, but you'll also find a lot like this, where only one language is presented.  Not acceptable.

In nearly all of the .Net BCL documentation you'll get at least C# and VB; and sometimes JScript.

The runtime experience - and the actual documentation

This is where the real problems start.

First - and we're looking at the ProtectStreamAsync method documentation here - a reference is made back to which constructor you should use.  That constructor takes a magic string.  Now try and find a list of all the magic strings that are acceptable for that constructor.  Go on - I dare you - you simply won't.

In fact, if you run the WinRT Crypto API sample with either of the WEBCREDENTIALS versions - which do, admittedly, appear to be placeholder strings for you to fill in - the sample crashes!

And this is where we get to what is actually my biggest gripe - and this gripe is not just with the documentation, but the whole WinRT .Net layer too:  Exceptions.

Knowing what exceptions to expect when something goes wrong is a central tenet of .Net framework programming.  This method's documentation contains absolutely nothing about the exceptions that will be raised if something goes wrong - the implication being that nothing can go wrong if no exceptions are listed.  Not so!  As my previous paragraph states, the sample crashes with this exception:


To find out what that actually means I had to google 'dataprotectionprovider hresult 80090034' (but notice google changes the dataprotectionprovider name to 'data protection provider' - because if you actually search for the original phrase you get no results).  The top result here is COM Error Codes which lists 0x80090034 as 'NTE_ENCRYPTION_FAILURE: Encryption Failed'.

Marvellous - thanks for that.

So what should change?

Well the COM error code list is absolutely fine - I have no actual gripe with that; if I'm doing native development then I do need to know what all these error codes are.

But I'm not doing native development, I'm doing .Net development, which uses exceptions to communicate failure.  So that exception should be raised with some meaningful information about what's actually gone wrong (based on the HRESULT that was returned) and/or - MSDN should contain a good list of the exceptions that can occur and why they will occur.  In this case, there's clearly something wrong with the magic string I sent to the DataProtectionProvider constructor.  But since I've got no way of knowing what the acceptable values actually are; and I've got no way of knowing what's actually wrong with the string I did send - the API is basically useless to me.  It really might as well not exist at all.


In fact this whole exceptions thing is something that's pervasive throughout the 'new' MSDN, and I think something that points to a core decision that's been made somewhere.

In Windows Store/RT development, exceptions apparently never happen and if they do, then you should swallow them - don't worry about what they actually are.  I came across a code sample on a topic the other day (I'm sorry but I can't remember what it was) which used a catch(Exception){ } block to swallow every exception and return null from the example method.

There wasn't even a paragraph saying something like 'this is not good practice' and linking off to a helpful topic about handling exceptions effectively (like this very topic taken from MSDN itself!).

I mean: wow.
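To be clear about the pattern I mean, it's this sort of thing (my own reconstruction of the style, not the actual sample - the Protect call is a stand-in for whatever the topic was demonstrating):

public static class DataProtectionExample
    // The style the sample used: every failure becomes an indistinguishable null.
    public static byte[] ProtectData(byte[] input)
            return Protect(input);
        catch (Exception)
            // Swallows ArgumentException, COMException, everything -
            // the caller just gets null with no idea why.
            return null;

    // Stand-in for whatever protection call the original sample made.
    private static byte[] Protect(byte[] input)
        return input;

At minimum you'd expect the catch to be scoped to the exceptions the API can actually throw - which is exactly the information the documentation doesn't provide.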

So what's this framework and its documentation trying to achieve?

…Because I really have found it to be less than effective in many, many scenarios, so I can't imagine the aim is to be helpful.

As it stands, in my opinion, it's to make Windows Store development accessible to the beginner programmer, that's what.  The aim is to get as many people as possible blundering about in the new .Net framework, hiding all the complicated stuff like what could go wrong and why, so that more store apps are developed.

Okay so that's a reasonable goal, but at the same time it's alienating people like me who want to know why stuff goes wrong and what I can do to either fix it or avoid it.  I really wanted to use that DataProtectionProvider, for example, so I could use out-of-the-box encryption tied to the user's roaming identity (I'm assuming that the "LOCAL=User" magic string therefore isn't applicable, but I wouldn't bloody well know would I!?) to store some sensitive data that I also wanted to roam.  But I can't - because I literally have no way of knowing how to do it.

Oh and there's another target audience implicit within this - Java and Objective-C developers from Android and iOS-land.  They might be just starting to learn .Net and C#/VB - so let's not clutter up our documentation with loads of technical details that might frighten them off (I'm not implying that such devs are non-technical by the way - I'm suggesting that the MSDN team is condescending to them) and pretend that all our APIs are really friendly and will always work.

They'll really thank you when shizzle does happen but there's no fizzling clue as to what's gone wrong.  Really, they will.  If I, a professional .Net developer with many years' experience across most of the .Net environment stacks (service, desktop, command line, web forms, MVC), am getting really frustrated with this then I'm sure they're going to feel absolutely abandoned.

And finally - another example - the Geolocator

My last rant - and another indicator of both a documentation fail and some shortcomings in WinRT itself - is one of the very, very core APIs that you'll use in Windows Store development: the Geolocator class and its GetGeopositionAsync overload.  (Note here, for some reason, all the code examples are now magically in JavaScript and not available in anything else!)

Now - for some reason I've yet to fathom - geolocation will not work on my desktop in the office.  Any app that tries to use my location will never get it.  Even IE10 will not dish out a location (and yet IE9 running on Windows 7 would happily get one resolved by IP address - figure that one out).  Now, I'm thinking this is likely because I'm in an enterprise environment with a corporate proxy/firewall - but really the reason it doesn't work is irrelevant to this discussion.  As a developer I should be able to handle all scenarios effectively and easily - especially when the API and its documentation imply that all the issues are already dealt with for me.

So, anyway, the JobServe Windows Store app I'm developing right now will of course take advantage of geolocation if it's available - so I wrote this very simple piece of code, in an async method:

var loc = new Geolocator();
var t =
    await loc.GetGeopositionAsync(TimeSpan.FromDays(7), TimeSpan.FromSeconds(10));

That means - 'get a location that's at most 7 days old, and return within 10 seconds if you can't'.

On my machine, with whatever geolocation issues it has, this call never returns - not after 10 seconds, not after 5 minutes.  I left it running for an entire lunchtime once - still nothing.

The timeout value expressed here implies that you can control just how long to wait for a location update to be received.  In fact that only seems to work if the Geolocator can be initialised (on my machine it never moves past the 'Initialising' state). 

But surely the documentation will tell me about this - and how to handle it.  No - it doesn't.  It mentions 7-second timeouts when the machine is in 'Connected Standby' - but that doesn't even seem to relate to anything else on the Geolocator object - it's certainly not one of the values in the PositionStatus enum.

So what can I do?  Our app will be used by people in business environments to search for jobs in their lunch breaks (contractors especially) - so I'll have no choice but to wait for the ten seconds myself and move on manually if the task hasn't completed.  The task can't be killed or terminated, though, because it's an IAsyncOperation and doesn't support CancellationTokens - so I'll just have to leak the task until the app is restarted.
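The manual wait I'm describing looks roughly like this (a sketch of the workaround, not something I'm proud of):

var loc = new Geolocator();
// AsTask() bridges the WinRT IAsyncOperation into a .Net Task.
var positionTask = loc.GetGeopositionAsync(
    TimeSpan.FromDays(7), TimeSpan.FromSeconds(10)).AsTask();

// Race the operation against our own ten-second timer, because on some
// machines (like mine) the operation simply never completes.
var winner = await Task.WhenAny(positionTask, Task.Delay(TimeSpan.FromSeconds(10)));

Geoposition position = null;
if (winner == positionTask)
    position = await positionTask;
// else: carry on without a location - positionTask is now abandoned and
// just sits there until the app shuts down.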

All in all - that's not good - and to be honest I'm embarrassed to have to deploy code to people's machines that will do such a thing.

But I have no choice, because for the first time in a long time, Microsoft have dropped quite a few balls with WinRT and its .Net interface - and all the time they remain rolling around on the floor, I'm finding the experience of developing for it a bit like swimming through treacle.

I should say, though, the WPF element is fantastic - it's a really mature environment now - and plugging into that has been a joy.  Documentation, though - again - a bit light (I keep having to go to the full framework's documentation to get decent XAML examples for some of the stuff that isn't new to RT).

Tuesday, 27 November 2012

Shameless plug - Use the new JobServe Web API to search for jobs your way

As my signature states - I work for JobServe in the UK.  Over the past few months I have been working on a new REST API (using the ASP.Net Web API from pre-beta to RTM) and it is now in official public beta.
Now of course, this isn't just so your good selves can start running your own job searches using whatever web-enabled client takes your fancy, but that is one of the cool benefits that has come out of it - and that's why we're calling it a public beta.
At the time of writing, you can use the API to run almost any job search that you can on the website.  As you would expect, you can get the results in XML or JSON (GET requests with search parameters in the query-string are supported as well).  Note that JSONP is not currently supported - but it's slated for a future release.

Sounds great, how do I get in?

In order to get cracking with this - you need to request an API token.  This is not an automated process but we should be able to get you set up in a day or two at most.  When filling out this form it's important to be honest about what you intend to do with the API.  That way, we'll know what kind of request patterns to look for and are less likely to revoke access at a later date.

…And then what?

Once you've got your API token - just follow the documentation that I've put together on the JobServe Web API mini-site to get started.  You can contact the support team by email from the site if you have a problem - but please don't contact us looking for code on how to make HTTP requests; that's outside our remit!
If your environment is .Net, and you don't mind sending/receiving XML, we are making available DataContractSerializer-compatible classes that are generated live from the latest API Types schema.  These are available from the Source Code download page.  These code files (C# and VB.Net available) give you compatible types for all the types used by the API.  Note that these classes aren't natively suitable for JSON, however - some might work with Json.Net, for example, but they're not intended to.
The most likely usage of this API is to extract the Permalink for a desirable job.  This will take you straight to that job on the JobServe website, from which point you can apply in the browser as normal.  The sharp-eyed among you will notice in the API types list that we clearly have objects that can manage the application process outside the browser - but at the moment I'm sorry to say we have no plans to make that bit public; this is chiefly for the security of our users rather than bloody-mindedness.
So, get yourself signed up for an API token and start writing!  Will you set up an Azure service to notify you of jobs?  Perhaps you'll write yourself a personalised search app?  If you want to share any code with us to help others (we might put it on the aforementioned source code page), then please do.

Can I use SSL?

If you're concerned about security - SSL endpoints are available for all operations - just use https:// in the endpoint URLs instead of http://.

Any code samples on the way?

I will try to put together a code example for accessing the service in C# 5 with the 'modern' HttpClient using async/await.  If you do have problems making HTTP requests - then I can also heartily recommend StackOverflow (I am known to hang around there a bit, so I might even see your question) as a great source of help.  Provided, that is, you try your best to solve your problem first!
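In the meantime, here's a rough sketch of what such a call might look like.  The base address, path, query-string parameter and token header name are all placeholders - check the API documentation for the real values:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApiClientSketch
{
    // Runs a job search and returns the raw JSON response body.
    // "https://api.example.com/" and "API-Token" are made-up stand-ins.
    static async Task<string> SearchAsync(string apiToken, string keywords)
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("https://api.example.com/");
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            client.DefaultRequestHeaders.Add("API-Token", apiToken);

            var response = await client.GetAsync(
                "jobs?keywords=" + Uri.EscapeDataString(keywords));
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Swap the Accept header to "text/xml" if you'd rather get XML back and use the generated DataContract classes.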

Happy coding!  Oh, and happy job hunting!

Wednesday, 19 September 2012

Adding ‘Deny’ functionality to AuthorizeAttribute in Asp.Net Web API

For the web service project I’m working on at the moment I need to be able to treat authorization differently based on the hostname of the URL that requests are made through.

To state more clearly – these web services will have a ‘sandbox’ mode in addition to the real mode, and the mode a request will operate under is determined as part of the controller-selection phase early in the Web API request lifecycle.  So, say my web services are hosted on one hostname; the sandbox will simply be served from a different hostname.

Please note – a discussion of how this is implemented is entirely outside the scope of this article; but I’ll just say that I’ve developed an in-house multi-tenancy layer for both MVC 4 and Web API that allows us to define ‘brands’ and, under those, you can then redefine content, controllers, and even the DI container that is used.

These services are going to require caller-level authentication for most operations via SCRAM Authentication (RFC 5802), and as such most controllers or actions will be decorated with the AuthorizeAttribute:

[Authorize(Roles = APICallerIdentity.Authenticated_Role)]

The value being passed to the Roles member there is just a constant I’m using to keep things consistent.

Now here’s the thing – in Sandbox mode you will be able to access many of the operations as a Guest caller (i.e. without needing to undertake authentication) but in the live mode you will not.  You can still run through the SCRAM Authentication process, of course, and once you do you will have an access token that can be used to authenticate future requests so that you are no longer treated as a guest.

To set this up, a filter that runs earlier in the pipeline (registered via the global HttpConfiguration.Filters collection) – which is responsible for authenticating the request – asks the current mode if guest access is enabled.  If it is, and no credentials are found in the request, then the guest user is set to the current thread’s identity with both the Guest_Role and Authenticated_Role roles set.  Thus, all the methods that require authentication normally can still work in sandbox mode.
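The real in-house filter isn't shown in this post, but a hypothetical sketch of that arrangement might look like this (the credential check and the mode lookup are stand-ins; APICallerIdentity is the constants class mentioned earlier):

```csharp
using System.Security.Principal;
using System.Threading;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Sketch only - the production filter performs full SCRAM authentication
// and asks the current (sandbox/live) mode whether guests are allowed.
public class GuestAuthenticationFilter : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        bool hasCredentials =
            actionContext.Request.Headers.Authorization != null;

        if (!hasCredentials && IsGuestAccessEnabled())
        {
            // A guest gets BOTH roles, so actions guarded with
            // [Authorize(Roles = Authenticated_Role)] still work in sandbox mode.
            Thread.CurrentPrincipal = new GenericPrincipal(
                new GenericIdentity("Guest"),
                new[] { APICallerIdentity.Guest_Role,
                        APICallerIdentity.Authenticated_Role });
        }
    }

    private static bool IsGuestAccessEnabled()
    {
        // Stand-in: the real code queries the current mode.
        return true;
    }
}
```

Because this filter is registered globally via HttpConfiguration.Filters, it runs before any AuthorizeAttribute on a controller or action.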

I then started work on an operation yesterday that I only ever want to be accessible to non-guest authenticated callers – regardless of whether we’re in sandbox or live mode – where was AuthorizeAttribute now!?

The simple fact is – it can’t do that, but it’s ridiculously simple to subclass it and make it so it can – here’s the full listing of AuthorizeExAttribute:

using System;
using System.Linq;
using System.Threading;
using System.Web.Http;

/// <summary>
/// Extends core AuthorizeAttribute to support denying users and roles.
/// An error occurs during authorization if a user or role is found in
/// both allow and deny rules.
/// </summary>
public class AuthorizeExAttribute : AuthorizeAttribute
{
    private static readonly string[] _emptyArray = new string[0];

    private static string[] SplitString(string value)
    {
        if (value == null)
            return _emptyArray;

        return (from s in value.Split(",".ToCharArray(),
                            StringSplitOptions.RemoveEmptyEntries)
                select s.Trim()).ToArray();
    }

    //initialised in constructor
    private readonly Lazy<bool> _isValid;

    private string _denyUsers;
    private string[] _denyUsersSplit = _emptyArray;

    /// <summary>
    /// Gets or sets the users that are to be denied access.
    /// Note: if any of these are also present in the base Users property,
    /// then an error occurs during authorization.
    /// </summary>
    public string DenyUsers
    {
        get { return _denyUsers ?? string.Empty; }
        set
        {
            _denyUsers = value;
            _denyUsersSplit = SplitString(value);
        }
    }

    private string _denyRoles;
    private string[] _denyRolesSplit = _emptyArray;

    /// <summary>
    /// Gets or sets the roles that are to be denied access.
    /// Note: if any of these are also present in the base Roles property,
    /// then an error occurs during authorization.
    /// However, if a user is allowed access, but has a role that is denied
    /// access, the deny rule wins.
    /// </summary>
    public string DenyRoles
    {
        get { return _denyRoles ?? string.Empty; }
        set
        {
            _denyRoles = value;
            _denyRolesSplit = SplitString(value);
        }
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="AuthorizeExAttribute"/>
    /// class.
    /// </summary>
    public AuthorizeExAttribute()
    {
        _isValid = new Lazy<bool>(() =>
        {
            //have to re-split the base Users and Roles (if this were
            //implemented within the AuthorizeAttribute this would not
            //be necessary)
            var usersSplit = SplitString(Users);
            var rolesSplit = SplitString(Roles);

            return !_denyUsersSplit.Any(u => usersSplit.Contains(u)) &&
                   !_denyRolesSplit.Any(r => rolesSplit.Contains(r));
        });
    }

    protected override bool IsAuthorized(
        System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        if (!_isValid.Value)
            throw new InvalidOperationException(
                "One or more users or roles appear in both deny and allow rules");

        var baseResult = base.IsAuthorized(actionContext);
        if (!baseResult)
            return false;
        //since it returned true we know there is an authenticated user.
        var user = Thread.CurrentPrincipal;
        if (_denyUsersSplit.Length > 0 && _denyUsersSplit.Contains(
                    user.Identity.Name, StringComparer.OrdinalIgnoreCase))
            return false;

        if (_denyRolesSplit.Length > 0 && _denyRolesSplit.Any(user.IsInRole))
            return false;

        return true;
    }
}
Please note that the coding style here is intended to mirror that of the core AuthorizeAttribute class – you can see what I mean by looking at the current version on codeplex – because, as the comments mention, this code could easily be merged into that (I know I can submit this myself – but I already have one pull request on the go – and still haven’t gotten around to putting the tests in that are needed to get that one included!).

Notice that an error occurs if a user or role is present in both the allow and deny lists.  Notice also that a ‘deny role’ rule takes precedence over an allow user or role rule.  So if ‘Jimmy’ is allowed, but has the denied role ‘BannedUsers’, then Jimmy will have to work on getting himself un-banned…

Similarly, if Jimmy has the ‘AllowedUsers’ role, but still has the ‘BannedUsers’ role, then it’s back to trying to make friends with us for Jimmy.
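To illustrate that precedence with a usage sketch (the controller name here is made up):

```csharp
// 'Jimmy' is named in the allow list, but if the authenticated user also
// holds the denied 'BannedUsers' role, the deny-role rule wins and
// IsAuthorized returns false.
[AuthorizeEx(Users = "Jimmy", DenyRoles = "BannedUsers")]
public class BannableController : ApiController
{
    // actions here...
}
```

Note that there's no overlap between the allow and deny lists here, so the attribute's validity check passes; it's only at authorization time that the deny-role rule kicks in.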

With this in place – all I have to do now to allow authenticated users but deny authenticated guest users is to change my use of AuthorizeAttribute to:

[AuthorizeEx(Roles = APICallerIdentity.Authenticated_Role,
    DenyRoles = APICallerIdentity.Guest_Role)]

And now authorization is denied for the action or controller on which it is applied if the caller is authenticated, but as a guest.

Happy coding!