Jekyll2022-01-13T08:12:58+00:00https://robdmoore.id.au/

Robert Daniel Moore’s Blog

Blog about software engineering, web development, agile/lean/Continuous Delivery, C#, ASP.NET and Microsoft Azure.

MsDeploy to Azure Web App with Application Insights extension enabled when deleting additional destination files2017-01-30T09:44:04+00:002017-01-30T09:44:04+00:00https://robdmoore.id.au/blog/2017/01/30/msdeploy-to-azure-web-app-with-application-insights-extension-enabled-when-deleting-additional-destination-files

<p>When performing an MsDeploy to an Azure Web App with the App Insights extension enabled, you may find something interesting happens if you use the option to delete additional files on the destination that don’t appear in the source. If you look at the deployment log you may see something like this:</p>
<!--more-->
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2017-01-30T07:29:27.5515545Z Info: Deleting file ({sitename}\ApplicationInsights.config).
2017-01-30T07:29:27.5515545Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.2.2.0\Microsoft.ApplicationInsights.2.2.0.nupkg).
2017-01-30T07:29:27.5515545Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.2.2.0).
2017-01-30T07:29:27.5515545Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Agent.Intercept.2.0.6\Microsoft.ApplicationInsights.Agent.Intercept.2.0.6.nupkg).
2017-01-30T07:29:27.5515545Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Agent.Intercept.2.0.6).
2017-01-30T07:29:27.5515545Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Azure.WebSites.2.2.0\Microsoft.ApplicationInsights.Azure.WebSites.2.2.0.nupkg).
2017-01-30T07:29:27.5515545Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Azure.WebSites.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.DependencyCollector.2.2.0\Microsoft.ApplicationInsights.DependencyCollector.2.2.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.DependencyCollector.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.PerfCounterCollector.2.2.0\Microsoft.ApplicationInsights.PerfCounterCollector.2.2.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.PerfCounterCollector.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Web.2.2.0\Microsoft.ApplicationInsights.Web.2.2.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.Web.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.WindowsServer.2.2.0\Microsoft.ApplicationInsights.WindowsServer.2.2.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.WindowsServer.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.2.2.0\Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.2.2.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.2.2.0).
2017-01-30T07:29:27.5525645Z Info: Deleting file ({sitename}\App_Data\packages\Microsoft.Web.Infrastructure.1.0.0.0\Microsoft.Web.Infrastructure.1.0.0.0.nupkg).
2017-01-30T07:29:27.5525645Z Info: Deleting directory ({sitename}\App_Data\packages\Microsoft.Web.Infrastructure.1.0.0.0).
2017-01-30T07:29:27.5535680Z Info: Deleting directory ({sitename}\App_Data\packages).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.Agent.Intercept.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.DependencyCollector.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.HttpModule.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.PerfCounterCollector.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.ServerTelemetryChannel.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.Web.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.AI.WindowsServer.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.ApplicationInsights.AzureWebSites.dll).
2017-01-30T07:29:27.5535680Z Info: Deleting file ({sitename}\bin\Microsoft.ApplicationInsights.dll).
</code></pre></div></div>
<p>The cool thing about this is that it gives you an indication of what the extension actually does. The fact that there is an App_Data/packages folder containing what are clearly unpacked NuGet packages tells us that the extension is installing a NuGet package into your site for you. That makes a lot of sense given you don’t need to install the extension if you installed the NuGet package yourself (I generally don’t bother because I don’t need App Insights locally and see it as a deployment concern, so I like App Service adding it for me :)).</p>
<p>Setting the MsDeploy option to delete extraneous files is very useful, so it’s not something I want to simply turn off. However, some <a href="http://mdavies.net/2012/08/12/microsofts-hidden-gem-msdeploy/">knowledge of MsDeploy</a> points towards a possible solution. In this case we can make use of the <a href="https://technet.microsoft.com/en-us/library/dd569089(v=ws.10).aspx">skip option</a> to specify that MsDeploy should ignore the affected files above.</p>
<p>Putting it all together, if you specify the following rules in your msdeploy.exe call then you should have success:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-skip:objectname='filePath',absolutepath='ApplicationInsights.config' -skip:objectname='dirPath',absolutepath='App_Data\\packages\\*.*' -skip:objectname='filePath',absolutepath='bin\\Microsoft.AI.*.dll' -skip:objectname='filePath',absolutepath='bin\\Microsoft.ApplicationInsights.*.dll'
</code></pre></div></div>
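<p>For reference, here is roughly where those skip rules sit in a full msdeploy.exe invocation. This is an illustrative template only - the package name, site name and publishing credentials are placeholders and the exact source/dest arguments will depend on how you publish:</p>

```shell
msdeploy.exe -verb:sync ^
  -source:package="MySite.zip" ^
  -dest:auto,computerName="https://{sitename}.scm.azurewebsites.net/msdeploy.axd?site={sitename}",userName="{publishUser}",password="{publishPassword}",authtype="Basic" ^
  -skip:objectname='filePath',absolutepath='ApplicationInsights.config' ^
  -skip:objectname='dirPath',absolutepath='App_Data\\packages\\*.*' ^
  -skip:objectname='filePath',absolutepath='bin\\Microsoft.AI.*.dll' ^
  -skip:objectname='filePath',absolutepath='bin\\Microsoft.ApplicationInsights.*.dll'
```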
<p>After doing that your deployment log should look something like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2017-01-30T08:10:26.7172425Z Info: Object filePath ({sitename}\ApplicationInsights.config) skipped due to skip directive 'CommandLineSkipDirective 1'.
2017-01-30T08:10:26.7182428Z Info: Object dirPath ({sitename}\App_Data\packages) skipped due to skip directive 'CommandLineSkipDirective 2'.
2017-01-30T08:10:26.7192429Z Info: Object filePath ({sitename}\bin\Microsoft.AI.Agent.Intercept.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7192429Z Info: Object filePath ({sitename}\bin\Microsoft.AI.DependencyCollector.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7192429Z Info: Object filePath ({sitename}\bin\Microsoft.AI.HttpModule.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7192429Z Info: Object filePath ({sitename}\bin\Microsoft.AI.PerfCounterCollector.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7192429Z Info: Object filePath ({sitename}\bin\Microsoft.AI.ServerTelemetryChannel.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7202428Z Info: Object filePath ({sitename}\bin\Microsoft.AI.Web.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7202428Z Info: Object filePath ({sitename}\bin\Microsoft.AI.WindowsServer.dll) skipped due to skip directive 'CommandLineSkipDirective 3'.
2017-01-30T08:10:26.7202428Z Info: Object filePath ({sitename}\bin\Microsoft.ApplicationInsights.AzureWebSites.dll) skipped due to skip directive 'CommandLineSkipDirective 4'.
2017-01-30T08:10:26.7202428Z Info: Object filePath ({sitename}\bin\Microsoft.ApplicationInsights.dll) skipped due to skip directive 'CommandLineSkipDirective 4'.
</code></pre></div></div>
<h2 id="what-to-do-when-accidentally-deleting-app-insights-files">What to do when accidentally deleting App Insights files</h2>
<p>If you find that you have accidentally deleted the App Insights files, simply remove the App Insights extension and then re-add it and it should work again.</p>
<h2 id="adding-the-sdk-to-your-application">Adding the SDK to your application</h2>
<p>If you eventually end up adding the App Insights SDK to your application then take note that you will need to make sure the <a href="https://twitter.com/davidebbo/status/858016577127665665">version of the extension and the version of the SDK DLLs match</a>, otherwise you’ll get a version mismatch exception on app startup.</p>
<p>You still need to install the extension because the SDK alone doesn’t collect <a href="https://docs.microsoft.com/en-us/azure/application-insights/app-insights-monitor-performance-live-website-now">all information</a>.</p>

Rob Moore

NDC Sydney 2016 videos2017-01-30T09:21:48+00:002017-01-30T09:21:48+00:00https://robdmoore.id.au/blog/2017/01/30/ndc-sydney-2016-videos

<p>Hi all,</p>
<p>Just a quick note that the videos for the presentations I delivered with Matt Davies at NDC Sydney 2016 are up:</p>
<ul>
<li><a href="https://vimeo.com/189830215">Microtesting: How We Set Fire To The Testing Pyramid While Ensuring Confidence</a></li>
<li><a href="https://vimeo.com/200279525">Modern Authentication</a></li>
</ul>
<p>It was an amazing conference to attend, let alone present at, and I’m looking forward to this year’s conference!</p>

Rob Moore

Modern Auth Presentation2016-05-03T14:30:13+00:002016-05-03T14:30:13+00:00https://robdmoore.id.au/blog/2016/05/03/modern-auth-presentation

<p>This morning I presented a talk on modern authentication with Matt Davies at the Yow! West conference.</p>
<p>I just published the slides at <a href="https://github.com/MRCollective/ModernAuthPresentation">https://github.com/MRCollective/ModernAuthPresentation</a>.</p>

Rob Moore

Announcing release of ChameleonForms 2.0.0 and new documentation site2016-01-03T05:12:06+00:002016-01-03T05:12:06+00:00https://robdmoore.id.au/blog/2016/01/03/announcing-release-of-chameleonforms-2-0-0-and-new-documentation-site

<p>I’m somewhat more subdued with my excitement for announcing this <a href="/blog/2013/11/17/chameleonforms-1-0-released/">than I was for 1.0</a>. In fact I just had a chuckle to myself in re-reading that post :) (oh and if you were wondering - did Matt and I enjoy Borderlands 2? Yes we very much did, it’s a great game).</p>
<p>Nonetheless, there is some really cool stuff in ChameleonForms 2.0 and I’m particularly excited about the new PartialFor functionality, which I will describe below. My peak excitement about PartialFor was months ago when the code was actually written, but Matt and I have had a particularly busy second half of the year with our work roles expanding in scope and a healthy prioritisation of our personal lives so it took a while to get our act together and get the code merged and released.</p>
<p>There have been a range of point releases that added a bunch of functionality to ChameleonForms since the 1.0 release and before this 2.0 release. You can <a href="https://github.com/MRCollective/ChameleonForms/releases">peruse the releases list</a> to see the features.</p>
<!--more-->
<h2 id="new-docs-site">New docs site</h2>
<p>I’ve taken the lead (as well as a bunch of advice - thanks mate) from <a href="http://jake.ginnivan.net/">Jake Ginnivan</a> and moved the <a href="http://readthedocs.org/projects/chameleonforms/">documentation for ChameleonForms</a> to <a href="http://readthedocs.org/">Read the Docs</a>. The new documentation site is now generated from files in the <a href="https://github.com/MRCollective/ChameleonForms/tree/master/docs">source repository’s docs folder</a>. This is awesome because it means that the documentation is tied to the current state of the software - no more documentation that is ahead or behind, and pull requests can now contain documentation changes corresponding to the code changes.</p>
<p>For those who are curious the process I followed to migrate from GitHub wiki to Read the Docs was:</p>
<ol>
<li>Clone the wiki</li>
<li>Move all the files into the docs folder of the repository</li>
<li><a href="https://github.com/MRCollective/ChameleonForms/blob/master/mkdocs.yml">Add a mkdocs.yml file to the root of the repository listing all of the files</a> (this means I need to keep a list of the files in there, but I don’t mind since it gives me control of the menu; you can omit the mkdocs.yml file if you want and it will place all of the files in the menu alphabetically)</li>
<li>Sign up for Read the Docs and create a new project linked to the GitHub repository</li>
<li>Enable the fenced code markdown extension</li>
<li>Change all internal documentation links to reference the .md file (in my case I had to search for all links to wiki/* and remove the wiki/ and add in the .md)</li>
<li>Change any occurrences of <code class="highlighter-rouge">c#</code> with <code class="highlighter-rouge">csharp</code> (GitHub supports using c# for the fenced code snippet, but mkdocs doesn’t)</li>
<li>Check all of the pages since some of them might render weirdly - I had to add some extra spaces between paragraphs and code blocks / bullet lists for instance since the markdown parser is slightly different</li>
</ol>
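<p>For illustration, a minimal mkdocs.yml of that era looked something like the following. The page names here are invented and the schema has changed in newer mkdocs versions, so treat this as a sketch rather than the real file (which is linked above):</p>

```yaml
site_name: ChameleonForms
# Enables fenced code blocks (the markdown extension from step 5)
markdown_extensions:
  - fenced_code
# Listing pages explicitly controls the menu order; omit this section
# and mkdocs will order the files alphabetically instead
pages:
  - ['index.md', 'Home']
  - ['field-configuration.md', 'Field Configuration']
```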
<p>There are a bunch of different formats that give more flexibility that Read the Docs supports (e.g. restructured text), but I’m very happy with the markdown support.</p>
<h2 id="20-minor-features-and-bug-fixes">2.0 minor features and bug fixes</h2>
<p>Check out the <a href="https://github.com/MRCollective/ChameleonForms/releases/tag/2.0.0">release notes for the 2.0 release</a> to see a bunch of minor new features and bug fixes that have been contributed by a bunch of different people - thanks to everyone that contributed! It always gives Matt and me a rush when we receive a pull request from someone :).</p>
<h2 id="partialfor-feature">PartialFor feature</h2>
<p>This is the big feature. <a href="https://github.com/MRCollective/ChameleonForms/blob/master/BREAKING_CHANGES.md#version-200">A few breaking changes</a> went into the 2.0 release in order to make this possible. This is the first of the <a href="https://github.com/MRCollective/ChameleonForms/issues/107">extensibility features</a> we have added to ChameleonForms.</p>
<p>Essentially, it allows us to contain a part of a form in a partial view, with full type-safety and intellisense. The partial can be included directly against a form or inside a form section. This makes things like sharing common parts of forms between create and edit screens possible. It allows you to remove even more repetition from your forms, while keeping a clean separation between forms that are actually separate.</p>
<p>The best way to see the power of the feature in its glory is by glancing over the <a href="https://github.com/MRCollective/ChameleonForms/blob/master/ChameleonForms.AcceptanceTests/PartialForTests.Should_render_correctly_when_used_via_form_or_section_and_when_used_for_top_level_property_or_sub_property.approved.html">acceptance test for it</a>. The output should be fairly self-explanatory.</p>
<p>There is also a <a href="http://chameleonforms.readthedocs.org/en/latest/partials/">documentation page on the feature</a>.</p>
<h2 id="is-chameleonforms-still-relevant">Is ChameleonForms still relevant?</h2>
<p>We were very lucky to be included in <a href="http://www.hanselman.com/blog/NuGetPackageOfTheWeekADifferentTakeOnASPNETMVCFormsWithChameleonForms.aspx">Scott Hanselman’s NuGet package of the week</a> earlier this year. The comments of Scott’s post are very interesting because it seems our library is somewhat controversial. A lot of people are saying that single page applications and the increasing prevalence of JavaScript make creating forms in ASP.NET MVC redundant.</p>
<p>Matt and I have spent a lot more time in JavaScript land than MVC of late and we concede that there are certainly a lot more scenarios now where it doesn’t make sense to break out MVC. That means ChameleonForms isn’t as relevant as when we first started developing it.</p>
<p>In saying that, we still firmly believe that there is a range of scenarios for which MVC is very much appropriate. Where you don’t need the flexibility of an API, you need pure speed of development (in particular when developing prototypes), or you’re building CRUD applications or heavily forms-based applications (especially where you need consistency across your forms), we believe MVC + ChameleonForms is very much a good choice and often the best choice.</p>

Rob Moore

Recent talks2015-06-01T03:17:29+00:002015-06-01T03:17:29+00:00https://robdmoore.id.au/blog/2015/06/01/recent-talks

<p>I recently gave a couple of conference talks:</p>
<ul>
<li>
<p><a href="https://a.confui.com/public/conferences/54fae12ed02ecad6f60000a8/locations/54fae12ed02ecad6f60000a9/speakers?framehost=http%3A%2F%2Fwest.yowconference.com.au%2F">2015</a> <a href="http://west.yowconference.com.au/">Yow! West conference</a>; joint presentation with <a href="http://mdavies.net/">Matt Davies</a> on Microtesting:</p>
<ul>
<li>
<p>Do you want to write fewer tests for the same amount of confidence?
Do you want to print out the testing pyramid on a dot matrix printer, take it outside and set fire to it?</p>
<p>How confident are you that you can survive the refactoring apocalypse without breaking your tests?</p>
<p>As consultants, we get to see how testing is performed across many different organisations and we have a chance to experiment with different testing strategies across multiple projects. Through this experience, we have developed a pragmatic process for setting an initial testing strategy that is as simple as possible and iterating on that strategy over time to evolve it based on how it performs. We have also settled on a style of testing that has proved to be very effective at reducing testing effort while maintaining (or even improving) confidence from our tests.</p>
<p>This talk will focus on some of our learnings and we will cover the different types of testing and how they interact, breaking apart the usual practice of testing all applications in the same way, the mysterious relationship between speed and confidence, how we were able to throw away the testing pyramid and a number of techniques that have worked well for us when testing our applications.</p>
</li>
<li><a href="https://github.com/MRCollective/MicrotestingPresentation">Slides published to GitHub</a></li>
<li>There will be a video; I’ll link to it from the GitHub repository when it’s published</li>
</ul>
</li>
<li>
<p><a href="https://www.crowdcast.io/e/anzcoders2015">2015</a> <a href="http://www.anzcoders.com/">ANZ Coders virtual conference</a>; presentation on Applying useful testing patterns using TestStack.Dossier:</p>
<ul>
<li>
<p>The Object Mother, Test Data Builder, Anonymous Variable/Value, equivalence class and constrained non-determinism patterns/concepts can help you make your tests more readable/meaningful, more terse and more maintainable when used in the right way.</p>
<p>This talk will explain why and where the aforementioned patterns are useful and the advantages they can bring and show examples in code using a library I recently released called TestStack.Dossier.</p>
</li>
<li><a href="https://github.com/robdmoore/TestingPatternsWithDossierPresentation">Slides published to GitHub</a></li>
<li><a href="https://www.youtube.com/watch?v=CJSK8WhSA84">Video</a></li>
</ul>
</li>
</ul>

Rob Moore

Announcing the release of TestStack.Dossier 3.02015-05-17T07:58:29+00:002015-05-17T07:58:29+00:00https://robdmoore.id.au/blog/2015/05/17/announcing-the-release-of-teststack-dossier-3-0

<p>I’ve added a blog post on the TestStack blog <a href="http://www.teststack.net/v1.0/blog/announcing-teststackdossier-v30">announcing the release of v3.0 of Dossier</a>.</p>

Rob Moore

Azure Resource Manager intro presentation and workshop2015-05-06T09:32:59+00:002015-05-06T09:32:59+00:00https://robdmoore.id.au/blog/2015/05/06/azure-resource-manager-intro-presentation-and-workshop

<p>I attended the <a href="http://www.meetup.com/Perth-Cloud/events/221691559/">Azure Saturday</a> event here in Perth last weekend. <a href="http://mdavies.net/">Matt</a> and I did a basic intro presentation on Azure Resource Manager and ran an associated workshop, which we have <a href="https://github.com/MRCollective/AzureResourceManager_MicrosoftAzureSaturdayPerth2015">published to our GitHub organisation</a>.</p>
<p>Azure Resource Manager is one of the most important things to understand about Azure if you plan on using it since it’s the platform that underpins the provisioning and management of all resources in Azure going forward.</p>
<p><a href="http://media.robdmoore.id.au/uploads/2015/05/highres_437006950.jpeg"><img src="/assets/highres_437006950-300x225.jpeg" alt="Azure Saturday Perth 2015 presentation" /></a></p>

Rob Moore

Automating Azure Resource Manager2015-04-30T11:07:18+00:002015-04-30T11:07:18+00:00https://robdmoore.id.au/blog/2015/04/30/automating-azure-resource-manager

<p>I’ve recently been (finally) getting up to speed with <a href="http://channel9.msdn.com/Events/Build/2014/2-607">Azure Resource Manager</a> (ARM). It’s the management layer that drives the new <a href="https://portal.azure.com">Azure Portal</a> and also features like <a href="http://azure.microsoft.com/en-us/documentation/articles/azure-preview-portal-using-resource-groups/">Resource Groups</a> and <a href="http://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-configure/">Role-Based Access Control</a>.</p>
<p>You can interact with ARM in a number of ways:</p>
<ul>
<li><a href="https://portal.azure.com">new Portal</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/dn654592.aspx">PowerShell commandlets</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/azure/dn790568.aspx">HTTP API</a></li>
<li><a href="https://github.com/projectkudu/ARMClient">ARMClient</a></li>
<li><a href="http://azure.microsoft.com/en-us/documentation/api/management-resource-sdk-net/">.NET Library</a></li>
</ul>
<p>To authenticate to the ARM API you need to use an Azure AD credential. This is all well and good if you are logged into the Portal or running a script on your computer (where a web browser login prompt to Azure AD will pop up), but neither is available when automating your API calls.</p>
<p>Luckily there is a <a href="http://blog.davidebbo.com/2014/12/azure-service-principal.html">post by David Ebbo</a> that describes how to generate a <a href="https://msdn.microsoft.com/en-us/library/azure/dn132633.aspx">Service Principal</a> (equivalent of the concept of an <a href="https://servergeeks.wordpress.com/2012/10/29/service-account-in-ad/">Active Directory Service Account</a>) attached to an <a href="https://msdn.microsoft.com/en-us/library/azure/dn151122.aspx">Azure AD application</a>.</p>
<p>The only problem with this post is that there are a few manual steps and it’s quite fiddly to do (by David’s own admission). I’ve developed a PowerShell module that you can use to idempotently create a Service Principal against either an entire Azure subscription or against a specific Resource Group that you can then use to automate your ARM code.</p>
<p>I’ve <a href="https://github.com/robdmoore/azure-resource-manager-api-credentials">published the code to GitHub</a>.</p>
<p>In order to use it you need to:</p>
<ol>
<li>Ensure you have the <a href="http://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/">Windows Azure PowerShell commandlets</a> installed</li>
<li>Download the <a href="https://github.com/robdmoore/azure-resource-manager-api-credentials/blob/master/Set-ARMServicePrincipalCredential.psm1">Set-ARMServicePrincipalCredential.psm1</a> file from my GitHub repository</li>
<li>Download the Azure Key Vault PowerShell commandlets and put the AADGraph.ps1 file next to the file from GitHub</li>
<li>Execute the Set-ARMServicePrincipalCredential command as per the <a href="https://github.com/robdmoore/azure-resource-manager-api-credentials/blob/master/Examples.ps1">examples on GitHub</a></li>
</ol>
<p>This will pop up a web browser prompt to authenticate (this will happen twice since I’m using two disjointed libraries - hopefully this will get resolved if the Azure AD commandlets end up becoming integrated with the Azure commandlets) and give you the following information:</p>
<ul>
<li>Tenant ID</li>
<li>Client ID</li>
<li>Password</li>
</ul>
<p>From there you have all the information you need to authenticate your automated script with ARM.</p>
<p>If using PowerShell then this will look like:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build a PSCredential from the Service Principal's Client ID and password
$securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
$servicePrincipalCredentials = New-Object System.Management.Automation.PSCredential ($ClientId, $securePassword)
# Authenticate as the Service Principal rather than an interactive user
Add-AzureAccount -ServicePrincipal -Tenant $TenantId -Credential $servicePrincipalCredentials | Out-Null
</code></pre></div></div>
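<p>Once authenticated, subsequent commandlets in the session run as the Service Principal. With the pre-1.0 Azure PowerShell module of the time that meant switching into Resource Manager mode first (commandlet names changed in later module versions, so take this as indicative of the era rather than current syntax):</p>

```powershell
# Switch the session into Resource Manager mode (pre-1.0 Azure PowerShell)
Switch-AzureMode -Name AzureResourceManager

# Any ARM call now executes as the Service Principal, e.g. list resource groups
Get-AzureResourceGroup
```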
<p>If using ARMClient then this will look like:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> armclient spn $TenantId $ClientId $Password | Out-Null
</code></pre></div></div>
<p>One last note: make sure you store the password securely when automating the script, e.g. <a href="https://confluence.jetbrains.com/display/TCD9/Typed+Parameters">TeamCity password</a>, <a href="https://utoolity.atlassian.net/wiki/pages/viewpage.action?pageId=19464201">Bamboo password</a> or <a href="http://docs.octopusdeploy.com/display/OD/Variables">Octopus sensitive variable</a>.</p>

Rob Moore

Testing AngularJS directives using Approval Tests2015-04-22T15:00:37+00:002015-04-22T15:00:37+00:00https://robdmoore.id.au/blog/2015/04/22/testing-angularjs-directives-using-approval-tests

<p>I recently had an application I was developing using AngularJS that contained a fair number of directives that were somewhat complex, in that the logic that backed them was contained in services that called HTTP APIs. The intent was to provide a single JavaScript file that designers at the company I was working at could include and then use to build product pages with just HTML (via the directives).
I needed to provide some confidence when making changes to the directives and pin down the behaviour.</p>
<p>As explained below, I ended up doing this via approval tests and I’ve published <a href="https://github.com/robdmoore/angular-directive-approval-tests">how I did it on GitHub</a>.</p>
<h2 id="why-i-wanted-to-use-approval-tests">Why I wanted to use Approval Tests</h2>
<p>In order to test these directives I didn’t want to have to perform tedious DOM inspection code to determine if the directives did what I wanted. Most AngularJS directive testing examples you will find on the Internet tell you to do this though, including the <a href="https://docs.angularjs.org/guide/unit-testing#testing-directives">official documentation</a>.</p>
<blockquote>
<p>Side note: in my research I stumbled across <a href="https://github.com/vojtajina/ng-directive-testing">the ng-directive-testing library</a>, which I feel is an improvement over most example code out there and if you do want to inspect the DOM as part of your testing I recommend you check it out.</p>
</blockquote>
<p>This style of testing works fine for small, simple directives, but I felt would be tedious to write and fragile for my use case. Instead, I had an idea that I wanted to apply the <a href="http://approvaltests.com/">approval tests</a> technique.</p>
<p>I use this technique when I have a blob of JSON, XML, HTML, text etc. that I want to verify is what I expect and pin it down without having to write tedious assertions against every aspect of it - hence this technique fitted in perfectly with what I wanted to achieve with testing the directives.</p>
<h2 id="how-i-did-it">How I did it</h2>
<p>Given that directives need the DOM it was necessary to run the tests in a web browser. In this case I decided to do it via <a href="https://github.com/karma-runner/karma">Karma</a> since I was already using Node JS to <a href="https://github.com/mishoo/UglifyJS2">uglify</a> the JavaScript.</p>
<p>ApprovalTests requires access to the filesystem in order to write the approval files, and the ability to spawn processes on the computer to pop open a diff viewer if there is a difference in the output. Neither is possible from the web browser. Thus, even though there is a <a href="https://github.com/approvals/Approvals.NodeJS">JavaScript port of ApprovalTests</a> (for NodeJS) I wasn’t able to use it directly in my tests.</p>
<p>While contemplating my options, it occurred to me I could spin up a NodeJS server to run the approvals code and simply call it from the browser - it’s not much different to how Karma gets test results. After that realisation I stumbled across <a href="https://github.com/kristofferahl/approvals-server">approvals-server</a> - someone had already implemented it! Brilliant!</p>
<p>From there it was simply a matter of stitching up the code to all work together - in my case using Grunt as the Task Runner.</p>
<h2 id="example-code">Example code</h2>
<p>To that end, I have <a href="https://github.com/robdmoore/angular-directive-approval-tests">published a repository</a> with a contrived example that demonstrates how to test a directive using Approval Tests.</p>
<p>The main bits to look at are:</p>
<ul>
<li><code class="highlighter-rouge">gruntfile.js</code> - contains the grunt configuration I used including my Grunt tasks for the approval server, which probably should be split into a separate file or published to npm (feel free to send me a PR)</li>
<li><code class="highlighter-rouge">app/spec/displayproducts.directive.spec.js</code> - contains the example test in all it’s glory</li>
<li><code class="highlighter-rouge">app/test-helpers/approvals/myapp-display-products-should-output-product-information.approved.txt</code> - the approval file for the example test</li>
<li><code class="highlighter-rouge">app/test-helpers/approvals.js</code> - the code to get name of currently executing Jasmine 2 test and the code to send an approval to the approval server</li>
<li><code class="highlighter-rouge">app/test-helpers/heredoc.js</code> - a <a href="http://www.tuxradar.com/practicalphp/2/6/3">heredoc</a> function to allow for easy specification of multi-line markup</li>
<li><code class="highlighter-rouge">app/test-helpers/directives.js</code> - the test code that compiles the directive, cleans it up for a nice diff and passes it to be verified</li>
</ul>
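<p>The heredoc helper listed above is only a few lines; the usual trick (per the linked article, sketched here - the exact regexes in the repository may differ) is to stringify a function and extract the block comment inside it:</p>

```javascript
// Extract the body of a /* ... */ comment from a function's source text,
// giving a poor man's multi-line string literal.
function heredoc(fn) {
  return fn.toString()
    .replace(/^[^\/]+\/\*!?/, '')  // strip everything up to and including the opening /*
    .replace(/\*\/[^\/]+$/, '')    // strip the closing */ and the function's tail
    .trim();                       // drop the surrounding newlines
}
```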
<h2 id="notable-bits">Notable bits</h2>
<h3 id="style-guide">Style guide</h3>
<p>If you are curious about why I wrote my Angular code the way I have, it’s because I’m following <a href="https://github.com/johnpapa/angular-styleguide">John Papa’s AngularJS style guide</a>, which I think is very good and greatly improves the maintainability of the resulting code.</p>
<h3 id="taming-karma">Taming karma</h3>
<p>I managed to get the following working for Karma:</p>
<ul>
<li>Watch build that runs tests whenever a file changes - see the <code class="highlighter-rouge">karma:watch</code> and <code class="highlighter-rouge">dev</code> tasks</li>
<li>Default build including tests - see the <code class="highlighter-rouge">karma:myApp</code> and <code class="highlighter-rouge">default</code> tasks</li>
<li>A build that pops up a Chrome window to allow for debugging - see the <code class="highlighter-rouge">karma:debug</code> and <code class="highlighter-rouge">debugtests</code> tasks</li>
</ul>
<h3 id="simultaneous-approval-server-runs">Simultaneous approval server runs</h3>
<p>I managed to allow for the <code class="highlighter-rouge">dev</code> task to be running while running <code class="highlighter-rouge">default</code> by including the <code class="highlighter-rouge">isPortTaken</code> code to determine if the approvals server port is already taken.</p>
<blockquote>
<p>Side note: if you are using this code across multiple projects consecutively then be careful because the approval server might be running from the other project. A way to avoid this would be to change the port per project (in both <code class="highlighter-rouge">gruntfile.js</code> and <code class="highlighter-rouge">approvals.js</code>).</p>
</blockquote>
<h3 id="improved-approval-performance-on-windows">Improved approval performance on Windows</h3>
<p>I found that the performance of the approvals library was <a href="https://github.com/approvals/Approvals.NodeJS/issues/20">very slow on Windows</a>, but with some assistance from the maintainers I worked out what the cause was and submitted a <a href="https://github.com/approvals/Approvals.NodeJS/pull/27">pull request</a>. The version in npm has been updated, but there are <a href="https://github.com/kristofferahl/approvals-server/issues/1">currently no updates to approvals-server to use it</a>. To overcome this I have used the <code class="highlighter-rouge">npm-shrinkwrap.json</code> file to <a href="http://blog.nodejs.org/2012/02/27/managing-node-js-dependencies-with-shrinkwrap/">override</a> the version of the approvals library.</p>
<h3 id="get-currently-running-test-name-in-jasmine-2">Get currently running test name in Jasmine 2</h3>
<p>I wanted the approval test output file to be automatically derived from the currently-running test name (similar to what happens on .NET). It turns out that is a lot harder to achieve in Jasmine 2, but with some Googling/StackOverflowing I managed to get it working as per the code in the <code class="highlighter-rouge">approvals.js</code> file.</p>
<h3 id="cleaning-up-the-output-markup-for-a-good-diff">Cleaning up the output markup for a good diff</h3>
<p>AngularJS leaves a bunch of stuff in the resulting markup such as HTML comments, superfluous attributes and class names, etc. In order to remove all of this so the approved file is clean and in order to ensure the whitespace in the output is both easy to read and the same no matter what browser is being used I apply some modifications to the markup as seen in <code class="highlighter-rouge">directives.js</code>.</p>
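<p>As a flavour of what that clean-up involves, here is a simplified sketch (hypothetical regexes - the real rules live in <code class="highlighter-rouge">directives.js</code>):</p>

```javascript
// Strip the artefacts Angular leaves behind so the approved file diffs cleanly:
// HTML comments (e.g. ngRepeat markers), ng-* attributes, and class attributes
// that contain only ng-* classes. Mixed class lists are left alone in this sketch.
function cleanForApproval(html) {
  return html
    .replace(/<!--[\s\S]*?-->/g, '')               // HTML comments
    .replace(/\s*class="(?:ng-[^"\s]+\s*)+"/g, '') // ng-* only class attributes
    .replace(/\s*ng-[\w-]+="[^"]*"/g, '')          // ng-* attributes
    .replace(/^\s*[\r\n]/gm, '')                   // blank lines left behind
    .trim();
}
```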
<h3 id="easily-specifying-multi-line-test-markup">Easily specifying multi-line test markup</h3>
<p>I pulled in a heredoc function I found on StackOverflow as seen in <code class="highlighter-rouge">heredoc.js</code> and used in the example test, e.g.:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nx">DirectiveFixture</span><span class="p">.</span><span class="nx">verify</span><span class="p">(</span><span class="nx">heredoc</span><span class="p">(</span><span class="kd">function</span> <span class="p">()</span> <span class="p">{</span><span class="cm">/*
<myapp-display-products category="car" product="car">
<div></div>
</myapp-display-products>
*/</span><span class="p">}));</span>
</code></pre></div></div>
<p>This is much nicer than having to concatenate one string per line or append a <code class="highlighter-rouge">\</code> character at the end of each line, both of which aren’t handled nicely by the IDE I’m using.</p>Rob MooreAnnouncing TestStack.Dossier library2015-04-22T08:18:34+00:002015-04-22T08:18:34+00:00https://robdmoore.id.au/blog/2015/04/22/announcing-teststack-dossier-library<p>I’m pleased to announce the addition of a (somewhat) new library to the <a href="https://github.com/teststack">TestStack family</a> called TestStack.Dossier. I say somewhat new because it’s a version 2 of an existing library that I published called <a href="/blog/2013/05/26/announcing-ntestdatabuilder-library/">NTestDataBuilder</a>. If you hadn’t already heard about that library here is the one liner (which has only changed slightly with the rename):</p>
<blockquote>
<p>TestStack.Dossier provides you with the code infrastructure to easily and quickly generate test fixture data for your automated tests in a terse, readable and maintainable way using the Test Data Builder, anonymous value and equivalence class patterns.</p>
</blockquote>
<p>The release of TestStack.Dossier culminates a few months of (off and on) work by myself and fellow TestStacker <a href="http://www.michael-whelan.net/">Michael Whelan</a> to bring a range of enhancements. The library itself is very similar to NTestDataBuilder, but there <a href="https://github.com/TestStack/TestStack.Dossier/blob/master/BREAKING_CHANGES.md">are some minor breaking changes</a>. I decided to reduce confusion by keeping the version number consistent between libraries so TestStack.Dossier starts at version 2.0.</p>
<h2 id="so-why-should-i-upgrade-to-v2-anyway">So why should I upgrade to v2 anyway?</h2>
<p>There is more to TestStack.Dossier v2 than just a name change, a lot more. I’ve taken my learnings (and frustrations) from a couple of years of usage of the library into account to add in a bunch of improvements and new features that I’m really excited about!</p>
<blockquote>
<p>Side note: <a href="/blog/2013/05/26/test-data-generation-the-right-way-object-mother-test-data-builders-nsubstitute-nbuilder/">my original post on combining the test data builder pattern with the object mother pattern</a> and <a href="https://github.com/robdmoore/TestWestTestDataSustainabilityPresentation">follow-up presentation</a> still holds very true - this combination of patterns has been invaluable and has led to terser, more readable tests that are easier to maintain. I still highly recommend this approach (I use <s>NTestDataBuilder</s> TestStack.Dossier for the test data builder part).</p>
</blockquote>
<h3 id="anonymous-value-support">Anonymous value support</h3>
<p>As explained in my anonymous variables post (TBW(ritten) - future proofing this post, or setting myself up for disappointment :P) in my <a href="/blog/2014/01/23/test-naming-automated-testing-series/">automated testing series</a>, the use of the <a href="http://blogs.msdn.com/b/ploeh/archive/2008/11/17/anonymous-variables.aspx">anonymous variable pattern</a> is a good pattern to use when you want to use values in your tests whose exact value isn’t significant. By including a specific value you are making it look like that value is important in some way - stealing cognitive load from the test reader while they figure out that the value in fact doesn’t matter.</p>
<p>This is relevant when defining a test data builder because of the initial values that you set the different parameters to by default. For instance, the example code for NTestDataBuilder on the readme had something like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CustomerBuilder</span> <span class="p">:</span> <span class="n">TestDataBuilder</span><span class="p"><</span><span class="n">Customer</span><span class="p">,</span> <span class="n">CustomerBuilder</span><span class="p">></span>
<span class="p">{</span>
<span class="k">public</span> <span class="nf">CustomerBuilder</span><span class="p">()</span>
<span class="p">{</span>
<span class="nf">WithFirstName</span><span class="p">(</span><span class="s">"Rob"</span><span class="p">);</span>
<span class="nf">WithLastName</span><span class="p">(</span><span class="s">"Moore"</span><span class="p">);</span>
<span class="nf">WhoJoinedIn</span><span class="p">(</span><span class="m">2013</span><span class="p">);</span>
<span class="p">}</span>
<span class="k">public</span> <span class="n">CustomerBuilder</span> <span class="nf">WithFirstName</span><span class="p">(</span><span class="kt">string</span> <span class="n">firstName</span><span class="p">)</span>
<span class="p">{</span>
<span class="nf">Set</span><span class="p">(</span><span class="n">x</span> <span class="p">=></span> <span class="n">x</span><span class="p">.</span><span class="n">FirstName</span><span class="p">,</span> <span class="n">firstName</span><span class="p">);</span>
<span class="k">return</span> <span class="k">this</span><span class="p">;</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>In that case the values <code class="highlighter-rouge">"Rob"</code>, <code class="highlighter-rouge">"Moore"</code> and <code class="highlighter-rouge">2013</code> look significant on initial inspection. In reality it doesn’t matter what they are; any test where those values matter should specify them to <a href="/blog/2014/02/23/making-intent-clear-derived-values-automated-testing-series/">make the intent clear</a>.</p>
<p>One of the changes we have made for v2 is to automatically generate an anonymous value for each requested value (using <code class="highlighter-rouge">Get</code>) if none has been specified for it (using <code class="highlighter-rouge">Set</code>). This not only allows you to get rid of those insignificant values, but it allows you to trim down the constructor of your builder - making the builders terser and quicker to write.</p>
<p>Given we aren’t talking about variables but rather values I have thus named the pattern anonymous values rather than anonymous variables.</p>
<p>There are a number of default conventions that are followed to determine what value to use via the new <a href="https://github.com/TestStack/TestStack.Dossier/blob/master/TestStack.Dossier/AnonymousValueFixture.cs">Anonymous Value Fixture</a> class. This works through the application of anonymous value suppliers - which are processed in order to determine if a value can be provided and if so a value is retrieved. At the time of writing the default suppliers are the following (applied in this order):</p>
<ul>
<li><code class="highlighter-rouge">DefaultEmailValueSupplier</code> - Supplies an email address for all string properties with a property name containing <code class="highlighter-rouge">email</code></li>
<li><code class="highlighter-rouge">DefaultFirstNameValueSupplier</code> - Supplies a first name for all string properties with a property name containing <code class="highlighter-rouge">firstname</code> (case insensitive)</li>
<li><code class="highlighter-rouge">DefaultLastNameValueSupplier</code> - Supplies a last name for all string properties with a property name containing <code class="highlighter-rouge">lastname</code> or <code class="highlighter-rouge">surname</code> (case insensitive)</li>
<li><code class="highlighter-rouge">DefaultStringValueSupplier</code> - Supplies the property name followed by a random GUID for all string properties</li>
<li><code class="highlighter-rouge">DefaultValueTypeValueSupplier</code> - Supplies an <a href="http://blog.ploeh.dk/2009/04/03/CreatingNumbersWithAutoFixture/">AutoFixture generated value</a> for any value types (e.g. int, double, etc.)</li>
<li><code class="highlighter-rouge">DefaultValueSupplier</code> - Supplies default(T)</li>
</ul>
<p>This gets you started for the most basic of cases, but from there you have a lot of flexibility to apply your own suppliers on both a global basis (via <code class="highlighter-rouge">AnonymousValueFixture.GlobalValueSuppliers</code>) and a local basis for each fixture instance (via <code class="highlighter-rouge">fixture.LocalValueSuppliers</code>) - you just need to implement <code class="highlighter-rouge">IAnonymousValueSupplier</code>. See the <a href="https://github.com/TestStack/TestStack.Dossier/blob/master/TestStack.Dossier.Tests/GetAnonymousTests.cs">tests for examples</a>.</p>
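<p>The supplier idea itself is library-agnostic: walk an ordered list, take the first supplier whose predicate matches the property, and ask it for a value. A sketch of that dispatch in JavaScript (hypothetical names, not the Dossier API):</p>

```javascript
// Ordered chain of suppliers; the first one whose predicate matches wins, so
// more specific suppliers (email, first name, ...) sit above the catch-all fallback.
const suppliers = [
  { canSupply: name => /email/i.test(name),     supply: name => name + '@example.com' },
  { canSupply: name => /firstname/i.test(name), supply: () => 'Anon' },
  { canSupply: () => true,                      supply: name => name + '-' + Math.random().toString(36).slice(2) },
];

function anonymousValue(propertyName) {
  return suppliers.find(s => s.canSupply(propertyName)).supply(propertyName);
}
```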
<h3 id="equivalence-classes-support">Equivalence classes support</h3>
<p>As explained in my equivalence classes and constrained non-determinism post (TBW) in my <a href="/blog/2014/01/23/test-naming-automated-testing-series/">automated testing series</a> the principle of <a href="http://blog.ploeh.dk/2009/03/05/ConstrainedNon-Determinism/">constrained non-determinism</a> frees you from having to worry about the fact that anonymous values can be random as long as they fall within the <a href="http://xunitpatterns.com/equivalence%20class.html">equivalence class</a> of the value that is required for your test.</p>
<p>I think the same concept can and should be applied to test data builders. More than that, I think it enhances the ability for the test data builders to <a href="/blog/2013/05/26/test-data-generation-the-right-way-object-mother-test-data-builders-nsubstitute-nbuilder/">act as documentation</a>. Having a constructor that reads like this for instance tells you something interesting about the <code class="highlighter-rouge">Year</code> property:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CustomerBuilder</span> <span class="p">:</span> <span class="n">TestDataBuilder</span><span class="p"><</span><span class="n">Customer</span><span class="p">,</span> <span class="n">CustomerBuilder</span><span class="p">></span>
<span class="p">{</span>
<span class="k">public</span> <span class="nf">CustomerBuilder</span><span class="p">()</span>
<span class="p">{</span>
<span class="nf">WhoJoinedIn</span><span class="p">(</span><span class="n">Any</span><span class="p">.</span><span class="nf">YearAfter2001</span><span class="p">());</span>
<span class="p">}</span>
<span class="p">...</span>
<span class="p">}</span>
</code></pre></div></div>
<p>You may well use value objects that protect and describe the integrity of the data (which is great), but you can still create an equivalence class for the creation of the value object so I still think it’s relevant beyond primitives.</p>
<p>We have some built-in equivalence classes that you can use to get started quickly for common scenarios. At the time of writing the following are available (as extension methods of the <code class="highlighter-rouge">AnonymousValueFixture</code> class that is defined in a property called <code class="highlighter-rouge">Any</code> on the test data builder base class):</p>
<ul>
<li><code class="highlighter-rouge">Any.String()</code></li>
<li><code class="highlighter-rouge">Any.StringMatching(string regexPattern)</code></li>
<li><code class="highlighter-rouge">Any.StringStartingWith(string prefix)</code></li>
<li><code class="highlighter-rouge">Any.StringEndingWith(string suffix)</code></li>
<li><code class="highlighter-rouge">Any.StringOfLength(int length)</code></li>
<li><code class="highlighter-rouge">Any.PositiveInteger()</code></li>
<li><code class="highlighter-rouge">Any.NegativeInteger()</code></li>
<li><code class="highlighter-rouge">Any.IntegerExcept(int[] exceptFor)</code></li>
<li><code class="highlighter-rouge">Any.Of&lt;TEnum&gt;()</code></li>
<li><code class="highlighter-rouge">Any.Except&lt;TEnum&gt;(TEnum[] except)</code></li>
<li><code class="highlighter-rouge">Any.EmailAddress()</code></li>
<li><code class="highlighter-rouge">Any.UniqueEmailAddress()</code></li>
<li><code class="highlighter-rouge">Any.Language()</code></li>
<li><code class="highlighter-rouge">Any.FemaleFirstName()</code></li>
<li><code class="highlighter-rouge">Any.MaleFirstName()</code></li>
<li><code class="highlighter-rouge">Any.FirstName()</code></li>
<li><code class="highlighter-rouge">Any.LastName()</code></li>
<li><code class="highlighter-rouge">Any.Suffix()</code></li>
<li><code class="highlighter-rouge">Any.Title()</code></li>
<li><code class="highlighter-rouge">Any.Continent()</code></li>
<li><code class="highlighter-rouge">Any.Country()</code></li>
<li><code class="highlighter-rouge">Any.CountryCode()</code></li>
<li><code class="highlighter-rouge">Any.Latitude()</code></li>
<li><code class="highlighter-rouge">Any.Longitude()</code></li>
</ul>
<p>There is nothing stopping you using the anonymous value fixture outside of the test data builders - you can create a property called <code class="highlighter-rouge">Any</code> that is an instance of the <code class="highlighter-rouge">AnonymousValueFixture</code> class in any test class.</p>
<p>Also, you can easily create your own extension methods for the values and data that makes sense for your application. See the <a href="https://github.com/TestStack/TestStack.Dossier/tree/master/TestStack.Dossier/EquivalenceClasses">source code for examples to copy</a>. A couple of notes: you have the ability to stash information in the fixture by using the <code class="highlighter-rouge">dynamic Bag</code> property and you also have an <a href="https://github.com/AutoFixture/AutoFixture">AutoFixture</a> instance available to use via <code class="highlighter-rouge">Fixture</code>.</p>
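<p>An equivalence-class helper is just a function that returns a random value constrained to the class in question. A sketch of a few of the helpers above (JavaScript here for illustration; the real ones are C# extension methods on <code class="highlighter-rouge">AnonymousValueFixture</code>):</p>

```javascript
// Each helper returns *some* value from its equivalence class; tests that rely
// on it only care about the constraint, not the concrete value.
const Any = {
  positiveInteger: () => Math.floor(Math.random() * 1000) + 1,      // any value > 0
  negativeInteger: () => -(Math.floor(Math.random() * 1000) + 1),   // any value < 0
  stringStartingWith: prefix => prefix + Math.random().toString(36).slice(2),
  stringOfLength: n => 'x'.repeat(n),                               // only the length is pinned
  of: values => values[Math.floor(Math.random() * values.length)],  // any member of the set
};
```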
<blockquote>
<p>Side note: I feel that Dossier does some things that are <a href="https://twitter.com/robdmoore/status/566533835869782016">not easy to do in AutoFixture</a>, hence why I don’t “just use AutoFixture” - I see Dossier as complementary to AutoFixture because they are trying to achieve different (albeit related) things.</p>
</blockquote>
<p>A final note: I got the idea for the <code class="highlighter-rouge">Any.Whatever()</code> syntax from the <a href="https://github.com/grzesiek-galezowski/tdd-toolkit">TDD Toolkit by Grzegorz Gałęzowski</a>. I really like it and I highly recommend his <a href="https://github.com/grzesiek-galezowski/tdd-ebook">TDD e-book</a>.</p>
<h3 id="return-set-rather-than-this">Return Set rather than this</h3>
<p>This is a small, but important optimisation that allows test data builders to be that little bit terser and easier to read/write. The <code class="highlighter-rouge">Set</code> method now returns the builder instance so you can change your basic builder modification methods like in this example:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Before</span>
<span class="k">public</span> <span class="n">CustomerBuilder</span> <span class="nf">WithLastName</span><span class="p">(</span><span class="kt">string</span> <span class="n">lastName</span><span class="p">)</span>
<span class="p">{</span>
<span class="nf">Set</span><span class="p">(</span><span class="n">x</span> <span class="p">=></span> <span class="n">x</span><span class="p">.</span><span class="n">LastName</span><span class="p">,</span> <span class="n">lastName</span><span class="p">);</span>
<span class="k">return</span> <span class="k">this</span><span class="p">;</span>
<span class="p">}</span>
<span class="c1">// After</span>
<span class="k">public</span> <span class="n">CustomerBuilder</span> <span class="nf">WithLastName</span><span class="p">(</span><span class="kt">string</span> <span class="n">lastName</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">return</span> <span class="nf">Set</span><span class="p">(</span><span class="n">x</span> <span class="p">=></span> <span class="n">x</span><span class="p">.</span><span class="n">LastName</span><span class="p">,</span> <span class="n">lastName</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
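<p>The same return-the-builder trick applies in any fluent builder: have the low-level setter return the builder so the one-line wrapper methods can simply return its result. In JavaScript the equivalent would look like (illustrative only):</p>

```javascript
class CustomerBuilder {
  constructor() { this.values = {}; }
  // Returning `this` from set() is what lets the with* methods be one-liners.
  set(key, value) {
    this.values[key] = value;
    return this;
  }
  withFirstName(firstName) { return this.set('firstName', firstName); }
  withLastName(lastName)   { return this.set('lastName', lastName); }
  build() { return Object.assign({}, this.values); }
}
```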
<h3 id="amazingly-terse-list-of-object-generation">Amazingly terse list of object generation</h3>
<p>This is by far the part that I am most proud of. I’ve long been frustrated (relatively speaking, I thought what I had in the first version was very cool and useful) with the need for writing the lambda expressions when building a list of objects, e.g.:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">customers</span> <span class="p">=</span> <span class="n">CustomerBuilder</span><span class="p">.</span><span class="nf">CreateListOfSize</span><span class="p">(</span><span class="m">3</span><span class="p">)</span>
<span class="p">.</span><span class="nf">TheFirst</span><span class="p">(</span><span class="m">1</span><span class="p">).</span><span class="nf">With</span><span class="p">(</span><span class="n">b</span> <span class="p">=></span> <span class="n">b</span><span class="p">.</span><span class="nf">WithFirstName</span><span class="p">(</span><span class="s">"Robert"</span><span class="p">).</span><span class="nf">WithLastName</span><span class="p">(</span><span class="s">"Moore"</span><span class="p">))</span>
<span class="p">.</span><span class="nf">TheLast</span><span class="p">(</span><span class="m">1</span><span class="p">).</span><span class="nf">With</span><span class="p">(</span><span class="n">b</span> <span class="p">=></span> <span class="n">b</span><span class="p">.</span><span class="nf">WithEmail</span><span class="p">(</span><span class="s">"matt@domain.tld"</span><span class="p">))</span>
<span class="p">.</span><span class="nf">BuildList</span><span class="p">();</span>
</code></pre></div></div>
<p>I always found that the need to have the <code class="highlighter-rouge">With</code> made it a bit more verbose than I wanted (since it was basically noise), and I found that needing to write the lambda expression slowed me down. I dreamed of having a syntax that looked like this:</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">var</span> <span class="n">customers</span> <span class="p">=</span> <span class="n">CustomerBuilder</span><span class="p">.</span><span class="nf">CreateListOfSize</span><span class="p">(</span><span class="m">3</span><span class="p">)</span>
<span class="p">.</span><span class="nf">TheFirst</span><span class="p">(</span><span class="m">1</span><span class="p">).</span><span class="nf">WithFirstName</span><span class="p">(</span><span class="s">"Robert"</span><span class="p">).</span><span class="nf">WithLastName</span><span class="p">(</span><span class="s">"Moore"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">TheLast</span><span class="p">(</span><span class="m">1</span><span class="p">).</span><span class="nf">WithEmail</span><span class="p">(</span><span class="s">"matt@domain.tld"</span><span class="p">)</span>
<span class="p">.</span><span class="nf">BuildList</span><span class="p">();</span>
</code></pre></div></div>
<p>Well, one day I had a brainwave on how that might be possible and I went and <a href="https://twitter.com/robdmoore/status/511021144384610304">implemented it</a>. I won’t go into the details apart from saying that I used Castle DynamicProxy to do the magic (and let’s be honest, it is magic) and you can <a href="https://github.com/TestStack/TestStack.Dossier/blob/master/TestStack.Dossier/Lists/ListBuilder.cs">check out the code if interested</a>. I’m hoping this won’t come back to bite me, because I’ll freely admit it adds complexity to the code for creating lists: you can have an instance of a builder that isn’t a real builder, but rather a proxy object that applies the call to part of a list of builders (see what I mean about complex?). My hope is that the simplicity and niceness of using the API outweigh the confusion/complexity, and that you don’t really have to understand what’s going on under the hood if it “just works”<sup>TM</sup>.</p>
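<p>For the curious, the rough shape of the trick looks something like this - a Castle DynamicProxy interceptor that replays each intercepted method call against every real builder in the selected range (all names here are illustrative, not the actual internals; the linked <code class="highlighter-rouge">ListBuilder</code> source is the real thing):</p>
<div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using System.Collections.Generic;
using Castle.DynamicProxy;

// Illustrative sketch only - not the actual TestStack.Dossier internals.
public class ListBuilderInterceptor&lt;TBuilder&gt; : IInterceptor
{
    private readonly IEnumerable&lt;TBuilder&gt; _builders;

    public ListBuilderInterceptor(IEnumerable&lt;TBuilder&gt; builders)
    {
        _builders = builders;
    }

    public void Intercept(IInvocation invocation)
    {
        // Replay e.g. WithFirstName("Robert") against each real builder
        // in the currently selected range of the list
        foreach (var builder in _builders)
        {
            invocation.Method.Invoke(builder, invocation.Arguments);
        }

        // Return the proxy itself so the fluent chain keeps working
        invocation.ReturnValue = invocation.Proxy;
    }
}
</code></pre></div></div>
<p>Because the interceptor returns the proxy itself, chained calls like <code class="highlighter-rouge">.WithFirstName("Robert").WithLastName("Moore")</code> keep hitting the same range of builders until the range is changed.</p>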
<p>If you don’t want to risk it that’s fine, there is still a <code class="highlighter-rouge">With</code> method that takes a lambda expression so you can freely avoid the magic.</p>
<p>The nice thing about this is I was able to remove NBuilder as a dependency and you no longer need to create an extension method for each builder to have a <code class="highlighter-rouge">BuildList</code> method that doesn’t require you to specify the generic types.</p>
<h2 id="why-did-you-move-to-teststack-and-why-is-it-now-called-dossier">Why did you move to TestStack and why is it now called Dossier?</h2>
<p>I moved the library to TestStack because it’s a logical fit - the goal we have at TestStack is to make it easier to perform automated testing in the .NET ecosystem, and that is, through and through, what this library is all about.</p>
<p>As to why I changed the name to Dossier - most of the libraries we have in TestStack have cool/quirky names that are relevant to what they do (e.g. <a href="https://github.com/TestStack/TestStack.Seleno">Seleno</a>, <a href="https://github.com/TestStack/TestStack.Bddfy">Bddfy</a>). NTestDataBuilder is really boring, so with a bit of a push from my colleagues I set about finding a better name. I found Dossier by Googling for synonyms of data, and out of all the words dossier stood out as the most interesting. I then <a href="https://www.google.com/search?q=define%3A+dossier">asked Google what the definition was</a> to see if it made sense and, lo and behold, the definition is strangely appropriate (person, event and subject being examples of the sorts of objects I tend to build with the library):</p>
<blockquote>
<p>a collection of documents about a particular person, event, or subject</p>
</blockquote>
<h2 id="mundane-stuff">Mundane stuff</h2>
<p>The GitHub repository has been moved to <a href="https://github.com/TestStack/TestStack.Dossier/">https://github.com/TestStack/TestStack.Dossier/</a> and the previous URL will automatically redirect to that address. I have released an <a href="http://www.nuget.org/packages/NTestDataBuilder">empty v2.0 NTestDataBuilder release to NuGet</a> that simply includes TestStack.Dossier as a dependency, so you can do an <code class="highlighter-rouge">Update-Package</code> on it if you want (but you will then need to address the breaking changes).</p>
<p>If you have an existing project that you don’t want to update for the breaking changes, feel free to continue using NTestDataBuilder v1 - for the feature set it contains I consider that library complete, and there were no known bugs in it. I won’t be adding any changes to it going forward, though.</p>
<p>As usual you can grab this library from <a href="https://www.nuget.org/packages/TestStack.Dossier">NuGet</a>.</p>