<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Kenneth Truyers]]></title><description><![CDATA[.NET team lead at Wealthkernel - Alpaca]]></description><link>https://www.kenneth-truyers.net/</link><image><url>https://www.kenneth-truyers.net/favicon.png</url><title>Kenneth Truyers</title><link>https://www.kenneth-truyers.net/</link></image><generator>Ghost 6.26</generator><lastBuildDate>Wed, 08 Apr 2026 01:05:23 GMT</lastBuildDate><atom:link href="https://www.kenneth-truyers.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Best practices for good PR's]]></title><description><![CDATA[To understand what constitutes a good pull request, we must first define the reason why we use the pull request process:
- PR's are a great way of sharing information about the code base. 
- It creates an extra gate before our code goes to production
- It improves code quality. ]]></description><link>https://www.kenneth-truyers.net/2018/10/31/best-practices-good-pr/</link><guid isPermaLink="false">5ab3ecf9fc11f500225ab6a7</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Wed, 31 Oct 2018 23:52:34 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Over the last few years I&apos;ve had many discussions with fellow developers about what constitutes a good PR. I want to summarize here what I&apos;ve learned and compare notes with the rest of the developers in the wild (you).</p>
<p>Note: For the context of this post, I&apos;m talking about PR&apos;s inside an organisation. The requirements for PR&apos;s on open source projects may be very different and very often depend on what type of project it is.</p>
<h2 id="whydoweusepullrequests">Why do we use pull requests?</h2>
<p>To understand what constitutes a good pull request, we must first define the reason why we use the pull request process:</p>
<ul>
<li>PR&apos;s are a great way of sharing information about the code base. They limit code ownership (check my post on <a href="https://www.kenneth-truyers.net/2016/09/27/avoiding-code-ownership/">avoiding code ownership</a> to see why that&apos;s a bad thing)</li>
<li>It creates an extra gate before our code goes to production</li>
<li>It improves code quality. This is not only because there&apos;s another person to catch mistakes. If you know someone will review your code, you&apos;re probably, at least subconsciously, making a bigger effort to write better code.</li>
</ul>
<h2 id="creatinggoodpullrequests">Creating good pull requests</h2>
<p>Whenever I review a PR, I look for the following qualities:</p>
<ul>
<li>Logical commits. Each commit should be a single, coherent change.</li>
<li>All commit messages are descriptive</li>
<li>The list of commits is related and implements a complete story</li>
<li>The work in the PR adds quantifiable value.</li>
<li>Code complies with the defined standards</li>
<li>The pull request is as small as practically possible</li>
</ul>
<h3 id="logicalcommits">Logical commits</h3>
<p>When reviewing a PR, it&apos;s really difficult to judge code quality when you only have half the information. Therefore, it&apos;s very important that each commit contains a single, coherent change that, in theory, should be reviewable on its own.</p>
<p>I also prefer each commit to be deployable individually, although this can sometimes be hard to achieve. However hard it may be in some cases, I always strive towards that end.</p>
<h3 id="commitmessages">Commit messages</h3>
<p>This is a pet peeve of mine, but far too often I see commit messages that just describe what was done. While that is important information, it&apos;s only part of the story. Usually, what was done is easily verifiable from the code. What interests me, as a reviewer, and later as a reader of the code, is why a particular change was implemented. The key questions a commit message should answer are:</p>
<ul>
<li>What has changed</li>
<li>Why was the change necessary, possibly referencing a formal requirement</li>
<li>What are direct and indirect consequences of the change</li>
</ul>
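<p>For illustration, a message that answers those three questions might look like this (the ticket reference and details are made up):</p>
<pre><code>Limit price lookups to active products

The nightly price sync (PROJ-123) was timing out because discontinued
products were still being queried. Filtering them out shrinks the batch;
as a consequence, discontinued products no longer receive price updates.
</code></pre>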
<p>What does <strong>not</strong> need to be in the commit message?</p>
<ul>
<li>Who made the change (that is already part of the commit)</li>
<li>When the change was made (also part of the commit)</li>
<li>How this code can be improved (this should go in the backlog)</li>
<li>How you feel about the code</li>
</ul>
<h3 id="relatedcommits">Related commits</h3>
<p>When you submit a PR, it should relate to a single theme. Reviewing code can be a difficult task since very often the context is not entirely clear. If a PR tries to fix or implement multiple things, this only gets harder. To avoid this, we should strive to divide a user story we&apos;re working on into multiple related commits and submit them as a single PR.</p>
<h3 id="thepullrequestsaddsquantifiablevalue">The pull request adds quantifiable value</h3>
<p>This could be in fulfilling a customer requirement, but could also be a refactor, performance improvements, new tests etc.</p>
<p>In the case of implementing requirements, it&apos;s fairly easy to see that this adds value. In the case of a refactor or performance improvement, however, this may be harder, and you usually want to discuss the work beforehand. Nevertheless, it should be fairly obvious once the PR has been submitted that there is value in the addition.</p>
<h3 id="codecomplieswithdefinedstandards">Code complies with defined standards</h3>
<p>While I don&apos;t believe style checking should be performed by a human reviewer (this task is better suited to automated tools), there are other standards that do need a review. These could be things like the data-access method, a recurring pattern that&apos;s been used, or the usage of internal libraries versus rolling your own.</p>
<p>Before submitting the PR, these are things that should be checked by the submitter of the PR.</p>
<h3 id="smallerisbetter">Smaller is better</h3>
<p>Reviewing code can be a daunting task. As a consequence, the larger the PR, the harder it will be to review. The harder it is to review, the shallower the review will be: it becomes more of a code &quot;scan&quot; than a code &quot;review&quot;.</p>
<h2 id="agoodreview">A good review</h2>
<p>That said, creating good PR&apos;s doesn&apos;t mean that you have a good review process. The flip side, reviewing the code, is just as important. This is a list of questions and attitudes towards the code under review that can help:</p>
<h3 id="doesthiscodesolveaproblem">Does this code solve a problem?</h3>
<p>Ideally, this should have been a pre-condition before even starting work on the code. However, there are cases where it isn&apos;t clear-cut, and this is a good first question to ask. If it doesn&apos;t solve a problem, then all the rest is pointless.</p>
<h3 id="doesthecodedowhatthesubmitterintended">Does the code do what the submitter intended?</h3>
<p>Once we&apos;ve established that the purpose is sound, we need to verify whether the implementation is actually correct. This means trying to find potential bugs, but also unwanted side effects that the submitter did not intend to cause.</p>
<h3 id="howwouldihavesolvedthis">How would I have solved this?</h3>
<p>This is a tricky one. Sometimes developer styles vary wildly. The purpose of this question is not to force the developer to do it your way, but rather contrast different coding styles and evaluate if one is preferable over the other.</p>
<h3 id="arethereanyusefulabstractions">Are there any useful abstractions?</h3>
<p>Sometimes the problem may already have been solved elsewhere, but the author is unaware. It&apos;s good to point this out and suggest ideas on how to make use of this. In some cases, it will simply be reusing existing code, in others, it might need introducing a new abstract concept to rely on.</p>
<h3 id="playadvocateofthedevil">Play devil&apos;s advocate</h3>
<p>Try to catch any mistakes, errors, violations against conventions, ...<br>
That said, be nice about it. The goal is to improve the code, not to prove your superiority over the other developer. All review comments should strictly be about the code. It&apos;s the code that is under review, not the developer.</p>
<h3 id="isthiscodecoveredbytestsanddoesitneedtobe">Is this code covered by tests (and does it need to be)?</h3>
<p>Tests are important, not only for regression issues, but also for documentation. If the code is highly algorithmic, then you probably want automated unit tests in place (hint: these are a nice place to start the review, as it will give you a nice list of requirements that are being implemented).</p>
<p>Conversely, if it&apos;s mainly coordinating code, question why unit tests are being added and whether they are really necessary. (Check out my post on <a href="https://www.kenneth-truyers.net/2015/06/27/the-test-pyramid/">the test pyramid </a> for more information on what to test and what not)</p>
<h3 id="doesthiscodefollowstylesandpatterns">Does this code follow styles and patterns?</h3>
<p>As I mentioned before, pure code style issues should probably be caught by automated tools. Regardless, some things won&apos;t be possible through static code analysis, so you want to have a look at those more closely.</p>
<h2 id="settingstandardsandguidelines">Setting standards and guidelines</h2>
<p>Apart from submitting good PR&apos;s and executing good reviews, another part of the puzzle is having a clear understanding of when a PR will be rejected and when it can be merged. Here are some of the standards that I believe should be in place. The specifics may vary with circumstances, but regardless of the parameters, it&apos;s good to have a shared understanding between all developers:</p>
<h3 id="reasonsforrejection">Reasons for rejection</h3>
<p>Certain things may be a clear reason for rejection. It&apos;s important to list these and agree on them with everybody. The reason is that an early rejection can save a lot of time, on both the reviewer&apos;s and the submitter&apos;s side, and it avoids discussions over trivial problems. Some of the things that can lead to a straight rejection:</p>
<ul>
<li>Failing tests (even better is to enforce this by the CI server)</li>
<li>Not following styling guidelines (even better to enforce through an automated tool)</li>
<li>Infractions against the rules of a good PR. This could mean non-descriptive commit messages, very large PR&apos;s, multiple stories in one PR, ...</li>
</ul>
<h3 id="conflictresolution">Conflict resolution</h3>
<p>Eventually you&apos;ll run into the situation where two developers disagree. That is perfectly fine and actually a sign of good team dynamics. What is not fine, however, is having endless discussions about it, or one developer overriding the other for reasons like seniority or having admin rights.</p>
<p>It&apos;s better to prevent these situations from happening and have a conflict resolution strategy. One particular method that works well is to always require at least two code reviewers. In those cases, it&apos;s simply the majority vote that counts.</p>
<h3 id="reviewleadtime">Review lead time</h3>
<p>One of the biggest reasons I have seen people skip PR&apos;s is that they block them. When the product team is pushing for a deployment, it doesn&apos;t help when your PR is left in the review queue for days on end.</p>
<p>Ideally, a PR should be reviewed within an hour. If PR&apos;s are following the practices mentioned in this post, this should definitely be doable. Where it breaks down is on large PR&apos;s.</p>
<p>It&apos;s therefore very important to keep PR&apos;s as small as possible and to ensure that reviews are executed promptly.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What's new and coming in C# 8.0]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/csharp-3.jpg" alt="C# 8" title="C# 8" loading="lazy"><br>
Another year, another C# update. This time we&#x2019;re already up to C# 8. I want to dive into the most likely new C# 8 features, what they look like and why they&#x2019;re useful.</p>
<blockquote>
<p>Disclaimer: The information in this blog post was written well before the release</p></blockquote>]]></description><link>https://www.kenneth-truyers.net/2018/04/20/whats-new-c-8-0/</link><guid isPermaLink="false">5ab2d765fc11f500225ab610</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Fri, 20 Apr 2018 11:28:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/csharp-3.jpg" alt="C# 8" title="C# 8" loading="lazy"><br>
Another year, another C# update. This time we&#x2019;re already up to C# 8. I want to dive into the most likely new C# 8 features, what they look like and why they&#x2019;re useful.</p>
<blockquote>
<p>Disclaimer: The information in this blog post was written well before the release of C# 8 and all information in this post is subject to change. This applies to possible inclusion of features in C# 8 as well as the described syntax.</p>
</blockquote>
<h2 id="c8nullablereferencetypes">C# 8: Nullable reference types</h2>
<p>This feature is the most anticipated feature of C# 8 because it brings a lot of value and is known from other languages such as F#. For backward-compatibility reasons, however, it will work slightly differently in C#.</p>
<h3 id="what">What?</h3>
<p>Hold on! Aren&#x2019;t reference types already nullable? Yes, they are indeed. What this feature brings is a slight shift: from C# 8, all reference types will be considered non-nullable by default. When you want a nullable reference type, you will have to express that explicitly.</p>
<h3 id="why">Why?</h3>
<p>The dreaded <span style="font-family: consolas;">NullReferenceException</span>! We&#x2019;ve all run into this exception plenty of times and they are mistakes that could have easily been caught at compile time if only we had the means to express it. Nullable reference types don&#x2019;t solve this problem, but they do allow you to express your intent much better.</p>
<p>Apart from that, there&#x2019;s an inconsistency between reference and value types: value types are non-nullable by default, and adding the ?-modifier to the type makes them nullable. Reference types, on the other hand, are currently always nullable, and there&#x2019;s no way to express a non-nullable reference type.</p>
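<p>To make that inconsistency concrete, here is a minimal sketch of today&#x2019;s (pre-C# 8) behaviour:</p>
<pre><code class="language-csharp">int number = 0;          // value type: non-nullable by default 
int? maybeNumber = null; // opt in to null with the ?-modifier 

string text = null;      // reference type: always nullable, no way to opt out today
</code></pre>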
<h3 id="how">How?</h3>
<p>Similar to nullable value types, in C# 8 you will be able to declare that a variable is non-nullable by simply using the type. In case you want a nullable reference type, you will have to explicitly mention that by appending the <span style="font-family: Consolas;">?</span>-modifier at the end of the type. The first syntax enhancement is the ability to express a nullable reference type:</p>
<pre><code class="language-csharp">string? someText = null;
</code></pre>
<p>The above denotes the syntax for nullable reference types. Non-nullable reference types, the new default, then get the following behaviour:</p>
<pre><code class="language-csharp">string someText = &quot;this is some string&quot;; 
someText = null; // WARNING!
</code></pre>
<p>As you can see in the above example, the current syntax will start to have a slightly different meaning: all reference types are considered non-nullable. That is a breaking change, though. Since C# is so widely used, it&#x2019;s quite impossible to make such a breaking change outright, which is why the second line says &#x201C;Warning&#x201D; rather than &#x201C;Error&#x201D;. If it were considered an error, it would promptly break thousands of code bases. Furthermore, even where it doesn&#x2019;t break code, it&#x2019;s still a breaking change, because you now get a warning where you didn&#x2019;t get one before. Therefore, nullable reference types will be an opt-in feature.</p>
<h4 id="staticflowanalysis">Static flow analysis</h4>
<p>Apart from the rather obvious example above, there will be more benefits through the application of static flow analysis. Consider the following examples:</p>
<pre><code class="language-csharp">string someText = null; // Warning: Cannot convert null to non-nullable reference 
string? otherText = null; // OK, assign null to nullable 
string nonnull = otherText; // Warning: possible null assignment
</code></pre>
<p>On line 3, the analyzer can detect that we are assigning a value that is potentially null to a non-nullable variable and will correctly raise a warning.</p>
<pre><code class="language-csharp">string? text = null; 
var length = text.Length; // Warning: Possible dereference of a null reference
</code></pre>
<p>The same applies to dereferencing a nullable variable. The compiler knows that we are potentially dealing with a null reference and will raise a warning. There are two ways around this issue: either we declare the original variable as a non-nullable type, or we do a null check. Through static analysis, the compiler can detect that the variable cannot possibly be <span style="font-family: Consolas;">null</span>:</p>
<pre><code class="language-csharp">string? text = null; 
if(text != null) 
{ 
    var length = text.Length; // OK, you checked 
}
</code></pre>
<h4 id="limitations">Limitations</h4>
<p>There are, however, some limitations on what static flow analysis can provide:</p>
<ul>
<li>Methods that return a non-nullable type will be interpreted as safe. They can, however, still return a null reference (for example, a library that has not been updated).</li>
<li>It won&#x2019;t always recognize whether you have done a proper null check. When you call <span style="font-family: consolas;">string.IsNullOrEmpty</span> for example, the analyzer won&#x2019;t go into that method call to check this.</li>
</ul>
<p>The first item unfortunately cannot be solved. For historical reasons, there&#x2019;s no way to fit strict non-nullability into C#. This feature will reduce problems with null references, but will not eliminate them.</p>
<p>The second issue is more of an annoyance, as you would have to do an explicit null check even if you know that the item is not null. This would lead to this sort of code:</p>
<pre><code class="language-csharp">public void DoSomething(string? text) 
{ 
    if(!string.IsNullOrEmpty(text) &amp;&amp; text != null) 
        Console.WriteLine(text.Length); 
}
</code></pre>
<p>The above code is obviously redundant and we need a terser way of expressing that. Therefore, the <span style="font-family: Consolas;">!</span>-modifier is introduced. It allows you to specifically suppress the warning:</p>
<pre><code class="language-csharp">public void DoSomething(string? text) 
{ 
    if(!string.IsNullOrEmpty(text)) 
        Console.WriteLine(text!.Length);
}
</code></pre>
<p>By adding the <span style="font-family: Consolas;">!</span>-operator when dereferencing the string, we tell the compiler &#x201C;trust me, I know what I&#x2019;m doing!&#x201D;. The same can be applied when assigning a nullable reference to a non-nullable reference:</p>
<pre><code class="language-csharp">string? text = &quot;some text&quot;; 
string someText = text; // Warning! 
string otherText = text!; // OK, I trust you
</code></pre>
<h2 id="c8asynchronousstreams">C# 8: Asynchronous Streams</h2>
<h3 id="what">What?</h3>
<p>The ability to use async/await inside an iterator.</p>
<h3 id="why">Why?</h3>
<p>When you implement an iterator, currently you are limited to doing it synchronously. There are cases where you might need to await a call on every iteration to fetch the next item.</p>
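<p>For illustration, this is roughly what you are forced into today, since await is not allowed inside an iterator:</p>
<pre><code class="language-csharp">static IEnumerable&lt;int&gt; GetNumbers() 
{ 
    for (int i = 0; i &lt; 100; i++) 
    { 
        Task.Delay(1000).Wait(); // blocking call; await is not allowed here 
        yield return i; 
    } 
}
</code></pre>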
<h3 id="how">How?</h3>
<p>To support this feature, a couple of things need to be implemented:</p>
<ul>
<li>New types: the async equivalents of <span style="font-family: Consolas;">IEnumerable&lt;T&gt;</span> and <span style="font-family: Consolas;">IEnumerator&lt;T&gt;</span></li>
<li>Support for async/await in methods with a yield return (currently not allowed)</li>
<li>The ability to await on an iteration construct such as foreach</li>
</ul>
<p>The new types that will be introduced are <span style="font-family: Consolas;">IAsyncEnumerable&lt;T&gt;</span> and <span style="font-family: Consolas;">IAsyncEnumerator&lt;T&gt;</span>. <span style="font-family: Consolas;">IAsyncEnumerable&lt;T&gt;</span> will just have a single method that returns an <span style="font-family: Consolas;">IAsyncEnumerator&lt;T&gt;</span>. The interesting interface is <span style="font-family: Consolas;">IAsyncEnumerator&lt;T&gt;</span>, which is defined as follows:</p>
<pre><code class="language-csharp">public interface IAsyncEnumerator&lt;out T&gt; : IAsyncDisposable 
{ 
    Task&lt;bool&gt; MoveNextAsync(); 
    T Current { get; } 
}
</code></pre>
<p>This allows you to build an iterator that can await each call to the <span style="font-family: Consolas;">MoveNextAsync</span> method.</p>
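<p>For illustration, a consumer could drive this interface by hand. (I&#x2019;m assuming here that the single method on <span style="font-family: Consolas;">IAsyncEnumerable&lt;T&gt;</span> is called <span style="font-family: Consolas;">GetAsyncEnumerator</span>; the final name may differ.)</p>
<pre><code class="language-csharp">IAsyncEnumerator&lt;int&gt; enumerator = source.GetAsyncEnumerator(); // source: an IAsyncEnumerable&lt;int&gt; 
while (await enumerator.MoveNextAsync()) 
{ 
    Console.WriteLine(enumerator.Current); 
}
</code></pre>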
<p>The second thing that will be added is the support for async/await in methods that yield results:</p>
<pre><code class="language-csharp">static async IAsyncEnumerable&lt;int&gt; GetNumbers()
{ 
    for (int i = 0; i &lt; 100; i++) 
    { 
        await Task.Delay(1000); 
        yield return i; 
    } 
}
</code></pre>
<p>Some notable things:</p>
<ul>
<li>We have an async keyword, but the return type is not a <span style="font-family: Consolas;">Task&lt;&gt;</span>. This is the support that will be built in and specifically enabled when you return an <span style="font-family: Consolas;">IAsyncEnumerable&lt;T&gt;</span></li>
<li>We can await any calls inside the method</li>
</ul>
<p>The last bit that will be added is the consumption side. When we iterate over an <span style="font-family: Consolas;">IAsyncEnumerable</span>, we need to make sure that the calling method awaits each iteration:</p>
<pre><code class="language-csharp">foreach await (var number in GetNumbers()) 
{ 
    Console.WriteLine(number); 
}
</code></pre>
<h2 id="c8defaultinterfaceimplementations">C# 8: Default interface implementations</h2>
<h3 id="what">What?</h3>
<p>The ability to provide an implementation on an interface method. This makes it optional for implementers to override the method or not.</p>
<h3 id="why">Why?</h3>
<p>It allows for (a limited form of) multiple inheritance in C#.</p>
<h3 id="how">How?</h3>
<p>Very similar to abstract classes you will be able to express method bodies inside an interface declaration:</p>
<pre><code class="language-csharp">interface ILogger 
{ 
    void Write(string text) 
    { 
        Console.WriteLine(text); 
    } 
}
</code></pre>
<p>This then allows you to implement the interface without implementing the Write-method explicitly:</p>
<pre><code class="language-csharp">class Logger : ILogger { } 
ILogger l = new Logger(); 
l.Write(&quot;text&quot;);
</code></pre>
<p>Note, however that the class does <strong>not</strong> inherit the method of the interface and the following will still give you a compilation error:</p>
<pre><code class="language-csharp">Logger l = new Logger(); 
l.Write(&quot;text&quot;); // Error: Logger does not contain a method &quot;Write&quot;
</code></pre>
<p>This is also the reason why we can&apos;t really call it multiple inheritance. Consider the following implementation:</p>
<pre><code class="language-csharp">interface IA 
{ 
    void Write(string text) 
    { 
        Console.WriteLine(text);
    }
}
interface IB 
{ 
    void Something() 
    { 
        // ... 
    }
}
class C : IA, IB {}
</code></pre>
<p>The following won&apos;t work, because you need to dereference it through the interface in order to use the default implementations:</p>
<pre><code class="language-csharp">var obj = new C();
obj.Write(&quot;test&quot;);  // Error
obj.Something();    // Error
</code></pre>
<p>To circumvent this issue, we need to dereference the methods through the interfaces:</p>
<pre><code class="language-csharp">(obj as IA).Write(&quot;OK&quot;); // OK
(obj as IB).Something(); // OK
</code></pre>
<p>This feature has quite a few edge cases, which you can read about in the proposal here: <a href="https://github.com/dotnet/csharplang/blob/master/proposals/default-interface-methods.md?ref=kenneth-truyers.net">Default interface methods proposal</a><br>
Note also that this feature will require modifications to the runtime, which is one of the reasons why it is unclear whether it will make it into C# 8 or a future version.</p>
<h2 id="c8extensioneverything">C# 8: Extension everything</h2>
<h3 id="what">What?</h3>
<p>The ability to create extension properties, fields and operators.</p>
<h3 id="why">Why?</h3>
<p>When extension methods were introduced in C# 3 it was a supporting feature to enable LINQ. Once published it became clear that there&#x2019;s a lot of value in it outside of LINQ. The ability to create extension properties, fields and operators would greatly increase this value.</p>
<h3 id="how">How?</h3>
<p>Currently, extension methods are declared as static methods which accept a special first parameter: the instance the method is extending. In essence, it&#x2019;s syntactic sugar over calling a static method and passing in the instance as the first parameter. With extension everything, this syntax will change. There are a few competing design proposals and it&#x2019;s unclear which will be the ultimate design, but for illustration&#x2019;s sake I will use one of them. It&#x2019;s highly likely this will change, though:</p>
<pre><code class="language-csharp">public extension class ListExtensions&lt;T&gt; : List&lt;T&gt; 
{ 
    // instance extensions
    private int _sum = 0; 
    public int GetSum() =&gt; _sum; 
    public int Sum =&gt; _sum; 
    public List&lt;T&gt; this[int[] indices] =&gt; ...; 
    
    // static extensions
    static Random _rnd = new Random();
    public static List&lt;T&gt; GetRandomLengthList() =&gt;
        new T[_rnd.Next()].ToList();
    
    public static implicit operator int(List&lt;T&gt; self) =&gt; self.GetSum(); 
    public static List&lt;T&gt; operator +(List&lt;T&gt; left, List&lt;T&gt; right) =&gt; 
        left.Concat(right).ToList(); 
}
</code></pre>
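<p>For comparison, this is the only form available today (since C# 3): a static method in a static class, where the <span style="font-family: Consolas;">this</span>-modifier on the first parameter marks the type being extended. Only methods can be written this way; the names below are purely illustrative:</p>
<pre><code class="language-csharp">public static class TodaysListExtensions 
{ 
    // usage: myList.SumOfItems() 
    public static int SumOfItems(this List&lt;int&gt; self) 
    { 
        int total = 0; 
        foreach (int item in self) 
            total += item; 
        return total; 
    } 
}
</code></pre>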
<h2 id="conclusion">Conclusion</h2>
<p>All in all, I think the planned additions will be a significant improvement, making C# more robust and terse. None of the above is set in stone, but it&#x2019;s very likely these features will appear in C# 8, and if not, they will surely come in a later version of C#. All of the discussions around these new features are open and available on <a href="https://github.com/dotnet/csharplang/tree/master/proposals?ref=kenneth-truyers.net">GitHub</a>. I encourage you to have a look at them and have a play once they are ready for trial.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Refactoring taken too far]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I came across a tweet today about refactoring badly written code. I&#x2019;m always interested in that, so I saw a few fellow devs had taken some badly written code and then applied refactoring to it, following good software design principles.</p>
<p>It all started with this article on CodeProject</p>]]></description><link>https://www.kenneth-truyers.net/2017/04/06/refactoring-taken-too-far/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60f</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Thu, 06 Apr 2017 01:42:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I came across a tweet today about refactoring badly written code. I&#x2019;m always interested in that, so I saw a few fellow devs had taken some badly written code and then applied refactoring to it, following good software design principles.</p>
<p>It all started with this article on CodeProject from April last year: <a href="https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code?ref=kenneth-truyers.net" title="https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code">https://www.codeproject.com/articles/1083348/csharp-bad-practices-learn-how-to-make-a-good-code</a></p>
<p>The author shows a piece of badly written code and then goes through a slew of refactorings following Liskov, strategy patterns, dependency injection and many other well-known principles.</p>
<p>While I&#x2019;m a big fan of good software practices, the number 1 principle I like to adhere to is KISS (keep it simple, stupid). If you have been reading my blog before, you&#x2019;ll certainly have come across this theme. I&#x2019;m not the only one, though; there were a few follow-up posts as well:</p>
<ul>
<li><a href="http://ralfw.de/2016/03/dont-let-cleaning-up-go-overboard/?ref=kenneth-truyers.net">Don&#x2019;t let cleaning go overboard</a> (by Ralf Westpahl)</li>
<li><a href="http://functionalsoftware.net/fsharp-rewrite-of-a-fully-refactored-csharp-clean-code-example-612/?ref=kenneth-truyers.net">An F# rewrite</a> (by Roman Bassart)</li>
<li><a href="http://www.davidarno.org/2017/01/26/using-c-7-and-succinct-to-give-f-a-run-for-its-money/?ref=kenneth-truyers.net">Using C# 7 and Succinc&lt;T&gt; to give F# a run for its money</a> (by David Arno)</li>
</ul>
<p>I like many of the ideas expressed in the above posts, but I couldn&#x2019;t stop thinking about how this code could be made much simpler. Looking at the posts, I still see things that complicate matters much more than necessary.</p>
<h2 id="codebeforerefactoring">Code before refactoring</h2>
<p>For reference, this is the initial code from the first post:</p>
<pre><code class="language-csharp">public class Class1 
{ 
    public decimal Calculate(decimal amount, int type, int years) 
    { 
        decimal result = 0; 
        decimal disc = (years &gt; 5) ? (decimal)5/100 : (decimal)years/100; 
        if (type == 1) 
        { 
            result = amount; 
        } 
        else if (type == 2) 
        { 
            result = (amount - (0.1m * amount)) - disc * (amount - (0.1m * amount)); 
        } 
        else if (type == 3) 
        { 
            result = (0.7m * amount) - disc * (0.7m * amount); 
        } 
        else if (type == 4) 
        { 
            result = (amount - (0.5m * amount)) - disc * (amount - (0.5m * amount)); 
        } 
        return result; 
    } 
}
</code></pre>
<p>This is indeed not very nice code. What I do like about it though, is that it&#x2019;s compact. When I read this, I can probably figure out relatively quickly what this code does. It has many problems though (as discussed in the original post).</p>
<p>The &#x201C;issue&#x201D; I have with the refactorings in the other posts is that they try to cater for use cases that simply aren&#x2019;t specified. From what I can tell, these are the requirements:</p>
<ul>
<li>Give a discount based on what type of customer it is</li>
<li>Give a loyalty discount equal to the number of years the customer has been active, with a maximum of 5</li>
<li>Both discounts are cumulative</li>
</ul>
<h2 id="thesimplestpossiblesolution">The simplest possible solution</h2>
<blockquote>
<p>UPDATE: after comments on twitter / reddit, I noticed that the tests were incorrect. I had taken them from one of the refactorings and assumed they were correct. I have modified the data and updated the tests to reflect what the original code does.</p>
</blockquote>
<blockquote>
<p>UPDATE 2: I went against my own adage: going too far. Smuggling the discount info into the enum was too much. I have refactored it to a dictionary, which is easier to maintain.</p>
</blockquote>
<pre><code class="language-csharp">static readonly Dictionary&lt;Status, int&gt; Discounts = new Dictionary&lt;Status, int&gt; 
{ 
    {Status.NotRegistered, 0 }, 
    {Status.SimpleCustomer, 10 }, 
    {Status.ValuableCustomer, 30 }, 
    {Status.MostValuableCustomer, 50 } 
}; 
decimal applyDiscount(decimal price, Status accountStatus, int timeOfHavingAccountInYears) 
{ 
    price = price - Discounts[accountStatus] * price/100; 
    return price - Math.Min(timeOfHavingAccountInYears, 5) * price/100; 
}
</code></pre>
<p>It&#x2019;s a simple calculation, so why not express it as a simple calculation? While I see the power of functional programming, sometimes discriminated unions, partial application and the like are just overkill for simple problems.</p>
<p>For a coding kata like this, I understand that people want to provide a most elegant solution for future needs, but unfortunately I see these patterns arise far too often, complicating already complex problems even more.</p>
<h3 id="bonus1tests">Bonus 1: Tests</h3>
<pre><code class="language-csharp">[Fact] public void Tests() 
{ 
    applyDiscount(100m, Status.MostValuableCustomer, 1).Should().Be(49.5000m);
    applyDiscount(100m, Status.ValuableCustomer, 6).Should().Be(66.5000m);
    applyDiscount(100m, Status.SimpleCustomer, 1).Should().Be(89.1000m); 
    applyDiscount(100m, Status.NotRegistered, 0).Should().Be(100.0m); 
}
</code></pre>
<h3 id="bonus2">Bonus 2:</h3>
<p>Want a UI with that? <a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2017/01/DiscountCalculator.xlsx?ref=kenneth-truyers.net">Here you go!</a> &#x1F642;</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Git as a NoSql database]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Git&#x2019;s man-pages state that it&#x2019;s a <em>stupid content tracker</em>. It&#x2019;s probably the most used version control system in the world. Which is very strange, since it doesn&#x2019;t describe itself as being a source control system. And in fact, you can use git</p>]]></description><link>https://www.kenneth-truyers.net/2016/10/13/git-nosql-database/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60e</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Thu, 13 Oct 2016 11:24:25 GMT</pubDate><media:content url="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-22.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-22.png" alt="Git as a NoSql database"><p>Git&#x2019;s man-pages state that it&#x2019;s a <em>stupid content tracker</em>. It&#x2019;s probably the most used version control system in the world. Which is very strange, since it doesn&#x2019;t describe itself as being a source control system. And in fact, you can use git to track any type of content. You can create a Git NoSQL database for example.</p>
<p>The reason why it says <em>stupid</em> in the man-pages is that it makes no assumptions about what content you store in it. The underlying git model is rather basic. In this post I want to explore the possibilities of using git as a NoSQL database (a key-value store). You could use the file system as a data store and then use <span style="font-family: &apos;Courier New&apos;;">git add</span> and <span style="font-family: &apos;Courier New&apos;;">git commit</span> to save your files:</p>
<pre><code class="language-bash"># saving a document 
echo &apos;{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth&quot;}&apos; &gt; 1.json 
git add 1.json 
git commit -m &quot;added a file&quot; 
# reading a document 
git show master:1.json 
{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth&quot;}
</code></pre>
<p>That works, but you&#x2019;re now using the file system as a database: paths are the keys, values are whatever you store in them. There are a few disadvantages:</p>
<ul>
<li>We need to write all our data to disk before we can save them into git</li>
<li>We&#x2019;re saving data multiple times</li>
<li>File storage is not deduplicated, so we lose the automatic data deduplication git provides</li>
<li>If we want to work on multiple branches at the same time, we need multiple checked out directories</li>
</ul>
<p>What we want rather is a <em>bare</em> repository, one where none of the files exist in the file system, but only in the git database. Let&#x2019;s have a look at git&#x2019;s data model and the plumbing commands to make this work.</p>
<h2 id="gitasanosqldatabase">Git as a NoSQL database</h2>
<p>Git is a <em>content-addressable file system</em>. This means that it&#x2019;s a simple key-value store. Whenever you insert content into it, it will give you back a key to retrieve that content later.<br>
Let&#x2019;s create some content:</p>
<pre><code class="language-bash">#Initialize a repository 
mkdir MyRepo 
cd MyRepo 
git init 
# Save some content 
echo &apos;{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth&quot;}&apos; | git hash-object -w --stdin 
da95f8264a0ffe3df10e94eed6371ea83aee9a4d
</code></pre>
<p><span style="font-family: &apos;Courier New&apos;;">Hash-object</span> is a <em>git plumbing</em> command which takes content, stores it in the database and returns the key</p>
<blockquote>
<p>The <span style="font-family: &apos;Courier New&apos;;">-w</span> switch tells it to store the content, otherwise it would just calculate the hash. The <span style="font-family: &apos;Courier New&apos;;">--stdin</span> switch tells git to read the content from standard input, instead of from a file.</p>
</blockquote>
<p>The key it returns is a sha-1 based on the content. If you run the above commands on your machine, you&#x2019;ll see it returns the exact same sha-1. Now that we have some content in the database, we can read it back:</p>
<pre><code class="language-bash">git cat-file -p da95f8264a0ffe3df10e94eed6371ea83aee9a4d 
{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth&quot;}
</code></pre>
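<p>The key is reproducible without git, which makes the content-addressing explicit: it&#x2019;s the SHA-1 of a small header followed by the raw content. A quick sketch in a scratch repository (assumes <code>sha1sum</code> is available):</p>

```bash
# Reproduce a blob key by hand in a scratch repository
d=$(mktemp -d) && cd "$d" && git init -q .
printf '{"id": 1, "name": "kenneth"}\n' > doc.json

# git hashes the header "blob SIZE\0" followed by the raw content
size=$(wc -c doc.json | awk '{print $1}')
manual=$( { printf 'blob %d\0' "$size"; cat doc.json; } | sha1sum | awk '{print $1}' )

echo "$manual"           # computed by hand
git hash-object doc.json # git arrives at the same key
```

<p>Because the key is a pure function of the content, identical documents always hash to the same key, which is what makes the deduplication discussed further down possible.</p>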
<h3 id="gitblobs">Git Blobs</h3>
<p>We now have a key-value store with one object, a blob:</p>
<p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image.png" alt="Git as a NoSql database" loading="lazy"><br>
There&#x2019;s only one problem: we can&#x2019;t update this, because if we update the content, the key will change. That would mean that for every version of our file, we&#x2019;d have to remember a different key. What we want instead, is to specify our own key which we can use to track the versions.</p>
<h3 id="gittrees">Git Trees</h3>
<p>Trees solve two problems:</p>
<ul>
<li>the need to remember the hashes of our objects and its version</li>
<li>the ability to store groups of files.</li>
</ul>
<p>The best way to think about a tree is like a folder in the file system. To create a tree you have to follow two steps:</p>
<pre><code class="language-bash"># Create and populate a staging area 
git update-index --add --cacheinfo 100644 da95f8264a0ffe3df10e94eed6371ea83aee9a4d 1.json 
# write the tree 
git write-tree 
d6916d3e27baa9ef2742c2ba09696f22e41011a1
</code></pre>
<p>This also gives you back a sha. Now we can read back that tree:</p>
<pre><code class="language-bash">git cat-file -p d6916d3e27baa9ef2742c2ba09696f22e41011a1 
100644 blob da95f8264a0ffe3df10e94eed6371ea83aee9a4d    1.json
</code></pre>
<p>At this point our object database looks as follows:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-6.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-6.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
To modify the file, we follow the same steps:</p>
<pre><code class="language-bash"># Add a blob 
echo &apos;{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth truyers&quot;}&apos; | git hash-object -w --stdin 
42d0d209ecf70a96666f5a4c8ed97f3fd2b75dda 

# Create and populate a staging area 
git update-index --add --cacheinfo 100644 42d0d209ecf70a96666f5a4c8ed97f3fd2b75dda 1.json 

# Write the tree 
git write-tree 
2c59068b29c38db26eda42def74b7142de392212
</code></pre>
<p>That leaves us with the following situation:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-15.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-15.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
We now have two trees that represent the different states of our files. That doesn&#x2019;t help much, since we still need to remember the sha-1 values of the trees to get to our content.</p>
<h3 id="gitcommits">Git Commits</h3>
<p>One level up, we get to commits. A commit holds 5 pieces of key information:</p>
<ol>
<li>Author of the commit</li>
<li>Date it was created</li>
<li>Why it was created (message)</li>
<li>A single tree object it points to</li>
<li>One or more previous commits (for now we&#x2019;ll only consider commits with only a single parent, commits with multiple parents are <em>merge commits</em>).</li>
</ol>
<p>Let&#x2019;s commit the above trees:</p>
<pre><code class="language-bash"># Commit the first tree (without a parent) 
echo &quot;commit 1st version&quot; | git commit-tree d6916d3 05c1cec5685bbb84e806886dba0de5e2f120ab2a 

# Commit the second tree with the first commit as a parent 
echo &quot;Commit 2nd version&quot; | git commit-tree 2c59068 -p 05c1cec5 
9918e46dfc4241f0782265285970a7c16bf499e4
</code></pre>
<p>This leaves us with the following state:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-16.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-16.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
Now we have built up a complete history of our file. You could open the repository with any git client and you&#x2019;ll see how <span style="font-family: &apos;Courier New&apos;;">1.json</span> is being tracked correctly. To demonstrate that, this is the output of running <span style="font-family: &apos;Courier New&apos;;">git log</span>:</p>
<pre><code class="language-bash">git log --stat 9918e46 
9918e46dfc4241f0782265285970a7c16bf499e4 &quot;Commit 2nd version&quot; 
1.json | 1 + 
1 file changed, 1 insertion(+) 

05c1cec5685bbb84e806886dba0de5e2f120ab2a &quot;Commit 1st version&quot; 
1.json | 1 + 
1 file changed, 1 insertion(+)
</code></pre>
<p>And to get the content of the file at the last commit:</p>
<pre><code class="language-bash">git show 9918e46:1.json 
{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth truyers&quot;}
</code></pre>
<p>We&#x2019;re still not there though, because we have to remember the hash of the last commit. Up until now, all objects we have created are part of git&#x2019;s <em>object database</em>. One characteristic of that database is that it stores only <strong>immutable</strong> objects. Once you write a blob, a tree or a commit, you can never modify it without changing the key. Nor can you delete objects (at least not directly; the git gc command <strong>does</strong> delete objects that are <em>dangling</em>).</p>
<h3 id="gitreferences">Git References</h3>
<p>Yet another level up, are Git references. References are not a part of the object database, they are part of the reference database and are <strong>mutable</strong>. There are different types of references such as branches, tags and remotes. They are similar in nature with a few minor differences. For the moment, let&#x2019;s just consider branches. A branch is a pointer to a commit. To create a branch we can write the hash of the commit to the file system:</p>
<pre><code class="language-bash">echo 05c1cec5685bbb84e806886dba0de5e2f120ab2a &gt; .git/refs/heads/master
</code></pre>
<p>We now have a branch <span style="font-family: &apos;Courier New&apos;;">master</span>, pointing at our first commit. To move the branch, we issue the following command:</p>
<pre><code class="language-bash">git update-ref refs/heads/master 9918e46
</code></pre>
<p>This leaves us with the following graph:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-17.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-17.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
And finally, we&#x2019;re now able to read the current state of our file:</p>
<pre><code class="language-bash">git show master:1.json 
{&quot;id&quot;: 1, &quot;name&quot;: &quot;kenneth truyers&quot;}
</code></pre>
<p>The above command will keep working, even if we add newer versions of our file and subsequent trees and commits as long as we move the branch pointer to the latest commit.</p>
<p>All of the above seems rather complex for a simple key-value store. We can however abstract these things so that client applications only have to specify the branch and a key. I&#x2019;ll come back to that in a different post though. For now, I want to discuss the potential advantages and drawbacks of using git as a NoSQL database.</p>
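<p>To give a taste of what that abstraction could look like, here is a rough sketch: a pair of hypothetical shell functions (the <code>kv_set</code> / <code>kv_get</code> names are made up, not from any library) wrapping the plumbing commands above, so a caller only deals with a branch and a key:</p>

```bash
# kv_set BRANCH KEY VALUE: store a value under a key and advance the branch
kv_set() {
    local branch=$1 key=$2 value=$3 blob tree commit parent
    blob=$(printf '%s' "$value" | git hash-object -w --stdin)
    # start from the branch tree, or an empty index for a brand-new branch
    git read-tree "refs/heads/$branch" 2>/dev/null || git read-tree --empty
    git update-index --add --cacheinfo 100644 "$blob" "$key"
    tree=$(git write-tree)
    if parent=$(git rev-parse -q --verify "refs/heads/$branch"); then
        commit=$(echo "set $key" | git commit-tree "$tree" -p "$parent")
    else
        commit=$(echo "set $key" | git commit-tree "$tree")
    fi
    git update-ref "refs/heads/$branch" "$commit"
}

# kv_get BRANCH KEY: read the current value of a key
kv_get() {
    git show "refs/heads/$1:$2"
}
```

<p>Keys containing slashes (<code>users/1.json</code>) automatically become sub trees, which ties in nicely with the tree-size advice further down.</p>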
<h2 id="dataefficiency">Data efficiency</h2>
<p>Git is very efficient when it comes to storing data. As mentioned before, blobs with the same content are stored only once because of how the hash is calculated. You can try this out by adding a whole bunch of files with the same content into an empty git repository and then checking the size of the <span style="font-family: &apos;Courier New&apos;;">.git</span> folder versus the size on disk. You&#x2019;ll notice that the <span style="font-family: &apos;Courier New&apos;;">.git</span> folder is quite a bit smaller.</p>
<p>But it doesn&#x2019;t stop there, git does the same for trees. If you change a file in a sub tree, git will only create a new sub tree and just reference the other trees that weren&#x2019;t affected. The following example shows a commit pointing at a hierarchy with two sub folders:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-18.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-18.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
Now if I want to replace the blob <span style="font-family: &apos;Courier New&apos;;">4658ea84</span>, git will only replace those items that are changed and keep those that haven&#x2019;t as a reference. After replacing the blob with a different file and committing the changes the graph looks as follows (new objects are marked in red):</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-19.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-19.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
As you can see, git only replaced the necessary items and referenced the already existing items.</p>
<p>Although git is very efficient in how it references existing data, if every small modification resulted in a complete copy, we would still get a huge repository after a while. To mitigate this, there&#x2019;s an automatic garbage collection process. When <em>git gc</em> runs, it will look at your blobs. Where it can, it will remove the duplicated blobs and instead store a single copy of the base data, together with a delta for each version of the blob. This way, git can still retrieve each unique version of the blob, but doesn&#x2019;t need to store the full data multiple times.</p>
<h2 id="versioning">Versioning</h2>
<p>You get a fully versioned system for free. With that versioning also comes the advantage of not deleting data, ever. I&#x2019;ve seen examples like this in SQL databases:</p>
<pre><code>id | name    | deleted 
1  | kenneth | 1
</code></pre>
<p>That&#x2019;s OK for a simple record like this, but that&#x2019;s usually not the whole story. Data might have dependencies on other data (whether they&#x2019;re foreign keys or not is an implementation detail) and when you want to restore it, chances are you can&#x2019;t do it in isolation. With git, it&#x2019;s simply a matter of pointing your branch to a different commit to get back to the correct state on a database level, not a record level.</p>
<p>Another practice I have seen is this:</p>
<pre><code>id | street  | lastUpdate 
1  | town rd | 20161012
</code></pre>
<p>This practice is even less useful: you know it was updated, but there&#x2019;s no information on what was actually updated or what the previous value was. Whenever you update data, you&#x2019;re actually deleting data and inserting new data. The old data is lost forever. With git, you can run <em>git log</em> on any file and see what changed, who changed it, when and why.</p>
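<p>To illustrate, here is a sketch with a scratch repository holding two versions of one document; the full audit trail for a single key is one command away:</p>

```bash
# A scratch repository with two versions of one document
d=$(mktemp -d) && cd "$d" && git init -q .
git config user.email demo@example.com && git config user.name demo

printf '{"id": 1}\n' > 1.json
git add 1.json && git commit -qm "1st version"
printf '{"id": 1, "name": "kenneth"}\n' > 1.json
git add 1.json && git commit -qm "2nd version"

# full audit trail for a single key: who, when, why, and the exact diff
git log -p -- 1.json

# or compare the value between any two points in history
git diff HEAD~1 HEAD -- 1.json
```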
<h2 id="gittooling">Git tooling</h2>
<p>Git has a rich toolset which you can use to explore and manipulate your data. Most of them focus on code, but that doesn&#x2019;t mean you can&#x2019;t use them with other data. The following is a non-exhaustive overview of tools that I can come up with off the top of my head.</p>
<p>Within the basic git commands, you can:</p>
<ul>
<li>Use <em>git diff</em> to find the exact changes between two commits / branches / tags / &#x2026;</li>
<li>Use <em>git bisect</em> to find out when something stopped working because of a change in the data</li>
<li>Use git <em>hooks</em> to get automatic change notifications and build full-text indices, update caches, publish data, &#x2026;</li>
<li>Revert, branch, merge, &#x2026;</li>
</ul>
<p>And then there are external tools:</p>
<ul>
<li>You can use Git clients to visualize your data and explore it</li>
<li>You can use pull requests, such as the ones on GitHub, to inspect data changes before they are merged</li>
<li>Gitinspector: statistical analysis on git repositories</li>
</ul>
<p>Any tool that works with git, works with your database.</p>
<h2 id="nosql">NoSQL</h2>
<p>Because it&#x2019;s a key-value store, you get the usual advantages of a NoSQL store such as a schema-less database. You can store any content you want, it doesn&#x2019;t even have to be JSON.</p>
<h2 id="connectivity">Connectivity</h2>
<p>Git can work in a partitioned network. You can put everything on a USB stick, save data when you&#x2019;re not connected to a network and then push and merge it when you get back online. It&#x2019;s the same advantage we regularly use when developing code, but it could be a life saver for certain use cases.</p>
<h2 id="transactions">Transactions</h2>
<p>In the above examples, we committed every change to a file. You don&#x2019;t necessarily have to do that, you can also commit various changes as a single commit. That would make it easy to roll back the changes atomically later.</p>
<p>Long lived transactions are also possible: you can create a branch, commit several changes to it and then merge it (or discard it).</p>
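<p>A long-lived transaction can be sketched with nothing but ordinary branching (the <code>txn</code> branch name below is made up for the example):</p>

```bash
# A scratch repository with a base document on the default branch
d=$(mktemp -d) && cd "$d" && git init -q .
git config user.email demo@example.com && git config user.name demo
base=$(git symbolic-ref --short HEAD)

printf 'v1\n' > a.json
git add a.json && git commit -qm "base"

# open a "transaction": a branch no reader looks at
git checkout -qb txn
printf 'v2\n' > a.json
printf 'new\n' > b.json
git add a.json b.json && git commit -qm "one atomic change set"

# commit the transaction by merging it (or discard it with: git branch -D txn)
git checkout -q "$base"
git merge -q txn
```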
<h2 id="backupsandreplication">Backups and replication</h2>
<p>With traditional databases, there&#x2019;s usually a bit of hassle to create a schedule for full backups and incremental backups. Since git already stores the entire history, there will never be a need to do full backups. Furthermore, a backup is simply executing <em>git push</em>. And those pushes can go anywhere, GitHub, BitBucket or a self-hosted git-server.</p>
<p>Replication is equally simple. By using git hooks, you can set up a trigger to run git push after every commit. Example:</p>
<pre><code class="language-bash">git remote add replica git@replica.server.com:app.git 
cat .git/hooks/post-commit 

#!/bin/sh 
git push replica
</code></pre>
<p>This is fantastic! We should all use Git as a database from now on!</p>
<p>Hold on! There are a few disadvantages as well:</p>
<h2 id="querying">Querying</h2>
<p>You can query by key &#x2026; and that&#x2019;s about it. The only piece of good news here is that you can structure your data in folders in such a way that you can easily get content by prefix. Any other query is off limits, unless you want to do a full recursive search. The only option here is to build indices specifically for querying. You can do this on a scheduled basis if staleness is of no concern, or you can use <span style="font-family: &apos;Courier New&apos;;">git hooks</span> to update indices as soon as a commit happens.</p>
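<p>The prefix query maps directly onto trees: listing everything under a &#x201C;folder&#x201D; is a single tree lookup rather than a scan. A sketch:</p>

```bash
# A scratch repository with keys grouped into folders
d=$(mktemp -d) && cd "$d" && git init -q .
git config user.email demo@example.com && git config user.name demo
mkdir users orders
printf '{}\n' > users/1.json
printf '{}\n' > users/2.json
printf '{}\n' > orders/1.json
git add . && git commit -qm "seed"

# all keys under one prefix (one tree lookup, no recursive search)
git ls-tree --name-only HEAD users/

# every key in the store
git ls-tree -r --name-only HEAD
```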
<h2 id="concurrency">Concurrency</h2>
<p>As long as we&#x2019;re writing blobs there&#x2019;s no issue with concurrency. The problem occurs when we start writing commits and updating branches. The following graph illustrates the problem when two processes concurrently try to create a commit:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image55.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image55_thumb.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
In the above case you can see that when the second process modifies the copy of the tree with its changes, it&#x2019;s actually working on an outdated tree. When it commits the tree it will lose the changes that the first process made.</p>
<p>The same story applies to moving branch heads. Between the time you commit and update the branch head, another commit might get in. You could potentially update the branch head to the wrong commit.</p>
<p>The only way to counter this is by locking any writes between reading a copy of the current tree and updating the head of the branch.</p>
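<p>There is one lighter-weight alternative to a global lock worth knowing about: <span style="font-family: &apos;Courier New&apos;;">git update-ref</span> accepts the expected old value of the ref, which turns the branch move into an atomic compare-and-swap, so a writer that lost the race can retry instead of corrupting the branch. A sketch:</p>

```bash
# Two commits to move a hand-managed ref between
d=$(mktemp -d) && cd "$d" && git init -q .
git config user.email demo@example.com && git config user.name demo
printf 'v1\n' > a.json && git add a.json && git commit -qm "c1"
c1=$(git rev-parse HEAD)
printf 'v2\n' > a.json && git add a.json && git commit -qm "c2"
c2=$(git rev-parse HEAD)

# a branch we manage by hand, starting at c1
git update-ref refs/heads/db "$c1"

# compare-and-swap: move db to c2 only if it still points at c1
git update-ref refs/heads/db "$c2" "$c1"

# a second writer that still assumes c1 fails atomically and can retry
git update-ref refs/heads/db "$c1" "$c1" 2>/dev/null || echo "lost the race, retrying"
```

<p>This only covers the final pointer move; the tree you built from may still be stale, so on failure you re-read the branch and rebuild your commit on top of it.</p>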
<h2 id="speed">Speed</h2>
<p>We all know git to be fast. But that&#x2019;s in the context of creating branches. When it comes to commits per second it&#x2019;s actually not that fast, because you&#x2019;re writing to disk all the time. We don&#x2019;t notice it, because usually we don&#x2019;t do many commits per second when writing code (at least I don&#x2019;t). After running some tests on my local machines I hit a limit of about 110 commits/second.</p>
<blockquote>
<p>Brandon Keepers showed some results in a <a href="https://vimeo.com/44458223?ref=kenneth-truyers.net#t=21m32s">video</a> a few years ago and he got to about 90 commits / second, which seems in line with what hardware advances may have brought.</p>
</blockquote>
<p>110 commits/second is enough for a lot of applications, but not for all of them. It&#x2019;s also a theoretical maximum on my local development machines, with lots of resources. There are various factors that can affect the speed:</p>
<h3 id="treesizes">Tree sizes</h3>
<p>In general, you should prefer to use lots of subdirectories instead of putting all documents in the same directory. This will keep the write speed as close to the maximum as possible. The reason for that is that every time you create a new commit, you have to copy the tree, make a change to it and then save the modified tree. Although you might think that affects size as well, that&#x2019;s actually not the case because running <em>git gc</em> will make sure to save it as a delta instead of as two different trees. Let&#x2019;s take a look at an example:</p>
<p>In the first case, we have 10.000 blobs stored in the root directory. When we add a file we copy the tree that contains 10.000 items, add one and save it. This could be a potentially lengthy operation, because of the size of the tree.</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-20.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-20.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
In the second case we have 4 levels of trees, each with 10 sub trees, and 10 blobs at the last level (10 * 10 * 10 * 10 = 10.000 files):</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-21.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-21.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
In this case, if we want to add a blob, we don&#x2019;t need to copy the entire hierarchy, we just need to copy the branch that leads to the blob. The following image shows the trees that had to be copied and amended:</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image-22.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/image_thumb-22.png" alt="Git as a NoSql database" title="image" loading="lazy"></a><br>
So, by using sub folders, instead of having to copy 1 tree with 10.000 entries, we can now copy 5 trees with 10 entries, which is quite a bit faster. The more your data grows, the more you&#x2019;ll want to use sub folders.</p>
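<p>A simple way to get such a balanced hierarchy is to shard keys by a hash prefix, the same trick git itself uses for its <span style="font-family: &apos;Courier New&apos;;">.git/objects</span> directory. A sketch (the two-level, two-character split is an arbitrary choice):</p>

```bash
# Shard a flat key into nested folders by hash prefix: ab/cd/abcd....json
key="customer-1042"
h=$(printf '%s' "$key" | sha1sum | awk '{print $1}')
path="${h:0:2}/${h:2:2}/$h.json"
echo "$path"
```

<p>With two characters per level, each tree tops out at 256 entries no matter how many documents you store.</p>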
<h3 id="combiningvaluesintotransactions">Combining values into transactions</h3>
<p>If you need to do more than 100 commits/second, chances are you don&#x2019;t need to be able to roll them back on an individual basis. In that case, instead of committing every change, you could commit several changes in one commit. You can write blobs concurrently, so you could potentially write 1000s of files concurrently to disk and then do 1 commit to save them into the repository. This has drawbacks, but if you want raw speed, this is the way to go.</p>
<p>Another way to attack the disk bottleneck is to add a different backend to git that doesn&#x2019;t immediately flush its contents to disk, but writes to an in-memory database first and then asynchronously flushes it to disk. Implementing this is not that easy though. When I was testing this solution using <em>libgit2sharp</em> to connect to a repository, I tried using a Voron-backend (which is available as open-source, as well as a variant that uses ElasticSearch). That improved speed quite a bit, but you lose the benefit of being able to inspect your data with any standard git tool.</p>
<h2 id="merging">Merging</h2>
<p>Another potential pain point is when you are merging data from different branches. As long as there are no merge conflicts, it&#x2019;s actually a rather pleasant experience, as it enables a lot of nice scenarios:</p>
<ul>
<li>Modify data that needs approval before it can go &#x201C;live&#x201D;</li>
<li>Run tests on live data that you need to revert</li>
<li>Work in isolation before merging data</li>
</ul>
<p>Essentially, you get all the fun with branches you get in development, but on a different level. The problem is when there IS a merge conflict. Merging data can be rather difficult because you won&#x2019;t always be able to make out how to handle these conflicts.</p>
<p>One potential strategy is to just store the merge conflict as is when you&#x2019;re writing data and then when you read, present the user with the diff so they can choose which one is correct. Nonetheless, it can be a difficult task to manage this correctly.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Git can work as a NoSQL database very well in some circumstances. It has its place and time, but I think it&#x2019;s particularly useful in the following cases:</p>
<ul>
<li>You have hierarchic data (because of its inherent hierarchical nature)</li>
<li>You need to be able to work in disconnected environments</li>
<li>You need an approval mechanism for your data (aka you need branching and merging)</li>
</ul>
<p>In other cases, it&#x2019;s not a good fit:</p>
<ul>
<li>You need extremely fast write performance</li>
<li>You need complex querying (although you can solve that by indexing through commit hooks)</li>
<li>You have an enormous set of data (write speed would slow down even further)</li>
</ul>
<p>So, there you go, that&#x2019;s how you can use git as a NoSQL database. Let me know your thoughts!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Open source software on company time]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/osi_keyhole_300X300_90ppi_0.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/osi_keyhole_300X300_90ppi_0_thumb.png" alt="open source software" title="osi_keyhole_300X300_90ppi_0" loading="lazy"></a><br>
Most developers love open source software, and often we come across a piece of software that we&#x2019;re writing and think &#x201C;it would be great if that already existed as an open source package&#x201D;, but then, it doesn&#x2019;t. Since we&#x2019;re writing software for</p>]]></description><link>https://www.kenneth-truyers.net/2016/10/05/open-source-software-company-time/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60d</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Wed, 05 Oct 2016 01:52:12 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/osi_keyhole_300X300_90ppi_0.png?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/10/osi_keyhole_300X300_90ppi_0_thumb.png" alt="open source software" title="osi_keyhole_300X300_90ppi_0" loading="lazy"></a><br>
Most developers love open source software, and often we come across a piece of software that we&#x2019;re writing and think &#x201C;it would be great if that already existed as an open source package&#x201D;, but then, it doesn&#x2019;t. Since we&#x2019;re writing software for a company, the natural tendency is then to implement it in-house. The thing is, if you had that thought, chances are someone else might have had the same thought. So, why shouldn&#x2019;t we open source it? From a business perspective, telling your boss you want to write open source software on company time might not make much sense to him. After all, he&#x2019;s paying you to write software for the company, not for everyone. If your argument is going to be &#x201C;open source is cool&#x201D;, you probably won&#x2019;t get very far.</p>
<p>Apart from being <em>cool</em>, developing open source software as a company actually has a lot of benefits.</p>
<blockquote>
<p>Before I iterate over what I think the advantages are, I want to make one thing clear. When we write open source software on company time, we don&#x2019;t want to open source the company&#x2019;s unique selling point. It&#x2019;d be foolish to think any company would allow that. When I talk about open sourcing company software, I&#x2019;m talking about general purpose software. Software you need to develop to support your domain. This could be an interface to a database, some utility functions you often use or any other piece that could be usable outside of the context of your company.</p>
</blockquote>
<h2 id="opensourcesoftwareadvantages">Open source software advantages</h2>
<h3 id="quality">Quality</h3>
<p>Open source software is, by its nature, public. That means that any hack you implement, any security mistake, will be visible to everybody. Therefore, when you&#x2019;re working on something public, it&#x2019;s natural to be a lot more cautious about how you develop. But that&#x2019;s only one way open source improves quality.</p>
<p>Another reason quality improves is that it forces you to decouple the software from your domain. With in-house software, it often becomes tempting to include domain logic inside external libraries, because it&#x2019;s just easier and faster. In the long run, this erodes separation of concerns, which hurts maintainability. Writing general purpose open source software forces you to decouple it.</p>
<h3 id="testing">Testing</h3>
<p>If your project is popular, a community will form around it. Once you have a community, they will start using it and possibly discover bugs before you run into them. If you have an active community they might even create a pull request to solve the bug for you. That&#x2019;s free testing and bug fixing for the company.</p>
<h3 id="newfeaturesscenarios">New features / scenarios</h3>
<p>Similar to the scenario with bug detection and public testing, it&#x2019;s possible someone runs into a use case that the current software doesn&#x2019;t handle. If they really need it, they might decide to implement it and create a pull request.</p>
<h3 id="exposure">Exposure</h3>
<p>If your software is of high quality, it has a good chance of becoming popular. Being open source doesn&#x2019;t mean it needs to be white-labeled. So, every time someone comes into contact with the software, your company&#x2019;s logo will be there. This exposes your company&#x2019;s name to a potentially large audience of developers.</p>
<p>There are plenty of examples of tech companies that are open sourcing software, to name a few:</p>
<ul>
<li>Uber: <a href="https://uber.github.io/?ref=kenneth-truyers.net" title="https://uber.github.io/">https://uber.github.io/</a></li>
<li>Spotify: <a href="https://github.com/spotify?ref=kenneth-truyers.net" title="https://github.com/spotify">https://github.com/spotify</a></li>
<li>Google: Android</li>
<li>AirBnb: <a href="http://nerds.airbnb.com/open-source/?ref=kenneth-truyers.net" title="http://nerds.airbnb.com/open-source/">http://nerds.airbnb.com/open-source/</a></li>
<li>Facebook: React</li>
</ul>
<p>If you go through the list, you&#x2019;ll see they&#x2019;re not open sourcing their main selling point, but parts that are orthogonal to their business model (i.e. you won&#x2019;t see Google open sourcing their search algorithm, or Facebook&#x2019;s timeline rules). The above is also a list of companies that are <em>hot</em> among developers. A lot of developers want to work at these companies, precisely because of their openness. Having exposure of your company to developers could attract new talent. That&#x2019;s not to say that this is the ultimate hiring strategy, but it&#x2019;s another channel which could yield some interesting results.</p>
<h3 id="developersatisfaction">Developer satisfaction</h3>
<p>Good developers are, in my opinion, passionate about their work. They will feel happier if they can work on something that solves more than the daily problems they&#x2019;re running into. Having a happier development team aids productivity, talent retention and the general atmosphere in the company. Working on something which is open source and not &#x201C;just&#x201D; for the company might also trigger them to spend some hobby time on it, which again is free work for the company.</p>
<h3 id="developerinvolvement">Developer involvement</h3>
<p>Sometimes developers move on from the company. One of the first things you do is remove their access to the code repository. Even if they felt they built something useful and want to continue using it, they&#x2019;ll have to work privately on a copy they kept. Technically they&#x2019;re not even allowed to do that, but in reality it happens.</p>
<p>On the other hand, if the code is open source, and they move on, they can still contribute to it. Likely they will use it in the next company they&#x2019;re at. This will minimize the loss of knowledge in the team and again get more development time on the software for free. This is something that I experienced first hand when moving on from a company. The software we had written was by no means popular, but I found it to be useful, so I introduced it at my new company. Now we&#x2019;re happily contributing to it when we need it.</p>
<p>It&#x2019;s a win-win-win. It&#x2019;s good for my new company, because they get software that was already developed, it&#x2019;s good for my old company, because I (and my colleagues) are adding new features and bug fixes for their software and it&#x2019;s good for me, because I didn&#x2019;t have to do it all over again.</p>
<h2 id="conclusion">Conclusion</h2>
<p>A lot of the above advantages only materialize if your software becomes popular. But what do you have to lose? If it doesn&#x2019;t become popular, it&#x2019;s just the same as developing it in-house, which was the starting point anyway.</p>
<p>Another argument, although not very tangible, is that it&#x2019;s just the right thing to do. We live in a world where we can only achieve what we&#x2019;re achieving by standing on the shoulders of others. I&#x2019;m sure 99.99% of closed source software is using open source software somewhere, so why not be a good citizen and contribute back to the community?</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Avoiding code ownership]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Creating development silos is a practice I have seen in many different teams. What I&#x2019;m talking about is having developers specialize in parts of the domain, i.e. one developer handles all the code related to invoicing, another one does everything around order management, etc. It&#x2019;s</p>]]></description><link>https://www.kenneth-truyers.net/2016/09/27/avoiding-code-ownership/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60c</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Tue, 27 Sep 2016 01:36:07 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Creating development silos is a practice I have seen in many different teams. What I&#x2019;m talking about is having developers specialize in parts of the domain, i.e. one developer handles all the code related to invoicing, another one does everything around order management, etc. It&#x2019;s a natural tendency to select the same programmer for the same part of the application all the time, because it creates immediate and tangible results: the developer with the most knowledge of the code at hand will complete the task as quickly and as well as possible.</p>
<p>However, in the long term, I believe there&#x2019;s more benefit in doing the exact opposite. Although counter intuitive, I think spreading the knowledge of the domain throughout the development team has a lot of advantages.</p>
<h2 id="knowledgesharing">Knowledge sharing</h2>
<p>If you assign a task only to a developer that has worked on a feature before, you concentrate the knowledge of that part of the domain in that developer. While having a lot of knowledge is good for a single developer, it&#x2019;s not good for the team. Having the knowledge spread out over the team unlocks a bunch of advantages:</p>
<h3 id="consistency">Consistency</h3>
<p>Initially, a developer with more knowledge of the code will complete a task faster. That is a logical fact. The same is true for the opposite: if you give the task to a developer with no knowledge of the code at all, it will take him considerably more time and effort to complete it. In an ideal world, a team is always the same size and no one ever leaves, takes holidays or is off sick. In reality however, teams change, have other responsibilities and are in general not constant.</p>
<p>By having knowledge concentrated on a developer/feature basis, you will see a lot of highs and lows in the productivity of the team. The highs occur when the team is in a stable period, with no one leaving the company and no sick days or holidays. The lows happen when someone leaves: all of a sudden a lot of knowledge has left the company and someone needs to pick that up. The same happens when someone goes on holiday. In a way, a holiday is even worse, because the rest of the team will just sit and wait to start &#x201C;that feature that touches invoicing&#x201D; until John, who knows all about invoicing, is back from holiday.</p>
<p>If you spread the knowledge however, you will eventually get to a point where development speed is consistent. Holidays are spread over the year and the team just picks up any work that&#x2019;s available. The same happens when someone leaves the company. What&#x2019;s more is that a new developer can be trained by anyone, not just by that one other guy who also has the knowledge required (provided there IS actually another one around).</p>
<h3 id="responsibility">Responsibility</h3>
<p>If you know that no one will look at the code you&#x2019;re writing (at least not until you&#x2019;re gone), there&#x2019;s a natural tendency to get sloppy. Conversely, knowing that this code will be viewed tomorrow by your colleague tends to put you on edge. This isn&#x2019;t limited to sloppy or bad developers: to some extent, every developer is prone to it, regardless of how good or professional they are. It&#x2019;s simple human nature.</p>
<h3 id="communication">Communication</h3>
<p>Having shared knowledge moves code ownership from the individual to the team. With everyone on the same wavelength, communication within the team improves, and mistakes get picked up by the team, not by individuals. Individuals can have good days and bad days. A team usually zeroes that out.</p>
<h3 id="codereviews">Code Reviews</h3>
<p>If you&#x2019;re doing code reviews (and you should: <a href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/</a>), rotating the team will also improve their quality. If you review code of which you have never seen the context, it&#x2019;s a lot harder to assess the quality. Either it takes a lot of time to review, because you have to read and analyze all the surrounding bits, or, more likely, developers tend to think, &#x201C;looks decent enough, I suppose that&#x2019;s handled somewhere else&#x201D;.</p>
<p>If you know the context, it&#x2019;s much easier to spot bugs, suggest alternate patterns and provide valuable feedback. Without knowing context, code reviews are often reduced to formatting checks, something that&#x2019;s better left to automation.</p>
<h3 id="codequality">Code Quality</h3>
<p>By having a broad knowledge of the entire domain, chosen solutions tend to fit better into the whole. It&#x2019;s easy to provide a narrow solution for the problem at hand, but it&#x2019;s very difficult to provide a generic solution that will be scalable in light of where the business is heading. By sharing the knowledge, you prevent tunnel vision.</p>
<h2 id="developersatisfaction">Developer Satisfaction</h2>
<p>Development is a creative activity. Nothing kills creativity more than repetition and boredom. By working on the same part over and over again, developers get bored, leave for other, more exciting places or simply go into standby mode, doing what they need to do and nothing more. By letting people work on different parts, they feel more part of an organization and an idea, and they can see the bigger goals. That creates a motivating environment and will make them want to do the best possible job.</p>
<h2 id="planning">Planning</h2>
<p>Related to the point on consistency, planning workload also becomes a lot easier. You no longer have to check each developer&#x2019;s schedule and you don&#x2019;t have to cut out requested holidays (which also helps for developer satisfaction). Because there are multiple developers who can do a job, you can just assign the task to any developer that&#x2019;s available.</p>
<h2 id="termsandconditions">Terms and conditions</h2>
<p>Rotating the team around features has a lot of advantages, as explained above. Obviously, don&#x2019;t take this advice to the extreme. Don&#x2019;t let your DBA design your front page, don&#x2019;t let your UX specialist optimize DB queries and, for the love of god, don&#x2019;t let your PR spokesman implement your login page.<br>
Developers all have their specialties, that&#x2019;s OK, but they should be technical specialties. What you want to avoid is that developers become specialists in a thin slice of the domain.</p>
<p>Another thing to keep in mind is the team size. I&#x2019;ve found the above guidelines to be useful in small teams. Once your team grows beyond 7-8 developers, I prefer to take the other extreme: separate the teams. The boundaries between the knowledge should be more clearly defined and any team should consider code by a different team as if it were third-party code. This allows teams to be very focused. It also means that external interfaces should be very clear, well documented and above all, stable.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Sometimes doing the non-intuitive thing, is more beneficial than doing what seems natural. Rotating a team around features increases consistency, code quality, responsibility and developer&#x2019;s satisfaction. While going for short term wins might be tempting, the long term benefits are clear.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Database migrations made simple]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/06/db_migrations_thumb.jpg" alt="database migrations" title="database migrations" loading="lazy"><br>
I make no secret of the fact that <a href="https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/">I don&#x2019;t like ORM&#x2019;s</a>. One part of why I don&#x2019;t like them is the way they handle database migrations. To successfully create and execute database migrations, you often need to know quite a bit about the</p>]]></description><link>https://www.kenneth-truyers.net/2016/06/02/database-migrations-made-simple/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60b</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Thu, 02 Jun 2016 11:46:15 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/06/db_migrations_thumb.jpg" alt="database migrations" title="database migrations" loading="lazy"><br>
I make no secret of the fact that <a href="https://www.kenneth-truyers.net/2014/11/15/how-to-ditch-your-orm/">I don&#x2019;t like ORM&#x2019;s</a>. One part of why I don&#x2019;t like them is the way they handle database migrations. To successfully create and execute database migrations, you often need to know quite a bit about the framework. I don&#x2019;t like having to know things which can be solved in a much simpler way.<br>
Apart from ORM migrations, there are tools out there such as Redgate&#x2019;s database tools. While this is actually a very good and useful tool, it&#x2019;s often overkill for small to medium-sized applications. Apart from that, it&#x2019;s also quite expensive, so maybe not the best tool to use in your start-up.</p>
<h2 id="kiss">KISS</h2>
<p>Database migrations, contrary to popular belief, are not rocket science. Essentially, you want to execute some scripts whenever you release a new version of your software and possibly execute some scripts to undo those in case you want to rollback a faulty deployment. Building such a thing is not very difficult. It won&#x2019;t have all the bells and whistles that a tool such as Redgate has, but the knowledge required is far less and, more importantly, instantly understandable for any new hires on a team. If all you need is upgrade and downgrade, you can use the code from this article with your modifications and tweaks. It&#x2019;s based on some simple conventions without any possibilities for configuration or customization, but again, YAGNI (you aren&#x2019;t gonna need it). If you end up needing it, then you can just modify the script and get on with more interesting stuff.</p>
<p>The code is very simple in its setup and it works like this:</p>
<ul>
<li>A table MigrationScripts is created in the database which contains all scripts that have been executed and at what date</li>
<li>On startup of the application (or whichever moment you choose), it scans a folder for .sql scripts with the naming convention YYYYMMDD-HHMM-&lt;some_name&gt;.sql</li>
<li>The code then does a diff to see which scripts have already been executed in the database</li>
<li>It then runs the scripts that haven&#x2019;t been executed yet, in the order of the date parsed from the naming convention</li>
</ul>
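<p>One nice property of the YYYYMMDD-HHMM prefix is that, being zero-padded, it sorts lexically in chronological order. The selection logic of the last three steps can be sketched as follows (JavaScript for illustration; the article&#x2019;s actual implementation below is C#, and the function name is made up):</p>
<pre><code class="language-javascript">// Hypothetical helper: given the .sql files on disk and the script names already
// recorded in the MigrationScripts table, return the ones still to be executed,
// ordered by the date encoded in the filename (YYYYMMDD-HHMM-name.sql).
function pendingScripts(files, executed) {
  return files
    .map(f =&gt; f.replace(/\.sql$/, ""))        // strip the extension
    .filter(name =&gt; !executed.includes(name)) // skip already-executed scripts
    .sort();                                  // lexical sort orders by the date prefix
}

console.log(pendingScripts(
  ["20160102-0900-add-index.sql", "20151204-1030-Init.sql", "20151210-1415-orders.sql"],
  ["20151204-1030-Init"]));
// → ["20151210-1415-orders", "20160102-0900-add-index"]
</code></pre>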
<h2 id="databasemigrationsupgrading">Database migrations: upgrading</h2>
<p>The example that I&#x2019;ll be showing here is something I used previously on an ASP.NET MVC app, which was the only client accessing the database. In case you have multiple applications accessing the same database, you probably want to extract this code into a separate application so you can deploy that application whenever a database update is needed. I&#x2019;ll show the simple version though.</p>
<blockquote>
<p>NOTE: I&#x2019;m using Dapper in this example, as we were using it in that project, but this could easily be done with any other micro-ORM or ADO.NET directly.</p>
</blockquote>
<pre><code class="language-csharp">// requires: System, System.Configuration, System.Data.SqlClient, System.IO,
// System.Linq, System.Web.Hosting and Dapper
public static void Run()
{
    string conString = ConfigurationManager.ConnectionStrings[&quot;sql_migrations&quot;]
                                           .ConnectionString;
    using (var con = new SqlConnection(conString))
    {
        // check if the migrations table exists, otherwise execute the first script (which creates that table)
        if (con.ExecuteScalar&lt;int&gt;(@&quot;SELECT count(1) FROM sys.tables
                                            WHERE name = &apos;migrationscripts&apos;&quot;) == 0)
        {
            con.Execute(GetSql(&quot;20151204-1030-Init&quot;));
            con.Execute(@&quot;INSERT INTO MigrationScripts (Name, ExecutionDate)
                                 VALUES (@Name, GETDATE())&quot;,
                                 new { Name = &quot;20151204-1030-Init&quot; });
        }

        // Get all scripts that have been executed from the database
        var executedScripts = con.Query&lt;string&gt;(&quot;SELECT Name FROM MigrationScripts&quot;).ToList();
        // Get all scripts from the filesystem
        Directory.GetFiles(HostingEnvironment.MapPath(&quot;/App_Data/Scripts/&quot;))
                // strip out the extensions
                .Select(Path.GetFileNameWithoutExtension)
                // filter out the ones that have already been executed
                .Where(fileName =&gt; !executedScripts.Contains(fileName))
                // Order by the date in the filename
                .OrderBy(fileName =&gt; DateTime.ParseExact(fileName.Substring(0, 13), &quot;yyyyMMdd-HHmm&quot;, null))
                .ToList()
                .ForEach(script =&gt;
                {
                    // Execute each one of the scripts
                    con.Execute(GetSql(script));
                    // record that it was executed in the MigrationScripts table
                    con.Execute(@&quot;INSERT INTO MigrationScripts (Name, ExecutionDate)
                                         VALUES (@Name, GETDATE())&quot;,
                                         new { Name = script });
                });
    }
}

static string GetSql(string fileName) =&gt;
    File.ReadAllText(HostingEnvironment.MapPath($&quot;/App_Data/Scripts/{fileName}.sql&quot;));
</code></pre>
<p>That&#x2019;s it, about 20 lines of code (comments and line breaks don&#x2019;t count ;-)) for a fully working database migration infrastructure.</p>
<h2 id="databasemigrationsdowngrading">Database migrations: downgrading</h2>
<p>In cases where you want to be able to roll back the database, you could add another convention: all rollback scripts have the same name but with _rollback appended to the filename. Then you can add a separate function which takes as an argument the name of the script you want to roll back. From there on, it&#x2019;s a case of loading the correct rollback scripts, sorting them, executing them and removing the records from the MigrationScripts table. All in all, another 20 lines.</p>
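<p>The rollback selection can be sketched in the same spirit (again JavaScript for illustration, since the article doesn&#x2019;t show this code; the function name and the &#x201C;undo everything down to and including the named script, newest first&#x201D; semantics are my assumptions):</p>
<pre><code class="language-javascript">// Hypothetical helper: given the scripts recorded as executed and the name of the
// script to roll back to, return the _rollback files to run, newest first.
function rollbackPlan(executed, target) {
  const sorted = executed.slice().sort();   // chronological: the date prefix sorts lexically
  const index = sorted.indexOf(target);
  if (index === -1) throw new Error("Unknown script: " + target);
  return sorted
    .slice(index)                           // the target and everything executed after it
    .reverse()                              // undo the newest changes first
    .map(name =&gt; name + "_rollback.sql");
}

console.log(rollbackPlan(
  ["20151204-1030-Init", "20151210-1415-orders", "20160102-0900-add-index"],
  "20151210-1415-orders"));
// → ["20160102-0900-add-index_rollback.sql", "20151210-1415-orders_rollback.sql"]
</code></pre>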
<h2 id="conclusion">Conclusion</h2>
<p>The above code allows you to:</p>
<ul>
<li>Store all your database changes in Git</li>
<li>Do code reviews on SQL scripts (by anyone, including DB Admins)</li>
<li>Pull the repo with a fresh database, run the application and get started (handy for new hires)</li>
<li>Test the migrations locally and in every test environment</li>
<li>Do data migrations (or any SQL you want to write for your migrations)</li>
<li>Be flexible: you own the code, so anything is possible (convention change, extract it, deploy it separately, etc. )</li>
</ul>
<p>It does have a little cost of ownership, as you may need to modify it sometimes. I&#x2019;d argue however that the cost is smaller than having to know about your ORM&#x2019;s migration intricacies or learn how to use a database management tool.</p>
<p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/06/works-on-my-machine_thumb.png" alt="works-on-my-machine" title="works-on-my-machine" loading="lazy"><br>
Also, this code is fully certified &#x201C;Works on my machine&#x201D;-ware. You can use it, tweak it, ask me a question about it, but don&#x2019;t ask me to create a NuGet package of it, as it would go right back to the place I wanted to avoid with this code snippet.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Writing custom EsLint rules]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In statically compiled languages, we usually lean on the compiler to catch out common errors (or plain stupidities). In dynamic languages we don&#x2019;t have this luxury. While you could argue over whether this is a good or a bad thing, it&#x2019;s certainly true that a good</p>]]></description><link>https://www.kenneth-truyers.net/2016/05/27/writing-custom-eslint-rules/</link><guid isPermaLink="false">5ab2d765fc11f500225ab60a</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Fri, 27 May 2016 11:49:27 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In statically compiled languages, we usually lean on the compiler to catch out common errors (or plain stupidities). In dynamic languages we don&#x2019;t have this luxury. While you could argue over whether this is a good or a bad thing, it&#x2019;s certainly true that a good static analysis tool can help you quite a bit in detecting mistakes. For Javascript, a few tools are available: there&#x2019;s the good old JsLint, which is very strict, JsHint, created because not all of us are Douglas Crockford and then there&#x2019;s EsLint. In this post I&#x2019;ll show you how to create custom EsLint rules.</p>
<p>EsLint is quite a nice alternative for JsHint and is very flexible. While JsHint certainly has its benefits and comes out of the box with a lot of configurable options, EsLint allows you to configure your own custom rules.</p>
<p>These custom EsLint rules can be added to their library and released to the community as optional extra checks, they can be company specific to enforce a certain coding style or they can be project specific.</p>
<p>In this post I want to talk about creating project specific custom EsLint rules. The approach is easily transferable to a more generally useful plugin if you want (by publishing it to NPM). I found that creating a project specific plugin has a few extra hurdles, so that&#x2019;s what I&#x2019;ll explain.</p>
<h2 id="analyzingcode">Analyzing code</h2>
<h3 id="asbtractsyntaxtrees">Abstract Syntax Trees</h3>
<p>Before we start writing custom EsLint rules, I first want to show how code is analyzed and how we can plug in to that. In order to analyze code we must first build an abstract syntax tree. In this case we want to build an ES 6 <a href="https://github.com/estree/estree?ref=kenneth-truyers.net">abstract syntax tree (AST)</a>. An AST is essentially a data structure which describes the code. The next example shows some sample code and the corresponding syntax tree:</p>
<pre><code class="language-javascript">var a = 1 + 1;</code></pre>
<p><img src="https://www.kenneth-truyers.net/wp-content/uploads/2016/05/ast_thumb.jpg" alt="custom EsLint rules - Abstract Syntax Tree" loading="lazy">The above visualization can also be presented as a pure data structure, here in the form of JSON:</p>
<pre><code class="language-json">{
    &quot;type&quot;: &quot;VariableDeclaration&quot;,
    &quot;declarations&quot;: [
        {
            &quot;type&quot;: &quot;VariableDeclarator&quot;,
            &quot;id&quot;: {
                &quot;type&quot;: &quot;Identifier&quot;,
                &quot;name&quot;: &quot;a&quot;
            },
            &quot;init&quot;: {
                &quot;type&quot;: &quot;BinaryExpression&quot;,
                &quot;left&quot;: {
                    &quot;type&quot;: &quot;Literal&quot;,
                    &quot;value&quot;: 1,
                    &quot;raw&quot;: &quot;1&quot;
                },
                &quot;operator&quot;: &quot;+&quot;,
                &quot;right&quot;: {
                    &quot;type&quot;: &quot;Literal&quot;,
                    &quot;value&quot;: 1,
                    &quot;raw&quot;: &quot;1&quot;
                }
            }
        }
    ],
    &quot;kind&quot;: &quot;var&quot;
}
</code></pre>
<p>When we have this syntax tree, you could walk the structure and then write something like this for each node:</p>
<pre><code class="language-javascript">if(node.type === &quot;VariableDeclarator&quot; &amp;&amp; node.id.name.length &lt; 2) {
    console.log(&quot;Variable names should be more than 1 character&quot;); 
}
</code></pre>
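<p>The walking itself is a small recursion over the tree: visit every object that carries a type, and recurse into child properties and arrays. A self-contained sketch, run against the AST from the example above (the walk helper is made up for illustration; EsLint does all of this for you internally):</p>
<pre><code class="language-javascript">// Minimal AST walker: calls visit for every node in the tree
function walk(node, visit) {
  if (!node || typeof node !== "object") return;  // strings and numbers: nothing to visit
  if (Array.isArray(node)) {
    node.forEach(child =&gt; walk(child, visit));
    return;
  }
  if (node.type) visit(node);                     // every AST node carries a "type"
  Object.keys(node).forEach(key =&gt; walk(node[key], visit));
}

// The AST for "var a = 1 + 1;" from the example above
const ast = {
  type: "VariableDeclaration",
  declarations: [{
    type: "VariableDeclarator",
    id: { type: "Identifier", name: "a" },
    init: {
      type: "BinaryExpression",
      left: { type: "Literal", value: 1, raw: "1" },
      operator: "+",
      right: { type: "Literal", value: 1, raw: "1" }
    }
  }],
  kind: "var"
};

const problems = [];
walk(ast, node =&gt; {
  if (node.type !== "VariableDeclarator") return;
  if (node.id.name.length &gt;= 2) return;           // long enough, nothing to report
  problems.push("Variable names should be more than 1 character");
});
console.log(problems); // → ["Variable names should be more than 1 character"]
</code></pre>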
<h2 id="writingcustomeslintrules">Writing custom EsLint rules</h2>
<p>Since all of this AST-generation and node-walking is not specific to any rule, it can be externalized, and that&#x2019;s exactly what EsLint gives us. EsLint builds the syntax tree and walks all the nodes for us. We can then define interception points for the nodes we want to intercept. Apart from that, EsLint also gives us the infrastructure to report on problems that are found. Here&#x2019;s the above example rewritten as an EsLint rule:</p>
<pre><code class="language-javascript">module.exports.rules = { 
    &quot;var-length&quot;: context =&gt; 
        ({ VariableDeclarator: (node) =&gt; 
            { 
                if(node.id.name.length &lt; 2){ 
                    context.report(node, &apos;Variable names should be longer than 1 character&apos;); 
                } 
            } 
        }) 
    };
</code></pre>
<p>This can then be plugged in to EsLint and it will report the errors for any Javascript code you throw at it.</p>
<h2 id="eslintplugins">EsLint plugins</h2>
<p>In order to write a custom EsLint rule, you need to create an EsLint plugin. An EsLint plugin must follow a set of conventions before it can be loaded by EsLint:</p>
<ul>
<li>It must be a node package (distributed through NPM, although there&#x2019;s a way around it, read on &#x2026;)</li>
<li>Its name must start with eslint-plugin</li>
</ul>
<blockquote>
<p>The documentation mentions a way to write custom rules in a local directory and running them through a command-line option. This still works, but is deprecated and will soon break in newer versions of EsLint. It&#x2019;s recommended to go the plugin-route as described in this post.</p>
</blockquote>
<h3 id="creatingtheplugin">Creating the plugin</h3>
<p>With the above requirements, we can go two routes:</p>
<ul>
<li>Use <a href="http://yeoman.io/?ref=kenneth-truyers.net">Yeoman</a> and the corresponding <a href="https://www.npmjs.com/package/generator-eslint?ref=kenneth-truyers.net">EsLint generator</a></li>
<li>Create your own package</li>
</ul>
<p>The generator sets you up with a nice folder structure, including tests, a proper description and some documentation. However, if you just want to write some quick rules, I find it easier to just create a folder and the structure myself. Essentially, you need two files:</p>
<ul>
<li>package.json (remember, it has to be an NPM package)</li>
<li>index.js, where your rules will live</li>
</ul>
<p>Here&#x2019;s a basic version of the package.json:</p>
<pre><code class="language-json">{
    &quot;name&quot;: &quot;eslint-plugin-my-eslint-plugin&quot;,
    &quot;version&quot;: &quot;0.0.1&quot;,
    &quot;main&quot;: &quot;index.js&quot;,
    &quot;devDependencies&quot;: {
        &quot;eslint&quot;: &quot;~2.6.0&quot;
    },
    &quot;engines&quot;: {
        &quot;node&quot;: &quot;&gt;=0.10.0&quot;
    }
}
</code></pre>
<p>And this is what index.js looks like, with our custom EsLint rule:</p>
<pre><code class="language-javascript">module.exports.rules = {
    &quot;var-length&quot;: context =&gt;
        ({
            VariableDeclarator: (node) =&gt; {
                if (node.id.name.length &lt; 2) {
                    context.report(node, &apos;Variable names should be longer than 1 character&apos;);
                }
            }
            // , more interception points (see https://github.com/estree/estree)
        })
    // , more rules
};
</code></pre>
<h3 id="installingtheplugin">Installing the plugin</h3>
<p>As I mentioned before, if you want to share your plugin, you can distribute it via NPM. This doesn&#x2019;t always make sense though as you might have project specific rules. In those cases, you can just create the folder with your plugin locally and commit it to your code repository. For it to work, you still need to install it as a node package though. You can do that with the following NPM command:</p>
<pre><code class="language-bash">npm install -S ./my-eslint-plugin
</code></pre>
<p>This will install the package from the local folder my-eslint-plugin. That way, you can keep the rules local to your project and still use them while running EsLint.</p>
<h3 id="configuringtheplugin">Configuring the plugin</h3>
<p>For EsLint to recognize and use the plugin we have to notify it through the configuration. We need to do two things:</p>
<ul>
<li>Tell it to use the plugin</li>
<li>Switch on the rules</li>
</ul>
<p>To tell it to use the plugin, we can add a plugins node into the configuration, specifying the name of the plugin (without the &#x201C;eslint-plugin&#x201D;-prefix):</p>
<pre><code class="language-json">&quot;plugins&quot;: [ &quot;my-eslint-plugin&quot; ]
</code></pre>
<p>Next we need to define the rules:</p>
<pre><code class="language-json">&quot;rules&quot;: { &quot;my-eslint-plugin/var-length&quot;: &quot;warn&quot; }
</code></pre>
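<p>Putting the two settings together, the relevant part of the EsLint configuration file looks like this (shown in JSON form; the rest of your configuration is unaffected):</p>
<pre><code class="language-json">{
    &quot;plugins&quot;: [&quot;my-eslint-plugin&quot;],
    &quot;rules&quot;: {
        &quot;my-eslint-plugin/var-length&quot;: &quot;warn&quot;
    }
}
</code></pre>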
<p>With the plugin installed, you can now run EsLint and it will report on one letter variable names.</p>
<h2 id="example">Example</h2>
<p>While this is all nice, the above rule is probably not very useful, since there&#x2019;s already a built-in rule for that (<a href="http://eslint.org/docs/rules/id-length?ref=kenneth-truyers.net" title="http://eslint.org/docs/rules/id-length">http://eslint.org/docs/rules/id-length</a>).</p>
<p>As for general styling rules, EsLint has probably most of them already covered, and the ones it hasn&#x2019;t are probably quite obscure. Custom EsLint rules come in handy at the project level.</p>
<p>As an example, I&#x2019;m currently working on an Angular 1 project. The intention is to port this over to a different framework soon. Because of that, we want to make sure we&#x2019;re as independent of Angular as possible. There are certain things we can do just as easily in plain JS instead of using angular&#x2019;s utility methods. For others, we can use different libraries that we can port over as well when we port the application.</p>
<p>Now, we don&#x2019;t want to go off and change all these occurrences at once, because that would be a lot of upfront work. Ideally, we want the following:</p>
<ul>
<li>Get notified when there&#x2019;s a call to an angular-method which could be done easily in plain JS in the module we&#x2019;re working on</li>
<li>Get notified on the CI-server (with a warning) if an angular-method is used</li>
<li>Once we get rid of the warnings for that angular-method, fail the build on the CI-server if that call is detected again</li>
</ul>
<p>So, as an example, here are a few rules we defined in our project:</p>
<pre><code class="language-javascript">module.exports.rules = { 
    &quot;no-angular-copy&quot;: context =&gt; 
        ({ MemberExpression: function(node) { 
            if (node.object.name === &quot;angular&quot; &amp;&amp; node.property.name === &quot;copy&quot;) {
                context.report(node, &quot;Don&apos;t use angular.copy, use cloneDeep from lodash instead.&quot;); 
            }
        } 
    }), 
    &quot;no-angular-isDefined&quot;: context =&gt; 
        ({ MemberExpression: function(node) { 
            if (node.object.name === &quot;angular&quot;) { 
                if(node.property.name === &quot;isDefined&quot;) { 
                    context.report(node, &quot;Don&apos;t use angular.isDefined. Use vanilla JS.&quot;); 
                } else if (node.property.name === &quot;isUndefined&quot;) { 
                    context.report(node, &quot;Don&apos;t use angular.isUndefined. Use vanilla JS&quot;); 
                } 
            } 
        } 
    }) 
};
</code></pre>
<p>We then enabled the rules with a warning in our configuration. As soon as we notice no more warnings for one of these rules, we will switch them to errors. The CI-build is configured to fail when EsLint finds an error. On top of that we have the EsLint plugin for VsCode, which looks like this in the editor:</p>
<p><img src="https://www.kenneth-truyers.net/wp-content/uploads/2016/05/vscode_eslint_thumb.jpg" alt="Custom EsLint rules - VS Code EsLint" loading="lazy"><br>
This combination ensures that we clean up angular calls while we continue development and that no new calls will be introduced.</p>
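<p>A sketch of what that configuration could look like, expressed as an .eslintrc.js file (the file layout here is illustrative; the plugin and rule names match the examples in this post):</p>

```javascript
// Hypothetical .eslintrc.js: each rule starts out as "warn"; once it no
// longer produces any warnings, it gets promoted to "error" so that any
// new occurrence fails the CI build.
const config = {
  plugins: ["my-eslint-plugin"],
  rules: {
    // already cleaned up: any new occurrence fails the build
    "my-eslint-plugin/no-angular-copy": "error",
    // still being migrated: reported, but the build stays green
    "my-eslint-plugin/no-angular-isDefined": "warn"
  }
};

module.exports = config;
```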
<blockquote>
<p>Sidenote: the rules here are not foolproof since someone could assign angular to a temp variable and then call the methods on the temp variable. Be that as it may, we want to catch the general use case with a simple rule. We could probably write a more thorough analyzer, but it would take a lot of time. Since all of this code still needs to go through <a href="https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/">code reviews</a>, we don&#x2019;t worry too much about it.</p>
</blockquote>
<h2 id="otherpossibilities">Other possibilities</h2>
<p>The above example is something that was very convenient for our use case, but the possibilities are endless. Here are a few things you could achieve with this:</p>
<ul>
<li>Ensure a user message is shown when an HTTP call is initiated and that it&#x2019;s properly removed once the call ends.</li>
<li>Ensure jQuery isn&#x2019;t used when we&#x2019;re using a SPA-framework (or only in certain modules)</li>
<li>When using jQuery, ensure you&#x2019;re always attaching event-handlers using the .on method instead of the shorthand .click and similar</li>
</ul>
<p>There are plenty of possibilities for custom EsLint rules and most of it depends on your project. What other ideas do you have?</p>
<h2 id="existingplugins">Existing plugins</h2>
<p>Of course, there are plenty of existing <a href="https://www.npmjs.com/search?q=eslint+plugin&amp;ref=kenneth-truyers.net">EsLint plugins</a> for existing frameworks on NPM already. If you&#x2019;re using one of these frameworks, it&#x2019;s worth checking out the rules to see if you could benefit from enabling some of them.</p>
<p>More information on writing custom EsLint rules can be found in the <a href="http://eslint.org/docs/developer-guide/working-with-rules?ref=kenneth-truyers.net">official documentation</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Iterators and Generators in Javascript]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Last week I wrote about the yield return statement in c# and how it allows for deferred execution. In that post I explained how it powers LINQ and explained some non-obvious behaviors.</p>
<p>In this week&#x2019;s post I want to do the same thing but for Javascript. ES6 (ES2015)</p>]]></description><link>https://www.kenneth-truyers.net/2016/05/20/iterators-and-generators-in-javascript/</link><guid isPermaLink="false">5ab2d765fc11f500225ab609</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Fri, 20 May 2016 12:01:29 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Last week I wrote about the yield return statement in c# and how it allows for deferred execution. In that post I explained how it powers LINQ and explained some non-obvious behaviors.</p>
<p>In this week&#x2019;s post I want to do the same thing but for Javascript. ES6 (ES2015) is becoming more and more mainstream, but in terms of usage I mostly see the more common arrow-functions or block-scoping (with let and const).</p>
<p>However, iterators and generators are also a part of Javascript and I want to go through how we can use them to create deferred execution in Javascript.</p>
<h2 id="iterators">Iterators</h2>
<p>An iterator is an object that can access one item at a time from a collection while keeping track of its current position. Javascript is a bit &#x2018;simpler&#x2019; than c# in this aspect: to be a valid iterator, an object just needs a method called <font face="Courier New">next</font> that moves to the next item.</p>
<p>The following is an example of a function that creates an iterator from an array:</p>
<pre><code class="language-javascript">let makeIterator = function(arr){ 
    let currentIndex = 0; 
    return { 
        next(){ 
            return currentIndex &lt; arr.length 
                ? { value: arr[currentIndex++], done : false } 
                : { done: true}; 
        } 
    }; 
}
</code></pre>
<p>We could now use this function to create an iterator and iterate over it:</p>
<pre><code class="language-javascript">let iterator = makeIterator([1,2,3,4,5]); 
while(1){ 
    let {value, done} = iterator.next(); 
    if(done) break; 
    console.log(value); 
}
</code></pre>
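<p>One subtlety worth noting: an object with just a <font face="Courier New">next</font>-method satisfies the iterator contract, but it can&#x2019;t be handed to <font face="Courier New">for..of</font> directly, because <font face="Courier New">for..of</font> requires the iterable protocol described in the next section. A small sketch (repeating makeIterator so it stands on its own):</p>

```javascript
let makeIterator = function(arr){
  let currentIndex = 0;
  return {
    next(){
      return currentIndex < arr.length
        ? { value: arr[currentIndex++], done: false }
        : { done: true };
    }
  };
};

// Calling next() manually works fine:
let it = makeIterator([1, 2]);
console.log(it.next().value); // 1

// But for..of throws a TypeError: the object has no Symbol.iterator
let threw = false;
try {
  for (let v of makeIterator([1, 2])) { }
} catch (e) {
  threw = true;
}
console.log(threw); // true
```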
<h2 id="iterables">Iterables</h2>
<p>An iterable is an object that defines its iteration behavior. The <font face="Courier New">for..of</font> loop can loop over any iterable. Built-in Javascript objects such as <font face="Courier New">Array</font> and <font face="Courier New">Map</font> are iterables and can thus be looped over by the <font face="Courier New">for..of</font> construct. But we can also create our own iterables. To do that we must define a method on the object called <font face="Courier New">@@iterator</font> or, more conveniently, use the <font face="Courier New">Symbol.iterator</font> as the method name:</p>
<pre><code class="language-javascript">let iterableUser = { 
    name: &apos;kenneth&apos;, 
    lastName: &apos;truyers&apos;, 
    [Symbol.iterator]: function*(){ 
        yield this.name; 
        yield this.lastName; 
    } 
} 
// logs &apos;kenneth&apos; and &apos;truyers&apos; 
for(let item of iterableUser){ 
    console.log(item); 
}
</code></pre>
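<p><font face="Courier New">for..of</font> is not the only construct that understands this protocol: the spread operator and array destructuring consume iterables too. A small sketch (repeating the object above so it stands on its own):</p>

```javascript
let iterableUser = {
  name: 'kenneth',
  lastName: 'truyers',
  [Symbol.iterator]: function*(){
    yield this.name;
    yield this.lastName;
  }
};

// Spread drains the iterator into an array
let parts = [...iterableUser];     // ['kenneth', 'truyers']

// Destructuring pulls values off a fresh iterator one by one
let [first, last] = iterableUser;  // 'kenneth', 'truyers'
```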
<h2 id="generators">Generators</h2>
<p>Custom iterators and iterables are useful, but are complicated to build, since you need to take care of the internal state. A generator is a special function that allows you to write an algorithm that maintains its own state. They are factories for iterators. A generator function is a function marked with the <font face="Courier New">*</font> and has at least one <font face="Courier New">yield</font>-statement in it.</p>
<p>The following generator loops endlessly and spits out numbers:</p>
<pre><code class="language-javascript">function* generateNumbers(){ 
    let index = 0; 
    while(true) 
        yield index++; 
}
</code></pre>
<p>A normal function would run endlessly (or until the memory is full), but similar to what I discussed in the post on yield return in C#, the <font face="Courier New">yield</font>-statement gives control back to the caller, so we can break out of the sequence earlier.</p>
<p>Here&#x2019;s how we could use the above function:</p>
<pre><code class="language-javascript">let sequence = generateNumbers(); // no execution here, just getting a generator 
for(let i=0;i&lt;5;i++){ 
    console.log(sequence.next()); 
}
</code></pre>
<h2 id="deferredexecution">Deferred Execution</h2>
<p>Since we have the same possibilities for yielding return values in Javascript as in C#, the only thing missing to be able to recreate LINQ in Javascript is extension methods. Javascript doesn&#x2019;t have extension methods, but we can do something similar.</p>
<p>What we&#x2019;d like to do is to be able to write something like this:</p>
<pre><code class="language-javascript">generateNumbers().skip(3)
                 .take(5)
                 .select(n =&gt; n * 3);
</code></pre>
<p>It turns out we can do this, although we need to clear a few hurdles.</p>
<p>To attach methods to existing objects (similar to what extension methods do in c#), we can use the prototype in Javascript. Generators however all have a different prototype, so we can&#x2019;t easily attach new methods to all generators. Therefore, what we need to do is make sure that they all share the same prototype. To do that, we can create a shared prototype and a helper function that assigns the shared prototype to the function:</p>
<pre><code class="language-javascript">function* Chainable() {
} 
function createChainable(f){ 
    f.prototype = Chainable.prototype; 
    return f; 
}
</code></pre>
<p>Now that we have a shared prototype, we can add methods to this prototype. I&#x2019;m also going to create a helper method for this:</p>
<pre><code class="language-javascript">function createFunction(f) { 
    createChainable(f); 
    Chainable.prototype[f.name] = function(...args) { 
        return f.call(this, ...args); 
    }; 
    return f; 
}
</code></pre>
<p>In the above method:</p>
<ul>
<li>It makes sure the function itself is also chainable, by calling createChainable</li>
<li>Then it attaches the method to the shared prototype (using the name of the function). The attached method receives the arguments and passes them on to the original function while supplying the correct this-context.</li>
</ul>
<p>With this in place we can now create our &#x201C;extension methods&#x201D; in Javascript:</p>
<pre><code class="language-javascript">// the base generator 
let test = createChainable(function*(){ 
    yield 1; 
    yield 2; 
    yield 3; 
    yield 4; 
    yield 5; 
}); 
// an &apos;extension&apos; method 
createFunction(function* take(count){ 
    for(let i=0;i&lt;count;i++){ 
        yield this.next().value; 
    } 
}); 
// an &apos;extension&apos; method 
createFunction(function* select(selector){ 
    for(let item of this){ 
        yield selector(item); 
    } 
}); 

// now we can iterate over this and this will log 2,4,6 
for(let item of test().take(3).select(n =&gt; n*2)){ 
    console.log(item); 
}
</code></pre>
<p>Note that in the above method, it doesn&#x2019;t matter whether we first <font face="Courier New">take</font> and then <font face="Courier New">select</font> or the other way around. Because of the deferred execution, it will only fetch 3 values and do only 3 selects.</p>
<h3 id="caveat">Caveat</h3>
<p>One problem with the above is that it doesn&#x2019;t work on standard iterables such as Arrays, Sets and Maps because they don&#x2019;t share the prototype. The workaround is to write a wrapper-method that wraps the iterable with a method that does use the shared prototype:</p>
<pre><code class="language-javascript">let wrap = createChainable(function*(iterable){ 
    for(let item of iterable){ 
        yield item; 
    } 
});
</code></pre>
<p>With the wrap function, we can now wrap any array, set or map and chain our previous functions to it:</p>
<pre><code class="language-javascript">let myMap = new Map(); 
myMap.set(&quot;1&quot;, &quot;test&quot;); 
myMap.set(&quot;2&quot;, &quot;test2&quot;); 
myMap.set(&quot;3&quot;, &quot;test3&quot;); 
for(let item of wrap(myMap).select(([key,value]) =&gt; key + &quot;--&quot; + value)
                           .take(3)){ 
   console.log(item); 
}
</code></pre>
<p>One more thing I want to add is the ability to execute a chain so that it returns an array (for c# devs: the ToList-method). This method can be added onto the prototype:</p>
<pre><code class="language-javascript">Chainable.prototype.toArray = function(){ 
    let arr = []; 
    for(let item of this){ 
        arr.push(item); 
    } 
    return arr; 
}
</code></pre>
<h2 id="conclusion">Conclusion</h2>
<p>If we implement the above, it allows us to write LINQ-style Javascript:</p>
<pre><code class="language-javascript">let myMap = new Map(); 
myMap.set(&quot;1&quot;, &quot;test&quot;); 
myMap.set(&quot;2&quot;, &quot;test2&quot;); 
myMap.set(&quot;3&quot;, &quot;test3&quot;); 
wrap(myMap).select(([key,value]) =&gt; key + &quot;--&quot; + value)
           .take(3)
           .toArray()
           .forEach(item =&gt; console.log(item));
</code></pre>
<p>Obviously, this only works in ES2015 and it&#x2019;s probably not a good idea to actually write LINQ in Javascript using this method (and besides, there are already other implementations of LinqJS), but it does demonstrate the power of Iterators and Generators in Javascript.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Yield return in C#]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The yield return statement is probably one of the most unknown features of C#. In this post I want to explain what it does and what its applications are.</p>
<p>Even if most developers have heard of yield return it&#x2019;s often misunderstood. Let&#x2019;s start with an easy</p>]]></description><link>https://www.kenneth-truyers.net/2016/05/12/yield-return-in-c/</link><guid isPermaLink="false">5ab2d765fc11f500225ab608</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Thu, 12 May 2016 13:00:26 GMT</pubDate><media:content url="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/05/yield.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/05/yield.png" alt="Yield return in C#"><p>The yield return statement is probably one of the most unknown features of C#. In this post I want to explain what it does and what its applications are.</p>
<p>Even if most developers have heard of yield return it&#x2019;s often misunderstood. Let&#x2019;s start with an easy example:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GetNumbers() 
{ 
    yield return 1; 
    yield return 2; 
    yield return 3; 
}
</code></pre>
<p>While the above has no value for anything serious, it&#x2019;s a good example to debug to see how the yield return statement works. Let&#x2019;s call this method:</p>
<pre><code class="language-csharp">foreach(var number in GetNumbers()) 
    Console.WriteLine(number);
</code></pre>
<p>When you debug this (using F11, Step Into), you will see how the current line of execution jumps between the foreach-loop and the yield return statements. What happens here is that each iteration of the <strong>foreach</strong> loop calls the iterator method until it reaches the <strong>yield return</strong> statement. Here the value is returned to the caller and the location in the iterator function is saved. Execution is restarted from that location the next time that the iterator function is called. This continues until there are no more yield returns.</p>
<p>A first use case of the yield statement is the fact that we don&#x2019;t have to create an intermediate list to hold our variables, such as in the example above. There are a few more implications though.</p>
<h2 id="yieldreturnversustraditionalloops">Yield return versus traditional loops</h2>
<p>Let&#x2019;s have a look at a different example. We&#x2019;ll start with a traditional loop which returns a list:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GenerateWithoutYield() 
{ 
    var i = 0; 
    var list = new List&lt;int&gt;(); 
    while (i&lt;5) 
        list.Add(++i); 
    return list; 
} 
foreach(var number in GenerateWithoutYield()) 
    Console.WriteLine(number);
</code></pre>
<p>These are the steps that are executed:</p>
<ol>
<li><font face="Courier New">GenerateWithoutYield</font> is called.</li>
<li>The entire method gets executed and the list is constructed.</li>
<li>The foreach-construct loops over all the values in the list.</li>
<li>The net result is that we get numbers 1 to 5 printed in the console.</li>
</ol>
<p>Now, let&#x2019;s look at an example with the <strong>yield return</strong> statement:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GenerateWithYield() 
{ 
    var i = 0; 
    while (i&lt;5) 
        yield return ++i; 
} 
foreach(var number in GenerateWithYield()) 
    Console.WriteLine(number);
</code></pre>
<p>At first sight, we might think that this is a function which returns a list of 5 numbers. However, because of the yield-statement, this is actually something completely different. This method doesn&#x2019;t in fact return a list at all. What it does is it creates a state-machine with a promise to return 5 numbers. That&#x2019;s a whole different thing than a list of 5 numbers. While often the result is the same, there are certain subtleties you need to be aware of.</p>
<p>This is what happens when we execute this code:</p>
<ol>
<li><font face="Courier New">GenerateWithYield</font> is called.</li>
<li>This returns an <font face="Courier New">IEnumerable&lt;int&gt;</font>. Remember that it&#x2019;s not returning a list, but a promise to return a sequence of numbers when asked for it (more concretely, it exposes an iterator to allow us to act on that promise).</li>
<li>Each iteration of the <strong>foreach</strong> loop calls the iterator method. When the <strong>yield return</strong> statement is reached the value is returned, and the current location in code is retained. Execution is restarted from that location the next time that the iterator function is called.</li>
<li>The end result is that you get the numbers 1 to 5 printed in the console.</li>
</ol>
<h2 id="exampleinfiniteloops">Example: infinite loops</h2>
<p>Now you might think that since both example behave exactly the same, that there&#x2019;s no difference in which one we use. Let&#x2019;s modify the example a bit to show where the differences lie. I&#x2019;m going to make two small changes:</p>
<ul>
<li>Instead of looping in the generator until we reach 5, I&#x2019;m going to loop endlessly:</li>
</ul>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GenerateWithYield() 
{ 
    var i = 0; 
    while (true) 
        yield return ++i; 
} 
IEnumerable&lt;int&gt; GenerateWithoutYield() 
{ 
    var i = 0; 
    var list = new List&lt;int&gt;(); 
    while (true) 
        list.Add(++i); 
    return list; 
}
</code></pre>
<ul>
<li>Instead of iterating directly over the list, I&#x2019;m going to take 5 items of the list:</li>
</ul>
<pre><code class="language-csharp">foreach(var number in GenerateWithoutYield().Take(5)) 
    Console.WriteLine(number); 

foreach(var number in GenerateWithYield().Take(5)) 
    Console.WriteLine(number);
</code></pre>
<p>When we do this, the difference is clear. Following the previously described steps, the method without yield never gets to print anything: it keeps looping inside the <font face="Courier New">GenerateWithoutYield</font>-method when it&#x2019;s called in the first step, until it eventually throws an OutOfMemoryException. The <font face="Courier New">GenerateWithYield</font>-method however behaves differently. Because the <font face="Courier New">Take</font>-method is itself implemented with a yield return statement, this succeeds: the generator only gets called until the <font face="Courier New">Take</font>-method is satisfied.</p>
<h2 id="examplemultipleiterations">Example: multiple iterations</h2>
<p>Another side effect of the yield return statement is that multiple invocations will result in multiple iterations. Let&#x2019;s have a look at an example:</p>
<pre><code class="language-csharp">IEnumerable&lt;Invoice&gt; GetInvoices() 
{ 
    for(var i = 1;i&lt;11;i++) 
        yield return new Invoice {Amount = i * 10}; 
} 
void DoubleAmounts(IEnumerable&lt;Invoice&gt; invoices) 
{ 
    foreach(var invoice in invoices) 
        invoice.Amount = invoice.Amount * 2; 
} 
var invoices = GetInvoices(); 
DoubleAmounts(invoices); 
Console.WriteLine(invoices.First().Amount);
</code></pre>
<p>Read through the above code sample and try to predict what will be written to the console.</p>
<p>What do you think the output is here? 20? In fact, the result is 10. Let&#x2019;s see why:</p>
<ul>
<li>When the line <font face="Courier New">var invoices = GetInvoices();</font> is executed we&#x2019;re not getting a list of invoices, we&#x2019;re getting a state-machine that can create invoices.</li>
<li>That state machine is then passed to the <font face="Courier New">DoubleAmounts</font>-method.</li>
<li>Inside the <font face="Courier New">DoubleAmounts</font>-method we use the state-machine to generate the invoices and we double the amount of each of those invoices.</li>
<li>All the invoices that were created are discarded though, as there are no references to them.</li>
<li>When we return to the main method, we still have a reference to the state-machine. By calling the <font face="Courier New">First</font>-method we again ask it to generate invoices (only one in this case). The state-machine again creates an invoice. This is a new invoice and as a result, the amount will be 10.</li>
</ul>
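<p>For comparison (not part of the original C# example), the same pitfall can be sketched with a JavaScript generator. The mechanics differ slightly, since a JavaScript generator object is single-pass and the second iteration comes from calling the generator function again, but the surprise is identical: the doubled invoices are discarded and fresh ones are created.</p>

```javascript
// A sketch of the invoice example using a JavaScript generator.
function* getInvoices() {
  for (let i = 1; i < 11; i++) {
    yield { amount: i * 10 };
  }
}

function doubleAmounts(invoices) {
  for (let invoice of invoices) {
    invoice.amount *= 2;
  }
}

// Doubles ten freshly created invoices, then discards them
doubleAmounts(getInvoices());

// A second call creates brand new invoice objects
let firstInvoice = getInvoices().next().value;
console.log(firstInvoice.amount); // 10, not 20
```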
<blockquote>
<p>Because this is non-obvious behavior, tools such as Resharper will warn you about multiple iterations.</p>
</blockquote>
<h2 id="reallifeusage">Real life usage</h2>
<p>It&#x2019;s pretty neat that we can write seemingly infinite loops and get away with it, but what can we use it for in real life? In broad terms, I&#x2019;ve found two main use cases (all other use cases I found are a variation of these two).</p>
<h3 id="customiteration">Custom iteration</h3>
<p>Let&#x2019;s say we have a list of numbers. We now want to display all the numbers larger than a specific number. In a traditional implementation that might look like this:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GetNumbersGreaterThan3(List&lt;int&gt; numbers) 
{ 
    var theNumbers = new List&lt;int&gt;(); 
    foreach(var nr in numbers) 
    { 
        if(nr &gt; 3) 
            theNumbers.Add(nr); 
    } 
    return theNumbers; 
} 
foreach(var nr in GetNumbersGreaterThan3(new List&lt;int&gt; {1,2,3,4,5})) 
    Console.WriteLine(nr);
</code></pre>
<p>While this will work, it has a disadvantage: we had to create an intermediate list to hold the items. The flow can be visualized as follows:</p>
<p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/05/standard_loop-2.jpg" alt="Yield return in C#" title="standard_loop" loading="lazy"><br>
You can see in the above image, how the first list is created, then iterated and filtered into a new list. This new list is then iterated again.</p>
<p>We can avoid this intermediate list by using yield return:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; GetNumbersGreaterThan3(List&lt;int&gt; numbers) 
{ 
    foreach(var nr in numbers) 
    { 
        if(nr &gt; 3) 
            yield return nr; 
    } 
} 
foreach(var nr in GetNumbersGreaterThan3(new List&lt;int&gt; {1,2,3,4,5})) 
    Console.WriteLine(nr);
</code></pre>
<p>Now, the execution looks very different:</p>
<p><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/05/yield_loop-2.jpg" alt="Yield return in C#" title="yield_loop" loading="lazy"><br>
In this diagram it&#x2019;s clear that we only iterate the list once. When we get to the items that are needed, control is ceded to the caller (the foreach-loop in this case)</p>
<h3 id="statefuliteration">Stateful iteration</h3>
<p>Since the method containing the yield return statement will be paused and resumed where the yield-statement takes place, it still maintains its state. Let&#x2019;s take a look at the following example:</p>
<pre><code class="language-csharp">IEnumerable&lt;int&gt; Totals(List&lt;int&gt; numbers) 
{ 
    var total = 0; 
    foreach(var number in numbers) 
    { 
        total += number; 
        yield return total; 
    } 
} 
foreach(var total in Totals(new List&lt;int&gt; {1,2,3,4,5})) 
    Console.WriteLine(total);
</code></pre>
<p>The above code will output the values 1,3,6,10,15. Because of the pause/resume behavior, the variable total will hold its value between iterations. This can be handy to do stateful calculations.</p>
<h2 id="deferredexecution">Deferred execution</h2>
<p>All of the above samples have one thing in common: they only get executed as and when necessary. It&#x2019;s the mechanism of pause/resume in the methods that makes this possible. By using deferred execution we can make some methods simpler, some faster and some even possible where they were impossible before (remember the infinite number generator).</p>
<p>The entire LINQ part of C# is built around deferred execution. Let&#x2019;s look at a sample of how deferred execution can make things more efficient:</p>
<pre><code class="language-csharp">var dollarPrices = FetchProducts().Take(10)
                                  .Select(p =&gt; p.CalculatePrice())
                                  .OrderBy(price =&gt; price)
                                  .Take(5)
                                  .Select(price =&gt; ConvertTo$(price));
</code></pre>
<p>Suppose we have 1000 products. If the above method did not have deferred execution, it would mean we would:</p>
<ul>
<li>Fetch all 1000 products</li>
<li>Calculate the price of all 1000 products</li>
<li>Order 1000 prices</li>
<li>Convert all the prices to dollars</li>
<li>Take the top 5 of those prices</li>
</ul>
<p>Because of deferred execution however, this can be reduced to:</p>
<ul>
<li>Fetch 10 products</li>
<li>Calculate the price of 10 products</li>
<li>Order 10 prices</li>
<li>Convert 5 of these prices to dollars</li>
</ul>
<p>While maybe a contrived example, it shows clearly how deferred execution can greatly increase efficiency.</p>
<blockquote>
<p>Side note: I want to make clear that deferred execution in itself does not make your code faster. Inherently, it has no effect on the speed or efficiency of your code. The value of deferred execution is that it allows you to optimize your code in a clean, readable and maintainable way. This is an important distinction.</p>
</blockquote>
<h2 id="conclusion">Conclusion</h2>
<p>The yield-keyword is often misunderstood. Its behavior can seem a bit strange at first sight. However, it&#x2019;s often the key to creating efficient code that is maintainable at the same time. Its main use cases are custom and stateful iteration which allow you to create simple yet powerful code. The yield-keyword is what&#x2019;s powering the deferred execution used in LINQ and allows us to use it in our code. I hope this article helped explaining the semantics of the yield-keyword and the effects and implications it has on calling code. Feel free to ask any questions in the comments!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Impressions as a rookie Microsoft MVP]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Last week I attended my first Open MVP day since I got the Microsoft MVP award. It was a great experience and I wanted to share what I learned and shout out to the great professionals I met there.</p>
<p>For me, it&#x2019;s an honor to be part of</p>]]></description><link>https://www.kenneth-truyers.net/2016/05/02/impressions-as-a-rookie-microsoft-mvp/</link><guid isPermaLink="false">5ab2d765fc11f500225ab607</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Mon, 02 May 2016 22:58:24 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Last week I attended my first Open MVP day since I got the Microsoft MVP award. It was a great experience and I wanted to share what I learned and shout out to the great professionals I met there.</p>
<p>For me, it&#x2019;s an honor to be part of this community. Not only does it feel good to be recognized for the work I&#x2019;ve been doing but it&#x2019;s an even greater opportunity to learn from people and build a professional network. The Open MVP day is exactly about that.</p>
<p>It was a busy week, as I just started a new contract, flew out to London and then had to rush to Rome to attend the Open MVP day. Lots of travel, lots of attention required and all in a short time span. While I was happy to be able to do all of that, I can&#x2019;t say I wasn&#x2019;t relieved to be back at home as well.</p>
<h2 id="peopleivemet">People I&#x2019;ve met</h2>
<p>Apart from the technical information I got, the most important part for me was to get to know as many people as possible and build a network of peers. It&#x2019;d be great to continue some of the conversations I&#x2019;ve had with people there and build a relationship with mutual benefits. On the night I arrived I quickly caught up with the Spanish delegation (mostly with <a href="http://soydachi.com/?ref=kenneth-truyers.net">Dachi Gogotchuri</a>, <a href="https://about.me/sergio_parra_guerra?ref=kenneth-truyers.net">Sergio Guerra</a> and <a href="http://pildorasdotnet.blogspot.com.es/?ref=kenneth-truyers.net">Asier Villanueva</a>) and we had a good chat over a beer about what we like (and don&#x2019;t like) about the technologies we work in. It was particularly interesting to see what grievances are shared and which ones are probably just my own problem :-). It was really great to see so many people with the same shared interests and a commitment to keep learning and exploring technology. If you&#x2019;re like me (a geek), you probably recognize the feeling where you have a lot of ideas and thoughts and no one to share them with (at least not in person).</p>
<p>Apart from fellow MVP&#x2019;s, we also had the chance to meet some of the technical evangelists from Microsoft. They&#x2019;re experts in building and maintaining communities. It was very interesting to hear some new ideas from <a href="https://alejandrocamposmagencio.com/?ref=kenneth-truyers.net">Alejandro Campos</a> on how to build a community and wake interest in technology. I can&#x2019;t wait to start and put these ideas in action to foster the local community and build a network of like-minded people here in Mallorca.</p>
<h2 id="thingsivelearned">Things I&#x2019;ve learned</h2>
<p>Apart from networking, there are obviously technical sessions. While I found that most sessions were rather introductory, I do want to highlight the session by <a href="https://weblogs.asp.net/ricardoperes?ref=kenneth-truyers.net">Ricardo Peres</a> on ElasticSearch. While also an introductory session, this one was particularly interesting as I just started a project that makes heavy use of ElasticSearch. I definitely learnt a lot in that session and hope to start applying that knowledge in real life soon.</p>
<h2 id="thewarningsigns">The warning signs</h2>
<p>If there&#x2019;s one thing I was a bit wary of, I&#x2019;d say it&#x2019;s the effect of the <a href="https://en.wikipedia.org/wiki/Echo_chamber_(media)?ref=kenneth-truyers.net">echo chamber</a>. This has nothing to do with the organizers or the sessions, but with the very nature of a vendor-specific event. Since all attendees are Microsoft MVP&#x2019;s, the focus naturally lies on Microsoft technology. Even though I&#x2019;m mostly Microsoft oriented, I like to venture into related technologies to compare, contrast and learn from them. This doesn&#x2019;t mean I find MS tech worse (or better) than other tech stacks; it just means that, while attending an event that focuses on a particular vendor, it&#x2019;s important not to get soaked up by it and to keep an open mind.</p>
<p>On the other hand, I&#x2019;d also like to mention that there are a lot of MVP&#x2019;s who don&#x2019;t specialize exclusively in Microsoft tech. As an example, <a href="http://nicolaiarocci.com/?ref=kenneth-truyers.net">Nicola Iarocci</a> is a Python specialist on the server side, but works with MS tech on the client side. His perspective was particularly interesting as it shows that you don&#x2019;t need to be &#x201C;devoted&#x201D; solely to MS tech to become an MVP. It shows that Microsoft&#x2019;s movement towards openness is not just hollow words.</p>
<h2 id="conclusion">Conclusion</h2>
<p>All in all, I&#x2019;d say that my first Open MVP day was a great success and I can&#x2019;t wait to attend my first MVP summit later this year. As I&#x2019;ve been told, it&#x2019;s a fantastic opportunity to meet peers from all over the world as well as get insight into &#x201C;how the sausage is made&#x201D; at Microsoft.</p>
<p>I want to thank the organizers and everybody I&#x2019;ve met for making this a great first experience and hope to see everyone at the next gathering.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Javascript sandbox pattern]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>A few years ago I wrote a post about <a href="https://www.kenneth-truyers.net/2013/04/27/javascript-namespaces-and-modules/">Javascript namespaces and modules</a>. In that post I discussed a pattern for isolating your code from outside code. I also promised to write up another pattern, the javascript sandbox pattern. I never did though. Lately I received a few emails about</p>]]></description><link>https://www.kenneth-truyers.net/2016/04/25/javascript-sandbox-pattern/</link><guid isPermaLink="false">5ab2d765fc11f500225ab606</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Mon, 25 Apr 2016 14:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A few years ago I wrote a post about <a href="https://www.kenneth-truyers.net/2013/04/27/javascript-namespaces-and-modules/">Javascript namespaces and modules</a>. In that post I discussed a pattern for isolating your code from outside code. I also promised to write up another pattern, the javascript sandbox pattern. I never did though. Lately I&#x2019;ve received a few emails about it, so I finally decided to write it up. While three years have passed since then, and a lot has happened in the Javascript world, I still think this is a valuable pattern, if only for historical purposes. If you&#x2019;re using ES6, there are probably better alternatives, but it&#x2019;s still a good way to understand the semantics of Javascript.</p>
<p>The namespace pattern described in my other post has a few drawbacks:</p>
<ul>
<li>It relies on a single global variable to be the application&#x2019;s global. That means there&#x2019;s no way to use two versions of the same application or library. Since they both need the same global name, they would overwrite each other.</li>
<li>The syntax can become a bit heavy if you have deeply nested namespaces (e.g. myapplication.services.data.dataservice)</li>
</ul>
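<p>As a hypothetical illustration of the first drawback, imagine two versions of a library that both claim the same global name. Whichever script loads last silently wins:</p>

```javascript
// Version 1 of a hypothetical library claims the global name MYAPP
var MYAPP = { version: "1.0" };

// A second script, loaded later, ships version 2.0 under the same name
// and silently overwrites the first
var MYAPP = { version: "2.0" };

// Any code written against version 1 now unknowingly talks to version 2
console.log(MYAPP.version); // "2.0"
```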
<p>In this post I want to show a different pattern: a javascript sandbox. This pattern provides an environment for modules to interact without affecting any outside code.</p>
<h2 id="sandboxconstructor">Sandbox constructor</h2>
<p>In the namespace pattern, there was one global object. In the javascript sandbox this single global is a constructor. The idea is that you create objects using this constructor to which you pass the code that lives in the isolated sandbox:</p>
<pre><code class="language-javascript">new Sandbox(function(box){ 
    // your code here 
});
</code></pre>
<p>The box object, which is supplied to the function, will have all the external functionality you need.</p>
<h2 id="addingmodules">Adding Modules</h2>
<p>In the above snippet, we saw that the sandboxed code receives a box object. This object will provide the dependencies we need. Let&#x2019;s see how this works. Since functions in Javascript are objects, we can add static properties to the Sandbox constructor. In the sample below we&#x2019;re adding a static object, modules, which contains key-value pairs: the key is the module name and the value is a factory function which returns the module.</p>
<pre><code class="language-javascript">Sandbox.modules = { 
    dom: function(){ 
        return { 
            getElement: function(){}, 
            getStyle: function(){} 
        }; 
    }, 
    ajax: function(){ 
        return { 
            post: function(){}, 
            get: function(){} 
        }; 
    } 
};
</code></pre>
<p>With this in place, let&#x2019;s now look at how we pass the modules to the sandboxed code. For that, we&#x2019;ll have a look at a first version of the Sandbox constructor:</p>
<pre><code class="language-javascript">function Sandbox(callback){ 
    var modules = []; 
    for(var i in Sandbox.modules){ 
        modules.push(i);
    } 
    for(var i = 0; i &lt; modules.length; i++){ 
        this[modules[i]] = Sandbox.modules[modules[i]](); 
    } 
    callback(this);
}
</code></pre>
<p>First we iterate over all the registered modules and push their names into an array. Next, we instantiate each module from the static modules object and assign it to the current box instance. Lastly, we pass the instance to the sandboxed code, which ensures the box has access to those modules.</p>
<h2 id="improvements">Improvements</h2>
<p>While the above is a good proof of concept, it requires some modifications to make it more versatile and safer to use.</p>
<h3 id="enforceconstructorusage">Enforce constructor usage</h3>
<p>First of all, let&#x2019;s make sure that it&#x2019;s always called as a constructor and not just with a regular function-call:</p>
<pre><code class="language-javascript">function Sandbox(callback){ 
    if(!(this instanceof Sandbox)){ 
        return new Sandbox(callback); 
    } 
}
</code></pre>
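<p>To see the guard in action, here&#x2019;s a self-contained sketch (using a hypothetical Guarded constructor rather than the full Sandbox):</p>

```javascript
function Guarded(name){
    // when called without `new`, `this` is not a Guarded instance,
    // so we re-invoke ourselves as a proper constructor call
    if(!(this instanceof Guarded)){
        return new Guarded(name);
    }
    this.name = name;
}

var a = new Guarded("a"); // regular constructor call
var b = Guarded("b");     // forgot `new`, still returns an instance

console.log(a instanceof Guarded); // true
console.log(b instanceof Guarded); // true
```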
<h3 id="allowmodulespecification">Allow module specification</h3>
<p>Next, we want to be able to define which modules we are going to use so that only those modules will be initialized and passed. We do this by accepting an array of module names and then only adding those modules to the box, instead of iterating over all the modules. That makes our constructor a bit simpler:</p>
<pre><code class="language-javascript">function Sandbox(modules, callback){ 
    if(!(this instanceof Sandbox)){ 
        return new Sandbox(modules, callback); 
    } 
    for(var i = 0; i &lt; modules.length; i++){ 
        this[modules[i]] = Sandbox.modules[modules[i]](); 
    } 
    callback(this); 
}
</code></pre>
<h3 id="optionalarguments">Optional arguments</h3>
<p>We want to make the modules argument optional. If it&#x2019;s not provided, we will use all the modules. We also want to add the ability to pass in the modules one by one as strings, instead of in an array. For that we need to do a bit of argument parsing and again iterate over all the modules:</p>
<pre><code class="language-javascript">function Sandbox(){ 
    // transform arguments into an array 
    var args = Array.prototype.slice.call(arguments); 
    // the last argument is the callback 
    var callback = args.pop(); 
    // modules is either an array or individual parameters 
    var modules = (args[0] &amp;&amp; typeof args[0] === &quot;string&quot;) ? args : args[0]; 
    if(!modules){ 
        modules = []; 
        for(var i in Sandbox.modules){ 
            modules.push(i); 
        } 
    } 
 }
</code></pre>
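<p>To make the three calling conventions concrete, the argument handling can be pulled out into a stand-alone helper (the name parseArgs is hypothetical, not part of the pattern itself):</p>

```javascript
// Hypothetical helper isolating the argument parsing shown above
function parseArgs(allModuleNames, rawArgs){
    var args = Array.prototype.slice.call(rawArgs);
    // the last argument is always the callback
    var callback = args.pop();
    // modules is either an array, individual strings, or absent
    var modules = (args[0] && typeof args[0] === "string") ? args : args[0];
    if(!modules){
        modules = allModuleNames.slice(); // default: use all registered modules
    }
    return { modules: modules, callback: callback };
}

function demo(){ return parseArgs(["dom", "ajax"], arguments); }

demo(function(){}).modules;           // → ["dom", "ajax"] (nothing given: use all)
demo("dom", function(){}).modules;    // → ["dom"] (individual strings)
demo(["ajax"], function(){}).modules; // → ["ajax"] (explicit array)
```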
<h3 id="commoninstanceproperties">Common instance properties</h3>
<p>Since we&#x2019;re passing in the instance of the box to the sandboxed code, we can add some predefined properties to each instance so that all sandboxed code has access to these:</p>
<pre><code class="language-javascript">function Sandbox(){ 
    // ... 
    this.sandboxVersion = &quot;1.0.1&quot;; 
    callback(this); 
}
</code></pre>
<h3 id="argumentsdestructuring">Arguments destructuring</h3>
<p>Currently, the client code has to access the modules through the box instance. It would be nicer if we could pass the modules in as separate arguments, which makes the dependencies even more explicit. To do so, instead of calling the callback directly, we can use apply to execute it. Also, instead of initializing the modules as properties on the sandbox, we save them in an array:</p>
<pre><code class="language-javascript">function Sandbox(){ 
    // ... 
    var moduleInstances = modules.map(function(m){ 
        return Sandbox.modules[m](); 
    }); 
    callback.apply(this, moduleInstances); 
}
</code></pre>
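<p>The mechanics of apply can be seen in isolation with a minimal sketch (the registry and run function below are illustrative stand-ins, not part of the final Sandbox):</p>

```javascript
// A tiny stand-in for Sandbox.modules
var registry = {
    dom: function(){ return { getElement: function(){ return "element"; } }; },
    ajax: function(){ return { get: function(){ return 200; } }; }
};

function run(moduleNames, callback){
    // initialize each requested module...
    var instances = moduleNames.map(function(m){ return registry[m](); });
    // ...then spread the instances over the callback's parameter list
    callback.apply(null, instances);
}

run(["dom", "ajax"], function(dom, ajax){
    console.log(dom.getElement()); // "element"
    console.log(ajax.get());       // 200
});
```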
<h2 id="thecompletejavascriptsandbox">The complete Javascript sandbox</h2>
<p>When we put everything together, our constructor looks like this:</p>
<pre><code class="language-javascript">function Sandbox(){ 
    // parse the arguments 
    var args = Array.prototype.slice.call(arguments), 
        callback = args.pop(), 
        modules = (args[0] &amp;&amp; typeof args[0] === &quot;string&quot;) ? args : args[0]; 
        
    // ensure constructor call before touching `this` 
    if (!(this instanceof Sandbox)){ 
        return new Sandbox(modules, callback); 
    } 
    // add properties for all sandboxes 
    this.sandboxVersion = &quot;1.0.2&quot;; 
    // add all modules if no modules were passed 
    if(!modules){ 
        modules = []; 
        for(var i in Sandbox.modules){ 
            modules.push(i); 
        } 
    } 
    // initialize and add all modules to the sandbox 
    var moduleInstances = modules.map(function(m){ 
        return Sandbox.modules[m](); 
    }); 
    // execute the code 
    callback.apply(this, moduleInstances); 
} 
Sandbox.modules = { 
    dom: function(){ 
        return { 
            getElement: function(){}, 
            getStyle: function(){} 
        }; 
    }, 
    ajax: function(){ 
        return { 
            get: function(){}, 
            post: function(){} 
        }; 
    } 
};
</code></pre>
<p>With the sandbox in place, here are a few example usages:</p>
<pre><code class="language-javascript">new Sandbox(&apos;dom&apos;, function(dom){ 
    console.log(this.sandboxVersion); 
    var element = dom.getElement(); 
}); 

new Sandbox(function(dom, ajax){ 
    console.log(this.sandboxVersion); 
    var element = dom.getElement(); 
    ajax.post(); 
});

new Sandbox([&apos;ajax&apos;, &apos;dom&apos;], function(ajax, dom){
    // ...
});
</code></pre>
<h2 id="conclusion">Conclusion</h2>
<p>The Javascript sandbox lets you isolate code from outside factors. It also allows you to explicitly define dependencies, which reduces coupling and makes the code easier to test. While there are other patterns to do this, and certain frameworks have this built in, it could be a good pattern to use if you&#x2019;re still working with ES5 and no frameworks.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Technical debt: managing code quality]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Technical debt is usually seen as a negative factor in a development process. While having too much technical debt is indeed a good indicator for a project gone bad, technical debt is not always a bad thing.</p>
<h2 id="whatistechnicaldebt">What is technical debt?</h2>
<p>When you start writing code you usually have a</p>]]></description><link>https://www.kenneth-truyers.net/2016/04/13/technical-debt-managing-code-quality/</link><guid isPermaLink="false">5ab2d765fc11f500225ab605</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Wed, 13 Apr 2016 11:06:57 GMT</pubDate><media:content url="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/technical_debt.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/technical_debt.jpg" alt="Technical debt: managing code quality"><p>Technical debt is usually seen as a negative factor in a development process. While having too much technical debt is indeed a good indicator for a project gone bad, technical debt is not always a bad thing.</p>
<h2 id="whatistechnicaldebt">What is technical debt?</h2>
<p>When you start writing code you usually have a choice: either do it quick and messy or do it, as we developers tend to call it, &#x201C;the right way&#x201D;. The quick way is obviously better for the business as it delivers value earlier.</p>
<p>From a business point of view, no one really cares in what state the code is in. If it works, the business is happy. So then, considering that a healthy code base takes more time and is thus more expensive, why should the business have to pay more for a healthy code base, when it doesn&#x2019;t really concern them?</p>
<p>Although a bad or untested code base may deliver business value, if it&#x2019;s left uncontrolled, it&#x2019;s not necessarily good in the long run. An unhealthy code base is hard to maintain and will develop stability issues. This will affect future efforts to add more business value.</p>
<p>In layman&#x2019;s terms, technical debt is the amount of mess left behind in a code base by quick fixes.</p>
<h2 id="benefitsoftechnicaldebt">Benefits of technical debt</h2>
<p>From the previous description, it might seem obvious that technical debt is a bad thing and should be avoided at all costs. That&#x2019;s not entirely true though. Sometimes the cost of accruing technical debt is less than the cost of having to release later. Debt in the financial world has obvious advantages, and if managed correctly it can be a tool for getting ahead.</p>
<p>Taking out a mortgage on a house, when you have worked out how to pay it back, can be a good investment. Waiting until you have saved the entire amount would be impossible for most people, so there&#x2019;s an opportunity cost. On the other hand, using a credit card like it&#x2019;s a free-for-all pass and buying anything you desire is probably not a good way of managing your financial situation.</p>
<p>In software development, the same can be said. When you need to push out that release before Christmas, it&#x2019;s possibly a good idea to implement something quickly so it&#x2019;s ready while the opportunity is biggest. On the other hand, pushing out all features as soon as possible without regard to code quality, architecture and tests will sooner or later result in a slower process and a lower-quality application.</p>
<h2 id="managingtechnicaldebt">Managing technical debt</h2>
<p>As with financial debt, technical debt can be beneficial. And just as with financial debt, the key to balancing costs and advantages is having a good strategy for controlling it. Here are a few approaches that have helped me keep technical debt under control.</p>
<h3 id="defaulttoavoidingtechnicaldebt">Default to avoiding technical debt</h3>
<p>The default way of writing code in your organization should be to write well-factored, flexible code with decent test coverage. Acquiring debt is not a decision that should be taken lightly; when it&#x2019;s taken, it should be taken deliberately, not by accident because someone didn&#x2019;t feel like writing good code today.</p>
<h3 id="communicatetheconsequences">Communicate the consequences</h3>
<p>There&#x2019;s a typical conversation between a developer and a manager. If you are a developer, I&#x2019;m sure you have heard it as well. It goes something like this:</p>
<pre><code>*Manager*: I need feature X. How much time do you think it will take you?  
*Developer*: 1 week  
*Manager*: Hmm, it needs to go online in two days though, as big event X is in two days  
*Developer*: OK, I&#x2019;ll see what I can do  
*Manager*: OK  
*Developer*: OK
</code></pre>
<p>Familiar? I thought so. This happens all the time and the problem here is not bad standards or bad employees. The problem is communication. Here is the same conversation, but with the thoughts of both in brackets:</p>
<pre><code>*Manager*: I need feature X. How much time do you think it will take you?  
*Developer*: 1 week (*1 day of thinking, 2 days coding and testing, 1 refactoring, 1 days extra testing*)  
*Manager*: Hmm, it needs to go online in two days though, as big event X is in two days  
*Developer*: OK, I&#x2019;ll see what I can do (*I&#x2019;ll cut down on the thinking and testing*)  
*Manager*: OK (*I&#x2019;m a great manager, I just managed to get something done in 2 days which normally takes a week*)  
*Developer*: OK (*ugh, always the same, we just can&#x2019;t write decent code here*)
</code></pre>
<p>It&#x2019;s important to realize that various factors are at play here:</p>
<ul>
<li>The manager wants feature X in two days, not because he&#x2019;s a tyrant, but for a good reason: it earns more money if implemented earlier.</li>
<li>The manager walks away with the idea that it can be done in two days. If this happens often, he&#x2019;ll conclude that pushing developers works; after all, in two days he&#x2019;ll see that it does indeed work. He doesn&#x2019;t know what happens in the code base, so it&#x2019;s normal that the next time around he&#x2019;ll try to negotiate the estimate.</li>
<li>The developer wants to satisfy the needs of the business, but feels incapable of doing so in both the short and the long term</li>
</ul>
<p>As a developer we have an obligation to communicate better to the business. Here&#x2019;s my improved version of this conversation:</p>
<pre><code>*Manager*: I need feature X. How much time do you think it will take you?  
*Developer*: 1 week  
*Manager*: Hmm, it needs to go online in two days though, as big event X is in two days  
*Developer*: It&#x2019;s impossible to do this feature well in 2 days, it needs a week to be done properly.  
*Manager*: OK, but we don&#x2019;t have a week. If it takes a week, there&#x2019;s no point as we won&#x2019;t earn as much money from it.  
*Developer*: What I can do is take a shortcut and do it quickly. That would require me to rearrange some things and I need to go back later to fix it.   
*Manager*: OK, that sounds reasonable (*great, I&#x2019;m going to get it done in time*)  
*Developer*: OK (*great, I will make sure the feature gets implemented soon and then I&#x2019;ll need to go back to make sure nothing gets left behind that can cause trouble in the future*)
</code></pre>
<p>In this conversation, a middle ground is found. The feature will be implemented, and the technical debt is accounted for and managed. The consequences are well understood by the business and can be dealt with accordingly.</p>
<h3 id="track">Track</h3>
<p>Just as with financial debt, you want to know what debt you have and how long it would take you to get rid of it (even if you&#x2019;re never going to get rid of all of it). In the above story, it would be wise to create a task for cleaning up the code base and writing tests after the feature was implemented. This way, it&#x2019;s visible to everyone that technical debt was acquired.</p>
<p>The way you track technical debt depends on your process. In an agile process, we&#x2019;ve created a dedicated story type for this: apart from user stories, tasks and bugs, we&#x2019;d have another story type called &#x201C;technical debt&#x201D;. This allows us to see at a glance how much technical debt we have and whether it&#x2019;s becoming a problem.</p>
<p>Apart from tracking technical debt as and when you create it, it&#x2019;s sometimes also necessary to track technical debt that you spot. This could be either legacy code or it could be some big refactoring that you and the team feel is necessary for the code base to be flexible towards future developments.</p>
<p>Depending on your situation, you could dedicate a certain percentage of the time to working on technical debt stories. Make sure the business is aware of this and approves.</p>
<h3 id="repayyourdebt">Repay your debt</h3>
<p>Communication and tracking don&#x2019;t serve any purpose if you don&#x2019;t repay your debt. You have to make sure that technical debt stories are dealt with on a regular basis. When deciding which stories to tackle you need to factor in a few properties:</p>
<ul>
<li>Age: Just as with financial debt, technical debt comes with an interest. The longer you leave code in a bad state, the bigger the impact it will have: it&#x2019;ll create more bugs and people will forget why and how something was implemented (remember, it&#x2019;s bad code, so it&#x2019;s probably obscure by nature)</li>
<li>Impact: There&#x2019;s a difference between a class that has some formatting issues and an entire subsystem that doesn&#x2019;t have any tests. Tackle those issues that have the biggest impact first. (to continue the analogy: pay off the credit card with the 20% interest rate before you pay off the one with the 2% interest rate)</li>
</ul>
<p>Apart from the need to repay your debt, there are also different ways you can choose to repay it:</p>
<ul>
<li>Repay it completely: replace the code or refactor it to a good solution</li>
<li>Partially repay it: Instead of implementing a good solution, implement a different solution that carries less interest</li>
<li>Don&#x2019;t repay it at all: just deal with the &#x201C;interest&#x201D;. This can be a good option if the cost is minimal, the code is hardly ever changed and replacing it would be very costly</li>
</ul>
<h3 id="dealingwithlegacycode">Dealing with legacy code</h3>
<p>Considering that we default to writing good code and all technical debt is communicated and dealt with appropriately, a normal project should never have so much technical debt that it impacts the business. However, there are situations where we don&#x2019;t have these values from the beginning of the project: legacy projects.</p>
<p>Dealing with legacy projects is a totally different ball game. Often it&#x2019;s difficult to identify the good code (or worse, there is no good code). Furthermore, it&#x2019;s difficult to identify which parts of the code are causing most problems and how to solve them (aka: it&#x2019;s difficult to estimate the interest).</p>
<p>To deal with this situation it&#x2019;s best to create a metaphorical fork in the road, from which point you default to writing good code. All legacy code should be isolated as much as possible. Once legacy code is isolated, you can then decide that all modifications to that legacy should leave the code in a better state. A good book to read on this topic is <a href="http://www.amazon.com/gp/product/0131177052/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=0131177052&amp;linkCode=as2&amp;tag=kennethtruyer-20&amp;linkId=EPN6346OL6664AY3&amp;ref=kenneth-truyers.net">Working Effectively with Legacy Code</a> by Michael Feathers.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Technical debt is inevitable in software projects. Instead of trying to avoid it, we should try to manage it as effectively as possible. When managed correctly, technical debt can be a powerful tool to help your business grow faster without impacting the long term goals.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Code Reviews: why and how?]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review-2.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review_thumb-2.jpg" alt="code_review" title="code_review" loading="lazy"></a><br>
Of all the practices implemented to improve code quality, such as unit testing, continuous integration, continuous deployment, daily stand-ups, I find the most important one is doing proper code reviews.</p>
<p>Code reviews have a lot of advantages:</p>
<ul>
<li>It&#x2019;s much easier to spot problems with other people&#x2019;s</li></ul>]]></description><link>https://www.kenneth-truyers.net/2016/04/08/code-reviews-why-and-how/</link><guid isPermaLink="false">5ab2d765fc11f500225ab604</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Fri, 08 Apr 2016 00:07:51 GMT</pubDate><media:content url="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review-2.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review-2.jpg" alt="Code Reviews: why and how?"><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review-2.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/code_review_thumb-2.jpg" alt="Code Reviews: why and how?" title="code_review" loading="lazy"></a><br>
Of all the practices implemented to improve code quality, such as unit testing, continuous integration, continuous deployment, daily stand-ups, I find the most important one is doing proper code reviews.</p>
<p>Code reviews have a lot of advantages:</p>
<ul>
<li>It&#x2019;s much easier to spot problems with other people&#x2019;s code than with your own</li>
<li>Knowledge of the codebase is spread through the team, thereby avoiding the pitfall where certain developers have sole knowledge of a subsystem</li>
<li>General knowledge of the team is improved as new methods or practices are visible to everyone</li>
<li>Adherence to code standards is enforced</li>
</ul>
<p>It can be a bit awkward in the beginning, but once you have found a good flow, you&#x2019;ll notice a big change in how software is developed, not only in terms of quality but also in terms of the morale and cohesion of the development team.</p>
<h2 id="rulesforreviewingcode">Rules for reviewing code</h2>
<p>For code reviews to work to your benefit, a certain set of rules needs to be followed. It&#x2019;s easy to just start using some software, but if you aren&#x2019;t reviewing code consistently, it won&#x2019;t bring you all the benefits listed above. Here are a few of the rules I&#x2019;ve found to positively influence both the gains from code reviews and satisfaction among developers.</p>
<h3 id="smallcommitsgoodcommitmessages">Small commits, good commit messages</h3>
<p>Having a lot of fine-grained commits with decent commit messages makes it much easier to review code, as everything is explained step by step. It also makes it easier for the reviewer to see the thinking behind the implementation.</p>
<p>As an added benefit, git blame will serve as your living documentation for all your code.</p>
<h3 id="reviewthecodenotthedeveloper">Review the code, not the developer</h3>
<p>Since you&#x2019;re reviewing someone&#x2019;s work, it might be tempting to critique that person. However, if this culture persists, you&#x2019;ll more often than not get developers that aren&#x2019;t happy anymore or who will start hiding their not-so-nice code.</p>
<p>This doesn&#x2019;t mean you should let code pass that doesn&#x2019;t live up to the standard, it just means you should critique the code instead of the person. This can be a subtle difference. Instead of saying &#x201C;You didn&#x2019;t follow the standard here&#x201D;, which can sound accusatory (intentional or not), say &#x201C;this code should be formatted differently&#x201D;. Note the difference in tone.</p>
<p>This responsibility also falls on the developer. As a developer you have to get into the mindset that you are not your code. Even if a comment might sound accusatory, don&#x2019;t take it personally and remember that it&#x2019;s a comment on the code, not on you or your behavior.</p>
<h3 id="multiplereviewers">Multiple reviewers</h3>
<p>Always assign more than one person to review the code. That way, any disagreement can be resolved by a majority vote. If you assign only one reviewer, you might end up in an endless discussion between the developer and the reviewer.</p>
<h3 id="haveachecklist">Have a checklist</h3>
<p>Having a list of things to look out for, makes it easier to conduct code reviews systematically. Here&#x2019;s my personal list of things to check for (in order of priority):</p>
<ul>
<li>Function: does it do what it needs to do? (A good spec goes a long way here)</li>
<li>Bugs</li>
<li>Test coverage</li>
<li>Adherence to architecture and patterns</li>
<li>Duplicate code</li>
<li>Code style and formatting</li>
</ul>
<p>Having a checklist also makes it easier for the developer to run through it before submitting the code for review.</p>
<h3 id="whentoreview">When to review</h3>
<p>This depends a bit on the confidence of the team and how well they work together.</p>
<p>In teams with low to medium confidence, I would opt for a feature-branch strategy where code is reviewed first and only then integrated into the main line. In this case, you have to make sure that code is reviewed as soon as possible, since you don&#x2019;t want any branches to live for a long time only to see that your properly reviewed code can&#x2019;t be integrated anymore without merge conflicts.</p>
<p>In teams with a high confidence level, I would opt to integrate directly into the main line and do reviews after the fact. There are several reasons this only works in high-confidence teams:</p>
<ul>
<li>It might open you up to code reviews that are never completed</li>
<li>A developer can ignore the code review</li>
<li>Bad code can be committed</li>
</ul>
<h3 id="reviewprocess">Review process</h3>
<p>Whether you sit down together in front of a screen or use dedicated software, make sure reviewing code is as accessible as possible. The last thing you want is for developers to come to see it as a chore. A fast code review process will yield more code reviews and better results. I had very good experiences with <a href="https://www.fogcreek.com/kiln/features/code-reviews/?ref=kenneth-truyers.net">Kiln</a>. It does more than just code reviews, but I particularly liked its interface (YMMV).</p>
<blockquote>
<p>Quick tip: Review only tests<br>
If you have limited time and have good test coverage (or it&#x2019;s enforced by your build process), you can choose to only review the tests. The reasoning is that if the tests are good, the implementation will be good as well. By reviewing the tests you will see what the public API looks like, so you&#x2019;ll get a good feel about the code. Use this tip sparingly though, a full review is still useful.</p>
</blockquote>
<h2 id="pitfalls">Pitfalls</h2>
<p>Apart from following the above rules, there are a few pitfalls and anti-patterns to look out for.</p>
<h3 id="gamingthesystem">Gaming the system</h3>
<p>If you don&#x2019;t have complete buy-in from the whole team, this issue might come up. It happens when two or more devs on the team decide to approve each other&#x2019;s code without properly reviewing it. They&#x2019;re technically following the rules, but in practice the code is not reviewed at all. A possible remedy is to supplement the two-or-more-reviewers rule with a requirement to rotate reviewers, so the same pair can&#x2019;t keep approving each other&#x2019;s code.</p>
<h3 id="reviewgate">Review gate</h3>
<p>This problem can manifest itself in a few ways:</p>
<ul>
<li>One dev never approves of any review so code is unnecessarily blocked in the review. If this happens frequently, find out what the underlying issue is.</li>
<li>All code needs to pass by one developer before it hits main. While this can sometimes be good for a short while, it&#x2019;s never a good plan long-term. If it&#x2019;s really necessary, then at least that role should be rotated regularly. Otherwise developers might stop caring about their code at all.</li>
<li>Complete lockdown: this happens when developers lose permission to touch the main line at all. If you really have big code quality issues it can be a useful temporary measure, but otherwise this is the fastest way to sink your team&#x2019;s morale.</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Code reviews are, in my experience, the most valuable tool for improving software quality, distributing knowledge and enforcing a common coding standard.</p>
<p>If you&#x2019;re just starting out with code reviews, keep in mind that the process needs perfecting. Don&#x2019;t worry if it doesn&#x2019;t immediately create the huge change you expected. If you improve the process bit by bit and gain experience, you&#x2019;ll soon see the benefits.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Build 2016 announcements]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build_thumb.jpg" alt="build" title="build" loading="lazy"></a><br>
Build 2016 is finished and as always it was great to see Microsoft bringing new opportunities to businesses and developers. Unfortunately I wasn&#x2019;t able to attend, but luckily, the live stream of all the important sessions, especially for the keynotes, made up for that. These are the announcements</p>]]></description><link>https://www.kenneth-truyers.net/2016/04/02/build-2016-announcements/</link><guid isPermaLink="false">5ab2d765fc11f500225ab603</guid><dc:creator><![CDATA[Kenneth Truyers]]></dc:creator><pubDate>Sat, 02 Apr 2016 19:15:51 GMT</pubDate><media:content url="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build.jpg" alt="Build 2016 announcements"><p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/build_thumb.jpg" alt="Build 2016 announcements" title="build" loading="lazy"></a><br>
Build 2016 is finished and as always it was great to see Microsoft bringing new opportunities to businesses and developers. Unfortunately I wasn&#x2019;t able to attend, but luckily, the live stream of all the important sessions, especially for the keynotes, made up for that. These are the announcements that excited me the most.</p>
<h2 id="microsoftbotframework">Microsoft Bot Framework</h2>
<p>Completely unexpected but a very cool way of building new applications and solutions. The idea behind the bot framework is to use conversations as an application framework. The big challenge for developers is to make the interaction with bots as natural as possible. To that end, Microsoft is offering a framework plus a set of intelligence services such as speech and text recognition and a wide variety of other cognitive services. This should enable developers to build clever bots that can automate the things we do on websites at the moment. I&#x2019;m not sure it will replace websites anytime soon as Microsoft claims, but it definitely has some benefits over traditional applications. User interface design becomes kind of obsolete, and if you think about it, we have been using language to communicate our intentions forever, so if we can properly crack that, we could see very interesting applications.</p>
<p>Obviously Microsoft wouldn&#x2019;t be Microsoft if they didn&#x2019;t connect their existing services to this new framework. Skype and Cortana will be tied in and soon you&#x2019;ll see new integrations pop up in these tools.</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/bot_framework.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/bot_framework_thumb.jpg" alt="Build 2016 announcements" title="bot_framework" loading="lazy"></a></p>
<h2 id="bashonwindows">Bash on Windows</h2>
<p>A few years ago, Microsoft releasing a Linux distro or working with Linux and open source in general would have been a good April Fools&#x2019; joke. The news is the same, only this time it&#x2019;s for real. Through an integration with native Ubuntu binaries, Windows developers can now use long-familiar bash tools such as grep, awk, sed, &#x2026; This definitely opens up a lot of possibilities, not least making it easier to follow online tutorials.</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/bash_on_windows.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/bash_on_windows_thumb.jpg" alt="Build 2016 announcements" title="bash_on_windows" loading="lazy"></a></p>
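<p>As a quick taste of what this enables, here are a few classic one-liners with those tools. This is just an illustrative sketch; the log file and its contents are made-up examples:</p>

```shell
# Create a tiny sample log file (hypothetical data for illustration)
printf 'INFO start\nERROR disk full\nINFO done\nERROR timeout\n' > app.log

grep -c 'ERROR' app.log                    # count the error lines
sed -n 's/^ERROR //p' app.log              # print just the error messages
awk '{print $1}' app.log | sort | uniq -c  # tally lines per log level
```

These are exactly the kinds of commands that previously required Cygwin or a VM, and now run natively.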
<h2 id="hololens">HoloLens</h2>
<p>Already announced earlier this year, but now it&#x2019;s for real: the HoloLens dev-kit is now going out to developers. We&#x2019;ve seen some impressive demos from Microsoft so far, but now it will be interesting to see what the rest of the world can do with it. This is the first real test for HoloLens. If it really is as impressive as the demos we saw from Microsoft, we&#x2019;re in for some mind-blowing applications in the next couple of months. Furthermore, because of a few design changes to the actual headset, users who tried it out reported a better field of view, which was one of the points of critique up until now.</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/hololens.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/hololens_thumb.jpg" alt="Build 2016 announcements" title="hololens" loading="lazy"></a></p>
<h2 id="azureisgettingbigger">Azure is getting bigger</h2>
<p>Azure already consists of a huge set of services that make the life of developers easier. At Build 2016, Microsoft added a bunch of new services to grow Azure even more. These were all announced:</p>
<ul>
<li>Azure IoT Starter Kits are now available for purchase from partners</li>
<li>Azure IoT Hub device management and Gateway SDK will be available later in Q2</li>
<li>A new service, Azure Functions, is now in preview</li>
<li>DocumentDB now supports the MongoDB protocol</li>
<li>Azure Developer Tools</li>
<li>Microsoft Cognitive Services is in preview</li>
</ul>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/azurebuild2016.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/azurebuild2016_thumb.jpg" alt="Build 2016 announcements" title="azure build 2016" loading="lazy"></a></p>
<h2 id="xamarin">Xamarin</h2>
<p>Probably the most awaited announcement. As everyone was hoping, Xamarin will now come bundled with Visual Studio. That&#x2019;s great news for developers who were using the paid version before, as it was quite expensive. Not only does it come with the paid versions of Visual Studio but also with the free Community edition. To top it off, they also announced open sourcing the Xamarin core SDK. These announcements were certainly above expectations. While everyone was hoping for the Visual Studio bundling, few dared to hope for inclusion in the free product, and fewer still for having it available as open source.</p>
<p><a href="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/xamarin.jpg?ref=kenneth-truyers.net"><img src="https://storage.ghost.io/c/1b/0b/1b0be152-6721-447e-a11d-e3b2eeb8cff0/content/images/2016/04/xamarin_thumb.jpg" alt="Build 2016 announcements" title="xamarin" loading="lazy"></a></p>
<h2 id="desktopappconverter">Desktop App Converter</h2>
<p>While one of the big disappointments of the last months was the discontinuation of the Android porting project, project Astoria, Microsoft did now release another porting tool, this time to convert Win32 applications to UWP. Any app based on Win32 or .NET can be converted to the AppX format. Furthermore, work is still continuing on project Islandwood, the porting tool for iOS apps. Let&#x2019;s hope these converters can make a dent in the app gap.</p>
<p>What are you planning to do with these new services?</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>