<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>PJSen Blog</title>
	<atom:link href="https://blog.pjsen.eu/?feed=rss2" rel="self" type="application/rss+xml" />
	<link>https://blog.pjsen.eu</link>
	<description>Documents nonobvious observations from the area of software development</description>
	<lastBuildDate>Thu, 17 Mar 2022 12:03:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>The Boy Scout rule and little things that matter</title>
		<link>https://blog.pjsen.eu/?p=491</link>
					<comments>https://blog.pjsen.eu/?p=491#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 17 Mar 2022 10:33:24 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=491</guid>

					<description><![CDATA[There is a rule called The Boy Scout Rule. In essence, it says that whenever you attempt to modify code, you should also try to improve something in the existing code so that you leave it a little bit better than it was before. By following the rule we can gradually and seamlessly get rid]]></description>
										<content:encoded><![CDATA[<p>
There is a rule called <a href="https://www.oreilly.com/library/view/97-things-every/9780596809515/ch08.html">The Boy Scout Rule</a>. In essence, it says that whenever you attempt to modify code, you should also try to improve something in the existing code so that you leave it a little bit better than it was before. By following the rule we can gradually and seamlessly get rid of technical debt and prevent deterioration in software systems.
</p>
<p>
The rule can be addressed at the organizational level in an interesting way. I have come across the idea of a project manager who was responsible for multiple teams dealing with significant amounts of legacy code. They introduced a kind of gamification into the development process. The teams were supposed to register as many improvements in code as they could, and the team with the biggest number won the game. The prize was some extra budget to spend on a team party. Such an idea may not be applicable in all organizations, but it clearly shows how to methodically approach the problem of technical debt at the management level.
</p>
<p>
Although I do not immediately recommend the idea of gamification, <strong>I certainly recommend creating a static ticket (not assigned to any sprint) for all the improvements and encouraging developers to make even the smallest refactoring commits under that ticket during their normal development tasks</strong>. Below I would like to show some basic indicators that, in my opinion, qualify code for being improved as soon as they are discovered.
</p>
<ol>
<li> Improper naming causing an API ambiguity
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-16-cs1.cs"></script> </p>
<p>I see a few problems here. When I first saw client code calling <code>GetValue</code>, I thought it returned some custom, domain-specific type. I needed to search for a method returning <code>string</code> and I skipped <code>GetValue</code>, because it did not look like it returned a <code>string</code>. Only later did I realize it actually <strong>does</strong> return a <code>string</code>. If it returns a string, it should be named appropriately. </p>
<p> </p>
<p>A more general observation here is that we have three ways of converting the type into a <code>string</code>. In my particular case there were 10 usages of <code>GetValue</code>, 45 usages of the operator and 0 usages of <code>ToString</code> in the codebase. When talking to the maintainers, I was told there was <i>a convention</i> not to use the <code>ToString</code> method. That situation clearly shows some adjustments are needed both at the level of code quality and at the level of the development process. I have nothing against operator overloading; however, it is not very frequently used in business code. Code readability is a top priority in such cases, and being as explicit as possible is in fact beneficial from the perspective of long-term maintenance. </p>
<p> </p>
<p>The unused method should obviously be removed, and the one returning a <code>string</code> should be named <code>ToString</code>. I would keep the overloaded operators, because why not, but I am still a little hesitant about using them in new code. Operator overloading is a cool language feature when you write code, but it appears not so cool when you have to read it. Even here, I would consider sacrificing the aesthetics of the operator in favor of the simpler <code>ToString</code>.</p>
</li>
<li> Misused pattern causing an API ambiguity
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-16-cs2.cs"></script></p>
<p>This one is very similar to the previous one, as it boils down to the fact that we can instantiate an object in two ways. When I was introducing some modifications to the code, I first had to answer the question: should I use the constructor or the <code>Create</code> method? Of course, it turns out there is a slight difference, because <code>Create</code> returns a result object, which is a common way to model logic in a functional style. But still, at the level of the API surface we do not see the difference clearly enough.</p>
<p>
The gist of that case is that there is a pattern in tactical Domain-Driven Design (I mean, at the level of the actual source code) to <strong>use private constructors and provide static factory methods</strong>. Its primary purpose is to prevent default construction of an object that would leave it in a default state that is not meaningful from the business point of view. Also, factory methods can have more expressive names to indicate the specific extra tasks they perform.</p>
<p>The constructor should be made <code>private</code> and the factory method can be named <code>CreateAsResult</code>, if the wrapper type is prevalent in the code base.</p>
</li>
</ol>
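<p>To make the first point concrete, here is a minimal sketch of what the cleaned-up API might look like. The type name <code>AccountNumber</code> and its members are purely illustrative, not the actual code from the codebase discussed above:</p>

```csharp
using System;

// Illustrative value-like type: one conventional conversion to string
// (ToString), plus the implicit operator kept for existing call sites.
public sealed class AccountNumber
{
    private readonly string _value;

    public AccountNumber(string value) => _value = value;

    // The single, idiomatic way to obtain the string representation;
    // readers do not have to guess what a GetValue-style method returns.
    public override string ToString() => _value;

    // Kept only for backward compatibility with existing operator usages.
    public static implicit operator string(AccountNumber number) => number._value;
}
```

<p>New code can then prefer the explicit <code>ToString</code> call, while the many existing operator call sites keep compiling unchanged.</p>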
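<p>The private-constructor-plus-factory pattern from the second point can be sketched as follows. The <code>Result</code> wrapper and the <code>Order</code> type are assumptions for the sake of illustration; in a real codebase the prevalent result type would be used instead:</p>

```csharp
using System;

// Illustrative result wrapper, standing in for whatever Result type
// is prevalent in the codebase.
public sealed class Result<T>
{
    public bool IsSuccess { get; }
    public T Value { get; }
    public string Error { get; }

    private Result(bool isSuccess, T value, string error)
    {
        IsSuccess = isSuccess;
        Value = value;
        Error = error;
    }

    public static Result<T> Success(T value) => new Result<T>(true, value, null);
    public static Result<T> Failure(string error) => new Result<T>(false, default, error);
}

public sealed class Order
{
    public decimal Amount { get; }

    // Private constructor prevents default construction in a state that
    // is meaningless from the business point of view.
    private Order(decimal amount) => Amount = amount;

    // The factory name signals the wrapper type, as suggested in the post.
    public static Result<Order> CreateAsResult(decimal amount) =>
        amount > 0
            ? Result<Order>.Success(new Order(amount))
            : Result<Order>.Failure("Amount must be positive");
}
```

<p>With the constructor private, <code>CreateAsResult</code> becomes the only entry point, so the API ambiguity disappears.</p>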
<p>
The ideas behind such improvements can actually be very simple. Some of them have to do with trivial, but extremely relevant, conclusions about software engineering. For example:
</p>
<ul>
<li>any piece of code that slows down a programmer maintaining it can potentially be considered not good enough</li>
<li>code is written once but read multiple times; thus, when writing code, we should optimize for the ease of reading it</li>
</ul>
<p>
The vital part of that mindset of clearly expressing intention is proper naming. I highly recommend watching the excellent presentation <a href="https://www.youtube.com/watch?v=MBRoCdtZOYg">CppCon 2019: Kate Gregory &#8220;Naming is Hard: Let&#8217;s Do Better&#8221;</a>. It helps develop a proper way of thinking when writing code.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=491</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A solution for Hangfire Dashboard authentication</title>
		<link>https://blog.pjsen.eu/?p=477</link>
					<comments>https://blog.pjsen.eu/?p=477#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 01 Mar 2022 08:59:35 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[ASP.NET]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=477</guid>

					<description><![CDATA[Let&#8217;s assume we have a typical ASP.NET Web Api back-end and a Single Page Application front-end. The front-end is authenticated with JWT tokens. The problem is, the Hangfire dashboard is a classic ASP.NET MVC-like application (more precisely, Razor Pages application) and will not seamlessly integrate with the existing JWT token authentication approach used in the]]></description>
										<content:encoded><![CDATA[
<p>
    Let&#8217;s assume we have a typical ASP.NET Web Api back-end and a <i>Single Page Application</i> front-end. The front-end is authenticated with JWT tokens. The problem is, the <a href="https://www.hangfire.io/">Hangfire</a> dashboard is a classic ASP.NET MVC-like application (more precisely, a Razor Pages application) and will not seamlessly integrate with the existing JWT token authentication approach used in the back-end Web Api.</p>
<p>    I came up with the following solution: let&#8217;s create a new MVC endpoint authenticated using the existing attributes, but with the token included in the URL. Then use the browser&#8217;s session to communicate with the Hangfire Dashboard and mark a request as authenticated, if it indeed is. A user accesses the dashboard by navigating to the new endpoint; then, if authentication succeeds, they are redirected to the main dashboard URL, which is authenticated just by a flag set in the session. <strong>The biggest advantage of this solution is that it requires no changes in the existing authentication and authorization mechanisms.</strong>
    </p>
<ol>
<li>
            Create a new MVC controller in the back-end Web Api application. Use whatever authorization techniques and attributes are already used in the application for the API controllers</p>
<p>            <script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-01-cs3.cs"></script></p>
</li>
<li>
            Enable the session mechanism. It may look strange for a Web Api, but it is needed here. Call <code>app.UseSession</code> before <code>app.UseMvc</code>
        </li>
<li>
            In the controller&#8217;s action, set a flag in the session. This way, it will be set if, and only if, authentication and authorization succeed
        </li>
<li>
            Do the redirect to the Hangfire Dashboard endpoint
        </li>
<li>
            Create a class that implements <code>IDashboardAuthorizationFilter</code>. This is the customization point for the authentication of the Hangfire Dashboard. Try to read the flag from the session and decide whether the request is authenticated</p>
<p>            <script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-01-cs2.cs"></script></p>
<p>            Use <code>Authorization = new[] { new HangfireAuthorizationFilter() }</code> in the <code>DashboardOptions</code></p>
</li>
<li>
            Now the most important part, which enables the existing token-based authentication to work with the Hangfire Dashboard. Create a new middleware class that rewrites the token from the URL into the headers. It will allow the existing authentication mechanism to do its job without any modifications</p>
<p>            <script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-01-cs1.cs"></script></p>
<p>            Call <code>app.UseMiddleware&lt;TokenFromUrlMiddleware&gt;</code> before <code>app.UseAuthentication</code>.
        </li>
</ol>
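<p>    The token-rewriting middleware from the last step might be sketched as below. This is an assumption-laden sketch, not the actual code: the query parameter name <code>access_token</code> and the exact class shape are illustrative.</p>

```csharp
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

// Sketch only: copies a token from the URL into the Authorization header,
// so the existing JWT authentication middleware can validate it unchanged.
public class TokenFromUrlMiddleware
{
    private readonly RequestDelegate _next;

    public TokenFromUrlMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // "access_token" is an assumed parameter name; do not overwrite an
        // Authorization header if the client already sent one.
        if (!context.Request.Headers.ContainsKey("Authorization") &&
            context.Request.Query.TryGetValue("access_token", out var token))
        {
            context.Request.Headers["Authorization"] = $"Bearer {token}";
        }

        await _next(context);
    }
}
```

<p>    Registered before <code>app.UseAuthentication</code>, this runs early enough that the authentication middleware sees a regular bearer token.</p>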
<p>    There are some caveats to this solution, though.</p>
<ul>
<li>
            The solution may require some additional session setup in a multi-instance back-end configuration. By default, the session is stored in memory. Each instance will have its own copy of the session store, causing the session flag to go unrecognized between the instances, unless the session is configured to use a distributed cache such as Redis
        </li>
<li>
            The security of a token included in the URL is disputable. However, given my architectural drivers it is acceptable, because the application is internal
        </li>
<li>
            There are some rough edges if the application is hosted in a virtual subdirectory of the domain using Kestrel rather than IIS. Please notice that the Redirect action begins with <code>/</code>, which is the root of the domain. We must adjust it accordingly if a subdirectory approach is used (append the subdirectory name). Also, we must somehow inform Hangfire about the subdirectory. If a subdirectory is used, then the real URL of the dashboard is not the main Hangfire path set in its options, but the path prefixed with the subdirectory. The dashboard is a black box from our point of view, and we cannot influence the way it makes its own HTTP requests. The only way of configuring its behavior is through its APIs. We use the <code>PrefixPath</code> property of the <code>DashboardOptions</code> to configure this
        </li>
<li>
            In my setup I also had to use <code>IgnoreAntiforgeryToken = true</code> because of some errors which occurred only in the containerized environment under Kubernetes. The final settings are as follows:</p>
<p>            <script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2022-03-01-cs4.cs"></script>
        </li>
<li>
            Due to the discrepancies between the containerized environment and the local one, it is worth considering separate, conditionally compiled setup calls for the DEBUG local build and the RELEASE build. This way we can skip the prefixes required for subdirectory-based hosting when running locally
        </li>
<li>
            There is <a href="https://stackoverflow.com/questions/58614864/whats-the-difference-between-httprequest-path-and-httprequest-pathbase-in-asp-n">an interesting SO post</a> describing the differences between the <code>Path</code> and <code>PathBase</code> properties of the <code>HttpRequest</code>. These are used internally by Hangfire to dynamically generate URLs for the requests sent by the dashboard. It turns out that these properties are used to detect the subdirectory part of the URL. They behave differently under IIS and Kestrel, unless a particular middleware is additionally plugged into the pipeline
        </li>
<li>
            By default, the session cookie expires 20 minutes after closing the browser&#8217;s tab or right after closing the entire browser
        </li>
<li>
            One can imagine a very unlikely corner case, when the real token is invalidated while the Hangfire session is open. In such a case, the dashboard will remain logged in. I consider these properties acceptable, though
        </li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=477</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Things I&#8217;ve learned about SQL Server the hard way</title>
		<link>https://blog.pjsen.eu/?p=471</link>
					<comments>https://blog.pjsen.eu/?p=471#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Wed, 20 Mar 2019 19:14:02 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[ASP.NET]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=471</guid>

					<description><![CDATA[In this post I am presenting a couple of things I&#8217;ve learned from the analysis of a problem, that manifested itself in an occasional HTTP 500 errors in production instance of an ASP.NET application. This time I don&#8217;t aim at exhaustively explaining every single point, because each of them could be a subject of a]]></description>
					<content:encoded><![CDATA[<p>In this post I am presenting a couple of things I&#8217;ve learned from the analysis of a problem that manifested itself in occasional HTTP 500 errors in a production instance of an ASP.NET application. This time I don&#8217;t aim at exhaustively explaining every single point, because each of them could be the subject of a dedicated blog post.</p>
<p>The story begins with SQL error: <em>SQLEXCEPTION: Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim.</em></p>
<ol>
<li>
In any reasonably modern version of <a href="https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms">SQL Server Management Studio</a> there is an <code>XEvent</code> session <code>system_health</code> under <i>Management</i> → <i>Extended Events</i>. It allows for viewing some important server logs, among which <code>xml_deadlock_report</code> is particularly interesting. It is very important to have access to the production instance of the database server in order to be able to watch the logs.
</li>
<div class="wp-block-image">
<figure class="aligncenter"><img fetchpriority="high" decoding="async" width="628" height="378" src="https://blog.pjsen.eu/wp-content/uploads/2019/03/systemhealth.png" alt="" class="wp-image-472" srcset="https://blog.pjsen.eu/wp-content/uploads/2019/03/systemhealth.png 628w, https://blog.pjsen.eu/wp-content/uploads/2019/03/systemhealth-300x181.png 300w" sizes="(max-width: 628px) 100vw, 628px" /><figcaption>System health XEvent session</figcaption></figure>
</div>
<li>
In this particular case, these <code>xml_deadlock_report</code>s contained one suspicious attribute: <i>isolationlevel = Serializable (4)</i>, and the SQL code was a <code>SELECT</code>. I would not expect my <code>SELECT</code>s to run under the <i>Serializable</i> isolation level.
</li>
<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://blog.pjsen.eu/wp-content/uploads/2019/03/deadlocklog.png" alt="" class="wp-image-472"/><figcaption>Details of a deadlock</figcaption></figure>
</div>
<li>
The isolation level is an attribute of a connection between a client and the database server. A connection is called a <i>session</i> in SQL Server terminology. An explicit <code>BEGIN TRAN</code> is not necessary for the isolation level to be applied. Every SQL statement runs in its own statement-wide transaction. However, for such narrow-scoped transactions, in practice it may not make any difference whether you raise the isolation level or not. The difference can be observed when a transaction is explicit and spans multiple SQL statements.
</li>
<li>
The cause of setting the isolation level to <i>Serializable</i> was the behaviour of <code>TransactionScope</code> <a href="#ref1">[1]</a>. If you use it, it raises the isolation level. It is just a peculiarity of this particular API of the .NET Framework. It is good to know this.
</li>
<li>SQL Server, at least in 2012 and some (I am not sure exactly which ones) later versions, does not reset the isolation level when ADO.NET disposes of a connection. The connection returns to the connection pool <a href="#ref2">[2]</a> and is reused by subsequent <code>SqlConnection</code> objects unless they have a different connection string.
</li>
<li>The connection pool size, if connection pooling is active, poses the limit on how many concurrent connections to a database server a .NET application can make. If there are no free connections in the pool, an exception is thrown <a href="#ref3">[3]</a>.
</li>
<li>
Eliminating the usage of <code>TransactionScope</code> did not solve the issue. Even if you run <code>SELECT</code>s under the default <i>Read Committed</i> isolation level, they still issue <i>Shared locks</i>, which may deadlock with the <i>Exclusive locks</i> of <code>UPDATE</code>s. Under any reasonably high production data traffic, where <code>SELECT</code>s span multiple tables which are also very frequently updated, it is highly probable that a deadlock will occur.
</li>
<li>
The difference between running a <code>SELECT</code> under the <i>Serializable</i> isolation level and the <i>Read Committed</i> level is that in the former, the locks are kept from the moment of executing the <code>SELECT</code> until the transaction ends. You can observe it by manually beginning a <i>Serializable</i> transaction, running any <code>SELECT</code>, observing the <code>dm_tran_locks</code> <i>DMV</i>, and only then committing (or rolling back, whatever) the transaction. Under the <i>Read Committed</i> level, locks are <strong>not</strong> kept until an explicit transaction ends; they are released immediately after the execution of a <code>SELECT</code> finishes. These are the same kind of locks, <i>Shared locks</i>. This implies one cannot observe the difference between executing a <code>SELECT</code> under <i>Serializable</i> and <i>Read Committed</i> when there is no explicit transaction and thus only a statement-wide transaction, which releases locks immediately after the results are returned.</li>
<li>Setting the <i>Read Uncommitted</i> isolation level is practically equivalent to running a <code>SELECT</code> with the <code>WITH(NOLOCK)</code> hint, even if you don&#8217;t explicitly open a transaction.
</li>
<li>
In Entity Framework, a <code>SqlConnection</code> is opened for every materialization of a query; the results are returned, and the connection is immediately closed and returned to the pool <a href="#ref5">[5]</a>. <strong>The connection lifetime is by no means related to the scope of the <code>DbContext</code> object</strong>. I can see a kind of similarity between how Entity Framework uses <code>SqlConnection</code>s and how ASP.NET makes use of threads when executing <code>async</code> methods. A thread is released on every <code>await</code> and can be used for doing something more valuable than waiting. Similarly, a <code>SqlConnection</code> is released right after materialization and can be used for executing a different command, in a different request (in the case of ASP.NET), even before the <code>DbContext</code> is disposed of.
</li>
<li>It is not that obvious how to reset the isolation level of a connection. You see, every time your C# code using Entity Framework results in sending SQL to the SQL Server, it can take a different connection from the pool (if anyone knows whether there is any ordering applied when retrieving connections from the pool, please feel free to comment). It may or may not be the same connection you used previously. Consequently, it is not easy to &#8216;catch&#8217; the underlying connection when using Entity Framework. You can call <code>BeginTransaction</code> every time you use a <code>DbContext</code>, and then you are guaranteed to own the connection for all your SQL commands. But that way you are forcing a transaction to be opened when you don&#8217;t really need one. What I recommend is to handle the <code>StateChange</code> event of the <code>DbConnection</code> object, as described in <a href="#ref4">[4]</a>. You can do it either on opening the connection or on closing it.</li>
<li>
In SQL Server you can monitor open sessions with the following query:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2019-03-20-sessions.sql"></script></p>
</li>
</ol>
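<p>The <code>StateChange</code> approach from point 11 might be sketched as follows. This is only an illustration of the idea from <a href="#ref4">[4]</a>; the class and method names are made up, and the snippet obviously needs a reachable SQL Server to do anything:</p>

```csharp
using System.Data;
using System.Data.Common;

// Sketch: reset the isolation level whenever a pooled connection is
// (re)opened, so a previous user of the same physical connection cannot
// leak e.g. Serializable into our code.
public static class ConnectionHygiene
{
    public static void AttachIsolationLevelReset(DbConnection connection)
    {
        connection.StateChange += (sender, e) =>
        {
            if (e.OriginalState != ConnectionState.Open &&
                e.CurrentState == ConnectionState.Open)
            {
                using (var cmd = ((DbConnection)sender).CreateCommand())
                {
                    cmd.CommandText =
                        "SET TRANSACTION ISOLATION LEVEL READ COMMITTED;";
                    cmd.ExecuteNonQuery();
                }
            }
        };
    }
}
```

<p>The handler fires on every transition to <i>Open</i>, which in the pooled world means "this code is about to use a connection with unknown history".</p>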
<p>References:</p>
<p style="text-align:left">
<a id="ref1">[1]</a> &nbsp;&nbsp; <a href="https://stackoverflow.com/questions/11292763/why-is-system-transactions-transactionscope-default-isolationlevel-serializable">https://stackoverflow.com/questions/11292763/why-is-system-transactions-transactionscope-default-isolationlevel-serializable</a><br />
<br />
<a id="ref2">[2]</a> &nbsp;&nbsp; <a href="https://stackoverflow.com/questions/9851415/sql-server-isolation-level-leaks-across-pooled-connections">https://stackoverflow.com/questions/9851415/sql-server-isolation-level-leaks-across-pooled-connections</a><br />
<br />
<a id="ref3">[3]</a> &nbsp;&nbsp; <a href="https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling">https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling</a><br />
<br />
<a id="ref4">[4]</a> &nbsp;&nbsp; <a href="https://stackoverflow.com/questions/28442558/entity-framework-and-transactionscope-doesnt-revert-the-isolation-level-after-d">https://stackoverflow.com/questions/28442558/entity-framework-and-transactionscope-doesnt-revert-the-isolation-level-after-d</a><br />
<br />
<a id="ref5">[5]</a> &nbsp;&nbsp; <a href="https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/bb896325(v=vs.100)#connections-and-the-entity-framework">https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/bb896325(v=vs.100)#connections-and-the-entity-framework</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=471</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>The worst Entity Framework pitfall</title>
		<link>https://blog.pjsen.eu/?p=462</link>
					<comments>https://blog.pjsen.eu/?p=462#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 13 Jan 2019 11:46:30 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=462</guid>

					<description><![CDATA[I work with a quite big enterprise system in my job. Not surprisingly, it uses Entity Framework (Core, but it does not matter) and SQL Server. The system consists of multiple reusable components also in the data access layer. I had to modify DbContext and write some flexible and reusable method accepting a predicate as]]></description>
										<content:encoded><![CDATA[
<p>
I work with quite a big enterprise system in my job. Not surprisingly, it uses Entity Framework (Core, but it does not matter) and SQL Server. The system consists of multiple reusable components, also in the data access layer. I had to modify the <em>DbContext</em> and write some flexible, reusable method accepting a predicate as an argument and applying the predicate on a <em>DbContext</em>. Let&#8217;s assume we are using the table <em>A</em> from the previous post. I happily coded the signature of the method to use <code>Func&lt;A, bool&gt;</code>. Let&#8217;s simulate this in LINQPad and run our <code>Func&lt;A, bool&gt;</code> against a <em>DbContext</em>.
</p>



<div class="wp-block-image"><figure class="aligncenter"><img decoding="async" width="471" height="331" src="https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredfunc.png" alt="" class="wp-image-465" srcset="https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredfunc.png 471w, https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredfunc-300x211.png 300w" sizes="(max-width: 471px) 100vw, 471px" /></figure></div>



<p>
It did not work. Or&#8230; did it? The picture above shows only the generated SQL, but I promise the results correctly show the one record. <strong>The problem is that the predicate has been applied in memory, after all the records from table A had been pulled into memory as well</strong>. I am not going to explain what that means for any reasonably sized system. The correct way of doing this is to use <code>Expression&lt;Func&lt;A, bool&gt;&gt;</code>.
</p>



<div class="wp-block-image"><figure class="aligncenter"><img decoding="async" width="550" height="400" src="https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredexprfunc.png" alt="" class="wp-image-467" srcset="https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredexprfunc.png 550w, https://blog.pjsen.eu/wp-content/uploads/2019/01/efpredexprfunc-300x218.png 300w" sizes="(max-width: 550px) 100vw, 550px" /></figure></div>



<p>
The explanation is in fact really obvious to anyone who deeply understands how <em>ORM</em>s work. The data structure which allows for inspecting a predicate on the fly and building the final SQL query is <code>Expression</code>. There is already an infrastructure of so-called expression visitors for this. Please also note that you can always get your <code>Func</code> from an <code>Expression&lt;Func&gt;</code> by calling the <code>Compile</code> method on it.
</p>
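<p>The pitfall can actually be reproduced without any database, because it is overload resolution that silently decides between in-memory and translatable filtering. A minimal sketch, using plain integers instead of an entity type:</p>

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class PredicateDemo
{
    public static (int viaFunc, int viaExpression) Run()
    {
        IQueryable<int> source = Enumerable.Range(1, 5).AsQueryable();

        Func<int, bool> func = x => x > 3;
        Expression<Func<int, bool>> expr = x => x > 3;

        // A plain Func binds to Enumerable.Where: filtering happens in memory,
        // after every element has been pulled from the source.
        var inMemory = source.Where(func);

        // An Expression binds to Queryable.Where: the predicate remains a data
        // structure that an ORM can inspect and translate into SQL.
        var translatable = source.Where(expr);

        // And an Expression can always be turned back into a Func:
        Func<int, bool> compiled = expr.Compile();

        return (inMemory.Count(), translatable.Count());
    }
}
```

<p>Both queries return the same two elements here; the difference only shows up when the <code>IQueryable</code> is backed by a real database, where the <code>Func</code> version degrades into a full table scan on the client.</p>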
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=462</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Where to put condition in SQL?</title>
		<link>https://blog.pjsen.eu/?p=454</link>
					<comments>https://blog.pjsen.eu/?p=454#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 30 Dec 2018 11:23:40 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=454</guid>

					<description><![CDATA[Let&#8217;s suppose I am modeling a business domain with entities A, B and C. These entities have the following properties: An entity A can have an entity B and C An entity A can have only entity B An entity A can exist without B and C An entity B has not null property Active]]></description>
										<content:encoded><![CDATA[
<p>
Let&#8217;s suppose I am modeling a business domain with entities <em>A, B and C</em>. These entities have the following properties:
</p>

<ul>
<li>An entity A can have an entity B and C </li>
<li>An entity A can have only entity B </li>
<li>An entity A can exist without B and C </li>
<li>An entity B has not null property <em>Active</em> </li>
</ul>

<p>
I am implementing the domain with the following SQL. I omit foreign key constraints for brevity.
</p>

<script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-12-30-sql1.sql"></script>

<p>
Now let&#8217;s suppose my task is to perform a validity check according to special rules. I am given an <em>Id</em> of an entity <em>A</em> as an input, and I have to check:
</p>

<ol>
<li>If the entity exists and </li>
<li>If it is valid </li> 
</ol>

<p>The existence will be checked by simply looking whether the corresponding row is present in the result set, and for the validity check I will write a simple <code>CASE</code> expression. These are the rules for my example data:</p>

<ul>
<li> A.1 exists and has active B.10 and has C.100 =&gt; <strong style="color:green">exists, </strong><strong style="color:green">correct</strong> </li>
<li> A.2 exists and has inactive B.20 and has C.200 =&gt; <strong style="color:green">exists</strong>, <strong style="color:red">incorrect</strong></li>
<li> A.3 exists and has active B.30 and has C.300 =&gt; <strong style="color:green">exists</strong>, <strong style="color:green">correct</strong> </li>
<li> A.4 exists and has active B.40 and DOES NOT HAVE C =&gt; <strong style="color:green">exists</strong>, <strong style="color:red">incorrect</strong> </li>
<li> A.5 exists and HAS NEITHER B NOR C =&gt; <strong style="color:green">exists</strong>, <strong style="color:red">incorrect</strong> </li>
<li> A.6 <strong style="color:red">does not exist</strong>, <strong style="color:red">incorrect</strong> </li>
</ul>

<p>
I write the following query to do the task:
</p>

<script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-12-30-sql2.sql"></script>

<p>
My rules include checking whether <em>B.Active</em> is true, so I just put this into the <code>WHERE</code> clause. The result is:
</p>

<pre>AId  Correct 
---- --------
1    1       
3    1       
4    0       
</pre>

<p>
The problem is, I have been given the exact set of <em>Ids of A</em> to check: <code>1, 2, 3, 4, 5, 6</code>. But my result does not include <code>2, 5, 6</code>. <span style="text-decoration: underline">My application logic fails here, because it considers those <em>A</em> records as missing</span>. For <code>6</code> this is fine, because it is absent from table <em>A</em>, but <code>2</code> and <code>5</code> must be present in the results for my validity check. The fix is extremely easy:
</p>

<script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-12-30-sql3.sql"></script>

<p>
Now the result is:
</p>

<pre>AId  Correct 
---- --------
1    1       
2    0       
3    1       
4    0       
5    0       
</pre>

<p>
It is very easy to understand that <code>WHERE</code> is applied to filter all the results, no matter what my intention for the <code>JOIN</code> was. When a record is <code>LEFT JOIN</code>ed without a match, the values from <em>B</em> are null, so the condition in <code>WHERE</code> is not met. But I still need to have the <em>A</em> record in my results. Thus, what I have to do is include my condition in the <code>JOIN</code>.
</p>
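<p>This behavior is standard SQL, so it can be reproduced outside SQL Server as well. Below is a minimal sketch using SQLite via Python; the table and column names (<em>A</em>, <em>B</em>, <em>Active</em>) are simplified stand-ins for the schema above, not the actual ones from the gists.</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (Id INTEGER PRIMARY KEY);
    CREATE TABLE B (AId INTEGER, Active INTEGER);
    INSERT INTO A (Id) VALUES (1), (2), (3);
    INSERT INTO B (AId, Active) VALUES (1, 1), (3, 1);  -- no B row for A.2
""")

# Condition in WHERE: the NULLs produced for unmatched rows fail the
# test, so A.2 silently disappears from the result.
in_where = con.execute("""
    SELECT A.Id FROM A
    LEFT JOIN B ON B.AId = A.Id
    WHERE B.Active = 1
    ORDER BY A.Id
""").fetchall()
print(in_where)  # [(1,), (3,)]

# Condition in the JOIN: unmatched A rows survive, with NULLs from B.
in_on = con.execute("""
    SELECT A.Id, B.Active FROM A
    LEFT JOIN B ON B.AId = A.Id AND B.Active = 1
    ORDER BY A.Id
""").fetchall()
print(in_on)  # [(1, 1), (2, None), (3, 1)]
```

<p>The only difference between the two queries is where <code>B.Active = 1</code> lives, yet only the second one preserves the unmatched <em>A</em> records.</p>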

<p>It is also very easy to fall into the trap of thoughtlessly writing every intended condition in the <code>WHERE</code> clause.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=454</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>A few random ASP.NET Core and .NET Core tips</title>
		<link>https://blog.pjsen.eu/?p=443</link>
					<comments>https://blog.pjsen.eu/?p=443#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Wed, 26 Sep 2018 18:48:31 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[ASP.NET]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=443</guid>

					<description><![CDATA[I&#8217;ve been working with .NET core recently and I&#8217;d like to post some random observations on this subject for the future reference. It is possible to create Nuget package upon build. This option is actually available also from the VS2017 Project properties GUI. Add this code to csproj. It is possible to add local folder]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve been working with .NET Core recently and I&#8217;d like to post some random observations on the subject for future reference.</p>
<ol>
<li>
It is possible to create a NuGet package upon build. This option is also available from the VS2017 project properties GUI. Add this code to the <code>csproj</code>.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-09-26-xml1.xml"></script></p>
</li>
<li>
It is possible to add a local folder as a NuGet feed. The folder can also be in the current user&#8217;s profile. This one is not actually Core-specific. <code>Nuget.config</code> should look like this:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-09-26-xml2.xml"></script></p>
</li>
<li>
You can compile for multiple targets in a <code>.NET Core</code>-compatible <code>csproj</code>. Please note the trailing <span style='color:red'><strong>s</strong></span> in the tag name. You can also conditionally include items in the <code>csproj</code>. Use the following snippets: </p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-09-26-xml3.xml"></script></p>
<p>and:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-09-26-xml4.xml"></script></p>
<p>There is a reference documentation for the available targets: <a href="https://docs.microsoft.com/en-us/dotnet/standard/frameworks">here</a>.</p>
</li>
<li>
The listening port in Kestrel can be configured in multiple ways. It can be read from an environment variable or passed as a command-line argument. An asterisk is required to bind to all physical interfaces; this is needed e.g. when viewing the application on a mobile phone while it is served from the development machine. The following are equivalent:</p>
<pre>
set ASPNETCORE_URLS=http://*:11399
--urls http://*:11399
</pre>
</li>
<li>
The preferred way to pass hosting parameters to Kestrel is the <code>launchSettings.json</code> file located in the <code>Properties</code> folder of the project. You can select a profile defined there when running:</p>
<pre>
dotnet run --launch-profile "Dev"
</pre>
<p><code>dotnet run</code> is used to build and run from the directory where the <code>csproj</code> resides. It is not a good idea to run the app&#8217;s dll directly: the settings file can be missing from the <code>bin</code> folder and/or the launch profile may not be present there.</p>
</li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=443</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to run Tmux in GIT Bash on Windows</title>
		<link>https://blog.pjsen.eu/?p=440</link>
					<comments>https://blog.pjsen.eu/?p=440#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Wed, 18 Jul 2018 18:34:18 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=440</guid>

					<description><![CDATA[I know everyone uses Cmder, but it didn&#8217;t work for me. It hung a few times, it has way too many options, it has issues sending signal to kill a process. I gave up on using it. I work with carefully configured default Windows console and believe it or not, it serves the purpose. I]]></description>
										<content:encoded><![CDATA[<p><div id="attachment_441" style="width: 738px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-441" class="wp-image-441 size-full" src="https://blog.pjsen.eu/wp-content/uploads/2018/07/tmuxgitbash.png" alt="" width="728" height="344" srcset="https://blog.pjsen.eu/wp-content/uploads/2018/07/tmuxgitbash.png 728w, https://blog.pjsen.eu/wp-content/uploads/2018/07/tmuxgitbash-300x142.png 300w" sizes="auto, (max-width: 728px) 100vw, 728px" /><p id="caption-attachment-441" class="wp-caption-text">Tmux running under Git Bash default terminal with two shell processes</p></div></p>
<p>I know everyone uses <a href="http://cmder.net/">Cmder</a>, but it didn&#8217;t work for me. It hung a few times, it has way too many options, and it has issues sending signals to kill a process. I gave up on using it. I work with a carefully configured default Windows console and, believe it or not, it serves the purpose. I also know you can use the <a href="https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux">Windows Subsystem for Linux</a> under Windows 10, which is truly amazing, but here I am talking about the cases where you need a standard Git for Windows installation.</p>
<p>When I worked with Unix I liked <a href="https://en.wikipedia.org/wiki/GNU_Screen">GNU Screen</a>, which is a terminal multiplexer. It gives you a bunch of keyboard shortcuts to create separate shell processes under the same terminal window. The problem is, it is not available under Git Bash. But it turns out its alternative &#8212; <a href="https://en.wikipedia.org/wiki/Tmux">Tmux</a> &#8212; is.</p>
<p>I did a little research and found that Git Bash uses a MinGW compilation of GNU tools, but ships only selected ones. You can install the whole distribution of the tools from <a href="https://www.msys2.org/">https://www.msys2.org/</a>, run a command to install <em>Tmux</em>, and then copy a few files into the installation folder of Git. This is what you do:</p>
<ol>
<li>Install the aforementioned msys2 package and run its bash shell</li>
<li>Install tmux using the following command: <code>pacman -S tmux</code></li>
<li>Go to msys2 directory, in my case it is <code>C:\msys64\usr\bin</code></li>
<li>Copy <code>tmux.exe</code> and <code>msys-event-2-1-4.dll</code> to your Git for Windows directory, mine is <code>C:\Program Files\Git\usr\bin</code>. Please note that in the future this file may carry a version number higher than <em>2-1-4</em></li>
</ol>
<p>And you are ready to go. <strong>Please note that I do this on 64-bit installations of Git and MSYS</strong>. Now when you run Git Bash, enter <code>tmux</code>. My most frequently used commands are:</p>
<ul>
<li><code>CTRL+B</code>, <span style="color: gray;">(release and then) </span> <code>C</code> — create new shell within existing terminal window</li>
<li><code>CTRL+B</code>, <code>N</code> — switch between shells</li>
<li><code>CTRL+B</code>, <em>a digit</em> — switch to the chosen shell by the corresponding number</li>
<li><code>CTRL+B</code>, <code>"</code> — split current window horizontally into panels (panels are inside windows)</li>
<li><code>CTRL+B</code>, <code>o</code> — switch between panels in current window</li>
<li><code>CTRL+B</code>, <code>x</code> — close panel</li>
</ul>
<p>This is everything you need to know to start using it. Simple. There are many other options which you can explore yourself, for example here <a href="http://hyperpolyglot.org/multiplexers">http://hyperpolyglot.org/multiplexers</a>.</p>
<p>Update 1: Users in the comments report that the method does not always work. If you have any experience with this method, please feel free to comment so that we can figure out the circumstances under which it works.</p>
<p>Update 2: I managed to run this on Windows 7, Windows 2012 R2 and Windows 10. My Git installation is set up to use the MinTTY console, and tmux works only when run from this console, not from the default Windows command line console. I still haven&#8217;t figured out the precise requirements for this trick.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=440</wfw:commentRss>
			<slash:comments>30</slash:comments>
		
		
			</item>
		<item>
		<title>UPDATE with JOIN subtle bug</title>
		<link>https://blog.pjsen.eu/?p=438</link>
					<comments>https://blog.pjsen.eu/?p=438#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 29 Mar 2018 15:52:42 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=438</guid>

					<description><![CDATA[I have been diagnosing very subtle bug in SQL code which led to unexpected results. It happens under rare circumstances, when you do update with join and you want to increase some number by one. You just write value = value + 1. The thing is, you are willing to increase the value by the]]></description>
										<content:encoded><![CDATA[<p>I have been diagnosing a very subtle bug in SQL code which led to unexpected results. It happens under rare circumstances, when you do an update with a join and you want to increase some number by one. You just write <code>value = value + 1</code>. The thing is, you intend to increase the value by the number of joined rows, and the SQL code kind of expresses that intent. However, what actually happens is that the existing value is read only once. The row is indeed written three times, but each time with the same value, incremented only by one.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-03-29-sql1.sql"></script></p>
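<p>The original gist is in T-SQL, but the intended fix can be sketched portably: if the goal is to increase the value by the number of matching rows, make that count explicit with a correlated subquery instead of relying on the join. Below is a minimal sketch using SQLite via Python, with illustrative table names rather than the ones from the gist.</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Counters (Id INTEGER PRIMARY KEY, Value INTEGER);
    CREATE TABLE Events   (CounterId INTEGER);
    INSERT INTO Counters VALUES (1, 10);
    INSERT INTO Events VALUES (1), (1), (1);  -- three rows match counter 1
""")

# "Value = Value + 1" with a join would add one in total, because each
# target row is written at most once. Counting the matches expresses
# the per-matched-row increment directly.
con.execute("""
    UPDATE Counters
    SET Value = Value + (SELECT COUNT(*) FROM Events e
                         WHERE e.CounterId = Counters.Id)
""")
new_value = con.execute("SELECT Value FROM Counters WHERE Id = 1").fetchone()[0]
print(new_value)  # 13, i.e. 10 + 3, not 10 + 1
```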
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=438</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Unix tools always work. Even when Windows ones don&#8217;t</title>
		<link>https://blog.pjsen.eu/?p=435</link>
					<comments>https://blog.pjsen.eu/?p=435#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 08 Jan 2018 19:57:50 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[Solutions]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://blog.pjsen.eu/?p=435</guid>

					<description><![CDATA[In the video above, you can see, that we cannot drag and drop a file onto a PowerShell script. Conversely, we can easily do this with bash script having installed Git for Windows. I needed to perform some trivial conversion within a SQL script, i.e. replace ROLLBACK with COMMIT. I thought I would implement it]]></description>
										<content:encoded><![CDATA[<p><center><br />
<div style="width: 378px;" class="wp-video"><video class="wp-video-shortcode" id="video-435-1" width="378" height="212" preload="metadata" controls="controls"><source type="video/mp4" src="https://blog.pjsen.eu/wp-content/uploads/2018/01/powershell_drag_and_drop.mp4?_=1" /><a href="https://blog.pjsen.eu/wp-content/uploads/2018/01/powershell_drag_and_drop.mp4">https://blog.pjsen.eu/wp-content/uploads/2018/01/powershell_drag_and_drop.mp4</a></video></div><br />
</center></p>
<p>In the video above, you can see that we cannot drag and drop a file onto a PowerShell script. Conversely, we can easily do this with a bash script, having installed Git for Windows.</p>
<p>I needed to perform some trivial conversion within a SQL script, i.e. replace <em>ROLLBACK</em> with <em>COMMIT</em>. I thought I would implement it with PowerShell. I am not going to comment on the implementation itself, even though it turned out to be not that obvious. Then I realized it would be nice if I could drag and drop the file in question to apply the conversions to it. </p>
<p>This does not work with the default configuration of PowerShell. I did not have time to hack it somewhere in the registry, where I assume it is doable. I switched to the old, good bash shell instead. </p>
<p>It&#8217;s a pity I couldn&#8217;t do that with Windows&#8217; native scripting technology. It is very interesting that the MinGW port of the shell has been so carefully implemented that even dragging and dropping works in a non-native environment.</p>
<p>I recall the book <a href="https://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X">Pragmatic Programmer: From Journeyman To Master</a>. There is a whole subchapter about the power of Unix tools. The conclusion was that over time we come across plenty of specialized file formats and tools to manipulate data stored in them. Some of them may become forgotten and abandoned years later, making it difficult to extract or modify the data. Some may work only on specific platforms.</p>
<p>But standard Unix tools like shell scripting, Perl and AWK will always exist. I should say: not only will they always exist, but they will also thrive and support almost every platform you can imagine. They work on plain text, which is easy to process. I am a strong proponent of the before-mentioned technologies and I have successfully used them many times in everyday work. It may sound strange coming from a .NET developer, but this is what it looks like. PowerShell simply did not do the trick for me. Perl did. As it always does.  </p>
<p>For the sake of the future reference I am including the actual scripts:</p>
<p>PowerShell script:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-01-08-ps1.ps1"></script></p>
<p>Bash script running Perl:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2018-01-08-pl.pl"></script></p>
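<p>For comparison, the same trivial conversion can also be sketched in Python (the file contents below are an illustrative assumption, not taken from the scripts above):</p>

```python
import tempfile
from pathlib import Path

def convert(path: Path) -> None:
    # Replace every ROLLBACK with COMMIT, rewriting the file in place.
    path.write_text(path.read_text().replace("ROLLBACK", "COMMIT"))

# Demo on a throwaway file.
with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "example.sql"
    script.write_text("BEGIN TRAN;\nUPDATE T SET X = 1;\nROLLBACK;\n")
    convert(script)
    result = script.read_text()

print(result)  # the ROLLBACK line now reads COMMIT
```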
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=435</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://blog.pjsen.eu/wp-content/uploads/2018/01/powershell_drag_and_drop.mp4" length="89054" type="video/mp4" />

			</item>
		<item>
		<title>Basic example of SQL Server transaction deadlock with serializable isolation level</title>
		<link>https://blog.pjsen.eu/?p=400</link>
					<comments>https://blog.pjsen.eu/?p=400#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 28 Dec 2017 19:27:23 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=400</guid>

					<description><![CDATA[Today I am demonstrating a deadlock condition which I came across after I had accidentally increased isolation level to serializable. We can replay this condition with the simplest table possible: Now let&#8217;s open SSMS (BTW, since version 2016 there is an installer decoupled from SQL Server available here) with two separate tabs. Please consider, that]]></description>
										<content:encoded><![CDATA[<p>Today I am demonstrating a deadlock condition which I came across after I had accidentally increased the isolation level to <em>serializable</em>. We can replay this condition with the simplest table possible:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-12-28-sql1.sql"></script></p>
<p>Now let&#8217;s open SSMS (BTW, since version 2016 there is an installer decoupled from SQL Server, available <a href="https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms">here</a>) with two separate tabs. Note that each tab has its own connection. Then execute the following statement in both tabs to increase the isolation level:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-12-28-sql2.sql"></script></p>
<p>The following code resembles an application-level function which does a bunch of possibly time-consuming things, emulated here with the <code>WAITFOR</code> instruction. The point is that the transaction does both a <code>SELECT</code> and an <code>UPDATE</code> on the same table, with those time-consuming things in between.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-12-28-sql3.sql"></script></p>
<p>Let&#8217;s put the code in both tabs and then execute the first tab followed by the second. After waiting more than 10 seconds, which is the delay in the code, we will observe an error message in the first tab:</p>
<pre>
Msg 1205, Level 13, State 56, Line 8
Transaction (Process ID 54) was deadlocked on lock resources with another 
process and has been chosen as the deadlock victim. Rerun the transaction.
</pre>
<p>This situation occurred in a web application where concurrent execution of methods is pretty common. For an application developer it is very easy to be tricked into thinking that by setting the <code>SERIALIZABLE</code> isolation level we magically make our SQL code execute sequentially. But this is wrong. By setting the <code>SERIALIZABLE</code> level we <strong>do not</strong> automatically give the code wrapped in a transaction the behavior of the <code>lock</code> statement known from C# (technically <code>lock</code> is a <em>monitor</em>).</p>
<p>I would advise having a closer look at the instructions wrapped in the transaction. In a real application the execution flow is much more `polluted` with ORM calls, but my simplified code above models the common scenario of reads followed by writes. What happens here is that SQL Server takes a reader lock on the table when executing the <code>SELECT</code>. When we execute the code in another session, one more reader lock is taken on the table. Now, when the first session passes <code>WAITFOR</code> and reaches its <code>UPDATE</code>, it needs to take a writer lock and waits, because it is blocked by the reader lock taken by the <code>SELECT</code> in the second session (I am purposely using generic vocabulary instead of SQL Server specific terms &mdash; these locks inside the database engine all have their own names). Conversely, the second session&#8217;s <code>UPDATE</code> waits for the lock taken by the <code>SELECT</code> in the first session. This is a deadlock, which fortunately is detected by the engine. </p>
<p>The problem is caused by the lock taken with the <code>SELECT</code> instruction under the <code>SERIALIZABLE</code> isolation level. The lock is not taken in this place with <code>READ COMMITTED</code>, which is the default level.</p>
<p>I am writing about this for the following reasons:</p>
<ul>
<li>
This is a very simple scenario from the application point of view: read some data, update the data, do some things, and have all of this wrapped in a transaction.
</li>
<li>
It is very easy to make the <strong>wrong</strong> assumption that the <code>SERIALIZABLE</code> level guarantees our SQL code will be executed sequentially. It only guarantees that if the transactions complete, their observable effects will be as if they had executed sequentially, i.e. one after another. It is your job to make them actually complete and not run into a deadlock.
</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=400</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>&#8216;Differs only in casing&#8217; class of problems</title>
		<link>https://blog.pjsen.eu/?p=403</link>
					<comments>https://blog.pjsen.eu/?p=403#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Wed, 26 Jul 2017 18:13:06 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[JavaScript]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=403</guid>

					<description><![CDATA[This one attacked me yesterday by the end of the day. This is a TypeScript project with some instrumentation done with gulp. The command shown attempts to launch TypeScript compiler. The console is standard git bash for Windows. I didn&#8217;t realize what was going on there at the first sight. The messages were confusing and]]></description>
										<content:encoded><![CDATA[<p><div style="width: 936px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-full wp-image-404" src="http://blog.pjsen.eu/wp-content/uploads/2017/07/s1.png" alt="" width="926" height="496" srcset="https://blog.pjsen.eu/wp-content/uploads/2017/07/s1.png 926w, https://blog.pjsen.eu/wp-content/uploads/2017/07/s1-300x161.png 300w, https://blog.pjsen.eu/wp-content/uploads/2017/07/s1-768x411.png 768w" sizes="auto, (max-width: 926px) 100vw, 926px" /><p class="wp-caption-text">Error message from TypeScript compiler</p></div></p>
<p>This one attacked me yesterday at the end of the day. This is a TypeScript project with some instrumentation done with <code>gulp</code>. The command shown attempts to launch the TypeScript compiler. The console is the standard git bash for Windows. I didn&#8217;t realize what was going on at first sight. The messages were confusing and mysterious. I made sure I didn&#8217;t have any changes in the repository compared to HEAD. I opened the folder in Visual Studio Code and everything was fine. It was only the compiler which saw the problems. </p>
<p>The next day I figured out there is a small detail in my path. Just take a look at the casing of the directory names:</p>
<p><div style="width: 936px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-405" src="http://blog.pjsen.eu/wp-content/uploads/2017/07/s2.png" alt="" width="926" height="496" srcset="https://blog.pjsen.eu/wp-content/uploads/2017/07/s2.png 926w, https://blog.pjsen.eu/wp-content/uploads/2017/07/s2-300x161.png 300w, https://blog.pjsen.eu/wp-content/uploads/2017/07/s2-768x411.png 768w" sizes="auto, (max-width: 926px) 100vw, 926px" /><p class="wp-caption-text">TypeScript  compiler ran successfully</p></div></p>
<p>After changing the directory using names with proper casing, everything was fine. What actually happened was that I somehow opened git bash with the current directory name spelled in lower case. It is not that difficult to do. This is a flaw in the MinGW implementation of bash, I think. It allows you to change the directory using its name written in whatever case. That alone is not the problem, because <code>cmd.exe</code> allows you to do so as well. The problem is that it then stores the path as typed (probably in the <code>PWD</code> environment variable). Cross-platform tools which are case-sensitive on other platforms, when executed from git bash with mismatched working directory casing, may begin to treat such a path as a separate one, different from the original. This especially affects tools which process files using paths relative to the current directory, like the TypeScript compiler.</p>
<p>This can possibly be a wider class of problems, and I guess there are other development tools which behave like this when launched from git bash under the before-mentioned conditions. </p>
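<p>The core of the problem can be illustrated with two hypothetical spellings of the same Windows directory. A tool that compares paths as plain, case-sensitive strings sees two different locations, even though the file system treats them as one:</p>

```python
# Hypothetical spellings of one and the same Windows directory.
reported_cwd = "c:/users/me/documents/project"   # what the shell stored, as typed
actual_path  = "C:/Users/Me/Documents/Project"   # the on-disk casing

# A case-sensitive tool comparing raw strings sees two distinct paths...
print(reported_cwd == actual_path)  # False

# ...although under Windows' case-insensitive semantics they are equal.
print(reported_cwd.casefold() == actual_path.casefold())  # True
```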
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=403</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>In C# interface implementations are not inherited</title>
		<link>https://blog.pjsen.eu/?p=398</link>
					<comments>https://blog.pjsen.eu/?p=398#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 21 Feb 2017 18:57:26 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[General programming]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=398</guid>

					<description><![CDATA[It may be obvious to some readers, however I was a little bit surprised when I discovered that. Actually, I realized this by looking at a non-trivial class hierarchy in real world application. One can easily think that discussion about inheritance is kind of theoretical area and it primarily appears during job interviews, but it]]></description>
										<content:encoded><![CDATA[<p>It may be obvious to some readers, however I was a little bit surprised when I discovered it. Actually, I realized this by looking at a non-trivial class hierarchy in a real-world application. One can easily think that discussion of inheritance is a kind of theoretical area which primarily appears during job interviews, but that is not true. I will explain the real use case and the real reasoning behind this hierarchy later in this post; for now, please take a look at the following program. Generally, the point is that <strong>1)</strong> we have to use a reference of an interface type and we want more than one specialized implementation of the interface, and <strong>2)</strong> we need <code>class B</code> to inherit from <code>class A</code>. Without the second requirement it would be obvious: it would be sufficient to write two separate implementations of <code>IActivity</code> and we are done.  </p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-02-21-cs1.cs"></script></p>
<p>It prints <code>A does activity</code> despite the <code>ia</code> variable storing a reference to an object of type <code>B</code>, and also despite explicitly declaring hiding of the base method. It was not clear to me why that is so. The type <code>B</code> obviously has its own implementation, so why is it not run here? To overcome this I initially created a base class declared as abstract:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-02-21-cs2.cs"></script></p>
<p>It prints <code>B does activity</code>, but it is also overcomplicated. Then I came up with a simpler solution &mdash; it turns out we have to explicitly mark <code>class B</code> as implementing <code>IActivity</code>.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-02-21-cs3.cs"></script></p>
<p>It prints <code>B does activity</code>, but it is still not perfect: method hiding is not a good practice. Finally I ended up with a more elegant (and, I guess, the simplest) solution:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2017-02-21-cs4.cs"></script></p>
<p>Here we are using the <code>virtual</code> and <code>override</code> modifiers to clearly express the intention of specializing the method. It is better than the previous one, because just by looking at <code>class A</code> we are already informed that further specialization is going to occur. </p>
<p>The real usage scenario is that we have an interface representing an <i>Entity Framework Core</i> context. We have two distinct implementations: one uses a real database and the other uses an in-memory database for tests. The latter inherits from the former, because inheritance is essentially about reusing code. We just want the in-memory implementation to have the same methods as the regular one, but with some slight modifications, e.g. in methods executing raw SQL. We also have to refer to these objects through an interface, because this is how dependency injection containers work.</p>
<p>As you can see, what might seem purely theoretical is actually used in a real-world line-of-business application. Although I had been pretty confident I understood the principles of inheritance in object-oriented programming since my undergrad computer science lectures, as I mentioned, I was surprised to discover that we have to explicitly put <code>: IActivity</code> on <code>class B</code> even though the implementation was already there. Anyway, this example encourages me to always be prepared to verify the assumptions I make.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=398</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Beware LINQ Group By custom class</title>
		<link>https://blog.pjsen.eu/?p=396</link>
					<comments>https://blog.pjsen.eu/?p=396#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 21 Nov 2016 19:30:52 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=396</guid>

					<description><![CDATA[Actually this one is pretty obvious. But if you are focused on implementing some complex logic, it is easy to forget about this requirement. Let&#8217;s assume there is following piece of code: This will compile and run, however it will not distinguish ProductCodeCentreNumPair instances. I.e. it will not do the grouping, but produce a sequence]]></description>
										<content:encoded><![CDATA[<p>Actually this one is pretty obvious. But if you are focused on implementing some complex logic, it is easy to forget about this requirement. Let&#8217;s assume there is the following piece of code:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-11-21-cs1.cs"></script></p>
<p>This will compile and run, however it will not distinguish <code>ProductCodeCentreNumPair</code> instances. I.e. it will not do the grouping, but will produce a sequence of <code>IGrouping</code>s, one for each source item. The reason is self-evident if we think about it for a moment. This custom class does not implement custom equality comparison logic. The default logic is based on <code>ReferenceEquals</code>, so, as each separate object resides at a different memory address, they all will be recognized as not equal to each other, even if they contain the same values in their fields (strings behave like value types here, although they are reference types). I used the following set of overridden methods to provide custom equality comparison logic and solve the problem. It is important to note that <code>GetHashCode</code> is also needed in order for the grouping to work.  </p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-11-21-cs2.cs"></script></p>
<p>Alternatively you can use anonymous types, I mean: </p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-11-21-cs3.cs"></script></p>
<p>will just work. This is because instances of anonymous classes have automatically generated equality comparison logic based on the values of their fields, contrary to the <code>ReferenceEquals</code>-based implementation generated for typical named classes. Anonymous types are most frequently used in the context of comparisons, so this seems reasonable. </p>
<p>One more alternative is to use a structure instead of a class. But structures should only be used if their fields are value types, because only then can you benefit from binary comparison of their values. And even having <code>struct</code>s instead of <code>class</code>es requires implementing a custom <code>GetHashCode</code>. Without it, there is a risk that the auto-generated implementation 1) will use reflection or 2) will not be well distributed across <code>int</code>, leading to performance problems when adding to a <code>HashSet</code>. </p>
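<p>The same pitfall exists outside .NET. As a sketch, Python&#8217;s dictionary-based grouping relies on <code>__eq__</code> and <code>__hash__</code> in the same way LINQ&#8217;s <code>GroupBy</code> relies on <code>Equals</code> and <code>GetHashCode</code> (class and field names below are illustrative):</p>

```python
from collections import defaultdict

class PlainPair:
    """No equality members: identity-based, like a default C# class."""
    def __init__(self, code, centre):
        self.code, self.centre = code, centre

class ComparablePair:
    """Value-based equality, the analogue of overriding Equals/GetHashCode."""
    def __init__(self, code, centre):
        self.code, self.centre = code, centre
    def __eq__(self, other):
        return (self.code, self.centre) == (other.code, other.centre)
    def __hash__(self):
        return hash((self.code, self.centre))

def group(items):
    groups = defaultdict(list)
    for item in items:
        groups[item].append(item)  # the item itself is the grouping key
    return groups

plain = group([PlainPair("P1", 7), PlainPair("P1", 7)])
print(len(plain))   # 2 -- nothing was grouped

proper = group([ComparablePair("P1", 7), ComparablePair("P1", 7)])
print(len(proper))  # 1 -- equal values fall into one group
```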
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=396</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>When profiling a loop, look at entire execution time of that loop</title>
		<link>https://blog.pjsen.eu/?p=394</link>
					<comments>https://blog.pjsen.eu/?p=394#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 11 Nov 2016 12:22:50 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[General programming]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=394</guid>

					<description><![CDATA[Today&#8217;s piece of advice will not be backed by concrete examples of code. It will be just loose observations of some profiling session and related conclusions. Let&#8217;s assume there is a C# code doing some intense manipulations over in-memory stored data. The operations are done in a loop. If there is more data, there are]]></description>
										<content:encoded><![CDATA[<p>Today&#8217;s piece of advice will not be backed by concrete examples of code. It will be just loose observations of some profiling session and related conclusions. Let&#8217;s assume there is a C# code doing some intense manipulations over in-memory stored data. The operations are done in a loop. If there is more data, there are more iterations of the loop. The aim of the operations is to generate kind of summary of data previously retrieved from a database. It has been proved to work well in test environments. Suddenly, in production, my team noticed this piece of code takes about 20 minutes to execute. It turned out there were about 16 thousand iterations of the loop. It was impossible to test such high load with manual testing. The testing only confirmed correctness of the logic itself.</p>
<p>After an investigation and some experiments, it turned out that a bad data structure was to blame. The loop did some lookup-heavy operations over a <code>List&lt;T&gt;</code>, which are obviously <em>O(n)</em>, as the list is implemented over an array. Substituting <code>HashSet&lt;T&gt;</code> for <code>List&lt;T&gt;</code> caused a dramatic reduction of the execution time to a dozen or so seconds. This is not as surprising as it may seem, because <code>HashSet&lt;T&gt;</code> is implemented over a hashtable and has <em>O(1)</em> lookup time. These are data structure fundamentals taught in every decent computer science lecture. But the operation in the loop looked innocent, and the author did not try to anticipate the future load on the logic. A programmer should always keep in mind an estimate of the amount of data to be processed by their implementation and think of an appropriate data structure. By an appropriate data structure I mean a choice between fundamental structures like an array (vector), a linked list, a hashtable, or a binary tree. This could be the first important conclusion, but I encouraged my colleagues to perform profiling. The results were not so obvious.</p>
<p>Although measuring the timing of the entire loop with 16 thousand iterations showed clearly that the hashtable-based implementation performs orders of magnitude better, when we stepped over individual loop instructions there was almost no difference in their timings. The loop consisted of several <code>.Where</code> calls over the entire collection. These calls took around 10 milliseconds in both the <code>List&lt;T&gt;</code> and the <code>HashSet&lt;T&gt;</code> implementation. If we had not bothered to measure the entire loop execution, it would have been pretty easy to draw the conclusion that there is no difference! Even performing such step measurements during development can be misleading, because at first sight, does 10 milliseconds look suspicious? Of course not. Not only does it look unsuspicious, but it also works well. At least in a test environment with test data. </p>
<p>As I understand it, the timings might have been similar because we measured only some of the initial iterations of the loop. If the data we are searching for is at the beginning of the array, the lookup can obviously be fast; for some edge cases, even faster than hashing and looking up the corresponding list of slots in a hashtable. </p>
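<p>The effect is easy to reproduce in any language that offers both structures; here is a small JavaScript sketch (the collection size is arbitrary, and the timings are illustrative only, not from the original incident) of the <em>O(n)</em>-versus-<em>O(1)</em> lookup difference:</p>

```javascript
// Lookup-heavy loop over an array vs a Set. Each includes() scans the
// array (O(n)), so the whole loop is O(n^2); each has() is a hash
// lookup (O(1)), so the whole loop is O(n).
const n = 10000;
const data = Array.from({ length: n }, (_, i) => i);
const asArray = data;
const asSet = new Set(data);

function countHits(lookup) {
  let hits = 0;
  for (let i = 0; i < n; i++) {
    if (lookup(i)) hits++;
  }
  return hits;
}

console.time("array"); // grows quadratically with n
const arrayHits = countHits(x => asArray.includes(x));
console.timeEnd("array");

console.time("set");   // stays fast as n grows
const setHits = countHits(x => asSet.has(x));
console.timeEnd("set");
```

Both versions return the same result; only the shape of the total running time differs, which is exactly why per-iteration timings looked identical.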
<p>For me there are two important lessons learned here:</p>
<ul>
<li>Always think of data structures and amount of data to be processed.</li>
<li>When doing performance profiling, measure everything. Concentrate on amortized timings of the entire scope of the code in question; do not try to reason from individual subtotals.</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=394</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>My Visual Studio workflow when working on web applications</title>
		<link>https://blog.pjsen.eu/?p=392</link>
					<comments>https://blog.pjsen.eu/?p=392#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 28 Jul 2016 18:29:20 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=392</guid>

					<description><![CDATA[I use IIS Express for majority of my ASP.NET development. Generally I prefer not to restart IIS Express because it may take some time to load big application again. Unfortunately Visual Studio is eager to close IIS Express after you finish debugging. In past, there were some tips on how to prevent such behavior, for]]></description>
										<content:encoded><![CDATA[<p>I use IIS Express for majority of my ASP.NET development. Generally I prefer not to restart IIS Express because it may take some time to load big application again. Unfortunately Visual Studio is eager to close IIS Express after you finish debugging. In past, there were some tips on how to prevent such behavior, for instance <i>Edit and Continue</i> should be disabled, and then IIS Express stayed after having finished debugging. I observed after Visual Studio 2015 Update 2 this no longer works. But there is yet another way to preserve IIS Express. We must not <b>stop</b> debugging, but <b>detach</b> the debugger instead in order to accomplish this. I prefer creating custom toolbar button:</p>
<p>Right click on toolbar <span style='font-size: 14pt;'>&xrarr;</span> <i>Customize <span style='font-size: 14pt;'>&xrarr;</span> Commands tab <span style='font-size: 14pt;'>&xrarr;</span> Toolbar <span style='font-size: 14pt;'>&xrarr;</span> Debug <span style='font-size: 14pt;'>&xrarr;</span> Add command <span style='font-size: 14pt;'>&xrarr;</span> Debug <span style='font-size: 14pt;'>&xrarr;</span> Detach all</i></p>
<p>An important caveat: also use <i>Multiple startup projects</i>, configured in the solution properties. A web application typically consists of many projects, so it is convenient to run them all at once.</p>
<p>With IIS Express continuously running in the background, the next question is how to quickly start debugging again. Here, too, I prefer a custom toolbar button:</p>
<p>Right click on toolbar <span style='font-size: 14pt;'>&xrarr;</span> <i>Customize <span style='font-size: 14pt;'>&xrarr;</span> Commands tab <span style='font-size: 14pt;'>&xrarr;</span> Toolbar <span style='font-size: 14pt;'>&xrarr;</span> Standard <span style='font-size: 14pt;'>&xrarr;</span> Add command <span style='font-size: 14pt;'>&xrarr;</span> Debug <span style='font-size: 14pt;'>&xrarr;</span> Attach to process</i></p>
<p>Or even better, keyboard binding:</p>
<p><i>Tools <span style='font-size: 14pt;'>&xrarr;</span> Options <span style='font-size: 14pt;'>&xrarr;</span> Environment <span style='font-size: 14pt;'>&xrarr;</span> Keyboard <span style='font-size: 14pt;'>&xrarr;</span> Debug.AttachtoProcess</i></p>
<p>And now we face the most difficult part: choosing the correct iisexpress.exe process, as there will likely be a few of them. We can search for the PID manually by right-clicking the IIS Express tray icon <span style='font-size: 14pt;'>&xrarr;</span> <i>Show All Applications</i>, where we can view the PID. But this may drive you crazy, as you have to do it many times. I recommend a simple cmd script which invokes a PowerShell command to query WMI:</p>
<pre>
@echo off
powershell.exe -Command &quot;Get-WmiObject Win32_Process -Filter &quot;&quot;&quot;CommandLine LIKE '%%webapi%%' AND Name = 'iisexpress.exe'&quot;&quot;&quot; | Select ProcessId | ForEach { $_.ProcessId }&quot;
</pre>
<p>Here I am searching for the process whose path contains the &#8220;webapi&#8221; string. I have to use triple quotes and doubled percent signs because that is how cmd&#8217;s escaping works. The final pipe to <code>ForEach</code> is not necessary; it serves the purpose of formatting the output as a raw number rather than a fragment of a table. I always have a cmd window running, so I put this little script in a directory listed in the <code>PATH</code> variable, and I can view the desired PID instantaneously. By the way, Windows Management Instrumentation is a tremendously powerful interface for obtaining information about almost anything in the operating system.</p>
<p>Knowing the PID, you can jump down the process list by simply pressing the <i>i</i> key, and then you can visually pick out the instance with the relevant PID.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=392</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Entity object in EF is partially silently read-only</title>
		<link>https://blog.pjsen.eu/?p=390</link>
					<comments>https://blog.pjsen.eu/?p=390#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 23 Jun 2016 20:16:06 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=390</guid>

					<description><![CDATA[What this post is all about is the following program written in C# using Entity Framework 6.1.3 throwing at line 25 and not at line 23. We can see the simplest usage of Entity framework here. There is a Test class and a TestChild class which contains a reference to an instance of Test named]]></description>
										<content:encoded><![CDATA[<p>What this post is all about is the following program written in C# using Entity Framework 6.1.3 throwing at line 25 and not at line 23. </p>
<p>We can see the simplest usage of Entity Framework here. There is a <code>Test</code> class and a <code>TestChild</code> class which contains a reference to an instance of <code>Test</code> named <code>Parent</code>. This reference is marked as <code>virtual</code>, so Entity Framework is instructed to load the instance in a lazy manner, i.e. upon first use of the reference. In the DDL model, the <code>TestId</code> column is obviously a foreign key to the <code>Test</code> table. </p>
<p>I create an entity object, save it into the database, and then retrieve it at line 21. Because the class uses virtual properties, Entity Framework dynamically creates a custom type in order to implement the lazy behavior underneath.</p>
<p>Now let&#8217;s suppose I need to modify something in an object retrieved from the database. It turns out I can modify the <code>Value</code> property, which is a plain <code>string</code>. What is more, I can also assign to the <code>Parent</code> property, but&#8230; <strong>the modification is not preserved!</strong> The program throws at line 25 because the assignment at line 24 is silently ignored by the framework.</p>
<p>I have actually been trapped by this when I needed to modify some collection in a complicated object graph. I am deeply disappointed that Entity Framework on one hand allows modification of non-virtual properties, but on the other hand silently ignores writes to virtual ones. This can easily get a developer into trouble. </p>
<p>Of course, I am aware it is not good practice to work directly on objects of data model classes. I recovered from this situation with AutoMapper. But this is a quirk, and it makes even a skilled developer hesitate to modify anything returned by Entity Framework.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-06-23-cs1.cs"></script></p>
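<p>This is not EF&#8217;s actual mechanism, but the effect can be imitated with a JavaScript <code>Proxy</code> that silently swallows writes to one property while accepting writes to another; EF&#8217;s runtime-generated proxy subclass overrides the virtual properties in a similar spirit:</p>

```javascript
// Analogy only: a wrapper that makes an assignment look legal while
// discarding it -- the "partially silently read-only" effect.
const entity = { value: "v1", parent: { id: 1 } };

const proxied = new Proxy(entity, {
  set(target, prop, val) {
    if (prop === "parent") {
      return true; // report success but silently drop the write
    }
    target[prop] = val;
    return true;
  },
});

proxied.value = "v2";       // preserved, like the plain string property
proxied.parent = { id: 2 }; // silently ignored, like the virtual navigation property
// proxied.value === "v2", proxied.parent.id === 1
```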
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=390</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AngularJS custom validator, an excellent example of duck typing</title>
		<link>https://blog.pjsen.eu/?p=387</link>
					<comments>https://blog.pjsen.eu/?p=387#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 26 May 2016 16:37:52 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[JavaScript]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=387</guid>

					<description><![CDATA[A StackOverflow answer usually is not worth writing a blog post, but this time I am sharing something, which 1) is an invaluable idea for developers using AngularJS 2) may serve as a beginning of deeper theoretical considerations regarding types in programming languages. The link to before-mentioned SO: http://stackoverflow.com/questions/18900308/angularjs-dynamic-ng-pattern-validation I have created a codepen of]]></description>
										<content:encoded><![CDATA[<p>A StackOverflow answer usually is not worth writing a blog post, but this time I am sharing something, which 1) is an invaluable idea for developers using AngularJS 2) may serve as a beginning of deeper theoretical considerations regarding types in programming languages. </p>
<p>The link to the aforementioned StackOverflow answer:<br />
<a href="http://stackoverflow.com/questions/18900308/angularjs-dynamic-ng-pattern-validation">http://stackoverflow.com/questions/18900308/angularjs-dynamic-ng-pattern-validation</a></p>
<p>I have created a codepen of this case:</p>
<p>[codepen_embed height=&#8221;265&#8243; theme_id=&#8221;0&#8243; slug_hash=&#8221;wWwEdw&#8221; default_tab=&#8221;js&#8221; user=&#8221;przemsen&#8221;]See the Pen <a href='http://codepen.io/przemsen/pen/wWwEdw/'>AngularJS ng-pattern &#8212; an excellent example of duck typing</a> by Przemyslaw S. (<a href='http://codepen.io/przemsen'>@przemsen</a>) on <a href='http://codepen.io'>CodePen</a>.[/codepen_embed]</p>
<p></p>
<p>In the codepen you can see a JavaScript object set up as the value of the <code>ng-pattern</code> directive, which would typically be a regular expression. The point is that AngularJS&#8217;s check for a regular expression is simply a call to the <code>.test()</code> function. So we can just attach such a <code>test</code> function to an object and implement whatever custom validation logic we need. Here we can see the beauty of duck typing, allowing us to freely customize the behavior of the framework. </p>
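<p>The mechanism can be sketched outside Angular as well. Assuming a framework that only ever calls <code>.test()</code> on its pattern, any object carrying such a method will do (the even-length validator below is made up for illustration):</p>

```javascript
// The framework never asks "is this a RegExp?" -- it only calls
// .test(value), so any object with a test() method quacks like one.
const evenLengthValidator = {
  test(value) {
    return typeof value === "string" && value.length % 2 === 0;
  },
};

// Stand-in for the framework's validation call site:
function isValid(pattern, value) {
  return pattern.test(value);
}

isValid(evenLengthValidator, "ab");  // true  -- custom logic ran
isValid(evenLengthValidator, "abc"); // false
isValid(/^a/, "abc");                // true  -- a real RegExp works identically
```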
<p>I said this could be the beginning of a discussion on how to actually understand duck typing. It is not that obvious, and programmers tend to understand it intuitively rather than precisely, as it is not a term with any kind of formal definition. I recommend Eric Lippert&#8217;s blog post on the subject: <a href="https://ericlippert.com/2014/01/02/what-is-duck-typing/">https://ericlippert.com/2014/01/02/what-is-duck-typing/</a>.  </p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=387</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The horror of JavaScript Date</title>
		<link>https://blog.pjsen.eu/?p=384</link>
					<comments>https://blog.pjsen.eu/?p=384#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 27 Mar 2016 18:12:06 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[JavaScript]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=384</guid>

					<description><![CDATA[I have heard about difficulties of using JavaScript date APIs, but it was not until recently when I eventually experienced it myself. I am going to describe one particular phenomenon that can lead to wrong date values sent from client&#8217;s browser to a server. When analysing these examples please keep in mind they were executed]]></description>
										<content:encoded><![CDATA[<p>I have heard about difficulties of using JavaScript date APIs, but it was not until recently when I eventually experienced it myself. I am going to describe one particular phenomenon that can lead to wrong date values sent from client&#8217;s browser to a server. When analysing these examples please keep in mind they were executed on machine with UTC+01:00 time zone unless I explicitly tell that an example refers to a different time zone.</p>
<p>Let&#8217;s try to parse a JavaScript Date object:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js1.js"></script></p>
<p>What draws my attention is the time value. It is 01:00, which may look strange, but it is not if we correlate this value with the time zone information stored along with the JavaScript object. The time zone information is an inherent part of the object, and it comes from the browser, which in turn derives it from the operating system&#8217;s culture settings. It turns out these two pieces of information are essential when making AJAX calls, because that is when the <code>.toJSON()</code> method is called. I am making this assumption based on the observed behaviour of the <em>ngResource</em> library, but other frameworks and libraries probably do the same, because they must somehow convert a JavaScript Date object to a universal text format to be able to send it via HTTP. By the way, <code>.toJSON()</code> returns the same result as <code>.toISOString()</code>.  </p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js2.js"></script></p>
<p>What we have got here is a UTC-normalized date and time value. The time zone offset, applied to the actual time value, allows the date to be normalized when sending it to a server. The most important thing here is that the values stored in the Date object are expressed in the local time zone, i.e. the browser&#8217;s one. This has some strange consequences, such as the one of being in a UTC-xx:xx time zone. Let&#8217;s try the same example after setting the time zone to UTC-01:00.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js3.js"></script></p>
<p>The problem here is that we have ended up with parsed values which differ from their original textual representation, i.e. March 1st versus February 28th. But it is still OK, provided that our date processing logic relies on normalized values:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js4.js"></script></p>
<p>However, it can be misleading when we try to get individual date components. Here we try to get the day component:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js5.js"></script></p>
<p>But in general the object still can serve the purpose if we rely on normalized values and call appropriate methods.</p>
<p>The problem is that not all Date constructors behave in the same way. Let&#8217;s try this one:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js6.js"></script></p>
<p>Now the parsed date still contains time zone information derived from the operating system, but it does not contain a time value shifted by the corresponding offset. In the first example we had a 01:00 time which corresponds to GMT+01:00; here we have just a 00:00 time and, of course, we still have the GMT+01:00 time zone information. This time zone information without a correctly shifted time value is actually catastrophic. Look what happens when <code>.toJSON()</code> is called:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js7.js"></script></p>
<p>The result is a wrong date sent to the server. This is not the end of the observation: the same phenomenon can also happen the other way round, i.e. when we are transferring date values from a server to a client. Now let&#8217;s assume the server sent the following date and we are parsing it. Please keep in mind that the actual parsing may happen implicitly in some framework&#8217;s code, for instance when specifying a data source for a <em>Kendo</em> grid. So the one who parses it for us can be the framework&#8217;s code.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js8.js"></script></p>
<p>As we can see, this constructor results in a shifted time value, just like the <code>Date(&#8220;2015-03-01&#8221;)</code> one. But when considering displaying these retrieved values, we inevitably have to answer the question whether we aim at showing local time or server time. We have to remember that when the client&#8217;s browser is in a GMT-xx:xx time zone and we show the parsed value (as in the <code>c.getDate()</code> example), not the normalized one, this may result in a wrong date displayed to the user. I say &#8220;may&#8221;, because this can really be a desired behaviour, depending on the requirements. For example, in <em>Angular</em> we can enforce displaying the normalized value by providing the optional time zone parameter to <code>$filter('date')</code>.</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2016-03-27-js9.js"></script></p>
<p>Here we do not worry about the internal component values of the object <code>c</code>, whose prototype is Date. It may internally store February 28th, but that does not matter: <code>$filter</code> is told to output values for UTC time. It is also worth mentioning that this Date constructor assumes its argument is specified in UTC time. So we populate the Date object with a UTC value and also return a UTC value, not worrying about the internal representation, which is local. This approach results in the output date and time being equal to the intended input. </p>
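<p>The constructor asymmetry described above can be condensed into a short sketch (assuming an ES5-or-later engine, where a date-only ISO string is parsed as UTC; the concrete local values in the comments assume a UTC+01:00 machine, as in the examples):</p>

```javascript
// Two ways of constructing "March 1st" behave differently:
const fromIso = new Date("2015-03-01"); // date-only ISO string: UTC midnight
const fromParts = new Date(2015, 2, 1); // components: LOCAL midnight (months are 0-based)

// The ISO-constructed date serializes back to the intended UTC day everywhere:
fromIso.toJSON(); // "2015-03-01T00:00:00.000Z"

// The component-constructed date serializes to local midnight converted to
// UTC; on a UTC+01:00 machine that is "2015-02-28T23:00:00.000Z" -- the
// wrong date sent to the server. In any zone, the two objects differ by
// exactly the local UTC offset in effect at that date:
const offsetMs = fromParts.getTimezoneOffset() * 60 * 1000;
const differByOffset = fromParts.getTime() === fromIso.getTime() + offsetMs;
```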
<p>As a conclusion, I should write some recommendations on how to use the Date object and what to avoid. But honestly, I cannot say I have gained enough knowledge in this area to make any kind of guidelines; I can only afford some general statements. Pay attention to what your library components, for instance a date picker control, operate on. Is their output a Date object or a string representation of the date? What do you do with that output? Is their input a Date object, or a string representation which they parse on their own? Just examine carefully and do not blindly trust your framework. I personally do not accept a situation where something works and I do not know why it does. I tend to dig into the details and find the nitty-gritty. I have always believed that deep research (even when it takes much time) and understanding of the underlying technology are worthwhile, and I often recall the post by Scott Hanselman, <a href="http://www.hanselman.com/blog/StopThinkResearchDebug.aspx">who also appears to follow this principle.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=384</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>The bookmarks problem</title>
		<link>https://blog.pjsen.eu/?p=362</link>
					<comments>https://blog.pjsen.eu/?p=362#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 15 May 2015 17:30:39 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=362</guid>

					<description><![CDATA[I have been using Mozilla based web browsers since 2003. Back in the days, the application was called Mozilla Suite, then in 2004 the Firefox showed up using the same engine, but with completely new front end. I migrated my profile over the years many times, but I always kept bookmarks. Some of my bookmarks]]></description>
										<content:encoded><![CDATA[<p>I have been using Mozilla based web browsers since 2003. Back in the days, the application was called <a href="https://en.wikipedia.org/wiki/Mozilla_Application_Suite">Mozilla Suite</a>, then in 2004 the Firefox showed up using the same engine, but with completely new front end. I migrated my profile over the years many times, but I always kept bookmarks. Some of my bookmarks surely remember those early days before Firefox (yet, majority of the oldest are no longer valid, because sites were shut down). The total number of my browser bookmarks gathered over that time is over 1k. And this is `the problem`.</p>
<p>I have made several attempts to clean up and organise this huge collection. I have tried to remove dead entries and to group them in folders. I have tried using keywords and descriptions to be able to search more effectively. But with no success. Now I have about a dozen folders, but I still find myself in trouble when I need to search for a particular piece of information. The problem boils down to this: I absolutely remember what the site is about, I am absolutely sure I have it in my collection, but I cannot find it, because either it has some strange title or the words in its URL are meaningless (Firefox searches only within titles and URLs, because obviously that is all it can do).</p>
<p>I realized I need a tool which is much more powerful when it comes to bookmarks searching. I could not find anything to satisfy my requirements so I implemented it myself. Today I am introducing <strong>BookmarksBase</strong> which is an open source tool written in C# to solve this issue.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-378" src="http://blog.pjsen.eu/wp-content/uploads/2015/07/bookmarksbase_search.png" alt="BookmarksBase.Search" width="483" height="351" srcset="https://blog.pjsen.eu/wp-content/uploads/2015/07/bookmarksbase_search.png 483w, https://blog.pjsen.eu/wp-content/uploads/2015/07/bookmarksbase_search-300x218.png 300w" sizes="auto, (max-width: 483px) 100vw, 483px" /></p>
<p>BookmarksBase embraces a concept that may seem ridiculous: why don&#8217;t we pull all the textual contents from all the sites in our bookmarks? Do you think it is lots of data? How much would it be? Even if you were to sacrifice a few hundred megabytes in order to be able to search really effectively, isn&#8217;t it worth the space?</p>
<p>Well, it turns out it takes much less space than I originally expected, and the tool works surprisingly fast, although it is implemented in managed code without any notable optimizations. First we run a separate tool to collect the data (BookmarksBase Importer); downloading and parsing takes maybe a minute or two. The produced index file containing all the text from all the sites in my bookmarks, which I call <code>bookmarksbase.xml</code>, is only 12 MiB (for over 1000 bookmarks). Then we can run BookmarksBase Search, which performs the actual searching within contents, addresses and titles. Of course, once you have <code>bookmarksbase.xml</code> created, you can run whatever serves your purpose, e.g. <code>grep</code>, <code>findstr</code> (on Windows), or any decent text editor that can handle big amounts of text. I crafted the XML so that it is easily readable by a human: there are newlines, and the text is preserved in a nice fixed-width column (thanks to <a href="http://lynx.isc.org/">Lynx</a> &#8212; see the source for details).</p>
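<p>The core idea can be sketched in a few lines of JavaScript (the data and the function name are invented for illustration): store each bookmark&#8217;s page text next to its URL and title, and search all three, which is exactly the part the browser cannot do on its own:</p>

```javascript
// Toy version of the full-text bookmark index idea.
const bookmarks = [
  { url: "https://example.com/a", title: "Untitled", text: "notes on kerberos delegation setup" },
  { url: "https://example.com/b", title: "Cheatsheet", text: "css flexbox reference" },
];

function search(term) {
  const t = term.toLowerCase();
  return bookmarks.filter(b =>
    b.title.toLowerCase().includes(t) ||
    b.url.toLowerCase().includes(t) ||
    b.text.toLowerCase().includes(t) // searching page text: what browsers cannot offer
  );
}

search("kerberos").length; // 1 -- found despite the meaningless title
```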
<p>More details and download link are available on <a href="https://github.com/przemsen/BookmarksBase">GitHub</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=362</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PowerShell &#8212; my points of interest</title>
		<link>https://blog.pjsen.eu/?p=357</link>
					<comments>https://blog.pjsen.eu/?p=357#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 15 Feb 2015 09:56:57 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=357</guid>

					<description><![CDATA[I have never used PowerShell until quite recently. I successfully solved problems with bunch of other scripting languages e.g. Python, Perl, Bash, AWK. They all served the purpose really well and I did not feel like I need yet another scripting language. Furthermore, PowerShell looks nothing like any of those technologies that I am familiarized]]></description>
										<content:encoded><![CDATA[<p>I have never used PowerShell until quite recently. I successfully solved problems with bunch of other scripting languages e.g. Python, Perl, Bash, AWK. They all served the purpose really well and I did not feel like I need yet another scripting language. Furthermore, PowerShell looks nothing like any of those technologies that I am familiarized with, so I refused to start learning it many times. </p>
<p>However, when you work as a .NET developer, chances are that sooner or later you will come across a solution implemented in PowerShell. It could be, for instance, a deployment script that you will have to maintain. This happened to me a while ago. Although the modification I committed was relatively simple, and I figured it out rather quickly with a little help from Google, I decided to dig into the subject and check a few more things out. What I found after a bit of random research was quite impressive to me. I would like to share the three main features I have found so far and consider valuable in a scripting technology. At the bottom of this post I also put some code snippets for quick reference on how to accomplish particular tasks. </p>
<p><strong>1. Out-GridView</strong></p>
<p>In PowerShell you can manipulate the format of the output in many ways. You can generate HTML, CSV, whitespace-formatted text tables etc. But there is also an option to view the output of a command in a WPF grid with a built-in filter. Look at the effect of the <code>Get-Process | Out-GridView</code> command &#8212; this is functionality you get out of the box with just a few keystrokes!</p>
<p><div id="attachment_358" style="width: 511px" class="wp-caption aligncenter"><a href="http://blog.pjsen.eu/wp-content/uploads/2015/02/ogv.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-358" src="http://blog.pjsen.eu/wp-content/uploads/2015/02/ogv.png" alt="Out-GridView" width="501" height="323" class="size-full wp-image-358" srcset="https://blog.pjsen.eu/wp-content/uploads/2015/02/ogv.png 501w, https://blog.pjsen.eu/wp-content/uploads/2015/02/ogv-300x193.png 300w" sizes="auto, (max-width: 501px) 100vw, 501px" /></a><p id="caption-attachment-358" class="wp-caption-text">Out-GridView</p></div></p>
<p><strong>2. Embedding C# code</strong></p>
<p>This feature seems quite powerful. If you need more advanced techniques in your script you can basically implement them inline using C# and then just invoke them.</p>
<pre>
Add-Type @'
using System;
using System.IO;
using System.Text;
      
public static class Program
{
    public static void Main()
    {
        Console.WriteLine(&quot;Hello World!&quot;);
    }
}
'@
 
[Program]::Main()
</pre>
<p><strong>3. XML parsing done simply right</strong></p>
<p>Any time I had to do some XML parsing in my scripts using other languages, I always felt somewhat confused. This is not the sort of thing that you just recall from your head and type as code. You have to use specific APIs, and you have to call them in a specific way, in a specific order, etc. I do not mean it is complicated in any way; it is not, but in many languages it is cumbersome, and I always had to look things up in a cheat sheet. Not any more <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> From now on, I will always lean toward what is perhaps the simplest and best implementation of XML parsing:</p>
<pre class="brush:powershell">
$d = [xml] "<a><b>1</b><c>2</c></a>"
$d.a.b
</pre>
<p>This outputs <code>1</code>. Yes, it is as simple as that. You simply access member properties whose names match the XML nodes. </p>
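<p>A small follow-up sketch (the XML document below is made up for illustration): attributes are exposed the same way as child nodes, and node collections can be piped like any other collection:</p>
<pre>
$d = [xml] '<list><item id="1">apple</item><item id="2">pear</item></list>'
$d.list.item | ForEach-Object { "$($_.id): $($_.'#text')" }
# prints: 1: apple
#         2: pear
</pre>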
<p>I am sharing these features because I did not imagine a scripting language could offer something this powerful. And this is probably only the tip of the iceberg, as I have just scratched the surface of the PowerShell world. I also suggest checking out a little script I wrote to explore PowerShell functionality: <a href="https://github.com/przemsen/main/blob/master/misc/managesites.ps1">managesites.ps1</a>. It may be useful for ASP.NET developers &#8212; it lets you delete sites from the IIS Express config file. </p>
<p>Miscellaneous code snippets:</p>
<ul>
<li><code>if (test-path "c:\deploy"){ "aaa" } </code></li>
<li><code>$f="\file.txt";(gc $f) -replace "a","d" | out-file $f</code> &#8212; this one is particularly important, because the equivalent in-place editing functionality in the MinGW builds of Perl and sed seems not to work correctly</li>
<li><code>foreach ($line in [System.IO.File]::ReadLines($filename)){  } </code></li>
<li><code>-match <i>regex</i></code></li>
<li><code>( Invoke-WebRequest <i>URL</i> | select content | ft -autosize -wrap | out-string )</code></li>
<li><code>[reflection.assembly]::LoadWithPartialName("Microsoft.VisualBasic") | Out-Null<br />
$input = [Microsoft.VisualBasic.Interaction]::InputBox("Prompt", "Title", "Default", -1, -1);</code></li>
<li><code>foreach ($file in dir *.vhd) { } </code></li>
<li><code>Set-ExecutionPolicy unrestricted</code></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=357</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>You are billed for turned off Azure VMs as well</title>
		<link>https://blog.pjsen.eu/?p=355</link>
					<comments>https://blog.pjsen.eu/?p=355#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 25 Jan 2015 11:53:04 +0000</pubDate>
				<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=355</guid>

					<description><![CDATA[If you are new to Microsoft Azure you will barely guess that. When you shut down your virtual machine, compute hour counter counts just like when it is running and you have to pay for it as well. This &#8220;minor&#8221; detail is not explained in many official introductory documentation materials I have read. I have]]></description>
										<content:encoded><![CDATA[<p>If you are new to Microsoft Azure you will barely guess that. When you shut down your virtual machine, compute hour counter counts just like when it is running and you have to pay for it as well. This &#8220;minor&#8221; detail is not explained in many official introductory documentation materials I have read. I have realized that only because I am kind of person who likes to re-verify things over and over again and that is why I went to my account&#8217;s billing details. I had used my VM just for few days and each day only few hours and after that I saw nearly 200 compute hours in my bill.</p>
<p>Indeed, there are reasonable technical reasons why even a powered-off machine is billed. When you create a virtual machine you consume data center resources, and they have to remain allocated for you: an IP address, CPU cores, storage, etc. It does not matter whether the machine is running; these resources must still be reserved and ready for you.</p>
<p><u>The solution to this problem is to use <a href="http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/">Azure Powershell</a> to control your virtual machines. By default, the stop command also performs what is called deallocation, and then the payment counter stops.</u></p>
<p>Below I present quick reference of relevant commands.</p>
<ol>
<li>You need to &#8220;log in&#8221; to your Azure account from PowerShell. You do this either with <code>Import-AzurePublishSettingsFile filename</code> or with <code>Add-AzureAccount</code> commands. Use the former if you would like to use profile settings file downloaded from the portal, and use the latter if you prefer to just type Microsoft account credentials and have the shell store them for you. In both cases credentials are stored in <code>C:\Users\**name**\AppData\Roaming\Windows Azure Powershell</code>.</li>
<li>Use <code>Get-AzureSubscription</code> to list your subscriptions.</li>
<li>Use <code>Select-AzureSubscription -SubscriptionName **name**</code> to switch the shell to apply following commands to this subscription.</li>
<li>Use <code>Get-AzureVM</code> to list your virtual machines, their names and their states.</li>
<li>Use <code>Stop-AzureVM -ServiceName **name** -Name **name**</code> to shut down and <strong>deallocate</strong> a virtual machine.</li>
<li>Use <code>Start-AzureVM -ServiceName **name** -Name **name**</code> to power on a virtual machine.</li>
</ol>
<p>When you close the shell and open it again, you do not have to log in to your Microsoft account again, but before you can control virtual machines you have to select a subscription first.</p>
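<p>Putting the commands above together, a typical session might look as follows (the subscription, cloud service, and VM names are placeholders to be replaced with your own):</p>
<pre>
Add-AzureAccount                                      # sign in interactively
Select-AzureSubscription -SubscriptionName "MySubscription"
Get-AzureVM                                           # list VMs and their states
Stop-AzureVM -ServiceName "mycloudservice" -Name "myvm"
# the VM status should now read StoppedDeallocated rather than StoppedVM,
# which means the compute-hours counter has stopped
</pre>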
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=355</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>My configuration files</title>
		<link>https://blog.pjsen.eu/?p=307</link>
					<comments>https://blog.pjsen.eu/?p=307#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 18 Jan 2015 16:26:12 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=307</guid>

					<description><![CDATA[This post is mostly for my personal reference, as it is useful to have one, easily accessible place for quick lookups of configuration files for commonly used tools. There is also special link to this post: https://blog.pjsen.eu/conf .gitconfig https://github.com/przemsen/main/blob/master/configs/.gitconfig (raw) .bashrc https://github.com/przemsen/main/blob/master/configs/.bashrc (raw) .vimrc https://github.com/przemsen/main/blob/master/configs/.vimrc (raw) git-prompt.sh &#8212; Git Bash for Windows https://github.com/przemsen/main/blob/master/configs/git-prompt.sh (raw) Main]]></description>
										<content:encoded><![CDATA[<p>This post is mostly for my personal reference, as it is useful to have one, easily accessible place for quick lookups of configuration files for commonly used tools. There is also special link to this post: <a href="https://blog.pjsen.eu/conf">https://blog.pjsen.eu/conf</a></p>
<hr />
<h2>.gitconfig</h2>
<p><a title="" href="https://github.com/przemsen/main/blob/master/configs/.gitconfig">https://github.com/przemsen/main/blob/master/configs/.gitconfig</a> (<a title="" href="https://raw.githubusercontent.com/przemsen/main/master/configs/.gitconfig">raw</a>)</p>
<h2>.bashrc</h2>
<p><a title="" href="https://github.com/przemsen/main/blob/master/configs/.bashrc">https://github.com/przemsen/main/blob/master/configs/.bashrc</a> (<a title="" href="https://raw.githubusercontent.com/przemsen/main/master/configs/.bashrc">raw</a>)</p>
<h2>.vimrc</h2>
<p><a title="" href="https://github.com/przemsen/main/blob/master/configs/.vimrc">https://github.com/przemsen/main/blob/master/configs/.vimrc</a> (<a title="" href="https://raw.githubusercontent.com/przemsen/main/master/configs/.vimrc">raw</a>)</p>
<h2>git-prompt.sh &#8212; Git Bash for Windows</h2>
<p><a title="" href="https://github.com/przemsen/main/blob/master/configs/git-prompt.sh">https://github.com/przemsen/main/blob/master/configs/git-prompt.sh</a> (<a title="" href="https://raw.githubusercontent.com/przemsen/main/master/configs/.git-prompt.sh">raw</a>)</p>
<hr />
<h2>Main GitHub repository</h2>
<p><a title="" href="https://github.com/przemsen/main">https://github.com/przemsen/main</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=307</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>My GitHub + first simple project published</title>
		<link>https://blog.pjsen.eu/?p=288</link>
					<comments>https://blog.pjsen.eu/?p=288#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 06 Jan 2015 15:48:51 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=288</guid>

					<description><![CDATA[I have eventually set up my GitHub account and published some of my code. The URL of the account is: https://github.com/przemsen. And the first and very basic project is: https://github.com/przemsen/WebThermometer. WebThermometer is a WPF application to be used as a desktop gadget. It repeatedly downloads (default is 5 min. interval) current temperature from arbitrary web]]></description>
										<content:encoded><![CDATA[<p>I have eventually set up my GitHub account and published some of my code. The URL of the account is:</p>
<p><a href="https://github.com/przemsen">https://github.com/przemsen</a>.</p>
<p>And the first and very basic project is:</p>
<p><a href="https://github.com/przemsen/WebThermometer">https://github.com/przemsen/WebThermometer</a>.</p>
<p><div id="attachment_294" style="width: 217px" class="wp-caption aligncenter"><a href="https://blog.pjsen.eu/wp-content/uploads/2015/01/webtherm.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-294" class="size-full wp-image-294" src="https://blog.pjsen.eu/wp-content/uploads/2015/01/webtherm.png" alt="WebThermometer" width="207" height="151" /></a><p id="caption-attachment-294" class="wp-caption-text">WebThermometer</p></div></p>
<p>WebThermometer is a WPF application to be used as a desktop gadget. It repeatedly downloads the current temperature from an arbitrary web site (the default interval is 5 minutes) and displays it. I personally find it useful, as I like to observe current weather conditions right from my computer. I tried to write it in a way that makes it easy to modify for use with other data sources. You can also download an already compiled, ready-to-run version from <a href="http://pjs.blox.pl/2015/01/Internetowy-termometr-dla-Warszawy.html">my Polish blog</a>.</p>
<p>My plan is to successively select some of my projects and code snippets which, in my opinion, are or will be valuable to show and demonstrate. You can freely modify and recompile all of the published code, provided that you state it was originally authored by me.</p>
<p>PS. Today the auto-updating mechanism of my WordPress installation failed (apparently this sometimes happens) and I ended up with the entire installation damaged. I restored it from a backup and apologize for losing a few comments from the last 2 months. </p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=288</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The basics do matter</title>
		<link>https://blog.pjsen.eu/?p=267</link>
					<comments>https://blog.pjsen.eu/?p=267#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 05 Sep 2014 07:13:36 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=267</guid>

					<description><![CDATA[Recently I have spotted the following method in the large C# code base: This code works and does what it supposed to do. However, I had a slight inconvenience while debugging it. I tend to frequently use Visual Studio DEBUG->Exceptions->CLR Exceptions Thrown (check this out!) functionality which to me is invaluable tool for diagnosing actual]]></description>
										<content:encoded><![CDATA[<p>Recently I have spotted the following method in the large C# code base:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2014-09-05-cs1.cs"></script></p>
<p>This code works and does what it is supposed to do. However, I ran into a slight inconvenience while debugging it. I tend to frequently use the Visual Studio <code>DEBUG->Exceptions->CLR Exceptions Thrown</code> (check this out!) functionality, which to me is an invaluable tool for diagnosing the actual source of an exception. The code base relied heavily on this very <code>ConvToInt</code> method, so it generated lots of exceptions and caused Visual Studio to break into the debugger over and over again. I then had to disable <code>CLR Exceptions Thrown</code> to protect myself from being hit by flying exceptions all the time. Having switched it off, I ended up with somewhat incomplete diagnostic capabilities. It is bad either way. So what I did was a basically simple refactoring:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2014-09-05-cs2.cs"></script></p>
<p>This method also works. One can even argue this code performs better, because throwing exceptions is considered slow, and that is correct. Performance was not a key factor here (for line-of-business applications it rarely is), but I measured it anyway. I ran both methods in a <code>for</code> loop <code>5</code> million times in release mode, wrapped in appropriate calls to the methods of the <code>Stopwatch</code> class. The results are surely not surprising. For valid string representations of a number, the former method (i.e. the one using <code>System.Convert</code>) gave an average result of</p>
<p><code>663 milliseconds</code></p>
<p>and the latter (i.e. the one using <code>TryParse</code>) gave an average result of</p>
<p><code>642 milliseconds</code></p>
<p>We can safely assume both methods have the same performance in this case. Then I ran the test with an invalid string representation of a number (i.e. passing &#8220;x&#8221; as the argument). Now the <code>TryParse</code> version gave an average result of:</p>
<p><code>546 milliseconds</code></p>
<p>And the <code>System.Convert</code> version, which indeed repeatedly threw exceptions, gave the (not averaged, I ran this once) result of</p>
<p><code>233739 milliseconds</code></p>
<p>That is a huge difference of almost three orders of magnitude. I was then fairly convinced my small and undoubtedly unimpressive refactoring was right and justified. <span style="color:red;">Except that it is not correct</span>. It had worked well and had been successfully tested. But after a few weeks, when a different set of use cases was being tested, the application called <code>ConvToInt</code> with <code>-1</code> as the second argument. It turned out that the method returned <code>0</code>, not <code>-1</code>, for invalid string representations of a number. What I want to convey here is:</p>
<blockquote><p><code>TryParse</code> sets its out argument to <code>0</code>, even if it returns false and did not successfully convert a string value to a number.</p></blockquote>
<p>I scanned the code base and found this pattern a few times. Apparently I was not the only programmer unaware of this little fact about the <code>TryParse</code> method. Of course, it is well documented (<a title="" href="http://msdn.microsoft.com/en-us/library/f02979c7.aspx">http://msdn.microsoft.com/en-us/library/f02979c7.aspx</a>). To me, the problem with this particular API seems even more serious. <code>0</code> is probably the most frequently chosen fallback value when string conversion fails in general. In a construct like the one above, however, it comes from <code>TryParse</code> itself, despite the fact that a default is provided by the caller and, more importantly, is primarily expected to be used as the fallback number in case of failure. One can easily get into trouble expecting (and passing as an argument) a different default value, e.g. <code>-1</code>, and still receiving <code>0</code>, because <code>TryParse</code> works this way by definition. Obviously the solution here is to add an <code>if</code> statement:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2014-09-05-cs3.cs"></script></p>
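<p>A minimal sketch of such a guarded conversion (reconstructed from the description above; the actual gist may differ in details):</p>
<pre>
public static int ConvToInt(string s, int defaultValue)
{
    int result;
    // TryParse resets result to 0 on failure, so the caller's
    // default has to be restored explicitly
    if (!int.TryParse(s, out result))
    {
        result = defaultValue;
    }
    return result;
}
</pre>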
<p>The performance does not get significantly worse because of this one conditional statement; I measured it, and it is roughly the same.</p>
<p>The lessons learned here:</p>
<ul>
<li>Exceptions actually <strong>ARE EXPENSIVE</strong>. This is <strong>NOT</strong> a myth.</li>
<li>Do not rely on the value passed as the <code>out</code> argument to the <code>TryParse</code> method in case of a failure. Always back yourself up with an <code>if</code> statement and check for the failure.</li>
<li>A more general one: learn the APIs, go to the documentation, and do not simply assume you know what a method does, even when it comes to the basics. A descriptive and somewhat verbose method name can still turn evil when run under edge cases. Always be 100% sure about the contract between the API authors and you, i.e. what the API <strong>actually</strong> does.</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=267</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Kdiff3 as Git mergetool and &#8211;auto pitfall</title>
		<link>https://blog.pjsen.eu/?p=221</link>
					<comments>https://blog.pjsen.eu/?p=221#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 02 May 2014 11:09:41 +0000</pubDate>
				<category><![CDATA[Quick tip]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=221</guid>

					<description><![CDATA[In Git, when merging you can sometimes observe the behavior when a conflict has been somehow auto-magically solved without any user interaction at all and without displaying resolution tool window. I have experienced this with commonly used Kdiff3 tool set up as the default mergetool. It turns out that Git has hardcoded --auto option when]]></description>
										<content:encoded><![CDATA[<p>In Git, when merging you can sometimes observe the behavior when a conflict has been somehow auto-magically solved without any user interaction at all and without displaying resolution tool window. I have experienced this with commonly used Kdiff3 tool set up as the default mergetool. It turns out that Git has hardcoded <code>--auto</code> option when invoking Kdiff3. This option instructs Kdiff3 to perform merge automatically and silently and display the window only if it is not able to figure out conflict resolutions itself. My understanding is, that it is intended solely for trivial cases and most of the time the window is displayed anyway. However, I am writing this post obviously because such &#8220;feature&#8221; once has got me into trouble &#8212; the tool fixed conflict, but should not have and, of course, did it wrong. In my opinion it is undoubtedly better to always make the decision on your own, instead of relying on some not well known logic of the tool.</p>
<p>The solution is to define a custom tool in <code>.gitconfig</code> with the Kdiff3 executable and custom command-line parameters. Here is the configuration I use on Windows:</p>
<pre class="brush:plain">
[merge]
    tool = kdiff3NoAuto
    conflictstyle = diff3

[mergetool "kdiff3NoAuto"]
    cmd = C:/Program\\ Files\\ \\(x86\\)/KDiff3/kdiff3.exe --L1 \"$MERGED (Base)\" --L2 \"$MERGED (Local)\" --L3 \"$MERGED (Remote)\" -o \"$MERGED\" \"$BASE\" \"$LOCAL\" \"$REMOTE\"
</pre>
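<p>With the above in place, conflicts are resolved by simply running the standard command, with no Kdiff3-specific flags needed on the command line:</p>
<pre class="brush:plain">
git merge somebranch      # produces conflicts
git mergetool             # opens Kdiff3 for each conflicted file, without --auto
</pre>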
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=221</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Enabling the net.tcp protocol in WCF running on top of IIS &#8212; the checklist</title>
		<link>https://blog.pjsen.eu/?p=237</link>
					<comments>https://blog.pjsen.eu/?p=237#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 23 Mar 2014 15:32:54 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[ASP.NET]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=237</guid>

					<description><![CDATA[Windows Communication Foundation is becoming sort of obsolete in favor of ASP.NET Web API, which has been advertised as primary technology for building web services. However, the latter obviously cannot serve as a full equivalent of the former. WCF still is a powerful technology for enterprise class service oriented architecture. For that purposes, the decision]]></description>
										<content:encoded><![CDATA[<p>Windows Communication Foundation is becoming sort of obsolete in favor of ASP.NET Web API, which has been advertised as primary technology for building web services. However, the latter obviously cannot serve as a full equivalent of the former. WCF still is a powerful technology for enterprise class service oriented architecture. For that purposes, the decision of switching the transport protocol from http to net.tcp sooner or later must be made clearly for performance reasons. From my experience I can tell having 100% working configuration of a service hosted inside the IIS is surprisingly hard and a developer has to face a series of quirks to bring the services back to life. Let&#8217;s sum up all the activities that can help make a service working with the net.tcp.</p>
<ol>
<li>The WCF Non-HTTP Activation service must be installed via the Add/Remove Programs applet of the Control Panel. It is not obvious that this is a component of the operating system itself, not of IIS.</li>
<li>The TCP listener processes must be running. Check <code>netstat -a</code> to see if there is a process listening on the port of your choice (the default is 808), then check the following system services and start them if need be: <code>Net.Tcp Listener Adapter</code> and <code>Net.Tcp Port Sharing Service</code>. I have observed cases where these services were unexpectedly shut down, e.g. after a restart of the operating system.</li>
<li>IIS management: the application must have the net.tcp protocol enabled in its properties, and the site must have bindings for that protocol configured. If you have a large number of services, you can use my simplistic C# program which parses the IIS global configuration file &#8212; <code>applicationHost.config</code>. <a href="https://onedrive.live.com/?cid=495e1d8a1f5853ec&#038;id=495E1D8A1F5853EC!109#cid=495E1D8A1F5853EC&#038;id=495E1D8A1F5853EC!2424" title="My OneDrive">Link to my OneDrive</a> </li>
<li>If this is first try of enabling the net.tcp protocol, run the following tools to ensure the components of the .NET Framework are correctly set up: <code>c:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i</code> and <code>c:\Windows\Microsoft.NET\Framework\v4.0.30319\servicemodelreg.exe -ia</code>. Use <code>Framework64</code> for 64 bit system.</li>
<li>Make sure that you are <strong>not</strong> running on the default endpoint configuration. The default configuration can be recognized in the WSDL of the service: it contains the <code>&lt;msf:protectionlevel&gt;EncryptAndSign&lt;/msf:protectionlevel&gt;</code> element, which is responsible for the default authorization settings. These defaults manifest themselves in a strange symptom: the service works in the WCF Test Client but not in the target client application. It is caused by the Test Client having successfully recognized the default binding configuration from the WSDL, whereas the target application uses your custom configuration, and it is very likely these two do not match.</li>
<li>Check for the equality of the service names in an <code>.svc</code> file and in the <code>Web.config</code>  file (assuming that declarative binding is used instead of programmatically created one) in section <code>&lt;system.serviceModel&gt; -&gt; &lt;services&gt; -&gt; &lt;service&gt;</code></li>
<li>Make sure that IIS has created a virtual directory for the application. Create it from Visual Studio by pressing appropriate button in the application&#8217;s properties.</li>
</ol>
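<p>Point 3 can also be scripted. As a sketch (the site and application names below are examples; adjust them to your setup), the net.tcp binding and the enabled protocols can be set with <code>appcmd</code>:</p>
<pre class="brush:plain">
%windir%\system32\inetsrv\appcmd set site "Default Web Site" -+bindings.[protocol='net.tcp',bindingInformation='808:*']
%windir%\system32\inetsrv\appcmd set app "Default Web Site/MyService" /enabledProtocols:http,net.tcp
</pre>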
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=237</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>One thing cmd.exe is better at than *nix shells (with default configuration)</title>
		<link>https://blog.pjsen.eu/?p=215</link>
					<comments>https://blog.pjsen.eu/?p=215#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sat, 01 Feb 2014 17:06:37 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[Quick tip]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=215</guid>

					<description><![CDATA[I know the statement might be considered controversial. I even encourage you to try to prove me wrong, because I wish I knew better solution. In my every day work I tend to use command prompt a lot. I have both cmd.exe and bash (from Git for Windows) opened all the time. My typical environment]]></description>
										<content:encoded><![CDATA[<p>I know the statement might be considered controversial. I even encourage you to try to prove me wrong, because I wish I knew better solution.</p>
<p>In my everyday work I tend to use the command prompt a lot. I have both <code>cmd.exe</code> and <code>bash</code> (from Git for Windows) open all the time. My typical environment comprises numerous directories, I mean more than one hundred. Many of them share parts of their names. The names are long, dozens of characters, and I have to move between them over and over again. The problem is that it is not feasible to type a longish directory name manually many times.</p>
<p>Now, let&#8217;s suppose we have the following (shortened, of course) subdirectories structure of a directory which we are in at the moment:</p>
<pre>..
aa
bbaa
ccaadd
eeaaff</pre>
<p>When I would like to change the directory to <code>bbaa</code> in cmd.exe, I type <code>cd *aa* &lt;Tab&gt; &lt;Tab&gt;</code>, get the second auto-completion result, press <code>&lt;Enter&gt;</code>, and I have moved to <code>bbaa</code>. If I press <code>&lt;Tab&gt;</code> three times, I get <code>ccaadd</code>; four times, <code>eeaaff</code>. This feature is brilliant. The auto-completion works with wildcards and matches <strong>not</strong> only the beginnings of names. Last but not least, it <strong>allows cycling through the suggestions while editing a command in-line</strong>.</p>
<p>The most important part here is: <strong>not only beginning of a name (which is, as far as I know, the behavior of a typical Unix shell) AND also ability to have the suggestions inserted in place, not only displayed them below the command prompt</strong>.</p>
<p>A Unix shell also matches wildcards, but it only displays the matched names. It does not offer (or I am not aware of one) a way to instantaneously pass a matched name to a command. It only lists relevant suggestions, and the user then has to manually re-edit the command so that it has the desired argument. <code>cmd.exe</code> is better in that it allows the user to cycle through suggestions while editing the command argument in-line, which is great when it comes to long names of which only some parts can conveniently be memorized by a human.</p>
<p>I propose the following function which could be appended to <code>.bashrc</code>.</p>
<pre class="brush:shell">function cdg() {
    ls -d */ | grep -i "$1" | awk '{printf("%d : %s\n", NR, $0)}'
    read choice
    if [ "$choice" != "0" ]; then
        cd "$(ls -d */ | grep -i "$1" | awk "NR==$choice")"
    fi
}</pre>
<p>It is a simplistic function that filters the <code>ls</code> results with <code>grep</code>, numbers them with <code>awk</code>, and finally picks one of them and calls <code>cd</code>.</p>
<p>Now we can type <code>cdg aa</code> and we get all possible choices:</p>
<pre>1 : aa/
2 : bbaa/
3 : ccaadd/
4 : eeaaff/</pre>
<p>We simply type the number and we are moved to the desired directory, without the need to manually re-enter the <code>cd</code> command with the proper argument. Obviously, in <code>cmd.exe</code> we get this nice auto-completion for every command typed into the interpreter, while my solution only covers the change-directory use case in <code>bash</code>.</p>
<p>2014.02.02 UPDATE 1: After some deeper research, it turned out that the behavior of <code>cmd.exe</code> can be achieved in <code>bash</code> as well. The following line should be included into <code>.bashrc</code>:</p>
<pre class="brush:shell">bind '"\t":menu-complete'</pre>
<p>However, my solution still serves a purpose, as it uses <code>grep -i</code>, which makes the matching case-insensitive and thus keeps it useful.</p>
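<p>For completeness, GNU Readline can also be made case-insensitive directly. These are standard Readline settings rather than anything Git-Bash-specific, so they should work in any Bash; the first line can equivalently be written as <code>set completion-ignore-case on</code> in <code>~/.inputrc</code>:</p>
<pre class="brush:shell">
bind 'set completion-ignore-case on'   # match regardless of case
bind '"\t":menu-complete'              # cycle through matches in place, as in cmd.exe
</pre>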
<p>2014.02.06 UPDATE 2: I have found a second reason my solution is still relevant: it is much faster than pressing <code>&lt;tab&gt;</code> and waiting for the shell to suggest names. This can be observed in environments with more than a few directories, where MinGW tooling tends to be slow in general. </p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=215</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The limit of 260 characters path length and mysterious error message</title>
		<link>https://blog.pjsen.eu/?p=176</link>
					<comments>https://blog.pjsen.eu/?p=176#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Thu, 02 Jan 2014 12:04:00 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=176</guid>

					<description><![CDATA[If you work on small and relatively simple projects (in terms of number of components) you may not encounter this limitation. But in any non trivial &#8216;line of business&#8217; application it is very likely that sooner or later you will come across this troublesome problem: Visual Studio refuses to open a project when the length]]></description>
										<content:encoded><![CDATA[<p>If you work on small and relatively simple projects (in terms of number of components) you may not encounter this limitation. But in any non trivial &#8216;line of business&#8217; application it is very likely that sooner or later you will come across this troublesome problem: Visual Studio refuses to open a project when the length of its (or any of its references) file system path is longer than 260 characters. The issue is more serious as it seems to be because of its <strong>manifestation in somewhat cryptic error message</strong> (I suppose different error messages caused by this problem may be spotted in the wild as well).</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-177 aligncenter" alt="error" src="http://blog.pjsen.eu/wp-content/uploads/2013/12/error.png" width="481" height="202" srcset="https://blog.pjsen.eu/wp-content/uploads/2013/12/error.png 481w, https://blog.pjsen.eu/wp-content/uploads/2013/12/error-300x125.png 300w" sizes="auto, (max-width: 481px) 100vw, 481px" /></p>
<p>The error message gives absolutely no clue as to what the real problem is. After some research I was aware of the existence of this limitation, but I could not believe a path issue could end up in such an error message. Eventually, I decided to conduct an in-depth investigation with some advanced debugging tools to confirm the root cause of the problem. I followed the great advice from the <a href="http://technet.microsoft.com/en-us/sysinternals/bb963887.aspx" title="Case of the Unexplained">Case of the Unexplained</a> series by Mark Russinovich: <em>when in doubt, run Process Monitor</em>. The picture below shows the file system activity of the <code>devenv.exe</code> process when it is opening a solution containing the suspicious projects. </p>
<p><a href="http://blog.pjsen.eu/wp-content/uploads/2013/12/pm.png"><img loading="lazy" decoding="async" src="http://blog.pjsen.eu/wp-content/uploads/2013/12/pm-300x163.png" alt="pm" width="300" height="163" class="aligncenter size-medium wp-image-180" srcset="https://blog.pjsen.eu/wp-content/uploads/2013/12/pm-300x163.png 300w, https://blog.pjsen.eu/wp-content/uploads/2013/12/pm-1024x556.png 1024w, https://blog.pjsen.eu/wp-content/uploads/2013/12/pm.png 1598w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a></p>
<ul>
<li>We can see the querying directory event and part of its results.</li>
<li>The results comprise file names in alphabetical order. Visual Studio should open these files and load the corresponding project references, so <strong>all</strong> of them are expected to take part in the subsequent file system events.</li>
<li>File names in <span style="color:#007e0f;"><strong>green</strong></span> frames are both in the result list of querying directory and are then opened by the <code>devenv.exe</code> process. As expected. </li>
<li>File names in <span style="color:#fe0201;"><strong>red</strong></span> frames are in the result list of querying directory, <strong>but are missing from the subsequent file system activity events (second window in the background)</strong>. And this causes the problem.</li>
<li>All files that are missing have path length longer than 260 characters.</li>
<li>All files that are correctly opened by <code>devenv.exe</code> and loaded into the project, and whose file system events are visible in Process Monitor, have path lengths shorter than 260 characters. Obviously, the example shows only some of them, but I analyzed them all to draw the conclusion.</li>
</ul>
<p>This proves that project references pointing to dependencies with path lengths longer than 260 characters were not loaded, which prevented the whole project from being loaded properly. After moving the solution to the root directory of the drive (a single letter, e.g. C, is indeed the shortest directory name possible) the problem was solved.</p>
<p>To wrap up:</p>
<ol>
<li>Be aware that the <a href="http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx">260-character limit for paths does exist in the Windows operating system</a>.</li>
<li>The observable symptoms of hitting this limitation probably will not help you solve the problem.</li>
<li>In most cases the solution is to rename one or more directories or to create a symbolic link in the root directory of the drive.</li>
<li>The problem described above occurred under Windows 7 SP1 64-bit and Visual Studio 2012 Professional.</li>
<li>Do not hesitate to use <a href="http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx" title="Process Monitor">Process Monitor</a>. It is an incredibly powerful tool when it comes to solving a wide range of operating system problems.</li>
</ol>
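<p>As a quick sanity check before opening a solution, you can scan the tree for offending paths yourself. Here is a minimal Python sketch (a hypothetical helper, not related to any tool mentioned above):</p>

```python
import os

MAX_PATH = 260  # classic Windows limit, including drive letter and separators

def find_long_paths(root, limit=MAX_PATH):
    """Return absolute file paths under `root` that exceed `limit` characters."""
    offenders = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.abspath(os.path.join(dirpath, name))
            if len(full) > limit:
                offenders.append(full)
    return offenders

if __name__ == "__main__":
    # Run from the solution directory to list every file over the limit.
    for path in find_long_paths("."):
        print(len(path), path)
```

<p>Running it from the solution directory prints the length and full path of every file that would trip the limit.</p>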
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=176</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
			</item>
		<item>
		<title>Python script executed by cron crashes when printing Unicode string</title>
		<link>https://blog.pjsen.eu/?p=156</link>
					<comments>https://blog.pjsen.eu/?p=156#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 27 Dec 2013 16:34:59 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=156</guid>

					<description><![CDATA[I have written a little Python script for my personal purposes and scheduled it to be run on Raspberry Pi by cron. After some polishing work I was pretty sure it worked well and was successfully tested. What I mean by to be tested is to be executed manually from the shell and to observe the]]></description>
										<content:encoded><![CDATA[<p>I have written a little Python script for my personal purposes and scheduled it to be run on a Raspberry Pi by cron. After some polishing work I was pretty sure it worked well and was successfully tested. What I mean by <em>tested</em> is executed manually from the shell while observing the expected results. So far so good.</p>
<p>However, when the script was run by cron, it failed at the line where it <strong>prints a string containing Unicode characters</strong>. The line executes normally when the script is run from the shell. I suspected there was some issue with the standard output of processes run by cron, because in that case there is no meaningful notion of standard output.</p>
<p>As it turns out, programs executed by cron have no terminal attached to their standard output. According to <a href="http://unix.stackexchange.com/questions/105058/bash-if-script-is-called-from-terminal-echo-stdout-to-terminal-if-from-cron-do">this Stack Exchange post</a> the standard output is sent to the system&#8217;s mail system and delivered to the user this way. One can easily verify this by running the <code>tty</code> command from cron and redirecting the output to a file. Something similar to <code>this is not a terminal</code> (message translated directly from my system with Polish locale) should be observed.</p>
<p>The further explanation goes as follows: if there is no terminal attached to the process, the Python interpreter cannot detect the encoding of the terminal (this has nothing to do with the system&#8217;s locale environment variables; they are global and part of the process&#8217;s environment, but they do not change the fact that a non-terminal device is attached to standard output). You can verify this by running a Python script that tries to print the terminal&#8217;s encoding: <code>print sys.stdout.encoding</code>. <code>None</code> will be observed. So the interpreter falls back to the ASCII encoding and crashes when printing Unicode characters.</p>
<p>The solution in this case was to force the interpreter to use a UTF-8 encoder for standard output.</p>
<pre class="brush:py">import codecs
import sys

UTF8Writer = codecs.getwriter('utf8')
sys.stdout = UTF8Writer(sys.stdout)</pre>
<p>The output is discarded anyway, but these lines prevent the interpreter from using the default ASCII encoder, which is not appropriate for printing Unicode strings.</p>
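<p>For completeness: under Python 3 the interpreter handles this situation differently (it consults the locale and <code>PYTHONIOENCODING</code>), but the same idea can be expressed with <code>io.TextIOWrapper</code>. A sketch, assuming UTF-8 is the desired output encoding:</p>

```python
import io
import sys

def utf8_writer(binary_stream=None):
    """Wrap a binary stream (by default sys.stdout's underlying buffer)
    in a UTF-8 text writer, independent of any attached terminal."""
    raw = binary_stream if binary_stream is not None else sys.stdout.buffer
    return io.TextIOWrapper(raw, encoding="utf-8")

# Typical use at the top of a cron-executed script:
# sys.stdout = utf8_writer()
```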
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=156</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Unix shell scripting &#8212; code smells</title>
		<link>https://blog.pjsen.eu/?p=143</link>
					<comments>https://blog.pjsen.eu/?p=143#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 15 Nov 2013 11:41:21 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=143</guid>

					<description><![CDATA[I develop and maintain a bunch of bash shell scripts for my Raspberry Pi (e.g. downloading of files from list of urls, monitoring physical gpio power off switch etc.). I have to admit I did not pay close attention to designing them perfectly, I just wanted to get the job done. However, even in such]]></description>
										<content:encoded><![CDATA[<p>I develop and maintain a bunch of bash shell scripts for my Raspberry Pi (e.g. downloading files from a list of URLs, monitoring a physical GPIO power-off switch etc.). I have to admit I did not pay close attention to designing them perfectly, I just wanted to get the job done. However, even in such simple cases I have experienced a kind of <a title="technical debt" href="http://en.wikipedia.org/wiki/Technical_debt">technical debt</a> myself when something suddenly went wrong. It is <strong>always</strong> good to follow good design principles, which here means avoiding unnecessary dependencies between software modules. Even in such simple programs.</p>
<p>These are my quick observations of what to do and what not to do when writing either shell scripts or interpreter scripts in a Unix/Linux environment.</p>
<ul>
<li><strong>DO NOT USE THE DOT CHARACTER in the script&#8217;s code</strong>. Use variables, assign them a value once and then refer to them in the code. Relying on the external assumption that the script will be executed in a certain directory is bad and <strong>will</strong> hurt. It is extremely likely you will forget this (in fact, unnecessary) requirement when executing the script elsewhere, e.g. from cron.</li>
<li>If you absolutely have to refer to a file located in the same directory as the script itself, <strong>consider writing another script</strong> whose sole purpose is to <strong>change the directory</strong> to meet the target script&#8217;s expectations <strong>and then immediately execute it</strong>. Although it would be strange to write a shell script to execute another shell script, this technique can be used with other interpreted languages, e.g. Python or Perl scripts.</li>
<li><strong>DO NOT USE THE TILDE CHARACTER in the script&#8217;s code</strong>. Its expansion rules are not obvious. For instance, it is expanded at the beginning of a word, but not in the middle. That also means it is very easy to forget the rules and end up with a tilde character that is not expanded where you expect it to be. Use the value of the <code>HOME</code> variable instead and expand it in one of the traditional ways.</li>
</ul>
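<p>The same principles carry over to interpreter scripts. Here is a minimal Python sketch that avoids both smells, the implicit current directory and the tilde (the file names are made up for illustration):</p>

```python
import os

def resolve_paths(script_file, home=None):
    """Anchor data files to the script's own directory and expand the
    home directory explicitly, instead of relying on cwd or tilde."""
    script_dir = os.path.dirname(os.path.abspath(script_file))
    config_path = os.path.join(script_dir, "config.ini")   # hypothetical data file
    home = home or os.environ.get("HOME", os.path.expanduser("~"))
    log_path = os.path.join(home, "script.log")            # hypothetical log file
    return config_path, log_path

# Example: resolve_paths("/opt/tools/job.py", home="/home/pi")
# yields ("/opt/tools/config.ini", "/home/pi/script.log")
```

<p>A script written this way behaves the same whether it is started from an interactive shell or from cron.</p>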
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=143</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Little semantic pitfall of try..finally</title>
		<link>https://blog.pjsen.eu/?p=107</link>
					<comments>https://blog.pjsen.eu/?p=107#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 26 Aug 2013 15:49:37 +0000</pubDate>
				<category><![CDATA[General programming]]></category>
		<category><![CDATA[C#]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=107</guid>

					<description><![CDATA[This time I would like to point out the behaviour, that should be absolutely clear to any C# developer. When an exception is thrown inside try..finally block (without catch), and consequently in the scope of a using statement, it is bubbled up to the containing scope, rather than handled in any way. It implies that]]></description>
					<content:encoded><![CDATA[<p>This time I would like to point out behaviour that should be absolutely clear to any C# developer. <strong>When an exception is thrown inside a <code>try..finally</code> block (without <code>catch</code>), and consequently in the scope of a <code>using</code> statement, it is bubbled up to the containing scope rather than handled in any way</strong>. It implies that <code>try..finally</code> without <code>catch</code> has in fact nothing to do with exception handling.</p>
<p>I have already come across learning materials that suggest otherwise. Let&#8217;s have a look at [1] (my own translation from Polish):</p>
<blockquote><p>With a using clause we end up having code which is proof against exceptions</p></blockquote>
<p>and [2]:</p>
<blockquote><p>A finally block can be used to handle any exception</p></blockquote>
<p>In my opinion, the fact that in the case of <code>try..finally</code> without <code>catch</code> an exception is simply thrown out of the scope is not stressed enough in the literature, and claims like those above can be misleading.</p>
<p>Going a little bit further, I consider this a little semantic pitfall of the language. When we think of a <code>try</code> statement, we immediately recall the exception handling mechanism. However, this time that is not the case. Maybe other languages have a better (i.e. more meaningful) way of expressing the intent of code being executed at the end of a scope. Have a look at the <code>scope(exit)</code> and <code>scope(failure)</code> instructions of the D language in [3].</p>
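<p>The semantics in question are not specific to C#. The same behaviour can be demonstrated in a few lines of Python, whose <code>try..finally</code> works identically in this respect: the <code>finally</code> block runs, but the exception still escapes to the containing scope.</p>

```python
def demonstrate():
    events = []

    def risky():
        try:
            raise ValueError("boom")
        finally:
            # Runs on the way out, but does NOT swallow the exception.
            events.append("finally ran")

    try:
        risky()
    except ValueError:
        # The exception bubbled up to the containing scope.
        events.append("caught by caller")
    return events

print(demonstrate())  # ['finally ran', 'caught by caller']
```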
<p>[1]. <a href="http://programistamag.pl/">Polish magazine &#8220;Programista&#8221;</a>, issue 6/2013 (13) p. 26</p>
<p>[2]. <a href="http://books.google.pl/books?id=oe6-s1a75nAC&amp;printsec=frontcover&amp;hl=pl#v=onepage&amp;q&amp;f=false">&#8220;Programming in C#, A primer, second edition&#8221;</a>, chapter 18.8</p>
<p>[3]. <a title="Three Unlikely Successful Features of D" href="http://ecn.channel9.msdn.com/events/LangNEXT2012/AndreiLangNext.pdf">Three Unlikely Successful Features of D</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=107</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Less known feature of C# 5.0 &#8212; modified closure behaviour</title>
		<link>https://blog.pjsen.eu/?p=81</link>
					<comments>https://blog.pjsen.eu/?p=81#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 09 Jul 2013 16:34:54 +0000</pubDate>
				<category><![CDATA[Quick tip]]></category>
		<category><![CDATA[C#]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=81</guid>

					<description><![CDATA[If you were asked to mention new features of C# 5.0, then you would probably say, first of all, async / await. However, on MSDN there is list of changes that could hardly be considered as well-known, even after almost 1 year after .NET 4.5 RTM was published. In this post I briefly explain one]]></description>
										<content:encoded><![CDATA[<p>If you were asked to mention new features of C# 5.0, then you would probably say, first of all, <code>async / await</code>. However, on MSDN there is a <a title="list of changes" href="http://msdn.microsoft.com/library/hh678682%28v=vs.110%29.aspx">list of changes</a> that could hardly be considered well-known, even almost a year after .NET 4.5 RTM was published. In this post I briefly explain one of them that in my opinion is worth remembering.</p>
<p>As a C# developer, you are hopefully aware of the <em>outer variable trap</em>. Yet, in version 5.0 of the language, the behaviour of closures has been slightly altered. Let&#8217;s take a look at the first half (with respect to the &#8220;switch&#8221; comment) of the following code:</p>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2013-07-09-cs1.cs"></script></p>
<p>Now let&#8217;s try to compile it using versions 3.5 and 4.5 of the compiler. This assumes default installations and that the file is named <code>Program.cs</code>; of course, since the 4.5 version of the runtime is an in-place update, it resides in the directory named after 4.0.</p>
<ol>
<li><code>c:\Windows\Microsoft.NET\Framework\v3.5\csc.exe Program.cs &amp;&amp; Program</code></li>
<li><code>c:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe Program.cs &amp;&amp; Program</code></li>
</ol>
<p>The former example results in <code>1 1</code> printed to the console and the latter results in <code>0 1</code>. The <code>1 1</code> result is caused by the typical <em>outer variable trap</em>, where the lambdas are bound to the captured variable itself (which in the end has the value 1), not to its value at the time the lambda was created. The breaking change introduced in version 5.0 of the language brings the behaviour to what might actually have been expected: capturing the value indicated by the sequence of the code being executed. <strong>However, this works only inside a <code>foreach</code> loop.</strong></p>
<p>By switching the comments (deleting the first slash) you can verify that the standard <code>for</code> loop behaves exactly the same in both versions of the compiler and results in <code>2 2</code> being printed, indicating the <em>outer variable trap</em>.</p>
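<p>For comparison, Python closures capture the loop variable itself in both <code>for</code> loops and comprehensions, i.e. they always exhibit the pre-5.0 C# behaviour. A short sketch (unrelated to the C# gist above) showing the trap and the usual default-argument workaround:</p>

```python
def late_binding():
    funcs = [lambda: i for i in range(3)]
    # Every lambda refers to the same variable i, which ends up as 2.
    return [f() for f in funcs]

def bound_at_creation():
    # The default argument freezes the current value of i per lambda.
    funcs = [lambda i=i: i for i in range(3)]
    return [f() for f in funcs]

print(late_binding())       # [2, 2, 2]
print(bound_at_creation())  # [0, 1, 2]
```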
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=81</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Interface type parameter covariance and contravariance in C#</title>
		<link>https://blog.pjsen.eu/?p=72</link>
					<comments>https://blog.pjsen.eu/?p=72#comments</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 24 Jun 2013 16:44:20 +0000</pubDate>
				<category><![CDATA[Quick tip]]></category>
		<category><![CDATA[C#]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=72</guid>

					<description><![CDATA[I would like this blog post to serve as a quick reference that recalls the basic concept of covariant and contravariant type parameters of generic interfaces in the C# language. I tried to keep the example as simple as possible. Included comments explain the key points. No long stories and no dissertations. The code does]]></description>
										<content:encoded><![CDATA[<ul>
<li>I would like this blog post to serve as a quick reference that recalls the basic concept of covariant and contravariant type parameters of generic interfaces in the C# language.</li>
<li>I tried to keep the example as simple as possible. Included comments explain the key points. No long stories and no dissertations.</li>
<li>The code does nothing, but compiles on C# 4.0 or newer compiler.</li>
<li>Try deleting the first slash character in the first line to switch between the snippets (BTW this is a cool trick <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> ).</li>
</ul>
<p><script src="https://gist.github.com/przemsen/97eaf5028e91b9111fae417055eb9c3e.js?file=blog-2013-06-24-cs1.cs"></script></p>
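<p>For readers coming from other languages: the same concept exists in Python&#8217;s <code>typing</code> module, where variance is declared on the <code>TypeVar</code> rather than with <code>out</code>/<code>in</code> keywords. A rough sketch (the class names are made up; the variance itself is enforced by a static checker such as mypy, not at runtime):</p>

```python
from typing import Generic, TypeVar

class Animal: ...
class Cat(Animal): ...

T_co = TypeVar("T_co", covariant=True)              # like C#'s "out T"
T_contra = TypeVar("T_contra", contravariant=True)  # like C#'s "in T"

class Producer(Generic[T_co]):
    """Only returns T_co, so a Producer[Cat] may be used as a Producer[Animal]."""
    def __init__(self, value: T_co) -> None:
        self._value = value
    def produce(self) -> T_co:
        return self._value

class Consumer(Generic[T_contra]):
    """Only accepts T_contra, so a Consumer[Animal] may be used as a Consumer[Cat]."""
    def consume(self, value: T_contra) -> str:
        return type(value).__name__

animals: Producer[Animal] = Producer(Cat())  # covariance in action
```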
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=72</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>What do I use Raspberry Pi for?</title>
		<link>https://blog.pjsen.eu/?p=57</link>
					<comments>https://blog.pjsen.eu/?p=57#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Fri, 24 May 2013 16:49:57 +0000</pubDate>
				<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=57</guid>

					<description><![CDATA[I have to admit I am really impressed by the ideas of Pi use cases that people come up with all around the world (e.g. this list). Raspberry Pi is a microcomputer that almost every engineer passionate about computer science and/or electronics could hardly resist playing around with. Yet, I do not have plenty of]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="aligncenter" title="Raspberry Pi" alt="Raspberry Pi" src="https://blog.pjsen.eu/wp-content/uploads/2013/09/rpi.jpg" /></p>
<p>I have to admit I am really impressed by the ideas for Pi use cases that people come up with all around the world (e.g. <a href="http://www.howtogeek.com/?s=raspberry+pi">this list</a>). <a title="Raspberry Pi" href="http://www.raspberrypi.org/">Raspberry Pi</a> is a microcomputer that almost every engineer passionate about computer science and/or electronics can hardly resist playing around with. Yet, I do not have plenty of creative ideas for what to do with it. But, as it turns out, it can perform extremely well doing even simple tasks.</p>
<p>I began my setup by downloading and flashing the <a title="Raspbian" href="http://www.raspbian.org/">Raspbian</a> Linux distro onto a rather high-class SD card. It is important to pay attention to the card&#8217;s class because the difference between 3&#8211;4 MB/s and 10 MB/s transfer speed does matter. It is pretty easy to flash Raspbian and to do the initial setup. You do not have to install it, because you are flashing a ready-to-run image of a working system. Then I installed additional packages with services:</p>
<ul>
<li>A VNC server allows me to detach the monitor, keyboard and mouse from the device and to connect to its desktop remotely. <a title="This instruction" href="http://www.howtogeek.com/141157/how-to-configure-your-raspberry-pi-for-remote-shell-desktop-and-file-transfer/">This tutorial</a> was the starting point for me.</li>
<li>An SSH server is also a must-have. I can easily log in to the shell with PuTTY or from an Android device using <a title="JuiceSSH" href="https://play.google.com/store/apps/details?id=com.sonelli.juicessh">JuiceSSH</a>.</li>
<li>Last but not least, <strong>I started Samba server, which is the main &#8220;server role&#8221; for my Raspberry Pi.</strong></li>
<li>I also started the <a title="Nginx" href="http://nginx.org/">nginx</a> web server to be able to access the shared folder with a web browser.</li>
</ul>
<p><strong>I use my Raspberry Pi as a file exchange server</strong> between all my devices including laptops and Android devices. For me, it is extremely useful. I do not have to power on my laptop to send a file to or from the tablet. Now I have a computer that is always on and just serves my files. On Windows I have mapped a network drive to the Samba share and on Android devices I use the <a title="X-Plore file manager" href="https://play.google.com/store/apps/details?id=com.lonelycatgames.Xplore">X-Plore file manager</a> which has an option to connect to an SMB share. One important thing is to set up <strong>an anonymous Samba share</strong> so that Android clients do not have to enter Windows credentials. Just in case, here are the <code>smb.conf</code> contents that work great for me:</p>
<pre class="brush:plain">workgroup = HOME
security = share
guest account = pi 
[LAN_EXCHANGE]
comment = LAN EXCHANGE
path = /home/pi/LAN_EXCHANGE
browseable = yes
read only = no
guest ok = yes
create mask = 0666
directory mask = 0777</pre>
<p>And please do not tell me there already exist inventions like these <a href="http://www.kingston.com/us/usb/wireless/">WiFi &#8220;drives&#8221;</a>. I am pretty sure they not only are more expensive, but also have nothing to do with the aforementioned protocols which IT pros use <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=57</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Bringing old projects back to life: RGBgen XP</title>
		<link>https://blog.pjsen.eu/?p=51</link>
					<comments>https://blog.pjsen.eu/?p=51#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 07 May 2013 14:38:03 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=51</guid>

					<description><![CDATA[The original application was written back in 2001 in Visual Basic 6. It is quite trivial application (despite the fact, that when I first created it being a kid, for me it was not). It is intended for easily creating hexadecimal RGB codes using sliders. I know, there are plenty of similar tools, also inform]]></description>
										<content:encoded><![CDATA[<p>The original application was written back in 2001 in Visual Basic 6. It is a quite trivial application (despite the fact that when I first created it as a kid, for me it was not). It is intended for easily creating hexadecimal RGB codes using sliders. I know, there are plenty of similar tools, also in the form of online apps. However, I still like the &#8220;mechanics&#8221; of the UI and its simplicity. It turns out that I still need it.</p>
<p>However, the originally compiled executable does not work on modern Windows systems. I decided to check whether it was possible to fix this. I took my Visual Basic 6 installation CD, started a snapshotted virtual machine running Windows XP and opened the source project files. The application had a simple HTML editor and used a few really esoteric OCX controls. In fact, they were not essential to the application&#8217;s core functionality. After removing the unnecessary references from the project and disabling some of the UI components, I recompiled the source code so that it uses only &#8220;out of the box&#8221; Visual Basic libraries. The Windows operating system includes the Visual Basic 6 runtime DLL even now, and finally I managed to run the application on my Windows 7 64-bit machine.</p>
<p>If you want to give it a try, you can download it from <a title="" href="https://1drv.ms/u/s!AuxTWB-KHV5JkwHByx_24QmrJ4NF?e=hqhnn8">my Skydrive</a>.</p>
<p><img decoding="async" class="aligncenter" alt="RGBGen XP" src="https://blog.pjsen.eu/wp-content/uploads/2013/09/rgbgenxp.png" /></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=51</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>VirtualBox USB Filters</title>
		<link>https://blog.pjsen.eu/?p=46</link>
					<comments>https://blog.pjsen.eu/?p=46#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 06 May 2013 15:58:41 +0000</pubDate>
				<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=46</guid>

					<description><![CDATA[Although primarily I work on Windows operating system, as a person having some Unix background I feel comfortable having a linux distro close at hand. I use Oracle VirtualBox for virtualization. One of the tasks that I find pretty hard to do with this hypervisor is creating USB device filter (allowing guest os to use]]></description>
										<content:encoded><![CDATA[<p>Although I primarily work on the Windows operating system, as a person having some Unix background I feel comfortable having a Linux distro close at hand. I use Oracle VirtualBox for virtualization. One of the tasks that I find pretty hard to do with this hypervisor is creating a USB device filter (allowing the guest OS to use the host&#8217;s devices). More often than not, the application fails to recognize the USB device name and serves raw protocol codes as labels.</p>
<p><img decoding="async" class="aligncenter" alt="VirtualBox USB settings" src="https://blog.pjsen.eu/wp-content/uploads/2013/09/vboxusb.png" /></p>
<p>The problem here is that it is really difficult to guess the actual device. In such situations I would like to recommend a rather obscure Microsoft tool called USB View. It is not very easy to find a download, because the original version of the tool was created back in the Windows 9x days; however, it still works, has updated versions and can be particularly useful. Just look at the screen showing <strong>all</strong> connected USB devices:</p>
<p><img decoding="async" class="aligncenter" alt="VirtualBox USB settings" src="https://blog.pjsen.eu/wp-content/uploads/2013/09/vboxusb2.png" /></p>
<p>Now I can quickly decode the device numbers and connect the right one, which is the card reader, to the virtual machine. BTW, it is also good to know the internal architecture of modern laptops: their peripherals are most likely connected to the motherboard via USB.</p>
<p>UPDATE: USB View is part of the Windows SDK; it is installed along with the debuggers. <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/bg162891.aspx">Link to the Windows SDK for Windows 8.1</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=46</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Quick-tip: Using newest Entity Framework assembly in ASP.NET MVC 4</title>
		<link>https://blog.pjsen.eu/?p=39</link>
					<comments>https://blog.pjsen.eu/?p=39#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Mon, 01 Apr 2013 17:17:19 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[ASP.NET]]></category>
		<category><![CDATA[Quick tip]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=39</guid>

					<description><![CDATA[For me it has been kind of unexpected behavior. When I update NuGet package, I get newest Entity Framework binaries. Today it is version 5.0. However, default MVC template targets .NET version 4.0. Newest version of Entity Framework for .NET 4.0 is 4.4. The NuGet package contains both assemblies, but the project will use 4.4]]></description>
										<content:encoded><![CDATA[<p>For me it has been kind of unexpected behavior. When I update the NuGet package, I get the newest Entity Framework binaries. Today it is version 5.0. However, the default MVC template targets .NET version 4.0, and the newest version of Entity Framework for .NET 4.0 is 4.4. The NuGet package contains both assemblies, but the project <strong>will use 4.4</strong> because the project <strong>targets .NET 4.0 by default</strong>. Furthermore, simply changing the target runtime version in the project properties is not enough. What finally worked for me was manually editing the <code>.csproj</code> file. I located the assembly reference in the XML and changed the path in the <code>HintPath</code> tag from <code>\lib\<strong><span style="color: #339966;">net40</span></strong>\EntityFramework.dll</code> to <code>\lib\<strong><span style="color: #339966;">net45</span></strong>\EntityFramework.dll</code>. The conclusion is to pay close attention to <strong>what particular version</strong> of an assembly is <strong>actually being referenced</strong> and not to rely only on NuGet versioning.</p>
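<p>For illustration, the edit boils down to one path inside the reference element (the package folder name below is an example from a typical EF 5.0 NuGet layout; yours may differ):</p>

```xml
<Reference Include="EntityFramework">
  <!-- was: ..\packages\EntityFramework.5.0.0\lib\net40\EntityFramework.dll -->
  <HintPath>..\packages\EntityFramework.5.0.0\lib\net45\EntityFramework.dll</HintPath>
</Reference>
```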
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=39</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Start your own service detecting if an e-mail was read</title>
		<link>https://blog.pjsen.eu/?p=22</link>
					<comments>https://blog.pjsen.eu/?p=22#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Tue, 19 Mar 2013 16:46:12 +0000</pubDate>
				<category><![CDATA[Solutions]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=22</guid>

					<description><![CDATA[I have come up with the idea after trying out bananatag.com which is such tracking service. Their solution is simple: let&#8217;s attach small, one-pixel image to an e-mail and log when it was downloaded. Providing that getting an image happened exactly when a user opens an e-mail. What concerns me is the way the idea]]></description>
					<content:encoded><![CDATA[<p>I came up with the idea after trying out <a href="http://bananatag.com">bananatag.com</a>, which is one such tracking service. Their solution is simple: attach a small, one-pixel image to an e-mail and log when it is downloaded, provided that fetching the image happens exactly when a user opens the e-mail.</p>
<p>What concerns me is the way the idea is implemented. They provide a dedicated browser extension working with GMail or an MS Outlook extension. So far so good, but if you use neither of these you have to &#8220;sync&#8221; your Bananatag account with your e-mail account. It boils down to simply giving your login and password to Bananatag. Then they act as a proxy for sending e-mails and attach the appropriate images on the fly.</p>
<p>Neither solution sounds good to me. I prefer not to install many extensions, to keep my software installations as lightweight as possible. I typically work on more than one machine and installing extensions is not always possible. Last but not least, giving my credentials to some company is unacceptable from a security point of view.</p>
<p>However, the idea is so simple that almost anyone can start up their own tracking web application. In this post I provide complete instructions as well as small, self-contained source code written in pure ASP.NET without any external dependencies.</p>
<p>First, you need a free web hosting provider supporting ASP.NET. I have come across <a href="http://somee.com">somee.com</a> which seems to be fairly good. They offer 150 MB of space and require you to access the web page at least once a month. After signing up and creating a new web site you end up with an address like <code>http://(yourname).somee.com</code>. Then you can <a href="https://1drv.ms/u/s!AuxTWB-KHV5Jknuj5p7S_wZkRgJx?e=RldHe9">download the source code from my Skydrive</a>, which is <em>pr.ashx</em>. It stands for pixel recorder, because it is an application which serves a one-pixel image and records each event in a randomly named text file. Simply upload that file into the root of your newly created somee.com account. Now you can access it from the Internet by typing <code>http://(yourname).somee.com/pr.ashx</code>.</p>
<p>When the application starts for the first time, it creates a uniquely named text file and provides a link to it. Now you can start using it. When you are writing an e-mail, insert an image from a URL to have the message tracked. Make sure your e-mail composing app does not attach the image to the message body, but instead preserves the reference to your web application serving the image. The URL is as simple as this: <code>http://(yourname).somee.com/pr.ashx?token=1</code>, where <em>token</em> is a number between 1 and 100 (you can adjust the range in the source code). Every time someone accesses this URL, the one-pixel image is served and a time stamp is written to the unique text file. You can <strong>use it from anywhere</strong> in <strong>any e-mail composing app</strong>, web mail or desktop, and you can check from any web browser whether an e-mail has already been read. Just remember which token number was associated with a particular message, and keep in mind that some e-mail providers do not download images by default (e.g. GMail &#8212; the user has to click a button to download the images included in the message body). Have fun!</p>
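<p>In an HTML e-mail body, the tracked reference might look like this (the host name and token value are placeholders; pick a distinct token per message so you can tell the log entries apart):</p>

```html
<!-- Reference the pixel by URL; an embedded/attached copy would never hit the server -->
<img src="http://yourname.somee.com/pr.ashx?token=42" width="1" height="1" alt="">
```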
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=22</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hello world!</title>
		<link>https://blog.pjsen.eu/?p=1</link>
					<comments>https://blog.pjsen.eu/?p=1#respond</comments>
		
		<dc:creator><![CDATA[pjsen]]></dc:creator>
		<pubDate>Sun, 17 Mar 2013 15:01:23 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://blog.pjsen.eu/?p=1</guid>

					<description><![CDATA[Welcome to my main technical blog! Recently I have decided to diverge my concept of blogging. On this blog I will publish all new technical articles only in English. My old blog &#8212; pjs.blox.pl still exists, but only less important, Polish-specific articles will be published there. There are several reasons why I decided to build]]></description>
										<content:encoded><![CDATA[<p>Welcome to my main technical blog! I have recently decided to change my approach to blogging. On this blog I will publish all new technical articles, in English only. My old blog &#8212; <a title="pjs.blox.pl" href="https://pjsen.eu/legacy/blog/">pjs.blox.pl</a> &#8212; still exists, but only less important, Polish-specific articles will be published there. There are several reasons why I decided to build a new blog. Firstly, the potential audience for English content is incomparably bigger. Secondly, the old platform &#8212; blox.pl &#8212; is a real pain in the neck: it has many limitations and is simply inconvenient. Therefore, installing and running my own decent platform such as WordPress is an obvious choice for any non-amateur writer.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.pjsen.eu/?feed=rss2&#038;p=1</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
