<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-2496415891665263000</id><updated>2026-03-26T09:25:28.493+02:00</updated><category term="Database"/><category term="General"/><category term="C#"/><category term="PowerShell"/><category term="Web"/><category term="AWS"/><category term="ASP.NET"/><category term=".net"/><category term="Agile"/><category term="Azure"/><category term="ELB"/><category term="Deployment"/><category term="Event Grid"/><category term="Linux"/><category term="SharePoint"/><category term="Ubuntu"/><category term="WebHooks"/><category term="Ajax"/><category term="Design"/><category term="LINQ"/><category term="Scrum"/><category term="Security"/><category term="docker"/><category term="Architecture"/><category term="CDK"/><category term="Certification"/><category term="Continuous Delivery"/><category term="EDC"/><category term="Entity Framework"/><category term="GAE"/><category term="Git"/><category term="HTML5"/><category term="Kubernetes"/><category term="LLMs"/><category term="MDC"/><category term="Mind"/><category term="Nano"/><category term="Python"/><category term="SQL Server"/><category term="Usability"/><category term="app engine"/><category term="logging"/><title type='text'>For the Love of Software</title><subtitle type='html'>Hesham A. 
Amin&#39;s blog about his love..Software</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default?start-index=26&amp;max-results=25&amp;redirect=false'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><generator version='7.00' uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>112</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2831763589265795582</id><published>2025-01-17T12:40:00.002+02:00</published><updated>2025-01-17T12:40:41.157+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><category scheme="http://www.blogger.com/atom/ns#" term="CDK"/><title type='text'>Managing Logical Environments in AWS CDK: Using Qualifiers to Avoid Conflicts</title><content type='html'>&lt;p&gt;Infrastructure
as Code (IaC) is an essential tool in modern DevOps. If you&#39;re working with
AWS as your primary cloud provider, you&#39;re very likely working with one of the
two tools AWS offers for IaC: CloudFormation or CDK.&lt;/p&gt;&lt;p&gt;While CDK
provides some extra capabilities compared to CloudFormation, it requires certain infrastructure to exist in the target account before it can deploy stacks. This infrastructure was optional in CDK v1 but is mandatory in CDK v2. The process of creating these infrastructure components is called &lt;a href=&quot;https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html&quot; target=&quot;_blank&quot;&gt;bootstrapping&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Assuming
that you have CDK tooling already set up on your machine, it&#39;s relatively easy
to perform the bootstrapping process: &lt;br /&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cdk bootstrap aws://{account-id}/{region}
cdk bootstrap aws://123456789012/ap-southeast-2&lt;/code&gt;&lt;/pre&gt;&lt;p style=&quot;text-align: left;&quot;&gt;By default, this will create a stack with the name &lt;b&gt;CDKToolkit&lt;/b&gt;:&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC9m5SWePBlZFmCjVCRecPpfJfhwosLZiBslsg2HNwt1gTvcMVCb_pIh3UFMx4FvkeqMNBKFTCEyAx_9dICvfuhlezvUmDGgXxGETOyj4dwjaU0ojO18xF7aBXL9RCNq_YOiv0706Nd8t2mYC4Nz6vHe7IZyVo567eTl0x9abp4RVHkYz_yo7ndGzIAow/s408/01-CDKStack.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;92&quot; data-original-width=&quot;408&quot; height=&quot;90&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC9m5SWePBlZFmCjVCRecPpfJfhwosLZiBslsg2HNwt1gTvcMVCb_pIh3UFMx4FvkeqMNBKFTCEyAx_9dICvfuhlezvUmDGgXxGETOyj4dwjaU0ojO18xF7aBXL9RCNq_YOiv0706Nd8t2mYC4Nz6vHe7IZyVo567eTl0x9abp4RVHkYz_yo7ndGzIAow/w400-h90/01-CDKStack.png&quot; width=&quot;400&quot; /&gt;&amp;nbsp;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;&amp;nbsp;This stack includes resources such as an ECR repository and an S3 bucket:

&lt;/div&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBqy4e26ZlY_2ltj7PgoW2Q26Ik40HN25itIAVKFXdhJhNjKW_rqhOZNd0206FuwfyOe2kIZtcSF3_7oHs2e4TEeQtJsgPq-6Rp9kO5P6af_oNwrSUpeY-m55ktf3ZIEBHGnkoz7jYEczj0rjix9mi-Zmk6nBNdaNwmKdnMMebzd5-U1hrWXyMnQP4IYM/s741/02-CDKBucket.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;175&quot; data-original-width=&quot;741&quot; height=&quot;152&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBqy4e26ZlY_2ltj7PgoW2Q26Ik40HN25itIAVKFXdhJhNjKW_rqhOZNd0206FuwfyOe2kIZtcSF3_7oHs2e4TEeQtJsgPq-6Rp9kO5P6af_oNwrSUpeY-m55ktf3ZIEBHGnkoz7jYEczj0rjix9mi-Zmk6nBNdaNwmKdnMMebzd5-U1hrWXyMnQP4IYM/w640-h152/02-CDKBucket.png&quot; width=&quot;640&quot; /&gt;&amp;nbsp;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJ-qQWy1h6qCrQN9Cc_nRU0dJon-tZFaDCjPMXJQgjFq1KJX75YmsCs9JlrSZ42EInX1UMYjNF5yyAFLUBrGxHyAboiiQQNtRMK_1tWW5M_bW_Fsmaz66BnPo8rgLRoUq1ViY866uuYDN8XSpremvzEJLgaQF2gC3C07MbwtwFqZ3a73KWrLFJQfj9rjM/s850/03-CDKECR.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;125&quot; data-original-width=&quot;850&quot; height=&quot;94&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJ-qQWy1h6qCrQN9Cc_nRU0dJon-tZFaDCjPMXJQgjFq1KJX75YmsCs9JlrSZ42EInX1UMYjNF5yyAFLUBrGxHyAboiiQQNtRMK_1tWW5M_bW_Fsmaz66BnPo8rgLRoUq1ViY866uuYDN8XSpremvzEJLgaQF2gC3C07MbwtwFqZ3a73KWrLFJQfj9rjM/w640-h94/03-CDKECR.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;

&lt;/p&gt;&lt;p&gt;As shown in the screenshots, the resource names include the default qualifier &quot;hnb659fds&quot;. Aside from being a random, unattractive string, this default becomes a problem when we have multiple logical environments like &lt;b&gt;dev &lt;/b&gt;and &lt;b&gt;test &lt;/b&gt;in the same AWS account, because they would all have to share the same bootstrap resources.&lt;/p&gt;
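&lt;p&gt;Before bootstrapping with a custom qualifier, it can be useful to check which toolkit stacks already exist in the target account. A quick sketch using the AWS CLI, assuming the default stack name and configured credentials:&lt;/p&gt;

```shell
# Fails with an error if the account/region pair has not been bootstrapped yet
aws cloudformation describe-stacks \
  --stack-name CDKToolkit \
  --region ap-southeast-2 \
  --query "Stacks[0].StackStatus" \
  --output text
```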

&lt;p&gt;If we need to separate these environments and maintain two distinct sets of CDK bootstrap resources, we have to distinguish between them by specifying a qualifier. At the same time, we need to ensure that the toolkit stack names don&#39;t conflict.&lt;/p&gt;

&lt;p style=&quot;text-align: left;&quot;&gt;To achieve this, the CDK CLI provides two parameters:&lt;/p&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;b&gt;--qualifier:&lt;/b&gt; replaces the default &quot;hnb659fds&quot; string in resource names.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;b&gt;--toolkit-stack-name:&lt;/b&gt; overrides the default stack name.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;For example, to bootstrap two separate logical environments, we can use:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cdk bootstrap aws://123456789012/ap-southeast-2 --qualifier dev --toolkit-stack-name CDKToolkit-dev
cdk bootstrap aws://123456789012/ap-southeast-2 --qualifier test --toolkit-stack-name CDKToolkit-test&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This results in creating stacks and resources as shown:&lt;br /&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEickVnNCtNFkQvu2IXXzJ6PFHLusvHbaqsL58Db55Rwo8xBdigcDN0fIqMT4GNZpFd0tKmMnPn1rRRbvwm99Ck43HkogvroAJiCcaoXmh1LGTYHVHaBkn2sqmfqMsZ6Yy4Vd0-I7XcD26Td8cM3F_p_ihOaTlnbdKEN3f_-1amVVot6uKlz2GeqJ7CT1Bk/s596/04-NewCDKStack.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;113&quot; data-original-width=&quot;596&quot; height=&quot;122&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEickVnNCtNFkQvu2IXXzJ6PFHLusvHbaqsL58Db55Rwo8xBdigcDN0fIqMT4GNZpFd0tKmMnPn1rRRbvwm99Ck43HkogvroAJiCcaoXmh1LGTYHVHaBkn2sqmfqMsZ6Yy4Vd0-I7XcD26Td8cM3F_p_ihOaTlnbdKEN3f_-1amVVot6uKlz2GeqJ7CT1Bk/w640-h122/04-NewCDKStack.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p 
style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-FkQbz8DVWlMsZxuEDmKzhhuYdSdrhn6rN7EcVFK0jkOffcSHZQtKQ8KtAYCon321wOUVoGTDldxfijGGcRfUWATZyAiL14F5q8DcizrnD5Vw5bXLXsQmSik_Q3hxkK5hGhLaHFX_xX9Nu7LY4FlYs4MDubQF60xdq_sqcSJVWGKibRvWnEYgyC0yr-o/s725/05-NewCDKBuckets.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;70&quot; data-original-width=&quot;725&quot; height=&quot;62&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-FkQbz8DVWlMsZxuEDmKzhhuYdSdrhn6rN7EcVFK0jkOffcSHZQtKQ8KtAYCon321wOUVoGTDldxfijGGcRfUWATZyAiL14F5q8DcizrnD5Vw5bXLXsQmSik_Q3hxkK5hGhLaHFX_xX9Nu7LY4FlYs4MDubQF60xdq_sqcSJVWGKibRvWnEYgyC0yr-o/w640-h62/05-NewCDKBuckets.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;The next question is: How to target a specific bootstrap qualifier when you deploy?&lt;br /&gt;One way is to update &lt;b&gt;cdk.json&lt;/b&gt; configuration file to set the bootstrap 
qualifier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;context&quot;: {
    &quot;@aws-cdk/core:bootstrapQualifier&quot;: &quot;test&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If you prefer to set this value dynamically, for example to pass the environment name as an environment variable in a CI pipeline, you can set the qualifier in code instead (this example uses C#):&lt;br /&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Read the qualifier (e.g. &quot;dev&quot; or &quot;test&quot;) from an environment variable set by the CI pipeline
var qualifier = Environment.GetEnvironmentVariable(&quot;CDK_QUALIFIER&quot;);
// Point the synthesizer at the bootstrap resources created with that qualifier
var synthesizer = new DefaultStackSynthesizer(new DefaultStackSynthesizerProps() { Qualifier = qualifier });
var stack = new Stack(app, &quot;my-stack&quot;, new StackProps() { Synthesizer = synthesizer });
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This method allows you to set up multiple environments within a single AWS account, enabling the application of distinct policies to CDK assets for logical divisions like &lt;b&gt;dev &lt;/b&gt;and &lt;b&gt;test &lt;/b&gt;or for separate teams sharing the account.&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2831763589265795582/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2831763589265795582?isPopup=true' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2831763589265795582'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2831763589265795582'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2025/01/managing-logical-environments-in-aws.html' title='Managing Logical Environments in AWS CDK: Using Qualifiers to Avoid Conflicts'/><author><name>Hesham A. 
Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC9m5SWePBlZFmCjVCRecPpfJfhwosLZiBslsg2HNwt1gTvcMVCb_pIh3UFMx4FvkeqMNBKFTCEyAx_9dICvfuhlezvUmDGgXxGETOyj4dwjaU0ojO18xF7aBXL9RCNq_YOiv0706Nd8t2mYC4Nz6vHe7IZyVo567eTl0x9abp4RVHkYz_yo7ndGzIAow/s72-w400-h90-c/01-CDKStack.png" height="72" width="72"/><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-6921563420276940640</id><published>2024-09-07T00:03:00.000+02:00</published><updated>2024-09-07T00:03:50.944+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="docker"/><category scheme="http://www.blogger.com/atom/ns#" term="LLMs"/><title type='text'>Unlocking the Power of LLMs Locally with Docker Compose</title><content type='html'>&lt;p&gt;Although the expectations that came with the advent of Large Language Models (LLMs) could be largely exaggerated, they still have proven to be useful in many scenarios and for a very wide audience. Probably, this is the closest non-technical users have come to interact with AI since its inception. ChatGPT has set the &lt;a href=&quot;https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/&quot; target=&quot;_blank&quot;&gt;record for the fastest-growing user base&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Being an online tool, it comes with some concerns over privacy, customizability, and the constant need for an internet connection.&lt;br /&gt;However ChatGPT is not the only player in this game. 
Many models are available online and for offline download. The latter is the focus of this post.&lt;/p&gt;&lt;p&gt;Recently, Meta released its latest model, &lt;a href=&quot;https://ai.meta.com/blog/meta-llama-3-1/&quot; target=&quot;_blank&quot;&gt;Llama 3.1&lt;/a&gt;, and after hearing positive feedback about it, I wanted to give it a try on my laptop.&lt;/p&gt;&lt;p&gt;So, my objectives were:&lt;br /&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Find an easy way to set up the tooling required to download LLMs and get responses to my prompts locally.&lt;/li&gt;&lt;li&gt;Have a nice user interface that I can use to give prompts, save prompt history, and customize my environment.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&amp;nbsp;Two tools play very well together here:&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ollama: https://ollama.com/&lt;/b&gt;&lt;br /&gt;Think of Ollama as the npm or pip of language models. It enables you to download models, execute prompts from the CLI, list models, and so on. It also provides APIs that can be called from other applications. These APIs are compatible with the OpenAI Chat Completions API.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Open WebUI: https://openwebui.com/&lt;/b&gt;&lt;br /&gt;A self-hosted web interface, very similar to what you get from OpenAI&#39;s ChatGPT web interface. Open WebUI can interface with Ollama; think of it as a front end for the backend provided by Ollama.&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;Since I prefer to use Docker whenever possible to experiment with new tools, I opted to use it instead of installing anything locally, especially since I&#39;m almost a complete beginner in this space.&lt;/p&gt;&lt;p&gt;Open WebUI provides Docker images that include both Open WebUI and Ollama in the same image!
This makes setting up the whole stack locally super easy.&lt;/p&gt;&lt;p&gt;The documentation provides this example command to run the container:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama&lt;/code&gt;&lt;/pre&gt;This command does the following:&lt;br /&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Starts a container from the image ghcr.io/open-webui/open-webui:ollama. The --restart always parameter restarts the container automatically if it fails or is stopped.&lt;/li&gt;&lt;li&gt;Maps the local port 3000 to Open WebUI&#39;s port 8080.&lt;/li&gt;&lt;li&gt;Maps Docker volumes to paths within the container. This persists data even if the container is deleted, which is important for keeping chat history and downloaded models, which are large.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;I prefer to use Docker Compose. Additionally, I wanted easy visibility into the data created by Open WebUI and the models downloaded by Ollama, so I chose to bind-mount folders on my local machine instead of using volumes. Here is what the docker-compose.yml looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;services:
  OpenWebUI:
    image: ghcr.io/open-webui/open-webui:ollama
    container_name: open-webui
    environment:
      - WEBUI_AUTH=False
    volumes:
      - C:\open-webui:/app/backend/data
      - C:\ollama:/root/.ollama
    ports:
      - 3000:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Starting the stack is easy:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose up -d&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It may take a short while before it&#39;s ready. If you check the container logs (I use Docker Desktop on Windows), you should see something similar to:&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEcdYXGghdCMv0HOPafJw-MStZYppk0BFiLn6N5RrvI4jecGiH3k8A3KyyuovZ9G0hmFCiouqpMfdmVZwWjtBDFVI8J_wSs7BbH2CJRIbSkomZej2IL2_1RnaysvagHezG0lhyaDxkKqkE0Kxi2E4epIoAaQvz83mvwik3n8SVbiwE2NBAvfmjNW3IU9Y/s522/open-webui.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;272&quot; data-original-width=&quot;522&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEcdYXGghdCMv0HOPafJw-MStZYppk0BFiLn6N5RrvI4jecGiH3k8A3KyyuovZ9G0hmFCiouqpMfdmVZwWjtBDFVI8J_wSs7BbH2CJRIbSkomZej2IL2_1RnaysvagHezG0lhyaDxkKqkE0Kxi2E4epIoAaQvz83mvwik3n8SVbiwE2NBAvfmjNW3IU9Y/s16000/open-webui.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then you can open your browser at&amp;nbsp;http://localhost:3000/ and start playing.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDF50UzHW2tdq1p4PxljXqwFp-c8UVd5fjrluFOfpEp3t3yVH5x1RIEDs5S5mkgiFW398idonp9EjP1iMBf2LyQrLUEQTXqnHWFQfJMp1YE-LGlOVFMTXQAf-8Z6IeokMLeeIbqwSsia5sQPcE3dMD-BqYLn-fLa4HriIScZZtBKTJqOVWGY8SU0iAmNY/s1343/prompt.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;652&quot; data-original-width=&quot;1343&quot; height=&quot;311&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDF50UzHW2tdq1p4PxljXqwFp-c8UVd5fjrluFOfpEp3t3yVH5x1RIEDs5S5mkgiFW398idonp9EjP1iMBf2LyQrLUEQTXqnHWFQfJMp1YE-LGlOVFMTXQAf-8Z6IeokMLeeIbqwSsia5sQPcE3dMD-BqYLn-fLa4HriIScZZtBKTJqOVWGY8SU0iAmNY/w640-h311/prompt.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;h3 style=&quot;text-align: left;&quot;&gt;Downloading models&lt;/h3&gt;&lt;p&gt;Remember that this docker image does not include any models yet so probably the first step is to click the plus icon beside the &quot;Select a model&quot; label and write a model name. You&#39;ll find a list of model names in&amp;nbsp;&lt;a href=&quot;https://ollama.com/library&quot;&gt;https://ollama.com/library&lt;/a&gt;. Click a model name, and choose the model size and the model name will be shown. So to download the 8b (8 billion parameters version) of llama3.1, write &lt;b&gt;llama3.1:8b&lt;/b&gt; in the Open WebUI interface.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6U_7xdOTMY6FNHs6LyyFSJaXWqh6n3OzDoGHyqCb-_j0zVT_qc3gX0MU7rtbqSdNDlFXYnbuQ_-Fb_kCjwLhdIo6fcekb4zbPKolYCVRYiJgUYBNQPn4Kg306Hi_TsK1pJb7XavTnkEyD_X3_QjK7cAgDHuQjkFJyH_PqulDsvt3iqiVZVkTqAaMQyE8/s801/download-model.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;524&quot; data-original-width=&quot;801&quot; height=&quot;418&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6U_7xdOTMY6FNHs6LyyFSJaXWqh6n3OzDoGHyqCb-_j0zVT_qc3gX0MU7rtbqSdNDlFXYnbuQ_-Fb_kCjwLhdIo6fcekb4zbPKolYCVRYiJgUYBNQPn4Kg306Hi_TsK1pJb7XavTnkEyD_X3_QjK7cAgDHuQjkFJyH_PqulDsvt3iqiVZVkTqAaMQyE8/w640-h418/download-model.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;As shown in the screenshot below, I downloaded llama3.1:8b, gemma2:2b (Google&#39;s 
lightweight model). Note that the larger the number of parameters, the higher the specs your computer needs to have.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVDwCVJvC7TlLfhvhxzgl7TojXdaQ_wFQxoJ2vhzKzz4oMQqAs-xlhtYlF6aAJLg_cdZ0_LUsFw6ojcRcKsTXRGt_yI6tinKr24tJN4fkbqAn4gnHv97QHIimmzOSZ_1Ij6qvoLed1Sim1BeMMlvpELllXi6aGDMtvLF37jmMLqJhoRn2DutO-ZOyuAOQ/s546/models-list.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;239&quot; data-original-width=&quot;546&quot; height=&quot;140&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVDwCVJvC7TlLfhvhxzgl7TojXdaQ_wFQxoJ2vhzKzz4oMQqAs-xlhtYlF6aAJLg_cdZ0_LUsFw6ojcRcKsTXRGt_yI6tinKr24tJN4fkbqAn4gnHv97QHIimmzOSZ_1Ij6qvoLed1Sim1BeMMlvpELllXi6aGDMtvLF37jmMLqJhoRn2DutO-ZOyuAOQ/s320/models-list.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;h3 style=&quot;text-align: left;&quot;&gt;Testing with some prompts&lt;/h3&gt;&lt;p&gt;After downloading models, you can try some prompts:&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-wWXqscfhAFA_iwFDaYw6wSMd7BlFvp3RRJUgcJA0TFxqMtZxUJdT86WyxuYnH2mxFHjzuePHzVFvbu39yk3H9m3JTNMiNidAZI7kxSdM20SfDQLg6JpPoi6okk4KBop7QFngEHHqRTT5n-gYrZd1xjRwxQIxYNuSc4luPARZarM2DJvarmJuzY-awuY/s1024/prompt-docker.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;281&quot; data-original-width=&quot;1024&quot; height=&quot;176&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-wWXqscfhAFA_iwFDaYw6wSMd7BlFvp3RRJUgcJA0TFxqMtZxUJdT86WyxuYnH2mxFHjzuePHzVFvbu39yk3H9m3JTNMiNidAZI7kxSdM20SfDQLg6JpPoi6okk4KBop7QFngEHHqRTT5n-gYrZd1xjRwxQIxYNuSc4luPARZarM2DJvarmJuzY-awuY/w640-h176/prompt-docker.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;Note that you can interact with ollama CLI directly. Either use &lt;b&gt;docker compose exec OpenWebUI bash&lt;/b&gt; or use docker desktop:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiG_4S0VnN6tqeKolKdblbyZeyYNNSlTTKmuoj0sck_RjB6Ti4yTlKUJOeQZpT_QbqWv-Kj7NygDrevBWLIvqQbFanQIUATlDBO3B0cAaTyNVh-gaIPDBnH7Sz9kDoQJhJtAQxXjZr0nNXdkHcSsgm-wQFcDKZOR-VubFl_7XgRfU7Q8kqZyyhjlMSG9MY/s983/ollama.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;345&quot; data-original-width=&quot;983&quot; height=&quot;224&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiG_4S0VnN6tqeKolKdblbyZeyYNNSlTTKmuoj0sck_RjB6Ti4yTlKUJOeQZpT_QbqWv-Kj7NygDrevBWLIvqQbFanQIUATlDBO3B0cAaTyNVh-gaIPDBnH7Sz9kDoQJhJtAQxXjZr0nNXdkHcSsgm-wQFcDKZOR-VubFl_7XgRfU7Q8kqZyyhjlMSG9MY/w640-h224/ollama.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;Now
if you want to stop the container, run &lt;span style=&quot;font-weight: bold;&quot;&gt;docker
compose down&lt;/span&gt;.&lt;/p&gt;&lt;h3 style=&quot;text-align: left;&quot;&gt;A note on GPUs&lt;br /&gt;&lt;/h3&gt;&lt;p&gt;Ollama can run models using a GPU or the CPU only. As you can see in the docker compose file, I&#39;m specifying that the container can use all the available GPUs on my machine. If you&#39;re using Docker Desktop with WSL support, ensure that you have the latest WSL and Nvidia drivers installed. You can use this command to test that Docker GPU access is working fine:&lt;br /&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark&lt;/code&gt;&lt;/pre&gt;
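&lt;p&gt;You can also check GPU visibility from inside the running container itself. A small sketch, assuming the stack above is running and the image ships the usual NVIDIA tooling (the &lt;b&gt;OpenWebUI&lt;/b&gt; service name comes from the compose file above):&lt;/p&gt;

```shell
# If GPU passthrough works, nvidia-smi inside the container lists the same GPUs as the host
docker compose exec OpenWebUI nvidia-smi
```

&lt;p&gt;If this fails while the same command succeeds on the host, the problem is usually the container runtime configuration rather than the drivers.&lt;/p&gt;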




&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;h3 style=&quot;text-align: left;&quot;&gt;Closing notes:&lt;br /&gt;&lt;/h3&gt;&lt;p&gt;It&#39;s very exciting to have an LLM running locally. It opens a lot of customization possibilities and keeps your data private.&lt;br /&gt;The machine I tried this experiment on is relatively old, so gemma2:2b was much faster than llama3.1:8b and still performed very well.&lt;br /&gt;Looking forward to experimenting more!&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Note&lt;/b&gt;: The title of this post was recommended by gemma2 :)&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/6921563420276940640/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/6921563420276940640?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6921563420276940640'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6921563420276940640'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2024/09/unlocking-power-of-llms-locally-with.html' title='Unlocking the Power of LLMs Locally with Docker Compose'/><author><name>Hesham A. 
Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEcdYXGghdCMv0HOPafJw-MStZYppk0BFiLn6N5RrvI4jecGiH3k8A3KyyuovZ9G0hmFCiouqpMfdmVZwWjtBDFVI8J_wSs7BbH2CJRIbSkomZej2IL2_1RnaysvagHezG0lhyaDxkKqkE0Kxi2E4epIoAaQvz83mvwik3n8SVbiwE2NBAvfmjNW3IU9Y/s72-c/open-webui.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2207703570650289625</id><published>2024-06-08T13:49:00.004+02:00</published><updated>2024-06-08T13:57:22.139+02:00</updated><title type='text'>The simplest data pipeline ever</title><content type='html'>&lt;p&gt;As software engineers, what we care about most is getting stuff done. The simplest approach is probably the best, as long as it doesn&#39;t cause long-term issues.&lt;/p&gt;&lt;p&gt;I was working on a project where the creation and initialization of a few PostgreSQL databases required data migration and transformation from another set of databases. The first step in this process involved loading a few hundred million records from the source databases into the destination databases.&lt;br /&gt;I wanted to implement this in a way that is fully automated, and at the same time, I wanted to simplify the process as much as possible. So, no fancy tools. No cloud infrastructure. No nothing. Is that even possible?&lt;/p&gt;&lt;p&gt;The approach we followed was simply using Linux pipes! Linux pipes can transfer data from one process to another. 
So how could this help with the data import process?&lt;/p&gt;&lt;p&gt;Postgres provides useful command-line utilities for exporting and importing data. For example, this command exports data from a table called &lt;b&gt;mydata&lt;/b&gt; to the standard output:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;psql -h source -U postgres -d test -c &quot;\copy mydata TO STDOUT&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the other hand, you can import data from standard input into a database table using this command:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;psql -h destination -U postgres -d test -c &quot;\copy mydata FROM STDIN&quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With the power of Linux pipes, it&#39;s possible to stitch these commands together so that the output of the first command feeds into the input of the next, and data flows from one database to another.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;psql -h source -U postgres -d test -c &quot;\copy mydata TO STDOUT&quot; | psql -h destination -U postgres -d test -c &quot;\copy mydata FROM STDIN&quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Pretty simple, isn&#39;t it? However, since this operation may take a few hours, wouldn&#39;t it be nice to have some sort of progress indication?&lt;br /&gt;This is where another handy Linux utility comes into play: the &lt;b&gt;pv&lt;/b&gt; utility. From the &lt;a href=&quot;https://man7.org/linux/man-pages/man1/pv.1.html&quot; target=&quot;_blank&quot;&gt;man page&lt;/a&gt;:&lt;/p&gt;&lt;blockquote&gt;pv - monitor the progress of data through a pipe&lt;/blockquote&gt;
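&lt;p&gt;To get a feel for what &quot;monitoring data through a pipe&quot; means, here is a toy sketch using only standard utilities (awk stands in for pv here, purely for illustration): every line passes through untouched while a running line count is printed to stderr.&lt;/p&gt;

```shell
# Stream 100,000 lines through a pipe; awk forwards each line to stdout
# and reports progress to stderr every 25,000 lines, so the downstream
# consumer (wc -l) still receives all the data.
seq 1 100000 \
  | awk '{ print } NR % 25000 == 0 { printf "%d lines\n", NR > "/dev/stderr" }' \
  | wc -l
```

&lt;p&gt;The final count is 100000, confirming that progress reporting doesn&#39;t interfere with the data flowing through the pipe. pv does the same job far more conveniently, with throughput and ETA estimates on top.&lt;/p&gt;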
&lt;p&gt;By default, this tool shows the number of bytes flowing through the pipe; however, for a data import, the number of imported records is a better indicator of progress. The good thing is that pv has a switch that counts lines instead of bytes, and since \copy emits one line per record, counting lines effectively counts records. So the final solution looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;psql -h source -U postgres -d test -c &quot;\copy mydata TO STDOUT&quot; | \
pv --line-mode --size 100000000 | \
psql -h destination -U postgres -d test -c &quot;\copy mydata FROM STDIN&quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that the &lt;b&gt;--size&lt;/b&gt; parameter assumes knowledge of the total number of records, which can be retrieved using a simple &lt;b&gt;select count(*)&lt;/b&gt;, or just omitted.&lt;/p&gt;&lt;p&gt;When I run this in terminal, the progress looks like:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJTaxzbeVYFFC94L0DYfe07Z9jrE1_UgYPPMqxBEoe8_PmmGQnus7I3NNR8AMoit6-f2foaF1uJbl8wNGO9vopPbhWn0u7geL4ke4AUxFPDyUsi0Lc-Wd4pSglfucjzs_tjnTPt5UQegy1khovjxq89ubHq8GkPT0DM9zm_AeZqQoZN0Ch0fJh1TTamJY/s800/progress.gif&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;48&quot; data-original-width=&quot;800&quot; height=&quot;38&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJTaxzbeVYFFC94L0DYfe07Z9jrE1_UgYPPMqxBEoe8_PmmGQnus7I3NNR8AMoit6-f2foaF1uJbl8wNGO9vopPbhWn0u7geL4ke4AUxFPDyUsi0Lc-Wd4pSglfucjzs_tjnTPt5UQegy1khovjxq89ubHq8GkPT0DM9zm_AeZqQoZN0Ch0fJh1TTamJY/w640-h38/progress.gif&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;Transferring 100m records from one database to another both running as containers on my local machine took about 8 minutes. 
In case of transferring data over the network, it&#39;s expected to be slower.&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHGPJNgawdhvCSyxOWvOac6T89V3rDxo8AfCoDST9oxg_JuzbI4X5S61Is9rre2rIhjc9HCIwXKo6TCFWmtEFfkbz69c7U4M1Hc_gjX2Mf0UO-QbQZ6BsujO7V9Whc2hsgkfOvayx3vUOdBzrWy0r0lUvc4qj2Ig6mwOpiZot8ASH_DVjNnTR7T5UuUgw/s3779/finished.jpg&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;270&quot; data-original-width=&quot;3779&quot; height=&quot;46&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHGPJNgawdhvCSyxOWvOac6T89V3rDxo8AfCoDST9oxg_JuzbI4X5S61Is9rre2rIhjc9HCIwXKo6TCFWmtEFfkbz69c7U4M1Hc_gjX2Mf0UO-QbQZ6BsujO7V9Whc2hsgkfOvayx3vUOdBzrWy0r0lUvc4qj2Ig6mwOpiZot8ASH_DVjNnTR7T5UuUgw/w640-h46/finished.jpg&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;Needless to say that in the real life implementation other steps were required like retrieving the credentials for the source and destination databases and hooking the scripts into a CI pipeline.&lt;/p&gt;&lt;p&gt;Surely, this isn&#39;t the most efficient way to transfer a lot of data. Note that the data is transferred as text which is far less efficient than the binary transfer that proper data import tools would use. As with any decision we make as software engineers it&#39;s all about tradeoffs. 
My priorities were clear: We need the simplest possible repeatable solution.&lt;/p&gt;&lt;p&gt;If you&#39;re interested in trying this on your machine, this is how I prepared the above screenshots:&lt;br /&gt;Let&#39;s start with a docker compose file which instantiates 3 containers:&lt;br /&gt;&lt;/p&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li&gt;A source database&lt;/li&gt;&lt;li&gt;A destination database&lt;/li&gt;&lt;li&gt;And a client where the data import/export process is executed.&lt;/li&gt;&lt;/ol&gt;

&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: &quot;3.8&quot;

networks:
  db-network:
    driver: bridge

services:
  source:
    image: postgres:16.1
    environment:
      - POSTGRES_PASSWORD=MYPASS123
    volumes:
      - type: volume
        source: source-data
        target: /var/lib/postgresql/data
    ports:
      - 9432:5432


  destination:
    image: postgres:16.1
    environment:
      - POSTGRES_PASSWORD=MYPASS123
    volumes:
      - type: volume
        source: destination-data
        target: /var/lib/postgresql/data
    ports:
      - 9433:5432

  client:
    container_name: postgres_client
    build: .
    entrypoint: [ &quot;sleep&quot;, &quot;infinity&quot; ]


volumes:
  source-data:
    external: true
    name: source-data

  destination-data:
    external: true
    name: destination-data
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;Note that the client container is built from a Dockerfile. This ensures that the utilities required for this process, in particular pv, are installed, and copies the .pgpass file, which contains the database passwords.&amp;nbsp;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-plain&quot;&gt;FROM postgres:16.1

RUN apt-get update &amp;&amp; apt-get install -y pv

COPY pgpass /root/.pgpass

RUN chmod 0600 /root/.pgpass

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The .pgpass file contains one line per database, in the format host:port:database:username:password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-plain&quot;&gt;source:5432:test:postgres:MYPASS123
destination:5432:test:postgres:MYPASS123
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;Since the compose file declares the volumes as external, create them first with &lt;code&gt;docker volume create source-data&lt;/code&gt; and &lt;code&gt;docker volume create destination-data&lt;/code&gt;, then start this docker compose stack using:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose up -d --build&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Connect to the databases using your favorite tool (mine is Azure Data Studio) and create the test table by executing this query:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;CREATE TABLE public.mydata (
	id int NOT NULL,
	firstname varchar NULL,
	lastname varchar NULL,
	email varchar NULL,
	CONSTRAINT mydata_pk PRIMARY KEY (id)
);
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The next step would be to populate some test data into the source database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;INSERT INTO public.mydata
(firstname, lastname, email, id)
select concat(&#39;firstname&#39;, counter), concat(&#39;lastname&#39;, counter), concat(&#39;firstname&#39;, counter, &#39;.&#39;, &#39;lastname&#39;, counter, &#39;@email.com&#39;), counter
	from pg_catalog.generate_series(1, 100000000) as counter&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then connect to the client container using:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose exec client bash&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And execute the script to start the data migration:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;psql -h source -U postgres -d test -c &quot;\copy mydata TO STDOUT&quot; | \
pv --line-mode --size 100000000 | \
psql -h destination -U postgres -d test -c &quot;\copy mydata FROM STDIN&quot;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;I hope this helps.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2207703570650289625/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2207703570650289625?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2207703570650289625'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2207703570650289625'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2024/06/the-simplest-data-pipeline-ever.html' title='The simplest data pipeline ever'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJTaxzbeVYFFC94L0DYfe07Z9jrE1_UgYPPMqxBEoe8_PmmGQnus7I3NNR8AMoit6-f2foaF1uJbl8wNGO9vopPbhWn0u7geL4ke4AUxFPDyUsi0Lc-Wd4pSglfucjzs_tjnTPt5UQegy1khovjxq89ubHq8GkPT0DM9zm_AeZqQoZN0Ch0fJh1TTamJY/s72-w640-h38-c/progress.gif" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-1381680795251495053</id><published>2024-02-16T12:41:00.002+02:00</published><updated>2024-02-16T12:41:09.552+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" 
term=".net"/><category scheme="http://www.blogger.com/atom/ns#" term="logging"/><title type='text'>Changing log level for .net apps on the fly</title><content type='html'>&lt;p&gt;Logging is very important to understand the behavior of an application. Logs can be used to analyze application behavior over an extended time period to understand trends or anomalies, but they&#39;re also critical to diagnose issues in production environments when the application is not behaving as expected.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;How many logs an application should emit is a matter of tradeoffs. Writing too many logs may negatively impact application performance and increase data transfer and storage costs without adding value. Too few logs make it very difficult to troubleshoot issues. This is why most logging frameworks allow configuring log levels so that application developers can add as much logging as needed, but only logs at or above a configured level will actually be written to the destination.&lt;/p&gt;&lt;p&gt;The challenge is that you don&#39;t need all the logs all the time. You certainly can redeploy or reconfigure the application and restart it to change the log level, but this would be a bit disruptive. The good thing is that the .net configuration system allows updating configuration values on the fly. Consider this simple web API:&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var builder = WebApplication.CreateBuilder(args);

builder.Logging.AddConsole();

var app = builder.Build();

app.MapGet(&quot;/numbers&quot;, () =&amp;gt;
{
    app.Logger.LogDebug(&quot;Debug&quot;);
    app.Logger.LogInformation(&quot;Info&quot;);
    app.Logger.LogWarning(&quot;Warning&quot;);
    app.Logger.LogError(&quot;Error&quot;);

    return Enumerable.Range(0, 10);
});

app.Run();
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With logging configuration file:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;Logging&quot;: {
    &quot;LogLevel&quot;: {
      &quot;Default&quot;: &quot;Error&quot;,
      &quot;Microsoft.AspNetCore&quot;: &quot;Warning&quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;When the &lt;code&gt;/numbers&lt;/code&gt; endpoint is called, these logs are written to the console:
&lt;pre&gt;&lt;code class=&quot;language-http&quot;&gt;fail: ConfigReload[0]
      Error
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This is clearly because the configured default log level is &quot;Error&quot;. You can add a simple endpoint that changes the log level on the fly, like this:&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;app.MapGet(&quot;/config&quot;, (string level) =&amp;gt; 
{
    if (app.Services.GetRequiredService&amp;lt;IConfiguration&amp;gt;() is not IConfigurationRoot configRoot)
        return;

    configRoot[&quot;Logging:LogLevel:Default&quot;] = level;
    configRoot.Reload();
});&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When you issue the GET request &lt;code&gt;/config?level=Information&lt;/code&gt; and then invoke the &lt;code&gt;/numbers&lt;/code&gt; endpoint again, the log output will look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-http&quot;&gt;info: ConfigReload[0]
      Info
warn: ConfigReload[0]
      Warning
fail: ConfigReload[0]
      Error
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;


Similarly, to configure the log level to Debug, invoke &lt;code&gt;/config?level=Debug&lt;/code&gt;. Very simple.&lt;/p&gt;&lt;p&gt;There are a few gotchas to consider:&lt;/p&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li&gt;The /config endpoint should be secured: only a privileged user should be able to invoke it, as it changes the application behavior. I&#39;ve intentionally ignored this in my example for simplicity.&lt;/li&gt;&lt;li&gt;If there are many instances serving the same API, the /config invocation will be directed by the load balancer to only one instance of your application, which most probably won&#39;t be sufficient. In this case you will need another approach to signal to your application that the log level should be modified. One approach could be a pub-sub system that allows multiple consumers. This may be the subject of another blog post.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Another common approach for reconfiguring .net applications on the fly is using a configuration source that refreshes automatically at a specific time interval or based on config file change detection. &lt;br /&gt;However, the time-based approach means that you have to wait until a certain time elapses for the application to reconfigure itself, which may not be desirable, as you want to change the log level as quickly as possible. A file change detection approach is not great for immutable deployments like container-based applications or serverless functions.&lt;/p&gt;&lt;p&gt;Logging and monitoring are quality attributes that should be taken into consideration during the application design. 
In case you&#39;re not using a more advanced observability tooling that allow profiling for example then the technique proposed in this blog post may be of help.&lt;br /&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/1381680795251495053/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/1381680795251495053?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/1381680795251495053'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/1381680795251495053'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2024/02/changing-log-level-for-net-apps-on-fly.html' title='Changing log level for .net apps on the fly'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-7101793762499315241</id><published>2024-01-12T12:05:00.000+02:00</published><updated>2024-01-12T12:05:21.601+02:00</updated><title type='text'>Assertions of Equality and Equivalence</title><content type='html'>&lt;p&gt;I remember that I encountered an interesting bug that was not detected by unit tests because the behaviour of the test framework did not match my expectations.&lt;br /&gt;The test was supposed to verify that the contents of an array (or a list) returned by the code under test match an expected 
array of elements in the specific order of that expected array. The unit test was passing; however, the team later discovered a bug, and the root cause was that the array was not in the correct order! This is exactly why we write automated tests, but the test failed us.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;The test, which uses the &lt;a href=&quot;https://fluentassertions.com/&quot; target=&quot;_blank&quot;&gt;FluentAssertions&lt;/a&gt; library, basically looked like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;[Test]
public void FluentAssertions_Unordered_Pass()
{
	var actual = new List&amp;lt;int&amp;gt;  {1, 2, 3}; // SUT invocation here
	var expected = new [] {3, 2, 1};

	actual.Should().BeEquivalentTo(expected);
}
&lt;/code&gt;&lt;/pre&gt;
Although the order of the elements of the actual array doesn&#39;t match the expected one, the test passes. This is not a bug in FluentAssertions. It&#39;s by design, and the solution is simple:

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;actual.Should().BeEquivalentTo(expected, config =&amp;gt; config.WithStrictOrdering());
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;The config parameter enforces a specific order of the collection. It&#39;s also possible to configure this globally, when initializing the test assembly for example:

&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;AssertionOptions.AssertEquivalencyUsing(config =&amp;gt; config.WithStrictOrdering());
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The default behavior of this method annoyed me. In my opinion, the test method should be strict by default. That is, it should assume that the collection should be sorted, and can be made more lenient by overriding this behavior. Not the opposite.&lt;/p&gt;&lt;p&gt;I probably got into the habit of using &lt;code&gt;BeEquivalentTo()&lt;/code&gt;, while an &lt;code&gt;Equal()&lt;/code&gt; assertion exists, which &quot;Expects the current collection to contain all the same elements in the same order&quot; as its default behavior. There are other differences between &lt;code&gt;BeEquivalentTo()&lt;/code&gt; and &lt;code&gt;Equal()&lt;/code&gt; that don&#39;t matter in this context.&amp;nbsp;&lt;/p&gt;&lt;p&gt;

&lt;/p&gt;&lt;p&gt;Similar behavior applies to NUnit assertions, although there is no way to override the equivalence behavior:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;[Test]
public void NUnit_Unordered_Pass()
{
	var actual = new [] {1, 2, 3};
	var expected = new List&lt;int&gt; {3, 2, 1};

	Assert.That(actual, Is.EquivalentTo(expected)); // pass
	CollectionAssert.AreEquivalent(expected, actual); // pass
}
&lt;/code&gt;&lt;/pre&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;[Test]
public void NUnit_Unordered_Fail()
{
	var actual = new [] {1, 2, 3};
	var expected = new List&amp;lt;int&amp;gt; {3, 2, 1};

	Assert.That(actual, Is.EqualTo(expected)); // fail
	CollectionAssert.AreEqual(expected, actual); // fail
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;It&#39;s important to understand the behavior of the testing library to avoid similar mistakes. We rely on tests as our safety net, and they had better be reliable!

&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/7101793762499315241/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/7101793762499315241?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7101793762499315241'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7101793762499315241'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2024/01/assertions-of-equality-and-equivalence.html' title='Assertions of Equality and Equivalence'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-4367613949834761491</id><published>2023-09-22T14:12:00.004+02:00</published><updated>2023-09-22T14:18:49.997+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term=".net"/><category scheme="http://www.blogger.com/atom/ns#" term="C#"/><title type='text'>Handling special content with Handlebars.net Helpers</title><content type='html'>&lt;p&gt;Generating formatted reports based on application data is a very common need. For example, you may want to create an HTML page with content from a receipt. This content may be sent in an HTML formatted email or converted to PDF or any other use case. 
To achieve this, a flexible and capable templating engine is needed to transform the application data to a human readable format.&lt;br /&gt;.net has a very powerful templating engine that&#39;s used in its asp.net web framework which is Razor templates. But what if you want to use a templating engine that is simpler, and doesn&#39;t require a web stack as in the case of building background jobs, desktop or mobile applications?&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcGswphhxCStflt5FBVBgKsyv1MHHrLXhvFONloWbXDGGl6y8QVJaEFninlpnoHVojn0rjmR69qM8O1HnmCJZZFlxAT-ZcvAj_k2HHao8l8zTdqkyLfDUl3QS9NK-MfrUGbC-yrbV8w4MxWroJEA9jvbfyq9MzSzHaoqq5pINYRz-w8eKlYKvrQdQzZuM/s308/Screenshot%202023-09-22%20221733.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;222&quot; data-original-width=&quot;308&quot; height=&quot;222&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcGswphhxCStflt5FBVBgKsyv1MHHrLXhvFONloWbXDGGl6y8QVJaEFninlpnoHVojn0rjmR69qM8O1HnmCJZZFlxAT-ZcvAj_k2HHao8l8zTdqkyLfDUl3QS9NK-MfrUGbC-yrbV8w4MxWroJEA9jvbfyq9MzSzHaoqq5pINYRz-w8eKlYKvrQdQzZuM/s1600/Screenshot%202023-09-22%20221733.jpg&quot; width=&quot;308&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;http://Handlebars.net&quot; target=&quot;_blank&quot;&gt;Handlebars.net&lt;/a&gt; is a .net implementation of the famous &lt;a href=&quot;https://handlebarsjs.com/&quot; target=&quot;_blank&quot;&gt;HandlebarsJS&lt;/a&gt; templating framework. From Handlebars.net Github repository:&lt;br /&gt;&lt;/p&gt;&lt;blockquote&gt;&quot;Handlebars.Net doesn&#39;t use a scripting engine to run a Javascript library - it compiles Handlebars templates directly to IL bytecode. 
It also mimics the JS library&#39;s API as closely as possible.&quot; &lt;/blockquote&gt;For example, consider this collection of data that should be rendered as an HTML table:&lt;p&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var employees = new [] 
{
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-20),
        Name = &quot;John Smith&quot;,
        Photo = new Uri(&quot;https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg&quot;)
    },
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-25),
        Name = &quot;Jack&quot;,
        Photo = new Uri(&quot;https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg&quot;)
    },
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-40),
        Name = &quot;Iron Man&quot;,
        Photo = new Uri(&quot;https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png&quot;)
    },
};

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A Handlebars template may look like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;html&amp;gt;
&amp;lt;body&amp;gt;
	&amp;lt;table&amp;nbsp;border=&quot;1&quot;&amp;gt;
		&amp;lt;thead&amp;gt;
			&amp;lt;tr&amp;gt;
				&amp;lt;th&amp;gt;Name&amp;lt;/th&amp;gt;
				&amp;lt;th&amp;gt;Age&amp;lt;/th&amp;gt;
				&amp;lt;th&amp;gt;Photo&amp;lt;/th&amp;gt;
			&amp;lt;/tr&amp;gt;
		&amp;lt;/thead&amp;gt;
		&amp;lt;tbody&amp;gt;
			{{#each&amp;nbsp;this}}
			&amp;lt;tr&amp;gt;
				&amp;lt;td&amp;gt;{{Name}}&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;{{BirthDate}}&amp;lt;/td&amp;gt;
			&amp;lt;/tr&amp;gt;
			{{/each}}
		&amp;lt;/tbody&amp;gt;
	&amp;lt;/table&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The template is fairly simple. Explaining the syntax of Handlebars templates is beyond the scope of this article. Check the &lt;a href=&quot;https://handlebarsjs.com/guide/&quot;&gt;HandlebarsJS Language Guide&lt;/a&gt; for information regarding its syntax.&lt;/p&gt;&lt;p&gt;Passing the data to Handlebars.net and rendering the template is easy:&lt;/p&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp line-numbers&quot;&gt;var template = File.ReadAllText(&quot;List.handlebars&quot;);
var compiledTemplate = Handlebars.Compile(template);
var output = compiledTemplate(employees);

Console.WriteLine(output);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Line 1 reads the List.handlebars template, which is stored in the same application folder; alternatively, the template can be stored as an embedded resource, retrieved from a database, or even created on the fly.&lt;br /&gt;Line 2 compiles the template, generating a function that can be invoked later.&amp;nbsp;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Note&lt;/b&gt;: For good performance, the compiled template should be generated once and used multiple times during the lifetime of the application.&lt;/i&gt;&lt;br /&gt;&lt;/p&gt;&lt;p&gt;Line 3 invokes the function, passing the employees collection, and receives the rendered output in a string variable.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;This is the generated HTML:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;
&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;html&amp;gt;
&amp;lt;body&amp;gt;
	&amp;lt;table&amp;nbsp;border=&quot;1&quot;&amp;gt;
		&amp;lt;thead&amp;gt;
			&amp;lt;tr&amp;gt;
				&amp;lt;th&amp;gt;Name&amp;lt;/th&amp;gt;
				&amp;lt;th&amp;gt;Age&amp;lt;/th&amp;gt;
				&amp;lt;th&amp;gt;Photo&amp;lt;/th&amp;gt;
			&amp;lt;/tr&amp;gt;
		&amp;lt;/thead&amp;gt;
		&amp;lt;tbody&amp;gt;
			&amp;lt;tr&amp;gt;
				&amp;lt;td&amp;gt;John&amp;nbsp;Smith&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;2003-09-09T22:08:23.3541971+10:00&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;&amp;lt;img&amp;nbsp;src=&quot;https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg&quot;&amp;nbsp;width=&quot;200px&quot;&amp;nbsp;height=&quot;200px&quot;&amp;nbsp;/&amp;gt;&amp;lt;/td&amp;gt;
			&amp;lt;/tr&amp;gt;
			&amp;lt;tr&amp;gt;
				&amp;lt;td&amp;gt;Jack&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;1998-09-09T22:08:23.3839317+10:00&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;&amp;lt;img&amp;nbsp;src=&quot;https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg&quot;&amp;nbsp;width=&quot;200px&quot;&amp;nbsp;height=&quot;200px&quot;&amp;nbsp;/&amp;gt;&amp;lt;/td&amp;gt;
			&amp;lt;/tr&amp;gt;
			&amp;lt;tr&amp;gt;
				&amp;lt;td&amp;gt;Iron&amp;nbsp;Man&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;1983-09-09T22:08:23.3839479+10:00&amp;lt;/td&amp;gt;
				&amp;lt;td&amp;gt;&amp;lt;img&amp;nbsp;src=&quot;https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png&quot;&amp;nbsp;width=&quot;200px&quot;&amp;nbsp;height=&quot;200px&quot;&amp;nbsp;/&amp;gt;&amp;lt;/td&amp;gt;
			&amp;lt;/tr&amp;gt;
		&amp;lt;/tbody&amp;gt;
	&amp;lt;/table&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And this is how the output is rendered by a browser:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha2QtxJaeQDEmKzP11iFUDPSMdWti20LLfNjUdeOxVFG2VHXxPqoA-Vjtff9GYsMlfTz3w1T9D5ILggk-kYZ0oNo3Sf0MI9eXweirQjN5xqTnEVyZwuuGZESZ0RSczYvEhfvnxf8vGas8GOLUyq1bBzIigOtwMYlAX5A5257c6wFnClpIbU0AmGwBEFAA/s646/1.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;646&quot; data-original-width=&quot;541&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha2QtxJaeQDEmKzP11iFUDPSMdWti20LLfNjUdeOxVFG2VHXxPqoA-Vjtff9GYsMlfTz3w1T9D5ILggk-kYZ0oNo3Sf0MI9eXweirQjN5xqTnEVyZwuuGZESZ0RSczYvEhfvnxf8vGas8GOLUyq1bBzIigOtwMYlAX5A5257c6wFnClpIbU0AmGwBEFAA/s16000/1.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;Putting aside lack of styling which has nothing to do with Handlebars, the output seems good but suffers for two issues:&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li&gt;The format of the Age property is not great.&lt;/li&gt;&lt;li&gt;The image tags rendered by the template reference the full URL of the images. Every time the generated HTML is consumed and rendered, it will have to fetch the images from their sources, which may be inconvenient. 
Additionally, the generated template is not self-contained, and other services that consume the generated HTML (like an HTML to PDF conversion service) will have to download the images.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Although Handlebars has a powerful templating language, it&#39;s impossible to cover all needs that may arise; this is why Handlebars.net provides the ability to define custom helpers.&lt;br /&gt;&amp;nbsp;&lt;/p&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;Custom Helpers:&amp;nbsp;&lt;/h4&gt;&lt;div style=&quot;text-align: left;&quot;&gt;Helpers provide an extensibility mechanism to customize the rendered output. Once created and registered with Handlebars.net, they can be invoked from templates as if they were part of Handlebars&#39; templating language.&lt;br /&gt;Let&#39;s use helpers to solve the date format issue:&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;

&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;Handlebars.RegisterHelper(&quot;formatDate&quot;, (output, context, arguments)
                =&amp;gt; { output.Write(((DateTime)arguments[0]).ToString(arguments[1].ToString())); });
&lt;/code&gt;&lt;/pre&gt;
  
  
&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;This one-liner registers a formatDate helper that formats its first argument using the format string passed as the second argument. To call this helper from the template:&lt;br /&gt;&lt;br /&gt;&lt;/div&gt;

&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;td&amp;gt;{{formatDate&amp;nbsp;BirthDate&amp;nbsp;&quot;dd/MM/yyyy&quot;}}&amp;lt;/td&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The rendered output is much better now:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYOiTiDNk3xywfdwcSdIAVbIaS82lze2CYhFW_jupZ_23DSkDc8WqjK2Y9ZLMeVQPdWzMcuzEgBzz8zuQniiV0TbTGqcy0Td9komdODrxs2Gb-wIX-ZyEogXyR-d1rnmRsNeqGTWDNROasZIuaDFwiKCyRriRcCdpTkBc5o2xpGZ1yR7nYmZXSQ8Oabgo/s646/2.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;646&quot; data-original-width=&quot;367&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYOiTiDNk3xywfdwcSdIAVbIaS82lze2CYhFW_jupZ_23DSkDc8WqjK2Y9ZLMeVQPdWzMcuzEgBzz8zuQniiV0TbTGqcy0Td9komdODrxs2Gb-wIX-ZyEogXyR-d1rnmRsNeqGTWDNROasZIuaDFwiKCyRriRcCdpTkBc5o2xpGZ1yR7nYmZXSQ8Oabgo/s16000/2.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;Embedding images in the HTML output&lt;/h4&gt;&lt;div style=&quot;text-align: left;&quot;&gt;To solve the second issue mentioned above, we can write a custom helper to embed image content using the &lt;a href=&quot;https://en.wikipedia.org/wiki/Data_URI_scheme&quot; target=&quot;_blank&quot;&gt;data URI scheme&lt;/a&gt;.&lt;br /&gt;This is a basic implementation of this &quot;embeddedImage&quot; helper:&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;Handlebars.RegisterHelper(&quot;embeddedImage&quot;, (output, context, arguments) =&amp;gt;
{
    var url = arguments[0] as Uri;
    using var httpClient = new HttpClient();

    // add the User-Agent header required by Wikipedia; you can safely omit this line for other sources
    httpClient.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue(&quot;example.com-bot&quot;, &quot;1.0&quot;));

    // helpers are synchronous, so we block on the download (acceptable for a simple example)
    var content = httpClient.GetByteArrayAsync(url).Result;
    var encodedContent = Convert.ToBase64String(content);
    output.Write(&quot;data:image/png;base64,&quot; + encodedContent);
});
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The code uses an HttpClient to download the image as a byte array, encodes it using Base64, and writes the result as a data URI in the standard format. The usage is very simple:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;img&amp;nbsp;width=&quot;200px&quot;&amp;nbsp;height=&quot;200px&quot;&amp;nbsp;src=&quot;{{embeddedImage&amp;nbsp;Photo}}&quot;&amp;nbsp;&amp;nbsp;/&amp;gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;And the HTML output looks like: (trimmed for brevity)&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;img&amp;nbsp;width=&quot;200px&quot;&amp;nbsp;height=&quot;200px&quot;&amp;nbsp;src=&quot;data:image/png;base64,/9j/4gIcSUNDX1BST0ZJTEUAAQEAAAIMbGNtcwIQAABtbnRyUkdCIFhZWiAH3AABABkAAwApAD.....&lt;/code&gt;&lt;/pre&gt;
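&lt;p&gt;One refinement worth noting: the helper above always writes &lt;code&gt;image/png&lt;/code&gt;, even when the downloaded bytes are actually a JPEG or a GIF. A data URI is more correct when it names the real content type, which can be inferred from the image&#39;s leading &quot;magic&quot; bytes. The following is a minimal, self-contained sketch of that idea (the &lt;code&gt;SniffImageMimeType&lt;/code&gt; and &lt;code&gt;ToDataUri&lt;/code&gt; helpers are illustrative, not part of Handlebars.net):&lt;/p&gt;

```csharp
using System;

// Infer an image MIME type from the file's leading "magic" bytes.
static string SniffImageMimeType(byte[] content)
{
    // PNG: 89 50 4E 47
    if (content.Length >= 4 && content[0] == 0x89 && content[1] == 0x50
        && content[2] == 0x4E && content[3] == 0x47)
        return "image/png";
    // JPEG: FF D8 FF
    if (content.Length >= 3 && content[0] == 0xFF && content[1] == 0xD8 && content[2] == 0xFF)
        return "image/jpeg";
    // GIF: "GIF"
    if (content.Length >= 3 && content[0] == (byte)'G' && content[1] == (byte)'I' && content[2] == (byte)'F')
        return "image/gif";
    return "application/octet-stream";
}

// Build an RFC 2397 data URI from raw image bytes.
static string ToDataUri(byte[] content) =>
    $"data:{SniffImageMimeType(content)};base64,{Convert.ToBase64String(content)}";

byte[] pngSignature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };
Console.WriteLine(ToDataUri(pngSignature)); // prints data:image/png;base64,iVBORw0KGgo=
```

&lt;p&gt;Inside the &lt;code&gt;embeddedImage&lt;/code&gt; helper, the last line would then become &lt;code&gt;output.Write(ToDataUri(content));&lt;/code&gt; and PNG, JPEG, and GIF sources would all be labeled correctly.&lt;/p&gt;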


&lt;h4 style=&quot;text-align: left;&quot;&gt;Conclusion&lt;/h4&gt;&lt;p&gt;One of the most important design principles is the &lt;a href=&quot;https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle&quot; target=&quot;_blank&quot;&gt;Open-Closed Principle&lt;/a&gt;: software entities should be open for extension but closed for modification. Handlebars and Handlebars.net apply this principle by allowing users to extend the functionality of the library without having to modify its source code, which is a good design. &lt;br /&gt;With a plethora of free and commercial libraries available to developers, the level of extensibility should be one of the evaluation criteria used during the selection process.&lt;br /&gt;What other templating libraries have you used in .net applications, and how extensible are they? &lt;br /&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/4367613949834761491/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/4367613949834761491?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4367613949834761491'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4367613949834761491'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2023/09/handling-special-content-with.html' title='Handling special content with Handlebars.net Helpers'/><author><name>Hesham A. 
Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcGswphhxCStflt5FBVBgKsyv1MHHrLXhvFONloWbXDGGl6y8QVJaEFninlpnoHVojn0rjmR69qM8O1HnmCJZZFlxAT-ZcvAj_k2HHao8l8zTdqkyLfDUl3QS9NK-MfrUGbC-yrbV8w4MxWroJEA9jvbfyq9MzSzHaoqq5pINYRz-w8eKlYKvrQdQzZuM/s72-c/Screenshot%202023-09-22%20221733.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2586059287431506765</id><published>2023-06-30T14:01:00.000+02:00</published><updated>2023-06-30T14:01:18.314+02:00</updated><title type='text'>Mind games of measurements and estimates: Hidden meanings behind numbers and units</title><content type='html'>&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQN4qIsoyLT6hPqU8rO9szCgjsJWvhSFMU3UQBjdeh9c0Ne8NoATUX51Qi3662qscrajoY_J12vNLNHqXUENomW9q0Mos2aOZiw7c-sltd9I0Bz-ogYpIQL-Jep-q7GmZb8lonwTOHZ34fF6hK8w7U-Oia2gtl49H3YxcswWu3tAjI5YHRH0PM0rUBGek/s960/measurement-1476913_960_720.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;640&quot; data-original-width=&quot;960&quot; height=&quot;288&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQN4qIsoyLT6hPqU8rO9szCgjsJWvhSFMU3UQBjdeh9c0Ne8NoATUX51Qi3662qscrajoY_J12vNLNHqXUENomW9q0Mos2aOZiw7c-sltd9I0Bz-ogYpIQL-Jep-q7GmZb8lonwTOHZ34fF6hK8w7U-Oia2gtl49H3YxcswWu3tAjI5YHRH0PM0rUBGek/w433-h288/measurement-1476913_960_720.jpg&quot; width=&quot;433&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;I&#39;m a fan of science and nature documentaries. A few years ago, National Geographic Abu Dhabi was my favorite channel. It primarily featured original NatGeo content, which was dubbed in Arabic.&lt;br /&gt;The content variety and interesting topics -from construction to wildlife, air crash investigations, and even UFOs- provided me with a stream of knowledge and enjoyment. But at times, also confusion!&lt;br /&gt;&lt;br /&gt;One source of confusion was the highly accurate numbers used to describe things that normally could not be measured to that level of accuracy!&lt;br /&gt;In one instance, a wild animal was described as weighing something like 952 kilograms. Not 900, not 1000 or even 950, but exactly 952.&lt;br /&gt;In another instance, a man was describing a flying object, and he mentioned that the altitude of that object was 91 meters. That man must have laser distance meters in his eyes!&lt;br /&gt;&lt;br /&gt;When I thought about this, I figured out that probably while translating these episodes, units of measurement were converted from pounds to kilograms, from feet and yards to meters, from miles to kilometers, and so on. This is because the metric system is used in the Arab world and is more understandable to the audience.&lt;br /&gt;Converting the above numbers back to the original units made them sound more logical. The wild animal weighed approximately 2200 pounds, and the man was describing an object flying about 100 yards or 300 feet high. 
That made much more sense.&lt;br /&gt;&lt;br /&gt;But why did these round-figure numbers seem more logical and more acceptable when talking about things that cannot be accurately measured? After all, 2200 pounds are equal to 952 kilograms, and 100 yards are 91.44 meters. Right?&lt;br /&gt;&lt;br /&gt;Apparently, the way we perceive numbers in casual conversations implicitly associates an accuracy level with them.&lt;br /&gt;This &lt;a href=&quot;https://en.wikipedia.org/w/index.php?title=Decimal&amp;amp;oldid=1162494076#cite_note-8&quot; target=&quot;_blank&quot;&gt;Wikipedia note&lt;/a&gt; gives an example of this:&lt;br /&gt;&quot;Sometimes, the extra zeros are used for indicating the accuracy of a measurement. For example, &quot;15.00 m&quot; may indicate that the measurement error is less than one centimetre (0.01 m), while &quot;15 m&quot; may mean that the length is roughly fifteen metres and that the error may exceed 10 centimetres.&quot;&lt;br /&gt;&lt;br /&gt;Similarly, smaller units can be used to give a deceiving indication of accuracy. A few years ago, I was working with a colleague on high-level estimates for a software project. We used weeks as our unit of estimate because -as expected- we knew very little about the project, and we expressed this in terms of coarse-grained estimates.&lt;br /&gt;From experience, we knew that this level of accuracy wouldn&#39;t be welcomed by those who requested the estimates, and that they might want more accurate ones. I laughingly told my colleague: &quot;If they want the estimates in hours, they can multiply these numbers by 40!&quot;. I feel I was being mean saying that. Of course the point was the accuracy, not the unit conversion.&lt;br /&gt;&lt;br /&gt;One nice thing about using Fibonacci numbers in relative estimates is that they detach the numeric estimates from any perceived accuracy. 
When the estimate is 13 story points, it&#39;s totally clear that the only reason it&#39;s 13 -and not 12 or 14, for example- is not that we believe it to be accurately 13; it&#39;s just that the other numbers don&#39;t appear on the estimation cards. It&#39;s simply a best guess.&lt;br /&gt;&lt;br /&gt;Beware of the effects of the units and numbers you use. They may communicate more than you originally intended.&lt;br /&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2586059287431506765/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2586059287431506765?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2586059287431506765'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2586059287431506765'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2023/06/mind-games-of-measurements-and.html' title='Mind games of measurements and estimates: Hidden meanings behind numbers and units'/><author><name>Hesham A. 
Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQN4qIsoyLT6hPqU8rO9szCgjsJWvhSFMU3UQBjdeh9c0Ne8NoATUX51Qi3662qscrajoY_J12vNLNHqXUENomW9q0Mos2aOZiw7c-sltd9I0Bz-ogYpIQL-Jep-q7GmZb8lonwTOHZ34fF6hK8w7U-Oia2gtl49H3YxcswWu3tAjI5YHRH0PM0rUBGek/s72-w433-h288-c/measurement-1476913_960_720.jpg" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2660375476664549020</id><published>2023-05-10T13:37:00.004+02:00</published><updated>2023-05-10T13:38:26.389+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term=".net"/><category scheme="http://www.blogger.com/atom/ns#" term="C#"/><title type='text'>Setting exit code of a .net worker application</title><content type='html'>&lt;p&gt;When building a .net worker application with a hosted service based on the &lt;code&gt;BackgroundService&lt;/code&gt; class, it&#39;s sometimes required to set the application&#39;s exit code based on the outcome of the hosted service&#39;s execution.&lt;/p&gt;&lt;p&gt;One trivial way to do this is to set the &lt;code&gt;Environment.ExitCode&lt;/code&gt; property from the hosted service:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;



&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;
public class Worker : BackgroundService
{
    public Worker()
    {

    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception(&quot;Something bad happened&quot;);
        }
        catch
        {
            Environment.ExitCode = 1;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This works; however, consider these unit tests:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;
[Test]
public async Task Test1()
{
    Worker sut = new Worker();
    await sut.StartAsync(new CancellationToken());

    Assert.That(Environment.ExitCode, Is.EqualTo(1));
}

[Test]
public void Test2()
{
    // another test
    Assert.That(Environment.ExitCode, Is.EqualTo(0));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
  &lt;code&gt;Test1&lt;/code&gt; passes; however, &lt;code&gt;Test2&lt;/code&gt; fails because &lt;code&gt;Environment.ExitCode&lt;/code&gt; is a static property shared by the whole process. You can reset it back to zero after the test, but this is error-prone. So what is the alternative?&lt;/p&gt;&lt;p&gt;One simple solution is to use a status-holding class as a singleton and inject it into the background service: &lt;br /&gt;&lt;/p&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;
public interface IStatusHolder
{
    public int Status { get; set; }
}

public class StatusHolder : IStatusHolder
{
    public int Status { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;
public class Worker : BackgroundService
{
    private readonly IStatusHolder _statusHolder;

    public Worker(IStatusHolder statusHolder)
    {
        _statusHolder = statusHolder;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception(&quot;Something bad happened&quot;);
        }
        catch
        {
            _statusHolder.Status = 1;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
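&lt;p&gt;To see why injecting the status restores test isolation, here is a minimal, self-contained sketch. The &lt;code&gt;RunWorker&lt;/code&gt; function below is a hypothetical stand-in for the worker&#39;s &lt;code&gt;ExecuteAsync&lt;/code&gt; body (it avoids the hosting dependencies): the failure is reported through injected state, while the process-wide &lt;code&gt;Environment.ExitCode&lt;/code&gt; stays untouched:&lt;/p&gt;

```csharp
using System;

// Stand-in for the worker's ExecuteAsync body: failure is reported through
// an injected callback (the role IStatusHolder plays) instead of mutating
// the process-wide Environment.ExitCode.
static void RunWorker(Action<int> reportStatus)
{
    try
    {
        throw new Exception("Something bad happened");
    }
    catch
    {
        reportStatus(1); // equivalent to _statusHolder.Status = 1
    }
}

// "Test 1" observes the failure through its own local state...
int status = 0;
RunWorker(code => status = code);
Console.WriteLine($"reported status: {status}");         // prints: reported status: 1

// ...and "Test 2" still sees a pristine process-wide exit code.
Console.WriteLine($"exit code: {Environment.ExitCode}"); // prints: exit code: 0
```

&lt;p&gt;Because each test owns its own status, nothing leaks between tests, and the host remains free to copy the final value into &lt;code&gt;Environment.ExitCode&lt;/code&gt; once, at shutdown.&lt;/p&gt;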

&lt;p&gt;A simple &lt;code&gt;Program.cs&lt;/code&gt; would look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;line-numbers language-csharp&quot;&gt;
using EnvironmentExit;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =&amp;gt;
    {
        services.AddHostedService&amp;lt;Worker&amp;gt;();
        services.AddSingleton&amp;lt;IStatusHolder, StatusHolder&amp;gt;();
    })
    .Build();

host.Start();

var statusHolder = host.Services.GetRequiredService&amp;lt;IStatusHolder&amp;gt;();
Environment.ExitCode = statusHolder.Status;

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Note that line number 8 registers &lt;code&gt;IStatusHolder&lt;/code&gt; as a singleton, which is important to maintain its state. &lt;br /&gt;&lt;/p&gt;&lt;p&gt;Now all tests pass.&amp;nbsp;Additionally, when the application runs, the exit code is 1. &lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2660375476664549020/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2660375476664549020?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2660375476664549020'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2660375476664549020'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2023/05/setting-exit-code-of-net-worker.html' title='Setting exit code of a .net worker application'/><author><name>Hesham A. 
Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-6275653866211242134</id><published>2023-01-27T14:12:00.004+02:00</published><updated>2023-01-27T14:13:41.545+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Linux"/><category scheme="http://www.blogger.com/atom/ns#" term="PowerShell"/><title type='text'>PowerShell core compatibility: A lesson learned the hard way</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;&lt;p style=&quot;text-align: left;&quot;&gt;PowerShell
core is my preferred scripting language. I&#39;ve been excited about it since its
early days. Here&#39;s a tweet from back in 2016 when PowerShell core was still in
beta:&lt;/p&gt;&lt;/div&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote class=&quot;twitter-tweet&quot; data-theme=&quot;light&quot;&gt;&lt;p dir=&quot;ltr&quot; lang=&quot;en&quot;&gt;Running &lt;a href=&quot;https://twitter.com/hashtag/PowerShell?src=hash&amp;amp;ref_src=twsrc%5Etfw&quot;&gt;#PowerShell&lt;/a&gt; on &lt;a href=&quot;https://twitter.com/hashtag/bash?src=hash&amp;amp;ref_src=twsrc%5Etfw&quot;&gt;#bash&lt;/a&gt; on &lt;a href=&quot;https://twitter.com/hashtag/Ubuntu?src=hash&amp;amp;ref_src=twsrc%5Etfw&quot;&gt;#Ubuntu&lt;/a&gt; on &lt;a href=&quot;https://twitter.com/hashtag/Windows10?src=hash&amp;amp;ref_src=twsrc%5Etfw&quot;&gt;#Windows10&lt;/a&gt; . Just because I can :) &lt;a href=&quot;https://t.co/VlBppczZ6i&quot;&gt;pic.twitter.com/VlBppczZ6i&lt;/a&gt;&lt;/p&gt;— Hesham A. Amin (@HeshamAmin) &lt;a href=&quot;https://twitter.com/HeshamAmin/status/766614271149109249?ref_src=twsrc%5Etfw&quot;&gt;August 19, 2016&lt;/a&gt;&lt;/blockquote&gt; &lt;script async=&quot;&quot; charset=&quot;utf-8&quot; src=&quot;https://platform.twitter.com/widgets.js&quot;&gt;&lt;/script&gt; 
&lt;p&gt; I&#39;ve
used PowerShell to automate build steps, deployments, and other tasks in both
dev environments and CI/CD pipelines. It&#39;s great to write a script on my Windows
machine, test it using PowerShell core, and run it on my docker Linux-based
build environments with 100% compatibility. Or so I thought until I learned
otherwise!&lt;/p&gt;
&lt;p&gt;A few years ago, I was automating a process which required creating a folder if it didn&#39;t exist. Out of laziness, this is how I implemented this functionality:&amp;nbsp;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;line-numbers language-powershell&quot;&gt;mkdir $folder -f&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the folder exists and the -f (short for -Force) flag is passed, the command returns the existing directory object without errors. I
know this is not the cleanest way -more on this later- but it works on my
Windows machine, so it should also work in the docker Linux container, except
that it didn&#39;t. When the script ran, it resulted in this error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;/bin/mkdir: invalid option -- &#39;f&#39;
Try &#39;/bin/mkdir --help&#39; for more information.&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Why did the behavior differ? It turns out that mkdir means different things depending on whether you&#39;re running PowerShell on Windows or Linux. And this can be observed using the Get-Command cmdlet:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;line-numbers language-powershell&quot;&gt;# Windows:
Get-Command mkdir&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The output is: &lt;br /&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;CommandType     Name                                               Version
-----------     ----                                               -------
Function        mkdir&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Under Windows, mkdir is a function, and the definition of this function can be obtained using&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;(Get-Command mkdir).Definition&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And the output is:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;line-numbers language-powershell&quot;&gt;&amp;lt;#
.FORWARDHELPTARGETNAME New-Item
.FORWARDHELPCATEGORY Cmdlet
#&amp;gt;

[CmdletBinding(DefaultParameterSetName=&#39;pathSet&#39;,
    SupportsShouldProcess=$true,
    SupportsTransactions=$true,
    ConfirmImpact=&#39;Medium&#39;)]
    [OutputType([System.IO.DirectoryInfo])]
param(
    [Parameter(ParameterSetName=&#39;nameSet&#39;, Position=0, ValueFromPipelineByPropertyName=$true)]
    [Parameter(ParameterSetName=&#39;pathSet&#39;, Mandatory=$true, Position=0, ValueFromPipelineByPropertyName=$true)]
    [System.String[]]
    ${Path},

    [Parameter(ParameterSetName=&#39;nameSet&#39;, Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [AllowNull()]
    [AllowEmptyString()]
    [System.String]
    ${Name},

    [Parameter(ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [System.Object]
    ${Value},

    [Switch]
    ${Force},

    [Parameter(ValueFromPipelineByPropertyName=$true)]
    [System.Management.Automation.PSCredential]
    ${Credential}
)

begin {
    $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand(&#39;New-Item&#39;, [System.Management.Automation.CommandTypes]::Cmdlet)
    $scriptCmd = {&amp;amp; $wrappedCmd -Type Directory @PSBoundParameters }

    $steppablePipeline = $scriptCmd.GetSteppablePipeline()
    $steppablePipeline.Begin($PSCmdlet)
}

process {
    $steppablePipeline.Process($_)
}

end {
    $steppablePipeline.End()
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Which, as you can see, wraps the New-Item cmdlet. Under Linux, however, it&#39;s a different story:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Linux:
Get-Command mkdir
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;CommandType     Name                                               Version
-----------     ----                                               -------
Application     mkdir                                              0.0.0.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&#39;s an application, and its source can be retrieved as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;(Get-Command mkdir).Source
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;/bin/mkdir&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now that I know the problem, the solution is easy:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;New-Item -ItemType Directory $folder -Force&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&#39;s generally recommended to use cmdlets instead of aliases or any other kind of shortcut to improve readability and portability. Unfortunately, &lt;a href=&quot;https://learn.microsoft.com/en-us/powershell/module/psscriptanalyzer/?view=ps-modules&quot; target=&quot;_blank&quot;&gt;PSScriptAnalyzer&lt;/a&gt; -which integrates well with VSCode- will highlight this issue in scripts only for aliases (like ls), not for functions; see the &lt;a href=&quot;https://learn.microsoft.com/en-us/powershell/utility-modules/psscriptanalyzer/rules/avoidusingcmdletaliases?view=ps-modules&quot;&gt;AvoidUsingCmdletAliases&lt;/a&gt; rule.&lt;/p&gt;
&lt;p&gt;I learned my lesson. However, I did it the hard way. &lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/6275653866211242134/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/6275653866211242134?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6275653866211242134'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6275653866211242134'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2023/01/powershell-core-compatibility-lesson.html' title='PowerShell core compatibility: A lesson learned the hard way'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-4396545942889059342</id><published>2022-06-05T12:34:00.006+02:00</published><updated>2022-06-05T12:35:39.424+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term=".net"/><category scheme="http://www.blogger.com/atom/ns#" term="docker"/><title type='text'>Reading a file from a Docker container in .net core</title><content type='html'>&lt;p&gt;In many situations it might be needed to read files from a docker container using .net code.&lt;br /&gt;&lt;a href=&quot;https://www.nuget.org/packages/Docker.DotNet/&quot; target=&quot;_blank&quot;&gt;Docker.DotNet&lt;/a&gt; library is very useful 
for interacting with Docker from .net, and it provides a useful method (&lt;code&gt;GetArchiveFromContainerAsync&lt;/code&gt;) for reading files from a Docker container.&lt;br /&gt;When I tried to use this method to read a small CSV/text file, the content looked a bit weird. It seemed like there was an encoding issue!&lt;/p&gt;&lt;p&gt;When I checked the &lt;a href=&quot;https://github.com/dotnet/Docker.DotNet/blob/f58748616cc5b679b25496926c5688294c94d850/src/Docker.DotNet/Endpoints/IContainerOperations.cs&quot; target=&quot;_blank&quot;&gt;code on GitHub&lt;/a&gt;, I found that the returned data is a tarball stream, which makes sense, as the &lt;a href=&quot;https://docs.docker.com/engine/api/v1.21/ &quot; target=&quot;_blank&quot;&gt;Docker documentation&lt;/a&gt; mentions that the returned stream is a Tar stream.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;To read the Tar stream, I tried to use the &lt;a href=&quot;https://www.nuget.org/packages/SharpZipLib/&quot; target=&quot;_blank&quot;&gt;SharpZipLib&lt;/a&gt; library&#39;s &lt;code&gt;TarInputStream&lt;/code&gt; class. However, that didn&#39;t work: the library apparently requires a seekable stream, while the stream contained in the &lt;code&gt;GetArchiveFromContainerResponse&lt;/code&gt; returned from the method is not.&lt;br /&gt;The workaround -which works well for relatively small files- is to copy the stream to a memory stream and use that instead.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is some sample code:&lt;/p&gt;


&lt;pre&gt;&lt;code class=&quot;line-numbers language-csharp&quot;&gt;DockerClientConfiguration config = new();
using var client = config.CreateClient();

GetArchiveFromContainerParameters parameters = new()
{ 
	Path = &quot;/root/eula.1028.txt&quot;
};
var file = await client.Containers.GetArchiveFromContainerAsync(&quot;example&quot;, parameters, false);

using var memoryStream = new MemoryStream();
file.Stream.CopyTo(memoryStream);
file.Stream.Close();

memoryStream.Seek(0, SeekOrigin.Begin);

using var tarInput = new TarInputStream(memoryStream, Encoding.ASCII);
tarInput.GetNextEntry();

using var reader = new StreamReader(tarInput);

var content = reader.ReadToEnd();

Console.WriteLine(content);
&lt;/code&gt;&lt;/pre&gt;
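&lt;p&gt;For a single small file, it&#39;s even feasible to skip SharpZipLib and parse the tar entry by hand: each entry is a 512-byte header -the entry name in the first 100 bytes and the data length at offset 124 as an octal ASCII string- followed by the file bytes, padded to a multiple of 512. This is a minimal sketch of that idea (the &lt;code&gt;ReadFirstTarEntry&lt;/code&gt; and &lt;code&gt;MakeTar&lt;/code&gt; helpers are illustrative; &lt;code&gt;MakeTar&lt;/code&gt; only builds an in-memory archive to exercise the reader):&lt;/p&gt;

```csharp
using System;
using System.IO;
using System.Text;

// Fill a buffer from a (possibly non-seekable) stream.
static void ReadAll(Stream stream, byte[] buffer, int count)
{
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0) throw new EndOfStreamException();
        read += n;
    }
}

// Read the first file entry from a tar stream: a 512-byte header
// (name at offset 0, size at offset 124 as an octal ASCII string),
// followed by the file's bytes.
static (string Name, byte[] Content) ReadFirstTarEntry(Stream tar)
{
    var header = new byte[512];
    ReadAll(tar, header, 512);

    string name = Encoding.ASCII.GetString(header, 0, 100).TrimEnd('\0');
    string sizeField = Encoding.ASCII.GetString(header, 124, 12).Trim('\0', ' ');
    int size = Convert.ToInt32(sizeField, 8); // the size is stored in octal

    var content = new byte[size];
    ReadAll(tar, content, size);
    return (name, content);
}

// Build a one-file tar archive in memory, just to exercise the reader.
static byte[] MakeTar(string name, byte[] data)
{
    var archive = new byte[512 + (data.Length + 511) / 512 * 512];
    Encoding.ASCII.GetBytes(name).CopyTo(archive, 0);
    Encoding.ASCII.GetBytes(Convert.ToString(data.Length, 8).PadLeft(11, '0')).CopyTo(archive, 124);
    data.CopyTo(archive, 512);
    return archive;
}

var payload = Encoding.ASCII.GetBytes("hello from the container");
using var tarStream = new MemoryStream(MakeTar("root/eula.1028.txt", payload));
var entry = ReadFirstTarEntry(tarStream);
Console.WriteLine($"{entry.Name}: {Encoding.ASCII.GetString(entry.Content)}");
// prints: root/eula.1028.txt: hello from the container
```

&lt;p&gt;A real tarball also carries a header checksum and type flags that this sketch ignores, so for anything beyond a quick one-file read, SharpZipLib remains the safer choice.&lt;/p&gt;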

&lt;p&gt;I hope this helps!&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/4396545942889059342/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/4396545942889059342?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4396545942889059342'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4396545942889059342'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2022/06/reading-file-from-docker-container-in.html' title='Reading a file from a Docker container in .net core'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-3836150498575180491</id><published>2020-09-19T07:39:00.004+02:00</published><updated>2020-09-19T20:33:36.837+02:00</updated><title type='text'>Burnout</title><content type='html'>&lt;p&gt;&amp;nbsp;

&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMO4zKtuBaT5xgRz76lPIV73s9ZC_iv2gJEx39c5nvZGZZTmrPi7__ZaheG7BBgMB2tr8xpJIF6GMHDw69RB1LP0C_Qo8nK-cRt3HYKh-ww3GU8Z6URGWxEw_DDb4Y-zeXE1TsTyEFVAo/s858/match-wood-matches-red-sulfur-wallpaper.jpg&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;483&quot; data-original-width=&quot;858&quot; height=&quot;360&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMO4zKtuBaT5xgRz76lPIV73s9ZC_iv2gJEx39c5nvZGZZTmrPi7__ZaheG7BBgMB2tr8xpJIF6GMHDw69RB1LP0C_Qo8nK-cRt3HYKh-ww3GU8Z6URGWxEw_DDb4Y-zeXE1TsTyEFVAo/w640-h360/match-wood-matches-red-sulfur-wallpaper.jpg&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;image via &lt;a href=&quot;https://www.peakpx.com&quot; target=&quot;_blank&quot;&gt;Peakpx&lt;/a&gt;&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&amp;nbsp;I
recently listened to an interesting &lt;a href=&quot;https://hanselminutes.com/697/managing-the-burnout-burndown-with-dr-aneika-simmons&quot; target=&quot;_blank&quot;&gt;podcast&lt;/a&gt; about burnout that stirred some
thoughts on this silent killer, which can easily run rampant, especially
in the software industry, a field known for being mentally demanding.

&lt;p&gt;This
industry attracts passionate people who, given an interesting enough
problem, will voluntarily give up much of their time, energy, social
lives, and health.&lt;/p&gt;

&lt;p&gt;While
seeking the satisfaction of solving complex problems or under tight delivery
pressure, developers &quot;get into the zone&quot; and spend extended hours
without even noticing.&lt;/p&gt;

&lt;p&gt;Developers
commonly take pride in this aspect of their work. Others hold it up as a
model of what a dedicated developer should be. Managers celebrate the heroic efforts of their developers, or worse, take them for granted
until they become a normal expectation.&lt;/p&gt;

&lt;p&gt;But
what&#39;s wrong with this? If the developer is really passionate about his/her
work, so what?&lt;/p&gt;&lt;p&gt;One of
the light bulb moments in this podcast is when Dr Aneika (PhD in Organizational Behavior and Human Resources) said:

&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;span style=&quot;font-style: italic;&quot;&gt;&quot;…you would think that some research or previous
research said, well, maybe engagement is the antonym to burnout. But no, what
we really found out is that people that are really, really engaged are the ones
that are most susceptible to burnout&quot;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;&quot;…to be a great developer, to be a great
programmer, or to be a great coder, you have to really be involved. And that
involvement that takes you in and sucks you in could be the same thing that can
lead you down the road of burnout.&quot;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;/p&gt;

&lt;p&gt;No
surprise, then, that developers can go through waves of extreme productivity
followed by low performance if they are not conscious of how their minds and
emotions work.&lt;/p&gt;

&lt;p&gt;&lt;/p&gt;

&lt;p&gt;Another
important aspect to consider, especially if you&#39;re a leader in tech, is the
impact of your burnout on how you interact with those you lead.&lt;/p&gt;

&lt;p&gt;&lt;/p&gt;

&lt;p&gt;One component of
burnout is depersonalization: when you&#39;re burnt out, you become detached
from the team members around you and focus only on what you get out of them.
To you, they become more like functions with inputs and outputs, and your
relationship turns merely transactional, which is very dangerous.&lt;/p&gt;

&lt;p&gt;&lt;/p&gt;

&lt;p&gt;To me,
one of the most important leadership traits is empathy. When you&#39;re drained to
the extent that you have no emotional capacity for empathy, you lose the
ability to connect and support your team members. And especially if you&#39;re
normally understanding and supportive, your fluctuating behaviour might hurt
the trust you&#39;ve earned.&lt;/p&gt;

&lt;p&gt;Watch for the
signs of burnout. And remember not to deplete all your energy before taking the
time to recharge.&lt;/p&gt;

</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/3836150498575180491/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/3836150498575180491?isPopup=true' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/3836150498575180491'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/3836150498575180491'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2020/09/burnout.html' title='Burnout'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMO4zKtuBaT5xgRz76lPIV73s9ZC_iv2gJEx39c5nvZGZZTmrPi7__ZaheG7BBgMB2tr8xpJIF6GMHDw69RB1LP0C_Qo8nK-cRt3HYKh-ww3GU8Z6URGWxEw_DDb4Y-zeXE1TsTyEFVAo/s72-w640-h360-c/match-wood-matches-red-sulfur-wallpaper.jpg" height="72" width="72"/><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-661062639261764381</id><published>2020-01-04T10:42:00.003+02:00</published><updated>2020-01-04T10:44:00.122+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Agile"/><title type='text'>Which language should I speak?</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div&gt;
&lt;div&gt;
&lt;div&gt;
&lt;div&gt;
&lt;div&gt;
Working
in a diverse environment with team members of many nationalities is a great
experience. You get to know new cultures and recognize how similar people are
across the world, despite the seemingly extreme differences.&lt;/div&gt;
&lt;div&gt;
In such an environment, you hear different languages all
the time! And although there is usually a de facto business language (English
in my case, since I&#39;m currently working in Australia), some people prefer to
have conversations in their native tongue
with colleagues who share the same language, even in a business context.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
Well,
is that OK?&lt;/div&gt;
&lt;div&gt;
There
are many angles from which I see this matter.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
It&#39;s
good to feel natural&lt;/h3&gt;
&lt;/div&gt;
&lt;div&gt;
As a
non-native English speaker myself, I feel very strange speaking with my
Arabic-speaking colleagues (especially Egyptians) in a second language; it just
doesn&#39;t feel natural! Why speak in a language that we wouldn&#39;t normally use if
we were having a casual chat? Not to mention losing access to the huge stock of
vocabulary and expressions that we share. This leads to the second point:&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
It&#39;s
about effective communication&lt;/h3&gt;
&lt;/div&gt;
&lt;div&gt;
We
need to get the job done, right? So why put a barrier in front of effective
communication? Undoubtedly, using my native language makes conveying my thoughts
much easier. Besides, it gives me better control over the tone of the
conversation. I suppose the same goes for other nationalities as well.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
But
what are we missing?&lt;/h3&gt;
&lt;/div&gt;
&lt;div&gt;
Some
people might feel excluded when others around them speak in a language they
don&#39;t understand. However, I haven&#39;t seen this causing real issues.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
A
virtual wall?&lt;/h3&gt;
&lt;/div&gt;
&lt;div&gt;
I&#39;ve
been working in Agile teams for years, and I believe in the value of
co-located teams in facilitating communication.&amp;nbsp;&lt;/div&gt;
&lt;div&gt;
Many
times I&#39;ve overheard a discussion between colleagues in my team area and
jumped in to help solve an issue, give guidance on a topic, or throw in a
piece of information that was necessary to solve a
problem. Even if you&#39;re not intentionally paying attention, it&#39;s possible to
save the team a lot of time going in circles.&lt;/div&gt;
&lt;div&gt;
Speaking
in a different language defeats the purpose of co-location and creates virtual
walls. It&#39;s the same reason some Agile practitioners recommend not wearing
headphones, as they isolate the team member from the surrounding team
interactions.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
What
about you? Do you prefer speaking in your first language if it differs from the
common one used at work? On the other hand, how do you feel about
colleagues speaking in a language that you don&#39;t understand?&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/661062639261764381/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/661062639261764381?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/661062639261764381'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/661062639261764381'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2020/01/which-language-should-i-speak.html' title='Which language should I speak?'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-8961327480134827789</id><published>2019-04-26T08:40:00.004+02:00</published><updated>2019-04-26T08:51:02.542+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Git"/><title type='text'>Using Git hooks to alter commit messages</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
As developers, we try to get the repetitive, boring stuff out of our way. Hence we use tools that automate some of our workflows; and if no tool is available for our specific needs, no problem, we automate them ourselves. We&#39;re developers, after all!&lt;br /&gt;
&lt;br /&gt;
In one of the projects I worked on, there was a convention to include the task id in each commit message, because some tools generated reports based on it. I&#39;m not sure why this was required in that situation, but I had to follow the convention anyway. Since I tend to make many small commits every day, I was sure I&#39;d forget to add the task id most of the time. So I started investigating Git hooks.&lt;br /&gt;
&lt;br /&gt;
Git provides many hooks that can be used to automate repetitive behaviors at different points in the Git life cycle. For example:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; • pre-commit&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; • pre-push&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; • prepare-commit-msg&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; • commit-msg&lt;br /&gt;
&lt;br /&gt;
The folder &quot;.git/hooks&quot; within the Git repository contains sample hook files that are good starting points. The one of interest in this case is the &lt;b&gt;commit-msg&lt;/b&gt; hook.&lt;br /&gt;
&lt;br /&gt;
In my scenario, we had a convention to name our branches using the patterns &quot;feature/&amp;lt;task-id&amp;gt;&quot; or &quot;bug/&amp;lt;task-id&amp;gt;&quot;.&lt;br /&gt;&lt;br /&gt;So I decided to deduce the task id from the branch name and prepend it to the commit message.&lt;br /&gt;I created a file named &lt;b&gt;commit-msg&lt;/b&gt; in the &lt;b&gt;.git/hooks&lt;/b&gt; folder; the code inside this file is similar to:

&lt;br /&gt;
&lt;pre&gt;&lt;code class=&quot;line-numbers language-bash&quot;&gt;#!/bin/sh
message=$(cat $1)
branch=$(git branch | grep \* | cut -d &#39; &#39; -f2-)
task=$(echo $branch | cut -d / -f2-)
echo &quot;$task - $message&quot; &amp;gt; $1&lt;/code&gt;&lt;/pre&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;Line 2: reads the original commit message from the temp file, whose name is passed as the first parameter to the script.&lt;/li&gt;
&lt;li&gt;Line 3: reads the current branch name. Thanks to &lt;a href=&quot;https://stackoverflow.com/questions/6245570/how-to-get-the-current-branch-name-in-git/11868440&quot; target=&quot;_blank&quot;&gt;StackOverflow&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Line 4: extracts the task id from the branch name by splitting the string by the &quot;/&quot; character and taking the second part.&lt;/li&gt;
&lt;li&gt;Line 5: overwrites the commit message with the required format.&lt;/li&gt;
&lt;/ul&gt;
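The task-id extraction can also be sketched with plain shell parameter expansion instead of cut. This is an illustrative, standalone sketch: the branch name and message are hard-coded here; in a real hook the branch would come from Git (e.g. via git rev-parse --abbrev-ref HEAD) and the message from the file passed as the first argument.

```shell
#!/bin/sh
# Standalone sketch of the hook's task-id extraction (hard-coded example values).
# In a real commit-msg hook:
#   branch=$(git rev-parse --abbrev-ref HEAD)
#   message=$(cat "$1")
branch="feature/1234"       # hypothetical branch following the naming convention
message="test message"      # stands in for the commit message file contents

# Strip everything up to and including the first "/" to get the task id
task=${branch#*/}

# The real hook would write this back to "$1"; here we just print it
echo "$task - $message"
```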
&lt;br /&gt;
Now when I commit code using:&lt;br /&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git commit -m&quot;test message&quot;&lt;/code&gt;&lt;/pre&gt;
Then, inspecting the history with the git log command shows the commit message modified as needed:

&lt;br /&gt;
&lt;pre data-line=&quot;5&quot;&gt;&lt;code class=&quot;language-bash&quot;&gt;commit f1fe8918c754ca89649a2a86ef4ab0a9a53c0496 (HEAD -&amp;gt; feature/1234)
Author: Hesham A. Amin
Date:   Fri Apr 26 08:24:40 2019 +0200

    1234 - test message

commit 4e3e180d3a27772a32230bf6dbbd039b949dc30e
...&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
Investing a few minutes to automate tedious, repetitive tasks pays off in the long run.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/8961327480134827789/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/8961327480134827789?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/8961327480134827789'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/8961327480134827789'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2019/04/using-git-hooks-to-alter-commit-messages.html' title='Using Git hooks to alter commit messages'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2008454443304519220</id><published>2018-12-27T14:30:00.000+02:00</published><updated>2018-12-27T14:30:10.218+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="ASP.NET"/><category scheme="http://www.blogger.com/atom/ns#" term="Security"/><title type='text'>Removing the Server header from Kestrel hosted ASP.NET core apps</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
In the continuous battle between software builders and attackers, the less information an application discloses about its infrastructure, the better.&lt;br /&gt;
One of the issues I&#39;ve repetitively seen in penetration testing reports for web applications is the existence of the Server header, which as mentioned in &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Server&quot; target=&quot;_blank&quot;&gt;MDN&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;q&gt;
The Server header contains information about the software used by the origin server to handle the request.&lt;/q&gt;&lt;br /&gt;
&lt;br /&gt;
Also as mentioned by MDN:&lt;br /&gt;
&lt;br /&gt;
&lt;q&gt;
Overly long and detailed Server values should be avoided as they potentially reveal internal implementation details that might make it (slightly) easier for attackers to find and exploit known security holes. &lt;/q&gt;
&lt;br /&gt;
&lt;br /&gt;
By default, when using the Kestrel web server to host an ASP.NET Core application, Kestrel returns the Server header with the value Kestrel, as shown in this screenshot from Postman:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOQlJpoJ3opcDWw3Yr_OGSg1aP7RV4PBtZmE8MURo3PL-XI2_GS9rl1Z1F_Hrzf6ZVROCzaxaWdajCxGo1QQjXJii-3JYk6v_S5KZlZ9cpk2GsHoGvN5EEhxCPyazYKkdxF85sD9HowUE/s1600/kestrel.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;117&quot; data-original-width=&quot;348&quot; height=&quot;133&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOQlJpoJ3opcDWw3Yr_OGSg1aP7RV4PBtZmE8MURo3PL-XI2_GS9rl1Z1F_Hrzf6ZVROCzaxaWdajCxGo1QQjXJii-3JYk6v_S5KZlZ9cpk2GsHoGvN5EEhxCPyazYKkdxF85sD9HowUE/s400/kestrel.PNG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
Even though it doesn&#39;t sound like a big security risk, I just prefer to remove this header. This could be achieved by adding this line to the &lt;b&gt;ConfigureServices&lt;/b&gt; method in the application &lt;b&gt;Startup&lt;/b&gt; class:&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;services.PostConfigure&amp;lt;KestrelServerOptions&amp;gt;(k =&amp;gt; k.AddServerHeader = false);&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
The &lt;b&gt;PostConfigure&lt;/b&gt; callbacks &lt;a href=&quot;https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options&quot; target=&quot;_blank&quot;&gt;run after all&lt;/a&gt; &lt;b&gt;Configure&amp;lt;T&amp;gt;&lt;/b&gt; methods, so it&#39;s a good place to override the default behavior.&lt;br /&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2008454443304519220/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2008454443304519220?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2008454443304519220'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2008454443304519220'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2018/12/removing-server-header-from-kestrel.html' title='Removing the Server header from Kestrel hosted ASP.NET core apps'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOQlJpoJ3opcDWw3Yr_OGSg1aP7RV4PBtZmE8MURo3PL-XI2_GS9rl1Z1F_Hrzf6ZVROCzaxaWdajCxGo1QQjXJii-3JYk6v_S5KZlZ9cpk2GsHoGvN5EEhxCPyazYKkdxF85sD9HowUE/s72-c/kestrel.PNG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-6942780739206737478</id><published>2017-09-24T13:50:00.002+02:00</published><updated>2017-09-24T13:54:41.641+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Azure"/><category scheme="http://www.blogger.com/atom/ns#" term="Event Grid"/><category scheme="http://www.blogger.com/atom/ns#" term="WebHooks"/><title type='text'>Azure Event Grid 
WebHooks - Retries (Part 3)</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
Building distributed systems is challenging. If not carefully designed and implemented, a failure in one component can cause cascading failures that affect the whole system. That&#39;s why patterns like &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/architecture/patterns/retry&quot; target=&quot;_blank&quot;&gt;Retry&lt;/a&gt; and &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker&quot; target=&quot;_blank&quot;&gt;Circuit Breaker&lt;/a&gt; should be considered to improve system resilience. When sending WebHooks, the situation can be even worse: your system is calling an external system with no availability guarantees, over the internet, which is less reliable than your internal network.&lt;br /&gt;
Continuing from the previous parts of this series (&lt;a href=&quot;http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html&quot; target=&quot;_blank&quot;&gt;Part 1&lt;/a&gt;, &lt;a href=&quot;http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-filtering.html&quot; target=&quot;_blank&quot;&gt;Part 2&lt;/a&gt;), I&#39;ll show how to use Azure Event Grid to overcome this challenge.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Azure Event Grid Retry Policy&lt;/h3&gt;
Azure Event Grid provides a built-in capability to retry failed requests with exponential backoff, meaning that if a WebHook request fails, it is retried with increasing delays.&lt;br /&gt;
As per the &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/event-grid/delivery-and-retry&quot; target=&quot;_blank&quot;&gt;documentation&lt;/a&gt;, failed requests are retried after 10 seconds; if a request fails again, it keeps being retried after 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, and 1 hour. These aren&#39;t exact intervals, however, as Azure Event Grid adds some randomization to them.&lt;br /&gt;
Events that take more than 2 hours to be delivered expire. This duration is expected to increase to 24 hours after the preview phase.&lt;br /&gt;
This behavior is not trivial to implement, which adds to the reasons to consider a service like Azure Event Grid instead of building its capabilities from scratch.&lt;br /&gt;
&lt;br /&gt;
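For intuition, the documented schedule can be sketched as an exponential backoff loop with jitter. This is only an illustrative approximation: the base delays mirror the documentation, while the 0-9 second jitter range is an assumption, not Event Grid's actual randomization.

```shell
# Illustrative exponential backoff with jitter (not Event Grid's exact algorithm).
# Base delays in seconds mirror the documented schedule; the 0-9s jitter is an
# assumption for illustration only.
delays="10 30 60 300 600 1800 3600"
for base in $delays; do
    # Portable pseudo-random jitter in [0, 10)
    jitter=$(awk 'BEGIN { srand(); printf "%d", rand() * 10 }')
    echo "next retry in $((base + jitter))s"
done
```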
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Testing Azure Event Grid Retry&lt;/h3&gt;
To try this capability, building on the example used in &lt;a href=&quot;http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html&quot; target=&quot;_blank&quot;&gt;Part 1&lt;/a&gt;, I changed the AWS Lambda function that receives the WebHook to introduce random failures:&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;pre class=&quot;line-numbers&quot; data-line=&quot;9-15&quot;&gt;&lt;code class=&quot;language-csharp&quot;&gt;public object Handle(Event[] request)
{
    Event data = request[0];
    if(data.Data.validationCode!=null)
    {
        return new {validationResponse = data.Data.validationCode};
    }

    var random = new Random(Guid.NewGuid().GetHashCode());
    var value = random.Next(1, 11);

    if(value &gt; 5)
    {
        throw new Exception(&quot;Failure!&quot;);
    }

    return &quot;&quot;;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
Lines 9-15 produce roughly a 50% failure rate. When I pushed an event (as shown in the previous posts) to 1,000 WebHook subscribers, the result was the chart below, depicting the number of API calls per minute and the number of 500 errors per minute:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4vzmbN6Wv8ikoq3CyGPYBUAUHYBzPAHT4ka98gUAFWCapoJRRc6YkgrmI2vIg0sGJ4TWRP2H9eHqYtCQZ2cSFH7TkAY2zQboCjSxOiax8lADhYcFgrr8LAc1AJdPAOX22WnX5maxo32I/s1600/Azure-Event-Grid-Retry.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;421&quot; data-original-width=&quot;1600&quot; height=&quot;168&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4vzmbN6Wv8ikoq3CyGPYBUAUHYBzPAHT4ka98gUAFWCapoJRRc6YkgrmI2vIg0sGJ4TWRP2H9eHqYtCQZ2cSFH7TkAY2zQboCjSxOiax8lADhYcFgrr8LAc1AJdPAOX22WnX5maxo32I/s640/Azure-Event-Grid-Retry.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;Number of requests per minute (Blue) - Number of 500 Errors per minute (Orange)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;br /&gt;
We can observe the following:&lt;br /&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;The number of errors (orange) is almost half the number of requests (blue).&lt;/li&gt;
&lt;li&gt;The number of requests per minute is around 1,500 for the first minute. My explanation: since we have 1,000 listeners and a 50% failure rate, Azure made roughly 500 extra retry requests.&lt;/li&gt;
&lt;li&gt;After a bit less than 2 hours (not shown in the chart due to size constraints), the number of errors dropped to 5 and no more requests were made. This is due to the 2-hour expiration period during the preview.&lt;/li&gt;
&lt;/ul&gt;
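These observations are consistent with a simple geometric model: with a ~50% failure rate, each retry round roughly halves the number of pending deliveries. Here is a back-of-the-envelope sketch, assuming up to 7 retries per the documented schedule and ignoring timing and jitter:

```shell
# Back-of-the-envelope model: 1000 subscribers, ~50% of deliveries fail,
# and each failed delivery is retried up to 7 times (8 attempts in total).
pending=1000
total=0
for attempt in 1 2 3 4 5 6 7 8; do
    total=$((total + pending))     # requests made in this round
    pending=$((pending / 2))       # roughly half fail and are retried
done
echo "expected total requests: $total"   # prints "expected total requests: 1990"
```

The first two rounds alone account for about 1,500 requests (1,000 + 500), which matches the volume observed in the first minute.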
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Summary&lt;/h3&gt;
Azure Event Grid is a scalable and resilient service that can handle thousands (maybe more) of WebHook receivers. Whether your solution is hosted on-premises or on Azure, you can use this service to offload a lot of work and effort.&lt;br /&gt;
I wish Azure Event Grid offered some insight into how events are pushed and received, which would help a lot in troubleshooting, as the subscriber is usually not under your control. I hope this becomes an integrated part of the Azure portal.&lt;br /&gt;
It&#39;s worth mentioning that other cloud providers support functionality similar to Event Grid that is worth checking out, specifically &lt;a href=&quot;https://aws.amazon.com/sns/&quot; target=&quot;_blank&quot;&gt;Amazon Simple Notification Service (SNS)&lt;/a&gt; and &lt;a href=&quot;https://cloud.google.com/pubsub/docs/overview&quot; target=&quot;_blank&quot;&gt;Google Cloud Pub/Sub&lt;/a&gt;. Both have overlapping functionality with Azure Event Grid.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/6942780739206737478/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/6942780739206737478?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6942780739206737478'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/6942780739206737478'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/09/azure-event-grid-webhooks-retries-part-3.html' title='Azure Event Grid WebHooks - Retries (Part 3)'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4vzmbN6Wv8ikoq3CyGPYBUAUHYBzPAHT4ka98gUAFWCapoJRRc6YkgrmI2vIg0sGJ4TWRP2H9eHqYtCQZ2cSFH7TkAY2zQboCjSxOiax8lADhYcFgrr8LAc1AJdPAOX22WnX5maxo32I/s72-c/Azure-Event-Grid-Retry.PNG" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-5497525061565342022</id><published>2017-08-27T04:27:00.002+02:00</published><updated>2017-08-27T04:41:48.036+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Azure"/><category scheme="http://www.blogger.com/atom/ns#" term="Event Grid"/><category scheme="http://www.blogger.com/atom/ns#" term="WebHooks"/><title type='text'>Azure Event Grid 
WebHooks - Filtering (Part 2)</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
In my &lt;a href=&quot;http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html&quot; target=&quot;_blank&quot;&gt;previous post&lt;/a&gt; I introduced &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/event-grid/&quot; target=&quot;_blank&quot;&gt;Azure Event Grid&lt;/a&gt; and demonstrated how simple it is to use Event Grid to push hundreds of events to subscribers using WebHooks.&lt;br /&gt;
In today&#39;s post I&#39;ll show a powerful capability of Event Grid: filters.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
What are Filters?&lt;/h3&gt;
Subscribing to a topic means that all events pushed to this topic will be pushed to the subscriber. But what if the subscriber is interested only in a subset of the events? For example, in my previous post I created a blog topic, and all subscribers to this topic receive notifications about new and updated blog posts, new comments, etc. But some subscribers might be interested only in posts and want to ignore comments. Instead of creating a separate topic for each type of event, which would require separate subscriptions, Event Grid has the concept of filters. Filters are applied to the event content, and events are only pushed to subscribers with matching filters.&lt;br /&gt;
The below diagram demonstrates this capability:&lt;br /&gt;
&lt;br /&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFVKDHsncmGsZo8x6K9KHtNwSfg3_v08mwkLsrqd8RgEnXWe_n3bdg9SrB5ZcidtMELn9jnXogO1Ss8kICFI2gm_eYqqhXJ2EnzY3_CYbJ418V9LJOR6jBzyTeMujvevw0G1jovF_QgzA/s1600/event-grid-topic-filters.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;680&quot; data-original-width=&quot;1140&quot; height=&quot;380&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFVKDHsncmGsZo8x6K9KHtNwSfg3_v08mwkLsrqd8RgEnXWe_n3bdg9SrB5ZcidtMELn9jnXogO1Ss8kICFI2gm_eYqqhXJ2EnzY3_CYbJ418V9LJOR6jBzyTeMujvevw0G1jovF_QgzA/s640/event-grid-topic-filters.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;Filtering based on Subject prefix/suffix&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Azure Event Grid supports two types of filters:&lt;/div&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;Subject prefix and suffix filters.&lt;/li&gt;
&lt;li&gt;Event type filters. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3 class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Subject prefix and suffix filters &lt;/h3&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
In this example I&#39;ll use a prefix filter to receive only events with a subject starting with &quot;post&quot;, using the &lt;b&gt;--subject-begins-with post&lt;/b&gt; parameter.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic event-subscription create --name postsreceiver --subject-begins-with post --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/post -g rg --topic-name blog&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Similarly, to receive only events with a subject starting with &quot;comment&quot;:&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic event-subscription create --name commentsreceiver --subject-begins-with comment --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/comment -g rg --topic-name blog&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
An event that looks like:&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
    {
        &quot;id&quot;: &quot;2134&quot;,
        &quot;eventType&quot;: &quot;new&quot;,
        &quot;subject&quot;: &quot;comments&quot;,
        &quot;eventTime&quot;: &quot;2017-08-20T23:14:22+1000&quot;,
        &quot;data&quot;:{
            &quot;content&quot;: &quot;Azure Event Grid&quot;,
            &quot;postId&quot;: &quot;123&quot;
        }
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Will only be pushed to the second subscriber because it matches the filter.&lt;/div&gt;
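Conceptually, the subject matching that Event Grid performs for each subscription can be sketched like this (a minimal C# illustration, not the actual service implementation; the method name is made up for this example):

```csharp
using System;

public class SubjectFilterDemo
{
    // Hypothetical sketch of --subject-begins-with / --subject-ends-with matching.
    // Subject matching is case-insensitive unless isSubjectCaseSensitive is set.
    public static bool MatchesSubjectFilter(string subject, string beginsWith = "", string endsWith = "")
    {
        return subject.StartsWith(beginsWith, StringComparison.OrdinalIgnoreCase)
            && subject.EndsWith(endsWith, StringComparison.OrdinalIgnoreCase);
    }

    public static void Main()
    {
        // The sample event above has subject "comments".
        Console.WriteLine(MatchesSubjectFilter("comments", beginsWith: "post"));    // False: the "post" subscription is skipped
        Console.WriteLine(MatchesSubjectFilter("comments", beginsWith: "comment")); // True: the "comment" subscription receives the event
    }
}
```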
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Filtering based on event type&lt;/h3&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Another way for the subscriber to filter pushed messages is by specifying event types. By default, when a new subscription is created, its filter looks like:&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&quot;filter&quot;: {
  &quot;includedEventTypes&quot;: [
    &quot;All&quot;
  ],
  &quot;isSubjectCaseSensitive&quot;: null,
  &quot;subjectBeginsWith&quot;: &quot;&quot;,
  &quot;subjectEndsWith&quot;: &quot;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
The &lt;b&gt;includedEventTypes&lt;/b&gt; attribute is set to &quot;All&quot;, which means the subscriber will receive all events regardless of type.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
You can filter on multiple event types, passed as space-separated values, using the &lt;b&gt;--included-event-types&lt;/b&gt; parameter:&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic event-subscription create --name newupdatedreceiver --included-event-types new updated --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/newupdated -g rg --topic-name blog
&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
which results in:&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&quot;filter&quot;: {
  &quot;includedEventTypes&quot;: [
    &quot;new&quot;,
    &quot;updated&quot;
  ],
  &quot;isSubjectCaseSensitive&quot;: null,
  &quot;subjectBeginsWith&quot;: &quot;&quot;,
  &quot;subjectEndsWith&quot;: &quot;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
This means that only events of type &quot;new&quot; or &quot;updated&quot; will be pushed to this subscriber. This event won&#39;t be pushed:&lt;/div&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
    {
        &quot;id&quot;: &quot;123456&quot;,
        &quot;eventType&quot;: &quot;deleted&quot;,
        &quot;subject&quot;: &quot;posts&quot;,
        &quot;eventTime&quot;: &quot;2017-08-20T23:14:22+1000&quot;,
        &quot;data&quot;:{
            &quot;postId&quot;: &quot;123&quot;
        }
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;h3 class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;
Summary&lt;/h3&gt;
Giving the subscriber control over which events it receives, based on subject prefix, suffix, or event type (or a combination of these), is a powerful capability of Azure Event Grid. Routing events declaratively, without writing any logic on the event source side, significantly simplifies this scenario.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/5497525061565342022/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/5497525061565342022?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/5497525061565342022'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/5497525061565342022'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-filtering.html' title='Azure Event Grid WebHooks - Filtering (Part 2)'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFVKDHsncmGsZo8x6K9KHtNwSfg3_v08mwkLsrqd8RgEnXWe_n3bdg9SrB5ZcidtMELn9jnXogO1Ss8kICFI2gm_eYqqhXJ2EnzY3_CYbJ418V9LJOR6jBzyTeMujvevw0G1jovF_QgzA/s72-c/event-grid-topic-filters.png" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-4230147733384873211</id><published>2017-08-22T23:20:00.000+02:00</published><updated>2017-09-21T12:56:35.657+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Azure"/><category scheme="http://www.blogger.com/atom/ns#" term="Event Grid"/><category scheme="http://www.blogger.com/atom/ns#" term="WebHooks"/><title type='text'>Azure Event Grid 
WebHooks (Part 1)</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
A few days ago, Microsoft &lt;a href=&quot;https://azure.microsoft.com/en-us/blog/introducing-azure-event-grid-an-event-service-for-modern-applications/&quot; target=&quot;_blank&quot;&gt;announced&lt;/a&gt; the new &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/event-grid/&quot; target=&quot;_blank&quot;&gt;Event Grid&lt;/a&gt; service. The service is described as:&lt;br /&gt;
&quot;&lt;i&gt;... a fully-managed intelligent event routing service 
that allows for uniform event consumption using a publish-subscribe 
model.&lt;/i&gt;&quot;&lt;br /&gt;
Although not directly related, I see this service as a complement to Microsoft&#39;s serverless offerings, following Azure Functions and Logic Apps.&lt;br /&gt;
&lt;br /&gt;
Event Grid has many capabilities and scenarios. In brief, it&#39;s a service that is capable of listening to multiple event sources using topics and publishing the events to subscribers or handlers that are interested in them.&lt;br /&gt;
Event sources can be Blob storage events, Event Hubs events, custom events, etc. Subscribers can be Azure Functions, Logic Apps, or WebHooks.&lt;br /&gt;
In this post I&#39;ll focus on pushing WebHooks easily, in a scalable, reliable, pay-as-you-go manner using Event Grid.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Topics and WebHooks&lt;/h3&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Topics are a way to categorize events. A publisher defines topics and sends specific events to these topics. Subscribers can subscribe to topics to listen and respond to the events published by event sources.&lt;/div&gt;
The concept of WebHooks is not new. WebHooks are HTTP callbacks that respond to events that originated in other systems. For example, you can create HTTP endpoints that listen to WebHooks published by GitHub when code is pushed to a specific repository. This creates an almost endless number of integration possibilities.&lt;br /&gt;
In this post we&#39;ll simulate a blogging engine that pushes events when new posts are published. And we&#39;ll create a subscriber that listens to these events.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Creating a topic&lt;/h3&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The first step to publishing a custom event is to create a topic. Like other Azure resources, Event Grid topics are created in resource groups. To create a new resource group named &quot;rg&quot;, we can execute this command using &lt;a href=&quot;https://docs.microsoft.com/en-us/cli/azure/overview&quot; target=&quot;_blank&quot;&gt;Azure CLI v2.0&lt;/a&gt;:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az group create --name rg --location westus2&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
I chose the westus2 region because Event Grid currently has limited region availability. But this changes &lt;a href=&quot;https://azure.microsoft.com/en-us/regions/services/&quot; target=&quot;_blank&quot;&gt;all the time&lt;/a&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The next step is to create a topic in the resource group. We&#39;ll name our topic &quot;blog&quot;:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic create --name blog -l westus2 -g rg&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
When you run the above command, the response should look like:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;endpoint&quot;: &quot;https://blog.westus2-1.eventgrid.azure.net/api/events&quot;,
  &quot;id&quot;: &quot;/subscriptions/5f1ef4e8-6358-4a75-b171-58904114fb57/resourceGroups/rg/providers/Microsoft.EventGrid/topics/blog&quot;,
  &quot;location&quot;: &quot;westus2&quot;,
  &quot;name&quot;: &quot;blog&quot;,
  &quot;provisioningState&quot;: &quot;Succeeded&quot;,
  &quot;resourceGroup&quot;: &quot;rg&quot;,
  &quot;tags&quot;: null,
  &quot;type&quot;: &quot;Microsoft.EventGrid/topics&quot;
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Observe the endpoint attribute. Now we have the URL to be used to push events: &lt;i&gt;https://blog.westus2-1.eventgrid.azure.net/api/events&lt;/i&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Subscribing to a topic&lt;/h3&gt;
To show the capabilities of Event Grid, I need to create a large number of subscribers. You can create your subscribers in any HTTP-capable framework. I chose to use AWS Lambda functions + API Gateway hosted in the Sydney region. This proves that there is no Azure magic involved: just pure HTTP WebHooks sent from Azure&#39;s data centers in the west US to AWS data centers in Sydney.&lt;br /&gt;
The details of creating Lambda functions and exposing them using API Gateway aren&#39;t relevant to this post; the important thing is that I have an endpoint that listens to HTTP requests on &lt;i&gt;https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/{id}&lt;/i&gt; and forwards them to an AWS Lambda function implemented in C#.&lt;br /&gt;
The command to create a subscription looks like:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic event-subscription create --name blogreceiver   --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/   -g rg  --topic-name blog &lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
I created 100 subscriptions using this simple PowerShell script:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;while($val -ne 100) { $val++ ;  az eventgrid topic event-subscription create --name blogreceiver$val   --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/$val   -g rg  --topic-name blog}&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
An important thing to notice is the security implications of this model. If I were able to specify any URL as a subscriber to my topic, I&#39;d be able to use Azure Event Grid as a DDoS attack tool. That&#39;s why subscription verification is very important.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Subscription verification&lt;/h3&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
To verify that the subscription endpoint is a real URL and is really willing to subscribe to the topic, a verification request is sent to the subscription endpoint when the subscription is created. This request looks like:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
{
    &quot;Id&quot;: &quot;dbb80f11-6fbb-4fc3-9c1f-034f00da3b5f&quot;,
    &quot;Topic&quot;: &quot;/subscriptions/5f1ef4e8-6358-4a75-b171-58904114fb57/resourceGroups/rg/providers/microsoft.eventgrid/topics/blog&quot;,
    &quot;Subject&quot;: &quot;&quot;,
    &quot;Data&quot;: {
        &quot;validationCode&quot;: &quot;4fc3f59c-2d03-41f4-b466-da65a81f8ba5&quot;
    },
    &quot;EventType&quot;: &quot;Microsoft.EventGrid/SubscriptionValidationEvent&quot;,
    &quot;EventTime&quot;: &quot;2017-08-20T11:11:00.0101361Z&quot;
}
]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The validationCode attribute has a unique key to identify the subscription request. The endpoint should respond to the verification request with the same code:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;validationResponse&quot;:&quot;4fc3f59c-2d03-41f4-b466-da65a81f8ba5&quot;}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
The subscriber &lt;/h3&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The subscriber is very simple. It checks whether the request has a validation code. If so, it responds with the validation response. Otherwise it just returns 200 or 202. &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;
    public class Event
    {
        public Data Data { get; set; }
    }

    public class Data
    {
        public string validationCode { get; set; }
    }

    public class Receiver
    {
        public object Handle(Event[] request)
        {
            Event data = request[0];
            // Subscription verification request: echo the validation code back.
            if (data.Data?.validationCode != null)
            {
                return new { validationResponse = data.Data.validationCode };
            }
            // A regular event: just acknowledge it.
            return &quot;&quot;;
        }
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Note that the AWS API Gateway is responsible for setting the status code to 200.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
&amp;nbsp;&lt;/h3&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Pushing events&lt;/h3&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
As I showed above, I created 100 subscribers. Now it&#39;s time to start pushing events, which is a simple POST request; of course, this request must be authenticated. The supported authentication methods are Shared Access Signature (SAS) and keys. I&#39;ll use the latter for simplicity.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
To retrieve the key, you can use the management portal or this command:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;az eventgrid topic key list --name blog --resource-group rg&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
To configure my .NET Core console application that will push the events, I created two environment variables using PowerShell:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$env:EventGrid:EndPoint = &quot;https://blog.westus2-1.eventgrid.azure.net/api/events&quot;
$env:EventGrid:Key = &quot;HQI2Ff7MoqlV8RFc/U.........&quot;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
I created a class to hold the configuration values:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;class EventGridConfig
{
    public string EndPoint { get; set; }
    public string Key { get; set; }
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The rest is simple: reading the configuration variables and posting an event to the endpoint.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;Configuration = builder.Build();
var config = new EventGridConfig();
Configuration.GetSection(&quot;EventGrid&quot;).Bind(config);

var http = new HttpClient();
string content = @&quot;
    [
        {
            &quot;&quot;id&quot;&quot;: &quot;&quot;123&quot;&quot;,
            &quot;&quot;eventType&quot;&quot;: &quot;&quot;NewPost&quot;&quot;,
            &quot;&quot;subject&quot;&quot;: &quot;&quot;blog/posts&quot;&quot;,
            &quot;&quot;eventTime&quot;&quot;: &quot;&quot;2017-08-20T23:14:22+1000&quot;&quot;,
            &quot;&quot;data&quot;&quot;:{
                &quot;&quot;title&quot;&quot;: &quot;&quot;Azure Event Grid&quot;&quot;,
                &quot;&quot;author&quot;&quot;: &quot;&quot;Hesham A. Amin&quot;&quot;
            }
        }
    ]&quot;;

http.DefaultRequestHeaders.Add(&quot;aeg-sas-key&quot;, config.Key);
var result = http.PostAsync(config.EndPoint, new StringContent(content)).Result;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Now it&#39;s Azure Event Grid&#39;s turn to push this event to the 100 subscribers.&lt;/div&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
The result&lt;/h3&gt;
Running the above console application sends a request to Azure Event Grid, which in turn pushes the event to the 100 subscribers I&#39;ve created.&lt;br /&gt;
To see the result, I use the AWS API Gateway CloudWatch graphs, which show the number of requests to my endpoint. I ran the application a few times, and the result was this graph:&lt;br /&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAGsz7XLBUX_mclssnzYKqra7mB1B1x5yiFiCICL2ViTlK5JCRqRFdC4NmIWnkyO8jxzCeWnl32vMaFZi8vsOFCtw1jrjwIGm8cpoGfwmBC5wVxzCYajtkC6ywecjP9rBES_EIDOreLkw/s1600/EventHub-CloudWatch.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;301&quot; data-original-width=&quot;1600&quot; height=&quot;120&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAGsz7XLBUX_mclssnzYKqra7mB1B1x5yiFiCICL2ViTlK5JCRqRFdC4NmIWnkyO8jxzCeWnl32vMaFZi8vsOFCtw1jrjwIGm8cpoGfwmBC5wVxzCYajtkC6ywecjP9rBES_EIDOreLkw/s640/EventHub-CloudWatch.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;Requests per minute&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Summary&lt;/h3&gt;
In this post I&#39;ve shown how to use Azure Event Grid to push WebHooks to HTTP endpoints and how to subscribe to these WebHooks.&lt;br /&gt;
In upcoming posts I&#39;ll explore more capabilities of Azure Event Grid.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/4230147733384873211/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/4230147733384873211?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4230147733384873211'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/4230147733384873211'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html' title='Azure Event Grid WebHooks (Part 1)'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAGsz7XLBUX_mclssnzYKqra7mB1B1x5yiFiCICL2ViTlK5JCRqRFdC4NmIWnkyO8jxzCeWnl32vMaFZi8vsOFCtw1jrjwIGm8cpoGfwmBC5wVxzCYajtkC6ywecjP9rBES_EIDOreLkw/s72-c/EventHub-CloudWatch.PNG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-7238872394882817514</id><published>2017-08-17T23:27:00.004+02:00</published><updated>2017-08-17T23:27:56.170+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><category scheme="http://www.blogger.com/atom/ns#" term="ELB"/><title type='text'>My AWS IaaS playlist for Arabic speakers</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: 
left;&quot; trbidi=&quot;on&quot;&gt;
If you&#39;re an Arabic speaker and interested in learning about AWS IaaS, check my &lt;a href=&quot;https://www.youtube.com/playlist?list=PLIv0fHmhJRMJlzDiAWjaaZrFCvzqrhZIi&quot; target=&quot;_blank&quot;&gt;AWS IaaS [Arabic]&lt;/a&gt; Youtube playlist. In this series of videos I go step by step creating a scalable, secure web application using AWS infrastructure as a service offering.&lt;br /&gt;
I&#39;m following a problem-solution approach: I start with a very basic but functional solution, identify the challenges it has, then move to the next step in a logical progression toward the end goal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/videoseries?list=PLIv0fHmhJRMJlzDiAWjaaZrFCvzqrhZIi&quot; width=&quot;560&quot;&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;br /&gt;
And if you have no idea what capabilities AWS has, you can check my &lt;a href=&quot;https://www.youtube.com/watch?v=QQ7gmr6RPlI&amp;amp;t=60s&quot; target=&quot;_blank&quot;&gt;introductory video&lt;/a&gt;. It&#39;s a bit dated but still relevant.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/7238872394882817514/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/7238872394882817514?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7238872394882817514'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7238872394882817514'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/08/my-aws-iaas-playlist-for-arabic-speakers.html' title='My AWS IaaS playlist for Arabic speakers'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://img.youtube.com/vi/videoseries/default.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-593676394525417049</id><published>2017-07-23T11:39:00.001+02:00</published><updated>2017-07-23T11:39:52.294+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="ASP.NET"/><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><category scheme="http://www.blogger.com/atom/ns#" term="Kubernetes"/><title type='text'>My talk at DDDSydney 2017</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV9jPJhpBY0_8Pg101LPgkft_GBMh34Y5BSmuHwpEtC1KPl1PgeqTaYWXiV9btgUue0b1LdU3JGmvHyqBvlkmaMDNf6TauHdtsLkdTO_zJHyDkB2R3ASeI6Gut1t5McZot-y_30MNahCI/s1600/dddsydney.png&quot; imageanchor=&quot;1&quot; style=&quot;clear: right; float: right; margin-bottom: 1em; margin-left: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;971&quot; data-original-width=&quot;975&quot; height=&quot;316&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV9jPJhpBY0_8Pg101LPgkft_GBMh34Y5BSmuHwpEtC1KPl1PgeqTaYWXiV9btgUue0b1LdU3JGmvHyqBvlkmaMDNf6TauHdtsLkdTO_zJHyDkB2R3ASeI6Gut1t5McZot-y_30MNahCI/s320/dddsydney.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;It was very exciting to attend and speak at &lt;a href=&quot;http://2017.dddsydney.com.au/&quot; target=&quot;_blank&quot;&gt;DDDSydney 2017&lt;/a&gt;. A lot of interesting topics were presented, and the organizers did a good job classifying the sessions into tracks that one can follow to get a complete picture of a certain area of interest. For example, my session &quot;&lt;i&gt;Avoiding death by a thousand containers. Kubernetes to the rescue!&lt;/i&gt;&quot; was the last in a track that had sessions about microservices and Docker. That made it a logical conclusion on how to host containerized microservices in a highly available and easy-to-manage environment.&lt;br /&gt;
&lt;br /&gt;
In my demos I used AWS. This choice was intentional, since AWS doesn&#39;t support Kubernetes out of the box as both Google Container Engine (GKE) and Azure Container Service (ACS) do. I wanted to show that Kubernetes can be deployed to other environments as well. Thanks to Kops (Kubernetes Operations), it was relatively easy to deploy the Kubernetes cluster on AWS.&lt;br /&gt;
In this session I showed how to expose services using an external load balancer and how deployments make it easy to declare the desired state of the Pods deployed to Kubernetes. I also demonstrated the very powerful concept of Labels and Selectors, which provide a loosely coupled way to connect services to the Pods that contain the service logic.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;script src=&quot;https://gist.github.com/heshamamin/4ba2d8c0781eb909e59a14b0bb7522c1.js&quot;&gt;&lt;/script&gt;

I also demonstrated how easy it is to perform an update to the deployment by switching from Nginx to Apache (httpd).&lt;/div&gt;
In another demo I wanted to demonstrate how to connect services inside the cluster. I made a simple .net core web application that counts the number of hits each frontend gets. The hit count is stored in a Redis instance that&#39;s exposed through a service.&lt;br /&gt;
&lt;br /&gt;
&lt;script src=&quot;https://gist.github.com/heshamamin/2e499261fb7e7c16c05855379b83584e.js&quot;&gt;&lt;/script&gt;

&lt;br /&gt;
The interesting part is how the web application determines the address of the Redis instance. As the docker image should be immutable once created, configurations should be &lt;a href=&quot;https://12factor.net/config&quot; target=&quot;_blank&quot;&gt;stored in the environment.&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;script src=&quot;https://gist.github.com/heshamamin/bc2ecf3784e4292a4d59a4056d5e84bc.js&quot;&gt;&lt;/script&gt;

As shown in the code snippet above, the environment variable REDIS_SERVICE_HOST is used to get the address of the Redis service. This environment variable is automatically populated by Kubernetes, since the Redis service is created before the web application deployment. Otherwise, DNS-based service discovery could be used.
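The lookup can be sketched as follows. This is a minimal Python sketch rather than the original .NET Core code; the REDIS_SERVICE_HOST/REDIS_SERVICE_PORT names are the ones Kubernetes injects for a service named &quot;redis&quot;, while the DNS fallback name is an assumption about the service name:

```python
import os

# Kubernetes injects REDIS_SERVICE_HOST / REDIS_SERVICE_PORT into Pods that
# start after the Redis service exists. Fall back to the service DNS name,
# which cluster DNS resolves for services in the same namespace.
redis_host = os.environ.get("REDIS_SERVICE_HOST", "redis")
redis_port = int(os.environ.get("REDIS_SERVICE_PORT", "6379"))
```

Keeping the address out of the image is what makes the same image usable in every environment.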
I used a simple script to hit the web API. I also manually deleted Pods hosting the web API, and thanks to Kubernetes&#39; desired-state magic, new instances kept being created automatically. This was the result of hitting the service:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMhi4U7sFYZRQLi1j8cyWUUAk4LDr3i5ZqaXMBVE_7XeIxu3Zqmm6YUKyD4gVm8NDZ0UPgzm9hiI3jpl7C1VhlMnrdU13BIlYph-7XgO0QfIkzRD2BvVF0xX5b02IvQZp0J-SanuW40mQ/s1600/KubeVote.gif&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;912&quot; data-original-width=&quot;1600&quot; height=&quot;363&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMhi4U7sFYZRQLi1j8cyWUUAk4LDr3i5ZqaXMBVE_7XeIxu3Zqmm6YUKyD4gVm8NDZ0UPgzm9hiI3jpl7C1VhlMnrdU13BIlYph-7XgO0QfIkzRD2BvVF0xX5b02IvQZp0J-SanuW40mQ/s640/KubeVote.gif&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
Requests go through AWS load balancing to Kubernetes nodes. The service passes the requests to Pods hosting the API.&lt;br /&gt;
&lt;br /&gt;
Kubernetes is one of the fastest-moving open source projects, and I think the greatest thing about it is its community and wide support. So if you&#39;re planning to host containerized workloads, give it a try!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;485&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;//www.slideshare.net/slideshow/embed_code/key/sG9q0DHttwWXUf&quot; style=&quot;border-width: 1px; border: 1px solid #ccc; margin-bottom: 5px; max-width: 100%;&quot; width=&quot;595&quot;&gt; &lt;/iframe&gt; &lt;br /&gt;
&lt;div style=&quot;margin-bottom: 5px;&quot;&gt;
&lt;b&gt; &lt;a href=&quot;https://www.slideshare.net/HeshamAmin/kubernetes-talk-at-dddsydney-2017&quot; target=&quot;_blank&quot; title=&quot;Kubernetes talk at DDDSydney 2017&quot;&gt;Kubernetes talk at DDDSydney 2017&lt;/a&gt; &lt;/b&gt; from &lt;b&gt;&lt;a href=&quot;https://www.slideshare.net/HeshamAmin&quot; target=&quot;_blank&quot;&gt;Hesham Amin&lt;/a&gt;&lt;/b&gt; &lt;/div&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/593676394525417049/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/593676394525417049?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/593676394525417049'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/593676394525417049'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/07/my-talk-at-dddsydney-2017.html' title='My talk at DDDSydney 2017'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV9jPJhpBY0_8Pg101LPgkft_GBMh34Y5BSmuHwpEtC1KPl1PgeqTaYWXiV9btgUue0b1LdU3JGmvHyqBvlkmaMDNf6TauHdtsLkdTO_zJHyDkB2R3ASeI6Gut1t5McZot-y_30MNahCI/s72-c/dddsydney.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-7838194489389271411</id><published>2017-05-20T03:35:00.004+02:00</published><updated>2017-05-20T03:35:44.192+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Database"/><category scheme="http://www.blogger.com/atom/ns#" term="SQL Server"/><title type='text'>Detecting applications causing SQL Server locks</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; 
trbidi=&quot;on&quot;&gt;
On one of our testing environments, login attempts to a legacy web application that uses MS SQL Server were timing out and failing. I suspected that another process was locking one of the tables needed in the login process.&lt;br /&gt;
I ran a query similar to this:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;SELECT request_mode,
 request_type,
 request_status,
 request_session_id,
 resource_type,
 resource_associated_entity_id,
 CASE resource_associated_entity_id 
  WHEN 0 THEN &#39;&#39;
  ELSE OBJECT_NAME(resource_associated_entity_id)
 END AS Name,
 host_name,
 host_process_id,
 client_interface_name,
 program_name,
 login_name
FROM sys.dm_tran_locks
JOIN sys.dm_exec_sessions
 ON sys.dm_tran_locks.request_session_id = sys.dm_exec_sessions.session_id
WHERE resource_database_id = DB_ID(&#39;AdventureWorks2014&#39;)

&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
Which produces a result similar to:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmaI7MzPukA44Efdi57WQzbpKugXJQzIZ59PV9FqPYZY7B9w8MQMlZoHPh1v1FKVe9-mOgvod5aQcbnCPOqcCvTx8oq0d_RGhCzEJ_oguZzn11jVPIHl598Uv6PZm_XEEzBHl2Lve3sVs/s1600/lock.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;60&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmaI7MzPukA44Efdi57WQzbpKugXJQzIZ59PV9FqPYZY7B9w8MQMlZoHPh1v1FKVe9-mOgvod5aQcbnCPOqcCvTx8oq0d_RGhCzEJ_oguZzn11jVPIHl598Uv6PZm_XEEzBHl2Lve3sVs/s640/lock.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
It shows that an application has been granted an exclusive lock on the table EmailAddress, and another query is waiting for a shared lock to read from the table. But who is holding this lock?
In our case, checking the client_interface_name and program_name columns in the result identified a long-running VBScript import job as the culprit.
I created a simple application that simulates a similar condition, which you can check on &lt;a href=&quot;https://github.com/heshamamin/blog/tree/master/DbLock&quot; target=&quot;_blank&quot;&gt;Github&lt;/a&gt;. You can run the application and then run the query to see the results.&lt;br /&gt;
&lt;br /&gt;
It&#39;s a good practice to include the &quot;Application Name&quot; property in your connection strings (as in the provided application source code) to make diagnosing this kind of error easier.
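For illustration, a connection string carrying this property might look like the following (the server, database, and application names here are hypothetical, and the string is shown as a plain Python string rather than the original C# code); the &quot;Application Name&quot; value is what surfaces in the program_name column of sys.dm_exec_sessions:

```python
# Hypothetical SQL Server connection string. The "Application Name" property
# shows up in the program_name column of sys.dm_exec_sessions, so the
# application holding a lock can be identified at a glance.
conn_str = (
    "Server=testdb01;"
    "Database=AdventureWorks2014;"
    "Integrated Security=true;"
    "Application Name=NightlyImportJob;"
)
```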
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/7838194489389271411/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/7838194489389271411?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7838194489389271411'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/7838194489389271411'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/05/detecting-applications-causing-sql.html' title='Detecting applications causing SQL Server locks'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmaI7MzPukA44Efdi57WQzbpKugXJQzIZ59PV9FqPYZY7B9w8MQMlZoHPh1v1FKVe9-mOgvod5aQcbnCPOqcCvTx8oq0d_RGhCzEJ_oguZzn11jVPIHl598Uv6PZm_XEEzBHl2Lve3sVs/s72-c/lock.PNG" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2864409515881664717</id><published>2017-02-18T06:25:00.001+02:00</published><updated>2017-02-18T06:25:49.318+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Agile"/><category scheme="http://www.blogger.com/atom/ns#" term="Scrum"/><title type='text'>Abuse of Story Points</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; 
trbidi=&quot;on&quot;&gt;
&lt;div&gt;
Relative estimates are usually recommended in Agile teams. However, nothing mandates specific sizing units like story points or T-shirt sizing. I believe that, used correctly, relative estimation is a powerful and flexible tool.&lt;/div&gt;
&lt;div&gt;
I usually prefer T-shirt sizing for road-mapping to determine which features will be included in which releases. When epics are too large and subject to many changes, it makes sense to use an estimation technique that is quick and fun and doesn&#39;t give a false indication of accuracy.&lt;/div&gt;
&lt;div&gt;
On the release level, estimating backlog items using story points helps with planning and creates a shared understanding between all team members. However, when story points are used incorrectly, the team can get really frustrated and might try to avoid them in favor of another estimation technique.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;
In a team I&#39;m working with, one of the team members suggested during a sprint retrospective changing the estimation technique from story points to T-shirt sizing. The reasons were:&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Velocity (measured by story points achieved in a sprint) is sometimes used to compare the performance of different teams.&lt;/li&gt;
&lt;li&gt;Story points are used as a tool to force the team to do a specific amount of work during a sprint.&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;
&lt;div&gt;
Both reasons make a good case against the use of story points. &lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
The first one clearly contradicts the relative nature of story points, as each team has a different capacity and baseline for its estimates. Also, the fact that some teams use velocity as a primary success metric is a sign of a &lt;a href=&quot;http://ronjeffries.com/articles/016-03/you-want/&quot; target=&quot;_blank&quot;&gt;crappy agile implementation&lt;/a&gt;.&lt;/div&gt;
&lt;div&gt;
The second point is also a bad indicator. The reason is that you simply get what you ask for: if the PO/SM/Manager wants higher velocity, then inflated estimates are what (s)he gets. This is quite similar to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Observer_effect&quot; target=&quot;_blank&quot;&gt;Observer effect&lt;/a&gt;.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
Fortunately, in our case both of these concerns were based on observations from other teams. Both the Product Owner and Scrum Master were knowledgeable enough to avoid these pitfalls, and they explained how our team uses velocity purely as a planning tool. However, the fact that team members can be affected by the surrounding atmosphere in the organization is interesting, and it highlights the importance of a consistent level of maturity and education.&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
What is your experience with using story points or any other estimation technique? What worked for you and what didn’t? Share your thoughts in a comment below.&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2864409515881664717/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2864409515881664717?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2864409515881664717'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2864409515881664717'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2017/02/abuse-of-story-points.html' title='Abuse of Story Points'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-9030724881001732060</id><published>2016-11-09T22:42:00.000+02:00</published><updated>2016-11-09T22:42:33.742+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><category scheme="http://www.blogger.com/atom/ns#" term="Nano"/><category scheme="http://www.blogger.com/atom/ns#" term="PowerShell"/><title type='text'>Nano Server on AWS: Step by Step</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;/div&gt;
&lt;div&gt;
Windows Server 2016 comes in many flavors. Nano Server is the new addition, optimized to be lightweight and to have a smaller attack surface. It has a much smaller memory and disk footprint and a much faster boot time than Windows Server Core and the full Windows Server. These characteristics make Nano a perfect OS for the cloud and similar scenarios.&lt;br /&gt;
However, being a headless (no GUI) OS means that no RDP connection can be made to administer the server. Also, since only the very core bits are included by default, configuring server features is a different story from what we have in the full Windows Server.&lt;br /&gt;
In this post I&#39;ll explain how to launch and connect to a Nano Server instance on AWS, and then use the package management features to install IIS.&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Launching an EC2 Nano server instance:&lt;/h3&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;In the AWS console under the EC2 section, click &quot;Launch Instance&quot;&lt;/li&gt;
&lt;li&gt;Select the &quot;Microsoft Windows Server 2016 Base Nano&quot; AMI.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_cP6QNyA0rqdA56x-xgVPQZIv1fXlBEZWRreEzU0-8N8P1FoOtA9dam55-2250q76QyhCOMwv4VVxfe1BSJx4ElCnIG9M_ku3lFQNh18TWUhSbHWFFvM5rNBDPjC693W1v3FqGVUgmzA/s1600/Choose+an+Amazon+Machine+Image+%2528AMI%2529.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;130&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_cP6QNyA0rqdA56x-xgVPQZIv1fXlBEZWRreEzU0-8N8P1FoOtA9dam55-2250q76QyhCOMwv4VVxfe1BSJx4ElCnIG9M_ku3lFQNh18TWUhSbHWFFvM5rNBDPjC693W1v3FqGVUgmzA/s640/Choose+an+Amazon+Machine+Image+%2528AMI%2529.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;In the &quot;Choose an Instance Type&quot; page, select the &quot;t2.nano&quot; instance type. This instance type has 0.5GB of RAM. Yes, this will be more than enough for this experiment.&lt;/li&gt;
&lt;li&gt;Use the default VPC and use the default 8GB storage.&lt;/li&gt;
&lt;li style=&quot;margin-bottom: 0px; margin-top: 0px; vertical-align: middle;&quot;&gt;In the &quot;&lt;span class=&quot;gwt-InlineLabel KX&quot;&gt;Configure Security Group&quot; page things will start to be a bit different from the usual full windows server. Create a new security group and select these two inbound rules:&lt;/span&gt;&lt;span style=&quot;font-family: &amp;quot;calibri&amp;quot;; font-size: 11.0pt;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/li&gt;
&lt;ul&gt;
&lt;li style=&quot;margin-bottom: 0px; margin-top: 0px; vertical-align: middle;&quot;&gt;WinRM-HTTP: Port 5985. This will be used for the remote administration.&lt;/li&gt;
&lt;li style=&quot;margin-bottom: 0px; margin-top: 0px; vertical-align: middle;&quot;&gt;HTTP: Port 80. To test IIS from our local browser.&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5b_IcQC8plb7aVKiqtt5mm2B5pb6waAQHpmzoVAlztfpAvlBVseGrZzSc1UpRUyq_Wlt5a5mGFY1pBkCKZjRBuqhJ8vwXViRk2w4kEcdhPL7sElx0oKQt84h8Rz7PYE-U6tN8r5XPd0A/s1600/Configure+Security+Group.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;154&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5b_IcQC8plb7aVKiqtt5mm2B5pb6waAQHpmzoVAlztfpAvlBVseGrZzSc1UpRUyq_Wlt5a5mGFY1pBkCKZjRBuqhJ8vwXViRk2w4kEcdhPL7sElx0oKQt84h8Rz7PYE-U6tN8r5XPd0A/s640/Configure+Security+Group.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;Note that the AWS console gives a warning regarding port 3389, which is used for RDP. We can safely ignore this warning, as we&#39;ll use WinRM; RDP is not an option with Nano Server.&lt;/li&gt;
&lt;li&gt;Continue as usual and use an existing key pair or let AWS generate a new key pair to be used for windows password retrieval.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
&amp;nbsp;&lt;/h3&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Connecting to the Nano server instance:&lt;/h3&gt;
After the instance status becomes &quot;running&quot; and all status checks pass, note the public IP of the instance. To manage this server, we&#39;ll use WinRM (Windows Remote Management) over HTTP. To be able to connect to the machine, we need to add it to the trusted hosts as follows:&lt;br /&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;
&lt;li&gt;Open PowerShell in administrator mode&lt;/li&gt;
&lt;li&gt;Enter the following commands to add the server : (assuming the public IP is 52.59.253.247)&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;pre&gt;&lt;code&gt;$ip = &quot;52.59.253.247&quot;
Set-Item WSMan:\localhost\Client\TrustedHosts &quot;$ip&quot; -Concatenate -Force&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
Now we&#39;re ready to connect to the Nano server:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Enter-PSSession -ComputerName $ip -Credential &quot;~\Administrator&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;br /&gt;
PowerShell will ask for the password, which you can retrieve from the AWS console using the &quot;Get Windows Password&quot; menu option and uploading the key pair file you saved on your local machine.&lt;br /&gt;
&lt;br /&gt;
If everything goes well, all PowerShell commands you&#39;ll enter from now on will be executed on the remote server. So now let&#39;s reset the administrator password for the Nano instance:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;$pass = ConvertTo-SecureString -String &quot;MyNewPass&quot; -AsPlainText -Force
Set-LocalUser -Name Administrator -Password $pass&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Exit &lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
This will change the password and disconnect. To connect again, we can use the following commands and use the new password:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;$session = New-PSSession -ComputerName $ip -Credential &quot;~\Administrator&quot;
Enter-PSSession $session&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Installing IIS:&lt;/h3&gt;
As Nano is a &quot;Just Enough&quot; OS, feature binaries are not included by default. We&#39;ll use external package repositories to install other features like IIS, Containers, Clustering, etc. This is very similar to the apt-get and yum tools in the Linux world; the Windows alternative is &lt;a href=&quot;http://www.oneget.org/&quot; target=&quot;_blank&quot;&gt;OneGet&lt;/a&gt;. The &lt;a href=&quot;https://github.com/OneGet/NanoServerPackage&quot; target=&quot;_blank&quot;&gt;NanoServerPackage&lt;/a&gt; repository has instructions for adding the Nano Server package source, which depends on the Nano Server version. We know that the AWS AMI is based on the released version, but it doesn&#39;t hurt to do a quick check:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Get-CimInstance win32_operatingsystem | Select-Object Version&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
The version in my case is 10.0.14393. So to install the provider, we&#39;ll run the following:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Save-Module -Path &quot;$env:programfiles\WindowsPowerShell\Modules\&quot; -Name NanoServerPackage -minimumVersion 1.0.1.0
Import-PackageProvider NanoServerPackage&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
Now let&#39;s explore the available packages using:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Find-NanoServerPackage&lt;/code&gt;&lt;/pre&gt;
or the more generic command:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Find-Package -ProviderName NanoServerPackage&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;/div&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm2j-FIZv9AZ98NpQGPl2oeksDvXGdlg8E_L_9uHetb3Cl7FI3SLhcGMbXEWFQyzp-D7lOhRh3FLNjN5CQEZurBhXDgrYLiIDmiOGt-7g9e_O9-Wh4XytdbcZ90qcZK-gGSxXQqrwI15U/s1600/Find-Package.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;190&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm2j-FIZv9AZ98NpQGPl2oeksDvXGdlg8E_L_9uHetb3Cl7FI3SLhcGMbXEWFQyzp-D7lOhRh3FLNjN5CQEZurBhXDgrYLiIDmiOGt-7g9e_O9-Wh4XytdbcZ90qcZK-gGSxXQqrwI15U/s640/Find-Package.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
We&#39;ll find the highlighted IIS package. So let&#39;s install it and start the required services:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Install-Package -ProviderName NanoServerPackage -Name Microsoft-NanoServer-IIS-Package
Start-Service WAS
Start-Service W3SVC&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;br /&gt;
Now let&#39;s point our browser to the IP address of the server. And here is our beloved IIS default page:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiyc7aUBvFgL7T5XbBCmUjWVCDt_8a0V1616hOiOOrWgFYAL4al9NxdWpy5sXvhNoNaNhtZBMhYI1WUSLZ3k1IpSkZeHn-s380A_jIy2xM04O3UptBC0Qihd0FdkMH-IvVMpb3u1BFdLw/s1600/IIS+default.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;420&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiyc7aUBvFgL7T5XbBCmUjWVCDt_8a0V1616hOiOOrWgFYAL4al9NxdWpy5sXvhNoNaNhtZBMhYI1WUSLZ3k1IpSkZeHn-s380A_jIy2xM04O3UptBC0Qihd0FdkMH-IvVMpb3u1BFdLw/s640/IIS+default.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Uploading a basic HTML page:&lt;/h3&gt;
Just for fun, create a basic HTML page on your local machine using your favorite tool and let&#39;s upload it and try accessing it. First enter the &lt;b&gt;exit &lt;/b&gt;command to exit the remote management session and get back to the local computer. Note that in a previous step, we had the result of the &lt;b&gt;New-PSSession&lt;/b&gt; in the &lt;b&gt;$session&lt;/b&gt; variable so we&#39;ll use it to copy the HTML page to the remote server over the management session:&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;Copy-Item &quot;C:\start.html&quot; -ToSession $session -Destination C:\inetpub\wwwroot\&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
Navigate to http://nanoserverip/start.html to verify the successful copy of the file.&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h3 style=&quot;text-align: left;&quot;&gt;
Conclusion:&lt;/h3&gt;
Nano Server is a huge step toward enabling higher density of infrastructure and applications, especially in the cloud. However, it requires adopting a new mindset and a new set of tools to get the best of it.&lt;br /&gt;
In this post I just scratched the surface of using Nano Server on AWS. In future posts we&#39;ll explore deploying applications on it to get real benefits.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/9030724881001732060/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/9030724881001732060?isPopup=true' title='9 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/9030724881001732060'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/9030724881001732060'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2016/11/nano-server-on-aws-step-by-step.html' title='Nano Server on AWS: Step by Step'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_cP6QNyA0rqdA56x-xgVPQZIv1fXlBEZWRreEzU0-8N8P1FoOtA9dam55-2250q76QyhCOMwv4VVxfe1BSJx4ElCnIG9M_ku3lFQNh18TWUhSbHWFFvM5rNBDPjC693W1v3FqGVUgmzA/s72-c/Choose+an+Amazon+Machine+Image+%2528AMI%2529.PNG" height="72" width="72"/><thr:total>9</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-294768593562714102</id><published>2016-06-25T15:19:00.000+02:00</published><updated>2016-06-25T15:20:25.914+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Agile"/><category scheme="http://www.blogger.com/atom/ns#" term="Continuous Delivery"/><title type='text'>Agile and Continuous Delivery Awareness Session</title><content 
type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;br /&gt;
This is a recording of a talk that Mona Radwan from http://www.agilearena.net/ and I gave at the Greek Campus in Cairo.&lt;br /&gt;
My part focused on the value of Continuous Delivery from a business perspective and the related technical practices required to achieve it.&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;320&quot; src=&quot;https://www.youtube.com/embed/57RbT5-nzhM&quot; width=&quot;570&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/294768593562714102/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/294768593562714102?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/294768593562714102'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/294768593562714102'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2016/06/agile-and-continuous-delivery-awareness.html' title='Agile and Continuous Delivery Awareness Session'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://img.youtube.com/vi/57RbT5-nzhM/default.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-2388462174448190939</id><published>2016-05-20T23:03:00.002+02:00</published><updated>2016-05-20T23:05:29.874+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><title type='text'>Introduction to AWS video [Arabic] </title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
My video &quot;Introduction to AWS [Arabic]&quot; on YouTube.&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;320&quot; src=&quot;https://www.youtube.com/embed/QQ7gmr6RPlI&quot; width=&quot;570&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/2388462174448190939/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/2388462174448190939?isPopup=true' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2388462174448190939'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/2388462174448190939'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2016/05/introduction-to-aws-video-arabic.html' title='Introduction to AWS video [Arabic] '/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://img.youtube.com/vi/QQ7gmr6RPlI/default.jpg" height="72" width="72"/><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2496415891665263000.post-3538724157179384849</id><published>2016-02-27T17:28:00.000+02:00</published><updated>2016-02-27T17:35:38.793+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="AWS"/><category scheme="http://www.blogger.com/atom/ns#" term="ELB"/><title type='text'>AWS Elastic Load Balancing session stickiness - Part 2</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
In my previous post &quot;&lt;a href=&quot;http://blog.heshamamin.com/2016/01/aws-elastic-load-balancing-session.html&quot; target=&quot;_blank&quot;&gt;AWS Elastic Load Balancing session stickiness&lt;/a&gt;&quot; I demonstrated the use of AWS ELB Load Balancer Generated Cookie Stickiness. In this post, we&#39;ll use an application-generated cookie to control session stickiness.&lt;br /&gt;
To demonstrate this feature, I created a simple ASP.NET MVC application that just displays some instance details to test the load balancing.&lt;br /&gt;
&lt;br /&gt;
Starting from the default ASP.NET MVC web application template, I modified the Index action of the HomeController:&lt;br /&gt;
&lt;br /&gt;
&lt;script src=&quot;https://gist.github.com/heshamamin/57a791dcb26fb71a3a22.js&quot;&gt;&lt;/script&gt;

&lt;br /&gt;
&lt;br /&gt;
Similar to what I did in previous posts using Linux shell scripts, this time I use C# code to request instance metadata from the http://169.254.169.254/latest/meta-data/ URL, store the host name and IP address in the ViewBag object, and display them in the view:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2MUeE-GyM2kSHMt5dd3lMk_Z4qufm49dcrmsV7fQD99bE1vG9P4Pz648u5Nt9pNIaoFRRHv7fMkZrNWxI4DkZ_keUKqeHPSl97omQ2LT_Oxzg6oO7hDY7BiCu6TZ42UD4monMKg0sDfA/s1600/ELB-no-sticky.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;227&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2MUeE-GyM2kSHMt5dd3lMk_Z4qufm49dcrmsV7fQD99bE1vG9P4Pz648u5Nt9pNIaoFRRHv7fMkZrNWxI4DkZ_keUKqeHPSl97omQ2LT_Oxzg6oO7hDY7BiCu6TZ42UD4monMKg0sDfA/s400/ELB-no-sticky.PNG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
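The action embedded in the gist above can be sketched roughly as follows. This is a minimal sketch, not the exact gist code: it assumes WebClient and the standard &lt;b&gt;hostname&lt;/b&gt; and &lt;b&gt;local-ipv4&lt;/b&gt; metadata paths, and the metadata endpoint is only reachable from inside an EC2 instance.&lt;br /&gt;

```csharp
using System.Net;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Instance metadata base URL; reachable only from within an EC2 instance.
    private const string MetadataUrl = "http://169.254.169.254/latest/meta-data/";

    public ActionResult Index()
    {
        using (var client = new WebClient())
        {
            // Fetch the instance host name and private IP and hand them
            // to the view via ViewBag, as described above.
            ViewBag.HostName = client.DownloadString(MetadataUrl + "hostname");
            ViewBag.IpAddress = client.DownloadString(MetadataUrl + "local-ipv4");
        }
        return View();
    }
}
```

Refreshing the page behind the load balancer then shows which instance served each request.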
&lt;br /&gt;
I deployed the application to two Windows Server 2012 R2 EC2 instances. As expected, with the default ELB settings, requests are routed randomly to one of the instances. This can be verified by checking the host name and IP address displayed in the response.&lt;br /&gt;
&lt;br /&gt;
Looking at the request and response cookies, we can see the ASP.NET session cookie that was added:&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWVwMS3y4nMX4zB1ZSA5gGazolkBkq2vcW0Df1fYSu3R7eQjNU-QgjKIHQA1TTmNdScEDYnc2dd65Un4ugYakfG0gkQRkDukxvANOfHX4bgnWxyxErxOr0csW25ohOZzKR7a1cxaSGc8U/s1600/session-cookie.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;115&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWVwMS3y4nMX4zB1ZSA5gGazolkBkq2vcW0Df1fYSu3R7eQjNU-QgjKIHQA1TTmNdScEDYnc2dd65Un4ugYakfG0gkQRkDukxvANOfHX4bgnWxyxErxOr0csW25ohOZzKR7a1cxaSGc8U/s400/session-cookie.PNG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
To configure stickiness based on the &lt;b&gt;ASP.NET_SessionId&lt;/b&gt; cookie, edit the stickiness configuration and enter the cookie name:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGv-ptmFkbco7fucIU0qfTtryOaajU8_NsC0rrPdwSZyzI-oC1R9W5ex7lKLF6DXFzjDcVsAhbvCzVS5l_qn2Li54YRsunciduij7E6emfNL8wR4hpuE0sqPSGq8hq3gnCPxlZCzzAZa0/s1600/ELB-enable-sticky-session.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;272&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGv-ptmFkbco7fucIU0qfTtryOaajU8_NsC0rrPdwSZyzI-oC1R9W5ex7lKLF6DXFzjDcVsAhbvCzVS5l_qn2Li54YRsunciduij7E6emfNL8wR4hpuE0sqPSGq8hq3gnCPxlZCzzAZa0/s640/ELB-enable-sticky-session.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
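The same configuration can also be applied programmatically; here is a minimal sketch using the classic ELB client from the AWS SDK for .NET (the load balancer name, policy name, and listener port are illustrative assumptions):&lt;br /&gt;

```csharp
using Amazon.ElasticLoadBalancing;
using Amazon.ElasticLoadBalancing.Model;

class StickinessSetup
{
    static void Main()
    {
        var client = new AmazonElasticLoadBalancingClient();

        // Create an application-controlled stickiness policy keyed on
        // the ASP.NET session cookie.
        client.CreateAppCookieStickinessPolicy(new CreateAppCookieStickinessPolicyRequest
        {
            LoadBalancerName = "my-load-balancer",
            PolicyName = "aspnet-session-stickiness",
            CookieName = "ASP.NET_SessionId"
        });

        // Attach the policy to the listener on port 80.
        var attach = new SetLoadBalancerPoliciesOfListenerRequest
        {
            LoadBalancerName = "my-load-balancer",
            LoadBalancerPort = 80
        };
        attach.PolicyNames.Add("aspnet-session-stickiness");
        client.SetLoadBalancerPoliciesOfListener(attach);
    }
}
```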
&lt;br /&gt;
Checking the cookies, we find that ELB generates a cookie named &quot;&lt;b&gt;AWSELB&lt;/b&gt;&quot;. As &lt;a href=&quot;http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html&quot; target=&quot;_blank&quot;&gt;documented&lt;/a&gt;: &quot;&lt;i&gt;The load balancer only inserts a new stickiness cookie if the application response
    includes a new application cookie.&lt;/i&gt;&quot;&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqIaaZXUsdG2X2VimccmTs5oOBZcoAhXv2lMXReHQiDSfM5pBxsh0n95lbWPkUxnhQXwweiGzdkcfj0wfEZbBuptMncI4jqFhEA3yHwBueApdGnfXtl_DxSJ8hGeGzVUzPAt4L8xoNd9Y/s1600/elb-response-cookies.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;97&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqIaaZXUsdG2X2VimccmTs5oOBZcoAhXv2lMXReHQiDSfM5pBxsh0n95lbWPkUxnhQXwweiGzdkcfj0wfEZbBuptMncI4jqFhEA3yHwBueApdGnfXtl_DxSJ8hGeGzVUzPAt4L8xoNd9Y/s640/elb-response-cookies.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
Now the browser will send back both the session and ELB cookies:&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOnYrgjMJg1BfDoV5c7un1s9YB7FPMGF549GA05H-AEypdf-BeYaTXqtnE67wdii-o69sZv-1kZ5A3G7_iT5tv35QdOHKLjAth13ah19Dvw-g2wCnZUbEFQTe1J1c2zqRlekvYV0fCTOQ/s1600/elb-request-cookies.PNG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;90&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOnYrgjMJg1BfDoV5c7un1s9YB7FPMGF549GA05H-AEypdf-BeYaTXqtnE67wdii-o69sZv-1kZ5A3G7_iT5tv35QdOHKLjAth13ah19Dvw-g2wCnZUbEFQTe1J1c2zqRlekvYV0fCTOQ/s640/elb-request-cookies.PNG&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
Still, my preference for maintaining session state is to use a distributed cache service like Redis, or even SQL Server, because if an instance goes down or is removed from an auto-scaling group, users will lose their session data if it is stored in memory.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://blog.heshamamin.com/feeds/3538724157179384849/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/2496415891665263000/3538724157179384849?isPopup=true' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/3538724157179384849'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2496415891665263000/posts/default/3538724157179384849'/><link rel='alternate' type='text/html' href='http://blog.heshamamin.com/2016/02/aws-elastic-load-balancing-session.html' title='AWS Elastic Load Balancing session stickiness - Part 2'/><author><name>Hesham A. Amin</name><uri>http://www.blogger.com/profile/00063404912692423973</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='28' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguMazmq9out-la7AM5oPpRgE3vriHazDwnxHQ6Ynk-swFHkGgyCtFETb4eRLyWoatxMXEvKjpKWOA-fW6PxPPxmbUBnNx-SzskdtijvMKJQZS93AVdPE772mpESiiaWvI/s100/Hesham+-+Profile.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2MUeE-GyM2kSHMt5dd3lMk_Z4qufm49dcrmsV7fQD99bE1vG9P4Pz648u5Nt9pNIaoFRRHv7fMkZrNWxI4DkZ_keUKqeHPSl97omQ2LT_Oxzg6oO7hDY7BiCu6TZ42UD4monMKg0sDfA/s72-c/ELB-no-sticky.PNG" height="72" width="72"/><thr:total>0</thr:total></entry></feed>