﻿<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  version="2.0">
  <channel>
    <title>DevOps in the Enterprise</title>
    <atom:link
      href="https://review.docs.microsoft.com/archive/blogs/technet/devops/feed.xml"
      rel="self"
      type="application/rss+xml" />
    <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/feed.xml</link>
    <description>The role of IT in DevOps</description>
    <lastBuildDate>Fri, 08 Dec 2017 16:29:41 GMT</lastBuildDate>
    <language>en-US</language>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <item>
      <title>DevOps Culture</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/devops-culture</link>
      <pubDate>Fri, 08 Dec 2017 09:29:41 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1585</guid>
      <description><![CDATA[Introducing DevOps is not about slapping a bunch of tools onto a broken process or firing all your...]]></description>
      <content:encoded><![CDATA[Introducing DevOps is not about slapping a bunch of tools onto a broken process or firing all your ops teams and/or hiring great coders. To be successful, DevOps practices require careful consideration, planning, and a matching organizational culture. Much has been written about the technology side of DevOps. Let’s take a look at the cultural practices my team observed over the years in leadership teams in support of a lasting DevOps transformation of large organizations.

One of the best-known secrets of successful DevOps implementations is the fact that it is not about technology first. Focusing on technology first is the natural inclination of teams embarking on their DevOps journey. Most likely, the majority of people involved in a DevOps transformation have a strong background in tech. They are IT professionals in operations or have been developers for most of their careers. And as they say, “when you have a hammer…”

While of critical importance, a successful DevOps adoption depends on more than the effective use of tools or learning and implementing new processes. Meaningful and lasting success also demands a transformation of a different kind. Traditional cultures, sometimes established over generations of teams, are often in the way of modern IT and development processes that are essential to the success of DevOps implementations. A DevOps transformation must go hand-in-hand with some well-planned and carefully introduced adoption of cultural practices.

What are the cultural practices organizations might want to explore? Who is impacted? How do you go about introducing them?
<h2>Back to the Roots</h2>
Plenty of definitions have been written, and many opinions shared, about what DevOps stands for. Even I made an attempt. But in the first few seconds of <a href="https://www.youtube.com/watch?v=Fx8OBeNmaWw">this</a> now-famous video, Adam Jacob, working with Opscode in 2010, eloquently defined DevOps once and for all: “DevOps is a cultural and professional movement. Period. That’s it. It’s culture, it’s about your job. That’s it. It’s all it is”.

Since his presentation at the Velocity conference in 2010, a lot of code has been written, and DevOps is on its way to becoming more and more mainstream. Initially, DevOps was often considered something only startups would adhere to. Cash-strapped or otherwise compelled to forgo a dedicated operations staff, they follow a “you build it, you run it” approach. Over time, more and more teams in the largest enterprise organizations discovered the <a href="http://www.devopsdigest.com/devops-advantages-1">all too obvious benefits of DevOps</a> and its practices.
<h2>Cultural Practices</h2>
While there is no single blueprint for how to “do DevOps”, there are a number of common behaviors that support the long-term success of DevOps practices in delivering value to customers through software more effectively and efficiently. A number of core technology practices like CI/CD, automated testing, and others represent a <a href="http://www.itproguy.com/2015/06/26/devops-practices/">core set of DevOps Practices</a>.

All practices connect to technologies, tools, and products. There are literally millions of lines of code written to use, combine, and effectively apply endless permutations of toolchains to improve software delivery processes.

Observing successful organizations, you find that all code and tools are only as good as the supporting organizational culture. You can develop complex processes and use the most sophisticated tools; if your culture does not display certain habits, much of the work goes to waste.

These are common <strong>cultural practices</strong> noticed in organizations that successfully kicked off a DevOps transformation:

<strong>Stay focused</strong><strong>
</strong>Don’t try to boil the ocean. With your team, define a narrow enough scope to get you started. Early successes, small as they may be, are essential encouragement for the team.

<strong>Allow risk-taking</strong><strong>
</strong>Don’t fear mistakes. Only if you empower teams to fail will they learn. This allows for an iterative process to discover the best approach. The faster they fail, the better they become.

<strong>Give leeway</strong><strong>
</strong>Refrain from expecting a fully baked plan going into a DevOps journey. With the discovery of every failure, your processes will adjust and become stronger. If you allow the team to start the journey with a few assumptions, they will incrementally refine and improve any plan. Trust your team!

<strong>Be supportive</strong><strong>
</strong>You as a leader are here to support your team. Delegate responsibility and decision-making. Shield your team from outside influence. Try to minimize bureaucratic overhead for the team.

<strong>Foster confidence</strong><strong>
</strong>Your team is the best. Give your team the confidence that they can do it. Everyone is a professional. Create an environment where people aren’t afraid to say they don’t know (yet).

<strong>Communicate</strong><strong>
</strong>Last but not least, what we consider the key ingredient of a DevOps culture: communication. It starts with the regular stand-ups but must go way beyond them. Carefully involve business leaders in your communication about progress. Consider setting up a place where everyone can find out about the project’s current state. Set up regular meetings to celebrate successes and to share lessons learned. Become an evangelist for your team.


This short list is certainly not a comprehensive inventory of cultural practices, and it goes without saying that, to enable a culture built around these habits, you need to find individuals willing and able to work in such an environment. Later posts will look at the cultural practices supporting a DevOps transformation in more detail and share additional observations from real-world engagements.

Here’s to your successful DevOps transformation!
@volkerw]]></content:encoded>
    </item>
    <item>
      <title>Getting started with node.js and Azure Web Apps</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/getting-started-with-node-js-and-azure-web-apps</link>
      <pubDate>Wed, 14 Dec 2016 10:16:58 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1475</guid>
      <description><![CDATA[Our teams are involved in working with partners and customers in many advanced Azure Services...]]></description>
<content:encoded><![CDATA[Our teams are involved in working with partners and customers on many advanced Azure Services projects. Sometimes one forgets that there are many, many developers just getting started on their journey. Here are a few tips from real-world projects and some resources we find helpful to get you started. All resources were either created during a real project or built to educate internal and external audiences.
<h4>DevOps Fun</h4>
<a href="https://github.com/ritazh">Rita</a> is one of the core SMEs on this topic on our team. A while back she created a simple webpage (and GitHub repo), <a href="https://ritazh.github.io/devopsfun/">DevOpsFun</a>. It contains a full-blown hands-on lab, start to finish, that guides you from setting up your environment all the way to scaling the finished app to support production-like workloads. The lab is based on node.js, MongoDB, and Jenkins, and uses Git.

The content on <a href="https://github.com/ritazh/devopsfun">GitHub</a> walks you through
<ul>
 	<li>Setting up your environment</li>
 	<li>Development</li>
 	<li>Provisioning for Dev/Test</li>
 	<li>Deployment</li>
 	<li>Provisioning a Continuous Integration (CI) server</li>
 	<li>CI and CD with Jenkins</li>
 	<li>Scaling the app</li>
</ul>
Go check out its <a href="https://ritazh.github.io/devopsfun/">homepage</a> to find the modules, all tools used, and links to GitHub.
<h4>Parts Unlimited</h4>
My team just released an updated version of <a href="https://github.com/Microsoft?utf8=%E2%9C%93&amp;q=partsunlimited&amp;type=&amp;language=">two projects</a> based on the story behind <a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262509">The Phoenix Project</a>, the DevOps novel written by one of the leaders in the DevOps space, <a href="https://twitter.com/realgenekim">Gene Kim</a>:

“Parts Unlimited is an example eCommerce website site based for training purposes on the website described in chapters 31-35 of The Phoenix Project, by Gene Kim, Kevin Behr and George Spafford, © 2013 IT Revolution Press LLC, Portland, OR. Resemblance to “Project Unicorn” in the novel is intentional; resemblance to any real company is purely coincidental.”

Like the book, both projects are based on the same fictitious company, Parts Unlimited, and both implement the same solution. The first one, <a href="https://github.com/Microsoft/PartsUnlimited">PartsUnlimited</a>, fully takes advantage of the Microsoft stack of tools. The second, <a href="https://github.com/Microsoft/PartsUnlimitedMRP">PartsUnlimitedMRP</a>, uses only OSS tooling. The projects contain (of course) all source code and detailed manuals.

Needless to say, the projects are structured to help you not only understand and implement the project itself, but also adopt the DevOps practices that support team development and maintenance.
When you check out the projects on GitHub you will find that they contain far more than just code, some manuals, and a few scripts. Since they are intended for self-study by novices, they include information about adjacent topics, such as how to get started with the projects on Linux and macOS, Machine Learning, Authentication, and much, much more.
As always, you are invited to try and <b>contribute</b>!

Many of the steps are also highlighted in a Channel 9 video you find <a href="https://channel9.msdn.com/Blogs/TalkDevOps/TalkDevOps--Deploying-a-Java-application-with-VSTS">here</a>.
<h4>VorlonJS – A Journey to DevOps</h4>
While this might be a bit of an advanced topic, it is well worth mentioning here. In <a href="https://blogs.technet.microsoft.com/devops/tag/vorlonjs/">this series of blog posts</a>, <a href="https://twitter.com/jcorioland">Julien Corioland</a> shares the experience of a hackathon held earlier this year in Munich. The goal of this event was to work on the <a href="http://blogs.msdn.com/b/eternalcoding/archive/2015/04/30/why-we-made-vorlon-js-and-how-to-use-it-to-debug-your-javascript-remotely.aspx">Vorlon.JS project </a>“…to improve the way the team develop, test and release …” applications. The series is well worth your time: it shares all the challenges the team met and how they overcame them, and it teaches about DevOps practices and how Microsoft and non-Microsoft tools can be used to deploy applications to Microsoft Azure.
<h4>Wait, there’s more</h4>
The list above highlights only three of a countless and ever-growing number of helpful resources.

If you peruse each of the projects in more detail, you will find tons of references to additional opportunities to learn more, find answers to questions, and solve problems you may face. Below are a few additional resources:
<ul>
 	<li><a href="https://channel9.msdn.com/Series/DevOps-Fundamentals">DevOps Fundamentals</a> – to get you started with DevOps practices</li>
 	<li><a href="https://www.visualstudio.com/en-us/docs/release/examples/nodejs/node-to-azure-webapps">Publishing node.js web apps to Azure</a> – Using Visual Studio Team Services</li>
 	<li><a href="https://github.com/ritazh/slack-textmeme">Node.js app, js test, Travis CI integration</a> – Example web app with deployment scripts to Azure web app</li>
</ul>
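If you prefer the command line, a node.js app can also be pushed to an Azure Web App without Visual Studio Team Services. The following is a rough sketch only, assuming the cross-platform Azure CLI is installed and you are logged in; all resource names are placeholders:

```shell
# Sketch: create a resource group, App Service plan, and web app,
# then deploy a local node.js repo via Git. Names are placeholders.
az group create --name node-demo-rg --location westus
az appservice plan create --name node-demo-plan --resource-group node-demo-rg --sku B1
az webapp create --name my-node-demo --resource-group node-demo-rg --plan node-demo-plan

# Enable local Git deployment; this prints a Git URL for the web app.
az webapp deployment source config-local-git --name my-node-demo --resource-group node-demo-rg

# Add the printed URL as a remote and push (the URL below is a placeholder).
git remote add azure <git-url-printed-by-previous-command>
git push azure master
```

Remember that your app must listen on the port Azure assigns via the PORT environment variable (`process.env.PORT` in node.js), not a hard-coded port.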
Hope you found this useful; please share your best practices or links to more resources.

This post is based on an internal email exchange between team members <a href="https://twitter.com/ritazzhang">Rita</a>, <a href="http://twitter.com/nzthiago">Thiago</a>, <a href="http://twitter.com/Ju_Stroh">Julien</a>, <a href="https://twitter.com/dcaro">Damien</a>, and others.

Have Fun!
<a href="https://twitter.com/volkerw">@volkerw</a>]]></content:encoded>
    </item>
    <item>
      <title>ChefConf 2016</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/chefconf-2016</link>
      <pubDate>Tue, 30 Aug 2016 09:24:45 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1455</guid>
      <description><![CDATA[My team and I have been reflecting on ChefConf 2016, which we attended last month in Austin, TX. The...]]></description>
      <content:encoded><![CDATA[My team and I have been reflecting on ChefConf 2016, which we attended last month in Austin, TX. The conference and the people we met were a perfect mix of inspiration, knowledge sharing, and community.

We found the Chef management team eager to share their DevOps passion and expertise, and, of course, to discuss the ingredients for DevOps success. To learn more, Seth Juarez cooked up interviews with some Chef VIPs:
<ul>
 	<li>Justin Arbuckle, Chef VP of Transformation, discusses how to transform enterprise IT for speed, scale, and consistency. Justin’s focus on velocity as a delivery goal as well as a DevOps performance metric was shared by many of the industry speakers.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Delivering-Products-at-Velocity/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>Nicole Forsgren, Chef’s Director of Organization Performance and Analytics, shares 2016 DevOps research results. She details how, with the right mix of technology, processes, and a great culture, DevOps helps drive organization profitability, productivity, and market share.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/What-Chef-Learned-From-Four-Years-of-Science-ing-the-Crap-Out-of-DevOps/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>Nathen Harvey, VP of Community Development at Chef, discusses Chef’s portfolio of products and the amazing contributions of the open source community.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/What-is-Chef-Recipes-Cookbooks-and-Community-Oh-My/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
</ul>
Throughout the conference, we heard incredible stories of organizational transformation through DevOps that delivered phenomenal results, including:
<ul>
 	<li>The <a href="https://www.youtube.com/watch?time_continue=2&amp;v=mGJkhuRlvTo">inspiring and entertaining keynote</a> by Alaska Airlines CIO Veresh Sita discussing how Alaska is transforming the customer’s airplane experience (it made me proud that we share greater Seattle as our companies’ headquarters).</li>
 	<li><a href="https://www.youtube.com/watch?v=FUla4UHlZEM">GE Digital’s story</a> on how they’re transforming GE’s extensive portfolio of industrial products with software and ServiceOps.</li>
 	<li><a href="https://www.youtube.com/watch?v=1w0Oscd4D_k&amp;list=PL11cZfNdwNyPo_EEgCGDe9mrUlMtTf361&amp;index=9">How Westpac Bank is delivering ideas at velocity</a> to stay ahead of the curve in the constantly evolving financial services industry.</li>
</ul>
There were still more stories of DevOps successes from NCR, Hearst Corporation, and others—stories that spanned industries and that all had a common foundation: alignment of their teams around a shared, customer-focused goal.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Implementing-DevOps-to-Deliver-Business-Value-A-Real-World-Example/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Using-Value-Stream-Mapping-For-Continuous-Improvement/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

Seth Juarez and Joe Breslin were able to fold in some speakers to elaborate on the importance of culture in the DevOps world. Check out these videos:
<ul>
 	<li>Industry analyst Ben Kepes provides insights on how DevOps can help drive organizational agility and innovation.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Using-DevOps-to-Drive-Organizational-Agility-and-Innovation/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>RunasRadio podcaster Richard Campbell discusses the strained dynamic between devs and ops and how to turn it around to create high-performance, collaborative teams.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Making-DevOps-Work-It-All-Starts-with-Lunch/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>DevOps evangelist Cads Oakley describes how to jumpstart a DevOps initiative within your organization.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/DevOps-Culture-People-First/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
</ul>
Chef even coined a new term to promote this DevOps cultural phenomenon: HugOps. <a href="https://www.youtube.com/watch?v=pW48v1xAPyI">This amusing video</a> will help you understand what it’s all about (even though the boiling hot Austin weather precluded much interest in hugs!).

With all the learnings around DevOps culture and processes percolating in my mind, I, as an engineer, was also interested in diving into the tools and technology aspects of DevOps. Again, Seth Juarez captured some interviews that are sure to sizzle (I hope you’re noticing all my culinary references in this blog!), including:
<ul>
 	<li>Chef Principal Engineer and Microsoft MVP Steven Murawski, explaining how, by using Chef and PowerShell Desired State Configuration (DSC) together, you can streamline the change management process and successfully deploy code and infrastructure on-demand and in-compliance.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Chef-and-PowerShell-DSC-Bringing-Your-Machine-to-Its-Desired-State/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>Microsoft engineers Narayanan Lakshmanan and Boris Scholl elaborating on the technical details around DSC, microservices, and DevOps.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/PowerShell-DSC-Microservices-and-DevOps/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
 	<li>Nirmal Mehta, Chief Technologist at Booz Allen Hamilton, expounding on how you can accelerate compliance using Infrastructure as Code.

<iframe width="960" height="540" src="https://channel9.msdn.com/Events/DevOps-Microsoft-Chef/ChefConf-2016/Accelerating-Compliance-Using-Infrastructure-as-Code/player" frameborder="0" allowfullscreen="allowfullscreen"></iframe></li>
</ul>
Innovation, inspiration, friends: ingredients for a great meal—and conference. ChefConf 2016 was a gourmet experience!

Have fun.
<a target="_blank" href="http://twitter.com/volkerw">@volkerw</a>]]></content:encoded>
    </item>
    <item>
      <title>First Look: Docker for Azure Beta</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/first-look-docker-for-azure-beta</link>
      <pubDate>Tue, 16 Aug 2016 09:23:55 GMT</pubDate>
      <dc:creator><![CDATA[William Buchwalter MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1406</guid>
      <description><![CDATA[A lot of new and exciting stuff was announced at DockerCon 2016 a couple of months ago, including...]]></description>
      <content:encoded><![CDATA[A lot of new and exciting stuff was announced at DockerCon 2016 a couple of months ago, including Docker for Azure.
I received my invitation a couple of days ago, and wanted to share my first impressions.
<h3>Setting up</h3>
The invitation email contains a "deploy to azure" button; clicking it sends us to the custom deployment interface on Azure.
Indeed, Docker for Azure comes as an ARM template, and here is what the important bits of this template look like (in the case of a single manager):

[caption id="attachment_1415" align="aligncenter" width="795"]<a href="https://msdnshared.blob.core.windows.net/media/2016/08/swarmviz.jpg"><img src="https://msdnshared.blob.core.windows.net/media/2016/08/swarmviz.jpg" alt="Docker Swarm ARM Template" width="795" height="449" class="wp-image-1415 size-full" /></a> Note: This diagram was generated with the new version of <a href="http://armviz.io">Armviz</a>, which you should take a look at![/caption]

&nbsp;

Of course, the template comes with some parameters, allowing you to choose the number of managers and workers, the size of the VMs and the name of the swarm.
The template will also ask you to provide an SSH public key for the manager as well as a service principal ID and secret. You can refer to <a href="https://beta.docker.com/docs/azure/">Docker's own documentation</a> for details about how to set everything up.
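If you would rather skip the portal button, an ARM template like this one can also be deployed from the Azure CLI. This is a sketch only: the template URI and the contents of the parameters file are placeholders that you would take from your own Docker for Azure invitation, not values verified against the actual template:

```shell
# Sketch: deploying an ARM template from the CLI into a fresh resource group.
# The template URI and parameters.json contents are placeholders.
az group create --name docker4azure-rg --location westus
az group deployment create \
  --resource-group docker4azure-rg \
  --template-uri <template-url-from-your-invitation> \
  --parameters @parameters.json
```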
<h3>Diving In</h3>
In my case, I simply went with one manager and one worker node.
To connect to the manager, simply ssh into it as the user `docker`:
<pre>&gt; ssh docker@xxx.xxx.xxx.xxx

Welcome to Docker!

dockerswarm-manager0:~$
</pre>
Obligatory docker version:
<pre>dockerswarm-manager0:~$ docker -v

Docker version 1.12.0, build 8eab29e, experimental</pre>
Checking the nodes shows that everything is running as expected:
<pre><code>dockerswarm-manager0:~$ docker node ls</code>

ID                           HOSTNAME                    STATUS  AVAILABILITY  MANAGER STATUS

6p1xtdlrfwf3suzoqzws3fldh    _dockerswarm-worker-vmss_0  Ready   Active

bmzw0jiufycg23p3u3pfixsih *  _dockerswarm-manager0       Ready   Active        Leader</pre>
Let's see which images are present right out of the box:
<pre>dockerswarm-manager0:~$ docker images

REPOSITORY             TAG                   IMAGE ID            CREATED             SIZE

docker4x/agent-azure   latest                b2b22beefac8        4 days ago          94.13 MB

docker4x/init-azure    azure-v1.12.0-beta4   dd6652cf2f87        7 days ago          35.32 MB

docker4x/controller    azure-v1.12.0-beta4   d61704e07424        10 days ago         22.46 MB

docker4x/guide-azure   azure-v1.12.0-beta4   8dca840e0fc0        2 weeks ago         35.15 MB</pre>
And let's see which ones are running:
<pre><code>dockerswarm-manager0:~$ docker ps</code>

CONTAINER ID        IMAGE                                      COMMAND

eec4385cd9a4        docker4x/controller:azure-v1.12.0-beta4    "loadbalancer run --d"

334c9369735a        docker4x/guide-azure:azure-v1.12.0-beta4   "/entry.sh"

c50f292b7dc6        docker4x/agent-azure                       "supervisord --config"</pre>
Three containers are currently running on our manager. You won't find a lot of info about them online, and their Docker Hub repositories are currently devoid of any docs, but here is what they do:
<ul>
 	<li><b>Docker4x/agent-azure</b>: this is the <a href="https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-agent-user-guide/">Azure Linux Agent</a>, which "manages interaction between a virtual machine and the Azure Fabric Controller". It is basically responsible for communication between the VM and Azure for diagnostics, provisioning, etc.</li>
 	<li><b>Docker4x/guide-azure</b>: I did not find a lot of info on this one. It runs a cron task that calls `<strong>buoy</strong>`, a custom tool which seems to be used for logging important events in the swarm.</li>
 	<li><b>Docker4x/controller</b>: This is the most interesting one. It runs `<strong>loadbalancer</strong>`, another custom tool from Docker (which doesn't seem to be open source at the time of this writing). This container actually manages the load balancing rules of our external load balancer; more about that below.</li>
</ul>
We saw that another image was present but not currently running: <b>docker4x/init-azure</b>. As the name implies, this image is responsible for initializing the different nodes composing your swarm: if the VM's role is manager, it inits the swarm; if the VM's role is worker, it joins the existing swarm.

Let's create a simple service to see everything in action:
<pre>dockerswarm-manager0:~$ docker service create --name nginx --publish 80:80 nginx

dockerswarm-manager0:~$ docker service ps nginx

ID                         NAME     IMAGE  NODE                        DESIRED STATE  CURRENT STATE           ERROR

6lnn6ucp4hg0f5l3dnm47fluq  nginx.1  nginx  _dockerswarm-worker-vmss_0  Running        Running 13 minutes ago</pre>
I created a service containing a single replica of nginx and published port 80. If I open my browser and navigate to the public IP of my swarm, sure enough I get a "Welcome to nginx!" page; I didn't have to do anything else.
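Since this is regular Docker 1.12 swarm mode, the standard service commands work here too. For example, the single-replica service above can be scaled out, and you can watch the tasks spread across the manager and worker:

```shell
# Scale the nginx service from 1 to 3 replicas,
# then list its tasks to see which nodes they landed on.
docker service scale nginx=3
docker service ps nginx
```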

Let's take a look at our load balancer rules in Azure:

<a href="https://msdnshared.blob.core.windows.net/media/2016/08/swarmlbrules.jpg"><img src="https://msdnshared.blob.core.windows.net/media/2016/08/swarmlbrules-1024x716.jpg" alt="Azure Load Balancing Rules Docker Swarm" width="879" height="615" class="aligncenter size-large wp-image-1425" /></a>

We can see that one rule was created for port 80 (the port I published when creating the nginx service), yet I never created anything myself!

Remember the <strong>docker4x/controller</strong> image we saw running on the manager earlier? It actually takes care of monitoring the services running on the swarm and updating the load balancing rules accordingly. When you create a new service in swarm, <strong>loadbalancer</strong> will create a new load balancing rule for each published port, and delete those rules when you delete the service.
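You can observe this behavior yourself by publishing a second service on a different port. A small experiment (the service name is arbitrary):

```shell
# Publish a second service on port 8080; within a short delay the Azure
# load balancer should show a matching rule for 8080.
docker service create --name hello --publish 8080:80 nginx

# ...check the load balancing rules blade in the Azure portal...

# Removing the service should make the rule disappear again.
docker service rm hello
```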

<b>Conclusion</b>

Setting up the swarm took less than five minutes, as all I needed to do was provide eight parameters for the ARM template.
Once the creation of the resources in Azure is complete, you have a ready-to-use swarm!
Thanks to <strong>loadbalancer</strong>, Azure Load Balancer is nicely integrated into the workflow, making service creation a breeze.]]></content:encoded>
    </item>
    <item>
      <title>Docker Swarm with Linux and Windows</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/docker-swarm-with-linux-and-windows</link>
      <pubDate>Thu, 30 Jun 2016 08:04:59 GMT</pubDate>
      <dc:creator><![CDATA[DavidTesar]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1405</guid>
      <description><![CDATA[As soon as you need to run containers in any form more available or production-ready configuration,...]]></description>
<content:encoded><![CDATA[As soon as you need to run containers in any more available or production-ready configuration, you're going to need more than one Docker engine host OS/VM/node where the containers can run. In this post and the embedded three-part video series I did with Dongluo Chen from Docker, we will address how Docker Swarm helps to solve some of the challenges of running in a more available or production-ready configuration, including:
<ul>
	<li>Which Docker engine node should I run my next container on?</li>
	<li>Will the Docker engine node have enough capacity / resources available or proper host OS software installed (i.e. on Windows or Linux) to meet the container's requirements to run?</li>
	<li>What if I keep adding and removing Docker engine nodes - how will I keep track of them all?</li>
	<li>How can I make my containerized application be balanced across multiple Docker engine node hosts?</li>
	<li>If a Docker engine node fails - how will I handle this?</li>
</ul>
Docker Swarm comes to the rescue! Docker Swarm is a clustering and scheduling tool for Docker containers. Swarm establishes and manages a cluster of Docker nodes as a single virtual system. One of the big benefits of Swarm is that it allows people to use the native Docker commands they are familiar with to run containers. It should also be duly noted that Swarm needs to be paired with a separate technology for things like node leader election and discovery to be truly effective. One such option is Consul, but see here for <a href="https://www.consul.io/intro/vs/index.html">other alternative comparisons</a>.
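To give a sense of what the deployment templates below automate, here is a rough sketch of wiring up a standalone Swarm cluster against Consul by hand, based on Docker's documentation from this era; all IPs are placeholders and the exact flags may differ in your version:

```shell
# Sketch: standalone Docker Swarm (pre-swarm-mode) with Consul discovery.
# <consul-ip>, <manager-ip>, and <node-ip> are placeholders.

# 1. Run a single-node Consul server to act as the discovery backend.
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# 2. Start a Swarm manager that uses Consul for node discovery.
docker run -d -p 4000:4000 swarm manage -H :4000 \
  --replication --advertise <manager-ip>:4000 consul://<consul-ip>:8500

# 3. On each Docker engine node, join the cluster.
docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500

# 4. Point your Docker client at the manager and use normal Docker commands.
docker -H <manager-ip>:4000 info
```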

Docker Swarm is supported in Microsoft Azure and can be deployed easily using these two Azure Resource Manager deployment templates, which already have the highly available baseline configuration details worked out for you:
1) <a href="https://github.com/Azure/azure-quickstart-templates/blob/42bd8e92268fc4c46f0dc94ecde50923cfe05b1b/docker-swarm-cluster/README.md">Using CoreOS as the host OS and Consul</a>
2) <a href="https://github.com/Azure/azure-quickstart-templates/tree/master/101-acs-swarm">Using Azure Container Service with Swarm</a>

Below is Part 1 of the video interview with Docker engineer <a href="http://twitter.com/dongluochen">@Dongluochen</a>, walking through how to run the first option on Azure.

<iframe src="https://channel9.msdn.com/Blogs/containers/Docker-Swarm-Part-1/player" width="640" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
<a href="https://channel9.msdn.com/Blogs/containers/Docker-Swarm-Part-1">Link to source video on Channel 9</a>

Now that you have a highly available Swarm cluster, how might you go about adding a Windows Server 2016 Docker engine host node to the cluster? This would enable you to provision Windows-based containers through Swarm. That is exactly what we cover in the Part 2 video with Dongluo:

<iframe src="https://channel9.msdn.com/Blogs/containers/Docker-Swarm-Part-2/player" width="640" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
<a href="https://channel9.msdn.com/Blogs/containers/Docker-Swarm-Part-2">Link to source video on Channel 9</a>

Did you know that you can have Windows Server 2012 R2 run as host nodes in a Swarm cluster?
If you'd like to learn - <a href="https://channel9.msdn.com/Blogs/containers/How-to-Add-a-Windows-Server-as-a-Swarm-Node-Part-3">watch this Part 3 demonstration of how to create swarm binaries for Windows Server and to join the Windows Server as a part of the Swarm cluster</a>.

<iframe src="https://channel9.msdn.com/Blogs/containers/How-to-Add-a-Windows-Server-as-a-Swarm-Node-Part-3/player" width="640" height="360" allowFullScreen frameBorder="0"></iframe>

For more information:
<ul>
	<li>Docker Swarm information page</li>
	<li><a href="https://channel9.msdn.com/Blogs/containers">The Containers Channel</a> video series on @ch9</li>
</ul>
David Tesar - <a href="http://twitter.com/dtzar">@dtzar</a>]]></content:encoded>
    </item>
    <item>
      <title>A Git Workflow for Continuous Delivery</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/a-git-workflow-for-continuous-delivery</link>
      <pubDate>Tue, 21 Jun 2016 07:28:58 GMT</pubDate>
      <dc:creator><![CDATA[William Buchwalter MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1285</guid>
      <description><![CDATA[This post is based on a talk Nathan Henderson (@nathos) and I (@wbuchw) gave a few weeks ago. Nathan...]]></description>
      <content:encoded><![CDATA[<em>This post is based on a talk Nathan Henderson (<a target="_blank" href="https://twitter.com/nathos">@nathos</a>) and I (<a target="_blank" href="http://twitter.com/wbuchw">@wbuchw</a>) gave a few weeks ago. Nathan is Services Principal Engineer at GitHub. </em>
<h3><strong>The current state of things (spoiler: git-flow)</strong></h3>
In the last few years, many of us have chosen Git to handle our version control, for a lot of good reasons.
One of them is easy branching: branching in Git is instantaneous, since a branch is just a lightweight pointer rather than a whole copy of the code, giving us an easy way to isolate our work in progress and to switch between tasks.
But with power comes responsibility: how do we keep this flexibility without creating a total mess? As a team, we need to agree, at the very least, on a common place to store finished work. This agreement is often called a branching model, or a workflow. Different teams will use different branching models, but some are more common than others.

One particular branching model has gained so much popularity over the years that it is almost considered a standard. It is commonly referred to as git-flow.
Search for "git workflow" and you will most likely end up on this article: "<a target="_blank" href="http://nvie.com/posts/a-successful-git-branching-model/">A successful Git branching model</a>".

You might notice something, though: that post was written in 2010, and a lot has happened since then.
The DevOps culture has gained broad adoption, and continuous delivery is the new grail.
Companies such as Flickr and ThoughtWorks showed us how integrating and releasing far more often reduces the pain of... integrating and releasing.
New tools have appeared in the years since, helping us release software easily. We are now able to do it in a matter of minutes!

Git-flow is actually not well suited to continuous delivery.
It assumes that every feature branch will be long lived and hence contain a lot of changes. In turn, this means integrating a feature branch back into the trunk takes a lot of time, so we have a "dev" branch which is used to integrate and run a preliminary QA check.
Since many people might be merging back their (huge) changes for a single release, a release branch is needed before merging back to master, where we do another QA round and fix the (hopefully) last bugs.
On top of that, we have hotfix branches, release tags, etc., adding yet another layer of complexity.

[caption id="attachment_1375" align="aligncenter" width="668"]<a href="https://msdnshared.blob.core.windows.net/media/2016/06/gitflow-4.png"><img class=" wp-image-1375" src="https://msdnshared.blob.core.windows.net/media/2016/06/gitflow-4-1024x429.png" alt="Simplified git-flow representation" width="668" height="280" /></a> Simplified git-flow representation[/caption]

&nbsp;

Isn't there a better, simpler alternative?

There is and it's actually being used today by companies like GitHub, Microsoft, ThoughtWorks and many others doing CD.
This workflow has different names, but it is mostly known as either GitHub-flow or Trunk Based Development (I'll use GitHub-flow from here). So what is this flow about?
<h3><strong>Enter GitHub Flow</strong></h3>
GitHub Flow is all about short feedback loops (most things in DevOps are, actually). This means work branches (‘work’ could mean a new feature or a bug fix - there is no distinction) start from the production code (master) and are short lived - the shorter the better. Merging back becomes a breeze and we are truly continuously integrating. Indeed, if two developers are working in two separate branches for 3 months, they are by definition not continuously integrating their code and will have a lot of fun when it’s merging time :).
That’s pretty much all there is to know about the actual workflow, really.
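The whole flow can be sketched with plain git commands in a throwaway repository (branch and file names here are invented for the example):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
trunk=$(git symbolic-ref --short HEAD)   # "master" on 2016-era git

# master always reflects the production code
echo "v1" > app.txt
git add app.txt && git commit -qm "production code"

# a short-lived work branch -- feature or bug fix, no distinction
git checkout -qb fix-login-typo
echo "v2" > app.txt
git commit -qam "fix typo on login page"

# merging back soon afterwards is a breeze: almost nothing has diverged
git checkout -q "$trunk"
git merge -q --no-ff -m "Merge fix-login-typo" fix-login-typo
```

In real life the merge would go through a pull request rather than a local `git merge`, but the branch topology is exactly this.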

[caption id="attachment_1335" align="aligncenter" width="580"]<a href="https://msdnshared.blob.core.windows.net/media/2016/06/ghflow.png"><img class=" wp-image-1335" src="https://msdnshared.blob.core.windows.net/media/2016/06/ghflow-1024x449.png" alt="Simplified GitHub Flow" width="580" height="254" /></a> Simplified GitHub Flow[/caption]

Collaboration is the other cornerstone of the GitHub Flow. Everyone agrees that code reviews are a good thing, but few will actually do it seriously. In many cases this is simply because it takes too much time: most developers will wait until they think they are done to open a pull request (or Merge Request if you use GitLab). How can we review 3 weeks of changes in a reasonable amount of time? We cannot, so we just skim over the surface and approve 👍 .

A better approach is to open a pull request as early as possible and code in the open: discuss implementation details and architecture choices as you go, and tag people who can help you while you’re coding.
This has the side effect of creating a living documentation for you: wondering why someone made a particular decision? Check the discussion in the related pull request.

Of course, you could do that with any flow; it is not specific to GitHub Flow. But it happens to be a best practice among teams implementing this flow, and a smaller change set is always easier to review.

<a href="https://msdnshared.blob.core.windows.net/media/2016/06/PR-1.png"><img class="aligncenter wp-image-1365" src="https://msdnshared.blob.core.windows.net/media/2016/06/PR-1-1024x443.png" alt="Pull Request flow" width="568" height="246" /></a>

When you're done and the pull request is accepted, your code should be deployed to a production-like environment as soon as possible (production is a good production-like environment ;) ). If anything breaks, it's easier to narrow down an issue in a release where only one or two developers made changes, than in a 100+ commits mess.
<h3><strong>Making large-scale changes</strong></h3>
Short lived branches and merging into master frequently seems like a good idea, but how can we make major changes to the code base in this context?
<ul>
 	<li><em><strong>Feature flags</strong></em>: Split your work into smaller, self-contained changes and release them behind a feature flag. You still get true CI and fast feedback from a staging environment, for example, without exposing your work in progress in production.</li>
 	<li><em><strong>Branch by abstraction</strong></em>: We branch so our modifications are isolated and don't impact others, and vice versa. Developers already have another tool to do the job: abstraction. Hide the component you want to refactor behind an interface and incrementally swap the old implementation for the new. You should still be able to release to production at any time without breaking anything. <a target="_blank" href="http://martinfowler.com/bliki/BranchByAbstraction.html">Here is a nice read on the subject by Martin Fowler</a>.</li>
 	<li><em><strong>Split your work</strong></em> <em>(microservices):</em> This won't apply to most cases, but when creating a new component, ask yourself if you could make it stand alone. A series of smaller, independent components is much easier to change safely.</li>
</ul>
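As a minimal illustration of the first technique (the flag name and code paths are invented for the example), a feature flag is just a conditional that ships to production switched off:

```shell
# Hypothetical flag: the new code path ships disabled by default and is
# enabled per environment through configuration, not through branching.
new_checkout_enabled() {
  [ "${FEATURE_NEW_CHECKOUT:-0}" = "1" ]
}

if new_checkout_enabled; then
  echo "rendering new checkout"    # staging, once the flag is flipped
else
  echo "rendering old checkout"    # the production default
fi
```

The unfinished feature merges to master continuously; flipping `FEATURE_NEW_CHECKOUT=1` in one environment exposes it there without a release branch.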
Short lived branches are something to strive for. They remove a lot of pain from integrating and merging while giving you feedback quickly.
When dealing with messy and highly coupled code, though, the cost of branching by abstraction may outweigh the benefits of short lived branches. Like many things, it's a tradeoff. There are some legitimate cases for a long lived branch, but they should remain the exception.
<h3><strong>Going One step further</strong></h3>
<ul>
 	<li><em><strong>Deploying before merging</strong></em>: GitHub deploys in production directly from the feature branch and if everything looks good, it is then merged into master. This means that master becomes a record of known good production code.</li>
 	<li><em><strong>Scheduled release</strong></em>: GitHub.com is deployed continuously, but GitHub Enterprise's updates ship every few weeks. To achieve that, GitHub creates release branches from master (say, release-2.6) and cherry-picks the features they want to ship in that version. No development happens on this release branch.</li>
</ul>
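The scheduled-release idea can be sketched in a throwaway repository (commit messages, file names, and the version number are invented): branch from the last shipped point, then cherry-pick only the commits that should go out.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

echo base > base.txt && git add . && git commit -qm "last shipped state"
git tag last-release

echo a > feature-a.txt && git add . && git commit -qm "feature A: ready to ship"
ship_sha=$(git rev-parse HEAD)
echo b > feature-b.txt && git add . && git commit -qm "feature B: not ready"

# The release branch starts from the last shipped point; only feature A
# is cherry-picked onto it, so feature B stays behind on master.
git checkout -q -b release-2.6 last-release
git cherry-pick -x "$ship_sha" >/dev/null
```

After this, `release-2.6` contains `base.txt` and `feature-a.txt` but not `feature-b.txt`, and no further development happens on it.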
<h3><strong>When GitHub Flow doesn't make sense</strong></h3>
GitHub Flow is awesome, but it is not a silver bullet. Trying to implement it before having a reliable test suite and continuous integration in place (at least) will lead to serious quality issues in production.

Let me stress this point: <strong>Git-flow is not a bad workflow</strong>, but it is not well suited for continuous delivery.
If you still need manual verifications to ensure the quality of your product, then by all means, stay with Git-flow.

<strong>Resources</strong>
<ul>
 	<li>The excellent "<a href="https://guides.github.com/introduction/flow/">Understanding the GitHub-flow</a>" by GitHub themselves.</li>
 	<li><a href="https://www.thoughtworks.com/insights/blog/enabling-trunk-based-development-deployment-pipelines">"Enabling Trunk Based Development with Deployment Pipelines"</a> by Vishal Naik from ThoughtWorks</li>
</ul>
&nbsp;]]></content:encoded>
    </item>
    <item>
      <title>DevOps Dimensions</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/devops-dimensions</link>
      <pubDate>Mon, 13 Jun 2016 07:52:00 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1245</guid>
      <description><![CDATA[Over the past few years, my team and I have pondered and discussed deeply the question, what is...]]></description>
      <content:encoded><![CDATA[Over the past few years, my team and I have pondered and deeply discussed the question of what DevOps is. We continually come back to the consistent components—people, process, and tools—but what stands out to us is that the journey is different for everyone. And while that is a bit daunting, the community is incredibly open (pun intended) in truly wanting to help others overcome the foreseeable, and many not so foreseeable, hurdles in the transformation to DevOps.

This was the inspiration for our DevOps Dimension show on Channel 9, and I’m really thrilled about the conversations we have been able to share to date, including:
<ul>
	<li>Companies with solutions enabling DevOps practices, like <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/8--Docker-Best-Practices--Industry-Future">Docker</a>, <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/5--State-of-DevOps-with-Puppet-Labs">Puppet</a>, and <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/9--DevOps--Deployment-Automation-Best-Practices">Octopus</a>.</li>
	<li>Teams that have taken their own DevOps journey, such as <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/2--Nordstrom--Tips-for-making-a-DevOps-transformation">Nordstrom</a> and <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/6--Blameless-Postmortems-with-PushPay">Pushpay</a>.</li>
	<li>Some of our own Microsoft DevOps journeys, featuring <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/1--Shift-to-DevOps-Inside-MSFT-An-engineers-perspective">Visual Studio</a>, <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/3--MSN-and-Universal-Store-Combining-PaaS-with-Configuration-Management">MSN</a>, <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/4--Bing-Experimentation-and-Testing-at-Scale">Bing</a>, and <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/Mobile-DevOps">OneDrive</a>.</li>
</ul>
In addition, we love going where the community goes and recently attended <a href="http://www.devopsdays.org/events/2016-london/program/">DevOps Days London</a>, where we had a great opportunity to interview luminaries driving the DevOps movement.

For example, <a href="https://twitter.com/OguzPastirmaci">Oguz</a> and <a href="https://twitter.com/NZThiago">Thiago</a> from my team sat down with <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/10--Gene-Kim-Interview-at-DevOps-Days-London">Gene Kim</a>, renowned DevOps advocate, entrepreneur, and <a href="http://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592">author</a>, to discuss the past, present, and future of DevOps, including how people can get started with DevOps or take the next step.

<iframe src="https://channel9.msdn.com/Shows/DevOps-Dimension/10--Gene-Kim-Interview-at-DevOps-Days-London/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

Oguz also had the opportunity to interview <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/11--Kris-Saxton-Automation-Logic-Founder">Kris Saxton</a> of Automation Logic about Bimodal IT and why he views it as a highly flawed strategic approach to IT. He argues that Bimodal IT, which splits IT into a traditional, day-to-day mode and an innovative, agile mode, fails to promote either stability or agility and instead blocks vital collaboration.

<iframe src="https://channel9.msdn.com/Shows/DevOps-Dimension/11--Kris-Saxton-Automation-Logic-Founder/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

Plus, Oguz connected with <a href="https://channel9.msdn.com/Shows/DevOps-Dimension/12--Claire-Agutter-ITSM-Zone-Director">Claire Agutter</a> to discuss barriers to implementing DevOps and how to overcome them. Claire, ITSM Zone director and online education specialist, notes that many financial institutions are resistant to adopting DevOps practices because they’ve invested in ITIL and fundamental processes. As ITIL and DevOps are not mutually exclusive, she discusses how to implement organizational change and how ITIL and DevOps can complement each other.

<iframe src="https://channel9.msdn.com/Shows/DevOps-Dimension/12--Claire-Agutter-ITSM-Zone-Director/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

We will have many additional episodes coming soon that highlight more industry experts, more companies making the transformation, and more Microsoft stories, so continue to follow our blog or check back regularly. And if you want more resources right now, check out the following blog articles:
<ul>
	<li><a href="http://www.itproguy.com/devops-practices/">DevOps Practices </a></li>
	<li><a href="http://www.talmeida.net/blog/devops-fundamentals-series">DevOps Fundamentals Video Series</a></li>
	<li><a href="https://blogs.technet.microsoft.com/juliens/2016/02/14/devops-where-do-i-start-cheat-sheet/">DevOps Getting Started Cheat Sheet</a></li>
</ul>
Additional Resources:
<ul>
	<li>Read more from <a href="http://www.realgenekim.me/">Gene Kim</a>, co-author of <a href="http://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592">The Phoenix Project</a> and the upcoming DevOps Handbook</li>
	<li>Read the <a href="https://puppet.com/resources/white-paper/2015-state-of-devops-report">State of DevOps Report</a></li>
	<li>Read more from <a href="http://continuousdelivery.com/">Jez Humble</a>, author of <a href="http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912">Continuous Delivery</a> and <a href="http://www.amazon.com/Lean-Enterprise-Performance-Organizations-Innovate/dp/1449368425/ref=pd_sim_14_2?ie=UTF8&amp;dpID=51rqmvE3A-L&amp;dpSrc=sims&amp;preST=_AC_UL160_SR105%2C160_&amp;refRID=1QZBRCT8WRNJMGFDHG1B">Lean Enterprise</a></li>
	<li>Read <a href="http://www.amazon.com/Practical-Approach-Large-Scale-Agile-Development/dp/0321821726/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1463279390&amp;sr=1-1&amp;keywords=large+scale+agile">Large Scale Agile</a> by Gary Gruver</li>
	<li>Visit <a href="http://blog.gardeviance.org/">Simon Wardley's blog</a> to learn more about organization and cell structure</li>
	<li>Visit <a href="https://www.youtube.com/channel/UCO1-auh6FwgROoXy--ozOMg">The ITSM Crowd channel</a> for regular chats on all things service management</li>
</ul>
Have fun,
@volkerw]]></content:encoded>
    </item>
    <item>
      <title>The DevOps Factory</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/the-devops-factory</link>
      <pubDate>Tue, 03 May 2016 19:59:33 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1005</guid>
      <description><![CDATA[A few days ago Microsoft launched a new factory. Yes, you heard right, a factory. But it is not a...]]></description>
      <content:encoded><![CDATA[<p align="justify"><a href="http://www.thedevopsfactory.com/" target="_blank"><img width="389" height="201" title="clip_image002" align="right" style="margin: 0px 12px;border: 0px currentcolor;float: right" alt="clip_image002" src="https://msdnshared.blob.core.windows.net/media/2016/05/clip_image002.png" border="0" hspace="12"></a>A few days ago Microsoft launched a new factory. Yes, you heard right, a factory. But it is not a traditional foundry and it is not producing any gadgets or physical goods. <br>It is located at the intersection of software engineering, technology operations, and quality assurance. It will produce lasting business success with DevOps through learning.</p>
<p>We all know there is no way you can buy <a href="https://www.youtube.com/watch?v=AipWnliauV8">DevOps-in-a-Box</a>, but we thought it should be possible to at least produce DevOps experts, off-the-shelf, factory style. How did this factory come about and how would a DevOps factory do that? Let me explain.</p>
<p>True to a DevOps approach, we set up a multi-disciplinary team of subject matter experts and had them brainstorm and build this new virtual factory from the ground up. There were no preconditions for the team except two: it had to be a <strong>DevOps learning experience spanning Microsoft and OSS tools and products</strong> and it had to be a fun experience. Out came <a href="http://www.thedevopsfactory.com/" target="_blank">the DevOps Factory</a>: a gaming, competition, and learning experience enabling people to learn about a diverse set of technologies and practices around DevOps.</p>
<p>As much as learning new stuff always is fun, we wanted it to be more. So we added a competitive angle to the experience of the DevOps Factory. And while you check out the different floors and rooms of the factory, you will learn new things at every step.</p>
<h5>Learn and Earn</h5>
<p>The whole factory consists of different floors and each floor has a number of practice rooms. Before you leave any one room you want to test your knowledge and earn well deserved points. While you wander throughout the factory floors, automated factory staff will make it easy for you to collect points and pick up badges to demonstrate your growing knowledge and expertise in DevOps practices. At any time you will be able to see how you stack up against peers and compare your results, points, and badges.</p>
<p>Later on, before you leave, don’t forget to check your point balance. While it is rewarding and fun to learn and compete, the points you collect are also worth something else. The DevOps Factory is connected to a rewards system that turns your points into real world rewards like gift cards, concert tickets, software, devices, and more. If you register with the factory, you will keep your points across subsequent visits to the practice rooms.</p>
<h5>The Tour</h5>
<p>The factory consists of several production floors where individual rooms on each floor are associated with key DevOps practices. Let me take you on a tour through the factory. The current iteration covers the following practices in detail:</p>
<h6>Automated Testing</h6>
<p>Conveniently located on the second floor, Automated Testing teaches you about the benefits, tools, and best practices of automating your test infrastructure. Among other benefits, automated tests allow for repeatable tests and comparable results.</p>
<h6>Continuous Integration</h6>
<p>Sharing code among developers of the same or different teams requires a common repository. Checking in code allows for automated builds and testing. This and more you learn at the factory’s ground level.</p>
<h6>Infrastructure as Code</h6>
<p>Are your releases delayed due to inadequate error tracking and version control? You might want to look at the DevOps practice, Infrastructure as Code, conveniently located on ground level, next to Continuous Integration.</p>
<h6>Application Performance Management</h6>
<p>What do you know about your app’s performance? Are customers happy with the response time? Is your app performance diagnosis time consuming and incomplete? Check out and learn about APM on level 2.</p>
<h6>Continuous Deployment</h6>
<p>Automatically pushing a new known-good build to an environment removes human error and enables faster, more predictable releases to dev, test, and even production environments. The factory’s third floor is all over this.<br>
By the way, don’t confuse Continuous Deployment with Continuous Delivery – the terms are often used interchangeably, but they are not the same.</p>
<h6>Release Management</h6>
<p>We have dedicated a high-ceilinged floor to this. Release Management, RM for short, is about managing, planning, and controlling builds throughout the stages of the software lifecycle. RM includes things like testing and deploying software releases. Lots of good material to look at in this section.</p>
<h6>Configuration Management</h6>
<p>Last but not least, Configuration Management. On the top floor of our little factory you will dive into details about how to establish and maintain consistency of a product, tool, environment, or service over its lifetime.</p>
<h5>In Closing</h5>
<p>Our factory will grow over time and we are constantly working on new and improved content. I would be very interested in your feedback, ideas and suggestions. Please use the comment section of this post to share.<br>
And now go and hit the <a href="http://www.thedevopsfactory.com/" target="_blank">factory floor</a>.</p>
<p>Have fun,</p>
<p><a href="https://twitter.com/volkerw">@volkerw</a></p>]]></content:encoded>
    </item>
    <item>
      <title>VorlonJS - A Journey to DevOps: publish image in the Docker Hub using Visual Studio Team Services</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/vorlonjs-a-journey-to-devops-publish-docker-image-visual-studio-team-services</link>
      <pubDate>Mon, 02 May 2016 08:00:00 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/?p=1136</guid>
      <description><![CDATA[If you have any question about this blog post series or DevOps, feel free to contact me directly on...]]></description>
      <content:encoded><![CDATA[<em>If you have any question about this blog post series or DevOps, feel free to contact me directly on Twitter : </em><a href="https://twitter.com/jcorioland"><em>https://twitter.com/jcorioland</em></a><em>.</em>

<em>This post is part of </em><a href="https://blogs.technet.microsoft.com/devops/vorlonjs-a-journey-to-devops-introducing-the-blog-post-series/"><em>the series “VorlonJS – A Journey to DevOps”</em></a>
<h3>Introduction</h3>
The <a href="https://hub.docker.com" target="_blank">Docker Hub</a> is a great way to make container images available to all Docker users.

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1094.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb854.png" alt="image" width="804" height="397" border="0" /></a>

In this post I will explain how we used the Visual Studio Team Services (VSTS) build system to automate building this image and pushing it to the hub, using a Linux agent.

<em>Note: if you're using a Docker Trusted Registry to store private images, the following will work too.</em>
<h3>Configure a Linux Build Agent</h3>
The new <a href="https://www.visualstudio.com/en-us/features/continuous-integration-vs.aspx" target="_blank">build system of Visual Studio Team Services</a> works with build agents. You can use a hosted Windows agent for free, or bring your own agent running on Windows, Linux, or Mac OS. This lets you handle workflows that are not supported on the hosted Windows agent, or reuse scripts and tools you may already have on Linux machines, for example.

In this case, we need <a href="https://www.docker.com/" target="_blank">Docker </a>to build the image, so we have chosen to use a Linux agent.

First, you need a Linux machine. You can create one in <a href="https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-endorsed-distros/" target="_blank">Microsoft Azure</a>. In this case we have deployed a simple <a href="https://azure.microsoft.com/en-us/marketplace/partners/canonical/ubuntuserver1404lts/" target="_blank">Ubuntu 14.04 LTS machine from the Azure Marketplace</a>:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1095.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb855.png" alt="image" width="804" height="491" border="0" /></a>

Once the machine is up and running, open an SSH session and install all the stuff needed for your build workflow. In this case we have installed <a href="https://docs.docker.com/engine/installation/linux/ubuntulinux/" target="_blank">docker-engine</a> and Node.js tools on the machine. We don’t need anything else to build the Vorlon.JS Dashboard Docker image.

The next step is the VSTS agent configuration. The agent will use a Personal Access Token to connect to your account. To create one, click on your name in the VSTS portal (top right), then choose <strong>My profile</strong>. Click on the Security tab, then click the Add button in the right pane. Click Create Token and save the generated token in a secure place; you will need it to start the agent.

Now, you need to authorize your account to use the agent pools. Click on the settings icon on the top right in the VSTS Dashboard. Go in the <strong>Agent Pools</strong> tab and click on the Default pool. Add your account in the two groups: <strong>Agent Pool Administrator</strong> and <strong>Agent Pool Service Accounts</strong>.

Go back to your Linux machine and create a <strong>vsts-agent</strong> directory. Go into this directory and type the following command:
<blockquote>curl -skSL https://aka.ms/xplatagent | bash</blockquote>
It will download all the stuff you need to run the VSTS agent.

To configure and start the agent, type:
<blockquote>./run.sh</blockquote>
You will be asked for the following information:
<ul>
	<li>username: this field is ignored when using a personal access token, so you can type any random username</li>
	<li>password: enter your personal access token</li>
	<li>agent name: the name of the agent (used to display the agent in the VSTS agent queues)</li>
	<li>pool name: you can leave the default</li>
	<li>server url: your VSTS account URL (<a href="https://youraccount.visualstudio.com">https://<strong>youraccount</strong>.visualstudio.com</a>)</li>
</ul>
Leave the default values for all other parameters and wait for the agent to start.

Once started, you can go back in the Agent pools settings on the VSTS portal and you will see the agent that you have just configured:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1096.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb856.png" alt="image" width="804" height="227" border="0" /></a>

<em>Note: you will find all the information about the VSTS agent for Linux and Mac OS on <a href="https://github.com/Microsoft/vso-agent" target="_blank">this page</a>.</em>
<h3>Create the build definition to create a Docker image</h3>
Creating a Docker image is pretty simple. For Vorlon.JS we have two relevant elements in our source code repository:

A Dockerfile that defines how the image should be built: <a title="https://github.com/MicrosoftDX/Vorlonjs/blob/dev/Dockerfile" href="https://github.com/MicrosoftDX/Vorlonjs/blob/dev/Dockerfile">https://github.com/MicrosoftDX/Vorlonjs/blob/dev/Dockerfile</a>
<blockquote># use the node argon (4.4.3) image as base

FROM node:argon

# Set the Vorlon.JS Docker Image maintainer

MAINTAINER Julien Corioland (Microsoft, DX)

# Expose port 1337

EXPOSE 1337

# Set the entry point

ENTRYPOINT ["npm", "start"]

# Create the application directory

RUN mkdir -p /usr/src/vorlonjs

# Copy the application content

COPY . /usr/src/vorlonjs/

# Set app root as working directory

WORKDIR /usr/src/vorlonjs

# Run npm install

RUN npm install</blockquote>
A Bash script that uses the <strong>docker build</strong>, <strong>docker login</strong> and <strong>docker push</strong> commands to build the image, log in to the Docker Hub, and push the image that has been built: <a title="https://github.com/MicrosoftDX/Vorlonjs/blob/dev/build-docker-image.sh" href="https://github.com/MicrosoftDX/Vorlonjs/blob/dev/build-docker-image.sh">https://github.com/MicrosoftDX/Vorlonjs/blob/dev/build-docker-image.sh</a>
<blockquote>#!/bin/bash

# get version from package.json

appVersion=$(cat package.json | jq -r '.version')

echo "Building Docker Vorlon.JS image version $appVersion"

docker build -t vorlonjs/dashboard:$appVersion .

docker login --username="$1" --password="$2"

echo "Pushing image..."

docker push vorlonjs/dashboard:$appVersion

docker logout

exit 0</blockquote>
As you can see in the Bash script, we read the version directly from the package.json file, and we use two positional parameters, $1 and $2, that VSTS will set when executing a new build.
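That version lookup is easy to reproduce on its own. Here is a minimal stand-alone sketch; the sed fallback is an assumption for build agents where jq is not installed, it is not part of the original script:

```shell
# Sample package.json like the one at the root of the Vorlon.JS repository
cat > /tmp/package.json <<'EOF'
{
  "name": "vorlonjs",
  "version": "0.2.1"
}
EOF

# The build script does: appVersion=$(cat package.json | jq -r '.version')
# If jq is not installed on the build agent, a sed extraction works as well:
appVersion=$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' /tmp/package.json)

echo "Building Docker Vorlon.JS image version $appVersion"
```

Either way, the image tag always follows the application version, so every build pushed to Docker Hub is traceable back to a package.json version.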

Let’s define our build!

Go to the BUILD section of your VSTS team project and add a new build definition, starting from an empty template. We only need a single step in this case: the Shell Script execution:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1097.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb857.png" alt="image" width="804" height="344" border="0" /></a>

This task is really easy to configure: you just need to provide the path to the Bash script to execute and the arguments to pass to it. As you can see in the screenshot above, we do not use the username and password directly; instead, we use two variables: <strong>$(docker.username)</strong> and <strong>$(docker.password)</strong>.
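VSTS expands these variables before launching the task, so inside the Bash script they simply arrive as the positional parameters $1 and $2. A toy stand-in for build-docker-image.sh (the function name and the credentials below are hypothetical) illustrates the hand-off:

```shell
# Hypothetical stand-in for build-docker-image.sh: it just reports what it receives.
build_docker_image() {
  # $1 = docker.username, $2 = docker.password, as expanded by VSTS
  echo "logging in as $1 (password: ${#2} characters, not printed)"
}

# VSTS turns the arguments field "$(docker.username) $(docker.password)"
# into a plain command line like this one:
build_docker_image "contoso" "s3cret"
```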

These variables can be configured in the Variables tab of the build definition:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1098.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb858.png" alt="image" width="804" height="364" border="0" /></a>

These are the credentials of your Docker Hub account that will be used by the <a href="https://docs.docker.com/engine/reference/commandline/login/" target="_blank"><strong>docker login</strong></a> command.

In the General tab, make sure the build definition uses the Default agent queue to which you added the Linux VSTS agent.

And that’s it! Just save the build definition and queue a new build:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1099.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb859.png" alt="image" width="804" height="421" border="0" /></a>

Once the build completes, the image has been pushed to Docker Hub:

<a href="https://msdnshared.blob.core.windows.net/media/2016/04/image1103.png"><img style="float: none;padding-top: 0px;padding-left: 0px;margin-left: auto;padding-right: 0px;margin-right: auto;border-width: 0px" title="image" src="https://msdnshared.blob.core.windows.net/media/2016/04/image_thumb860.png" alt="image" width="804" height="257" border="0" /></a>

This is how we made <a href="http://vorlonjs.io">VorlonJS</a> available on <a href="http://hub.docker.com/r/vorlonjs/dashboard" target="_blank">Docker Hub</a> for everyone. Want to try it? Just run the following command on any Docker host:
<blockquote>docker run -d -p 80:1337 vorlonjs/dashboard:0.2.1</blockquote>
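The -p flag publishes a container port in host:container order, so here port 80 on the Docker host forwards to port 1337 inside the container, the port the Dockerfile EXPOSEs and the dashboard listens on. A small sketch of how that mapping reads:

```shell
# "docker run -p 80:1337" publishes the container port in host:container order.
mapping="80:1337"
hostPort=${mapping%%:*}       # 80   -> the port clients hit on the Docker host
containerPort=${mapping##*:}  # 1337 -> the port Vorlon.JS listens on inside

echo "host port $hostPort -> container port $containerPort"

# Once the container is up, the dashboard should answer on the host, e.g.:
#   curl -I http://localhost/
```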
Enjoy building your own images using Visual Studio Team Services!

Julien - <a href="https://aka.ms/jcorioland">https://aka.ms/jcorioland</a>]]></content:encoded>
    </item>
    <item>
      <title>VorlonJS - A Journey to DevOps: Tests in production with Azure App Service</title>
      <link>https://review.docs.microsoft.com/archive/blogs/technet/devops/vorlonjs-a-journey-to-devops-tests-in-production-with-azure-app-service</link>
      <pubDate>Tue, 16 Feb 2016 08:30:00 GMT</pubDate>
      <dc:creator><![CDATA[Volker Will MSFT]]></dc:creator>
      <guid
        isPermaLink="false">https://blogs.technet.microsoft.com/devops/2016/02/16/vorlonjs-a-journey-to-devops-tests-in-production-with-azure-app-service/</guid>
      <description><![CDATA[If you have any question about this blog post series or DevOps, feel free to contact me directly on...]]></description>
      <content:encoded><![CDATA[<p align="justify"><em>If you have any questions about this blog post series or DevOps, feel free to contact me directly on Twitter: </em><a href="https://twitter.com/jcorioland"><em>https://twitter.com/jcorioland</em></a><em>.</em></p> <p align="justify"><em>This post is part of </em><a href="/b/devops/archive/2016/01/12/vorlonjs-a-journey-to-devops-introducing-the-blog-post-series.aspx"><em>the series “VorlonJS – A Journey to DevOps”</em></a></p> <p align="justify"><a href="http://blogs.technet.com/b/devops/archive/2016/02/03/vorlonjs-a-journey-to-devops-release-management-with-visual-studio-team-services.aspx">In the previous post of this series</a> we discussed how Visual Studio Team Services and Release Management can be used to automate the creation of environments and the deployment of an application into these different environments. <p align="justify">If you’ve gone deeper into <a href="https://github.com/MicrosoftDX/Vorlonjs/tree/dev/DeploymentTools">the Azure Resource Manager templates that are used to deploy Vorlon.JS</a>, you may have noticed that we are using a different template for <a href="https://github.com/MicrosoftDX/Vorlonjs/blob/dev/DeploymentTools/production-deployment-template.json">the production environment</a>. This is because we do not deploy the application directly to the production web app; instead, we use a really nice feature of Azure Web App called “deployment slots”. <h3>What are deployment slots?</h3> <p align="justify">Deployment slots are available if you are using the Standard or Premium App Service plan. This feature lets you deploy an application to a slot separate from the production one. Technically, a slot is just another web app running on the same service plan, with its own URL. <p align="justify">Using slots enables features like instant swapping and tests in production.
Swapping is a really useful feature that makes it possible to upgrade to a new version of an application without service interruption or downtime. <p align="justify">When deploying a new application, there is always a warm-up time that can be more or less significant and can impact the experience of your users. Using slots, it’s possible to deploy a new version of the application to a preview or staging slot, boot the application, and then use the swap function, which swaps the virtual IP addresses of the slots at the load balancer configuration level. <p align="justify">In the case of Vorlon.JS, the ARM template that describes the production environment is responsible for creating a Web App slot named staging, and we use Release Management to deploy the application to this slot. After a deployment has occurred, we have two applications running in the production web app: <p align="justify"><img src="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_1.png" alt=" " /> <p align="justify">In this example, the production version of Vorlon.JS (1.4) is deployed on the production slot of the Azure Web App named vorlonjs-production and accessible at <a href="http://vorlonjs-production.azurewebsites.net">http://vorlonjs-production.azurewebsites.net</a>. The new version, 1.5, has been deployed on the staging slot and is accessible at <a href="http://vorlonjs-production-staging.azurewebsites.net">http://vorlonjs-production-staging.azurewebsites.net</a>. <p align="justify">Once the application is booted and we are done with our last checks on the staging slot, we use the swap function to make version 1.5 available on the production URL. As explained before, swapping does not deploy the 1.5 version to the production slot but only updates the network configuration, so it completes in a few seconds with no downtime!
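A dry-run sketch of scripting that swap, assuming today's Azure CLI (az), which postdates this post, and hypothetical resource names; it only prints the command rather than executing it:

```shell
# Assumed names: resource group "vorlonjs-rg", web app "vorlonjs-production".
resourceGroup="vorlonjs-rg"
webApp="vorlonjs-production"

# Swap staging into production (a virtual IP swap, not a redeployment):
swap_cmd="az webapp deployment slot swap --resource-group $resourceGroup --name $webApp --slot staging --target-slot production"
echo "$swap_cmd"

# Run the printed command in a shell where the Azure CLI is installed
# and you are logged in (az login).
```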
<p align="justify">The swap function is available in the Azure Portal or from the command line (PowerShell or the Azure Cross-Platform CLI): <p align="justify"><a href="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_2.png"><img style="border-top:0px;border-right:0px;border-bottom:0px;float:none;padding-top:0px;padding-left:0px;margin-left:auto;border-left:0px;padding-right:0px;margin-right:auto" border="0" src="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_2.png" width="640" height="273" alt=" " /></a> <h3>What are tests in production?</h3> <p align="justify">Testing in production is a common practice that consists of transparently redirecting a subset of an application’s users to a new version to get more feedback before making the application available to all users. <p align="justify">With Azure App Service and deployment slots, implementing tests in production is super easy. Once you have two versions of an application on different slots (see the 1.4 and 1.5 versions of Vorlon.JS above), you just have to go to the routing settings in the Azure portal and select the Traffic Routing option: <p align="justify"><a href="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_3.png"><img style="border-top:0px;border-right:0px;border-bottom:0px;float:none;padding-top:0px;padding-left:0px;margin-left:auto;border-left:0px;padding-right:0px;margin-right:auto" border="0" src="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_3.png" width="640" height="367" alt=" " /></a> <p align="justify">Then, you can configure Azure Web App routing to automatically redirect a percentage of users to another slot.
For example, you can choose to redirect 30% of your production traffic to the staging slot, so 30% of your regular users will use the new version of the application:</p> <p align="justify"><a href="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_4.png"><img style="border-top:0px;border-right:0px;border-bottom:0px;float:none;padding-top:0px;padding-left:0px;margin-left:auto;border-left:0px;padding-right:0px;margin-right:auto" border="0" src="https://msdnshared.blob.core.windows.net/media/2016/02/VorlonJS_DevOps_Part5_4.png" width="640" height="393" alt=" " /></a></p> <p align="justify"><i>Note: once a user has been redirected to a given slot using traffic routing, a cookie is automatically set to ensure that they will not be redirected to another version the next time their browser sends an HTTP request.</i> <h3>Conclusion</h3> <p align="justify">As explained in this article, Azure App Service deployment slots and traffic routing are really nice features that simplify upgrading an application with no service interruption. Tests in production, combined with technologies like Azure Application Insights that provide information and metrics about application usage, can be really helpful to determine whether users will appreciate a new version of the application.  <p align="justify">Julien</p></p></p></p></p></p></p></p></p></p></p></p></p></p></p></p></p>]]></content:encoded>
    </item>
  </channel>
</rss>