<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:blogChannel="http://backend.userland.com/blogChannelModule" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:pingback="http://madskills.com/public/xml/rss/module/pingback/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:betag="http://dotnetblogengine.net/schemas/tags">
  <channel>
    <title>DarksideCookie</title>
    <description>Come to the dark side...we have cookies!</description>
    <link>https://chris.59north.com/</link>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>BlogEngine.NET 3.1.0.1</generator>
    <language>en-US</language>
    <blogChannel:blogRoll>https://chris.59north.com/opml.axd</blogChannel:blogRoll>
    <blogChannel:blink>http://www.dotnetblogengine.net/syndication.axd</blogChannel:blink>
    <dc:creator>Chris Klug</dc:creator>
    <dc:title>DarksideCookie</dc:title>
    <geo:lat>0.000000</geo:lat>
    <geo:long>0.000000</geo:long>
    <item>
      <title>Versioning ASP.NET Core HTTP APIs using the Accept header</title>
      <description>&lt;p&gt;For some reason, I have spent a couple of days thinking about versioning HTTP based APIs. Actually, the reason is that I have a client who is using quite a lot of HTTP based APIs, but in a way that I find less than perfect. I’m not blaming them in any way, as it is the result of growing an application over many years, using many different forms of Microsoft tech, and continuously focusing on delivering features to users instead of building a maintainable system. Currently their application uses ASP.NET WebForms, ASP.NET Core MVC, asmx webservices, WCF webservices, Silverlight, Angular etc, which is what happens in quite a lot of cases over time.&lt;/p&gt;&lt;p&gt;One of the things that has been bothering me is their use of the HTTP services. Sure, it is a great step up from asmx, and even WCF services, as they have been moving from Silverlight to Angular and JavaScript. Unfortunately, as different developers have worked on different applications in the solution, they have created their own API endpoints to suit their needs. Often endpoints are more or less duplicated just to get the returned entity representation to take a slightly different shape.&lt;/p&gt;&lt;p&gt;A very basic example would be a user endpoint. Sometimes when you retrieve a user, you might only need a very limited subset of information about that user. And in other cases you might want a lot more information. And being developers, we obviously don’t think we should retrieve a bunch of fields regarding a user when we don’t need them. Not to mention that the user retrieving the data might not be allowed to view all the fields. So to solve that, there are a bunch of different “get user” endpoints, each returning a specifically designed representation of the requested user, each used from a specific application, or even a specific part of an application. 
And this means a lot of duplicated code, as well as a very hard to use API.&lt;/p&gt;&lt;h3&gt;API versioning&lt;/h3&gt;&lt;p&gt;Even if we don’t need to have several different representations of an entity in our application, there is another situation that is very similar, and that comes up all the time. Versioning… How do we version an API? For example, in version 1, the user entity contained a very well defined set of values, including an age field. However, when working through the API for version 2, it was decided that passing a date of birth would be much better, as some apps wanted more control than just an age. So how do we solve this? Well, there are a couple of ways…&lt;/p&gt;&lt;p&gt;In this particular case, one could just add another field to the representation and be done with it. It wouldn’t cause any problems with existing clients, but would offer the new functionality to consumers of version 2. And this is a great way to extend the API from v1 to v2. However, in some cases this doesn’t work. What if in v1 we return a date of birth, but then realize that that might be a privacy issue and want to replace it with just an age? That would be a breaking change, causing problems for every single application depending on the API.&lt;/p&gt;&lt;p&gt;There are 4 common ways to handle versioning of APIs&lt;/p&gt;&lt;h4&gt;1. Modify the path – https://example.com/v1/users/1&lt;/h4&gt;&lt;p&gt;Adding the version to the path is simple to do, and easy to consume. The problem with this solution, however, is that the path should be stable. This means that the representation of an entity should always be available at the same path. By changing the path, we are kind of saying that they are 2 different entities, whereas they are really just 2 different representations of the same entity.&lt;/p&gt;&lt;h4&gt;2. Add a querystring parameter – https://example.com/users/1?version=1&lt;/h4&gt;&lt;p&gt;Using a querystring to pass the version is also very nice and easy. 
However, for me, querystring parameters have to do with querying. And by that I mean that, for me, querystring parameters should be used to filter, or query, the information that is being requested, not to tell the server what representation should be used for the returned data.&lt;/p&gt;&lt;h4&gt;3. Add a custom header – version: 1&lt;/h4&gt;&lt;p&gt;The third option is to have the client consuming the API pass the version to the server using a custom header. This moves the concern from the path to another part of the transport, which is nice. However, it does make it harder to send the request. You can’t easily just type it into the address bar of your browser and get the response. Instead you need a specific tool to do it. On the other hand, for me, that isn’t a huge problem. APIs are supposed to be consumed by applications (code), not by users using a browser. But why would you do it this way, when there is a built in way in the HTTP specification, as you will see in the fourth option?&lt;/p&gt;&lt;h4&gt;4. Use the Accept header – Accept: application/vnd.myapp.v1.user+json&lt;/h4&gt;&lt;p&gt;HTTP already defines a header for this problem. It is the Accept header. It is used to tell the server what types of response the client can accept. Often this header is used to request JSON by passing the value application/json, or HTML by passing the value text/html. However, the spec defines the value as a string. So you can literally pass along any form of string value that you want. But there are some nice conventions that you might want to follow. First of all, it is generally a 2-part value, split by a /. The first part gives a high-level type like “text” or “image” for example, and the second part defines it more specifically, like “text/html” or “image/png”. But as I said, any string is valid, so the xxx/yyy is just a convention, it isn’t in any way enforced. 
For formats that are consumed by applications, like JSON for example, the first part is often “application”. And for custom formats used by specific APIs, the convention is to prefix the 2nd part with “vnd.” to indicate that it is vendor specific. After that, you put the definition of what you are requesting. In this case I chose “myapp.v1.user” to indicate that the representation that I want should be a “user” from the “myapp” application, and that I want version 1. But once again, you can format this in any way you want.&lt;/p&gt;&lt;p&gt;However, using the Accept header to indicate a custom representation causes a problem. Normally, this header is used to define what serialization format should be used, such as JSON or XML. Replacing this with a custom string means that we need to find another way to convey that information. The common way to do this is to append it to the string, separating it with a + sign. In this case I added “+json” to indicate that I want my “application/vnd.myapp.v1.user” representation sent to me using JSON. I could have added “+xml” to get it in XML format if my backend supported this.&lt;/p&gt;&lt;h3&gt;API versioning thoughts&lt;/h3&gt;&lt;p&gt;All of the above mentioned ways work for versioning APIs. They each have benefits and downsides, and I don’t think any of them is “the right way”. It all depends on your requirements, as so much does in our line of business. But my personal preference is option 4, using the Accept header. Mostly because it uses a predefined feature in HTTP, and following standards makes things a lot easier in most cases. I’m sure that the people who designed HTTP are MUCH smarter than me, and have thought of WAY more scenarios than I have. So I’ll trust those guys, and try to follow their recommendation. However, if it is a requirement that you should be able to call the API from a browser, well, then I guess I would have to choose option 1 or 2. 
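&lt;/p&gt;&lt;p&gt;To make the convention concrete, here is a small shell sketch (using the example media type from above; this mirrors the convention itself, not any specific framework API) that splits such an Accept value into its representation and format parts:&lt;/p&gt;

```shell
# Example Accept value using the vendor-specific convention described above.
accept="application/vnd.myapp.v1.user+json"

# The part before the "+" identifies the representation (app, version, entity).
representation="${accept%%+*}"

# The part after the "+" identifies the serialization format.
format="${accept#*+}"

echo "$representation"   # application/vnd.myapp.v1.user
echo "$format"           # json
```

&lt;p&gt;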
Option 3 is probably my least favorite, but on the other hand it has some benefits. By using the Accept header to pass the format to be used, JSON or XML for example, I can use the built in content negotiation in the ASP.NET Core framework to solve the formatting, and just look at the custom header to figure out the representation to use. So they all have their place in one form or another.&lt;/p&gt;&lt;h3&gt;Using the Accept header versioning in ASP.NET Core MVC routing&lt;/h3&gt;&lt;p&gt;ASP.NET Core MVC routes requests to actions using either convention based routing with templates, or using attributes. Personally I prefer the attributes when building APIs, as it gives me better control over the paths being used, and also makes it easier to understand the path by looking at the controller and the attributes.&lt;/p&gt;&lt;p&gt;However, the built in stuff doesn’t take any headers into account. The Accept header is only taken into account when formatting the returned value. So to make use of that header while routing, we need to write some code, but luckily it is not very complicated code.&lt;/p&gt;&lt;p&gt;What we need is an implementation of &lt;strong&gt;IActionConstraint&lt;/strong&gt;. This is an interface that is used to programmatically add a constraint to an action, basically allowing us to use logic to tell the routing engine whether or not an action can be used to handle the current request. Using this, we can add an attribute to an action, and define what Accept header value should be passed for this specific action to be executed.&lt;/p&gt;&lt;p&gt;The IActionConstraint interface has 2 members that need to be implemented&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;bool Accept(ActionConstraintContext context)&lt;br&gt;int Order { get; }&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;The Accept method gets a “context” with information regarding the current request, and returns a boolean defining whether or not this action can be used to handle this request. 
And the Order property defines in what order the action’s constraints should be executed. The lower the number, the earlier in the list it will be executed.&lt;/p&gt;&lt;p&gt;There are a couple of ways to go about building the IActionConstraint that we want. We could create a class that inherits from Attribute, and implements IActionConstraint. Or, we could go and create a class that inherits from &lt;strong&gt;ActionMethodSelectorAttribute&lt;/strong&gt;, and implement the &lt;strong&gt;IsValidForRequest&lt;/strong&gt; method. And being me, I’ll go for the simplest solution, and just create a class called &lt;strong&gt;AcceptHeaderAttribute&lt;/strong&gt; and inherit from ActionMethodSelectorAttribute.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public class AcceptHeaderAttribute : ActionMethodSelectorAttribute {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; public override bool IsValidForRequest(RouteContext routeContext, ActionDescriptor action) { … }&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Next, I need to know what Accept header value it should check for. So I add a constructor that accepts a string defining this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public class AcceptHeaderAttribute : ActionMethodSelectorAttribute {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; private readonly string _acceptType;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; public AcceptHeaderAttribute(string acceptType)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; this._acceptType = acceptType;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;&amp;nbsp;&amp;nbsp; …&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Now that I know what Accept header value should be passed, I can implement the IsValidForRequest method&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public override bool IsValidForRequest(RouteContext routeContext, ActionDescriptor action)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var accept = routeContext.HttpContext.Request.Headers.ContainsKey("Accept") ? routeContext.HttpContext.Request.Headers["Accept"].ToString() : null;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (accept == null) return false;&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; var acceptWithoutFormat = accept;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (accept.Contains("+"))&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; acceptWithoutFormat = accept.Substring(0, accept.IndexOf("+"));&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; return acceptWithoutFormat == _acceptType;&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;I start out by checking to see whether or not there is an Accept header at all in the request by looking at the RouteContext’s HttpContext. If there isn’t an Accept header I return false, as without the header this action is not an option for execution.&lt;/p&gt;&lt;p&gt;Next I remove any potential formatting information from the header by removing anything after a +-sign, including the potential +-sign.&lt;/p&gt;&lt;p&gt;And finally, I just return whether or not the remaining string corresponds to the configured value.&lt;/p&gt;&lt;p&gt;That’s all there is to it! And using it is just as simple. Just add it to an action like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;[HttpGet("{id:int}")]&lt;br&gt;
[AcceptHeader("application/vnd.mydemo.v1.user")]&lt;br&gt;
public async Task&amp;lt;IActionResult&amp;gt; GetUserById(int id)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var user = await _users.WhereIdIs(id);&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (user != null)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return Ok(user.ToModel());&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return NotFound();&lt;br&gt;
}&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;[HttpGet("{id:int}")]&lt;br&gt;
[AcceptHeader("application/vnd.mydemo.v2.user")]&lt;br&gt;
public async Task&amp;lt;IActionResult&amp;gt; GetUserByIdV2(int id)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var user = await _users.WhereIdIs(id);&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (user != null)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return Ok(user.ToModelV2());&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return NotFound();&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Another nice feature is that you can easily handle the case where an Accept header is not passed at all, by adding&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;[HttpGet("{id:int}")]&lt;br&gt;
public IActionResult GetUserByIdDefault(int id)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return StatusCode(406, "Invalid Accept header");&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, it has the same HttpGet attribute as the other actions, but no AcceptHeader attribute. This means that when the router looks at the actions, it will select the one without the AcceptHeader attribute when the Accept header is missing, which in turn means that the client gets an HTTP 406 back. You could obviously also have the “bare” action be v1, and default to that for any request missing the header or with an invalid header if you wanted to. On the other hand, that might be a little confusing to someone who wanted v2 but forgot to pass the Accept header, or passed an incorrect value. But if you have an API that currently doesn’t use this kind of versioning, it might be a great way to move forward with it without breaking existing clients.&lt;/p&gt;&lt;h3&gt;Customizing the ASP.NET Core MVC content negotiation to handle custom Accept headers&lt;/h3&gt;&lt;p&gt;There is a small problem with the current solution. If you want to support content negotiation, and multiple response formats, ASP.NET Core MVC makes it easy to add other formats than the default JSON formatter by adding other output formatters. This is done when configuring the MVC services like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;services.AddMvc(config =&amp;gt; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; config.OutputFormatters.Add(new XmlSerializerOutputFormatter());&lt;br&gt;
});&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;However, the content negotiation built into ASP.NET Core MVC uses the Accept header to do its work. So when we stop using “application/json” and “text/xml”, it breaks down. So to sort this out, we either need to create our own custom &lt;strong&gt;IOutputFormatter &lt;/strong&gt;implementations, or modify the way that the formatter is selected. I’m going to go for the latter in a slightly hacky, but working way. &lt;/p&gt;&lt;p&gt;I don’t feel like doing too much work, and I believe that what is already in the framework works very well, so all I want to do is hack in a little change using a custom &lt;strong&gt;OutputFormatterSelector&lt;/strong&gt; that makes the new Accept header format work with the existing implementation.&lt;/p&gt;&lt;p&gt;For this, I’ll go ahead and add a new class called &lt;strong&gt;AcceptHeaderOutputFormatterSelector&lt;/strong&gt;, and have it inherit &lt;strong&gt;OutputFormatterSelector&lt;/strong&gt;.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public class AcceptHeaderOutputFormatterSelector : OutputFormatterSelector {&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And since I like the default implementation from the framework, I’ll add a constructor that sets up an instance of that OutputFormatterSelector internally in my class&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public AcceptHeaderOutputFormatterSelector(IOptions&amp;lt;MvcOptions&amp;gt; options, ILoggerFactory loggerFactory, string defaultContentType = "application/json")&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; _fallbackSelector = new DefaultOutputFormatterSelector(options, loggerFactory);&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; _formatters = new List&amp;lt;IOutputFormatter&amp;gt;(options.Value.OutputFormatters);&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; _defaultContentType = defaultContentType;&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, it accepts some MvcOptions, an ILoggerFactory and a string containing the default format to use, which I default to &lt;em&gt;application/json&lt;/em&gt;. And then I use those values to create a new instance of &lt;strong&gt;DefaultOutputFormatterSelector&lt;/strong&gt;, which is the implementation used by the framework by default. The values are then stored in private fields so that I can use them later in my class.&lt;/p&gt;&lt;p&gt;The OutputFormatterSelector class is abstract, and requires you to implement the SelectFormatter method. It looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public IOutputFormatter SelectFormatter(OutputFormatterCanWriteContext context, IList&amp;lt;IOutputFormatter&amp;gt; formatters, MediaTypeCollection mediaTypes);&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This is the method that is called by the framework to figure out what output formatter to use when serializing the response. It gets a “context” with all the essential information, a list of formatters to use, which should override the list of formatters passed into the constructor, and a collection of media types. &lt;/p&gt;&lt;p&gt;So how do I implement this? Well, I want a version that, if the Accept header contains a custom content type with a “+&lt;em&gt;[format]&lt;/em&gt;” definition, replaces the custom content type with a standard one, and then lets the default implementation of the selector figure out the rest.&lt;/p&gt;&lt;p&gt;To do this, I implement the method like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public override IOutputFormatter SelectFormatter(OutputFormatterCanWriteContext context, IList&amp;lt;IOutputFormatter&amp;gt; formatters, MediaTypeCollection mediaTypes)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (!HasVendorSpecificAcceptHeader(context.HttpContext.Request))&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return _fallbackSelector.SelectFormatter(context, formatters, mediaTypes);&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (formatters.Count == 0)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; formatters = _formatters;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; context.ContentType = GetContentTypeFromAcceptHeader(context.HttpContext.Request);&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; var formatter = formatters.FirstOrDefault(x =&amp;gt; x.CanWriteResult(context));&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; context.ContentType = context.HttpContext.Request.Headers["Accept"].First();&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; return formatter;&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;First, I verify that the request has a vendor specific content type. This basically just means that I check if the Accept header value starts with “application/vnd.”&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;private bool HasVendorSpecificAcceptHeader(HttpRequest request)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return request.Headers["Accept"].First().IndexOf("application/vnd.") == 0;&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;If it doesn’t have a vendor specific value, it just uses the default implementation to do the work. If it does, on the other hand, I carry on by checking to see if the passed in list of formatters is empty, in which case I use the default ones that I received in the constructor instead.&lt;/p&gt;&lt;p&gt;Once I have my list of formatters, I get the “proper” content type by calling a helper method that looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;private string GetContentTypeFromAcceptHeader(HttpRequest request)&lt;br&gt;
{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var acceptHeaderValue = request.Headers["Accept"].First();&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (acceptHeaderValue.IndexOf("+") &amp;gt; 0)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; var contentType = acceptHeaderValue.Substring(acceptHeaderValue.IndexOf("+") + 1);&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (_contentTypeMap.ContainsKey(contentType))&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return _contentTypeMap[contentType];&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return _defaultContentType;&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;It pulls out the Accept header and looks to see if it contains a +-sign. If it doesn’t, it returns the default content type from the constructor. But if it does, it takes the part of the string after the +-sign, e.g. json or xml, and passes that to a dictionary that will map those values to proper types like application/json or text/xml. If there is no conversion available in the map, it just falls back to the default type.&lt;/p&gt;&lt;p&gt;Once I have the “proper” type that the framework knows about, I set the context’s ContentType property to this value, before iterating through all the formatters to see which formatter can write a result for this value. Once I have my formatter, I reset the context’s ContentType to the value of the Accept header to make sure that the response to the client has the Content-Type header set to the same type that has been requested. If I don’t do this, the client will request one type, but get a generic application/json Content-Type back.&lt;/p&gt;&lt;p&gt;Once I have my AcceptHeaderOutputFormatterSelector implemented, I can add the &lt;strong&gt;XmlSerializerOutputFormatter&lt;/strong&gt; and replace the default format selector in the ConfigureServices method in the Startup class like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;services.AddMvc(config =&amp;gt; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; config.RespectBrowserAcceptHeader = true;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; config.OutputFormatters.Add(new XmlSerializerOutputFormatter());&lt;br&gt;
});&lt;br&gt;
services.AddSingleton&amp;lt;OutputFormatterSelector, AcceptHeaderOutputFormatterSelector&amp;gt;();&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;ASP.NET Core MVC defaults to JSON and ignores the Accept header by default. So to enable content negotiation based on the Accept header, you need to set the &lt;strong&gt;RespectBrowserAcceptHeader &lt;/strong&gt;property on the &lt;strong&gt;MvcOptions&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;That’s “all” there is to it to get this to work… Yes, it is not real production quality code, but it works and could be extended to production quality with some testing and so on. But for now, all I wanted, and needed, was a proof of concept that it works, and it does!&lt;/p&gt;&lt;p&gt;If you want to see some code, I have uploaded some sample code to my GitHub account, which you can find here: &lt;a title="https://github.com/ChrisKlug/AspNetCoreMvcAcceptHeaderRouting" href="https://github.com/ChrisKlug/AspNetCoreMvcAcceptHeaderRouting" target="_blank"&gt;https://github.com/ChrisKlug/AspNetCoreMvcAcceptHeaderRouting&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Hope this helps in some way, or at least gave you some ideas of how to solve some problem that you have, or might run into in the future!&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/Versioning-ASPNET-Core-HTTP-APIs-using-Accept-header</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/Versioning-ASPNET-Core-HTTP-APIs-using-Accept-header#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=6b00cb3f-55b7-4e34-8068-35848d8f6590</guid>
      <pubDate>Wed, 25 Jul 2018 20:19:26 +0000</pubDate>
      <category>ASP.NET Core</category>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=6b00cb3f-55b7-4e34-8068-35848d8f6590</pingback:target>
      <slash:comments>2</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=6b00cb3f-55b7-4e34-8068-35848d8f6590</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/Versioning-ASPNET-Core-HTTP-APIs-using-Accept-header#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=6b00cb3f-55b7-4e34-8068-35848d8f6590</wfw:commentRss>
    </item>
    <item>
      <title>Using Azure Files with Azure Container Services and Docker volumes</title>
      <description>&lt;p&gt;
As a continuation to my &lt;a href="https://chris.59north.com/post/A-brief-look-at-Azure-Container-Service" target="_blank"&gt;last post&lt;/a&gt; about setting up an Azure Container Service, I thought it might be a good idea to have a look at persistent storage for the containers. Even if I prefer “outsourcing” my storage to external services, having persistent storage can be really useful in some cases.&lt;/p&gt;
&lt;p&gt;Docker sets up storage using volumes, which are basically just some form of storage that is mounted to a path in the container. By default, the volumes are just directories on the host that are mounted inside the container. However, this has a couple of drawbacks. &lt;/p&gt;
&lt;p&gt;If the container is taken down and set up on a new host, the data in the volume will be gone. Or rather, it will still be there, but on another host. So, in the eyes of the container, it's gone. The only way to make sure that the data is persisted properly is by setting up affinity between the service and the host. But this is a REALLY crappy solution. It breaks the idea that a container should preferably be able to run on any agent in the cluster. &lt;/p&gt;
&lt;p&gt;On top of that, if a host is replaced, the data disappears. And in a cluster where machines should be cattle, and load should be handled by expanding and contracting the cluster, hosts come and go. So tying the storage to a specific host is just not a good idea.&lt;/p&gt;
&lt;p&gt;The solution is to map our volumes to something other than the host, and in the case of Azure, that would preferably be Azure Storage. And the way this is done, is by setting up an Azure Files share in storage, and mapping that as a volume using SMB.&lt;/p&gt;
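&lt;p&gt;To give an idea of what that mapping involves, here is a small shell sketch that just composes the SMB mount command for an Azure Files share. The account name, share name and mount point are hypothetical placeholders, and in a real setup you would also pass the storage account key as the password option:&lt;/p&gt;

```shell
# Hypothetical names; replace with your own storage account and file share.
STORAGE_ACCOUNT="mystorageacct"
SHARE_NAME="containerdata"
MOUNT_POINT="/mnt/azurefile"

# Azure Files is exposed over SMB at ACCOUNT.file.core.windows.net,
# so a share can be mounted with cifs (SMB 3.0, username = account name).
MOUNT_CMD="mount -t cifs //${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME} ${MOUNT_POINT} -o vers=3.0,username=${STORAGE_ACCOUNT}"
echo "$MOUNT_CMD"
```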
&lt;p&gt;But wait...this sounds really complicated! Is this really the only way? Can't I just click some buttons and have it done for me? Well...no, yes, no. It isn't that complicated to be honest. But yes, there are a few steps involved, but they are all pretty basic. And yes, you currently have to set it up manually. There is no button you can click in the Azure portal to get it up and running unfortunately. Not yet at least.&lt;/p&gt;
&lt;p&gt;So, without a button in the Portal, how do we go about doing it?&lt;/p&gt;&lt;h5&gt;Step 1 - Creating a Docker Swarm and Storage Account&lt;/h5&gt;
&lt;p&gt;
The first step is to set up a new Docker Swarm in Azure. Luckily this is a piece of cake to do using the Azure Portal, and the Azure Container Service. If you haven't done that before, I suggest having a look at my &lt;a href="https://chris.59north.com/post/A-brief-look-at-Azure-Container-Service" target="_blank"&gt;previous post&lt;/a&gt;. It covers how you set up a swarm, and what Azure actually provisions for you when you do it.&lt;/p&gt;
&lt;p&gt;In my case, I created a Docker Swarm with a single master and a single agent. There is no need for a bigger cluster to test this. More nodes just mean that you have to repeat the setup process for the driver more times. &lt;/p&gt;
&lt;p&gt;Besides the Container Service, we will need a storage account. So go ahead and set that up as well while you wait for the ACS to provision all of its resources.&lt;/p&gt;
&lt;h5&gt;Step 2 - Connect to the Swarm&lt;/h5&gt;
&lt;p&gt;
Once the swarm is up and running, you need to connect to the swarm master using SSH. Once again, this is sort of covered in the previous post, but in that post, I showed how to set up a tunnel and have the Docker client working against the master. In this case, I want to connect straight to the master, and execute commands on it. So to do that, I opened a terminal and executed&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;ssh -p 2200 zerokoll@chrisacstestmgmt.westeurope.cloudapp.azure.com&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This connects to an Azure Container Service called &lt;strong&gt;&lt;em&gt;chrisacstest&lt;/em&gt;&lt;/strong&gt; using a user called &lt;strong&gt;&lt;em&gt;zerokoll&lt;/em&gt;&lt;/strong&gt;. If your service isn't called &lt;strong&gt;&lt;em&gt;chrisacstest&lt;/em&gt;&lt;/strong&gt;, it will look like this&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;ssh -p 2200 &lt;em&gt;[USERNAME]&lt;/em&gt;@&lt;em&gt;[CONTAINER SERVICE NAME]&lt;/em&gt;mgmt.&lt;em&gt;[AZURE REGION]&lt;/em&gt;.cloudapp.azure.com&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It might also be worth noting that I use port 2200 instead of the SSH default 22. Port 2200 is then forwarded to 22 by the load balancer...&lt;/p&gt;
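&lt;p&gt;If it helps, the hostname pattern can be assembled from a couple of shell variables. This is just a sketch using the names from this post; substitute your own service name, region and admin user.&lt;/p&gt;

```shell
# Build the management FQDN from its parts (values are the ones used in this post)
ACS_NAME=chrisacstest
REGION=westeurope
ADMIN_USER=zerokoll
MGMT_HOST="${ACS_NAME}mgmt.${REGION}.cloudapp.azure.com"
# Port 2200 on the load balancer is forwarded to port 22 on the master
echo "ssh -p 2200 ${ADMIN_USER}@${MGMT_HOST}"
```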
&lt;p&gt;Once connected, you can run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker info&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to see what the current state of the Docker Swarm is. &lt;/p&gt;&lt;p&gt;At least you might think so... However, you might note that this doesn't give you the response you would have expected. It contains no information about a swarm...but there is a good reason for this. You are currently connected to the master host. Running &lt;strong&gt;&lt;em&gt;docker info&lt;/em&gt;&lt;/strong&gt; gives you the information about the Docker set up on this node. And this machine isn't part of a swarm.&lt;/p&gt;
&lt;p&gt;Wait...what!? What do I mean that the master node isn't part of a swarm? Well...it is...but it's not. It runs a Docker container that is configured as a master in the swarm.&lt;/p&gt;
&lt;p&gt;So, if you run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker ps&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;you can see that the host you are connected to is running a container based on an image called &lt;strong&gt;&lt;em&gt;swarm:1.1.0&lt;/em&gt;&lt;/strong&gt;, which is the actual swarm master. That container has port 2375 mapped to the host. So, in the previous post, when we set up the SSH tunnel and bound port 2375 on the host to port 2375 on the local machine, we were actually binding to a port that was in turn bound to port 2375 on the swarm master container. So we were issuing commands to that container, not the host...&lt;/p&gt;
&lt;p&gt;To get the info about the Docker master, we need to run the following command&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker -H 127.0.0.1:2375 info&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This tells the Docker client on the host machine to issue its commands against port 2375 on the local machine, which is then bound to the Docker container that we really want to query. So, this should give us some information about the swarm.&lt;/p&gt;
&lt;p&gt;Anyhow...let's not go further down that rabbit hole! I just thought that I needed to give that information as it might cause some confusion...&lt;/p&gt;
&lt;p&gt;The swarm master host isn’t actually the machine on which we will be running containers that use Azure File backed volumes, but I'll go ahead and add the driver here anyway. Mainly because it is easier to see what is being done at this level than another level down on the agents. So I'll start out by setting it up here, just to show how it's done, and then I'll go on and set it up on the agent node.&lt;/p&gt;
&lt;h5&gt;Step 3 - Setting up the Azure File Volume Driver&lt;/h5&gt;
&lt;p&gt;
The first step is to get hold of the required volume driver, which is hosted on GitHub. The releases are available &lt;a href="https://github.com/Azure/azurefile-dockervolumedriver/releases/" target="_blank"&gt;here&lt;/a&gt;. At the time of writing, the latest version is 0.5.1, so I'll be using that... And since we'll be doing things that require root access, I'll go ahead and run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;sudo -s&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This will start an interactive shell running as root.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are a Windows user and have very little experience with Linux, like me, you'll note the change in the prompt from &lt;em&gt;[user]@swarm-master-XXXX-0:~#&lt;/em&gt; to &lt;em&gt;root@swarm-master-XXXX-0:~#&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Next, we want to download the driver and place it in /usr/bin. And to do that, I'll use wget&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;wget -qO /usr/bin/azurefile-dockervolumedriver &lt;a href="https://github.com/Azure/azurefile-dockervolumedriver/releases/download/v0.5.1/azurefile-dockervolumedriver"&gt;https://github.com/Azure/azurefile-dockervolumedriver/releases/download/v0.5.1/azurefile-dockervolumedriver&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;and since that file needs to be allowed to execute, I'll run chmod and set the access permission to allow it to execute&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;chmod +x /usr/bin/azurefile-dockervolumedriver&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Once the driver is in place, we need to get the upstart init file for it. This is a file used by upstart to start the service.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The master node currently runs Ubuntu 14.04.4, which uses upstart instead of systemd, which is used by later versions. If you are running a later build, the service set up is a little different.&lt;/p&gt;
&lt;p&gt;The init config file is located on GitHub as well, so all I have to do is call&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;wget -qO /etc/init/azurefile-dockervolumedriver.conf &lt;a href="https://raw.githubusercontent.com/Azure/azurefile-dockervolumedriver/master/contrib/init/upstart/azurefile-dockervolumedriver.conf"&gt;https://raw.githubusercontent.com/Azure/azurefile-dockervolumedriver/master/contrib/init/upstart/azurefile-dockervolumedriver.conf&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The last thing required is to configure the volume driver. We need to tell it what Storage Account and key should be used. To do this, we need to create a file called azurefile-dockervolumedriver in /etc/default. So I'll run the following commands&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;touch /etc/default/azurefile-dockervolumedriver&lt;br&gt;echo "AF_ACCOUNT_NAME=chrisacsteststorage" &amp;gt;&amp;gt; /etc/default/azurefile-dockervolumedriver&lt;br&gt;echo "AF_ACCOUNT_KEY=D+rYUTUC14ALS13gxprCsBJMEu0..." &amp;gt;&amp;gt; /etc/default/azurefile-dockervolumedriver&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;First I use &lt;strong&gt;&lt;em&gt;touch&lt;/em&gt;&lt;/strong&gt; to create the file, and then I just use &lt;strong&gt;&lt;em&gt;echo&lt;/em&gt;&lt;/strong&gt; to write the values I want in there. As you can see, I set the &lt;strong&gt;&lt;em&gt;AF_ACCOUNT_NAME&lt;/em&gt;&lt;/strong&gt; to the name of my storage account, and the &lt;strong&gt;&lt;em&gt;AF_ACCOUNT_KEY&lt;/em&gt;&lt;/strong&gt; to the access key for that account.&lt;/p&gt;
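&lt;p&gt;To make the expected end result explicit, here is a small sketch that recreates the file locally. The account name is the one from this post, and the key is a placeholder, not a real credential.&lt;/p&gt;

```shell
# Recreate the driver config file locally to show its final shape
# (AF_ACCOUNT_KEY below is a placeholder, not a real storage key)
cfg=/tmp/azurefile-dockervolumedriver.demo
printf 'AF_ACCOUNT_NAME=%s\nAF_ACCOUNT_KEY=%s\n' \
  'chrisacsteststorage' 'YOUR_STORAGE_ACCOUNT_KEY' > "$cfg"
cat "$cfg"
```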
&lt;p&gt;Once all of that is in place, I can reload the service configuration to include the new stuff, and then start the Azure File Volume Driver service&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;initctl reload-configuration&lt;br&gt;initctl start azurefile-dockervolumedriver&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This should output &lt;em&gt;azurefile-dockervolumedriver start/running, process XXX&lt;/em&gt; to let you know that everything has worked as expected.&lt;/p&gt;
&lt;p&gt;Now that the service is up and running, we can test the driver by creating a container that writes some data to it.&lt;/p&gt;
&lt;h5&gt;
Step 4 - Testing the driver&lt;/h5&gt;&lt;p&gt;
The easiest way to verify that everything is working as it should, is to create a volume and write some data to it. And yes, I'm still on the "wrong" host to try this on. This should be done on the agents, but once again, it is just easier to try it here before we move on to the agents.&lt;/p&gt;
&lt;p&gt;So let's try running&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker volume create --name my_volume -d azurefile -o share=myshare&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to create a new Docker volume called &lt;strong&gt;&lt;em&gt;my_volume&lt;/em&gt;&lt;/strong&gt;, using the driver called &lt;strong&gt;&lt;em&gt;azurefile&lt;/em&gt;&lt;/strong&gt; and an Azure File share called &lt;strong&gt;&lt;em&gt;myshare&lt;/em&gt;&lt;/strong&gt;. &lt;/p&gt;&lt;p&gt;Next I'll go ahead and start an interactive alpine-based container, with the newly created volume mapped to /data&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker run -it -v my_volume:/data --rm alpine&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;and inside that container, I'll go ahead and run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;touch /data/test.txt&lt;br&gt;echo "Hello World" &amp;gt;&amp;gt; /data/test.txt&lt;br&gt;
 exit&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If everything went well, you should now be able to open up the Azure Portal, browse to the Storage Account you are using and click Files and see a new file service called &lt;em&gt;myshare&lt;/em&gt;. Inside it, you'll find the file you just created in the container.&lt;/p&gt;
&lt;p&gt;Ok, so that's kind of cool, but how does that help us? Well...that is persistent storage. So if we were to map that share as a volume on the agents, and then use that volume from the containers in the swarm, we should have a great place to store persistent data. So let's try that!&lt;/p&gt;
&lt;h5&gt;
Step 5 - Installing the driver on the agent(s)&lt;/h5&gt;
&lt;p&gt;
Now that we know that it works, we need to set up the driver on each one of the agent nodes. And yes, you need to do it on each one. And yes, if you ever add more agents, you need to set it up on those as well...&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This process could probably be simplified quite a bit by using some smart scripts, but I only have a single agent, so I'll just go ahead and do it manually in this case.&lt;/p&gt;
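&lt;p&gt;As a rough idea of what such a script could look like, here is a sketch that only &lt;em&gt;prints&lt;/em&gt; the commands for each agent, so you can review them before running anything. The IP list, the zerokoll user and the file paths are the ones used in this post.&lt;/p&gt;

```shell
# Sketch: print the driver install commands for every agent (nothing is executed)
build_install_cmds() {
  ip="$1"
  printf '%s\n' \
    "scp /usr/bin/azurefile-dockervolumedriver zerokoll@$ip:~/" \
    "ssh zerokoll@$ip sudo mv azurefile-dockervolumedriver /usr/bin/" \
    "scp /etc/default/azurefile-dockervolumedriver /etc/init/azurefile-dockervolumedriver.conf zerokoll@$ip:~/" \
    "ssh zerokoll@$ip sudo sh -c 'mv azurefile-dockervolumedriver.conf /etc/init/; mv azurefile-dockervolumedriver /etc/default/; chmod +x /usr/bin/azurefile-dockervolumedriver; initctl reload-configuration; initctl start azurefile-dockervolumedriver'"
}

# One agent in this walkthrough; add more IPs to the list as needed
for agent in 10.0.0.5; do
  build_install_cmds "$agent"
done
```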
&lt;p&gt;The first thing we need to figure out is how to connect to the agent nodes. They aren't reachable over SSH from the internet. However, they are reachable over SSH from the master node, so that's what I'll use. But to be able to do that, we need the private SSH key available on the master. So I'll start out by disconnecting the current connection to the master by running &lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;exit&lt;br&gt;exit&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The first one exits out of the root shell, and the second exits from the SSH connection. &lt;/p&gt;&lt;p&gt;Next I'll run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;scp C:\Users\chris\.ssh\id_rsa zerokoll@chrisacstestmgmt.westeurope.cloudapp.azure.com:~/.ssh/id_rsa&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This uses secure copy to copy my private key, &lt;strong&gt;&lt;em&gt;id_rsa&lt;/em&gt;&lt;/strong&gt;, from my local machine to the master node, placing it in the ~/.ssh/ directory. I can then use that key when connecting from the master to the agent.&lt;/p&gt;
&lt;p&gt;Next, I re-open the SSH connection to the master node&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;ssh -p 2200 zerokoll@chrisacstestmgmt.westeurope.cloudapp.azure.com&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;and once I'm connected, I need to set the permissions on the &lt;em&gt;id_rsa&lt;/em&gt; file by calling&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;chmod 600 ~/.ssh/id_rsa&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This sets the permissions required to give the owner of the file read and write access, and no access to anyone else. SSH refuses to use a private key with looser permissions.&lt;/p&gt;
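&lt;p&gt;If you want to convince yourself of what those permission bits mean, you can try it on a throw-away file (this assumes GNU coreutils' &lt;em&gt;stat&lt;/em&gt;, which is what Ubuntu ships).&lt;/p&gt;

```shell
# chmod 600 = owner read+write, no access for group or others.
keyfile=$(mktemp)
chmod 600 "$keyfile"
stat -c '%a' "$keyfile"
```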
&lt;p&gt;With that in place, we need to figure out where the agents are located. This can be done by calling&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker -H 127.0.0.1:2375 info | grep -oP '(?:[0-9]{1,3}\.){3}[0-9]{1,3}' &lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This asks for the swarm master container's information, and then uses &lt;strong&gt;&lt;em&gt;grep&lt;/em&gt;&lt;/strong&gt; to get the IP addresses based on a Regex. In my case, I get a single address, 10.0.0.5.&lt;/p&gt;
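&lt;p&gt;The grep part is easy to try out on its own. The sample line below is just made up to resemble the node listing from &lt;strong&gt;&lt;em&gt;docker info&lt;/em&gt;&lt;/strong&gt;; the regex plucks out anything shaped like an IPv4 address.&lt;/p&gt;

```shell
# A made-up line resembling the swarm master's node listing
sample="swarm-agent-12345678000001: 10.0.0.5:2375"
# -o prints only the match, -P enables Perl-style regex (GNU grep)
agent_ip=$(printf '%s\n' "$sample" | grep -oP '(?:[0-9]{1,3}\.){3}[0-9]{1,3}')
echo "$agent_ip"
```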
&lt;p&gt;Now that I know where my agent is, I can start setting up the Azure File Volume Driver on it. So I'll go ahead and run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;scp /usr/bin/azurefile-dockervolumedriver zerokoll@10.0.0.5:~/&lt;br&gt;ssh zerokoll@10.0.0.5 sudo mv azurefile-dockervolumedriver /usr/bin/&lt;br&gt;scp /etc/default/azurefile-dockervolumedriver /etc/init/azurefile-dockervolumedriver.conf zerokoll@10.0.0.5:~/&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to copy all the required files to the agent node. Unfortunately 2 of the files are named the same, so I have to start out by copying the driver and moving that to the correct location, before copying the rest of the files. I can then connect to the agent and run the following commands&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;ssh zerokoll@10.0.0.5&lt;br&gt;sudo -s&lt;br&gt;mv azurefile-dockervolumedriver.conf /etc/init/&lt;br&gt;mv azurefile-dockervolumedriver /etc/default/&lt;br&gt;chmod +x /usr/bin/azurefile-dockervolumedriver&lt;br&gt;initctl reload-configuration&lt;br&gt;initctl start azurefile-dockervolumedriver&lt;br&gt;exit&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;As you can see, I start out by connecting to the agent node using SSH. Then I elevate my privileges to root before moving the config and init files to their final locations, and setting execute permission on the driver executable. Finally I reload the configuration and start the azurefile-dockervolumedriver service before exiting the elevated shell.&lt;/p&gt;
&lt;p&gt;Now that the driver is up and running, we can verify that the driver works by running&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker volume create --name my_volume -d azurefile -o share=myshare&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to create a volume connected to the Azure File we used earlier, and then&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker run -it -v my_volume:/data --rm alpine&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to start a new interactive alpine-based container. Once inside the container, we can run&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;cat /data/test.txt&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If everything works as it should, this should output "Hello World", the content we put into that file when we tried out the driver from the master host.&lt;/p&gt;
&lt;p&gt;From here, you should now be able to deploy containers to the cluster, using the Azure File Volume driver to connect the volumes to persistent storage. Just remember to repeat the process for all the agents if you have more than one. Otherwise you will get issues when you try running a container on a host that doesn't have the Azure File Volume driver installed and configured.&lt;/p&gt;
&lt;p&gt;That's pretty much it for this time! The whole set up is a bit convoluted, and could probably be simplified a bit using some smart scripts, but as a demo of how it works, I think this works...&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/Using-Azure-Files-with-Azure-Container-Services-and-Docker-volumes</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/Using-Azure-Files-with-Azure-Container-Services-and-Docker-volumes#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=33201f41-6b3d-487a-8b2d-28a595eda065</guid>
      <pubDate>Mon, 23 Oct 2017 16:45:35 +0000</pubDate>
      <category>Docker</category>
      <category>Azure</category>
      <betag:tag>docker</betag:tag>
      <betag:tag>swarm</betag:tag>
      <betag:tag>docker swarm</betag:tag>
      <betag:tag>azure</betag:tag>
      <betag:tag>azure container service</betag:tag>
      <betag:tag>azure storage</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=33201f41-6b3d-487a-8b2d-28a595eda065</pingback:target>
      <slash:comments>1</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=33201f41-6b3d-487a-8b2d-28a595eda065</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/Using-Azure-Files-with-Azure-Container-Services-and-Docker-volumes#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=33201f41-6b3d-487a-8b2d-28a595eda065</wfw:commentRss>
    </item>
    <item>
      <title>A brief look at Azure Container Service</title>
      <description>&lt;p&gt;Yesterday I had a couple of hours left over as I was on a train on the way to do a presentation. So I thought i would play around a little with the Azure Container Service. Seeing that I have gotten hooked on Docker, having the ability to spin up a Docker cluster while on the train using just my mobile phone for the connection, seems like a really cool thing. I guess normal people read books and watch Netflix on the train. Me...I spin up 5 Docker clusters...&lt;/p&gt;
&lt;p&gt;Setting up an Azure Container Service is really simple, so let's have a look at it. &lt;/p&gt;
&lt;h5&gt;Setting up a new Container Service&lt;/h5&gt;
&lt;p&gt;You just go to the portal and choose to add a new Azure Container Service, and accept that it will use the Resource Manager deployment model. Then you have to fill out a bit of information.&lt;/p&gt;
&lt;p&gt;First you have to fill out some basic information. It wants a name for the service, the subscription that should pay for it, a resource group to place it in and a region to put it in. Pretty basic stuff.&lt;/p&gt;
&lt;p&gt;Next it wants to know what kind of orchestrator you want to use, and how you want to set up the masters in the cluster. In my case, I chose Docker Swarm for the orchestrator. Then you need to give it a unique DNS name prefix. It needs to be unique as it will be used as a prefix for the 2 domain names that are set up for your cluster. Once you have defined that, you need to tell it what username should be used for the admin account for the machines, as well as the SSH public key to use when connecting to it. And finally, you have to tell it how many masters you want. You get the option of 1, 3 or 5, which are all automatically distributed among fault and update domains to make sure that they don't all go down at the same time. It would be kind of useless to have multiple masters, if they all go down when something goes wrong, or they are updated...&lt;/p&gt;
&lt;p&gt;But before I go any further, I want to go out on a tangent and talk about SSH keys... &lt;br&gt;&lt;/p&gt;&lt;h5&gt;
SSH keys&lt;/h5&gt;
&lt;p&gt;A lot of developers work with SSH keys and SSH connections all the time, but not all of us. To be perfectly honest, this is actually one of the first times ever that I have had to create SSH keys. It all depends on the environment you work in. &lt;/p&gt;
&lt;p&gt;So what are they, and what do we need them for? Well, it is actually not that hard. It is just a private/public key pair that is used to authenticate you when communicating between 2 endpoints. When you connect to a master in the cluster, you do that over SSH. This is a secure connection between your machine and the server, where the server uses the public key that you just provided in the portal to verify that your machine holds the matching private key, and the traffic itself is encrypted.&lt;/p&gt;
&lt;p&gt;If you haven't created a set of SSH keys before, I thought I would quickly mention how to do that. It is really simple. All you have to do is run the following command in the terminal&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;ssh-keygen -t rsa -b 4096 -C "&lt;em&gt;[YOUR E-MAIL ADDRESS]&lt;/em&gt;"&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This will then ask you where to put the generated files. I just accepted the default, which is &lt;strong&gt;&lt;em&gt;C:\Users\[USERNAME]\.ssh\&lt;/em&gt;&lt;/strong&gt;. And then it wants you to input a passphrase to make sure that you are the only one that can use the keys. &lt;/p&gt;
&lt;p&gt;Once the command has completed, you will have a brand new set of SSH keys!&lt;/p&gt;
&lt;p&gt;However, if you are on a Windows machine, like me, you probably get an error saying that it can't find the executable ssh-keygen. So where do you go to get that? Well, if you have Git installed on your machine, you are in luck. That installation actually gives you that executable, as well as a few other ones, as part of the package. All you have to do is to add the path &lt;strong&gt;&lt;em&gt;%ProgramFiles%\Git\usr\bin&lt;/em&gt;&lt;/strong&gt; to your path, and you are good to go.&lt;/p&gt;
&lt;p&gt;Once you have the keys generated, you can just open the file called &lt;em&gt;id_rsa.pub&lt;/em&gt; in Notepad, and copy the key.&lt;/p&gt;
&lt;p&gt;Ok, after that side note, let's get back to creating the cluster!&lt;/p&gt;
&lt;h5&gt;More configuration&lt;/h5&gt;
&lt;p&gt;As soon as we have set up the master node configuration, we need to configure the agents. However, this is a lot easier than the rest of the config. You just need to set up how many agents you want, and what size of machines you want to use. In other words, how much load do you need to handle, and how big is your wallet.&lt;/p&gt;
&lt;p&gt;That's it! As soon as the portal has verified the values, you can click OK, and get your cluster set up! It takes a bit of time though... On the other hand, once you see what it has set up, you will understand why. There are quite a few pieces involved in setting up the cluster. And that is actually part of the reason I wanted to write this post. I wanted to have a better look at what is actually being created. Since everything is created automatically, the pieces are named less than perfect, and since there is so much stuff being created, it can be hard to figure out how it all fits together. &lt;/p&gt;
&lt;p&gt;When the creation is done, you get all your new services added to the resource group you chose. So what is actually being created? Well, let's walk through it and have a look!&lt;/p&gt;
&lt;h5&gt;What do you get?&lt;/h5&gt;
&lt;p&gt;I'll go outside in, from the internet to the machines, instead of by name or service, as I think that makes it easier to understand how the different parts fit together.&lt;/p&gt;
&lt;p&gt;First of all, we get a Container Service with the name we set up. This service in itself is pretty boring, and doesn't give us a whole lot of information.&lt;/p&gt;
&lt;p&gt;"Inside" that, we get a Virtual Network (&lt;em&gt;swarm-vnet-XXX&lt;/em&gt;) that connects all of the VMs in the cluster. Actually, it connects to the network interfaces connected to the master VMs and to a Virtual Machine Scales Set containing the agents. But for simplicity, let's just say that it connects the machines in the cluster...&amp;nbsp; The virtual network is set up with 2 address spaces, 10.0.0.0/8 and 172.16.0.0/24. The 2 address spaces are used by 2 subnets, &lt;em&gt;swarm-subnetMaster&lt;/em&gt; (172.16.0.0) and &lt;em&gt;swarm-subnet&lt;/em&gt; (10.0.0.0).&lt;/p&gt;
&lt;p&gt;Seeing that the network is split into 2 parts, one for the masters and one for the agents, I'll look at the cluster based on that. I'll start out by looking at the master side of things...&lt;/p&gt;
&lt;p&gt;On the &lt;em&gt;swarm-subnetMaster&lt;/em&gt; subnet, there is a Load Balancer (&lt;em&gt;swarm-master-lb-XXX&lt;/em&gt;) that manages the masters. It adds NAT, mapping TCP port 22 and 2200 to port 22 on the master. &lt;/p&gt;&lt;p&gt;And what is port 22? Well, it's SSH. So this NAT allows us to connect to the master using SSH. &lt;/p&gt;&lt;p&gt;And if you have more than one master, it maps port 220x to port 22 on each of the masters, where x is a sequential number that gives us port 2200, 2201 and 2202 in a 3 master cluster. This makes it possible to connect to each one of the masters by choosing the correct port to use.&lt;/p&gt;
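&lt;p&gt;The port scheme is easy to express: master number &lt;em&gt;i&lt;/em&gt; (counting from 0) is reachable on port 2200 + &lt;em&gt;i&lt;/em&gt;. A tiny sketch, just to make the mapping concrete:&lt;/p&gt;

```shell
# NAT mapping for a 3 master cluster: LB port 2200+i forwards to port 22 on master i
MASTER_COUNT=3
i=0
while [ "$i" -lt "$MASTER_COUNT" ]; do
  echo "master $i: lb port $((2200 + i)) -> ssh port 22"
  i=$((i + 1))
done
```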
&lt;p&gt;The Load Balancer connects to the master machines using one or more Network Interfaces (&lt;em&gt;swarm-master-XXX-nic-x&lt;/em&gt;). So depending on the number of masters you choose, you will have one or more Network Interfaces defined.&lt;/p&gt;
&lt;p&gt;Each one of the potentially multiple Network Interfaces is then connected to a Virtual Machine (&lt;em&gt;swarm-master-XXX-x&lt;/em&gt;) that plays the role of Swarm Master in the cluster.&lt;/p&gt;
&lt;p&gt;And that is pretty much it for the master side of the network. However, there are 2 more things that are set up for us. &lt;/p&gt;
&lt;p&gt;First of all, all the masters are connected to an Availability Set (&lt;em&gt;swarm-master-availabilitySet-XXX&lt;/em&gt;). This availability set is responsible for spreading out the masters across fault and update domains, to make sure that they don't all go down together if something happens.&lt;/p&gt;
&lt;p&gt;And secondly, there is a Public IP Address (&lt;em&gt;swarm-master-ip-[ACS SERVICE NAME]mgmt-XXX&lt;/em&gt;). This is a public IP address that is connected to the Load Balancer, giving us a public endpoint that we can use to reach our master nodes. It also happens to be configured with a DNS name, which is defined as &lt;em&gt;&lt;strong&gt;[ACS SERVICE NAME]mgmt.[REGION].cloudapp.azure.com&lt;/strong&gt;&lt;/em&gt;. So whenever we want to connect to our Swarm masters, we can connect to that address.&lt;/p&gt;
&lt;p&gt;Ok, so that takes care of the master end of things. But what's the situation on the agent side of things? Well, a lot of it is very similar...&lt;/p&gt;
&lt;p&gt;On the &lt;em&gt;swarm-subnet&lt;/em&gt;, there is another Load Balancer (&lt;em&gt;swarm-agent-lb-XXX&lt;/em&gt;) that manages the incoming requests, and forwards them to the agents. However, this Load Balancer actually does sort of load balance the incoming traffic instead of just forwarding ports to machines. It has load balancing set up for port 80, 443 and 8080, passing the requests to something called Backend pools. These "pools" of machines are sets of machines that should handle the incoming requests. In this case, there is a single pool (&lt;em&gt;swarm-agent-pool-XXX&lt;/em&gt;), containing...no not a set of agents…but a Virtual Machine Scale Set.&lt;/p&gt;
&lt;p&gt;So the Load Balancer forwards the incoming requests to a Virtual Machine Scale Set (&lt;em&gt;swarm-agent-XXX-vmss&lt;/em&gt;). This is a uniform set of machines that are handled together as a "unit". The set can then be scaled out and in, and Azure takes care of setting everything up for us. So all we have to do, is to tell it what OS, machine size and count we want, and Azure takes care of the rest. It even makes sure to spread out the machines across fault and update domains and so on, to make sure it is as highly available as possible. &lt;/p&gt;
&lt;p&gt;And finally, we get a bunch of Storage Accounts to hold the OS disks and diagnostics data.&lt;/p&gt;
&lt;p&gt;So that's pretty much what we get when we set up a new Azure Container Service! It is quite a few pieces that have to be set up, and work together, for everything to work. So it’s kind of nice that Azure sorts that all out once we have defined the basic requirements! But how do we go about starting up a container in the cluster we just created? Well...that isn't that hard!&lt;/p&gt;
&lt;h5&gt;Connecting to the master a.k.a Getting your SSH on&lt;/h5&gt;
&lt;p&gt;The first thing we need to do, is to set up an SSH tunnel to the Swarm Master. This isn't complicated as such, but for me as a Windows person, it feels a bit awkward. But just relax and let go of that! It’s not hard, and it is pretty cool once you have tried it. Just open a terminal and run the following command&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;ssh -fNL 2375:localhost:2375 -p 2200 [ADMIN ACCOUNT]@[CONTAINER SERVICE NAME]mgmt.northeurope.cloudapp.azure.com&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This opens up an SSH tunnel from your machine to the Swarm manager in the cloud. It connects to the manager using port 2200, encrypting the traffic using the SSH keys. It then maps port 2375 on your local machine to port 2375 on the remote machine. So anything you send to localhost:2375 will be sent to port 2375 on the remote machine, tunneled securely using SSH. The endpoint &lt;strong&gt;&lt;em&gt;[ADMIN ACCOUNT]@[CONTAINER SERVICE NAME]mgmt.northeurope.cloudapp.azure.com&lt;/em&gt;&lt;/strong&gt; just says that you want to connect to &lt;strong&gt;&lt;em&gt;[CONTAINER SERVICE NAME]mgmt.northeurope.cloudapp.azure.com&lt;/em&gt;&lt;/strong&gt;, using the account &lt;strong&gt;&lt;em&gt;[ADMIN ACCOUNT]&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Actually, the port forwarding is what you get by just passing in &lt;em&gt;-L&lt;/em&gt;. The &lt;em&gt;-f&lt;/em&gt; sends the SSH process to the background once it has connected, and &lt;em&gt;-N&lt;/em&gt; tells it not to run a remote command, so all we get is the tunnel itself, leaving the terminal free for other commands.&lt;/p&gt;
&lt;p&gt;Once the tunnel is up and running, we need to tell our Docker client that we want to communicate with our Docker master in the cloud using that port. So, to do that, we need to set the DOCKER_HOST environment variable, which is easily done by calling&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;set DOCKER_HOST=:2375&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;if you are using CMD, or&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;export DOCKER_HOST=:2375&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;if you are in Bash.&lt;/p&gt;
&lt;p&gt;And if you want to verify that you are connected, you can just try running&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker info&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;to see if you get a response.&lt;/p&gt;
&lt;h5&gt;Starting your first container&lt;/h5&gt;
&lt;p&gt;Once you are connected, you can run a new container like you would in any other scenario... For example, I ran&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;docker run -d -p 80:80 --name demo_app zerokoll/demoapp&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;which started a very simple ASP.NET Core container that I had uploaded to my own Docker Hub repo. And as you can see, I mapped the external port 80 to the container's port 80. This is possible because port 80 is mapped in the agent load balancer by default, together with ports 443 and 8080. But, if you want to use any other ports, you need to remember to map those ports through the balancer.&lt;/p&gt;
&lt;p&gt;Once the container has started, you can just browse to the agent endpoint (&lt;strong&gt;&lt;em&gt;[CONTAINER SERVICE NAME]agents.[REGION].cloudapp.azure.com&lt;/em&gt;&lt;/strong&gt;) in your browser.&lt;/p&gt;
&lt;p&gt;If you want to deploy something more complicated than a single container like that, you can go ahead and use a &lt;strong&gt;&lt;em&gt;docker-compose.yml&lt;/em&gt;&lt;/strong&gt; file. However, since ACS only supports Docker API version 1.24, we can't use "docker stack", as this requires API version 1.25. Instead, we need to use &lt;em&gt;&lt;strong&gt;docker-compose&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Ok...I think that was all I had to cover. It is probably just a bunch of stuff you could have figured out on your own, but it was hopefully faster to read here. And to be honest, the post is mostly written with myself in mind. I needed to write this down to sort it all out in my head, and to have a place to come back to when I forget how it is all connected. But hopefully it might help someone else out there as well at some point!&lt;/p&gt;
&lt;p&gt;Cheers!&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/A-brief-look-at-Azure-Container-Service</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/A-brief-look-at-Azure-Container-Service#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=f538f2d3-ac13-4519-a127-434d6e80760c</guid>
      <pubDate>Fri, 20 Oct 2017 14:57:30 +0000</pubDate>
      <category>Docker</category>
      <category>Azure</category>
      <betag:tag>docker</betag:tag>
      <betag:tag>swarm</betag:tag>
      <betag:tag>azure</betag:tag>
      <betag:tag>intro</betag:tag>
      <betag:tag>introduction</betag:tag>
      <betag:tag>walkthrough</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=f538f2d3-ac13-4519-a127-434d6e80760c</pingback:target>
      <slash:comments>2</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=f538f2d3-ac13-4519-a127-434d6e80760c</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/A-brief-look-at-Azure-Container-Service#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=f538f2d3-ac13-4519-a127-434d6e80760c</wfw:commentRss>
    </item>
    <item>
      <title>My intro to Docker - Part 5 of something</title>
<description>&lt;p&gt;In the previous posts, I have talked about everything from what Docker is, to how you can set up a stack of containers using docker-compose, but all of the posts have been about setting up containers on a single machine. Something that can be really useful, but imagine being able to deploy your containers just as easily to a cluster of machines. That would be awesome!&lt;/p&gt;&lt;p&gt;With Docker Swarm, this is exactly what you can do. You can set up a cluster of Docker hosts, and deploy your containers to them in much the same way that you would deploy to a single machine. How cool is that!&lt;/p&gt;&lt;h5&gt;Creating a cluster, or swarm&lt;/h5&gt;&lt;p&gt;To be able to try out what it’s like working with a cluster of machines, we need a cluster of machines. And by default, Docker for Windows/Mac includes only a single Docker host when it’s installed. However, it also comes with a tool called &lt;strong&gt;&lt;em&gt;docker-machine&lt;/em&gt;&lt;/strong&gt; that can be used to create more hosts very easily.&lt;/p&gt;&lt;p&gt;There is just one thing we need to do before we can create our cluster of hosts. We need to set up a virtual network switch for them to attach to. And to do that, I’m going to use the Hyper-V Manager.&lt;/p&gt;&lt;p&gt;Remember that our hosts run in Hyper-V on Windows, so we are dependent on that infrastructure for it to work. On a Mac, the situation is different. 
So if you are doing this on a Mac, I suggest Googling how to create a virtual switch on a Mac.&lt;/p&gt;&lt;p&gt;In the Hyper-V Manager, right-click the local machine in the menu to the left&lt;/p&gt;&lt;p&gt;&lt;a href="https://chris.59north.com/image.axd?picture=2017/9/image.png"&gt;&lt;img width="523" height="226" title="image" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" alt="image" src="https://chris.59north.com/image.axd?picture=2017/9/image_thumb.png" border="0"&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;and then click &lt;em&gt;Virtual Switch Manager…&lt;/em&gt;, followed by &lt;em&gt;Create Virtual Switch&lt;/em&gt;. Name the switch something useful, and connect it to an external network of your choice. You might have more than one network adapter on your machine. In that case, choose the one that you want the Docker hosts to be connected to.&lt;/p&gt;&lt;p&gt;Personally, I named it DockerSwitch, and attached it to my wireless adapter on my laptop&lt;/p&gt;&lt;p&gt;&lt;a href="https://chris.59north.com/image.axd?picture=2017/9/image_1.png"&gt;&lt;img width="550" height="523" title="image" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" alt="image" src="https://chris.59north.com/image.axd?picture=2017/9/image_thumb_1.png" border="0"&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;One thing to note is that attaching the hosts to a public network adapter like this will give the hosts new IP addresses every time you switch network. This will in turn screw up your cluster. But it works well as long as you are just quickly trying this out.&lt;/p&gt;&lt;p&gt;With that switch in place, we can go ahead and create a couple of Docker host machines using the &lt;strong&gt;&lt;em&gt;docker-machine&lt;/em&gt;&lt;/strong&gt; tool. A process that is almost too easy to believe. 
All you have to do is call docker-machine, passing it the virtualization platform that is being used, the name of the virtual switch to use, and the name to use for the machine. So I’ll go ahead and execute the following&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine create -d hyperv --hyperv-virtual-switch DockerSwitch SwarmManager&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;using a PowerShell window running as an Administrator. This will set up a new Hyper-V based virtual machine using a Boot2Docker ISO, naming it &lt;em&gt;SwarmManager&lt;/em&gt; and attaching it to the &lt;em&gt;DockerSwitch&lt;/em&gt; virtual switch.&lt;/p&gt;&lt;p&gt;This machine will be the manager of my cluster. That means that it will be the machine that I will execute my Docker commands against. It then makes sure to set up the state in the swarm for us. In the case of starting some new containers, it means that it will either host the container itself, or delegate to a worker in the cluster, depending on load and requirements and so on.&lt;/p&gt;&lt;p&gt;You don’t really have to set up any workers to be able to run the swarm commands that we are going to be running in this post. A one-machine cluster is perfectly fine. But it is a little boring. So I’ll go ahead and create one more machine that will act as a worker, by running the following command&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine create -d hyperv --hyperv-virtual-switch DockerSwitch SwarmWorker01&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will create a second VM called &lt;em&gt;SwarmWorker01&lt;/em&gt;, which I’ll set up to be a worker in the swarm.&lt;/p&gt;&lt;p&gt;Next, I need to create the actual swarm. And to do that, I need to run a command that looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker swarm init&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;on the manager node. However, it needs to run on the actual host. So to do that, I’ll use docker-machine to SSH into the machine and execute the command. 
Like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker swarm init"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This tells docker-machine to SSH into SwarmManager, and execute “docker swarm init”. This will return some information about how to join the swarm from a worker. It should look similar to this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;To add a worker to this swarm, run the following command:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; docker swarm join --token SWMTKN-1-0poaqqcljbote3vg8zpl7g68ehrcr3xsqpbth1ckqxnqip6vyd-5vsl5hjgmplhk98lnc9n7og70 192.168.43.16:2377&lt;br&gt;To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;which is pretty descriptive. So let’s go ahead and use docker-machine to execute that on the worker node by running&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmWorker01 "docker swarm join --token SWMTKN-1-0poaqqcljbote3vg8zpl7g68ehrcr3xsqpbth1ckqxnqip6vyd-5vsl5hjgmplhk98lnc9n7og70 192.168.43.16:2377"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This should return a simple statement telling you that the node has been joined to the swarm as a worker.&lt;/p&gt;&lt;p&gt;These simple commands have now given us a tiny swarm, containing a manager and a single worker node, and it’s now ready for us to go ahead and deploy our containers.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you want to reconfigure your Docker client to target the SwarmManager host instead of using SSH, you can do that by running &lt;strong&gt;&lt;em&gt;docker-machine.exe env SwarmManager | Invoke-Expression&lt;/em&gt;&lt;/strong&gt;. This will set the environment variables needed to reconfigure the Docker client to talk to the SwarmManager instead of the default Docker for Windows Docker host. 
This is temporary though, and will only affect the current PowerShell window.&lt;/p&gt;&lt;h5&gt;Adding images to use in the swarm&lt;/h5&gt;&lt;p&gt;The next step is to set up some images that we can use for our containers. But this time this is going to be a little different. Because, to be able to share images between nodes in the swarm, the images need to be on an external repository. And for this, I’m going to use Docker Hub. But before we get to that, we need an application to deploy. And in this case, it will be a simple ASP.NET Core web application. It’s VERY simple. It uses a startup file that looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;public class Startup&lt;br&gt;{&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; public void Configure(IApplicationBuilder app, IHostingEnvironment env)&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; app.Run(async (context) =&amp;gt; {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; await context.Response.WriteAsync("Hello World!");&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; });&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, it is just a VERY basic application that returns “Hello World” on every incoming request.&lt;/p&gt;&lt;p&gt;And the dockerfile for this application, which is located in the same folder as the application, looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;FROM microsoft/aspnetcore-build&lt;br&gt;COPY . /app&lt;br&gt;WORKDIR /app&lt;br&gt;EXPOSE 80&lt;br&gt;ENTRYPOINT ["dotnet", "run"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Once I have my application and my dockerfile, I need to build an image by running &lt;em&gt;docker build&lt;/em&gt;. So I’ll go ahead and execute&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker build . 
--tag zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will build an image called &lt;em&gt;zerokoll/demo_app&lt;/em&gt; locally on my machine. The “zerokoll” part is my Docker ID. It needs to be named like this to allow me to push it to Docker Hub.&lt;/p&gt;&lt;p&gt;Next I need to push it to Docker Hub, but to do that, I first need to log in by executing&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker login&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This requires me to pass in my Docker ID and password. &lt;/p&gt;&lt;p&gt;Once that is done, I can push my demo_app image to Docker Hub by executing&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker push zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will push the entire image to my repo at Docker Hub, making it available to all my swarm nodes.&lt;/p&gt;&lt;p&gt;With that in place, we can go ahead and start a new container in the cluster.&lt;/p&gt;&lt;h5&gt;Starting containers in the swarm&lt;/h5&gt;&lt;p&gt;However, before we do that, I just want to mention that it is actually called starting a service in the cluster. A service is basically a container running in a swarm, but a service supports some other features like being able to run multiple, load-balanced instances across swarm nodes. So to be correct, we should be talking about setting up some services.&lt;/p&gt;&lt;p&gt;To get services up and running in the swarm, we need to tell the manager what services we want. We can do this in a couple of ways. We can either run commands like &lt;strong&gt;&lt;em&gt;docker service create&lt;/em&gt;&lt;/strong&gt; manually, or we can do it using docker-compose.yml files. And since we already know how to use these, let’s go ahead and use a docker-compose file.&lt;/p&gt;&lt;p&gt;I’ll create a new docker-compose.yml file that starts a container using the image we just created. 
It should look something like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;version: '3'&lt;br&gt;services:&lt;br&gt;&amp;nbsp;&amp;nbsp; demo_app:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; image: zerokoll/demo_app&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ports:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - "8080:80"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will start a new service called &lt;em&gt;demo_app&lt;/em&gt;, based on the &lt;em&gt;zerokoll/demo_app&lt;/em&gt; image, binding the host’s port 8080 to the container’s port 80. However, to “deploy” it to the cluster, we don’t use the &lt;em&gt;docker-compose&lt;/em&gt; command as we have done before. Instead we use &lt;strong&gt;&lt;em&gt;docker stack deploy&lt;/em&gt;&lt;/strong&gt;. And we need to run it on the manager, not our “local” host, which means that we need to copy the yml file to the manager machine, and then call the command on that machine using SSH.&lt;/p&gt;&lt;p&gt;So the first step is to copy the file to the manager using the following command&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine scp docker-compose.yml SwarmManager:~&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will copy the docker-compose.yml file from the local folder to the manager. And then we need to deploy it to the swarm using the following command&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker stack deploy -c docker-compose.yml demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will deploy the “stack” of services defined in the docker-compose file to the swarm. 
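As a side note, for a single service like this, the compose file maps onto one docker service create call, the manual alternative mentioned earlier. A sketch of the equivalent command, assembled here as a string so the mapping to the yml keys is visible (the variable names are just for illustration):

```shell
# The manual equivalent of the one-service compose file above:
# create the service directly, instead of deploying a stack.
# Each variable mirrors a key in the yml.
NAME="demo_app"                 # the service key in the yml
IMAGE="zerokoll/demo_app"       # image:
PORTS="8080:80"                 # ports: - "8080:80"

CMD="docker service create --name $NAME --publish $PORTS $IMAGE"

# On the swarm it would be executed on the manager, e.g.:
#   docker-machine ssh SwarmManager "$CMD"
echo "$CMD"
```

The stack-based deploy is still preferable once there is more than one service, since it keeps the whole topology in one file.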
&lt;p&gt;However, this will take a little while, as the manager first needs to download the image before it can start the service. “Unfortunately” this happens in the background, so it is a bit hard to figure out what is happening. But as soon as it is done, we should be able to browse to our beautiful “Hello World” text by browsing to the swarm manager. And to be able to do that, we need to know the swarm manager’s IP address. Something that we can find out by running &lt;blockquote&gt;&lt;p&gt;docker-machine ls&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This should give you a list of Docker machines running on your computer, including their IP addresses. So all you need to do to verify that everything is working, is to copy the IP address of the SwarmManager machine and put it into your browser’s address bar, appending &lt;em&gt;:8080&lt;/em&gt;. &lt;p&gt;If you want to see what services are running in the swarm, you can run&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker service ls"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will show you all services running in the swarm. And as you can see, the &lt;em&gt;demo_app&lt;/em&gt; service has been named by prefixing it with the stack name &lt;em&gt;demo_stack&lt;/em&gt;. If you instead just want to see the services running in the stack called &lt;em&gt;demo_stack&lt;/em&gt;, you can run&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker stack ps demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And once you have verified that everything is working, and you are ready to tear down your “stack”, you can just execute&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker stack rm demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will stop all the services that are currently running in the stack, and remove them, including the containers.&lt;p&gt;But wouldn’t it be cool if we could see what services are running on what node? 
Guess what, you can easily do that by adding an extra service to the mix.&lt;h5&gt;Visualizing the containers in the swarm&lt;/h5&gt;&lt;p&gt;The only thing we need to do to add some visualization to the swarm is to add another service/container to the mix. And to do that, we just need to update the docker-compose.yml file like this&lt;blockquote&gt;&lt;p&gt;version: '3'&lt;br&gt;services:&lt;br&gt;&amp;nbsp;&amp;nbsp; demo_app:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; image: zerokoll/demo_app&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ports:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - "8080:80"&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; networks:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - my_network&lt;br&gt;&amp;nbsp;&amp;nbsp; visualizer:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; image: dockersamples/visualizer:stable&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ports:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - "8081:8080"&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; volumes:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - "/var/run/docker.sock:/var/run/docker.sock"&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; deploy:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; placement:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; constraints: [node.role == manager]&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; networks:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - my_network&lt;br&gt;networks:&lt;br&gt;&amp;nbsp;&amp;nbsp; my_network:&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, I have made a bunch of changes to the docker-compose.yml file. First of all, I have added a new service called &lt;em&gt;visualizer&lt;/em&gt;, which is based on the image &lt;em&gt;dockersamples/visualizer&lt;/em&gt;. This is a cool little service that gives us a website where we can see the state of the swarm. &lt;p&gt;For the visualizer service, I have mapped port 8081 on the swarm to the container’s port 8080, as well as set up a volume based on the Docker Unix socket. However, I have also set up a placement constraint, telling the manager that this service has to run on a swarm node whose role is manager. The reason for this is that this service depends on information provided by the manager node. So adding this constraint will make sure that whenever this stack is deployed, this service will always be placed on the manager node.&lt;p&gt;Next, I have just made some things a bit more explicit. Mainly the networking. In this case, I am explicitly setting up a network called &lt;em&gt;my_network&lt;/em&gt;, and adding both services to that network. 
If we don’t explicitly tell the manager to set up a network, it will set up a default network, and add all the services to it, which is what happened in the previous version of the file.&lt;p&gt;So let’s go ahead and run&lt;blockquote&gt;&lt;p&gt;docker-machine scp docker-compose.yml SwarmManager:~&lt;br&gt;docker-machine ssh SwarmManager "docker stack deploy -c docker-compose.yml demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will copy across the new docker-compose.yml file, and then deploy the stack to the swarm. And once again, that will take some time, as it first needs to get the dockersamples/visualizer image. But once that is done, you should be able to go back to the browser and port 8080 and get that familiar “Hello World” response. However, you should now also be able to go to port 8081 to see the visualization of the nodes and services in the swarm. In my case, I can see the following&lt;p&gt;&lt;a href="https://chris.59north.com/image.axd?picture=2017/9/image_2.png"&gt;&lt;img width="472" height="711" title="image" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" alt="image" src="https://chris.59north.com/image.axd?picture=2017/9/image_thumb_2.png" border="0"&gt;&lt;/a&gt;&lt;p&gt;This tells me that I have 2 nodes, the SwarmManager and the SwarmWorker01, and the manager runs the &lt;em&gt;demo_stack_visualizer&lt;/em&gt;, and the worker runs just an instance of the &lt;em&gt;demo_app&lt;/em&gt;. The little green dots in the top left corner of the “service boxes” are also telling me that the instances are up and running.&lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;It took me a minute before I got the screenshot, so the &lt;em&gt;demo_stack_demo_app&lt;/em&gt; instance managed to turn green. 
But it did take a little while to go from red to green as the worker node first needed to pull the image from Docker Hub before it could get it started.&lt;p&gt;But right now, we are only running a single service besides the visualizer. So it’s going to be a really boring thing to look at. Not to mention that it doesn’t really show any of the cool load balancing features of the swarm. So let’s go ahead and modify the settings for the &lt;em&gt;demo_app&lt;/em&gt; service, adding the following&lt;blockquote&gt;&lt;p&gt;…&lt;br&gt;services:&lt;br&gt;&amp;nbsp;&amp;nbsp; demo_app:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; …&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;strong&gt;deploy:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; replicas: 2&lt;/strong&gt;&lt;br&gt;…&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This tells the swarm manager that it should run 2 instances of the &lt;em&gt;demo_app&lt;/em&gt; service in a load-balanced fashion.&lt;p&gt;Unfortunately, we will have no clue that the load balancing is taking place at the moment, since the response will be the same from both services. So let’s update the application by adding some information about what host the application is running on. And the easiest way to do this, is by updating the Startup class, changing the response like this&lt;blockquote&gt;&lt;p&gt;await context.Response.WriteAsync(&lt;strong&gt;&lt;em&gt;"\"Hello World!\" says " + Environment.MachineName&lt;/em&gt;&lt;/strong&gt;);&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Ok, so now that the application will tell us what machine it is running on, let’s go ahead and deploy it.&lt;p&gt;First of all, we need to create a new image, by running the following command&lt;blockquote&gt;&lt;p&gt;docker build . 
--tag zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;and then push it to Docker Hub by executing&lt;blockquote&gt;&lt;p&gt;docker push zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Finally, we need to deploy the updated stack by executing the following commands&lt;blockquote&gt;&lt;p&gt;docker-machine scp docker-compose.yml SwarmManager:~&lt;br&gt;docker-machine ssh SwarmManager "docker stack deploy -c docker-compose.yml demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This first copies across the updated docker-compose.yml file, and then deploys it to the swarm, updating the &lt;em&gt;demo_stack&lt;/em&gt; stack.&lt;p&gt;&lt;strong&gt;Tip: &lt;/strong&gt;Pull up the visualizer window before executing the command to let you see what is happening. As you will see, it will take down one service at a time, setting up a new instance before taking down the next.&lt;p&gt;This time, it only needs to pull down a little bit of data, as most of the layers are common between the 2 versions. So the update should be fast. &lt;p&gt;Once everything has completed, the visualizer should now show you that you have 2 instances, or replicas, of the demo_app service running in the swarm. And they should be deployed to different nodes in the swarm. In my case, the visualizer looks like this&lt;p&gt;&lt;a href="https://chris.59north.com/image.axd?picture=2017/9/image_3.png"&gt;&lt;img width="466" height="701" title="image" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" alt="image" src="https://chris.59north.com/image.axd?picture=2017/9/image_thumb_3.png" border="0"&gt;&lt;/a&gt;&lt;p&gt;Once at least one of the demo_app nodes is green, you should be able to browse to port 8080 and see the host name as part of the greeting. 
And when both instances are up, refreshing the page might show you the requests being load balanced.&lt;p&gt;I say might, because on my machine I had to pull up a new incognito browser window to get it to use another node. But I’m not sure that is needed in all cases…&lt;p&gt;But what if we need to make some changes to the demo_app service? Well, in that case, we can just update the service and re-deploy it. So let’s try that…&lt;p&gt;First I’ll just update the application in some way. So I’ll go ahead and make a simple change to the response, like this&lt;blockquote&gt;&lt;p&gt;await context.Response.WriteAsync(&lt;strong&gt;&lt;em&gt;"\"Guten Tag\" says " + Environment.MachineName&lt;/em&gt;&lt;/strong&gt;);&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Making the application greet the user in German instead of English.&lt;p&gt;Then we need to generate a new image&lt;blockquote&gt;&lt;p&gt;docker build . --tag zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;and push it to Docker Hub&lt;blockquote&gt;&lt;p&gt;docker push zerokoll/demo_app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;and then we can go ahead and update the service. We can do this in 2 ways. We can either push a new stack, which will update the service for us, or we can update the individual service by calling &lt;strong&gt;&lt;em&gt;docker service update&lt;/em&gt;&lt;/strong&gt;. However, to be able to use &lt;em&gt;docker service update&lt;/em&gt;, we need to know the name of the service we want to update. 
So let’s first query the running services to see what services we have to choose from&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker service ls"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;should return something similar to&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;ID&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; NAME&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; MODE&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; REPLICAS&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IMAGE&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PORTS&lt;br&gt;
6v8ejov604w7&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; demo_stack_visualizer&amp;nbsp;&amp;nbsp; replicated&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1/1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; dockersamples/visualizer:stable&amp;nbsp;&amp;nbsp; *:8081-&amp;gt;8080/tcp&lt;br&gt;
mp7eeu6tmvx3&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; demo_stack_demo_app&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; replicated&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 2/2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; demo_app:latest&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; *:8080-&amp;gt;80/tcp&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, all service names are prefixed with the name of the stack they are part of. This means that the service we want to update is called &lt;strong&gt;&lt;em&gt;demo_stack_demo_app&lt;/em&gt;&lt;/strong&gt;, which in turn means that the command we need to execute to update it is&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker service update --image zerokoll/demo_app demo_stack_demo_app"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will update the &lt;em&gt;demo_stack_demo_app&lt;/em&gt; service to use the latest &lt;em&gt;zerokoll/demo_app&lt;/em&gt; image. In practice, this means that one instance at a time is torn down, and put back up again, using the latest version of the image.&lt;/p&gt;&lt;p&gt;You should now be able to go back to the browser on port 8080, and refresh, and see the greeting in German. If not, give it a couple of seconds and try again. It can take a little while for the services to be updated, depending on the size of the update.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you pull up the visualization on screen before you run the command, you will see the services being updated. 
Or rather, you will see the services being taken down one by one, and put back up using the new image.&lt;/p&gt;&lt;p&gt;Once you are done refreshing the browser and looking at the updated response coming from the updated services, you can tear down the whole thing by calling&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmManager "docker stack rm demo_stack"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And if you want to remove the hosts from the swarm as well, you can execute&lt;blockquote&gt;&lt;p&gt;docker-machine ssh SwarmWorker01 "docker swarm leave"&lt;br&gt;docker-machine ssh SwarmManager "docker swarm leave --force"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, you need to provide a --&lt;em&gt;force&lt;/em&gt; when taking the manager out of the swarm. The reason for this is that this takes down the entire swarm. Not something you want to do by mistake.&lt;p&gt;And if you want to remove the hosts from your machine as well, you can just execute&lt;blockquote&gt;&lt;p&gt;docker-machine rm SwarmManager&lt;br&gt;docker-machine rm SwarmWorker01&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This should pretty much put your machine back to the state it was in before you started. Well…not quite… You also need to remember to go back to the Hyper-V Manager and remove the virtual switch that was added at the beginning of the post. &lt;p&gt;That was it for this post… Hopefully you now have a basic understanding of how you can set up a simple swarm, and deploy your stacks to it using &lt;em&gt;docker stack&lt;/em&gt;.</description>
      <link>https://chris.59north.com/post/My-intro-to-Docker-Part-5-of-something</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/My-intro-to-Docker-Part-5-of-something#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=df4dec96-0597-4820-b26a-f0cd3cf5a3fd</guid>
      <pubDate>Mon, 18 Sep 2017 14:55:15 +0000</pubDate>
      <category>Docker</category>
      <betag:tag>docker</betag:tag>
      <betag:tag>swarm</betag:tag>
      <betag:tag>docker swarm</betag:tag>
      <betag:tag>tutorial</betag:tag>
      <betag:tag>intro</betag:tag>
      <betag:tag>introduction</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=df4dec96-0597-4820-b26a-f0cd3cf5a3fd</pingback:target>
      <slash:comments>5</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=df4dec96-0597-4820-b26a-f0cd3cf5a3fd</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/My-intro-to-Docker-Part-5-of-something#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=df4dec96-0597-4820-b26a-f0cd3cf5a3fd</wfw:commentRss>
    </item>
    <item>
      <title>My intro to Docker - Part 4 of something</title>
      <description>&lt;p&gt;In the previous posts about Docker, &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-1-of-something" target="_blank"&gt;here&lt;/a&gt;, &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something" target="_blank"&gt;here&lt;/a&gt; and &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-3-of-something" target="_blank"&gt;here&lt;/a&gt;, we’ve looked at what Docker is, how to set up a basic container and how to set up a stack of containers using docker-compose. One thing we haven’t talked about is the fact that most projects use some form of persistent data store, and the most common store, at least in my world, is a relational database of some sort. So this time, I want to cover something that might seem slightly odd…setting up an MS SQL Server…on Linux…in Docker.&lt;/p&gt;&lt;p&gt;
Yes, you heard me right… I’m going to show you how to set up an MS SQL Server instance in a Linux-based Docker container. That wouldn’t have been possible, in any way, not too long ago, but Microsoft “recently” released a version of MS SQL Server that runs on Linux, which is really cool. And running it in Docker just makes sense!&lt;/p&gt;&lt;h5&gt;Running MS SQL Server in a Docker container&lt;/h5&gt;&lt;p&gt;Starting a SQL Server instance in a Docker container isn’t that hard, but there are a couple of things that need to be set up for it to work.&lt;/p&gt;&lt;p&gt;
First of all, SQL Server can only run on a machine that has at least 3.25 GB of memory. And when running Docker for Windows (or Mac), that isn’t the case by default. So the first step is to give the Docker VM host more memory. Luckily, this is a piece of cake to do. Just go down to the systray and right-click the little Docker icon (the whale), choose Settings and then Advanced settings. In here, you can just pull the slider to set the available memory to at least 3.25 GB, and then click Apply. This will cause the VM that runs in Hyper-V to restart, and when it comes back online, it will have more memory for us to play with.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Note:&lt;/strong&gt; Don’t go in and update the VM in the Hyper-V manager…&lt;/p&gt;&lt;p&gt;
Next, SQL Server requires a couple of environment variables to be set for it to start up without any user interaction. First of all, it requires a variable called &lt;strong&gt;&lt;em&gt;ACCEPT_EULA&lt;/em&gt;&lt;/strong&gt;, which should be set to &lt;strong&gt;&lt;em&gt;Y&lt;/em&gt;&lt;/strong&gt;, to confirm that we accept the EULA. And then we need an admin password, set using the &lt;strong&gt;&lt;em&gt;SA_PASSWORD&lt;/em&gt;&lt;/strong&gt; variable. So to run a new SQL Server instance, you can just run&lt;br&gt;
&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker run --name sql_server -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MyVeryS3cr3tPassw0rd!' -p 1433:1433 -d microsoft/mssql-server-linux&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will start a new SQL Server instance inside a detached container, mapping port 1433 on the host to 1433 on the container. This means that once the container is up and running, and SQL Server has started (which takes a few seconds), you can just open a SQL Server Management Studio instance and connect to localhost.&lt;br&gt;
Note: If you already have SQL Server installed on your machine, port 1433 is likely already in use, so you will have to map some other port on the host, and then use that new port when connecting SSMS, using the format &lt;em&gt;&amp;lt;hostname&amp;gt;, &amp;lt;port number&amp;gt;&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;And if you don’t want to use SSMS, or you are on a Mac, you can connect to the container interactively, and run queries, using this command&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker exec -it sql_server /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P MyVeryS3cr3tPassw0rd!&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
This will attach your command line to the SQL Server instance, allowing you to run commands like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
select * from sys.Tables&lt;br&gt;
go&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And when you are done, you can just run&lt;br&gt;
&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;exit&lt;br&gt;
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;to detach.&lt;/p&gt;&lt;p&gt;However, it kind of sucks to have to attach like this, or use SSMS, and set up a database every time you start a container based on the SQL Server image. It would be much nicer to have the database set up for us when the container starts instead. And the easiest way to do this is to create our own SQL Server Docker image that contains the start-up functionality we need.&lt;/p&gt;&lt;h5&gt;Seeding the database&lt;/h5&gt;&lt;p&gt;You’ll need to add a couple of files for this to work, so I suggest creating a new directory to hold them. Inside that directory, we need a new dockerfile, in which we need to add the following&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;FROM microsoft/mssql-server-linux&lt;/p&gt;
&lt;p&gt;ENV ACCEPT_EULA=Y&lt;br&gt;
ENV MSSQL_SA_PASSWORD=MyPassw0rd!&lt;/p&gt;
&lt;p&gt;RUN mkdir -p /app&lt;br&gt;WORKDIR /app&lt;br&gt;COPY . /app&lt;br&gt;RUN chmod +x /app/import-data.sh&lt;br&gt;&lt;br&gt;
EXPOSE 1433 &lt;br&gt;
ENTRYPOINT ["/bin/bash", "./entrypoint.sh"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;So what’s going on here? Well, first of all, it defines that the image should use the &lt;em&gt;microsoft/mssql-server-linux&lt;/em&gt; image as the base image. Then it sets the required environment variables using the &lt;strong&gt;&lt;em&gt;ENV&lt;/em&gt;&lt;/strong&gt; keyword. And since I need a setup script and some other files, I go ahead and create a new directory called &lt;strong&gt;&lt;em&gt;/app&lt;/em&gt;&lt;/strong&gt;, making it the current working directory using the &lt;strong&gt;&lt;em&gt;WORKDIR&lt;/em&gt;&lt;/strong&gt; keyword. The files from the current directory are then copied to that directory using the &lt;strong&gt;&lt;em&gt;COPY&lt;/em&gt;&lt;/strong&gt; keyword. However, since I’m going to use a shell script, I need to run chmod to set the permissions to allow it to be executed. Finally it exposes port 1433, which is used by SQL Server, before setting up an entrypoint telling the image to execute the &lt;em&gt;entrypoint.sh&lt;/em&gt; script using bash. &lt;/p&gt;&lt;p&gt;That configures the image, but what’s inside the &lt;em&gt;entrypoint.sh&lt;/em&gt; file? Well, not much… &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
/app/import-data.sh &amp;amp; /opt/mssql/bin/sqlservr&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
It just calls another shell script called &lt;em&gt;import-data.sh&lt;/em&gt;, and then starts SQL Server by calling &lt;em&gt;sqlservr&lt;/em&gt;. And yes, it needs to be in this order, even if it makes little sense trying to import data before SQL Server has started. But, the last thing being executed has to keep running to keep the container running, and the shell script will only run some commands and then terminate, while &lt;em&gt;sqlservr&lt;/em&gt; will block the execution and keep running. &lt;br&gt;&lt;/p&gt;&lt;p&gt;The &lt;em&gt;import-data.sh&lt;/em&gt; file isn’t much more complicated &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
echo "Sleeping"&lt;br&gt;sleep 20s&lt;br&gt;&lt;br&gt;
echo "Setting up table"&lt;br&gt;/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P MyPassw0rd! -d master -i setup.sql&lt;br&gt;
&lt;br&gt;echo "Importing data in Products table"&lt;br&gt;/opt/mssql-tools/bin/bcp DemoData.dbo.Products in "/app/Products.csv" -c -t',' -S localhost -U sa -P MyPassw0rd!&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Currently, there is no way to add start up scripts to SQL Server for Linux, but hopefully there will be in the future. That way we won’t have to do a stupid sleep to get things to work… &lt;/p&gt;&lt;p&gt;
After the 20 seconds, it uses &lt;em&gt;sqlcmd&lt;/em&gt; to attach to the local SQL Server instance, using the sa account, and then runs a script called &lt;em&gt;setup.sql&lt;/em&gt;, which we’ll have a look at in a minute… And finally it uses &lt;em&gt;bcp&lt;/em&gt; to import the contents of a file called &lt;em&gt;Products.csv&lt;/em&gt; into a table called Products in a database called &lt;em&gt;DemoData&lt;/em&gt;, which is created in the setup.sql script that looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;CREATE DATABASE DemoData;&lt;br&gt;GO&lt;br&gt;&lt;br&gt;
USE DemoData;&lt;br&gt;GO&lt;br&gt;
&lt;br&gt;CREATE TABLE Products (ID int, ProductName nvarchar(max));&lt;br&gt;GO&lt;br&gt;
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;and just to be thorough, and show it all, the &lt;em&gt;Products.csv&lt;/em&gt; file looks like this &lt;br&gt;
&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;1,Skateboard&lt;br&gt;2,Kite&lt;br&gt;3,Parachute&lt;br&gt;4,Computer&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;With the &lt;em&gt;dockerfile&lt;/em&gt; in place, we can create a new image by running&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker build --tag sql_server .&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
naming the image &lt;strong&gt;&lt;em&gt;sql_server&lt;/em&gt;&lt;/strong&gt;. Finally, we can start up a new container based on that image, by executing&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker run -it -p 1433:1433 --rm sql_server&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This should end up with a running SQL Server instance, with a database called DemoData, containing a table called Products with some data. And as long as you don’t stop the container, you should be able to connect to it as usual using SSMS (or whatever tool you normally use) and look at the database.&lt;/p&gt;&lt;p&gt;
However…this setup kind of sucks! It will set up a new database, with the same hardcoded data, every time you spin up a new container based on this image. That doesn’t play very well with the idea that containers are ephemeral, and can be taken down and moved around at any given time… Unless all you need is a database with a fixed dataset. But in most cases, you want to be able to use your database to store stuff, adding, updating and removing things. So how do we go about solving this? &lt;/p&gt;&lt;h5&gt;Adding persistence&lt;/h5&gt;&lt;p&gt;To be able to persist data when containers are moved around, updated etc., we need a storage location that isn’t tied to the container. Anything inside a container will be blown away when the container is deleted, which is what will happen if the container is moved, for example. So in general, we don’t want to persist permanent things inside the container. Instead we want to persist permanent things external to the container.&lt;/p&gt;&lt;p&gt;
There are 2 different ways to add external storage to a container. The first way is to bind something called a volume to the container, which basically means that we take a directory on the host, or somewhere persistent, and map it to a path inside the container.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Note:&lt;/strong&gt; If you come from a Windows background, which I guess you probably do, you need to remember that Linux doesn’t use drive letters etc. to access files. Instead it uses mount points, which are basically just paths where some form of storage is located. So you could see a bound volume as a mapped network drive in Windows, except that the mapped drive just gets a path…&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Note 2:&lt;/strong&gt; This doesn’t work for this special scenario for Docker for Mac. Binding volumes work, but apparently SQL has some issues with it. More information can be found &lt;a href="https://github.com/Microsoft/mssql-docker/issues/12" target="_blank"&gt;here&lt;/a&gt;. So if you are on a Mac, you need to look at the “Alternate persistence” section below.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Note 3:&lt;/strong&gt; Even if you are on a Windows machine, it might be a good idea to have a look at the “Alternate persistence” section below as well…&lt;/p&gt;&lt;p&gt;There are different drivers out there that allow you to use different storage mechanisms for your bound volumes, but for this demo, I’ll use a directory on the host as a volume.&lt;/p&gt;&lt;p&gt;
To add a volume using the docker run command, you just need to add a -v parameter with the value set to &lt;strong&gt;&lt;em&gt;&amp;lt;host directory&amp;gt;:&amp;lt;container path&amp;gt;&lt;/em&gt;&lt;/strong&gt;. So running&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker run -v c:\DockerVolume\:/dockervolume myimage&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
will start a new container using the &lt;em&gt;myimage&lt;/em&gt; image, with the path &lt;strong&gt;&lt;em&gt;/dockervolume/&lt;/em&gt;&lt;/strong&gt; inside the container mapped to the &lt;strong&gt;&lt;em&gt;c:\DockerVolume\&lt;/em&gt;&lt;/strong&gt; directory on the host. So anything inside the c:\DockerVolume\ directory will be available inside the container, and anything written to /dockervolume/ in the container will be persisted to the c:\DockerVolume\ directory. And since that is a directory on the host, which hopefully won’t disappear, the data will be persisted between container starts.&lt;/p&gt;&lt;p&gt;
Using this, we can go ahead and make the SQL storage a bit better. So, instead of creating a new database in every container, we can attach an existing database on start. &lt;/p&gt;&lt;p&gt;
Step one is to create the new database in some way. In my case, I just used SSMS and my local SQL Server instance to create a new database that replicates the one used in the previous example. Then I detached that database from my local SQL Server instance. This gave me a DemoData.mdf and a DemoData_log.ldf file located somewhere similar to &lt;em&gt;C:\Program Files\Microsoft SQL Server\MSSQL13.SQLEXPRESS\MSSQL\DATA&lt;/em&gt; (depends on what instance name you are using and so on…but you get the point). Then I copied those two files to a new directory called &lt;strong&gt;&lt;em&gt;c:\MyDockerVolume\&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;
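If you would rather script the detach than click through the SSMS UI, the same thing can be done in T-SQL. This is just a sketch, assuming the DemoData database name used above; sp_detach_db is the standard system procedure for this.

```sql
-- Hypothetical T-SQL alternative to detaching via the SSMS UI.
USE [master];
GO
EXEC master.dbo.sp_detach_db @dbname = N'DemoData';
GO
```

&lt;/p&gt;&lt;p&gt;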
With the mdf and ldf files in place, we need to update the setup script to attach them to SQL Server, instead of creating a new database. To start out, we can update the import-data.sh file to this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;echo "Sleeping"&lt;br&gt;sleep 20s &lt;br&gt;
&lt;br&gt;echo "Attaching Database"&lt;br&gt;/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P MyPassw0rd! -d master -i setup.sql&lt;br&gt;echo "Done Attaching Database"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see, I have removed the CSV-file import, and then we need to update the &lt;em&gt;setup.sql&lt;/em&gt; to this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;USE [master]&lt;br&gt;GO&lt;br&gt;&lt;br&gt;
CREATE DATABASE [DemoData] &lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; ON ( FILENAME = N'/sql_storage/DemoData.mdf' ), ( FILENAME = N'/sql_storage/DemoData_log.ldf' )&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; FOR ATTACH&lt;br&gt;GO&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;It uses &lt;em&gt;CREATE DATABASE&lt;/em&gt; to create a new database called DemoData, attaching the DemoData.mdf and DemoData_log.ldf files from a directory called &lt;strong&gt;&lt;em&gt;/sql_storage/&lt;/em&gt;&lt;/strong&gt;. This means that when the container starts up, it will start SQL Server, and then run the &lt;em&gt;setup.sql&lt;/em&gt; script, setting up a new database using the two SQL files. All we need to remember is to make sure that we add a volume that contains the files. &lt;br&gt;
So to try this out, we first need to create a new image since the scripts have changed&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker build --tag sql_mounted .&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
and then we can create a new container using that new image &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker run -it -p 1433:1433 -v c:\MyDockerVolume:/sql_storage --rm sql_mounted&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure you map the correct folder. The above command maps c:\MyDockerVolume\, which is just what I use…&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Tip:&lt;/strong&gt; If you get problems opening your files in the container, make sure that all the file permissions are set up properly on the host… &lt;/p&gt;&lt;p&gt;
As long as your container is up and running, you should be able to connect to it using SSMS or whatever tool you use to work with SQL Server. And if you add or remove any data, it will be persisted even if you remove the container and start it back up again. So you should be good to go. &lt;/p&gt;&lt;h5&gt;Alternate persistence&lt;/h5&gt;&lt;p&gt;The second way to persist files between container instances is by using something called a data volume container. This is basically a mapped storage location called a volume, managed by Docker. And the way you set that up is by first creating a new volume by calling &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker volume create sql_data&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will create a new empty volume called &lt;strong&gt;&lt;em&gt;sql_data&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;Then you need to add the files you want in the volume, by copying them from somewhere. In this case, the host. However, to do that, you need to create a container and attach the volume to that, and then copy the files to the volume through that container. &lt;/p&gt;&lt;p&gt;
So, to do that, you can run &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker run -v sql_data:/sql_storage --name temp alpine&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
to create a new container called temp, with the &lt;em&gt;sql_data&lt;/em&gt; volume attached to&lt;em&gt; /sql_storage/&lt;/em&gt;. &lt;/p&gt;&lt;p&gt;Next, the files are copied using docker, by calling &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker cp ./Data/DemoData.mdf temp:/sql_storage&lt;br&gt;docker cp ./Data/DemoData_log.ldf temp:/sql_storage&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
which will copy the DemoData.mdf and DemoData_log.ldf files to the container’s /sql_storage/ directory, which then maps to the sql_data volume. And when that is done, you can go ahead and delete the temporary container &lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker rm temp
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Now, the volume should contain all of the required files, so running&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;
docker run -it -p 1433:1433 -v sql_data:/sql_storage --rm sql_mounted&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
will start up a new SQL container, but this time with the &lt;em&gt;/sql_storage/&lt;/em&gt; path mapped to the volume we just created. &lt;/p&gt;&lt;p&gt;The volume is persisted separately from the container. So when the container is removed, the volume remains, ready for re-use.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;
Note:&lt;/strong&gt; For a real scenario, you would just skip the &lt;em&gt;-it&lt;/em&gt; and add &lt;em&gt;-d&lt;/em&gt; to have the container running in the background instead. &lt;/p&gt;&lt;p&gt;
To view your volumes on the host, just run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker volume ls&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;and if you want to remove a volume, you just run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker volume rm &lt;em&gt;&amp;lt;volume name&amp;gt;&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;
That’s pretty much it! Hopefully you now have an idea of how to store your data permanently, as well as how you can get SQL Server up and running inside a Docker container running Linux. Who would have thought A, that I would ever write about Linux and command lines, and B, about running Microsoft SQL Server on Linux!? It’s a brave new world!&lt;/p&gt;&lt;p&gt;The &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-5-of-something"&gt;next post&lt;/a&gt; covers Docker Swarm and how we can deploy to a cluster of machines. Very cool stuff!&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/My-intro-to-Docker-Part-4-of-something</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/My-intro-to-Docker-Part-4-of-something#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=c0c2d6c9-0afe-48b0-a58d-80945d766483</guid>
      <pubDate>Wed, 30 Aug 2017 11:45:18 +0000</pubDate>
      <category>Docker</category>
      <betag:tag>docker</betag:tag>
      <betag:tag>intro</betag:tag>
      <betag:tag>introduction</betag:tag>
      <betag:tag>volume</betag:tag>
      <betag:tag>sql</betag:tag>
      <betag:tag>storage</betag:tag>
      <betag:tag>persistence</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=c0c2d6c9-0afe-48b0-a58d-80945d766483</pingback:target>
      <slash:comments>4</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=c0c2d6c9-0afe-48b0-a58d-80945d766483</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/My-intro-to-Docker-Part-4-of-something#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=c0c2d6c9-0afe-48b0-a58d-80945d766483</wfw:commentRss>
    </item>
    <item>
      <title>My intro to Docker - Part 3 of something</title>
<description>&lt;p&gt;So far in this little blog series about Docker, I have covered what Docker is, how it works, how to get a container up and running using pre-built images, as well as your own images. But so far, it has all been about setting up a single container with some form of application running inside it. What if we have a more complicated scenario? What if we have a couple of different things we want to run together? Maybe we want to run our ASP.NET Core app that we built in the previous post behind an nginx instance instead of exposing the Kestrel server to the internet… Well, obviously Docker has us covered.&lt;/p&gt;&lt;p&gt;However, before we go any further, I just want to mention that I will only be covering something called docker-compose in this post. This can be used to create a stack of containers that are started and stopped together. I will not be covering distributing the application across several nodes this time. There will be more about that later. And even if that is probably the end goal in a lot of cases, being able to just run on a single host can be useful as well. Especially while developing stuff.&lt;/p&gt;&lt;h5&gt;What is docker-compose?&lt;/h5&gt;&lt;p&gt;When you installed Docker for Windows or Docker for Mac, you automatically got some extra tools installed. One of them is docker-compose, which is a tool for setting up several containers together in a stack, while configuring their network etc. Basically setting up and configuring a set of containers/apps that work together.&lt;/p&gt;&lt;p&gt;It does this by using a file called docker-compose.yml. At least it’s called that by default. You can pass in another name as a parameter to docker-compose if you want to… And as the file extension hints, it is a &lt;a href="https://en.wikipedia.org/wiki/YAML" target="_blank"&gt;YAML&lt;/a&gt; file. This means that it defines the services we want to have in our application stack using YAML syntax. 
And yes, we are now talking about services. A service is basically a container. It is a bit more complicated than that, but for now you can see it as a container.&lt;/p&gt;&lt;h5&gt;Configuring an application stack using docker-compose&lt;/h5&gt;&lt;p&gt;In the &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something" target="_blank"&gt;previous post&lt;/a&gt;, I created a simple ASP.NET Core application that we could run in a container. Then we created an image called &lt;em&gt;myimage&lt;/em&gt;, containing that application. What I want now is to protect that application behind an nginx reverse proxy server. This means that I need 2 containers. One running nginx, exposing a port that a client can browse to, and one running the actual application, which nginx proxies requests to. &lt;/p&gt;&lt;p&gt;But before I can set up my stack, I need to set up the nginx image I want to use. And I want to have that setup placed in a sibling directory to the application. So I’ll create the following folder structure&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;DockerDemo&lt;br&gt;&amp;nbsp; -&amp;nbsp; DockerApp&lt;br&gt;&amp;nbsp; -&amp;nbsp; nginx&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And inside the nginx folder I create 2 files. One called &lt;em&gt;nginx.conf&lt;/em&gt; and one called &lt;em&gt;dockerfile&lt;/em&gt;. Inside the nginx.conf I need to add the configuration that nginx should use. 
And in this case, it looks like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;worker_processes 1;&lt;br&gt;events { worker_connections 1024; }&lt;br&gt;http {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; sendfile on;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; upstream web {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; server web:80;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; server {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; listen 80;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; location / {&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_pass http://web;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_redirect off;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_set_header Host $host;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_set_header X-Real-IP $remote_addr;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; proxy_set_header X-Forwarded-Host $server_name;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;The only thing that is really interesting here is the “proxy_pass http://web”. This is the line that tells nginx to proxy all calls to a server called “web”. 
This will be a DNS name set up by docker-compose, representing the container running the ASP.NET Core app.&lt;/p&gt;&lt;p&gt;The dockerfile contains the following&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;FROM nginx:alpine&lt;br&gt;COPY nginx.conf /etc/nginx/nginx.conf&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;which tells Docker that it should use the nginx:alpine image as the base image, and then just add the nginx.conf file to the /etc/nginx/ folder.&lt;/p&gt;&lt;p&gt;That’s it for setting up the nginx image. The next step is to set up the docker-compose stuff to get our stack up and running. So I start out by creating a new file called &lt;em&gt;docker-compose.yml&lt;/em&gt; in the DockerDemo root folder. I need it to be a sibling to both the DockerApp and nginx folders. It’s in this file I’m going to define the services that make up my application stack. But before I can do that, I need to define what version this file uses… So I’ll add a version entry like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;version: "3"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Then I need to configure my services. 
So I’ll start out with the nginx service, which I set up by adding the following&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;services:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; nginx:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; build: &lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; context: ./nginx&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ports:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; - "8080:80"&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This tells docker-compose that I want a service called nginx, built using the dockerfile in the nginx directory, and mapping port 8080 on the host to port 80 on the container.&lt;/p&gt;&lt;p&gt;Finally, I add the web service as well, by adding&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;services:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; …&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; web:&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; image: myimage&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;As you can see here, I’m adding a service called web that should be based on the image called &lt;em&gt;myimage&lt;/em&gt;. And that’s actually everything I need. docker-compose will automatically set up a network for all the services defined, and make sure that they can communicate with each other using the service names. So when nginx proxies calls to &lt;em&gt;http://web&lt;/em&gt;, they end up at this service in this application stack.&lt;/p&gt;&lt;p&gt;With all my files in place I can go ahead and call&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-compose up -d&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;to start up the required services in detached mode, which means that it won’t attach the output from the containers to your terminal. 
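&lt;/p&gt;&lt;p&gt;For reference, the two service fragments above, plus the version entry, combine into a single docker-compose.yml. This is just the pieces from the post put together in one place (with the version quoted, since docker-compose expects it to be a string):

```yaml
version: "3"
services:
  nginx:
    build:
      context: ./nginx
    ports:
      - "8080:80"
  web:
    image: myimage
```

&lt;/p&gt;&lt;p&gt;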
If you want to see the output from the containers, which can be really helpful for debugging, just omit the -d flag.&lt;/p&gt;&lt;p&gt;Once it has built the nginx image and started up both containers, you can just open a browser and browse to &lt;em&gt;http://localhost:8080&lt;/em&gt; to see the result. It’s not very impressive as such, but remember that you just set up a reverse proxy server and a two-service application stack in just a couple of minutes. And adding a SQL database and/or redis cache is just as easy.&lt;/p&gt;&lt;p&gt;When you are done playing around with the application, you can just run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker-compose down&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;to stop all the services and tear down the containers. The only thing left is an nginx image called &lt;em&gt;dockerdemo_nginx&lt;/em&gt;, which is the one that was automatically created for us by docker-compose. If you want to get rid of it, just run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker rmi dockerdemo_nginx&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;That’s it! &lt;/p&gt;&lt;p&gt;There is obviously a whole heap more to docker-compose, but now you at least know the basics of how you can set up a stack of services that can all communicate with each other in an easy way.&lt;/p&gt;&lt;p&gt;In the &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-4-of-something"&gt;next post&lt;/a&gt; I’ll be looking at how we can set up a Microsoft SQL Server instance inside of a Docker container. Yes…you read it right…a Microsoft SQL Server running on Linux in a Docker container!</description>
      <link>https://chris.59north.com/post/My-intro-to-Docker-Part-3-of-something</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/My-intro-to-Docker-Part-3-of-something#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=be1686a3-4d4f-4ece-ab4b-89665030c1c4</guid>
      <pubDate>Fri, 25 Aug 2017 13:54:51 +0000</pubDate>
      <category>Docker</category>
      <betag:tag>intro</betag:tag>
      <betag:tag>tutorial</betag:tag>
      <betag:tag>introduction</betag:tag>
      <betag:tag>docker</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=be1686a3-4d4f-4ece-ab4b-89665030c1c4</pingback:target>
      <slash:comments>3</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=be1686a3-4d4f-4ece-ab4b-89665030c1c4</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/My-intro-to-Docker-Part-3-of-something#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=be1686a3-4d4f-4ece-ab4b-89665030c1c4</wfw:commentRss>
    </item>
    <item>
      <title>Going green…a.k.a. “Hello tretton37!”</title>
      <description>&lt;p&gt;As of this Monday, I’m officially working as a ninja at tretton37 in Stockholm. And for those of you who don’t speak Swedish, tretton37 means thirteen37, which is just an awesome name for an IT company. &lt;/p&gt;&lt;p&gt;So what is a ninja? Well, it is pretty much just an IT consultant as such, but the company doesn’t like the idea of that word. At least in Sweden, it has basically been reduced to be the same as a hired resource. The word consultant isn’t about consulting, giving advice and offering knowledge anymore. So to mitigate that, the common name for a person that works at tretton37 is “ninja”. The focus is to give our clients more than just a resource that can code. It’s about more than that. It’s about giving tips and ideas, and go beyond just building something, and instead take a bigger responsibility for the solution. Making sure that we do our best to give the client what they need and not just what they ask for. It’s about listening to what they want to accomplish, and not what they want us to build.&lt;/p&gt;&lt;p&gt;This view of what we should be doing correlates well with my own view on what we should be doing. So I’m very excited to be here, and hopefully there are some cool projects for me here in the future. With some happy clients at the end…&lt;/p&gt;&lt;p&gt;While I wait for that though…I’ll take this chance to catch up on some blogging and things, with the goal being that my blog will once again be a living thing that actually provides value.&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/Going-green…</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/Going-green…#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=cdbf7338-b9af-497f-926c-5fc1369a0eb4</guid>
      <pubDate>Fri, 25 Aug 2017 12:30:28 +0000</pubDate>
      <category>Personal</category>
      <betag:tag>tretton37</betag:tag>
      <betag:tag>job</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=cdbf7338-b9af-497f-926c-5fc1369a0eb4</pingback:target>
      <slash:comments>0</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=cdbf7338-b9af-497f-926c-5fc1369a0eb4</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/Going-green…#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=cdbf7338-b9af-497f-926c-5fc1369a0eb4</wfw:commentRss>
    </item>
    <item>
      <title>My intro to Docker - Part 2 of something</title>
      <description>&lt;p&gt;In the &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-1-of-something" target="_blank"&gt;previous post&lt;/a&gt;, I talked a bit about what Docker is, how it works, and so on. And I even got to the point of showing how you can create and start containers using existing images from Docker Hub. However, just downloading images and running containers like that, is not very useful. Sure, as a Microsoft dev, it's kind of cool to start up a Linux container and try out some leet Linux commands in bash. But other than that it is a little limiting. So I thought I would have a look at the next steps involved in making this and actually useful thing…&lt;h5&gt;Creating something to host in a container&lt;/h5&gt;&lt;p&gt;The first step in using Docker is to have something that should run inside of our containers. And since I am a .NET developer, and .NET Core has Linux support, I thought I would write a small ASP.NET Core application to run in my container.&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before you can run this, you need to install the &lt;a href="https://www.microsoft.com/net/core#windowscmd" target="_blank"&gt;.NET Core SDK&lt;/a&gt;…&lt;p&gt;Step one is to create an ASP.NET Core application. So I'll open up a command line window and navigate somewhere where I want to store my files. And then I'll run&lt;blockquote&gt;&lt;p&gt;mkdir DockerApp&lt;br&gt;cd DockerApp&lt;br&gt;dotnet new web&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will create a new ASP.NET Core application for us. The next step is to set it up to do something. And to do that, you can use whatever editor or IDE you want, but I'll use &lt;a href="https://code.visualstudio.com/"&gt;VS Code&lt;/a&gt;, so I'll just go and type&lt;blockquote&gt;&lt;p&gt;code .&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And since I just want to create the simplest possible ASP.NET MVC app, I'll go ahead and add MVC in the Startup&lt;blockquote&gt;&lt;p&gt;
public void ConfigureServices(IServiceCollection services)
&lt;br&gt;{
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; services.AddMvc();
&lt;br&gt;}
&lt;br&gt;public void Configure(IApplicationBuilder app, IHostingEnvironment env)
&lt;br&gt;{
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; if (env.IsDevelopment())
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; {
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; app.UseDeveloperExceptionPage();
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; app.UseMvc();
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; app.Run(async (context) =&amp;gt;
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; {
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; await context.Response.WriteAsync("Hello World!");
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; });
&lt;br&gt;}
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Then I'll add a new folder called Controllers, and a class inside of it called HomeController that looks like this&lt;blockquote&gt;&lt;p&gt;
using Microsoft.AspNetCore.Mvc;
&lt;br&gt;
&lt;br&gt;namespace DockerApp.Controllers {
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; public class HomeController : Controller {
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; [Route("/")]
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; public IActionResult Index() {
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; return View();
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; }
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; }
&lt;br&gt;}&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And then I'll add the actual view by adding a folder called Views, and inside of that a folder called Home, and inside that a Razor view called Index.cshtml that looks like this&lt;blockquote&gt;&lt;p&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br&gt;&amp;lt;html&amp;gt;
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;head&amp;gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;title&amp;gt;DockerApp&amp;lt;/title&amp;gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;/head&amp;gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;body&amp;gt;
&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;h1&amp;gt;Hello from Docker&amp;lt;/h1&amp;gt;&lt;br&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;/body&amp;gt;&lt;br&gt;&amp;lt;/html&amp;gt;
&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;All of that is stock standard ASP.NET stuff, so hopefully you already knew how to do this. But I thought I would cover it anyway…&lt;p&gt;To run this application from the command line, all you have to do is go to the DockerApp folder and type&lt;blockquote&gt;&lt;p&gt;dotnet run&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will automatically compile your application and start an HTTP server that hosts your application on port 5000.&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Port 5000 is the default. This can be changed using an environment variable called ASPNETCORE_URLS.&lt;h5&gt;Creating your own images&lt;/h5&gt;&lt;p&gt;Now that we have an app that we want to run as a container, we want to package it up in an image that we can then use to start a container from. There are 2 ways to do this.&lt;p&gt;Option 1 is to do it interactively. This means that we start up a container based on the image we want to use as our base, passing in -it, to make it interactive. Then we configure and set everything up that we need inside that container. And finally, we call docker commit [container name] to create an image based on that container.&lt;p&gt;Option 2 is to create a dockerfile, and use that to build an image. And since option 1 isn't recommended for several reasons, I'll focus on this instead.&lt;p&gt;So I'll go ahead and create a file called dockerfile. And being a Windows person, I normally just create it using Explorer, but if you want to do it in the command line like a "real developer", you can run the following command&lt;blockquote&gt;&lt;p&gt;touch dockerfile&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;This requires that touch is in your path. The easiest way is to have Git installed, and add C:\Program Files\Git\usr\bin to your path… Or just use Explorer…&lt;p&gt;Ok, so what are we doing with this dockerfile? Well, this file defines how to set up the image that you want. 
It starts out by saying what image it should use as a base, and includes all the commands that need to be run, and files that need to be copied and so on, to create the state that you want in your image.&lt;p&gt;So the first thing we need to do is to define what image we want to use as our base, by using the FROM keyword. And since I am doing an ASP.NET Core application, that is run using "dotnet run", I need an image that has the .NET Core SDK on it. Luckily, Microsoft has one of those for us called microsoft/aspnetcore-build. So I'll put this&lt;blockquote&gt;&lt;p&gt;FROM microsoft/aspnetcore-build&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;at the top of my file…&lt;p&gt;Then I need to tell it that I want to include everything inside the folder I am currently in. And I'll do that using the COPY keyword.&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; There is also an ADD keyword which does kind of the same thing, but it will also expand tar files and a few other things. COPY will just copy…&lt;p&gt;The COPY command takes 2 arguments: the source, and the target inside the container. And remember, the image is Linux-based, so it doesn't have a drive letter and so on. Instead it just has "root folders". So I'll add&lt;blockquote&gt;&lt;p&gt;COPY . /app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;To copy ".", which means the current directory, to "/app" in the image.&lt;p&gt;Then I want to set this directory as the working directory, meaning that all commands executed will have this as the active directory. And this is done using the WORKDIR keyword like this&lt;blockquote&gt;&lt;p&gt;WORKDIR /app&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;And then I want to specify that my container will expose a port, using the EXPOSE keyword like this&lt;blockquote&gt;&lt;p&gt;EXPOSE 80&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Wait…what!? The ASP.NET application used port 5000…didn't it? Well, it _did_. 
Inside the microsoft/aspnetcore-build image, the environment variable called ASPNETCORE_URLS is set to &lt;em&gt;http://+:80&lt;/em&gt;, which tells ASP.NET Core to use that port instead of the default 5000. So that is what we want to expose… &lt;p&gt;The other option is to change the ASPNETCORE_URLS environment variable using the ENV keyword, and then expose whatever port you want. Like this&lt;blockquote&gt;&lt;p&gt;ENV ASPNETCORE_URLS=http://+:8080&lt;br&gt;EXPOSE 8080&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;But I'll stick to using port 80.&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The fact that your container exposes port 80 doesn't mean that it will conflict with anything on port 80 on the host. Exposing a port just exposes it from the container, on the container's network interface. To have it exposed on the host requires us to specifically order Docker to do so, as you will see later on.&lt;p&gt;And finally, I add an entrypoint. Basically the command to run when starting a container based on this image. And I do that by adding&lt;blockquote&gt;&lt;p&gt;ENTRYPOINT ["dotnet", "run"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;There are 2 different syntaxes for this. One is this JSON array based syntax called exec form, which is preferred. The other one is called shell form, and is written just like you would write it at the command prompt… Like this&lt;p&gt;ENTRYPOINT dotnet run&lt;h5&gt;Sidenote - ENTRYPOINT and CMD&lt;/h5&gt;&lt;p&gt;Oh shiny! I'm just going to go into some detail about this, because there are some finer details that might be interesting to note…&lt;p&gt;The JSON array syntax has the executable as the first entry in the array, and then passes any parameters as individual entries in the array. The downside to this is that it doesn't go through a shell, so it won't do environment variable substitution for example. 
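&lt;p&gt;You can reproduce this substitution difference outside of Docker with plain sh, which is what the shell form wraps your command in. A small sketch (GREETING is just an illustrative variable, standing in for something like ASPNETCORE_URLS):&lt;/p&gt;

```shell
# Shell form runs through /bin/sh -c, so variables are expanded.
# Exec form runs the executable directly, so they are not.
# GREETING is just an illustrative variable for this demo.
export GREETING="hello"

# Shell-form equivalent: sh expands $GREETING
sh -c 'echo "$GREETING"'      # prints: hello

# Exec-form equivalent: echo receives the literal string
/bin/echo '$GREETING'         # prints: $GREETING
```

&lt;p&gt;The same thing happens inside a container, just with the ENTRYPOINT instead of a command you type yourself.&lt;/p&gt;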
So running something like&lt;blockquote&gt;&lt;p&gt;ENTRYPOINT ["echo", "$ASPNETCORE_URLS"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;won't echo out the value of the ASPNETCORE_URLS environment variable.&lt;p&gt;The second "but" is that when you run this container, and pass in a parameter at the end of the "docker run" command, it will actually not change the executable that is being run. Instead it adds the parameter as a parameter to the executable defined in the ENTRYPOINT.&lt;p&gt;On the other hand, running in shell form, you will run your command under /bin/sh -c. So it will do environment variable substitution etc. But…it won't handle any passed in parameters from the "docker run" command… &lt;p&gt;There are more fine points in the choice between exec form and shell form. But you will have to read up on them in the &lt;a href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;Docker docs&lt;/a&gt;.&lt;p&gt;It's also worth mentioning a keyword called CMD now as well. I won't use it, but I still want to cover it, because it can be somewhat confusing…&lt;p&gt;CMD also has both exec form and shell form, and works pretty much just like ENTRYPOINT. BUT…it only provides defaults. Anything set using a CMD keyword will be the default parameters, but overridden if parameters are passed in when using the "docker run" command.&lt;p&gt;CMD can be used in 2 ways (3 if you count exec and shell form as 2 different ways). It can be used instead of the ENTRYPOINT, telling the container what should happen when it starts, but still leaving it open to be changed when starting the container. For example&lt;blockquote&gt;&lt;p&gt;CMD ["dotnet", "run"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;or &lt;blockquote&gt;&lt;p&gt;CMD dotnet run&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will cause "dotnet run" to be executed at the start of the container, either on its own, or through "/bin/sh -c" depending on form. 
But it allows me to pass in a new startup executable when starting the container. &lt;p&gt;&lt;strong&gt;Remember:&lt;/strong&gt; This requires you to NOT have an ENTRYPOINT in your file.&lt;p&gt;If you have an ENTRYPOINT as well, the CMD will end up just being parameters passed to the entrypoint executable by default. And any passed in parameters from "docker run" will replace the defaults and be passed to the entrypoint instead.&lt;p&gt;This gives us a bit of flexibility in how our image can be used when starting the container.&lt;p&gt;I don't need any CMD in this case, but I found it reasonable to cover it here as well…&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The last little thing to mention is that it IS possible to change the entrypoint when running "docker run" by passing in the --entrypoint flag.&lt;h5&gt;Creating your own image - back on track&lt;/h5&gt;&lt;p&gt;Ok…after that side note on ENTRYPOINT and CMD and stuff, it is time to move on…&lt;p&gt;Now that we have a dockerfile that looks like this&lt;blockquote&gt;&lt;p&gt;FROM microsoft/aspnetcore-build
&lt;br&gt;COPY . /app
&lt;br&gt;WORKDIR /app&lt;br&gt;
EXPOSE 80
&lt;br&gt;ENTRYPOINT ["dotnet", "run"]&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;So we can now go and build an image. To do this, you go back to the terminal and run&lt;blockquote&gt;&lt;p&gt;docker build -t myimage .&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;&lt;strong&gt;Hint:&lt;/strong&gt; If you get a weird exception about unexpected characters, make sure that your file is in UTF-8. If you create your dockerfile using the command line, or PowerShell, the file will be in UTF-16 and will need re-saving in UTF-8…&lt;p&gt;This will tell Docker to build an image based on your file, and name it myimage. So, after running that command, if you run&lt;blockquote&gt;&lt;p&gt;docker images&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;you will see an image called myimage in the list.&lt;p&gt;&lt;strong&gt;Extra:&lt;/strong&gt; What this will really do, is that it will automatically spin up the base image, and then execute the first command. It will then stop that container and save that storage layer as its own layer. It then starts a new container based on that layer, and executes the next command, and then stops and saves. It then keeps doing that until all commands have executed. At least that is the way I understand it. But the technical details aren't that important to be honest. Just knowing that in the end, you will have an image that looks just like you want every time you build it using that dockerfile is the important part. But…if the build fails, it will leave the container at the last layer, making it possible for you to start it up and attach to it, and figure out what went wrong, which can be really helpful…&lt;p&gt;And with our new image in place, we can go ahead and run&lt;blockquote&gt;&lt;p&gt;docker run --rm -d -p 8080:80 --name web myimage&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will start a container based on the newly created image, mapping the hosts port 8080 to the containers port 80, naming it to web, and making sure the container is removed when it is stopped. 
So opening up a browser and browsing to http://localhost:8080 allows you to browse the application running inside the container.&lt;p&gt;And to stop it when you are done, you call&lt;blockquote&gt;&lt;p&gt;docker stop web&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;which will also remove the container, since we added --rm.&lt;p&gt;That’s it for this time. The &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-3-of-something"&gt;next post&lt;/a&gt; covers how you can set up multiple containers and have them work together to create more complex applications.&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=a31acf74-c089-46c3-9ea3-00bb5fb36552</guid>
      <pubDate>Fri, 25 Aug 2017 09:02:24 +0000</pubDate>
      <category>Docker</category>
      <betag:tag>tutorial</betag:tag>
      <betag:tag>introduction</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=a31acf74-c089-46c3-9ea3-00bb5fb36552</pingback:target>
      <slash:comments>4</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=a31acf74-c089-46c3-9ea3-00bb5fb36552</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=a31acf74-c089-46c3-9ea3-00bb5fb36552</wfw:commentRss>
    </item>
    <item>
      <title>My intro to Docker - Part 1 of something</title>
      <description>&lt;p&gt;&lt;sub&gt;&lt;/sub&gt;It’s been quite a while since I blogged, and there are several reasons for this. First of all, I haven’t really had the time, but I also haven’t really found a topic that I feel passionate enough about to blog about. But having played around with Docker, I now have! So I thought I would jot down some stuff about Docker… If nothing else, it gives me a away to come back to what I used to know when I have forgotten all of it again…&lt;/p&gt;&lt;h5&gt;What is Docker?&lt;/h5&gt;&lt;p&gt;The first thing to cover is what Docker really is. I have seen a lot of explanations, both of what it is, and why it is so good. But I have had a hard time grasping it in the way that it has been explained to me. So here is my explanation. And by my explanation, I mean the way I think about it. It might not be 100% correct from an implementation point of view, but it is the way I see it…&lt;/p&gt;&lt;p&gt;Many people ask what the difference is between Virtual Machines and Docker containers, so I thought I would take that viewpoint.&lt;/p&gt;&lt;p&gt;When we run VMs, vi basically emulate all the hardware of the machine, including a virtual hard drive, and then boot an operating system from that virtual HDD inside of our existing operating system. So we are booting up a full machine, as we normally would, but all the hardware that it runs on is just virtual stuff from the host. This way, we can run many machines on one physical machine, but each virtual machine is a full machine with an OS and so on. Docker is different…&lt;/p&gt;&lt;p&gt;When we run Docker, we have our existing operating system as the base, and then create a area on top of that where our Docker container runs. It still uses the same operating system, but it’s isolated from everything else on the host, so that it can’t interact with the other things installed on the machine. 
That way, we don’t have to boot up a whole operating system, because that is already booted up with the host. Instead, all we need to do is set up a context that is isolated from the rest of the machine, and have our application run in there. Inside that context, we can then set up the environment that is needed for our application to run. But the main thing is that it doesn’t run its own OS, it just runs in an isolated context on top of the host.&lt;/p&gt;&lt;p&gt;Inside the container, it is a lot like differential disks that are used when running virtual machines. A diff disk contains all the changes from the base disk to the current state. So you start out with a base disk that has all the “base” information. The base disk could for example be a disk that contains a clean Windows server install. Then you add a disk on top of that, and install IIS. That second disk only contains the added/changed bytes that were required to install IIS. And then on top of that you might add another diff disk that contains your application. And that disk only contains the bytes needed for that… Slowly building up to what you need. However, in the end, the disks are kind of mashed together, and booted up like a full server. But you can for example use the same base disk for multiple other diff disks. So the Windows Server base disk could be used for all VMs running on that Windows version.&lt;/p&gt;&lt;p&gt;In Docker, your initial base disk is actually your own operating system, or the host. In most cases that would be your Linux machine, but with Windows Server 2016 you can use Windows containers as well. And then your images are basically diff disks on top of your empty host operating system. But they aren’t based on the actual host disk, with all the installed applications and so on. Instead, they’re based on a clean/empty version of that OS disk. Basically the raw OS part of your host.&lt;/p&gt;&lt;p&gt;That means that we don’t have to boot up the server to get everything running. 
Instead, we can use the host’s OS, and then start a new isolated context with the “diff disks” added on top of the empty OS. This makes it MUCH faster to start a container, compared to starting a physical or virtual machine.&lt;/p&gt;&lt;p&gt;This is also why you can’t run Windows containers on a Linux host and vice versa. At least at the moment. In the near future, it seems like you will be able to run Linux containers on Windows. But that is done through some magic and virtualization.&lt;/p&gt;&lt;p&gt;In Docker, we don’t talk about differential disks though. We talk about images. But an image is basically like a diff disk. It contains all the changes you want to do on top of the host OS for your application. And just as in my previous example, you could create an image that contained IIS. And then add another image based on that image, that contains your application.&lt;/p&gt;&lt;p&gt;Kind of like this&lt;/p&gt;&lt;p&gt;&lt;a href="https://chris.59north.com/image.axd?picture=2017/8/image.png"&gt;&lt;img width="550" height="387" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://chris.59north.com/image.axd?picture=2017/8/image_thumb.png" border="0"&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;My visualization skills might not be the best, but what I’m trying to show is that at the bottom, we have the physical or virtual machine that will be the host. On top of that, we have the host operating system. And in that OS, we might have files and apps that we have installed on the machine, but it also contains a container (green). The container has 2 images (orange) “added on top of” the host OS. In this case, one image that adds IIS, and on top of that, one that includes the actual application that I want to host. But the container is completely isolated from the installed apps and files on the host. It even has its own network. 
So the isolation is pretty complete, unless we go in and mess with it.&lt;/p&gt;&lt;p&gt;This is a very simple example, and also a potentially stupid one, as I chose to use IIS. Normally, you would probably use Apache or nginx for the example, as most Docker stuff is Linux based. But I thought that it might be a little easier to grasp using Microsoft-centric technologies. And I consider it simple since the example only runs one container on the host. In most cases you would run multiple containers on the same host, all isolated from each other. This enables much better utilization of resources on the host, and higher density of applications on each machine.&lt;/p&gt;&lt;h5&gt;Images and containers…?&lt;/h5&gt;&lt;p&gt;As I explained in the previous section, a Docker image is a lot like a diff disk. It contains the bytes that need to be added to get the environment you need inside your container. And the cool thing is that there are public repositories containing images for you to use. This makes it really easy to get started with new stuff. If you want to play around with Redis for example, you just pull down the Redis image to your machine, and start a container based on that image, and all of a sudden you have Redis running with a default configuration on your machine. No installation. No configuration. And no risk of messing up your machine.&lt;/p&gt;&lt;p&gt;So the image contains the bytes needed to get your environment as you need it to run your application. And it’s based on some other image, or the host OS. The image could be a Redis install with default configuration, or maybe Apache, or maybe just an image preloaded with the stuff you need to run .NET Core on Linux. Either way it is just a predefined setup of the environment you need.&lt;/p&gt;&lt;p&gt;The image is then used to start containers. So you don’t start or run the image as you would start a VM by using the disk. Instead, you start/run a container based on that image. 
So the image is just the blueprint for the environment that you want inside of your container, and it’s immutable. So you can start as many containers as you want based on the same image. Each container will use the image to set up the environment, and then start whatever processes are configured inside that environment, but the container will never change the image. Any writes inside the container go to another layer on top of the image…&lt;/p&gt;&lt;h5&gt;Installation&lt;/h5&gt;&lt;p&gt;The first step to getting started with Docker is to install it on your machine. And since you are on my blog, I’m assuming that you are running Windows, or maybe Mac. That means that you want to install &lt;a href="https://docs.docker.com/docker-for-windows/install/" target="_blank"&gt;Docker for Windows&lt;/a&gt; or &lt;a href="https://docs.docker.com/docker-for-mac/install/" target="_blank"&gt;Docker for Mac&lt;/a&gt;. These are both apps that run on your Windows or Mac machine, enabling Docker. On the Windows side, it uses Hyper-V to host a Linux VM that runs Docker for us, and on the Mac side, it does something similar… I don’t actually know, but I assume it uses some form of Mac based virtualization to host a Linux machine with Docker.&lt;/p&gt;&lt;p&gt;Once you have either of these installed, and started, you can start using Docker. I use Windows, so I will be using PowerShell to run my Docker commands, but it should be pretty much identical if you do it in the Terminal on a Mac…&lt;/p&gt;&lt;p&gt;The first thing you can try, just to verify that everything is working, is to run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker version&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will tell you what version of Docker you are running, both on the server and on the client, as well as some other stuff. 
&lt;/p&gt;&lt;p&gt;If you don’t get a print out telling you that, something is wrong…&lt;/p&gt;&lt;h5&gt;Getting images and setting up a container&lt;/h5&gt;&lt;p&gt;Next, you can try and run&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker images&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will show you what images you have installed on your machine. I assume that you get very little back since you just installed it. And by very little, I mean nothing. But to be honest, I’m not 100% sure of what you get by default when you install it. But either way, let’s try and pull down a small and simple base image that we can try to run.&lt;/p&gt;&lt;p&gt;Images are stored in repositories, either public, private or local. When you ran &lt;em&gt;docker images&lt;/em&gt;, you asked Docker to list all images in the local repository. But there is also a huge public Docker repo collection called &lt;a href="https://hub.docker.com/" target="_blank"&gt;Docker Hub&lt;/a&gt;. This is a place where people upload useful images for the public to use. And the image I want is a tiny one called &lt;a href="https://hub.docker.com/_/alpine/" target="_blank"&gt;alpine&lt;/a&gt;, which is a 5MB Alpine Linux image. &lt;/p&gt;&lt;p&gt;There are two ways to pull an image from a Docker Hub repo. Either you just request Docker to set up a container based on the image you want, and Docker automatically pulls it down for you if you don’t have it, like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker run -it alpine &lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;or, you can manually pull it down to your machine first, and then run it like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker pull alpine&lt;br&gt;docker run -it alpine&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Either way, it will pull down the alpine docker image to the local host, set up a new container based on that image, and attach the input and output from that container’s terminal to your PowerShell window. 
So after running the &lt;em&gt;docker run&lt;/em&gt; command, you can go ahead and run whatever Linux shell command you want inside that container… And when you are done, you just type &lt;em&gt;exit&lt;/em&gt; to exit the process, which causes the container to stop.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When pulling down the image, Docker mentions that it is using the default tag “latest”.&amp;nbsp; “Latest” in this case is the “version” of the image. All images in a repo have a tag, or version. By default, the last image uploaded to a repo gets the “latest” tag. This way, you can ask for a specific version of an image in a repo, or just leave the tag out and get the latest by default. And since I don’t care about what version of the alpine image I get, I just leave out the tag for now to get the latest.&lt;/p&gt;&lt;p&gt;So, what am I really doing here? Well…the &lt;em&gt;docker pull alpine&lt;/em&gt; is pretty self-explanatory I think. But the &lt;em&gt;run&lt;/em&gt; one is a bit more complicated, as I have added some options to that command.&lt;/p&gt;&lt;p&gt;&lt;em&gt;docker run&lt;/em&gt; tells the Docker client to use the &lt;em&gt;run&lt;/em&gt; command, which sets up, and starts, a new container. The simplest version of that command is just&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker run &amp;lt;IMAGE NAME&amp;gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will set up a new container using the defined image, and start it. However, if you do that with the alpine image, it will do nothing. It will start the container, but nothing is happening inside it, so Docker will just stop it straight away.&lt;/p&gt;&lt;p&gt;This isn’t very useful for this image. But you can also give it a command to run when it starts, which can be useful.
Like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker run alpine ls&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will set up a new container based on alpine, start the container, run the &lt;em&gt;ls&lt;/em&gt; command, output the returned result, and then consider the command completed and stop the container again.&lt;/p&gt;&lt;p&gt;So what is the -&lt;em&gt;it&lt;/em&gt; option? Well, first of all, it is a concatenated version of -&lt;em&gt;i -t&lt;/em&gt;, and it means that you want to attach to both the input and output of the running container, allowing you to execute commands inside the container. So when the container starts, the PowerShell prompt turns into a remote prompt for the Linux container you are running.&lt;/p&gt;&lt;h5&gt;Listing and removing containers&lt;/h5&gt;&lt;p&gt;Starting up and playing around with containers is fun, but when you run a container like this, once it stops, it isn’t actually deleted.&lt;/p&gt;&lt;p&gt;If you want to see all the containers on your machine, you use the &lt;em&gt;ps&lt;/em&gt; command like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker ps&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This will list all the running containers on your machine.&lt;/p&gt;&lt;p&gt;In this case, it will probably be empty, because all of your containers have been stopped. But if you tell it to list ALL the containers, like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker ps -a&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;you probably get a list of containers that are stopped.&lt;/p&gt;&lt;p&gt;The list includes the container’s id, the image it is based on, the command that is run when it starts, when it was created, its current status, ports, and the name of the container.
A good set of information about the container…&lt;/p&gt;&lt;p&gt;If you want to clean up the list, and free some space on your machine, you can remove a container by running the &lt;em&gt;rm&lt;/em&gt; command.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker rm &amp;lt;CONTAINER ID/NAME&amp;gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;All containers get a unique id, as well as a name. The name is an auto-generated two-part name if you don’t manually set one, which is a little easier to work with than the auto-generated id. You can use either when you remove a container though. And if you use the id, you only need to provide enough of its leading characters for it to be unique. You don’t need to type all of it…&lt;/p&gt;&lt;p&gt;Every so often you end up starting a container and playing around with it for a very short time, just to try something out. Having to manually remove it afterwards can be a bit tedious. So you can actually automate the removal if you want, by adding the option &lt;em&gt;--rm &lt;/em&gt;to the &lt;em&gt;run&lt;/em&gt; command. Like this&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;docker run -it --rm alpine &lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;This way, you tell Docker that you want it to remove the container as soon as it stops, so you don’t end up with a heap of useless containers on your machine.&lt;/p&gt;&lt;p&gt;This is a nice convenience thing, but beware, once in a while you might wish that your test container hadn’t been removed…&lt;/p&gt;&lt;p&gt;That was it for this post. In the &lt;a href="https://chris.59north.com/post/My-intro-to-Docker-Part-2-of-something"&gt;next&lt;/a&gt; post I’ll look at creating your own images.&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/My-intro-to-Docker-Part-1-of-something</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/My-intro-to-Docker-Part-1-of-something#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=20bc5ce7-6b1c-4e67-88be-920e4dcf8878</guid>
      <pubDate>Tue, 22 Aug 2017 13:49:39 +0000</pubDate>
      <category>Docker</category>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=20bc5ce7-6b1c-4e67-88be-920e4dcf8878</pingback:target>
      <slash:comments>3</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=20bc5ce7-6b1c-4e67-88be-920e4dcf8878</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/My-intro-to-Docker-Part-1-of-something#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=20bc5ce7-6b1c-4e67-88be-920e4dcf8878</wfw:commentRss>
    </item>
    <item>
      <title>Setting Up Continuous Deployment of an ASP.NET App with Gulp from VSTS to an Azure Web App using Scripted Build Definitions</title>
      <description>&lt;p&gt;A few weeks ago, I wrote a couple of blog posts on how to set up continuous deployment to Azure Web Apps, and how to get Gulp to run as a part of it. I covered &lt;a href="http://bit.ly/21OcTgt" target="_blank"&gt;how to do it from GitHub&lt;/a&gt; using Kudu, and how to do it &lt;a href="http://bit.ly/1I6dXps" target="_blank"&gt;from VSTS using XAML-based build definitions&lt;/a&gt;. However, I never got around to do a post about how to do it using the new scripted build definitions in VSTS. So that is why this post is going to be about!&lt;/p&gt; &lt;h5&gt;The Application&lt;/h5&gt; &lt;p&gt;The application I’ll be working with, is the same on that I have been using in the previous posts. So if you haven’t read them, you might want to go and have a look at them. Or, at least the first part of the &lt;a href="http://bit.ly/21OcTgt" target="_blank"&gt;first post&lt;/a&gt;, which includes the description of the application in use. Without that knowledge, this post might be a bit hard to follow… &lt;p&gt;If you don’t feel like reading more than you need to, the basics are these. It’s an ASP.NET web application that uses TypeScript and LESS, and Gulp for generating transpiled, bundled and minified files versions of these resources. The files are read from the &lt;strong&gt;&lt;em&gt;Styles&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;Scripts&lt;/em&gt;&lt;/strong&gt; directories, and built to a &lt;strong&gt;&lt;em&gt;dist &lt;/em&gt;&lt;/strong&gt;directory using the default” task in Gulp. The source code for the whole project, is placed in a &lt;strong&gt;&lt;em&gt;Src&lt;/em&gt;&lt;/strong&gt; directory in the root of the repo…and the application is called DeploymentDemo.  &lt;p&gt;I think that should be enough to figure out most of the workings of the application…if not, read the first post! &lt;h5&gt;Setting up a new build&lt;/h5&gt; &lt;p&gt;Ok, so the first step is to set up a new build in our VSTS environment. 
And to do this, all you need to do, is to log into visualstudio.com, go to your project and click the “Build” tab&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb.png" width="587" height="303"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Next, click the fat, green plus sign, which gives you a modal window where you can select a template for the build definition you are about to create. However, as I’m not just going to build my application, but also deploy it, I will click on the “Deployment” tab. And since I am going to deploy an Azure Web App, I select the “Azure Website” template and click next.&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_1.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_1.png" width="612" height="604"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Yes, Microsoft probably should rename this template, but that doesn’t really matter. It will still do the right thing.&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; If you go to the Azure portal and set up CD from there, you will actually get a XAML-based build definition, and not a scripted one. 
So you have to do it from here.&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; VSTS has a preview feature right now, where you split up the build and deployment steps into multiple steps. However, even if this is a good idea, I am just going to keep it simple and do it as a one-step procedure.&lt;/p&gt; &lt;p&gt;On the next screen, you get to select where the source code should come from. In this case, I’ll choose Git, as my solution is stored in a Git-based VSTS project. And after that, I just make sure that the right repo and branch are selected. &lt;/p&gt; &lt;p&gt;Finally, I make sure to check the “Continuous integration…”-checkbox, making sure that the build is run every time I push a change.&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_2.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_2.png" width="620" height="356"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;That’s it! Just click “Create” to create the build definition!&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;In this window you are also asked what agent queue to use by default. In this example, I’ll leave it on “Hosted”. This will give me a build agent hosted by Microsoft, which is nice. However, this solution can actually be a bit slow at times, and limited, as you only get a certain number of free build minutes. So if you run into any of these problems, you can always opt in to having your own build agent in a VM in Azure. This way you get dedicated resources to do builds. Just keep in mind that the build agent will incur an extra cost.
&lt;/p&gt; &lt;p&gt;Once that is done, you get a build definition that looks like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_3.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_3.png" width="518" height="448"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Or at least you did when I wrote this post…&lt;/p&gt; &lt;p&gt;As you can see, the steps included are:&lt;/p&gt; &lt;p&gt;1. Visual Studio Build – A step that builds a Visual Studio solution&lt;/p&gt; &lt;p&gt;2. Visual Studio Test – A step that runs tests in the solution and makes sure that failing tests fail the build&lt;/p&gt; &lt;p&gt;3. Azure Web App Deployment – A step that publishes the built web application to a Web App in Azure&lt;/p&gt; &lt;p&gt;4. Index Sources &amp;amp; Publish Symbols – A step that creates and publishes pdb-files&lt;/p&gt; &lt;p&gt;5. Copy and Publish Artifacts – A step that copies build artifacts generated by the previous steps to a specified location&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;Where is the step that downloads the source from the Git repo? Well, that is actually not its own step. It is part of the definition, and can be found under the “Repository” tab at the top of the screen.&lt;/p&gt; &lt;p&gt;In this case however, I just want to build and deploy my app. I don’t plan on running any tests or generating pdbs etc, so I’m just going to remove some of the steps… To be honest, the only steps I want to keep are steps 1 and 3.
So it looks like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_4.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_4.png" width="526" height="255"&gt;&lt;/a&gt;&lt;/p&gt; &lt;h5&gt;Configuring the build step&lt;/h5&gt; &lt;p&gt;Ok, now that I have the steps I need, I guess it is time to configure them. There is obviously something wrong with the “Azure Web App Deployment” step considering that it is red and bold…&amp;nbsp; But before I do anything about that, I need to make a change to the “Visual Studio Build” step.&lt;/p&gt; &lt;p&gt;As there will be some npm stuff being run, which generates that awesome, and very deep, folder structure inside of the “node_modules” folder, the “Visual Studio Build” step will unfortunately fail in its current configuration. It defines the solution to build as &lt;strong&gt;&lt;em&gt;**/*.sln&lt;/em&gt;&lt;/strong&gt;, which means “any file with an .sln-extension, in any folder”. This causes the build step to walk through &lt;em&gt;all&lt;/em&gt; the folders, including the “node_modules” folder, searching for solution files. And since the folder structure is too deep, it seems to fail if left like this. So it needs to be changed to point to the specific solution file to use. In this case, that means setting the &lt;strong&gt;&lt;em&gt;Solution&lt;/em&gt;&lt;/strong&gt; setting to &lt;strong&gt;&lt;em&gt;Src/DeploymentDemo.sln&lt;/em&gt;&lt;/strong&gt;.
Like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_5.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_5.png" width="756" height="219"&gt;&lt;/a&gt;&lt;/p&gt; &lt;h5&gt;Configuring the deployment step&lt;/h5&gt; &lt;p&gt;Ok, so now that the build step is set up, we need to have a look at the deployment part. Unfortunately, this is a bit more complicated than it might seem, and to be honest, than it really needed to be. At first look, it doesn’t look too bad&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_6.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_6.png" width="756" height="398"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Ok, so all we need to do is to select the subscription to use, the Web App to deploy to and so on… That shouldn’t be too hard. Unfortunately all that becomes a bit more complicated when you open the “Azure Subscription” drop-down and realize that it is empty… &lt;/p&gt; &lt;p&gt;The first thing you need to do is to give VSTS access to your Azure account, which means adding a “Service Endpoint”. This is done by clicking the &lt;strong&gt;&lt;em&gt;Manage&lt;/em&gt;&lt;/strong&gt; link to the right of the drop-down, which opens a new tab where you can configure “Service Endpoints”. 
&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_7.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_7.png" width="384" height="222"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;The first thing to do is to click the fat, green plus sign and select &lt;strong&gt;Azure&lt;/strong&gt; in the drop-down. This opens a new modal like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_8.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_8.png" width="618" height="376"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;There are three different ways to add a new connection: &lt;em&gt;Credentials&lt;/em&gt;, &lt;em&gt;Certificate Based&lt;/em&gt; and &lt;em&gt;Service Principal Authentication&lt;/em&gt;. In this case, I’ll switch over to &lt;strong&gt;&lt;em&gt;Certificate Based&lt;/em&gt;&lt;/strong&gt;. &lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you want to use &lt;em&gt;Service Principal Authentication&lt;/em&gt; you can find more information &lt;a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/10/04/automating-azure-resource-group-deployment-using-a-service-principal-in-visual-studio-online-build-release-management.aspx" target="_blank"&gt;here&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;First, the connection needs a name. It can be whatever you want.
It is just a name.&lt;/p&gt; &lt;p&gt;Next, you need to provide a bunch of information about your subscription, which is available in the publish settings file for your subscription. The easiest way to get hold of this file, is to hover over the tooltip icon, and then click the link called &lt;em&gt;publish settings file&lt;/em&gt; included in the tool tip pop-up. &lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_9.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_9.png" width="619" height="354"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;This brings you to a page where you can select what directory you want to download the publish settings for. So just select the correct directory, click “Submit”, and save the generated file to somewhere on your machine. Once that is done, you can close down the new tab and return to the “Add New Azure Connection” modal.&lt;/p&gt; &lt;p&gt;To get hold of the information you need, just open the newly downloaded file in a text editor. It will look similar to this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_10.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_10.png" width="564" height="443"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;As you can see, there are a few bits of information in here. 
And it can be MUCH bigger than this if you have many subscriptions in the directory you have chosen. So remember to locate the correct subscription if you have more than one.&lt;/p&gt; &lt;p&gt;The interesting parts are the attribute called &lt;strong&gt;&lt;em&gt;Id&lt;/em&gt;&lt;/strong&gt;, which needs to be inserted in the &lt;strong&gt;&lt;em&gt;Subscription Id&lt;/em&gt;&lt;/strong&gt; field in the modal, the attribute called &lt;strong&gt;&lt;em&gt;Name&lt;/em&gt;&lt;/strong&gt;, which should be inserted in &lt;strong&gt;&lt;em&gt;Subscription Name&lt;/em&gt;&lt;/strong&gt;, and finally the attribute called &lt;strong&gt;&lt;em&gt;ManagementCertificate&lt;/em&gt;&lt;/strong&gt;, which goes in the &lt;strong&gt;&lt;em&gt;Management Certificate&lt;/em&gt;&lt;/strong&gt; textbox. Like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_11.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_11.png" width="613" height="350"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Once you click OK, the information will be verified, and if everything is ok, the page will reload, and you will have a new service endpoint to play with. Once that is done, you can close down the tab, and return to the build configuration setup. &lt;/p&gt; &lt;p&gt;The first thing you need to do here, is to click the “refresh” button to the right of the drop-down to get the new endpoint to show up. Next, you select the newly created endpoint in the drop-down. &lt;/p&gt; &lt;p&gt;After that, you would assume that the &lt;strong&gt;&lt;em&gt;Web App Name&lt;/em&gt;&lt;/strong&gt; drop-down would be populated with all the available web apps in your subscription.
Unfortunately, this is not the case for some reason. So instead, you have to manually insert the name of the Web App you want to deploy to. &lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;You have two options when selecting the name of the Web App. Either, you choose the name of a Web App that you have already provisioned through the Azure portal, or you choose a new name, and if that name is available, the deployment script will create a new Web App for you with that name on the fly.&lt;/p&gt; &lt;p&gt;Next, select the correct region to deploy to, as well as any specific slot you might be deploying to. If you are deploying to the default slot, just leave the “slot” textbox empty.&lt;/p&gt; &lt;p&gt;The &lt;em&gt;&lt;strong&gt;Web Deploy Package&lt;/strong&gt; &lt;/em&gt;box is already populated with the value &lt;em&gt;$(build.stagingDirectory)\**\*.zip&lt;/em&gt; which works fine for this. If you have more complicated builds, or your application contains other zips that will be output by the build, you might have to change this.&lt;/p&gt; &lt;p&gt;Once that is done, all you have to do is click the &lt;strong&gt;&lt;em&gt;Save&lt;/em&gt;&lt;/strong&gt; button in the top left corner, give the build definition a name, and you are done with the configuration.&lt;/p&gt; &lt;p&gt;Finally, click the &lt;strong&gt;&lt;em&gt;Queue build…&lt;/em&gt;&lt;/strong&gt; button to queue a new build, and in the resulting modal, just click OK. 
This will queue a new build, and give you a screen like this while you wait for an agent to become available&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_12.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_12.png" width="512" height="412"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Yes, I have had a failing build before I took this “screen shot”. Yours might look a little bit less red…&lt;/p&gt; &lt;p&gt;And as soon as there is an agent available for you, the screen will change into something like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_13.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_13.png" width="628" height="568"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;where you can follow along with what is happening in the build.
And finally, you should be seeing something like this&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_14.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_14.png" width="583" height="296"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;At least if everything goes according to plan&lt;/p&gt; &lt;h5&gt;Adding Gulp to the build&lt;/h5&gt; &lt;p&gt;So far, we have managed to configure a build and deployment of our solution. However, we are still not including the Gulp task that is responsible for generating the required client-side resources. So that needs to be sorted out.&lt;/p&gt; &lt;p&gt;The first thing we need to do is to run &lt;/p&gt; &lt;blockquote&gt; &lt;p&gt;npm install&lt;/p&gt;&lt;/blockquote&gt; &lt;p&gt;To do this, click the fat, green &lt;strong&gt;&lt;em&gt;Add build step…&lt;/em&gt;&lt;/strong&gt; button at the top of the configuration&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_15.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_15.png" width="388" height="111"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;and in the resulting modal, select &lt;strong&gt;&lt;em&gt;Package&lt;/em&gt;&lt;/strong&gt; in the left hand menu, and then add an &lt;strong&gt;&lt;em&gt;npm&lt;/em&gt;&lt;/strong&gt; build step&lt;/p&gt; &lt;p&gt;&lt;a 
href="http://chris.59north.com/image.axd?picture=2016/1/image_16.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_16.png" width="664" height="233"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Next, close the modal and drag the new build step to the top of the list of steps. &lt;/p&gt; &lt;p&gt;By default, the command to run is set to &lt;strong&gt;&lt;em&gt;install&lt;/em&gt;&lt;/strong&gt;, which is what we need. However, we need it to run in a different directory than the root of the repository. So in the settings for the npm build step, expand the &lt;strong&gt;&lt;em&gt;Advanced&lt;/em&gt;&lt;/strong&gt; area, and update the &lt;strong&gt;&lt;em&gt;Working Directory&lt;/em&gt;&lt;/strong&gt; to say &lt;em&gt;Src/DeploymentDemo&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_17.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_17.png" width="578" height="290"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Ok, so now npm will install all the required npm packages for us before the application is built. 
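For reference, the client-side build steps being configured here mirror what you would run locally from the repo root; a sketch, assuming bower and gulp are listed as npm dev dependencies of DeploymentDemo (which the node_modules\.bin path used for bower in this post suggests):

```shell
# Local equivalent of the npm, bower and gulp build steps (sketch)
cd Src/DeploymentDemo
npm install                        # restore npm packages, including bower and gulp
./node_modules/.bin/bower install  # restore bower packages
./node_modules/.bin/gulp           # run the "default" gulp task, writing output to dist/
```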
&lt;/p&gt; &lt;p&gt;Next, we need to run&lt;/p&gt; &lt;blockquote&gt; &lt;p&gt;bower install&lt;/p&gt;&lt;/blockquote&gt; &lt;p&gt;To do this, add a new build step of the type &lt;strong&gt;&lt;em&gt;Command Line&lt;/em&gt;&lt;/strong&gt; from the &lt;strong&gt;&lt;em&gt;Utility&lt;/em&gt;&lt;/strong&gt; section, and drag it so that it is right after the npm step. The configuration we need for this to work is the following&lt;/p&gt; &lt;p&gt;Tool should be &lt;em&gt;$(Build.SourcesDirectory)\Src\DeploymentDemo\node_modules\.bin\bower.cmd&lt;/em&gt;, the arguments should be &lt;em&gt;install&lt;/em&gt;, and the working folder should be &lt;em&gt;Src/DeploymentDemo&lt;/em&gt;&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_18.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_18.png" width="630" height="252"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;This will execute the bower command file, which is the same as running &lt;strong&gt;&lt;em&gt;Bower&lt;/em&gt;&lt;/strong&gt; in the command line, passing in the argument &lt;strong&gt;&lt;em&gt;install&lt;/em&gt;&lt;/strong&gt;, which will install the required bower packages. And setting the working directory will make sure it finds the bower.json file and installs the packages in the correct folder.&lt;/p&gt; &lt;p&gt;Now that the Bower components have been installed, or at least been configured to be installed, we can run Gulp. To do this, just add a new &lt;strong&gt;&lt;em&gt;Gulp&lt;/em&gt;&lt;/strong&gt; build step, which can be found under the &lt;strong&gt;&lt;em&gt;Build&lt;/em&gt;&lt;/strong&gt; section. 
And then make sure that you put it right after the Command Line step.&lt;/p&gt; &lt;p&gt;As our &lt;em&gt;gulpfile.js&lt;/em&gt; isn’t in the root of the repo, the &lt;strong&gt;&lt;em&gt;Gulp File Path&lt;/em&gt;&lt;/strong&gt; needs to be changed to &lt;em&gt;Src/DeploymentDemo/gulpfile.js&lt;/em&gt;, and the working directory once again has to be set to &lt;em&gt;Src/DeploymentDemo&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_19.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_19.png" width="492" height="278"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;As I’m using the default task in this case, I don’t need to set the &lt;strong&gt;&lt;em&gt;Gulp Task(s)&lt;/em&gt;&lt;/strong&gt; to get it to run the right task.&lt;/p&gt; &lt;p&gt;Finally, I want to remove any leftover files from the build agent, as these can cause potential problems. They really shouldn’t, at least not if you are running the hosted agent, but I have run into some weird stuff when running on my own agent, so I try to always clean up after the build. To do this, I will run the batch file called &lt;strong&gt;&lt;em&gt;delete_folder.bat&lt;/em&gt;&lt;/strong&gt; in the &lt;strong&gt;&lt;em&gt;Tools&lt;/em&gt;&lt;/strong&gt; directory of my repo. This will use RoboCopy to safely remove deep folder structures, like the node_modules and bower_components folders.&lt;/p&gt; &lt;p&gt;To do this, I add two new build steps to the end of the definition.
Both of them are of the type &lt;strong&gt;&lt;em&gt;Batch Script&lt;/em&gt;&lt;/strong&gt; from the &lt;strong&gt;&lt;em&gt;Utility&lt;/em&gt;&lt;/strong&gt; section of the Add Build Step modal.&lt;/p&gt; &lt;p&gt;Both of them need to have their &lt;strong&gt;&lt;em&gt;Path&lt;/em&gt;&lt;/strong&gt; set to &lt;em&gt;Tools/delete_folder.bat&lt;/em&gt;, their &lt;strong&gt;&lt;em&gt;Working Folder&lt;/em&gt;&lt;/strong&gt; set to &lt;em&gt;Src/DeploymentDemo&lt;/em&gt;, and their &lt;strong&gt;&lt;em&gt;Always run&lt;/em&gt;&lt;/strong&gt; checkbox checked. However, the first step needs to have the &lt;strong&gt;&lt;em&gt;Arguments&lt;/em&gt;&lt;/strong&gt; set to &lt;em&gt;node_modules&lt;/em&gt;, and the second one has it set to &lt;em&gt;bower_components&lt;/em&gt;.&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_20.png"&gt;&lt;img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; float: none; padding-top: 0px; padding-left: 0px; margin: 0px auto; display: block; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_20.png" width="444" height="345"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;This will make sure that the &lt;em&gt;bower_components&lt;/em&gt; and &lt;em&gt;node_modules&lt;/em&gt; folders are removed after each build.&lt;/p&gt; &lt;p&gt;Finally, save the build configuration and you should be done! 
It should look something like this:&lt;/p&gt; &lt;p&gt;&lt;a href="http://chris.59north.com/image.axd?picture=2016/1/image_21.png"&gt;&lt;img title="image" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; float: none; padding-top: 0px; padding-left: 0px; border-left: 0px; margin: 0px auto; display: block; padding-right: 0px" border="0" alt="image" src="http://chris.59north.com/image.axd?picture=2016/1/image_thumb_21.png" width="529" height="495"&gt;&lt;/a&gt;&lt;/p&gt; &lt;p&gt;However, there is still one problem. Gulp will generate new files for us, as requested, but unfortunately they won’t be added to the deployment. To solve this, we need to tell MSDeploy that we want those files included in the deployment. To do this, a wpp.targets-file is added to the root of the project, and checked into source control. In this case the file is called &lt;strong&gt;&lt;em&gt;DeploymentDemo.wpp.targets&lt;/em&gt;&lt;/strong&gt; and looks like this:&lt;/p&gt; &lt;pre&gt;&amp;lt;?xml version="1.0" encoding="utf-8" ?&amp;gt;
&amp;lt;Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"&amp;gt;

  &amp;lt;Target Name="AddGulpFiles" BeforeTargets="CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy"&amp;gt;
    &amp;lt;Message Text="Adding gulp-generated files to deploy" Importance="high"/&amp;gt;
    &amp;lt;ItemGroup&amp;gt;
      &amp;lt;CustomFilesToInclude Include=".\dist\**\*.*" /&amp;gt;
      &amp;lt;FilesForPackagingFromProject Include="%(CustomFilesToInclude.Identity)"&amp;gt;
        &amp;lt;DestinationRelativePath&amp;gt;.\dist\%(RecursiveDir)%(Filename)%(Extension)&amp;lt;/DestinationRelativePath&amp;gt;
      &amp;lt;/FilesForPackagingFromProject&amp;gt;
    &amp;lt;/ItemGroup&amp;gt;
  &amp;lt;/Target&amp;gt;

&amp;lt;/Project&amp;gt;&lt;/pre&gt;
&lt;p&gt;It basically tells the system that any files in the &lt;strong&gt;&lt;em&gt;dist&lt;/em&gt;&lt;/strong&gt; folder should be added to the deployment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note: &lt;/strong&gt;You can read more about wpp.targets-files and how/why they work here: &lt;a title="http://chris.59north.com/post/Integrating-a-front-end-build-pipeline-in-ASPNET-builds" href="http://chris.59north.com/post/Integrating-a-front-end-build-pipeline-in-ASPNET-builds"&gt;http://chris.59north.com/post/Integrating-a-front-end-build-pipeline-in-ASPNET-builds&lt;/a&gt;&lt;/p&gt;
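&lt;p&gt;For reference, the &lt;strong&gt;&lt;em&gt;delete_folder.bat&lt;/em&gt;&lt;/strong&gt; file mentioned earlier relies on the classic RoboCopy trick of mirroring an empty folder over the target, which avoids the path-length problems that a plain rmdir can run into on deep node_modules trees. The exact script in my repo may differ; this is just a minimal sketch of the approach, and the &lt;em&gt;%TEMP%\empty_dir&lt;/em&gt; name is purely for illustration:&lt;/p&gt;

```bat
@echo off
rem delete_folder.bat - sketch of removing a deep folder structure via RoboCopy
rem Usage: delete_folder.bat node_modules

if "%~1"=="" exit /b 1
if not exist "%~1" exit /b 0

rem Mirror an empty folder over the target; /MIR deletes everything in it
mkdir "%TEMP%\empty_dir" 2>nul
robocopy "%TEMP%\empty_dir" "%~1" /MIR
rmdir "%TEMP%\empty_dir"
rmdir "%~1"
```

&lt;p&gt;Calling it once with &lt;em&gt;node_modules&lt;/em&gt; and once with &lt;em&gt;bower_components&lt;/em&gt;, as the two Batch Script steps do, leaves the agent's working folder clean.&lt;/p&gt;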
&lt;p&gt;That’s it! Queuing a new build, or pushing a new commit, should cause the build to run, and a nice new website should be deployed to the configured location, including the resources generated by Gulp. Unfortunately, due to the npm and Bower work, the build can actually be a bit slow-ish. But it works!&lt;/p&gt;
&lt;p&gt;Cheers!&lt;/p&gt;</description>
      <link>https://chris.59north.com/post/Setting-Up-Continuous-Deployment-of-an-ASPNET-App-with-Gulp-from-VSTS-to-an-Azure-Web-App-using-Scripted-Build-Definitions</link>
      <author>chris@59north.com</author>
      <comments>https://chris.59north.com/post/Setting-Up-Continuous-Deployment-of-an-ASPNET-App-with-Gulp-from-VSTS-to-an-Azure-Web-App-using-Scripted-Build-Definitions#comment</comments>
      <guid>https://chris.59north.com/post.aspx?id=01c1c89e-662e-4ffe-ad43-0c35d6983586</guid>
      <pubDate>Fri, 08 Jan 2016 08:21:11 +0000</pubDate>
      <category>ASP.NET</category>
      <category>Azure</category>
      <betag:tag>visual studio team services</betag:tag>
      <betag:tag>vsts</betag:tag>
      <betag:tag>build</betag:tag>
      <betag:tag>buildserver</betag:tag>
      <betag:tag>cd</betag:tag>
      <betag:tag>continuous delivery</betag:tag>
      <betag:tag>continuous integration</betag:tag>
      <betag:tag>ci</betag:tag>
      <betag:tag>azure</betag:tag>
      <betag:tag>tutorial</betag:tag>
      <dc:publisher>ZeroKoll</dc:publisher>
      <pingback:server>https://chris.59north.com/pingback.axd</pingback:server>
      <pingback:target>https://chris.59north.com/post.aspx?id=01c1c89e-662e-4ffe-ad43-0c35d6983586</pingback:target>
      <slash:comments>29</slash:comments>
      <trackback:ping>https://chris.59north.com/trackback.axd?id=01c1c89e-662e-4ffe-ad43-0c35d6983586</trackback:ping>
      <wfw:comment>https://chris.59north.com/post/Setting-Up-Continuous-Deployment-of-an-ASPNET-App-with-Gulp-from-VSTS-to-an-Azure-Web-App-using-Scripted-Build-Definitions#comment</wfw:comment>
      <wfw:commentRss>https://chris.59north.com/syndication.axd?post=01c1c89e-662e-4ffe-ad43-0c35d6983586</wfw:commentRss>
    </item>
  </channel>
</rss>