<?xml version="1.0" encoding="UTF-8"?>
<!--Generated by Site-Server v@build.version@ (http://www.squarespace.com) on Fri, 03 Apr 2026 21:17:25 GMT
--><rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://www.rssboard.org/media-rss" version="2.0"><channel><title>Hurry Up and Wait!</title><link>http://www.hurryupandwait.io/</link><lastBuildDate>Fri, 04 May 2018 04:51:27 +0000</lastBuildDate><language>en-US</language><generator>Site-Server v@build.version@ (http://www.squarespace.com)</generator><description><![CDATA[]]></description><item><title>Follow your Bliss: A Quantum Perspective</title><dc:creator>Matt Wrock</dc:creator><pubDate>Fri, 04 May 2018 17:03:46 +0000</pubDate><link>http://www.hurryupandwait.io/blog/follow-your-bliss-a-quantum-perspective</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:5aebe6cf0e2e72039b774d3c</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg" data-image-dimensions="1024x639" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=1000w" width="1024" height="639" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1525410332081-XJZZ9D482I7RWSOI32IS/mind21.jpg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





<p>If I pursue work that I truly love and enjoy, will the money follow? Is it more important to focus on projects that we find meaningful than on those that may be more lucrative or more likely to lead to promotions and better compensation? I have had a somewhat complicated relationship with these questions. So I'd like to explore them, share how they have guided my own career, and offer a new perspective I have settled on that is partially inspired by, or at least reminiscent of, quantum physics.</p><p>First, a quick plug for some blogging I have been doing lately but have not been publishing here. Especially if you found my last post on <a href="http://www.hurryupandwait.io/blog/course-correction">course correction</a> of interest, you may also find that some of my <a href="https://medium.com/@mwrockx">medium posts</a> fall into your field of interest. I have not been including them here because they are completely removed from the software topics of this blog and I feel it would be distracting to the bulk of this audience. Likewise, if you enjoy this post, you may enjoy the medium posts, but I think this topic is super relevant to work in the technology industry (or any industry for that matter) and that's why I am publishing it here.</p><p>Throughout the first 10 years of my career as a software developer and manager, I really didn't give the concept of meaning vs. wealth much thought. I was extremely fortunate that I almost accidentally fell into work I found very enjoyable that just so happened to be very lucrative. Certainly leaps and bounds more so than the low-paying jobs of my 20s. After 7 years of solid coding, I gradually started managing more and eventually became VP of technology over a small 20-engineer department. The money was very, very good. In fact, for the first time in my life I didn't worry much about money. I wasn't "rich" but I had no credit card debt and could afford occasional modest vacations. 
However, I found that I did not enjoy management nearly as much as hands-on development.</p><p>I was working for a startup at the time and had quite a few stock options that promised a potentially significant exit reward if I stuck things out. After a few years I decided that I was ready to move on and do more hands-on work, but it seemed like every year we were poised to sell or go public the following year, so I kept hanging on. Eventually, and rather suddenly, it just hit me that life is too precious to waste time doing work one finds unrewarding.</p><p>While moving from management back to individual contributor work was a no-brainer, the salary difference was hard to swallow. Even though I was starting as a Sr. Software Development Engineer at a large technology company, I was taking a 40% cut in pay. It was a solid and competitive salary, but I was certainly hoping to eventually return to the salary level I had enjoyed as a manager. So I clung to the conviction that if I applied my efforts to work I was passionate about, eventually finances would take care of themselves and I would again have a salary matching what I had before. I knew this was very possible, and I came close at one point to achieving that managerial salary before again changing positions because I was much more passionate about the new work than about what I was developing at the higher-salary job.</p><p>Honestly, the road to financial progress proved bumpier than I anticipated. Also, my passion and hard work were taking a rather significant toll on my quality of life. I began to wonder if I was barking up the wrong tree. Maybe I was wrong about the financial potential of my programming talent. While I could make up for talent with hard work, I was not sure how much longer I could sustain the hours I was putting in on the combined work and non-work related projects I was involved in. 
I really wanted to believe that I was a "gifted" developer destined for financial independence, but this belief seemed to unravel further as time went on.</p><p>There were a couple of times when my focus on pursuing what I believed in technically directly conflicted with my ability to follow my employer's promotion track. When I made the switch to hands-on development, for about six months I just wanted to write software, and it did not matter so much what the software did as long as I was challenged. Eventually, however, it became more and more important to me to work on software I believed in. I didn't just want to pump out "widgets" but to code something meaningful. If my employer's work became uninteresting, I'd often find something in open source that could cultivate my interests. However, this was work (sometimes a lot of work) done on my own time for free, and often (but not always) it went unseen and unappreciated by my employer. On the one hand, that was fine. I did not expect to be appreciated for work that was not contributing to the revenue of the one signing my paycheck. On the other hand, I knew for a fact that if I dove 100% into the work that I was employed to do (like what I did before I discovered open source), I'd be much better positioned for promotions and raises.</p><p>I really felt like somewhere something had gone wrong with my "master plan." What happened? I did not feel on track to a growing career and even felt like my self-perceived talent was overrated. Looking around at the talent that surrounded me, I was no 10x developer for sure. I don't believe in the 10x developer, but long ago I thought maybe I was one. Further, I did not want to acknowledge that this myth was a fallacy. My entire self-image had become commingled with the image of the uber developer, and I really wanted to believe my talent held the key to riches. 
I thought that to admit I was average, or even worse, would be to throw away the hope of the good life.</p><p>After these thoughts and doubts came to a crescendo over a year ago, I took a sabbatical from my pursuit of developer greatness. I literally just stopped. I'm still committed to my work as a developer, but I took several steps back and removed this drive from the altar of my constant attention. I knew I needed to reflect and "chew" on a larger problem. I clearly needed to become more skillful in simply living my life. I knew I was missing something important and that I needed to adjust the lens through which I defined what greatness and success were and the path to achieving them. I still believed that path existed, but I knew I had strayed.</p><p>It seems like I'm continuing to learn more every day, and I plan to do so indefinitely, but here are some key takeaways after a year of pondering.</p><p>First, I did the right thing when I left my VP position for individual contributor development work. I needed to pursue my desire to grow my coding skills. While it did not put me on the jetway to wealth, it has led to many great opportunities and experiences. I do think I have had a rich career that is far from over. It's also somewhat comforting to know that had I stayed in that previous VP role, I would have "sunk with the ship." The prosperous exit never happened for anyone in that startup.</p><p>Next, while the above pivot was the right thing to do, I clung to the wrong target. I erected a false image of myself as the romanticized developer genius. I believed that by conforming to that image, I would realize my material goals. This image was not who I am. It's not that I'm a bad developer or that I haven't done some pretty great technical things throughout my career; rather, this image is totally artificial and I allowed myself to be seduced by it. 
It became my measuring stick for greatness, and any failure to reflect its shallow qualities was a threat to my ability to obtain the future I hoped for. The passion to change the type of work I was doing was a voice worth listening to and following, but I misinterpreted where it was leading me and prematurely created a vision that was not grounded in my reality.</p><p>Losing the path does not mean I wasted my time. Oddly, many of us have to learn how to be ourselves by making several failed attempts to be someone we are not. It's perhaps the only route to understanding ourselves.</p><p>The path to success is not a straight line from where we currently are to an unmoving goal that we imagine to be our destiny and calling, perfectly matched with our ultimate potential. For me this was a huge realization. I have long unconsciously fostered the notion that one has a calling or a singular future that one is meant to fulfill. We make decisions and choose opportunities that either align us with that calling or throw us off "the path" and threaten to lead us to a possible future where we squander that perfect image of what we were meant to become.</p><p>Every moment brings with it a multitude of possible outcomes. I call this a quantum perspective because it reminds me of the <a href="https://en.wikipedia.org/wiki/Many-worlds_interpretation">Many Worlds interpretation</a> of quantum physics. These outcomes can be very different. Some will be wonderful and others very undesirable. There is no ONE correct outcome but many. There is no single perfect career, place to live, spouse, or any achievement destined for each individual. There is a constant myriad of possibilities. By fashioning an unmovable vision of who we think we are meant to become, we blind ourselves to these possibilities and limit the direction we follow.</p><p>It's also not terrible to have chosen a "bad" possibility. 
Doing so is not an irrevocable act that steers us away from our fully realized potential. The opportunity for redemption lies in each moment. Because there is no single perfect future that we must aim for, there are infinite possibilities for realizing ourselves. There is no straight line to fall from but a field of potentials that we constantly gravitate among. Just because we miss a perceived opportunity that would have led us to our perceived destiny does not mean the next moment will not bring new opportunity for a completely and utterly different outcome, perhaps just as "perfect."</p><p>It is crucial that we recognize this. Limiting ourselves to a vision that we march toward come hell or high water is a great way to build a prison for ourselves, cut off from a vast number of experiences better aligned with who we are in this particular moment. Just because one moment we find ourselves drawn to heads-down coding does not mean that is the ultimate definition of who we are and what we are to become, even if it may be a great path to take in that moment.</p><p>We are all truly dynamic beings, and we constantly defy any solidified definition of who we are that predicts what we are meant to become. We are a collection of decisions and interactions with other elements of our reality that are always redefining who we are and redirecting the trajectory of our future. Perhaps by coming to terms with the idea that each moment presents multiple equally valid potential directions, we can see a new beauty in what is right in front of us. If what we see is not reflective of the reality we have chosen to embrace for our future, maybe we don't need to change the reality of what we see but rather the image of the future we are projecting onto that reality.</p><p>While the above ideas ring true for me, they can be difficult to totally embrace. 
I'm trying to let go of the images I cling to that color the future I think that I want so that I can be more open to the possibilities that lie in front of me, but that's hard. We live our entire lives cultivating these visions - trying to become an embodiment of ourselves that is not the embodiment we reside inside of. I find that these images can give us not only fear and a sense of lack but also comfort and solace. They provide us with an identity that solidifies our sense of self. We tend to like that. We want to know who we are and feel like we control who we are to become. That security is hard to drop and without it, there is a feeling of impending emptiness and groundless weightlessness. Perhaps we need to fall into that emptiness and embrace the weightless vertigo if just for a moment to find our wings and fly.</p>]]></description></item><item><title>Course Correction</title><dc:creator>Matt Wrock</dc:creator><pubDate>Mon, 12 Mar 2018 00:59:22 +0000</pubDate><link>http://www.hurryupandwait.io/blog/course-correction</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:5aa5cbfee4966bde36636290</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg" data-image-dimensions="800x400" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=1000w" width="800" height="400" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1520816329672-K32Y7EL1KUTMKTN53QW6/course.jpg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





<p>March 1st marks a significant one-year milestone for me. Over the past year I have made several lifestyle changes as a sort of major "course correction" that has had a profound impact on my general well-being and outlook on life. I made the first intentional and tangible change on March 1, 2017. However, as I write this, I am remembering other actions taken slightly earlier that seem more significant now than they did at the time. Still, March 1 seems like a good solid checkpoint, mainly because I can actually remember it!</p><p>One byproduct of these changes has been a near halt in blogging over the last year, as well as a significant cutback in open source contributions made on my own time. So I thought this one-year mark might make a fine occasion to share the changes I have made, what led me to make them, and how I think those changes are shaping a new life for me now.</p><p>This is not going to be a technical post by any means, but I think that what I have to share may resonate with others who find themselves in self-defeating patterns of spending far too much time working at their jobs or contributing to open source or other "side projects" in their free time, and who are feeling a lack of connection and meaning in their lives like I was feeling a year ago. This post is also an opportunity for me personally to process the last year and to try to better make sense of all that has transpired, in order to better understand the place where I am today so I can plot a path forward.</p><p>So let's go back to February, 2017. What was going on at that time that would prompt me to change course? I had not exercised regularly, or even semi-regularly, in 5 years. It's worth mentioning that just prior to then I was an ultra-marathoner and had completed 100 and 50 mile events. 
I was heavier than I had ever been - not morbidly obese, but uncomfortably overweight, well over a healthy <a target="_blank" href="https://en.wikipedia.org/wiki/Body_mass_index">BMI</a>. I worked constantly - not necessarily work I was being paid for, but it could indeed be called work: blogging, open source contributions, answering forum questions, troubleshooting Packer builds. Every day I brought my laptop to bed and would usually work myself to sleep. Sometimes I would wake later in the night and work some more, and then I would start working again just after waking. I was always disappointed when weekends or a vacation showed up and was relieved on Mondays. Oh, and I was generally miserable and knew it. I felt like a failure in every area of my life.</p><p>I could now spend several paragraphs going into some detail about how the events of the preceding 10 years led me to this state. Believe me, I did, and I just cut them all out. You don't want to read all that. Let me see if I can sum it all up in a couple of sentences. I thought and hoped that maybe if I worked hard enough, I could create something great. This started by dedicating some personal time to an open source project, and I loved the experience, but I also stopped exercising to buy more time for the project. Eventually one project led to another, and soon I was contributing to several projects and actively blogging, averaging 4 to 5 hours of sleep a night.</p><p>As time went on I lost track of what I was trying to accomplish. I was working constantly and had no clear vision of where I wanted to go. Eventually all the constant work simply became habit and a new default state. In time I became conscious of the fact that I had lost any clear long-term goal. I was just chasing multiple "personal" assignments and feeling like I was drifting about, getting nowhere. Simultaneously I felt totally out of shape, uncomfortably overweight, and like a failure as a father and husband. 
Something had to change.</p><p>I had a general idea of the initial changes I needed to make for years prior to this. It was simple: change my diet, start exercising regularly, sleep like a normal person, and sit down and think hard about what I wanted for myself and my family and figure out what I needed to do to get there. Again, I could get very philosophical and explain over many paragraphs why it took years for me to actually make a move. I just couldn't let go of the terrible habits I had acquired. I was afraid of what I might be giving up. What if stopping my work plunged me into a void of mediocrity? Well, on March 1, 2017, I made the first move and have kept on going ever since.</p><p>As I mentioned above, there were actually a couple of important changes I made prior to March, 2017. At the end of October, 2016, I changed teams at work, which allowed me to focus on some technology that truly stimulated and interested me during work hours, which in turn made me feel less compelled to seek technical satisfaction after hours. As I very slowly weaned myself off of some open source projects, I soon instituted a new personal rule: don't bring my laptop to bed. At the time that was not part of some great plan to alter my habits; it just seemed like the right thing to do, but the impact was huge. You see, in my line of work, it's really super hard to work without a computer.</p><p>I have made a lot of changes between last March and now. These all transpired rather gradually. The first changes were all physical. The very first change was cutting out my daily habit of drinking two glasses of wine with dinner. This wasn’t so much about eliminating alcohol consumption. Rather, it was a strategy to keep me from eating too much. After a couple glasses of wine, my self-control would disappear, my appetite would spike, and I'd slip into a semi-trance state of eating fatty foods and sipping wine. 
In terms of improving my eating habits, this move seemed like the lowest-hanging fruit and a good place to start. My rule was simply no drinking at home. That seemed like it would squelch my nightly binge habit but allow me the occasional drink at social events. I had been wanting to make this particular change for months but could just never do it. By the time dinnertime rolled around, the thought of denying myself those two glasses of wine just seemed cruel.</p><p>Well, for whatever reason, I was now properly motivated and managed to successfully drop the habit. After just a few days I was feeling better and, perhaps more importantly, felt like I had dug myself just a tiny bit out of my hole. Every week I would make some other change to my diet, like replacing my breakfast of Starbucks lattes and pastries every morning with home-brewed coffee and oatmeal. After a few months, my diet was pretty much what it remains today: mostly whole, unprocessed, plant-based foods. I'll eat dairy or fish when I'm out or if someone else cooks it, but not as a staple.</p><p>In addition to dietary changes, I managed to carve out a daily exercise routine. This had been a real struggle over the past several years. I went from running 60 miles a week for years to intentionally dropping to zero so I could get more work done. Then, years later, realizing how bad an idea that was, I just couldn't maintain a regular exercise habit. Over time my fitness regressed to where I could not run more than a couple miles without injury, and then couldn't run at all. Well, in March 2017 I started a daily walking habit that became a mix of walking and running, and by mid-June I was running four miles a day. Oh man, this brought me so much joy, and I remember ending those runs feeling so much gratitude. 
I had thought my running days were over, but now I was clearly back in the saddle.</p><p>While these physical changes in diet and fitness were super great, I still felt an uncertainty and an overall lack of vision regarding the forward momentum of my life. For so long I had been razor-focused on open source projects with a hope in the back of my head that eventually I would just fall into some great opportunity that would provide moderate wealth and total independence. In March, along with the health-related adjustments, I mostly suspended my "extracurricular" open source involvement. Part of my overall plan was to completely reassess my goals and essentially recalibrate my personal mission. I was and am still passionate about Windows server automation, but I wanted to envision an end game, and perhaps it would be something bigger and broader than writing code. I knew I needed to explore what it was I wanted to contribute to the world in my lifetime, as well as what life I wanted for me and my family. Then I needed to determine what path was going to get me from where I am to that future vision.</p><p>This turned out to be a very difficult endeavor. I just didn't really know how to answer many of the questions that needed answering, and I was the only person who could possibly answer them. I knew some of the basics: I wanted financial independence, to provide a nurturing environment for my wife and children, to have more time to spend with family, and generally to make the world a better place. These are great things to want but do not make for very actionable goals in themselves. I felt incredibly antsy and restless. I wanted something tangible I could do and apply myself toward that would propel me in the direction of obtaining all of these things, but I had no idea what that thing or activity could or should be.</p><p>I'm pretty good at setting goals and achieving them. I'm not always good at choosing the right goals. 
This has especially been the case in the past few years. Most of my life has been a migration from one obsession to another. I find an interest and fully submerge myself in it. It's both my biggest strength and my biggest weakness. So being in a state with no obsession to nurse felt empty and unsatisfying. That all being said, I felt oddly on the right track. Despite my restlessness, I felt the most positive I had in years. With my newly recovered health, I felt like I was standing on a solid foundation and observing life and my surroundings through a new and clearer lens. With this more centered outlook, I was confident some action plan would reveal itself in time.</p><p>This search for a "mission" led me to make more changes to my daily routine. If diet and exercise changes could make me feel this much better, what other positive changes could I make to move this trend forward? First, I replaced listening to technical podcasts during workouts and while driving with listening to books from <a target="_blank" href="https://www.audible.com">audible.com</a> on a variety of self-development topics. I've listened to about 40 to 50 books over the last year. The topics have been all over the place: popular psychology, philosophy, finance, religion. I've listened to some incredible books and also some real duds. All in all it has been a true journey. One book will introduce new concepts or authors, which will lead me to another set of books. These have taught me a lot about a variety of topics and exposed me to a ton of new ideas.</p><p>In June, another new routine I took up was meditation. Years ago I had a daily meditation practice, and I stuck with it for several years. However, as my career blossomed, it eventually dropped away. But now, as I found myself seeking to learn about myself and discover a new path in life, it seemed like a good activity to take up again. 
Remembering back to my previous practice years ago, I recalled the honest introspection it could cultivate. That seemed like something sorely needed now. As I looked around for a meaningful way forward, I wanted to proceed with brutal honesty and authenticity. I did not want to choose goals that just made me feel good or that would win me the approval of others; I wanted to find and live my unique self, grounded in what was transpiring around me and not in a fantasy of some future state to which I wanted to escape.</p><p>I am going to assume that the audience reading this blog may not have direct experience with meditation. That is totally OK, and I will try to describe it in enough detail that you can have a sense of what I am talking about. The topic of meditation is immensely broad. There are a multitude of different meditation disciplines and traditions. Some differ so much from one another that it is hard to say they are the same thing, while many others appear almost identical. While I dabbled in a few forms of meditation in my early twenties, I began what I would call a formal Zen meditation practice in the mid-nineties. I lived less than a mile from the <a target="_blank" href="http://sfzc.org/">San Francisco Zen Center</a> and practiced there regularly for a few years until I moved back to Southern California, where I continued to practice on my own for several more years. Zen meditation, from a "logistical" perspective, is very simple. You typically sit on a cushion, but you may also sit in a chair or on anything that allows you to sit in an erect posture with your back straight. As you sit, you typically place your concentration upon your breath - paying close attention to each inhale and exhale. The intent is not to find or discover some "understanding" but to remain in the present moment. Inevitably, thoughts will arise: thoughts about some event or interaction that happened, or about some future fantasy or dread. 
In meditation, we don't try to avoid these thoughts, because that is futile; rather, we observe them without attaching to them or repelling them. At least that is the idea. In actual practice, attachment and repulsion are vibrant realities that are yet more fodder for observation. We catch our mind wandering and becoming absorbed in various thoughts and emotions, and then gently bring ourselves back to the breath.</p><p>That’s all I'm going to cover on the mechanics of meditation. If it is something that interests you or you are curious to learn more, there are a ton of books, blogs, and YouTube videos on the topic that can do a far better job explaining things than I can. There are several "flavors" of meditation that all follow roughly the same technique I described above. They are sometimes grouped under their more contemporary and secular label, <a target="_blank" href="https://www.mindful.org/meditation/mindfulness-getting-started/">mindfulness practice</a> - so you might include that in your googling. A couple of resources I think are great for beginners: <a target="_blank" href="https://www.amazon.com/Mindfulness-Eight-Week-Finding-Peace-Frantic-ebook/dp/B005NJ2T1G/ref=sr_1_1?ie=UTF8&amp;qid=1520816029&amp;sr=8-1&amp;keywords=mindfulness+an+eight+week+plan">Mindfulness: An Eight-Week Plan for Finding Peace in a Frantic World</a> and the audible lecture series <a target="_blank" href="https://www.audible.com/pd/Self-Development/The-Science-of-Mindfulness-Audiobook/B00MEQRUG0">The Science of Mindfulness - A Research-Based Path to Well-Being</a>.</p><p>This practice proved and continues to prove itself very powerful. I don't know if I just forgot my experience of meditating years ago, but this time things seemed more focused, energetic, and penetrating. 
Honestly, I think the experience of the past few years brought a sort of brokenness that breathed a deeper level of honesty and surrender into my practice.</p><p>Just before I started meditating again, I began taking walks with my dog Ocean in the afternoon and evening. Shortly after beginning a daily morning routine of sitting meditation, I started treating these afternoon and evening walks as a mindfulness exercise. I'd try to focus on being present during the walk instead of daydreaming about the future or obsessing about something that happened that day. Of course every day I have varying degrees of success and failure with that intention, but that’s OK. It's the intention that is important.</p><p>These new non-physical habits have unfolded a surprisingly fascinating internal journey. They have helped me to identify some of the warped ways I interpret my experiences and to gain healthier insights on how to view my life and how to act in the world, but I really feel like I have just scratched the surface. This does not at all downplay the benefits of my changes to diet and exercise, and I sort of think I would never have gotten off the ground without the changes made to my health. While this was by no means my strategy, they gave me small attainable goals that had a tangible and measurable impact. This not only made me feel better physically but it gave me confidence in myself and left me wanting more positive change and positivity in general.</p><p>So now after a year of making all these changes, have I found my mission in life? Have I received transmission of my grand path and redefined life purpose? Well not really, but that does not indicate failure. On the one hand, learning to just allow myself to live more fully in the present moment without the need to constantly focus on some future state is a sort of "goal" in itself. 
That sounds really paradoxical and may be a completely wrong way to phrase it but I honestly believe we can get stuck in becoming "human doings" and lose sight of what it is to be a human being. I could almost describe my entire workaholic epic as just that. I was stuck thinking I needed to do, do, do in order to achieve some incredibly vague idea of myself in an unknown future state that was never real at all. That does not discount everything I did or judge all my actions as misguided, but the predominant energy I was tapping into was an energy of supporting and chasing an image of myself that was entirely illusory.</p><p>Maybe tomorrow I will wake up and I'll have a lightning flash of insight into "the thing" I need to do or maybe over the next several years, circumstances around me will shape themselves and guide me unknowingly into an entirely different future from the present I live in now. Either scenario may be equally valid but I believe that in either case, a genuine "calling" emerges from an understanding of our true self and such an understanding best arises out of a spirit of surrender and letting go into the present.</p><p>Please don't get me wrong. There are absolutely some who are in a bad place and need to take responsibility and act in order to get themselves to somewhere else ASAP.</p><p>Here is another possibility: maybe the ultimate path of truth is right in front of us right now, doing exactly what we are doing now. As we let go into the present, we become transformed from the inside and the outside starts to look very different. Maybe as I learn to live in the present, I approach my current day job as an opportunity for meaningful global change no matter what that job is. It's where I am right now and therefore the absolute best place for me to be and exercise my unique talents. I, like all of us, bring something unique to my present that absolutely no one else has and by embracing that truth, we may become truly great at what we do. 
How many of us are climbing a ladder to nowhere and feel like utter failures because we have not arrived at a somewhere we cannot even define? Maybe we need to just fall off the ladder, dust ourselves off, and be rescued by right where we are now.</p>]]></description></item><item><title>Retiring the Boxstarter Web Launcher</title><dc:creator>Matt Wrock</dc:creator><pubDate>Tue, 30 May 2017 05:22:16 +0000</pubDate><link>http://www.hurryupandwait.io/blog/retiring-the-boxstarter-web-launcher</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:592ceaabf7e0ab6406dd9fa3</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG" data-image-dimensions="611x367" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=1000w" width="611" height="367" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1496119008660-99CDWC5E68GWW2SP5NXL/install.PNG?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>The "Web Launcher", which installs and runs <a href="http://boxstarter.org/">Boxstarter</a> using a "<a href="https://msdn.microsoft.com/en-us/library/142dbbz4(v=vs.90).aspx">click once</a>" install URL, will soon be retiring. This post will discuss why I have decided to sunset this feature and how those who regularly use this feature can access the same functionality via other methods.</p><h2>What is the Web Launcher?</h2><p>When I originally wrote Boxstarter, one of the primary design goals was that one could jump onto a fresh Windows OS install and launch their configuration scripts with almost no effort or pre-bootstrapping. The click once install technology seemed like a good fit and indeed, I think it has served this purpose well. With a simple, easy to remember URL, one can install Boxstarter and run a Boxstarter package. This only works when invoked via Internet Explorer, and while I do not use IE as my default browser, this restriction is perfectly acceptable for a clean install where IE is guaranteed to be present.</p><h2>Why retire a good thing?</h2><p>Again, the click once installer has been a very successful Boxstarter feature. The only hassle it has really caused has been for users wanting to use it from Chrome or Firefox. It has also been known to trigger false positive malware detection from Windows Smart Screen for reasons that usually baffle me. Both of these issues are really minor.</p><p>I am retiring it due to cost and time. Using click-once requires that I maintain a software signing certificate. I used to be able to obtain one for free, but the provider I have used has started to charge and made the renewal process particularly burdensome. The friction is not unreasonable given the nature of the company and I am truly grateful for the years of free service. Further, the click once installer requires some server side logic requiring me to pay hosting fees. 
As a former Microsoft employee, I could host this on Azure for free but I no longer benefit from free Azure services.</p><p>I don't at all mean to come off like I'm on the brink of bankruptcy or anything like that. However, it seems unwise to pay hundreds of dollars a year for cert renewals and hosting fees when almost all of this value can be accessed for free.</p><h2>When will the Web Launcher retire?</h2><p>I do not intend to yank the installer off the Boxstarter.org site right away. I'll likely keep it there for at least a few months. However, I will not be renewing the code signing certificate, which means that starting June 28th 2017, Windows will warn users that the certificate is from an untrusted publisher.</p><p>I have removed documentation from the Boxstarter.org website that talks about the Web Launcher and replaced it with new instructions for installing Boxstarter over the web and installing packages via Boxstarter.</p><h2>How can I install Boxstarter and install packages via the web without the Web Launcher?</h2><p>Actually quite easily, thanks to PowerShell. For some time now, I have shipped a <a href="https://github.com/mwrock/boxstarter/blob/master/BuildScripts/bootstrapper.ps1">bootstrapper.ps1</a> embedded in a setup.bat file downloadable from the boxstarter.org website. I am making some minor enhancements to this bootstrapper that will make it easy to install the Boxstarter modules by simply running:</p>
























  
    <pre class="source-code">. { iwr -useb http://boxstarter.org/bootstrapper.ps1 } | iex; get-boxstarter -Force</pre>
  




  <p>This will install <a href="https://chocolatey.org/">Chocolatey</a> and even .NET 4.5 if either is not already installed, and then install all of the necessary Boxstarter modules and even import them into the current shell. The installer will terminate with a warning if you are not running as an administrator or have a Restricted PowerShell execution policy.</p><p>Once this runs successfully, one can use the Install-BoxstarterPackage command to install their package or gist URL.</p>
























  
    <pre class="source-code">Install-BoxstarterPackage -PackageName https://gist.githubusercontent.com/mwrock/5e483f46cd15791970bdd3dd221dc179/raw/2632913a757570b576b9945ed04f94b747355b69/gistfile1.txt -DisableReboots</pre>
  




  <p>One can consult the command line help or the Boxstarter website for details on how to use the command.</p><p>I understand this is a tiny bit more involved than the Web Launcher. You cannot both install Boxstarter and run a package in a single command, and if you don't like to enter a console...well...now you have to.</p><p>The reason I did not expose the bootstrapper like this in the first place was that, back then, PowerShell v3, which introduced Invoke-WebRequest (aliased iwr), was not at all the norm, and the command that accomplishes the same in PowerShell v2 is more verbose and awkward:</p>
























  
    <pre class="source-code">iex ((New-Object System.Net.WebClient).DownloadString('http://boxstarter.org/bootstrapper.ps1')); get-boxstarter -Force</pre>
  




  <p>Now I suspect that the majority of Boxstarter users are on PowerShell 3 or more likely even higher. If you are still on version 2, you can use the longer command above.</p>]]></description></item><item><title>Habitat application portability and understanding dynamic linking of ELF binaries</title><dc:creator>Matt Wrock</dc:creator><pubDate>Fri, 30 Dec 2016 10:28:06 +0000</pubDate><link>http://www.hurryupandwait.io/blog/habitat-application-portability-and-understanding-dynamic-linking-of-elf-binaries</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:58661207f5e23172079bc9ca</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png" data-image-dimensions="601x483" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=1000w" width="601" height="483" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1483093528807-BTDU9JTM9SKN6ZMKZEHX/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>I do not come from a classical computer science background and have spent the vast majority of my career working with Java, C# and Ruby - mostly on Windows. So I have managed to evade the details of exactly how native binaries find their dependencies at compile time and runtime on Linux. It just has not been a concern in the work that I do. If my app complains about missing low level dependencies, I find a binary distribution for Windows (99% of the time these exist and work across all modern Windows platforms) and install the MSI. Hopefully when the app is deployed, those same binary dependencies have been deployed on the production nodes and it would be just super if it's the same version.</p><p>Recently I joined the <a target="_blank" href="https://www.habitat.sh/">Habitat</a> team at <a target="_blank" href="https://www.chef.io/">Chef</a> and one of the first things I did to get the feel of using Habitat to build software was to start creating Habitat build plans. The first plan I set out to create was <a target="_blank" href="http://www.microsoft.com/net/core">.NET Core</a>. I would soon find out that building .NET Core from source on Linux was probably a bad choice for a first plan. It uses clang instead of GCC, it has lots of cmake files that expect binaries to live in /usr/lib and it downloads built executables that do not link to Habitat packaged dependencies. Right out of the gate, I got all sorts of build errors as I plodded forward. Most of these errors centered around a common theme: "I can't find X." There were all sorts of issues beyond linking too that I won't get into here but I'm convinced that if I had known the basics of what this post will attempt to explain, I would have had a MUCH easier time with all the errors and pitfalls I faced.</p><h2>What is linking and what are ELF binaries?</h2><p>First, let's define our terms:</p><h3>ELF</h3><p>There are no "Lord of the Rings" references to be had here. 
ELF is the <a target="_blank" href="https://en.wikipedia.org/wiki/Executable_and_Linkable_Format">Executable and Linkable Format</a> and defines how binary files are structured on Linux/Unix. This can include executable files, shared libraries, object files and more. An ELF file contains a set of headers and a number of sections for things like text, data, etc. One of the key roles of an ELF binary is to inform the operating system how to load a program into memory including all of the symbols it must link to.</p><h3>Linking</h3><p>Linking is a key part of the process of building an executable. The other key part is compiling. Often we refer to both jointly as "compiling" but they are really two distinct operations. First the compiler takes source code files and turns them into machine language instructions in the form of object files. These object files alone are not very useful to running a program.</p><p>Linking takes the object files (some might be from source code you wrote) and links them together with external library files to create a functioning program. If your source code calls a function from an external library, the compiler gleefully assumes that function exists and moves on. If it doesn't exist, don't worry, the linker will let you know.</p><p>Often when we hear about linking, two types are mentioned: static and dynamic. Static linking takes the external machine instructions and embeds them directly into the built executable. If all external dependencies of a program were statically linked, there would be only one executable file and no need for any dependent shared object files to be referenced.</p><p>However, we usually dynamically link our external dependencies. Dynamic linking does not embed the external code into the final executable. Instead it just points to an external shared object (.so) file (or .dll file on Windows) and loads that code into the running process at runtime. 
This has the benefit of being able to update external dependencies without having to ship and package your application each time a dependency is updated. Dynamic linking also results in a smaller application binary since it does not contain the external code.</p><p>On Unix/Linux systems, the ELF format specifies the metadata that governs what libraries will be linked. These libraries can be in many places on the machine and may exist in more than one place. The metadata in the ELF binary will help determine exactly what files are linked when that binary is executed.</p><h2>Habitat + dynamic linking = portability</h2><p>Habitat leverages dynamic linking to provide true application portability. It might not be immediately obvious what this means or why it is important or if it is even a good thing. So let's start by describing how applications typically load their dependencies in a normal environment and the role that configuration management systems like Chef play in these environments.</p><h3>How you manage dependencies today</h3><p>Let's say you have written an application that depends on the <a target="_blank" href="http://zeromq.org/">ZeroMQ</a> library. You might use apt-get or yum to install ZeroMQ and its binaries are likely dropped somewhere into /usr. Now you can build and run your application and it will consume the ZeroMQ libraries installed. Unless it is told otherwise, the linker will scan the trusted Linux library locations for shared object files to link.</p><p>To illustrate this, I have built ZeroMQ from source and it produced libzmq.so.5 and put it in /usr/local/lib. If I examine that shared object with <a target="_blank" href="https://en.wikipedia.org/wiki/Ldd_(Unix)">ldd</a>, I can see where it links to its dependencies:</p>
























  
    <pre class="source-code">mwrock@ultrawrock:~$ ldd /usr/local/lib/libzmq.so.5
linux-vdso.so.1 =&gt;  (0x00007ffffe05f000)
libunwind.so.8 =&gt; /usr/lib/x86_64-linux-gnu/libunwind.so.8 (0x00007f7e92370000)
libsodium.so.18 =&gt; /usr/local/lib/libsodium.so.18 (0x00007f7e92100000)
librt.so.1 =&gt; /lib/x86_64-linux-gnu/librt.so.1 (0x00007f7e91ef0000)
libpthread.so.0 =&gt; /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f7e91cd0000)
libdl.so.2 =&gt; /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f7e91ac0000)
libstdc++.so.6 =&gt; /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f7e917a0000)
libm.so.6 =&gt; /lib/x86_64-linux-gnu/libm.so.6 (0x00007f7e91490000)
libc.so.6 =&gt; /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7e910c0000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7e92a00000)
liblzma.so.5 =&gt; /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f7e90e80000)
libgcc_s.so.1 =&gt; /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f7e90c60000)</pre>
  




  <p>They are all linked to the dependencies found in the Linux trusted library locations.</p><p>Now the time comes to move to production and just like you needed to install the ZeroMQ libraries in your dev environment, you will need to do the same on your production nodes. We all know this drill and we have probably all been burned at some point - something new is deployed to production and either its dependencies were not there or they were but they were the wrong version.</p><h3>Configuration Management as solution</h3><p>Chef fixes this, right? Kind of...it's complicated.</p><p>You can absolutely have Chef make sure that your application's dependencies are installed with the correct versions. But what if you have different applications or services on the same node that depend on a different version of the same dependency? It may not be possible to have multiple versions coexist in /usr/lib. Maybe your new version will work or maybe it won't. Especially for some of the lower level dependencies, there is simply no guarantee that compatible versions will exist. If anything, there is one guarantee:&nbsp;different distros will have different versions.</p><h3>Keeping the automation with the application</h3><p>Even more important - you want these dependencies to travel with your application. Ideally I want to install my application and know that, by virtue of installing it, everything it needs is there and has not stomped over the dependencies of anything else. I do not want to delegate the installation of its dependencies and the knowledge of which version to install to a separate management layer. Instead, Habitat binds dependencies with the application so that there is no question what your application needs and installing your application includes the installation of all of its dependencies. 
Let's look at how this works and see how dynamic linking is at play.</p><p>When you build a Habitat plan, your plan will specify each dependency required by your application:</p>
























  
    <pre class="source-code">pkg_deps=(core/glibc core/gcc-libs core/libsodium)</pre>
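<p>For context, pkg_deps is just one of several variables a plan defines. A minimal plan.sh along these lines might look roughly as follows; the origin, version, source URL, checksum and build dependencies here are illustrative placeholders, not the contents of the actual zeromq plan:</p>

```shell
# Hypothetical minimal plan.sh - all values below are illustrative placeholders
pkg_name=zeromq
pkg_origin=mwrock
pkg_version=4.1.4
pkg_source="http://example.com/zeromq-${pkg_version}.tar.gz"   # placeholder URL
pkg_shasum="0000000000000000000000000000000000000000000000000000000000000000"   # placeholder
pkg_deps=(core/glibc core/gcc-libs core/libsodium)   # runtime dependencies
pkg_build_deps=(core/make core/gcc)                  # build-time only (illustrative)
```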
  




  <p>Then when Habitat packages your build into its final, deployable artifact (.hart file), that artifact will include a list of every dependent Habitat package (including the exact version and release):</p>
























  
    <pre class="source-code">[35][default:/src:0]# cat /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/DEPS
core/glibc/2.22/20160612063629
core/gcc-libs/5.2.0/20161208223920
core/libsodium/1.0.8/20161214075415</pre>
  




  <p>At install time, Habitat installs your application package and also the packages included in its dependency manifest (the DEPS file shown above) in the pkgs folder under Habitat's root location. Here it will not conflict with any previously installed binaries on the node that might live in /usr. Further, the Habitat build process links your application to these exact package dependencies and ensures that at runtime, these are the exact binaries your application will load.</p>
























  
    <pre class="source-code">[36][default:/src:0]# ldd /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib/libzmq.so.5
linux-vdso.so.1 (0x00007fffd173c000)
libsodium.so.18 =&gt; /hab/pkgs/core/libsodium/1.0.8/20161214075415/lib/libsodium.so.18 (0x00007f8f47ea4000)
librt.so.1 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/librt.so.1 (0x00007f8f47c9c000)
libpthread.so.0 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007f8f47a7e000)
libstdc++.so.6 =&gt; /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libstdc++.so.6 (0x00007f8f47704000)
libm.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007f8f47406000)
libc.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007f8f47061000)
libgcc_s.so.1 =&gt; /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libgcc_s.so.1 (0x00007f8f46e4b000)
/hab/pkgs/core/glibc/2.22/20160612063629/lib64/ld-linux-x86-64.so.2 (0x0000560174705000)</pre>
  




  <p>Habitat guarantees that the same binaries that were linked at build time will be linked at run time. Even better, it just happens and you don't need a separate management layer to enforce this.</p><p>This is how a Habitat package provides portability. Installing and running a Habitat package brings all of its dependencies with it. They do not all live in the same .hart package, but your application's .hart package includes the necessary metadata to let Habitat know what other packages to download and install from the depot. These dependencies may or may not already exist on the node with varying versions, but it doesn't matter because a Habitat application only relies on the packages that reside within Habitat. And even within the Habitat environment, you can have multiple applications that rely on the same dependency but different versions, and these applications can run side by side.</p><h2>The challenge of portability and the Habitat studio</h2><p>So when you are building a Habitat plan into a hart package, what keeps that build from pulling dependencies from the default Linux lib directories? What if you do not specify these dependencies in your plan and the build links them from elsewhere? That could break our portability. If your application builds against dependencies found in some non-Habitat controlled location, then there is no guarantee that those dependencies will be present when you install your application elsewhere. Habitat constructs a build environment called a "studio" to protect against this exact scenario.</p><p>The Habitat studio is a clean room environment. The only libraries you will find in this environment are those managed by Habitat. You will find /lib and /usr/lib totally empty here:</p>
























  
    <pre class="source-code">[37][default:/src:0]# ls /lib -la
total 8
drwxr-xr-x  2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 26 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx  1 root root    3 Dec 24 22:46 lib -&gt; lib
[38][default:/src:0]# ls /usr/lib -la
total 8
drwxr-xr-x 2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 9 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx 1 root root    3 Dec 24 22:46 lib -&gt; lib</pre>
  




  <p>Habitat installs several packages into the studio including several familiar Linux utilities and build tools. Every utility and library that Habitat loads into the studio is a Habitat package itself.</p>
























  
    <pre class="source-code">[1][default:/src:0]# ls /hab/pkgs/core/
acl       cacerts    gawk      gzip            libbsd         mg       readline    vim
attr      coreutils  gcc-libs  hab             libcap         mpfr     sed         wget
bash      diffutils  glibc     hab-backline    libidn         ncurses  tar         xz
binutils  file       gmp       hab-plan-build  linux-headers  openssl  unzip       zlib
bzip2     findutils  grep      less            make           pcre     util-linux
</pre>
  




  <p>This can be a double-edged sword. On the one hand it protects us from undeclared dependencies being missed by our package. The darker side is that your plan may be building source that has build scripts that expect dependencies or other build tools to exist in their "usual" homes. If you are unfamiliar with how the standard Linux linker scans for dependencies, discovering what is wrong with your build may be less than obvious.</p><h2>The rules of dependency scanning</h2><p>So before we go any further let's take a look at how the linker works and how Habitat configures its build environment to influence where it finds dependencies at both build and run time. The linker looks at a combination of environment variables, CLI options and well known directory paths, in a strict order of precedence. Here is a direct quote from the ld (the linker binary) man page:</p><blockquote><p>The linker uses the following search paths to locate required shared libraries:</p><p>1. Any directories specified by -rpath-link options.<br />2. Any directories specified by -rpath options. &nbsp;The difference between -rpath and -rpath-link is that directories specified by -rpath options are included in the executable and used at runtime, whereas the -rpath-link option is only effective at link time. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot option.<br />3. On an ELF system, for native linkers, if the -rpath and -rpath-link options were not used, search the contents of the environment variable "LD_RUN_PATH".<br />4. On SunOS, if the -rpath option was not used, search any directories specified using -L options.<br />5. For a native linker, search the contents of the environment variable "LD_LIBRARY_PATH".<br />6. For a native ELF linker, the directories in "DT_RUNPATH" or "DT_RPATH" of a shared library are searched for shared libraries needed by it. 
The "DT_RPATH" entries are ignored if "DT_RUNPATH" entries exist.<br />7. The default directories, normally /lib and /usr/lib.<br />8. For a native linker on an ELF system, if the file /etc/ld.so.conf exists, the list of directories found in that file.</p></blockquote><p>At build time Habitat sets the $LD_RUN_PATH variable to the library path of every dependency that the building plan depends on. We can see this in Habitat's build output when we build a Habitat plan:</p>
























  
    <pre class="source-code">zeromq: Setting LD_RUN_PATH=/hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib:/hab/pkgs/core/glibc/2.22/20160612063629/lib:/hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib:/hab/pkgs/core/libsodium/1.0.8/20161214075415/lib</pre>
  




  <p>This means that at run time, when you run your application built by Habitat, it will load from the "habetized" packaged dependencies. This is because setting $LD_RUN_PATH influences how the ELF metadata is constructed and causes it to point to these Habitat package paths.</p><h2>Patching pre-built binaries</h2><p>Habitat not only allows one to build packages from source but also supports "binary-only" packages. These are packages that are made up of binaries downloaded from some external binary repository or distribution site. These are ideal for closed-source software or software that may be too complicated or take too long to build. However, Habitat cannot control the linking process for these binaries. If you try to execute these binaries in a Habitat studio, you may see runtime failures.</p><p>The <a target="_blank" href="https://github.com/habitat-sh/core-plans/blob/master/dotnet-core/plan.sh">dotnet-core</a> package is a good example of this. I ended up giving up on building that plan from source and instead just downloaded the binaries from the public .NET distribution site. Running ldd on the dotnet binary, we see:</p>
























  
    <pre class="source-code">[8][default:/src:0]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
/hab/pkgs/core/glibc/2.22/20160612063629/bin/ldd: line 117:
/hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet:
No such file or directory</pre>
  




  <p>Well, that's not very clear. ldd can't even show us the linked dependencies because the glibc interpreter named in the ELF metadata is not where the metadata says it is:</p>
























  
    <pre class="source-code">[9][default:/src:1]# file /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
/hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet:
ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=db256f0ac90cd718d8ec2d157b29437ea8bcb37f, not stripped</pre>
  




  <p>/lib64/ld-linux-x86-64.so.2 does not exist. We can fix this manually, even after a binary is built, with a tool called <a target="_blank" href="http://nixos.org/patchelf.html">patchelf</a>. We declare a build dependency on core/patchelf in our plan and then use the following command:</p>
























  
    <pre class="source-code">find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" {} \;</pre>
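  <p>After patching, the new interpreter path lives in the binary's PT_INTERP program header. As an aside, a few lines of Ruby can read that header directly from any little-endian ELF file to confirm what patchelf wrote. This is an illustrative sketch, not Habitat tooling:</p>

```ruby
# Read the PT_INTERP program header from an ELF binary to find the
# dynamic loader path baked into its metadata (little-endian only).
def elf_interp(path)
  data = File.binread(path)
  raise 'not an ELF file' unless data[0, 4] == "\x7FELF".b
  is64 = data.getbyte(4) == 2
  if is64
    phoff = data[32, 8].unpack1('Q<')             # e_phoff
    phentsize, phnum = data[54, 4].unpack('S<S<') # e_phentsize, e_phnum
  else
    phoff = data[28, 4].unpack1('L<')
    phentsize, phnum = data[42, 4].unpack('S<S<')
  end
  phnum.times do |i|
    base = phoff + i * phentsize
    next unless data[base, 4].unpack1('L<') == 3 # PT_INTERP
    off = is64 ? data[base + 8, 8].unpack1('Q<') : data[base + 4, 4].unpack1('L<')
    return data[off, 256].split("\0").first      # NUL-terminated path
  end
  nil
end

puts elf_interp('/bin/sh')
```

  <p>On a typical glibc-based x86-64 Linux box this prints the loader path for /bin/sh (for example /lib64/ld-linux-x86-64.so.2), the same kind of entry patchelf just rewrote for dotnet.</p>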
  




  <p>Now let's try ldd again:</p>
























  
    <pre class="source-code">[16][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225151837/bin/dotnet
linux-vdso.so.1 (0x00007ffe421eb000)
libdl.so.2 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libdl.so.2 (0x00007fcb0b2cc000)
libpthread.so.0 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007fcb0b0af000)
libstdc++.so.6 =&gt; not found
libm.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007fcb0adb1000)
libgcc_s.so.1 =&gt; not found
libc.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007fcb0aa0d000)
/hab/pkgs/core/glibc/2.22/20160612063629/lib/ld-linux-x86-64.so.2 (0x00007fcb0b4d0000)</pre>
  




  <p>This is better. It now links our glibc dependencies to the Habitat packaged glibc binaries, but there are still a couple of dependencies that the linker could not find. At least now we can see clearly what they are.</p><p>patchelf takes another argument, --set-rpath, that edits the ELF metadata as if $LD_RUN_PATH had been set when the binary was built:</p>
























  
    <pre class="source-code">find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" --set-rpath "$LD_RUN_PATH" {} \;
find -type f -name '*.so*' \
  -exec patchelf --set-rpath "$LD_RUN_PATH" {} \;</pre>
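  <p>You can confirm that --set-rpath actually embedded an entry with readelf -d. What that inspects is the PT_DYNAMIC segment, whose entries include DT_RPATH (tag 15) and DT_RUNPATH (tag 29). Here is a 64-bit-only Ruby sketch of the same check, an illustration rather than part of the plan:</p>

```ruby
# Scan an ELF64 binary's PT_DYNAMIC segment for the DT_RPATH or
# DT_RUNPATH tags that patchelf --set-rpath writes.
def runpath_present?(path)
  data = File.binread(path)
  return false unless data[0, 4] == "\x7FELF".b && data.getbyte(4) == 2
  phoff = data[32, 8].unpack1('Q<')
  phentsize, phnum = data[54, 4].unpack('S<S<')
  phnum.times do |i|
    base = phoff + i * phentsize
    next unless data[base, 4].unpack1('L<') == 2 # PT_DYNAMIC
    off  = data[base + 8, 8].unpack1('Q<')       # p_offset
    size = data[base + 32, 8].unpack1('Q<')      # p_filesz
    (size / 16).times do |j|
      tag = data[off + j * 16, 8].unpack1('q<')  # Elf64_Dyn.d_tag
      break if tag.zero?                         # DT_NULL ends the table
      return true if tag == 15 || tag == 29      # DT_RPATH / DT_RUNPATH
    end
  end
  false
end

puts runpath_present?('/bin/sh')
```

  <p>Run against the patched dotnet binary this would print true; most distribution binaries such as /bin/sh typically carry no rpath and print false.</p>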
  




  <p>So we set the rpath to the $LD_RUN_PATH value set in the Habitat build environment. We make sure to do the same for each *.so file in the directory where we downloaded the distributable binaries. Finally, ldd finds all of our dependencies:</p>
























  
    <pre class="source-code">[19][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225152801/bin/dotnet
linux-vdso.so.1 (0x00007fff3e9a4000)
libdl.so.2 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libdl.so.2 (0x00007f1e68834000)
libpthread.so.0 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007f1e68617000)
libstdc++.so.6 =&gt; /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libstdc++.so.6 (0x00007f1e6829d000)
libm.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007f1e67f9f000)
libgcc_s.so.1 =&gt; /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libgcc_s.so.1 (0x00007f1e67d89000)
libc.so.6 =&gt; /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007f1e679e5000)
/hab/pkgs/core/glibc/2.22/20160612063629/lib/ld-linux-x86-64.so.2 (0x00007f1e68a38000)</pre>
  




  <p>Every dependency, down to glibc, is a Habitat packaged binary declared among our own application's (here dotnet-core) dependencies. This should be fully portable across any 64-bit Linux distribution.</p>]]></description></item><item><title>Creating a Docker container Host on Windows Nano Server with Chef</title><dc:creator>Matt Wrock</dc:creator><pubDate>Wed, 28 Sep 2016 17:19:34 +0000</pubDate><link>http://www.hurryupandwait.io/blog/creating-a-docker-container-host-on-windows-nano-server-with-chef</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:57eb5b6b9f74562532d85625</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png" data-image-dimensions="663x386" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=1000w" width="663" height="386" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1475080765031-X3GWE9TXHM9PKFKT9F9Z/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>This week <a target="_blank" href="https://blogs.technet.microsoft.com/hybridcloud/2016/09/26/announcing-the-launch-of-windows-server-2016/?wt.mc_id=WW_CE_WS_OO_SCL_TW&amp;Ocid=C+E%20Social%20FY17_Social_TW_windowsserver_20160926_597706051#WinServ">Microsoft launched the release of Windows Server 2016</a> along with its ultra light headless deployment option -&nbsp;Nano Server. The Nano server images are many times smaller than what we have come to expect from a Windows server image. A Nano <a target="_blank" href="https://www.vagrantup.com/">Vagrant</a> box is just a few hundred megabytes. These machines also boot up VERY quickly and require fewer updates and reboots.</p><p>Earlier this year, I <a target="_blank" href="http://www.hurryupandwait.io/blog/instal">blogged</a> about how to run a <a target="_blank" href="https://www.chef.io/">Chef</a> client on Windows Nano Server. Things have come a long way since then and this post serves as an update. Now that the RTM Nano bits are out, we will look at:</p><ul><li>How to get and run a Nano server</li><li>How to install the chef client on Windows Nano</li><li>How to use Test-Kitchen and Inspec to test your Windows Nano Server cookbooks.</li></ul><p>The <a target="_blank" href="https://github.com/mwrock/docker_nano_host">sample cookbook</a> I'll be demonstrating here will highlight some of the new Windows container features in Nano server. It will install <a target="_blank" href="https://www.docker.com/">docker</a> and allow you to use your Nano server as a container host where you can run, manipulate and inspect Windows containers from any Windows client.</p><h2>How to get Windows Nano Server</h2><p>You have a few options here. One thing to understand about Windows Nano is that there is no separate Windows Nano ISO. Deploying a Nano server involves extracting a WIM and some powershell scripts from a Windows 2016 Server ISO. 
You can then use those scripts to generate a .VHD file from the WIM or you can use the WIM to deploy Nano to a bare-metal server. There are some shortcuts available if you don't want to mess with the scripts and prefer a more instantly gratifying experience. Let's explore these scenarios.</p><h3>Using New-NanoServerImage to create your Nano image</h3><p>If you mount the server 2016 ISO (free evaluation versions available <a target="_blank" href="https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016">here</a>), you will find a "NanoServer\NanoServerImageGenerator" folder containing a NanoServerImageGenerator powershell module. This module's core function is New-NanoServerImage. Here is an example of using it to produce a Nano Server VHD:</p>
























  
    <pre class="source-code">Import-Module NanoServerImageGenerator.psd1
$adminPassword = ConvertTo-SecureString "vagrant" -AsPlainText -Force

New-NanoServerImage `
  -MediaPath D:\ `
  -BasePath .\Base `
  -TargetPath .\Nano\Nano.vhdx `
  -ComputerName Nano `
  -Package @('Microsoft-NanoServer-DSC-Package','Microsoft-NanoServer-IIS-Package') `
  -Containers `
  -DeploymentType Guest `
  -Edition Standard `
  -AdministratorPassword $adminPassword</pre>
  




  <p>This will generate a Nano Hyper-V capable image file of a Container/DSC/IIS ready Nano server. You can read more about the details and other options of this function in this <a target="_blank" href="https://technet.microsoft.com/windows-server-docs/compute/nano-server/getting-started-with-nano-server?f=255&amp;MSPPError=-2147217396">Technet article</a>.</p><h3>Direct EXE/VHD download</h3><p>As I briefly noted above, you can download evaluation copies of Windows Server 2016. Instead of downloading a full multi-gigabyte Windows ISO, you could choose the exe/vhd download option. This will download an exe file that will extract a pre-made vhd. You can then create a new Hyper-V VM from the vhd. With that VM, just log in to the Nano console to set the administrative password and you are good to go.</p><h3>Vagrant</h3><p>This is my installation method of choice. I use a <a target="_blank" href="https://www.packer.io/">packer</a> template to automate the download of the 2016 server ISO,&nbsp;the generation of the image file and finally the packaging of the image both for <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/hyperv-nano.json">Hyper-V</a> and <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/vbox-nano.json">VirtualBox</a> Vagrant providers. I keep the image publicly available on Atlas via <a target="_blank" href="https://atlas.hashicorp.com/mwrock/boxes/WindowsNano/">mwrock/WindowsNano</a>. 
The advantage of these images is that they are fully patched (key for docker to work with Windows containers), work with VirtualBox and enable file sharing ports so you can map a drive to Nano.</p><h2>Vagrant Nano bug</h2><p>One challenge of working with Nano Server and cross-platform automation tools such as Vagrant is that Nano's Powershell.exe exposes no -EncodedCommand argument, which many cross-platform WinRM libraries leverage to invoke remote Powershell on a Windows box.</p><p><a target="_blank" href="https://github.com/sneal">Shawn Neal</a> and I rewrote the <a target="_blank" href="https://github.com/WinRb/WinRM">WinRM ruby gem</a> to use PSRP (the PowerShell Remoting Protocol) to talk powershell and allow it to interact with Nano server. This has been integrated with all the Chef based tools and I will be porting it to Vagrant soon. In the meantime, a "vagrant up" will hang after creating the VM. Know that the VM is in fact fully functional and connectable. I'll mention a hack you can apply to get <a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a>'s vagrant driver working later in this post.</p><h2>Connecting to Windows Nano Server</h2><p>Once you have a Nano server VM up and running, you will probably want to actually use it. Note: There is no RDP available here. You can connect to Nano and run commands either using native Powershell Remoting from a Windows box (<a target="_blank" href="https://blogs.msdn.microsoft.com/powershell/2016/08/18/powershell-on-linux-and-open-source-2/">powershell on Linux</a> does not yet support remoting) or use <a target="_blank" href="https://github.com/chef/knife-windows">knife-windows</a>' "knife winrm" from Windows, Mac or Linux.</p><p>Powershell Remoting:</p>
























  
    <pre class="source-code">$ip = "&lt;ip address of Nano Server&gt;"

# You only need to add the trusted host once
Set-Item WSMan:\localhost\Client\TrustedHosts $ip
# use username and password "vagrant" on the mwrock vagrant box
Enter-PSSession -ComputerName $ip -Credential Administrator</pre>
  




  <p>Knife-Windows:</p>
























  
    <pre class="source-code"># mwrock vagrant boxes have a username and password "vagrant"
# add "--winrm-port 55985" for local VirtualBox
knife winrm -m &lt;ip address of Nano Server&gt; "your command" --winrm-user vagrant --winrm-password vagrant</pre>
  




  <p>Note that knife winrm expects "cmd.exe" style commands by default. Use "--winrm-shell powershell" to send powershell commands.</p><h2>Installing Chef on Windows Nano Server</h2><p>Quick tip: Do not try to install a chef client MSI. That will not work.</p><p>Windows Nano server jettisons many of the APIs and subsystems we have grown accustomed to in order to achieve a much more compact and cloud-friendly footprint. This includes the removal of the MSI subsystem. Nano server does support the newer appx packaging system currently best known as the format for packaging Windows Store Apps. With Nano Server, new extensions have been added to the appx model to support what is now known as "Windows Server Applications" (aka WSAs).</p><p>At Chef, we have added the creation of appx packages into our build pipelines but these are not yet exposed by our Artifactory and Bintray fed Omnitruck delivery mechanism. That will happen but in the meantime, I have uploaded one to a public AWS S3 bucket. You can grab the current client (as of this post) <a target="_blank" href="https://s3-us-west-2.amazonaws.com/nano-chef-client/chef-12.14.60.appx">here</a>. To install this .appx file (note: if using Test-Kitchen, this is all done automatically for you):</p><ol><li>Either copy the .appx file via a mapped drive or just download it from the Nano server using <a target="_blank" href="https://github.com/chef/mixlib-install/blob/v2.0.0/support/install_command.ps1#L145">this powershell function</a>.</li><li>Run "Add-AppxPackage -Path &lt;path to .appx file&gt;"</li><li>Copy the appx install to c:\opscode\chef:</li></ol>
























  
    <pre class="source-code">  $rootParent = "c:\opscode"
  $chef_omnibus_root = Join-Path $rootParent "chef"
  
  if(!(Test-Path $rootParent)) {
    New-Item -ItemType Directory -Path $rootParent
  }

  # Remove old version of chef if it is here
  if(Test-Path $chef_omnibus_root) {
    Remove-Item -Path $chef_omnibus_root -Recurse -Force
  }

  # copy the appx install to the omnibus_root. There are serious
  # ACL related issues with running chef from the appx InstallLocation
  # This is temporary pending a fix from Microsoft.
  # We can eventually just symlink
  $package = (Get-AppxPackage -Name chef).InstallLocation
  Copy-Item $package $chef_omnibus_root -Recurse</pre>
  




  <p>The last item is a bit unfortunate but temporary. Microsoft has confirmed this to be an issue with running simple zipped appx applications. The ACLs on the appx install root are seriously restricted and you cannot invoke the chef client from that location. Until this is fixed, you need to copy the files from the appx location to somewhere else. We'll just copy to the well-known Chef default location on Windows, c:\opscode\chef.</p><h2>Running Chef</h2><p>With the chef client installed, it's easiest to work with chef when it's on your path. To add it run:</p>
























  
    <pre class="source-code">$env:path += ";c:\opscode\chef\bin;c:\opscode\chef\embedded\bin"

# For persistent use, will apply even after a reboot.
setx PATH $env:path /M</pre>
  




  <p>Now you can run the chef client just as you would anywhere else. Here I'll check the version using knife:</p>
























  
    <pre class="source-code">C:\dev\docker_nano_host [master]&gt; knife winrm -m 192.168.137.25 "chef-client -v" --winrm-user vagrant --winrm-password vagrant
192.168.137.25 Chef: 12.14.60</pre>
  




  <h2>Not all resources may work</h2><p>I have to include this disclaimer. Nano is a very different animal than our familiar 2012 R2. I am confident that the newly launched Windows Server 2016 should work just as 2012 R2 does today, but Nano has stripped away APIs that we previously leveraged heavily in Chef and <a target="_blank" href="#">Inspec</a>. One example is Get-WmiObject. This cmdlet is not available on Nano Server so any usage that depends on it will fail.</p><p>Most of the crucial areas surrounding installing and invoking chef are patched and tested. However, there may be resources that either have not yet been patched or will simply never work. The windows_package resource is a good example. It's used to install MSIs and EXE installers, which are not supported on Nano.</p><h2>Test-Kitchen and Inspec on Nano</h2><p>The <a target="_blank" href="http://www.hurryupandwait.io/blog/released-winrm-gem-20-first-cross-platform-open-sourced-psrp-client-implementation">WinRM rewrite</a> to leverage PSRP allows our remote execution ecosystem tools to access Windows Nano Server. We have also overhauled our <a target="_blank" href="https://github.com/chef/mixlib-install">mixlib-install</a> gem to use .Net core APIs (the .Net runtime supported on Nano) for the chef provisioners. With those changes in place, Test-Kitchen can install and run Chef, and Inspec can test resources on your Nano instances.</p><p>There are a few things to consider when using Test-Kitchen on Windows Nano:</p><h3>Specifying the Chef appx installer</h3><p>As I mentioned above, the "OmniTruck" system is not yet serving appx packages to Nano. However, you can tell Test-Kitchen in your .kitchen.yml to use a specific .msi or .appx installer. Here is some example yaml for running Test-Kitchen with Nano:</p>
























  
    <pre class="source-code">---
driver:
  name: vagrant

provisioner:
  name: chef_zero
  install_msi_url: https://s3-us-west-2.amazonaws.com/nano-chef-client/chef-12.14.60.appx

verifier:
  name: inspec

platforms:
  - name: windows-nano
    driver_config:
      box: mwrock/WindowsNano</pre>
  




  <p>Inspec requires no configuration changes.</p><h3>Working around Vagrant hangs</h3><p>Until I refactor Vagrant's winrm communicator, it cannot talk powershell with Windows Nano. Because Test-Kitchen and Inspec talk to Nano directly via the newly PSRP-supporting WinRM ruby gem, they make Vagrant's limitation nearly unnoticeable. However, the RTM Nano bits exacerbated the Vagrant bug, causing it to hang during its initial winrm auth check. This can unfortunately hang your <em>kitchen create</em>. You can work around this by applying a simple "hack" to your vagrant install:</p><p>Update C:\HashiCorp\Vagrant\embedded\gems\gems\vagrant-1.8.5\plugins\communicators\winrm\communicator.rb (adjusting the vagrant gem version number as necessary)&nbsp;and change:</p>
























  
    <pre class="source-code">result = Timeout.timeout(@machine.config.winrm.timeout) do
  shell(true).powershell("hostname")
end</pre>
  




  <p>to:</p>
























  
    <pre class="source-code">result = Timeout.timeout(@machine.config.winrm.timeout) do
  shell(true).cmd("hostname")
end</pre>
  




  <p>This should get your test-kitchen runs unblocked.</p><h3>Running on Azure hosted Nano images</h3><p>If you prefer to run Test-Kitchen and Inspec against an Azure hosted VM instead of vagrant, use <a target="_blank" href="https://github.com/stuartpreston">Stuart Preston's</a>&nbsp;excellent <a target="_blank" href="https://github.com/pendrica/kitchen-azurerm">kitchen-azurerm</a>&nbsp;driver:</p>
























  
    <pre class="source-code">---
driver:
  name: azurerm

driver_config:
  subscription_id: 'your subscription id'
  location: 'West Europe'
  machine_size: 'Standard_F1'

platforms:
  - name: windowsnano
    driver_config:
      image_urn: MicrosoftWindowsServer:WindowsServer:2016-Nano-Server-Technical-Preview:latest
</pre>
  




  <p>See the <a target="_blank" href="https://github.com/pendrica/kitchen-azurerm">kitchen-azurerm readme</a>&nbsp;for details regarding azure authentication configuration.&nbsp;As of the date of this post, RTM images are not yet available but that's probably going to change very soon. In the meantime, use TP5.</p><h2>Using Chef to Configure a Docker host</h2><p>One of the exciting new features of Windows Server 2016 and Nano Server is their ability to host Windows containers. They can do this using the same Docker API we are familiar with from Linux containers. You could walk through the <a target="_blank" href="https://msdn.microsoft.com/en-us/virtualization/windowscontainers/deployment/deployment_nano?f=255&amp;MSPPError=-2147217396">official instructions</a> for setting this up or you could just have Chef do this for you.</p><h3>Updating the Nano server</h3><p>Note that in order for this to work on RTM Nano images, you must install the latest Windows updates. My vagrant boxes come fully patched and ready, but if you are wondering how to install updates on a Nano server, here is how:</p>
























  
    <pre class="source-code">$sess = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate -ClassName MSFT_WUOperationsSession
Invoke-CimMethod -InputObject $sess -MethodName ApplyApplicableUpdates</pre>
  




  <p>Then just reboot and you are good.</p><h3>A sample cookbook to install and configure the Docker service</h3><p>I converted the above-mentioned instructions for installing Docker and configuring the service into a <a target="_blank" href="https://github.com/mwrock/docker_nano_host">Chef cookbook</a> recipe. It's fairly straightforward:</p>
























  
    <pre class="source-code">powershell_script 'install Nuget package provider' do
  code 'Install-PackageProvider -Name NuGet -Force'
  not_if '(Get-PackageProvider -Name Nuget -ListAvailable -ErrorAction SilentlyContinue) -ne $null'
end

powershell_script 'install xNetworking module' do
  code 'Install-Module -Name xNetworking -Force'
  not_if '(Get-Module xNetworking -list) -ne $null'
end

zip_path = "#{Chef::Config[:file_cache_path]}/docker.zip"
docker_config = File.join(ENV["ProgramData"], "docker", "config")

remote_file zip_path do
  source "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip"
  action :create_if_missing
end

dsc_resource "Extract Docker" do
  resource :archive
  property :path, zip_path
  property :ensure, "Present"
  property :destination, ENV["ProgramFiles"]
end

directory docker_config do
  recursive true
end

file File.join(docker_config, "daemon.json") do
  content "{ \"hosts\": [\"tcp://0.0.0.0:2375\", \"npipe://\"] }"
end

powershell_script "install docker service" do
  code "&amp; '#{File.join(ENV["ProgramFiles"], "docker", "dockerd")}' --register-service"
  not_if "Get-Service docker -ErrorAction SilentlyContinue"
end

service 'docker' do
  action [:start]
end

dsc_resource "Enable docker firewall rule" do
  resource :xfirewall
  property :name, "Docker daemon"
  property :direction, "inbound"
  property :action, "allow"
  property :protocol, "tcp"
  property :localport, [ "2375" ]
  property :ensure, "Present"
  property :enabled, "True"
end</pre>
  




  <p>This downloads the appropriate docker binaries, installs the docker service and configures it to listen on port 2375.</p><p>To validate that this all actually worked, we have these Inspec tests:</p>
























  
    <pre class="source-code">describe port(2375) do
  it { should be_listening }
end

describe command("&amp; '$env:ProgramFiles/docker/docker' ps") do
  its('exit_status') { should eq 0 }
end

describe command("(Get-service -Name 'docker').status") do
  its(:stdout) { should eq("Running\r\n") }
end</pre>
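  <p>The first of these checks, whether anything is listening on 2375, boils down to a TCP probe. Here is a standalone Ruby sketch of that idea; Inspec's port resource does this more robustly:</p>

```ruby
require 'socket'

# Return true if a TCP connection to host:port succeeds within the
# timeout; a quick stand-in for Inspec's port matcher.
def listening?(host, port, timeout = 2)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue StandardError
  false
end

# Probing localhost here only as a self-contained illustration; in the
# post's setup the host would be the Nano server's IP.
puts listening?('127.0.0.1', 2375)
```
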
  




  <p>If this all passes, we know our server is listening on the expected port and that docker commands work.</p><h3>Converge and Verify</h3><p>So let's run these with <em>kitchen verify</em>:</p>
























  
    <pre class="source-code">C:\dev\docker_nano_host [master]&gt; kitchen verify
-----&gt; Starting Kitchen (v1.13.0)
-----&gt; Creating &lt;default-windows-nano&gt;...
       Bringing machine 'default' up with 'hyperv' provider...
       ==&gt; default: Verifying Hyper-V is enabled...
       ==&gt; default: Starting the machine...
       ==&gt; default: Waiting for the machine to report its IP address...
           default: Timeout: 240 seconds
           default: IP: 192.168.137.25
       ==&gt; default: Waiting for machine to boot. This may take a few minutes...
           default: WinRM address: 192.168.137.25:5985
           default: WinRM username: vagrant
           default: WinRM execution_time_limit: PT2H
           default: WinRM transport: negotiate
       ==&gt; default: Machine booted and ready!
       ==&gt; default: Machine not provisioned because `--no-provision` is specified.
       [WinRM] Established

       Vagrant instance &lt;default-windows-nano&gt; created.
       Finished creating &lt;default-windows-nano&gt; (1m15.86s).
-----&gt; Converging &lt;default-windows-nano&gt;...
</pre>
  




  <p>...</p>
























  
    <pre class="source-code">

  Port 2375
     ✔  should be listening
  Command &amp;
     ✔  '$env:ProgramFiles/docker/docker' ps exit_status should eq 0
  Command (Get-service
     ✔  -Name 'docker').status stdout should eq "Running\r\n"

Summary: 3 successful, 0 failures, 0 skipped
       Finished verifying &lt;default-windows-nano&gt; (0m11.94s).
</pre>
  




  <p>Ok, our docker host is ready.</p><h3>Creating and running a Windows container</h3><p>First, if you are running Nano on VirtualBox, you need to add a port-forwarding rule for port 2375. Also note that you will need the docker client installed on the machine where you intend to run docker commands. I'm running them from my Windows 10 laptop. To install docker on Windows 10:</p>
























  
    <pre class="source-code">Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

$env:path += ";c:\program files\docker"</pre>
  




  <p>No matter what platform you are running on, once you have the docker client, you need to tell it to use your Nano server as the docker host. Simply set the DOCKER_HOST environment variable to "tcp://&lt;ipaddress of server&gt;:2375".</p><p>So now let's download a <a target="_blank" href="https://hub.docker.com/r/microsoft/nanoserver/tags/">nanoserver container image</a> from the <a target="_blank" href="https://hub.docker.com/">docker hub repository</a>:</p>
























  
    <pre class="source-code">C:\dev\NanoVHD [update]&gt; docker pull microsoft/nanoserver
Using default tag: latest
latest: Pulling from microsoft/nanoserver
5496abde368a: Pull complete
Digest: sha256:aee7d4330fe3dc5987c808f647441c16ed2fa1c7d9c6ef49d6498e5c9860b50b
Status: Downloaded newer image for microsoft/nanoserver:latest</pre>
  




  <p>Now let's run a command...heck, let's just launch an interactive powershell session inside the container with:</p>
























  
    <pre class="source-code">docker run -it microsoft/nanoserver powershell</pre>
  




  <p>Here is what we get:</p>
























  
    <pre class="source-code">Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\&gt; ipconfig

Windows IP Configuration


Ethernet adapter vEthernet (Temp Nic Name):

   Connection-specific DNS Suffix  . : mshome.net
   Link-local IPv6 Address . . . . . : fe80::2029:a119:3e4f:851a%15
   IPv4 Address. . . . . . . . . . . : 172.30.245.4
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
   Default Gateway . . . . . . . . . : 172.30.240.1
PS C:\&gt; $env:COMPUTERNAME
E1C534D94707
PS C:\&gt;</pre>
  




  <p>Ahhwwww yeeeeaaaahhhhhhh.</p><h2>What's next?</h2><p>So we have made a lot of progress over the last few months but the story is not entirely complete. We still need to finish <em>knife bootstrap windows winrm </em>and plug in our azure extension.</p><p>Please let us know what works and what does not work. I personally want to see Nano server succeed and of course we intend for Chef to provide a positive Windows Nano Server configuration story.</p>]]></description></item><item><title>Released WinRM Gem 2.0 with a cross-platform, open source PSRP client implementation</title><dc:creator>Matt Wrock</dc:creator><pubDate>Wed, 31 Aug 2016 08:51:24 +0000</pubDate><link>http://www.hurryupandwait.io/blog/released-winrm-gem-20-first-cross-platform-open-sourced-psrp-client-implementation</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:57c675dd5016e157ad896485</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png" data-image-dimensions="663x481" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=1000w" width="663" height="481" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1472633184301-LU891D9FVDUU6NSI7C0E/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>Today we released the gems: <a target="_blank" href="https://github.com/WinRb/WinRM">WinRM 2.0</a>, <a target="_blank" href="https://github.com/WinRb/WinRM-fs">winrm-fs 1.0</a> and <a target="_blank" href="https://github.com/WinRb/winrm-elevated">winrm-elevated 1.0</a>. I first talked about this work in <a href="http://www.hurryupandwait.io/blog/a-look-under-the-hood-at-powershell-remoting-through-a-ruby-cross-plaform-lens">this post</a> and have since performed extensive testing (but I have confidence the first bug will be reported soon) and made several improvements. Today it's released and available to any consuming application wanting to use it and we should see a <a target="_blank" href="https://github.com/test-kitchen/test-kitchen">Test-Kitchen</a> release in the near future upgrading its winrm gems. Up next will be <a target="_blank" href="https://github.com/chef/knife-windows">knife-windows</a> and <a target="_blank" href="https://www.vagrantup.com/">vagrant</a>.</p><p>This is a near rewrite of the WinRM gem. It's gotten crufty over the years and its API and internal structure needed some attention. This release fixes several bugs and brings some big improvements. You should read the <a target="_blank" href="https://github.com/WinRb/WinRM/blob/master/README.md">readme</a> to catch up on the changes but here is how it looks in a nutshell (or an IRB shell):</p>
























  
    <pre class="source-code">opts = {
  endpoint: 'http://127.0.0.1:55985/wsman',
  user: 'vagrant',
  password: 'vagrant'
}
conn = WinRM::Connection.new(opts); nil
conn.shell(:powershell) do |shell|
  shell.run('$PSVersionTable') do |stdout, stderr|
    STDOUT.print stdout
    STDERR.print stderr
  end
end; nil

mwrock@ubuwrock:~$ irb
2.2.1 :001 &gt; require 'winrm'
 =&gt; true
2.2.1 :002 &gt; opts = {
2.2.1 :003 &gt;       endpoint: 'http://127.0.0.1:55985/wsman',
2.2.1 :004 &gt;       user: 'vagrant',
2.2.1 :005 &gt;       password: 'vagrant'
2.2.1 :006?&gt;   }
 =&gt; {:endpoint=&gt;"http://127.0.0.1:55985/wsman", :user=&gt;"vagrant", :password=&gt;"vagrant"}
2.2.1 :007 &gt; conn = WinRM::Connection.new(opts); nil
 =&gt; nil
2.2.1 :008 &gt; conn.shell(:powershell) do |shell|
2.2.1 :009 &gt;       shell.run('$PSVersionTable') do |stdout, stderr|
2.2.1 :010 &gt;           STDOUT.print stdout
2.2.1 :011?&gt;         STDERR.print stderr
2.2.1 :012?&gt;       end
2.2.1 :013?&gt;   end; nil

Name                           Value
----                           -----
PSVersion                      4.0
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.34209
BuildVersion                   6.3.9600.17400
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.2
</pre>
  




  <p>Note that this is run from an Ubuntu 14.04 host targeting a Windows 2012R2 VirtualBox VM. No Windows host required.</p><h2>100% Ruby PSRP client implementation</h2><p>So for the four people reading this who know what this means: yaaay! woohoo! you go girl!! we talk PSRP now. yo.</p><h2>No...Really...why should I care about this?</h2><p>I'll be honest, there are tons of scenarios where PSRP will not make any difference, but here are some tangible points where it undoubtedly makes things better:</p><ul><li>File copy can be orders of magnitude faster. If you use the winrm-fs gem to copy files to a remote windows machine, you may see transfer speeds as much as 30x faster. This will be more noticeable transferring files larger than several kilobytes. For example, the PSRP specification PDF - about 4 and a half MB - takes about 4 seconds via this release vs 2 minutes on the previous release on my work laptop. For details as to why PSRP is so much faster, see <a href="http://www.hurryupandwait.io/blog/a-look-under-the-hood-at-powershell-remoting-through-a-ruby-cross-plaform-lens">this post</a>.</li><li>The WinRM gems can talk powershell to Windows Nano Server. The previous WinRM gem is unable to execute powershell commands against a Windows Nano server. If you are a test-kitchen user and would like to see this in action, clone <a href="https://github.com/mwrock/DSCTextfile">https://github.com/mwrock/DSCTextfile</a> and:</li></ul>
























  
    <pre class="source-code">bundle install
bundle exec kitchen verify</pre>
  




  <p>This will download my <a target="_blank" href="https://atlas.hashicorp.com/mwrock/boxes/WindowsNanoDSC">WindowsNanoDSC</a> vagrant box, provision it, converge a DSC file resource and test its success with <a target="_blank" href="https://github.com/pester/Pester">Pester</a>. You should notice that not only does the nano server's .box file download from the internet MUCH faster, but it also boots and converges several minutes faster than its Windows 2012R2 cousin.</p><p>Stay tuned for <a target="_blank" href="https://www.chef.io/">Chef</a>-based kitchen converges on Windows Nano!</p><ul><li>You can now execute multiple commands that operate in the same scope (runspace). This means you can share variables and imported commands from call to call, because calls share the same powershell runspace, whereas before every call ran in a separate powershell.exe process. The winrm-fs gem is an example of how this is useful.</li></ul>
























  
    <pre class="source-code">def stream_upload(input_io, dest)
  read_size = ((max_encoded_write - dest.length) / 4) * 3
  chunk, bytes = 1, 0
  buffer = ''
  shell.run(&lt;&lt;-EOS
    $to = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath("#{dest}")
    $parent = Split-Path $to
    if(!(Test-path $parent)) { mkdir $parent | Out-Null }
    $fileStream = New-Object -TypeName System.IO.FileStream -ArgumentList @(
        $to,
        [system.io.filemode]::Create,
        [System.io.FileAccess]::Write,
        [System.IO.FileShare]::ReadWrite
    )
    EOS
  )

  while input_io.read(read_size, buffer)
    bytes += (buffer.bytesize / 3 * 4)
    shell.run(stream_command([buffer].pack(BASE64_PACK)))
    logger.debug "Wrote chunk #{chunk} for #{dest}" if chunk % 25 == 0
    chunk += 1
    yield bytes if block_given?
  end
  shell.run('$fileStream.Dispose()')
  buffer = nil # rubocop:disable Lint/UselessAssignment

  [chunk - 1, bytes]
end

def stream_command(encoded_bytes)
  &lt;&lt;-EOS
    $bytes=[Convert]::FromBase64String('#{encoded_bytes}')
    $fileStream.Write($bytes, 0, $bytes.length)
  EOS
end
</pre>
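The read_size calculation above is pure Base64 arithmetic: every 3 raw bytes become 4 encoded characters, so each chunk reads only 3/4 of whatever command budget remains after the destination path. Here is a standalone sketch of that arithmetic; the budget and path below are made-up illustrative values, not constants from winrm-fs:

```ruby
# Base64 encodes 3 raw bytes as 4 characters, so to keep an encoded
# chunk within a command-size budget we read only 3/4 of that budget.
max_encoded_write = 8000                 # hypothetical command size budget
dest = 'C:/target/file.txt'              # 18 characters

read_size = ((max_encoded_write - dest.length) / 4) * 3

raw_chunk = 'x' * read_size
encoded   = [raw_chunk].pack('m0')       # strict Base64, no line breaks

puts encoded.length                      # 7980
puts encoded.length + dest.length <= max_encoded_write  # true
```

Because the encoded chunk can never exceed the remaining budget, each write issued by the upload loop stays within the maximum command size.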
  




  <p>Here we issue some powershell to create a FileStream, then in ruby we iterate over an IO class and write to that FileStream instance as many times as we need and then dispose of the stream when done. Before, that FileStream would be gone on the next call and instead we'd have to open the file on each trip.</p><ul dir="ltr"><li>Non-administrator users can execute commands. Because the former WinRM implementation was based on winrs, a user had to be an administrator in order to authenticate. Now non-admin users, as long as they belong to the correct remoting users group, can execute remote commands.</li></ul><h2>This is just the beginning</h2><p dir="ltr">In and of itself, a WinRM release may not be that exciting, but it lays the groundwork for some great experiences. I can't wait to explore testing infrastructure code on windows nano further and, sure, sane file transfer rates sound pretty great.</p>]]></description></item><item><title>How can we most optimally shrink a Windows base image?</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 14 Aug 2016 07:07:05 +0000</pubDate><link>http://www.hurryupandwait.io/blog/how-can-we-most-optimally-shrink-a-windows-base-image</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:57b00596f5e231afce97234c</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png" data-image-dimensions="752x763" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=1000w" width="752" height="763" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1471157368743-YEK84BLQ7CSNSYBSFZZB/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>I have spent a lot of time trying to get my Windows vagrant boxes as small as possible. I blogged pretty extensively on <a target="_blank" href="http://www.hurryupandwait.io/blog/in-search-of-a-light-weight-windows-vagrant-box">what optimizations one can make</a> and <a target="_blank" href="http://www.hurryupandwait.io/blog/creating-windows-base-images-for-virtualbox-and-hyper-v-using-packer-boxstarter-and-vagrant">how those optimizations can be automated</a> with <a target="_blank" href="https://www.packer.io">Packer</a>. Over the last week I've leveraged that automation to collect data on exactly how much each technique I employ saves in the final image. The results, I think, are very interesting.</p><h2>Diving into the data</h2><p>The metrics above reflect the savings yielded in a fully patched Windows 2012 R2 VirtualBox base image. The total size of the final compressed .box <a target="_blank" href="https://www.vagrantup.com/">vagrant</a> file with no optimizations was 7.71GB and 3.71GB with all optimizations applied.</p><p>I have previously <a target="_blank" href="http://www.hurryupandwait.io/blog/in-search-of-a-light-weight-windows-vagrant-box">blogged the details</a> involved in each optimization, and my Packer templates that automate this process can be <a target="_blank" href="https://github.com/mwrock/packer-templates">found online</a>.&nbsp;Let me quickly summarize these optimizations in order of biggest bang for the buck:</p><ul><li>SxS Cleanup (54%): The Windows SxS folder can grow larger and larger over time. This has historically been a major problem, and until not too long ago, the only remedy was to periodically repave the OS. Among other things, this folder includes backups for every installed update so that they can be undone if necessary. The fact of the matter is that most will never roll back any update. Windows now exposes commands and scheduled tasks that allow us to periodically trim this data. Naturally this will have the most impact the more updates have been installed.</li><li>Removing Windows features or Features On Demand (25%): Windows ships with almost all installable features and roles on disk. In many/most cases, a server is built for a specific task and its dormant, unenabled features simply take up valuable disk space. Another relatively new feature in Windows management is the ability to totally remove these features from disk. They can always be restored later either via external media or Windows Update.</li><li>Optimize Disk (13%): This is basically a defragmenter and optimizes the disk according to its used sectors. This will likely be more important as disk activity increases between OS install and the time of optimization.</li><li>Removing Junk/Temp files (5%): Here we simply delete the temp folders and a few other unnecessary files and directories created during setup. This will likely have minimal impact if the server has not undergone much true usage.</li><li>Removing the Page File (3%): This is a bit misleading because the server will have a page file. We just make sure that the image in the .box file has no page file (possibly a GB in space but compresses to far less). On first boot, the page file will be recreated.</li></ul><h2>The importance of "0ing" out unused space</h2><p>This is something that is of particular importance for VirtualBox images. This is the act of literally flipping every unused bit on disk to 0. Otherwise the image file treats this space as used in a final compressed .box file. The fascinating fact here is that if you do NOT do this, you save NOTHING. At least that is the case for VirtualBox, which is all I measured; it does not apply to Hyper-V. So our 7.71 GB original patched OS with all optimizations applied but without this step compressed to 7.71GB. 0% savings.</p><h2>This is small?</h2><p>Let's not kid ourselves. As hard as we try to chip away at a Windows base image, we are still left with a beast of an image. 
Sure, we can cut a fully patched Windows image almost in half, but it is still just under 4 GB. That's huge, especially compared to most bare Linux base images.</p><p>If you want to experience a truly small Windows image, you will want to explore <a target="_blank" href="https://blogs.technet.microsoft.com/windowsserver/2015/04/08/microsoft-announces-nano-server-for-modern-apps-and-cloud/">Windows Nano Server</a>. Only then will we achieve orders of magnitude of savings and enter into the Linux "ballpark". The <a target="_blank" href="http://www.hurryupandwait.io/blog/a-packer-template-for-windows-nano-server-weighing-300mb?rq=nano">vagrant boxes I have created for nano</a> weigh in at about 300MB and also boot up very quickly.</p><h2>Your images may vary</h2><p>The numbers above reflect a particular Windows version and hypervisor. Different versions and hypervisors will assuredly yield different metrics.</p><h3>There is less to optimize on newer OS versions</h3><p>This is largely due to the number of Windows updates available. Today, a fresh Windows 2012 R2 image will install just over 220 updates compared to only 5 on Windows 2016 Technical Preview 5. 220 updates take up a lot of space, scattering bits all over the disk.</p><h3>Different Hypervisor file types are more efficient than others</h3><p>A VirtualBox .vmdk will not automatically optimize as well as a Hyper-V .vhd/x. Thus, come compression time, the final vagrant VirtualBox .box file will be much larger if you don't take steps yourself to optimize the disk.</p>]]></description></item><item><title>Creating a Windows Server 2016 Vagrant box with Chef and Packer</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 07 Aug 2016 07:12:03 +0000</pubDate><link>http://www.hurryupandwait.io/blog/creating-a-windows-server-2016-vagrant-box-with-chef-and-packer</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:57a63fdcd2b857268ea1eea3</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png" data-image-dimensions="400x400" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=1000w" width="400" height="400" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470553844950-KDKDRRQ5AVIQJCVB2AER/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>I've been using <a target="_blank" href="https://www.packer.io/">Packer</a> for a bit over a year now to create the Windows 2012 R2 <a target="_blank" href="https://www.vagrantup.com/">Vagrant</a> box that I regularly use for testing various server configuration scripts. My packer template has been evolving over time but is composed of some <a target="_blank" href="http://boxstarter.org/">Boxstarter</a> package setup and a few ad hoc Powershell scripts. I have blogged about this process <a target="_blank" href="http://www.hurryupandwait.io/blog/creating-windows-base-images-for-virtualbox-and-hyper-v-using-packer-boxstarter-and-vagrant">here</a>. This has been working great, but I'm curious how it would look differently if I used <a target="_blank" href="https://www.chef.io/">Chef</a> instead of Boxstarter and random powershell.</p><p>Chef is a much more mature configuration management platform than Boxstarter (which I would not even label as configuration management). My belief is that breaking up what I have now into Chef resources and recipes will make the image configuration more composable and easier to read. Also, as an engineer employed by Chef, I'd like to be able to walk users through how this would look using Chef.</p><p>To switch things up further, I'm conducting this experimentation on a whole new OS - Windows Server 2016 TP5. This means I don't have to worry about breaking my other templates, my windows updates will be much smaller (5 updates vs &gt; 220) and I can use <a target="_blank" href="https://docs.chef.io/resource_dsc_resource.html">DSC resources</a> for much of the configuring.&nbsp;So this post will guide you through using Chef and Packer together and dealing with the "gotchas" that I ran into. 
The actual template can be found on github <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/vbox-2016.json">here</a>.</p><p>If you want to "skip to the end," I have uploaded both Hyper-V and VirtualBox providers to <a target="_blank" href="https://atlas.hashicorp.com/mwrock/boxes/Windows2016">Atlas</a> and you can use them with vagrant via:</p>
























  
    <pre class="source-code"> vagrant init mwrock/Windows2016
 vagrant up</pre>
  




  <h2>Preparing for the Chef Provisioner</h2><p>There are a couple of things that need to happen before our Chef recipes can run.</p><h3>Dealing with cookbook dependencies</h3><p>I've taken most of the scripts that I run in a packer run and have broken them down into various Chef recipes encapsulated in a single cookbook I include in my packer template repository. Packer's Chef provisioners will copy this cookbook to the image being built, but what about other cookbooks it depends on? This cookbook uses the <a target="_blank" href="https://github.com/chef-cookbooks/windows">windows cookbook</a>, the <a target="_blank" href="https://github.com/criteo-cookbooks/wsus-client">wsus-client cookbook</a>, their dependencies, and so on, but packer does not expose any mechanism for discovering those cookbooks and downloading them.</p><p>I experimented with three different approaches to fetching these dependencies. The first two really did the same thing: installed git and then cloned those cookbooks onto the image. The first method I tried did this in a simple powershell provisioner and the second method used a Chef recipe. The downsides to this approach were:</p><ul><li>I had to know upfront what the exact dependency tree was and each git repo url.</li><li>I also would either have to solve all the versions myself or just settle for the HEAD of master for all cookbook dependencies.</li></ul><p>Well, there is a well-known tool that solves these problems: <a target="_blank" href="http://berkshelf.com/">Berkshelf</a>. So my final strategy was to run <em>berks vendor</em> to discover the correct dependencies and their versions and download them locally to vendor/cookbooks, which we keep out of source control:</p>
























  
    <pre class="source-code">C:\dev\packer-templates [master]&gt; cd .\cookbooks\packer-templates\
C:\dev\packer-templates\cookbooks\packer-templates [master]&gt; berks vendor ../../vendor/cookbooks
Resolving cookbook dependencies...
Fetching 'packer-templates' from source at .
Fetching cookbook index from https://supermarket.chef.io...
Using chef_handler (1.4.0)
Using windows (1.44.1)
Using packer-templates (0.1.0) from source at .
Using wsus-client (1.2.1)
Vendoring chef_handler (1.4.0) to ../../vendor/cookbooks/chef_handler
Vendoring packer-templates (0.1.0) to ../../vendor/cookbooks/packer-templates
Vendoring windows (1.44.1) to ../../vendor/cookbooks/windows
Vendoring wsus-client (1.2.1) to ../../vendor/cookbooks/wsus-client</pre>
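For reference, the Berksfile that drives a berks vendor run like this can be as small as two lines, assuming the cookbook declares its dependencies in its metadata.rb (this is the conventional form, not necessarily the exact file in the repo):

```ruby
# Berksfile: resolve the dependencies declared in metadata.rb
# against the public Chef Supermarket.
source 'https://supermarket.chef.io'
metadata
```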
  




  <p>Now I include both my packer-templates cookbook and the vendored dependent cookbooks in the chef-solo provisioner definition:</p>
























  
    <pre class="source-code">"provisioners": [
  {
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks", "vendor/cookbooks"],
    "guest_os_type": "windows",
    "run_list": [
      "wsus-client::configure",
      ...</pre>
  




  <h3>Configuring WinRM</h3><p>As we will find as we make our way to a completed vagrant .box file, there are a few key places where we will need to change some machine state outside of Chef. The first of these is configuring WinRM. Before you can use either the <a target="_blank" href="https://www.packer.io/docs/provisioners/chef-solo.html">chef-solo provisioner</a> or a simple <a target="_blank" href="https://www.packer.io/docs/provisioners/powershell.html">powershell provisioner</a>, WinRM must be configured correctly. The Go WinRM library cannot authenticate via NTLM, so we must enable Basic Authentication and allow unencrypted traffic. Note that my template removes these settings prior to shutting down the vm before the image is exported, since my testing scenarios have NTLM authentication available.</p><p>Since we cannot do this from any provisioner, we do this in the vm build step. We add a script to the &lt;FirstLogonCommands&gt; section of our <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/answer_files/2016/Autounattend.xml">windows answer file</a>. This is the file that automates the initial install of windows, so we are not prompted to enter things like admin password, locale, timezone, etc:</p>
























  
    <pre class="source-code">&lt;FirstLogonCommands&gt;
  &lt;SynchronousCommand wcm:action="add"&gt;
     &lt;CommandLine&gt;cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\winrm.ps1&lt;/CommandLine&gt;
     &lt;Order&gt;1&lt;/Order&gt;
  &lt;/SynchronousCommand&gt;
&lt;/FirstLogonCommands&gt;</pre>
  




  <p>The <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/scripts/winrm.ps1">winrm.ps1</a> script looks like:</p>
























  
    <pre class="source-code">netsh advfirewall firewall add rule name="WinRM-HTTP" dir=in localport=5985 protocol=TCP action=allow
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
</pre>
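Packer knows when to move on by polling the machine until a WinRM connection succeeds. As a rough illustration of that kind of readiness probe, here is a plain TCP reachability check in Ruby (my own sketch; packer's real wait loop additionally negotiates the WinRM protocol rather than just checking the port):

```ruby
require 'socket'
require 'timeout'

# Returns true when something is listening on the given port --
# a rough stand-in for packer's "is WinRM up yet?" polling.
def port_open?(host, port, wait = 2)
  Timeout.timeout(wait) do
    TCPSocket.new(host, port).close
    true
  end
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Timeout::Error, SocketError
  false
end

# 5985 is the default WinRM HTTP port opened by the firewall rule above.
puts port_open?('127.0.0.1', 5985)
```

Once something answers on 5985, a provisioner can begin issuing real WinRM requests.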
  




  <p>As soon as this runs on our packer build, packer will detect that WinRM is accessible and will move on to provisioning.</p><h2>Choosing a Chef provisioner</h2><p>There are two Chef-flavored provisioners that come "in the box" with packer. The <a target="_blank" href="https://www.packer.io/docs/provisioners/chef-client.html">chef-client provisioner</a> is ideal if you store your cookbooks on a Chef server. Since I am storing the cookbook with the packer-templates to be copied to the image, I am using the <a target="_blank" href="https://www.packer.io/docs/provisioners/chef-solo.html">chef-solo provisioner</a>.</p><p>Both provisioners will install the Chef client on the windows VM and will then converge all recipes included in the runlist specified in the template:</p>
























  
    <pre class="source-code">  "provisioners": [
    {
      "type": "chef-solo",
      "cookbook_paths": ["cookbooks", "vendor/cookbooks"],
      "guest_os_type": "windows",
      "run_list": [
        "wsus-client::configure",
        "packer-templates::install_ps_modules",
        "packer-templates::vbox_guest_additions",
        "packer-templates::uninstall_powershell_ise",
        "packer-templates::delete_pagefile"
      ]
    },
</pre>
  




  <h2>Windows updates and other WinRM-unfriendly tasks</h2><p>The Chef provisioners invoke the Chef client via WinRM. This means that all of the <a target="_blank" href="http://www.hurryupandwait.io/blog/safely-running-windows-automation-operations-that-typically-fail-over-winrm-or-powershell-remoting">restrictions of WinRM</a> apply here. That means no windows updates, no installing .net, no installing SQL Server, and a few other edge case restrictions.</p><p>We can work around these restrictions by isolating these unfriendly commands and running them directly via the powershell provisioner set to run "elevated":</p>
























  
    <pre class="source-code">    {
      "type": "powershell",
      "script": "scripts/windows-updates.ps1",
      "elevated_user": "vagrant",
      "elevated_password": "vagrant"
    },
</pre>
  




  <p>When elevated credentials are used, the powershell script is run via a scheduled task and therefore runs in the context of a local user, free from the fetters of WinRM. So we start by converging a Chef runlist with just enough configuration to set things up. This includes turning off automatic updates by using the wsus-client::configure recipe so that manually running updates will not interfere with automatic updates kicked off by the vm. The initial runlist also installs the <a target="_blank" href="https://gallery.technet.microsoft.com/scriptcenter/2d191bcd-3308-4edd-9de2-88dff796b0bc">PSWindowsUpdate</a> module, which we will use in the above powershell provisioner.</p><p>Here is our <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/cookbooks/packer-templates/recipes/install_ps_modules.rb">install_ps_modules.rb</a> recipe that installs the Nuget package provider so we can install the PSWindowsUpdate module and the other DSC modules we will need during our packer build:</p>
























  
    <pre class="source-code">powershell_script 'install Nuget package provider' do
  code 'Install-PackageProvider -Name NuGet -Force'
  not_if '(Get-PackageProvider -Name Nuget -ListAvailable -ErrorAction SilentlyContinue) -ne $null'
end

%w{PSWindowsUpdate xNetworking xRemoteDesktopAdmin xCertificate}.each do |ps_module|
  powershell_script "install #{ps_module} module" do
    code "Install-Module #{ps_module} -Force"
    not_if "(Get-Module #{ps_module} -list) -ne $null"
  end
end</pre>
  




  <p>The windows-updates.ps1 looks like:</p>
























  
    <pre class="source-code">Get-WUInstall -WindowsUpdate -AcceptAll -UpdateType Software -IgnoreReboot</pre>
  




  <h2>Multiple Chef provisioning blocks</h2><p>After windows updates, I move back to Chef to finish off the provisioning:</p>
























  
    <pre class="source-code">    {
      "type": "chef-solo",
      "remote_cookbook_paths": [
        "c:/windows/temp/packer-chef-client/cookbooks-0",
        "c:/windows/temp/packer-chef-client/cookbooks-1"
      ],
      "guest_os_type": "windows",
      "skip_install": "true",
      "run_list": [
        "packer-templates::enable_file_sharing",
        "packer-templates::remote_desktop",
        "packer-templates::clean_sxs",
        "packer-templates::add_postunattend",
        "packer-templates::add_pagefile",
        "packer-templates::set_local_account_token_filter_policy",
        "packer-templates::remove_dirs",
        "packer-templates::add_setup_complete"
      ]
    },</pre>
  




  <p>A couple of important things to include when running the Chef provisioner more than once are to tell it not to install Chef and to reuse the cookbook directories it used on the first run.</p><p>For some reason, the Chef provisioners will download and install Chef regardless of whether or not Chef is already installed. Also, on the first Chef run, packer copied the cookbooks from your local environment to the vm. When it copies these cookbooks on subsequent attempts, it's incredibly slow (several minutes). I'm assuming this is due to file checksum checking logic in the Go library. You can avoid this sluggish file copy by just referencing the remote cookbook paths set up by the first run with the remote_cookbook_paths array shown above.</p><h2>Cleaning up</h2><p>Once the image configuration is where you want it to be, you might (or might not) want to remove the Chef client. I try to optimize my packer setup for minimal size and the chef-client is rather large (a few hundred MB). Now you can't remove Chef with Chef. What kind of sick world would that be? So we use the powershell provisioner again to remove Chef:</p>
























  
    <pre class="source-code">Write-Host "Uninstall Chef..."
if(Test-Path "c:\windows\temp\chef.msi") {
  Start-Process MSIEXEC.exe '/uninstall c:\windows\temp\chef.msi /quiet' -Wait
}</pre>
  




  <p>and then clean up the disk before it's exported and compacted into its final .box file:</p>
























  
    <pre class="source-code">Write-Host "Cleaning Temp Files"
try {
  Takeown /d Y /R /f "C:\Windows\Temp\*"
  Icacls "C:\Windows\Temp\*" /GRANT:r administrators:F /T /c /q  2&gt;&amp;1
  Remove-Item "C:\Windows\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue
} catch { }

Write-Host "Optimizing Drive"
Optimize-Volume -DriveLetter C

Write-Host "Wiping empty space on disk..."
$FilePath="c:\zero.tmp"
$Volume = Get-WmiObject win32_logicaldisk -filter "DeviceID='C:'"
$ArraySize= 64kb
$SpaceToLeave= $Volume.Size * 0.05
$FileSize= $Volume.FreeSpace - $SpacetoLeave
$ZeroArray= new-object byte[]($ArraySize)
 
$Stream= [io.File]::OpenWrite($FilePath)
try {
   $CurFileSize = 0
    while($CurFileSize -lt $FileSize) {
        $Stream.Write($ZeroArray,0, $ZeroArray.Length)
        $CurFileSize +=$ZeroArray.Length
    }
}
finally {
    if($Stream) {
        $Stream.Close()
    }
}
 
Del $FilePath
</pre>
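The zero-wiping loop above leaves 5% of the volume free and writes the 64 KB zero array until the target size is reached. The same arithmetic sketched in Ruby, with hypothetical volume numbers standing in for the Win32_LogicalDisk WMI query:

```ruby
# Hypothetical volume figures; the real script reads these from
# the Win32_LogicalDisk WMI class.
volume_size = 40 * 1024**3        # 40 GiB volume
free_space  = 20 * 1024**3        # 20 GiB free
chunk_size  = 64 * 1024           # 64 KB zero buffer, as in the script

space_to_leave = volume_size * 0.05           # keep 5% of the volume free
bytes_to_write = free_space - space_to_leave  # size of c:\zero.tmp
chunks         = (bytes_to_write / chunk_size).ceil

puts chunks            # 294912 writes of the 64 KB zero array
```

Leaving that 5% margin matters: filling the volume completely can destabilize the running OS before the zero file is deleted.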
  




  <h2>What just happened?</h2><p>All of the Chef recipes, powershell scripts and packer templates can be cloned from my <a target="_blank" href="https://github.com/mwrock/packer-templates">packer-templates github repo</a>, but in summary, this is what they all did:</p><ul><li>Installed Windows</li><li>Installed all Windows updates</li><li>Turned off automatic updates</li><li>Installed VirtualBox guest additions (only in <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/vbox-2016.json">vbox-2016.json</a> template)</li><li>Uninstalled Powershell ISE (I don't use this)</li><li>Removed the page file from the image (it will re-create itself on <em>vagrant up</em>)</li><li>Removed all Windows features not enabled</li><li>Enabled file sharing firewall rules so you can map drives to the vm</li><li>Enabled Remote Desktop and its firewall rule</li><li>Cleaned up the Windows SxS directory of update backup files</li><li>Set the LocalAccountTokenFilterPolicy so that local users can remote to the vm via NTLM</li><li>Removed "junk" files and folders</li><li>Wiped all unused space on disk (might seem weird but makes the final compressed .box file smaller)</li></ul><p>Most of this was done with Chef resources and we were also able to make ample use of DSC. For example, here is our <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/cookbooks/packer-templates/recipes/remote_desktop.rb">remote_desktop.rb</a> recipe:</p>
























  
    <pre class="source-code">dsc_resource "Enable RDP" do
  resource :xRemoteDesktopAdmin
  property :UserAuthentication, "Secure"
  property :ensure, "Present"
end

dsc_resource "Allow RDP firewall rule" do
  resource :xfirewall
  property :name, "Remote Desktop"
  property :ensure, "Present"
  property :enabled, "True"
end</pre>
  




  <h2>Testing provisioning recipes with Test-Kitchen</h2><p>One thing I've found very important is to be able to test packer provisioning scripts outside of an actual packer run. Think of this: even if you pare down your provisioning scripts to almost nothing, a packer run will always have to run through the initial windows install. That's gonna be several minutes. Then after the packer run, you must wait out the image export, and if you are using the vagrant post-provisioner, it's gonna be several more minutes while the .box file is compressed. So being able to test your provisioning scripts in an isolated environment that can be spun up relatively quickly can save quite a bit of time.</p><p>I have found that working on a packer template includes three stages:</p><ol><li>Creating a very basic box with next to no configuration</li><li>Testing provisioning scripts in a premade VM</li><li>A full Packer run with the provisioning scripts</li></ol><p>There may be some permutations of this pattern. For example, I might remove windows update until the very end.</p><p><a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a> comes in real handy in step #2. You can also use the box produced by step #1 in your Test-Kitchen run. Depending on whether I'm building a Hyper-V or VirtualBox provider, I'll go about this differently. Either way, a simple call to <em>kitchen converge</em> can be much faster than <em>packer build</em>.</p><h3>Using kitchen-hyperv to test scripts on Hyper-V</h3><p>The <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/cookbooks/packer-templates/.kitchen.yml">.kitchen.yml</a> file included in my <a target="_blank" href="https://github.com/mwrock/packer-templates">packer-templates repo</a> uses the <a target="_blank" href="https://github.com/test-kitchen/kitchen-hyperv">kitchen-hyperv</a> driver to test my Chef recipes that provision the image:</p>
























  
    <pre class="source-code">---
driver:
  name: hyperv
  parent_vhd_folder: '../../output-hyperv-iso/virtual hard disks'
  parent_vhd_name: packer-hyperv-iso.vhdx</pre>
  




  <p>If I'm using a <a target="_blank" href="http://www.hurryupandwait.io/blog/creating-hyper-v-images-with-packer">hyperv builder</a> to first create a minimal image, packer puts the built .vhdx file in output-hyperv-iso/virtual hard disks. I can use kitchen-hyperv and point it at that image and it will create a new VM using that vhdx file as the parent of a new differencing disk where I can test my recipes.&nbsp;I can then have test-kitchen run these recipes in just a few minutes or less, which is a much tighter feedback loop than packer provides.</p><h3>Using kitchen-vagrant to test on Virtualbox</h3><p>If you create a .box file with a minimal packer template, it will output that .box file in the root of the packer-template repo. You can add that box to your local vagrant repo by running:</p>
























  
    <pre class="source-code">vagrant box add 2016 .\windows2016min-virtualbox.box</pre>
  




  <p>Now you can test against this with a test-kitchen driver config that looks like:</p>
























  
    <pre class="source-code">---
driver:
  name: vagrant
  box: 2016</pre>
  




  <h2>Check out my talk on creating windows vagrant boxes with packer at Hashiconf!</h2><p dir="ltr">I'll be talking on <a target="_blank" href="https://www.hashiconf.com/talks/lightweight-windows-vagrant-boxes-with-packer.html">this topic</a> next month (September 2016) at <a target="_blank" href="https://www.hashiconf.com/">Hashiconf</a>. You can use my discount code SPKR-MWROCK for 15% off General Admission tickets.</p>]]></description></item><item><title>Creating Hyper-V images with Packer</title><dc:creator>Matt Wrock</dc:creator><pubDate>Tue, 02 Aug 2016 15:27:27 +0000</pubDate><link>http://www.hurryupandwait.io/blog/creating-hyper-v-images-with-packer</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:579ee11d893fc0665d9cd838</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png" data-image-dimensions="663x361" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=1000w" width="663" height="361" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1470151526883-Q1L748BLNLQTTH05LGIV/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>Over the last week I've been playing with some new significant changes to my <a target="_blank" href="https://www.packer.io/">packer</a> build process. This includes replacing the <a target="_blank" href="http://boxstarter.org/">Boxstarter</a> based install with a <a target="_blank" href="https://www.chef.io/">Chef</a> cookbook, and also using a native Hyper-V builder. I'll be blogging about the Chef stuff later. This post will discuss how to use <a target="_blank" href="https://github.com/taliesins">Taliesin Sisson's</a> PR <a target="_blank" href="https://github.com/mitchellh/packer/pull/2576">#2576</a> to build Hyper-V images with Hyper-V.</p><h2>The state of Hyper-V builders</h2><p>Packer currently comes bundled with builders for the major cloud providers, some local hypervisors like VirtualBox, VMware, Parallels, and QEMU as well as OpenStack and Docker. There is no built-in Hyper-V builder - the native hypervisor on Windows.</p><p>The packer ecosystem does provide a plugin model for supporting third party builders. In early 2015 <a target="_blank" href="https://msopentech.com/blog/2015/02/20/using-packer-hyper-v-microsoft-azure/">it was announced</a> that MSOpenTech had built a <a target="_blank" href="https://github.com/Microsoft/packer-hyperv">usable Hyper-V builder plugin</a> and there were hopes to pull that into Packer core. This never happened. I personally see this like two technology asteroids rocketing past each other. The Hyper-V builder came in on a version of Go that Packer did not yet support, but by the time it did, packer and Hyper-V had moved on.</p><p>I <a target="_blank" href="http://www.hurryupandwait.io/blog/creating-windows-base-images-for-virtualbox-and-hyper-v-using-packer-boxstarter-and-vagrant">started playing</a> with packer in July of 2015, and when I tried this builder on Windows 10 (a technical preview at the time) it just did not work. 
Likely this is because some things in Hyper-V, like its virtual machine file format, had completely changed. Hard to say, but as a Packer newcomer wanting to just get an image built, I quickly moved away from using a Hyper-V builder.</p><h2>Converting VirtualBox to Hyper-V</h2><p>After ditching the hope of building Hyper-V images with packer, I resurrected my daughter's half-busted laptop to become my VirtualBox Packer builder. It worked great.</p><p>I also quickly discovered that I could simply convert the VirtualBox images to VHD format and create a Vagrant Hyper-V provider box without Hyper-V. I blogged about this procedure <a target="_blank" href="http://www.hurryupandwait.io/blog/creating-a-hyper-v-vagrant-box-from-a-virtualbox-vmdk-or-vdi-image">here</a> and I think it's still a good option for creating multiple providers on a single platform.</p><p>It's great to take the exact same image that provisions a VirtualBox VM to also provision a Hyper-V VM. However, it's sometimes a pain to have to switch over to a different environment. My day-to-day dev environment uses Hyper-V and ideally this is where I would develop and test Packer builds as well.</p><h2>A Hyper-V builder that works</h2><p>So early this year I started hearing mumblings of a <a target="_blank" href="https://github.com/mitchellh/packer/pull/2576">new PR</a> to packer for an updated Hyper-V builder. My VirtualBox setup worked fine and I needed to produce both VirtualBox and Hyper-V providers anyways, so I was not highly motivated to try out this new builder.</p><p>Well next month <a target="_blank" href="https://www.hashiconf.com/talks/lightweight-windows-vagrant-boxes-with-packer.html">I will be speaking</a> at <a href="https://www.hashiconf.com/">Hashiconf</a> about creating windows vagrant boxes with packer. It sure would be nice not to have to bring my VirtualBox rig and just use a Hyper-V builder on my personal laptop. 
(Oh and hey:&nbsp;Use my discount code SPKR-MWROCK for 15% off General Admission tickets to Hashiconf!)</p><p>So I finally took this PR for a spin last week and I was pretty amazed when it just worked. One thing I have noticed in "contemporary devops tooling" is that the chances of the tooling working on Windows are sketchy and as for Hyper-V? Good luck! No one uses it in the communities where I mingle (oh yeah...except me it sometimes seems). If few are testing the tooling and most building the tooling are not familiar with Windows environment nuances, it's not a scenario optimized for success.</p><h2>Using PR #2576 to build Hyper-V images</h2><p>For those unfamiliar with working with <a target="_blank" href="https://golang.org/">Go</a> source builds, getting the PR built and working is probably the biggest blocker to getting started. It's really not that bad at all, and here is a step-by-step walkthrough of building the PR:</p><ol><li>Install golang using <a target="_blank" href="https://chocolatey.org/">chocolatey</a>:&nbsp;<em>cinst golang -y</em>. 
This puts Go in <em>c:\tools\go</em></li><li>Create a directory for Go development: c:\dev\go, and set $env:gopath to that path</li><li>From that path run <em>go get github.com/mitchellh/packer</em>, which will put packer's master branch in c:\dev\go\src\github.com\mitchellh\packer</li><li>Navigate to that directory and add a git remote to Taliesin Sisson's PR branch: <em>git remote add hyperv https://github.com/taliesins/packer</em></li><li>Run <em>git fetch hyperv</em> and then <em>git checkout hyperv.&nbsp;</em>Now the code for this PR is on disk</li><li>Build it with <em>go build -o bin/packer.exe .</em></li><li>Now the built packer.exe is at <em>C:\dev\go\src\github.com\mitchellh\packer\bin\packer.exe</em></li></ol><p>You can now run <em>C:\dev\go\src\github.com\mitchellh\packer\bin\packer.exe build</em> and this builder will be available!</p><h2>Things to know</h2><p>If you have used the VirtualBox builder, this builder is really not much different at all. The only thing that surprised and tripped me up a bit at first is that unless you configure it differently, the builder will create a new switch to be used by the VMs it creates. This switch may not be able to access the internet and your build might break. You can easily avoid this and use an existing switch by using the <em>switch_name</em> setting.</p><h2>A working template</h2><p>As I mentioned above, I've been working on using Chef instead of Boxstarter to provision the packer image. I've been testing this by building a Windows Server 2016 TP5 image. <a target="_blank" href="https://github.com/mwrock/packer-templates/blob/master/hyperv-2016.json">Here</a> is the Hyper-V template. The builder section is as follows:</p>
























  
    <pre class="source-code">  "builders": [
    {
      "type": "hyperv-iso",
      "guest_additions_mode": "disable",
      "iso_url": "{{ user `iso_url` }}",
      "iso_checksum": "{{ user `iso_checksum` }}",
      "iso_checksum_type": "md5",
      "ram_size_mb": 2048,
      "communicator": "winrm",
      "winrm_username": "vagrant",
      "winrm_password": "vagrant",
      "winrm_timeout": "12h",
      "shutdown_command": "C:/Windows/Panther/Unattend/packer_shutdown.bat",
      "shutdown_timeout": "15m",
      "switch_name": "internal_switch",
      "floppy_files": [
        "answer_files/2016/Autounattend.xml",
        "scripts/winrm.ps1"
      ]
    }
  ]</pre>
  




  <h2>Documentation</h2><p>Fortunately this PR includes updated documentation for the builder. You can view it in markdown <a target="_blank" href="https://github.com/taliesins/packer/blob/483dfd8d17c0929cd6ff88cbf4935ebca32f1139/website/source/docs/builders/hyperv-iso.html.markdown">here</a>.</p>]]></description></item><item><title>A look under the hood at Powershell Remoting through a cross plaform lens</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sat, 11 Jun 2016 22:45:25 +0000</pubDate><link>http://www.hurryupandwait.io/blog/a-look-under-the-hood-at-powershell-remoting-through-a-ruby-cross-plaform-lens</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:575b9e1fc2ea51b70d3fa47b</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png" data-image-dimensions="400x415" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=1000w" width="400" height="415" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465684993013-YTZNF2A1SHCEEWBSS3XZ/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>Many Powershell enthusiasts don't realize that when they are using commands like <a target="_blank" href="https://technet.microsoft.com/en-us/library/hh849717.aspx">New-PsSession</a> and streaming pipelines to a powershell runspace on a remote machine, they are actually writing a binary message wrapped in a SOAP envelope that leverages a protocol with the namesake of Windows Vista. Not much over a year ago, I certainly didn't. This set of knowledge all began with needing to transfer files from a linux machine to a windows machine. In a pure linux world there is a well known tool for this called <a target="_blank" href="https://en.wikipedia.org/wiki/Secure_copy">SCP</a>. In Windows we map drives or <a target="_blank" href="http://poshcode.org/2216">stream bytes to a remote powershell session</a>. How do we get a file (or command for that matter) from one of these platforms to the other?</p><p>I was about to take a plunge to go deeper than I really wanted into a pool where I did not really care to swim. And today I emerge with a cross platform "partial"&nbsp;implementation of <a target="_blank" href="https://github.com/WinRb/WinRM/tree/winrm-v2">Powershell Remoting in Ruby</a>. No, not just <a target="_blank" href="https://msdn.microsoft.com/en-us/library/aa384426(v=vs.85).aspx">WinRM</a>, but a working <a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd357801.aspx">PSRP</a> client.</p><p>In this post I will cover how PSRP differs from its more familiar cross platform cousin WinRM, why it's of value, and how one can give it a try. Hopefully this will provide an interesting perspective into what Powershell Remoting looks like from an implementor's point of view.</p><h2>In the beginning there was WinRM</h2><p>While PSRP is a different protocol from WinRM (Windows Remote Management) with its own <a target="_blank" href="https://msdn.microsoft.com/en-us/library/dd357801.aspx">spec</a>, it cannot exist or be explained without WinRM. 
WinRM is a SOAP based web service defined by a protocol called Web Services Management Protocol Extensions for Windows Vista (<a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251526.aspx">WSMV</a>). I love that name. This protocol defines several different message types for performing different tasks and gathering different kinds of information on a remote instance. I'm going to focus here on the messages involved with invoking commands and collecting their output.</p><p>A typical WinRM based conversation for invoking commands goes something like this:</p><ol><li>Send a <a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251739.aspx">Create Shell</a>&nbsp;message and get the shell id from the response</li><li><a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251740.aspx">Create a command</a> in the shell sending the command and any arguments and grab the command id from the response</li><li><a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251741.aspx">Send a request for output</a> on the command id which may return streams (stdout and/or stderr) containing base64 encoded text.</li><li>Keep requesting output until the command state is done and examine the exit code.</li><li>Send a command <a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251743.aspx">termination signal</a></li><li>Send a <a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251746.aspx">delete shell</a> message</li></ol><p>The native windows tool (which nobody uses anymore) to speak pure winrm is <a target="_blank" href="https://technet.microsoft.com/en-us/library/hh875630(v=ws.11).aspx">winrs.exe</a>.</p>
























  
    <pre class="source-code">C:\dev\winrm [winrm-v2]&gt; winrs -r:http://192.168.137.10:5985 -u:vagrant -p:vagrant ipconfig

Windows IP Configuration

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : mshome.net
   Link-local IPv6 Address . . . . . : fe80::c11b:f734:5bd4:ab03%3
   IPv4 Address. . . . . . . . . . . : 192.168.137.10
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.137.1

</pre>
  




  <p>You can turn on analytical event log messages or watch a <a target="_blank" href="https://www.wireshark.org/">wireshark</a> transcript of the communication. One thing is for sure: you will see a lot of XML and a lot of namespace definitions. It's not fun to debug, but you'll learn to appreciate it after examining PSRP transcripts.</p><p>Here's an example create command message:</p>
























  
    <pre class="source-code">&lt;s:Envelope
 xmlns:s="http://www.w3.org/2003/05/soap-envelope"
 xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
 xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"&gt;
 &lt;s:Header&gt;
 &lt;wsa:To&gt;
 http://localhost:80/wsman
 &lt;/wsa:To&gt;
 &lt;wsman:ResourceURI s:mustUnderstand="true"&gt;
 http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd
 &lt;/wsman:ResourceURI&gt;
 &lt;wsa:ReplyTo&gt;
 &lt;wsa:Address s:mustUnderstand="true"&gt;
 http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous
 &lt;/wsa:Address&gt;
 &lt;/wsa:ReplyTo&gt;
 &lt;wsa:Action s:mustUnderstand="true"&gt;
 http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Command
 &lt;/wsa:Action&gt;
 &lt;wsman:MaxEnvelopeSize s:mustUnderstand="true"&gt;153600&lt;/wsman:MaxEnvelopeSize&gt;
 &lt;wsa:MessageID&gt;
 uuid:F8671978-E928-49DA-ADB8-5BF97EDD9535&lt;/wsa:MessageID&gt;
 &lt;wsman:Locale xml:lang="en-US" s:mustUnderstand="false" /&gt;
 &lt;wsman:SelectorSet&gt;
 &lt;wsman:Selector Name="ShellId"&gt;
 uuid:0A442A7F-4627-43AE-8751-900B509F0A1F
 &lt;/wsman:Selector&gt;
 &lt;/wsman:SelectorSet&gt;
 &lt;wsman:OptionSet xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;
 &lt;wsman:Option Name="WINRS_CONSOLEMODE_STDIN"&gt;TRUE&lt;/wsman:Option&gt;
 &lt;wsman:Option Name="WINRS_SKIP_CMD_SHELL"&gt;FALSE&lt;/wsman:Option&gt;
 &lt;/wsman:OptionSet&gt;
 &lt;wsman:OperationTimeout&gt;PT60.000S&lt;/wsman:OperationTimeout&gt;
 &lt;/s:Header&gt;
 &lt;s:Body&gt;
 &lt;rsp:CommandLine
 xmlns:rsp="http://schemas.microsoft.com/wbem/wsman/1/windows/shell"&gt;
 &lt;rsp:Command&gt;del&lt;/rsp:Command&gt;
 &lt;rsp:Arguments&gt;/p&lt;/rsp:Arguments&gt;
 &lt;rsp:Arguments&gt;
 d:\temp\out.txt
 &lt;/rsp:Arguments&gt;
 &lt;/rsp:CommandLine&gt;
 &lt;/s:Body&gt;
&lt;/s:Envelope&gt;</pre>
  




  <p>Oh yeah. That's good stuff. This runs <em>del /p d:\temp\out.txt</em>.</p><h3>Powershell over WinRM</h3><p>When you invoke a command over WinRM, you are running inside of a cmd.exe style shell. Just as you would inside a local cmd.exe, you can always run powershell.exe and pass it commands. Why would anyone ever do this? Usually it's because they are using a cross platform WinRM library and it's just the only way to do it.</p><p>There are popular libraries written for <a target="_blank" href="https://github.com/WinRb/WinRM">ruby</a>, <a target="_blank" href="https://github.com/diyan/pywinrm">python</a>, java, <a target="_blank" href="https://github.com/masterzen/winrm">Go</a> and others. Some of these <a target="_blank" href="https://github.com/WinRb/WinRM/blob/master/lib/winrm/command_executor.rb#L128">abstract the extra powershell.exe call</a> and make it feel like a true native powershell repl experience. The fact is that this works quite well, so why bother implementing a separate protocol?&nbsp;As I'll cover in a bit, PSRP is much more complicated than vanilla WSMV, so if you can get away with the simpler protocol, great.</p><h2>The limitations of WinRM</h2><p>There are a few key limitations with WinRM. Many of these limitations are the same limitations involved with cmd.exe:</p><h3>Multiple shells</h3><p>You have to open two shells (processes). First the command shell and then start up a powershell instance. This can be a performance suck, especially if you need to run several commands.</p><h3>Maximum command length</h3><p>The command line length is limited to 8k inside cmd.exe. Now you may ask, why in the world would you want to issue a command greater than 8192 characters? There are a couple common use cases here:</p><ol><li>You may have a long script (not just a single command) you want to run. 
However, this script is typically fed to the <em>-command</em> or <em>-EncodedCommand</em> argument of powershell.exe, so this entire script needs to stay within the 8k threshold. Why not just run the script as a file? Ha!...Glad you asked.</li><li>WinRM has no native means of copying files like SCP. So the <a target="_blank" href="https://github.com/WinRb/winrm-fs">common method</a> of copying files via WinRM is to base64 encode a file's contents and create a command that appends 8k chunks to a file.</li></ol><p>#2 is what sparked my interest in all of this. I just wanted to copy a damn file, so <a target="_blank" href="https://github.com/sneal">Shawn Neal</a>, <a target="_blank" href="https://github.com/fnichol">Fletcher Nichol</a> and I wrote a <a target="_blank" href="https://github.com/WinRb/winrm-fs">ruby gem</a> that leveraged WinRM to do just that. It basically does this a lot:</p>
























  
    <pre class="source-code">"a whole bunch of base64 text" &gt;&gt; c:\some\file.txt</pre>
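That chunked append can be sketched in Ruby. This is an illustrative toy, not the actual winrm-fs implementation; the chunk size and the echo-style append command are assumptions chosen simply to keep each generated command under cmd.exe's 8k limit:

```ruby
require 'base64'

# Keep each generated command comfortably under cmd.exe's ~8k limit,
# leaving headroom for the surrounding command text.
CHUNK_SIZE = 7_000

# Turn a file's content into a list of append commands, each carrying one
# base64 chunk. The remote side would later decode the .b64 file back into
# the original bytes.
def append_commands(content, remote_path)
  encoded = Base64.strict_encode64(content)
  encoded.scan(/.{1,#{CHUNK_SIZE}}/).map do |chunk|
    "echo #{chunk} >> \"#{remote_path}.b64\""
  end
end
```

Each of those commands is small enough to ship in a single WSMV command message, which is exactly why copying a large file this way takes so many round trips.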
  




  <p>It turns out that 8k is not a whole lot of data, and if you want to copy hundreds of megabytes or more, grab a book. We added some algorithms to make this as fast as possible, like compressing multiple files before transferring and extracting them on the other end. However, you just can't get around the 8k transfer size, and no performance trick is gonna make that fast.</p><h3>More Powershell streams than command streams</h3><p>Powershell supports much more than just stdout and stderr. It's got progress, verbose, etc, etc. The WSMV protocol has no rules for transmitting these other streams. So this means all streams other than the output stream are sent on stderr.</p><p>This can confuse some WinRM libraries and cause commands that are indeed successful to "appear" to fail. The trick is to "silence" these streams. For example, the ruby WinRM gem prepends all powershell scripts with:</p>
























  
    <pre class="source-code">$ProgressPreference = "SilentlyContinue"</pre>
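Under the hood, the gem ships scripts like this to powershell.exe via its -EncodedCommand argument, which expects the script base64-encoded as UTF-16LE text. A minimal Ruby sketch of that encoding (the helper name here is mine, not the gem's):

```ruby
require 'base64'

# powershell.exe -EncodedCommand takes the script as base64 of UTF-16LE
# text, which sidesteps quoting problems inside the XML SOAP envelope.
def encode_command(script)
  Base64.strict_encode64(script.encode('UTF-16LE'))
end

encoded = encode_command('$ProgressPreference = "SilentlyContinue"; ipconfig')
# The command line then becomes: powershell.exe -EncodedCommand <encoded>
```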
  




  <h3>Talking with Windows Nano</h3><p>The ruby WinRM gem uses the <em>-EncodedCommand</em> argument to send powershell command text to powershell.exe. This is a convenient way of avoiding quote hell and base64ing text that will be transferred inside XML markup. Well Nano's powershell.exe has no EncodedCommand argument, and so the current ruby WinRM v1 gem cannot talk powershell with Windows Nano Server. Well that simply can't be. We have to be able to talk to Nano.</p><h2>Introducing PSRP</h2><p>So without further ado, let me introduce PSRP. PSRP supports many message types for extracting all sorts of metadata about runspaces and commands. A full implementation of PSRP could create a rich REPL experience on non-windows platforms. However, in this post I'm gonna limit the discussion to messages involved in running commands and receiving their output.</p><p>As I mentioned before, PSRP cannot exist without WinRM. I did not just mean that in a philosophical sense; it literally sits on top of the WSMV protocol. It's sort of a protocol inside a protocol. Running commands and receiving their response includes the same exchange illustrated above and issuing the same WSMV messages. The key difference is that instead of issuing commands in these messages in plain text and receiving simple base64 encoded raw text output, the powershell commands are packaged as a binary PSRP message (or sequence of message fragments) and the response includes one or more binary fragments that are then "defragmented" into a single binary message.</p><h3>PSRP Message Fragment</h3><p>A complete WSMV SOAP envelope can only be so big. This size limitation is specified on the server via the MaxEnvelopeSizeKB setting. This defaults to 512 on 2012R2 and Nano server. So a very large pipeline script or a very large pipeline output must be split into fragments.</p><p>The PSRP spec illustrates a fragment as:</p>

































































 

  
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png" data-image-dimensions="588x285" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=1000w" width="588" height="285" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465677358259-JDV1PN21RBSI1SUFCX0B/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
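The fragment layout in that diagram can be sketched as a small Ruby round-trip. This is a toy, not the WinRM gem's implementation, and the flag byte encoding (S in the low bit, E in the next) plus the big-endian ids and length reflect my reading of the spec rather than verified constants:

```ruby
# A fragment: 8-byte object id, 8-byte fragment id, one flag byte
# (assumed: S = 0x1, E = 0x2), 4-byte blob length, then the blob itself.
def make_fragment(object_id, fragment_id, blob, start_frag, end_frag)
  flags = (start_frag ? 0x1 : 0) | (end_frag ? 0x2 : 0)
  [object_id, fragment_id].pack('Q>Q>') +
    [flags].pack('C') + [blob.bytesize].pack('N') + blob
end

def parse_fragment(bytes)
  object_id, fragment_id = bytes[0, 16].unpack('Q>Q>')
  flags = bytes[16, 1].unpack('C')[0]
  length = bytes[17, 4].unpack('N')[0]
  { object_id: object_id, fragment_id: fragment_id,
    start: (flags & 0x1) == 0x1, end: (flags & 0x2) == 0x2,
    blob: bytes[21, length] }
end
```

Reassembly is then just: collect fragments by object id, order them by fragment id, and concatenate the blobs until the E flag shows up.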
      

    
  






  <p>All fragments have an object id representing the message being fragmented, and each fragment of that object will have incrementing fragment ids starting at 0. E and S are each single-bit flags that indicate whether the fragment is an End fragment and whether it is a Start fragment. So if an entire message fits into one fragment, both E and S will be 1.</p><p>The blob (the interesting stuff) is the actual PSRP message, and the blob length is of course the length of the blob in bytes. So the idea here is that you chain the blobs of all fragments with the same object id in the order of fragment id, and that aggregated blob is the PSRP message.</p><p><a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/lib/winrm/psrp/fragment.rb">Here</a> is an implementation of a message fragment written in ruby, and <a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/lib/winrm/psrp/message_defragmenter.rb">here</a> is how we unwrap several fragments into a message.</p><h3>PSRP messages</h3><p>There are 41 different types of PSRP messages. Here is the basic structure as illustrated in the PSRP spec:</p>

































































 

  
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png" data-image-dimensions="586x383" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=1000w" width="586" height="383" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1465678196754-GM2L7QY723CGFSIWX6RJ/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  






  <p>Destination signifies who the message is for: client or server. Message type is an integer identifying which of the 41 possible message types this is, and RPID and PID represent the runspace_id and pipeline_id respectively. The data holds the "meat" of the message and its structure is determined by the message type. The data is XML. Many powershellers are familiar with CLIXML. That's the basic format of the message data. So in the case of a create_pipeline message, this will include the CLIXML representation of the powershell cmdlets and arguments to run. It can be quite verbose but always beautiful. The symmetric nature of XML really shines here.</p><p><a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/lib/winrm/psrp/message.rb">Here</a> is an implementation of a PSRP message in ruby.</p><h2>A "partial" implementation in Ruby</h2><p>As far as I am aware, the WinRM ruby gem has the first open source implementation of PSRP. It's not officially released yet, but the source is fully available and works (at least integration tests are passing). Why am I labeling it a "partial" implementation?</p><p>As I mentioned earlier, PSRP provides many message structures for listing runspaces, commands and gathering lots of metadata. The interests of the WinRM gem are simpler: it aims to adhere to the same basic interface it uses to issue WSMV messages (however we have rewritten the classes and methods for v2). Essentially we want to provide an SSH-like experience where a user issues a command string and gets back string based standard output and error streams as well as an exit code. This really is a "dumbed down" rendering of what PSRP is capable of providing.</p><p>The possibilities are very exciting and perhaps we will add more true powershell REPL features in the future, but today when one issues a powershell script, we are <a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/lib/winrm/psrp/create_pipeline.xml.erb">basically constructing</a> CLIXML that emits the following command:</p>
























  
    <pre class="source-code">Invoke-Expression -Command "your command here" | Out-String -Stream</pre>
  




  <p>This means we do not have to write a CLIXML serializer/deserializer but we reap most of the benefits of running commands directly in powershell. No more multi-shell lag, no more command length limitations and hello Nano Server. In fact, our repo provides a <a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/Vagrantfile">Vagrantfile</a> that provisions Windows Nano server for running integration tests.</p><h2>Give it a try!</h2><p>I have complete confidence that there are major flaws in the implementation as it is now. I've been testing all along the way but I'm just about to start really putting it through the wringer. I can guarantee that Write-Host "Hello World!" works flawlessly. The Hello juxtaposed starkly against the double pointed 'W' and ending in the minimalistic line on top of a dot (!) is pretty amazing. The <a target="_blank" href="https://github.com/WinRb/WinRM/blob/winrm-v2/README.md">readme</a> in the winrm-v2 branch has been updated to document the code as it stands now and, assuming you have git, ruby and <a target="_blank" href="http://bundler.io/">bundler</a> installed, here is a quick rundown of how to run some powershell using the new PSRP implementation:</p>
























  
    <pre class="source-code">git clone https://github.com/WinRb/WinRM
git fetch
git checkout winrm-v2
bundle install
bundle exec irb

require 'winrm'
opts = { 
  endpoint: "http://myhost:5985/wsman",
  user: 'administrator',
  password: 'Pass@word1'
}
conn = WinRM::Connection.new(opts)
conn.shell(:powershell) do |shell|
  shell.run('$PSVersionTable') do |stdout, stderr|
    STDOUT.print stdout
    STDERR.print stderr
  end
end</pre>
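As an aside, the fragmentation scheme described earlier is simple enough to sketch in a few lines of standalone ruby. This is a simplified illustration rather than the gem's actual classes, and the flag bit values (E = 0b10, S = 0b01) reflect my reading of the PSRP spec:

```ruby
# Simplified sketch of PSRP fragmentation. Field layout (big-endian):
#   object_id (8 bytes) | fragment_id (8 bytes) | flags (1 byte: E = 0b10, S = 0b01)
#   | blob_length (4 bytes) | blob
def fragment(object_id, blob, max_blob_len = 32_768)
  chunks = blob.scan(/.{1,#{max_blob_len}}/m) # split the payload into blob-sized chunks
  chunks.each_with_index.map do |chunk, i|
    flags  = 0
    flags |= 0b01 if i.zero?                  # S: this is the start fragment
    flags |= 0b10 if i == chunks.length - 1   # E: this is the end fragment
    [object_id, i, flags, chunk.bytesize].pack("Q>Q>CL>") + chunk
  end
end

# Chain the blobs of all fragments in fragment id order to recover the message
def defragment(fragments)
  fragments.sort_by { |f| f[8, 8].unpack1("Q>") }
           .map     { |f| f[21, f[17, 4].unpack1("L>")] }
           .join
end
```

A message that fits in a single fragment gets both E and S set (flags == 3), matching the description above.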
  




  <p>The interfaces are not entirely finalized so things may still change. The next steps are to refactor the <a target="_blank" href="https://github.com/WinRb/winrm-fs">winrm-fs</a> and <a target="_blank" href="https://github.com/WinRb/winrm-elevated">winrm-elevated</a> gems to use this new winrm gem and also make sure that it works with <a target="_blank" href="https://www.vagrantup.com/">vagrant</a> and <a target="_blank" href="http://kitchen.ci/">test-kitchen</a>. I can't wait to start collecting benchmark data comparing file copy speeds using this new version and the one in use today!</p>]]></description></item><item><title>Certificate (password-less) based authentication in WinRM</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 01 May 2016 16:39:08 +0000</pubDate><link>http://www.hurryupandwait.io/blog/certificate-password-less-based-authentication-in-winrm</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:572453caf85082b93e0632f0</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg" data-image-dimensions="256x256" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=1000w" width="256" height="256" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1462120637176-OGSEKGCQR5AQFMG8L8TW/image-asset.jpeg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p class="">This week the <a href="https://github.com/WinRb/WinRM" target="_blank">WinRM ruby gem</a> version 1.8.0 was released, adding support for certificate authentication. Many thanks to the contributions of <a href="https://github.com/jfhutchi" target="_blank">@jfhutchi</a> and <a href="https://github.com/fgimenezm" target="_blank">@fgimenezm</a> that make this possible. As I set out to test this feature, I explored how certificate authentication works in winrm using native windows tools like powershell remoting. My primary takeaway was that it was not at all straightforward to set up. If you have worked with similar authentication setups on linux using SSH commands, be prepared for more friction. Most of this is simply due to the lack of documentation and google results (well now there is one more). Regardless, I still think that once set up, authentication via certificates is a very good thing and many are not aware that this is available in WinRM.</p><p class="">This post will walk through how to configure certificate authentication, enumerate some of the "gotchas" and pitfalls one may encounter along the way and then explain how to use certificate authentication using Powershell Remoting as well as via the WinRM ruby gem which opens up the possibility of authenticating from a linux client to a Windows WinRM endpoint.</p><h2>Why should I care about certificate based authentication?</h2><p class="">First let's examine why certificate authentication has value. What's wrong with usernames and passwords? In short, certificates are more secure. I'm not going to go too deep here but here are a few points to consider:</p><ul data-rte-list="default"><li><p class="">Passwords can be obtained via brute force. You can protect against this by having longer, more complex passwords and changing them frequently. Very few actually do that, and even if you do, a complex password is way easier to break than a certificate. 
It's nearly impossible to brute force a private key of sufficient strength.</p></li><li><p class="">Sensitive data is not being transferred over the wire. You are sending a public key and if that falls into the wrong hands, no harm done.</p></li><li><p class="">There is a stronger trail of trust establishing that the person who is seeking authentication is in fact who they say they are given the multi-layered process of having a generated certificate signed by a trusted certificate authority.</p></li></ul><p class="">It's still important to remember that nothing may be able to protect us from sophisticated aliens or time traveling humans from the future. No means of security is impenetrable.</p><h2>Not as convenient as SSH keys</h2><p class="">So one reason some like to use certificates over passwords in SSH scenarios is ease of use. There is a one-time setup "cost" of sending your public key to the remote server, but:</p><ol data-rte-list="default"><li><p class="">SSH provides command line tools that make this fairly straightforward to set up from your local environment.</p></li><li><p class="">Once set up, you just need to initiate an ssh session to the remote server and you don't have to hassle with entering a password.</p></li></ol><p class="">Now the underlying cryptographic and authentication technology is no different using winrm, but both the initial setup and the "day to day" use of the certificate to log in are more burdensome. The details of why will become apparent throughout this post.</p><p class="">One important thing to consider though is that while winrm certificate authentication may be more burdensome, I don't think the primary use case is for user interactive login sessions (although that's too bad). In the case of automated services that need to interact with remote machines, these "burdens" simply need to be part of the connection automation and it's just a non-sentient cpu that does the suffering. 
Let's just hope they won't hold a grudge once sentience is obtained.</p><h2>High level certificate authentication configuration overview</h2><p class="">Here is a rundown of what is involved to get everything set up for certificate authentication:</p><ol data-rte-list="default"><li><p class="">Configure SSL connectivity to winrm on the endpoint</p></li><li><p class="">Generate a user certificate used for authentication</p></li><li><p class="">Enable Certificate authentication on the endpoint. It's disabled by default for server auth and enabled on the client side.</p></li><li><p class="">Add the user certificate and its issuing CA certificate to the certificate store of the endpoint</p></li><li><p class="">Create a user mapping in winrm with the thumbprint of the issuing certificate on the endpoint.</p></li></ol><p class="">I'll walk through each of these steps here. When the above five steps are complete, you should be able to connect via certificate authentication using powershell remoting or using the ruby or <a href="https://github.com/diyan/pywinrm" target="_blank">python</a> open source winrm libraries.</p><h2>Setting up the SSL winrm listener</h2><p class="">If you are using certificate authentication, you must use an https winrm endpoint. Attempts to authenticate with a certificate using http endpoints will fail. You can set up SSL on the endpoint with:</p>
























  
    <pre class="source-code">$ip="192.168.137.169" # your ip might be different
$c = New-SelfSignedCertificate -DnsName $ip `
                               -CertStoreLocation cert:\LocalMachine\My
winrm create winrm/config/Listener?Address=*+Transport=HTTPS "@{Hostname=`"$ip`";CertificateThumbprint=`"$($c.ThumbPrint)`"}"
netsh advfirewall firewall add rule name="WinRM-HTTPS" dir=in localport=5986 protocol=TCP action=allow</pre>
  




  <h2>Generating a client certificate</h2><p class="">Client certificates have two key requirements:</p><ol data-rte-list="default"><li><p class="">An Extended Key Usage of <strong>Client Authentication</strong></p></li><li><p class="">A <strong>Subject Alternative Name</strong> with the <strong>UPN</strong> of the user.</p></li></ol><h3>Only ADCS certificates work from Windows 10/2012 R2 clients via powershell remoting</h3><p class="">This was the step that I ended up spending the most time on. I continued to receive errors saying my certificate was malformed:</p><blockquote><p class="">new-PSSession : The WinRM client cannot process the request. If you are using a machine certificate, it must contain a DNS name in the Subject Alternative Name extension or in the Subject Name field, and no UPN name. If you are using a user certificate, the Subject Alternative Name extension must contain a UPN name and must not contain a DNS name. Change the certificate structure and try the request again.</p></blockquote><p class="">I was trying to authenticate from a windows 10 client using powershell remoting. I don't typically work or test in a domain environment and don't run an Active Directory Certificate Services authority. So I wanted to generate a certificate using either New-SelfSignedCertificate or OpenSSL.</p><p class="">In short, here is the bump I hit: powershell remoting from a windows 10 or windows 2012 R2 client failed to authenticate with certificates generated from OpenSSL or New-SelfSignedCertificate. However, these same certificates succeeded in authenticating from windows 7 or windows 2008 R2.&nbsp;They only worked on Windows 10 and 2012 R2 if I used the ruby WinRM gem instead of powershell remoting. Note that while I tested on windows 10 and 2012 R2, I'm sure that windows 8.1 suffers the same issue. 
The only certificates I got to work on windows 10 and 2012 R2 via powershell remoting were created via an Active Directory Certificate Services Enterprise CA.</p><p class="">So unless I can find out otherwise, it seems that you must have access to an Enterprise root CA in Active Directory Certificate Services and have client certificates issued in order to use certificate authentication from powershell remoting on these later windows versions. If you are using ADCS, the stock client template should work.</p><h3>Generating client certificates via OpenSSL</h3><p class="">As stated above, certificates generated using OpenSSL or New-SelfSignedCertificate did not work using powershell remoting from windows 10 or 2012 R2. However, if you are using a previous version of windows or if you are using another client (like the ruby or python libraries), then these other non-ADCS methods will work fine and do not require the creation of a domain controller and certificate authority servers.</p><p class="">If you do not already have OpenSSL tools installed, you can get them via <a href="https://chocolatey.org/" target="_blank">chocolatey</a>:</p>
























  
    <pre class="source-code">cinst openssl.light -y</pre>
  




  <p class="">Then you can run the following powershell to generate a correctly formatted user certificate which was adapted from <a href="https://github.com/cloudbase/winrm-scripts/blob/master/create-winrm-client-cert.sh" target="_blank">this bash script</a>:</p>
























  
    <pre class="source-code">function New-ClientCertificate {
  param([String]$username, [String]$basePath = ((Resolve-Path .).Path))

  # openssl reads its configuration from the OPENSSL_CONF environment
  # variable, so set it with $env: rather than as a plain powershell variable
  $env:OPENSSL_CONF=[System.IO.Path]::GetTempFileName()

  Set-Content -Path $env:OPENSSL_CONF -Value @"
  distinguished_name = req_distinguished_name
  [req_distinguished_name]
  [v3_req_client]
  extendedKeyUsage = clientAuth
  subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$username@localhost
"@

  $user_path = Join-Path $basePath user.pem
  $key_path = Join-Path $basePath key.pem
  $pfx_path = Join-Path $basePath user.pfx

  openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out $user_path -outform PEM -keyout $key_path -subj "/CN=$username" -extensions v3_req_client 2&gt;&amp;1

  openssl pkcs12 -export -in $user_path -inkey $key_path -out $pfx_path -passout pass: 2&gt;&amp;1

  Remove-Item $env:OPENSSL_CONF
}
</pre>
  




  <p class="">This will output a certificate and private key file both in base64 .pem format and additionally a .pfx formatted file.</p><h3>Generating client certificates via New-SelfSignedCertificate</h3><p class="">If you are on windows 10 or server 2016, then you should have a more advanced version of the New-SelfSignedCertificate cmdlet - more advanced than what shipped with windows 2012 R2 and 8.1. Here is the command to generate the certificate:</p>
























  
    <pre class="source-code">New-SelfSignedCertificate -Type Custom `
                          -Container test* -Subject "CN=vagrant" `
                          -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=vagrant@localhost") `
                          -KeyUsage DigitalSignature,KeyEncipherment `
                          -KeyAlgorithm RSA `
                          -KeyLength 2048</pre>
  




  <p class="">This will add a &nbsp;certificate for a vagrant user to the personal LocalComputer folder in the certificate store.</p><h2>Enable certificate authentication</h2><p class="">This is perhaps the simplest step. By default, certificate authentication is enabled for clients and disabled for server. So you will need to enable it on the endpoint:</p>
























  
    <pre class="source-code">Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true</pre>
  




  <h2>Import the certificate to the appropriate certificate store locations</h2><p class="">If you are using powershell remoting, the user certificate and its private key should be in the My directory of either the LocalMachine or the CurrentUser store on the client. If you are using a cross platform library like the ruby or python library, the cert does not need to be in the store at all on the client. However, regardless of client implementation, it must be added to the server certificate store.</p><h3>Importing on the client</h3><p class="">As stated above, this is necessary for powershell remoting clients. If you used ADCS or New-SelfSignedCertificate, then the generated certificate is added automatically. However if you used OpenSSL, you need to import the .pfx yourself:</p>
























  
    <pre class="source-code">Import-pfxCertificate -FilePath user.pfx `
                      -CertStoreLocation Cert:\LocalMachine\my</pre>
  




  <h3>Importing on the server</h3><p class="">There are two steps to importing the certificate on the endpoint:</p><ol data-rte-list="default"><li><p class="">The issuing certificate must be present in the Trusted Root Certification Authorities of the LocalMachine store</p></li><li><p class="">The client certificate public key must be present in the Trusted People folder of the LocalMachine store</p></li></ol><p class="">Depending on your setup, the issuing certificate may already be in the Trusted Root location. This is the certificate used to issue the client cert. If you are using your own enterprise certificate authority or a publicly valid CA cert, it's likely you already have this in the trusted roots. If you used OpenSSL or New-SelfSignedCertificate then the user certificate was issued by itself and needs to be imported.</p><p class="">If you used OpenSSL, you already have the .pem public key. Otherwise you can export it:</p>
























  
    <pre class="source-code">Get-ChildItem cert:\LocalMachine\my\7C8DCBD5427AFEE6560F4AF524E325915F51172C |
  Export-Certificate -FilePath myexport.cer -Type Cert</pre>
  




  <p class="">This assumes that 7C8DCBD5427AFEE6560F4AF524E325915F51172C is the thumbprint of your issuing certificate. I guarantee that is an incorrect assumption.</p><p class="">Now import these on the endpoint:</p>
























  
    <pre class="source-code">Import-Certificate -FilePath .\myexport.cer `
                   -CertStoreLocation cert:\LocalMachine\root
Import-Certificate -FilePath .\myexport.cer `
                   -CertStoreLocation cert:\LocalMachine\TrustedPeople</pre>
  




  <h2>Create the winrm user mapping</h2><p class="">This will declare on the endpoint: given an issuing CA, which certificates to allow access. You can potentially add multiple entries for different users or use a wildcard. We'll just map our one user:</p>
























  
    <pre class="source-code">New-Item -Path WSMan:\localhost\ClientCertificate `
         -Subject 'vagrant@localhost' `
         -URI * `
         -Issuer 7C8DCBD5427AFEE6560F4AF524E325915F51172C `
         -Credential (Get-Credential) `
         -Force</pre>
  




  <p class="">Again this assumes that the thumbprint of the certificate that issued our user certificate is 7C8DCBD5427AFEE6560F4AF524E325915F51172C and we are allowing access to a local account called vagrant. Note that if your user certificate is self-signed, you would use the thumbprint of the user certificate itself.</p><h2>Using certificate authentication</h2><p class="">This completes the setup. Now we should actually be able to log in remotely to the endpoint. I'll demonstrate this first using powershell remoting and then ruby.</p><h3>Powershell remoting</h3>
























  
    <pre class="source-code">C:\dev\WinRM [master]&gt;Enter-PSSession -ComputerName 192.168.137.79 `
&gt;&gt; -CertificateThumbprint 7C8DCBD5427AFEE6560F4AF524E325915F51172C
[192.168.137.79]: PS C:\Users\vagrant\Documents&gt;</pre>
  




  <h3>Ruby WinRM gem</h3>
























  
    <pre class="source-code">C:\dev\WinRM [master +3 ~0 -0 !]&gt; gem install winrm
WARNING:  You don't have c:\users\matt\appdata\local\chefdk\gem\ruby\2.1.0\bin in your PATH,
          gem executables will not run.
Successfully installed winrm-1.8.0
Parsing documentation for winrm-1.8.0
Done installing documentation for winrm after 0 seconds
1 gem installed
C:\dev\WinRM [master +3 ~0 -0 !]&gt; irb
irb(main):001:0&gt; require 'winrm'
=&gt; true
irb(main):002:0&gt; endpoint = 'https://192.168.137.169:5986/wsman'
=&gt; "https://192.168.137.169:5986/wsman"
irb(main):003:0&gt; puts WinRM::WinRMWebService.new(
irb(main):004:1*   endpoint,
irb(main):005:1*   :ssl,
irb(main):006:1*   :client_cert =&gt; 'user.pem',
irb(main):007:1*   :client_key =&gt; 'key.pem',
irb(main):008:1*   :no_ssl_peer_verification =&gt; true
irb(main):009:1&gt; ).create_executor.run_cmd('ipconfig').stdout

Windows IP Configuration


Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : mshome.net
   Link-local IPv6 Address . . . . . : fe80::6c3f:586a:bdc0:5b4c%12
   IPv4 Address. . . . . . . . . . . : 192.168.137.169
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.137.1

Tunnel adapter Local Area Connection* 12:

   Connection-specific DNS Suffix  . :
   IPv6 Address. . . . . . . . . . . : 2001:0:5ef5:79fd:24bc:3d4c:3f57:7656
   Link-local IPv6 Address . . . . . : fe80::24bc:3d4c:3f57:7656%14
   Default Gateway . . . . . . . . . : ::

Tunnel adapter isatap.mshome.net:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : mshome.net
=&gt; nil
irb(main):010:0&gt;</pre>
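If you run into "malformed certificate" style errors like the one quoted earlier, a quick sanity check is to confirm that your user certificate really carries the two required properties: a Client Authentication extended key usage and a UPN subject alternative name (the Microsoft OID 1.3.6.1.4.1.311.20.2.3). Here is a rough check using ruby's standard openssl library; note this is my own sketch, not part of the gem, and the string matching is approximate since OpenSSL's text rendering of extensions varies across versions:

```ruby
require 'openssl'

# Returns true if the PEM-encoded certificate looks usable for WinRM
# certificate authentication: a clientAuth EKU plus an otherName (UPN) SAN.
def winrm_client_cert?(pem)
  cert = OpenSSL::X509::Certificate.new(pem)
  eku  = cert.extensions.find { |e| e.oid == 'extendedKeyUsage' }
  san  = cert.extensions.find { |e| e.oid == 'subjectAltName' }
  !!(eku && eku.value.include?('TLS Web Client Authentication') &&
     san && san.value.downcase.include?('othername'))
end
```

For example, `winrm_client_cert?(File.read('user.pem'))` should return true for a certificate produced by the New-ClientCertificate function above.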
  




  <h2>Interested?</h2><p class="">This functionality just became available in the winrm gem 1.8.0 this week. This gem is used by <a href="https://www.vagrantup.com/" target="_blank">Vagrant</a>, <a href="https://www.chef.io/" target="_blank">Chef</a> and <a href="http://kitchen.ci/" target="_blank">Test-Kitchen</a> to connect to remote machines. However, none of these applications provide configuration options to make use of certificate authentication via winrm. My personal observation has been that nearly no one uses certificate authentication with winrm but that may be a false observation or a result of the fact that few know about this possibility.</p><p class="">If you are interested in using this in Chef, Vagrant or Test-Kitchen, please file an issue against their respective github repositories and make sure to @ mention me (<a href="https://github.com/mwrock" target="_blank">@mwrock</a>) and I'll see what I can do to plug this in or you can submit a PR yourself if so inclined.</p>]]></description></item><item><title>Installing and running a Chef client on Windows Nano Server</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 20 Mar 2016 19:17:48 +0000</pubDate><link>http://www.hurryupandwait.io/blog/instal</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:56ed8df68259b54cf1f72217</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png" data-image-dimensions="516x388" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=1000w" width="516" height="388" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458512305451-79QVQKPSPCI5NHAXUW31/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>I've been really excited about Nano Server ever since it was first talked about last year. It's a major shift in how we "do windows" and has the promise of addressing many pain points that exist today for those who run Windows Server farms at scale. As a <a target="_blank" href="https://www.chef.io/">Chef</a> enthusiast and employee I am thrilled to be a part of the journey to making Chef on Nano a delight. Now note the word "journey" here. We are on the road to awesome but we've just laid down the foundation and scaffolding. Heck, even Nano itself is still evolving as I write this and is on its own journey to RTM. So while today I would not characterize the experience of Chef on Nano as "delightful," I can say that it is "possible." For those who enjoy living on the bleeding edge, "possible" is a significant milestone on the way to "delight."</p><p>In this post I will share what Nano is, how to get it and run it on your laptop, install a chef client and run a basic recipe. I'll also point out some unfinished walls and let you know some of the rooms that are entirely missing. Also bear in mind that I have not toured the entire house so you will certainly find other gaps between here and delight and hopefully you will share. Keep in mind that I am a software developer and not a product owner at Chef. So I won't be talking about road maps or showing any Gantt charts here. I'm really speaking more as an enthusiast here and as someone who can't wait to use it more and make the experience of running Chef on Nano awesome like a possum because possums are awesome. (They really are. We have one living under our front porch and it's just so gosh darn cute!)</p><h2>What is Nano and why should I care?</h2><p>Some of you may be thinking, "finally a command line text editor for windows!" 
Yeah, that would be great, but no, I'm not talking about the <a target="_blank" href="http://www.nano-editor.org/">GNU Nano</a> text editor familiar to many linux users, I am talking about Windows Nano Server. You have likely heard of Windows Server Core. Well this takes core to the next level (or two). While Server Core drops GUI apps (except the ones it does support) yet still supports all applications and APIs available on GUI based Windows Server, Nano throws backwards compatibility to the wind and slashes vast swaths of what sits inside a Windows server today, whittling it down to a few hundred megabytes.</p><p>There is no 32 bit subsystem; only 64 bit is supported. While traditional Windows server has multiple APIs for doing the same thing, Nano pares much of that down, throwing out many APIs where you can accomplish the same tasks using another API. Nope, no VBScript engine here. All those windows roles and features that lie dormant on a traditional Windows server simply don't exist on Nano. You start in a bare core and then add what you need.</p><p>Naturally this is not all roses and sunshine. The road to Nano will have moments of pain and frustration. We have grown accustomed to our bloated Windows and will need to adjust our workflows and in some cases our application architecture to exist in this new environment. Here is one gotcha many may hit pretty quickly. Maybe you pull down the Couchbase MSI and try to run MSIEXEC on that thing. Well guess what...on Nano "The term 'msiexec' is not recognized as the name of a cmdlet, function, script file, or operable program." Nano will introduce new ways to install applications.</p><p>But before we get all cranky about the loss of MSIs (yeah, the "packaging" system that wasn't really) let's think about what this smaller surface area gets us. First, an initial image file that is often less than 300MB compared to what would be 4GB on 2012R2. 
With less junk in the trunk, there is less to update, so no more initial multi-hour install of hundreds of updates. Fewer updates will mean fewer reboots. All of this is good stuff. Nano also boots up super duper quick. The first time you provision a Nano server, you may think you have done something wrong - I did - because it all happens so much more quickly than what you are used to.</p>


































































  

    
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg" data-image-dimensions="648x348" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=1000w" width="648" height="348" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458499970084-XVLQKTIQ32MHJS5BZRV8/image-asset.jpeg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
          
          <figcaption class="image-caption-wrapper">
            <p>Infographic provided by Microsoft</p>
          </figcaption>
        
      
        </figure>
      

    
  


  





  <h2>Sounds great! How do I go to there?</h2><p>There are several ways to get a Nano server up and running, and I'll outline a few options here:</p><ul><li><a target="_blank" href="https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview">Download</a> the latest Windows Server 2016 Technical Preview and follow <a target="_blank" href="https://technet.microsoft.com/en-us/library/mt126167.aspx">these instructions</a> to extract and install a Nano image.</li><li><a target="_blank" href="http://blogs.technet.com/b/nanoserver/archive/2016/01/07/download-just-nano-server-in-a-vhd-windows-server-2016-technical-preview-4.aspx">Download just a VHD</a> and import that into Hyper-V (user: administrator/pass: Pass@word1). While it successfully installs on <a target="_blank" href="https://www.virtualbox.org/">VirtualBox</a>, you will not be able to remote to the box successfully.&nbsp;</li><li>Are you a <a target="_blank" href="https://www.vagrantup.com/">Vagrant</a> user? Feel free to "vagrant up" my <a target="_blank" href="https://atlas.hashicorp.com/mwrock/boxes/WindowsNano">mwrock/WindowsNano</a> Nano eval box (user: vagrant/pass: vagrant). It has a Hyper-V and VirtualBox provider. The Hyper-V provider may "blue screen" on first boot but should operate fine after rebooting a couple times.</li><li>Don't want to use some random dude's box but want to make your own with <a target="_blank" href="https://www.packer.io/">Packer</a>? Check out <a target="_blank" href="http://www.hurryupandwait.io/blog/a-packer-template-for-windows-nano-server-weighing-300mb">my post</a> where I walk through how to do that.</li></ul><p>Depending on whether you are a Vagrant or VirtualBox user, options 2 and 3 are by far the easiest and should get you up and running in minutes. The longest part is just downloading the image, but even that is MUCH faster than a standard Windows ISO. 
About 6 minutes compared to 45 minutes on my FIOS home connection.</p><h2>Preparing your environment</h2><p>Basically we just need to make sure we can remote to the Nano server and also copy files to it.</p><h3>Set up a Host-Only network on VirtualBox</h3><p>You will need to be able to access the box with a non-localhost IP in order to transfer files over SMB. This only applies if you are using a NAT VirtualBox network (the default). Hyper-V switches should require no additional configuration.</p><p>First, create a Host-Only network if your VirtualBox host does not already have one:</p>
























  
    <pre class="source-code">VBoxManage hostonlyif create</pre>
  




  <p>Now add a NIC on your Nano guest that connects over the Host-Only network:</p>
























  
    <pre class="source-code">VBoxManage modifyvm "VM name" --nic2 hostonly</pre>
  




  <p>Now log on to the box to discover its IP address. Immediately after login, you should see its two IPs. One starts with 10. and the other starts with 172. You will want to use the 172 address.</p>


































































  

    
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png" data-image-dimensions="683x583" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=1000w" width="683" height="583" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1458500671528-VLDTQCNU6YA8K7FOD0XE/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <h3>Make sure you can remote to the Nano box</h3><p>We'll establish a remote PowerShell connection to the box. If you can't do this now, we won't get far with Chef. So run the following:</p>
























  
    <pre class="source-code">$ip = "&lt;ip address of Nano Server&gt;"
Set-Item WSMan:\localhost\Client\TrustedHosts $ip
Enter-PSSession -ComputerName $ip -Credential $ip\Administrator</pre>
  




  <p>This should result in dropping you into an interactive remote PowerShell session on the Nano box:</p>
























  
    <pre class="source-code">[172.28.128.3]: PS C:\Users\vagrant\Documents&gt;</pre>
  




  <h3>Open File and Print sharing Firewall Rule</h3><p>If you are using my Vagrant box or Packer templates, this should already be set up. Otherwise, you will need to open the File and Print sharing rule so we can mount a drive to the Nano box. Don't you worry, we won't be doing any printing.</p>
























  
    <pre class="source-code">PS C:\dev\nano&gt; Enter-PSSession -ComputerName 172.28.128.3 -Credential vagrant
[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes

Updated 16 rule(s).
Ok.</pre>
  




  <h3>Mount a network drive from the host to the Nano server</h3><p>We'll just create a drive that connects to the root of the C: drive on the Nano box:</p>
























  
    <pre class="source-code">PS C:\dev\nano&gt; net use z: \\172.28.128.3\c$
Enter the user name for '172.28.128.3': vagrant
Enter the password for 172.28.128.3:
The command completed successfully.

PS C:\dev\nano&gt; dir z:\


    Directory: z:\


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----       11/21/2015   2:47 AM                Program Files
d-----       11/21/2015   2:47 AM                Program Files (x86)
d-----       11/21/2015   2:47 AM                ProgramData
d-----       11/21/2015   2:47 AM                Users
d-----       11/21/2015  10:50 AM                Windows</pre>
  




  <p>Ok. If you have gotten this far, you are ready to install Chef.</p><h2>Installing Chef</h2><p>Remember earlier in this post when I mentioned that there is no msiexec on Nano? We can't just download the MSI on Nano and install it like we normally would. There are some preliminary steps we will need to follow, and we also need to make sure to use just the right Chef version.</p><h3>Downloading the right version of Chef</h3><p>If you remember, not only does Nano lack the ability to install MSIs, it also lacks a 32-bit subsystem. Lucky for us, Chef now ships a 64-bit Windows Chef client. On downloads.chef.io, the last few versions of the chef client provide a 64-bit Windows install. However, don't get the latest one. The last couple versions have a specially compiled version of ruby that is not compatible with Nano server. This will be remedied in time, but for now, download the 64-bit install of Chef 12.7.2.</p>
























  
    <pre class="source-code">PS C:\dev\nano&gt; Invoke-WebRequest https://packages.chef.io/stable/windows/2008r2/chef-client-12.7.2-1-x64.msi -OutFile Chef-Client.msi</pre>
  




  <h3>Extract the Chef client files from the MSI</h3><p>There are a couple ways to do this. The most straightforward is to simply install the MSI locally on your host. That will install the 64-bit Chef client, and in the process, extract its files locally.</p><p>Another option is to use a tool like <a target="_blank" href="https://github.com/activescott/lessmsi">LessMSI</a> which can extract the contents without actually running the installer. This can be beneficial because it won't run installation code that adjusts your registry, changes environment variables or changes any other state on your local host that you would rather leave intact.</p><p>You can grab lessmsi from <a target="_blank" href="https://chocolatey.org/packages/lessmsi">chocolatey</a>:</p>
























  
    <pre class="source-code">choco install lessmsi</pre>
  




  <p>Now extract the Chef MSI contents:</p>
























  
    <pre class="source-code">lessmsi x .\chef-client.msi c:\chef-client\</pre>
  




  <p>This extracts the contents of the MSI to c:\chef-client. What we are interested in now is the extracted C:\chef-client\SourceDir\opscode\chef.zip. That has the actual files we will copy to the Nano server. Let's expand that zip:</p>
























  
    <pre class="source-code">Expand-Archive -Path C:\chef-client\SourceDir\opscode\chef.zip -DestinationPath c:\chef-client\chef</pre>
  




  <p>Note I am on Windows 10. The Expand-Archive command is only available in PowerShell version 5. However, if you are on a previous version of PowerShell, I am confident you can expand the zip using other methods.</p><p>Once this zip file is extracted by whatever means, you should now have a full Chef install layout at c:\chef-client\chef.</p><h3>Editing the win32-process gem before copying to Nano</h3><p>Here's another area of "undelight." The latest version of the win32-process gem introduced coverage of some native kernel API functions that were not ported to Nano. So attempts to attach to these C functions result in an error when the gem is loaded. Since win32-process is a dependency of the chef client, loading the chef client will blow up.</p><p>Not to worry, we can perform some evasive maneuvers to work around this. It's gonna get dirty here, but I promise no one is looking. There are basically four lines in the win32-process source that we simply need to remove:</p>
























  
    <pre class="source-code">PS C:\dev\nano&gt; (Get-content -Path C:\chef-client\chef\embedded\lib\ruby\gems\2.0.0\gems\win32-process-0.8.3\lib\win32\proces
s\functions.rb) | % { if(!$_.Contains(":Heap32")) { $_ }} | Set-Content C:\chef-client\chef\embedded\lib\ruby\gems\2.0.0\gems
\win32-process-0.8.3\lib\win32\process\functions.rb</pre>
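<p>If you'd rather perform the same edit with Ruby (which the extracted Chef client conveniently ships with), here is a minimal sketch. The helper name is mine, not part of any tool; it just mirrors the PowerShell one-liner above by dropping the lines that mention ":Heap32":</p>

```ruby
# Drop the lines that attach the unported Heap32 kernel functions so the
# win32-process gem can load on Nano. Pure string filtering, nothing more.
def strip_heap32(source)
  source.lines.reject { |line| line.include?(':Heap32') }.join
end

# Applied in place to the extracted gem, e.g.:
#   path = 'C:/chef-client/chef/embedded/lib/ruby/gems/2.0.0/gems/' \
#          'win32-process-0.8.3/lib/win32/process/functions.rb'
#   File.write(path, strip_heap32(File.read(path)))
```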
  




  <h2>Copy the Chef files to the Nano server</h2><p>Now we are ready to copy over all the Chef files to the Nano server:</p>
























  
    <pre class="source-code"> Copy-Item C:\chef-client\chef\ z:\chef -Recurse</pre>
  




  <h2>Validate the chef install</h2><p>Let's set the path to include the Chef ruby and bin files and then call chef-client -v:</p>
























  
    <pre class="source-code">PS C:\dev\nano&gt; Enter-PSSession -ComputerName 172.28.128.3 -Credential vagrant
[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; $env:path += ";c:\chef\bin;c:\chef\embedded\bin"
[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; chef-client -v
Chef: 12.7.2</pre>
  




  <p>Hopefully you see similar output, including the version of the chef-client.</p><h2>Converging on Nano</h2><p>Now we are actually ready to "do stuff" with Chef on Nano. Let's start simple. We will run chef-apply on a trivial recipe:</p>
























  
    <pre class="source-code">file "c:/blah.txt" do
  content 'blah'
end</pre>
  




  <p>Let's copy the recipe and converge:</p>
























  
    <pre class="source-code">[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; chef-apply c:/chef/blah.rb
[2016-03-20T07:49:38+00:00] WARN: unable to detect ipaddress
[2016-03-20T07:49:45+00:00] INFO: Run List is []
[2016-03-20T07:49:45+00:00] INFO: Run List expands to []
[2016-03-20T07:49:45+00:00] INFO: Processing file[c:/blah.txt] action create ((chef-apply cookbook)::(chef-apply recipe) line
 1)
[2016-03-20T07:49:45+00:00] INFO: file[c:/blah.txt] created file c:/blah.txt
[2016-03-20T07:49:45+00:00] INFO: file[c:/blah.txt] updated file contents c:/blah.txt
[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; cat C:\blah.txt
blah</pre>
  




  <p>That worked! Our file is created and we can see it has the intended content.</p><h3>Bootstrapping a Nano node on a Chef server</h3><p>Maybe you want to join a fleet of Nano servers to a Chef server. Well, at the moment, knife bootstrap windows throws encoding errors. This could be related to the fact that the codepage used by a Nano shell is UTF-8, unlike the "MS-DOS"(437) codepage used on all previous versions of Windows.</p><p>Let's just manually bootstrap for now:</p>
























  
    <pre class="source-code">knife node create -d nano
# We use ascii encoding to avoid a UTF-8 BOM
knife client create -d nano | Out-File -FilePath z:\chef\nano.pem -Encoding ascii
knife acl add client nano nodes nano update</pre>
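<p>The ASCII encoding in that snippet matters because Out-File's default encoding prepends a byte order mark, and a PEM key that does not begin with its "-----BEGIN" header will fail to parse. A quick Ruby sketch of what to check for (the file name is illustrative):</p>

```ruby
# A UTF-8 BOM is the byte sequence EF BB BF at the very start of a file.
# A usable PEM key must start with "-----BEGIN" instead.
def bom?(bytes)
  bytes.start_with?("\xEF\xBB\xBF".b)
end

# e.g. bom?(File.binread('nano.pem')) should return false for a good key
```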
  




  <p>Note that the knife acl command requires the knife-acl gem, which is not shipped with the chef-dk, but you can simply chef gem install it.</p><p>Now let's create a basic client.rb file in z:\chef\client.rb:</p>
























  
    <pre class="source-code">log_level        :info
log_location     STDOUT
chef_server_url  'https://api.opscode.com/organizations/ment'
client_key 'c:/chef/nano.pem'
node_name  'nano'</pre>
  




  <p>Now let's converge against our server:</p>
























  
    <pre class="source-code">[172.28.128.3]: PS C:\Users\vagrant\Documents&gt; chef-client
[2016-03-20T18:28:07+00:00] INFO: *** Chef 12.7.2 ***
[2016-03-20T18:28:07+00:00] INFO: Chef-client pid: 488
[2016-03-20T18:28:08+00:00] WARN: unable to detect ipaddress
[2016-03-20T18:28:15+00:00] INFO: Run List is []
[2016-03-20T18:28:15+00:00] INFO: Run List expands to []
[2016-03-20T18:28:15+00:00] INFO: Starting Chef Run for nano
[2016-03-20T18:28:15+00:00] INFO: Running start handlers
[2016-03-20T18:28:15+00:00] INFO: Start handlers complete.
[2016-03-20T18:28:16+00:00] INFO: Loading cookbooks []
[2016-03-20T18:28:16+00:00] WARN: Node nano has an empty run list.
[2016-03-20T18:28:16+00:00] INFO: Chef Run complete in 1.671495 seconds
[2016-03-20T18:28:16+00:00] INFO: Running report handlers
[2016-03-20T18:28:16+00:00] INFO: Report handlers complete
[2016-03-20T18:28:16+00:00] INFO: Sending resource update report (run-id: 1fb8ae2a-8455-4b94-b956-99c5a34863ea)
[172.28.128.3]: PS C:\Users\vagrant\Documents&gt;</pre>
  




  <p>If you have the windows cookbook on your server, you can use chef-client -r recipe[windows] to converge its default recipe.</p><h2>What else is missing?</h2><p>The other gaping hole here is that <a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a> cannot converge a Nano test instance. Two main reasons prevent this:</p><ol><li>There is no provisioner that can get around the MSI install issue</li><li>PowerShell calls from the <a target="_blank" href="https://github.com/WinRb/WinRM">winrm</a> gem will fail because Nano's PowerShell.exe does not accept the -EncodedCommand argument.</li></ol><p>One might think that the first issue could be worked around by creating a custom provisioner to extract the chef files on the host and copy them to the instance. However, <a target="_blank" href="https://github.com/WinRb/winrm-fs">winrm-fs</a>, used by Test-Kitchen to copy files to a windows test instance, makes ample use of the -EncodedCommand argument, so the second issue blocks that as well. <a target="_blank" href="https://twitter.com/sneal78">Shawn Neal</a> and I are currently working on a <a target="_blank" href="https://github.com/WinRb/WinRM/pull/191">v2 of the winrm gem</a> that will use a more modern protocol for PowerShell calls to get around the -EncodedCommand limitation and provide many other benefits as well.</p><h2>Try it out!</h2><p>Regardless of the missing pieces, as you can see, one can at least play with Chef and Nano today and get a glimpse of what life might be like in the future. 
It's no flying car, but it beats our multi-gigabyte Windows images of today.</p>]]></description></item><item><title>Run Kitchen tests in Travis and Appveyor</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 06 Mar 2016 08:47:53 +0000</pubDate><link>http://www.hurryupandwait.io/blog/run-kitchen-tests-in-travis-and-appveyor-using-the-kitchen-machine-driver</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:56dbc7e8e32140a3347256d2</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png" data-image-dimensions="678x206" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=1000w" width="678" height="206" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457253955233-X09QODWVJB5NB5O2RWJU/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>There are a few cookbook repositories I help to maintain, and services like <a target="_blank" href="https://travis-ci.org/">Travis</a> and <a target="_blank" href="https://ci.appveyor.com/">Appveyor</a> provide a great enhancement to the pull request workflow both for contributors and maintainers. As a contributor I get fast feedback without bothering any human on whether my PR meets a minimum bar of quality and complies with the basic code style standards of the maintainers. As a maintainer, I can more easily triage incoming contributions and simply not waste time reviewing failed commits.</p><p>It's common to use Travis and Appveyor for unit tests and even longer-running functional tests as long as they take a reasonable amount of time to run. However, in the <a target="_blank" href="https://www.chef.io/">Chef</a> cookbook world, you usually do not see them include <a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a> tests, which are the cookbook equivalent of end-to-end functional or integration tests. I have found myself wishing for an easy way to kick off cloud-based Test-Kitchen runs triggered by pull request pushes. Often there just is no way to know for sure if a commit breaks a cookbook or if it "works" unless I manually pull down the PR and invoke the kitchen test (usually via vagrant). 
This takes time and I wish I could just see a green check mark or red 'X' automatically without doing anything.</p><p>This week, while reviewing PRs for the <a target="_blank" href="https://github.com/chocolatey/chocolatey-cookbook">Chocolatey cookbook</a>, I set out to run the kitchen tests in Appveyor and did not see any readily available way to do so until <a target="_blank" href="https://twitter.com/StuartPreston">Stuart Preston</a> steered me to the <a target="_blank" href="https://github.com/test-kitchen/test-kitchen/blob/master/lib/kitchen/driver/proxy.rb">proxy driver</a> included with test-kitchen.</p><p>In this post I'll talk about some other patterns folks have used to run kitchen tests in Travis, why I went with the proxy driver, and how to go about using it in your cookbook repository.</p><h2>Using Docker</h2><p>One approach is to install and start Docker inside the Travis VM and then use <a target="_blank" href="https://github.com/portertech/kitchen-docker">kitchen-docker</a> or <a target="_blank" href="https://github.com/someara/kitchen-dokken">kitchen-dokken</a> to run the test instances inside the Travis build. This is a very viable approach. You can fire up a Docker container very quickly and most cookbooks will run fine containerized. There are a few downsides:</p><ul><li>Installing and configuring Docker, while fairly straightforward, may require several lines of setup script.</li><li>Some cookbooks may not run well in containers.</li><li>Not a viable approach for Windows-based cookbooks. Looking forward to when it is.</li></ul><h2>Leveraging AWS or other cloud alternatives</h2><p>Another popular approach is to run the kitchen tests from Travis but reach out to another cloud provider to host the actual test instances. 
This also has its drawbacks:</p><ul><li>Unless you are using the AWS free tier, it's not free, and if you are, it's not fast.</li><li>You have to stash keys in your travis.yml</li></ul><h2>I have an ephemeral machine, why can't I use it?</h2><p>Both of the above solutions don't sit well with me because it just seems suboptimal to have to bring up another "instance," whether it be container or cloud-based, when Travis or Appveyor just did that for me. The beauty of these services is that they bring up an isolated and full-fledged test VM, so why do I need to bring up another one?</p><h2>Most kitchen drivers are built to go "somewhere else"</h2><p>This usually makes perfect sense. I don't want to run kitchen tests and have the test instance BE my machine because the state of that machine will be changed and I want to remain isolated from those changes and easily undo them. However, in Travis or Appveyor, I AM somewhere else.</p><p>Yet having looked, I found no kitchen driver that would converge and test locally. The closest thing was <a target="_blank" href="https://github.com/neillturner/kitchen-ssh">kitchen-ssh</a>, which simply communicates with a test instance over SSH and does not try to spin up a VM. Surely one can easily just SSH to localhost. However, it's just SSH and does not also leverage WinRM when talking to Windows.</p><h2>Enter the built-in proxy driver</h2><p>I wanted a driver that could run commands using whatever kitchen transport was configured (SSH or WinRM) but would not try to interact with a hypervisor or cloud to start a separate instance. If I just point the transport to localhost, that should succeed in running locally. Of course, a "local" transport that would run native shell commands locally and use the native filesystem for file operations would be one step better. 
However, pointing SSH and WinRM to localhost seems to work just fine and requires no additional work.</p><h2>Using the proxy driver</h2><p>The chocolatey-cookbook repository includes a <a target="_blank" href="https://github.com/chocolatey/chocolatey-cookbook/blob/master/appveyor.yml">"real world" example</a> that uses Appveyor. I'll also briefly walk through both Appveyor and Travis samples here.</p><h3>The .kitchen.yml or .kitchen.{travis or appveyor}.yml file</h3><p>First, we'll look at the <a target="_blank" href="https://docs.chef.io/config_yml_kitchen.html">.kitchen.yml</a> file that may be the same for either Travis or Appveyor. Most actual cookbook repositories will likely use .kitchen.travis.yml or .kitchen.appveyor.yml since they will want to use .kitchen.yml for locally run tests.</p>
























  
    <pre class="source-code">---
driver:
  name: proxy
  host: localhost
  reset_command: "exit 0"
  port: &lt;%= ENV["machine_port"] %&gt;
  username: &lt;%= ENV["machine_user"] %&gt;
  password: &lt;%= ENV["machine_pass"] %&gt;

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-14.04
  - name: windows-2012R2

verifier:
  name: inspec

suites:
  - name: default
    run_list:
      - recipe[machine_test]</pre>
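<p>Note that no transport is declared above. Unless one is declared explicitly, Test-Kitchen defaults to SSH except when the platform name starts with "win", in which case it uses WinRM. A rough Ruby sketch of that defaulting rule (the method name is illustrative, not Kitchen's API):</p>

```ruby
# Mirrors Kitchen's default transport selection: WinRM for platform names
# beginning with "win", SSH for everything else.
def default_transport(platform_name)
  platform_name.downcase.start_with?('win') ? :winrm : :ssh
end
```

<p>So with the platforms above, windows-2012R2 gets WinRM and ubuntu-14.04 gets SSH, both pointed at localhost.</p>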
  




  <p>The driver section uses the proxy driver and configures a host, port, username, and password for the transport. The host setting is required, and we will use "localhost".&nbsp;It also uses a reset_command, which can be used to run a command on the instance during the "create" phase, but we don't need it to do anything on a fresh Appveyor or Travis instance. The reset_command is required, so we just specify "exit 0", which should work cross-platform to do nothing.</p><p>The transport being used is determined by the <a target="_blank" href="https://github.com/test-kitchen/test-kitchen/blob/master/lib/kitchen/config.rb#L212">same Kitchen logic</a>: either the transport is explicitly declared in the YAML or Kitchen will default to SSH unless the platform name starts with "win." This example configures the credentials and port from environment variables. That will be clearer when we look at the .travis.yml and appveyor.yml files.</p><h3>.travis.yml</h3>
























  
    <pre class="source-code">language: ruby

env:
  global:
    - machine_user=travis
    - machine_pass=travis
    - machine_port=22
    - KITCHEN_YAML=.kitchen.travis.yml

rvm:
  - 2.1.7

sudo: required
dist: trusty

before_install:
  - sudo usermod -p "`openssl passwd -1 'travis'`" travis

script:
  - bundle exec rake
  - bundle exec kitchen verify ubuntu

branches:
  only:
  - master</pre>
  




  <p>Here we set the port, username, and password environment variables, and we set the KITCHEN_YAML variable to our Travis-targeted .kitchen.travis.yml. We also give the travis user (which Travis runs under) a password. Our test "script" runs kitchen verify against the ubuntu platform.</p><h3>appveyor.yml</h3>
























  
    <pre class="source-code">version: "master-{build}"

os: Windows Server 2012 R2
platform:
  - x64

environment:
  machine_user: test_user
  machine_pass: Pass@word1
  machine_port: 5985
  KITCHEN_YAML: .kitchen.appveyor.yml
  SSL_CERT_FILE: c:\projects\kitchen-machine\certs.pem

  matrix:
    - ruby_version: "21"

clone_folder: c:\projects\kitchen-machine
clone_depth: 1
branches:
  only:
    - master

install:
  - ps: net user /add $env:machine_user $env:machine_pass
  - ps: net localgroup administrators $env:machine_user /add
  - ps: $env:PATH="C:\Ruby$env:ruby_version\bin;$env:PATH"
  - ps: gem install bundler --quiet --no-ri --no-rdoc
  - ps: Invoke-WebRequest -Uri http://curl.haxx.se/ca/cacert.pem -OutFile c:\projects\kitchen-machine\certs.pem

build_script:
  - bundle install || bundle install || bundle install

test_script:
  - bundle exec kitchen verify windows</pre>
  




  <p>This one is a bit more involved, but still not too bad. Like we did in the .travis.yml, we assign our port, username, and password environment variables, and also set the .kitchen.yml file override. Here we actually go ahead and create a new user and make it an administrator. This is because it's tough to get at the Appveyor user's password, and there may be issues changing its password in the middle of an active session.</p><p>Another thing to note for Windows is that we need to do some special SSL cert setup. Ruby and openssl do not use the native Windows certificate store to manage certificate authorities, so we go ahead and download a CA bundle and point openssl at it with the SSL_CERT_FILE variable. Those who use the <a target="_blank" href="https://github.com/chef/chef-dk">Chef-dk</a> can be thankful that it hides this detail from you, but there is no chef-dk in this environment.</p><p>Finally, our test_script invokes all tests in the windows platform.</p><h2>The Results</h2><p>Now I can see a complete Test-Kitchen log of converged resources and verifier test results right in my Travis and Appveyor build output.</p><p>Here is the end of the Travis log:</p>
  

    
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png" data-image-dimensions="663x504" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=1000w" width="663" height="504" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251917410-SXD8QIHJKOXMW34THCIZ/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>And here is appveyor:</p>
  

    
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png" data-image-dimensions="672x533" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=1000w" width="672" height="533" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1457251984591-U2PNMH7G7PWVM582FUBV/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p id="yui_3_17_2_3_1457243863499_153451"><br></p>]]></description></item><item><title>Need an SSH client on Windows? Don't use Putty or CygWin...use Git</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sat, 20 Feb 2016 18:32:36 +0000</pubDate><link>http://www.hurryupandwait.io/blog/need-an-ssh-client-on-windows-dont-use-putty-or-cygwinuse-git</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:56c363f101dbae0dedb3671f</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png" data-image-dimensions="600x347" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=1000w" width="600" height="347" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1455992903216-WOZ82BF3F4TVUY0J2ECV/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
          
          <figcaption class="image-caption-wrapper">
            <p>SSH in a native powershell window. I prefer to use console2 and enjoy judging others who don't (conemu is good too).</p>
          </figcaption>
        
      
        </figure>
      

    
  


  





  <p>Every once in a while I hear of Windows users trying to find a good SSH client for Windows to connect to their Linux boxes. For the longest time, a couple of the more popular choices have been <a target="_blank" href="https://www.cygwin.com/">Cygwin</a> and <a target="_blank" href="http://www.putty.org/">Putty</a>.</p><p>These still work today but I personally find the experience of both to be sub-optimal. There are lots of annoyances I find in each, but the main thing they both lack is an integrated SSH experience in the shell console I use for everything else (mainly powershell) day in/day out. Cygwin and Putty run in separate console experiences. I just want to type 'ssh mwrock@blahblah' in my console of choice and have it work.</p><h2>You already have the software</h2><p>Ok, maybe not...but it's very likely that if you are reading this and find yourself needing to SSH here and there, you also use Git. Well, many are unaware that <a target="_blank" href="https://git-for-windows.github.io/">git for windows</a>&nbsp;bundles several Linux-familiar tools. Many might use these in the git bash shell.</p><h2>Friends don't let friends use the git bash shell on windows</h2><p>Don't get me wrong here - I'm not anti-bash when I am on Linux. It's great. But I find tools like bash and cygwin offer a "worst of both worlds" experience on Windows. You don't need to run in the bash window to access SSH. You just need to make a small modification to your path. Assuming git was installed to C:/Program Files/Git (the default location), just add C:/Program Files/Git/usr/bin to your path:</p>
























  
    <pre class="source-code">$new_path = "$env:PATH;C:/Program Files/Git/usr/bin"
$env:PATH=$new_path
[Environment]::SetEnvironmentVariable("path", $new_path, "Machine")
</pre>
  




  <p>Bam! Now type 'ssh':</p>
























  
    <pre class="source-code">C:\dev\WinRM [psrp +1 ~1 -0 !]&gt; ssh
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
       [-D [bind_address:]port] [-E log_file] [-e escape_char]
       [-F configfile] [-I pkcs11] [-i identity_file]
       [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
       [-O ctl_cmd] [-o option] [-p port]
       [-Q cipher | cipher-auth | mac | kex | key]
       [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
       [-w local_tun[:remote_tun]] [user@]hostname [command]
C:\dev\WinRM [psrp +1 ~1 -0 !]&gt;
</pre>
  




  <h2>Don't have git?</h2><p>Well that's a problem easily solved. Just grab <a target="_blank" href="https://chocolatey.org/">chocolatey</a> if you don't have it already and install git:</p>
























  
    <pre class="source-code">iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
choco install git -params "/GitAndUnixToolsOnPath"</pre>
  




  <p>Note that the "GitAndUnixToolsOnPath" param sets the environment variable for you.&nbsp;You will need to open a new shell for git and ssh to be available in your console, and then you are ready to SSH to your heart's content!</p>]]></description></item><item><title>Sane authentication/encryption arrives to ruby based cross platform WinRM remote execution</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 24 Jan 2016 01:19:26 +0000</pubDate><link>http://www.hurryupandwait.io/blog/sane-authenticationencryption-arrives-to-ruby-based-cross-platform-winrm-remote-execution</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:56a3111fa2bab880d922fe7f</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png" data-image-dimensions="491x658" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=1000w" width="491" height="658" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1453598210111-9I5IJPKNKOG5KTGCFO1N/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
          
          <figcaption class="image-caption-wrapper">
            <p>Dan Wanek's notes when first developing the <a target="_blank" href="https://github.com/WinRb/rubyntlm">ruby implementation</a> of NTLM.</p>
          </figcaption>
        
      
        </figure>
      

    
  


  





  <p>This week the <a target="_blank" href="https://github.com/WinRb/WinRM/blob/v1.6.0/changelog.md">WinRM ruby gem version 1.6.0</a> was released and there is really just one new feature it delivers, but it is significant: NTLM/Negotiate authentication. Simply stated, this provides a safe, low-friction authentication and encryption mechanism long available in native Windows remote management tooling but absent in cross platform tools that implement the <a target="_blank" href="http://go.microsoft.com/fwlink/p/?linkid=73965">WinRM protocol</a>.</p><p>In this post I'll highlight why I think this is a big deal, share a little history behind the feature release and discuss why authentication/encryption over WinRM has historically been a problem in the cross platform ecosystem.</p><h2>Tell me again? Why is this awesome?</h2><p>This is awesome because it means that, assuming you are connecting to a Windows server OS of Windows 2012 R2 or later, you can now securely connect from either a Windows or Linux application leveraging the ruby WinRM gem without any preconfiguration of the target machine. Client OS SKUs (like Windows 7, 8 and 10) and server versions 2008 R2 and prior still need to have WinRM explicitly enabled.</p><p>No SSL setup and no WinRM configuration that compromises its security. Just to be clear, SSL is still a good idea and encouraged for production nodes, but if you are just trying to get your feet wet with Windows remote execution using tools like <a target="_blank" href="https://www.chef.io/">Chef</a> or <a target="_blank" href="https://www.vagrantup.com/">Vagrant</a>, it is now much easier to accomplish and keep the security bar at a sane level.</p><h2>How did this come to be?</h2><p>I am a developer at Chef and we use the WinRM gem inside of <a target="_blank" href="https://github.com/chef/knife-windows">Knife-Windows</a> in order to execute remote commands on a Windows node from either Windows or Linux. 
We use a monkey patch gem called <a target="_blank" href="https://github.com/chef/winrm-s">winrm-s</a> to provide Negotiate authentication and encryption, but it only works from Windows workstations because it leverages the native Win32 APIs available only on Windows. Also, it's just monkey patches on top of the winrm and <a target="_blank" href="https://github.com/nahi/httpclient">httpclient</a> gems and is therefore quite fragile.</p><p>My initial objective was to simply port the patches in winrm-s to winrm and possibly the httpclient gem so that we could provide the same functionality but the implementation would live downstream where it belongs.</p><p>Well, I remembered I had seen a <a target="_blank" href="https://github.com/WinRb/WinRM/pull/144">PR in the WinRM</a> repo that mentioned providing a Negotiate auth implementation. It was submitted by <a target="_blank" href="https://github.com/zenchild">Dan Wanek</a>, the original author of the WinRM gem. The PR was from late 2014 and I never really looked at it but filed it away in my mind as something worth looking at when the time came to drop winrm-s. So the time came and I quickly noticed that it did not seem to use any native Win32 APIs but leveraged another gem, <a target="_blank" href="https://github.com/WinRb/rubyntlm">rubyntlm</a> (also authored by Dan Wanek), which appeared to be a pure ruby implementation of <a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc236621.aspx">NTLM/Negotiate</a>. That means it would work on Linux in addition to Windows, which would be really, really great.</p><p>It was pretty straightforward to get working and needed just a small amount of tweaking to be production ready - impressive since it had been dormant for over a year. Once it was up and running I verified that it indeed worked both from Windows and Linux. Nice work Dan!</p><h2>The cross platform Windows remote execution landscape</h2><p>I've written several posts covering different aspects of WinRM. 
Like <a target="_blank" href="http://www.hurryupandwait.io/blog/safely-running-windows-automation-operations-that-typically-fail-over-winrm-or-powershell-remoting">this one</a> and <a target="_blank" href="http://www.hurryupandwait.io/blog/understanding-and-troubleshooting-winrm-connection-and-authentication-a-thrill-seekers-guide-to-adventure">this one</a> and also <a target="_blank" href="http://www.hurryupandwait.io/blog/fixing-winrm-firewall-exception-rule-not-working-when-internet-connection-type-is-set-to-public">this one</a>. It's a protocol that many native Windows users likely take for granted and perhaps even forget they are using when they use its more full-featured cousin: Powershell Remoting. However, those who don't have access to direct Powershell - because they either run on other operating systems that need to talk to Windows machines, or are on Windows but use tools that are portable to non-Windows platforms - are likely using a library that implements the WinRM protocol.</p><p>There are many such libraries:</p><ul><li>The ruby <a target="_blank" href="https://github.com/WinRb/WinRM">WinRM</a> gem (covered in this post)</li><li><a target="_blank" href="https://github.com/masterzen/winrm">WinRM for Go</a></li><li><a target="_blank" href="https://pypi.python.org/pypi/pywinrm">pywinrm</a> python library</li><li><a target="_blank" href="https://github.com/xebialabs/overthere">Xebialabs Overthere</a> Java library for remote execution</li></ul><p>This is just a list of the most popular libraries but there are many more. These are used by well-known "DevOps" automation tools such as <a target="_blank" href="https://www.chef.io/">Chef</a>, <a target="_blank" href="http://www.ansible.com/">Ansible</a>, <a target="_blank" href="https://www.packer.io/">Packer</a>, <a target="_blank" href="https://www.vagrantup.com/">Vagrant</a> and others.</p><p>WinRM is a simple SOAP based client/server protocol. 
So the above libraries merely implement a web service client that issues requests to a WinRM Windows service and interprets the responses. Basically these exchanges result in:</p><ul><li>Creating a Shell</li><li>Creating a Command</li><li>Requesting Command Output and Exit Code</li></ul><p>The output can include both Standard Output and Standard Error streams.</p><p>Kind of like SSH but not.</p><h2>What about Powershell Remoting</h2><p>I'll save the details for another post but be aware there is a more modern protocol called the Powershell Remoting Protocol (<a target="_blank" href="http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/%5BMS-PSRP%5D.pdf">PSRP</a>). There is no cross platform implementation that I am aware of. It uses the same SOAP based wire protocol -&nbsp;<a target="_blank" href="https://msdn.microsoft.com/en-us/library/cc251526.aspx">MS-WSMV</a> (Web Services Management Protocol Extensions for Windows Vista). I just love the "shout out" to Vista here.</p><p>PSRP is more feature rich but more difficult to implement. I've been playing with a <a target="_blank" href="https://github.com/WinRb/WinRM/compare/psrp">ruby based partial implementation</a> and will blog more details soon. 
You can run powershell commands with WinRM but you are shelling out to Powershell.exe from the traditional "fisher-price" Windows command shell.</p><h2>Securing the transport</h2><p>If you are using native WinRM on Windows (likely via Powershell Remoting), the most popular methods of authentication and encryption are (but are not limited to):</p><h3>Negotiate</h3><p>To quote the <a target="_blank" href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa384465(v=vs.85).aspx#winrm.gloss_negotiate_authentication">Windows Remote Management Glossary</a>, Negotiate Authentication is defined:</p><blockquote>A negotiated, single sign on type of authentication that is the Windows implementation of Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO). SPNEGO negotiation determines whether authentication is handled by Kerberos or NTLM. Kerberos is the preferred mechanism. Negotiate authentication on Windows-based systems is also called Windows Integrated Authentication.</blockquote><p>Mmmmmmm. Let's all take a minute or two and think about just what this means and how it might guide our relationships and shape our perspectives on global social injustice.</p><p>Yeah, it's complex, right? Anyway, it's also secure. It's much better than emailing your password to the contacts in your address book.</p><h3>Kerberos</h3><p>You know, this Windows Remote Management Glossary is pretty darn smart. Let's just see what it has to say about Kerberos:</p><blockquote>A method of mutual authentication between the client and server that uses encrypted keys. For computers running on a Windows-based operating system, the client account must be a domain account in the same domain as the server. When a client uses default credentials, Kerberos is the authentication method if the connection string is not one of the following: localhost, 127.0.0.1, or [::1].</blockquote><p>Takeaways here are that it's "mutual", "encrypted" and, importantly, "for computers." 
Again,&nbsp;better than emailing your password to the contacts in your address book.</p><h3>Basic Authentication</h3><p>Secrets are hard. Not only technically but emotionally as well. There is no small effort involved in hiding the truth. Well, thank goodness Basic Authentication makes it easy to share our secrets. Ah, sweet freedom. &nbsp;Here credentials are transmitted in plain text. I like to think of it as a "giving" protocol. Sure, we all think our secrets are worth being kept, but are they? Really? Just use Basic Authentication and you might find out.</p><h3>SSL</h3><p>This really isn't so much an authentication mechanism, but it is a familiar means of securely transporting data from one point to another where the contents are only to be accessible to the sender and receiver. WinRM communication can use either HTTP or HTTPS (SSL). HTTP is the default but HTTPS provides an added layer of encryption.</p><p>One of the critical keys to securely using SSL is having a <strong>valid</strong>&nbsp;certificate issued by a reputable certification authority that serves to ensure that those on either side of the communication are who they say they are. Without this, for example when using a non-validated self-signed certificate, you run the risk of a "Man in the Middle Attack". Not a nice man who offers to fix your tire, provide financial assistance or offer grievance counseling after the loss of a loved family member or pet. Rather, a mean man who wants to take what you have and not give it back. He can do that because the authenticity of your certificate cannot be validated and therefore he can stand in the middle between you and the remote Windows machine pretending to be that machine.</p><h2>The dilemma of sane encryption using cross platform libraries</h2><p>Some of the dilemmas I'll mention are shared in both native Windows and cross platform libraries. For example, the friction of getting SSL up and running is pretty much the same on both sides. 
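</p>

  <p>The self-signed certificate risk called out above is easy to see with Ruby's own OpenSSL bindings (a standalone illustration, not WinRM-specific code): an empty trust store - which is effectively what you have when no reputable authority vouches for the cert - refuses to verify a self-signed certificate until you make the explicit decision to trust it:</p>

  
    <pre class="source-code">require 'openssl'

# Build a throwaway self-signed certificate: issuer == subject.
key  = OpenSSL::PKey::RSA.new(2048)
name = OpenSSL::X509::Name.parse('/CN=my-windows-box')
cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = name
cert.issuer     = name
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# No CA in an empty store vouches for the cert, so verification fails.
# This is exactly the gap a man in the middle exploits.
store = OpenSSL::X509::Store.new
puts store.verify(cert)  # false

# Trusting the cert is an explicit, out-of-band decision.
store.add_cert(cert)
puts store.verify(cert)  # true</pre>
  

  <p>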
The key difference is that cross platform libraries may not (usually don't, in fact) have access to all the authentication mechanisms listed above, or getting them installed and configured is far less than clear.</p><h3>Good news: SSL is everywhere. Bad news: SSL is a pain to set up everywhere</h3><p>Of course this statement is somewhat relative. There are those who are familiar with the basic rules of computer security and work with the knobs and levers of these mechanisms frequently enough that they are straightforward to set up. Even among the technically savvy, this is NOT the majority, and it's especially true in the world of WinRM vs. SSH. Not because SSH users are smarter, but because in Linux land SSH is just so ubiquitous and pervasive that you deal with it regularly enough for it to stay familiar.</p><p>Here are some points to highlight the friction and pitfalls of SSL over WinRM:</p><ul><li>It's not on by default and there are several steps to set it up</li><li>Without a "valid" certificate it's still not secure (but better than Basic Authentication over HTTP), and valid certificates take effort to obtain.</li><li>Bootstrapping problem: How do I get the valid certificate onto the remote machine before establishing a secure connection? There are several ways to do this depending on your cloud provider or image prep system, but that assumes you have a cloud provider or an image prep system.</li></ul><h3>Kerberos: I can't tell you in a paragraph how to set it up and get it working</h3><p>First, just getting the right library can be the worst part - one that's compatible with your OS, architecture and the language runtime of your WinRM library. Most cross platform libraries support it but it's less than trivial to get working.</p><h3>Negotiate is likely not implemented</h3><p>Well, on Ruby it is now; however, it's the only non-native Windows implementation I am aware of (which does not mean that there are not others). 
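</p>

  <p>For the Ruby case, opting in is just a matter of picking the negotiate transport when building the connection. Here is a minimal sketch - the endpoint, user and password are placeholders, and the call pattern mirrors the gem's own examples:</p>

  
    <pre class="source-code">require 'winrm'

# :negotiate selects the new NTLM/Negotiate transport added in 1.6.0.
# No SSL listener and no loosened WinRM settings are needed on the target.
endpoint = 'http://target_machine:5985/wsman'
winrm = WinRM::WinRMWebService.new(endpoint, :negotiate, user: 'user', pass: 'pass')
winrm.run_powershell_script('ipconfig /all') do |stdout, stderr|
  STDOUT.print stdout
  STDERR.print stderr
end</pre>
  

  <p>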
If it is implemented - and again, it is on Ruby - it's secure and it "Just Works."</p><h3>Basic auth is available everywhere, horrible everywhere and slightly painful to set up, but easier than SSL</h3><p>If you dig into almost all of the Readme files of the cross platform WinRM libraries, they will all tell you how to run horrible commands on your computer that put out the Welcome Mat for the bad guys. I've listed these in a few posts and for once I will not do so here.</p><p>Not only is running these commands a bad idea in general (certainly on production nodes), but again they represent a bootstrapping problem. Before you can successfully talk to the machine, they have to be run. There are ways to accomplish this, but they again rely on cloud APIs or pre-baked images.</p><h2>Next steps, caveats and how we can make this even better</h2><p>It's totally awesome that NTLM/Negotiate authentication is now available as a cross platform option, but it only lays down the foundation.</p><h3>It's not the default and consuming applications need to "turn it on"</h3><p>When using the WinRM gem, consumers must specify which authentication transport they want to use. There is no default. So today applications that specify ":plaintext, basic_auth: true" will continue to use basic authentication.</p><h3>Chef and Test-Kitchen support coming very soon, and Vagrant support on the way</h3><p>I am a developer at Chef and my PR for porting this into Knife-Windows is imminent. 
I will also be porting it to <a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a>'s winrm transport and I plan to submit a PR to do the same for Vagrant.</p><p><strong>Update:&nbsp;</strong>PRs to <a target="_blank" href="https://github.com/chef/knife-windows/pull/334">knife-windows</a>, <a target="_blank" href="https://github.com/test-kitchen/test-kitchen/pull/902">test-kitchen</a> and <a target="_blank" href="https://github.com/mitchellh/vagrant/pull/6922">vagrant</a> are all submitted.</p><h3>Still no support outside of Ruby</h3><p>For instance, Ansible (Python) and Packer (Go) - both very popular tools that help manage a data center - have yet to have working Negotiate auth implementations available.</p><p>The fact is it's hard and tedious to implement something like an encryption specification solely from following a specification PDF. Further, there are other ways to get secure, so it's not a "show stopper."&nbsp;However, as more become aware of the <a target="_blank" href="https://github.com/WinRb/rubyntlm">rubyntlm</a> library, referencing it makes such an implementation much easier.</p>]]></description></item><item><title>Whats new in the ruby WinRM gem 1.5</title><dc:creator>Matt Wrock</dc:creator><pubDate>Mon, 11 Jan 2016 18:02:45 +0000</pubDate><link>http://www.hurryupandwait.io/blog/whats-new-in-the-ruby-winrm-gem-15</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:56928cb10e4c1197d2472b82</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png" data-image-dimensions="460x460" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=1000w" width="460" height="460" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1452453989444-2L9OKF4Z3WWFDHSPIPEH/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>It's been well over a year since <a target="_blank" href="https://github.com/afiune">Salim Afiune</a> started the effort of making <a target="_blank" href="http://kitchen.ci/">Test-Kitchen</a> (a popular infrastructure testing tool) compatible with Windows. I got involved in late 2014 to improve the performance of copying files to Windows test instances from Windows or Linux hosts. There was a lot of dust flying in the air as this work was nearing completion in early 2015. <a target="_blank" href="https://github.com/sneal">Shawn Neal</a> nicely refactored some of the work I had submitted to the winrm gem into a spin-off gem called <a target="_blank" href="https://github.com/WinRb/winrm-fs">winrm-fs</a>. At the same time, <a target="_blank" href="https://github.com/fnichol">Fletcher Nichol</a> was feverishly working to polish off the Test-Kitchen work by Chef Conf 2015 and was making some optimizations on Shawn's refactorings, and those found their way into a new gem, <a target="_blank" href="https://github.com/test-kitchen/winrm-transport">winrm-transport</a>.</p><p>Today we are releasing a new version of the winrm gem that pulls a lot of this work into the core winrm classes and will make it possible to pull <a target="_blank" href="https://github.com/WinRb/winrm-fs/pull/28">the rest</a> into winrm-fs, centralizing ruby winrm client implementations and soon deprecating the winrm-transport gem. If you use the winrm gem, there are now some changes available that improve the performance of cross platform remote execution calls to Windows along with a couple of other new features. This post summarizes the changes and explains how to take advantage of them.</p><h2>Running multiple commands in one shell</h2><p>The most common pattern for invoking a command in a windows shell has been to use the run_cmd or run_powershell_script methods of the WinRMWebService class.</p>
























  
    <pre class="source-code">endpoint = 'https://other_machine:5986/wsman'
winrm = WinRM::WinRMWebService.new(endpoint, :ssl, user: 'user', pass: 'pass')
winrm.run_cmd('ipconfig /all') do |stdout, stderr|
  STDOUT.print stdout
  STDERR.print stderr
end

winrm.run_powershell_script('Get-Process') do |stdout, stderr|
  STDOUT.print stdout
  STDERR.print stderr
end</pre>
  




  <p>Under the hood both run_cmd and run_powershell_script each make about 5 round trips to execute the command and return its output:</p><ol><li>Open a "shell" (the equivalent of launching a cmd.exe instance)</li><li>Create a command</li><li>Request command output (potentially several calls for long running streamed output or long blocking calls)</li><li>Terminate the command</li><li>Close the shell</li></ol><p>The first call, opening the shell, can be very expensive. It has to spawn a new process and involves authenticating the credentials with Windows. If you need to make several calls using these methods, a new shell is spawned for each one. Things are even worse with PowerShell, since that spawns yet another process (powershell.exe) from the command shell, incurring the cost of creating a new runspace on each call. Stay tuned for a <a target="_blank" href="https://github.com/WinRb/WinRM/issues/169">future winrm release</a> (likely 1.6) that implements the <a target="_blank" href="http://download.microsoft.com/download/9/5/E/95EF66AF-9026-4BB0-A41D-A4F81802D92C/%5BMS-PSRP%5D.pdf">powershell remoting protocol</a> (psrp) to avoid that extra overhead.</p><p>While there is currently a way to make several calls in the same shell, it's not very friendly or safe (you could easily end up with orphaned processes running on the remote Windows machine). Typically this is not such a big deal because most will batch up several calls into one larger script. But here's the kicker: you are limited to an 8,000-character command in a Windows command shell (again, stay tuned for the psrp implementation to avoid that). So imagine you are copying a 100MB file over the command line: you will have to break that up into 8k chunks. While this may provide an excellent opportunity to review the six previous Star Wars episodes as you wait to transfer your music library, it's far from ideal.</p><h3>Using the CommandExecutor to stream commands</h3><p>The 1.5 release exposes a new class, CommandExecutor, that you can use to run several commands from the same shell. The CommandExecutor provides run_cmd and run_powershell_script methods that simply run a command, collect the output and terminate the command. You get a CommandExecutor by calling create_executor on a WinRMWebService instance. There are two usage patterns:</p>
























  
    <pre class="source-code">endpoint = 'https://other_machine:5986/wsman'
winrm = WinRM::WinRMWebService.new(endpoint, :ssl, user: 'user', pass: 'pass')
winrm.create_executor do |executor|
  executor.run_cmd('ipconfig /all') do |stdout, stderr|
    STDOUT.print stdout
    STDERR.print stderr
  end
  executor.run_powershell_script('Get-Process') do |stdout, stderr|
    STDOUT.print stdout
    STDERR.print stderr
  end
end</pre>
  




  <p>This yields an executor to a block that uses it to make calls. When the block completes, the shell opened by the executor is closed.</p><p>The other pattern:</p>
























  
    <pre class="source-code">endpoint = 'https://other_machine:5986/wsman'
winrm = WinRM::WinRMWebService.new(endpoint, :ssl, user: 'user', pass: 'pass')
executor = winrm.create_executor

executor.run_cmd('ipconfig /all') do |stdout, stderr|
  STDOUT.print stdout
  STDERR.print stderr
end
executor.run_powershell_script('Get-Process') do |stdout, stderr|
  STDOUT.print stdout
  STDERR.print stderr
end

executor.close</pre>
  




  <p>Here we are responsible for the executor and thus for closing the shell it owns. It's important to close the shell because if it remains open, that process will continue to live on the remote machine. Will it dream?...we may never know.</p><h2>Self signed SSL certificates and ssl_peer_fingerprint</h2><p>If you are using a self-signed certificate, you can currently use the :no_ssl_peer_verification option to disable verification of the certificate on the client:</p>
























  
    <pre class="source-code">WinRM::WinRMWebService.new(endpoint, :ssl, :user =&gt; myuser, :pass =&gt; mypass, :basic_auth_only =&gt; true, :no_ssl_peer_verification =&gt; true)</pre>
  




  <p>This is not ideal since it leaves you vulnerable to "man in the middle" attacks. Still not completely ideal, but better than ignoring validation entirely, is to use a known fingerprint and pass it to the :ssl_peer_fingerprint option:</p>
























  
    <pre class="source-code">WinRM::WinRMWebService.new(endpoint, :ssl, :user =&gt; myuser, :pass =&gt; mypass, :ssl_peer_fingerprint =&gt; '6C04B1A997BA19454B0CD31C65D7020A6FC2669D')</pre>
  




  <p>This ensures that all messages are encrypted with a certificate bearing the given fingerprint. Thanks here go to <a target="_blank" href="https://twitter.com/hippiehacker">Chris McClimans</a> (<a target="_blank" href="https://github.com/hh">HippieHacker</a>) for submitting this feature.</p><h2>Retry logic added to opening a shell</h2><p>Especially when provisioning a new machine, it's possible the winrm service is not yet running when you first attempt to connect. The WinRMWebService now accepts new options, :retry_limit and :retry_delay, to specify the maximum number of attempts to make and how long to wait in between. These default to 3 attempts and a 10-second delay.</p>
























  
    <pre class="source-code">WinRM::WinRMWebService.new(endpoint, :ssl, :user =&gt; myuser, :pass =&gt; mypass, :retry_limit =&gt; 30, :retry_delay =&gt; 10)</pre>
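<p>Conceptually, :retry_limit and :retry_delay amount to a simple retry loop around the shell-open call. The following is a minimal, self-contained Ruby sketch of that concept (an illustration only, not the gem's actual implementation; the with_retries helper is hypothetical):</p>

```ruby
# Conceptual sketch of :retry_limit / :retry_delay -- NOT the winrm gem's
# actual code. Retries a flaky operation up to `limit` times, sleeping
# `delay` seconds between attempts, and re-raises the last error on failure.
def with_retries(limit: 3, delay: 10)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= limit
    sleep(delay)
    retry
  end
end

# Example: the "service" only comes up on the third attempt.
calls = 0
result = with_retries(limit: 3, delay: 0) do
  calls += 1
  raise "connection refused" if calls < 3
  "connected"
end
puts result # => connected
```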
  




  <h2>Logging</h2><p>The WinRMWebService now exposes a logger attribute and uses the <a target="_blank" href="https://rubygems.org/gems/logging">logging</a>&nbsp;gem to manage logging behavior. By default this appends to STDOUT and has a level of :warn, but one can adjust the level or add additional appenders.</p>
























  
    <pre class="source-code">winrm = WinRM::WinRMWebService.new(endpoint, :ssl, :user =&gt; myuser, :pass =&gt; mypass)

# suppress warnings
winrm.logger.level = :error

# Log to a file
winrm.logger.add_appenders(Logging.appenders.file('error.log'))</pre>
  




  <p>If a consuming application uses its own logger that complies with the logging API, you can simply swap it in:</p>
























  
    <pre class="source-code">winrm.logger = my_logger</pre>
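<p>"Complies with the logging API" here is plain duck typing: any object that responds to the usual severity methods (debug, info, warn, error, fatal) can be swapped in. A minimal sketch of such a logger (MemoryLogger is a hypothetical class invented for illustration):</p>

```ruby
# A hypothetical logger satisfying the usual severity-method contract
# (debug/info/warn/error/fatal) by recording entries in memory.
class MemoryLogger
  attr_reader :entries

  def initialize
    @entries = []
  end

  # Define one method per severity; each appends [severity, message].
  %i[debug info warn error fatal].each do |severity|
    define_method(severity) { |msg| @entries << [severity, msg] }
  end
end

my_logger = MemoryLogger.new
my_logger.warn("shell open took 3 retries")
puts my_logger.entries.inspect # => [[:warn, "shell open took 3 retries"]]
```

<p>An object like this could then be assigned via winrm.logger = my_logger.</p>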
  




  <h2>Up next: PSRP</h2><p>That's it for WinRM 1.5, but I'm really looking forward to productionizing a <a target="_blank" href="https://github.com/WinRb/WinRM/tree/psrp">spike I recently completed</a> that gets the powershell remoting protocol working, which promises to improve performance and opens up other exciting cross-platform scenarios.</p>]]></description></item><item><title>Vagrant Powershell - The closest thing to vagrant ssh yet for windows</title><dc:creator>Matt Wrock</dc:creator><pubDate>Mon, 28 Dec 2015 13:55:22 +0000</pubDate><link>http://www.hurryupandwait.io/blog/vagrant-powershell-the-closest-thing-to-vagrant-ssh-yet-for-windows</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:5680b43d0ab37790ca5cfbad</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png" data-image-dimensions="823x511" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=1000w" width="823" height="511" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1451310723648-TB40LJO0MCX5HJTKSPTJ/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
          
          <figcaption class="image-caption-wrapper">
            <p>Using "vagrant powershell" to remote into my Windows Nano Server</p>
          </figcaption>
        
      
        </figure>
      

    
  


  





  <p>A couple years ago when I started playing with <a target="_blank" href="https://www.vagrantup.com/">vagrant</a> to set up both Windows and Linux VMs, I wished there was a PowerShell equivalent of Vagrant's <a target="_blank" href="https://docs.vagrantup.com/v2/cli/ssh.html">ssh command</a> that would drop me into a PowerShell session on the Windows vagrant box. Sure, it's simple enough to find the IP and winrm port of the box and run Enter-PSSession with the credentials given to vagrant, but a simple "vagrant powershell" just makes it so much more seamless.</p><p>After gaining some familiarity with ruby, I submitted a <a target="_blank" href="https://github.com/mitchellh/vagrant/pull/4400">PR to the vagrant github repo</a> implementing this command. The command essentially shells out to powershell.exe and runs Enter-PSSession pointing to the vagrant guest using the same credentials that vagrant's winrm communicator uses. Additionally, it temporarily adds the vagrant winrm endpoint to the host's Trusted Host entries, restoring the original entries once the command exits.</p><p>It's been a good while since that submission, but I'm excited to see it released in <a target="_blank" href="https://hashicorp.com/blog/vagrant-1-8.html">this week's first minor version update</a> since that time.</p><h2>Small bug/annoyance</h2><p>Note that the up and down arrow keys do not cycle through command history. While I thought that was working a ways back, it's possible that other changes around vagrant's handling of subprocesses caused this to pop up. At any rate, I just submitted <a target="_blank" href="https://github.com/mitchellh/vagrant/pull/6749">a fix</a> for that today and hope it is merged by the next bug fix release.</p><h2>Only works on Windows</h2><p>The vagrant powershell command is only available on Windows hosts. Currently there is no cross-platform implementation of the Powershell Remoting Protocol.
There is a Ruby-based <a target="_blank" href="https://github.com/WinRb/WinRM/blob/master/bin/rwinrm">winrm repl</a> but it is extremely limited compared to a true PowerShell session. However, the vagrant <a target="_blank" href="https://docs.vagrantup.com/v2/cli/powershell.html">powershell command</a> does provide a --command argument for running ad hoc non-interactive commands on the vagrant box. It would be easy to at least support that on non-Windows hosts.</p>]]></description></item><item><title>Hosting a .Net 4 runtime inside Powershell v2</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 20 Dec 2015 19:46:30 +0000</pubDate><link>http://www.hurryupandwait.io/blog/hosting-a-net-4x-runtime-inside-powershell-v2</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:5676f0ba40667afe3f6a909c</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png" data-image-dimensions="1026x875" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=1000w" width="1026" height="875" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450640680426-WWHBKXG051AC961DVPFN/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>My wish for the readers of this post is that you find this completely irrelevant and wonder why folks would wish to inflict powershell v2 on themselves now that we are on a much improved <a target="_blank" href="http://blogs.msdn.com/b/powershell/archive/2015/12/16/windows-management-framework-wmf-5-0-rtm-is-now-available.aspx">v5</a>. However, the reality is that many, many machines are still running Windows 7 and Server 2008R2 without an upgraded powershell.</p><p>As I was working on <a target="_blank" href="http://boxstarter.org/">Boxstarter</a> 2.6 to support <a target="_blank" href="https://chocolatey.org/">Chocolatey</a> 0.9.9, which now ships as a <a target="_blank" href="https://www.nuget.org/packages/chocolatey.lib/0.9.10-beta-20151210">.net 4 assembly</a>, I had to be able to load it inside Powershell 2 since I still want to support virgin win7/2008R2 environments. Without "help", this will fail because Powershell 2 hosts .Net 3.5. I really don't want to ask users to install an updated WMF prior to using Boxstarter because that violates the core mission of Boxstarter, which is to set up a machine from <strong>scratch</strong>.</p><h2>Adjusting the CLR version system wide</h2><p>So after some investigation I found several posts telling me what I already knew, which included the following solutions:</p><ol><li>Upgrade to WMF 3 or higher</li><li>Create or edit a Powershell.exe.config file in C:\WINDOWS\System32\WindowsPowerShell\v1.0 setting the supportedRuntime to .net 4</li><li>Edit the <span>hklm\software\microsoft\.netframework registry key to only use the latest CLR</span></li></ol><p>I have already mentioned why option 1 was not an option. Options 2 and 3 are equally unpalatable if you do not "own" the system since both change system-wide behavior. I just want to change the behavior when my application is running.</p><h2>An application-scoped solution</h2><p>So after more digging <a target="_blank" href="http://www.paraesthesia.com/archive/2009/10/29/complus_version-and-the-.net-framework-runtime.aspx/">I found</a> an obscure and seemingly undocumented environment variable that can impact the version of the .net runtime loaded:&nbsp;$env:COMPLUS_version. If you set this variable to "v4.0.30319" and then spawn a new process, that process will use the specified version of the .net runtime.</p>
























  
    <pre class="source-code">PS C:\Users\Administrator&gt; $PSVersionTable

Name                           Value
----                           -----
CLRVersion                     2.0.50727.5420
BuildVersion                   6.1.7601.17514
PSVersion                      2.0
WSManStackVersion              2.0
PSCompatibleVersions           {1.0, 2.0}
SerializationVersion           1.1.0.1
PSRemotingProtocolVersion      2.1


PS C:\Users\Administrator&gt; $env:COMPLUS_version="v4.0.30319"
PS C:\Users\Administrator&gt; &amp; powershell { $psVersionTable }

Name                           Value
----                           -----
PSVersion                      2.0
PSCompatibleVersions           {1.0, 2.0}
BuildVersion                   6.1.7601.17514
CLRVersion                     4.0.30319.17929
WSManStackVersion              2.0
PSRemotingProtocolVersion      2.1
SerializationVersion           1.1.0.1</pre>
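<p>The reason this trick works at all is that a spawned process inherits a copy of its parent's environment, so a variable set in the current session affects every process launched from it. Here is a generic, runnable illustration of that inheritance (in Ruby rather than PowerShell, and with a made-up variable name, DEMO_RUNTIME_VERSION; on Windows it is specifically COMPLUS_version that the CLR loader consults):</p>

```ruby
require "rbconfig"

# Set a variable in this process's environment...
ENV["DEMO_RUNTIME_VERSION"] = "v4.0.30319"

# ...then spawn a child process and read the inherited copy back out.
child_output = `#{RbConfig.ruby} -e 'print ENV["DEMO_RUNTIME_VERSION"]'`
puts child_output # => v4.0.30319
```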
  




  <h2>A script that runs commands in .net 4</h2><p>So given that this works, I created a <a target="_blank" href="https://github.com/mwrock/boxstarter/blob/master/BoxStarter.Common/Enter-DotNet4.ps1">Enter-DotNet4</a> command that allows one to run ad hoc scripts inside .net 4. Here it is:</p>
























  
    <pre class="source-code">function Enter-Dotnet4 {
&lt;#
.SYNOPSIS
Runs a script from a process hosting the .net 4 runtime

.DESCRIPTION
This function will ensure that the .net 4 runtime is installed on the
machine. If it is not, it will be downloaded and installed. If running
remotely, the .net 4 installation will run from a scheduled task.

If the CLRVersion of the hosting powershell process is less than 4,
such as is the case in powershell 2, the given script will be run
from a new powershell process that will be configured to host the
CLRVersion 4.0.30319.

.Parameter ScriptBlock
The script to be executed in the .net 4 CLR

.Parameter ArgumentList
Arguments to be passed to the ScriptBlock

.LINK
http://boxstarter.org

#&gt;
    param(
        [ScriptBlock]$ScriptBlock,
        [object[]]$ArgumentList
    )
    Enable-Net40
    if($PSVersionTable.CLRVersion.Major -lt 4) {
        Write-BoxstarterMessage "Relaunching powershell under .net fx v4" -verbose
        $env:COMPLUS_version="v4.0.30319"
        &amp; powershell -OutputFormat Text -ExecutionPolicy bypass -command $ScriptBlock -args $ArgumentList
    }
    else {
        Write-BoxstarterMessage "Using current powershell..." -verbose
        Invoke-Command -ScriptBlock $ScriptBlock -argumentlist $ArgumentList
    }
}

function Enable-Net40 {
    if(!(test-path "hklm:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319")) {
        if((Test-PendingReboot) -and $Boxstarter.RebootOk) {return Invoke-Reboot}
        Write-BoxstarterMessage "Downloading .net 4.5..."
        Get-HttpResource "http://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe" "$env:temp\net45.exe"
        Write-BoxstarterMessage "Installing .net 4.5..."
        if(Get-IsRemote) {
            Invoke-FromTask @"
Start-Process "$env:temp\net45.exe" -verb runas -wait -argumentList "/quiet /norestart /log $env:temp\net45.log"
"@
        }
        else {
            $proc = Start-Process "$env:temp\net45.exe" -verb runas -argumentList "/quiet /norestart /log $env:temp\net45.log" -PassThru 
            while(!$proc.HasExited){ sleep -Seconds 1 }
        }
    }
}
</pre>
  




  <p>This will install .net 4.5 if not already installed and then spawn a new powershell process to run the given commands with the .net 4 runtime hosted.</p><h2>Does not work in a remote shell</h2><p>One scenario where this does not work is if you are remoted into a Powershell v2 machine. The .net 4 CLR will almost immediately crash. My guess is that this is related to the fact that remote shells have an inherently different hosting model and run under wsmprovhost.exe or winrshost.exe.</p><p>The workaround for this in Boxstarter is to call the chocolatey.dll in a <a target="_blank" href="https://github.com/mwrock/boxstarter/blob/master/BoxStarter.Common/Invoke-FromTask.ps1">Scheduled Task</a> instead of using Enter-DotNet4 when running remotely.</p>]]></description></item><item><title>Released Boxstarter 2.6 now with an embedded Chocolatey 0.9.10 Beta</title><dc:creator>Matt Wrock</dc:creator><pubDate>Mon, 14 Dec 2015 06:59:56 +0000</pubDate><link>http://www.hurryupandwait.io/blog/released-boxstarter-26-now-supporting-chocolatey-0910-beta</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:566e3e79cbced62f3798a5f3</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png" data-image-dimensions="600x200" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=1000w" width="600" height="200" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1450076243375-ACXUKK2IA1T364WFMAYD/image-asset.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>Today I released <a target="_blank" href="http://boxstarter.org/">Boxstarter</a> 2.6. This release brings <a target="_blank" href="https://chocolatey.org/">Chocolatey</a> support up to the latest beta release of the Chocolatey core library. In March of this year, Chocolatey released a fully rewritten version <a target="_blank" href="https://github.com/chocolatey/choco/blob/master/CHANGELOG.md#099-march-3-2015">0.9.9</a>. Prior to this release, Chocolatey was released as a Powershell module. Boxstarter intercepted every Chocolatey call and could easily maintain state as both Chocolatey and Boxstarter coexisted inside the same powershell process. With the 0.9.9 rewrite, Chocolatey ships as a .Net executable and creates a separate powershell process to run each package. So there has been a lot of work to create a different execution flow, requiring changes to almost every aspect of Boxstarter.&nbsp;While this may not introduce new Boxstarter features, it does mean one can now take advantage of all new features in Chocolatey today.</p><h2>A non-breaking release</h2><p>This release should not introduce any breaking functionality from previous releases. I have tested many different usage scenarios. I also increased the overall unit and functional test coverage of Boxstarter in this release to be able to more easily validate the impact of the Chocolatey overhaul. That all said, there have been a lot of changes to how Boxstarter and Chocolatey interact and it's always possible that some bugs have hidden themselves away. So please report <a target="_blank" href="https://github.com/mwrock/boxstarter/issues">issues on github</a> as soon as you encounter problems and I will do my best to resolve them quickly. Pull requests are welcome too!</p><h2>Where is Chocolatey?</h2><p>One thing that may come as a surprise to some is that Boxstarter no longer installs Chocolatey. One may wonder, how can this be? Well, Chocolatey now exposes its core functionality via an API (<a target="_blank" href="https://www.nuget.org/packages/chocolatey.lib/0.9.10-beta-20151210">chocolatey.dll</a>). So if you are setting up a new machine with Boxstarter, you will still find the Chocolatey repository in c:\ProgramData\Chocolatey, but no choco.exe. Further, the typical Chocolatey commands (choco, cinst, cup, etc.) will not be recognized on the command line after the Boxstarter run unless you explicitly <a target="_blank" href="https://github.com/chocolatey/choco/wiki/Installation">install Chocolatey</a>.</p><p>You may do just that: install Chocolatey inside a Boxstarter package if you would like the machine setup to include a working Chocolatey command line.</p>
























  
    <pre class="source-code">iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))</pre>
  




  <p id="yui_3_17_2_4_1450065371135_10216">You'd have to be nuts NOT to want that.</p><h2>Running 0.9.10-beta-20151210</h2><p id="yui_3_17_2_4_1450065371135_3826">When I say Boxstarter is running the latest Chocolatey, I really mean the latest prerelease. Why? Because that version has a working WindowsFeatures Chocolatey source feed. When the new version of Chocolatey was released, the WindowsFeature source feed did not make it in. However, it has been recently added, and because it is common to want to toggle Windows features when setting up a machine and many Boxstarter packages make use of it, I consider it an important feature to include.</p>]]></description></item><item><title>Fixing - WinRM Firewall exception rule not working when Internet Connection Type is set to Public</title><dc:creator>Matt Wrock</dc:creator><pubDate>Sun, 08 Nov 2015 15:26:04 +0000</pubDate><link>http://www.hurryupandwait.io/blog/fixing-winrm-firewall-exception-rule-not-working-when-internet-connection-type-is-set-to-public</link><guid isPermaLink="false">53eb0624e4b022b28bd6cc45:53f3208ce4b02368bad86b71:563ef531e4b019ac36cfc051</guid><description><![CDATA[<figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg" data-image-dimensions="468x394" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=1000w" width="468" height="394" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/53eb0624e4b022b28bd6cc45/1446996287431-COVVWNVVHR8O6DNMHQAQ/image-asset.jpeg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
      
        </figure>
      

    
  


  





  <p>You may have seen the following error when either running Enable-PSRemoting or Set-WSManQuickConfig:</p>
























  
    <pre class="source-code">Set-WSManQuickConfig : &lt;f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="2150859113"
Machine="localhost"&gt;&lt;f:Message&gt;&lt;f:ProviderFault provider="Config provider"
path="%systemroot%\system32\WsmSvc.dll"&gt;&lt;f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault"
Code="2150859113" Machine="win81"&gt;&lt;f:Message&gt;WinRM firewall exception will not work since one of the network
connection types on this machine is set to Public. Change the network connection type to either Domain or Private and
try again. &lt;/f:Message&gt;&lt;/f:WSManFault&gt;&lt;/f:ProviderFault&gt;&lt;/f:Message&gt;&lt;/f:WSManFault&gt;
At line:1 char:1
+ Set-WSManQuickConfig -Force
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Set-WSManQuickConfig], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.SetWSManQuickConfigCommand</pre>
  




  <p>This post will explain how to get around this error. There are different ways to do this depending on your operating system version. The Windows 8/2012 workarounds are fairly sane, while Windows 7/2008R2 may seem a bit obtuse.</p><p>This post is inspired by an experience I had this week trying to get a customer's Chef node to connect via WinRM over SSL. Her test node was running Windows 7 and she was getting the above error when trying to enable WinRM. Windows 7 has no way to change the connection type via a native PowerShell cmdlet. I had done this via the command line before on Windows 7 but did not have the commands handy. Further, it had been so long since changing the connection type on Windows 7 via the GUI, I had to fire up my own Win 7 VM and run through it just so I could relay proper instructions to this very patient customer.</p><p>So I write this to run through the different nuances of connection types on different operating systems but primarily to have a known place on the internet where I can stash the commands.</p><h2>TL;DR for Windows 7 or 2008R2 Clients:</h2><p>If you don't care about anything else other than getting past this error on Windows 7 or 2008R2 without ceremonial pointing and clicking, simply run these commands:</p>
























  
    <pre class="source-code"># Get a COM instance of the NetworkListManager
$networkListManager = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"{DCB00C01-570F-4A9B-8D69-199FDBA5723B}"))
$connections = $networkListManager.GetNetworkConnections()

# Set network location to Private (1) for all networks
$connections | % {$_.GetNetwork().SetCategory(1)}</pre>
  




  <p>This works on Windows 8/2012 and up as well, but there are much friendlier commands you can run instead. Unless, of course, you are partial to GUIDs.</p><h2>Connection Types - what do they mean?</h2><p>Windows provides different connection type profiles (or Network Locations), each with a different level of restriction on what connections can be granted to the local computer on the network.</p><p>I have personally always found these types to be confusing yet well meaning. You are perhaps familiar with the message presented the first time you log on to a machine, asking if you would like the computer to be discoverable on the network. If you choose no, the network interface is given a Public network location; if you choose yes, it is Private. For me the confusion is that I equate "public" with "publicly accessible," but here the opposite applies.</p><p>Public network locations have <a target="_blank" href="http://windows.microsoft.com/en-us/windows/what-is-network-discovery#1TC=windows-7">Network Discovery</a> turned off and restrict your firewall for some applications. You cannot create or join Homegroups with this setting. WinRM firewall exception rules also cannot be enabled on a Public network. Your network location must be Private in order for other machines to make a WinRM connection to the computer.</p><h2>Domain Networks</h2><p>If your computer is on a domain, that is an entirely different network location type. On a Domain network, the accessibility of the machine is governed by your domain policies. This network location type is chosen automatically if your machine is part of an Active Directory domain.</p><h2>Working around Public network restrictions on Windows 8 and up</h2><p>When enabling WinRM, client SKUs of Windows (8, 8.1, 10) expose an additional setting that allows the machine to accept WinRM connections on a Public network, but only on the same subnet. By using the -SkipNetworkProfileCheck switch of Enable-PSRemoting or Set-WSManQuickConfig, you can still allow connections to your computer, but those connections must come from other machines on the same subnet.</p>
























  
    <pre class="source-code">Enable-PSRemoting -SkipNetworkProfileCheck</pre>
  




  <p id="yui_3_17_2_3_1446966384213_68937">So this can work for local VMs but will still be too restrictive for cloud-based VMs.</p><h2>Changing the Network Location</h2><p>Once you answer yes or no to the initial question of whether you want the machine to be discoverable, you are never prompted again, but you can change this setting later.</p><p>This is a family-friendly blog, so I am not going to cover how to change the Network Location via the GUI. It can be done, but you are a dirty person for doing so (full disclosure: I have been guilty of this).</p><h3>Windows 8/2012 and up</h3><p>PowerShell version 3 and later exposes cmdlets that allow you to see and change your Network Location. Get-NetConnectionProfile shows you the network location of all network interfaces:</p>
























  
    <pre class="source-code">PS C:\Windows\system32&gt; Get-NetConnectionProfile


Name             : Network  2
InterfaceAlias   : Ethernet
InterfaceIndex   : 3
NetworkCategory  : Private
IPv4Connectivity : Internet
IPv6Connectivity : LocalNetwork</pre>
  




  <p>Note the NetworkCategory above. The Network Location is Private.</p><p>Use the Set-NetConnectionProfile cmdlet to change the location type:</p>
























  
    <pre class="source-code">Set-NetConnectionProfile -InterfaceAlias Ethernet -NetworkCategory Public</pre>
  




  <p>or</p>
























  
    <pre class="source-code">Get-NetConnectionProfile | Set-NetConnectionProfile -NetworkCategory Private</pre>
  




  <p>The latter is convenient if you want to ensure that all network interfaces are set to a particular location.</p><h3>Windows 7 and 2008R2</h3><p>You will not have the above cmdlets available on Windows 7 or 2008R2. You can still change the location from the command line, but the commands are far less intuitive. As shown in the TL;DR, here is the command:</p>
























  
    <pre class="source-code"># Get a COM instance of the NetworkListManager
$networkListManager = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"{DCB00C01-570F-4A9B-8D69-199FDBA5723B}"))
$connections = $networkListManager.GetNetworkConnections()

# Set network location to Private (1) for all networks
$connections | % {$_.GetNetwork().SetCategory(1)}</pre>
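  <p>If you want to confirm the change took effect, you can read the categories back from the same COM object. This is just a sketch: it assumes the GetName() and GetCategory() methods of the INetwork interface, which return the network's display name and its numeric location value:</p>
    <pre class="source-code"># Sketch: list each network and its current location (0=Public, 1=Private, 2=Domain)
$networkListManager = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"{DCB00C01-570F-4A9B-8D69-199FDBA5723B}"))
$networkListManager.GetNetworkConnections() | % {
  $network = $_.GetNetwork()
  "{0} : {1}" -f $network.GetName(), $network.GetCategory()
}</pre>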
  




  <p>First we get a reference to a COM instance of an INetworkListManager, which naturally has a Class ID of DCB00C01-570F-4A9B-8D69-199FDBA5723B. We then grab all the network connections and finally set each one to the desired location:</p><ul dir="ltr"><li>0 - Public</li><li>1 - Private</li><li>2 - Domain</li></ul>]]></description></item></channel></rss>