<feed xmlns="http://www.w3.org/2005/Atom">
  <title>unwiredcouch.com on unwiredcouch.com</title>
  <id>https://unwiredcouch.com/atom.xml</id>
  <updated>2026-04-07T00:00:00Z</updated>
	<subtitle>thoughts which have made it into written existence</subtitle>
	<link rel="self" href="https://unwiredcouch.com/atom.xml"/>
	<author>
    <name>Daniel Schauenberg</name>
		<email>d@unwiredcouch.com</email>
	</author>
  
	<entry>
    <title type="html"><![CDATA[Monsters]]></title>
    <published>2026-04-07T00:00:00Z</published>
    <updated>2026-04-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dederer-monsters-2023/</id>
    <content type="html"><![CDATA[<p>I was really curious about this book as the topic of enjoying art made by
people who express world views very counter to my own is something I often
think about. It&rsquo;s one of the reasons why I was intrigued by and enjoyed <a href="/reading/gay-badfeminist-2014/">Bad
Feminist</a> when I read it, with its discussion of enjoying rap music - which
often has very questionable lyrics to say the least - while being a feminist.</p>
<p>There are countless more examples like this, but I was generally interested in
reading a more detailed discussion about the topic. And to be honest I was
secretly hoping for the book to tell me how to resolve those dilemmas, &ldquo;to
hand me a calculator&rdquo; - as the author calls it - that gives me a result
whether I&rsquo;m allowed to enjoy some art or not. But obviously that&rsquo;s not how
it goes. The author discusses in fair depth the problematic lives and
behaviours of a number of authors and explores the spectrum of &ldquo;monstrosity&rdquo;
and the question of when the threshold is crossed. And one of the most
interesting perspectives for me was the notion of how the viewing of art is
influenced by the artist as well as the audience:</p>
<blockquote>
<p>Consuming a piece of art is two biographies meeting: the biography of the
artist that might disrupt the viewing of the art; the biography of the
audience member that might shape the viewing of the art. - p. 80</p></blockquote>
<p>Also interesting was the discussion of what consuming art says about oneself (or
any other person for that matter), especially given the very limited influence
one has as a consumer on how things go with the author of the art:</p>
<blockquote>
<p>The way you consume art doesn&rsquo;t make you a bad person, or a good one. You&rsquo;ll
have to find some other way to accomplish that. - p. 242</p></blockquote>
<p>Obviously this topic is extremely nuanced and cases of artists being or
turning out to be &ldquo;monsters&rdquo; vary wildly. And while the book didn&rsquo;t give me
the hand holding guide I was initially hoping for, it definitely gave me more
to think about on this topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dederer-monsters-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Safety Anarchist]]></title>
    <published>2026-03-31T00:00:00Z</published>
    <updated>2026-03-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dekker-thesafetyanarchist-2017/</id>
    <content type="html"><![CDATA[<p>After having read Dekker&rsquo;s <a href="/reading/dekker-fieldguidetounderstandinghumanerror-2002/">Field Guide to Understanding Human Error</a> many
years ago I had been loosely following his work through recorded talks and
lectures, as well as <a href="/reading/dekker-safetyafterneoliberalism-2020/">papers</a>. And since I&rsquo;ve been thinking more about
organizational safety and <a href="/reading/hollnagel-safety-iandsafety-ii-2014/">reading books about it</a> lately, I was curious
about some more recent writing by him.</p>
<p>The book touches on some topics from the aforementioned paper, but expands on
it a lot. A big focus point is the modeling of safety culture after
&ldquo;authoritarian high modernism&rdquo; similar to the discussion in <a href="https://en.wikipedia.org/wiki/Seeing_Like_a_State">James C. Scott&rsquo;s
book &ldquo;Seeing Like a State&rdquo;</a>:</p>
<blockquote>
<p>Authoritarian high modernism believes that every aspect of our lives and
work can be improved with rational planning, with better techniques and more
science. - p. 35</p></blockquote>
<p>From there, the modeling of all safety culture to follow a standard, to be
controlled centrally, and to be synoptically legible (i.e. to lose all nuance and
detail in order to become &ldquo;understandable&rdquo; at the highest level of control) leads
to an erosion of safety in favour of compliance and authority.</p>
<p>A suggested path towards a more localized understanding and handling of
safety is to take some pages out of the book of anarchism (very distinct from
anarchy, which the book goes into a lot of detail about). Anarchism as an
idea favours the fostering of communities over centralized control and imposed
standards, and the organization of communities on a voluntary, cooperative,
horizontal basis. Which reminded me a lot - unsurprisingly - of themes in
<a href="/reading/woods-strategicagilitygap-2020/">David Woods&rsquo; paper &ldquo;The Strategic Agility Gap&rdquo;</a>.</p>
<p>Overall a very interesting book and definitely recommended if you work at all
with an organizational safety setup.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dekker-thesafetyanarchist-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Beschleunigung]]></title>
    <published>2026-02-28T00:00:00Z</published>
    <updated>2026-02-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rosa-beschleunigung-2005/</id>
    <content type="html"><![CDATA[<p>The feeling of acceleration and &ldquo;missing time&rdquo; has been on my mind for quite a
while (in which I&rsquo;m probably not alone) and when I heard about this book I was
immediately intrigued. And since the author is German I decided to read it in
German as well. I don&rsquo;t have a lot of experience in reading German sociology
texts, let alone ones written in an academic setting. And I will say that I
definitely struggled and it took me a lot longer to read the book than usual.
But there are a lot of interesting ideas in there. The interconnection of
acceleration in technology, societal structures, and personal lives and how
they reinforce each other is something I definitely want to work through and
think deeper on.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rosa-beschleunigung-2005/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Thursday Murder Club]]></title>
    <published>2026-01-24T00:00:00Z</published>
    <updated>2026-01-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/osman-thethursdaymurderclub-2020/</id>
    <content type="html"><![CDATA[<p>I had this on my shelf for forever and always wanted to read it and never got
to it. And with there being a Netflix show now that looks like fun, I really
wanted to read the book first. And I have to say I thoroughly enjoyed it. The
book was very funny and entertaining and a kind of different take on a crime
novel in many ways. I can super recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/osman-thethursdaymurderclub-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Adventures in Democracy]]></title>
    <published>2025-11-27T00:00:00Z</published>
    <updated>2025-11-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/benner-adventuresindemocracy-2024/</id>
    <content type="html"><![CDATA[<p>With democracy currently having a much rougher time and being attacked in many
ways, I was really intrigued by this book. The author touches on many
different subjects that relate to democracy like the power of the people,
elections, demagogues, women&rsquo;s rights to participate and much more. I really
liked reading about the history of democracy and the various &ldquo;implementations&rdquo;
and differences around the world. I don&rsquo;t know what I wanted to get out of the
book, and I think while I was reading I was yearning for a simple answer for
how to &ldquo;save&rdquo; democracy in our current times. Which of course isn&rsquo;t in there.
But I can still recommend the book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/benner-adventuresindemocracy-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Newsletter feeds with procmail and Feedbin]]></title>
    <published>2025-11-10T00:00:00Z</published>
    <updated>2025-11-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2025/11/10/newsletter-feeds-procmail-feedbin.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;m a huge fan of <a href="https://en.wikipedia.org/wiki/Web_feed">web feeds</a> to keep up with content that interests
me. I&rsquo;ve been using them since the fairly early days of <a href="https://en.wikipedia.org/wiki/Google_Reader">Google
Reader</a> and yes I was also sad and disappointed when the service got
shut down. However, in my opinion the landscape of web feeds today is vastly
better than what we had 15 years ago, even though it&rsquo;s a well-loved cliché to
still bemoan the demise of Reader. I still read web feeds daily today. This
includes blogs and news pages and also YouTube channels (I found out about
this not that long ago, at a time when I had almost abandoned following YouTube
channels because I found the experience of doing so on the platform so
insufferable). For many years now I&rsquo;ve been a happy paying customer of
<a href="https://feedbin.com/home">Feedbin</a> to have my feeds available on all my devices, though I read them
99% of the time through <a href="https://www.reeder.app/classic/">Reeder Classic</a> on my iPhone. I really like
the service and also use one of their features heavily, which is the ability
to configure email addresses to subscribe to newsletters and have them show up
as feeds as well. I&rsquo;ve been using that fairly heavily and switched a bunch of
my subscriptions over to it. This, however, came with three inconveniences for me:</p>
<ol>
<li>The effort of unsubscribing and re-subscribing to existing newsletters</li>
<li>Sometimes newsletters aren&rsquo;t easily separated from the address I use for
generally using a service (like a shop I buy from but also get newsletters)</li>
<li>My newsletters are linked to an email address that is not mine</li>
</ol>
<p>So I wanted an easier way to benefit from the service with less hassle and
also make it easier to decide to stop reading newsletters through the service
without doing another unsubscribe and re-subscribe cycle. Luckily for me I
still run my own IMAP server that runs <a href="https://en.wikipedia.org/wiki/Procmail">procmail</a> for filtering my emails. I
mostly use it to filter out some spam and move messages from mailing lists I
am subscribed to into their own folders.</p>
<p>In order to take advantage of this setup for getting my newsletters into
Feedbin more easily, I first created a new email address in Feedbin as a kind
of catch-all for newsletters. Next up I configured a procmail rule for each
email address I get newsletters from with the following form:</p>
<pre tabindex="0"><code># match mail from this sender that also carries a
# List-Unsubscribe header (i.e. is a newsletter)
:0
* ^From.*news@coolwebpage.de
* ^List-Unsubscribe.*
{
  # forward a copy to the Feedbin email address ...
  :0 c
  ! cool.address.333@feedb.in
  # ... and file the original into a local folder as backup
  :0:
  $HOME/Mails/newsletters/
}
</code></pre><p>This matches any email sent from the configured address that has a
<code>List-Unsubscribe</code> header as well. While this isn&rsquo;t a perfect fit, the header is a
requirement for newsletters to be treated properly by major mail providers and
not marked as spam. Having both conditions in there also makes sure that emails
that aren&rsquo;t newsletters from that same address (say purchase receipts from a
shop) don&rsquo;t get matched. The rule body then first forwards the
mail to Feedbin to turn it into a feed item. And then also files it into a
<code>newsletters</code> folder. That second part is mostly a backup for me so I still have the
messages handy if something gets lost somewhere.</p>
<p>So now if I notice a newsletter in my inbox, I can just add the sender address
to a list and my configuration management system on my IMAP server generates a
new procmail rule to filter it and send it to Feedbin from now on. And while
procmail definitely doesn&rsquo;t have the easiest syntax to understand and get
going with, I really enjoy the workflow it enables for me. Many mail
providers also allow this sort of thing with their rule systems.</p>
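<p>As an illustration, the generation step can be sketched as a tiny script that renders one recipe of the above form per sender. This is a hypothetical sketch, not my actual configuration management setup; the sender list and helper names are made-up placeholders:</p>

```python
# Hypothetical sketch: render one procmail recipe per newsletter sender,
# mirroring the rule shown above. SENDERS and FEEDBIN_ADDRESS are
# placeholders, not real configuration.
SENDERS = [
    "news@coolwebpage.de",
    "digest@example.org",
]
FEEDBIN_ADDRESS = "cool.address.333@feedb.in"

# {{ and }} are literal braces in str.format templates
RECIPE_TEMPLATE = """:0
* ^From.*{sender}
* ^List-Unsubscribe.*
{{
  :0 c
  ! {feedbin}
  :0:
  $HOME/Mails/newsletters/
}}
"""

def render_recipes(senders, feedbin):
    """Return the concatenated procmail recipes for all senders."""
    return "\n".join(
        RECIPE_TEMPLATE.format(sender=s, feedbin=feedbin) for s in senders
    )

if __name__ == "__main__":
    print(render_recipes(SENDERS, FEEDBIN_ADDRESS))
```

<p>The output of such a script could then be appended to <code>.procmailrc</code> by whatever configuration management tool is in use.</p>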
<p>Happy feed reading!</p>
]]></content>
    <link href="https://unwiredcouch.com/2025/11/10/newsletter-feeds-procmail-feedbin.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Pleasures of Reading in an Age of Distraction]]></title>
    <published>2025-11-02T00:00:00Z</published>
    <updated>2025-11-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/jacobs-thepleasuresofreadinginanageofdistraction-2011/</id>
    <content type="html"><![CDATA[<p>I randomly found this book at the bookstore and was very curious about it.
I&rsquo;ve been trying to make regular time for reading throughout the last couple
of years and it&rsquo;s often a pull between reading time and all the chores and
distractions that also exist. So I was really curious about this book in a
kind of meta way. The author approaches the topic of reading from a number of
angles, and first and foremost clears up the everlasting cliché of there being
&ldquo;good&rdquo; or &ldquo;proper&rdquo; books to read and lesser ones. He encourages reading on a
whim (or Whim, as he defines it) whatever one is curious about, and actively
shuns reading lists that should be worked through. Probably owing to the fact
that the author is a literature academic, the book feels fairly intellectual at
times. But overall I really enjoyed the book and it made me want to read even
more.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/jacobs-thepleasuresofreadinginanageofdistraction-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Slow Productivity]]></title>
    <published>2025-10-19T00:00:00Z</published>
    <updated>2025-10-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/newport-slowproductivity-2024/</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve read a couple of Newport&rsquo;s books so far (see <a href="/reading/newport-digitalminimalism-2019/">here</a> and <a href="/reading/calnewport-deepwork-2016/">here</a>) and I&rsquo;ve generally liked them but also found them somewhat repetitive. As in the general idea of not frantically running from thing to thing and getting distracted by things that aren&rsquo;t a priority in improving your goals are the recurring topic in slightly variated form in most of his writing. Still I find it helpful as periodic reminders to actually also follow this advice and breaking out of cycles of distraction and - in the case of this book - what he calls &ldquo;pseudo productivity&rdquo;. I think if you&rsquo;re only interested in his steps for slow productivity it&rsquo;s enough to watch one of his videos about it (e.g. <a href="https://m.youtube.com/watch?v=v520wFzpAd0">this</a>). But I can also recommend the book if you want to just spend some more time with Newport&rsquo;s thoughts about this topic and some examples and anecdotes of famous people that have achieved great work in a manner that is compatible with the concept of &ldquo;slow productivity&rdquo;. It&rsquo;s a pretty entertaining and short read as well. And the ideas resonate a lot with me, even if they don&rsquo;t differ in spirit much from his previous books.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/newport-slowproductivity-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Safety-I and Safety–II]]></title>
    <published>2025-10-03T00:00:00Z</published>
    <updated>2025-10-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hollnagel-safety-iandsafety-ii-2014/</id>
    <content type="html"><![CDATA[<p>After years of following Hollnagel&rsquo;s work and having read <a href="/reading/hollnagel-ettoprinciple-2009/">ETTO</a> many years
ago, I finally read this (now) classic that he wrote in 2014. And I have to
say it still holds up extremely well even 10 years later. It&rsquo;s a mostly easy
read that introduces and explains concepts from the approach that is branded
in the book as &ldquo;Safety I&rdquo;, which is the focus on preventing accidents versus
what he calls &ldquo;Safety II&rdquo;, the focus on fostering and increasing things that
go right. There&rsquo;s a lot in this book about the history of safety and risk
management and how we got to this focus on accidents as the data point to
fixate on. It&rsquo;s a fairly quick read and I can highly recommend it if you do
anything related to safety, risk, or resilience management or even just work
in an area where you are responsible for operating any form of live system.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hollnagel-safety-iandsafety-ii-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Reimagining Capitalism in a World on Fire]]></title>
    <published>2025-08-25T00:00:00Z</published>
    <updated>2025-08-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/henderson-reimaginingcapitalisminaworldonfire-2021/</id>
    <content type="html"><![CDATA[<p>In my journey this year to read a bit more about economic topics, I came
across this book in my bookstore and was curious about it. There&rsquo;s lots of
discussion currently about how bad capitalism is but I haven&rsquo;t seen a ton
about what could be better. So I was curious to read more about that part. The
book overall did an ok job there. A lot of cases are outlined where people went
against &ldquo;common sense&rdquo; of capitalism and increasing shareholder value and
built or grew companies that were and are very successful. There are a lot of
examples there that the author outlines as strong reasons to be hopeful.
However, reading this in 2025, when things feel like they&rsquo;ve only gotten worse
in the years since the book was published, a lot of it rings kinda hollow. And
the examples feel more like anecdotes than signs for a better way. It&rsquo;s a very
well written and researched book. And I think if I had read it in 2021 I
would&rsquo;ve taken away a lot more positive things.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/henderson-reimaginingcapitalisminaworldonfire-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Die Schuldenbremse]]></title>
    <published>2025-08-22T00:00:00Z</published>
    <updated>2025-08-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/bajohr-dieschuldenbremse-2015/</id>
    <content type="html"><![CDATA[<p>In the same spirit that made me read <a href="/reading/charleswheelan-nakedeconomicsundressingthedismalscience-2019/">Naked Economics</a> I had also asked a
friend who&rsquo;s an economist whether he knew any good books to learn the basics
about economics but also specifically about the &ldquo;Schuldenbremse&rdquo;, Germany&rsquo;s
<a href="https://en.wikipedia.org/wiki/German_balanced_budget_amendment">balanced budget amendment</a> flippantly called &ldquo;debt brake&rdquo;. So he
recommended Bajohr&rsquo;s essay on the topic to me, which ended up being a very brief but
interesting read. The author collects a good amount of history on the topic
and how it came to be and then goes on to outline his critique of it and how
it doesn&rsquo;t help in many cases and in others even makes things worse. It is
written in a very accessible way even for someone like me who&rsquo;s not super well
versed in economics terms and theories. I can highly recommend it if you want
to learn about this topic that was a huge contributing factor in the failure
of the last German government.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/bajohr-dieschuldenbremse-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Leichtes Herz und schwere Beine]]></title>
    <published>2025-08-21T00:00:00Z</published>
    <updated>2025-08-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/schlegl-leichtesherzundschwerebeine-2025/</id>
    <content type="html"><![CDATA[<p>I knew the author from when he was moderating tons of shows on VIVA, the
German version of MTV. And I kinda knew that he had mostly given up working in
media to work as a paramedic instead. So when I saw that he wrote a book about
hiking with his mother, I was kinda curious about it. I think the start is a
bit bumpy and it feels more like a boring day-by-day telling of someone not
enjoying their hike. But even though the trip gets harder and harder
throughout the book, the narration gets a lot more interesting and personal.
And the musings about hiking, tourism, but also mortality spoke to me a lot
and I really enjoyed the book overall.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/schlegl-leichtesherzundschwerebeine-2025/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Morrigan]]></title>
    <published>2025-07-10T00:00:00Z</published>
    <updated>2025-07-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/curran-themorrigan-2025/</id>
    <content type="html"><![CDATA[<p>After reading <a href="/reading/katherinearden-thebearandthenightingale-2017/">The Bear and the
Nightingale</a> and
having already ordered the followups I wanted some more mystical fiction books
to read. So when I stumbled upon this book at my bookstore I was immediately
interested. Especially since my knowledge of Irish mythology is/was
practically non-existent and I wouldn&rsquo;t dare claim the parts that made it into
Marvel comicbooks count. The book itself I ended up being kinda meh on. I
think it&rsquo;s written well while still being more crass and gory than I expected.
There also aren&rsquo;t many happy or fun stories in there, which is something I&rsquo;d
kinda expected, since despite the rampant awfulness that exists in probably all
mythologies, I remembered at least some stories from the Norse, Roman, and Greek
mythologies that had some funny or silly aspects to them. The
Morrigan, in contrast, is mostly a compendium of men being awful. I&rsquo;m happy I read it and
to have learned a bit more about Irish mythology. But I wouldn&rsquo;t recommend it
as a happy, entertaining read.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/curran-themorrigan-2025/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Naked Economics: Undressing the Dismal Science]]></title>
    <published>2025-07-08T00:00:00Z</published>
    <updated>2025-07-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/wheelan-nakedeconomicsundressingthedismalscience-2019/</id>
    <content type="html"><![CDATA[<p>I got this book originally because I wanted to start getting a better
understanding of economics as a field. Something I&rsquo;ve shied away from through
most of my university years because it was hardly ever talked about in a way
that wasn&rsquo;t either extremely obnoxious or extremely boring. Reading through
the book I kinda alternated between &ldquo;yes this seems obvious&rdquo; or &ldquo;they are
doing what??&rdquo; depending on the topic. But in general it&rsquo;s very well written
and does a good job conveying economic ideas in plain English that is easy to
understand. The chapters are generally structured in a way that first
emphasizes that ideally the market shouldn&rsquo;t be regulated, and then
follows up with a more nuanced explanation of why regulation,
government influence, and rule making are still needed to some extent for the
whole thing to work. I would definitely recommend it, even though I
felt triggered a couple of times early in chapters before the more nuanced
progression took place :).</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/wheelan-nakedeconomicsundressingthedismalscience-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Metal Gear Solid 2]]></title>
    <published>2025-05-20T00:00:00Z</published>
    <updated>2025-05-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/benson-metalgearsolid-2010/</id>
    <content type="html"><![CDATA[<p>When I ordered <a href="/reading/raymondbenson-metalgearsolid-2008/">the first Metal Gear Solid book</a> I also ordered the second
one at the same time. And I will say that it reads much the same way. I
personally find the story of the Metal Gear Solid 2 video game less exciting
than that of the first one. I also never played it as a kid, so it&rsquo;s missing a
ton of nostalgia for me as well. The second book is basically written in
the same style as the first one. It was an easy read but not something I
would necessarily recommend unless you are a huge MGS fan and want to
consume literally everything that exists in that universe.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/benson-metalgearsolid-2010/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Atlas of AI]]></title>
    <published>2025-05-16T00:00:00Z</published>
    <updated>2025-05-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/crawford-theatlasofai-2021/</id>
    <content type="html"><![CDATA[<p>After reading <a href="/reading/melaniemitchell-artificialintelligence-2019/">about the technology of AI</a> I decided to stay with the topic
for a bit longer and also read more about the impacts of the technology and
its deployment on the world outside of plainly looking at the outputs of
the algorithms. Much of it I already knew but the book very clearly looks at
the aspects of environmental as well as human and societal impact of these
technologies for mass data gathering and processing. It&rsquo;s very well written
and recommended even though it&rsquo;s not fun to read about this very dark aspect
of our current age of technology.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/crawford-theatlasofai-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Artificial Intelligence]]></title>
    <published>2025-04-27T00:00:00Z</published>
    <updated>2025-04-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/mitchell-artificialintelligence-2019/</id>
    <content type="html"><![CDATA[<p>I got this book in order to get a more nuanced idea about current approaches
to what&rsquo;s generally being called &ldquo;AI&rdquo;. And especially to read more about the
basics of neural nets since I didn&rsquo;t have a lot of lecture content about them in
university. The book does a really good job of explaining the history of the
field of artificial intelligence and the different cycles of excitement it has
gone through. The author also explains very well the basics of historical and
modern AI technologies and how e.g. images and text are being ingested and
evaluated. The book makes it very clear that tools like neural nets are
automatic classification tools and not more. The book is highly recommended if
you want an understanding of the field without the hype and fantastic claims
that have become so rampant in the last couple of years.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/mitchell-artificialintelligence-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Bear and The Nightingale]]></title>
    <published>2025-04-13T00:00:00Z</published>
    <updated>2025-04-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/arden-thebearandthenightingale-2017/</id>
    <content type="html"><![CDATA[<p>I got this recommended by a friend as I had just finished <a href="/reading/cassandraclare-cityofbones-2017/">City of Bones</a> and
wanted something similar in some regards. And the fact that it was set against the
backdrop of some old Russian folk stories was kinda interesting. And overall
the book really held up. The first 100 pages were really slow, mostly
depressing, and I wasn&rsquo;t able to really keep track of the characters. But
after that I was really drawn into the story, to the point where some parts I
found creepy enough that I stopped reading it before going to bed and read it
in daylight instead 😅. I finished the final 150 pages or so in one sitting
because I really wanted to know how it ends. And then immediately ordered the
two follow up books in the series.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/arden-thebearandthenightingale-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Ordinal Society]]></title>
    <published>2025-04-12T00:00:00Z</published>
    <updated>2025-04-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/fourcadehealy-theordinalsociety-2024/</id>
    <content type="html"><![CDATA[<p>I was really curious about this book as I&rsquo;ve been thinking a lot lately about
my use of technology, the amount of data that is tracked on devices, and just
the general state of software (and hardware) where a lot of things seem to be
getting developed more for the purpose of keeping people hooked and
participating in a closed platform rather than mostly offering a useful
service or a solution to an actual problem. The book overall is definitely a
challenging read at times as it&rsquo;s often written in fairly academic language
(the disclaimer here being that English isn&rsquo;t my first language and I don&rsquo;t
have a degree in the humanities, so ymmv) which made me re-read some sentences
occasionally to get to the gist of it. But overall it was a really interesting
book that gave me a lot of new ideas and avenues to think about my
relationship with technology.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/fourcadehealy-theordinalsociety-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Notes on Complexity]]></title>
    <published>2025-03-16T00:00:00Z</published>
    <updated>2025-03-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/theise-notesoncomplexity-2024/</id>
    <content type="html"><![CDATA[<p>I found this at the local book store and wasn&rsquo;t really sure what to make of it
but found it interesting enough to buy. I&rsquo;ve dabbled in complexity theory
topics over the years but haven&rsquo;t really dug deep into it so far.  I overall
really enjoyed the book. It does a good job introducing the basics of
complexity in my opinion and guiding through a lot of historical developments
in math and related fields in relation to the study of complexity. Which also
makes up the majority of the book. The part where the author actually gets to
consciousness is rather short and is more a starting point than a real outline
of their research, in my mind. So while I can&rsquo;t say I have a deeper
insight into consciousness after reading the book, I still took a lot away
from reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/theise-notesoncomplexity-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Notebook]]></title>
    <published>2025-03-07T00:00:00Z</published>
    <updated>2025-03-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rolandallen-thenotebook-2023/</id>
    <content type="html"><![CDATA[<p>I honestly devoured and thoroughly enjoyed this book. I&rsquo;ve recently been going
back to leaning on my notebooks a lot more for a lot of my tasks including
planning and todos (both at work and personal). So the book just came at the
right time. It gives a wonderful overview of how paper (and notebooks) were
used throughout the centuries by different people of various professions. It
was really interesting to compare how much of these ways of journaling and
taking notes is still in use today and how the question of organization of
notes kinda still has so many approaches. If you like writing on paper, I can
absolutely recommend this book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rolandallen-thenotebook-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[City of Bones]]></title>
    <published>2025-02-09T00:00:00Z</published>
    <updated>2025-02-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/clare-cityofbones-2017/</id>
    <content type="html"><![CDATA[<p>I started this book many years ago, after I had finished watching the Netflix
show and was kinda bummed it ended. So I wanted to continue spending time in
that fantasy universe. Initially, however, I didn&rsquo;t super enjoy the
differences between the book and the show (as I really wanted it to keep the
show going for me) and I had the book on my nightstand for a long time without
actually reading it. But then I decided to pick it back up - with some good
distance from the Netflix show - and just read it as its own thing. And I
thoroughly enjoyed it and finished it pretty quickly. I&rsquo;ve also already got
the follow-up book, so I&rsquo;m intending to keep going. If you like young adult
stories set in a universe of vampires, demons, werewolves, and other
fantastical creatures, I can recommend the book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/clare-cityofbones-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Creative Act]]></title>
    <published>2025-02-01T00:00:00Z</published>
    <updated>2025-02-01T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rubin-thecreativeact-2023/</id>
    <content type="html"><![CDATA[<p>As a long-time fan of a lot of the music Rick Rubin has produced, I was really
curious about his take on creativity and the pursuit of making art. And there
were a lot of interesting bits in there. I found his views on the pressure to
make something that will be popular, and on how to approach experimentation,
really compelling. The contrast of also emphasizing the more structured parts,
like note taking and practicing, stood out to me as well. Overall I think the
book could&rsquo;ve been a lot shorter, as some parts felt repetitive. And of course
it mostly leaves out the challenges of making a living with art. But I still
enjoyed the book overall.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rubin-thecreativeact-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Rebel Girl]]></title>
    <published>2025-01-14T00:00:00Z</published>
    <updated>2025-01-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hanna-rebelgirl-2024/</id>
    <content type="html"><![CDATA[<p>I stumbled upon this in a bookshop and while I knew about Bikini Kill, I can&rsquo;t
say I had listened to a lot of their music before. So I was really curious
what their lead singer&rsquo;s memoir would be like. And even though it has a lot of
hard-to-read and sad passages, I really liked this book. There is so much raw
realness in there and so many interesting passages: about their first gigs,
the struggle to keep the band and the zines and all the stuff around it going,
her history with Kurt Cobain, and a lot of sweet details from her relationship
with Ad-Rock of the Beastie Boys.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hanna-rebelgirl-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Vacation Factor]]></title>
    <published>2025-01-13T00:00:00Z</published>
    <updated>2025-01-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2025/01/13/vacation-factor.html</id>
    <content type="html"><![CDATA[<p>One of the phrases that is used fairly heavily in tech (as well as in many other industries, I&rsquo;m sure) is <a href="https://en.m.wikipedia.org/wiki/Bus_factor">&ldquo;bus factor&rdquo;</a>. It basically describes the risk of information, knowledge, and other capabilities not being shared amongst team members, and therefore being lost if the only person possessing them were &ldquo;hit by a bus&rdquo;.</p>
<p>I think that&rsquo;s an absolutely awful thought and an extremely cruel way to talk and think about other humans. So many years ago (I don&rsquo;t remember exactly when, but probably somewhere around 2014) I started to instead use the phrase &ldquo;vacation factor&rdquo; at work, defined as &ldquo;the risk of information, knowledge, or skills being inaccessible when a team member suddenly decides to go on vacation&rdquo;. As a European, vacation to me means it&rsquo;s impossible to contact the person because they don&rsquo;t have their laptop with them, don&rsquo;t check Slack or email, and otherwise don&rsquo;t answer work calls. As it should be. So the person has to be assumed completely inaccessible.</p>
<p>This way of thinking about it sounds a lot nicer to me. And, in addition, it normalizes taking frequent vacations, as it also should. It also helps team members keep in mind that coworkers go on vacation, and plan for it. Hopefully that makes the &ldquo;guilt of taking vacation&rdquo; that definitely exists for some people grow smaller.</p>
<p>I&rsquo;m sure I&rsquo;m not the first person to think of this. But I like the phrase and think it should be used more often.</p>
]]></content>
    <link href="https://unwiredcouch.com/2025/01/13/vacation-factor.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Gott ist nicht schüchtern]]></title>
    <published>2025-01-03T00:00:00Z</published>
    <updated>2025-01-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/grjasnowa-gottistnichtsch%C3%BCchtern-2017/</id>
    <content type="html"><![CDATA[<p>I had read books by Olga Grjasnowa before and really liked her style and her
views on topics like migration and multilingualism. So I started reading this
book on winter break, and I wasn&rsquo;t really prepared for how real and rough this
story is. It details a lot of the cruelties that happen to the protagonists in
mid-2010s Syria and on their route out of the country. It took some struggling
to get through, but it also gave me a lot of additional perspective on the
realities of living in a country that often struggles with the question of
whether and how many refugees to take in. It&rsquo;s not an easy read, but I can
recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/grjasnowa-gottistnichtsch%C3%BCchtern-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Letters to My Palestinian Neighbor]]></title>
    <published>2024-12-25T00:00:00Z</published>
    <updated>2024-12-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kleinhalevi-letterstomypalestinianneighbor-2019/</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve not really concerned myself in depth with the Israel-Palestine conflict
much before. But with all the recent events I wanted to educate myself a bit
more and got this book to read from some people that live there. And I found
the book to be very interesting. Especially the contrast between the letters
written by the author and then the reply letters by Palestinians that are
added in this edition give a good idea about the contrasting (and sometimes
similar) views on the various topics of this situation.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kleinhalevi-letterstomypalestinianneighbor-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Mapmatics]]></title>
    <published>2024-12-18T00:00:00Z</published>
    <updated>2024-12-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rowinska-mapmatics-2024/</id>
    <content type="html"><![CDATA[<p>I was super curious about this book. The combination of maths and maps
immediately captured my attention, having been interested in both for a long
time. And I absolutely loved it. It expectedly goes into detail on the
mathematics of how projecting 3D territory onto a 2D plane works and what the
trade-offs are. But then it goes much wider, into (often not well known)
pioneers of map making and how drawing maps can help people but also have an
impact on politics and other areas, as we essentially draw arbitrary lines
around territories to categorize them. If you like maps and maths I can highly
recommend the book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rowinska-mapmatics-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Africa Is Not A Country]]></title>
    <published>2024-11-15T00:00:00Z</published>
    <updated>2024-11-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/faloyin-africaisnotacountry-2022/</id>
    <content type="html"><![CDATA[<p>I super enjoyed this book. I&rsquo;ve been to Tunisia as a kid and to Cape Town to
give a talk at a conference, but my knowledge of Africa is still very limited
and I was really curious to learn more. The book is very well written, with a
lot of no-nonsense historical detail and tons of humor. I learned a lot about
the history of the continent as well as some of its countries and cultures.
Can highly recommend this book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/faloyin-africaisnotacountry-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Metal Gear Solid]]></title>
    <published>2024-11-02T00:00:00Z</published>
    <updated>2024-11-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/benson-metalgearsolid-2008/</id>
    <content type="html"><![CDATA[<p>As someone who has played the original Metal Gear Solid game multiple times,
on the first PlayStation as well as on the Switch, and has fond childhood
memories of the story, I was curious how it could or would translate into a
book. And I can say I definitely wasn&rsquo;t impressed. The writing is fairly
cringy, and especially the way the boss fights had to be adapted for a book
felt awkward at times. And oftentimes it feels like pages needed to be filled
by mentioning the full name, number, and make of the weapons Snake is using -
stopping just short of the serial number. I don&rsquo;t regret reading it, but I
wouldn&rsquo;t call it good literature either.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/benson-metalgearsolid-2008/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Wintering]]></title>
    <published>2024-10-13T00:00:00Z</published>
    <updated>2024-10-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/may-wintering-2020/</id>
    <content type="html"><![CDATA[<p>I stumbled upon this book in the book store and was immediately intrigued by
this idea of &ldquo;winter&rdquo; as a global but also personal season of struggle, death,
and renewal. The author shares a lot of personal stories of struggle and of
accommodating the inevitable in her life. She also explores the idea that a
lot of people try to pretend winter isn&rsquo;t happening, or at least spend months
wishing it was summer. Which is something that rings very true for me as
someone who routinely falls into a winter depression around January, when it&rsquo;s
been cold and grey for more than 2 months here in Berlin.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/may-wintering-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How to Build Impossible Things]]></title>
    <published>2024-10-04T00:00:00Z</published>
    <updated>2024-10-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ellison-howtobuildimpossiblethings-2023/</id>
    <content type="html"><![CDATA[<p>I found this book at my favourite book store and was really interested in it
without knowing too much about it. I initially thought it was more about
actually building things. But the book turned out to be more like listening to
an old carpenter reminiscing about life. Which I thoroughly enjoyed. I come
from a line of car mechanics and carpenters and it reminded me a lot of
talking to my dad or grandpa. The book is divided into chapters on various
topics around life, and most of them draw on examples from houses the author
built - many of them among the fanciest apartments of New York. Many stories
reminded me of the general vibes of project management, including in software
engineering, but also of stories I&rsquo;ve heard from carpenters and other
engineers. There&rsquo;s a chapter about &ldquo;Friendship and Death&rdquo; that made me very
emotional, and I actually stopped reading there for a bit. But I can
definitely recommend it overall, if &ldquo;old carpenter talks about life&rdquo; is a vibe
you&rsquo;d enjoy.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/ellison-howtobuildimpossiblethings-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Reality Is Not What It Seems]]></title>
    <published>2024-09-15T00:00:00Z</published>
    <updated>2024-09-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rovelli-realityisnotwhatitseems-2017/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/reading/rovelli-realityisnotwhatitseems-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Science of Can and Can&#39;t]]></title>
    <published>2024-08-12T00:00:00Z</published>
    <updated>2024-08-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/marletto-thescienceofcanandcant-2021/</id>
    <content type="html"><![CDATA[<p>I was intrigued by the title when I saw the book in the bookshop, as
counterfactuals are something seen as undesirable in the process of learning
from incidents. So I was really curious how they could serve as a different
perspective and interpretation in other sciences. The book is not a completely
easy read, and there are definitely dense parts when it comes to understanding
the more complex topics like quantum information and heat/work-like transfers.
But I can definitely say the book opened my mind to thinking about the
capabilities of systems rather than dynamical-law-like rules, and it&rsquo;s
something I still chew on, especially in relation to learning from incidents
and analyzing complex systems to learn from their failures.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/marletto-thescienceofcanandcant-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Der Spurenfinder]]></title>
    <published>2024-08-03T00:00:00Z</published>
    <updated>2024-08-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kling-derspurenfinder-2023/</id>
    <content type="html"><![CDATA[<p>I read this as a kind of in-between book while I was reading something else.
I already liked the author, and we were thinking about gifting this book, so I
wanted to read it first to know what it&rsquo;s about. And I really enjoyed it. It&rsquo;s
a great fantasy story suitable for children around the age of 10-12. It has
some scary parts but is overall very joyous and entertaining.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kling-derspurenfinder-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Sprache und Sein]]></title>
    <published>2024-07-06T00:00:00Z</published>
    <updated>2024-07-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/g%C3%BCm%C3%BCsay-spracheundsein-2021/</id>
    <content type="html"><![CDATA[<p>I thoroughly enjoyed reading this book. I&rsquo;ve been living in Germany almost all
of my life, and yet most of the things I have read that have influenced my
thinking around topics like integration, diversity, migration, race, and
discrimination have been in English and often centered on the US. It was
really interesting and good to read a book on these topics written in German,
by a German author, about the German context. It&rsquo;s a quick but deep and
thought-provoking book that touches on many topics around how categories are
erected to try and tightly define people and their being. Can definitely
recommend reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/g%C3%BCm%C3%BCsay-spracheundsein-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Blueberries]]></title>
    <published>2024-06-21T00:00:00Z</published>
    <updated>2024-06-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/savage-blueberries-2020/</id>
    <content type="html"><![CDATA[<p>I wasn&rsquo;t sure what to expect from this book and I will say that it definitely
threw me off a couple of times. The writing style changes a lot throughout the
chapters and the topics do as well which is something I personally often have
a hard time following along with. Which also meant it took me quite some time
to finish it. But I overall enjoyed it and there is quite a lot of text about
finding (defining?) one&rsquo;s purpose and identity amongst other things.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/savage-blueberries-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Meditation timer with Apple Shortcuts]]></title>
    <published>2024-06-11T00:00:00Z</published>
    <updated>2024-06-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2024/06/11/meditation-timer.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve meditated on and off for at least 10 years or so (since ca. 2013) at this point. I started when I was living in New York and noticed that my days were often full of busyness and I rarely take the time for breaks and slowing down. I would generally describe myself as pretty far on the introvert side and while I wouldn&rsquo;t say I&rsquo;m shy or overly awkward (anymore) in social settings, I definitely need quiet time and being alone with my own brain to recharge. I had heard a lot about meditation then through work benefits but also friends and people in my social media circles about their practice. So while it wasn&rsquo;t a completely new topic to me, I still had never done it and no idea where to start. But lots of people were recommending <a href="https://www.calm.com">Calm</a> as their favorite app for meditation. And since the app had a lot of &ldquo;tutorials&rdquo; and guided meditation courses it seemed like a pretty good way to start. So I downloaded the app and got started and really liked it. The guided meditation courses worked well for me to give me an understanding of what the intention of meditation was and also clear up some common misconceptions (looking at you &ldquo;don&rsquo;t think about anything&rdquo;). And pretty quickly I made meditation a regular habit. Sometimes longer and sometimes shorter sessions. Often on the ferry ride home after work, which was an extra magical setting for me. And I got a yearly subscription to Calm as my constant companion for meditation.</p>
<p>Over time, and with a couple of life events, work changes, and moves between countries, my practice also changed. I had long periods over the last years where I didn&rsquo;t meditate at all because it would slip my mind or not fit into the day. And while I&rsquo;ve been meditating again almost daily for the last year or so, I also changed my practice away from the guided meditations and towards exclusively using what the app calls &ldquo;Timed Meditation&rdquo;: it just plays rain sounds (or any other sound from a collection) and stops after a configured amount of time. Basically, Calm had been reduced to a fancy timer with rain sounds and Apple Health integration to record mindfulness minutes for me. Which is fine, I think it was doing a decent job - setting aside the fact that recording mindfulness minutes for open-ended meditations (those that just keep going until you hit stop yourself) just broke at some point and support didn&rsquo;t seem to care much about it. The 40 Euros a year seemed a bit unnecessary to me, but also not enough to worry about.</p>
<p>But then I randomly stumbled upon the app in the App Store. And since I first downloaded the app all these years ago, Apple has added the so-called &ldquo;App Privacy&rdquo; section that details what kind of data an app gathers and uses as per its privacy policy. And that section for Calm presented me with this:</p>
<p><img src="/images/posts/meditation-timer/app-privacy-large.jpeg" alt="Screenshot of the App privacy details of the Calm app on the iOS App Store.  Amongst other things it lists as &ldquo;Data used to track you&rdquo;: - Purchases - Identifiers - Usage Data And under &ldquo;Data Linked to You&rdquo; - Health &amp; Fitness - Contact Info - Identifiers - Diagnostics - Purchases - Search History - Usage Data" title="App Privacy section of Calm on the App Store"></p>
<p>That seemed like a fairly &ldquo;un-zen&rdquo; amount of data collection for a fancy meditation timer with rain sounds. I had already been in a mindset for a while where the overeager data collection of a lot of modern software was annoying me. And I didn&rsquo;t want to think about that while meditating at all. So I very quickly <a href="https://chaos.social/@mrtazz/112581112615869868">decided to uninstall the app and cancel the subscription</a>.</p>
<p>But since I didn&rsquo;t want to stop meditating, and having a timer like that has been really helpful, I wanted a replacement. I also didn&rsquo;t want to have to research a new app and then maybe have it change its privacy policy at some point, too. And I was curious how true the &ldquo;fancy timer with rain sounds and HealthKit integration&rdquo; quip actually was for me. If it was, I should be able to build an equivalent workflow in the Shortcuts app. So I searched for an album with rain sounds on Apple Music and started clicking together a Shortcuts workflow. And to my surprise, with a bit of tinkering and about 15 minutes, I managed to finagle something together.</p>
<p>The workflow basically does these steps:</p>
<ol>
<li>Ask for how many minutes I want to meditate</li>
<li>Set a timer for that amount</li>
<li>Start playing the rain sounds album</li>
<li>Enable the &ldquo;Meditation&rdquo; focus that turns off notifications</li>
<li>Wait for the configured number of minutes</li>
<li>Stop the music once the time is up</li>
<li>Disable the Meditation focus</li>
<li>Log the amount of time as &ldquo;Mindfulness Minutes&rdquo; in HealthKit</li>
</ol>
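<p>The workflow itself lives entirely in the Shortcuts app, but the sequence of steps above can be sketched as plain code. This is purely illustrative: the function and callback names below are made up, and the callbacks stand in for the real Shortcuts actions (playing music, toggling the focus mode, logging to HealthKit).</p>

```python
import time

# Hypothetical sketch of the Shortcuts workflow as a plain function.
# Each callback stands in for one Shortcuts action; none of these names
# come from Apple's APIs.
def meditation_session(minutes, start_sound, stop_sound,
                       set_focus, log_mindfulness, sleep=time.sleep):
    start_sound()              # start playing the rain sounds album
    set_focus(True)            # enable the "Meditation" focus (notifications off)
    sleep(minutes * 60)        # wait for the configured duration
    stop_sound()               # stop the music once the time is up
    set_focus(False)           # disable the focus again
    log_mindfulness(minutes)   # record the time as "Mindfulness Minutes"
```

<p>The nice part of this shape is that setup and teardown mirror each other, which is also what makes the Shortcuts version easy to click together action by action.</p>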
<p>The screenshots below show what it looks like in the shortcuts app.</p>
<p><img src="/images/posts/meditation-timer/meditate-shortcut-1-large.jpeg" alt="">
<img src="/images/posts/meditation-timer/meditate-shortcut-2-large.jpeg" alt=""></p>
<p>I then tried out the workflow, and it really works for me. Granted, this doesn&rsquo;t have any of the advanced options: there&rsquo;s no guided meditation, no fancy landscape pictures, and no open-ended meditation option. But for my habit of meditating it does exactly what I need, without gathering a whole bunch of data about me that I haven&rsquo;t configured to be recorded.</p>
<p>It&rsquo;s a bit bittersweet because I&rsquo;ve been using Calm for so long. But for me the trade-off of giving all this data to another company wasn&rsquo;t worth it, especially since I already pay for the product. And this Shortcuts workflow feels like a very nice and adaptable alternative.</p>
]]></content>
    <link href="https://unwiredcouch.com/2024/06/11/meditation-timer.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Brain Food]]></title>
    <published>2024-05-15T00:00:00Z</published>
    <updated>2024-05-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/mosconi-brainfood-2018/</id>
    <content type="html"><![CDATA[<p>I really enjoyed this book and its thorough focus on the biological and
nutritional aspects of nurturing the brain. Especially the very detailed
breakdown of which foods contain which nutrients and how and when they benefit
the brain. For my taste, the fairly uncritical recommendation of alcohol
(specifically red wine) and taking at face value the validity of &ldquo;blue zones&rdquo;
and centenarians (especially now that the data seems to be less robust than
previously assumed) as proof for nutritional choices was less scientific than
it could be. But in my mind that didn&rsquo;t detract from the interesting rest of
the book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/mosconi-brainfood-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Chemistry for Breakfast]]></title>
    <published>2024-05-04T00:00:00Z</published>
    <updated>2024-05-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/nguyen-kim-chemistryforbreakfast-2022/</id>
    <content type="html"><![CDATA[<p>I really like these kinds of &ldquo;quick lessons in science&rdquo; style books, where
different chapters talk about a specific topic. And this book is no different.
The basic setup is a day in the author&rsquo;s life, where everyday things like
drinking coffee, brushing teeth, or charging the phone are used to illustrate
the chemistry behind how they work and to debunk some common misconceptions
along the way. The book is very entertaining and an absolutely fun and
educational read.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/nguyen-kim-chemistryforbreakfast-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Transgender Issue]]></title>
    <published>2024-04-18T00:00:00Z</published>
    <updated>2024-04-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/faye-thetransgenderissue-2021/</id>
    <content type="html"><![CDATA[<p>I stumbled upon this book somewhere on the internet and ordered it pretty much
immediately, as I thought it would be a good way to get more exposure to some
realities of trans life. Given this isn&rsquo;t my lived experience - so I don&rsquo;t
have first-hand knowledge of things - and I don&rsquo;t want my trans friends to
have to explain too many, often very painful, experiences to me, I was happy
to have another way to educate myself on my reading list. The book itself was
very informative, albeit very UK-centered (which is fine and I knew
beforehand). It touches on a lot of the frequently discussed topics of bodies,
transitioning, the role of society and the state, class struggles, and much
more. It also gave me some good input and perspectives to examine my own views
on things. While I&rsquo;m not an expert on the topic by any means, I can definitely
recommend this book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/faye-thetransgenderissue-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Abigail Brand]]></title>
    <published>2024-04-09T00:00:00Z</published>
    <updated>2024-04-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/abigail-brand-04-2024/</id>
    <content type="html"><![CDATA[<p>Abigail Brand in her suit and tie outfit from S.W.O.R.D. #5, drawn by Valerio Schiti. I&rsquo;m still reading Reign of X and Brand is such a good and badass character that I just had to attempt to draw her. This is another watercolor and ink sketch, done while watching TV.</p>
]]></content>
    <link href="https://unwiredcouch.com/art/abigail-brand-04-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Hot chocolate painting]]></title>
    <published>2024-03-20T00:00:00Z</published>
    <updated>2024-03-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/hot-chocolate-03-2024/</id>
    <content type="html"><![CDATA[<p>Quick watercolor sketch while watching TV. I tend to focus on people a lot
with my art so I wanted to do something different where I&rsquo;d mostly practice
values. The reference is from <a href="https://www.watercoloraffair.com/how-to-paint-complex-subjects/">Watercolor
Affair</a> but I
decided to not follow the tutorial and try out where I&rsquo;d end up by myself
instead.</p>
]]></content>
    <link href="https://unwiredcouch.com/art/hot-chocolate-03-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Design of Everyday Things]]></title>
    <published>2024-03-17T00:00:00Z</published>
    <updated>2024-03-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/norman-designofeverydaythings-2013/</id>
    <content type="html"><![CDATA[<p>I had this book on my reading list for years and never quite got to it. I
actually already had it on my Kindle in like 2015 or so, and then bought a
paper copy a couple of years ago after I had gotten rid of the Kindle. So this
year I finally made the time to read it. And I super enjoyed it. It gives a
very good theoretical framework to think about design and how to distinguish
good from bad design. It also goes into detail on a range of other topics like
human biases, the business of design, and the role of feedback and
discoverability in design. I took a lot of notes while reading this book, and
it let me look at design in a very different light.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/norman-designofeverydaythings-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Rachel Summers]]></title>
    <published>2024-03-05T00:00:00Z</published>
    <updated>2024-03-05T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/rachel-summers-03-2024/</id>
    <content type="html"><![CDATA[<p>Still feeling inspired by Phil Noto&rsquo;s art while reading the Reign of X
storyline. So here&rsquo;s a TV-evening Rachel Summers sketch, this time based on a
panel in X-Men #16. #art #watercolor #ink #mastoart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/rachel-summers-03-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Domino]]></title>
    <published>2024-03-02T00:00:00Z</published>
    <updated>2024-03-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/domino-03-2024/</id>
    <content type="html"><![CDATA[<p>Been a while since I did some art. So tonight in order to study from the best
I drew this Domino in ink and watercolor based on a panel in Cable #8 (part of
the Reign of X storyline) by Phil Noto.</p>
]]></content>
    <link href="https://unwiredcouch.com/art/domino-03-2024/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Antidote]]></title>
    <published>2024-02-29T00:00:00Z</published>
    <updated>2024-02-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/burkeman-theantidote-2012/</id>
    <content type="html"><![CDATA[<p>I found this at the bookstore and bought it on a whim since I had already read
his book &ldquo;Four Thousand Weeks&rdquo; and really liked it. &ldquo;The Antidote&rdquo; starts with making fun
of &ldquo;positivity seminars&rdquo; and explains why forbidding yourself to think about
something doesn&rsquo;t work. And then it gives a kind of all-around summary of some of
the ways people have dealt with hard topics like pain, negative outcomes of
events, inevitability, and death. And still managed to be happy (whatever
that means). There are some well-known approaches like Stoicism and Buddhism
explained in there, and even approaches specifically to death like the &ldquo;Day of
the Dead&rdquo; celebrations in Mexico. Some of these weren&rsquo;t new to me so I kinda
knew what to expect there, but the book was still very entertaining and
interesting and a good read.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/burkeman-theantidote-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How Infrastructure Works: Transforming our shared systems for a changed world]]></title>
    <published>2024-02-04T00:00:00Z</published>
    <updated>2024-02-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/chachra-howinfrastructureworks-2023/</id>
    <content type="html"><![CDATA[<p>I preordered this book immediately when I saw it flying by in my Mastodon
stream. I&rsquo;ve long been interested in infrastructure and especially having
worked in tech infrastructure for a long time I love thinking about how to
provide good services and utilities. The book was actually a lot different
from what I expected. I somehow thought there would be more in-depth discussions
of the inner workings of different types of infrastructure. But what I got
instead was way better. The book is a fantastic ode to infrastructure and does
a great job conveying the kind of fascination I&rsquo;ve had with it for a long
time. But it also goes even further. It talks a lot about the kind of contexts
in which infrastructure was and is built. And how it reinforces general
assumptions about who benefits from it and in what ways. And there is also a lot
of focus on climate change, both how infrastructure is a contributing factor
in it and how it can be part of a more sustainable future.</p>
<p>All in all I thoroughly enjoyed this book and can highly recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/chachra-howinfrastructureworks-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[“Six Easy Pieces”]]></title>
    <published>2023-12-31T00:00:00Z</published>
    <updated>2023-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/feynman-sixeasypieces-2011/</id>
    <content type="html"><![CDATA[<p>This book was a very fun read. I&rsquo;ve read a bunch of physics related books in
the last couple of years. And while I don&rsquo;t claim to deeply understand any of
it, the topic still keeps being interesting to me. The way the book is
structured and the lectures are delivered is also very engaging. One of my
favourite parts was the relation of physics to other sciences, which is
outlined as a relationship of building on each other&rsquo;s knowledge and feeding
back information that can be useful to further progress. Rather than what&rsquo;s often
communicated as physics being &ldquo;the most fundamental science&rdquo;. It was
refreshing to hear that from such a renowned physicist as Feynman.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/feynman-sixeasypieces-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Engineering a Safer World: Systems Thinking Applied to Safety]]></title>
    <published>2023-12-29T00:00:00Z</published>
    <updated>2023-12-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/leveson-engineeringsaferworld-2012/</id>
    <content type="html"><![CDATA[<p>I was very torn reading this book as it is a lot more slow going than I
expected. A lot of the discussions felt very drawn out and like they could&rsquo;ve
been explained with a lot fewer words. That being said, especially the early
chapters are rich with ways of rethinking incidents and approaches to safety. There&rsquo;s
a lot of good discussion about viewing our complex systems through the lens of
systems engineering, and about getting away from single-cause sequential accident
models and from stopping at human error to declare an incident &ldquo;solved&rdquo;. A lot of
the book also talks about hazard analysis and setting up processes to have
analysis be part of designing and maintaining systems. I don&rsquo;t generally work
in safety critical systems on the level of aerospace and chemical plants
(anymore). So this part was less interesting to me and I had a bit of a hard
time getting through it.</p>
<p>Overall I got a lot of interesting inspiration from the book though. And even
though I was already familiar with the ideas of emergent behaviour,
non-sequential accident models, and learning from complex socio-technical
systems, there were still a lot of angles that spurred my interest to go deeper
on. And the bibliography of the book to me was a rich source of things I want
to read next.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/leveson-engineeringsaferworld-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Resolving the Command–Adapt Paradox: Guided Adaptability to Cope with Complexity]]></title>
    <published>2023-12-17T00:00:00Z</published>
    <updated>2023-12-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/woods-resolvingcommandadaptparadox-2023/</id>
    <content type="html"><![CDATA[<h1 id="resolving-the-commandadapt-paradox-guided-adaptability-to-cope-with-complexity">Resolving the Command–Adapt Paradox: Guided Adaptability to Cope with Complexity</h1>
<h2 id="key-takeaways">Key takeaways</h2>
<p>The paper discusses the apparent paradox of “plan and conform” (aka centralised control) and “plan and revise” (aka guided adaptability) perspectives in safety management. As the paper discusses, this paradox is only apparent as both are needed in conjunction to navigate an ever faster changing world full of brittleness.</p>
<h2 id="detailed-discussion">Detailed discussion</h2>
<p>The paper starts off with an introduction to failure as a result of brittle systems:</p>
<blockquote>
<p>“Failure is due to brittle systems, not erratic components, subsystems, or human beings.” (Woods, 2023, p. 2)</p></blockquote>
<p>Which sets the theme of failure (and success for that matter) being an emergent property of a system that can’t be explained by looking at the individual components alone. This is a very common view of failure in complex systems (see e.g. “Engineering a Safer World”) and also becomes very important later on when discussing the limits of the “plan and conform” perspective.</p>
<p>Brittleness as a core emergent property of systems is subsequently defined as:</p>
<blockquote>
<p>“Descriptively, brittleness is how rapidly a system’s performance declines when it nears and reaches its boundary.” (Woods, 2023, p. 3)</p></blockquote>
<p>Following from the definition of brittleness the central challenge of operating highly complex systems with emergent behaviour is then stated as:</p>
<blockquote>
<p>“Because competence envelopes are bounded, a core question for all systems is—how does the system perform when events push it near or beyond the edge of its envelope?” (Woods, 2023, p. 3)</p></blockquote>
<p>And this is very interesting I think because it’s not just the core question for all systems from an engineering and design standpoint, but also the core question for safety management, whose general task is to keep the system within its operating boundaries (setting aside the problem of knowing what those boundaries are).</p>
<p>Next up, the apparent “Command-Adapt” paradox is described by contrasting the upper-echelon view of the work with the units-of-action view of the work (essentially the blunt-end/sharp-end contrast of views on work). And with each view of the work also comes a view on incidents and failures. For the blunt end:</p>
<blockquote>
<p>“Incidents and failures generally are diagnosed as failures of operational personnel to work-to-rule/role/plan which then leads to new pressures to conform. This is the systems architecture that underlies an emphasis on rule compliance in safety management.” (Woods, 2023, p. 4)</p></blockquote>
<p>And for the sharp end as the other side of the apparent paradox:</p>
<blockquote>
<p>“The central theme of the guided adaptability perspective is “plan and revise”—being poised to adapt. This perspective recognizes that disrupting events will challenge plans-in-progress, requiring adaptations, reprioritization, and reconfiguration in order to meet key goals given the effects of disturbances and changes.” (Woods, 2023, p. 4)</p></blockquote>
<p>And with these two perspectives (“follow the plan as it describes the safe boundaries of system operations” vs “things aren’t gonna go as planned and we need to change things around”) defined, the apparent paradox that you have to choose one or the other for how safety management is done is outlined. And then immediately uncovered as a wrong contrast of approaches:</p>
<blockquote>
<p>“Empirical studies, experience, and science all reveal that the paradox is only apparent: “good” systems embedded in this universe need to plan and revise—to do both. And the necessity of both is evident in the need to manage the risk of brittleness while coping with the side effects of growth and change” (Woods, 2023, p. 5)</p></blockquote>
<p>And to me this is really the core of the paper. That this strict splitting up of the two perspectives on safety management that we often see (plan-and-follow vs adapt-and-improvise, blunt-end vs sharp-end, Safety I vs. Safety II, etc.) is largely a false dichotomy. Because we need to understand that both have their place, and there needs to be an understanding of the trade-offs of when one is employed rather than the other. And the problems arise when this isn’t recognised:</p>
<blockquote>
<p>“The paradox dissolves, in part, when one realizes guided adaptability depends in part on plans. The difficulty arises when organizations over-rely on plans [7]. Over-reliance undermines adaptive capacity when beyond-plan challenges arise. Beyond-plan challenges occur regularly for complex systems. The catch is: pressure to comply focuses only on the first and degrades the second.” (Woods, 2023, p. 5)</p></blockquote>
<p>In order to underline the fact that plans eventually fall short, two classic findings on the limits of plans are discussed:</p>
<ol>
<li>The assumption that plans can completely specify actions</li>
<li>Rationalisations about why findings on the shortcomings of plans only apply to other areas and not one’s own</li>
</ol>
<p>From the belief that the first assumption is true, it is usually derived that work-to-rule should be a guiding principle of safety management:</p>
<blockquote>
<p>“If plans can fully specify actions, or nearly so, then work-to-rule/role/ plan is sufficient for productive and safe systems.” (Woods, 2023, p. 5)</p></blockquote>
<p>This is the very common and alluring perspective that work is much more algorithmic than heuristic. This assumption that it’s possible to fully specify work also underlies, for example, the frequent over-eager and over-optimistic assumptions of how much work can be (easily) automated (e.g. by a script or so-called “AI”).</p>
<p>Related to that assumption (and illusion) of control, rationalisations are then produced of why one is a special case and the ample findings on the shortcomings of plans don’t apply.</p>
<blockquote>
<p>“The usual response from organizations to these classic findings is simple: my world is stable and not like space operations, military operations, and emergency or critical care medicine. In my world variability can be blocked or suppressed, minimizing the need for adaptation since work-to-plan/role/rule will reliably produce desired outcomes.” (Woods, 2023, p. 7)</p></blockquote>
<p>This rationalisation according to the paper is based on several erroneous assumptions:</p>
<ul>
<li>Surprises occur rarely</li>
<li>It’s easy to know when a plan needs to be modified</li>
<li>It’s quick to put modified plans in action</li>
<li>Interdependencies are easy to limit and be analysed and modelled a-priori</li>
<li>Effects of surprise can be easily compartmentalised and contained away from interdependencies</li>
</ul>
<p>Some of these might be true for a moment but aren’t true throughout the operation and especially the lifecycle (design, growth, adaptations) of a system. And while these assumptions have been shown to be erroneous over and over through research as well as experience, they still serve as a kind of feedback loop to assumption 1 and the call for compliance.</p>
<p>In order to reconcile the two apparently paradoxical perspectives, the paper then offers a reconceptualisation of plans through the lens of adaptability, in four parts:</p>
<ol>
<li>Plans are resources for action</li>
<li>Plans are necessary to recognise anomalies</li>
<li>Plans (and Automata) are competent but brittle</li>
<li>People (with the right help) provide the extra adaptive capacity to mitigate brittleness</li>
</ol>
<p>This is to recognise that plans are useful as a starting point and a resource to draw from when action is needed, but not as a strict specification. And one very important point is that improvisation and adaptability at the sharp end requires the prerequisite step of detecting anomalies and deviations. This is a lot harder without a baseline of “normal”, which plans can provide. In terms of automation it needs to be recognised that automation (which includes the various forms of AI in their respective hype cycles as well) is competent but brittle. And that there is a persistent belief that it just needs a new push of the technology to overcome this.</p>
<blockquote>
<p>“Studies looking at joint systems of people and AI or operators and advanced automation revealed the fundamental brittleness of automata regardless of the underlying technology [13].” (Woods, 2023, p. 8)</p></blockquote>
<p>And getting to the people part (which is the source of adaptive capacity), it’s very important to recognise that challenges and near-misses happen much more often than expected. And that the control systems we create, like automation, are subject to the same pressures as the systems they are supposed to control. And thus carry the same risk of brittleness:</p>
<blockquote>
<p>“All systems are developed and operate with finite resources and live in a changing environment. As a result, plans, procedures, automation, agents, and roles are inherently limited and unable to completely cover the complexity of activities, events, and demands.” (Woods, 2023, p. 9)</p></blockquote>
<p>And so the paper concludes with a way to perform work and safety management which is dubbed “Plan and revise: Guided Adaptability”. Which still means that there should be plans for the work to be done, intended to be followed until they don’t make sense any more. The complement then is to learn from how they stopped making sense and include that in the revision of plans for the future, based on the best source there is for adaptations: humans.</p>
<blockquote>
<p>“The irony is you can only monitor how well plans fit the world by understanding how people have to adapt to fill the gaps and holes that inevitably arise as variability in the world exceeds the capability of plans and the competencies built into any system [12].” (Woods, 2023, p. 11)</p></blockquote>
<p>And to make it clear that adaptation itself is also subject to adaptation and not some perfect state of behaviour, the following quote towards the end of the paper is extremely apt in my opinion:</p>
<blockquote>
<p>“You will have to establish the continuous feedback/learning loop in order to adapt how you adapt.” (Woods, 2023, p. 13)</p></blockquote>
<h2 id="personal-thoughts">Personal thoughts</h2>
<p>I really liked the paper and the way it made the trade-offs of both perspectives on safety management very clear. It’s way too often the case in my opinion that a silver bullet solution is sought, and once something is assumed to be one, all the rest gets discarded. When in reality trade-offs and “best of both worlds” approaches to real-world problems are much more likely to yield better results. The different levels of views on the work in terms of high-level planning and low-level implementation also reminded me a lot of the waterfall vs agile and scheduled release vs continuous deployment discussions that are happening in technology, where often one is seen as the superior approach over the other. But in reality even agile processes need longer-term planning to fit into the bigger picture. And even continuous deployment means you <em>are able</em> to deploy whenever, not that you have to. And sometimes planning a deploy to match the larger circumstances (outside of another team’s test, maybe not on a Friday 😬, or it can even wait till the next morning) makes much more sense rather than deploying something as soon as you got the code review approved. As the famous mature engineering proverb goes:</p>
<blockquote>
<p>It’s trade-offs all the way down</p></blockquote>
<h2 id="notes">Notes</h2>
<h3 id="abstract">Abstract</h3>
<blockquote>
<p>“The central theme of the centralized control perspective is “plan and conform”. The central theme of the guided adaptability perspective is “plan and revise”—” (Woods, 2023, p. 2)</p></blockquote>
<blockquote>
<p>“The paradox dissolves, in part, when one realizes guided adaptability is a capability that builds on plans. The difficulty arises when organizations over-rely on plans. Over-reliance undermines adaptive capacity when beyond-plan challenges arise. Beyond-plan challenges occur regularly for complex systems.” (Woods, 2023, p. 2)</p></blockquote>
<h3 id="81-introduction-failure-is-due-to-brittle-systems">8.1 Introduction: Failure is due to Brittle Systems</h3>
<blockquote>
<p>“Failure is due to brittle systems, not erratic components, subsystems, or human beings.” (Woods, 2023, p. 2)</p></blockquote>
<blockquote>
<p>“Descriptively, brittleness is how rapidly a system’s performance declines when it nears and reaches its boundary.” (Woods, 2023, p. 3)</p></blockquote>
<ul>
<li>Is a system with tight boundaries also considered brittle?</li>
</ul>
<blockquote>
<p>“Because competence envelopes are bounded, a core question for all systems is—how does the system perform when events push it near or beyond the edge of its envelope?” (Woods, 2023, p. 3)</p></blockquote>
<blockquote>
<p>“With the right forms of adaptive capacity, systems have capabilities to anticipate bottlenecks ahead, to synchronize activities across roles and layers for mutual assistance as stress grows, and possess the readiness-to-respond to reconfigure and reprioritize activities to fit the challenges [5].” (Woods, 2023, p. 3)</p></blockquote>
<h3 id="82-the-command-adapt-paradox">8.2 The Command-Adapt Paradox</h3>
<blockquote>
<p>“Incidents and failures generally are diagnosed as failures of operational personnel to work-to-rule/role/plan which then leads to new pressures to conform. This is the systems architecture that underlies an emphasis on rule compliance in safety management.” (Woods, 2023, p. 4)</p></blockquote>
<blockquote>
<p>“The concern is how to keep pace with changing situations to mitigate the risk of brittle collapse.” (Woods, 2023, p. 4)</p></blockquote>
<blockquote>
<p>“From this perspective, safety staff support sharp end roles by putting in place organizational features that allow mutual assistance, or reciprocity, as situations deteriorate in the face of challenges [7].” (Woods, 2023, p. 4)</p></blockquote>
<blockquote>
<p>“The central theme of the guided adaptability perspective is “plan and revise”—being poised to adapt. This perspective recognizes that disrupting events will challenge plans-in-progress, requiring adaptations, reprioritization, and reconfiguration in order to meet key goals given the effects of disturbances and changes.” (Woods, 2023, p. 4)</p></blockquote>
<blockquote>
<p>“Empirical studies, experience, and science all reveal that the paradox is only apparent: “good” systems embedded in this universe need to plan and revise—to do both. And the necessity of both is evident in the need to manage the risk of brittleness while coping with the side effects of growth and change” (Woods, 2023, p. 5)</p></blockquote>
<ul>
<li>This is I think a very crucial part, calling out the fact that trade-offs and not absolutes are the way to go</li>
</ul>
<blockquote>
<p>“The paradox dissolves, in part, when one realizes guided adaptability depends in part on plans. The difficulty arises when organizations over-rely on plans [7]. Over-reliance undermines adaptive capacity when beyond-plan challenges arise. Beyond-plan challenges occur regularly for complex systems. The catch is: pressure to comply focuses only on the first and degrades the second.” (Woods, 2023, p. 5)</p></blockquote>
<h3 id="83-classic-findings-on-the-limits-of-plans-procedures-automata">8.3. Classic Findings on the Limits of Plans, Procedures, Automata</h3>
<h4 id="831-can-plans-completely-specify-actions">8.3.1. Can Plans Completely Specify Actions?</h4>
<blockquote>
<p>“If plans can fully specify actions, or nearly so, then work-to-rule/role/ plan is sufficient for productive and safe systems.” (Woods, 2023, p. 5)</p></blockquote>
<ul>
<li>(Woods, 2023, p. 5) This is the automation fallacy</li>
</ul>
<blockquote>
<p>“Keeping pace with events invokes skills, forms of cognition, and coordinated activity over multiple roles that cannot be specified in procedures.” (Woods, 2023, p. 5)</p></blockquote>
<ul>
<li>(Woods, 2023, p. 5) Algorithmic vs. Heuristic</li>
</ul>
<blockquote>
<p>“(a) plans will miss the potential for bottlenecks, overload, and oversubscription of key assets and contingency backups (this is the risk of saturation) and (b) plans will always tend to lag change in the real world. And modifying plans will lag the changes already underway” (Woods, 2023, p. 6)</p></blockquote>
<blockquote>
<p>“Hidden interdependencies are a potent source of saturation and lag as problems in one area push saturation to others, diagnostic work has to track effects at a distance from the originating disruption, and an expanding set of roles and players have to coordinate and synchronize their activities, often across organizational boundaries, to resolve losses of valued services [10, 29, 31]” (Woods, 2023, p. 6)</p></blockquote>
<h4 id="832-rationalizations">8.3.2. Rationalizations</h4>
<blockquote>
<p>“When an incident occurs, the limits of some components have to be part of the story (a) given the trade-offs that were necessary since resources are limited and goals conflict and (b) given that the system and its environment continue to change.” (Woods, 2023, p. 7)</p></blockquote>
<blockquote>
<p>“Believes the effects of surprise can be compartmentalized, whereas actually, surprises compound and spread over the extensive interdependencies in all modern systems.” (Woods, 2023, p. 7)</p></blockquote>
<blockquote>
<p>“In the aftermath of incidents and breakdowns, the assumptions lead to increased pressure for compliance rather than learning the importance of guided adaptability” (Woods, 2023, p. 7)</p></blockquote>
<h3 id="84-reconceptualization">8.4. Reconceptualization</h3>
<p><strong>1. Plans are Resources for Action</strong></p>
<blockquote>
<p>“The finding that plans only function as resources for action—not specifications is generally traced to [11].” (Woods, 2023, p. 8)</p></blockquote>
<blockquote>
<p>“This is highlighted in definitions of skill: the ability to adapt behavior in changing circumstances to pursue goals despite trade-offs” (Woods, 2023, p. 8)</p></blockquote>
<p><strong>2. Plans are Necessary to Recognize Anomalies</strong></p>
<blockquote>
<p>“To see events and changes as unexpected requires a strong appreciation of what is typical, standard, or even “normally” abnormal” (Woods, 2023, p. 8)</p></blockquote>
<blockquote>
<p>“Seeing what doesn’t fit your model of what has been going on, or what should be going on, or what usually happens is a form of insight” (Woods, 2023, p. 8)</p></blockquote>
<p><strong>3. Plans (and Automata) are Competent but Brittle</strong></p>
<blockquote>
<p>“Studies looking at joint systems of people and AI or operators and advanced automation revealed the fundamental brittleness of automata regardless of the underlying technology [13].” (Woods, 2023, p. 8)</p></blockquote>
<blockquote>
<p>“the problem identified was the way the new capability was deployed produced competent but brittle systems.” (Woods, 2023, p. 8)</p></blockquote>
<blockquote>
<p>“Risk of brittleness is universal.” (Woods, 2023, p. 9)</p></blockquote>
<p><strong>4. People (with the right help) provide the extra adaptive capacity to mitigate brittleness</strong></p>
<blockquote>
<p>“(a) challenges occurred much more often than stakeholders realized, and (b) people in some roles were the critical source for resilient performance despite the stresses, risks, uncertainties, threat of overload, and bottlenecks” (Woods, 2023, p. 9)</p></blockquote>
<blockquote>
<p>“All systems are developed and operate with finite resources and live in a changing environment. As a result, plans, procedures, automation, agents, and roles are inherently limited and unable to completely cover the complexity of activities, events, and demands.” (Woods, 2023, p. 9)</p></blockquote>
<blockquote>
<p>“Without this capability for extensibility, brittle collapse would occur much more often than it is observed [6].” (Woods, 2023, p. 10)</p></blockquote>
<blockquote>
<p>“Adaptation is not about always changing the plan, model, or previous approaches but about the potential to modify plans to continue to fit changing situations.” (Woods, 2023, p. 10)</p></blockquote>
<h4 id="841-plan-and-revise-guided-adaptability">8.4.1. Plan and Revise: Guided Adaptability</h4>
<blockquote>
<p>“The new science shows that this assumption is guaranteed to be wrong in the future, regardless of how well the plan has guided performance in the past. The timing on this guarantee is linked to the pace of change within and around the organization and how those changes expand the tangle of interdependencies it exists within.” (Woods, 2023, p. 10)</p></blockquote>
<blockquote>
<p>“The irony is you can only monitor how well plans fit the world by understanding how people have to adapt to fill the gaps and holes that inevitably arise as variability in the world exceeds the capability of plans and the competencies built into any system [12].” (Woods, 2023, p. 11)</p></blockquote>
<blockquote>
<p>“Monitoring how people adapt to make the system work does not constitute approval that these adaptations are the “best” given the trade-offs faced in different situations. What is “best” is itself a dynamic judgment that can and should change as challenges vary— reprioritization rebalances goals in the trade space to fit the situation.” (Woods, 2023, p. 11)</p></blockquote>
<blockquote>
<p>“Driving gap-bridging adaptations underground also makes it harder to recognize how plans do not fit the changing patterns of variability in the world.” (Woods, 2023, p. 11)</p></blockquote>
<blockquote>
<p>“Recognizing what adaptations are going on allows one to see the resources—physical, cognitive, collaborative, and others—that people draw on to produce resilient performances in the face of challenges small and large.” (Woods, 2023, p. 11)</p></blockquote>
<blockquote>
<p>“(a) about challenges that recur in general even though the specifics vary in individual events and (b) about the ways people work and coordinate to handle challenges.” (Woods, 2023, p. 12)</p></blockquote>
<blockquote>
<p>“If safety is about “repair after something goes wrong”, no organization can keep up with the pace of change, growth, and scale of modern systems and activities [32].” (Woods, 2023, p. 12)</p></blockquote>
<blockquote>
<p>“However effective your organization has become, however you have developed and deployed new capabilities to grow, whatever your record of past improvement in reliability/ productivity/efficiency, and whatever the promises of new capabilities to-be-deployed, the world, in the near future, will produce challenges that go beyond the competencies embodied and require adaptive capacity to stretch.” (Woods, 2023, p. 12)</p></blockquote>
<blockquote>
<p>“You will have to establish the continuous feedback/learning loop in order to adapt how you adapt.” (Woods, 2023, p. 13)</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/woods-resolvingcommandadaptparadox-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Ramona Flowers]]></title>
    <published>2023-11-26T00:00:00Z</published>
    <updated>2023-11-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/ramona-flowers-11-2023/</id>
    <content type="html"><![CDATA[<p>not much time for art lately. But I at least took some time for a quick Ramona
Flowers (from the Scott Pilgrim comic/movie) sketch with ink and watercolor
tonight</p>
]]></content>
    <link href="https://unwiredcouch.com/art/ramona-flowers-11-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Strategic Agility Gap: How Organizations Are Slow and Stale to Adapt in Turbulent Worlds]]></title>
    <published>2023-11-19T00:00:00Z</published>
    <updated>2023-11-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/woods-strategicagilitygap-2020/</id>
    <content type="html"><![CDATA[<h1 id="the-strategic-agility-gap-how-organizations-are-slow-and-stale-to-adapt-in-turbulent-worlds">The Strategic Agility Gap: How Organizations Are Slow and Stale to Adapt in Turbulent Worlds</h1>
<h2 id="key-takeaways">Key Takeaways</h2>
<p>This paper outlines how past successes drive the creation of bigger and ever
more complex systems, and the risk of not being able to adapt fast enough to
changing environments and systems. The gap between the adaptation possible and
the adaptation required by an organisation is called “The Strategic Agility
Gap”, and a fundamental requirement for being able to bridge that gap is the
ability to react to unplanned failure, called “SNAFU catching”.</p>
<h2 id="detailed-discussion">Detailed discussion</h2>
<p>The paper approaches the topic of how organisations deal with the challenge of
keeping up with an ever accelerating pace of change in environments and
systems that get more and more complex. While the focus here is mostly on threats
and challenges in the form of surprising failures, it’s also mentioned that
slow adaptability has implications for the ability to seize
opportunities. Organisations of course aren’t failing to adapt entirely.
They are always navigating within their range of agility to adapt to new
situations. However the problem arises when this “strategic agility”, as it is
called in the paper, isn’t enough to account for the surrounding changes (e.g.
systems and environment) and provide adaptability for new challenges. The
difference between the existing level of agility and the required agility to
match the surrounding pace of change is what is called “the Strategic Agility
Gap”:</p>
<blockquote>
<p>“It is a mismatch in velocities of change and velocities of adaptation (Fig. 1)” (Woods, 2020, p. 96)</p></blockquote>
<p>This is also outlined nicely in this diagram in the paper:</p>
<p><img src="/images/reading/woods-strategicagilitygap-2020-diagram.png" alt=""></p>
<p>In order to illustrate this gap the paper discusses two cases: the 2012 Knight
Capital collapse, an incident related to “runaway” automation during which an
organisation wasn’t prepared to adapt to its changed environment. And the
case of large transport firms coping with the impacts and fallout of
Hurricane Sandy in the same year.</p>
<p>In the case of the Knight Capital collapse, a list of five risks is outlined
that contributed to the organization’s inability to deal with the incident:</p>
<blockquote>
<p>“First, small problems can interact and cascade quickly and surprisingly
given the tangle of dependencies across layers inside and outside the
organization. Second, as effects cascade and uncertainties grow, multiple
roles struggle to understand anomalies, diagnose underlying drivers,
identify compensatory actions. Third, difficulties arise getting
authorization from appropriate roles to make non-routine, risky, and
resource costly actions, while uncertainty remains. Fourth, all of the above
take effort, time, and require coordination across roles. […] Fifth, when
critical replanning decisions require serial communication vertically
through the levels of the organization, responses are unable to keep pace
with events.” (Woods, 2020, p. 98)</p></blockquote>
<p>In contrast, the transportation firms were “poised to adapt” in the scenario
they were dealing with, thanks to constant preparatory work preceding the
incident that put them in a position to react quickly:</p>
<blockquote>
<p>“Upper management developed mechanisms for this shift prior to particular
challenge events. As hurricane Sandy approached New York, temporary teams
were created quickly to provide timely updates (weather impact analysis
teams).” (Woods, 2020, p. 98)</p></blockquote>
<blockquote>
<p>“These mechanisms existed because this firm’s business model, environment,
clientele, and external events regularly required adaptation as surprises
were a normal experience.” (Woods, 2020, p. 98)</p></blockquote>
<p>And in contrasting the cases, the paper again emphasises the importance of
being able to adapt not only within a layer of work but also vertically
throughout an organisation to push decision making and coordination onto
layers that allow for timely reaction to the current situation:</p>
<blockquote>
<p>“In the strategic agility gap, the challenge for organizations is to develop
new forms of coordination across functional, spatial, and temporal
scales—otherwise organizations will be slow, stale and fragmented as they
inevitably confront surprising challenges.” (Woods, 2020, p. 99)</p></blockquote>
<p>The paper then zooms out to a more high-level view of systems and their
messiness, highlighting the fact that all development and operation of systems
is limited by finite resources. Thus preparatory vehicles like plans and
automation can never cover the full extent of the complexity of a given
system:</p>
<blockquote>
<p>“All systems are developed and operate given finite resources and live in a
changing environment [5]. As a result, plans, procedures, automation, all
agents and roles are inherently limited and unable to completely cover the
complexity of activities, events, demands, and change” (Woods, 2020, p. 99)</p></blockquote>
<p>This is a point that is also expanded upon in greater detail in Woods’ 2023
paper <a href="https://unwiredcouch.com/reading/paper-command-adapt-paradox/">“Resolving
the Command–Adapt Paradox: Guided Adaptability to Cope with
Complexity”</a> to highlight the limitations of plans and automation.</p>
<p>As an essential capability in dealing with this predicament of having to
quickly react to an ever-changing world that is only predictable in very
narrow terms, Woods points to what he and Cook termed “SNAFU catching”: the
ability of people to adapt outside of standard plans:</p>
<blockquote>
<p>“SNAFU catching, however technologically facilitated, is a fundamentally
human capability essential for organizational viability […] people in some
roles provide the essential adaptive capacity for SNAFU catching, though
this may be local, underground, and invisible to distant perspectives [12].”
(Woods, 2020, p. 99)</p></blockquote>
<p>And SNAFU is the normal state of operations, even though, as the paper argues,
it is often rationalised away as a fringe phenomenon in order to put pressure
on work-to-{plan,role,rule}, which in turn hinders the effectiveness and
visibility of SNAFU catching:</p>
<blockquote>
<p>“The compliance pressure undermines the adaptive capacities needed for SNAFU
catching (initiative), creates double binds that drive adaptations to make
the system work ‘underground,’ and generates role retreat that undermines
coordinated activities.” (Woods, 2020, p. 100)</p></blockquote>
<p>In the conclusion the paper then outlines a path out of the Strategic Agility
Gap, which is continuous adaptation:</p>
<blockquote>
<p>“For organizations to flourish in the gap they need to build and sustain the
ability to continuously adapt.” (Woods, 2020, p. 101)</p></blockquote>
<p>It draws on lessons from Web Operations, where outages and near-outages are
common in a world where systems are constantly changing and, enabled by past
success, keep growing and changing continuously. It is also a field that has
become, and continues to become, more important as almost every system turns
into a digital system, at least in parts. In this complex world the
limitations of planning become more and more apparent, and the underlying
layer of resilience that keeps plans from failing catastrophically all the
time is SNAFU catching:</p>
<blockquote>
<p>“Organizational systems succeed despite the basic limits of plans in a
complex, interdependent and changing environment because responsible people
adapt to make the system work despite its design—SNAFU catching. The
ingredients are:” (Woods, 2020, p. 102)</p></blockquote>
<p>With the mentioned ingredients being:</p>
<ul>
<li>anticipation</li>
<li>contingent synchronisation</li>
<li>readiness to respond</li>
<li>proactive learning</li>
</ul>
<p>These behaviours and properties are often severely reduced by the pressure to
work-to-{plan/rule/role}, by compliance, and worst of all by the fear of
sanctions when any of these expectations are violated. And in summary:</p>
<blockquote>
<p>“Strategic agility gap arises as organizations’ trajectory of improvement
cannot match the emergence of new challenges, risks, and opportunities as
complexity penalties grow (Fig. 1). To flourish in the gap requires
organizations to build and sustain capabilities for SNAFU catching.” (Woods,
2020, p. 103)</p></blockquote>
<h2 id="personal-thoughts">Personal Thoughts</h2>
<p>This paper was fascinating to me in a couple of ways. Having the visual of
“a gap” in the capacity and capabilities of an organisation was an extremely
useful way of framing the problem for me. The paper also casually gave me the
best and most succinct definition of DevOps I have ever seen in a decade or so
of blog posts and talks trying to define it:</p>
<blockquote>
<p>“Reciprocity in collaborative work is commitment to mutual assistance.”
(Woods, 2020, p. 103)</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/woods-strategicagilitygap-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Safety after neoliberalism]]></title>
    <published>2023-11-07T00:00:00Z</published>
    <updated>2023-11-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dekker-safetyafterneoliberalism-2020/</id>
    <content type="html"><![CDATA[<h2 id="key-takeaways">Key takeaways</h2>
<p>Neoliberalism has contributed to an environment where regulation and rule making are done on a central, per-organisation basis, often implemented by non-experts, that gets in the way of frontline workers while providing (next to) no improvements for them in terms of safety and incident reduction. The path to safety management after neoliberalism is through &ldquo;the restoration of professional judgement&rdquo; and the increased and encouraged participation and compensation of workers at the sharp end.</p>
<h2 id="detailed-discussion">Detailed discussion</h2>
<p>The paper starts with a short introduction into neoliberalism and the definition that is used throughout the paper based on</p>
<blockquote>
<p>“Most scholars tend to agree that neoliberalism is broadly defined as the extension of competitive markets into all areas of life, including the economy, politics, and society” (Dekker, 2020, p. 1)</p></blockquote>
<p>With that as a starting point an example is cited from studying impact in the mining industry and summarising a set of lived experiences of safety management in a neoliberal context:</p>
<blockquote>
<ul>
<li>“Rules and regulations that have become overly obstructive;</li>
<li>Safety systems that encourage a dumbing down of individuals and a dilution of personal autonomy and discretion;</li>
<li>Higher stress levels due to a sense of loss of control;</li>
<li>Considerable wasted effort;</li>
<li>Systems that have become far too complicated;</li>
<li>Common sense and initiative that have been discouraged;</li>
<li>Cynicism about slogans, stated priorities and the motivation behind rules;</li>
<li>Safety staff detached from the front line—either by their inappropriate experience or because of their physical location being remote from the workplace (cf. Woods, 2006).” (Dekker, 2020, p. 2)</li>
</ul></blockquote>
<p>It is also discussed that, statistically, a reduction in workplace fatalities hasn&rsquo;t happened under this shift in safety management. The safety management that is often encountered today is positioned as a &ldquo;corollary of neoliberalism&rdquo;, following from the push towards deregulation, encouraging organisations to make up their own centralised safety management with a focus on the responsibilization of workers and a tap into a free market of &ldquo;safety services&rdquo; that introduces outside auditing, research, training, etc. as well as accreditation and consultancy.</p>
<p>Having assessed the current state of neoliberalism-tainted safety management as a setup that doesn&rsquo;t work, the discussion then switches its focus to the core question of what safety management could look like after neoliberalism.</p>
<p>The first focus area is what is called &ldquo;the restoration of professional judgement&rdquo; in four areas:</p>
<p><strong>1. The management of complex risks</strong>
The first area here is related to the familiar idea of &ldquo;sharp end professionals&rdquo; having an immense amount of expertise that can and should be tapped into:</p>
<blockquote>
<p>“Highly practiced professional skills of pattern recognition, scenario formulation and mental simulation of the execution of possible decision options (Klein, 1998) have shown to be far better at managing dynamic, complex situations than imposing fixed rules that supposedly reduce uncertainty.” (Dekker, 2020, p. 2)</p></blockquote>
<p><strong>2. Deregulation and reregulation</strong>
Here it is pointed out that deregulation in spirit is fairly aligned with the idea of worker autonomy and authority needing to be positioned where expertise is located:</p>
<blockquote>
<p>“Deregulation aligns, in spirit and in principle, with the ideas and ideals of worker discretion and autonomy, and with the notion that decision authority needs to flow to where expertise sits” (Dekker, 2020, p. 2)</p></blockquote>
<p>But with the caveat that this needs to be done intentionally and mindfully. And not in a way that leaves a vacuum of safety operations and practices:</p>
<blockquote>
<p>“But an unanticipated effect of deregulation has been not a reduction but a displacement of rulemaking, documentation and inspection activities.” (Dekker, 2020, p. 2)</p></blockquote>
<p><strong>3. What governments can do after neoliberalism</strong>
As for the question of what governments can do, this is split into two parts: the reassertion of ownership and re-regulation. However, this re-regulation needs to take a new form, away from pure expectation of compliance and towards a Safety II mindset of safety management:</p>
<blockquote>
<p>“Some regulators have become interested in more formally examining why things go right (consistent with Safety II principles (Hollnagel, 2017)), and trying to enhance or assure the presence of the capacities (or ‘resilience potentials’) in an organization’s people, processes and systems that make it so (Jacobsen, 2017).” (Dekker, 2020, p. 3)</p></blockquote>
<p><strong>4. What organizations can do after neoliberalism</strong>
And finally the opportunities for organisations post-neoliberalism, which reads in this paper like the hardest (but also maybe most important) part of the shift. First of all, the almost natural tendency of &ldquo;solving safety with bureaucracy&rdquo; needs to be tackled:</p>
<blockquote>
<p>“as soon as safety is involved, there seems to be an irresistible push towards a wider scope of norms, procedures and processes, whatever the context” (Dekker, 2020, p. 3)</p></blockquote>
<blockquote>
<p>“A recent poll in the mining industry showed that the majority “of the workforce feels things are being imposed on them that add no value, wastes their time, adds to their frustration and, at worst, creates a disconnect by removing control over their work”” (Dekker, 2020, p. 3)</p></blockquote>
<p>What this de-bureaucratizing of safety management looks like is again related to the first point of recognising front-line expertise:</p>
<blockquote>
<p>“De-bureaucratizing safety means putting safety expertise closer to the nuances, ‘messy details’ and quotidian risks of actual practice—as well as to operational decision-makers (CAIB, 2003; Galison, 2000; Roe, 2013; Woods, 2006).” (Dekker, 2020, p. 3)</p></blockquote>
<p>And also an emphasis on collaboration, relation, and empathy that is achieved through working <em>together</em> towards a common goal (safer operations) rather than imposing rules and regulations:</p>
<blockquote>
<p>“deliberately avoided using bureaucratized safety systems, and instead built on their collective responsibility for mitigating risk by reframing official safety programs in terms of kinship—specifically the ties of relatedness crew members create with each other in their everyday work.” (Dekker, 2020, p. 3)</p></blockquote>
<p>The paper then talks about participatory equality and workers&rsquo; compensation as an area to change after neoliberalism. While not directly related to safety management, they are outlined as requirements for changes to become sustainable.</p>
<blockquote>
<p>“One is participatory equality: the actual, meaningful involvement that workers have in decisions about the design, preconditions, implementation, execution, circumstances, monitoring and remuneration of their work—including of course those aspects to do with safety.” (Dekker, 2020, p. 3)</p></blockquote>
<p>And equally important is the shift away from the neoliberalism-induced view of risk and injuries as the sole responsibility of the worker, with attempts at improvement stopping there instead of looking at organisations as a whole:</p>
<blockquote>
<p>“The individualization of workplace risk, or ‘responsibilization,’ refers to safety programs and violation notices targeted at workers, not companies, which has helped tilt the distribution of work-related injury costs in favor of corporate interests” (Dekker, 2020, p. 4)</p></blockquote>
<p>The paper then closes with an overview of safety and global capitalism after neoliberalism, in which studies are examined that found a correlation &ldquo;between economic globalisation and the probability of industrial accidents&rdquo;. Interestingly, there is also a negative correlation with free speech:</p>
<blockquote>
<p>“In particular, freedom of speech is negatively correlated with industrial disasters.” (Dekker, 2020, p. 4)</p></blockquote>
<p>And the paper concludes that complex risk in safety after neoliberalism should be approached by</p>
<blockquote>
<p>“trusting and enabling practitioner decision discretion; by finding a new balance between written guidance and risk appetite, between professional judgment and risk competence.” (Dekker, 2020, p. 5)</p></blockquote>
<h2 id="personal-thoughts">Personal Thoughts</h2>
<p>I initially had quite a lot of problems following the argument from neoliberalism to what is essentially the description of Safety I management of incidents and safety as a whole. Originally I saw the inverse as more logical. And even though it is addressed in the paper as “Deregulation aligns, in spirit and in principle, with the ideas and ideals of worker discretion and autonomy, and with the notion that decision authority needs to flow to where expertise sits” (Dekker, 2020, p. 2), it took me some mind bending to make this make sense in my head. I also don&rsquo;t really have any background in learning or even thinking much about neoliberalism, so there&rsquo;s a good chance I&rsquo;m missing some stuff. But overall I liked how the paper made me think about these two concepts (safety management and neoliberalism) that I had never thought about in relation to each other.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dekker-safetyafterneoliberalism-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Hercule Poirot]]></title>
    <published>2023-11-04T00:00:00Z</published>
    <updated>2023-11-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/hercule-poirot-11-2023/</id>
    <content type="html"><![CDATA[<p>A quick sketch for this week. Staying with the theme of oldschool TV. David
Suchet as Hercule Poirot in pencil and ink. #art</p>
]]></content>
    <link href="https://unwiredcouch.com/art/hercule-poirot-11-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Meaningful Availability]]></title>
    <published>2023-11-01T00:00:00Z</published>
    <updated>2023-11-01T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/diwan-meaningfulavailability-2020/</id>
    <content type="html"><![CDATA[<h2 id="key-takeaways">Key takeaways</h2>
<p>SLOs, the way they are usually constructed, heavily bias towards the most
active user. This isn&rsquo;t necessarily a good representation of the overall
customer impact of an incident. In order to address that, a mechanism called
&ldquo;windowed user-uptime&rdquo; is introduced to make the measure of user
impact more representative.</p>
<h2 id="detailed-discussion">Detailed discussion</h2>
<p>The paper starts with an introduction of the usual approach for SLOs:</p>
<blockquote>
<p>“Success-ratio is the fraction of the number of successful requests to total requests over a period of time (usually a month or a quarter)” (Diwan et al., 2020, p. 1)</p></blockquote>
<p>This plain calculation has the advantage that it&rsquo;s very easy to compute, but it comes with the bias of skewing towards the most active users, as they will represent more requests (successful as well as unsuccessful) in the overall ratio.</p>
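<p>To make this skew concrete, here is a small toy calculation (my own illustration with made-up numbers, not an example from the paper) with two hypothetical users:</p>

```python
# Toy illustration of success-ratio bias: the aggregate ratio is
# dominated by the most active user. (Hypothetical numbers.)
requests = {
    # user: (successful requests, total requests)
    "heavy_user": (900, 1000),  # saw a 10% error rate
    "light_user": (10, 10),     # saw no errors at all
}

total_success = sum(s for s, _ in requests.values())
total_requests = sum(t for _, t in requests.values())
overall_ratio = total_success / total_requests  # ~0.901

per_user_ratios = [s / t for s, t in requests.values()]
mean_user_ratio = sum(per_user_ratios) / len(per_user_ratios)  # 0.95

print(overall_ratio, mean_user_ratio)
```

<p>Half of the users had a perfect experience, yet the aggregate success-ratio sits almost exactly at the heavy user&rsquo;s 90%.</p>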
<p>The general form of an availability metric is defined as</p>
<pre tabindex="0"><code>availability = good service / total demanded service
</code></pre><p>The way service is measured here generally falls into one of two buckets. The first is time based, meaning the time between failures is uptime and the time recovering from a failure is downtime. This leads to a metric defined as</p>
<pre tabindex="0"><code>availability = uptime / (uptime + downtime)
</code></pre><p>This measure strongly relies on the definition and precise meaning of &ldquo;failure&rdquo; and &ldquo;recovery&rdquo;. Especially in situations of partial failure, whether &ldquo;failure&rdquo; means failure for all users, for at least one user, or something in between becomes important for what the measured availability actually says. Percentage thresholds are often introduced here, to be able to say something like &ldquo;failure means at least 5% of users encounter errors&rdquo;. That still comes with the problem of an arbitrary threshold: within this definition a system with a 5% error rate is treated the same as one with a 0.0001% error rate, given the chosen threshold as the delineation line. The general downsides of this measure are:</p>
<ul>
<li>it&rsquo;s not proportional to the severity of the system&rsquo;s unavailability</li>
<li>not proportional to users affected</li>
<li>not actionable, with no insight into the source of failures</li>
<li>not meaningful as they rely on arbitrary thresholds and/or manual judgements</li>
</ul>
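<p>The threshold problem can be sketched in a few lines of code (my own toy model, not code from the paper): every time window at or above the arbitrary threshold counts as fully down, no matter how severe the failure actually was:</p>

```python
# Toy model of threshold-based, time-windowed availability.
# A minute counts as "down" when its error rate reaches the threshold.
THRESHOLD = 0.05  # "failure means at least 5% of requests error"

def uptime_ratio(per_minute_error_rates, threshold=THRESHOLD):
    """Fraction of minutes counted as 'up' under the threshold definition."""
    down = sum(1 for rate in per_minute_error_rates if rate >= threshold)
    return 1 - down / len(per_minute_error_rates)

# Three 10-minute windows, each with a single bad minute of varying severity:
barely_bad = [0.0] * 9 + [0.05]   # 5% errors in the bad minute
very_bad   = [0.0] * 9 + [0.50]   # 50% errors in the bad minute
almost_bad = [0.0] * 9 + [0.049]  # 4.9% errors in the bad minute

print(uptime_ratio(barely_bad))  # 0.9 -- same as very_bad
print(uptime_ratio(very_bad))    # 0.9 -- ten times the errors, same score
print(uptime_ratio(almost_bad))  # 1.0 -- counted as fully up
```

<p>The measure is neither proportional to the severity of the unavailability nor meaningful across the threshold boundary.</p>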
<p>The other approach is <em>count based</em> availability, which is much rarer and is based on the similar definition of</p>
<pre tabindex="0"><code>availability = successful requests / total requests
</code></pre><p>There is some popularity to this approach as it is easy to implement. However, users ultimately care about the time they were able to use a service, not the number of requests they were able to make. So this approach also comes with the downsides of:</p>
<ul>
<li>not meaningful as it&rsquo;s not based on time</li>
<li>being biased towards highly active users</li>
<li>being biased due to different client behaviour during outages</li>
</ul>
<p>There is a quick discussion of synthetic probes for measuring system availability, but this isn&rsquo;t explored deeply, as it comes with the downsides of not being representative of user activity and not being proportional to what users experience. So for the goal of a meaningful measure of uptime it&rsquo;s not a good fit.</p>
<p>In order to improve on these</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/diwan-meaningfulavailability-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit]]></title>
    <published>2023-09-28T00:00:00Z</published>
    <updated>2023-09-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/benjamin-kunstwerkzeitalterreproduzierbarkeit-1936/</id>
    <content type="html"><![CDATA[<p>I don’t really think I got the gist of this book. It had the usual barrier to
understanding that I’ve kinda come to expect from early 20th century
philosophy texts. There is a lot of talking about film specifically and how it
enables misuse of media. And when it comes to the reproducibility of art it’s
mostly said that the “aura” of it gets lost. I&rsquo;m likely gonna do a second pass
of it at some point maybe with some relevant secondary literature in hand
because I feel like there is more I can get from this text. Especially since
there were many parts in there that lamented the dissolving distinction
between author and audience. And the fact that &ldquo;everyone these days can
publish whatever they think they have to say&rdquo;. Which feels very similar to a
lot of discourse today about content creation.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/benjamin-kunstwerkzeitalterreproduzierbarkeit-1936/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Nick Fury from the Secret Invasion TV show]]></title>
    <published>2023-09-25T00:00:00Z</published>
    <updated>2023-09-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/nick-fury-09-2023/</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve not been making time for art these last couple of months and I&rsquo;m trying
to get back into it. Feels incredibly rusty but I did a Secret Invasion Nick
Fury tonight that I don&rsquo;t completely hate.</p>
]]></content>
    <link href="https://unwiredcouch.com/art/nick-fury-09-2023/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Sources of Power]]></title>
    <published>2023-09-24T00:00:00Z</published>
    <updated>2023-09-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/klein-sourcesofpower-1998/</id>
    <content type="html"><![CDATA[<p>I deeply enjoyed this book. I’ve had many discussions about Klein’s ideas and
concepts before. I have for example applied his idea of <a href="https://www.gary-klein.com/premortem">premortems</a> a lot
before in architectural and operational reviews of technology changes. And his
ideas about the knowledge of expert workers are an important part of how I approach
incident reviews and learning from them. The book is also an interesting
opposite view in a lot of ways to Kahneman’s classic <a href="https://unwiredcouch.com/reading/kahnemann-thinkingfastandslow-2011/">“Thinking, Fast and
Slow”</a> where a lot of emphasis is put on how the human mind can be tricked
or is wrong. I took tons of notes from this book and there were so many things
that reminded me of operating web services or planning and maintaining other
complex systems. Definitely a strong recommendation to read from me.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/klein-sourcesofpower-1998/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Building a second brain]]></title>
    <published>2023-08-24T00:00:00Z</published>
    <updated>2023-08-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/forte-buildingasecondbrain-2022/</id>
    <content type="html"><![CDATA[<p>This book was a surprisingly quick and easy read. I wasn&rsquo;t really sure if I
was gonna like another book about taking notes (or find it useful) after
reading <a href="/reading/ahrens-smartnotes-2017/">How to take smart notes</a>, which had comparatively few practical things
for me to take away and implement. But &ldquo;Building a Second Brain&rdquo; was very
different in that regard. The book puts a lot of focus on building a system
that is very flexible and adaptable even to stressful schedules. There are a
couple of guiding principles (the PARA folder structure, refining notes
regularly, making it easy to capture notes) that are elaborated on. But there
is a lot of emphasis on making this work in even tiny moments throughout the
week. A lot of the system and ideas reminded me of bullet journaling, which
I&rsquo;ve been doing for years at this point and which has been working very well
for me.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/forte-buildingasecondbrain-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Gender Trouble]]></title>
    <published>2023-08-06T00:00:00Z</published>
    <updated>2023-08-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/butler-gendertrouble-2011/</id>
    <content type="html"><![CDATA[<p>I don’t remember how I got to this book as I had never heard of Judith Butler
before given I’ve not really had any philosophy or literature classes on a
higher education level and haven’t read much about academic feminist or gender
theory. So I went into this book pretty unprepared and I was immediately in
for a rude awakening. I’ve always thought of my English as being pretty decent
for a second language but there were so many sentences in this book that even
on the third try I wasn’t able to grasp. The fact I haven’t read Freud, Lacan,
Foucault, or any of the other works that “Gender Trouble” discusses and bases
a lot of arguments on definitely also didn’t help. Only while I was already
somewhat deep in did I find out that Butler is somewhat notorious for having a
difficult and dense writing style. Nevertheless I stayed curious enough about
the topic to get through the book and it was definitely interesting. I had
heard of gender being a social performance before but reading it in a more
academic context and having more (albeit difficult to grasp) discussion around
it was pretty insightful for me. I don’t know if I’d recommend it unless
you’re really into the topic and have maybe read some of the referenced
material. But I don’t regret reading and struggling through it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/butler-gendertrouble-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Static URL shortening with nginx maps]]></title>
    <published>2023-07-02T00:00:00Z</published>
    <updated>2023-07-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2023/07/02/static-url-shortening-nginx-maps.html</id>
    <content type="html"><![CDATA[<p>In 2012 when it was hip and cool to do so, I also had my own URL shortener. It
was based on what I called <a href="https://github.com/mrtazz/katana">&ldquo;katana&rdquo;</a>, a
convenience ruby wrapper around
<a href="https://github.com/technoweenie/guillotine/">&ldquo;guillotine&rdquo;</a> that made it easy
to run it on Heroku backed by Redis. Back then Heroku still had a free tier and
RedisToGo was available as a free add-on for databases up to 5MB or so. It was
really fun to run, it had <a href="https://github.com/mrtazz/katana/blob/master/app.rb#L28-L39">its own endpoint to support Tweetbot&rsquo;s custom URL
shortening
integration</a> and
the free tier was more than good enough for the occasional shortening.</p>
<p>Over the years however I used it less and less, mostly because Twitter had
started forcing its own URL shortening everywhere, with auto-expansion when
viewing a tweet. And the experience of using a custom shortener was not on par
with that. I&rsquo;ve also almost lost the database of the shortener a couple of
times because free tiers don&rsquo;t usually come with backups. So I rigged up a
quick GitHub Action that ran once a day,
<a href="https://github.com/jeremyfa/node-redis-dump" title="node-redis-dump"><code>redis-dump</code>-ed</a> all the contents to plain text and committed them to a git
repo as a low budget backup job. At this point I wasn&rsquo;t really shortening
anything anymore but wanted to keep the existing URLs functional. I had moved
to Heroku&rsquo;s own Redis service at that point and there was no real work involved
to keep it running.</p>
<p>Fast forward to 2022, when Heroku <a href="https://blog.heroku.com/next-chapter" title="Heroku’s Next Chapter">announced</a> the end of the free tiers. And while I&rsquo;m generally
happy to pay for things, I wasn&rsquo;t convinced that maintaining what was by now
essentially a barely used URL lookup app was worth the $7/month for me.
So I shut it down and thought about alternatives. I could run the app on my
server that I use for a couple of things. But I really don&rsquo;t want to run a ruby
app + redis in my free time. I thought about implementing the shortener logic
in Go and backing it with something like sqlite or even just a yaml file. But
again that felt like a lot of effort for not actually shortening anything.</p>
<p>And then I thought &ldquo;this is just hosting 301 redirects, surely something nginx
is good at&rdquo;. And sure enough, after a quick internet search I found <a href="https://stackoverflow.com/questions/29354142/nginx-how-to-mass-permanent-redirect-from-a-given-list">a
stackoverflow
post</a>
that provided a good example for managing a lookup map in a handful of lines of
code. The core of it is basically:</p>
<pre tabindex="0"><code># head -n 5 /usr/local/etc/nginx/mrtz_cc_redirect_map.conf
/-KmaJA &#39;http://lusis.github.com/blog/2014/04/13/omnibus-redux/&#39;;
/-vvREg &#39;http://hannahmontana.sourceforge.net/&#39;;
/-yW3mQ &#39;http://s3itch.unwiredcouch.com/1._tmux-20140719-180429.jpg&#39;;
/09nQKA &#39;http://s3itch.unwiredcouch.com/Projects-20141130-133209.jpg&#39;;
/0YK2gg &#39;https://speakerdeck.com/mrtazz/statsd-workshop-monitorama-2013&#39;;

# wc -l /usr/local/etc/nginx/mrtz_cc_redirect_map.conf
424 /usr/local/etc/nginx/mrtz_cc_redirect_map.conf


# cat /usr/local/etc/nginx/sites/redirect
map_hash_bucket_size 256; # see http://nginx.org/en/docs/hash.html

map $request_uri $new_uri {
    include /usr/local/etc/nginx/mrtz_cc_redirect_map.conf;
}

server {
  listen 94.130.5.59:443 ssl;
  server_name mrtz.cc;

  if ($new_uri) {
    return 301 $new_uri;
  }

  ...
}
</code></pre><p>So all I had to do was convert the plain text backup of my redis instance into
the nginx map format, which was easy enough with this <code>awk</code> one-liner:</p>
<pre tabindex="0"><code>% head -n 5 backups/mrtz.cc/mrtz.cc.dump
SET     guillotine:hash:-KmaJA &#39;http://lusis.github.com/blog/2014/04/13/omnibus-redux/&#39;
SET     guillotine:hash:-vvREg &#39;http://hannahmontana.sourceforge.net/&#39;
SET     guillotine:hash:-yW3mQ &#39;http://s3itch.unwiredcouch.com/1._tmux-20140719-180429.jpg&#39;
SET     guillotine:hash:09nQKA &#39;http://s3itch.unwiredcouch.com/Projects-20141130-133209.jpg&#39;
SET     guillotine:hash:0YK2gg &#39;https://speakerdeck.com/mrtazz/statsd-workshop-monitorama-2013&#39;

% awk &#39;/guillotine:hash/ { split($2,a,/:/); print &#34;/&#34;a[3]&#34; &#34;$3&#34;;&#34;}&#39; &lt; backups/mrtz.cc/mrtz.cc.dump | head -n 5
/-KmaJA &#39;http://lusis.github.com/blog/2014/04/13/omnibus-redux/&#39;;
/-vvREg &#39;http://hannahmontana.sourceforge.net/&#39;;
/-yW3mQ &#39;http://s3itch.unwiredcouch.com/1._tmux-20140719-180429.jpg&#39;;
/09nQKA &#39;http://s3itch.unwiredcouch.com/Projects-20141130-133209.jpg&#39;;
/0YK2gg &#39;https://speakerdeck.com/mrtazz/statsd-workshop-monitorama-2013&#39;;
</code></pre><p>Then I just had to chef out the nginx config and Let&rsquo;s Encrypt setup for the domain to my
server and change the DNS records to point at the server instead of Heroku<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. And
voila:</p>
<pre tabindex="0"><code>% curl -sv https://mrtz.cc/-vvREg 2&gt;&amp;1 | grep Location
&lt; Location: http://hannahmontana.sourceforge.net/
</code></pre><p>I really like this setup because running nginx is pretty straightforward at
the small scale I use it at. And I care about keeping URLs working. So this
makes me happy. I might at some point want to start using it and adding
URLs again, at which point I&rsquo;ll have to figure something out. But I don&rsquo;t expect
that to be any time soon (if at all).</p>
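<p>For illustration, the same dump-to-map conversion the <code>awk</code> one-liner performs can be sketched in Python. This is just a sketch, not part of the original setup; the <code>guillotine:hash</code> key prefix and the map entry format are taken from the dump and map snippets above:</p>

```python
# Sketch: convert redis dump lines into nginx map entries, mirroring the
# awk one-liner from the post. Lines look like:
#   SET\tguillotine:hash:-vvREg 'http://example.com/'
def dump_to_map(lines):
    entries = []
    for line in lines:
        if "guillotine:hash" not in line:
            continue
        # split into command, key, and the (quoted) URL
        _cmd, key, url = line.split(None, 2)
        # the short code is everything after the "guillotine:hash:" prefix
        short_code = key.split(":", 2)[2]
        entries.append(f"/{short_code} {url.strip()};")
    return entries

print("\n".join(dump_to_map([
    "SET\tguillotine:hash:-vvREg 'http://hannahmontana.sourceforge.net/'",
])))
# prints: /-vvREg 'http://hannahmontana.sourceforge.net/';
```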
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>There were actually some hiccups in the middle where I still had the
DNS configured in DNSimple but had apparently let the domain lapse 😅. But
re-registering it with DNSimple was super fast and I just had to wait a bit
for the registration to propagate.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>
]]></content>
    <link href="https://unwiredcouch.com/bits/2023/07/02/static-url-shortening-nginx-maps.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Senior Rotations]]></title>
    <published>2023-06-27T00:00:00Z</published>
    <updated>2023-06-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2023/06/27/senior-rotations.html</id>
<content type="html"><![CDATA[<p>When I started at Etsy (this was at the end of 2011) onboarding meant spending the
first 6 weeks on a different team every week. The idea was to get to know some
people right away, outside of the team we&rsquo;d be working on. And also to learn a bit
about some areas that we were most likely going to be overlapping with in our
future work. And while this was generally fun, it also fell into an already
overwhelming period. So it wasn&rsquo;t rare that the week with a team was over just
when you felt like you were starting to understand what was going on
(which was probably intended).</p>
<p>But we also had this concept of &ldquo;Senior rotations&rdquo;, which to me always felt
like an extension of onboarding. And it was one of the most fun things I got to
do during my time there. The basic concept is that every Senior Engineer and
above (back then that meant Staff, Principal, and Distinguished) was entitled
to a 4 week rotation with a different team once a year. The idea was that for
one month out of the year the more experienced engineers from any given team
would be working on something completely different. Specifically we were
encouraged to pick a team that was working on something very different from
what our team was doing. And conversely, every team would also host an engineer
for a month once or twice a year (which is what it usually came down to in my
experience).</p>
<p>The purpose of this - as it was explained to me - was multi-faceted. It was a
way to make sure senior people gain and maintain connections across the
organization, especially as the company was growing. Connections in the sense
of relationships, but also knowing which problems are being solved elsewhere, and
what the impact of changes in one part of the system is on other parts of it. And
having intricate knowledge of some parts of the stack or the product can often
mean there are some good ideas ready to emerge when suddenly confronted with a
different area. Another idea - though I don&rsquo;t know how much that actually
worked - was that it was a way to improve retention. For senior engineers that
might feel like they’ve gotten into a rut it was a way to experience work on a
different team and consider switching, or to find new excitement in their work. On
the flipside it also meant junior engineers got to learn from more senior
engineers from across the company. And it was a good way to build empathy for
other teams as well. In addition, the &ldquo;borrowing team&rdquo; needed to regularly plan
for senior engineers not to be around. Which was good practice for handing
projects off, documenting work, and generally increasing the vacation factor
(the way better and less morbid version of the <a href="https://en.wikipedia.org/wiki/Bus_factor" title="Bus factor">bus
factor</a>, because let&rsquo;s
be honest, we should be planning for people going on spontaneous vacation much
more than for them being hit by a bus).</p>
<p>I&rsquo;ve done two of those during my time at Etsy, with wildly different
experiences. And most of it came down to preparation. I did one rotation where
I didn&rsquo;t really plan and prep for it. It was also a team that I was already
working closely with in my regular day to day, and I didn&rsquo;t make sure I wasn&rsquo;t
gonna have to work on projects for my original team. It was during a period
where there was a lot going on, so I wasn&rsquo;t disconnecting from the chatter and
questions for my team either. So all in all, while I worked on some fun stuff
and learned things, it definitely wasn&rsquo;t really the change of pace and focus
that I wanted it to be.</p>
<p>The following year I really wanted to do another rotation. And as it happens we
also had a reorg that meant my team was being disbanded and I was starting on a
newly created team anyway. The perfect opportunity to not have to worry about
any lingering projects or feeling like I abandoned my team mates during a time
of high stress. And the timing matched up so that I would start on the new team
after my summer vacation, so it wasn&rsquo;t gonna matter much if I started a
month later. This time I wanted to choose a team to rotate with that
was about as far away from my day to day work on infrastructure as I could
imagine. Which is why I was super excited to hear that the design system team
was hosting senior rotators, so I immediately signed up.</p>
<p>I didn’t have many expectations going into the rotation other than wanting to
learn about something completely different. Plus I’ve never liked the attitude
that there are inherent backend and frontend engineers and that you have to be
either one. It’s all just yelling at computers in different ways. So I was
excited to learn more about the frontend part of the stack. My general project
during that month was designing a versioning and deployment workflow for CSS
(specifically SCSS) changes that we wanted to support going forward as an
extension of the existing CSS build. The details are a bit fuzzy 7 years later.
But basically it was a project that was somewhat rooted in infrastructure in a
way that gave me a great starting point to talk to designers (who wrote large
parts of CSS and JS changes themselves back then) and engineers about their
requirements and workflows around CSS changes. I was in daily standups with the
design system team, got to hear about the daily challenges of designers and
frontend engineers, what goes into maintaining a design system, and most
importantly got to ask all the “stupid” questions that came to my mind. Plus as
my rotation mentor I had the fantastic <a href="https://sylormiller.com">Katie
Sylor-Miller</a> who is one of the most frontend savvy
people I’ve ever met and I got to ask her all the questions as well. And I
can’t count all the things I learned during that time. I had fantastic
conversations about how to design for a brand in different shades of orange
(Etsy’s brand colour back then), the implications of designing maintainable
SCSS code organization, and just a whole bunch of things around design systems
and frontend development for a big, busy website.</p>
<p>All in all I really enjoyed that rotation and all the things I got to learn
through it and the people I got to know and hang out with for a month. I really
appreciated that I got to get out of my comfort zone for a bit and in some ways
“be a junior engineer” again. And it gave me insight into different parts of
the company that I probably would’ve never gotten otherwise. As I mentioned,
planning and executing senior rotations is a decent amount of effort. But I
think it’s a really fantastic idea and I’ve benefitted a lot from being able to
participate in it.</p>
]]></content>
    <link href="https://unwiredcouch.com/2023/06/27/senior-rotations.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Incidents, tickets, and standardized learning]]></title>
    <published>2023-06-07T00:00:00Z</published>
    <updated>2023-06-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2023/06/07/incidents-tickets-standardized-learning.html</id>
<content type="html"><![CDATA[<p>One of the struggles of learning from incidents in a business environment (i.e. &ldquo;at work&rdquo;) is often the time required to really dive into an incident and learn from it. Incidents are a vast source of interesting discoveries and ways to learn more about your systems and the humans that keep them running. But learning is really hard to time box. And in the usual work setting you kinda go by the defaults of your calendaring software - which is usually either a 30 minute or 1 hour meeting - to allot time for learning from an incident in a group setting. And given how much talk there always is about the expense of meetings and how many meetings could be emails (or worse, a Slack thread), having a 1 hour meeting about an incident already feels a bit much sometimes.</p>
<p>I&rsquo;ve had many conversations about this in the past, especially when I used to facilitate lots of learning reviews for incidents myself as well as teach facilitation to my coworkers. And usually someone wanted to know how we know a 1 hour meeting is enough to get through an incident. The honest answer is that it&rsquo;s never enough. But 1 hour is the somewhat arbitrary amount of time you can usually schedule amongst a large-ish group of people without too many conflicts. And yet it often already feels hard to justify why it&rsquo;s needed. Especially a meeting where the outcome might not be as easily quantifiable as lots of people would like. The expectation is always that you have something to show for it after that hour. Like a bunch of remediation tickets to close, some reasons why exactly something happened, and steps and actions for why it will never happen again. Ideally one reason, one cause for why it happened, which can be easily prevented in the future.</p>
<p>I&rsquo;ve thought a lot over the years about why that is. What is so alluring and comforting about single cause incidents for a business. And conversely what is so hard about accepting the fact that we will never be able to prevent another incident from happening. We&rsquo;ll never fully &ldquo;solve&rdquo; an incident and we&rsquo;ll never be able to describe and map a system to the extent where we can know the full impact of every change in the future.</p>
<h2 id="the-base-unit-of-work-at-a-company-is-a-ticket">The base unit of work at a company is a ticket</h2>
<p>What I&rsquo;ve mostly come down to when thinking about this is the realization that the base unit of work in a corporate context is a ticket (or an issue, or whatever your tracking software calls it). The way you know what needs to be done, and that you have done a certain amount of work, is that there exists a ticket in a tracker. Many conversations (and &ldquo;process optimizations&rdquo;) in a usual work setting at some point include a discussion about the fact that everything that is being worked on needs to be tracked in a ticket to make sure it&rsquo;s all visible and surfaced. And once the work is done (or abandoned) the ticket needs to be closed to communicate its status (and sometimes status needs to be communicated on open tickets as well). These updates are then rolled up into higher level tickets and summaries for upper management to communicate what got done and what teams are working on. There is much more that could be said about this but the point I want to get across is that the core measurement of work planned, to be done, in progress, done, or even abandoned is to have a ticket that can be closed.</p>
<h2 id="incident-investigations-are-about-learning">Incident investigations are about learning</h2>
<p>And now the contrast here is that reviewing and investigating incidents is about learning. Learning what was previously unknown and thus contributed to a surprise that manifested as an outage or reduced availability or data loss, or any other unwanted event that we commonly call an incident. And learning - as probably almost everyone has witnessed in some form before - is far from linear and sequential. Sometimes it&rsquo;s very quick, but usually it takes its sweet time. Especially in learning through research, where there isn&rsquo;t anyone already who knows the answer and who can say whether a hypothesis or an understanding is true or false, learning is far from linear. It&rsquo;s full of dead ends, red herrings, misunderstandings, re-discoveries, reformulations, conversations, disagreeing opinions, and probably late nights and long weekends.</p>
<p>Trying to now wrap learning into a time boxed setting like work where it can be reflected by a ticket that can be closed is surely a challenge. The irony here is that a form of this already exists in arguably the main arena of learning: education. Every school and college setting knows the setup of students having to learn (and ideally understand) a topic in a given time and then pass a test to be able to mark it as done (or as learned). Essentially a trade off that attempts to summarize learning into a checkbox style format where the options are more or less either pass or fail. And many, many discussions have been had and continue to happen about this suboptimal setup where it incentivizes students to learn for passing the test and not for understanding the topic. It gives rise to many frustrations where someone who is able to recite the words or equations (potentially without having understood the meaning behind them) is given the same or a better grade than someone who took more time to dig into what something means and study additional material around the topic to improve their understanding - but didn&rsquo;t do as good of a job demonstrating that in a test setting. And it leads to a considerable amount of people - not least in the software engineering world - actively despising education and its standardized testing.</p>
<p>But the truth of the matter is that learning can have more than one goal. There is no &ldquo;true&rdquo; and &ldquo;false&rdquo; learning. One way of learning has the goal of passing a test and the other has more the focus on establishing and deepening one&rsquo;s understanding of a topic. Both are valid and they satisfy different requirements. But you should know which one you choose and what you will get out of it. You can&rsquo;t not learn, but you can definitely learn different things depending on how you approach it.</p>
<h2 id="question-of-focus">Question of focus</h2>
<p>Tying this back to corporate incident investigations we are presented with a very similar choice. Do we want to review an incident for understanding the complex interplay of contributing factors that allowed it to manifest? Or do we want to be able to just close out the ticket already and move on? Both are valid in their own way. Because there will honestly never be enough time to investigate every incident in full. There is other stuff that needs to be done as well and you&rsquo;ll never get the 6 or 12 or 25 people that were closest to the incident and know the most about what happened in the same room to share all their knowledge and experience to untangle the full incident (which is already a trade off because ideally a facilitator should interview them individually to make sure there is no barrier to sharing).</p>
<p>However the huge downside and difference to the education setting is that there is no actual &ldquo;passing the test&rdquo; in work incidents. Just by doing the due diligence to be able to say we can close the ticket we don&rsquo;t actually gain or pass anything. We just miss out on learning. So while it&rsquo;s appealing to make it look (and feel) like an incident review was &ldquo;completed&rdquo;, following a linear, causal accident model to get to a single cause that lets us close the ticket, we just cheat ourselves out of valuable learning and insights.</p>
<h3 id="sidenote-mttr">Sidenote: MTTR</h3>
<p>Viewing incident review as learning and incidents as a source for and opportunity to do so, also makes it a lot clearer how some common &ldquo;measures&rdquo; of incidents are not as useful as one might think. Let&rsquo;s take the very popular metric MTTR (Mean Time To Recovery) for example, which generally is intended to denote how long it takes on average to recover from an incident. On the face of it, it makes sense. Because we want to be available as much as we can and work on making sure incidents - when they happen - are as short as possible. However viewing incident handling (which comes before the review but has some overlapping properties) through the lens of learning basically lets us structure it in roughly the following phases:</p>
<ol>
<li>Understand what&rsquo;s wrong (e.g. too many 500 errors on the website)</li>
<li>Understand how this situation manifested (e.g. a combination of more traffic, a ramped up feature flag on a specific code path, and an upgraded app server version)</li>
<li>Understand what to change to mitigate (e.g. ramp down the feature flag for now)</li>
</ol>
<p>Of course incident handling is more concerned with finding the easiest-to-change contributing factor that is still <a href="https://www.kitchensoap.com/2012/02/10/each-necessary-but-only-jointly-sufficient/">necessary but only jointly sufficient</a> to give rise to the current incident and change it to make the incident go back from active to passive. So it&rsquo;s less about getting a full (as much as possible) view on the incident - that&rsquo;s what the review is for. Nevertheless in this view on incident handling it&rsquo;s really three phases of understanding that we go through, so what MTTR really measures is MTTU (mean time to understanding). And thus we are back in the same situation. We are basically trying to force learning into an arbitrarily timeboxed, measurable, and summarized metric (similar to issues closed). Which again is something you can totally do. It might just not be very useful and not serve you in the way you&rsquo;d like it to. Which makes it even more important to understand the limits and usefulness of a metric like that to not overly rely on it for the wrong reasons. Plus there are plenty <a href="https://www.verica.io/blog/mttr-is-a-misleading-metric-now-what/">more</a> <a href="https://www.adaptivecapacitylabs.com/blog/2018/03/23/moving-past-shallow-incident-data/">reasons</a> why these measures are generally not all that useful.</p>
]]></content>
    <link href="https://unwiredcouch.com/2023/06/07/incidents-tickets-standardized-learning.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Existential Physics]]></title>
    <published>2023-05-29T00:00:00Z</published>
    <updated>2023-05-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hossenfelder-existentialphysics-2022/</id>
    <content type="html"><![CDATA[<p>I saw this book in my regular bookstore and the topic really interested me. So
I started reading the book with no preconception or notion of who the author
was. Only halfway through or so I found out she also has a somewhat popular
YouTube channel.</p>
<p>Reading the book was a bit of a split for me. The author definitely does a
good job (as far as I can tell) explaining physics concepts and theories in a
way that is understandable and with the given topic in mind. And the fact that
the book is structured as chapters where each answers a specific question also
makes it a really quick and interesting read. However there were also
parts of the book where the tone bordered on obnoxious for me, with the cliche
undertone (or sometimes even explicit statement) that eventually everything
can “just” be explained with physics and how - in the author’s view - many
people (even other scientists) are incorrect in their assumptions and world
views. Which I find more jarring than entertaining. But the core idea of the
book - that there are some views (e.g. the existence of a god) that are neither
scientific nor un-scientific, as they can’t be proven or disproven, but rather
a-scientific, as they are compatible with current science since nothing we
currently know confirms or refutes them - is one I’ve found to be a really good
frame for thinking about the world. Overall I enjoyed the book with some
caveats.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hossenfelder-existentialphysics-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[What is Existentialism?]]></title>
    <published>2023-05-16T00:00:00Z</published>
    <updated>2023-05-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/debeauvoir-whatisexistentialism-2021/</id>
<content type="html"><![CDATA[<p>This was another book that I didn’t really have any expectations of when I
bought it. And having never really had any philosophy classes in school or
university, there was definitely a certain challenge to reading this. Maybe
some of it was also the translation from French, but not being used to the way
philosophers talk about topics and structure their sentences, it often took me
more than one read of a sentence to understand what was being said. I don’t
think I’ve really understood what existentialism is after reading the book, at
least not on a level where I would be able to argue about it. However there were
many interesting ideas in there about what it means to live in the present, and
even such grim topics as the (far) future always presenting death and how to
think about and live with that.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/debeauvoir-whatisexistentialism-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Incidents can&#39;t be prevented, but learned from]]></title>
    <published>2023-05-06T00:00:00Z</published>
    <updated>2023-05-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2023/05/06/incident-prevention.html</id>
    <content type="html"><![CDATA[<h2 id="incidents-are-painful">Incidents are painful</h2>
<p>If your job is taking care of a running system, then you know how painful
incidents can be. They are often sudden and unexpected. They are disruptive to
your (and your coworkers&rsquo;) workday. And they are a stark reminder of the fact
that you are operating on inaccurate assumptions about the world your systems
are operating in. Often within seconds you are forced through a painful
realignment of your understanding of the world. And at the very latest, when
the incident is remediated and the adrenaline starts to dissipate, you are left
with those irritating feelings full of hindsight and regret (especially if the
incident was triggered by the deploy of a change). &ldquo;How did we not
foresee this?&rdquo;, &ldquo;Why didn&rsquo;t we stop before running the command?&rdquo;, &ldquo;How could
we let this error happen?&rdquo;, &ldquo;We have to make sure this never happens again!&rdquo;</p>
<h2 id="prevention-of-incidents">Prevention of incidents</h2>
<p>This urge to prevent uncomfortable situations from happening again is
absolutely natural. And we all go through it. We don&rsquo;t want to be in this
situation where we have no idea how something could suddenly break. Much less
in a situation where we can&rsquo;t be sure it won&rsquo;t break again. And the next
logical thing is looking for even just a single thing that makes sure we
aren&rsquo;t ever going to be in this situation again. And even if you <a href="https://www.kitchensoap.com/2012/02/10/each-necessary-but-only-jointly-sufficient/" title="Each necessary but only jointly sufficient">already know
that root causes aren’t a
thing</a>, it’s an incredibly tempting
thing to try and find one. Even Nietzsche said it over a hundred years ago:</p>
<blockquote>
<p>To derive something unknown from something familiar relieves, comforts, and satisfies, besides giving a feeling of power. With the unknown, one is confronted with danger, discomfort, and care; the first instinct is to abolish these painful states. First principle: any explanation is better than none. Since at bottom it is merely a matter of wishing to be rid of oppressive representations, one is not too particular about the means of getting rid of them: the first representation that explains the unknown as familiar feels so good that one considers it true</p>
<p class="cite">
&mdash; <cite> Friedrich Nietzsche, Twilight of the Idols (1888) </cite>
</p></blockquote>
<p>And so in the case of an incident we look for anything that lets us prevent
this incident (or others like it) from happening again.</p>
<p>The problem with this kind of goal fixation on prevention (as noble as it is)
is exactly that. It&rsquo;s fixation on a single outcome. A single &ldquo;good&rdquo; state. And
this means it severely limits how much can be learned from an incident.
Because even as you have now realized something new about the system and the
world and went through that painful realignment of your perspective, that&rsquo;s
still not the full picture. And it never will be. Our systems are highly
complex and subject to constant change in both their code and operations as
well as their environment. So even if we come up with something now that we
are sure will prevent the incident from happening again, it won&rsquo;t (and can&rsquo;t)
ever be a solution with the full view of the system. So anything we can come
up with in this situation to &ldquo;prevent&rdquo; the incident from happening again has a
high likelihood of being over-specific to this current view of the world.
Proposals for solutions in this situation often take the form of guard rails,
checklists, overzealous automation, or other things to reduce <a href="https://unwiredcouch.com/2014/08/04/human-error-getting-off-the-hook.html" title="Human error and getting off the hook">&ldquo;human
error&rdquo;</a>. But those things - while well
intentioned - often have the downside that they take flexibility away from the
true source of resilience in a system: the human operator. And when the next
incident comes around, there is a decent chance that these newly instated
&ldquo;improvements&rdquo; are now working against operators and their ability to reason
about a situation and improvise a solution. Effectively causing them to take
longer with debugging and remediation in a state of reduced flexibility than
they did before.</p>
<blockquote>
<p>“The problem comes when the pressure to fix outweighs the pressure to learn.”</p>
<p class="cite">
&mdash; <cite> Todd Conklin  </cite>
</p></blockquote>
<p>And after all that investment there was likely only very little learned from
an otherwise information rich incident. Plus there is usually no way of
telling whether you actually prevented anything. Most of the time in running
systems, incidents <em>don’t</em> actually happen. So the absence of incidents -
especially the non-recurrence of a specific incident - is generally not a
strong signal about the resilience impact of a single change.  Often many
contributing factors to an incident lie dormant for a long time only for an
operator to exclaim while debugging an incident “how did this ever work?”.</p>
<h2 id="learning-from-incidents">Learning from incidents</h2>
<p>So what’s the alternative? Surely we don&rsquo;t just want to shrug off incidents
and ignore them. That would be such a waste. Incidents are a great source of
knowledge and learning. And I for one enjoy getting the chance to learn
something.</p>
<p>In the situation of post processing an incident this means now is the time to keep an open mind and understand how we arrived at the current state of the world and our systems:</p>
<ul>
<li>Why are they set up like this?</li>
<li>What are their current known limitations?</li>
<li>What is easy/hard about changing them?</li>
</ul>
<p>I won’t go into too much detail about how to facilitate learning from
incidents here. I’ve had the fortune to collaborate <a href="https://extfiles.etsy.com/DebriefingFacilitationGuide.pdf" title="Etsy
Debriefing Facilitation Guide">on a
document</a> about this many years ago and I think it’s
still a useful place to start.</p>
<p>But this is the discussion that should bring all your experts to the forefront
(the yard?). The people that are operating the systems in question and know
their in and outs. The ones that are operating alongside and within the
behaviours and boundaries of those systems every day. They will be able to
quickly pinpoint shortcomings, workarounds, and idiosyncrasies. But also more
importantly slack and flexibilities, and processes and hacks that aren&rsquo;t part
of the day to day automation but are needed in edge case situations to keep
the system resilient. They will be able to talk about how they decided to look
at some metrics but not others to make sense of the state of the system. How
they decided to go with one route for a possible fix and not the other. In a
setting where learning is preferred over fixing and solving there is the
chance for a lot of people to go on a journey of how to navigate a highly
complex socio-technical system and make the whole organisation more resilient
through the dissipation of knowledge. And while discussions of incidents often
seem linear and algorithmic in hindsight, it’s also important to always
remember that the operator chose <em>one</em> path to success amongst a myriad of other
routes that may or may not have been taken. And what looks linear and
algorithmic now was most likely nothing like that when it happened. Sometimes
what comes out of the discussion <em>might</em> be more automation or it <em>might</em> be
an additional guard rail. But the important part is that it is neither something
that can be known beforehand, nor something that will be instated to
keep the human at bay.</p>
<p>The point of debriefing and discussing incidents is <strong>not</strong> to keep something
from happening but to make sure the tools (including automation) and support
(including knowledge sharing and learning) are in place to <em><a href="https://www.kitchensoap.com/2013/08/20/a-mature-role-for-automation-part-ii/">&ldquo;augment and
compliment [..] human adaptive and processing
capacities&rdquo;</a></em>.</p>
<h2 id="so-what-to-do">So what to do?</h2>
<p>Incidents don’t have to be painful. At least not once they are over and you’ve
gotten some sleep to get ready to debrief them. You can approach them with a
stoic attitude of knowing that you won&rsquo;t ever be able to prevent them. That
you will keep having incidents. They will likely change in nature and shape as
your systems and understanding of them changes. But you won&rsquo;t ever get rid of
them completely. Along that same topic <a href="https://surfingcomplexity.blog" title="Lorin Hochstein's blog">Lorin
Hochstein</a> recently
gave <a href="https://surfingcomplexity.blog/2023/04/25/my-srecon-23-talk-is-up/" title="Lorin Hochstein's SRECon23 Talk">a great talk at
SRECon23</a> about why we will all keep having
incidents. And it’s well worth your time, so go watch it. But approaching
incidents with a mindset of learning makes them exciting rather than painful.
Because you’ll know you’ll never run out of sources for learning.
And once you’ve realised what a good source for learning incidents are, it’s
maybe even time to take a good look whether <a href="https://www.adaptivecapacitylabs.com/blog/2018/03/23/moving-past-shallow-incident-data/" title="Moving Past Shallow Incident data @ Adaptive Capacity Labs">shallow incident
data</a> like “mean time
to detection” and “mean time to resolution” (or maybe the worst offender of
all, “mean time between failure”) are actually helping your team approach
incidents as a learning opportunity or maybe incentivising an approach
that forgoes learning to make those metrics look better.</p>
]]></content>
    <link href="https://unwiredcouch.com/2023/05/06/incident-prevention.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Hello World]]></title>
    <published>2023-04-22T00:00:00Z</published>
    <updated>2023-04-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/fry-helloworld-2020/</id>
<content type="html"><![CDATA[<p>This book had been sitting on my shelf since 2020 or so, and I never took the
time to read it. With the renewed (and intensified) hype around AI this year I started to
read it even though I wasn’t sure if there was really more about machine
learning that I wanted to know and would find interesting.</p>
<p>But truth be told I really enjoyed reading this book. It’s roughly divided
into topical areas like justice, medicine, cars, and art and discusses the
status quo (as of 2020) as well as improvements and downsides to the rise of
usage of algorithms in general but also machine learning. The author takes a
very nuanced look at all these and it actually made me feel less frustrated
about using computers in some areas than I had been through the public
discussions of AI this year. Highly recommend the book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/fry-helloworld-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Twilight of Democracy]]></title>
    <published>2023-03-08T00:00:00Z</published>
    <updated>2023-03-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/applebaum-twilightofdemocracy-2021/</id>
<content type="html"><![CDATA[<p>Starting this book I was a little taken aback as the author identifies with what
would have been called the “center-right” towards the late 1990s and early
2000s. So I wasn’t too sure where this book would be going. But I ended up
finding it super interesting. The book details some historical background as
well as current workings of politics in various countries like Poland,
Hungary, and the U.K. There are also a lot of very personal accounts in there
about how friends of the author have drifted more towards the contemporary
right and how that feels. It was overall a pretty harrowing account of what
has happened in some countries over the past 20 years and how much similarity
there is between them.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/applebaum-twilightofdemocracy-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Big Friendships]]></title>
    <published>2023-02-19T00:00:00Z</published>
    <updated>2023-02-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/friedmansow-bigfriendships-2021/</id>
    <content type="html"><![CDATA[<p>I didn&rsquo;t really know what to think of this book. I generally enjoyed reading
it. But I think I was expecting something different. I wasn’t expecting a
memoir of a friendship but rather thought there would be some more general
discussion of the topic of friendship in there. Some parts of the book I
wasn’t super excited by as it either went into a lot of detail about social
outings I didn’t find super interesting or stopped before I felt it went to
the core of the situation.</p>
<p>Overall I liked the book. The core message that friendship on a deep level
also takes work is something that really resonates with me. There’s too often
the message in media and entertainment that the perfect relationship (romantic
or otherwise) is effortless, which does us all a disservice as we assume we
don’t have to put the work in. But this book talks very clearly about the work
being part of a friendship and I really liked that.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/friedmansow-bigfriendships-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Book You Wish Your Parents Had Read (and Your Children Will Be Glad That You Did)]]></title>
    <published>2023-01-31T00:00:00Z</published>
    <updated>2023-01-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/perry-thebookyouwishyourparentshadread-2021/</id>
<content type="html"><![CDATA[<p>I really liked how the book always came back to talking to the kids and
empathizing with their emotions instead of trying to abolish them. And there
were always good examples that came with every section to illustrate the
point. However it still felt like very &ldquo;good weather parenting&rdquo; advice to me, in
the sense that it talks about how to do these things when you have the time to
take for it. But there was hardly anything in there about how to deal with
situations where time is tight or there isn&rsquo;t the space to slow down and
prolong the situation.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/perry-thebookyouwishyourparentshadread-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Solving Mathematical Problems: A Personal Perspective]]></title>
    <published>2023-01-08T00:00:00Z</published>
    <updated>2023-01-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/tao-solvingmathematicalproblems-2006/</id>
    <content type="html"><![CDATA[<p>It was fun to read through this book and see different approaches to solving
the problems presented. I didn&rsquo;t follow through all the proofs and steps all
the time because I would have needed to carefully do it with pen and paper and
that wasn&rsquo;t really the mood I was in when I wanted to read. But it made me miss
school math and solving problems a lot. It also ended up being a bit hard to
follow for me occasionally as I read the book in English and most of my math
education was in German. So there were definitely words and terms I wasn&rsquo;t
familiar with at all. Overall I enjoyed reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/tao-solvingmathematicalproblems-2006/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Pedestrian Programmer]]></title>
    <published>2023-01-02T00:00:00Z</published>
    <updated>2023-01-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2023/01/02/pedestrian-programmer.html</id>
    <content type="html"><![CDATA[<h2 id="endless-fiddling">Endless fiddling</h2>
<p>In the field of software engineering there is a common theme of fiddling with
your tools and setup until they are juuuuust right, and only then can the work
of writing code commence. Or the next level of productivity be unlocked. I
don’t think this is something particularly special to software engineering.
It’s probably just a different form of cleaning up your desk before writing,
sorting all of the paperwork before doing your taxes, or re-organizing the
kitchen before cooking. Basically procrastination. And the hope that what’s
missing for productivity and &ldquo;getting in the zone&rdquo; is just this one weird trick
to improve the setup.</p>
<p>I&rsquo;ve done this a lot as well, especially early in my career, when I had
convoluted setups that were intricate and fine-tuned to what I thought
made me the most productive. Be it
<a href="https://unwiredcouch.com/2013/11/15/my-tmux-setup.html" title="My Tmux setup on
unwiredcouch.com">tmux</a>, or <a href="https://unwiredcouch.com/2012/11/03/irc-notifications-with-logstash.html" title="IRC notifications on unwiredcouch.com ">IRC
notifications</a>, or of course <a href="https://unwiredcouch.com/setup/omnifocus/" title="Omnifocus setup on
unwiredcouch.com">my way of managing
tasks</a>. But the downside was always that the setups became more and
more brittle with everything that was added. A plugin would break behavior when
updated, a tool wouldn&rsquo;t work or wasn&rsquo;t available on macOS or Linux, an
integration would break through an API change. And over time the upkeep of the setup
starts to become a bigger and bigger chore. And I noticed that at some point I
stopped bothering with it. My setup slowly &ldquo;deteriorated&rdquo; to the minimal
working state that kept me productive. I didn&rsquo;t use my fancy integrations
anymore. I could hardly remember why I installed some of the editor plugins.
And I actually was as productive as before, if not more so. And so the state I arrived at is
that I write all code in terminal vim in tmux now and that basically any Apple
laptop with even a small screen will do (my forever favorite being the 11&quot;
MacBook Air and I absolutely can’t stand having more than one display) and I
can be set up within about 20 minutes by basically just configuring:</p>
<ul>
<li>low key repeat delay and quick type rate</li>
<li>caps lock remapped to control</li>
<li>git clone <a href="https://github.com/mrtazz/dotfiles">https://github.com/mrtazz/dotfiles</a> &amp;&amp; make install</li>
</ul>
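<p>On macOS those three steps roughly boil down to something like this (the exact
values here are a sketch and a matter of taste, and the <code>hidutil</code> mapping
doesn&rsquo;t persist across reboots):</p>
<pre tabindex="0"><code># fast key repeat with a short initial delay
defaults write NSGlobalDomain InitialKeyRepeat -int 15
defaults write NSGlobalDomain KeyRepeat -int 2

# remap caps lock (0x700000039) to left control (0x7000000E0)
hidutil property --set '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,"HIDKeyboardModifierMappingDst":0x7000000E0}]}'

# clone and install the dotfiles
git clone https://github.com/mrtazz/dotfiles &amp;&amp; cd dotfiles &amp;&amp; make install
</code></pre>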
<p>And not much more. My dotfiles configure vim, zsh, some git aliases, and
install some useful tools (via homebrew/linuxbrew) that I could do without but
sometimes enjoy using, like <code>fzf</code> and <code>ripgrep</code>. They also get installed on
every <a href="https://github.com/features/codespaces" title="GitHub Codespaces">codespace</a> I
create (at GitHub that is my main development environment) so when I ssh into
it, the terminal is set up in the same way. On macOS they install some apps I
use like iTerm2, 1Password, Alfred, and the Phoenix window manager with configs
that I haven&rsquo;t really changed in years. They are also mostly niceties that I
can probably more or less do without (except 1Password). E.g. for months when
codespaces was new and only available through VSCode I wrote code in a
fullscreen VSCode terminal window running vim within it. And I was basically as
productive as ever (maybe even a bit more given I had access to quick and
disposable codespaces). Even with browsers I don’t really use any extensions or
anything and change them without even thinking about it. I had some work
specific configuration in Safari break a while ago. And instead of spending
ages debugging it, I just switched to Firefox and moved on with my work.</p>
<p>And that is more or less the pretty barebones setup I use to write code and
which I describe as a &ldquo;pedestrian programmer&rdquo; style when I get asked
about it.</p>
<h2 id="programming">Programming</h2>
<p>But this notion doesn’t stop at the setup for me. It’s also how I write code.
As I’ve mostly worked in infrastructure engineering over the last decade (even
though I&rsquo;ve switched to a product platform team at the beginning of 2022), I’ve
had to jump between many different languages in the same day. It’s usually some
mix of ruby, python, shell, golang, javascript, PHP, and various config
formats. And I use that same setup for all of it. Furthermore, I also write
very similar code in all of these languages. I’ve mostly come to utilize the
common denominator of syntax and code structure to implement things regardless
of language (they are all C-style languages anyways so they aren&rsquo;t vastly
different). And only really start using language specific constructs where
needed. So my python classes look like my ruby classes, look like my PHP
classes. If I can help it I don’t use concepts like python’s decorators, or
ruby blocks (or even heavy use of higher order functions that most languages
support at this point) in code I define and control. In my mind, trying to
convey the solution that is implemented in code with constructs as simple as
possible makes it a lot more accessible and friendly to get started with. I
hope that even if someone comes from having written python their whole career,
my ruby and PHP are still very accessible. If it becomes a performance or
maintenance problem, it can still be changed to use more specialized constructs
and concepts later on. And maybe even more easily because the original
structure is fairly simple. But for as long as possible I try to keep code as
“pedestrian” as possible so it’s easy to read, follow along, reason about, and
change.</p>
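<p>As a purely hypothetical illustration of what I mean by pedestrian code, here
is a small Python class that uses only explicit loops and conditionals. Its
structure maps almost one-to-one onto a Ruby or PHP class:</p>
<pre tabindex="0"><code># pedestrian style: no decorators, no comprehensions, no clever tricks
class WordCounter:
    def __init__(self, text):
        self.text = text

    # count how often each word appears, with a plain loop and dict
    def count(self):
        counts = {}
        for word in self.text.split():
            word = word.lower()
            if word not in counts:
                counts[word] = 0
            counts[word] = counts[word] + 1
        return counts
</code></pre>
<p>A Python programmer might reach for <code>collections.Counter</code> or a dict
comprehension here, but the plain version reads the same to someone coming from
any other C-style language.</p>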
<p>Similarly I don’t really use design patterns a lot when I write code. I
remember the days in university when design patterns were all the rage. And
books about them were the most important texts one could ever read about programming.
I also remember trying really hard to force the <a href="https://en.wikipedia.org/wiki/Singleton_pattern" title="The Singleton
Pattern">Singleton
pattern</a> into every university programming project because it’s what
professors wanted to see. Nowadays I will try and solve the problem with as
simple of an architecture as I can. And then only change to a more intricate
pattern if it serves understanding or maintenance. Most of the time I don’t
recall what a design pattern does when I hear the name. Not that I don’t
understand what they do or how they are useful. Or that I don’t end up
implementing them along the way. But I’ve found it to be more confusing than
helpful to throw design pattern names around as jargon instead of writing code
in a form that solves the problem and commenting it along the way.</p>
<h2 id="take-away">Take away</h2>
<p>My main point in all of this is not that it’s bad to have a very intricate
setup, or that you shouldn’t take joy in fiddling with it. But it’s important
to recognise when the hunt for the mythical “zero-friction” state gets in the
way of getting things done (and sometimes a little friction is not a bad
thing). On the other hand it also doesn’t mean that the most elite and true way
to write code is just using vim and the most barebones setup there is. Neither
would I advocate to never use design patterns or take time to build a
sophisticated architecture. The point I’m trying to bring across with the
description of my approach is that it doesn’t take intricate setups and complex
architectures to write production code that solves problems. Everyone is
different and different approaches result in the same useful code. There is no
rule of “the better the programmer, the more sophisticated the setup”. I’ve
worked with programmers I respect immensely that wrote code in Notepad.exe or
used the default 8-line vim config. I also enjoy reading about others’ setups
regardless of how complicated they are because there might be something in
there I’d love to try. And when I work in a team where we decide on design
patterns and a more complicated architecture or code style as a trade off for
some other problems, I’m happy to go along with it. It’s just not my first
choice.</p>
<p>Because in the end a setup doesn’t make you productive and code isn’t “the
better solution” because it’s complicated. You are productive with it because
it fits the way you (and the people on your team when it comes to shared code)
think and approach problems. There are no unreal programmers: if you write code
with whatever tools you like and in whatever shape you prefer, you’re still a
programmer as much as everyone else. Even if you are - like me - a proud
pedestrian programmer.</p>
]]></content>
    <link href="https://unwiredcouch.com/2023/01/02/pedestrian-programmer.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Black Hole Survival Guide]]></title>
    <published>2022-12-25T00:00:00Z</published>
    <updated>2022-12-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/levin-blackholesurvivalguide-2022/</id>
    <content type="html"><![CDATA[<p>I absolutely enjoyed this book and devoured it in 2 days or so. Janna Levin
has a fantastically fun and entertaining way to talk about the most
fascinating things. And black holes sure are fascinating. There were a lot of
parts in the book where I was sure I wasn&rsquo;t quite getting the importance of the
implications she explained. Or even the science at the highest of levels. But
I enjoy thinking about space. And the book did exactly that.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/levin-blackholesurvivalguide-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[It’s not about the Burqa]]></title>
    <published>2022-12-23T00:00:00Z</published>
    <updated>2022-12-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/khan-itsnotabouttheburqa-2020/</id>
    <content type="html"><![CDATA[<p>I really enjoyed this book. Mostly because I rarely have read any texts, let
alone feminist ones, from a Muslim woman&rsquo;s perspective. There&rsquo;s a lot I
learned, having previously had close to no knowledge of Islam, its traditions,
or which of them are practiced. The book is very UK-centric as most of the essay
authors live there. So I&rsquo;m curious to also read more from authors from other
places.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/khan-itsnotabouttheburqa-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Thinking, Fast and Slow]]></title>
    <published>2022-12-17T00:00:00Z</published>
    <updated>2022-12-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kahnemann-thinkingfastandslow-2011/</id>
    <content type="html"><![CDATA[<p>I had originally started reading this on my Kindle in 2016 but quickly
abandoned it. I then thought it was because I didn&rsquo;t find the book too
interesting but I found out some time later that I just didn&rsquo;t enjoy reading
on the Kindle much. So having moved to reading exclusively on paper over the
last couple of years, I gave the book another try this year and absolutely
enjoyed it. There are a lot of fascinating facts and insights into research in
there. And a lot of examples to try out yourself to realize that your own
brain works the same way. I took a lot of notes for this one and can
definitely recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kahnemann-thinkingfastandslow-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Seven Brief Lessons on Physics]]></title>
    <published>2022-12-17T00:00:00Z</published>
    <updated>2022-12-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rovelli-sevenbrieflessonsphysics-2016/</id>
    <content type="html"><![CDATA[<p>I really liked the book. It was short and digestible but interesting and fun
to read nonetheless. I think I probably have read about most of the seven
things somewhere else before (which doesn&rsquo;t mean I understood or have a good
grasp on them). So it wasn&rsquo;t mind blowing new things I was reading. But
Rovelli has a very philosophical style in how he talks about these things, which
makes it really enjoyable.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rovelli-sevenbrieflessonsphysics-2016/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Green Ranger]]></title>
    <published>2022-11-20T00:00:00Z</published>
    <updated>2022-11-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/green-ranger-11-2022/</id>
    <content type="html"><![CDATA[<p>RIP Jason David Frank</p>
<p>I very much still remember the emotions of those Saturday mornings when the
Green Ranger showed up and first wreaked havoc on the Rangers, summoning his
Zord with the dagger flute, before finally joining the good side and the Power
Rangers to fight evil and eventually becoming the White Ranger. It seems a bit
silly now writing it all out all these years later. But back then it blew my
kid brain and was the main topic of many debates on the school yard. Thanks
for everything!</p>
<p>#art #powerrangers #greenranger #jasondavidfrank #watercolor</p>
]]></content>
    <link href="https://unwiredcouch.com/art/green-ranger-11-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[MF DOOM]]></title>
    <published>2022-11-18T00:00:00Z</published>
    <updated>2022-11-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/mf-doom-11-2022/</id>
    <content type="html"><![CDATA[<p>MF Doom. Some Friday night art. Watercolor and ink in the sketchbook.</p>
]]></content>
    <link href="https://unwiredcouch.com/art/mf-doom-11-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[We need a replacement for TCP in the datacenter]]></title>
    <published>2022-11-06T00:00:00Z</published>
    <updated>2022-11-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ousterhout-replacementtcpdatacenter-2022/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/reading/ousterhout-replacementtcpdatacenter-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Kate]]></title>
    <published>2022-09-10T00:00:00Z</published>
    <updated>2022-09-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/kate-09-2022/</id>
    <content type="html"><![CDATA[<p>I watched “Kate” with Mary Elizabeth Winstead and while I usually don’t enjoy
gory movies too much I enjoyed that one a lot. Lots of good fight scenes and
the general atmosphere was pretty cool.</p>
<p>#art #sketch #sketchbook #watercolor #ink #movies #kate #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/kate-09-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Splinter]]></title>
    <published>2022-09-07T00:00:00Z</published>
    <updated>2022-09-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/splinter-09-2022/</id>
    <content type="html"><![CDATA[<p>dramatic Splinter</p>
<p>I would definitely watch a TMNT show about a younger Splinter pre-Turtles that
just runs around and does badass ninja stuff. Got inspired for this by the
recent absolutely amazing @bosslogic Metal Gear Solid cyborg ninja posts for
the MGS anniversary. And thought which other popular fighters would fit such a
pose.</p>
<p>#art #sketch #sketchbook #ink #watercolor #splinter #tmnt #turtles #instaart
#comicart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/splinter-09-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The One Who Remains]]></title>
    <published>2022-08-28T00:00:00Z</published>
    <updated>2022-08-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/one-who-remains-08-2022/</id>
    <content type="html"><![CDATA[<p>Pretty excited for this dude to show up and show some Avengers what’s up. I
started the sketch while watching Loki season 1 and finally took the time to
finish it.</p>
<p>#art #sketch #sketchbook #comicart #kang #avengers #marvel #watercolor #ink</p>
]]></content>
    <link href="https://unwiredcouch.com/art/one-who-remains-08-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Conan and Tetra]]></title>
    <published>2022-08-19T00:00:00Z</published>
    <updated>2022-08-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/conan-tetra-08-2022/</id>
<content type="html"><![CDATA[<p>I tried sketching my version of a Conan panel from the one Conan book I had
as a kid. Swipe to see the original and the cover of the book. I devoured that
book and was fascinated by the stories.
So I wanted to see what it would look like if I gave it a try. I took some
inspiration for Conan from <a href="https://mahmudasrar.com/">@mahmudasrar’s</a> version
(his art is amazing and a constant inspiration for me to get better) but tried
to still stay close to the original.</p>
<p>#art #sketch #sketchbook #comicart #watercolor #conan #conanthebarbarian</p>
]]></content>
    <link href="https://unwiredcouch.com/art/conan-tetra-08-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Order of Time]]></title>
    <published>2022-05-22T00:00:00Z</published>
    <updated>2022-05-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rovelli-orderoftime-2018/</id>
    <content type="html"><![CDATA[<p>I went into this book expecting much more physics than what was actually in
there. There was definitely a lot of explanation of concepts on a high level.
But overall the book felt much more on the philosophy side of things. Which -
after realizing and getting used to it - I really liked. The thing I struggled
with was the fact that so much is explained via entropy in the book. And
entropy has always been a hard topic for me to wrap my brain around. Even in
university. So there were some parts that left me more confused than I would
have liked, through no fault of the author.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rovelli-orderoftime-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The End of Everything]]></title>
    <published>2022-05-14T00:00:00Z</published>
    <updated>2022-05-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/mack-endofeverything-2020/</id>
<content type="html"><![CDATA[<p>This was an absolutely wonderful and mind-blowing book to read. It made me
think a lot of “Dark Matter and the Dinosaurs” while reading it. I feared at
first that it would be too depressing. But the author has an absolutely
enjoyable writing style and it made the whole topic fun and interesting rather
than doom and gloom.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/mack-endofeverything-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[replace a disk in a running ZFS zpool]]></title>
    <published>2022-05-11T00:00:00Z</published>
    <updated>2022-05-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2022/05/11/replace-zpool-disk.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve recently had to replace all disks in a <code>zroot</code> zpool on <a href="https://unwiredcouch.com/2013/10/30/uncloud-your-life.html">my FreeBSD server</a>.
And I kept looking up the commands and order in which to run them. So I thought I&rsquo;d put them here to find them again when I need them.</p>
<p>The following assumptions are being made with this:</p>
<ol>
<li>The <code>ada0</code> disk was faulty and already replaced</li>
<li><code>ada1</code> is running in the zpool and working</li>
<li>Both <code>ada0</code> and <code>ada1</code> are the same size and have the same layout</li>
</ol>
<h2 id="replace-the-disk-itself">Replace the disk itself</h2>
<ol>
<li>First up we restore the disk layout from <code>ada1</code> onto <code>ada0</code> and verify:</li>
</ol>
<pre tabindex="0"><code># gpart backup ada1 | gpart restore -F ada0
# gpart show ada0
</code></pre><ol start="2">
<li>Given this is a <code>zroot</code> pool to boot from, we need to write the bootcode:</li>
</ol>
<pre tabindex="0"><code># gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
</code></pre><ol start="3">
<li>Now we just replace the (missing) disk in the zpool with its replacement:</li>
</ol>
<pre tabindex="0"><code># zpool replace zroot ada0p3 /dev/ada0p3
# zpool status zroot
</code></pre><p>At this point the zpool is being resilvered to make sure all data is on the new disk. Depending on the amount of data
this can take a while. And while it&rsquo;s running <code>zpool status zroot</code> shows something like:</p>
<pre tabindex="0"><code># zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue May 10 07:15:33 2022
        416G scanned at 144M/s, 382G issued at 133M/s, 417G total
        384G resilvered, 91.66% done, 00:04:28 to go
config:

        NAME              STATE     READ WRITE CKSUM
        zroot             DEGRADED     0     0     0
          mirror-0        DEGRADED     0     0     0
            replacing-0   DEGRADED     0     0     0
              ada0p3/old  REMOVED      0     0     0
              ada0p3      ONLINE       0     0     0  (resilvering)
            ada1p3        ONLINE       0     0     0

errors: No known data errors
</code></pre><h2 id="configure-encrypted-swap">Configure encrypted swap</h2>
<p>The zroot setup from the baseinstall also comes with encrypted swap. So this also needs to be
configured on the new disk:</p>
<ol>
<li>Check setup and options from the existing swap partition</li>
</ol>
<pre tabindex="0"><code># geli list ada1p2.eli
Geom name: ada1p2.eli
State: ACTIVE
EncryptionAlgorithm: AES-XTS
KeyLength: 128
Crypto: accelerated software
Version: 7
Flags: ONETIME, W-DETACH, W-OPEN, AUTORESIZE
KeysAllocated: 1
KeysTotal: 1
Providers:
1. Name: ada1p2.eli
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 4096
   Mode: r1w1e0
Consumers:
1. Name: ada1p2
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e1
</code></pre><ol start="2">
<li>Set up the new swap partition with the same options</li>
</ol>
<pre tabindex="0"><code># geli onetime -d -e AES-XTS -l 128 -s 4096 /dev/ada0p2
</code></pre><ol start="3">
<li>Turn new swap partition on</li>
</ol>
<pre tabindex="0"><code># swapon -a
</code></pre><p><strong>Notice:</strong>
When running <code>swapinfo</code> the old swap partition still shows up as a kinda ghost partition. I haven&rsquo;t
experienced any problems with that and it usually goes away on the next reboot.</p>
<pre tabindex="0"><code># swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/#C:0x86      2097152        0  2097152     0%
/dev/ada1p2.eli   2097152        0  2097152     0%
/dev/ada0p2.eli   2097152        0  2097152     0%
Total             6291456        0  6291456     0%
</code></pre>]]></content>
    <link href="https://unwiredcouch.com/bits/2022/05/11/replace-zpool-disk.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Art of Statistics: How to learn from Data]]></title>
    <published>2022-04-18T00:00:00Z</published>
    <updated>2022-04-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/spiegelhalter-artofstatistics-2019/</id>
<content type="html"><![CDATA[<p>It took me a while to get through this book, which had less to do with the
book itself and more with how little time I was able to make for reading. So the first couple
of chapters were spread out for me over a couple of weeks and thus I had a
hard time getting into it. But once I managed to make proper time to read
continuously I really enjoyed the flow of the book. It&rsquo;s much less
mathematical than I assumed beforehand. But it&rsquo;s really well written and does
a good job explaining statistical concepts in plain English. And I really
appreciated the strong focus on the fact that (good) statistics is more than
just applying some formulas but also a hard look at whether the data and the
conclusions make sense for the questions that need to be answered.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/spiegelhalter-artofstatistics-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Moon Knight]]></title>
    <published>2022-03-31T00:00:00Z</published>
    <updated>2022-03-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/moon-knight-03-2022/</id>
    <content type="html"><![CDATA[<p>Couldn’t not give this a try. The first episode of the Moon Knight show was so
good.</p>
<p>#art #ink #sketchbook #moonknight #marvel #comicart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/moon-knight-03-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Cherry]]></title>
    <published>2022-03-18T00:00:00Z</published>
    <updated>2022-03-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/cherry-03-2022/</id>
    <content type="html"><![CDATA[<p>Cherry from Streets of Rage 4</p>
<p>This is such a fun game and it reminds me a ton of being a kid and hanging out
with my best friend to play games on his Mega Drive.</p>
<p>#art #watercolor #drawing #sketchbook #videogames #cherry #streetsofrage
#switch</p>
]]></content>
    <link href="https://unwiredcouch.com/art/cherry-03-2022/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Four Thousand Weeks: Time Management for Mortals]]></title>
    <published>2022-01-08T00:00:00Z</published>
    <updated>2022-01-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/burkeman-fourthousandweeks-2021/</id>
    <content type="html"><![CDATA[<p>I read this book after Mathias really enjoyed it and Nina found the audio book pretty insufferable. Which is an interesting combination, to say the least. Overall I did enjoy the book a lot, even though I could definitely see the preachy and more annoying parts of it. The general message of the book is
that we don&rsquo;t actually have that many weeks, so it&rsquo;s worth it to think about
how to spend them. What kind of things actually are enjoyable (of the things
that we do voluntarily) and which things we can more or less do away with.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/burkeman-fourthousandweeks-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Cancer Journals]]></title>
    <published>2021-12-31T00:00:00Z</published>
    <updated>2021-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/lorde-thecancerjournals-1980/</id>
    <content type="html"><![CDATA[<p>I found this book while browsing through the local book store. And I found it
really intriguing: even though cancer seems to be everywhere and all around us in some ways, and I&rsquo;ve had a number of friends and family members suffer through and even die of cancer, it&rsquo;s not something I&rsquo;ve personally experienced. Especially not going through a mastectomy as a black, queer woman
in the 70s. So I was interested in reading a personal account of something
that I will never actually experience myself. And I&rsquo;m very glad I did. Lorde
talks very directly and openly about the struggle she went through, the adversity she faced, as well as the things that helped her through. I can
very much recommend reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/lorde-thecancerjournals-1980/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Der Russe ist einer, der Birken liebt]]></title>
    <published>2021-12-26T00:00:00Z</published>
    <updated>2021-12-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/grjasnowa-russederbirkenliebt-2012/</id>
    <content type="html"><![CDATA[<p>I decided to read this book after I had read <a href="/reading/grjasnowa-machtdermehrsprachigkeit-2021">Die Macht der
Mehrsprachigkeit</a> earlier
this year. For no particular reason, I haven&rsquo;t read a lot of German literature (let alone fiction) since I was in school. So I was curious about some more contemporary authors. And the book really wasn&rsquo;t an easy read. It&rsquo;s a very
emotional and not often happy story and deals with a lot of heavy topics
around war, migration, loss, and what it means to be and have a home.
Nonetheless I flew through its nearly 300 pages in about 2 days. And I was sad
when I was done as I wanted to keep reading.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/grjasnowa-russederbirkenliebt-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Afropean]]></title>
    <published>2021-12-25T00:00:00Z</published>
    <updated>2021-12-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/pitts-afropean-2019/</id>
    <content type="html"><![CDATA[<p>This was an absolutely fascinating book to read. With every city the author visits throughout the book and every person he talks to there, I&rsquo;ve learned to see these places (and I&rsquo;ve been to a good chunk of them) in a new light. The book also made me reflect on a lot of things. I can whole-heartedly recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/pitts-afropean-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Human Error]]></title>
    <published>2021-11-12T00:00:00Z</published>
    <updated>2021-11-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/reason-humanerror-1990/</id>
    <content type="html"><![CDATA[<p>It took me quite a while to get through this book. I had actually abandoned it
before and then came back to it. It&rsquo;s very theoretical and academic (even
though it contains some suggestions towards more practical applications of the
theory towards the end). There is a lot of interesting thought in there about how to make the notion of errors or mistakes more concrete. And
even if most of it is now superseded it provides interesting insight into the
research of human error during that time.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/reason-humanerror-1990/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Social Choice and Individual Values]]></title>
    <published>2021-10-14T00:00:00Z</published>
    <updated>2021-10-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/arrow-socialchoiceindividualvalues-1970/</id>
    <content type="html"><![CDATA[<p>Given this book is more or less the printed and published version of Arrow&rsquo;s
PhD. thesis, it is not an easy read. It lays the foundation and explanation
for his <a href="https://en.m.wikipedia.org/wiki/Arrow%27s_impossibility_theorem">Impossibility
Theorem</a>
which states the limitations of ranked-voting electoral systems when it comes to
individual and communal preferences. I can&rsquo;t claim to have understood all of
it fully. It was nevertheless very interesting to me to read something from a field I have hardly any experience or knowledge in. I started reading it
earlier in the year and then put it down because I didn&rsquo;t have the brain space
to get into the math and proofs of it. And then I picked it back up during the
time of the German general election this year. I found it really interesting
to read about how these kinds of problems are described and reasoned about in political and economic academia. And which kinds of definitions and trade-offs are chosen to create a system within which these formal proofs can exist.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/arrow-socialchoiceindividualvalues-1970/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[vim package management in make]]></title>
    <published>2021-09-02T00:00:00Z</published>
    <updated>2021-09-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2021/09/02/vim-package-managment-make.html</id>
    <content type="html"><![CDATA[<p>My <a href="https://unwiredcouch.com/setup/vim/">text editor of choice is vim</a> and has been for many many years at this point. And while it’s a very configurable editor with all its available plugins, I generally tend to use only a small number of them. Partly because I like being able to have a small and simple setup, and not be slowed down too much when I have to use vim on a different machine or don’t have my setup installed.</p>
<p>Part of this tendency towards simple setups also means that I don’t want to have the management of the few plugins I do use be cumbersome or involve understanding the intricacies of some new tool. And given that vim plugins for the most part also don’t have a ton of dependencies (at least not the ones I use), my needs for managing plugins can be reduced to a small list of requirements:</p>
<ol>
<li>Have plugins I want defined in an authoritative list</li>
<li>Have a single command to update plugins to the newest version</li>
</ol>
<h2 id="enter-make">Enter Make</h2>
<p>So after having gone through various stages of managing my plugins, like git submodules and a ruby script that was doing a lot more than it needed to, I was wondering if I could just use one of my favorite tools, <code>make</code>, to do the job for me. After all, one of the things it’s really good at is making sure a defined set of files exists. Which is kinda all that vim plugin management is. And after thinking about this a little bit, I came up with the following things I would need to implement.</p>
<ol>
<li>A way to define a list of plugins with name and URL</li>
<li>A way to download all plugins and put them in the right location</li>
<li>A way to make sure plugins that don’t exist anymore are removed</li>
</ol>
<p>So let’s see how this can be done with <code>make</code>. First we need to define a list of plugins. This can be easily done by just storing them in variables, e.g.:</p>
<pre><code>plugin_supertab  := https://github.com/ervandew/supertab/tarball/master
plugin_syntastic := https://github.com/scrooloose/syntastic/tarball/master
plugin_fzf       := https://github.com/junegunn/fzf/tarball/master
plugin_fzf.vim   := https://github.com/junegunn/fzf.vim/tarball/master
</code></pre>
<p>This was easy and if you’re wondering about the <code>plugin_</code> prefix, you’ve noticed an important part of how to turn this into an easily usable list of plugin definitions. Within
<code>make</code> there exists a meta variable called <code>.VARIABLES</code> which contains the list of all defined variables in the current <code>Makefile</code>. And since we have chosen a specific prefix for all plugin definitions, we can now get all defined plugins in a programmatic way by filtering all variables with the prefix and then stripping that prefix to get the name we decided to give that plugin via something like:</p>
<pre><code># this filters out all variables with a plugin_ prefix and regards them 
# as plugin definitions
ALL_DEFINED_PLUGINS := $(filter plugin_%, $(.VARIABLES))
# from the defined variables list we only extract the name
ALL_PLUGINS := $(subst plugin_,,$(ALL_DEFINED_PLUGINS))
</code></pre>
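<p>To sanity-check that the filtering picked up everything, a tiny (hypothetical) debug target can print the derived list:</p>
<pre><code>.PHONY: list-plugins
list-plugins:
	@echo $(ALL_PLUGINS)
</code></pre>
<p>Running <code>make list-plugins</code> with the definitions above would print the four plugin names, though the order of <code>.VARIABLES</code> is not specified, so don&rsquo;t rely on it.</p>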
<p>Now we have a list of plugins (by their names) that we can use to construct a target. I want something easy and nice to type, so I went for a target called <code>install-plugins</code>:</p>
<pre><code>.PHONY: install-plugins
install-plugins: $(patsubst %, $(PLUGINSDIR)/%, $(ALL_PLUGINS))
</code></pre>
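<p>For the four plugins defined earlier, that <code>patsubst</code> call effectively expands the prerequisite list to something like this (order may vary, since <code>.VARIABLES</code> is unordered):</p>
<pre><code>install-plugins: pack/plugins/start/supertab pack/plugins/start/syntastic \
                 pack/plugins/start/fzf pack/plugins/start/fzf.vim
</code></pre>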
<p>The target is <code>PHONY</code> because it doesn’t itself describe a file that gets generated, so it’s always &ldquo;out of date&rdquo;. But the interesting part of the target is its prerequisites on the right side. There is a path substitution there that defines file targets for all plugins in the plugin directory (I use vim’s built-in plugin management, so I have the plugin directory defined as <code>PLUGINSDIR := pack/plugins/start</code>). This now means make knows that it needs to create all the files for the generated plugin paths (e.g. <code>pack/plugins/start/syntastic</code>) in order to fulfill the <code>install-plugins</code> target. However, so far there is nothing telling it how to do that. In order to change that, we define a wildcard rule that matches the file path of those plugin definitions:</p>
<pre><code>$(PLUGINSDIR)/%: $(PLUGINSDIR)
	@echo &quot;Installing $@ from $(plugin_$*)&quot;
	@install -d $@
	@curl -Lfs $(plugin_$*) | tar xz -C $@ --strip-components=1
</code></pre>
<p>This rule matches every directory under the plugin path via the wildcard character <code>%</code>. And because this is an implicit rule, make provides an automatic variable named <code>$*</code> that contains the stem of the match, which in our case is the plugin name. From there we can make sure the directory exists via <code>install -d $@</code> (<code>$@</code> is an automatic variable that contains the whole target), get the URL for the plugin by reconstructing the original variable we defined for the plugin from the stem (<code>$(plugin_$*)</code>), and then run <code>curl</code> and <code>tar</code> to unpack the plugin into the newly created directory. You might have noticed that the rule also has a prerequisite on <code>$(PLUGINSDIR)</code> on the right side. Which is easily satisfied via this rule:</p>
<pre><code>$(PLUGINSDIR):
	install -d $@
</code></pre>
<p>So now we have all the pieces together to define plugins and install them. However, we also want to make sure we don’t keep old plugins around that aren’t defined anymore, or files that were removed from plugins in newer versions. And this is done by deleting the plugins before installing new ones:</p>
<pre><code>.PHONY: clean-plugins
clean-plugins:
	rm -rf ./$(PLUGINSDIR)/*
</code></pre>
<p>And then we introduce one more convenience task to update plugins by first removing all of them and then installing the ones we want:</p>
<pre><code>.PHONY: update-plugins
update-plugins: clean-plugins install-plugins
</code></pre>
<p>And with that I can easily update all my vim plugins and commit the update to my dot file repository via the following command sequence:</p>
<pre><code>make update-plugins
git add pack/
git commit -m &quot;update vim plugins&quot;
</code></pre>
<p>And the full example of the <code>Makefile</code> looks like this:</p>
<pre><code>PLUGINSDIR := pack/plugins/start

# plugin definitions
plugin_supertab  := https://github.com/ervandew/supertab/tarball/master
plugin_syntastic := https://github.com/scrooloose/syntastic/tarball/master
plugin_fzf       := https://github.com/junegunn/fzf/tarball/master
plugin_fzf.vim   := https://github.com/junegunn/fzf.vim/tarball/master

# this filters out all variables with a plugin_ prefix and regards them as
# plugin definitions
ALL_DEFINED_PLUGINS := $(filter plugin_%, $(.VARIABLES))
# from the defined variables list we only extract the name
ALL_PLUGINS := $(subst plugin_,,$(ALL_DEFINED_PLUGINS))

# this will install all plugins via the wildcard matching target below
.PHONY: install-plugins
install-plugins: $(patsubst %, $(PLUGINSDIR)/%, $(ALL_PLUGINS))

.PHONY: clean-plugins
clean-plugins:
	rm -rf ./pack/plugins/start/*

.PHONY: update-plugins
update-plugins: clean-plugins install-plugins

$(PLUGINSDIR):
	install -d $@

$(PLUGINSDIR)/%: $(PLUGINSDIR)
	@echo &quot;Installing $@ from $(plugin_$*)&quot;
	@install -d $@
	@curl -Lfs $(plugin_$*) | tar xz -C $@ --strip-components=1
</code></pre>
<h2 id="final-words">Final words</h2>
<p>I’ve been using this way of managing my vim plugins for a while now. And I’m really liking it. It’s small, portable, and easy for me to reason about. I don’t think there will be a need for this to change anytime soon since it’s pretty feature complete for me. From a purely aesthetic perspective it’s not super nice that it removes all plugins every time it runs just to put most of the files back right away. But that doesn’t bother me because it’s fast, reliable, and simple.</p>
<p>If you’re curious about more of my setup, you can find <a href="https://github.com/mrtazz/dotfiles" title="mrtazz’s dotfiles on github.com">my dotfiles on GitHub</a> where I’ve done much more with <code>make</code>.</p>
<h2 id="bonus-update-11-2023-automated-plugin-updates">Bonus update 11-2023: automated plugin updates</h2>
<p>Since writing this I&rsquo;ve also incorporated an Action in my dotfiles repo that updates all my plugins once a week. So the next time I pull my dotfiles (or create a new codespace) I have an updated version of my vim plugins. The Action definition for this looks like this:</p>
<pre tabindex="0"><code>name: vim-plugin-update

on:
  workflow_dispatch:
  schedule:
    # run once a week on Wednesday
    - cron: &#39;30 3 * * 3&#39;

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: update vim plugins
        run: cd vim &amp;&amp; make update-plugins

      - name: commit and push changes
        run: |
          git config user.name &#34;Github Actions&#34;
          git config user.email actions@noreply.github.com
          git add vim/pack
          git commit --allow-empty -m &#34;update vim plugins&#34;
          git push
</code></pre>]]></content>
    <link href="https://unwiredcouch.com/bits/2021/09/02/vim-package-managment-make.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Bilingual Brain]]></title>
    <published>2021-08-29T00:00:00Z</published>
    <updated>2021-08-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/costa-bilingualbrain-2020/</id>
    <content type="html"><![CDATA[<p>I really enjoyed reading this book. For years at this point I&rsquo;ve felt like a
bilingual in German and English, and I&rsquo;ve come across so many situations where
I unconsciously chose one language over the other. And then was surprised by
it. I&rsquo;ve always wondered how bilingualism or multilingualism is reflected in
the brain and how it&rsquo;s different between people and languages. &ldquo;The Bilingual
Brain&rdquo; gives a ton of insight into the research field of multilingualism as
well as highlighting the approaches, challenges, and limitations of research
in this field. There were many situations in the book, when a common pattern for bilinguals was explained, where I was happy to realize it&rsquo;s something I do too.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/costa-bilingualbrain-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Journaling through a Pandemic]]></title>
    <published>2021-07-26T00:00:00Z</published>
    <updated>2021-07-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2021/07/26/journaling-through-a-pandemic.html</id>
    <content type="html"><![CDATA[<p><img src="/images/journaling-through-a-pandemic/journal-overview.png" alt="5 journals next to each other viewed from the binder side. Left to right the turquoise &ldquo;some lines a day&rdquo; journal, a blue long form journal, a green reading journal, and a yellow and purple bullet journal labeled with number 6, 01/2020 to 07/2020 and number 7, 08/2020 to 12/2020 respectively " title="Overview of my paper journals"></p>
<p>I started 2020 with a very different expectation of the year to come. And I think it’s safe to say that I’m not alone in that one. But fast-forward 3 months and I found myself in the middle of a lockdown and a pandemic. Everyone is at home all the time, it’s working from home all the time, no coffee shops, no meeting friends, and that for the foreseeable future. The days are fairly unstructured and pretty chaotic as we try to make the best of this situation. Amid all of this, I found myself really embracing journaling in various paper notebooks, and it has made a tremendous difference for my mental health.</p>
<h2 id="some-lines-a-day">Some lines a day</h2>
<p>Towards the end of 2019 I had bought a &ldquo;Some lines a day&rdquo; journal from Leuchtturm1917. And while at the beginning of the year I wasn’t too sure what to put in there, and lots of entries were just of the &ldquo;had coffee today and got some stuff done&rdquo; variety, I now really appreciate having this low-pressure prompt to get my thoughts out of my brain every morning.</p>
<p>The way the journal is designed is that it has 365 pages, one page per day. The pages are then divided into 5 parts so that the book can be used for 5 years. Once you’re through with the first year, you can see what you wrote in the year(s) before when getting to subsequent entries. It’s basically an analog version of the &ldquo;On this day&rdquo; feature you can find in every photo and journaling app these days.</p>
<p>The really nice part about this is that there isn’t really a ton of space for each day. So, it’s very low pressure. If you want, you can write a single sentence, and it feels like you did all there was to do. Or you can write fairly small and get a couple of good thoughts on paper before the space fills up. This means that for me, I can sit down in the morning and put whatever is on my mind onto paper, without a goal or a requirement. A literal brain dump.</p>
<h2 id="long-form-journal">Long Form Journal</h2>
<p>A month into the first lockdown, I wanted to have a place to continue these thoughts from the &ldquo;Some lines a day&rdquo; brain dumps. So I ordered a simple soft cover, lined, A5 notebook from Leuchtturm1917 again. And once it arrived, I just poured my thoughts and anxieties into it. Every morning when I could see that some lines wouldn’t be enough, I continued whatever thoughts I had in this new journal. It felt a bit awkward at first because I wasn’t sure how to deal with which things go in there. It was different from only writing some lines because filtering felt like a built-in thing with so little space each day. But in the long form journal there was no limitation. It took me a while to get used to the journal and be ok with writing whatever crossed my mind with no judgement or self censoring. But it was (and still is) incredibly helpful to calm myself down in the morning and try to make sense of all the thoughts in my brain.</p>
<p>Again, I try not to force any requirements or expectations on me with this journal, either. If I thought there was more to write but I am done after 2 sentences, that’s fine and I’ll close the journal for the day. Or pick it up again in the afternoon to continue some thoughts or add new ones. I allow myself to have days (or weeks) where I don&rsquo;t have the patience and calm to write in there. Or write multiple pages a day. The purpose of it is to make sure I don’t keep things rotating in my brain that make me distracted and anxious, but dump them onto paper. Even if I ran out of &ldquo;some lines&rdquo; that day.</p>
<h2 id="bullet-journals">Bullet journals</h2>
<p>Which then brings the third notebook into focus, which is my Bullet Journal. I wrote <a href="https://unwiredcouch.com/2019/07/05/pen-and-paper.html" title="Pen &amp; Paper on unwiredcouch.com ">about this before</a>, and it has seen a couple of different implementations since writing the original post.</p>
<p>In the Bullet Journal, which is a dotted A5 Leuchtturm1917 notebook for me, I describe &ldquo;the runway&rdquo; of my responsibilities. Meaning that I plan out what the months, weeks, and days (often with explicit time blocks) should ideally be. Which of course never actually ends up being what reality looks like.</p>
<blockquote>
<p>Make the plan. Execute the plan. Expect the plan to go off the rails. Throw away the plan. - Leonard Snart</p></blockquote>
<p>But that’s ok. It’s the map, not the territory. And sometimes even just a compass. The important thing for me is that it’s a place where I put structure into the day and don’t just let it happen. I’ve learned about myself over the years that while I sometimes feel like it would be nice to just not have any plans or to-dos for a day, these unstructured days quickly turn into dissatisfaction for me. And I feel like I’ve not done anything. This is why I try to be intentional about my days. At least during the week. Weekends are still mostly unstructured in the sense that I don’t plan out the whole day but maybe only one or two things.</p>
<p>A very major thing the Bullet Journal is instrumental in for me is what I call &ldquo;Inbox anxiety&rdquo;. Depending on how much and what kind of things I have going on, I find myself in periods of dreading opening E-Mail (especially in the morning) because of the additional and new responsibilities I will find in there. And to some extent this also goes for physical mail. So the way I tackle that is that I open the Bullet Journal and the mail client on my phone. And for everything I see in there that needs follow-up - or any other action from my side - I write a to-do item into the journal. Including some potential context, or related to-dos, and so on. And I’ve found that this helps me feel grounded and not overwhelmed in times when there are more incoming responsibilities through my inboxes than I’d like.</p>
<p>As the kind of complement to that, I also try to log even seemingly mundane things I do throughout the day. Especially on days that don’t go as planned, or are super stressful and I end up not feeling super great about them, it’s helpful to take a break and realize where all the time went that day. And what kind of things I actually <em>did</em> get done. And even if it still means I didn’t get much done, I have a record of the day and it doesn’t feel that lost. Especially throughout this pandemic, where every day has a tendency to feel the same, it’s useful to have something that shows me that this isn’t quite true. And all the days are indeed different, if only in subtle ways.</p>
<p>What I don’t really put into the Bullet Journal anymore are backlogs (or &ldquo;someday&rdquo; lists in GTD parlance). I realized that with 2 notebooks a year for bullet journaling, there are a bunch of things that often get migrated over and over again because I want to get to them eventually, but there is no time pressure. And it was tedious to rewrite them all the time. It just made me feel like the collections I had those items in weren’t useful and actually demotivating to me. So I moved those into digital tools to keep the Bullet journal the place for front-of-mind things.</p>
<h2 id="reading-journal">Reading journal</h2>
<p>Towards the end of 2019 I also bought an &ldquo;Ex Libris&rdquo; reading journal from Leuchtturm1917. This is the least used one of my regular journals. But while I don’t use it consistently, it’s been an absolutely fun addition. It mostly came about through me realizing that I don’t super enjoy Goodreads for more than tracking <em>when</em> I’m reading and finishing a book. I don’t like the review interface, and the app is generally pretty slow. And it never really intrigues me to go back and browse through the books I read and what I thought about them the way a paper journal does.</p>
<h2 id="in-closing">In closing</h2>
<p>As difficult and challenging as 2020 was, and the ongoing pandemic continues to be, this process of having notebooks for many different ways to get things out of my head is immensely helpful to me. And as a continuation of leaning more on analog tools that I started in 2018, it keeps me from being completely petrified and procrastinating in the morning on many days. It also provides a way to really think about what thoughts and anxieties are rotating in my brain and have a conversation about it, even if I’m basically just talking to myself. And despite it being slightly inconvenient to carry around 3 or so notebooks sometimes, the benefits I get from having them around vastly outweigh that inconvenience for me. I even got myself a nice new pen to enjoy the process of sitting down and writing even more.</p>
<p><img src="/images/journaling-through-a-pandemic/lamy-cp1.png" alt="A black Lamy CP1 fountain pen on a wooden table" title="Lamy CP1 fountain pen"></p>
]]></content>
    <link href="https://unwiredcouch.com/2021/07/26/journaling-through-a-pandemic.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Between the World and Me]]></title>
    <published>2021-07-10T00:00:00Z</published>
    <updated>2021-07-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/coates-betweentheworldandme-2015/</id>
    <content type="html"><![CDATA[<p>This was an absolutely fantastic, emotional, gut-wrenching, sad, and happy
book to read. I&rsquo;ve come to really like the way it is written, as a
conversation between the author and his son, trying to explain the world.
There&rsquo;s a lot of mixing of history, current affairs, personal history and
memories, which makes for a very personal piece of literature.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/coates-betweentheworldandme-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Digital Zettelkasten]]></title>
    <published>2021-06-20T00:00:00Z</published>
    <updated>2021-06-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kadavy-digitalzettelkasten-2021/</id>
    <content type="html"><![CDATA[<p>I read this as I was working on figuring out how to set up my Zettelkasten and
how it would best serve me after reading the basics of the theory behind it in
<a href="/reading/ahrens-smartnotes-2017">Taking Smart Notes</a>. And there sadly wasn&rsquo;t much new stuff in the
book that I hadn&rsquo;t already read in various forms in some blog posts over the
last couple of months. However it was really nice to have this information all
in one place in book form. And get the confirmation that I am on the right
track with how I&rsquo;m building my Zettelkasten. Or at least going in the same
direction that has worked for others so far as well.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kadavy-digitalzettelkasten-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How to Do Nothing: Resisting the Attention Economy]]></title>
    <published>2021-06-19T00:00:00Z</published>
    <updated>2021-06-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/odell-howtodonothing-2019/</id>
    <content type="html"><![CDATA[<p>This was one of those books that I started reading with a lot of expectations
that it would be similar to <a href="/reading/newport-digitalminimalism-2019/">Newport&rsquo;s Digital Minimalism</a> and tell me a
lot of things that I already kinda knew. But I was completely surprised by it.
The book is much more about what to do <em>instead</em> of following the pull of the
attention economy, rather than why it&rsquo;s bad. There is some of that in there as
well of course. However the fascinating part of the book is much more the fact
that it kinda acts like a nice guide about how to re-engage with your physical
surroundings, your city, countryside, and neighborhood. The real things around
you. It took me a while to realize that and read the book with that in mind.
And I definitely struggled early on as there was way more talk about the
dropout communities of the 70s than I really was interested in. But after
finishing the book I definitely have a new-found appreciation of my
neighborhood and the nature around me. It was a very positive read and in the
end not really about &ldquo;doing nothing&rdquo;.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/odell-howtodonothing-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Die Macht der Mehrsprachigkeit - Über Herkunft und Vielfalt]]></title>
    <published>2021-05-24T00:00:00Z</published>
    <updated>2021-05-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/grjasnowa-machtdermehrsprachigkeit-2021/</id>
    <content type="html"><![CDATA[<p>I found this one by accident while shopping at the local bookstore. The book
ended up being a super fascinating read about how language and multilingualism
is viewed in Germany. I&rsquo;ve found myself reminded of a lot of things that I&rsquo;ve
noticed since living abroad in the US and coming back to Germany. And it gave
me a new sense of reality of what it means to arrive and live in Germany as
someone whose first language isn&rsquo;t German and isn&rsquo;t considered one of the
&ldquo;good&rdquo; languages either.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/grjasnowa-machtdermehrsprachigkeit-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Limits of Organization (Fels Lectures on Public Policy Analysis)]]></title>
    <published>2021-04-21T00:00:00Z</published>
    <updated>2021-04-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/arrow-limitsoforganizations-1974/</id>
<content type="html"><![CDATA[<p>This was a very fascinating book. In part because I have hardly ever attended
an economics lecture. And the ones I did attend were geared towards very practical
things that we had to learn by heart. So while reading this book I was at the
same time getting used to thinking about economic things in a theoretical
context and taking in the information conveyed in the text. I enjoyed it a lot
and took a lot of inspiration and food for thought from it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/arrow-limitsoforganizations-1974/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Brain age and other bodily ‘ages’: implications for neuropsychiatry]]></title>
    <published>2021-04-19T00:00:00Z</published>
    <updated>2021-04-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/coleetal-brainage-2018/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/reading/coleetal-brainage-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Altered Traits]]></title>
    <published>2021-04-17T00:00:00Z</published>
    <updated>2021-04-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/golemandavidson-alteredtraits-2017/</id>
    <content type="html"><![CDATA[<p>I went into this book with a lot of expectations. I was very curious about the
science behind meditation and how to approach the topic in a scientific way.
While ultimately that information was in the book to some extent, it overall
wasn&rsquo;t what I had hoped it would be. First of all there is <em>a lot</em> of
namedropping happening. So for quite some time I was busy with going back to
re-read who someone was, or wondering whether I should care to remember that
person&rsquo;s name because it would be important later on. This is likely very
interesting to someone who is already familiar with well-known people in the
field of meditation but was mostly distracting for me. The tl;dr for the book
seems to be that there are still more questions than answers about the science
behind meditation and how to explain the clearly visible benefits seen in a lot
of long-time practitioners. But to me that could have been a blog post and not
such a long book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/golemandavidson-alteredtraits-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The amygdala as a hub in brain networks that support social life]]></title>
    <published>2021-03-28T00:00:00Z</published>
    <updated>2021-03-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/bickartdickersonbarrett-amygdalahubinbrainnetworks-2014/</id>
    <content type="html"><![CDATA[<p>This was a fascinating paper to read. Mostly because there were so many things
I absolutely didn&rsquo;t get. And yet from reading <a href="/reading/barret-howemotionsaremade2017/">How Emotions Are Made</a> there
were a bunch of things that I had heard before and that made sense to me.
Especially thinking about the brain more as a network structure and less as a
set of connected control nodes. And some of the findings with regard to how
changes in different networks for different aspects of social life contribute
to symptoms that manifest in various external behaviors (e.g. being
cold/distant to loved ones or being easily fooled by scammers).</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/bickartdickersonbarrett-amygdalahubinbrainnetworks-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How Emotions Are Made: The Secret Life of the Brain]]></title>
    <published>2021-03-21T00:00:00Z</published>
    <updated>2021-03-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/barret-howemotionsaremade2017/</id>
    <content type="html"><![CDATA[<p>This was an absolutely mind blowing read in the sense that it made me think
about a lot of things in a new light. The book is an introduction into <a href="https://en.wikipedia.org/wiki/Theory_of_constructed_emotion" title="Theory of Constructed Emotion on Wikipedia.org">the
theory of constructed emotion</a> which has been the author’s field of
research for many years. The tl;dr kinda being that emotions don’t just exist
as objective reality but are constructed by humans as we interpret external
and internal sensory input.</p>
<p>The author makes it very clear how much research has gone into the theory and
how it explains many observations that can’t be explained by the classical
view on emotions. It’s definitely a theory that’s a lot less intuitive than
the classical view, but I’ve still found myself nodding along at some
passages, while being very perplexed at others trying to understand concepts
like simulation and prediction of the brain. And what it means for my day to
day perceptions.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/barret-howemotionsaremade2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Dinosaur]]></title>
    <published>2021-03-12T00:00:00Z</published>
    <updated>2021-03-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/dinosaur-03-2021/</id>
    <content type="html"><![CDATA[<p>I rarely draw animals, let alone extinct ones. But this was a ton of fun and a
good learning exercise.</p>
<p>#drawing #painting #art #watercolor #sketch #sketchbook #dinosaur #triceratops
#tonedpaper #ink #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/dinosaur-03-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Dr. Beverly Crusher]]></title>
    <published>2021-02-22T00:00:00Z</published>
    <updated>2021-02-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/dr-beverly-crusher-02-2021/</id>
    <content type="html"><![CDATA[<p>Dr. Beverly Crusher, Star Trek TNG. It’s been hard to find time for art
lately. And the more I don’t do it the more I stress myself about having to
make something really good to make it count. So today I just took my art
supplies while watching TV and decided to see what happens. And I don’t hate
it 😬.
.
.
#art #sketch #sketchbook #drawing #painting #watercolor #strathmore
#fabercastell #watercolorpencils #ink #startrek #beverlycrusher #tng #tvshow
#tv #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/dr-beverly-crusher-02-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Columbo]]></title>
    <published>2020-12-28T00:00:00Z</published>
    <updated>2020-12-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/columbo-12-2020/</id>
    <content type="html"><![CDATA[<p>Columbo. This started as a traditional quick sketch on paper while watching
TV. And then I was too lazy to get my watercolors and colored this in
procreate with mostly the watercolor brush. I generally have a hard time with
creating digital art (I just enjoy traditional more). But this approach
inspired by @schmoedraws’ process she shared on her Patreon really worked for
me.
.
.
#art #sketch #painting #procreate #digitalart #columbo #peterfalk #tv
#watercolor #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/columbo-12-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Branch Deploys with GitHub Actions]]></title>
    <published>2020-12-05T00:00:00Z</published>
    <updated>2020-12-05T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2020/12/05/pr-deploys.html</id>
<content type="html"><![CDATA[<p>Over the last 18 months or so working for GitHub on the team managing deploys, I’ve gotten very accustomed to branch based deployments. Even more so, I’m enjoying them much more than the usual trunk based deployment setups that are common in CI/CD environments (this however might be a topic for a different post).</p>
<p>With the official availability of GitHub Actions last year I decided to move some of my CI jobs for my personal infrastructure over from my private Jenkins server to Actions. Both in an effort to clean up the setup a bit, but also to not have to maintain and rely on running Jenkins myself so much anymore.</p>
<p>My personal infrastructure runs out of a single monolithic repository that contains chef cookbooks, provisioning code, kubernetes resources, terraform code, and also dns configuration (which will serve as the example in this post) in a subdirectory. That is why a lot of the examples I post here are related to having multiple deployment targets in a single repository.</p>
<h2 id="the-used-to-be-situation">The used-to-be situation</h2>
<p>For a couple of years now I’ve been managing my DNS configuration with octodns. It’s a really nice tool written by the GitHub engineering team to manage DNS zones across different providers via yaml files. My zones are configured in the <code>dns</code> subdirectory of my infrastructure repository. There&rsquo;s a small Makefile that has tasks for verifying and deploying configuration. And it usually got deployed via a push to the default branch of the repo and a top-level Jenkinsfile that mapped directory paths to Jenkins jobs like this:</p>
<pre tabindex="0"><code>stage(&#39;determine subjob to build&#39;) {
    try {
      sh &#34;printenv&#34;
      echo &#34;Got params: ${params}&#34;
      foundJob = false
      changedFiles = sh(script: &#34;git diff --name-only ${params.prevSHA} HEAD&#34;, returnStdout: true).trim()
      if (changedFiles =~ /^jobs/){
        build job: &#39;create-jobs&#39;, wait: false
        foundJob = true
      }
      if (changedFiles =~ /^dns/){
        build job: &#39;dns&#39;, wait: false
        foundJob = true
      }
      if (!foundJob) {
        echo &#34;No subjobs to build for &#34; + changedFiles
      }
      sh &#34;/usr/local/bin/ci-notify --job=${env.JOB_NAME} --build=${env.BUILD_NUMBER} --success&#34;
    }
    catch (err) {
      sh &#34;/usr/local/bin/ci-notify --job=${env.JOB_NAME} --build=${env.BUILD_NUMBER} --failure&#34;
      throw err
    }
  } 
</code></pre><p>In case the job that ran on push failed, or I just wanted to rerun the deployment, I also had a helpful Slack bot to help out with that:</p>
<p><img src="/images/bits/pr-deploys/slack-friday.png" alt="slack bot deploy"></p>
<p>And this worked really well for a couple of years. I don’t have a ton of changes usually in my personal DNS setup, so whenever a change was needed, this setup was more than enough automation to keep me happy. But at some point I got annoyed by the fact that basically all automation for my infrastructure was dependent on Jenkins being up (and I had to make sure it was).</p>
<h2 id="hello-actions">Hello Actions</h2>
<p>Fortunately, around the same time I got really annoyed running Jenkins, GitHub Actions went GA with its CI offering, which prompted me to look into how I could use it to maybe replace my Jenkins setup.</p>
<p>As a first step here I wanted to change as few things as possible. So Actions would literally just replace Jenkins, running the deployment logic whenever I pushed to the default branch of my infrastructure repository.</p>
<p>In Actions this was done by restricting the job to changes in the <code>dns</code> subdirectory or changes to the workflow file:</p>
<pre tabindex="0"><code>on:
  push:
    paths:
      - &#39;dns/**&#39;
      - &#39;.github/workflows/dns.yml&#39;
</code></pre><p>The job itself would then run the same sequence of <code>make check</code> to do a dry run of the changes, and then <code>make update</code> to deploy the changes.</p>
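<p>Put together, a minimal version of such a workflow could look roughly like this. This is only a sketch: the runner image, the checkout step, and the <code>working-directory</code> for the Makefile are assumptions, not the exact setup:</p>

```yaml
# .github/workflows/dns.yml -- hypothetical sketch of the initial setup:
# run the octodns dry run and deploy on every push that touches dns/
on:
  push:
    paths:
      - 'dns/**'
      - '.github/workflows/dns.yml'

jobs:
  dns:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: dry run
        run: make check
        working-directory: dns
      - name: deploy
        run: make update
        working-directory: dns
```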
<h2 id="pull-request-ci-integration">Pull Request CI integration</h2>
<p>This was already great and a perfect replacement of the setup I had before. But it wasn’t using the power of Actions to their full extent. The most obvious one being that this wouldn’t really work with pull requests. Sure the job would run, but every change pushed to a PR would automatically be deployed. Which also can be a nice workflow, but wasn’t what I wanted. I wanted to have more control over deploys, essentially a human 👍🏻 that the changes are actually good to go. And as the signal for this I decided on the merge button. So once I was happy with the changes and wanted to see it deployed, I just had to merge the PR and make sure the automation keeps deploying on the default branch.</p>
<p>In order to do that, all I had to do was add <code>if: github.ref == 'refs/heads/master'</code> to the step that was running the <code>make update</code> deployment. And voila, the <code>make check</code> dry run now runs on any PR with nice and proper GitHub checks integration, and any merge to master triggers a deploy and makes sure the changes go out.</p>
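<p>As a sketch, the guarded step pair would look something like this (the step names and <code>working-directory</code> are assumptions):</p>

```yaml
# hypothetical sketch: the dry run happens on every trigger,
# but only pushes/merges to the default branch actually deploy
- name: dry run
  run: make check
  working-directory: dns
- name: deploy
  run: make update
  working-directory: dns
  if: github.ref == 'refs/heads/master'
```

This way a PR gets the <code>make check</code> result as a check without ever running the deploy step.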
<h2 id="branch-based-deploys">Branch based deploys</h2>
<p>I ran with this setup for a while. And again it worked perfectly fine. However I was also running into situations where the actual deploy failed even though the PR check was fine. Because there is no place like production. And all the tests will never be a full replacement to catch all the things before hitting production. That’s just how it is. But it meant that I would only find out after the merge that something was off. And then I had to open another PR to fix the problems which usually was just a small one line change. And I got annoyed by this. Plus Pull Requests have a really nice integration with the deployments API which I was missing out on.</p>
<p>So I wanted to have all these nice things as well. The first step to get there was to find another trigger for deployments that signals the automation that this code is fine to deploy. Pull Requests don’t have a ton of ways to interact with them. It basically comes down to comments or labels. And because there is a way to trigger Actions on labels, that’s what I went with.</p>
<p>In the Actions configuration this looks like this. First I had to make sure the job runs when the PR gets <code>labeled</code> (the other two, <code>opened</code> and <code>synchronize</code>, make sure the automation is run on any changes pushed to the PR):</p>
<pre tabindex="0"><code>on:
  pull_request:
    paths:
      - &#39;dns/**&#39;
      - &#39;.github/workflows/dns.yml&#39;
    types: [opened, synchronize, labeled]
</code></pre><p>Then I decided on a label name - in this case <code>deploy requested/dns</code> - and made sure the deployment logic only ran when the PR actually was labeled with that. I did this by making <code>deploy</code> a separate job in the Actions definition and having it be guarded by <code>if: contains(github.event.pull_request.labels.*.name, 'deploy requested/dns')</code>, similar to how the guard on the default branch worked before.</p>
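<p>A minimal sketch of the resulting two-job layout (the job names, checkout step, and <code>working-directory</code> are assumptions, not the exact definition):</p>

```yaml
# hypothetical sketch: the check job runs on every PR update,
# the deploy job only when the PR carries the deploy label
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make check
        working-directory: dns
  deploy:
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'deploy requested/dns')
    steps:
      - uses: actions/checkout@v2
      - run: make update
        working-directory: dns
```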
<p>Now to have that nice PR timeline integration with the deployments API I wrote a small ruby script which is configured via environment variables in the job.</p>
<pre tabindex="0"><code>env:
      DEPLOYMENT_REPO: ${{ github.repository }}
      DEPLOYMENT_ENVIRONMENT: dns
      DEPLOYMENT_DESCRIPTION: dns updated via octodns
      DEPLOYMENT_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      DEPLOYMENT_SHA: ${{ github.event.pull_request.head.ref }}
</code></pre><p>The script records deployments and their status changes. The deployment creation step runs right before the deployment logic like so:</p>
<pre tabindex="0"><code>- name: start deployment
  run: ruby bin/gh-deployment.rb create
</code></pre><p>And then at the end of the job there are these two steps that are run on success and failure of the job respectively:</p>
<pre tabindex="0"><code>    - name: record deployment failure
      run: ruby bin/gh-deployment.rb failure
      if: failure()
    - name: record deployment success
      run: ruby bin/gh-deployment.rb success
      if: success()
</code></pre><p>Which makes sure the deployment status is properly reflected in the Pull Request timeline:</p>
<p><img src="/images/bits/pr-deploys/pr-deploy-timeline.png" alt="pull request deploy timeline"></p>
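<p>For illustration, here is a minimal sketch of what a script like <code>bin/gh-deployment.rb</code> could look like. This is not the actual script: it uses only the Ruby standard library against the GitHub deployments REST API, and the way the <code>success</code>/<code>failure</code> invocations find their deployment (by listing the most recent one for the ref and environment) is an assumption:</p>

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch of a gh-deployment.rb style script -- not the
# author's actual implementation. Reads the DEPLOYMENT_* variables the
# workflow sets and talks to the GitHub deployments REST API.
require "json"
require "net/http"
require "uri"

API = "https://api.github.com"

# Build the request body for POST /repos/{repo}/deployments.
def deployment_payload(env)
  {
    "ref" => env.fetch("DEPLOYMENT_SHA"),
    "environment" => env.fetch("DEPLOYMENT_ENVIRONMENT"),
    "description" => env.fetch("DEPLOYMENT_DESCRIPTION"),
    "auto_merge" => false,
    "required_contexts" => [],
  }
end

# Minimal authenticated JSON request helper.
def request(method, path, body, env)
  uri = URI("#{API}#{path}")
  req = (method == :post ? Net::HTTP::Post : Net::HTTP::Get).new(uri)
  req["Authorization"] = "token #{env.fetch('DEPLOYMENT_TOKEN')}"
  req["Accept"] = "application/vnd.github.v3+json"
  req.body = JSON.generate(body) if body
  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |h| h.request(req) }
  JSON.parse(res.body)
end

# Most recent deployment for our ref/environment, so the
# success/failure invocations can attach a status to it.
def latest_deployment(env)
  repo = env.fetch("DEPLOYMENT_REPO")
  query = URI.encode_www_form(
    "ref" => env.fetch("DEPLOYMENT_SHA"),
    "environment" => env.fetch("DEPLOYMENT_ENVIRONMENT")
  )
  request(:get, "/repos/#{repo}/deployments?#{query}", nil, env).first
end

# Only act when invoked with an action argument, e.g.
#   ruby bin/gh-deployment.rb create
if __FILE__ == $PROGRAM_NAME && !ARGV.empty?
  repo = ENV.fetch("DEPLOYMENT_REPO")
  case ARGV.first
  when "create"
    request(:post, "/repos/#{repo}/deployments", deployment_payload(ENV), ENV)
  when "success", "failure"
    id = latest_deployment(ENV).fetch("id")
    request(:post, "/repos/#{repo}/deployments/#{id}/statuses",
            { "state" => ARGV.first }, ENV)
  else
    abort "usage: gh-deployment.rb [create|success|failure]"
  end
end
```

The <code>GITHUB_TOKEN</code> provided to Actions jobs is enough for these endpoints on the repository the workflow runs in.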
<h2 id="wrapping-up">Wrapping up</h2>
<p>This is now a workflow I really enjoy. It’s extremely similar to how we work at GitHub, which is nice because I don’t have to rethink how things are done when I change things in my personal infrastructure. Plus I get all of the testing and feedback on my PR and can act there with changes before merging the code. I can also work on more than one PR at a time, even having a PR for my chef changes and one for my dns changes, and have each deployed and tested while incorporating feedback before merging. Or I can do it all in the same PR, because the pre-deployment checks will run as soon as there are changes in the respective subdirectory. And I can add a <code>deploy requested/dns</code> and a <code>deploy requested/chef</code> label to have the automation for both deployments run.</p>
]]></content>
    <link href="https://unwiredcouch.com/bits/2020/12/05/pr-deploys.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How to take Smart Notes]]></title>
    <published>2020-11-27T00:00:00Z</published>
    <updated>2020-11-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ahrens-smartnotes-2017/</id>
<content type="html"><![CDATA[<p>I really liked this book even though it’s fairly academic.
I’ve heard about the <a href="https://en.wikipedia.org/wiki/Zettelkasten" title="Zettelkasten on Wikipedia">Zettelkasten method</a> quite a bit especially recently
and I was very curious to learn more about it. Coming to the book with that
context, I didn’t mind the academic style of the book too much. However it’s
definitely more a book that makes you reflect on how and why you want to take
notes than one that gives you a concrete guide on what to do.</p>
<p>After finishing the book I took some time to think about how a Zettelkasten
style note taking setup would best fit into my day to day. And how I would
want to interact with such a setup. And I’m still in the process of moving my
old notes into the system and figuring out how it works out for me. So while I
can’t say anything yet about how effective the things are that I learned
through the book, I found it intriguing enough to make me adopt the method and
try it out.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/ahrens-smartnotes-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Born a Crime]]></title>
    <published>2020-10-24T00:00:00Z</published>
    <updated>2020-10-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/noah-bornacrime-2016/</id>
    <content type="html"><![CDATA[<p>I bought this book never really having watched any of Trevor Noah’s comedy or
The Daily Show with him. But the book sounded really intriguing to me and I
was curious about his life and background. And the book was absolutely
fantastic. It’s one of the few books I’ve read where I literally laughed out
loud while also being struck and deeply moved reading through other parts.
I&rsquo;ve only been to South Africa for a couple of days and only really been to
Cape Town. So I’ve had (and still have) very limited knowledge of the history
of the country. And the book taught me a ton about what it was like growing up
during and after Apartheid and how kids can perceive very extreme and cruel
situations. This is definitely one of the best books I’ve read and I can
wholeheartedly recommend picking it up.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/noah-bornacrime-2016/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Lebron James]]></title>
    <published>2020-10-18T00:00:00Z</published>
    <updated>2020-10-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/lebron-james-10-2020/</id>
    <content type="html"><![CDATA[<p>LeBron James sketch. Pencil on toned paper. This started as a loose
underdrawing for a painting. Then I decided to tighten the pencils a bit more
and now I don’t know anymore if I want to paint over it 😂.
.</p>
<p>But still decided to color it a couple of days later. Watercolor and
watercolor pencils. And a little bit of ink for some small details. That
counts for #inktober, right? 😆
.
.
.
#art #sketch #sketchbook #drawing #painting #watercolor #watercolorpainting
#watercolorpencils #ink #markers #lebronjames #basketball #🏀 #👑 #tonedpaper
#fabercastell #strathmorepaper #royaltalens #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/lebron-james-10-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Outlining your Novel: Map Your Way to Success]]></title>
    <published>2020-10-13T00:00:00Z</published>
    <updated>2020-10-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/weiland-outliningyournovel-2011/</id>
<content type="html"><![CDATA[<p>I picked this book up because I’ve always been interested in outlining. But
even more so I’ve been fascinated by how many people are fans of outlining. To
the point where there are discussions about things like “is OmniOutliner or
OneNote the better outlining app” and the fact that there are even dedicated
outlining apps. I’ve mostly done outlining for my writing with bullet points
in a plain text or markdown file. And I wanted to know if I’m missing
anything. In addition to that I also find the process of writing a book or
novel super fascinating. I don’t know if I would want to write one. But I find
the process of planning, structuring, and writing something of that size very
interesting. And I wanted to see if I could learn something from it for
shorter writing like documentation and blog posts.</p>
<p>The book is structured in a way where only the first two chapters are really
about outlining itself. The advantages of writing an outline versus writing
“at the seat of your pants” as well as different approaches to outlines (e.g.
mind maps, longform writing, bullet points) are discussed.</p>
<p>The following chapters then are a whirlwind tour through planning a novel and
how to apply outlining to areas like plot planning, character development, or
creating a backstory. At the end of every chapter there is an interview with a
writer about the value of outlining and their approach to it.</p>
<p>I really enjoyed reading the book. I learned a lot about structuring my
writing and what kind of things go into planning a novel. It’s very accessibly
written and the chapters all have a good length so it never feels like they
are dragged out or missing content. I could have done without the writer
interviews as they didn’t really add a lot to the book but had a couple of
interesting anecdotes. I don’t know if there would be anything new for a
seasoned author in the book, but I can definitely recommend it as a quick
intro into planning a novel.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/weiland-outliningyournovel-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Adrianne Sloboh]]></title>
    <published>2020-09-28T00:00:00Z</published>
    <updated>2020-09-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/adrianne-sloboh-09-2020/</id>
    <content type="html"><![CDATA[<p>watercolor practice with the new @etchr_lab brushes. Trying to capture some of
that mesmerizing magic of an
<a href="https://www.instagram.com/adrianne.sloboh/">@adrianne.sloboh</a> flip. Still so
much to learn but I really enjoyed painting this one and the brushes are
really nice.
.
.
#art #sketch #sketchbook #watercolor #painting #drawing #artstudy #practice
#skateboard #kickflip #tonedpaper #etchrlab #fabercastell #royaltalens
#instaartist #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/adrianne-sloboh-09-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How to Be a Stoic: Using Ancient Philosophy to Live a Modern Life]]></title>
    <published>2020-09-27T00:00:00Z</published>
    <updated>2020-09-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/pigliucci-howtobeastoic-2017/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/reading/pigliucci-howtobeastoic-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[1919]]></title>
    <published>2020-09-13T00:00:00Z</published>
    <updated>2020-09-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ewing-1919-2019/</id>
<content type="html"><![CDATA[<p>I’ve been interested in Eve Ewing’s writing ever since I read her run on Marvel’s
Ironheart. So I got her other books a couple of months ago as well. I started
with &ldquo;1919&rdquo; and it was definitely an interesting albeit completely new
experience for me. I&rsquo;ve not really read poetry since school where I wasn&rsquo;t a
big fan because it always felt forced. But while reading &ldquo;1919&rdquo; I definitely
regretted not paying more attention in school to be able to pick up on the
stylistic and technical choices in the book. &ldquo;1919&rdquo; is a collection of poems
set against the background of an official report about the 1919 race riots in Chicago
which I had never heard of until I read the book. It was absolutely
educational to read the various passages from the report that were the basis
for the poems. And then get an emotional and very personal-feeling poem about
the passage right after. Given my own lack of knowledge about poetry I
constantly felt like I was missing some fascinating nuances about the writing.
And I&rsquo;ve started learning more about poetry and creative writing to rectify
that. And I definitely plan to re-read &ldquo;1919&rdquo; once I&rsquo;m more knowledgeable
about poetry.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/ewing-1919-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Draw People Every Day: Short Lessons in Portrait and Figure Drawing Using Ink and Color]]></title>
    <published>2020-09-09T00:00:00Z</published>
    <updated>2020-09-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/mcleod-drawpeopleeveryday-2019/</id>
    <content type="html"><![CDATA[<p>This book caught my eye while I was browsing for art books. I’ve been trying
for a while to incorporate (almost) daily drawing back into my schedule. But
in a lighter-weight way with less pressure to do finished pieces. And “Draw
People Every Day” definitely fit the bill for that. The book provides a great
high level overview for getting started with ink drawing and dives deeper into
various topics like the importance of drawing from life, gesture drawing,
and how to achieve different effects with line weights. The same approach is
then taken in the final chapter to talk about using color for quick sketching.</p>
<p>After having spent a lot of time on learning how to draw in the last 2 years,
there weren’t a lot of things that were really new to me in this book. However
it’s the book I wish I had 2 years ago when I got started. It does a great job
laying down the fundamentals for getting started with sketching. And it
definitely inspired me to do more quick sketching and drawing from life.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/mcleod-drawpeopleeveryday-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Spider-Man 2018 vs. 2020]]></title>
    <published>2020-09-03T00:00:00Z</published>
    <updated>2020-09-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/spiderman-09-2020/</id>
<content type="html"><![CDATA[<p>2018 vs 2020. Two years ago to the week, I decided that I wanted to learn how to
draw with basically no idea how to do it. I watched a ton of @jimlee stream
videos on YouTube and in that first week I drew the bottom Spider-Man sketch
and it took me about 3 hours. Since then art has become such a big part of my
life and one of the most joyful hobbies I’ve ever had. I’ve tried out so many
mediums, discovered artists and their wonderful art, and learned to appreciate
comic books in a completely new way. I decided to redraw the sketch how I
would do it today (and in about an hour). It’s too easy to think I’ve not
learned anything in the last 2 years when drawings don’t turn out the way I
want. But this exercise really made me realize how far I’ve come, even if I
know how far I still want to go.</p>
<p>#art #drawing #sketch #sketching #pencil #ink #spiderman #marvel #comicart
#instaart #learntodraw #study</p>
]]></content>
    <link href="https://unwiredcouch.com/art/spiderman-09-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe]]></title>
    <published>2020-08-31T00:00:00Z</published>
    <updated>2020-08-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/randall-darkmatteranddinosaurs-2015/</id>
    <content type="html"><![CDATA[<p>I had heard about this book on several occasions and when we went to the
natural history museum here in Berlin I found it in the shop and decided to
finally buy it. I didn’t really know what to expect but having recently read a
couple of popular science books about physics I was very intrigued. And the
book did definitely deliver, albeit in a very different way than I thought. I
went into reading it with the expectation that the majority of the book would
be about dark matter and its details. This however is really only a small part
of the whole book. Throughout the book there are detailed explanations about
the (presumed) origins of life, the makeup of the cosmos, mass extinctions,
determining the age of rocks, the history of scientific discoveries and methods,
dinosaurs, impact crater anatomy, why Pluto isn’t a planet anymore, why
meteoroids as we call them often aren’t really meteoroids, and much more. At the
end of the book the author then circles back to dark matter and some of the
recent theories and discoveries, tying all of the chapters together.</p>
<p>I have to admit that occasionally while reading the book I was fascinated by
the things I was learning while at the same time not really knowing what it
has to do with dark matter. Nevertheless I thoroughly enjoyed this book and the
things I learned reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/randall-darkmatteranddinosaurs-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Shafts]]></title>
    <published>2020-08-27T00:00:00Z</published>
    <updated>2020-08-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/shafts-08-2020/</id>
    <content type="html"><![CDATA[<p>Shafts! More sketching while watching TV. It’s incredibly hard to capture
their cool.</p>
<p>#art #sketch #sketchbook #watercolor #painting #drawing #tonedpaper #shaft
#richardroundtree #samuelljackson #tv #movie #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/shafts-08-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Roidrage]]></title>
    <published>2020-08-17T00:00:00Z</published>
    <updated>2020-08-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/roidrage-08-2020/</id>
    <content type="html"><![CDATA[<p>Because art should be fun here’s a hyper realist painting of
<a href="https://paperplanes.de">@roidrage</a> in celebration of his recent forays into
woodworking and the forest.  Also I never thought „shirtless lumberjack“ would
end up in my search history.  Done in watercolor and gouache on Clairefontaine
mixed media paper.</p>
<p>#art #painting #sketch #watercolor #gouache #woodworking #lumberjack #axe
#instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/roidrage-08-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Goblin of Man]]></title>
    <published>2020-08-03T00:00:00Z</published>
    <updated>2020-08-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/goblin-08-2020/</id>
    <content type="html"><![CDATA[<p>Goblin of Man. With apologies to René Magritte. Lunch time sketch with ink and
watercolor.</p>
<p>#art #sketch #sketchbook #painting #drawing #sketching #watercolor #ink
#tonedpaper #brushpen #greengoblin #normanosborn #marvel #spiderman #comicart
#instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/goblin-08-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Dr. Doom]]></title>
    <published>2020-07-14T00:00:00Z</published>
    <updated>2020-07-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/dr-doom-07-2020/</id>
    <content type="html"><![CDATA[<p>Watercolor painting practice. Reference is one of my favorite panels from the
2015 Secret Wars storyline. It’s such a peak Doom move. Original art by Esad
Ribić with colors by Ive Svorcina.</p>
<p>#art #sketch #sketchbook #painting #watercolor #comicart #drdoom
#victorvandoom #esadribic #ivesvorcina #secretwars #marvel #thanos #avengers
#instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/dr-doom-07-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Clark Kent]]></title>
    <published>2020-07-10T00:00:00Z</published>
    <updated>2020-07-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/clark-kent-07-2020/</id>
    <content type="html"><![CDATA[<p>Christopher Reeve inspired Clark Kent for a coworker. Ink and watercolor.
Swipe for some process pics.</p>
<p>#art #sketch #sketchbook #ink #watercolor #drawing #painting #clarkkent
#superman #christopherreeve #dccomics #comicart #movieart #movies #instaart
#moleskine #fabercastell #royaltalens</p>
]]></content>
    <link href="https://unwiredcouch.com/art/clark-kent-07-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Abby]]></title>
    <published>2020-07-09T00:00:00Z</published>
    <updated>2020-07-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/abby-07-2020/</id>
    <content type="html"><![CDATA[<p>Abby from Last of Us 2. I don’t think I have the nerves to actually play it.
But all the cool art happening around this game is really incredible. These
are some fountain pen and white watercolor pencil sketches done while watching
TV.</p>
<p>#art #sketch #sketchbook #fountainpen #ink #watercolorpencils #tonedpaper
#strathmorepaper #abby #lastofus2 #videogames #videogameart #instaart #drawing
#sketching</p>
]]></content>
    <link href="https://unwiredcouch.com/art/abby-07-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Batman]]></title>
    <published>2020-07-01T00:00:00Z</published>
    <updated>2020-07-01T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/batman-07-2020/</id>
    <content type="html"><![CDATA[<p>Michael Keaton Batman. This was a super quick end of the day sketch with a
fountain pen, brown ink, and a white watercolor pencil on scrapbook paper.</p>
<p>#art #scrapbook #sketchbook #sketch #drawing #pen #ink #fountainpen #batman
#michaelkeaton #dccomics #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/batman-07-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Magik]]></title>
    <published>2020-06-26T00:00:00Z</published>
    <updated>2020-06-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/magik-06-2020/</id>
    <content type="html"><![CDATA[<p>Practice, practice, practice. A heavily
<a href="https://www.instagram.com/davidyardin/">@davidyardin</a> inspired and referenced
Magik in ink and watercolor in the toned sketchbook.</p>
<p>#art #sketchbook #sketch #drawing #painting #tonedpaper #watercolor #ink
#brushpen #magik #illyanarasputin #xmen #marvel #comicart #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/magik-06-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Bridgeman study]]></title>
    <published>2020-05-21T00:00:00Z</published>
    <updated>2020-05-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/bridgeman-study-05-2020/</id>
    <content type="html"><![CDATA[<p>Anatomy studies after George Bridgman. I’ve been watching the
<a href="https://www.instagram.com/stevehustonartist/">@stevehustonartist</a> course on
sketchbooking on @newmastersacademy and was immediately hooked. I got a
scrapbook and filled my @lamy_official joy with bronze ink. And it’s been very
enjoyable to just experiment and learn and worry less about feeling like I
need to “finish” something.</p>
<p>#art #sketchbook #sketching #anatomy #lamy #lamyjoy #fountainpen #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/bridgeman-study-05-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Spider-Man]]></title>
    <published>2020-05-11T00:00:00Z</published>
    <updated>2020-05-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/spider-man-05-2020/</id>
    <content type="html"><![CDATA[<p>Upside down arachno boy in the PS4 game suit. Ink and watercolor, plus some
watercolor pencils for details. Swipe to see some process pics.</p>
<p>#art #sketch #sketchbook #drawing #painting #ink #watercolor
#watercolorpencils #spiderman #spideysense #comicart #marvel #avengers
#instaart #videogames</p>
]]></content>
    <link href="https://unwiredcouch.com/art/spider-man-05-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Harley Quinn]]></title>
    <published>2020-05-06T00:00:00Z</published>
    <updated>2020-05-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/harley-quinn-05-2020/</id>
    <content type="html"><![CDATA[<p>Harley Quinn based on that one scene in the Suicide Squad movie. Ink and
watercolor on A4 paper. This one took me a while and I had already abandoned
it after pencils because I wasn’t happy with it. But I really like how it
turned out and I’m glad I gave it another try.</p>
<p>#art #sketch #drawing #painting #ink #watercolor #brushpen #harleyquinn
#dccomics #suicidesquad #movieart #comicart #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/harley-quinn-05-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Stratos]]></title>
    <published>2020-04-28T00:00:00Z</published>
    <updated>2020-04-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/stratos-04-2020/</id>
    <content type="html"><![CDATA[<p>Stratos from Masters of the Universe. This is the last page of my first ever
watercolor sketchbook. And it’s been a fun experience. And of course I already
have another one lined up 😆</p>
<p>#art #sketchbook #sketch #drawing #painting #watercolor #ink #stratos #motu
#mastersoftheuniverse #heman #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/stratos-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Wasp]]></title>
    <published>2020-04-24T00:00:00Z</published>
    <updated>2020-04-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/wasp-04-2020/</id>
    <content type="html"><![CDATA[<p>Before bed quick sketch. The Wasp in ink and watercolor. So excited for
Ant-Man 3.</p>
<p>#art #sketch #sketchbook #painting #drawing #ink #watercolor #thewasp
#hopevandyne #marvel #avengers #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/wasp-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Mask]]></title>
    <published>2020-04-22T00:00:00Z</published>
    <updated>2020-04-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/the-mask-04-2020/</id>
    <content type="html"><![CDATA[<p>The Mask</p>
<p>I thought this was gonna be a quick fun sketch. And it turned out to be a pain
for me to get that face to a somewhat not creepy state. Still fun. That movie
was one of my favorites as a kid.</p>
<p>#art #sketch #sketchbook #ink #watercolor #mask #themask #jimcarrey #movies
#instaart #stayhome #moleskine #drawing #painting</p>
]]></content>
    <link href="https://unwiredcouch.com/art/the-mask-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Tifa Lockhart]]></title>
    <published>2020-04-21T00:00:00Z</published>
    <updated>2020-04-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/tifa-lockhart-04-2020/</id>
<content type="html"><![CDATA[<p>Tifa doing her Final Heaven Limit move. Yes, I’ve been playing Final Fantasy
VII Remake. Lots of experimenting went into this one and I learned a lot
about inks and watercolor.</p>
<p>#art #sketch #painting #drawing #finalfantasy #tifa #finalheaven #videogames
#instaart #ink #watercolor #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/tifa-lockhart-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Ghostrider]]></title>
    <published>2020-04-18T00:00:00Z</published>
    <updated>2020-04-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/ghostrider-04-2020/</id>
    <content type="html"><![CDATA[<p>Ghostrider in ink and watercolor.</p>
<p>#art #sketch #sketchbook #ink #watercolor #brushpen #ghostrider #johnnyblaze
#marvel #comicart #instaart #stayhome #painting #drawing</p>
]]></content>
    <link href="https://unwiredcouch.com/art/ghostrider-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Lockdown Selfie]]></title>
    <published>2020-04-16T00:00:00Z</published>
    <updated>2020-04-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/lockdown-selfie-04-2020/</id>
    <content type="html"><![CDATA[<p>#selfie</p>
<p>Left the quarantine stronghold today for the weekly-ish grocery shopping tour.</p>
<p>#art #sketch #sketchbook #painting #drawing #watercolor #ink
#vangoghwatercolor #fabercastell #moleskine #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/lockdown-selfie-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Hades]]></title>
    <published>2020-04-12T00:00:00Z</published>
    <updated>2020-04-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/hades-04-2020/</id>
    <content type="html"><![CDATA[<p>Hades. Best Disney villain or best Disney villain? Ink, watercolor, watercolor
pencils. Experimented a bit more with less line art and more painting.</p>
<p>#art #sketch #sketchbook #ink #watercolor #watercolorpencils #hercules #hades
#disney #cartoons #instaart #stayhome #drawing #painting</p>
]]></content>
    <link href="https://unwiredcouch.com/art/hades-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Megavolt]]></title>
    <published>2020-04-10T00:00:00Z</published>
    <updated>2020-04-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/megavolt-04-2020/</id>
    <content type="html"><![CDATA[<p>Megavolt. Classic Darkwing Duck villain. Ink and watercolor. It’s fun to do
these tiny sketches of childhood cartoon characters to play around with the
medium.</p>
<p>#art #sketch #sketchbook #drawing #painting #ink #watercolor #darkwingduck
#megavolt #elmosputterspark #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/megavolt-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Count Duckula]]></title>
    <published>2020-04-10T00:00:00Z</published>
    <updated>2020-04-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/count-duckula-04-2020/</id>
    <content type="html"><![CDATA[<p>Another childhood cartoon sketch from last night. Count Duckula, ink and
watercolor.</p>
<p>#art #sketch #painting #drawing #ink #watercolor #saturdaymorningcartoons
#countduckula #instaart #stayhome</p>
]]></content>
    <link href="https://unwiredcouch.com/art/count-duckula-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Darkwing Duck]]></title>
    <published>2020-04-08T00:00:00Z</published>
    <updated>2020-04-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/darkwing-duck-04-2020/</id>
    <content type="html"><![CDATA[<p>Post breakfast quick sketch this morning. Darkwing Duck in ink and watercolor.
I really wanted to give this sketch card sized @hahnemuehle_global bamboo
paper a try ever since I got it a couple of months ago. And it was definitely
different than I expected but a lot of fun.</p>
<p>#art #sketch #sketchcard #ink #watercolor #darkwingduck #letsgetdangerous
#zwoeinsrisiko #instaart #stayhome #drawnearlyeveryday</p>
]]></content>
    <link href="https://unwiredcouch.com/art/darkwing-duck-04-2020/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Accelerate: Building and Scaling High-Performing Technology Organizations]]></title>
    <published>2020-03-23T00:00:00Z</published>
    <updated>2020-03-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/forsgren-accelerate-2018/</id>
<content type="html"><![CDATA[<p>This has come up in various discussions again and again so I finally decided
to read it. I didn’t really enjoy reading it much. It’s a good book and makes a
lot of good points, though also a couple I disagree with. But having worked so
much in a DevOps context, the book didn’t offer me anything new. If anything,
some of it is the opposite of my experience, which made it even harder to keep
reading.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/forsgren-accelerate-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Sapiens: A Brief History of Humankind]]></title>
    <published>2020-03-08T00:00:00Z</published>
    <updated>2020-03-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/harari-sapiens-2011/</id>
<content type="html"><![CDATA[<p>Usually I tend to shy away from longer (i.e. over 300 pages) books because I
feel like I’m a somewhat slow reader. They take me too long to get through and
I feel demotivated to pick them up. Sapiens still really piqued my interest
and it absolutely delivered.</p>
<p>The book is structured in chapters about the cognitive revolution, the
agricultural revolution, the unification of humankind, and the scientific
revolution. I’d never thought about historical periods in those terms and it
was interesting to see them presented that way. The book covers the usual
things like the evolution of Homo Sapiens with a bigger brain than other
species, the early periods as hunter gatherers, and the emergence of
agriculture and the settling down that came with it.</p>
<p>The book also discusses cultural things like a common belief (e.g. religion or
capitalism) being a necessary establishment to allow for communication (and
collaboration to some extent) across larger territories than just family or
tribe boundaries. For any evolutionary step the trade-offs in those changes are
also discussed. Especially the cruel(-er) aspects of human dominance on the
planet.</p>
<p>I learned a lot reading this book and it definitely made me think about history
and human evolution from other perspectives than usual.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/harari-sapiens-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Drive: The Surprising Truth About What Motivates Us]]></title>
    <published>2020-02-14T00:00:00Z</published>
    <updated>2020-02-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/pink-drive-2009/</id>
    <content type="html"><![CDATA[<p>I really liked especially the first part of the book that focuses on the
science and research of motivation. It was very interesting to read about the
history of behavioral psychology and how in various stages scientists ran
different experiments to understand the psychology of motivation.</p>
<blockquote>
<p>Wikipedia’s triumph seems to defy the laws of behavioral physics</p></blockquote>
<p>One of the more prominent examples of intrinsic motivation in the book is
contribution to open source. At that part it shows that the book was written
over 10 years ago. Because the discussion of open source is solely focused on
the positive effects of honing skills, furthering one’s career through the
work, and the intrinsic “feel good” motivation of giving back and contributing.
However over the last 10 years the problem of burnout among open source
maintainers has become a major topic that is completely absent here. And there
is also no real acknowledgement of maintainers vs contributors and the
difference in demand, work, and rewards. But I don’t think that detracts from
the points the book is making; it’s merely something I immediately jumped to
when reading these passages as some missing nuance in the argument.</p>
<blockquote>
<p>&hellip; rewards can perform a weird sort of behavioral alchemy: They can transform an interesting task into a drudge. They can turn play into work</p></blockquote>
<p>Large parts of the book then deal with the detrimental effects of the “carrot
and stick” approach to management. It was interesting to read about flow state,
rewards, and how - depending on how they are used - they can be actually
detrimental to motivation. This part contained an interesting short lesson in
management history and that the idea of creating space for autonomy for workers
has come up a couple of times in the past but never got a lot of traction. And
that the newer way of granting autonomy and letting workers have more agency
over their work is improving motivation. However there are also a couple of
dangerous ideas the author arrives at, which can easily be read as management
being the problem, rather than the fact that it is a craft that needs to be
honed just like any other work:</p>
<blockquote>
<p>Perhaps management is one of the forces that’s switching our default setting and <em>producing</em> that state [of passive inertia]</p></blockquote>
<p>Finally there is a discussion about mastery and putting effort into something
that I found very interesting based on Carol Dweck’s - a psychology professor
at Stanford - research. The discussion of mastery being a mindset rather than a
set goal resonated a lot with me. Especially the idea that intelligence is not
a set entity within people but a resource that can be increased via learning.</p>
<blockquote>
<p>That is the nature of mastery: Mastery is a mindset</p></blockquote>
<p>And finally the idea that mastery is an infinite pursuit. That even if you
spend a lifetime doing something, you can never truly master something. There
will always be more to learn, more to practice, more to &hellip; master.</p>
<p>Overall I really enjoyed the book. There are definitely parts that haven’t
aged quite so well over the last 10 years, especially given all the things
we’ve learned from tech startups left and right misunderstanding the importance
of structure, management, and Human Resources. But there were many parts that I
really enjoyed reading and learned a bunch of things from.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/pink-drive-2009/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Physics and Philosophy: The Revolution in Modern Science]]></title>
    <published>2020-02-09T00:00:00Z</published>
    <updated>2020-02-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/heisenberg-physicsandphilosophy-1958/</id>
    <content type="html"><![CDATA[<p>This book was just plain amazing. I didn’t know what to expect when I found
this at the book store and I was reading books about (quantum) physics at the
time, so I was curious to see that Heisenberg had written a book. A lot of the
(quantum) physics details went over my head. But I loved the discussion of its
implications on &ldquo;reality&rdquo; and the philosophy of being. It’s an extremely dense
read but I got so much out of it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/heisenberg-physicsandphilosophy-1958/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[First Aid for your Child&#39;s Mind: Simple steps to soothe anxiety, fears and worries]]></title>
    <published>2020-01-28T00:00:00Z</published>
    <updated>2020-01-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/eaton-firstaidchildsmind-2019/</id>
    <content type="html"><![CDATA[<p>I also found this one by accident while looking through the book store. It was
interesting to read a psychologist’s view on dealing with anxiety in kids. And
it turned out to be much more interesting and useful for its explanations of
how to behave towards children in general. The exercises are a bit too involved
for me to want to try them. But I can see them being useful.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/eaton-firstaidchildsmind-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Weisse Nächte (Vollständige Deutsche Ausgabe)]]></title>
    <published>2020-01-24T00:00:00Z</published>
    <updated>2020-01-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dostoyevsky-weissenaechte-1848/</id>
    <content type="html"><![CDATA[<p>I decided to give this book a try after poking around at the local book store.
I realized I’ve never read a single Dostoyevsky. And I sure wasn’t gonna read
“The Brothers Karamazov” just to give it a try and see how I like it. After some quick
googling it seemed like “Weiße Nächte” was one of his fairly popular works,
and they had it in the <a href="https://en.wikipedia.org/wiki/Reclam">Reclam</a> version which is about 4 Euros. So not a
huge investment.</p>
<p>So I read the whole thing with not a lot of expectations. And overall I didn’t
hate it. It was slightly obnoxious at times, and Dostoyevsky seems to
definitely have a thing for drama. Plus it has all the problems you can expect
from a man during the 19th century when it comes to respecting women. The
ending made up for a lot though.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dostoyevsky-weissenaechte-1848/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[A Field Guide to Getting Lost]]></title>
    <published>2020-01-22T00:00:00Z</published>
    <updated>2020-01-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/solnit-fieldguidetogettinglost-2006/</id>
<content type="html"><![CDATA[<p>This was another purchase while rummaging at the book store. I&rsquo;ve read
Solnit&rsquo;s &ldquo;Men Explain Things to Me&rdquo; a couple of years ago. And since I really
enjoyed that one, I wanted to read another one of her works. I&rsquo;m not usually
one for essays so it was definitely a different experience from reading more
science related books.</p>
<p>I thoroughly enjoyed reading &ldquo;A Field Guide to Getting Lost&rdquo; even though (or
maybe because of it?) I often found myself getting lost in an essay and
wondering what point she&rsquo;s trying to bring across. It was also at times
deeply emotional for me to read as I was reminded of personal loss (or the
fear of it) by many of her essays.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/solnit-fieldguidetogettinglost-2006/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Digital Minimalism: Choosing a Focused Life in a Noisy World]]></title>
    <published>2020-01-18T00:00:00Z</published>
    <updated>2020-01-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/newport-digitalminimalism-2019/</id>
    <content type="html"><![CDATA[<p>It took me quite a while after buying this book to find the excitement to
actually read it. Which had nothing to do with the book itself but the fact
that I was already sold on the concept of &ldquo;Digital Minimalism&rdquo; and thought
there wouldn’t be new or interesting things for me in there.</p>
<p>However once I started reading the book I immediately got sucked in and really
enjoyed it. The book is very well researched and there are a lot of references
to other works and research that are interesting to follow up on. It also
contains a good mix of background information and actionable things to do for
minimizing digital factors in one&rsquo;s life. That makes it really easy (and even
creates the desire) to give the minimization exercises a spin.</p>
<p>My fear of already being sold on the concept and the book thus feeling like
it was preaching to the choir was completely unfounded. If anything I got a
confirmation and reaffirmation of my own practices of minimizing digital
influences on my life.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/newport-digitalminimalism-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Watercolor Artist&#39;s Bible]]></title>
    <published>2020-01-11T00:00:00Z</published>
    <updated>2020-01-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/scott-watercolorartistsbible-2009/</id>
    <content type="html"><![CDATA[<p>A while into learning how to draw I realized that watercolor is a lot more fun
and versatile than I thought it was from my experience using it in art class
in school. So I researched around and found this book recommended as a sort of
reference book for watercolor artists. And it’s exactly that. The book gives a
great overview of all the different aspects and techniques of using
watercolor. Reading it gave me a much better understanding of the medium and I
still refer back to it to look some things up occasionally.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/scott-watercolorartistsbible-2009/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Art of Logic in an Illogical World]]></title>
    <published>2020-01-11T00:00:00Z</published>
    <updated>2020-01-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/cheng-artoflogic-2018/</id>
<content type="html"><![CDATA[<p>This book gives a thorough introduction to logic and how to apply it. And I
really enjoyed the discussion of what kinds of things it can be applied to, how
to build a logical structure and argument, and what the limits of a logical
argument are.</p>
<p>I especially enjoyed the discussions of axioms as a kind of personal value
system. And thus the root of any logical reasoning that you apply to
arguments. This was the part for me that really tied the theoretical approach
of logic and the emotional and moral foundations of one’s own beliefs
together.</p>
<p>Overall however I wanted this book to be better. While I really enjoyed the
logic part, there was one thing that made it really hard for me to get into a
flow while reading. The author kept jumping between theory and examples way
too frequently for my taste. She also rarely used lighter &ldquo;introductory&rdquo;
examples. But most of the time jumped straight into heavy examples around
racism, sexism, and other topics like that. Which made it much harder for me
to get an understanding of the logical foundations as I was trying to keep the
theoretical ideas in my head while also dealing with the emotional reaction to
such heavy examples. I don&rsquo;t think those examples have no place in the book
at all; rather, for me it would have worked better to reserve the heavy
examples for the end. At that point I would have familiarized myself with the
concepts that got introduced, applied them to some simple examples, and could
have focused on a more trying example as a completion of the topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/cheng-artoflogic-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Fallacy of needing a technical manager]]></title>
    <published>2019-11-19T00:00:00Z</published>
    <updated>2019-11-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2019/11/19/technical-manager.html</id>
<content type="html"><![CDATA[<p>One of the things I’ve come across most in my career in technology (and one that still persists) is the belief that as an engineer, your manager (or coach) needs to be fairly technical as well. Or at least that it’s a huge advantage if they are. And even though this might seem logical at first, the times I’ve actually seen it be an advantage were edge cases. In most other cases it’s been not particularly helpful, or even harmful.</p>
<p>First let&rsquo;s get this part out of the way: here are the scenarios where I’ve seen having a technical manager be a big advantage:</p>
<ol>
<li>As a very junior engineer without any mentorship setups in place</li>
<li>During onboarding where I didn’t have a good lay of the land or relationships to lean on to ask about system overviews yet</li>
</ol>
<p>However these situations are edge cases to me which don’t pose actual problems. A good non-technical manager in this scenario will just make an introduction to an engineer who knows the answers and is willing to mentor a junior engineer or new hire. If there isn’t such an engineer, then there are much bigger problems in an organization than technical vs non-technical managers.</p>
<p>And this actually ties into the main reason why a technical manager is a silly requirement in my mind: As an engineer you&rsquo;re already surrounded by a ton of people with deep technical knowledge (i.e. all the other engineers). It&rsquo;s almost impossible to have a problem or question and not have someone around who can help.</p>
<p>However it&rsquo;s much less likely that there&rsquo;s someone around to help you with non-technical problems. How to structure work into plannable pieces, how to give feedback, how to put your work into the larger context, how to improve the team, etc, etc. And these topics are much less special and unique to technology, and require much less of a strong technology background, than most people think. In a healthy organization the senior engineers will also be able to help with this to some extent. But they will also need to have learned it from someone. And they need an avenue to improve their non-technical skills.</p>
<p>And as you become more and more senior, being able to excel in these non-technical areas becomes more and more crucial to being an effective engineer. With seniority comes problem solving on a larger scale that needs more breadth in impact and much less depth. Focusing on a specialist skillset of just technology will increasingly limit your ability to do this important kind of work. And at some point the only way to improve is working on your non-technical skills, for which you need a good non-technical manager.</p>
<p>This doesn&rsquo;t mean that technical managers can&rsquo;t teach you these kinds of things or are necessarily bad at it. But in my experience, when you have technical managers who are good at it, that&rsquo;s the case despite their technical background and not because of it. It&rsquo;s way too common that the technical background actually gets in the way. That they consider themselves more as used-to-be-engineers than managers. That they tie every conversation back to a technical topic and focus on that instead of the report&rsquo;s actual growth. And at worst they try to still write code themselves and debate technical decision making instead of taking a step back and letting others grow on those challenges. And don&rsquo;t get me wrong, I know many great technical managers that don&rsquo;t do these kinds of things and are great at their jobs. But what I value about them is exactly that. They are managers first.</p>
<p>Requiring a strong technical skillset of managers is also a huge waste of resources. Given that a manager essentially ends up with two jobs that way, they will struggle to do a good job at at least one of them (more likely both). They constantly have to shift focus from the management side of things to try and stay up to date on technical topics they will never really have hands-on experience with anyway. That takes time and energy away from their actual job: managing. And nobody really enjoys having a conversation with a manager who hasn&rsquo;t written production code in years yet still engages in technical problem solving and decision making conversations late at night because they think they also have to keep their technical chops up to date. All while clouding their view of the higher level context because they get caught up in technology details.</p>
<p>And it&rsquo;s not only about the manager. It already starts at hiring. Requiring technical aptitude from managers means you waste precious interview time on technical questions that are even less related to their actual work than the ridiculed invert-a-binary-tree whiteboard exercises every engineer groans about. Instead you could use all that time to talk extensively about management topics. And about how they will help grow teams and organizations.</p>
<p>At the end of the day management is a lot more removed from the actual day to day practice of writing code than most people think. And the problems to solve and challenges to tackle for managers at technology companies are much less unique to technology than we all like to think. A good manager (regardless of technical background) is able to recognize that and work with engineers to make them better by complementing the skills they already learn every day while doing <em>their</em> main job.</p>
]]></content>
    <link href="https://unwiredcouch.com/2019/11/19/technical-manager.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[A Briefer History of Time]]></title>
    <published>2019-11-11T00:00:00Z</published>
    <updated>2019-11-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hawking-brieferhistoryoftime-2006/</id>
    <content type="html"><![CDATA[<p>I decided to read this new edition of “A Brief History of Time”, which I had
started but never finished. I really enjoyed it even though most of it went
totally over my head. I read this book on the Kindle and mostly before going
to sleep. Which is not a good combination for reading a book with such an
intense topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hawking-brieferhistoryoftime-2006/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Sketch Every Day: 100&#43; simple drawing exercises from Simone Grünewald]]></title>
    <published>2019-10-17T00:00:00Z</published>
    <updated>2019-10-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/gruenewald-sketcheveryday-2019/</id>
    <content type="html"><![CDATA[<p>I absolutely loved this book. For me it was more a book to flip through
and read individual chapters of than one to read front to back. The chapters
cover the author’s creative journey, some general advice for artists, art
fundamentals, character design, and even a chapter on managing family life
while also making time and space for art. Every chapter and its subsections
contain a ton of very helpful examples, all drawn in Simone Grünewald’s
well-known style. So it’s easy to see how to approach faces, noses, hair, old
and young people, and even flora and fauna.</p>
<p>The whole book is absolutely fun to read and I can highly recommend it for
learning and inspiration.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/gruenewald-sketcheveryday-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Batwoman]]></title>
    <published>2019-10-01T00:00:00Z</published>
    <updated>2019-10-01T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/batwoman-10-2019/</id>
    <content type="html"><![CDATA[<p>Batwoman! Not sure I’ll keep up 2 sketches per day this month, but I saw
@artgerm’s prompt list and this one was too good not to do. Also first time
using my @illosketchbook and it’s wonderful. Sketch is inspired by an
illustration by Alex Pascenko.</p>
<p>#art #sketch #sketchbook #dailysketch #inktober2019 #arttrober2019 #inktober
#batwoman #dccomics #illosketchbook #copicmarkers #brushpen #ink #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/batwoman-10-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Coffee Nerd: How to Have Your Coffee and Drink It Too]]></title>
    <published>2019-09-23T00:00:00Z</published>
    <updated>2019-09-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/brown-coffeenerd-2014/</id>
    <content type="html"><![CDATA[<p>Really enjoyed the book. I’ve been nerding out on coffee for a couple of years
now and I’ve definitely learned a bunch of new things through the book. And
it’s absolutely entertainingly written. Definitely recommended if you’re into
coffee.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/brown-coffeenerd-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Calm Technology: Designing for Billions of Devices and the Internet of Things]]></title>
    <published>2019-09-14T00:00:00Z</published>
    <updated>2019-09-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/case-calmtechnology-2015/</id>
    <content type="html"><![CDATA[<p>I was pretty excited about this book. I used to follow Amber Case on social
media and she did some interesting early IoT stuff. But the book is written
with mostly anecdotal examples and not many directly practical takeaways. It
also has some confusing layout quirks that occasionally make it hard to read.
But content-wise it&rsquo;s pretty good if you’re new to the topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/case-calmtechnology-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Tree of Yoga]]></title>
    <published>2019-09-02T00:00:00Z</published>
    <updated>2019-09-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/iyengar-treeofyoga-1988/</id>
    <content type="html"><![CDATA[<p>I didn’t much enjoy the book, which might be due to expecting something
different. I didn’t have strong expectations, but I wanted some more
background on where yoga comes from and what else is part of it, to enhance
my own yoga (and maybe meditation) practice. And I’ve definitely learned a
couple of things from the book. But those could have been summarized in 10
pages. The rest of the time I felt the book was mostly repetitive, rambling,
almost condescending at times, and even occasionally dangerously wrong (e.g.
there’s a section claiming that inverted poses while menstruating can lead to
cancer).</p>
<p>There are definitely good things in there about balance, not seeing yoga
practice as a competition, listening to your body, taking time, etc. But as a
fairly scientific person it’s hard to read over the parts where the last 100
years of modern medicine are ignored.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/iyengar-treeofyoga-1988/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency]]></title>
    <published>2019-08-27T00:00:00Z</published>
    <updated>2019-08-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/demarco-slack-2001/</id>
    <content type="html"><![CDATA[<p>I really enjoyed this book. As someone who was not actively part of the
dotcom era, and has mostly been told about the bad sides of that time, I found
it refreshing to read something from that period full of what are still
essentially progressive ideas today. Slack as the part of work where
innovation happens, versus the always-on, always-busy culture, is something
organizations can still learn heaps from today. Definitely recommend reading
it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/demarco-slack-2001/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Managing Oneself (Harvard Business Review Classics)]]></title>
    <published>2019-08-20T00:00:00Z</published>
    <updated>2019-08-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/drucker-managingoneself-2008/</id>
    <content type="html"><![CDATA[<p>It’s really short, more of an essay really, so there’s no reason not to read
it. It gives a couple of really good and fairly practical tips about what to
find out about oneself in order to be effective and successful.</p>
<p>One thing that definitely struck me as interesting was Drucker’s point
about taking notes and writing things down in general:</p>
<blockquote>
<p>If I don’t write it down immediately, I forget it right away. If I put it
into a sketch book, I never forget it and I never have to look it up again</p></blockquote>
<p>I’ve had this experience so frequently, especially as a kid in school: when
I wrote something down (e.g. the day’s homework) I rarely needed to look it up
again but remembered what I had to do. Because I would often remember it
either way, I sadly drew the wrong conclusion (namely that writing things down
isn’t worth the hassle). Which stifled my note-taking skills and a lot of my
learning later on. It wasn’t until a couple of years ago that I realized it
was precisely the process of writing things down that made me memorize my
homework.</p>
<p>A good part of the book also is about figuring out which version of yourself
you want to be. Which I think is a really important aspect of maturing as a
person and figuring out why you’re even trying to get better:</p>
<blockquote>
<p>That is the mirror test. Ethics requires that you ask yourself, what kind of person do I want to see in the mirror in the morning?</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/drucker-managingoneself-2008/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech]]></title>
    <published>2019-08-19T00:00:00Z</published>
    <updated>2019-08-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/wachterboettcher-technicallywrong-2017/</id>
    <content type="html"><![CDATA[<p>This is a fantastic book. It walks through many different facets of how
interfaces cause people to have a hard time instead of being useful, and looks
at many different factors contributing to the status quo. I’ve definitely
spent a lot of time educating myself on the topic before, and while I already
knew about some things, the book provided a great mix of new information,
different viewpoints, and reminders. I wholeheartedly recommend this to anyone
working in tech.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/wachterboettcher-technicallywrong-2017/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Resilient Management]]></title>
    <published>2019-08-11T00:00:00Z</published>
    <updated>2019-08-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hogan-resilientmanagement-2019/</id>
    <content type="html"><![CDATA[<p>I’ve known Lara for years and have also been part of her organization for a bit
when we worked at Etsy together. And over the years we have talked extensively
about management and leadership. So I’ve had a certain idea of what I could
expect from her book. And the book absolutely delivered!</p>
<blockquote>
<p>As a manager, one of your primary jobs is to foster a foundation of trust on your team</p></blockquote>
<p>In the book Lara gives invaluable insights about the human side of management.
About the fact that as a manager you are tasked with making a group of humans
work together. And that only works if you understand what humans value, how
they tick, and if you’ve put the work in to build and foster a foundation of
trust on your team.</p>
<p>This common thread of understanding humans and using that to be effective at
work (and, to be honest, also outside of it) guides you through the whole book.
Be it mentoring, coaching, sponsoring, providing (and receiving) feedback, or
communication: every chapter is written with humans in mind. And that is what
makes this such an outstanding book. Throughout the chapters you will learn how
to meet your team, grow your team, set expectations, communicate effectively,
and build resilience. Every chapter is to the point, clearly written, and
extremely actionable.</p>
<p>This book is a must-have regardless of whether you are a manager, want to
become one, report to a manager, or just generally work with other humans.
Lara Hogan puts down so many useful insights and wonderfully helpful guiding
questions and suggestions that I don’t think there is anyone who wouldn’t
benefit from reading it. Plus it’s only about 100 pages long. You can literally
start reading this book on Friday and be better equipped to do your job on
Monday.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hogan-resilientmanagement-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Organized Mind: Thinking Straight in the Age of Information Overload]]></title>
    <published>2019-08-09T00:00:00Z</published>
    <updated>2019-08-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/levitin-organizedmind-2015/</id>
    <content type="html"><![CDATA[<p>This book blew my mind (pun intended). There is so much neuropsychology in
there that explains so many things about the brain. So many things I always
wondered about make so much more sense now. I don’t know if it made me more
organized, but I have a whole new perspective on attention, memory, sleep,
categorization, and so much more. This is one of those “game changer” books
for me. I have been approaching many things in daily life, and even in my
inner, emotional life, very differently since reading the book. From how
remembering and recalling things works (as far as we currently know), to how
sleep impacts your brain, how humans categorize things (and why you will
always end up with a “junk drawer”), to when not to trust your gut. It’s a
book I want to read a second time because I don’t feel like I’ve fully taken
it in yet.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/levitin-organizedmind-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Pen and Paper]]></title>
    <published>2019-07-05T00:00:00Z</published>
    <updated>2019-07-05T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2019/07/05/pen-and-paper.html</id>
    <content type="html"><![CDATA[<p>A couple of years ago I went from working in an office to working remote mostly
from home. A couple of months in I realized how my productivity had dropped
significantly. For years everything I had to do and most of the planning around
it had lived in <a href="https://www.omnigroup.com/omnifocus" title="Omnifocus">OmniFocus</a>. I
have even written <a href="https://unwiredcouch.com/2014/05/13/omnifocus.html" title="Omnifocus post">about it
before</a>.
For the rest of planning and notes I kept a handful of markdown files in a git
repo, held together by Makefiles and <a href="https://github.com/mrtazz/vim-plan" title="vim-plan plugin">a vim
plugin</a>. But now it
didn’t work for me anymore. I kept opening OmniFocus just to find myself
aimlessly clicking and sorting things around. I redid the layout of my
perspectives again. Restructured all the GTD contexts and areas of focus. But
nothing actually changed. Looking at the app it just blurred with all the other
open windows. All the other apps. It became kind of meaningless. I realized
that with 100% of work and interactions happening on my screen now, everything
felt the same to me. I was unable to focus on what I wanted to do. Planning was
an app switch away from coding, which was an app switch away from meetings,
which was an app switch away from my todos. There were many times when I caught
myself cycling from one thing to another a couple of times within minutes. My
attention was completely shot. Additionally I had so many OmniFocus
integrations set up, pulling in my JIRA tickets, my assigned code reviews, and
even emails I needed to reply to at some point. The longer I wasn&rsquo;t using
OmniFocus the more it got cluttered with things that needed filing. Instead of
helping me get organized it did the opposite. I had over-engineered OmniFocus,
having succumbed to the idea that I&rsquo;d be more productive the more I
automated and fine-tuned it.</p>
<h2 id="trying-something-new-ish">Trying something new(-ish)</h2>
<p>I needed to change things up. And the solution for this couldn&rsquo;t be another
app. It needed to be different. And it turns out this is a pretty normal thing
for humans. We link memories (which things to remember to do basically are) to
locations via the hippocampus.</p>
<blockquote>
<p>This is the reason it&rsquo;s important to have a designated place for each of our belongings - the hippocampus does the remembering for us if we associate an object with a particular spatial location.</p>
<p class="cite">
&mdash; <cite>Daniel Levitin, The Organized Mind (p. 91)</cite>
</p></blockquote>
<p>I’ve been carrying a Moleskine notebook with me since early 2008. Early on I
had already used it as a todo organizer before switching to Things and
eventually Omnifocus. I’ve used it on and off for random things (rarely enough
for it to last 10 years). And it’s been the testing ground every time I wanted
to get back to taking more analog notes. I’ve also backed the <a href="https://unwiredcouch.com/2015/03/18/spark-notebook-omnifocus.html" title="Spark Notebook post">Spark
Notebook</a> on Kickstarter and used that with a lot of success for a
while. So when I was looking to change things up from my digital routine I
remembered having read about the <a href="https://bulletjournal.com/" title="Bullet Journal method">Bullet Journal
method</a> and decided to give
it a try.</p>
<h2 id="getting-started-with-a-bullet-journal">Getting started with a Bullet Journal</h2>
<p>To get set up I started by reading the website and watching the canonical
intro video linked from there. But being used to my elaborate GTD setup I
wasn’t convinced that such a minimalist approach would work for me. I read a
lot of fairly popular posts on getting started with bullet journaling from
websites like <a href="https://littlecoffeefox.com/" title="Little Coffee Fox">this</a> and <a href="https://www.tinyrayofsunshine.com" title="Tiny Ray of Sunshines">this
one</a> and a ton of other blog posts to understand how different people use
it. And then I bought a new notebook and some pens and started my own.</p>
<p>And I absolutely overdid it. I used a ton of color and differently sized pens
to denote headlines, priorities, etc. I had 2 different systems (dot stickers
and sticky labels) to denote important pages. And I added a ton of modules and
collections like trackers for workouts, meditation, water intake, and reading
time. I had very elaborate monthly and weekly spreads, trying to recreate the
organizational cockpit that I always wanted OmniFocus to be. I put way too many
things to do in, areas of focus with color coded headings, and complicated time
blocking details. My daily spread had a <a href="https://medium.com/rohdesign/the-daily-plan-bar-357972361096" title="Rohdesign Daily Plan bar">daily plan
bar</a> that included all my meetings and time blocks for the day. My
weekly spreads were just as complicated and stuffed, at some point even
including which days to take out the trash. All of this brought me to up to an
hour of just setting up my page to get started for the day. All to combat the
feeling of not getting things done and falling off the wagon again.</p>
<p>Of course once the initial excitement had worn off I fell back into seeing
maintaining this complicated thing as a chore and neglected it. And I ran into
the same problem I&rsquo;d had with OmniFocus: a layout that was very tuned to
my workdays. On weekends or when I was on vacation it wasn’t useful, and I
hardly interacted with the journal. Leaving me again with the guilt of
“having fallen off”. One important difference though: on those weekend days
and during time off, when I couldn’t be bothered to get into my complicated
setups, the times I did use the journal resembled the original idea of the
Bullet Journal a lot more. And instead of giving up and switching back to
OmniFocus, I stuck with it.</p>
<h2 id="what-my-bullet-journal-actually-looks-like-now">What my Bullet Journal actually looks like now</h2>
<p>One of those vacations was at the end of last year. During that time I reduced
my usage of the journal to basically only a weekly spread, mostly because there
wasn&rsquo;t much to keep track of. And I realized it still worked for me. I still
put all my todos and appointments in there. And it adapted to the difference in
usage wonderfully. I was also about to start my third Bullet Journal, having
journaled more than twice as much as in the previous 10 years combined. Now
that I believed the approach could work for me, I bought the official <a href="https://bulletjournal.com/pages/book" title="Bullet
Journal Book">Bullet Journal book</a> to learn more about the ideas and philosophies behind it.
And aside from all the other interesting things in the book, the thing that
really changed the way I thought about it was that it&rsquo;s still supposed to be
more of a journal than a GTD system.</p>
<p>After finishing the book I slimmed my Bullet Journal down to the useful bare
essentials. I kept the original monthly layout I had already been using but
stripped the monthly task list down to a literal list instead of different
areas with colored headlines. The 2-page weekly spread turned into a single
page of tasks I want to get done over the course of the week. And the daily
spread is no longer a plan bar for a meticulously planned out day. It now just
starts with the date headline and serves 90% as a journal for recording the day
rather than a pre-planned skeleton of how I think the day will go. One of the
big reasons I kept abandoning the journal was that the day almost never turned
out as planned, making me feel like the journal was less useful.</p>
<p>I kept marking the future log (which for me is the combined
<a href="https://bulletjournal.com/blogs/bulletjournalist/future-log-inspiration" title="Calendex Alistair Hybrid Future Log">calendex/alistair</a> method), monthly and weekly spreads, as
well as important collections with dot stickers. That way I can quickly find
e.g. the page with the last monthly spread if I want to look something up.</p>
<p><img src="/images/pen-and-paper/dot-stickers.jpeg" alt="dot stickers for bookmarks" title="Dot Stickers"></p>
<p>And another big insight from the book was that I now lean on
<a href="https://bulletjournal.com/blogs/bulletjournalist/migration" title="Bullet
Journal Migrations">migrations</a> a lot more than I used to, even though I don’t do daily
migrations anymore. In the morning I scan the last pages for the current week
for things that still need to get done, and if they are a priority I move them
to the current day. That rarely happens though; it&rsquo;s mostly a measure to make
sure I don&rsquo;t forget about priorities. I do weekly and monthly migrations where
I thoroughly go through the pages and migrate items, add additional context,
and put things into the future log (or the topic specific collections for
things like personal, work, apartment, etc. that serve as a sort of backlog).
But otherwise I really just start a new headline every morning and start
journaling.</p>
<h2 id="in-closing">In Closing</h2>
<p>Switching to paper for organizing my todos, thoughts, events, and planning
has been absolutely wonderful for my stress levels and mental health,
especially after trimming the process down to the minimum. I’m no longer
stressing about the perfect setup, but use the journal in the way that makes
the most sense for me in the moment. I still use a reminders list on my phone
for things on the go or when I don’t have the journal with me, to migrate over
later. I’m much more focused and calm about organizing things when I’m able to
close my laptop and just open the journal; it feels much less noisy. Using pen
and paper so much every day also led me to occasionally doodle on pages and
discover my interest in drawing and art, which has been another huge source of
joy for me.</p>
]]></content>
    <link href="https://unwiredcouch.com/2019/07/05/pen-and-paper.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Ben Grimm]]></title>
    <published>2019-06-18T00:00:00Z</published>
    <updated>2019-06-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/ben-grimm-06-2019/</id>
    <content type="html"><![CDATA[<p>Ben Grimm. Arthur Adams inspired.</p>
<p>#art #sketch #sketchbook #dailysketch #dailydrawing #2019draw365 #brushpen
#ink #pigmamicron #bengrimm #thething #thing #fantasticfour #marvel #comicart
#instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/ben-grimm-06-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Reed Richards]]></title>
    <published>2019-06-17T00:00:00Z</published>
    <updated>2019-06-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/reed-richards-06-2019/</id>
    <content type="html"><![CDATA[<p>Reed Richards.</p>
<p>#art #sketch #sketchbook #dailysketch #dailydrawing #2019draw365 #brushpen
#ink #reedrichards #mrfantastic #fantasticfour #stretch #marvel #comicart
#instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/reed-richards-06-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Big Boss]]></title>
    <published>2019-06-07T00:00:00Z</published>
    <updated>2019-06-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/big-boss-06-2019/</id>
    <content type="html"><![CDATA[<p>“Kept you waiting, huh?” Big Boss/Snake from Metal Gear Solid V. Still wish
they had actually gotten the time to finish the game.</p>
<p>#art #sketch #sketchbook #dailysketch #dailydrawing #pigmamicron #brushpen
#mgsv #metalgearsolid #bigboss #snake #videogames #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/big-boss-06-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Jean Grey]]></title>
    <published>2019-05-18T00:00:00Z</published>
    <updated>2019-05-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/jean-grey-05-2019/</id>
    <content type="html"><![CDATA[<p>Finally done with this teen Jean Grey piece I’ve tried to finish for a while
now. Absolutely inspired by @el_vic_ibanez_’s Jean Grey. One of my fav
characters and I loved her standalone run as well as her Generations issue.
Swipe for some process pics and my desk with a @jimlee stream in the
background which always inspires me to just draw.</p>
<p>#art #ink #drawing #pigmamicron #copicmarkers #jeangrey #xmen #marvel
#comicart #instaart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/jean-grey-05-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Factors of Confidence]]></title>
    <published>2019-04-02T00:00:00Z</published>
    <updated>2019-04-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2019/04/02/factors-of-confidence.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve been having a lot of discussions about delivery of software lately and
especially about the deployment part of it. This made me think about the last
couple of years of working on deployment and development tooling and the
approach I take there.</p>
<p>I&rsquo;ve come to view this from a perspective of formulating a hypothesis and
establishing factors of confidence to confirm or refute it. This sounds very
abstract and theoretical at first, but bear with me for a moment. The basis
for all delivery is a change (or patch, diff, commit, change set, whatever you
wanna call it). This change is meant to improve something: add a feature (or
establish the base for one), fix a bug, improve performance, increase
visibility, or just clean up some technical debt. This means you&rsquo;re going to
production to make the world better. However, given the complex nature of the
systems we deploy software to, you won&rsquo;t actually know if your change is a net
positive until it&rsquo;s running in production. And even then you often only know a
couple of hours or even days later. So all you have when you&rsquo;re in front of
your editor writing some code is an idea about what will make the world
better. A hypothesis.</p>
<p>The job of a delivery pipeline now is to help you gain confidence. Confidence
that your hypothesis holds. Or confidence when you have to refute it. However,
all the complex interactions between systems mean you don’t get a single
unified proof that your code is what you want it to be. Your ability to make a
decision about your change is based on many small factors of confidence. And
the delivery pipeline should give you tools along the way to acquire those
factors of confidence in reasonable time and with reasonable effort. It
usually starts with a very quick feedback loop and something akin to a unit
test. You can write them quickly and verify them quickly (individually, that
is; running large numbers of unit tests on CI is still not an easy problem).
You then usually move on to tests that are more expensive, with a longer
feedback loop. Like an integration test. Maybe a QA environment. A staging
environment. Smoke tests in production. Canary deploys. And so on. All of
those things (and this is hardly an exhaustive list) are intended to give you
confidence in something. That your logic is correct, that your code works
well with other API endpoints, that it interacts with other code on the site
in a way that doesn’t break the whole thing, that it doesn’t put too much load
on downstream systems, etc. And ideally all these things in place will give
you a nice set of guardrails that make deploying to production an enjoyable
experience.</p>
<p>However, given these tools are merely a snapshot of your understanding of the
system at the time, and of what confidence was needed then to make a change to
it, the delivery pipeline needs to be constantly maintained and re-evaluated.
Maybe system growth now means that the time it takes to run a large array of
unit tests no longer pays off in the confidence it provides. Or maybe it still
does, and you need to think of a way to make running unit tests faster. Maybe
a new additional service means you now need to add a set of smoke tests.
Whatever it is, the most important thing is that you know <em>why</em> any of
these tools to assert confidence are in place. <em>Who</em> are they for
and <em>what</em> are they telling you? The last couple of years have seen the rise of
a huge number of fantastic delivery systems. Often highly opinionated or
infinitely configurable. Sometimes both. It’s easy to just take one of them and
cargo cult what they bring with them. And if you don’t already have an
established system, this is a fine approach that will certainly leave you with
a better setup than you had before. However, I encourage you to look closely
at what your delivery pipeline is made up of. And what kinds of things it
gives you confidence in. Do you often see failures after deploys because of
surprises in your logic? Maybe you’re missing some unit tests. Are you spending
tons of time on unit tests that essentially only re-test the framework code of
the tool you’re using? Maybe you don’t need those tests and can free up a lot
of engineering time. Whatever it is, your delivery pipeline needs to give you
confidence in changes to <em>your</em> stack. You know best what kinds of things
need to go in there. And spending some time thinking about that will give you
a lot of insight and pay off when it comes to improving your delivery
pipeline. And it’s also tons of fun!</p>
<p>PS: I’ve had many discussions about those things with many people over the
years. And they all helped me figure out how I think about delivery and make
sense of my rambling thoughts. So if you&rsquo;ve ever chatted with me about
deployment and/or delivery, I&rsquo;m extremely grateful you took the time and I
really enjoyed our chat.</p>
]]></content>
    <link href="https://unwiredcouch.com/2019/04/02/factors-of-confidence.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Bullet Journal Method: Track the Past, Order the Present, Design the Future]]></title>
    <published>2019-01-19T00:00:00Z</published>
    <updated>2019-01-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/carroll-bulletjournalmethod-2018/</id>
    <content type="html"><![CDATA[<p>Very nice and easy read. Made me really think about slowing down for planning
and reflection. I read this book after having switched to a Bullet Journal for
daily logging and planning about 9 months prior. And it’s really easy to get
sucked into all the pretty and elaborate spreads with bullet journaling and
lose focus of what’s really important. The book does a really good job
focusing on the fundamentals with bullet journaling and how to think about
what to incorporate into your journaling practice.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/carroll-bulletjournalmethod-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Hank McCoy]]></title>
    <published>2019-01-16T00:00:00Z</published>
    <updated>2019-01-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/hank-mccoy-01-2019/</id>
    <content type="html"><![CDATA[<p>Young Hank McCoy. Got myself some copics to play with coloring and tried to
recreate Evan Shaner’s take on young beast for practice.</p>
<p>#art #sketch #sketchbook #dailysketch #drawdaily #2019draw365 #ink
#pigmamicron #pentelbrushpen #copic #copicmarkers #henrymccoy #beast #xmen
#marvel #comicart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/hank-mccoy-01-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Constantine]]></title>
    <published>2019-01-09T00:00:00Z</published>
    <updated>2019-01-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/art/constantine-01-2019/</id>
    <content type="html"><![CDATA[<p>John Constantine.</p>
<p>#art #sketch #sketchbook #dailysketch #drawdaily #2019draw365 #pentelbrushpen
#pigmamicron #ink #comicbookart #dccomics #constantine #johnconstantine
#comicart</p>
]]></content>
    <link href="https://unwiredcouch.com/art/constantine-01-2019/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Capacity planning for Etsy’s web and API clusters]]></title>
    <published>2018-10-23T00:00:00Z</published>
    <updated>2018-10-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2018/10/23/capacity-planning-etsy.html</id>
    <content type="html"><![CDATA[<p>I wrote about how we do capacity planning for our web and API clusters on
Etsy&rsquo;s <a href="https://codeascraft.com">engineering blog</a>. You can find the post <a href="https://codeascraft.com/2018/10/23/capacity-planning-for-etsys-web-and-api-tiers/">here</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2018/10/23/capacity-planning-etsy.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Miracle Morning Journal]]></title>
    <published>2018-04-18T00:00:00Z</published>
    <updated>2018-04-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/elrod-miraclemorning-2012/</id>
    <content type="html"><![CDATA[<p>I wasn’t quite sure what to expect from this book. Morning routine inspiration
always intrigues me. I’m a terrible early riser but I really enjoy being awake
early. However the book was too much romanticism and extremes for me, and there
was not a lot I took away from reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/elrod-miraclemorning-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Learning to have an engineering vision]]></title>
    <published>2018-01-03T00:00:00Z</published>
    <updated>2018-01-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2018/01/03/engineering-vision.html</id>
    <content type="html"><![CDATA[<p>Saying that the last 18 months or so were stressful and full of changes would
be a colossal understatement. Work wise I switched to a new team after over 4
years on the same team, which was then dismantled as part of a big structural
reorg that was actually part 1 of 2. Part 2 consisted of a larger
restructuring that meant my new team also ceased to exist in its then-current
form after only 10 months. These changes were incredibly important and long
overdue. However after a couple of pretty stable years this also meant I had
to get out of my comfort zone in a lot of new ways. On both newly created
teams I was one of the more senior engineers which meant for me thinking long
and hard about how I want to contribute to building up a new team and what my
place should and could be there. This meant building up and getting used to new
routines, schedules, ways of communicating, and urgencies at work. All muscles I
had not really needed to exercise in a while, and to a large degree not ever.
And besides the build up of technical knowledge about the services we were now
providing as a team, the non-technical side of things was where I really grew
as an engineer. Specifically the most positive impact on how I view work has
been to finally get a better grasp and think hard about what vision means for
an infrastructure/systems engineering team.</p>
<p>I&rsquo;ve gone through the process of thinking about vision for a team before. At
Etsy we use a structure called &ldquo;VMSO - Vision, Mission, Strategy, Objectives&rdquo;,
to organize and structure teams and departments in what their purpose is
within the company and what they contribute to the business. It draws a lot of
inspiration from the ideas in this blog post by LinkedIn CEO Jeff Weiner
called <a href="https://www.linkedin.com/pulse/20121029044359-22330283-to-manage-hyper-growth-get-your-launch-trajectory-right">&ldquo;From Vision to Values: The Importance of Defining Your
Core&rdquo;</a>. The rough overview is that vision is the 30,000-foot view, the
high level idea on the horizon that (almost) never changes. The world we want
to see exist. The mission is derived from it and describes what the team does
to get towards the vision. And then it gets more concrete with strategies how
to get there and concrete objectives we want to fulfill. It&rsquo;s not an easy
process and definitely takes a whole lot of brainstorming, suggestions, throwing
away suggestions, refining and merging ideas, and consensus building to get
there.</p>
<p>Before this season of change, on my old team, when we were tasked with
creating a VMSO for ourselves we always got hung up on the vision. It was
always that high level thing that never quite matched the work we were doing.
We would meet once or twice and always seemed to end at the same dead ends:
&ldquo;Our work is too multifaceted to be captured by a single statement&rdquo;, &ldquo;It&rsquo;s
hard to explain what we do&rdquo;, &ldquo;We do anything that needs doing&rdquo;, &ldquo;We keep
things running&rdquo;. If you&rsquo;re working on a general purpose infrastructure team,
this might sound familiar to you. It seemed like it was just impossible to
come up with a single vision for the team, so we always left it at a
half-baked, cheesy-feeling idea. And of course at that point we didn&rsquo;t manage to
derive a good mission from the vision either. Not to speak of strategy or
objectives. I didn&rsquo;t feel too bad about that at the time.  As we had a fairly
broad vision statement, it let us basically take on anything we wanted. And to
be honest, I <em>loved</em> working on that team. Although we were always working on
separate things, we were a bunch of engineers with the same mindset and
approach to work. We had a great team dynamic and our team meetings were a ton
of fun. I couldn&rsquo;t imagine working on a different team.</p>
<p>And in the middle of this work the first part of the reorg happened and our
team got dissolved. I was really upset. While I was fully on board with the
reasoning and goals of the reorg, I couldn&rsquo;t understand why our team was disbanded
and most of our roadmap dropped. It felt like our work had gone completely
unvalued. But then I had a long 1-on-1 with my then <a href="http://twitter.com/attackgecko">Engineering Director
Jason Wong</a>. We talked about all of it, he gave me a ton more context.
And he made me understand how a team that does &ldquo;a little bit of everything&rdquo; is
really hard to fit in organizationally. He asked me flat out what the purpose
and vision of the team was in the org. Where was the team going? What would it
look like in 2 years? And I couldn&rsquo;t give him a straight, simple answer. I was
a very senior engineer on the team and I had no answer. This was the moment
where I managed to connect (some of) the dots. And tie together our lack of a
comprehensive vision to the downsides of our operating model. We ended up
supporting way more things than we could handle, leading to long periods of
maintenance work and almost none of the iterative improvements we planned for
at the beginning of the year. We had no way of saying no to work because we
didn&rsquo;t have a good reason to reject the work. We had weeks where our work
summary would basically just be &ldquo;clean up&rdquo;. Which I love doing and is valuable
work, but not if it takes up 90% of someone&rsquo;s time. We agreed on a vision that
was defined by the work we were already doing and not by what we wanted the
work to be.</p>
<p>And at the time I failed to see the big downside of this: it let us take on
anything we wanted. While this sounds like fun at first, it makes a lot of
things really hard. We continuously worked on 6 different projects as a team
of 7 engineers. There was hardly any collaboration possible and we ended up
with single points of failure because individual engineers would be the
only ones knowing about a particular system. Once we had hit the limit of
reports for a manager (7 at the time), we needed to hire another manager but
had a really hard time figuring out how to split the team because there was no
clear structure. And boy was it hard to give the elevator pitch for the team
in those interviews. We were a team that was ever expanding its work areas to
catch things and never managed to pull back and focus on our core. We were
aware of those problems and we always thought we would figure them out with
time. For the time being it felt better to keep fixing things and worry about
the rest later. Succumbing to the ever-present, intriguing feeling that
something that you can fix needs to be fixed right now.</p>
<p>And in the middle of 2016, all of this was suddenly gone. And after that very
intense and honest 1-on-1 with Jason I felt I knew what I had to do. I joined
a new team. I kept thinking about team focus and organizational structure. And
when we set out to create a VMSO I went all in on the process. I
talked a lot to <a href="http://twitter.com/dbness">Vanessa Hurst</a> and <a href="https://twitter.com/lara_hogan">Lara Hogan</a>, both also
Engineering Directors at the time, about VMSOs, team structure and direction.
Both of them know incredibly well how to build engineering organizations and
gave me so much insight and food for thought on how to approach this task. I
thought long and hard about what <em>I</em> as an engineer on the team wanted the
team to be. And what I didn&rsquo;t want it to be. What were the things that I wanted
this team to contribute to the business? What did I not want to carry forward
and would rather leave in the past? I wanted a vision that I could align
goals and work to, one that would provide focus and effectiveness for the team. And
after a week or two with many, many VMSO meetings we ended up with a result
that I was really happy with. It was also the first thing we delivered as a
newly created team, which helped immensely with team identity building. And in
the months that followed I found myself often referring to the vision when
the question came up of whether our team should be doing a particular bit of
work or take over a certain ticket.</p>
<p>Since then I&rsquo;ve also worked on the VMSO for our whole organization of Systems
Engineering. And it was an even harder challenge to find something that
matches the purpose of a dozen teams and gives them something to align their
work to. But it was again a valuable lesson and time spent building the
structure for something I really want to be part of and that makes a
contribution to the business.</p>
<p>These past 18 months have been an incredibly intense learning period for me.
Most of the things we did on my old team were the right things to do at the
time. We worked on a lot of exciting and important projects that enabled
others to build on top. And we were incredibly successful. But we also missed
the point where the team needed to change to a different operating model to
grow with the business. To align it to where the infrastructure needed to go.
I learned to not think about work on an infrastructure team as just &ldquo;keeping
the lights on&rdquo;, &ldquo;fixing broken things&rdquo; or &ldquo;administering the machines
everything else runs on&rdquo;. But actually take the time to think about what to
really contribute to engineering. What state I can see on the horizon,
and not just the work I know needs to get done right now. The two or three
things that will make a difference over a laundry list of things that would be
nice to do.</p>
]]></content>
    <link href="https://unwiredcouch.com/2018/01/03/engineering-vision.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Deep Work]]></title>
    <published>2017-06-14T00:00:00Z</published>
    <updated>2017-06-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/newport-deepwork-2016/</id>
    <content type="html"><![CDATA[<p>I read this book during a fairly busy time of my life (both personally and
professionally) and it provided a very welcome perspective on slowing down,
dialing down distractions, and focusing on what Newport calls &ldquo;deep work&rdquo;.
Which is basically the engaged, deep, focus on the work that really matters
(to oneself, one&rsquo;s career, and goals). And even though not every work day
in a busy job can accommodate lots of deep work, the book made me
sensitive to the topic and seek out more of these opportunities even in
smaller ways.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/newport-deepwork-2016/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Retrospective Handbook: A guide for agile teams]]></title>
    <published>2016-12-31T00:00:00Z</published>
    <updated>2016-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kua-retrospectivehandbook-2013/</id>
    <content type="html"><![CDATA[<p>This book provides a great overview of how to run good retrospectives and
meetings in general. While all the recommendations and tips on there are very
apt and useful I&rsquo;ve found it a bit too practical and general. Most chapters
are applicable to any form of meeting (which is good) but I would have loved a
deeper dive into the psychological challenges and things to look out for with
retrospectives.</p>
<p>At work we have a group of people which I&rsquo;m part of that work on making sure
we have good frameworks in place for blameless postmortems and organizational
learning as a whole. Part of that is moving past only investigating failure
(via postmortems) and also looking into investigating successes (via
retrospectives). So in a similar way to how I&rsquo;ve spent time understanding the
unhelpful concept of human error, I wanted to learn more about the theoretical
concepts of successful retrospectives. Unfortunately this was completely the
wrong book for this. It is a great and very practical read for retrospectives
in the agile sense and how to run successful meetings in general. However I
wasn&rsquo;t looking for that so I constantly kept wondering when we were going to
dive into the meaty, theoretical stuff. This is in no way the author&rsquo;s fault
and I would highly recommend the book as inspiration for improving your
meetings. But for the theoretical underpinnings of retrospectives as an
organizational learning tool I&rsquo;m still on the lookout. Let me know if you have
recommendations :).</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kua-retrospectivehandbook-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Make and Go for Fun and Profit]]></title>
    <published>2016-05-31T00:00:00Z</published>
    <updated>2016-05-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2016/05/31/go-make.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve been somewhat interested in Go for quite a while now. It&rsquo;s gotten to the
point where it has replaced Ruby for me in those places where I write command
line utilities which are too involved to make sense as a shell
script. I don&rsquo;t have too many opinions about the language itself, but I like
the static type system and that it&rsquo;s a compiled language. And to be honest,
the build system and how to utilize it have been the most interesting bits for
me so far. One thing I especially like is the fact that Go provides a bunch of
tooling to do different things but leaves how you tie them together up to you. So
this gives rise to some fun use cases for a nice Makefile.</p>
<h3 id="the-basics">The Basics</h3>
<p>Every project I start gets this Makefile with some basic setup and variable
definitions that I always want.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span><span style="color:#66d9ef">export </span>GO15VENDOREXPERIMENT <span style="color:#f92672">=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># variable definitions
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>NAME <span style="color:#f92672">:=</span> coolthings
</span></span><span style="display:flex;"><span>DESC <span style="color:#f92672">:=</span> a nice toolkit of helpful things
</span></span><span style="display:flex;"><span>PREFIX <span style="color:#f92672">?=</span> usr/local
</span></span><span style="display:flex;"><span>VERSION <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell git describe --tags --always --dirty<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>GOVERSION <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell go version<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>BUILDTIME <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell date -u +<span style="color:#e6db74">&#34;%Y-%m-%dT%H:%M:%SZ&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>BUILDDATE <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell date -u +<span style="color:#e6db74">&#34;%B %d, %Y&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>BUILDER <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell echo <span style="color:#e6db74">&#34;`git config user.name` &lt;`git config user.email`&gt;&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>PKG_RELEASE <span style="color:#f92672">?=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>PROJECT_URL <span style="color:#f92672">:=</span> <span style="color:#e6db74">&#34;https://github.com/mrtazz/</span><span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>LDFLAGS <span style="color:#f92672">:=</span> -X <span style="color:#e6db74">&#39;main.version=$(VERSION)&#39;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           -X <span style="color:#e6db74">&#39;main.buildTime=$(BUILDTIME)&#39;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           -X <span style="color:#e6db74">&#39;main.builder=$(BUILDER)&#39;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           -X <span style="color:#e6db74">&#39;main.goversion=$(GOVERSION)&#39;</span>
</span></span></code></pre></div><p>For the most part this just defines a whole bunch of metadata that gets
compiled into the binaries via linker flags. This is a pattern I have seen in
a lot of Go projects and I really like that this is somewhat of a standard
thing to do. Especially with the static nature of Go binaries, the more
helpful information you can compile into the binary, the better it is when you
have to figure out where a binary comes from.</p>
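<p>As an illustrative sketch of what this looks like on the Go side (the variable names match the <code>LDFLAGS</code> above; the formatting helper is my own hypothetical addition), the main package declares plain string variables with defaults that the <code>-X</code> flags overwrite at link time:</p>

```go
package main

import "fmt"

// Build metadata, overwritten at link time via
// -ldflags "-X 'main.version=...' ...". The defaults below are what
// a plain `go build` without the linker flags produces.
var (
	version   = "dev"
	buildTime = "unknown"
	builder   = "unknown"
	goversion = "unknown"
)

// versionString formats the embedded metadata, e.g. for a --version flag.
func versionString() string {
	return fmt.Sprintf("%s (%s, built %s by %s)",
		version, goversion, buildTime, builder)
}

func main() {
	fmt.Println(versionString())
}
```

<p>Because these are package-level string variables, the linker can patch them in without any code changes.</p>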
<p>I also always have a handful of tasks defined that are helpful for running
tests and such, especially to have a uniform and documented way of running them
locally and on CI.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span><span style="color:#75715e"># development tasks
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#a6e22e">test</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	go test $$<span style="color:#f92672">(</span>go list ./... | grep -v /vendor/ | grep -v /cmd/<span style="color:#f92672">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>PACKAGES <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell find ./* -type d | grep -v vendor<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">coverage</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	@echo <span style="color:#e6db74">&#34;mode: set&#34;</span> &gt; cover.out
</span></span><span style="display:flex;"><span>	@for package in <span style="color:#66d9ef">$(</span>PACKAGES<span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>		<span style="color:#66d9ef">if</span> ls $$<span style="color:#f92672">{</span>package<span style="color:#f92672">}</span>/*.go &amp;&gt; /dev/null; <span style="color:#66d9ef">then</span>  <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>		go test -coverprofile<span style="color:#f92672">=</span>$$<span style="color:#f92672">{</span>package<span style="color:#f92672">}</span>/profile.out $$<span style="color:#f92672">{</span>package<span style="color:#f92672">}</span>; <span style="color:#66d9ef">fi</span>; <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>		<span style="color:#66d9ef">if</span> test -f $$<span style="color:#f92672">{</span>package<span style="color:#f92672">}</span>/profile.out; <span style="color:#66d9ef">then</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>		cat $$<span style="color:#f92672">{</span>package<span style="color:#f92672">}</span>/profile.out | grep -v <span style="color:#e6db74">&#34;mode: set&#34;</span> &gt;&gt; cover.out; <span style="color:#66d9ef">fi</span>; <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>	<span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span>	@-go tool cover -html<span style="color:#f92672">=</span>cover.out -o cover.html
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">benchmark</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	@echo <span style="color:#e6db74">&#34;Running tests...&#34;</span>
</span></span><span style="display:flex;"><span>	@go test -bench<span style="color:#f92672">=</span>. $$<span style="color:#f92672">(</span>go list ./... | grep -v /vendor/ | grep -v /cmd/<span style="color:#f92672">)</span>
</span></span></code></pre></div><p>These make heavy use of <code>go list</code> to determine existing packages to run tests
for. The rules also exclude the vendor folder, as I don&rsquo;t want to run those
tests, and the cmd folder, which I will describe more in the next section.</p>
<h3 id="structure-for-multiple-binaries">Structure for multiple binaries</h3>
<p>Go has this de facto standard of how to structure code if your build produces
multiple executables. Since your main entry point in the app is always the
main package and there can only be one per directory (which is also true for
any other package btw) you need to separate different executables by
directory. The pattern here is basically to have a <code>cmd</code> folder that contains
subfolders for each executable which in turn just contain a <code>main.go</code> file.
This is a pretty nice pattern once you get used to it, and a convention
that lets you easily create make rules for building those executables via
make&rsquo;s wildcard support.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span>CMD_SOURCES <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell find cmd -name main.go<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>TARGETS <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>patsubst cmd/%/main.go,%,<span style="color:#66d9ef">$(</span>CMD_SOURCES<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">%</span><span style="color:#f92672">:</span> cmd/%/main.go
</span></span><span style="display:flex;"><span>	go build -ldflags <span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">$(</span>LDFLAGS<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> -o $@ $&lt;
</span></span></code></pre></div><p>This piece just finds all <code>main.go</code> files under the cmd folder and creates
targets from them located at the top level of the repo. Then there is a rule
to build those targets that ties them back to the source file via
wildcarding again and runs <code>go build</code> with the linker flags from before.</p>
<p>Of course it&rsquo;s a good habit to provide man pages for your tools. So we can rig
up a similar set of rules for building man pages for each executable:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span>MAN_SOURCES <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell find man -name <span style="color:#e6db74">&#34;*.md&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>MAN_TARGETS <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>patsubst man/man1/%.md,%,<span style="color:#66d9ef">$(</span>MAN_SOURCES<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">%.1</span><span style="color:#f92672">:</span> man/man1/%.1.md
</span></span><span style="display:flex;"><span>	sed <span style="color:#e6db74">&#34;s/REPLACE_DATE/</span><span style="color:#66d9ef">$(</span>BUILDDATE<span style="color:#66d9ef">)</span><span style="color:#e6db74">/&#34;</span> $&lt; | pandoc -s -t man -o $@
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">all</span><span style="color:#f92672">:</span> <span style="color:#66d9ef">$(</span>TARGETS<span style="color:#66d9ef">)</span> <span style="color:#66d9ef">$(</span>MAN_TARGETS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>.DEFAULT_GOAL<span style="color:#f92672">:=</span>all
</span></span></code></pre></div><p>This lets us write man pages in markdown under the <code>man/man1/</code> folder named as
<code>${cmd}.1.md</code> and again uses wildcards in make to generate them top level via
an implicit rule. I also added an <code>all</code> target there which is the default and
builds all binaries and man pages.</p>
<p>Over time I&rsquo;ve come to the conclusion that it&rsquo;s really a good practice to have
your <code>main.go</code> files be as slim as possible. Ideally all they should be
concerned with is flag parsing, calling a method from your library packages,
and formatting and printing the output to the terminal. Any actual logic
should live in library packages somewhere else in your repo. This keeps the
code layout easy to extend, makes sure code is reusable, and provides good
conventions for testing.</p>
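<p>As a minimal sketch of that shape (the <code>greet</code> helper is a made-up stand-in; in a real project it would live in its own library package and be imported by <code>cmd/&lt;tool&gt;/main.go</code>):</p>

```go
package main

import (
	"flag"
	"fmt"
)

// greet stands in for library code; in a real repo this would live in
// a separate package so other binaries and tests can reuse it.
func greet(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}

func main() {
	// main only parses flags, calls into the "library", and prints.
	name := flag.String("name", "world", "who to greet")
	flag.Parse()
	fmt.Println(greet(*name))
}
```

<p>Tests then target the library function directly, and <code>main.go</code> stays trivial enough that excluding <code>cmd/</code> from the test runs costs you nothing.</p>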
<h3 id="installation">Installation</h3>
<p>So now that we have rules to build the binaries, we also want to be able to
install them to the <code>PREFIX</code> we have defined at the top. Go comes with an
install command already (<code>go install</code>) which will put binaries in your
<code>$GOPATH/bin</code> but there is no need to rely on that. Plus on a multi-user
system you want to provide tools for everyone anyway. Also let&rsquo;s be
real, <code>go install</code> is not a replacement for a real package manager. Just
because go builds are fast and produce a static binary doesn&rsquo;t mean it&rsquo;s not a
good idea to be able to build packages. Plus you want your man pages to be
installed with your software as well of course.  So let&rsquo;s write some generic
install commands:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span>INSTALLED_TARGETS <span style="color:#f92672">=</span> <span style="color:#66d9ef">$(</span>addprefix <span style="color:#66d9ef">$(</span>PREFIX<span style="color:#66d9ef">)</span>/bin/, <span style="color:#66d9ef">$(</span>TARGETS<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>INSTALLED_MAN_TARGETS <span style="color:#f92672">=</span> <span style="color:#66d9ef">$(</span>addprefix <span style="color:#66d9ef">$(</span>PREFIX<span style="color:#66d9ef">)</span>/share/man/man1/, <span style="color:#66d9ef">$(</span>MAN_TARGETS<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># install tasks
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#a6e22e">$(PREFIX)/bin/%</span><span style="color:#f92672">:</span> %
</span></span><span style="display:flex;"><span>	install -d $$<span style="color:#f92672">(</span>dirname $@<span style="color:#f92672">)</span>
</span></span><span style="display:flex;"><span>	install -m <span style="color:#ae81ff">755</span> $&lt; $@
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">$(PREFIX)/share/man/man1/%</span><span style="color:#f92672">:</span> %
</span></span><span style="display:flex;"><span>	install -d $$<span style="color:#f92672">(</span>dirname $@<span style="color:#f92672">)</span>
</span></span><span style="display:flex;"><span>	install -m <span style="color:#ae81ff">644</span> $&lt; $@
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">install</span><span style="color:#f92672">:</span> <span style="color:#66d9ef">$(</span>INSTALLED_TARGETS<span style="color:#66d9ef">)</span> <span style="color:#66d9ef">$(</span>INSTALLED_MAN_TARGETS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">local-install</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	<span style="color:#66d9ef">$(</span>MAKE<span style="color:#66d9ef">)</span> install PREFIX<span style="color:#f92672">=</span>usr/local
</span></span></code></pre></div><p>We&rsquo;re adding the <code>PREFIX</code> to all targets and man targets here to generate the
paths to install. Then we write another implicit wildcarding rule that has the
original targets as dependencies and performs install commands to put them
into the prefix. This is a quick and easy way to have a generic <code>make install</code>
target and also lets us easily add a local install target that we can use as a
dependency for building packages later on.</p>
<h3 id="dependencies-oh-my">Dependencies, Oh My!</h3>
<p>If you&rsquo;ve spent time with Go and make before, you may have noticed a
flaw in the building step of the Makefile so far. To revisit, we are building
binaries from the source in the cmd folder with this implicit rule.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span><span style="color:#a6e22e">%</span><span style="color:#f92672">:</span> cmd/%/main.go
</span></span><span style="display:flex;"><span>	go build -ldflags <span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">$(</span>LDFLAGS<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> -o $@ $&lt;
</span></span></code></pre></div><p>However this only tells make about the first level of direct dependencies:
from the binary to the cmd source. Chances are you are using library and vendored
code in those. This means that while <code>go build</code> technically knows about all
dependencies, make doesn&rsquo;t. And it will refuse to rebuild the binaries if
something other than the cmd source changes. This is annoying but fortunately
also fixable. A simple fix would be to just not have dependencies in make for
the executables and mark them as <code>.PHONY</code> so that they are always regarded as
out of date. This pushes all dependency resolution back to the go toolchain, which
is nice, but kinda defeats half of the purpose of a Makefile as it will just
run all the commands all the time. To be clear, in practice this is a fine
solution and the downsides are mostly academic with the speed of a usual go
build.</p>
<p>However it&rsquo;s fun to figure out how to make things work, and while we&rsquo;re
here already, let&rsquo;s utilize make to its full extent and make it aware of all
dependencies. The details for the make side of things I got from <a href="http://make.mad-scientist.net/papers/advanced-auto-dependency-generation/">this awesome
blogpost</a>, which gives a great overview of automatic dependency
management in makefiles. So now all we need is a way to get a list of all
dependencies for a go source file. And of course, <code>go list</code> to the rescue
again! It not only lets us print packages for passing to the test runner,
but can also print out all the dependencies of a package. And with its <code>-f</code> parameter
it supports basic templating for formatting the results. Utilizing that,
we only need a small amount of post processing to print it in make
dependency format and we are good to go.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span><span style="color:#75715e"># source, dependency and build definitions
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>DEPDIR <span style="color:#f92672">=</span> .d
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">$(</span>shell install -d <span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>MAKEDEPEND <span style="color:#f92672">=</span> echo <span style="color:#e6db74">&#34;</span>$@<span style="color:#e6db74">: </span>$$<span style="color:#e6db74">(go list -f &#39;{{ join .Deps &#34;</span><span style="color:#ae81ff">\n</span><span style="color:#e6db74">&#34; }}&#39; </span>$<span style="color:#e6db74">&lt; | awk &#39;/github/ { gsub(/^github.com\/[a-z]*\/[a-z]*\//, &#34;&#34;); printf </span>$$<span style="color:#e6db74">0&#34;</span>/*.go <span style="color:#e6db74">&#34; }&#39;)&#34;</span> &gt; <span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">)</span>/$@.d
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">$(DEPDIR)/%.d</span><span style="color:#f92672">:</span> ;
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">.PRECIOUS</span><span style="color:#f92672">:</span> <span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">)</span>/%.d
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#960050;background-color:#1e0010">-include</span> <span style="color:#66d9ef">$(</span>patsubst %,<span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">)</span>/%.d,<span style="color:#66d9ef">$(</span>TARGETS<span style="color:#66d9ef">))</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">%</span><span style="color:#f92672">:</span> cmd/%/main.go <span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">)</span>/%.d
</span></span><span style="display:flex;"><span>	<span style="color:#66d9ef">$(</span>MAKEDEPEND<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>	go build -ldflags <span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">$(</span>LDFLAGS<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> -o $@ $&lt;
</span></span></code></pre></div><p>The makedepend command here grabs all dependencies that come from github
(which was a good enough approximation for me to filter out the std lib), cuts
off the project prefix and appends <code>/*.go</code> to each dependency. With the go
convention of one package per folder, this is also pretty accurate most of the
time, with only the occasional false positive resulting in an extra rebuild. We
then adapt the implicit build rule to require the dependency file as well, but
also regenerate it on each build. And BOOM, our Makefile knows almost perfectly
about all source dependencies.</p>
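<p>To make the MAKEDEPEND one-liner a little less magic, here is its awk post-processing step run by hand on a canned dependency list (a sketch: the package names are made up for illustration, and in the real rule the list comes from <code>go list</code> instead of a hardcoded variable):</p>

```shell
#!/usr/bin/env bash

# Canned output in the style of: go list -f '{{ join .Deps "\n" }}' cmd/foo/main.go
# The package names below are invented for illustration.
deps='fmt
os
github.com/mrtazz/example/config
github.com/mrtazz/example/util'

# The awk filter from MAKEDEPEND: keep only github.com imports (a rough
# stand-in for "not the stdlib"), strip the project prefix, append /*.go.
out=$(printf '%s\n' "${deps}" | \
  awk '/github/ { gsub(/^github.com\/[a-z]*\/[a-z]*\//, ""); printf $0"/*.go " }')

# Prints something like: config/*.go util/*.go
echo "${out}"
```

The result is exactly the right-hand side of a make dependency line, so prefixing it with <code>$@:</code> and writing it to a <code>.d</code> file gives make the full dependency picture.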
<h3 id="packaging-and-documentation">Packaging and Documentation</h3>
<p>I always aim for providing packages and good documentation for my Go projects.
But I&rsquo;ve already written about those things more generally
<a href="https://unwiredcouch.com/2016/01/12/coding-pride.html">here</a>, so if you&rsquo;re interested in the details of it, give that
blog post a read. The important part is that the Makefile also holds the logic
for building docs and packages, so they can be easily triggered from CI.</p>
<h3 id="cleanup">Cleanup</h3>
<p>Since it&rsquo;s also always good to make it easy to clean up artifacts and
generated intermediate and output files, all makefiles also get some clean up
tasks.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span><span style="color:#75715e"># clean up tasks
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#a6e22e">clean-docs</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	rm -rf ./docs
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">clean-deps</span><span style="color:#f92672">:</span>
</span></span><span style="display:flex;"><span>	rm -rf <span style="color:#66d9ef">$(</span>DEPDIR<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">clean</span><span style="color:#f92672">:</span> clean-docs clean-deps
</span></span><span style="display:flex;"><span>	rm -rf ./usr
</span></span><span style="display:flex;"><span>	rm <span style="color:#66d9ef">$(</span>TARGETS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>	rm <span style="color:#66d9ef">$(</span>MAN_TARGETS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">.PHONY</span><span style="color:#f92672">:</span> all test rpm deb install local-install packages govendor coverage docs jekyll deploy-docs clean-docs clean-deps clean
</span></span></code></pre></div><p>Equipped with those Make tricks I&rsquo;ve been having tons of fun building Go code.
Some of that is surely more involved than it has to be and especially the
dependency resolution stuff is very bonus round. But it&rsquo;s been super
interesting to rig it up and I learned a lot of things about Make. And in
the end that&rsquo;s what it&rsquo;s all about for me. (Besides having projects with a
super nice to use structure :)</p>
]]></content>
    <link href="https://unwiredcouch.com/2016/05/31/go-make.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Optimize for Mutability and the Present]]></title>
    <published>2016-05-02T00:00:00Z</published>
    <updated>2016-05-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2016/05/02/mutability-present.html</id>
    <content type="html"><![CDATA[<p>I recently read <a href="https://twitter.com/lusis">John Vincent&rsquo;s</a> very interesting and honest blog post
about <a href="http://blog.lusis.org/blog/2016/04/28/the-flaw-in-all-things/">being paralyzed because of seeing all the flaws in
systems</a>. At first I decided to just put it away as it&rsquo;s not
a problem I encounter a lot. But in the last paragraph he asked how others
deal with this. And that was the point at which I started thinking about why
this doesn&rsquo;t bother me as much - although I know the feeling well. It
occupied my brain in those precious shower and dish washing moments; I more or
less thought about it all weekend, and this is my attempt to give my
perspective on it in a somewhat coherent form. I
hope it&rsquo;s in any way helpful and you should absolutely read John&rsquo;s post first
to understand the context of this post.</p>
<p>The short answer is that I optimize for the present and for mutability, which
in itself is probably a completely useless answer. So let me try to elaborate
what I mean by this. My day job is working in infrastructure engineering,
specifically on a team that works on making writing code and deploying it as
much fun as possible. This means while I&rsquo;m technically a software engineer,
the lines between software engineering and operations are blurry at best in my
day to day work (which is a good thing and I very much enjoy it). I have
worked on a bunch of systems, designed some of them, seen almost all of them
break in various ways, and participated in as many architecture reviews as
possible to give input on other people&rsquo;s system designs. The main goal of my
work however is to contribute to engineering happiness. This means I&rsquo;m very
aware of the intersection of technology and humans using it. In addition to
that, working mostly on internal things means when the things I work on break,
a lot of my coworkers are blocked from getting their stuff done. This can be
petrifying. When I set out to write a new thing or fix up an existing one, I
can test it out for my workflow but can also inevitably see how it could and
will break for someone with a different workflow, editor, or set of dotfiles.
And what makes it worse is that I won&rsquo;t notice immediately and when it breaks
for someone I might be in a meeting, unable to help right away. And I really
<a href="https://unwiredcouch.com/2016/03/18/breaking-things.html">hate breaking things</a>.</p>
<p>So there I have two choices. Ship a thing that&rsquo;s gonna break in some way. Or
don&rsquo;t ship anything. And the way I make myself be ok with shipping something
that is flawed is by of course making sure I do a reasonably extensive attempt
of testing it. But also make it as easy as possible to change or adapt later,
to decide whether my original trade offs are still the right way to go and to
rip things out if not. But not necessarily by trying to cater for all possible
future use cases (the famous &ldquo;premature optimization&rdquo;) and sure as hell not by
writing throw away code. Because everybody can tell you the only thing harder
than building something is decommissioning it, and that goes doubly so for
throw away code. How I try to achieve this is by dropping all of my context
into <a href="https://twitter.com/mrtazz/status/724734135831547905">documentation and automation</a> in some form. This means code
comments.  Documenting my thought process on the JIRA ticket that relates to
it. Writing thorough <a href="https://twitter.com/mrtazz/statuses/661618547295129600">detailed commit message</a> about my
change. Writing unit tests that are being run on CI. A proper README. A
thought through Chef recipe. A Makefile with all important tasks. A runbook.
Those kinds of things. So that when someone else has to go and fix something
(that could be future me or a coworker), they don&rsquo;t have to spend minutes to
hours to get up to speed on the context, decisions, and trade-offs I had and
made to understand why I opted for this solution to the problem. So I&rsquo;m very
happy to write a 20 line commit message that links to the ticket and the
CI/try run and mentions the people I consulted while working on this - even
for a single line of change. I&rsquo;m excited to add unit tests even when I&rsquo;m
&ldquo;only&rdquo; writing a vim plugin or a shell script. And I&rsquo;m excited when I get to
write man pages.</p>
<p>Because if I&rsquo;m honest, yes I can see how things are flawed and can break in
the future. But I don&rsquo;t think I can accurately judge the impact of that flaw
down the line. How severe will it be to reboot a bunch of things for a
security update? How annoyed is the developer really gonna be about this
tooling change? How pissed is my coworker gonna be to get paged for the thing
I built? But also, what is someone gonna be able to build on top of or
inspired by this? What am I free to do until the flaw really becomes a
problem? And is my coworker gonna be ok with all of this as they have
learned something from it and had all the context available to fix it and make
it better? Because the one thing I do know about current me is that there are
gonna be flaws in any solution. And I know one thing for sure about future me or
my coworker encountering the flaw in the system: If they have the same context
as I had when I deployed it, they are gonna be a lot happier, more empathetic
as to why I made those choices, and be able to more quickly build on the
existing solution. And if documentation, automation, and tests are in place it
looks a lot less like some thrown together piece of code but more like the
thought through project and honest attempt to fix a problem that had to make
trade offs that it is. And up until now the trade offs were good ones and
enabled a lot of other things that were impossible to tell before the fact.</p>
<p>So I guess the way I work through those feelings of overwhelm and paralysis
is by making sure I can be damn <a href="https://unwiredcouch.com/2016/01/12/coding-pride.html">proud</a> of the work I&rsquo;m doing in the
present. And make sure it&rsquo;s as easily adaptable as possible when the future
comes around.</p>
<p>Of course this is just my personal way of dealing with it. And it is highly
influenced by my character and the team I get to work with. And I hope I
didn&rsquo;t deviate too much from John&rsquo;s original point in the blog post and that
this post makes sense in some way. None of this actually makes the reality of
flaws and dread of having to deal with them go away. But it gives me an anchor
in the present and something to focus on to get things done. And a way to feel
more prepared when the flaws <em>do</em> surface. Because I&rsquo;d like to think of change
as inevitable but also a good thing. Change you&rsquo;re not prepared for however is
when it feels most like a flaw.</p>
]]></content>
    <link href="https://unwiredcouch.com/2016/05/02/mutability-present.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Bash Unit Testing from First Principles]]></title>
    <published>2016-04-13T00:00:00Z</published>
    <updated>2016-04-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2016/04/13/bash-unit-testing-101.html</id>
    <content type="html"><![CDATA[<p>In the last couple of months I&rsquo;ve done a foray into unit testing the shell
scripts I write. This is mostly a conglomerate of things I&rsquo;ve learned and a
talk I&rsquo;ve given to our ops team about unit testing 101 for infrastructure
tools last year.</p>
<p>In August last year I decided to finally scratch an itch I had for quite a
while. The details aren&rsquo;t super important here, just that it&rsquo;s a shell script
and that there was no sort of pressure around it, which made me take the
time to write unit tests for it. That meant for me researching what existed in
terms of frameworks and how people are generally approaching this. And
unsurprisingly I found a number of ready to use unit testing frameworks, most
of them modeled after the familiar patterns you can find in test frameworks
for other languages. However I was also curious what a minimal testing
framework for bash would look like. After all, all my script would be doing is
creating some files and directories on disk with some specific content. So I
could verify it all with <code>grep</code> and <code>test</code>.  So I decided to also use this
side project to try and write my own minimal bash unit test setup.  And while
I mostly ended up doing integration testing for the script, it still made me
think quite a bit about the basics of unit testing.</p>
<h3 id="unit-testing-101">Unit Testing 101</h3>
<p>One of the first questions I always get when someone hasn&rsquo;t really come across
a lot of <a href="https://en.wikipedia.org/wiki/Unit_testing">unit testing</a> is &ldquo;what is a unit?&rdquo;. And while
technically you can probably argue for a unit being a lot of things, the most
helpful one I&rsquo;ve always found to be:</p>
<blockquote>
<p>a unit is a function</p></blockquote>
<p>This simultaneously gives a very concrete answer and also a starting point of
what to do. When writing unit tests, start testing functions. This of course
occasionally leads to the next question &ldquo;what is a function?&rdquo; and more often
to the debate of how to make a function testable. For the first question I&rsquo;ve
again found this very reductionist answer to be the most helpful:</p>
<blockquote>
<p>a function is a reusable piece of code that turns defined input into defined output</p></blockquote>
<p>This is somewhat close to what you learn in school about functions in math and
has helped me a lot with how I think about writing code.</p>
<h3 id="writing-our-first-tested-bash-code">Writing our first tested Bash code</h3>
<p>Now with those definitions out of the way, there&rsquo;s a plan on how to make a
shell script unit testable:</p>
<ol>
<li>Refactor your code into functions</li>
<li>Write tests for those functions</li>
</ol>
<p>There&rsquo;s a bit of a lesson to learn about <a href="https://en.wikipedia.org/wiki/Side_effect_(computer_science)">side effect free
functions</a> but we can short circuit that by saying the only
things your functions should rely on are the variables passed into them. And they
should always echo their results to STDOUT. This heavily reduces the
possibility for side effects in bash functions but also limits the functions
that absolutely have to do something other than just taking input and printing
results to an absolute minimum. Your logic can live in the other functions
most of the time. And those are the ones you can unit test. So now let&rsquo;s
write some functions and tests for them.</p>
<p>Let&rsquo;s say we want a function to output the number of characters in a string.
It could look something like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> num_chars <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  printf <span style="color:#e6db74">&#39;%s&#39;</span> <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>1<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span> | wc -c
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>It&rsquo;s a very contrived example and you&rsquo;re basically testing that <code>wc</code> works
correctly. But it&rsquo;s a useful example here to show some things. Notice how the
function only acts on variables passed into it and prints the result to
STDOUT.</p>
<p>Now let&rsquo;s write a unit test for it.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> test_num_chars <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  local res<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>num_chars <span style="color:#e6db74">&#34;foo&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> <span style="color:#e6db74">${</span>res<span style="color:#e6db74">}</span> -ne <span style="color:#ae81ff">3</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    echo <span style="color:#e6db74">&#34;failed to assert that </span><span style="color:#e6db74">${</span>res<span style="color:#e6db74">}</span><span style="color:#e6db74"> is 3&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>And that&rsquo;s it. That&rsquo;s all you really need to do to write a simple unit test in bash.</p>
<p>Of course adding more tests now generates a lot of repetitive work. So we can
write a helper function to do the assertion part of the test.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> assert <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span> eval <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>1<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span> <span style="color:#66d9ef">if</span> <span style="color:#f92672">[[</span> $? -ne <span style="color:#ae81ff">0</span> <span style="color:#f92672">]]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>   echo <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>FUNCNAME[1]<span style="color:#e6db74">}</span><span style="color:#e6db74">: failed&#34;</span>
</span></span><span style="display:flex;"><span> <span style="color:#66d9ef">else</span>
</span></span><span style="display:flex;"><span>   echo <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>FUNCNAME[1]<span style="color:#e6db74">}</span><span style="color:#e6db74">: passed&#34;</span>
</span></span><span style="display:flex;"><span> <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>This helper function takes an argument which is a statement to evaluate. And
depending on whether the eval exits with 0 or not, the test is regarded as
passing or failing. It then prints out the result accordingly. <code>FUNCNAME</code> in
bash is an array that holds the current execution call stack. And thus the
first entry in it is the current function and the next one is the calling
function. This gives us a nice way to determine which test is being executed
and make it part of the output message.</p>
<p>And with this helper function in place, our test now looks like this.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> test_num_chars <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  local res<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>num_chars <span style="color:#e6db74">&#34;foo&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>  assert <span style="color:#e6db74">&#34;[ </span><span style="color:#e6db74">${</span>res<span style="color:#e6db74">}</span><span style="color:#e6db74"> -eq 3 ]&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>Now we can already define a couple of tests and run them by calling the
functions we defined. However that also gets very repetitive fast and you
always have to remember to actually call the function when you define a new
test. So let&rsquo;s also write a helper function to do this for us.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> run_test_suite <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">for</span> testcase in <span style="color:#66d9ef">$(</span>declare -f | grep -o <span style="color:#e6db74">&#34;^test[a-zA-Z_]*&#34;</span><span style="color:#66d9ef">)</span> ; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">${</span>testcase<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>This helper gets all the currently declared functions (via <code>declare -f</code>),
looks for the ones starting with &ldquo;test&rdquo;, and then simply executes them.</p>
<p>Now all you have to do is call <code>run_test_suite</code> at the end of your file and
all new test functions are automatically picked up as long as they start with
&ldquo;test&rdquo;.</p>
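<p>Putting the pieces together, a complete test file built from these helpers might look like this (a minimal sketch: <code>num_chars</code> uses <code>printf</code> so the trailing newline of <code>echo</code> doesn&rsquo;t get counted by <code>wc -c</code>, and in a real project the functions under test would be sourced from a separate script rather than defined inline):</p>

```shell
#!/usr/bin/env bash

# Function under test: prints the number of characters in its argument.
function num_chars {
  printf '%s' "${1}" | wc -c
}

# Assertion helper: evaluates its argument and reports the calling test
# via the FUNCNAME call stack.
function assert {
  eval "${1}"
  if [[ $? -ne 0 ]]; then
    echo "${FUNCNAME[1]}: failed"
  else
    echo "${FUNCNAME[1]}: passed"
  fi
}

function test_num_chars {
  local res=$(num_chars "foo")
  assert "[ ${res} -eq 3 ]"
}

# Runner: find all declared functions starting with "test" and execute them.
function run_test_suite {
  for testcase in $(declare -f | grep -o "^test[a-zA-Z_]*") ; do
    ${testcase}
  done
}

run_test_suite
```

Running the file prints one passed/failed line per test function, and new tests are picked up automatically as long as their names start with &ldquo;test&rdquo;.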
<h3 id="fixtures-for-tests">Fixtures for Tests</h3>
<p>Now a lot of times in shell scripts you actually want to interact with files
on the file system. And it&rsquo;s not really feasible to always have everything
just be variables to be passed in. In this case you can adapt your script by
setting a base directory for the files you want to interact with. Something
like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>FILEBASE<span style="color:#f92672">=</span><span style="color:#e6db74">${</span>FILEBASE<span style="color:#66d9ef">:-</span>/usr/local/foo<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">function</span> list_files_with_a <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  ls <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>FILEBASE<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>/a*
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>Now you can set the variable in your test suite before you source your shell
script with the functions to test. That way <code>FILEBASE</code> will already be set and
the functions use it as their base. If you now create a directory for those
fixtures in your tests directory, you can easily mock out file system details
in a controlled way and test for them.</p>
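<p>As a concrete sketch of the fixture approach (the directory layout and file names here are made up for illustration, and the function under test is defined inline instead of being sourced):</p>

```shell
#!/usr/bin/env bash

# Build a throwaway fixtures directory and point FILEBASE at it before
# the functions under test would normally be sourced.
FIXTURES=$(mktemp -d)
printf 'one\n' > "${FIXTURES}/apple.txt"
printf 'two\n' > "${FIXTURES}/banana.txt"
FILEBASE="${FIXTURES}"

# Function under test. Note the glob lives outside the quotes so it
# actually expands.
function list_files_with_a {
  ls "${FILEBASE}"/a*
}

function test_list_files_with_a {
  local res=$(list_files_with_a)
  case "${res}" in
    *apple.txt) echo "test_list_files_with_a: passed" ;;
    *)          echo "test_list_files_with_a: failed" ;;
  esac
}

test_list_files_with_a
```

Because <code>FILEBASE</code> defaults with <code>:-</code> in the script itself, exporting it from the test file like this is all the redirection you need; the production default is untouched.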
<h3 id="dependency-injection-in-bash">Dependency Injection in Bash</h3>
<p>One of the most important things for me to get better at unit testing in
general was understanding <a href="https://en.wikipedia.org/wiki/Dependency_injection">dependency injection</a>. Writing code
in a way that would let me completely drive function behavior based solely on
what I&rsquo;m passing in. And if I have to call an external resource make it so I
can pass in the expected return value and only if it&rsquo;s not set, call the
external resource. A simple example could look like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#66d9ef">function</span> get_url <span style="color:#f92672">{</span>
</span></span><span style="display:flex;"><span>  local url<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>1<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>  local res<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>2<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -z <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>res<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    res<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>curl -s <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>url<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>res<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">}</span>
</span></span></code></pre></div><p>Now you can use this function as you normally would with <code>get_url &quot;https://unwiredcouch.com&quot;</code>. In tests, however, you can also pass in a second
argument which will be used as a mocked out response instead of actually
curl-ing the URL.</p>
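<p>In a test this means you can inject a canned response and never hit the network. A sketch (this variant also echoes the result so callers can capture it; the stub string is made up):</p>

```shell
# sketch: get_url with an injected response for testing
# curl is only invoked when no response was passed in
get_url() {
  url="${1}"
  res="${2}"

  if [ -z "${res}" ]; then
    res=$(curl -s "${url}")
  fi
  echo "${res}"
}

# in a test: pass a second argument, so curl is never called
get_url "https://unwiredcouch.com" "<html>stubbed</html>"
# prints: <html>stubbed</html>
```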
<h3 id="wrapping-up">Wrapping up</h3>
<p>I hope this short set of examples made it clear that it can be
straightforward to put together a quick unit testing setup for shell scripts
from built-in functionality and start writing tests. I&rsquo;ve also shown some
techniques for writing more testable bash to begin with. If you&rsquo;re interested in
reusing the code, I&rsquo;ve pushed the (slightly improved) version of this that I use
to <a href="https://github.com/mrtazz/minibashtest">GitHub</a>. It provides nicer output, more details, and properly
returns a non-zero exit code if something failed, so you can run it on CI.  If
you want more functionality or more advanced testing support, I&rsquo;ve listed some
alternatives in the <a href="https://github.com/mrtazz/minibashtest#advanced-testing">README</a>.</p>
<p>And on a slightly related note, you should start using <a href="http://www.shellcheck.net/">shellcheck</a> when
writing bash. It&rsquo;s such an awesome way to get feedback about how to write
better shell scripts and I&rsquo;ve learned tons already just from the errors,
warnings, and suggestions popping up in my VIM quickfix list.</p>
<p>But the most important part is that testing isn&rsquo;t magic and doesn&rsquo;t have to be
complicated. You can get started immediately with just the basics of any
language. And especially starting to write tests for shell scripts is lots of
fun.</p>
]]></content>
    <link href="https://unwiredcouch.com/2016/04/13/bash-unit-testing-101.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[I don&#39;t like Breaking Things]]></title>
    <published>2016-03-18T00:00:00Z</published>
    <updated>2016-03-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2016/03/18/breaking-things.html</id>
    <content type="html"><![CDATA[<p>I don&rsquo;t like breaking things. I never have. I hate it. When I was a kid and we
got our first computer I was completely <a href="https://twitter.com/mrtazz/statuses/611190610964430848">afraid of breaking
it</a>. It was a super expensive item at the time and I had no
idea how it worked. And I didn&rsquo;t know if I would completely break it and we
didn&rsquo;t really have the money to get a new one if I did. Sure I was curious and
I was fascinated by it. I wanted to do cool things on this computer. But I
didn&rsquo;t want to break it. My sister was also using it and playing games on
there and I didn&rsquo;t have the hubris that I definitely would be able to put it
back together if I broke it. Looking at the trade-off between probably
learning something but also ruining my sister&rsquo;s day it just wasn&rsquo;t worth it.
It wasn&rsquo;t that I didn&rsquo;t want to know how it worked, it was <a href="https://twitter.com/mrtazz/statuses/611190760734650368">about
respect</a>. And it&rsquo;s not like I never broke the computer or
anything on it. But I never approached it lightly and was very uncomfortable
when it happened. And I sure wasn&rsquo;t proud of it.</p>
<p>Fast forward 25 years or so: I went to a bunch of LAN parties as a kid,
went to university to study computers, and eventually got a master&rsquo;s degree in
computer science. I&rsquo;m running <a href="https://unwiredcouch.com/2013/10/30/uncloud-your-life.html">most of my own infrastructure</a>, have
built my own home router, had DNS servers from a Dutch ISP take zone transfers
from a <a href="https://www.flickr.com/photos/mrtazz/214028839/in/album-72157594235164764/">computer running in a camper van toilet</a>, and upgraded PHP
on a big e-commerce website without downtime. It&rsquo;s fair to say I&rsquo;ve learned a
couple of things and know my way around computers most of the time. And yet I
am still deeply uncomfortable breaking things.</p>
<h3 id="whats-wrong-with-breaking-things">What&rsquo;s wrong with breaking things?</h3>
<p>Ironically I now work in an industry that basically worships breaking things.
From famous company mottos like &ldquo;move fast and break things&rdquo; to phrases that
get quoted out of context like &ldquo;ask for forgiveness, not permission&rdquo; everybody
seems to love being able to break stuff. What it doesn&rsquo;t take into account is
that breaking things doesn&rsquo;t happen in a vacuum. Your actions always impact
others. Even if you&rsquo;re on-call, the nature of our complex systems means that
nobody has a perfect overview over all interactions. And nobody can be sure
they will be the only one to get paged and not someone else downstream who is
just sitting down to eat with their family. And claiming you can is in my
opinion more an unhealthy sign of hubris than healthy engineering. More likely
than not it&rsquo;s gonna ruin someone else&rsquo;s day.</p>
<p>In addition the romantic picture of the engineer who is not afraid of breaking
things and thus disrupting whole industries on the way is not an evenly
distributed one. As <a href="https://twitter.com/katelosse">Kate Losse</a> already wrote in <a href="https://medium.com/@katelosse/the-unbearable-whiteness-of-breaking-things-521cb394fda2#.pujsyenre">&ldquo;the unbearable
whiteness of breaking things&rdquo;</a> it&rsquo;s usually just the white men
again who are able to get away with it. For everyone else this is likely gonna
end less well.</p>
<p>It&rsquo;s also a very unhealthy and non-collaborative way of approaching things. It
assumes a very negative default instead of working together. And it keeps
reinforcing a stereotype that only works because there&rsquo;s a team of people
picking up the pieces once the magic disruptive engineer is done.</p>
<h3 id="but-its-the-only-way-to-learn">But it&rsquo;s the only way to learn</h3>
<p>Now you might say &ldquo;Hold up there. Breaking things is the only way to learn.
You don&rsquo;t know a technology until you&rsquo;ve seen it break.&rdquo; And I partly agree
with you there. However there is a big difference when you do gamedays, where
things are turned off and shut down in a controlled environment, where
everybody got a heads up that this is going on, and where systems are observed
as a team to learn how they behave. This is a great way to learn about technology. And I
encourage everyone to do this. Equally if things <em>do</em> break it&rsquo;s paramount to
investigate those incidents in a blameless way to maximize the things to learn
from it.</p>
<p>However: Just testing in prod. Not bothering to write unit tests. Not going
through staging before going to prod. Rolling out a change to all servers at
once just to save some time. Those are the things no one learns a lot from.
Other than the fact that you can quickly make a day awful for a bunch of
people. And that you might be a shitty coworker.</p>
<p>Let me be very explicit. I don&rsquo;t think you can <em>prevent</em> every failure from
happening. I don&rsquo;t think people should be punished if something breaks during
their daily work. Things break. There&rsquo;s nothing you can do about that. What I
don&rsquo;t condone is approaching everything with the attitude that it&rsquo;s ok to
actively break things. That it should be the default. That your need of
changing something is more important than someone else&rsquo;s need of not being
interrupted. That it&rsquo;s ok to lean all the way towards efficiency and away from
thoroughness. This is not disruption, it&rsquo;s just a lack of empathy. The default
should always be to try and not break anything. There should be a way to make
it as easy as possible to catch errors early on. To test things before they go
to prod. To get confidence in something without disrupting someone else&rsquo;s day.
And if there isn&rsquo;t, maybe this is something to spend time on making better
first. It beats breaking things by a long shot. And is actually something to
be proud of.</p>
]]></content>
    <link href="https://unwiredcouch.com/2016/03/18/breaking-things.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Bad Feminist]]></title>
    <published>2016-02-19T00:00:00Z</published>
    <updated>2016-02-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/gay-badfeminist-2014/</id>
    <content type="html"><![CDATA[<p>I really enjoyed reading this book and the author&rsquo;s engagement with the fact
that not everybody is perfect and that not everyone needs to be perfect to
strive for feminist ideas to become reality. I very much identified with
the guilty feeling of enjoying listening to rap music - which is often deeply
misogynistic - yet still wanting to further a more feminist world. And having
to sit through this cognitive dissonance discussion with myself. The book is
written in a very honest and raw way and I can definitely recommend it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/gay-badfeminist-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Take Pride in Your Code]]></title>
    <published>2016-01-12T00:00:00Z</published>
    <updated>2016-01-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2016/01/12/coding-pride.html</id>
    <content type="html"><![CDATA[<p>If you&rsquo;re working as a software engineer, you have very likely already heard
about <a href="https://en.wikipedia.org/wiki/Egoless_programming">&lsquo;egoless programming&rsquo;</a>. The notion that you should detach your
code from your ego. That criticizing your code is not a personal attack and
that at some point your code is going to get deleted. Maybe you have even gone
as far as seeing a lot of your work as &ldquo;throw away code&rdquo;, because most things
are supposed to help you in the moment and not last forever. And these are all
good things to learn and internalize. And definitely traits every programmer
should have. However as I <a href="https://twitter.com/mrtazz/status/674229082389938176">tweeted some weeks ago</a>, I&rsquo;m
convinced the biggest trick the devil ever pulled there is convincing everyone
you shouldn&rsquo;t take pride in your code. Which so often leads to half finished
proof of concepts stuffed into a git repo. Repositories whose README might as
well just say &ldquo;works for me&rdquo;. And of course most of the time there aren&rsquo;t any
tests, so when you want to make things better, you have no idea where to
start. And often enough the justification is just something like &ldquo;I needed
this code and maybe it&rsquo;s useful to someone else&rdquo; or &ldquo;it&rsquo;s pretty simple, it
would have taken me longer to write tests than the actual code&rdquo;.</p>
<p>And I think it doesn&rsquo;t have to be this way. All of this talk about egoless
programming and throwaway code doesn&rsquo;t mean you can&rsquo;t take pride in what you
create. As a programmer (and that includes ops people, security engineers,
designers, etc - if you commit to a repo, you&rsquo;re a programmer; don&rsquo;t let
anyone take this away from you) you now have access to a myriad of wonderful
tools and services that make it <a href="https://twitter.com/mrtazz/status/673585181001928704">so much fun to write and use
software</a>.</p>
<h3 id="the-readme">The README</h3>
<p>I almost feel like this goes without saying, but you should take some time to
write a proper README. It might take 15 or 30 minutes for you to write it. But
if 2 other people don&rsquo;t have to spend 10-20 minutes figuring out how your
project is supposed to be used or if it even solves their problem, it has
already saved time. The usual things like usage examples and installation
instructions should go in there. Plus as you probably know, GitHub shows
READMEs very prominently in a nicely rendered way. So your project already
feels a lot nicer to use.</p>
<p>And while you&rsquo;re there, also create a <code>CONTRIBUTING.md</code>. GitHub will show it
whenever someone is creating a pull request. So you can put some information
in there how you would like to receive contributions which can act as some
helpful guidelines and make it a lot less scary and awkward to contribute.</p>
<h3 id="unit-tests-and-code-coverage">Unit Tests and Code Coverage</h3>
<p>It&rsquo;s <a href="https://twitter.com/mrtazz/statuses/665167264971415552">no</a> <a href="https://twitter.com/mrtazz/statuses/667097579465875456">secret</a> that I&rsquo;m a fan of writing tests. And
it has really become more fun over the last years as frameworks and best
practices have improved. In basically all languages there exists now at least
one unit testing framework that is easy to use. Some languages even come with
one in their standard library. So there is no real reason to not write tests.
After all you are already testing your changes manually. Why not have the
computer do the tedious work? If you&rsquo;re looking for some introductory material
on testing, we have open sourced our <a href="https://codeascraft.com/2014/08/20/teaching-testing-our-testing-101-materials/">Testing 101</a> material we
use at Etsy to teach testing. The point here is not that you will never have
bugs in your code because you write tests. You are gonna reduce the number of
bugs for sure. But more importantly it provides <em>some</em> confidence factors for
contribution and sets a visible expectation of what things are being
automatically tested. Beyond that it provides example code for how to use your
code and codifies the intent you had while writing the original functions. It
also automatically serves as a first client for your API outside of the
intended use case the code was written for. Thus often uncovering a good chunk
of design problems.</p>
<p>And while you&rsquo;re at it, add code coverage as well. Coverage is one of those
tools that most people either love or hate. But the main point for me is that
it sets expectations for which parts of your code are regularly exercised
through tests. Not more, not less. It&rsquo;s also not a simple way to make sure you
never have bugs. It can&rsquo;t, as it&rsquo;s a tool that is concerned with syntax and
not semantics of your code. But what it can do is make you think about code
paths more explicitly. And through that make you think more about how to test
things. And also add instructions about how to run the tests into the
<code>CONTRIBUTING.md</code> file so prospective contributors don&rsquo;t have to guess or
search.</p>
<h3 id="continuous-integration">Continuous Integration</h3>
<p>Once you have tests, the next logical step is to run them on a continuous
integration service. I love <a href="https://travis-ci.org">Travis CI</a> for this but there are many
others out there. Most services now support GitHub pull request status updates
which makes it so much less work to maintain external contributions to your
project as you&rsquo;ll immediately see whether or not the pull request passes
tests. But the most important bit about hooking up a CI system to run your
tests is the fact that you&rsquo;ll know it works somewhere else besides your
laptop. Plus it gives you a platform to trigger many other useful things
(which I will talk about in a bit) after your tests have successfully run. And
for that you can even just use the Jenkins setup you probably already have at work or any other CI setup really.</p>
<h3 id="code-style-and-static-analysis">Code style and Static Analysis</h3>
<p>Another thing I really enjoy is the renewed rise of static code analysis
tools. And I&rsquo;ll just throw code style checkers in that same bucket. I&rsquo;ve
met quite a lot of people who hate coding style checkers. The arguments
usually go like this: &ldquo;if you can&rsquo;t read an if statement with a missing space
before the curly brace I don&rsquo;t want you to write production code anyways&rdquo;.
It&rsquo;s amazing how many people have strong opinions on things they claim don&rsquo;t
matter. The point here is not so much the correct way of writing code but
rather a consistent way. Chances are your code is being read by a good number
of people, depending on how important/popular it is. Having a consistent style
makes it easier to read. Code without style guidelines can be like reading a
book where every page was printed in a different font. It doesn&rsquo;t matter for
functionality but it would be a lot more annoying to read. Plus there is
practically no overhead. Most languages have editor plugins now that will
format text for you, some languages even come with tooling. But what it shows
is that you care about this code to be readable and accessible. And that you
have an easy way of making these coding styles visible and applicable. Static
analysis is usually a less contentious topic. There are usually a lot of things
that aren&rsquo;t immediate problems but are helpful fixes. And it&rsquo;s as well a sign
of you caring about the quality of your code.</p>
<p><a href="http://codeclimate.com">Codeclimate</a> is a wonderful service I use to do this. It has static
analysis plugins for a lot of languages and is super easy to set up. It
integrates with the GitHub status API and shows changes for every commit and
pull request. That way you can have a computer enforce things like indents,
formatting, and problems that a static analyzer can find and you can
concentrate on the logic and spirit of the change.</p>
<h3 id="packaging-and-deployment">Packaging and Deployment</h3>
<p>These are topics very dear to my heart. A good packaging and deployment setup
makes the user experience of software so much better. Not having to think
about where to copy that one file, no need to curlbash some weird script,
having things come from the package manager you already use. All those things
make it a wonderful experience to get started on a piece of software. And the
state of things there also only has gotten better over the years. A lot of the
language specific package managers now have nice tooling around creating and
uploading install packages for their platform. Some like <a href="https://packagist.org/">packagist</a> even go
so far as to just fetch things for you from GitHub and create releases on tags
automatically. There is literally no reason to not have your PHP project on
there. And even Ruby gems and Python modules can easily be uploaded in an
automatic way from your CI system. Travis CI has a whole section on
deployments and integrations with most of the popular services. So do other CI
platforms. <a href="https://github.com/jordansissel/fpm">fpm</a> has made it ridiculously easy to build packages for Linux.
And with <a href="https://packagecloud.io/">packagecloud</a> you can host them in an amazingly accessible and
user friendly way. You can even have your packages built and uploaded from
your CI system as well with something like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>NAME<span style="color:#f92672">=</span>restclient-cpp
</span></span><span style="display:flex;"><span>VERSION <span style="color:#f92672">=</span> <span style="color:#66d9ef">$(</span>shell git describe --tags --always --dirty<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>BUILDER <span style="color:#f92672">=</span> <span style="color:#66d9ef">$(</span>shell echo <span style="color:#e6db74">&#34;`git config user.name` &lt;`git config user.email`&gt;&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>PKG_RELEASE ?<span style="color:#f92672">=</span> <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>PROJECT_URL<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;https://github.com/mrtazz/</span><span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>FPM_FLAGS<span style="color:#f92672">=</span> --name <span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span> --version <span style="color:#66d9ef">$(</span>VERSION<span style="color:#66d9ef">)</span> --iteration <span style="color:#66d9ef">$(</span>PKG_RELEASE<span style="color:#66d9ef">)</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           --epoch <span style="color:#ae81ff">1</span> --license MIT --maintainer <span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">$(</span>BUILDER<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           --url <span style="color:#66d9ef">$(</span>PROJECT_URL<span style="color:#66d9ef">)</span> --vendor mrtazz <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           --description <span style="color:#e6db74">&#34;C++ client for making HTTP/REST requests&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>           --depends curl usr
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># build rpm and deb</span>
</span></span><span style="display:flex;"><span>fpm -t rpm -s dir <span style="color:#66d9ef">$(</span>FPM_FLAGS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>fpm -t deb -s dir <span style="color:#66d9ef">$(</span>FPM_FLAGS<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># deploy to package cloud</span>
</span></span><span style="display:flex;"><span>package_cloud push mrtazz/<span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span>/el/7 *.rpm
</span></span><span style="display:flex;"><span>package_cloud push mrtazz/<span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span>/debian/wheezy *.deb
</span></span><span style="display:flex;"><span>package_cloud push mrtazz/<span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span>/ubuntu/trusty *.deb
</span></span></code></pre></div><p>Or use their integrated <a href="https://docs.travis-ci.com/user/deployment/packagecloud">deployment provider</a> which is
even less setup work.</p>
<h3 id="documentation-deploy">Documentation Deploy</h3>
<p>And speaking of automatic build and deploy. The same goes for documentation.
One of the most genius features of GitHub in my mind is the fact that every
repository can have a <code>gh-pages</code> branch whose contents are getting published
as a website under <code>http://username.github.io/reponame</code>. This makes it
extremely easy to host a documentation page for your project. And with
GitHub&rsquo;s CNAME support you can even have a custom domain for your project
point to it. The fact that it&rsquo;s just another branch in your repository means
that you can easily automate the deployment of docs alongside your code for
example from (you probably guessed it by now) your CI system. Thanks to
GitHub pages this is as easy as:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e"># generate docs</span>
</span></span><span style="display:flex;"><span>install -d docs
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;projecturl: </span><span style="color:#66d9ef">$(</span>PROJECT_URL<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> &gt;&gt; docs/_config.yml
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;basesite: http://www.unwiredcouch.com&#34;</span> &gt;&gt; docs/_config.yml
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;markdown: redcarpet&#34;</span> &gt;&gt; docs/_config.yml
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;---&#34;</span> &gt; docs/index.md
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;layout: project&#34;</span> &gt;&gt; docs/index.md
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;title: </span><span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span> &gt;&gt; docs/index.md
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;---&#34;</span> &gt;&gt; docs/index.md
</span></span><span style="display:flex;"><span>cat README.md &gt;&gt; docs/index.md
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># deploy to github</span>
</span></span><span style="display:flex;"><span>cd docs
</span></span><span style="display:flex;"><span>git init
</span></span><span style="display:flex;"><span>git remote add upstream <span style="color:#e6db74">&#34;https://</span><span style="color:#e6db74">${</span>GH_TOKEN<span style="color:#e6db74">}</span><span style="color:#e6db74">@github.com/mrtazz/</span><span style="color:#66d9ef">$(</span>NAME<span style="color:#66d9ef">)</span><span style="color:#e6db74">.git&#34;</span>
</span></span><span style="display:flex;"><span>git submodule add https://github.com/mrtazz/jekyll-layouts.git ./_layouts
</span></span><span style="display:flex;"><span>git submodule update --init
</span></span><span style="display:flex;"><span>git fetch upstream <span style="color:#f92672">&amp;&amp;</span> git reset upstream/gh-pages
</span></span><span style="display:flex;"><span>git config user.name <span style="color:#e6db74">&#39;Daniel Schauenberg&#39;</span>
</span></span><span style="display:flex;"><span>git config user.email d@unwiredcouch.com
</span></span><span style="display:flex;"><span>touch . <span style="color:#f92672">&amp;&amp;</span> git add -A .
</span></span><span style="display:flex;"><span>git commit -m <span style="color:#e6db74">&#34;rebuild pages at </span><span style="color:#66d9ef">$(</span>VERSION<span style="color:#66d9ef">)</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>git push -q upstream HEAD:gh-pages
</span></span></code></pre></div><p>If you run this from Travis CI with an encrypted GH_TOKEN environment
variable, make sure to suppress command echo-ing for the <code>git remote add</code>
command as it will otherwise write your token to the log in plain text.</p>
<p>And even if you just start with publishing your README, you have a nice
website in place already to build upon. Maybe add doxygen or other reference
generation to it. Add a better getting started guide. Maybe someone else
contributes their notes. That becomes more likely the easier it is to do. And it makes
documentation contributions look more like the first class contribution they
are. And less like a nice side addition. Which is something almost every
project can benefit from.</p>
<h3 id="build-and-automation">Build and Automation</h3>
<p>With all those wonderful things in place and hooked up, you also want to
optimize for the common part of contributing. The local feedback loop. So
while it is awesome to have all those services hooked up, it should also be
obvious how to run and test them while working on something. This is where
build automation via a tool like <code>make</code> comes into play. If you don&rsquo;t have to
look at your Travis config to see how to run tests but instead can just run <code>make test</code>, or <code>make coverage</code> to get coverage information, or even <code>make packages</code>
to have debs and rpms built locally, it&rsquo;s a lot more fun to contribute. And
it&rsquo;s not that much more work. When you hook up those things anyways, you can
then just run the make commands from your CI system as well. Which also makes
it a lot easier to debug if it goes wrong.</p>
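<p>A minimal <code>Makefile</code> along those lines could look like this sketch (the target names and the <code>run_tests.sh</code> helper are placeholders, not a prescription; <code>FPM_FLAGS</code> is assumed to be defined as in the fpm example above):</p>

```make
# one entry point per task, used both locally and from CI
.PHONY: test coverage packages

test:
	./run_tests.sh

coverage: test
	./run_tests.sh --with-coverage

packages:
	fpm -t deb -s dir $(FPM_FLAGS)
	fpm -t rpm -s dir $(FPM_FLAGS)
```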
<h3 id="show-it-off">Show it off</h3>
<p>And finally please let everyone know that you have all those things in place
for your project. Most CI systems and other services now support an HTML
embedded badge that shows the build status, code coverage percentage or static
analysis results in a little image. It is green when things are ok and red or
yellow otherwise. Which lets everyone know the current status of your project
immediately when loading the README on GitHub or the website of your project.
For everything else there is <a href="http://shields.io/">shields.io</a> which lets you create custom
badges via a simple URL structure so you can have the license you use, the
location of the packages and other things that are not red/green right up
there.</p>
<p><img src="/images/coding-pride/yagd_badges.png" alt="yagd badges">
<img src="/images/coding-pride/pocketcleaner_badges.png" alt="pocketcleaner badges"></p>
<h3 id="do-i-really-have-to-do-all-of-this">Do I really have to do all of this?</h3>
<p>I&rsquo;ve given a lot of examples for things you can or should do to make your
project nicer to use. And there are a myriad more, Heroku deploy buttons, npm
dependency checkers, slack links, etc. I&rsquo;ve mostly focused on a very specific
set of things I use regularly for my projects. And it&rsquo;s also very much focused
on open source repositories or at least repositories hosted on GitHub.</p>
<p>I&rsquo;m very aware that not all of these things always apply to or make sense for
a project.  Some languages don&rsquo;t have the support of your coverage platform.
You want to use another CI service. Your code is hosted in your corporate
network and you don&rsquo;t think you have the time to set all of these things up.</p>
<p>The real answer here is to always try to strive for this. A lot of setups can
literally be copy and pasted once you&rsquo;ve done it for one project. And again
while all the services mentioned are public ones, there are a lot of
integrations you can emulate in house with your existing CI and deployment
system and some HTML. If you only do a third of the things I described here
your project will already be in much better shape. And people will be happier
to (have to) use your code. Which in my mind is something to be proud
of.</p>
]]></content>
    <link href="https://unwiredcouch.com/2016/01/12/coding-pride.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Effective Monitoring and Alerting: For Web Operations]]></title>
    <published>2016-01-06T00:00:00Z</published>
    <updated>2016-01-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ligus-effectivemonitoringandalerting-2012/</id>
    <content type="html"><![CDATA[<p>I was really looking forward to this book as I&rsquo;ve heard good things about it
and thought it would round out what I already knew about the topic. However
right from the start it felt rather awkward. The author tries to maintain
an abstract, high-level view on monitoring and alerting and not go into
specific implementations. This makes for an awkward combination with it being
basically a 101/introductory book on the topic. A lot of the formal
descriptions of monitoring and alerting feel forced, don&rsquo;t hold up in the
abstract very well, and are too high level to be practical. He also talks about
operations in an almost romantic, heroic way which I didn&rsquo;t enjoy. In
addition the book includes some final chapters on outage handling
and organizational and cultural setups. The terms human error, root cause
analysis, and &ldquo;5 Whys&rdquo; are thrown around a lot with no acknowledgement that
modern research in the field of systems safety considers them actively
harmful to learning. Definitely not a book I would recommend.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/ligus-effectivemonitoringandalerting-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Meat Market]]></title>
    <published>2016-01-04T00:00:00Z</published>
    <updated>2016-01-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/penny-meatmarket-2011/</id>
    <content type="html"><![CDATA[<p>This was an interesting read about the author&rsquo;s perspective on the issues of
objectification of and discrimination against women and their bodies. There are a lot
of interesting arguments in there about how much in the current economy (and
the general status quo) really relies on keeping women in this
place and how badly women suffer through it. Especially as a cis man I can
highly recommend reading the book to get a perspective on a lot of issues that
at least I didn&rsquo;t think about that much before.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/penny-meatmarket-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Thinking in Systems: A Primer]]></title>
    <published>2016-01-03T00:00:00Z</published>
    <updated>2016-01-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/meadows-thinkinginsystems-2008/</id>
    <content type="html"><![CDATA[<p>This is a really great introduction to systems thinking. Meadows does a great
job describing basic system properties, adding examples to illustrate them and
then coming back to them later when describing related properties. A lot of
what she describes - especially at the beginning - almost seems like common
sense, and before you know it you&rsquo;re neck deep in systems thinking. I especially liked
how she relates systems thinking to being a mindful person: that
it&rsquo;s always a model and not the real world, and that we still have to act as
moral humans. Highly recommended.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/meadows-thinkinginsystems-2008/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[My 2015 Reading List]]></title>
    <published>2015-12-31T00:00:00Z</published>
    <updated>2015-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/12/31/reading-list.html</id>
    <content type="html"><![CDATA[<p>This year has been really good for reading for me. Starting off from <a href="https://unwiredcouch.com/2014/12/31/reading-list.html">last year&rsquo;s list</a> and the 5-ish books I read in 2014, I made it to 16
this year. Some of them were very short, but it&rsquo;s nonetheless an improvement. One of my goals for 2015 was to read more and I definitely managed to accomplish
that. My goal for 2016 is to read more than 20 books. If you&rsquo;re interested in
keeping up to date over the year, I usually post my progress and reviews on
<a href="https://www.goodreads.com/mrtazz">Goodreads</a> as well.</p>
<p>So without further ado, here&rsquo;s my reading list for 2015:</p>
<h3 id="the-whole-woman-by-germaine-greer"><a href="http://www.amazon.com/Whole-Woman-Germaine-Greer-ebook/dp/B0026LTNDG">The Whole Woman</a> by Germaine Greer</h3>
<p>As mentioned <a href="https://unwiredcouch.com/2014/12/31/reading-list.html">last year</a> I started reading this book in 2014 and
finished it in early 2015. I liked it overall and it was really good at giving me
different ways to think about feminism and how the whole system works together
to enable sexism and exploitation. It&rsquo;s also a good resource to understand
better how closely related feminism and capitalism really are. However it
comes with a really serious trigger warning. Germaine Greer is known to have
very transphobic/cissexist views and this book is no exception. They are
restricted to one chapter, but those opinions - which I don&rsquo;t share at all -
are definitely in there. So if this is a trigger for you, it&rsquo;s probably better
to skip this book.</p>
<blockquote>
<p>The pattern of devaluing women&rsquo;s contribution is as old as human
civilization</p></blockquote>
<h3 id="feminism-is-for-everybody-by-bell-hooks"><a href="http://www.amazon.com/Feminism-Everybody-Passionate-bell-hooks-ebook/dp/B00OCKEF8W">Feminism is for Everybody</a> by bell hooks</h3>
<p>I&rsquo;ve known about this book for a while now, but up until early 2015 it was
only available in print. And since I don&rsquo;t really like owning physical books
and read exclusively on my Kindle and iPhone I hadn&rsquo;t bought it yet. So when I
found out there is a Kindle version now, I immediately bought it. As expected,
the book is really good and gives a good primer on feminism and the historical
context from the author&rsquo;s perspective. It reads less extreme to me than Greer,
which is very much in line with hooks&rsquo; other writing. Definitely highly
recommended for learning more about feminism.</p>
<blockquote>
<p>Feminism is a movement to end sexism, sexist exploitation, and oppression.</p></blockquote>
<h3 id="designing-for-performance-by-lara-hogan"><a href="http://designingforperformance.com/">Designing for Performance</a> by Lara Hogan</h3>
<p>My coworker <a href="https://twitter.com/lara_hogan">Lara</a> wrote this book last year and it was a lot of fun
watching her process and how she knocked out that book. Since then it was on
my list of books to read. Especially since I tend to shy away from frontend
things in my day job and want to get better at not doing that. The book is a
wonderful introduction to web performance, especially from a design perspective. It
gives very solid technical details on a lot of things like browser rendering
and image formats that I only had very superficial knowledge of before. I
really enjoyed it and the book led me to <a href="https://unwiredcouch.com/2015/07/24/frontend-performance.html">reduce the page weight of this blog
by 92%</a> which was tons of fun to do as well.</p>
<blockquote>
<p>The largest hurdle to creating and maintaining stellar site performance is
the culture of your organization.</p></blockquote>
<h3 id="manage-your-day-to-day-by-jocelyn-glei"><a href="http://www.amazon.com/Manage-Your-Day---Day-Creative-ebook/dp/B00B77UE4W">Manage your Day-to-Day</a> by Jocelyn Glei</h3>
<p>This book sparked my interest while I was looking to improve my daily
routines. I was often just starting the day as it happened, often leaving me
feeling disorganized, unproductive, and imbalanced. Reading &ldquo;Manage your
Day-to-Day&rdquo; gave me a lot of ideas for things to try adding to my daily
routine. And to try to even have a daily routine at all. Something I picked up
again through this book was journaling and while it has been on and off for
the last couple of months I really enjoy it. The book was not mind blowing for
me but I enjoyed reading it and definitely would recommend it if you are
looking for inspiration for your daily routine.</p>
<blockquote>
<p>It takes willpower to switch off the world, even for an hour.</p></blockquote>
<h3 id="leading-snowflakes-by-oren-ellenbogen"><a href="http://leadingsnowflakes.com/">Leading Snowflakes</a> by Oren Ellenbogen</h3>
<p>I&rsquo;ve heard about this book ever since it was released and a lot of people I
know speak very highly of it. And they weren&rsquo;t wrong, I basically devoured the
book in a weekend. It&rsquo;s very well written and has a ton of actionable advice
for engineers becoming managers. But I would argue that this description
really undersells the value of the book. I have no intention of becoming a manager
at the moment; however, the book was really interesting and helpful for me. I
think it&rsquo;s a great read for anyone looking to grow more into a leadership
position.</p>
<h3 id="the-highly-sensitive-person-by-elaine-n-aron"><a href="http://www.amazon.com/Highly-Sensitive-Person-Elaine-Aron-ebook/dp/B00GT1YES8">The Highly Sensitive Person</a> by Elaine N. Aron</h3>
<p>I had no idea about the concept of highly sensitive people until I read <a href="http://m.huffpost.com/us/entry/4810794">this
article</a>. It has a pretty click-baity headline but it
really hit home for me. So I decided to learn more about it and this book was
the most prominent resource to pop up in my search. It&rsquo;s a really good book
with a lot of great psychological insights and explicit case studies. At times
the way high sensitivity was described was a bit too feel-good for my taste.
At other times I would almost throw my Kindle across the room as the author
managed to really sneak up on and hijack my sensitivity. The book focuses a
lot on what usually goes wrong during childhood for highly sensitive people
and makes it a point to relive memories and traumas through the lens of high
sensitivity. This is a practice I really enjoyed although it felt a bit much
to me at times as I consider my childhood to have been a happy one. On the
other hand I started to do this practice with everyday situations at work to
help me understand why I feel what I feel in different situations. I identify
as a highly sensitive person and the book was an extremely good read to
help me understand better what this could mean for me and my days.</p>
<h3 id="recoding-gender-by-janet-abbate"><a href="http://www.amazon.com/Recoding-Gender-Changing-Participation-Computing-ebook/dp/B009Z3U46S">Recoding Gender</a> by Janet Abbate</h3>
<p>I have a very complex relationship with the profession of &ldquo;software
engineering&rdquo; and how it&rsquo;s often defined in a non-inclusive way and as the
profession of the golden children of society. Part of that is that I had
always known a bit about the origins of programming and that a majority of
programmers used to be women. But I didn&rsquo;t know a lot about it which is why I
was excited to read this book. And it was great! The book walks you through
the beginnings before and during WWII and what programming meant back then. It
discusses how the emerging industry in this field changed job prospects and
economic opportunities for women. But it also discusses how the image of a
programmer changed as more and more men participated. It&rsquo;s full of historical
facts and documents and a more than wonderful read. It sparked a lot of
thoughts for me and changed the way I think about my profession even more.</p>
<blockquote>
<p>the traits that managers found most problematic in programmers were those
stereotypically associated with men</p></blockquote>
<h3 id="you-had-me-at--by-dona-sarkar"><a href="http://www.amazon.com/You-Had-Hello-World-Mentoring-ebook/dp/B0147SC2WO">You had me at &lsquo;Hello World&rsquo;</a> by Dona Sarkar</h3>
<p>I found this book through <a href="https://twitter.com/skamille">Camille</a> tweeting about the fact that she
was also interviewed for it. &ldquo;You had me at &lsquo;Hello World&rsquo;&rdquo; is a collection
of interviews with industry leaders from successful companies about the many
aspects of leadership and mentoring. It&rsquo;s a pretty lightweight read and a
great resource for getting some insight into how successful people talk about those
topics. It does a great job of conveying how important skills outside of
writing code are. And it provides good examples of how to use those to your
advantage.</p>
<h3 id="nonviolent-communication-by-marshall-b-rosenberg"><a href="http://www.amazon.com/Nonviolent-Communication-Language-Life-Changing-Relationships-ebook/dp/B014OISVU4">Nonviolent Communication</a> by Marshall B. Rosenberg</h3>
<p>This has been recommended by many people I work with as a wonderful resource
about positive human communication. And as - especially in a growing
engineering org - communication is one of the most important skills to try to
master, I decided to finally read this one. It&rsquo;s a very interesting book with
an approach to communication that is rarely taught, especially not to men. It
focuses on a collaborative rather than a competitive style of communication
and the goal of reaching agreements over winning arguments. The examples in the
book are often pretty extreme, coming from the author&rsquo;s work as a diplomat. And
even though those are great to demonstrate how this way of communicating can
work in the most extreme cases, it also shifts the focus a lot toward explicitly
diplomatic discussions. There are also examples directed more towards
everyday situations, and even though the author is very explicit about
this being useful in regular work meetings as well, I had a very hard time
understanding how to practically apply those lessons in, say, a meeting.
That being said, it made me think a lot more about the way I
communicate and what I&rsquo;m saying versus what I want to say. I have also applied
that way of communicating successfully at least once since reading the book.
And I look forward to trying it out more.</p>
<h3 id="the-retrospective-handbook-by-patrick-kua"><a href="http://www.amazon.com/Retrospective-Handbook-guide-agile-teams-ebook/dp/B00916BRVU">The Retrospective Handbook</a> by Patrick Kua</h3>
<p>At work we have a group of people which I&rsquo;m part of that work on making sure
we have good frameworks in place for blameless postmortems and organizational
learning as a whole. Part of that is moving past only investigating failure
(via postmortems) and also look into investigating successes (via
retrospectives). So in a similar way to how I&rsquo;ve spent time understanding the
unhelpful concept of human error, I wanted to learn more about the theoretical
concepts of successful retrospectives. Unfortunately this was completely the
wrong book for this. It is a great and very practical read for retrospectives
in the agile sense and how to run successful meetings in general. However I
wasn&rsquo;t looking for that so I constantly kept thinking when we are going to
dive into the meaty, theoretical stuff. This is in no way the authors fault
and I would highly recommend the book as inspiration for improving your
meetings. But for the theoretical underpinnings of retrospectives as an
organizational learning tool I&rsquo;m still on the lookout. Let me know if you have
recommendations :).</p>
<h3 id="cybersexism-sex-gender-and-power-on-the-internet-by-laurie-penny"><a href="http://www.amazon.com/Cybersexism-Sex-Gender-Power-Internet-ebook/dp/B00EO24J3O">Cybersexism: Sex, Gender and Power on the Internet</a> by Laurie Penny</h3>
<p>This short book by Laurie Penny is a very good read about sexism in the age of
social networks and the omnipresent Internet. It does a great job at talking
about how a lot of familiar concepts of &ldquo;offline sexism&rdquo; are reinvented online
and come as no news to women. It&rsquo;s short and insightful enough to recommend reading it
without hesitation.</p>
<blockquote>
<p>Perhaps one reason that women writers and technologists have, so far, the calmest and most comprehensive understanding of what surveillance technology really does to the human condition is that women grow up being watched.</p></blockquote>
<h3 id="the-boy-kings-by-katherine-losse"><a href="http://www.amazon.com/Boy-Kings-Journey-Social-Network-ebook/dp/B007MAXH38">The Boy Kings</a> by Katherine Losse</h3>
<p>The biography of Kate Losse about her time at (earl stage) Facebook is in my
mind a must read for any software engineer and especially if you&rsquo;re a man. It
gives an extremely good insight view into what happens when young men are
suddenly in charge of a ton of money. But more importantly it talks very
bluntly about how engineers are treated differently from most other employees
for our supposed gift to turn any idea into gold with code.</p>
<blockquote>
<p>Technology carries with it all the biases of the people who make it, so simply making the world more technical was not going to save us.</p></blockquote>
<h3 id="the-art-of-mindfulness-by-thích-nhất-hạnh"><a href="http://www.amazon.com/Art-Mindfulness-HarperOne-Select-Selects-ebook/dp/B005HG4H24">The Art of Mindfulness</a> by Thích Nhất Hạnh</h3>
<p>This is another super short read and the de-facto introductory book to
mindfulness meditation. There&rsquo;s not a lot to say here. It&rsquo;s good, give it a
read as it&rsquo;s short enough to not matter if you end up not liking it. I started
meditating regularly after reading it and it has been a great experience.</p>
<h3 id="men-explain-things-to-me-by-rebecca-solnit"><a href="http://www.amazon.com/Men-Explain-Things-Rebecca-Solnit-ebook/dp/B00IWGQ8PU">Men Explain Things to Me</a> by Rebecca Solnit</h3>
<p>This collection of essays is titled for the aggressive tendency of men to always
have to explain things to women while assuming they have no idea what they are
talking about. The first essay brings this to a point by telling the story of a
party where a man mansplains to the author the book she herself wrote, without
having actually read it. The book then continues with essays about
much darker things, such as domestic violence. The overarching
theme is that the credibility of and respect towards women are
continuously diminished to maintain the status quo and its power imbalance.
Some of the essays towards the end of the book are not easy to read but it&rsquo;s
more than worth it.</p>
<blockquote>
<p>Credibility is a basic survival tool.</p></blockquote>
<h3 id="don-by-steve-krug"><a href="http://www.amazon.com/Dont-Make-Think-Revisited-Usability-ebook/dp/B00HJUBRPG">Don&rsquo;t Make Me Think</a> by Steve Krug</h3>
<p>I&rsquo;m one of those engineers who used to happily claim to not have any frontend
skills and just not be good at design. I came to loathe this thinking over
the years and decided that if I can&rsquo;t do something I want to learn at least
the basics. This is one of the reasons why I read &ldquo;Designing for Performance&rdquo;
as mentioned above. Thankfully I also work with a ton of talented designers
and one of them is <a href="https://twitter.com/harllee">Jessica Harllee</a>. I talked to her about
suggestions to get started with learning about design. And she said I should
read &ldquo;Don&rsquo;t make me think&rdquo;. And she wasn&rsquo;t wrong. The book is a wonderful
introduction to usability and design. The beauty of it is that while reading
it, all of the things mentioned seem like total no-brainers. But you have to
remember them while designing things. The other interesting thing for me was
that while all of the examples in the book are web based (with some brief
stints into mobile) I could immediately think of CLI apps I&rsquo;ve written in the past
that totally do the wrong thing design-wise. Definitely a recommended read.</p>
<h3 id="the-internet-of-garbage-by-sarah-jeong"><a href="http://www.amazon.com/Internet-Garbage-Sarah-Jeong-ebook/dp/B011JAV030">The Internet of Garbage</a> by Sarah Jeong</h3>
<p>In this book Sarah Jeong - a journalist trained as a lawyer at Harvard Law
School - talks about the problem of online harassment. It&rsquo;s another short but
really good one. I&rsquo;ve learned a ton about copyright law and the limitations of
current legislation when it comes to online harassment. But also about things that
do work and what could be attempted. It&rsquo;s a very sobering look at the
current state of social networks, online harassment and tooling and
legislation to help fight it. Definitely worth a read if you spend any time on
the internet.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/12/31/reading-list.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Chef Driven Graphite Dashboards]]></title>
    <published>2015-12-17T00:00:00Z</published>
    <updated>2015-12-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/12/17/chef-driven-dashboards.html</id>
    <content type="html"><![CDATA[<p>Some years ago <a href="https://unwiredcouch.com/2012/09/15/getting-started-with-monitoring.html">I wrote about</a> how to use Heroku and a set of
hosted solutions for getting started with monitoring for your personal
infrastructure. I used this set up for quite a while and I learned a ton
setting it up. But after a while things were chugging along and I was paying
for things I wasn&rsquo;t using. So I decided to self host my monitoring on the
infrastructure I was already running anyways. The big switches were using
Nagios instead of Sensu (as I was familiar with it and it has fewer moving
parts), dropping chat integration and log aggregation as I was barely using them,
and switching to Graphite for graphs. Interestingly enough this switch made me
improve my graphing setup a lot. I&rsquo;m still using collectd and I&rsquo;ve extended it
a lot more with custom scripts to track various things.</p>
<h2 id="yet-another-graphite-dashboard">Yet Another Graphite Dashboard</h2>
<p>However since I wasn&rsquo;t using Librato anymore, I now had to find a way to get
nice overview dashboards for all of my metrics. And I looked into the usual
suspects. But all of them seemed to need a very elaborate setup, and running an
additional application server - besides the <code>mod_php</code> I was already running
for Nagios - just for some graphs embedded on an HTML page didn&rsquo;t seem like a
thing I wanted to embark on. I always liked the way we approach <a href="https://github.com/etsy/dashboard">dashboards at
Etsy</a> a lot. It&rsquo;s basically a PHP framework that gives you a
nice way to create graphs from Graphite or Ganglia and combine them into
dashboards. However it was a bit overkill for my use case and I would have to
write all the code for a typical collectd host anyways. So I wrote my own
little PHP script to generate a list of graphs from a config file. And it was
really nice, took me 20 minutes, was a lot of fun, and did everything I wanted
it to do. I decided to just use Twitter Bootstrap for the UI, which means it
also looks nice on my iPhone and it&rsquo;s aptly named <a href="http://code.mrtazz.com/yagd/">Yet Another Graphite
Dashboard</a>.</p>
<h2 id="chef-integration-and-additional-metrics">Chef integration and additional metrics</h2>
<p>Now that I had this nice way of viewing dashboards, I wanted to have more
graphs. I have long made a choice to track as much as possible in Chef for my
personal infrastructure. And graphing is no exception here. Setting up the
initial collectd install is a bit manual as I depend on some options that are
available in the ports but not the official package builds (my infrastructure
is still all FreeBSD). But the configuration and graphing additions are all
fully chef-ed. I took the way we have Ganglia set up at Etsy as the role
model. We have a setup chef-ed to every box that runs all scripts prefixed
with <code>gmetric-</code> in a certain location on a minutely cron. This means in order
to get a new set of metrics, you just have to write a shell script that ends
up calling <code>gmetric</code> and put it in Chef.  And a couple of minutes later graphs
for all boxes will magically appear in Ganglia. I did the same for my collectd
setup via <code>collectdctl</code> and it looks a little bit like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>* * * * * <span style="color:#66d9ef">for</span> SCRIPT in <span style="color:#66d9ef">$(</span>ls /usr/local/collectd/collectd-*<span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span> command <span style="color:#e6db74">${</span>SCRIPT<span style="color:#e6db74">}</span>; <span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>This means I can now easily add new metrics by dropping a script in there that
utilizes the collectd CLI tooling. However since collectd has a very specific
type setup, each script also needs a corresponding configuration in a custom
types db. I also track this in Chef so it&rsquo;s not too big of a problem. An
example script to track disk temperature looks like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span>PLUGIN_NAME<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;disktemp&#34;</span>
</span></span><span style="display:flex;"><span>HOSTNAME<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>hostname -f<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>SMARTCMD<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;/usr/local/sbin/smartctl&#34;</span>
</span></span><span style="display:flex;"><span>COLLECTD<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;/usr/local/bin/collectdctl&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> disk in <span style="color:#66d9ef">$(</span>ls /dev/ada* | grep -o <span style="color:#e6db74">&#34;ada[0-9]</span>$<span style="color:#e6db74">&#34;</span><span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  TEMP<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span><span style="color:#e6db74">${</span>SMARTCMD<span style="color:#e6db74">}</span> -a /dev/<span style="color:#e6db74">${</span>disk<span style="color:#e6db74">}</span> | awk <span style="color:#e6db74">&#39;/194 Temperature_Celsius/ {print $10}&#39;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>  <span style="color:#e6db74">${</span>COLLECTD<span style="color:#e6db74">}</span> putval <span style="color:#e6db74">${</span>HOSTNAME<span style="color:#e6db74">}</span>/<span style="color:#e6db74">${</span>PLUGIN_NAME<span style="color:#e6db74">}</span>-<span style="color:#e6db74">${</span>disk<span style="color:#e6db74">}</span>/celsius_current interval<span style="color:#f92672">=</span><span style="color:#ae81ff">60</span> N:<span style="color:#e6db74">${</span>TEMP<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> $? !<span style="color:#f92672">=</span> <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    echo <span style="color:#e6db74">&#34;ERROR </span><span style="color:#e6db74">${</span>0<span style="color:#e6db74">}</span><span style="color:#e6db74">: </span><span style="color:#e6db74">${</span>HOSTNAME<span style="color:#e6db74">}</span><span style="color:#e6db74">/</span><span style="color:#e6db74">${</span>PLUGIN_NAME<span style="color:#e6db74">}</span><span style="color:#e6db74">-</span><span style="color:#e6db74">${</span>disk<span style="color:#e6db74">}</span><span style="color:#e6db74">/celsius_current interval=60 N:</span><span style="color:#e6db74">${</span>TEMP<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>Another thing that Ganglia gives you for free is a section for additional
metrics that just appear as soon as you send them with an optional group name
to group them by. In order to emulate that in my setup, the recipes for each
collectd script are also defining node attributes with the Graphite graphs
they are generating and how they are supposed to be displayed. This made a lot
of sense to me as when I&rsquo;m writing the scripts I have the generated metrics in
my head anyways. And it&rsquo;s easy to just drop them in a node attribute. So for
tracking disk temperature for example, the recipe looks a bit like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>cookbook_file <span style="color:#e6db74">&#34;/usr/local/collectd/collectd-disk-temp.sh&#34;</span> <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  source <span style="color:#e6db74">&#34;collectd-disk-temp.sh&#34;</span>
</span></span><span style="display:flex;"><span>  owner <span style="color:#e6db74">&#34;root&#34;</span>
</span></span><span style="display:flex;"><span>  group <span style="color:#e6db74">&#34;wheel&#34;</span>
</span></span><span style="display:flex;"><span>  mode <span style="color:#ae81ff">0755</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">end</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>node<span style="color:#f92672">.</span>default<span style="color:#f92672">[</span><span style="color:#e6db74">:yagd</span><span style="color:#f92672">][</span><span style="color:#e6db74">:additional_metrics</span><span style="color:#f92672">][</span><span style="color:#e6db74">:disk_temperature</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#e6db74">&#34;Disk Temperature&#34;</span> <span style="color:#f92672">=&gt;</span> <span style="color:#e6db74">&#34;collectd.</span><span style="color:#e6db74">#{</span>node<span style="color:#f92672">[</span><span style="color:#e6db74">:fqdn</span><span style="color:#f92672">].</span>gsub(<span style="color:#e6db74">&#34;.&#34;</span>,<span style="color:#e6db74">&#34;_&#34;</span>)<span style="color:#e6db74">}</span><span style="color:#e6db74">.disktemp-ada*.celsius_current&#34;</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The next step was to no longer have to manually edit the config file for my
dashboards. I now had all the data I needed in Chef, so all it took
was generating the config file there from a Chef search and all graphs were
magically appearing as soon as both the node to monitor and the dashboard host
had run Chef. This can take up to 20 minutes worst case (I run Chef every 10
minutes) which is really not a big deal for me. The Chef search code that does
this for me looks like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>hosts <span style="color:#f92672">=</span> <span style="color:#f92672">[]</span>
</span></span><span style="display:flex;"><span>nodes <span style="color:#f92672">=</span> search(<span style="color:#e6db74">:node</span>, <span style="color:#e6db74">&#34;domain:*unwiredcouch.com&#34;</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>nodes<span style="color:#f92672">.</span>each <span style="color:#66d9ef">do</span> <span style="color:#f92672">|</span>computer<span style="color:#f92672">|</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  this_computer <span style="color:#f92672">=</span> {}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:name</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> computer<span style="color:#f92672">[</span><span style="color:#e6db74">:fqdn</span><span style="color:#f92672">]</span>
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:cpus</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> computer<span style="color:#f92672">[</span><span style="color:#e6db74">:cpu</span><span style="color:#f92672">].</span>nil? ? <span style="color:#ae81ff">0</span> : computer<span style="color:#f92672">[</span><span style="color:#e6db74">:cpu</span><span style="color:#f92672">][</span><span style="color:#e6db74">:total</span><span style="color:#f92672">]</span>
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:apache</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> computer<span style="color:#f92672">.</span>recipes<span style="color:#f92672">.</span>include?(<span style="color:#e6db74">&#34;apache&#34;</span>)
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:interfaces</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> computer<span style="color:#f92672">.</span>network<span style="color:#f92672">.</span>interfaces<span style="color:#f92672">.</span>keys<span style="color:#f92672">.</span>select {<span style="color:#f92672">|</span>k<span style="color:#f92672">|</span> <span style="color:#f92672">!</span>k<span style="color:#f92672">.</span>to_s<span style="color:#f92672">.</span>start_with?<span style="color:#e6db74">&#34;lo&#34;</span> }
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:filesystems</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> <span style="color:#f92672">[]</span>
</span></span><span style="display:flex;"><span>  computer<span style="color:#f92672">.</span>filesystem<span style="color:#f92672">.</span>each <span style="color:#66d9ef">do</span> <span style="color:#f92672">|</span>k,v<span style="color:#f92672">|</span>
</span></span><span style="display:flex;"><span>    name <span style="color:#f92672">=</span> v<span style="color:#f92672">[</span><span style="color:#e6db74">:mount</span><span style="color:#f92672">]</span> <span style="color:#f92672">==</span> <span style="color:#e6db74">&#34;/&#34;</span> ? <span style="color:#e6db74">&#34;/root&#34;</span> : v<span style="color:#f92672">[</span><span style="color:#e6db74">:mount</span><span style="color:#f92672">]</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># cut out leading &#39;/&#39;</span>
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">[</span><span style="color:#ae81ff">0</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> <span style="color:#e6db74">&#39;&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># substitute &#39;/&#39; with &#39;-&#39;</span>
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">.</span>gsub!(<span style="color:#e6db74">&#34;/&#34;</span>, <span style="color:#e6db74">&#34;-&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># substitute &#39;.&#39; with &#39;_&#39;</span>
</span></span><span style="display:flex;"><span>    name<span style="color:#f92672">.</span>gsub!(<span style="color:#e6db74">&#34;.&#34;</span>, <span style="color:#e6db74">&#34;_&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#75715e"># and add to array</span>
</span></span><span style="display:flex;"><span>    this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:filesystems</span><span style="color:#f92672">]</span> <span style="color:#f92672">&lt;&lt;</span> name
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">end</span>
</span></span><span style="display:flex;"><span>  this_computer<span style="color:#f92672">[</span><span style="color:#e6db74">:additional_metrics</span><span style="color:#f92672">]</span> <span style="color:#f92672">=</span> computer<span style="color:#f92672">[</span><span style="color:#e6db74">:yagd</span><span style="color:#f92672">][</span><span style="color:#e6db74">:additional_metrics</span><span style="color:#f92672">]</span> <span style="color:#66d9ef">unless</span> computer<span style="color:#f92672">[</span><span style="color:#e6db74">:yagd</span><span style="color:#f92672">].</span>nil?
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  hosts <span style="color:#f92672">&lt;&lt;</span> this_computer
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">end</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>template <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>dashboards_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/config.php&#34;</span> <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  source <span style="color:#e6db74">&#34;yagd.config.php.erb&#34;</span>
</span></span><span style="display:flex;"><span>  owner <span style="color:#e6db74">&#34;www&#34;</span>
</span></span><span style="display:flex;"><span>  group <span style="color:#e6db74">&#34;wheel&#34;</span>
</span></span><span style="display:flex;"><span>  mode <span style="color:#ae81ff">0775</span>
</span></span><span style="display:flex;"><span>  variables( <span style="color:#e6db74">:hosts</span> <span style="color:#f92672">=&gt;</span> hosts )
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">end</span>
</span></span></code></pre></div><p>And this is the accompanying erb template that gets rendered into a PHP file
to serve as the configuration for my dashboards instance.</p>
<pre tabindex="0"><code class="language-erb" data-lang="erb">&lt;?php

$CONFIG = array(
    &#39;title&#39; =&gt; &#34;dashboards&#34;,
    &#39;navitems&#39; =&gt; [
        &#39;Hosts&#39; =&gt; &#39;/hosts.php&#39;,
        &#39;Graphite&#39; =&gt; &#39;/graphite.php&#39;,
        &#39;Twitter&#39; =&gt; &#39;/tweets.php&#39;
    ],
    &#39;graphite&#39; =&gt; [
      &#39;host&#39; =&gt; &#34;https://graphite.example.com&#34;,
      &#39;hidelegend&#39; =&gt; false
    ],
    &#39;hosts&#39; =&gt; array(
      &lt;% @hosts.each do |host| %&gt;
       &#34;&lt;%= host[:name] %&gt;&#34; =&gt; array(
         &#34;cpus&#34; =&gt; &lt;%= host[:cpus] %&gt;,
         &#34;apache&#34; =&gt; &lt;%= host[:apache] %&gt;,
         &#34;interfaces&#34; =&gt; &lt;%= host[:interfaces].to_json %&gt;,
         &#34;filesystems&#34; =&gt; &lt;%= host[:filesystems].to_json %&gt;,
         &#34;additional_metrics&#34; =&gt; [
           &lt;% (host[:additional_metrics] || {}).each do |name,values| %&gt;
           &#34;&lt;%= name %&gt;&#34; =&gt; [
             &lt;% values.each do |title,metric| %&gt;
               &#34;&lt;%= title %&gt;&#34; =&gt; &#34;&lt;%= metric %&gt;&#34;,
             &lt;% end %&gt;
           ],
           &lt;% end %&gt;
         ]
       ),
    &lt;% end %&gt;
    )
);
</code></pre><h2 id="the-cleanup">The Cleanup</h2>
<p>After all of this configuration, my dashboard setup was working beautifully
and I added more and more graphs I was interested in. But it was still more or
less the 20 minute PHP code I had initially thrown together.  This was
technically fine and I didn&rsquo;t mind it too much. But at the same time I thought
it might be nice to bring it in a state where it&rsquo;s usable for others.  So I
decided to take some time to clean it up and make it more generically usable.
So I refactored the code, added unit tests to run on <a href="https://travis-ci.org/mrtazz/yagd">Travis CI</a>
and hooked it up to <a href="https://codeclimate.com/github/mrtazz/yagd/">Code Climate</a> so I could have a computer tell me
how I can improve the code quality.</p>
<p>With this refactor in place it&rsquo;s now fairly easy to get a dashboard page that
shows the status of all hosts:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">require</span> <span style="color:#66d9ef">__DIR__</span> <span style="color:#f92672">.</span> <span style="color:#e6db74">&#39;/../vendor/autoload.php&#39;</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">include_once</span>(<span style="color:#e6db74">&#34;../config.php&#34;</span>);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">use</span> <span style="color:#a6e22e">Yagd\CollectdHost</span>;
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">use</span> <span style="color:#a6e22e">Yagd\Page</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>$page <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">Page</span>($CONFIG);
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">echo</span> $page<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">getHeader</span>($CONFIG[<span style="color:#e6db74">&#34;title&#34;</span>],
</span></span><span style="display:flex;"><span>    $CONFIG[<span style="color:#e6db74">&#34;navitems&#34;</span>]);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">foreach</span>($CONFIG[<span style="color:#e6db74">&#34;hosts&#34;</span>] <span style="color:#66d9ef">as</span> $host <span style="color:#f92672">=&gt;</span> $data) {
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    $fss <span style="color:#f92672">=</span> <span style="color:#66d9ef">empty</span>($data[<span style="color:#e6db74">&#34;filesystems&#34;</span>]) <span style="color:#f92672">?</span> [] <span style="color:#f92672">:</span> $data[<span style="color:#e6db74">&#34;filesystems&#34;</span>];
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    $server <span style="color:#f92672">=</span> <span style="color:#66d9ef">new</span> <span style="color:#a6e22e">CollectdHost</span>($host, $data[<span style="color:#e6db74">&#34;cpus&#34;</span>], $fss,
</span></span><span style="display:flex;"><span>                               $data[<span style="color:#e6db74">&#34;interfaces&#34;</span>]);
</span></span><span style="display:flex;"><span>    $server<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">setGraphiteConfiguration</span>($CONFIG[<span style="color:#e6db74">&#34;graphite&#34;</span>][<span style="color:#e6db74">&#34;host&#34;</span>]);
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">echo</span> <span style="color:#e6db74">&#34;&lt;h2&gt; </span><span style="color:#e6db74">{</span>$host<span style="color:#e6db74">}</span><span style="color:#e6db74"> &lt;/h2&gt;&#34;</span>;
</span></span><span style="display:flex;"><span>    $server<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">render</span>();
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">echo</span> $page<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">getFooter</span>();
</span></span></code></pre></div><p>You can also <a href="https://github.com/mrtazz/yagd#inject-a-select-box-into-the-navbar">inject a selectbox</a> into the header to have it be
possible to just select a single server instead of always displaying all of
them. This makes it really nice to be able to just browse to something like
<code>https://yagd.example.com/hosts.php?hostname=foo.example.com</code> and get a quick
overview of how that host is doing. Plus it gives you a URL you can link
to from anywhere else.</p>
<p>Also since this is just plain PHP, entirely driven by the config file, it&rsquo;s
possible to have a per-cluster view by passing a URL parameter like
<code>?cluster=name</code> and then changing the <code>include_once()</code> code in that example to
include a different config file based on that. And since Chef already knows or
can know all that data (maybe each cluster is its own role?), it&rsquo;s just a
matter of writing some Ruby to generate different sets of config files for the
dashboards.</p>
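<p>As a rough illustration of that idea (hypothetical, not part of yagd or the recipe above: the <code>cluster_configs</code> helper and the <code>:cluster</code> attribute are assumptions), grouping the searched nodes by cluster before rendering could look like this:</p>

```ruby
# Hypothetical sketch, not part of yagd: group node data by a :cluster
# attribute so each cluster can get its own config file. In a real Chef
# recipe the nodes would come from search(:node, ...) and each group
# would feed its own template resource.
def cluster_configs(nodes)
  # nodes without a cluster attribute fall into a "default" group
  nodes.group_by { |node| node[:cluster] || "default" }
end

# In the recipe this could then drive one template per cluster, e.g.:
#
#   cluster_configs(hosts).each do |cluster, cluster_hosts|
#     template "#{dashboards_dir}/config-#{cluster}.php" do
#       source "yagd.config.php.erb"
#       variables(:hosts => cluster_hosts)
#     end
#   end
```

<p>The PHP side would then only need to pick which generated config file to <code>include_once()</code> based on the URL parameter.</p>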
<h2 id="summary">Summary</h2>
<p>Writing yagd has been a lot of fun. The initial version took - as I already
said - about 20 minutes to write and I learned a ton of things while
refactoring it into a usable PHP module. You can install it via composer <a href="https://packagist.org/packages/mrtazz/yagd">from
packagist</a> if you want to try it out and use it for your own
dashboards.</p>
<p>However the point of this is not so much that I wrote yet another dashboard
framework, but rather how easy it was to get this going. Sure it&rsquo;s not super
trivial to get your infrastructure into Chef if it isn&rsquo;t already. And it also takes
some time to install Graphite if you aren&rsquo;t familiar with it. But with those
things in place, you have all the building blocks to quickly whip up a nice
dashboarding solution with some simple PHP.</p>
<p>As much as I love how many frameworks and libraries there are to
already solve those problems for us, I think it&rsquo;s a good practice to
occasionally go back to the basics and think about what the simplest solution
is I actually need. In my case this was showing graphs on an HTML page in a
somewhat structured way. I didn&rsquo;t need anything more fancy. And there was no
reason to try and find the dashboard solution that would do that, preferably
in PHP so I don&rsquo;t have to set up yet another application server, which most
likely solved way more problems than I actually had.</p>
<p>If you want to give <a href="http://code.mrtazz.com/yagd/">yagd</a> a try, I would love to hear what you think.
I currently track 4 servers and 2 jails with it, but for this and other
reasons it won&rsquo;t be the solution to all dashboarding problems. Nor should it.
The way more important thing in my mind is that it&rsquo;s solving a very specific
problem I had, in a pretty simple way. In addition it served as a side
project for me to learn a lot of things I didn&rsquo;t know before: writing PHP,
setting up phpunit, using Code Climate, and creating a reusable package on
Packagist.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/12/17/chef-driven-dashboards.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Internet Of Garbage]]></title>
    <published>2015-11-28T00:00:00Z</published>
    <updated>2015-11-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/jeong-internetofgarbage-2018/</id>
    <content type="html"><![CDATA[<p>In this book Sarah Jeong - a journalist trained as a lawyer at Harvard Law
School - talks about the problem of online harassment. It&rsquo;s another short but
really good one. I&rsquo;ve learned a ton about copyright law and the limitations of
current legislation when it comes to online harassment, but also about
approaches that do work and ones that could be attempted. It&rsquo;s a very sobering look at the current state of social networks and online harassment, and at the tooling and legislation to help fight it. Definitely worth a read if you spend any time on the internet.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/jeong-internetofgarbage-2018/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Don&#39;t Make Me Think, Revisited: A Common Sense Approach to Web Usability]]></title>
    <published>2015-11-26T00:00:00Z</published>
    <updated>2015-11-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/krug-dontmakemethink-2005/</id>
    <content type="html"><![CDATA[<p>I&rsquo;m one of those engineers who used to happily claim to not have any frontend
skills and just not be good at design. I came to loathe this thinking over the
years and decided that if I can&rsquo;t do something I want to learn at least the
basics. This is one of the reasons why I read &ldquo;Designing for Performance&rdquo; as
mentioned above. Thankfully I also work with a ton of talented designers and
one of them is <a href="https://twitter.com/harllee">Jessica Harllee</a>. I talked to her
about suggestions to get started with learning about design. And she said I
should read &ldquo;Don&rsquo;t make me think&rdquo;. And she wasn&rsquo;t wrong. The book is a
wonderful introduction to usability and design. The beauty of it is that
while reading it, all of the things mentioned are total no-brainers. But you
have to remember it while designing things. The other interesting thing for me
was that while all of the examples in the book are web based (with some brief
stints into mobile) I could immediately think of CLI apps I&rsquo;ve written in the
past that totally do the wrong thing design-wise. Definitely a recommended read.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/krug-dontmakemethink-2005/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Timeouts And Reflections]]></title>
    <published>2015-11-20T00:00:00Z</published>
    <updated>2015-11-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/11/20/timeouts-reflections.html</id>
    <content type="html"><![CDATA[<p>I love <a href="https://unwiredcouch.com/setup/coffee/">coffee</a>. I really do. And yet I haven&rsquo;t had any for 4 days now. The
first day was rough, I got the headaches that everybody will tell you about
when it comes to the topic of caffeine withdrawal symptoms. The second day was
better and by the third, the headaches were gone. And I&rsquo;m just halfway done.
I&rsquo;m not gonna drink coffee for a couple more days. It&rsquo;s part of a yearly
ritual I have of not drinking any coffee for at least a week or so. Last year
the headaches didn&rsquo;t disappear until day 5 so I decided to go for 10 days
without coffee instead of 7. And after the headaches disappear, the fun part
starts: trying to figure out how to replace something so integral to my life.
What to drink for breakfast now? What to do instead of going to the coffee
shop?</p>
<p>So why is this important? Everybody goes without coffee at some point,
right? It&rsquo;s important because it&rsquo;s part of something I&rsquo;ve tried to
continuously include more and more in my life over the years. Taking timeouts
and making time for reflections. Humans tend to be creatures of habit. If you
want to achieve something that needs constant work, build a habit out of it.
You want to read more books? Spend 30 minutes every morning with just reading.
You want to lose weight? Have a strict habit of going to the gym every Monday,
Wednesday, and Friday. Want to learn a language? Start memorizing words every
day on your commute. If you search Google for &ldquo;building habits&rdquo; you will find
a myriad of websites, articles, and tutorials on how to hack your life and
yourself into a more productive version by making things a routine. And it&rsquo;s
true, building habits is a very effective way of incorporating new things into
your daily life.</p>
<p>However the same also goes for negative habits. Smoking, drinking a beer every
day after work, eating too much unhealthy food, constantly immersing oneself
in the stressful routines at work, not going to the gym because every day
already feels too planned out. Once something has become an
ingrained part of your day, it&rsquo;s really hard to notice whether it&rsquo;s there for
fun and enjoyment or actually harmful, and it&rsquo;s even harder to get rid of.</p>
<p>The way I&rsquo;m trying to battle those things becoming too much of a routine is by
taking timeouts and reflecting on my choices. Because that&rsquo;s what they all are.
Choices. I do reflect a lot without necessarily taking a timeout, however when
I do take a timeout from something, I automatically also reflect on what
impact that thing has on my life and whether I like it or not. Over 9 years
ago I thought deep and hard about why I was smoking and decided that it wasn&rsquo;t
worth it and stopped. I had multiple times in my life where I completely
stopped drinking alcohol for months or even years because I had stopped and
reflected on whether I really wanted to drink that beer because of its taste or
because of habit. I generally try to eat vegetarian by default and then have
meat twice a week and fish/seafood two to three times a week. I periodically
stop and think about whether this is still the case and whether it has been
for the last couple of weeks or months. If it hasn&rsquo;t, I think about why
and if it&rsquo;s because I decided to or just because I got carried away in another
routine.</p>
<p>But this doesn&rsquo;t only go for negative things in my life. I have a very
specific set of tools that I rely on heavily for my work. I write code and
words only in vim, I have all my todos <a href="https://unwiredcouch.com/2014/05/13/omnifocus.html">in OmniFocus</a>, I exclusively use
iPhones, OSX on my laptops and FreeBSD on my personal servers. And I really
like it. My setup is as close to ideal as I can imagine. However I still
occasionally stop and revisit those choices. I used Atom for a bit when
it came out and have tried to use emacs as well. I&rsquo;ve switched my todos over
to a different app or even plain text notes for a month, I wonder every now
and then if I want to run FreeBSD on my laptop again, and last year I
switched completely to an Android phone for a week.</p>
<p>Of course this is partly because of the ever-present urge to optimize
things. But it&rsquo;s also because I want to make conscious choices about the
things I use and be mindful about the way I consume. Just because I have used
a tool for 5 years doesn&rsquo;t make it the perfect one. Just because I have been
doing things in a certain way every day doesn&rsquo;t mean it&rsquo;s the best way to do
it. The hard part is recognizing which things are even part of a routine. This
is why it&rsquo;s important to me to have a lot of time for reflections. I try to
have multiple times a day where I can just think. This is mostly right after I
get up in the morning and prepare breakfast, on my commute, when I do the
dishes, or in the shower. I don&rsquo;t reflect on my habits every single time but I
do have the time to do so every day. It&rsquo;s also the reason why I really value
vacation days and completely unplug from work when I go on vacation. This
forces a timeout of all work related things on me. It&rsquo;s so easy to take things
for granted and assume that&rsquo;s how it has to be instead of thinking about what
you want it to be. I&rsquo;ve cut down on mailing list memberships, push
notifications, and times I check and respond to email just because I was on
vacation and had a timeout from it all and when I came back reflected on why I
was getting all those notifications and whether I really needed them.</p>
<p>Taking the time to stop one of your routines or habits for a week or a month
and reflecting on whether it actually makes your life better has been a
wonderful way of improving my quality of life. It made me cut out things
completely that turned out to not bring me the joy I thought they would. It
made me find new things that I thoroughly enjoy now. And it made me appreciate
the things that continue to be part of my life even more. Because come Monday
I will drink coffee again. And it&rsquo;s gonna be wonderful because I know why I do
drink it.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/11/20/timeouts-reflections.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Men Explain Things to Me]]></title>
    <published>2015-10-19T00:00:00Z</published>
    <updated>2015-10-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/solnit-menexplainthingstome-2014/</id>
    <content type="html"><![CDATA[<p>This collection of essays is titled for the aggressive tendency of men to
always explain things to women while assuming they have no idea what they are
talking about. The first essay brings this to a point by telling the story of a
party where a man mansplains to the author the book she herself wrote. Without
having actually read it. The book then continues with essays on much darker
topics like domestic violence. The overarching theme is that the credibility
of and respect towards women are continuously diminished to maintain the
status quo and its power imbalance.
Some of the essays towards the end of the book are not easy to read but it&rsquo;s
more than worth it.</p>
<blockquote>
<p>Credibility is a basic survival tool.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/solnit-menexplainthingstome-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How I prepare for and give conference talks]]></title>
    <published>2015-09-25T00:00:00Z</published>
    <updated>2015-09-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/09/25/talk-prep.html</id>
    <content type="html"><![CDATA[<p>I thoroughly enjoy reading about how other people do their work, tackle
problems, find productivity, and prepare for talks. So this is my contribution
to this. Before I start however I want to acknowledge that this post is
completely inspired by blog posts from <a href="http://www.catehuston.com/blog/2014/06/06/my-4-step-plan-for-giving-a-talk/">Cate Huston</a> and <a href="http://lizabinante.com/blog/how-i-prepare-conference-talks/">Liz
Abinante</a> who have written wonderful posts about their conference
talk prep process. So before you go on reading here, I encourage you to read
theirs first.</p>
<p>I have given <a href="https://unwiredcouch.com/talks">a bunch of talks</a> over the last 2 years. Most of them
were last year, when I felt like I was at a different conference every month.
I have given regular conference talks, keynotes, and was on panels. Some of
them were as short as 15 minutes with the longest one having been 90 minutes
of me talking. The topics range from cultural talks about how we approach
things like deployment or blameless postmortems at Etsy to somewhat technical
talks about our stack and tools we use and have built. Before 2013 I had never
given a public talk, save for some presentations at work and university with a
somewhat mixed audience of coworkers or other academics. This means when I
had my first talk over 2 years ago, everything was very new to me, I had
no idea what I was doing and it took me a long time to get anything done.
However I have learned a lot since then and have refined my skills to a level
where I enjoy and feel comfortable preparing talks, creating slides, and
speaking in front of an audience. And giving talks has been one of the most
amazing experiences in my career so far. So I hope this post is useful in
helping more people improve their presentation skills and encouraging more of
them to give talks. While you&rsquo;re reading through this I also want you to
always keep in mind that all of those things are very much a function of my
character. I identify as an introvert and a highly-sensitive person (I haven&rsquo;t
been tested for it, so I don&rsquo;t have conclusive proof that I am either one). A
lot of things in how I prepare and give talks are to make this experience as
good as I can for me as well as my audience. So while some things might work
for you as well, I encourage you to take this as inspiration to find your own
way.</p>
<h3 id="finding-a-topic">Finding A Topic</h3>
<p>The most important part about finding a topic is to remember that talks are
about sharing knowledge, and whatever seems obvious to you is very likely
pretty interesting to a lot of people outside of your company.</p>
<p>I used to have a pretty hard time with this as I did think all my work was
obvious and nothing interesting. I was fortunate, however, that the way Etsy
does deployment was still a very popular topic when I started giving talks
and still is to this day. And since I also work on deployment tooling as my
day job it was a very natural thing to talk about for me. So when I got asked
to submit a talk proposal for my first conference, I could take the topic of
Continuous Deployment and turn it into a talk that was appropriate for what
the organizers were looking for. Since then I have given many variations of
that talk. Depending on what a conference was looking for I could put more
focus on the cultural aspects of it, the tooling, or how it fits into the
bigger picture of software development and collaboration at Etsy.</p>
<p>However coming up with different topics has been a challenge for me. I gave
two differently themed talks last year. One was focused on the Etsy monitoring
stack and the other one on how we tackle blameless postmortems. What makes
them both a little bit special is that they are not really related to an
actual project I had at work. They are about existing setups and ongoing work
we are doing to improve things. While this is great for getting out of the
mindset that all your current work is obvious and nobody would be interested
in hearing about it (two things that are basically always false), it can be
hard to find your story arc in such a talk. Since you don&rsquo;t really go from a
problem to the analysis and research part, to the implementation and then the
solution, you have to find another hook for your audience. In the case of the
monitoring talk I chose Etsy&rsquo;s technical architecture. We are for the most
part running a monolithic <a href="https://en.wikipedia.org/wiki/LAMP_(software_bundle)">LAMP</a> application which seems surprising to a
lot of people. So this was a good introduction into the talk. For talking
about blameless postmortems I just chose something everybody can relate to:
failure. Most people in the audience have seen their stack break under
surprising conditions and Etsy is no exception there. So I chose a familiar
scenario to talk about how we deal with it.</p>
<p>Something that has helped in the past with finding a topic for me was talking
to coworkers who are working on slightly different things and asking them
what they think would be interesting to hear a talk on. There is also often an
opportunity to follow trends on Twitter to see what kind of problems people
are interested in and give a talk on how you tackle that. And if there is
something you think is interesting, there&rsquo;s a high chance others will too. And
even if it seems obvious to you, people love to hear about how others tackle
problems. So don&rsquo;t think just because you feel like your work isn&rsquo;t totally
novel that others won&rsquo;t be interested in it.</p>
<h3 id="writing-the-abstract">Writing The Abstract</h3>
<p>When it comes to writing the abstract, there are two things I try to optimize
for:</p>
<ol>
<li>Organizers should quickly get an idea if the talk is a fit</li>
<li>Attendees should quickly know if they want to see the talk over another one</li>
</ol>
<p>Writing the abstract for the talk proposal used to be a huge undertaking for
me. I wanted to make sure all my ideas were captured and the conference
organizers knew what they were getting when they accepted my talk. So I ended
up with really elaborate proposals that were sometimes 2 pages of a fully
fleshed out talk (which would change before I gave it anyway). It took me a
while to realize that conference organizers get a ton of proposals and they
don&rsquo;t have the time to read a novel of a proposal just to decide whether or
not the topic is interesting (if you are a conference organizer and have
gotten such a proposal from me: I&rsquo;m sorry). So I eventually learned to
concentrate on the main story arc and message the talk will have. So now when
I write an abstract, it&rsquo;s about 6 sentences to at most 2 paragraphs of text.
It contains the title, the super high level outline and what the audience will
be able to take away from it. Because the other part of this is that
conferences often put the abstract on their schedule page and attendees use it
to decide which talk to go to if it&rsquo;s not a single track conference. So I want
them to be able to decide within 30 seconds whether or not my talk sounds
interesting. And not bore them with 2 pages of things they might not even be
interested in.</p>
<p>Plus focusing on the main idea has a big benefit when writing the outline and
the slides for the talk. I can always go back to the abstract and check
whether or not my talk actually conveys the message I wanted it to bring
across. And if I notice I drift from my original idea, I can correct that
easily. There have also been occasions where I changed the story arc and
message of the talk slightly as I found one I liked better while preparing the
slides. This is OK most of the time; however, if it turns into a completely
different talk I&rsquo;d check with the organizers whether they are fine with this as
well. If not, maybe I have a new proposal for a different conference :).</p>
<h3 id="prepping-the-talk">Prepping The Talk</h3>
<p>The time leading up to the conference and preparing the talk is kind of a
tricky one for me. I have a process there that works great for me, but which I
wouldn&rsquo;t necessarily recommend to anyone. The main reason for this is that I
don&rsquo;t write anything down until a week or so before the conference and I also
don&rsquo;t do any dry runs. However I want to emphasize that this is not because I
think I don&rsquo;t need all of those things. I&rsquo;m very convinced my talks would be
better if I did them. It&rsquo;s mostly because of how my brain works and some of my
personal anxieties.</p>
<p>Just because I don&rsquo;t write anything down doesn&rsquo;t mean I don&rsquo;t think about the
talk. As a matter of fact in the weeks before the conference I&rsquo;m mostly
forming ideas and shaping things in my head which then end up on the slides. I
think about things for a long time before taking action - that is just my
nature, so my talk prep follows it. Then about a week before I give the talk,
I start to write my ideas down as slides, refining them until (sometimes
literally) I go on stage. I have used <a href="https://www.apple.com/mac/keynote/">Keynote</a> for a long time to do
this. I sometimes wrote down my ideas as a Markdown outline in vim and then
create slides in Keynote from this. However as much as I preferred writing the
outline in vim, it being twice the amount of work - as I had to basically do
the same thing in Keynote afterwards - led to me more often than not just starting
in Keynote. Then in fall of last year I found <a href="http://www.decksetapp.com">Deckset</a>, a wonderful
OS X application that lets me write my slides completely in Markdown and then
creates a beautiful presentation from them. Since then I have gone back to
writing the outline of my talks in vim and then slowly transforming it into
slides.</p>
<p>And as I said before, I never do dry runs. That&rsquo;s not because I don&rsquo;t think
they are a good idea. They are and you should absolutely do them. However for
me they never fit into my schedule. Because I work on the slides until right
before I have to give the talk, there isn&rsquo;t a version I&rsquo;m confident in showing
people early enough for dry runs. In addition to that if I prepared my talk
sooner so I could do a dry run I would constantly think that I&rsquo;m not giving my
best because I&rsquo;m not using all the time there is. Plus talking in front of
people takes a lot of preparation for me (as you will discover later). So dry
runs take a huge amount of energy. That being said it is something I&rsquo;m not
really happy about and want to work on getting better at in the future. There
are so many things that other people notice about your talks that I think
skipping dry runs is one of the things keeping my talks from being better.</p>
<h3 id="slide-design">Slide Design</h3>
<p>For the slide design I have come to heavily rely on Deckset to do the right
thing. I&rsquo;m a big fan of having only a simple statement or message on a slide
to carry the story of the talk. Even when I was using Keynote I tried to have
as few things as possible on each slide. Keynote makes it really easy to go
overboard with effects, information, shapes, pictures, movies, bullet points,
etc. I had a pretty good slides template that I got from <a href="https://twitter.com/lara_hogan">a coworker</a>
and that has basically been adapted for almost all Etsy engineering talks by
now. This made it pretty fast for me to iterate on slides. I would put the
outline headings on a single slide in bold font and then fill in slides in
between with content aiming for 1.5 slides per minute. When it comes to slide
design I usually choose between either</p>
<ul>
<li>a short statement or quote</li>
<li>a picture (optionally with a statement or title)</li>
<li>an animated gif</li>
</ul>
<p>That&rsquo;s it. Nothing more complicated than that. I sometimes have a short bullet
point list but I try to keep that rare. If I can&rsquo;t say something on a slide
within those constraints, I very likely should rethink it or split it across
multiple slides. I do use some font styles to emphasize words in a statement
that I think should have more focus. But all in all I try to keep it simple.
And Deckset makes that a lot easier with its constraints (generated from
Markdown, no custom themes, etc) than Keynote. So I actually end up being able
to iterate on slides much faster. I spend a lot of time trying to find the
right pictures and animated gifs for my talks; often I even switch them out
right before I give the talk (more on that later). Usually I look for things
that are somewhat humorous and make the talk less dry and more enjoyable. I
have a big weakness for pop culture references and so it&rsquo;s not unlikely that
my talk includes references to Gossip Girl, Vampire Diaries, Black Sabbath,
Iron Man, MacGyver, or various internet memes. This is also how I ended up
giving a <a href="https://speakerdeck.com/mrtazz/the-road-to-success-is-paved-with-small-improvements">talk at the USPTO</a> with the Backstreet Boys and Avril
Lavigne being part of my slides. I usually end up having enough slides for my
talk (to satisfy the 1.5 slides/minute ratio) by the day before the conference
or the day before I leave for the conference.</p>
<h3 id="travel-optional">Travel (optional)</h3>
<p>This should probably be its own blog post as there is so much I&rsquo;ve learned
about traveling in the last year. However it only gets its own section here
because I want to emphasize that I optimize for minimalism and worry-free travel
when I go to conferences. This means I only have my backpack, which can hold
my clothes for at least a week, my laptop with all the cabling, and my
<a href="https://instagram.com/p/lIK8wstp-_/">Aeropress, grinder, and beans</a> because I love good coffee and don&rsquo;t
want to think about where to get coffee when I&rsquo;m in a hotel. I plan my travel
so I&rsquo;ll be at the conference on the day before I give my talk. Usually I end
up working on the slides more on the plane and thinking about what message I
want to bring across with each slide.</p>
<h3 id="the-day-before">The Day Before</h3>
<p>When I arrive at the conference, I check into the hotel, make sure to ask
where and when breakfast is served the next day, and then meet up with the
organizers and try to get a feel for the atmosphere of the conference. This is
all to get me acclimated to the new environment and to minimize
surprises and things to worry about. I try to find the room I&rsquo;m going to give
the talk in, and if there is a talk I really want to see I attend it. But I
don&rsquo;t sweat that too much; if I have the feeling that I don&rsquo;t yet feel good
about my slides or that I will be calmer hanging out in the hotel room,
I will do that instead. After all I&rsquo;m here to give a kick-ass talk and I
will do everything to make sure that&rsquo;s what&rsquo;s gonna happen. Either way I try
to be in my hotel room at 10pm at the latest. If I&rsquo;m there sooner, I&rsquo;ll go
over my slides again and make some adaptations based on what atmosphere I
picked up from the conference. By 10pm I&rsquo;m usually exhausted from travel
and/or jet lag and will fall asleep.</p>
<h3 id="the-day-of">The Day Of</h3>
<p>I get up between 6 and 6.30am, read a book, or browse Twitter or my RSS feeds
to not yet think about the talk. As soon as the hotel breakfast restaurant
opens, I&rsquo;m heading down there to have an extensive and relaxed breakfast. I go
that early for two reasons:</p>
<ol>
<li>I want to have as much time as possible to enjoy breakfast.</li>
<li>Most likely there aren&rsquo;t a lot of people around yet, so it&rsquo;s quiet</li>
</ol>
<p>After breakfast I go back to my room and go over my slides again. At this
point I usually just make some minor changes. But I have also reworked a good
chunk of the talk during this last pass. So there are no rules except that I have
to feel good about the talk. I might look for a better picture or gif to
support the message of some slides or reorder them a bit to make the flow of
the overall talk better. I also make coffee as it&rsquo;s another thing that makes
me calm (oddly enough) and feel good. I keep working on my slides until 90
minutes or so before my talk. Then I try to take my mind off the talk for a
bit, shower, listen to some music, get dressed (I always wear my Etsy
Engineering t-shirt so I don&rsquo;t have to think about what to wear) and head to
the conference. I always try to be at the conference 30-45 minutes before my
talk to acclimate myself to the atmosphere. This is a lot easier if the
conference is at the same hotel, obviously, but the same goes regardless of
where it is. I then find the room I&rsquo;m giving the talk in, and if nobody is
speaking there before me, I find an organizer or other staff member and get set
up. My favourite speaking slots are the first ones in the morning. The
conference is still a bit empty, and so usually are the rooms. I just take 15 minutes
or so to get a feeling for the room and watch the people coming in. While
attendees are sitting down I try to spot 3-5 people in various locations that
are evenly spread out and remember them.  Those are going to be the people
that I try to make eye contact with during the talk. Then 10 minutes or so
before I actually am supposed to go on, I go to the restroom, even if it&rsquo;s
just to wash my hands, to have another couple of minutes of quietness before
I&rsquo;m supposed to talk to a room full of people.</p>
<h3 id="actually-giving-the-talk">Actually Giving The Talk</h3>
<p>Once I&rsquo;m on stage with my slides up and I&rsquo;m ready to give the talk,
nothing matters except the talk. I try to be my most energetic, friendly,
and enthusiastic self. I emphasize how good or bad I think something we are
doing or trying to solve is. I might try to make some jokes about
certain things that people can likely relate to, like how naming things is hard
or how computers sometimes don&rsquo;t do what you think you told them to do. I
remember the 3-5 people from before and while I&rsquo;m talking I switch between
them, trying to make eye contact. Generally I&rsquo;m really bad at making eye
contact, even in conversations within small groups. So by adhering to this
pattern of picking out people beforehand I can just follow it without worrying
whether or not I&rsquo;m actually capturing the audience enough or staring holes
into the air. I have my speaker notes on my laptop (I try to present with my
laptop if at all possible) but I usually only have notes for the most crucial
things or for when I&rsquo;m not confident I&rsquo;ll get something right. English isn&rsquo;t my first
language, so if things might get tricky with remembering an important word I
write it down. Otherwise I tend to improvise on slides a bit. I generally know
what I want to say and not having a strict set of notes tends to make it less
dry and more lively for me. I tend to always have the last 10% of talk time
open for questions. So once I&rsquo;m done with my slides I let everyone know that I
have time for questions. But I also make sure it&rsquo;s clear that I will be
around after the talk and also have my (work) email address on the slides. So
if someone in the audience has a question but doesn&rsquo;t want to ask it in front
of a lot of people, there is more than that one setting to ask about my talk.</p>
<h3 id="after-the-talk">After The Talk</h3>
<p>If possible I try to stay in the room for a couple of minutes so people can
come up to me for questions. If not I&rsquo;ll try to be around the conference
somewhere and - although I have the urge to - try not to disappear
immediately. I won&rsquo;t however attend a talk right after as I&rsquo;m too hyped up and
overstimulated from just having spoken to a room full of people. Once people
are not coming up to me for questions anymore or it otherwise doesn&rsquo;t seem
like I&rsquo;m running away from anyone I try to find a quiet corner and check
Twitter to see how people reacted to my talk and what things they tweeted from
it. This is a good indicator for me which parts resonated with people and
which didn&rsquo;t. I don&rsquo;t obsess over this, but it&rsquo;s nice to read about how
people liked my talk, and it gives me a better feeling that all this
preparation and overstimulation was worth it. I then upload my
slides, usually have a page written for it on my <a href="https://unwiredcouch.com/talks">site</a> which I
publish and then try to enjoy the rest of the conference. I tend to only go
into hiding for a bit and not go to my hotel room as there is a big chance I
won&rsquo;t come back to the conference. So I only take as much time as I need to be
able to recharge and enjoy the conference again.</p>
<h3 id="the-takeaway">The Takeaway</h3>
<p>Preparing and giving a talk is something I do very thoroughly. I do all those
things mentioned here (which may seem like a huge set of preparations) because
it helps me be and feel prepared. A lot of the preparation for a talk is
psychological for me. As I said in the introduction, I&rsquo;m very introverted and
often have somewhat strong reactions to new and unknown environments and
people. So having this framework helps me immensely in feeling less overwhelmed.</p>
<p>However it&rsquo;s worth noting that this is the absolute ideal plan. I try to make
it work like that but if any of the things don&rsquo;t work according to plan it&rsquo;s
not a catastrophe. I&rsquo;m able to give the talk regardless; this is just the
dream setup. It also varies a lot depending on how big the conference is and
what kind of talk I&rsquo;m giving. If it&rsquo;s an internal lunch talk at work, it&rsquo;s
fairly low stress for me now and I don&rsquo;t need that much prep time. That&rsquo;s
mostly because I now have a ton of experience giving talks and it&rsquo;s less
scary than it was 2 years ago.</p>
<p>This year however I have decided to take a break from giving talks as it was
just a bit too much last year. I&rsquo;m looking forward to giving more talks next
year and have spent this year so far helping others to give talks by
connecting them to conferences I have spoken at, acting as a sounding board
for talk ideas, giving feedback on abstracts and proposals, and answering as many
questions as I can about the process and nature of giving a conference talk.
Learning how to give talks and giving them until I felt pretty comfortable
doing it has been a great experience and definitely one of the most amazing
things I get to do as part of my job.</p>
<p><em>Thanks to <a href="https://twitter.com/bethanymacri">Bethany Macri</a> and <a href="https://twitter.com/lara_hogan">Lara Hogan</a> for reading drafts
of this and giving feedback</em></p>
]]></content>
    <link href="https://unwiredcouch.com/2015/09/25/talk-prep.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Art of Mindfulness]]></title>
    <published>2015-09-23T00:00:00Z</published>
    <updated>2015-09-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hanh-artofmindfulness-2012/</id>
    <content type="html"><![CDATA[<p>This is another super short read and the de-facto introductory book to
mindfulness meditation. There&rsquo;s not a lot to say here. It&rsquo;s good, give it a
read as it&rsquo;s short enough to not matter if you end up not liking it. I started
meditating regularly after reading it and it has been a great experience.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hanh-artofmindfulness-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Boy Kings: A Journey into the Heart of the Social Network]]></title>
    <published>2015-09-16T00:00:00Z</published>
    <updated>2015-09-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/losse-boykings-2012/</id>
    <content type="html"><![CDATA[<p>Kate Losse&rsquo;s memoir about her time at (early-stage) Facebook is in my
mind a must-read for any software engineer, especially if you&rsquo;re a man. It
gives an extremely good inside view into what happens when young men are
suddenly in charge of a ton of money. But more importantly it talks very
bluntly about how engineers are treated differently from most other employees
for our supposed gift to turn any idea into gold with code.</p>
<blockquote>
<p>Technology carries with it all the biases of the people who make it, so
simply making the world more technical was not going to save us.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/losse-boykings-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Cybersexism: Sex, Gender and Power on the Internet]]></title>
    <published>2015-09-15T00:00:00Z</published>
    <updated>2015-09-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/penny-cybersexism-2013/</id>
    <content type="html"><![CDATA[<p>This short book by Laurie Penny is a very good read about sexism in the age of
social networks and the omnipresent Internet. It does a great job at talking
about how a lot of familiar concepts of &ldquo;offline sexism&rdquo; are reinvented online
and no news to women. It&rsquo;s short and insightful enough to recommend reading it
without hesitation.</p>
<blockquote>
<p>Perhaps one reason that women writers and technologists have, so far, the
calmest and most comprehensive understanding of what surveillance technology
really does to the human condition is that women grow up being watched.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/penny-cybersexism-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Nonviolent Communication: A Language of Life]]></title>
    <published>2015-09-12T00:00:00Z</published>
    <updated>2015-09-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rosenberg-nonviolentcommunication-2015/</id>
    <content type="html"><![CDATA[<p>This has been recommended by many people I work with as a wonderful resource
about positive human communication. And as - especially in a growing
engineering org - communication is one of the most important skills to try to
master, I decided to finally read this one. It&rsquo;s a very interesting book with
an approach to communication that is rarely taught especially not to men. It
focuses on a collaborative rather than a competitive style of communication
and the goal to reach agreements over winning arguments. The examples in the
book are often pretty extreme coming from the author&rsquo;s work as a diplomat. And
even though those are great to demonstrate how this way of communicating can
work in the most extreme cases, it also shifts the focus a lot towards explicitly
diplomatic discussions. There are also examples directed more towards
everyday situations, and even though the author is very explicit about
this being useful in regular work meetings as well, I had a very hard time
understanding how to practically apply those lessons in a meeting, for example.
That being said, it made me think a lot more about the way I
communicate and what I&rsquo;m saying versus what I want to say. I have also applied
that way of communicating successfully at least once since reading the book.
And I look forward to trying it out more.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rosenberg-nonviolentcommunication-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[You Had Me at &#34;Hello, World&#34;: Mentoring Sessions with Industry Leaders at Microsoft, Facebook, Google, Amazon, Zynga and more!]]></title>
    <published>2015-09-07T00:00:00Z</published>
    <updated>2015-09-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/sarkar-helloworld-2015/</id>
    <content type="html"><![CDATA[<p>I found this book through <a href="https://twitter.com/skamille" title="@skamille on Twitter">Camille</a> tweeting about the fact that she was
also interviewed for it. &ldquo;You Had Me at &lsquo;Hello, World&rsquo;&rdquo; is a collection of
interviews with industry leaders from successful companies about the many
aspects of leadership and mentoring. It&rsquo;s a pretty lightweight read and a
great resource to get some insight into how successful people talk about those
topics. It does a great job of conveying how important skills outside of
writing code are. And it provides good examples of how to use those to your
advantage.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/sarkar-helloworld-2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Recoding Gender (History of Computing)]]></title>
    <published>2015-09-06T00:00:00Z</published>
    <updated>2015-09-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/abbate-recodinggender-2012/</id>
    <content type="html"><![CDATA[<p>I have a very complex relationship with the profession of &ldquo;software
engineering&rdquo; and how it&rsquo;s often defined in a non-inclusive way and as the
profession of the golden children of society. Part of that is that I had
always known a bit about the origins of programming and that a majority of
programmers used to be women. But I didn&rsquo;t know a lot about it which is why I
was excited to read this book. And it was great! The book walks you through
the beginnings before and during WWII and what programming meant back then. It
discusses how the emerging industry in this field changed job prospects and
economic chances for women. But it also discusses how the image of a
programmer changed as more and more men participated. It&rsquo;s full of historical
facts and documents and a more than wonderful read. It sparked a lot of
thoughts for me and changed the way I think about my profession even more.</p>
<blockquote>
<p>the traits that managers found most problematic in programmers were those
stereotypically associated with men</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/abbate-recodinggender-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[My Writing Workflow]]></title>
    <published>2015-08-31T00:00:00Z</published>
    <updated>2015-08-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/08/31/writing-workflow.html</id>
    <content type="html"><![CDATA[<p>I love writing. A lot. It&rsquo;s one of the things that helps me the most with
structuring thoughts and opinions I have. It&rsquo;s also invaluable for me to just
brain dump things and see how I feel about them later. I&rsquo;ve come to really
appreciate taking the time to properly formulate things and incorporating time
to write (almost) every day. In order to make this as easy to get started with
as possible I have developed a setup that works really well for me and thus I
wanted to share it.</p>
<h3 id="the-basics">The Basics</h3>
<p>In order to make writing as low-barrier as possible for me, it has to work with
the tools I use every day. For me this mostly means vim and git. I try to
really embrace plain text for everything and store those files in git - even
for my <a href="https://unwiredcouch.com/2015/06/08/accounting-the-unix-way.html">accounting data</a>. For quite a long time I just used my
regular (terminal) vim setup that I had configured for writing code to also
write prose. And it worked remarkably well. However it also always had this
programming feel to me with all the important status bar information that was
actually way more distracting than useful when not in a coding mode. It&rsquo;s also
a lot harder and less comfortable to focus on just writing with the default
left-aligned layout in vim. I would open vim in full screen to not have any
distractions and my text would always hang in the left third of the terminal.</p>
<p>I was looking for something like a <a href="http://www.hogbaysoftware.com/products/writeroom">write room</a> style setup.
Fortunately a lot of other people have also had that problem and there&rsquo;s a
huge variety of distraction-less writing plugins for vim now. After some
searching I&rsquo;ve settled on <a href="https://github.com/amix/vim-zenroom2">vim-zenroom2</a>. It&rsquo;s based on <a href="https://github.com/junegunn/goyo.vim">Goyo</a>
and gives me a beautifully decluttered vim session where I can focus on only
the text I want to write and nothing else. I spent some time trying to get the
vim status bar to display the word count of the current buffer. But it was
always kind of janky. So I decided to not care that much and just run <code>:!wc -w %</code> when I really want to know how much I&rsquo;ve written. And as I also exist
inside of <a href="https://unwiredcouch.com/2013/11/15/my-tmux-setup.html">a tmux session</a> most of the time, I also disable the status
bar there when in writing mode. I rely heavily on the terminal bell changing
the status bar for notifications and I don&rsquo;t want to be interrupted by that
when I&rsquo;m writing.</p>
<p>I do all my writing in Markdown. I really like that it&rsquo;s basically plain text
with a little bit of markup and super close to HTML. That makes it easy to
just write and more importantly read without having to constantly render and
preview it. I actually write out all of my drafts before looking at the
rendered version most of the time. A helpful setting in vim there is to enable
spell checking for Markdown files. In my <code>vimrc</code> I have <code>autocmd FileType markdown setlocal spell</code> set, so that every time I open a Markdown file I get
spell correction automatically. This is especially helpful as I do most of my
writing not in my first language. So vim tells me immediately when I&rsquo;ve
written the German spelling for an English word and lets me correct it.</p>
<p>When I want to focus on writing I also tend to only work on my 11&quot; MacBook Air
and not connect it to the Thunderbolt Display. The bigger screen tends to
distract me more than it helps. However I do have an iTerm profile with a
bigger font for writing. The default for coding, IRC, email, etc for me is
11pt. And I switch to 18pt for writing.</p>
<p><img src="/images/goyo.png" alt="vim distraction free writing"></p>
<h3 id="blogging">Blogging</h3>
<p>For blogging I use <a href="http://jekyllrb.com">jekyll</a>. I switched to it 6 years ago when I
wanted to have a blog again and it works pretty well most of the time. Rumour
has it that I&rsquo;m constantly trying to replace it with a simple Makefile but
that may or may not be true. The repo is <a href="https://github.com/mrtazz/unwiredcouch.com">open source on
GitHub</a> but I host the actual site on my own server.</p>
<p>I used to have all my drafts in the jekyll <code>_drafts</code> folder, as it made the
most sense to have it all in one place. Whenever I had an idea for a new blog
post I would create a file in there with the yaml frontmatter, set the title
and <code>published</code> to <code>false</code> and jot down some bullet points. However since I
didn&rsquo;t want the blog posts I may or may not write to be in the git repo on
GitHub, I never committed them to git. This started to annoy me as I didn&rsquo;t
really have a commit log, backup, etc for my drafts. I also had the whole blog
cloned into my owncloud folder, which meant that it would always try to sync
all the jekyll files and git objects although it wasn&rsquo;t really necessary as I
only really cared about having my drafts everywhere for easy access. So I
decided to move my drafts into a separate repo, that is just pushed to one of
my servers. From there I can just clone it wherever I want, edit, and push it
back up.</p>
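<p>As a minimal sketch of that setup, assuming the server side is just a bare
git repository (all paths and file names below are made-up placeholders, and a
local bare repo stands in for the actual server):</p>

```shell
# Sketch of the "separate drafts repo" setup described above.
# /tmp/drafts-demo and the file names are placeholders; in practice the
# bare repository would live on a remote host reachable over ssh.
set -e
rm -rf /tmp/drafts-demo
mkdir -p /tmp/drafts-demo
cd /tmp/drafts-demo
git init --bare drafts.git        # stands in for the repo on the server
git clone drafts.git drafts       # clone it wherever you want to write
cd drafts
printf -- '---\ntitle: Some Post\npublished: false\n---\n\n- an outline idea\n' > some-post.md
git add some-post.md
git -c user.name=me -c user.email=me@example.com commit -m "add draft: some post"
git push origin HEAD              # push the draft back up
```

<p>From any other machine, a plain <code>git clone</code> of the same repository
then makes the drafts available for editing.</p>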
<p>The way I create drafts in there is still the same. I add a file with the
yaml frontmatter (I actually have a <a href="https://github.com/mrtazz/vim-stencil">vim-stencil</a> template for
it). And then add some bullet points, headlines and ideas I have about that
blog post in there. Basically a small outline which I will use to evolve the
post. I will then most likely not touch it for quite a while. The way I write
most blog posts is that I have the draft with some notes in the repo for weeks
or months and every now and then think about it and come up with some new
things to add and new ways to phrase ideas. Then at some point I just sit down
and write it all down. So my writing flow really happens in spurts.</p>
<p>Once I&rsquo;m happy with the draft, I move it over to the jekyll blog, make some
minor adjustments so it looks good in HTML and publish it by rsyncing the
generated <code>_site</code> folder to my server. I then <code>git rm</code> the file from the
drafts repo and add the URL where it&rsquo;s published in the commit message.</p>
<h3 id="journaling">Journaling</h3>
<p>Another outlet for my writing is journaling. However it&rsquo;s a lot more
infrequent and random for me. The setup is kinda similar, although I don&rsquo;t use
jekyll for it. I basically have a <code>journal</code> git repo that holds a file
<code>current.md</code> where I write down my thoughts for the day. At the end of the
month I move the file into an archive location of the form <code>YYYY/MM.md</code> and
touch a fresh <code>current.md</code> to hold the entries for the new month. This means
I have a setup in the repo that holds a folder per year and a markdown file
per month in each folder. There is also a Makefile to generate a single HTML
file from all entries to make it nicer to browse.</p>
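<p>The end-of-month shuffle is simple enough to script. This is just a sketch of the step described above; the file names match the post, but the demo entry and the temp directory are made up:</p>

```shell
# Sketch of the end-of-month journal archiving step described above.
# File names match the post; the demo entry and temp dir are made up.
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "- wrote a blog post today" > current.md   # demo journal entry
year=$(date +%Y)
month=$(date +%m)
mkdir -p "$year"                 # one folder per year
mv current.md "$year/$month.md"  # archive the finished month
touch current.md                 # fresh, empty file for the new month
```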
<p>I try to write something into my journal every day but it&rsquo;s been one of the
harder habits to keep up. Similarly to how I have a hard time taking notes in
meetings, I&rsquo;m a person who thinks a lot about things. But since it all takes
place in my brain, I never remember to actually write it down. To lower the
barrier to writing journal entries I have created a specific iTerm
profile just for that. It opens a terminal in a new window, and immediately
runs <code>/usr/local/bin/vim -c Goyo /path/to/current.md</code> instead of a shell. Thus
I get a vim session with my journal immediately. I also have an
<a href="https://www.alfredapp.com">Alfred</a> workflow to open iTerm profiles directly. So I can open
Alfred, type <code>itp journal</code>, and get the iTerm session for writing. This
lowers the barrier to journaling even further.</p>
<p>When I&rsquo;m done with the entry, I commit it to git and immediately push. This
way I have my journal available wherever I want it but especially on my phone
to jot things down on the go.</p>
<h3 id="mobile">Mobile</h3>
<p>I don&rsquo;t write a ton on my phone but it&rsquo;s nonetheless a crucial part of my
writing flow. I used to have all my writing things in my ownCloud folder
together with my notes and synced to my iPhone via <a href="http://www.notebooksapp.com">Notebooks</a> and
WebDAV. However, this has become more tedious than I want it to be.
WebDAV is not super fast, and if there is a merge conflict, I don&rsquo;t get all the
niceties of git to resolve it. Plus I don&rsquo;t need it to check all the files
when I really just want to open my journal and add an entry.</p>
<p>So with the move to separate git repos for my drafts and journal I also
started to base my mobile flow more on git. I use <a href="http://workingcopyapp.com">Working Copy</a>
on the iPhone to get access to all my git repos. It&rsquo;s a really great mobile
git client and for journaling I just open the <code>current.md</code> file in the app
directly, then commit and push it back up. The editor is pretty decent and more
than adequate for quickly jotting things down.</p>
<p>For writing longer, actual blog posts I&rsquo;ve come to really like
<a href="http://omz-software.com/editorial/">Editorial</a> for iOS. It&rsquo;s an extremely nice plain text editor with
a great Markdown mode. And since both Working Copy and Editorial support the
iOS share extensions, I can open a file from Working Copy in Editorial and
then save it back to Working Copy for committing the changes to git when I&rsquo;m
done. I&rsquo;m not gonna ditch the laptop for it anytime soon, but it&rsquo;s a great way
to write when I am suddenly in the mood but don&rsquo;t want to get up and get my
laptop. And I even wrote the first third of this blog post completely on the
iPhone.</p>
<p><img src="/images/editorial.jpg" alt="writing on iOS with Editorial"></p>
<h3 id="future-improvements">Future Improvements</h3>
<p>I&rsquo;m really happy with my setup, it works great for me and is based on the
tools I know and trust. It&rsquo;s a very frictionless setup that makes it easy for
me to write down my thoughts and have them available for look up and further
editing.</p>
<p>The thing I definitely want to improve on is to write more. I have a lot of
things on my mind, and whenever I write them down, they become a lot
clearer to me. But I have to remind myself to actually do it. One thing I
want to establish is more of a writing routine, even if it&rsquo;s just a couple
of hundred words every day in the journal. Because writing is a ton of fun and
it&rsquo;s one of the things I really, really enjoy.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/08/31/writing-workflow.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Practical Postmortems at Etsy]]></title>
    <published>2015-08-22T00:00:00Z</published>
    <updated>2015-08-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/08/22/practical-postmortems-at-etsy.html</id>
    <content type="html"><![CDATA[<p>I wrote an article on InfoQ about the way we conduct <a href="https://codeascraft.com/2012/05/22/blameless-postmortems/">Blameless
Postmortems</a> at Etsy. You can check it out <a href="http://www.infoq.com/articles/postmortems-etsy">here</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/08/22/practical-postmortems-at-etsy.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Highly Sensitive Person in Love: Understanding and Managing Relationships When the World Overwhelms You]]></title>
    <published>2015-08-14T00:00:00Z</published>
    <updated>2015-08-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/aron-sensitivepersoninlove-1996/</id>
    <content type="html"><![CDATA[<p>I really liked the book. It was interesting to read about experiences of other
highly sensitive people and to get a view from a psychologist on it. The book
however has a way too touchy-feely style for me, and especially the final
chapters talking about spirituality were a bit much. That being said, I
took a couple of things away from reading it: I was more than once ready
to throw the Kindle across the room because my sensitivity got thoroughly hijacked
by some passages, and I&rsquo;m definitely better informed and more at peace with my
sensitivity than I was before.</p>
<p>I had no idea about the concept of highly sensitive people until I read <a href="http://m.huffpost.com/us/entry/4810794" title="Article about Highly Sensitive Person on Huffington Post">this
article</a>. It has a pretty click-baity headline but it really hit home for
me. So I decided to learn more about it and this book was the most prominent
resource to pop up in my search. It&rsquo;s a really good book with a lot of great
psychological insights and explicit case studies. At times the way high
sensitivity was described was a bit too feel-good for my taste. At other times
I would almost throw my Kindle across the room as the author managed to really
sneak up on and hijack my sensitivity. The book focuses a lot on what usually
goes wrong during childhood for highly sensitive people and makes it a point
to relive memories and traumas through the lens of high sensitivity. This is
a practice I really enjoyed although it felt a bit much to me at times as I
consider my childhood to have been a happy one. On the other hand I started to
do this practice with everyday situations at work to help me understand why I
feel what I feel. I identify as a highly
sensitive person, and the book was an extremely good read to
help me understand better what this could mean for me and my days.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/aron-sensitivepersoninlove-1996/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Leading Snowflakes]]></title>
    <published>2015-08-01T00:00:00Z</published>
    <updated>2015-08-01T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/ellenbogen-leadingsnowflakes-2013/</id>
    <content type="html"><![CDATA[<p>I really enjoyed the book. It has a structure that is very easy to follow and
definitely a quick read. And even though I&rsquo;m not a manager or planning to
become one, there&rsquo;s a lot of actionable advice in there for me as an engineer.</p>
<p>I&rsquo;ve heard about this book ever since it was released and a lot of people I
know speak very highly of it. And they weren&rsquo;t wrong: I basically devoured the
book in a weekend. It&rsquo;s very well written and has a ton of actionable advice
for engineers becoming managers. But I would argue that this description
really undersells it. I have no intention of becoming a manager
at the moment; however, the book was really interesting and helpful for me. I
think it&rsquo;s a great read for anyone looking to grow more into a leadership
position.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/ellenbogen-leadingsnowflakes-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Manage Your Day-To-Day: Build Your Routine, Find Your Focus, and Sharpen Your Creative Mind]]></title>
    <published>2015-07-26T00:00:00Z</published>
    <updated>2015-07-26T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/glei-manageyourdaytoday-2013/</id>
    <content type="html"><![CDATA[<p>This book sparked my interest while I was looking to improve my daily
routines. I was often just starting the day as it happened, which often left me
feeling disorganized, unproductive, and imbalanced. Reading &ldquo;Manage Your
Day-to-Day&rdquo; gave me a lot of ideas for things to try and add to my daily
routine, and for trying to even have a daily routine at all. Something I picked up
again through this book was journaling, and while it has been on and off for
the last couple of months, I really enjoy it. The book was not mind-blowing for
me but I enjoyed reading it and definitely would recommend it if you are
looking for inspiration for your daily routine.</p>
<blockquote>
<p>It takes willpower to switch off the world, even for an hour.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/glei-manageyourdaytoday-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Adventures in Frontend Performance]]></title>
    <published>2015-07-24T00:00:00Z</published>
    <updated>2015-07-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/07/24/frontend-performance.html</id>
    <content type="html"><![CDATA[<p>In my daily job I am far removed from being a frontend engineer. In a previous
job I had spent months optimizing JavaScript to be small enough to be served
by an <a href="http://dl.acm.org/citation.cfm?id=1582405">embedded microcontroller</a> (there&rsquo;s a certain irony in the
fact that I don&rsquo;t have the original paper anymore and thus can&rsquo;t read my own
paper without paying now, but that&rsquo;s a different topic). But those days are
long gone. My main tools today are Chef, PHP, and shell scripts most of the
time. This means that I unfortunately don&rsquo;t have much experience
with structuring web frontends to be fast. But it&rsquo;s one of the things I&rsquo;m
working on getting better at. Fortunately I work with a lot of talented
people. And one of them, my coworker and friend <a href="http://twitter.com/lara_hogan">Lara Hogan</a>,
literally wrote the book on <a href="http://larahogan.me/design/">&ldquo;Designing for Performance&rdquo;</a>. So I decided
to start there, and I bought and read it last week. It&rsquo;s a great, fast read
that gives you a really good introduction to structuring web content to
be fast.</p>
<p>After I was done reading it, I decided to try out what I&rsquo;ve learned and see if
I can make my blog faster. So I installed <a href="http://yslow.org">YSlow</a> and ran it on an
<a href="https://www.unwiredcouch.com/2015/06/08/accounting-the-unix-way.html">example blog post</a>. I don&rsquo;t often have a lot of images in my
posts. It&rsquo;s mostly text and maybe some inline code snippets. So I decided that
this was a good representative post for testing my improvements. And the YSlow
results weren&rsquo;t great. In my current setup loading this simple page resulted
in the following YSlow result:</p>
<pre tabindex="0"><code>12 HTTP requests and a total weight of 157.2K bytes with empty cache
</code></pre><p>At least I now had a baseline to compare my changes to. The book has a very
good chapter on how to clean up CSS and how it can affect page weight and page
load times. So I started there. I didn&rsquo;t really have good structure in my CSS
as it has basically organically grown since 2009. And whenever I wanted to
change something or try out something new, I just added the CSS for it on top
of the existing one. I&rsquo;ve rarely gone back and cleaned things up. This led to
me having 3 CSS files being loaded for every page. The main style declarations
aptly named <code>style.css</code>, a CSS file which contained definitions for all the
code highlighting I&rsquo;m using on my site <code>pygments.css</code> and a third file from
when I started playing with fonts that would declare font-faces and such and
was named - you guessed it - <code>fonts.css</code>. When I added those files it made
sense to me to have them separate. They were taking care of different things,
contained no duplicate code; I was basically treating them like includes in a
programming language. So I took a look at the files and found a lot of old
clutter. I was still loading two fonts which I hadn&rsquo;t used in forever.  And I
also had recently reworked my header but was still loading the font that gave
me icons for popular social media sites which I had previously used for
linking. Together those were already around 75 kB and 3 HTTP requests for
nothing. So I removed the fonts I wasn&rsquo;t using, cleaned up some other CSS that
I had found that was unused and combined the 3 style files I had into 1. And
that already gave me a huge jump in optimization. Just by removing things that
I didn&rsquo;t need and combining CSS files I went to:</p>
<pre tabindex="0"><code>7 HTTP requests and a total weight of 65.4K bytes with empty cache
</code></pre><p>I was already amazed and excited and of course now I wanted to do more. So I
installed a CSS minifier and decided to load only minified CSS. This gave me
another 15 kB in improvements:</p>
<pre tabindex="0"><code>7 HTTP requests and a total weight of 50.0K bytes with empty cache
</code></pre><p>Another thing that is mentioned in the book as a common way to improve
performance is to load images for the size you actually want to display them
at and not bigger. I didn&rsquo;t want to go through all the images I had in some
random blog posts but still do something for my baseline performance. And
since I load my avatar from Gravatar into the header on every load, I looked
at that and saw that I was requesting the image in size 142x142 (Gravatar
gives you the handy URL parameter <code>?s=142</code> for that) but was only displaying
it in 100x100. So I changed the parameter to 100 and squeezed a couple more
kilobytes out of my site:</p>
<pre tabindex="0"><code>7 HTTP requests and a total weight of 47.7K bytes with empty cache
</code></pre><p>Now I was kind of at a stopping point. I had collected all the low-hanging
fruit. My page was only a third as big as it was before and I was down 5 HTTP
requests that didn&rsquo;t have to be done anymore. This was already huge for me.
Looking at YSlow now told me that I was spending another HTTP request for
loading JavaScript from Twitter to embed their social media buttons, and that
this added about 35 kB to my page weight, at this point about 75% of the
total. Of course, as a first course of action, <a href="https://twitter.com/mrtazz/status/622603993215303681">I tweeted about
it</a> and thought about whether or not I really need or want the
share buttons on there. Fortunately <a href="https://twitter.com/atmos">Corey</a> responded to my tweet with
essentially what I was thinking: that I don&rsquo;t really need those buttons for
anything. So I removed the buttons, bringing my total YSlow results to:</p>
<pre tabindex="0"><code>6 HTTP requests and a total weight of 12.4K bytes with empty cache
</code></pre><p>And I was blown away when I actually saw the number. Just by following some
simple advice from the book, I was able to cut the number of HTTP requests in
half. And slim down my page weight to ~8% of what it was before.</p>
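<p>The combine-and-minify step from above can be sketched in shell. The CSS file names are the ones from my setup, but their contents here are made up, and the <code>sed</code>/<code>tr</code> pipeline is only a crude stand-in for a real CSS minifier:</p>

```shell
# Crude sketch of combining three CSS files into one and "minifying" them.
# A real minifier does much more; this only strips comments and whitespace.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'body {\n  color: #333;\n}\n' > style.css           # made-up contents
printf 'pre {\n  background: #eee;\n}\n' > pygments.css
printf '/* fonts */\nh1 { font-family: serif; }\n' > fonts.css
# one file instead of three means two fewer HTTP requests
cat style.css pygments.css fonts.css > all.css
# strip comments, then collapse runs of whitespace into single spaces
sed -e 's,/\*[^*]*\*/,,g' all.css | tr -s ' \n\t' ' ' > all.min.css
wc -c all.css all.min.css
```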
<h3 id="learning-new-things-is-fun">Learning new things is fun</h3>
<p>Optimizing my site has been a lot of fun. Of course if you are an experienced
frontend engineer, all of those things are not surprising and probably the
first things you would try. But since I don&rsquo;t work in that field every day, I was
surprised by the impact such a cleanup and restructuring can make on even a
small site like mine. I had tried to dabble a little bit in getting better at
frontend engineering and performance before. However I had never had a good
introduction to give me a place to start. Learning new things can be
overwhelming and especially if you don&rsquo;t know where to start, everything can
feel like it&rsquo;s probably wrong. Lara&rsquo;s book gave me a great introduction to
all the different topics of frontend performance, and if you are interested in
this as well, I can highly recommend it.</p>
<p>For now I might not be able to squeeze out more performance just from reducing
the page weight. I&rsquo;ve since installed <code>mod_deflate</code> and <code>mod_expire</code> on my web
server to improve transfer size and caching for my site. However I still feel
like structuring HTML and CSS in a clean and fast-performing way is something I
don&rsquo;t have a good grasp on yet. And it&rsquo;s definitely something my site can benefit
from, even if just for more clarity the next time I want to change anything. So this
might be the next thing I&rsquo;ll tackle in learning more about frontend
engineering.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/07/24/frontend-performance.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Designing for Performance: Weighing Aesthetics and Speed]]></title>
    <published>2015-07-18T00:00:00Z</published>
    <updated>2015-07-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hogan-designingforperformance-2014/</id>
    <content type="html"><![CDATA[<p>My coworker <a href="http://twitter.com/lara_hogan">Lara</a> wrote this book last year and it was a lot of fun
watching her process and how she knocked out that book. Since then it has been on
my list of books to read, especially since I tend to shy away from frontend
things in my day job and want to get better at not doing that. The book is a
wonderful introduction to web performance, especially from a design perspective. It
gives very solid technical details on a lot of things like browser rendering
and image formats that I only had very superficial knowledge of before. I
really enjoyed it and the book led me to <a href="https://unwiredcouch.com/2015/07/24/frontend-performance.html">reduce the page weight of this blog
by 92%</a> which
was tons of fun to do as well.</p>
<blockquote>
<p>The largest hurdle to creating and maintaining stellar site performance is
the culture of your organization.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/hogan-designingforperformance-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Feminism Is for Everybody: Passionate Politics]]></title>
    <published>2015-07-13T00:00:00Z</published>
    <updated>2015-07-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hooks-feminismisforeverybody-2000/</id>
    <content type="html"><![CDATA[<p>I’ve known about this book for a while now, but up until early 2015 it was
only available in print. And since I don’t really like owning physical books
and read exclusively on my Kindle and iPhone I hadn’t bought it yet. So when I
found out there is a Kindle version now, I immediately bought it. As expected,
the book is really good and gives a good primer on feminism and the historical
context from the author’s perspective. It reads as less extreme to me than Greer,
which is very much in line with Hooks’ other writing. Definitely highly
recommended for learning more about feminism.</p>
<blockquote>
<p>Feminism is a movement to end sexism, sexist exploitation, and oppression.</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/hooks-feminismisforeverybody-2000/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Open Source Spring Cleaning]]></title>
    <published>2015-07-09T00:00:00Z</published>
    <updated>2015-07-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/07/09/oss-spring-cleaning.html</id>
    <content type="html"><![CDATA[<p>I wrote about our plans to clean up our open source repositories and be good
maintainers on Etsy&rsquo;s <a href="https://codeascraft.com">engineering blog</a>. You can find the post <a href="https://codeascraft.com/2015/07/09/open-source-spring-cleaning/">here</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/07/09/oss-spring-cleaning.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Accounting: The Unix Way]]></title>
    <published>2015-06-08T00:00:00Z</published>
    <updated>2015-06-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/06/08/accounting-the-unix-way.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;m a big fan of simple tools and building blocks. A function of writing code
every day for the last 10 years has been that I feel really comfortable with
plain text files, vim and git. So whenever I can, I try to see if I can base
the solution to a problem on plain text. In most cases it&rsquo;s just so much more
portable than any other format, and I am already at home on the command
line. That way I can use my everyday tools to add, modify, and delete data.</p>
<p>A while ago I wanted to find a better way to keep on top of my finances. They
aren&rsquo;t crazy in any way and I have a very normal, regular, non-exceptional
financial situation as an employed engineer. However I felt like I could get
more data and better insights into everything. This wasn&rsquo;t really on my mind
for a while though. I kept the idea in the back of my head but wasn&rsquo;t actively
looking into any way to make it happen. Then one day I was reading my RSS
feeds, which amongst others include the wonderful <a href="http://usesthis.com/">&ldquo;The
Setup&rdquo;</a> blog. If you don&rsquo;t know it, it&rsquo;s basically an
interview series where people talk about the tools (hardware and software)
they use to get their job done. I like reading about the tools others use and
get inspired to try out different things. And in there I was reading <a href="http://stefano.zacchiroli.usesthis.com/">an
interview</a> with a Debian developer who mentioned working a lot with
text files and git. And he also said he was doing his accounting with git and
<a href="http://www.ledger-cli.org/">ledger</a> and I was immediately intrigued.</p>
<p>I started to investigate the tool and read blog posts about how others were
using it. Ledger actually has a very <a href="http://www.ledger-cli.org/3.0/doc/ledger3.html">comprehensive
documentation</a>, so I started there and read about how to use it, the
basics of <a href="http://en.wikipedia.org/wiki/Double-entry_bookkeeping_system">double-entry bookkeeping</a>, and what kind of
information I could get out of it. I then also found an <a href="http://blog.andrewcantino.com/blog/2013/02/16/command-line-accounting-with-ledger-and-reckon/">interesting
post</a> where someone wrote a tool - <a href="https://github.com/cantino/reckon">reckon</a> - which parses
CSV and formats it into ledger format. It even uses Bayesian machine learning
to suggest accounts to use for each entry, minimizing the work that needs to
be done manually even further.</p>
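<p>To give an idea of what that conversion looks like, here is a made-up bank CSV line and the kind of ledger entry it would roughly turn into (the account names are my own; reckon only learns them from an existing corpus):</p>

```
2015/05/04,MONTHLY RENT PAYMENT,-1200.00

2015/05/04 MONTHLY RENT PAYMENT
    Expenses:Rent                 $1,200.00
    Assets:Checking              $-1,200.00
```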
<h3 id="diving-into-the-deep-end">Diving into the deep end</h3>
<p>So I decided to give it a try and take the upcoming tax return I had to file
as the motivation to get it done before that. I downloaded CSV data from my
bank accounts (and learned that the furthest you can go back is 2 years, no data
before that), installed ledger via homebrew and the reckon rubygem and started
to import data. This was a bit tedious at first, as reckon didn&rsquo;t support
backspacing and thus editing mistyped accounts. I fixed that in the gem and
sent a <a href="https://github.com/cantino/reckon/pull/44">pull request</a> like a good open source citizen and
<a href="https://twitter.com/mrtazz/statuses/573671503029497856">procrastinating software engineer</a>. And after a couple of
hours I had all my data from 2014 and (most of) 2013 in the ledger data
format. I played around with the reporting options and really liked it. It was
super flexible, and I could quickly fix and change things by opening the file in
vim. So I decided to properly structure it and go all in with ledger.</p>
<h3 id="all-in">All in</h3>
<p>I created a git repo with directories for the raw CSV files (in case I needed
to regenerate any data at some point or look something else up) and for each
year since 2013. In those per-year directories I have a file each for checking,
credit card, cash, and opening balances, plus a top-level ledger dat file per
year that includes the appropriate files:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>cassie:accounting<span style="color:#f92672">[</span>master<span style="color:#f92672">]</span>% cat 2015.dat
</span></span><span style="display:flex;"><span>include 2015/opening_balances.dat
</span></span><span style="display:flex;"><span>include 2015/checking.dat
</span></span><span style="display:flex;"><span>include 2015/credit_card.dat
</span></span><span style="display:flex;"><span>include 2015/cash.dat
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>cassie:accounting<span style="color:#f92672">[</span>master<span style="color:#f92672">]</span>% ls -1 <span style="color:#ae81ff">2015</span>
</span></span><span style="display:flex;"><span>cash.dat
</span></span><span style="display:flex;"><span>checking.dat
</span></span><span style="display:flex;"><span>credit_card.dat
</span></span><span style="display:flex;"><span>opening_balances.dat
</span></span></code></pre></div><p>Checking and credit card should be self explanatory as they just hold the
entries for those accounts. Whenever I withdraw money at an ATM, I book it to
an <code>Expenses:Cash</code> account as I expect to spend that money. Otherwise I
wouldn&rsquo;t have withdrawn it. But this also means that I don&rsquo;t have a ton of
visibility into what I spend cash on. That is why I have a file called
<code>cash.dat</code>. When I spend cash on something and remember, I note it down on my
phone in a text file which syncs to my computer. And when I&rsquo;m doing my monthly
accounting I can pull up this file and just write proper ledger data entries
for the contents of this file. I then note that those expenses come from the
account <code>Expenses:Cash</code> to keep everything correct. The next special file
is <code>opening_balances.dat</code>. Because I have a file per year, the data only
reflects postings for that year. In order to still get accurate balances, I
run the equity command (<code>ledger -f ledger-old.dat equity</code>) on the old year&rsquo;s
data and write that into the opening balances file as coming from the account
<code>Equity:OpeningBalances</code>. This is a bit of a hack, but it illustrates a major
advantage of ledger. It doesn&rsquo;t care about what the accounts are named. This
means you can give them names that mean something to you and ledger won&rsquo;t have
a problem with that. The documentation even gives an example for tracking
inventory in the <a href="http://www.ledger-cli.org/3.0/doc/ledger3.html#Accounts-and-Inventories">video game EverQuest</a>.</p>
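<p>Since account names are free-form, a posting can be as specific or as whimsical as you like. A minimal made-up example of a double-entry posting, where the two amounts balance out to zero:</p>

```
2015/04/02 Corner Grocery Store
    Expenses:Food:Groceries        $42.50
    Assets:Checking               $-42.50
```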
<h3 id="monthly-routine">Monthly Routine</h3>
<p>Now with this setup in place, I have a monthly recurring project in <a href="https://unwiredcouch.com/2014/05/13/omnifocus.html">my
OmniFocus</a> to download the CSV for my account and add them to my
ledger data. Once I&rsquo;ve downloaded the file, I run reckon over it to have it
properly format the data and suggest accounts to add them to. Since reckon -
as I mentioned before - uses Bayesian learning to find out what accounts a
posting likely belongs to, it makes sense to have a corpus for it to learn
from which includes all possible accounts. And because I really like
Makefiles, I have one with a simple task in there to generate a big file which
contains all of my postings:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-make" data-lang="make"><span style="display:flex;"><span>SOURCES <span style="color:#f92672">:=</span> <span style="color:#66d9ef">$(</span>shell find . -iname <span style="color:#e6db74">&#34;*.dat*&#34;</span> -mindepth 2<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">corpus.dat</span><span style="color:#f92672">:</span> <span style="color:#66d9ef">$(</span>SOURCES<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>	cat <span style="color:#66d9ef">$(</span>SOURCES<span style="color:#66d9ef">)</span> &gt; $@
</span></span></code></pre></div><p>Now I run reckon with something like</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>reckon -f raw_data/2015/checking05.csv --contains-header -o 201505_checking.dat -l corpus.dat
</span></span></code></pre></div><p>in order to parse the CSV file. Usually when I run this, reckon detects almost
all of my recurring transactions like rent, gas, electricity, internet, subway
and ferry fare. I just have to confirm the account by hitting enter. With this
it takes me about 5-10 minutes to get through a CSV file. After I&rsquo;m done I
might go through the file in vim to make some adaptations. For example, I have a
very generic account named &ldquo;Expenses:Amazon&rdquo; which reckon suggests for
everything coming from Amazon. However since I buy a variety of things
(household items, clothes, etc) on Amazon I open my &ldquo;past orders&rdquo; page on
amazon.com and check my transactions in ledger against it and file them into
more specific accounts. When I&rsquo;m happy with it, I commit my changes to git and
have an up-to-date version of my accounting data. I can then run all the
queries on it to give me some overview of what I was spending money on in the
last month. Ledger is versatile enough that I could spend a lot of time on
explaining all the possibilities. But the simplest way to get started is just
showing the top-level balances, which will give you an overview of Income,
Expenses and Assets (if you have named your accounts like that):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>ledger -f 2015.dat --period-sort <span style="color:#e6db74">&#34;(amount)&#34;</span> balance -M --begin 2015/04/01 --end 2015/05/01 --depth<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span></code></pre></div><h3 id="verdict">Verdict</h3>
<p>I&rsquo;m really happy with this setup so far. It&rsquo;s comparably low tech and low
investment and I can (for the most part) use the tools I know and I don&rsquo;t have
to store my financial data on someone else&rsquo;s server. Still, if I want to do
accounting on a different machine, it&rsquo;s just a matter of cloning the git repo
to it and installing ledger. The import process is not too cumbersome,
although I have to remember when I pay off my credit card from my checking
account for example that I only enter the transaction once as it will show up
in both CSV downloads (once as debit and once as credit). This has caused some
confusion for me in the past when I forgot but generally isn&rsquo;t too bad.</p>
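<p>The reason the transfer only needs to be entered once is that a single
ledger entry already posts to both accounts. A sketch in journal format (the
date, payee, account names and amount here are made up):</p>

```ledger
; Paying off the credit card: one entry, two postings.
; Importing the same payment again from the credit card CSV
; would double-count it.
2015/05/15 Credit Card Payment
    Liabilities:CreditCard      $500.00
    Assets:Checking            $-500.00
```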
<p>I&rsquo;m also not doing any super advanced things with it so far. I&rsquo;ve played with
it&rsquo;s <a href="http://www.ledger-cli.org/3.0/doc/ledger3.html#Visualizing-with-Gnuplot">Gnuplot suppport</a> and ran different queries in different
situations to track down where I actually spent more money than the month
before. I&rsquo;m sure there are more use cases that will arise over time and while
I&rsquo;m no accountant (and probably used some terms wrong in this post) it has
been super interesting to get some more structure and insight into my personal
finances.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/06/08/accounting-the-unix-way.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[There&#39;s No Such Thing as No Project Management]]></title>
    <published>2015-05-04T00:00:00Z</published>
    <updated>2015-05-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/05/04/project-management.html</id>
    <content type="html"><![CDATA[<p>Project management. Every engineer seems to loathe the term and also what it
describes. It has that word <em>management</em> in there. It&rsquo;s different than code.
It&rsquo;s not code. So naturally whenever this comes up, all the engineers make a
joke, shrug and go hide behind their editors. I&rsquo;ve been there, I&rsquo;ve done that.
However, over the years I have realized that this is not just stupid but
actively works against making software projects successful. I mostly had an
aversion to the overloaded, overused and useless definition of project
management that glorifies charts, plans and deadlines over actually
organizing and making sense of the work and the processes to get it done.
Because the truth is you are already doing project management. Yes you! The
software engineer!</p>
<h3 id="you-had-the-project-management-all-along">You had the project management all along</h3>
<p>Every time you start to write code, even for some fun side project, you start
to think about the different components that will make up the whole thing. You
start to form a mental model of all the high level parts that comprise the
finished application. You plan out some rough course of action for yourself.
Which parts you want to tackle first. Which things to stub out and which
things to punt on for later. You just (most of the time) don&rsquo;t write them
down. But as you go on, you think about what the first workable version will
be able to do. Then you think about the next one, maybe refactor some things
to accommodate new features. Maybe you write down some notes for yourself or add
some <code>//TODO:</code> lines in the code so you know what to do when you come back to
it. But the important part here is that you&rsquo;re planning the application.
You&rsquo;re basically already doing 85% of what project management for a small to
medium software project is all about.</p>
<p>So what&rsquo;s the difference? Well really mostly thinking about the structure of
your project on a higher level and writing things down. To be clear: It&rsquo;s ok
if you only want to write code. And you can be or get really good at it.
However only wanting to write code is like saying you only want to hammer wood
together. Sure, there is beauty in how you do it. There are good carpenters
out there who are able to do woodwork like nobody else. And everybody wants to
have such a craftsperson on their team when it comes down to doing the work.
However this will only be useful up to a certain level.  And that&rsquo;s absolutely
ok. But if you want to level up as a carpenter you will need to understand
what it is that you&rsquo;re actually building.</p>
<p>And that is very much the same for a software engineer. In my mind <a href="http://www.kitchensoap.com/2012/10/25/on-being-a-senior-engineer/">being a
senior engineer</a> also means that you understand the problems
you are solving past the perfect implementation of a binary sort. That you
understand that writing code is a means to an end and not the ultimate
purpose. That you understand what problem you are solving and the environment
it will exist in so you <a href="https://www.unwiredcouch.com/2015/01/28/building-a-plant.html">plan for it</a> accordingly. This means
a good chunk of &ldquo;non-coding time&rdquo;. It means that you understand how to break
the problem apart and how it would speed up the implementation if you suddenly
got 2 or 3 more engineers on the project. Or if it wouldn&rsquo;t help at all. It
means that you understand how the project could continue if you suddenly
<a href="https://twitter.com/mrtazz/status/593835726858518528">decided to go on vacation</a> because you know you don&rsquo;t want
to <a href="https://twitter.com/mrtazz/statuses/557697168010924033">be a spof</a>. It means that you have things planned out so someone
else could <a href="https://twitter.com/mrtazz/status/590506541436039169">take it over</a> or even make the whole project happen
without you. It means understanding what existing or future work would be
great for a more junior engineer on the team to level up on and plan work so
it&rsquo;s possible for them to do it. And it means writing things down and
communicating them.</p>
<p>This can be as easy as creating a project in your JIRA instance and adding a
bunch of subtasks. It can be Gantt charts if you are so inclined and want to
show dependencies better. It can be a markdown document laying out all the
bits and pieces you have thought about so far.</p>
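<p>As an illustration, such a markdown document doesn&rsquo;t have to be more than a
skeleton; the project name and tasks here are invented:</p>

```markdown
# Project: CSV importer

## Goal
Replace the manual import with a nightly job.

## Milestones
- [ ] v1: parse and validate the CSV (can be stubbed against fixtures)
- [ ] v2: write results to the database
- [ ] v3: alerting on failed imports

## Open questions
- Who owns this after I go on vacation?
```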
<h3 id="you-are-not-your-project">You are not your project</h3>
<p>All of these things might feel weird at the beginning. All you want to do is
write code, find the perfect abstraction, make it beautiful. You will suck at
this at first because you are not used to it. But at the same time you will
suddenly see others implementing things you want to exist, doing work for you
and learning while they do it. And maybe even take a whole project over from
you and finish it. And it will feel weird again. You will have this feeling of
not having finished something. Of only going 80% there. Of only having done
the &ldquo;soft&rdquo; parts. But in reality you just transferred a ton of knowledge. You
made it possible for someone else to work on something that previously only
existed in your head. <a href="http://en.wikipedia.org/wiki/Egoless_programming">You are not your projects</a>. I&rsquo;ve <a href="https://twitter.com/mrtazz/statuses/467769106780127232">said
before</a> that you need to capture ideas for others to work on.
It&rsquo;s the only way to scale yourself. And it frees up your time to work on
other things. And even if you end up working on the project all by yourself
(which is less and less likely as your organization grows), there will be a
plan for others who are interested to follow along with what&rsquo;s going on. There
is clear communication of what&rsquo;s in progress and what the current state of
things is. And other engineers can learn from your example. Because suddenly
you&rsquo;re doing project management. And it&rsquo;s not even that weird. You have been
doing it on some level all along. And you should, it&rsquo;s part of your job. As an
engineer you understand best how work gets done. So you are in the perfect
position to plan out the structure of your projects. And there is no such
thing as &ldquo;no project management&rdquo; anyways. You can only decide to do it badly
or try to do it well. And seeing all the benefits of doing it well come to
life is so much more fun.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/05/04/project-management.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[First month with the Spark Notebook]]></title>
    <published>2015-03-18T00:00:00Z</published>
    <updated>2015-03-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/03/18/spark-notebook-omnifocus.html</id>
    <content type="html"><![CDATA[<p>When the <a href="http://www.thesparknotebook.com/">Spark Notebook</a> got announced on Kickstarter I was
first a bit hesitant as I have struggled before balancing digital and paper
notes. Nonetheless I always felt something was missing from the way
I was currently going about planning my day and taking notes. My todos are all
neatly organized <a href="http://www.unwiredcouch.com/2014/05/13/omnifocus.html">in OmniFocus</a> and I did most of my note taking in
VIM as text files in a folder structure. This works quite well and I can even
sync them with my phone. However I always felt too restricted when taking
notes without being able to scribble. There are things that just somehow feel
better when written by hand. I have tried to bring a Moleskine notebook with
me all the time but somehow I rarely actually take notes. So when I saw the
Spark Notebook I was somewhat sceptical but the whole structure and design
seemed really thought through and as if it could actually fit into my setup
and fill the gaps I was having. So I decided to <a href="https://twitter.com/mrtazz/statuses/535621491392806912">back the
campaign</a> and early February I got my 2 spark notebooks for
2015 and started using them.</p>
<h3 id="how-i-use-it">How I use it</h3>
<p>The arrival of the notebooks coincided with the first Monday of February for
me. So that Monday morning I sat down to start to fill out my yearly, monthly
and weekly goals. The notebooks had actually arrived on the weekend already.
However other than looking at them quickly I didn&rsquo;t touch them but spent the
weekend researching the intended use of the notebooks on the
<a href="https://popforms.com">popforms</a> site. I read about how to set the <a href="https://popforms.com/how-to-create-a-yearly-theme/">yearly
theme</a>, <a href="https://popforms.com/getting-things-done-my-monday-morning-routine/">create goals for the month</a> and how to
<a href="https://www.kickstarter.com/projects/katemats/spark-notebook-a-place-for-your-life-plans-and-gre/posts/1121110">plan your week</a>. This first review probably took a whole hour or
so for me because I had to do the whole thing. But it was actually a pretty
good exercise and a great way to reflect a bit more about the things I have on
my todo list. Since I&rsquo;ve written my original blog post about OmniFocus, I&rsquo;ve
added a couple of useful perspectives and one of them is for planning my weeks
and days. It&rsquo;s basically modeled after <a href="http://simplicitybliss.com/blog/omnifocus-perspectives-redux-planning">this blog post</a> and
shows all projects that are not on hold or blocked, from which I then pick things to
work on every day. This perspective was the basis for my monthly and weekly
planning with the spark notebooks. It reflects all my duties and obligations.
For this first review, after I set my yearly goals, planned my month to be in
line with those goals (in an ideal world your work projects align with your
yearly goals, but in reality it&rsquo;s a healthy mix of things you have to do and
things that bring you closer to your long term goals) and then planned my
weekly tasks according to my monthly plan, I went on to <a href="https://popforms.com/how-to-do-time-blocking/">time
blocking</a>. I grabbed my calendar and put all the meetings in the
time blocking view of the spark notebook. I then went ahead and filled all
slots (or most of them) where I didn&rsquo;t have meetings with blocks for things I
wanted to get done from my weekly goals. And after this hour or two of
planning I went into my week. The following weekly and monthly reviews were
much quicker as I didn&rsquo;t have to do everything from scratch again. To the
point where my weekly reviews take me between 15 and 30 minutes right now.</p>
<p>A somewhat big surprise was however that it turns out I can&rsquo;t plan my whole
week with time blocks in advance. I did that for the first two weeks and every
time interruptions, unplanned work and meetings destroyed my carefully laid out
plan. So now I time block the first 2 or 3 days of the week on Monday and then
do it again on Wednesday for the rest of the week. And while doing that I
literally put those blocks into my calendar as well so people see that I&rsquo;m
working on something during that time and that no meetings should go there if
possible. I also stopped putting meetings in the spark notebook outline as it
was somewhat a double maintenance of the same thing. So the calendar for me
holds the daily outline of meetings and blocks of time where I wanna do work.
And the notebook tells me how I have planned to spend those time blocks. This
has worked pretty well for me so far and I definitely have more structured
time than I used to.</p>
<p>Another great tool in the notebook are the meeting notes templates. They give
you a structured form in which to write your notes down. Although I have some
problems with this feature (see below) I&rsquo;ve really come to like them for
facilitating PostMortems. They are great for taking notes during the
reconstruction of the timeline and the &ldquo;Follow Up/Next Steps&rdquo; box is perfect
for jotting down remediation items before transferring them to Jira. I don&rsquo;t
use the &ldquo;Main points&rdquo; box as it doesn&rsquo;t make a ton of sense for PostMortems
and I still rarely take notes in other meetings (something I definitely want
to get better at).</p>
<h3 id="balancing-it-with-omnifocus">Balancing it with OmniFocus</h3>
<p>A point I was curious about when I ordered the notebook was how this could be
balanced with my OmniFocus setup. I rely heavily on OmniFocus and everything
that needs to get done has a place in there. And I really didn&rsquo;t want to end
up doing duplicate work. So for the first week or two it felt kinda weird. I was
a bit confused about where to look for what to work on next and which things
to track where and in which granularity. I started out with a pretty detailed
time blocking setup in the notebook and followed that one closely. But that
turned out to be too much. I found my natural balance there to put the higher
level tasks/projects in the time blocks and then have the break down of those
in OmniFocus. This means I can have the overview of what I am supposed to work
on in a given time slot in the notebook and then just open OmniFocus to see
what the next important part is for that project. Thus I still do my weekly
review in OmniFocus and then go on to plan the week in the spark notebook
picking things from the OmniFocus &ldquo;Plan&rdquo; perspective.</p>
<p>The natural balance there is that the spark notebook gives me the higher level
overview of my plans that I was always missing in OmniFocus. It&rsquo;s really
great software for structuring lists and todos and breaking projects down into
smaller things. But I have never figured out a great way to plan the higher
level in there. And that&rsquo;s where the spark notebook fits in perfectly for me.
So when I start to work in a time blocked slot, I check the notebook for what I
have noted down to work on during that time and then check OmniFocus for the
next action on that project. This sounds a bit more tedious than it really is,
often enough I remember what I allocated the time block for and don&rsquo;t need to
open the notebook and even if I do, it&rsquo;s not a ton of overhead.</p>
<h3 id="short-comings-and-things-that-dont-work-for-me-so-far">Shortcomings and things that don&rsquo;t work for me (so far)</h3>
<p>After using the spark notebook for a couple of weeks now I have found a couple
of things that unfortunately didn&rsquo;t work that well for me or that I was
missing. None of them are a dealbreaker and I wasn&rsquo;t expecting everything to
make perfect sense, since it&rsquo;s probably impossible to make something work for
so many people with different work and life styles.</p>
<p>The first thing that I was missing was more bookmarks. The spark notebook
comes with 2 bookmarks to quickly find pages. I&rsquo;m using one to mark my weekly
time block table and the other one to mark the position of where I left off
with meeting notes. However I would love to have at least two more to be able
to quickly find the monthly overview and the project pages. Maybe even three
to also quickly find the scribble pages at the end. And I would love it if the
spark notebook also had a Moleskine style pocket in the back to put smaller
cards and notes in.</p>
<p>Speaking of the meeting notes, I&rsquo;m really enjoying the layout and it&rsquo;s
extremely good for taking structured meeting notes. However the fact that
there is only a limited number of them means that I&rsquo;m constantly trying to
decide if a meeting is important enough to &ldquo;waste&rdquo; a meeting page if there is
a chance that I&rsquo;m not taking notes at all (did I mention that I&rsquo;m a poor
notetaker?). Especially with the anxiety to run out of meetings notes in my
spark notebook and then having to bring a second notebook to meetings.  I
don&rsquo;t think this is something that can be fixed within the spark notebook but
I have to be ok with.</p>
<p>I have also yet to find a use case for the project planning pages and the
scribble/free use pages at the end. My problem with the project planning pages
is that it&rsquo;s easier for me to keep that part in OmniFocus (at least the task
breakdown; I could definitely benefit from doing a more formal write up of the
goals and notes) and easier to find, as I don&rsquo;t have a bookmark left
over to use for the project pages. Kind of the same goes for the free use
pages at the end for me. I haven&rsquo;t really used them yet as I can&rsquo;t bookmark
them and I&rsquo;m afraid of running out.</p>
<p>The 30 day challenge is something I&rsquo;m really torn on. I love the idea but it&rsquo;s
been really hard for me to follow. I&rsquo;m not sure if it&rsquo;s because it&rsquo;s a bit out
of sight in the weekly plan (maybe I should account time for it in the time
blocking more) or something else. But something I want to try there is to
always mention the challenge as another weekly goal in the plan and see if
that helps.</p>
<p>As I mentioned above, I also had to adapt my time blocking routine to do it
twice a week as planning all time blocks for the week in advance doesn&rsquo;t work
for me. I have so far also written down 6-7 goals per week as this seems to be
the number of things I can actually find the time to work on. I don&rsquo;t always
finish all of them and I might try to break them down more in the future.</p>
<h3 id="verdict">Verdict</h3>
<p>While I haven&rsquo;t yet fully come to use all the features the spark notebook
provides, it&rsquo;s really been a great addition to my planning tools. Especially
since I&rsquo;ve always missed a higher level planning overview with OmniFocus, this
is where the notebook fits in great for me. It gives me a good sense of the
higher level things I need to do and whether or not I have allocated time for
it appropriately on a weekly level. I use both tools to plan work and non-work
related things and the combination of the spark notebook and OmniFocus has
definitely become <a href="https://twitter.com/mrtazz/status/577961903995113472">crucial to my productivity</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/03/18/spark-notebook-omnifocus.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Deployment is Unix]]></title>
    <published>2015-02-23T00:00:00Z</published>
    <updated>2015-02-23T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/02/23/deployment-is-unix.html</id>
    <content type="html"><![CDATA[<p>Over the last 3 years I&rsquo;ve worked a lot on <a href="https://github.com/etsy/deployinator">Etsy&rsquo;s deployment
system</a> (we&rsquo;ve recently <a href="https://codeascraft.com/2015/02/20/re-introducing-deployinator-now-as-a-gem/">brought the Open Source version back
into sync with our internal changes</a> and are running on the
public version now as well). It&rsquo;s at the core of our development process as
all development is framed in the context of continuously deploying small
changes to the website. And the process of putting in feature flags and always
committing to master follows from that. Deployinator is a Sinatra/Ruby
application that executes Bash scripts and commands in the background. It has
two buttons - for staging and production - that run the (shell) commands to
execute a list of deployment tasks. The usual tasks include refreshing the git
checkout on the build box, building/minifying JavaScript and CSS, compiling
templates, and rsyncing code to all the web servers (with our <a href="https://codeascraft.com/2013/07/01/atomic-deploys-at-etsy/">atomic
deploys</a> there is also some symlink flipping involved). But
that&rsquo;s it, it&rsquo;s a very simple concept.</p>
<p><a href="https://speakerdeck.com/mrtazz/the-road-to-success-is-paved-with-small-improvements?slide=68"><img src="/images/deployinator-ruby-bash.png" alt="deployinator - ruby in the front, bash in the
back"></a></p>
<p>Of course the overall application has a lot of features. And they keep growing
and changing as we figure out how a growing engineering team is using it. As
we have remediation items coming from PostMortems. And as more teams need and
add more deployment stacks for slightly different applications. The original
version of deployinator had an execution model where all commands were
executed in a streaming manner within an HTTP request. That meant we had to
configure the correct output buffering, had a long running request doing the
work, and generally had a somewhat confusing setup where we often weren&rsquo;t sure
what would happen when you close your laptop lid in the middle of a deploy. We
also started to run into problems where the deploy would often break with SSH
broken pipe errors (all commands are run over ssh) in the middle of a run. We
tracked it down to an oddity in TCP behaviour between modern versions of OSX
and Linux.  And we decided that it was time to move the deploy run out of the
HTTP request. We thought about different ways of doing that and prototyped a
couple of things. And then one day while working on one of the prototypes of
the new deployment model I took a step back and realized that I was basically
trying to <a href="https://twitter.com/mrtazz/statuses/380547968174415872">reimplement OS process management in Ruby</a>. And this was
not what I wanted Deployinator to be. Deployinator is <strong>UNIX</strong>. So a
deployment is now done by <a href="https://github.com/etsy/deployinator/blob/master/lib/deployinator/app.rb#L183">forking into a separate process</a>, setting the
process title to the stack and stage name and letting it run. <code>ps</code>, <code>kill</code>,
<code>nice</code> all still work. If you need to log into the deployment server and
figure things out, you can still use the tools you use every day. The rest of
Deployinator also always has been very UNIX inspired. The deployment process
runs commands over ssh and distributes commands to multiple machines via
<a href="https://www.netfort.gr.jp/~dancer/software/dsh.html.en">dsh</a>. All deployment output is written to a log file. The log file is
tailed by a websocket server to present it back to the web application. The
log in the web app shows all output of what the shell commands are doing. If
the commands write to STDERR, Deployinator shows it in red and bubbles it up
to a separate error log. This means you can write your deployment commands in
the well known UNIX style: info goes to STDOUT, errors go to STDERR. In
addition Deployinator also comes with a command line tool to kick off any
deploy without needing a working web server.</p>
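<p>The fork model described above can be sketched in a few lines of Ruby. To
be clear, this is not Deployinator&rsquo;s actual code; the function name and title
format are made up, and <code>Process.setproctitle</code> requires Ruby 2.1 or
newer:</p>

```ruby
# Hedged sketch of a fork-based deploy runner, not Deployinator's real code.
def run_deploy(stack, stage)
  pid = fork do
    # Rename the child process so ps/kill/nice can find it by stack and
    # stage, e.g. "deploy: web/production".
    Process.setproctitle("deploy: #{stack}/#{stage}")
    # ... run the actual deployment commands here ...
  end
  Process.detach(pid) # reap the child in the background, no zombies
  pid
end
```

<p>Because the deploy is just an ordinary OS process after this, everything
the post mentions - inspecting it, killing it, renicing it - comes for free.</p>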
<p>And in my opinion this is how it should be. The actual steps of how your
software gets deployed will always be a little different. You might run a
Rails app instead of a PHP application. You might have a compiled binary that
needs to be shipped or you will have to restart services. You might git pull
on the servers directly instead of rsyncing files over. But there is always
the operating system as the common denominator (or almost always). And by
using that foundation in your tooling you already have a common ground when it
comes to understanding and debugging what your deployment system does. And
there are a lot of existing tools, like rsync, git or ssh which you can reuse
and leverage. There is also a <a href="https://gist.github.com/atmos/6631554">great response</a> by <a href="https://twitter.com/atmos">Corey</a>
about how the GitHub deployment system works. And the final paragraph there
is really what I love most about it:</p>
<blockquote>
<p>I think people would be underwhelmed by the technology and implementation though. It&rsquo;s just a bit of ruby, UNIX, and HTTP. It&rsquo;s not pushing the boundaries of computing, it just chugs along doing its job so we don&rsquo;t have to.</p></blockquote>
<p>And it really doesn&rsquo;t have to be complicated. Deployment can start with a
simple shell script. And then you can wrap it in a web frontend. Or an IRC
command.  Or an iPhone app. But at its core it&rsquo;s still manipulating files and
putting them on a computer. Deployment is still UNIX.</p>
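<p>To make that concrete, here is what such a starting point could look like.
The host, paths and exact commands are invented, and the script only echoes
its plan rather than executing it, so nothing gets deployed by accident:</p>

```shell
#!/bin/sh
# Minimal deploy sketch in the spirit of the post; host and paths are
# illustrative assumptions, not a real setup.
set -eu
HOST="web1.example.com"
RELEASE="/srv/app/releases/$(date +%Y%m%d%H%M%S)"

run() { echo "+ $*"; }   # change to: run() { "$@"; } to really execute

run git pull --ff-only                               # refresh the checkout
run rsync -az --delete ./ "$HOST:$RELEASE/"          # ship a new release dir
run ssh "$HOST" ln -sfn "$RELEASE" /srv/app/current  # flip the symlink
```

<p>Wrap something like this in a web frontend, an IRC command or a cron job
and you have the beginnings of a deployment system.</p>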
]]></content>
    <link href="https://unwiredcouch.com/2015/02/23/deployment-is-unix.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Whole Woman]]></title>
    <published>2015-02-08T00:00:00Z</published>
    <updated>2015-02-08T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/greer-wholewoman-1999/</id>
    <content type="html"><![CDATA[<p>I started reading this book in 2014 and finished it early 2015. I overall
liked it and it was really good in giving me different ways to think about
feminism and how the whole system works together to enable sexism and
exploitation. It&rsquo;s also a good resource to understand better how closely
related feminism and capitalism really are. However it comes with a really
serious trigger warning. Germaine Greer is known to have very
transphobic/cissexist views and this book is no exception. It is restricted to
one chapter but those opinions - which I don&rsquo;t share at all - are definitely
in there. So if this is a trigger for you, it&rsquo;s probably better to skip this
book.</p>
<blockquote>
<p>The pattern of devaluing women&rsquo;s contribution is as old as human
civilization</p></blockquote>
]]></content>
    <link href="https://unwiredcouch.com/reading/greer-wholewoman-1999/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[You Shouldn&#39;t Have To Ask For Forgiveness]]></title>
    <published>2015-02-04T00:00:00Z</published>
    <updated>2015-02-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/02/04/forgiveness.html</id>
    <content type="html"><![CDATA[<p>A couple of days ago I was part of a private email thread about the popular
Grace Hopper quote <a href="http://en.wikiquote.org/wiki/Grace_Hopper">&ldquo;It&rsquo;s easier to ask forgiveness than it is to get
permission.&rdquo;</a> and the somewhat related <a href="https://medium.com/@katelosse/the-unbearable-whiteness-of-breaking-things-521cb394fda2">blog post</a> by <a href="https://twitter.com/katelosse">Kate
Losse</a> (please go and read it first, it&rsquo;s really good). I got a
helpful and friendly nudge to put the response I wrote on here, so here it is
in a slightly edited version to fit the format:</p>
<p>I have so many opinions on that and I hope the following brain dump makes
sense/helps.</p>
<p>I should start off with the disclaimer that I think there is something to that
quote but it&rsquo;s been completely repurposed to serve as a backwards
justification for a lot of things. I think the variant <a href="http://en.wikiquote.org/wiki/Grace_Hopper">&ldquo;If it&rsquo;s a good idea,
go ahead and do it. It is much easier to apologize than it is to get
permission.&rdquo;</a> is a much better version (although still problematic) of this
quote. But I highly dislike the often quoted form together with its corollary
&ldquo;move fast and break things&rdquo; which basically has the vast majority of the same
problems.</p>
<p>So why do I dislike it? First of all, because it&rsquo;s arrogant and
disrespectful. It has the implication that rules don&rsquo;t apply for some (for a
certain value of some) people and that it is in their judgement to decide
what applies to them. It also implicitly means, because (usually though not
always) rules are made to protect/help people, that what you want to do is
more important than protecting/helping. I&rsquo;m probably extremely biased because
I work in infrastructure, where a large part of the work is maintenance. But in
computering, what following this rule most likely means is hacking something
together that works and then figuring out later how to maintain it and who will. And in
that regard it often comes down to upholding the romantic VC notion of a 10X
engineer/lone wolf programmer who is so genius that you have to get everything
out of their way because they can change the game in an instant. That they
don&rsquo;t have to communicate, follow rules, or workflows, because their beautiful
mind justifies everything. Another thing I highly dislike.</p>
<p>The next problem I have with this statement is that it is so ambivalent that
it doesn&rsquo;t really mean anything. And, as so often, it can only be verified in
hindsight. The article you linked really pinpointed one of the major problems
there. In order to be granted forgiveness, you are betting on &ldquo;the authority&rdquo;
(this could be your manager, execs or tech leads, or even the police) to turn
a blind eye to something you did or even praise you for breaking the &ldquo;law&rdquo; for
making things better. This usually is only the case if you are a member of the
same race, culture, class, group as those you will have to ask for
forgiveness. Which doesn&rsquo;t work well for everybody. It&rsquo;s also important here,
that - if I&rsquo;m not mistaken - the quote comes from a time when Grace Hopper
was almost retired and already an accomplished Rear Admiral. The power and
influence that comes with such a rank shouldn&rsquo;t be neglected. And I highly
doubt that the same thing worked when she was a Sea(wo)man or Petty Officer.</p>
<p>That being said there is something to that quote. But as I said in the
beginning, I see it more in the context of the variant quote. And more
importantly in the context of <a href="http://en.wikipedia.org/wiki/Efficiency%E2%80%93thoroughness_trade-off_principle">efficiency thoroughness trade-off</a>. A lot of
times you can&rsquo;t ask everyone for permission to do something because it takes
too much time and doesn&rsquo;t make sense. Especially when it comes to computering
there is often a lot of merit in trying to get a prototype in place so there
is a concrete thing to talk about. It&rsquo;s also often worth it to only bounce
ideas off of a handful of people before trying it out instead of getting a
formal review and the exec&rsquo;s agreement to do it. But with all of this the
impact if it&rsquo;s a bad idea has to be taken into account. If everybody runs off
doing their weird ideas, we likely would have chaos. At the same time, if
everybody spends their day getting permission for their work, there won&rsquo;t be
any work getting done. Ideally we trust our colleagues to know what is
needed to bring things forward. That is why there are always tendencies to
reduce bureaucracy and empower individuals. But I don&rsquo;t think this means you
should do things where you have to ask for forgiveness. Because if you have
to, you likely made someone else&rsquo;s day pretty miserable.</p>
<p>I hope this was a somewhat coherent write-up and answers at least some of the
questions you had. I&rsquo;m also super happy to be proven wrong here since this is
very likely a pretty narrow view on things.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/02/04/forgiveness.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[You&#39;re building a Plant]]></title>
    <published>2015-01-28T00:00:00Z</published>
    <updated>2015-01-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/01/28/building-a-plant.html</id>
    <content type="html"><![CDATA[<p>As an infrastructure engineer you&rsquo;re building a plant. Not literally of course,
but it&rsquo;s also not too far off. There are actually a lot of parallels to
building a chemical plant. I used to work for a chemical company a couple of
years back and I have thought a lot about the similarities ever since I
started working at Etsy and, with that, on infrastructure for a running website.
This is why I decided to write down how I think about IT infrastructure
projects. However, as it&rsquo;s been years since I last set foot in a chemical plant,
most of the things I&rsquo;m gonna mention and explain are probably outdated and
might be handled differently now. So take what I say with a grain of salt and
don&rsquo;t run off trying to build a chemical plant with this, please.</p>
<h3 id="building-a-plant">Building a plant</h3>
<p>The general setup of a plant is most often pretty structured at a high level.
You have a process that someone invented at some point. You burn sand or
combine acids to create a new product that has desirable properties.
Then you probably spent a lot of time improving and refining the process and
making the actual plant work better (this is where the vast amount of IP and
patents comes from). And at some point you end up with a pretty good and
structured plan for how to repeat it all over the world. Of course the process
refinements never stop, but they get a little smaller over time.</p>
<p>Then at some point the time comes to build a new one, maybe at a new location,
maybe at an old one. But you as the engineer (or project manager at that
point) get told that a new plant is needed. You have likely done that many
times before. So you grab the documentation, maybe a project plan
template for it, and you start planning. This is really where the first
important parts come together. A plant is built by tens or hundreds of people
all in all. But at this stage you have to make sure you&rsquo;re getting the right
people on the project for each component of the plant. Obviously there will
have to be some buildings. So you go to Facility Management and ask for
an engineer to work with you on the project. You don&rsquo;t have to know how to
construct buildings yourself, but it&rsquo;s an important part of the plant so you
get an engineer you can delegate the work to. Of course a dark, empty building
isn&rsquo;t that helpful. We are gonna put a ton of machinery in there. And all of
it needs power. So you&rsquo;re gonna get an electrical engineer on the project to
plan out the electrical infrastructure and hook the plant up to power. Someone
also has to actually install all that machinery, make sure you&rsquo;re choosing the
right parts and devices, and work on the actual construction. This means you
gotta get an engineer to supervise that. Once the plant is running there are
a lot of things that constantly need regulation. Water flow, temperature, air
flow, all those things need to be controlled and automated. You&rsquo;ll get an
automation, measurement and control engineer to take care of this, of course.
And in the end this is also about a chemical process, so a chemical engineer
will also be on the project. With all these engineers on board, it&rsquo;s your job
to keep them all on the same page. You have to set up meetings, establish
communication and make sure everyone is included on updates, since changes
might influence their area of work.</p>
<p>This is only an exemplary rundown of what goes into building a plant.
Depending on what kind of engineer you are and the size and complexity of the
plant, you might be doing some of those things yourself. But for a big project
you might also just be the technical project lead and delegate all of those
things to others.</p>
<h3 id="building-infrastructure">Building infrastructure</h3>
<p>So let&rsquo;s talk about the other kind of infrastructure project. The kind where
computers are involved. Because they aren&rsquo;t that different. Sure, they are
often much cheaper; you rarely get to work on a giant multi-million dollar
project. It is also much easier and cheaper to experiment if you&rsquo;re writing
software. You can experiment with things before the whole project is done,
change direction much more quickly and generally see intermediate results with
less hassle. And you usually don&rsquo;t have to manage tens of people and
contractor companies on the project. On the other hand, you are usually trying
something new, so you don&rsquo;t have the security of having done this multiple
times before. You have to come up with a lot of things for the first time as
you are trying to solve problems with this new piece of infrastructure (this
is why it might also be a good idea to write down a <a href="http://www.d2fn.com/2013/01/28/functional-specifications-for-infrastructure-engineers.html">spec for your project</a>, to give you a better
picture of what parts are actually involved).</p>
<p>But the big thing both types of projects have in common is the collaboration
and delegation part. You most likely won&rsquo;t have to deal with facility
management, construction and maybe not even electrical power in your project.
But there are still a lot of parts. The software you plan to write or the
service you want to introduce is probably gonna run on some form of computer.
And although it has become way easier to spin up a virtual machine or
commission some hardware, you probably want to at least talk to an ops
engineer about it. There could be different classes of machines to choose from
that have proven to work better for different workloads. The requirements for
hardware (virtualized or real) could change if the scope or implementation
details of the project change. It could be that new machines are automatically
added to the monitoring system unless you tell it not to. And speaking of
monitoring: there should be some. And generally there is a lot of knowledge in
Ops about monitoring things. You also want to automate the setup for your new
service, write some Chef or Puppet to make it smooth to create new instances.
Which is probably another thing your ops team can help with. So ask for an ops
engineer to be on your project. Some infrastructure services also need to
persist data. Maybe you need a cool new database, although likely there is already an
existing place where you can put data. Either way, this is something an
Ops engineer (or DBA) can help with as well. As soon as you access and write
data, call other services, maybe need some form of authentication or have a
naive implementation where you shell out in your PHP code, you should have a
security engineer on board to make sure it doesn&rsquo;t end badly. Even if you
don&rsquo;t think you need one, having someone from security on board for the project,
or maybe even just to review code, changes a lot. Suddenly it&rsquo;s not that weird
service anymore that someone put into production. You get code review from a
very different angle and have the good feeling of everything being alright. Of
course your new thing also has a ton of tests. Unit tests are the simple thing
here, as you can probably hook them up to your existing CI infrastructure.
However there might be integration and end-to-end tests that you want to
create to increase confidence in changes when someone else (and also you)
works on the project. So when you start off the project, include someone from
your testing/QA/CI infrastructure team. You want to increase confidence in
changes with your new piece of infrastructure and engineers working close to
testing can definitely help. And speaking of confidence, there should be a way
of making things work in development as close as possible to how they are in
production. There should be a way to run it on a VM, on your laptop or some
other way where you can be sure changes you make in development work in
production. If you have a team that takes care of those things, get an
engineer from that team involved. Make sure they know that you care about
things working in development.</p>
<p>Recently, with the increased focus on cross-team collaboration, there is a
good chance that you can do a lot of those things yourself. And you should, it&rsquo;s a
great way to learn (although that doesn&rsquo;t mean you shouldn&rsquo;t talk to the
engineers that work in those areas every day). But you can&rsquo;t do everything
yourself, so depending on the scale of the project and your familiarity with
all those areas, there will have to be delegation. And not all projects touch
all of those things. You might not have data to persist. Or you already have
an internal auth solution that you can plug in. However, when in doubt it&rsquo;s
always better to err on the side of caution and include more people, or ask
them if they think the project would benefit from it. The important part is to
take the time to talk to people and let them know you want their input on the
project or that you want them to work on the project as well. Then create a
mailing list, a forum group, an IRC channel or however you handle
communication in your organization, invite all those engineers to join and
make it clear that project communication will happen there. You want them to
always know where they can get information about the project, its progress and
an up-to-date status in case they want to check if the project touches things
they work on, and not have to be afraid they might miss things.</p>
<p>Because you&rsquo;re building a plant. Kind of.</p>
<p><em>Thanks to <a href="https://twitter.com/benjammingh">Ben Hughes</a> for reading drafts of this, giving me feedback and
being generally rad.</em></p>
]]></content>
    <link href="https://unwiredcouch.com/2015/01/28/building-a-plant.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Road to Success is paved with Small Improvements]]></title>
    <published>2015-01-14T00:00:00Z</published>
    <updated>2015-01-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/uspto2015/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/uspto2015/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Learning to be On-Call]]></title>
    <published>2015-01-06T00:00:00Z</published>
    <updated>2015-01-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2015/01/06/learning-on-call.html</id>
    <content type="html"><![CDATA[<p>I stumbled upon this great <a href="https://medium.com/@thematthewgreen/on-call-dont-be-scared-4eef4ff2928f">blog post about on-call</a> the other
day. You should definitely go read it first. It prompted me to think about my
own experience with being on-call, and while I don&rsquo;t have years of it, in the
last 2.5 years I went from never having been on-call before to being somewhat
experienced at it. And especially the closing quote of the aforementioned
article about being scared of on-call really hit home for me:</p>
<blockquote>
<p>Don’t be. While being on-call can be challenging, it is also very rewarding.</p></blockquote>
<p>This is why I wanted to share my personal development and experience with
learning to be on-call. Because fundamentally I am convinced that it&rsquo;s not
only a necessary but also a rewarding thing to do. When I started at Etsy <a href="https://twitter.com/mrtazz/statuses/147115673137577984">a
bit over 3 years ago</a> I had never worked anywhere where I had to
be on-call. I was either too junior (engineers in training didn&rsquo;t have to be
on-call) or I was working on research and actual, shipped software where there
were no production systems to look after. And when I applied for jobs I even
actively sought out positions where I didn&rsquo;t have to be on-call a lot, as I
saw it as a tedious, annoying and scary thing that I didn&rsquo;t want to do.</p>
<h3 id="how-i-got-into-this">How I got into this</h3>
<p>Then when I joined Etsy, every engineer had to sign up for a week of general
developer on-call per year (almost all of our on-call rotations are a week
long). This is the general on-call rotation for anything relating to the web
app of <a href="https://etsy.com">etsy.com</a> where the knowledge an ops
engineer usually has of how things work doesn&rsquo;t suffice. We have amazingly
good ops engineers who can fix a ton of things themselves, and if you have a
really bad on-call week you get paged once as a developer. Back then we were
small (or big) enough that we could cover the year with everyone being on-call
at most once. That wasn&rsquo;t too bad, and so I put my week in about 6 months
after I started so I could learn some things about the web application before
actually being on-call. Since my team works on developer tooling and not the
website itself, I chose this quite long lead time before my first on-call just
because I wouldn&rsquo;t have that much exposure to the app in my regular workday.
Lucky me, I happened to choose the week that included the weekend where we
wanted to upgrade our main SPOF database. So needless to say, I was really
scared of the on-call. I didn&rsquo;t have enough knowledge about the architecture
to know where things could break or how to debug them, and I had never been
on-call before, so even having to pay attention to whether or not I had cell
reception all the time was already making me nervous. I also hadn&rsquo;t been at
Etsy long enough to know how often I would get paged, how serious/time-sensitive
a page would usually be, whether or not I could be underground for 30 minutes
to take the subway home, which action I should use to tell PagerDuty that I
was working on the incident, and what would even happen when I got paged and
what to do next. Most of this was just me being nervous. We had a lot of
documentation about being on-call and a lot of people to talk to. And I stayed
up all night during the database upgrade and didn&rsquo;t even get paged a single
time. The whole thing was planned really well, all the people who knew how the
site worked were already online, and it turned out to be an invaluable
learning experience for me.</p>
<h3 id="getting-used-to-the-game">Getting used to the game</h3>
<p>Then after my first year or so we changed the on-call schedule a bit. We had
more engineers than we needed to fill the dev on-call rotation. We started to
have more specialized teams who would take on their own on-call and were thus
excluded from the generic one. So we switched to a system where you could
volunteer to be on-call for a week every 4 weeks or so. We set it up to be
two-tiered, so if you&rsquo;ve never been on-call, you would be the L1 contact who
gets paged first. If you really can&rsquo;t get to a computer or have no clue what
to do, you can escalate to the L2, who is more senior and has experience with
being on-call. And then after 6 months you were released from the rotation and
a new round started where engineers could volunteer. I signed up immediately
when this got introduced. I wanted to learn more about what it means to be
on-call. The rotation was still super quiet and I almost never got paged. But
just the fact that I would now have to bring a charged phone, a Mifi and a
laptop with me wherever I went every 4 weeks made on-call less of a scary
exception and more of a scheduled routine. This took away a lot of my fears. I
learned a lot about how noisy of a rotation to expect, how to plan my day so
that I would have cell reception all the time, and how to react when I got
paged. I still felt awkward being on-call because I still almost never really
knew the parts involved when I got paged. I would work as a communication
broker to call in the right people but I could hardly ever fix things myself.
Again I learned a ton from watching the people I called debug and fix
problems, especially in parts of the app I had never touched before. But after
a while it was also somewhat unsatisfying to almost never get paged and then,
if a page happened, not be able to actually do something about it.</p>
<h3 id="level-up-the-ops-rotation">Level up: The Ops Rotation</h3>
<p>In addition to that, by then I had worked mostly on infrastructure things. I
had worked on a lot of Chef recipes, upgraded all of etsy.com to a new release
of PHP and reworked core parts of how software deployment worked. I touched
all the things that could wake up an ops engineer (but thankfully never did)
and I wanted to own up to the responsibility. But you only ever hear horror
stories about ops on-call from almost everyone. Because once you have an
automated system that is able to wake you up at night (95% of Nagios alerts
used to go to the Etsy ops rotation back then) it changes on-call a lot. So to
own up to having changed one of the biggest parts of our stack, I decided to
sign up for shadowing the ops on-call engineer for a week. I got my phone
hooked into Nagios. And <a href="https://www.youtube.com/watch?v=uvqJ1mTkEuY">it got real</a>. I had added my email a couple of months
earlier already to get a feel for the alert volume and to know whether I broke
things I was working on, but adding your phone is a different kind of real. I
chose a week where one of our senior ops engineers would be on-call and I got
woken up for everything for a week. It was pretty brutal. I got woken up in
the middle of the night and had no idea what Nagios wanted from me. I would
log onto IRC and get some information from <a href="https://twitter.com/ickymettle">Marcus</a> about what the alert
meant and how to fix it. I mostly felt helpless and super slow at debugging
what might be causing the production issues at hand. The week ended with a
couple of sleepless nights and a site outage on Friday. And I can&rsquo;t even begin
to explain how much I learned during that week. About systems I&rsquo;d never seen
before, about database replication, about hating disk space alerts, and about
how to work after being deprived of sleep by a busy on-call night. And the
learning was what got me hooked. A couple of months later I signed up for the
ops on-call rotation for good and have been part of it for over a year now.</p>
<h3 id="long-story-short">Long Story Short</h3>
<p>I can honestly say leaning into on-call has been one of the best things I did
for growing as an engineer. It put me way out of my comfort zone and I had to
overcome a good chunk of impostor syndrome and fear for it. I write software
and design systems with a different view now. I have way more experience with
all the tools we use to manage our infrastructure - having to find something
in Chef at 3am really helps you learn your way around it - and a better
intuition when it comes to adding things to it. When we added an on-call
rotation to my team I was super relaxed because I knew it wouldn&rsquo;t be anything
that I hadn&rsquo;t dealt with before in the ops rotation. It&rsquo;s a great feeling to
know that you are doing your part to keep things running, and you can bond with
a lot of other people over sharing on-call pain and <a href="https://speakerdeck.com/jnewland/optimizing-ops-for-happiness">how to make things
better</a>. It&rsquo;s not always fun. In fact it&rsquo;s fitting that
I&rsquo;m writing this blog post slightly sleep deprived, while I&rsquo;m on-call, having
been woken up almost every night since Friday, having had to find someone to
cover for me for a whole day while I&rsquo;m traveling, and having had to hook up
notifications and mobile internet in another country for the first 2 days of
on-call. However I&rsquo;m always happy when I&rsquo;m done with another on-call week and
can look back on the things I learned. After all it&rsquo;s the challenges through
which we grow.</p>
]]></content>
    <link href="https://unwiredcouch.com/2015/01/06/learning-on-call.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Sketchnote Workbook: Advanced Techniques for Taking Visual Notes You Can Use Anywhere]]></title>
    <published>2014-12-31T00:00:00Z</published>
    <updated>2014-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/rohde-sketchnoteworkbook-2014/</id>
    <content type="html"><![CDATA[<p>This was really fun to read. I didn’t end up picking up sketchnoting as a
permanent note-taking tool. But I really enjoyed reading it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/rohde-sketchnoteworkbook-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Quiet: The Power of Introverts in a World That Can&#39;t Stop Talking]]></title>
    <published>2014-12-31T00:00:00Z</published>
    <updated>2014-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/cain-quiet-2021/</id>
    <content type="html"><![CDATA[<p>This is another one I started in 2013 and then dropped for no real reason. I
finished it this year and thoroughly enjoyed it. I knew before I started
reading that I fall on the introvert side of the scale but the book really
helped me recognize some more patterns and made me feel better about it. This
is also the only book I finished as a Kindle audiobook and while I likely
won&rsquo;t do it again, it was an interesting experience. It&rsquo;s a great read and
definitely recommended for anyone who works with other humans in their daily
life.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/cain-quiet-2021/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Creating Flow with OmniFocus]]></title>
    <published>2014-12-31T00:00:00Z</published>
    <updated>2014-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dini-creatingflowwithomnifocus-2010/</id>
    <content type="html"><![CDATA[<p>This one was kind of a surprise read for me. I’ve written before about how much OmniFocus is integrated into my life. And when this book popped up in one of my RSS feeds I decided to give it a read. It’s definitely not a cheap book and I skipped the first half as it’s basically an introduction to OmniFocus, which I already know how to use. The book isn’t a total game changer, but the latter half gives some good food for thought on how to make the most of OmniFocus’ Perspectives and some unusual use cases for it.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dini-creatingflowwithomnifocus-2010/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[2014 Reading List]]></title>
    <published>2014-12-31T00:00:00Z</published>
    <updated>2014-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/12/31/reading-list.html</id>
    <content type="html"><![CDATA[<p>This year I tried again to keep up the flow of reading books. I&rsquo;m an awfully
slow reader when it comes to books (interestingly, not so much for blog posts)
but I managed to read a handful of books and I wanted to share
my thoughts about them. Mostly because I got inspired by reading <a href="http://www.paperplanes.de/2014/12/30/reading-list-2014.html">Mathias'
reading list</a> but also because I wanted to have a track record of
what I read and hopefully inspire myself to read even more next year.</p>
<p>So without further ado, here is my list:</p>
<h3 id="germaine-greer---the-female-eunuch"><a href="http://www.amazon.com/Female-Eunuch-Germaine-Greer/dp/006157953X">Germaine Greer - The Female Eunuch</a></h3>
<p>I started the year off with finally finishing Germaine Greer&rsquo;s feminist
classic from 1970 about the role of women in modern society. I had known about
the book for a couple of years and after having read <a href="http://www.amazon.com/Will-Change-Men-Masculinity-Love/dp/0743456084">Bell Hooks&rsquo; The Will to
Change: Men, Masculinity, and Love</a> last year I decided to finally read
it. I definitely enjoyed it. Especially as a man it opens your eyes to a lot
of things you never encounter in your daily life. It&rsquo;s very graphic at times
and there are some long-winded parts in the middle but I would definitely
recommend it to anyone who&rsquo;s interested in feminism. I also started reading
her newest book <a href="http://www.amazon.com/Whole-Woman-Germaine-Greer/dp/0385720033">&ldquo;The Whole Woman&rdquo;</a> this year which is the sequel
she never wanted to write. And so far I like it and it&rsquo;s alarming how few
things have changed since &ldquo;The Female Eunuch&rdquo;.</p>
<h3 id="erik-hollnagel---the-etto-principle-efficiency-thoroughness-trade-off"><a href="http://www.amazon.com/ETTO-Principle-Efficiency-Thoroughness-Trade-Off/dp/0754676781/">Erik Hollnagel - The ETTO Principle: Efficiency-Thoroughness Trade-Off</a></h3>
<p>It&rsquo;s no secret that I&rsquo;m interested in <a href="http://www.unwiredcouch.com/2014/08/04/human-error-getting-off-the-hook.html">human factors and system
safety</a> and how to apply lessons learned to our field of creating
and managing complex computer systems. So it also shouldn&rsquo;t be a surprise that
this book really hit home for me. It&rsquo;s well written and touches on a myriad of
different aspects about how we trade off thoroughness for efficiency and how
production pressure changes our way of making decisions. It&rsquo;s a pretty fast
read and I really enjoyed it. It also has a huge references and
related-literature section after each chapter, which makes it a great starting
point for diving deeper into the topic.</p>
<h3 id="the-phoenix-project-a-novel-about-it-devops-and-helping-your-business-win"><a href="http://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262509">The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win</a></h3>
<p>I originally started reading it in 2013 when it came out and I enjoyed it back
then. But I somehow still dropped the ball and stopped reading it. I finally
went back and finished it this summer. It&rsquo;s a good and interesting read and
having worked in traditional plant production companies I liked a lot of the
parallels in there. It gets a little weird at the end and the last quarter
feels like the authors really had to wrap up the book. And no matter how you
look at it, it&rsquo;s definitely business romanticism. But if you don&rsquo;t mind that,
it&rsquo;s definitely entertaining.</p>
<h3 id="susan-cain---quiet-the-power-of-introverts-in-a-world-that-can"><a href="http://www.amazon.com/Quiet-Power-Introverts-World-Talking/dp/0307352153">Susan Cain - Quiet: The Power of Introverts in a World That Can&rsquo;t Stop Talking</a></h3>
<p>This is another one I started in 2013 and then dropped for no real reason. I
finished it this year and thoroughly enjoyed it. I knew before I started
reading that I fall on the introvert side of the scale but the book really
helped me recognize some more patterns and made me feel better about it. This
is also the only book I finished as a Kindle audiobook and while I likely
won&rsquo;t do it again, it was an interesting experience. It&rsquo;s a great read and
definitely recommended for anyone who works with other humans in their daily
life.</p>
<h3 id="kourosh-dini---create-flow-with-omnifocus-2"><a href="http://www.usingomnifocus.com">Kourosh Dini - Create Flow with OmniFocus 2</a></h3>
<p>This one was kind of a surprise read for me. I&rsquo;ve <a href="http://www.unwiredcouch.com/2014/05/13/omnifocus.html">written before</a>
about how much OmniFocus is integrated into my life. And when this book
popped up <a href="http://simplicitybliss.com">in one of my RSS feeds</a> I decided to give it a
read. It&rsquo;s definitely not a cheap book and I skipped the first half as
it&rsquo;s basically an introduction to OmniFocus, which I already know how to use.
The book isn&rsquo;t a total game changer, but the latter half gives some good food
for thought on how to make the most of OmniFocus&rsquo; Perspectives and some
unusual use cases for it.</p>
<h3 id="bonus-jon-cowie---customizing-chef"><a href="http://www.amazon.com/Customizing-Chef-Jon-Cowie/dp/149194935X">Bonus: Jon Cowie - Customizing Chef</a></h3>
<p>I added this as a bonus round, because while I definitely read it, I had the
privilege to do so as a reviewer. I&rsquo;m really happy that <a href="https://twitter.com/jonlives">Jon</a> asked me
to review his book and while I had done a lot of Chef before, I learned tons
about its internals from this book. If you work with Chef and want to get more
out of it or even just understand some of the internals a little better,
definitely read this book.</p>
<p>I really enjoyed reading all those books this year. And one of my New Year&rsquo;s
resolutions is definitely to read more next year. I&rsquo;ve planned to set some
time aside every day to read and hope to have a longer list of things I read
next year. If not, it sure isn&rsquo;t because of a small Kindle backlog.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/12/31/reading-list.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[3 Simple Things that improved my Work-Life Balance]]></title>
    <published>2014-12-15T00:00:00Z</published>
    <updated>2014-12-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/12/15/work-life-balance.html</id>
    <content type="html"><![CDATA[<p>These days productivity and work-life balance are two parts of the holy grail
of modern life for a lot of people. We have an abundance of projects, things
to work on, things that interest us and distractions to keep us from doing all
the things we want to do. And so often the solution to this is basically &ldquo;work
more hours&rdquo; or &ldquo;have more intelligent notification systems&rdquo; because &ldquo;more&rdquo; so
often looks like &ldquo;better&rdquo;. And I fell into that trap. I worked insane hours
(mostly because it was fun and I wanted to learn so much more), jumped on
every project and task that seemed remotely interesting to me and had
notifications from everything I had ever installed on my phone. I have been
that guy who comes home at 1am after hanging out with friends with this
amazing idea of how to fix something that couldn&rsquo;t wait until Monday or even
just the next day. So I would hack on things until 3 or 4am. This didn&rsquo;t happen
often, but it definitely did happen occasionally. And it worked ok for some
time. But eventually it all caught up with me and I felt overwhelmed by all
the things that were going on. Projects, new things I wanted to learn,
notifications, things I wanted to read and people I wanted to talk to or
email. Still, I was in a generally pretty good situation, since I had managers
who told me to work less and made it very clear that this was not what they
expected from me. And I wasn&rsquo;t even in a particularly bad spot: I wasn&rsquo;t
really miserable, I still liked my job a lot and I wasn&rsquo;t close to a burnout
or anything. I just knew I didn&rsquo;t want to continue like that because work
should really be fun and constrained to &hellip;  well &hellip;  work hours. And for
me it really wasn&rsquo;t anymore. I felt overwhelmed and at the same time like I
didn&rsquo;t get anything done. So this year I started with some very basic
realizations:</p>
<ol>
<li>There&rsquo;s always more work that you could do</li>
<li>There are always more things to read, watch or catch up on</li>
<li>Most notifications don&rsquo;t really need to interrupt you</li>
</ol>
<h3 id="the-workday">The Workday</h3>
<p>And I started to make some changes from there. The very first one was strict
working hours. At the beginning of the year I decided that I will go home at
6.30pm every day. Unless something is really on fire. It really helped that I
take the ferry to and from work every day. So I settled on a ferry schedule
that I wanted to take and stuck with it. There are no excuses. All the work
I&rsquo;m doing in the evening will still be there the next day. And there is no
work, work email or any such things after I got home. To take this further,
I also <a href="https://twitter.com/mrtazz/status/536571409674555392">deleted any work related email and calendar accounts from my
personal laptop</a> a couple of weeks ago. If I want to get more
work done I have to get up and start work earlier. In reality this usually
means I&rsquo;m in the office around 10 or 10:30am, with 30 to 45 minutes
of working from home before that. Usually I check email and try to flag things
I want to get done that day in <a href="http://www.unwiredcouch.com/2014/05/13/omnifocus.html">my OmniFocus</a> at home as it&rsquo;s more
quiet, earlier in the day, and less busy. Obviously there are exceptions to
this, as I said already: when things are on fire, when I&rsquo;m on-call of course,
or when I&rsquo;m really in the zone and don&rsquo;t want to stop (although this is really
really rare after 5:30pm to be honest). On regular days I stick to my working
hours.</p>
<h3 id="reading-things">Reading things</h3>
<p>The next step was embracing the fact that there is always more to read. I&rsquo;m
pretty <a href="http://www.unwiredcouch.com/2014/08/29/email-happiness.html">happy with my e-mail setup</a> but it took some time to be ok
with heavy filtering and only checking it occasionally. And in addition to not
having any sort of notifications for my work email I have also turned off icon
badges in the iPhone mail client (except for VIP mails from friends and
family). The number of unread emails doesn&rsquo;t really mean anything, but it
reads like a list of &ldquo;todo&rdquo; items, so it causes you to keep checking it. You
want to get that number down, for no particular reason other than that it
feels good to cross things off, and so it&rsquo;s easy to get into the habit of
checking constantly. I realized that the icon badge doesn&rsquo;t actually mean
anything for me as all the emails I need to answer are in my OmniFocus so
there is no need for badges on my email client. I also turned off unread count
badges for basically everything else, but most notably <a href="http://reederapp.com/ios/">RSS feeds</a> and
<a href="http://getpocket.com">articles I&rsquo;ve saved for later</a>. Reading things shouldn&rsquo;t be a chore
but something you enjoy when you have the time. I always felt bad that I have
so many things pile up in my different accounts. And it ended up being a
constant hassle of cleaning up the lists in there, instead of enjoying the
things I can read from it. So I stopped feeling bad about having an insane
backlog of articles in my queue and now see it more as a big pool of interesting
things to read when I have the time (thank you <a href="https://twitter.com/mikebrittain/status/539198323471962112">Mike for this</a>).</p>
<h3 id="notifications-and-the-phone">Notifications and the phone</h3>
<p>So that leaves notifications. You have probably realized by now that every app
on your phone competes for your attention. And even more so with
notifications. Every single app wants to be able to push notifications to your
phone. Even if it&rsquo;s just a game that wants to remind you what you&rsquo;re missing
while you&rsquo;re not playing. So while realizing that a ton of things actually
don&rsquo;t need my attention, I started to divide the things on my phone into 3
groups. The first one is allowed to push notifications, make sounds and
interrupt me and they are basically only the phone and messages app for SMS.
The second group is stuff that I care about enough to allow notifications but
isn&rsquo;t urgent. Twitter clients, Foursquare and Pushover (which I use to tell me
about IRC mentions when I&rsquo;m idle) notifications fall into that. Whenever I
have time I&rsquo;ll skim through the notifications on the lock screen on my phone
but nothing in that list is allowed to make a sound or vibrate the phone. And
all the other apps on my phone don&rsquo;t get to push notifications at all. My
notification setup has also much improved since I got a <a href="https://getpebble.com">Pebble</a>.
While it&rsquo;s not crucial, it makes checking notifications a matter of a handful
of seconds by flicking my wrist versus checking my phone.  The downside is
that when it comes to vibration it&rsquo;s all or nothing on the Pebble. Right now
it doesn&rsquo;t bother me much but is definitely something I&rsquo;m watching out for in
case it gets annoying. And in addition to cleaning up notifications I also
cleaned up my iPhone&rsquo;s home screen while I was at it. I only have things on
there that I actually (want to) use every day. That way my phone looks and
feels way less cluttered (and it&rsquo;s way more fun to have wallpapers).</p>
<p><img src="/images/iphone-screen.png" alt="iPhone homescreen"></p>
<h3 id="what-else">What else?</h3>
<p>Changing those simple habits has made a big impact in my life. I&rsquo;m definitely
more exhausted when I get home because I&rsquo;m trying to get as much done as
possible during the day. But at the same time I&rsquo;m super <a href="https://twitter.com/mrtazz/statuses/467076737105674240">excited to get back
to work the next morning</a>. My phone doesn&rsquo;t make a noise except for
really important things, so although I have work email and calendars on there,
when I just have it in my pocket or on a table it doesn&rsquo;t remind me of work
things after work (I disconnect the Pebble when I get home as I don&rsquo;t have a
need for it there). I actually feel less stressed out about my phone and
interruptions all day and get more out of the things I actually want to do on
my phone.</p>
<p>There are still some things I want to improve though. I definitely don&rsquo;t spend
as much time reading as I want to. Something I want to try is to take a
page from <a href="http://blog.travis-ci.com/2014-09-04-10-things-i-do-to-stay-productive/">my friend Mathias&rsquo; book</a> and get 30 minutes of reading in
every morning before I head into the office. At the beginning of the year I
also started keeping a work journal to jot down things that happened
during the day and that I worked on. And I would love to expand that to also
include non-work related things.</p>
<p>I&rsquo;m also notoriously bad about taking vacation days. I usually end up with a 2
to 3 week long vacation at the end of the year because I need to use up all my
days. I really want to get back to taking a longer time off in the middle of
the year to enjoy the summer and spread more days off over the year.</p>
<p>What did you do to improve your work-life balance this year? <a href="mailto:d@unwiredcouch.com">Email
me</a>, <a href="https://twitter.com/mrtazz">tweet me</a> or better take some time and write a blog post
about it!</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/12/15/work-life-balance.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Human Factors and PostMortems]]></title>
    <published>2014-11-11T00:00:00Z</published>
    <updated>2014-11-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/human-factors-postmortems/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/human-factors-postmortems/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Culture: the invisible ingredient]]></title>
    <published>2014-11-10T00:00:00Z</published>
    <updated>2014-11-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/culture-panel/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/culture-panel/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Deploy, Collaborate and Listen]]></title>
    <published>2014-11-06T00:00:00Z</published>
    <updated>2014-11-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/deploy-collaborate-listen/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/deploy-collaborate-listen/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[DevOps - Jetzt aber richtig]]></title>
    <published>2014-11-05T00:00:00Z</published>
    <updated>2014-11-05T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/wjax2014-panel/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/wjax2014-panel/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Code Reviews Considered Awesome]]></title>
    <published>2014-10-21T00:00:00Z</published>
    <updated>2014-10-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/10/21/code-review.html</id>
    <content type="html"><![CDATA[<p>I think by now it is pretty much accepted that collaboration and working
together is a lot better than assigning blame and yelling at each other. In
the field of operating web services and infrastructure we have made the shift
in recent years to improving techniques and simplifying processes: everybody
is modelling infrastructure as code, nobody throws stuff over the wall
anymore, developers are on-call and there is rarely any yelling (at least
that&rsquo;s the ideal most companies strive for).</p>
<p>But I think there is a very undervalued part of that whole situation.  A lot
of operations engineers (at least in my part of the internet) will happily
claim they can&rsquo;t write code or write shitty code (by the way if you still
think you don&rsquo;t write code as an ops engineer go read <a href="http://cwebber.net/blog/2014/09/26/i-am-not-a-coder/">Christopher Webber&rsquo;s
great blog post</a> about it). In reality ops engineers write a
ton of code and I have seen and reviewed amazing apps and tools being knocked
out by people who will happily make excuses for their code whenever you talk
about the awesome thing they wrote. The big advantage of working together is
exchanging knowledge that was previously in siloed domains. It&rsquo;s great that
developers carry pagers and know about deployment and how to write Chef
recipes or Puppet modules. But that also means as an operations engineer you
can tap into a giant stream of software engineering knowledge that developers
have learned, improved and cared about for years. And one of those is code
review.</p>
<p>Code review is a great way to get free learning and feedback about the things
you are working on. It&rsquo;s a way to get someone with a different context to
think about whether your solution to a problem (and that&rsquo;s what basically all
programming, scripting and automation is) makes sense to someone else.  It&rsquo;s
also a way to learn about paradigms, conventions and unknown techniques to
make things better. I&rsquo;m a software engineer by trade and whenever I start with
something new, work on a new project or try to solve a new problem I try to
seek out code review. I recently worked on our Android app. And while I have
worked with Java and written Android code before, it&rsquo;s been years and was in a
completely different code base. So while I was writing down the code that would
solve the problem I was having, it was very non-idiomatic when it came to our
Android coding style. Once I was done I asked my coworker <a href="https://twitter.com/hannahmitt">Hannah</a> if
she could take a look at my code. And she was super excited about it and
immediately jumped on it. And she gave me <em>tons</em> of feedback. She showed me
the app loader structure that would make my code much more <a href="http://en.wikipedia.org/wiki/Don't_repeat_yourself">DRY</a> and I
learned a lot about how we approach Android development. And even though she
initially wasn&rsquo;t completely familiar with the problem I was solving and it
definitely took a bit longer for me to actually deploy the code, I learned a
ton.</p>
<p>One of the keys here was definitely that she was so excited to review code for
me. And this is where the reviewer comes in. Being asked for a code review
means the person values your opinion and would love to get your feedback on
something that is arguably important to them and trusts you to be able to
improve it. It also means you now have to walk the fine line between not
actually giving useful feedback and overdoing the code review and effectively
blocking their progress. E.g. while it likely doesn&rsquo;t make sense to suggest an
abstract factory pattern for a procedural Python script, showing
where it could greatly benefit from using functions is a simple way to improve
things. Fundamentally how to give good code reviews is a complicated topic
which I don&rsquo;t want to get into here. But the important part is that this is
something you should be excited about and should let the person asking for the
review know that you are.</p>
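<p>To make that kind of feedback concrete, here is a toy illustration (entirely made up, not from any actual review): repeated parsing logic in a procedural script pulled into small, testable functions.</p>
<pre><code># hypothetical before/after sketch: the parsing step that was
# copy-pasted in several places becomes a function
def parse_size(record):
    """Extract the numeric size field from a 'name size' record."""
    return int(record.split()[1])

def total_size(records):
    """Sum the size fields of all records."""
    return sum(parse_size(r) for r in records)

print(total_size(["a 10", "b 32"]))  # prints 42
</code></pre>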
<p>Often code reviews are still seen as a slowdown, an annoyance and nothing
that really adds value, as you already know your code works. However, seeing
code reviews as an opportunity to learn, get another perspective and
ultimately share knowledge is in my opinion way more accurate
when done right (<em>right</em> being used loosely here as everyone has to define for
themselves what that means). It is a great way to learn about different parts
of your infrastructure&rsquo;s code, new programming paradigms and methods, how to
communicate more effectively and in the end leads to a more resilient
organization and enjoyable work and programming environment.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/10/21/code-review.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Data Driven Monitoring]]></title>
    <published>2014-10-09T00:00:00Z</published>
    <updated>2014-10-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/data-driven-monitoring/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/data-driven-monitoring/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The ETTO Principle: Efficiency-Thoroughness Trade-Off]]></title>
    <published>2014-09-10T00:00:00Z</published>
    <updated>2014-09-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hollnagel-ettoprinciple-2009/</id>
    <content type="html"><![CDATA[<p>It&rsquo;s no secret that I&rsquo;m interested in <a href="http://www.unwiredcouch.com/2014/08/04/human-error-getting-off-the-hook.html" title="Human Error and Getting Off The Hook on unwiredcouch.com">human factors and system safety</a> and
how to apply lessons learned to our field of creating and managing complex
computer systems. So it also shouldn&rsquo;t be a surprise that this book really hit
home for me. It&rsquo;s well written and touches on a myriad of different aspects
about how we trade off thoroughness for efficiency and how production pressure
changes our way of making decisions. It&rsquo;s a pretty fast read and I really
enjoyed it. It also has a huge references and related literature section
following each chapter, which makes it a great starting point for diving deeper into the
topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hollnagel-ettoprinciple-2009/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win]]></title>
    <published>2014-09-02T00:00:00Z</published>
    <updated>2014-09-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/kim-phoenixproject-2013/</id>
    <content type="html"><![CDATA[<p>I originally started reading it in 2013 when it came out and I enjoyed it back
then. But I somehow still dropped the ball and stopped reading it. I finally
went back and finished it this summer. It&rsquo;s a good and interesting read and
having worked in traditional plant production companies I liked a lot of the
parallels in there. It gets a little weird at the end and the last quarter
feels like the authors really had to wrap up the book. And no matter how you
look at it, it&rsquo;s definitely business romanticism. But if you don&rsquo;t mind that,
it&rsquo;s definitely entertaining.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/kim-phoenixproject-2013/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[My Rules for E-Mail Happiness]]></title>
    <published>2014-08-29T00:00:00Z</published>
    <updated>2014-08-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/08/29/email-happiness.html</id>
    <content type="html"><![CDATA[<p>Over the last couple of days and maybe weeks a lot of my friends, coworkers
and most of all people I follow on twitter have been overly excited about the
beta availability of <a href="http://www.mailboxapp.com">an e-mail client</a>. At first I was being
regrettably <a href="https://twitter.com/mrtazz/status/501909189577687040">very snarky about it</a> as in my opinion the mail
client in question has some serious privacy and availability issues. But as
more and more people got excited about it I took a step back and thought about
why I couldn&rsquo;t understand the excitement and why people would give up their
privacy for &ldquo;better e-mail&rdquo;. And it dawned on me that I had never actually
seen e-mail as that problematic and annoying and that I actually like my
setup a lot. This is why I decided to share how I do e-mail and why it works
well for me (yes you may consider this one of those productivity blog posts).</p>
<h3 id="setting-the-stage">Setting the stage</h3>
<p>Before I start I want to make very clear that I&rsquo;m likely not a power email
user and what works for me might not work for you. This is also mostly about
how I manage my work email, as my personal email is low volume enough so that
it&rsquo;s probably not interesting. I also use a <a href="http://www.mutt.org">somewhat esoteric e-mail
client</a> and while most of the things I&rsquo;ll talk about are generic, using
mutt makes it a lot easier for me. And to give you a ballpark number for
e-mail volume: I receive about 370 e-mails per day - your mileage may
vary.</p>
<h3 id="so-how-do-i-use-e-mail">So how <em>do</em> I use e-mail?</h3>
<p>The first two very important factors for me are filtering and <a href="http://www.43folders.com/izero">Inbox
Zero</a>.  I am subscribed to a ton of mailing lists at work.
Everything I deem interesting (and I&rsquo;m a super nosy person) I subscribe to,
but I have strict rules about what goes into my inbox:</p>
<ul>
<li>E-mails addressed directly to me</li>
<li>My team&rsquo;s mailing list</li>
<li>Mailing lists of teams I work closely with</li>
<li>Low volume mailing lists (less than 5 messages a day)</li>
<li>Important automated e-mails (e.g. from Nagios)</li>
</ul>
<p>Everything else gets filtered into separate mailboxes. No exceptions. If a low
volume list gets more busy it gets filtered.</p>
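<p>I won&rsquo;t go into the mechanics of the filtering itself, but as a rough sketch of what such rules can look like, here is a hypothetical procmail version (list addresses and mailbox names are made up for illustration, not my actual rules):</p>
<pre><code># hypothetical ~/.procmailrc sketch
# a busy mailing list goes straight into its own mailbox
:0:
* ^List-Id:.*big-announce-list
lists/big-announce

# mail from a hypothetical noisy automated sender too
:0:
* ^From:.*reports@
automated/reports
</code></pre>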
<p>With this setup I check my inbox a couple of times a day. The frequency
depends on how busy I am obviously, but I check e-mail at most every 30
minutes and at least 5 times a day. And everything that is filtered I check
anywhere from once a day to once a week, depending on how important the mailing list
is. I also clear out all mailboxes at least once a week and archive all e-mail
in there. This usually takes about 5 - 20 minutes and I do it before doing my
weekly GTD review.</p>
<p>In mutt I also use the <a href="https://github.com/mrtazz/muttfiles/blob/master/mutt-colors-solarized-light-16.muttrc">solarized light theme</a> (as I do almost
everywhere else) which helps a lot as e-mails are color coded differently.
Read e-mails are gray, e-mails addressed to me directly or via cc are green
and messages from mailing lists are blue. That way I can open up mutt, take a
quick glance to see if I have new important email or if I can postpone going
through my email. This is sometimes so fast that my terminal emulator warns me
about the <a href="https://twitter.com/mrtazz/statuses/467405164790693888">shell being closed too fast again</a> and there might be
something wrong, which I still find hilarious. When I actually go through my
e-mail, I file them into mailboxes depending on whether I want to read them
later or already know that they need an answer and archive everything else.
Following the GTD principle if I can answer the e-mail in 2 minutes I do it
right away.  Otherwise I move it to the corresponding mailbox from which it
gets <a href="http://www.unwiredcouch.com/2014/05/13/omnifocus.html">pulled into my Omnifocus inbox</a>. All of this is done
with simple keyboard shortcuts that work on single or multiple messages.</p>
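<p>As a sketch of what those shortcuts can look like (hypothetical mailbox names and key bindings, not my exact configuration), mutt macros along these lines do the filing:</p>
<pre><code># hypothetical muttrc snippet
# ",l" files the current message into a read-later mailbox
macro index,pager ,l "&lt;save-message&gt;=later&lt;enter&gt;" "file into read-later box"
# ",a" files it into the needs-an-answer mailbox
macro index,pager ,a "&lt;save-message&gt;=to-answer&lt;enter&gt;" "file into to-answer box"
</code></pre>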
<p>I also have e-mail set up on my iPhone with the iOS built-in mail client via
IMAP. However I only ever skim e-mail on there and at most answer if it takes
me less than 2 minutes. I also have the labeling enabled that tells me whether
a message was sent to me directly, via cc, or just because I&rsquo;m part of a
mailing list. That way I can also check very quickly if there is something in
there that potentially needs my attention.</p>
<h3 id="the-important-part">The important part</h3>
<p>The most important part to take away from this is that <strong>you don&rsquo;t need to
read all e-mail you get</strong>. Especially in a big company there are enough things
going on that you can&rsquo;t possibly keep up with everything. And that&rsquo;s ok. In
addition to that what really works for me is using trusted clients that have
been around for a while. I have been using mutt for years and I have most of
its shortcuts in my muscle memory. When I came to work after a week of
vacation and had <a href="https://twitter.com/mrtazz/statuses/486165858809815040">6000 emails in my inbox</a> it took me 10 seconds
to clear out all the automated emails based on their from addresses and cut
the number of messages by 95%. The learning curve for mutt was pretty steep at
the beginning but it has paid off over the years and since it&rsquo;s open source I
know it will be around and not suddenly disappear (or at least it&rsquo;s unlikely).
I also love being able to write my e-mail in vim and am a big fan of plain
text e-mails. On my phone I also use the built-in e-mail client as it&rsquo;s likely
to stay and not completely disappear or get shut down either. My usage on the
phone is also light enough so I don&rsquo;t care about small changes in UX or
functionality with OS upgrades. I have yet to encounter a bad surprise after
upgrading my phone.</p>
<p>When it comes to dealing with stress and the feeling of being overwhelmed with
e-mail, the biggest change for me besides filtering was to turn off all
notifications. No e-mail that I receive will ever make a sound or make my
phone vibrate. There are no lock screen notifications on my phone besides
e-mail from people in my iPhone VIP list which is mostly family and even then
it just shows up. No sound, no vibration. I decide when I have time to read
e-mail.</p>
<p>As I said, most of these things are applicable no matter what e-mail client
you use. I happen to use mutt (set up similar to how it&rsquo;s explained
<a href="http://stevelosh.com/blog/2012/10/the-homely-mutt/">here</a> and these are my <a href="https://github.com/mrtazz/muttfiles">config files</a> in case you&rsquo;re
interested), but there are a ton of good and proven clients out there (I used
OSX Mail.app for years and always liked it). And plain old IMAP is honestly
pretty cool. But most of all - in my opinion - the biggest problems with
e-mail are social or rather psychological problems (trying to keep up with
everything, wanting to get notified all the time) and not technological ones,
and they can be solved.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/08/29/email-happiness.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Bye Bye Fitbit]]></title>
    <published>2014-08-21T00:00:00Z</published>
    <updated>2014-08-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/08/21/bye-bye-fitbit.html</id>
    <content type="html"><![CDATA[<p>About a year ago I decided that I had to do something about my fitness. Since
moving to NYC about 1.5 years prior I went from having basketball training two
to three times a week and a game on the weekend to basically not doing any
sports. Needless to say this didn&rsquo;t really impact my fitness in any positive
way. In addition I was also used to eating quite a lot, since basketball training
had been enough exercise to easily burn it off again. And while I didn&rsquo;t
have practice anymore, I was still used to eating basically the same amount of
food.</p>
<p>So something needed to change. I had tried running several times, but since I
find it utterly boring I never really got into the habit of doing it
regularly. Plus there is no real instant feedback from running, so even when I
managed to go, I never had the feeling of actually doing sports. Clearly I
needed a way to track progress. So the first step was to get a scale. That way
I would be able to at least track my weight. I decided to get the <a href="http://www.fitbit.com/aria">Fitbit
Aria</a> scale as I liked the idea of syncing it to an account where I can
get pretty graphs. <a href="http://shouldigraphit.com">I like graphs</a>. The next step then was to track
how active I am and what I eat so that I could get a rough overview of general
activity and calorie burn. As I had already created an account with Fitbit
for the scale I decided to get the <a href="http://www.fitbit.com/flex">Fitbit flex</a> wristband (I later
replaced it with the now discontinued <a href="http://www.fitbit.com/force">Force</a>) to track steps and
calories.</p>
<p>Now I had feedback and graphs for what I was doing all day, how active I was,
how long/good I was sleeping and I kept track of what I was eating. Having
this incentive meant that I would be going running or shooting some hoops for
at least 20 minutes everyday before or after work. I also stopped eating
everything I found (which seems to be a big part of the secret of losing weight)
as it would go into my food journal in the Fitbit application. And this worked
really well. Over the course of a couple of months I lost about 11 kilos
(about 24 lbs) and even had to <a href="https://twitter.com/mrtazz/statuses/399937629078441984">put new holes in my belt</a>.</p>
<p>But soon after all of this my habit of how I used the Fitbit changed. In early
2014 I decided that I didn&rsquo;t want to work long hours anymore and wanted to
improve my work-life balance. I made the conscious decision to leave
work at around 6pm every day to catch the ferry home. No more staying late
unless things are on fire and no more working from home after I left the
office. If I wanted to get more stuff done I&rsquo;d have to get up early. While the
getting up early part didn&rsquo;t work very well at the beginning, this now meant
that I would go to bed at a regular time (usually between 10-11pm) and get
about 9-10h of sleep most nights. This however made my Fitbit sleep tracking
basically obsolete for me. I knew that I slept well most nights and if I
didn&rsquo;t it was mostly because I violated the rule and went to bed late (or I
was on-call and got woken up at night). In addition to that I got a
<a href="https://getpebble.com">Pebble</a> and was now rocking dual wearables. Which really wasn&rsquo;t a
pleasant feeling. Both the Fitbit and the Pebble aren&rsquo;t super big, but having
stuff hanging on your wrist all day meant that I would come home and take both
off because it felt much better not to have anything on my wrists. And that
feeling was amplified when I went to bed. I felt much more relaxed and less
restricted when I wouldn&rsquo;t wear anything on my wrists when I was sleeping. So
most days I would take off the Fitbit (and Pebble) when I got home and not put
it back on until the next morning (with the exception of being on-call where
we <a href="http://codeascraft.com/2014/06/19/opsweekly-measuring-on-call-experience-with-alert-classification/">use sleep data to improve the on-call rotation</a>). I still liked
having graphs about steps, but mostly for the sake of having graphs. I didn&rsquo;t
act on them in any way other than occasionally <a href="https://twitter.com/mrtazz/statuses/483125970245660674">bragging on twitter about how much I
walked</a>.</p>
<p>So when I came home from a week of vacation in early July I decided to not put
the Fitbit back on at all and see how it feels. And it was great! I felt much
more free and less restricted. I hadn&rsquo;t been using the data it collected for
anything really in almost 6 months. Plus I never really felt comfortable with
the fact that details about my activity and calorie intake live on a server in
the cloud. Thanks (partly) to the Fitbit I got back to a good intuition about
how much I should eat and how much exercise I should get every week. I now go to
the gym regularly, have a way better sleep schedule and eat more consciously
and more importantly less than 1.5 years ago (although there are definitely
improvements to make there still). And if anything is off about food, sports
or sleep I notice immediately as I start to feel unwell. This doesn&rsquo;t mean I
would never ever use a fitness tracker again. If they eventually end up being
less intrusive in daily life and maybe even come with a collection application
I can install on my own servers I would happily try it again. But for now it&rsquo;s
&ldquo;Bye Bye Fitbit&rdquo;.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/08/21/bye-bye-fitbit.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Mirror GitHub repositories in pure shell]]></title>
    <published>2014-08-16T00:00:00Z</published>
    <updated>2014-08-16T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2014/08/16/github-mirror-shell.html</id>
    <content type="html"><![CDATA[<p>As I have <a href="http://www.unwiredcouch.com/2013/10/30/uncloud-your-life.html">written before</a> I have slowly started to move my data out
of cloud services where applicable. One part of that was setting up my own
backup server at home based on <a href="http://www.unwiredcouch.com/bits/2014/03/18/zfs-rsync-backups.html">FreeBSD, zfs and rsync</a>. One part I
consider important data but didn&rsquo;t have on there was my (Open Source) code I
host on GitHub. This also wasn&rsquo;t ever a priority as the code is public anyways
so it wasn&rsquo;t a privacy issue for me, and I also trust GitHub to run backups so
I wasn&rsquo;t overly concerned about my data vanishing. Still I wanted to have my
own backup of things.</p>
<p>So I started to look into how people mirror their repositories for backups,
speed, availability and other things. Quite a lot of solutions exist out
there, mostly written in Ruby or Python. While this is fine and I
would encourage you to look into those, I didn&rsquo;t want to deal with installing
pip to install some Python script or installing yet another gem just for
something that can be accomplished with a couple of lines of shell. So I wrote
my own set of scripts in Bourne shell (one of the default installed shells in
FreeBSD) so I could just cron them up on my backup box.</p>
<p>First I needed a way to get a list of all my repositories. Thankfully GitHub
has a <a href="https://developer.github.com/v3/">pretty great API</a> so I can just get a list of all my
repositories and their git clone URLs:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Usage:</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># github_repo_list.sh mrtazz [34345k34j3k4b2jk3]</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># get a list of all public repos for a user</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -z $1 <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;Usage:&#34;</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;github_repo_list.sh USERNAME [TOKEN]&#34;</span>
</span></span><span style="display:flex;"><span>  exit <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> ! -z $2 <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  TOKEN<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&amp;access_token=</span><span style="color:#e6db74">${</span>2<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>CURL<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>which curl<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -z <span style="color:#e6db74">${</span>CURL<span style="color:#e6db74">}</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># fall back to /usr/local/bin/curl</span>
</span></span><span style="display:flex;"><span>  CURL<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;/usr/local/bin/curl&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>BASEURL<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;https://api.github.com/users/</span><span style="color:#e6db74">${</span>1<span style="color:#e6db74">}</span><span style="color:#e6db74">/repos?type=owner</span><span style="color:#e6db74">${</span>TOKEN<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>count<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">while</span> <span style="color:#f92672">[</span> <span style="color:#e6db74">${</span>count<span style="color:#e6db74">}</span> -gt <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  lines<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span><span style="color:#e6db74">${</span>CURL<span style="color:#e6db74">}</span> <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>BASEURL<span style="color:#e6db74">}</span><span style="color:#e6db74">&amp;page=</span><span style="color:#e6db74">${</span>count<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span> -s | grep git_url | cut -d<span style="color:#e6db74">&#34; &#34;</span> -f6 | sed -e <span style="color:#e6db74">&#34;s/[\&#34;,]//g&#34;</span><span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># stop if we don&#39;t get any more content. A bit hacky but I don&#39;t want to</span>
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># parse HTTP header data to figure out the last page</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>lines<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span> <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;&#34;</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    count<span style="color:#f92672">=</span><span style="color:#ae81ff">0</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">else</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">for</span> line in <span style="color:#e6db74">${</span>lines<span style="color:#e6db74">}</span>; <span style="color:#66d9ef">do</span> echo <span style="color:#e6db74">${</span>line<span style="color:#e6db74">}</span> ; <span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span>    count<span style="color:#f92672">=</span><span style="color:#e6db74">`</span>expr $count + 1<span style="color:#e6db74">`</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>This script takes a username and an optional access token and retrieves the
public list of repositories for that user. It then outputs the git clone URLs
one per line so it&rsquo;s easily stored in a text file or fed into other scripts.
There are some minor inefficiencies and missing features in there: it makes
one more request to the GitHub API than needed to figure out whether there are
more results, and it only supports public repositories as I don&rsquo;t have private
ones at the moment. However, changing the URL to call if I ever want to mirror
private repositories is relatively easy, and I don&rsquo;t care that much about the
extra request as this script is not going to run very frequently.</p>
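<p>The extraction pipeline at the heart of the loop can be tried in isolation. Here is a hypothetical sketch that applies it to a fabricated response fragment (the URLs and indentation are made up to match the shape of the API output the script relies on):</p>

```shell
#!/bin/sh
# Hypothetical demo: the grep/cut/sed pipeline from the script above,
# applied to a fabricated API response fragment instead of a live curl.
# Note that cut -d" " -f6 relies on the four-space indentation of the
# response body.
grep git_url <<'EOF' | cut -d" " -f6 | sed -e "s/[\",]//g"
    "git_url": "git://github.com/mrtazz/vim-stencil.git",
    "git_url": "git://github.com/mrtazz/bin.git",
EOF
```

<p>This prints one clone URL per line, which is exactly the format the sync script consumes.</p>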
<p>This now gives me a list of all repositories on my account I want to mirror.
The next step is actually mirroring them. For that I wrote a script that looks
like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># take a list of git clone urls on STDIN and clone them if they don&#39;t exist.</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -z $1 <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;Usage:&#34;</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;github_repo_sync.sh directory&#34;</span>
</span></span><span style="display:flex;"><span>  exit <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>GIT<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>which git<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -z <span style="color:#e6db74">${</span>GIT<span style="color:#e6db74">}</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  <span style="color:#75715e"># if git is not in path fall back to /usr/local</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> -f /usr/local/bin/git <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    GIT<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;/usr/local/bin/git&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">else</span>
</span></span><span style="display:flex;"><span>    echo <span style="color:#e6db74">&#34;You need to have git installed.&#34;</span>
</span></span><span style="display:flex;"><span>    exit <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># switch to archive directory</span>
</span></span><span style="display:flex;"><span>cd $1
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">while</span> read line; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  directory<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>echo <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">${</span>line<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span> | cut -d <span style="color:#e6db74">&#34;/&#34;</span> -f 5<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> ! -d <span style="color:#e6db74">${</span>directory<span style="color:#e6db74">}</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">${</span>GIT<span style="color:#e6db74">}</span> clone --mirror <span style="color:#e6db74">${</span>line<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">else</span>
</span></span><span style="display:flex;"><span>    cd <span style="color:#e6db74">${</span>directory<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>    <span style="color:#e6db74">${</span>GIT<span style="color:#e6db74">}</span> fetch -p origin
</span></span><span style="display:flex;"><span>    cd ..
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>This script reads a list of git clone URLs from STDIN; for each entry, if the
corresponding directory already exists it fetches changes, otherwise it clones
the repository into the given directory. The mirroring commands reflect the
instructions in this <a href="https://help.github.com/articles/duplicating-a-repository">GitHub guide</a>.</p>
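<p>One detail worth calling out is how the script maps a clone URL to a local directory name. A hypothetical illustration (the URL is made up):</p>

```shell
#!/bin/sh
# Field 5 of the "/"-separated clone URL is "repo.git", which is also the
# directory name that `git clone --mirror` creates, so an existing
# directory means the repository was mirrored before.
url="git://github.com/mrtazz/vim-stencil.git"
directory=$(echo "${url}" | cut -d "/" -f 5)
echo "${directory}"
```

<p>This echoes <code>vim-stencil.git</code>, matching the bare-repository directory a mirror clone produces.</p>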
<p>Now to tie those two together I just set up two cron entries to run those two
commands:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#ae81ff">0</span> <span style="color:#ae81ff">20</span> * * * ~/bin/github_repo_list.sh mrtazz 0f6 &gt; /backup/github/github_repo_list.txt
</span></span><span style="display:flex;"><span><span style="color:#ae81ff">0</span> <span style="color:#ae81ff">21</span> * * * ~/bin/github_repo_sync.sh /backup/github &lt; /backup/github/github_repo_list.txt
</span></span></code></pre></div><p>The first cron entry fetches the list of repositories and sticks them into a
text file. The second one runs an hour later and actually syncs all the
changes. I set it up to sync into the zfs pool that gets snapshotted every
night anyways (as described <a href="http://www.unwiredcouch.com/bits/2014/03/18/zfs-rsync-backups.html">here</a>) so I get that for free. I&rsquo;m not
super happy with running this on a cron as there could be a smarter solution
that checks for changes via the API and marks repositories as dirty, but this
is the simplest thing that could work and way less work than interacting more
with the API. In addition I would love to exclude forks from the backup since
I don&rsquo;t really care about backing those up. But I&rsquo;ll leave this for iteration
2.</p>
<p>I track changes to the script in my <a href="https://github.com/mrtazz/bin">bin folder repository on GitHub</a>, so
if you&rsquo;re interested in tracking changes to this setup, follow it there.</p>
]]></content>
    <link href="https://unwiredcouch.com/bits/2014/08/16/github-mirror-shell.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Customizing Chef: Getting the Most Out of Your Infrastructure Automation]]></title>
    <published>2014-08-10T00:00:00Z</published>
    <updated>2014-08-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/cowie-customizingchef-2014/</id>
    <content type="html"><![CDATA[<p>I added this as a bonus round, because while I definitely read it, I had the
privilege to do so as a reviewer. I’m really happy that Jon asked me to review
his book and while I had done a lot of Chef before, I learned tons about its
internals from this book. If you work with Chef and want to get more out of it
or even just understand some of the internals a little better, definitely read
this book.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/cowie-customizingchef-2014/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Human Error and Getting off the Hook]]></title>
    <published>2014-08-04T00:00:00Z</published>
    <updated>2014-08-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/08/04/human-error-getting-off-the-hook.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve been interested in the field of human error and system safety for a while
now. My original interest in it was sparked by talking to <a href="http://www.kitchensoap.com/">John
Allspaw</a> and ultimately reading Sidney Dekker&rsquo;s <a href="http://amzn.com/0754648265">The Field Guide to
Understanding Human Error</a> which gives a very good introduction to
the topic. The book gives a lot of examples of things that have gone bad -
often in aircraft control - and even though I read most of it on a 14-hour
flight I can definitely recommend it. I have since then
participated in a book club about the field guide and completed an informal
course about learning how to facilitate a <a href="http://codeascraft.com/2012/05/22/blameless-postmortems/">blameless postmortem</a>
taught by John. This approach of figuring out what happened and why it
happened in a blameless manner makes a lot of sense to me. I have worked in
more traditional places before where incidents weren&rsquo;t investigated in such a
way and I always had the feeling that there was something missing. That the
full story was never really uncovered - not even close. Over time I have
talked to quite a lot of people about the New View, this new way of thinking
about what contributes to incidents and blamelessly investigating them. And
when I talk to people who have never heard of this before or are new to the
topic, there is one question that usually comes up really quickly, something
along the lines of:</p>
<blockquote>
<p>But isn&rsquo;t this just a cheap way of getting off the hook?</p></blockquote>
<p>This is why I decided to write down my thoughts about why this isn&rsquo;t the case
and what the New View is about relating to responsibility and trying to
prevent the same incident from happening again.</p>
<p>First let&rsquo;s look at what we are working with every day. Be it air traffic
control and flying airplanes, operating modern trains, working in a hospital
and taking care of patients, or keeping a website running: all of those are
complex socio-technical systems. That means the systems as a whole consist of
many, many technological parts that human operators interact with. And they
are big and complex enough to be intractable for any one person. At no point
is there a simple and clear plan to follow, and at no point is it possible for
anyone to fully describe the system end to end with all of its interactions.
This means anyone working within the system has
to choose carefully between checking every single step for any risk
imaginable and actually getting work done (something that Erik Hollnagel calls
ETTO or Efficiency-Thoroughness-Trade-Off). For example an airplane pilot
might speed up going through the pre-take-off checklist because being
extremely thorough almost certainly means introducing delays or maybe even
missing the plane&rsquo;s flight slot. A doctor maybe only goes through the part of
the patient report that is relevant to the immediate action or surgery they
are about to do because of the huge number of patients they have to take care
of. A software engineer wants to make something faster to provide a better
experience for the user and subsequently brings down the site by exhausting
available resources too fast.</p>
<p>Now these very specific examples might seem like people slacking off: if
they had just done all those things according to the rules and regulations,
everything would have been fine and nothing could have gone wrong. And
conversely, if we fire the person who caused the deviation from the rules, we
have a perfectly simple reason why our complex system broke. This is a very natural
approach to accident investigation. Even Nietzsche talked about it before.</p>
<blockquote>
<p>In the search for a cause of an accident we do tend to stop, in the words of
Nietzsche, by ‘the first interpretation that explains the unknown in familiar
terms’ and ‘to use the feeling of pleasure … as our criterion for truth.’</p>
<p class="cite">
&mdash; <cite>Erik Hollnagel, The ETTO Principle: Efficiency-Thoroughness Trade-Off (p. 10)</cite>
</p></blockquote>
<p>The reality is however that in complex, intractable systems it&rsquo;s impossible to
follow all the rules and attend to work with 100% thoroughness. There is even
a behaviour called <a href="http://en.wikipedia.org/wiki/Work-to-rule">&ldquo;work-to-rule&rdquo;</a> that describes the action of
working exactly as the rules describe and thus causing a slowdown that can
come close to a complete stop.</p>
<p>So now that we have established that people take
shortcuts and thoroughness tradeoffs all the time, we can also safely assume -
as those actions are likely what makes sense at the time - that other
operators (would) do the same. This brings us to a point where certain actions
that just before seemed like the cause of trouble are now considered to be a
natural behaviour of people working within the system. And as Sidney Dekker
puts it so aptly:</p>
<blockquote>
<p>Indeed, as soon as you have reason to believe that any other practitioner
would have done the same thing as the one whose assessments and actions are
now controversial, you should start looking at the system.</p>
<p class="cite">
&mdash; <cite>Sidney Dekker, The Field Guide to Understanding Human Error (p. 195)</cite>
</p></blockquote>
<p>But is that really true? Maybe all the others who would have done this very
thing a dozen times before just had better judgement? Maybe they just were
more aware of the situation and noticed that it would be an appropriate
response. Whereas in the failure case the operator just failed to recognize
that now was not the time to do this. It turns out smart people have thought
about this very thing before. The Austrian physicist and philosopher Ernst
Mach came to <a href="https://archive.org/download/erkenntnisundirr00machuoft/erkenntnisundirr00machuoft.pdf">this conclusion in 1905</a>:</p>
<blockquote>
<p>Erkenntnis und Irrtum fließen aus denselben psychischen Quellen; nur der
Erfolg vermag beide zu scheiden. Der klar erkannte Irrtum ist als Korrektiv
ebenso erkenntnisfördend wie die positive Erkenntnis.</p>
<p class="cite">
&mdash; <cite>Ernst Mach, Erkenntnis und Irrtum (p. 116)</cite>
</p></blockquote>
<p>This translates to something like &ldquo;knowledge and error flow from the same
mental sources; only success can tell one from the other. A clearly recognized
error as a corrective fosters knowledge as much as positive
realization&rdquo;. This makes it clear that whether something was &ldquo;the
right thing to do&rdquo; or an error is determined by post-hoc analysis of the
situation, an advantage the operator didn&rsquo;t have in the moment.</p>
<p>So now what? Our precious theory about the bad apple is gone. Where do we go
from there? Are we not allowed to talk about human actions at all anymore?</p>
<p>Quite the opposite.</p>
<p>Humans are a crucial part of socio-technical systems. More than that, they
are what makes systems safe. As we have said before, our complex systems are
in large part intractable, and thus there is no way we could design a ruleset
that a machine could execute and everything would be safe. The thousands of
little adjustments human operators carry out every minute are the pillars of
our system safety. The crux is that these adjustments sometimes also lead to
adverse outcomes. And this is the part we are interested in. How does
something that is done over and over again seemingly suddenly lead to an
incident? Why does it make sense for a person in that situation to act in the
way they did? After all the basic assumption is that people don&rsquo;t come to work
to do a bad job.</p>
<blockquote>
<p>There is a difference between explaining and excusing human performance.</p>
<p class="cite">
&mdash; <cite>Sidney Dekker, The Field Guide to Understanding Human Error (p. 196)</cite>
</p></blockquote>
<p>So does the human operator in this New View get off the hook? The answer
here is no, because thinking about failure and outages in this way means the
practitioner was never on the hook to justify their behaviour in the first
place. However, being part of an incident at the very sharp end of the
situation brings some new responsibilities with it. It means the human is now
the specialist with most of the knowledge about how the system surprised us
and broke down. They know best how they expected the system to react and what
it actually did. They are the foremost authority on what detections they
utilized and what to put in place to realize faster that something is going
wrong. They know which tools they reached for, which they had to improvise,
and which tools they were missing. This means they are very much on the hook.
But on the hook for helping to find ways to make the system safer going
forward.</p>
<p>If this has sparked your interest in the field, my coworker <a href="https://twitter.com/indec">Ian</a> has
also recently published a set of resources on the <a href="http://codeascraft.com/2014/07/18/just-culture-resources/">Etsy Engineering
blog</a> to get started with the topic of System Safety, Human Error
and Just Culture.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/08/04/human-error-getting-off-the-hook.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Productive VIM with templates]]></title>
    <published>2014-07-22T00:00:00Z</published>
    <updated>2014-07-22T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2014/07/22/productive-vim-with-templates.html</id>
<content type="html"><![CDATA[<p>I basically exist inside of VIM all day. I write code in there, I write emails
in VIM via <a href="http://www.mutt.org">mutt</a>, I take notes with it and I write all my blog posts in
VIM. I think it&rsquo;s clear that improving the way I work with VIM helps in a
variety of scenarios. Over time I also noticed that I often start out with the
same basic file structure and then fill it with content. For example jekyll
blog posts always have the same header, meeting notes always have the same
structure and I use a template to reply to recruiter emails in times where I&rsquo;m
not looking for a job (a trick I learned from <a href="https://twitter.com/katemats">Kate Matsudaira</a> in
one of her <a href="http://katemats.com/people-are-lazy/">great blog posts</a> about productivity).</p>
<p>In the coding world VIM provides a great built-in functionality for that which
is called <a href="http://vimdoc.sourceforge.net/htmldoc/autocmd.html#skeleton">&ldquo;skeleton files&rdquo;</a>. This is a great way to always have a
good-to-go version of C source or header files, Makefiles or RPM spec files.
However this is all based on filetypes (or rather file endings), and since I
write most of my notes and all my blog posts in <a href="http://daringfireball.net/projects/markdown/">Markdown</a>, which all
share the same file ending, this doesn&rsquo;t help me much with having different
templates. So I started to look around for VIM functionality
or plugins that would just let me load templates from a specific location and
maybe expand some variables (as I for example like to have the date auto
inserted into meeting notes). I didn&rsquo;t want a full-fledged templating engine,
although I could certainly have installed and wrapped the <a href="https://github.com/tobyS/vmustache">Mustache
implementation written in VimL</a> to do that for me. But I wanted to
keep it simple and apparently that solution didn&rsquo;t exist yet.</p>
<p>This is why I wrote a VIM plugin called <a href="https://github.com/mrtazz/vim-stencil">vim-stencil</a>. It&rsquo;s a
handful of lines of VimL and it does exactly 2 things:</p>
<ul>
<li>Load a template from a specified location</li>
<li>Expand some variables (currently only one: the date)</li>
</ul>
<p>So now with a simple call to <code>:Stencil</code> in VIM I can choose a template for the
type of file I&rsquo;m editing (yes it supports tab completion) and load that into
my buffer. I even get the current date for free in templates where I choose to
have it. No fuss, no complicated setup. But it&rsquo;s a small thing that increases my
productivity a lot.</p>
]]></content>
    <link href="https://unwiredcouch.com/bits/2014/07/22/productive-vim-with-templates.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Mobile CI at Etsy]]></title>
    <published>2014-06-19T00:00:00Z</published>
    <updated>2014-06-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/mobile-ci-amsterdam/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/mobile-ci-amsterdam/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Development, Deployment &amp; Collaboration at Etsy]]></title>
    <published>2014-06-19T00:00:00Z</published>
    <updated>2014-06-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/collaboration-amsterdam/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/collaboration-amsterdam/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Shared layout for project pages in Jekyll]]></title>
    <published>2014-06-14T00:00:00Z</published>
    <updated>2014-06-14T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2014/06/14/jekyll-shared-project-layouts.html</id>
<content type="html"><![CDATA[<p>I use <a href="http://jekyllrb.com/">Jekyll</a> a lot, especially for <a href="http://unwiredcouch.com">my website</a>, and I
quite like it. I also write and open source the occasional piece of software
every now and then, which usually happens on <a href="https://github.com/mrtazz">my GitHub profile</a>. And
thankfully GitHub makes it <a href="https://pages.github.com/">dead easy</a> to generate a nice looking page
for your project. I&rsquo;ve used this feature for a long time now and have used a
bunch of their awesome provided themes. And since I also host my site on
GitHub Pages, all my projects are automatically available under a sub path
there named after the project.</p>
<p>However, last week I decided I wanted them all to use a layout similar to
my website, so the whole page doesn&rsquo;t change just because you click on a link
on my <a href="http://www.unwiredcouch.com/projects.html">projects page</a>. At the same time I wanted to keep the code for each
page in the respective repo so it&rsquo;s all in one place, without copying the
layout into every repository.</p>
<p>Thankfully there is a trick you can use with GitHub Pages. If you add git
submodules to your repository, they get <a href="https://help.github.com/articles/using-submodules-with-pages">pulled in</a>
automatically on page build. So I created a <a href="https://github.com/mrtazz/jekyll-layouts">shared repository</a> to
hold the template I wanted for my projects. And now all I have to do to get a
project page with the correct layout is:</p>
<ul>
<li><code>git checkout gh-pages</code></li>
<li><code>git submodule add https://github.com/mrtazz/jekyll-layouts.git _layouts</code></li>
<li>copy the <code>README.md</code> of my project to <code>index.md</code> and add the jekyll
frontmatter:</li>
</ul>
<pre tabindex="0"><code>---
layout: project
title: project name
---
</code></pre><ul>
<li>add a <code>_config.yml</code> and fill out the following values:</li>
</ul>
<pre tabindex="0"><code>gaugesid: tracking code for the gaug.es gauge
projecturl: github url for the ribbon in the upper right corner
basesite: base URL to get the CSS from
markdown: kramdown
</code></pre><ul>
<li><code>git push</code></li>
</ul>
<p>The only dependency now is that the CSS comes from my main website, which
I&rsquo;m fine with; it&rsquo;s actually a feature, because if I ever change something
there I want the project pages to reflect that change as well. The other
downside is that if I change the project layout repository I have to update
the reference in all the project repositories. That should be fairly
straightforward with some automation, and it is at least better than copying
files around and committing them to each repository.</p>
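<p>That automation could look something like the following hypothetical sketch: a function that walks over local checkouts (the <code>~/projects</code> location and the branch name are assumptions) and bumps the <code>_layouts</code> submodule on each gh-pages branch:</p>

```shell
#!/bin/sh
# Hypothetical sketch: update the shared _layouts submodule reference in
# every project checkout under a given directory. The directory layout
# and branch name are assumptions, not part of the original setup.
update_layouts() {
  for repo in "${1:-$HOME/projects}"/*/; do
    [ -d "${repo}.git" ] || continue    # skip anything that is not a git repo
    (
      cd "${repo}" || exit 1
      git checkout gh-pages &&
      git submodule update --remote _layouts &&
      git commit -am "update shared layout" &&
      git push
    )
  done
  echo "layout update pass finished"
}
```

<p>Calling <code>update_layouts ~/projects</code> would then push the new submodule reference everywhere in one go.</p>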
]]></content>
    <link href="https://unwiredcouch.com/bits/2014/06/14/jekyll-shared-project-layouts.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Releasing at Scale]]></title>
    <published>2014-05-30T00:00:00Z</published>
    <updated>2014-05-30T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/releasing-at-scale/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/releasing-at-scale/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Computer Positivity]]></title>
    <published>2014-05-21T00:00:00Z</published>
    <updated>2014-05-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/05/21/computers-are-awesome.html</id>
    <content type="html"><![CDATA[<p>Let&rsquo;s face it, we&rsquo;ve all been there, your database just decided to split
brains, Linux just killed Apache on your web servers because another process
was using a bit more memory, your app is broken because the type system of the
language you wrote it in has interesting assumptions about what object you
just passed to a function and your laptop keeps disconnecting from the Wi-Fi.
It&rsquo;s fair to say computers are horrible, right? And with all that certainty in
mind, we reach for the small phone in our pocket, open an app, type into a
text field and hit &ldquo;Tweet&rdquo;, so that the little message gets sent off to
thousands of servers, which process it, find and extract data and then make
our proclamation available to millions of people all over the world within a
second:</p>
<blockquote>
<p>Computers are terrible. Nothing works.</p></blockquote>
<p>We all love to share our opinion and give talks about all the things that are
wrong with computing. JavaScript is a popular topic, so is PHP, Redis, nodejs,
Scala, Rails, Nagios, MongoDB and almost any other technology on different
occasions. You cannot spend a week (or sometimes even a day) without a new
blog post about how some technology or piece of software (or better yet:
everything) is broken, lacking any positive takeaway.</p>
<p>I was at Monitorama at the beginning of May, which is always amazing and a
great place to meet and talk with friends and other people who work in
infrastructure and operations in the broadest sense. And it&rsquo;s easily one of my
favorite conferences. I <a href="https://vimeo.com/95247023">got to give a talk</a> there and, close to the end,
before introducing how our Nagios setup works, I asked the audience to raise
their hand if they had strong feelings about Nagios. From what I could see
almost everybody raised their hand. Then I asked them to raise the other hand
if those feelings were love. Almost nobody did.</p>
<p>We have arrived at a point where a piece of software (granted with some
fixable inconveniences) monitors a complex system of computers, lets us know
when something is broken and arguably enables a ton of businesses. And yet
almost nobody has anything good to say about it. Everybody will tell you how
terrible it is.</p>
<p>This isn&rsquo;t about Nagios though. This is about general attitude. We work in a
field where you can (compared to other professions) make a lot of money,
usually don&rsquo;t have to be too concerned about unemployment (it&rsquo;s even become
kind of a sport to complain and make fun of all the recruiter requests we get
and be upset they are not perfectly tailored to what we want to work on) and
generally get paid to solve problems. And then we start to complain about the
fact that we can&rsquo;t always choose the problems we have to solve. We build
complex systems with a myriad of interactions and components and then have the
hubris to say we should understand them in their entirety and whoever doesn&rsquo;t
is plain stupid.</p>
<p>Let&rsquo;s be clear here. I&rsquo;m not saying you can never complain about things. I
don&rsquo;t think everything is unicorns and rainbows. And I&rsquo;m not saying you aren&rsquo;t
allowed to say something when things are horrible. But as a profession I get
the feeling we have started to basically take negativity for granted. And with
that we set an awful example for peers and especially (young) people coming to
our industry. We say &ldquo;computers are terrible&rdquo; when it really just means &ldquo;the
computer does a thing that is really inconvenient for me right now&rdquo;.  We have
a machine in front of us that lets us boot other computers all around the
world, talk to our families face-to-face wherever they might be and access any
information known to mankind in an instant. Yet what we say is &ldquo;look at all
the crap we have to put up with every day, recognize our prowess&rdquo;. We could
invent teleportation and enable everybody to be anywhere they want in an
instant. As long as it sometimes puts us inconveniently a couple of meters
away from our desired destination we would be tweeting that &ldquo;this is horrible
and everything is broken&rdquo;.</p>
<p>Technology has made a lot of things possible for me. I would have never
thought that I would live in what might be the best city on earth, love every
day I go to work, be able to work on things I&rsquo;m interested in, and have the
opportunity to travel to conferences to have people listen to what I say about
computers. Am I stressed, annoyed or even mad sometimes about yum resolving
dependencies wrong, having to use a different way of comparing variables in
PHP or the fact that my laptop goes to a blank screen again 20 seconds after I
unlocked it? Yup. Does that make everything terrible? Nope.</p>
<p>I&rsquo;m sure as hell happy and thankful I can work with computers every day, do
interesting things and contribute to a platform where people create businesses
and make a living. Because computers are pretty amazing and I get to learn
something new every day. And sometimes even feel almost
<a href="https://twitter.com/mrtazz/statuses/460423094054973440">like a wizard</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/05/21/computers-are-awesome.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[How OmniFocus controls my life]]></title>
    <published>2014-05-13T00:00:00Z</published>
    <updated>2014-05-13T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/05/13/omnifocus.html</id>
    <content type="html"><![CDATA[<p>At this point it&rsquo;s pretty fair to say that <a href="http://www.omnigroup.com/omnifocus">OmniFocus</a> rules my life. I
started really taking GTD seriously about 2 years ago. I had tried a lot of
different task managers before of course. I loved <a href="https://culturedcode.com/things/">Things</a> when it
originally came out as a beta and of course I had started to write <a href="https://github.com/mrtazz/gtd-couch">my own
todo tracker</a> like everybody else. But I had never actually read
<a href="http://www.amazon.com/Getting-Things-Done-Stress-Free-Productivity/dp/0142000280">the book</a> before because I thought I didn&rsquo;t need such a
sophisticated todo tracker. That changed when I started a new job and moved to
a different country. Suddenly there were so many more things to keep track of
and do. So I read the book, pulled out Things again and tried to implement my
version of GTD. However it became apparent to me very quickly that the way I
want to use it (collect everything and have a lot of different ways to
retrieve/view data) wouldn&rsquo;t work with Things. So I shelled out a lot of money
and bought OmniFocus. And I have tried a couple of times to use a different
tool again but nothing has worked for me as well as OF does. Mainly because I have
arrived at a setup that is very well integrated with my daily workflow.</p>
<h3 id="the-basics">The Basics</h3>
<p>I have OmniFocus running on my personal and work laptop as well as on the
iPhone and iPad (although I hardly ever use it on there). They are all synced
through webdav and owncloud on my personal servers. I mainly use it on the
laptop and the iPhone client serves mostly for quickly inputting data or
pulling up a list when I&rsquo;m on the go.</p>
<h3 id="collecting-all-the-things">Collecting all the things</h3>
<p>As every GTD guide ever will tell you, the system only works if it is your one
and only system that contains everything. And thus you have to add all your
todos and ideas in there. I follow this approach closely, as I&rsquo;ve found
that I lose confidence in the tool as soon as it
doesn&rsquo;t contain my whole world. Paramount to this is the ability to enter new
items from basically everywhere and support every way that could generate
things for you to do. Luckily for me this means only a handful of things:</p>
<ul>
<li>Random things that I come up with</li>
<li>Email</li>
<li>GitHub issues</li>
<li>Jira</li>
</ul>
<p>This basically covers all the ways new things land on my plate. And thus I
have made sure all of them find an easy way into my
inbox.</p>
<h4 id="random-things">Random things</h4>
<p>I use <a href="http://www.alfredapp.com/">Alfred 2</a> heavily on the desktop to quickly switch to or open
apps, convert units, lookup people, and a myriad of other things. Naturally
that means this is also the place where I should be inputting all new todos as
they come to mind. For that I&rsquo;m using an <a href="http://www.alfredforum.com/topic/1041-create-new-task-in-omnifocus-inbox">awesome workflow</a>
that I found somewhere on the internet. It allows me to fire up the Alfred
prompt and simply enter <code>todo do awesome thing @context</code> and on hitting enter
the new item is in my inbox with the correct context. This lets me add new
things in a matter of seconds and worry later about filtering, remembering and
doing them.</p>
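<p>The parsing such a workflow does can be sketched roughly like this. Note this is a guess at the behaviour, not the workflow&rsquo;s actual code:</p>

```python
# Rough sketch of quick-entry parsing: split "do awesome thing @context"
# into a task name and an optional context. This is an assumption about
# what the Alfred workflow does, not its actual implementation.
def parse_quick_entry(text):
    """Return (task, context); context is None when no @context is given."""
    words = text.split()
    context = None
    if words and words[-1].startswith("@"):
        context = words[-1][1:]  # strip the leading "@"
        words = words[:-1]
    return " ".join(words), context
```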
<h4 id="email">Email</h4>
<p>Email is a little bit trickier. There are awesome plugins for Mail.app to work
with Omnifocus and I hear they make it a breeze to get things done. However my
email client of choice is <a href="http://www.mutt.org/">mutt</a>. Which means there is a bit more
hacking to do (as usual). However I found a great <a href="https://github.com/mrtazz/bin/blob/master/mutt-to-omnifocus.py">Python script</a>
that parses emails and adds them to Omnifocus. I also added this keybinding to
my mutt configuration:</p>
<pre tabindex="0"><code>macro index,pager \Ca &#34;&lt;enter-command&gt;unset
wait_key&lt;enter&gt;&lt;pipe-message&gt;mutt-to-omnifocus.py
&lt;enter&gt;&lt;save-message&gt;=gtd-needs-reply/&lt;enter&gt;&lt;sync-mailbox&gt;&#34;
</code></pre><p>Now all I have to do when reading an Email or browsing through the list is hit
<code>Ctrl-a</code> and mutt automatically creates a task in my Omnifocus inbox and moves
the email out of my inbox into a folder I creatively called <em>gtd-needs-reply</em>
(I also have one called <em>gtd-to-read</em> which I use for emails that I still have
to read). This keeps my email inbox clean and, since it adds an OmniFocus
entry with the context &ldquo;Email&rdquo;, has the benefit that I can easily find all the
emails I have to write with a custom perspective (more on that later).</p>
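<p>The core of such a script is just pulling a few headers out of the piped message and turning them into a task title. A minimal sketch; the title format and the <code>@Email</code> context suffix are my assumptions, not what the actual script produces:</p>

```python
# Minimal sketch of a mutt-to-omnifocus style script: read an RFC 822
# message and build an inbox task title from sender and subject.
# The "Reply to ... @Email" format is an assumption for illustration.
import email

def task_from_email(raw_message):
    msg = email.message_from_string(raw_message)
    subject = msg.get("Subject", "(no subject)")
    sender = msg.get("From", "unknown")
    return "Reply to %s re: %s @Email" % (sender, subject)
```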
<h4 id="github-issues">GitHub Issues</h4>
<p>A decent amount of things to do for me are also generated via GitHub Issues.
This can either be issues on one of the public GitHub projects I maintain or
more often a code review <a href="https://www.etsy.com">at work</a>. We use Pull Requests on GitHub
Enterprise for code reviews at Etsy and if someone wants you to review code,
they assign the pull request to you. Since there is no need for me to go
through my email for notifications about code I have to review, I wrote a
simple script that runs every 10 minutes and checks whether I have issues
assigned that are not yet in my OmniFocus. <a href="https://github.com/mrtazz/bin/blob/master/ghfocus.rb">This script</a> reads a
configuration file which can have an arbitrary number of GitHub (Enterprise)
instances and asks for all issues assigned to a user owning the OAuth token.
It then generates a ticket title based on the repo URL and issue number, adds
a configurable context (Github or EtsyGithub for me), and creates an OmniFocus
inbox task based on that data, again easily findable by context in
OmniFocus.</p>
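<p>The title generation itself can be sketched like this; the exact <code>owner/repo #N</code> format is an assumption, not necessarily what ghfocus.rb produces:</p>

```python
# Sketch of the title generation described above: turn a repo URL and an
# issue number into a short task title. The "owner/repo #N" format is an
# assumption for illustration.
def issue_task_title(repo_url, issue_number):
    # keep the trailing "owner/repo" part of a GitHub (Enterprise) URL
    parts = repo_url.rstrip("/").split("/")
    return "%s/%s #%d" % (parts[-2], parts[-1], issue_number)
```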
<h4 id="jira">Jira</h4>
<p>We use Jira at Etsy to manage tickets and workload and thus the majority of my
work is captured in there. Since I don&rsquo;t want to have two places to look for
things, I&rsquo;m also pulling all my Jira tickets into Omnifocus. This is done with
basically the same script as the GitHub sync but uses <a href="https://github.com/codehaus/jira4r">jira4r</a> as the
input source. It then drops a todo item with the Jira project key and ticket
number into my inbox with the context <em>Etsy:Jira</em>. This makes it super easy to
organize all the work I am assigned in Omnifocus. The only downside to that is
that it&rsquo;s not a 2-way sync. Right now I clean up and close tasks in Omnifocus
(and Jira) when I actually finish them or during the weekly review. I also
only create tickets for myself in Jira and don&rsquo;t add tickets from Omnifocus
when I create new todos with the <em>Etsy:Jira</em> context. This wouldn&rsquo;t be very
hard to do but I haven&rsquo;t found it to be super painful to do it manually.</p>
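<p>The Jira variant builds its titles the same way, just from the project key and ticket number instead of a URL. A sketch, with the format and the context suffix assumed:</p>

```python
# Sketch: build the inbox item for a Jira ticket from project key, ticket
# number and summary, tagged with the Etsy:Jira context. The exact title
# format is an assumption for illustration.
def jira_task_title(project_key, ticket_number, summary):
    return "%s-%d %s @Etsy:Jira" % (project_key, ticket_number, summary)
```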
<h3 id="basic-structure">Basic Structure</h3>
<p>So now that I have an easy way to enter all the incoming work into OmniFocus,
the next step is organizing all the things. For that I use folders heavily for
the basic structure and something close to the Areas of Responsibility in the
<a href="http://www.amazon.com/Getting-Things-Done-Stress-Free-Productivity/dp/0142000280">GTD book</a>. I have top level folders for <em>Etsy</em>, <em>Personal</em>,
<em>Talks</em> and <em>Open Source</em>. And under those another layer of folders which
reflect finer-grained areas of responsibility. You can see the structure here:</p>
<p><img src="/images/of_structure.png" alt="of structure"></p>
<p>Within those areas I have the actual projects I work on and usually a single
action list project for miscellaneous things. I organize active and someday
projects in there by putting someday projects in <em>On Hold</em> status. This makes
it easy to find projects I&rsquo;m working on by filtering for active ones in a
perspective. The project view is the most important one for finding and
organizing my work. I never fully got into using contexts for things other
than automated tools that pull in data. I rarely find myself in actually
different contexts where it makes sense to pull up a specific list, and all my
attempts to get that working ended in confusion (ymmv).</p>
<h3 id="perspectives-all-the-way-down">Perspectives all the way down</h3>
<p>Based on that structure I have created a handful of custom perspectives to
quickly find things I need. You can see the overview of my perspectives in the
screenshot below.</p>
<p><img src="/images/of_perspectives.png" alt="of perspectives"></p>
<p>The most important one is the <em>Today</em> perspective. It holds all items that are
due, overdue or flagged. This is my daily todo list with things I wanna get
done today. The next ones are Etsy active projects, next actions and weekly
summary. Those I pull up for planning daily tasks and writing my weekly
summary. I also have a perspective for Personal active projects which I don&rsquo;t
use that much but still pull up often enough to be valuable. The only catch
with those perspectives is that they are mostly project and not context based.
That means most of the perspectives don&rsquo;t sync to the iPhone. For now that is
ok for me because I mostly use the iPhone to add stuff to the inbox and to
check my daily todo list which I made a context based perspective. And <a href="https://twitter.com/kcase/status/465904405141671938">I&rsquo;ve
also heard</a> that project based perspectives will be syncing to
the iPhone in the future. So that will help a lot.</p>
<h3 id="review-review-review">Review, Review, Review</h3>
<p>As every person that is trying to do their version of GTD will tell you,
consistent reviews are the heart of a working system. And that is no different
for me. I try to really be disciplined about my weekly reviews and try to do
daily reviews but often only end up actually doing them 3 times a week or so.
Which is not too bad as long as the weekly review is consistent.</p>
<h4 id="daily">Daily</h4>
<p>My daily review routine is pretty straightforward. I pull up the today list
and mark as done everything I completed but haven&rsquo;t checked off yet. Then I
pull up my active perspectives and flag stuff I wanna work on today. That&rsquo;s
it, simple and easy.</p>
<h4 id="weekly">Weekly</h4>
<p>My weekly review is a bit more complex. I actually have a recurring project
that becomes available every Friday and is due on Sunday and looks like this:</p>
<p><img src="/images/of_weekly_review.png" alt="of weekly review"></p>
<p>This is my checklist to do my weekly review. So every weekend I will clear out
and archive all email and filter unprocessed email into <em>gtd-to-read</em> or
<em>gtd-needs-reply</em>. This is mostly mailing list stuff since I try to stay on
Inbox Zero during the week. I then put everything I can think of that has to
be done into the inbox. I check last week&rsquo;s calendar to see if anything is
left over from meetings and next week&rsquo;s calendar for stuff I have to prepare.
I then hit the <em>Review</em> button in OmniFocus and start
reviewing all my projects. That usually starts with sorting all the inbox
items into the folder structure and then going through all the other projects
to mark things as completed and add new actions. This usually takes a bit
longer for active projects, whereas on hold projects I can go over quickly
because they don&rsquo;t usually have a lot of activity. I have set my default
review cycle to 5 days in general. That means projects become available for
review again after 5 days, so when I don&rsquo;t get to it on the weekend and do my
review on Monday morning, I still have the projects ready for review on
Friday.</p>
<h3 id="verdict">Verdict</h3>
<p>For now I think I have found a good balance of using a lot of features of
Omnifocus while still keeping it somewhat simple and not going overboard with
the setup. Automating a lot of things - especially for inputting data - has
made a big difference in trusting the system to be my only source of truth for
work that needs to be done. My biggest problems are still making sure to take
enough time for the reviews, keep adding todos and paying attention to my
daily list even if I&rsquo;m stressed and some days also &hellip; you know &hellip; actually
getting things done.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/05/13/omnifocus.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[A Whirlwind Tour of the Etsy Monitoring Stack]]></title>
    <published>2014-05-06T00:00:00Z</published>
    <updated>2014-05-06T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/etsy-monitoring/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/etsy-monitoring/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[My 5 years of GitHub]]></title>
    <published>2014-03-28T00:00:00Z</published>
    <updated>2014-03-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/03/28/5-years-github.html</id>
    <content type="html"><![CDATA[<p>In early 2009 I was very much not the typical programmer. I had just spent 3
years part time in what was basically systems administration during my
undergrad and 1 year of <a href="http://en.wikipedia.org/wiki/Manufacturing_execution_system">trying to connect chemical production plants to
computers</a>. I had to write code for some of my University assignments and
had done some shell scripting in my spare time before (all the sysadmin stuff
was Windows and there was not a lot of scripting involved there). However in
general I had this idea that systems administration and IT was completely
orthogonal to writing code, and a lot of the Software Engineering classes I
had up to that point didn&rsquo;t really spark my interest. They usually featured a lot of
software processes, XML, SOAP, and simple C#.NET Windows applications. Nothing
I was really interested in. But after graduating in 2007 and working full time
I had this feeling every day that something was missing.  I had no idea what I
wanted to do with my computer degree, I loved working with computers but I
felt I had nothing I was really good at. And the nature of my degree was that
it was very practical. Which helped me a lot in getting a foothold in the
industry, but I felt like I was missing all the theoretical education
you would get in a more traditional university setting. So I decided to go
back to University and get a Masters degree. And I also quit my job and worked
part time as an Embedded Systems developer. Which put me way more out of my
comfort zone than I expected. But it also confronted me with a lot of new
things regarding software development. My first project at work was actually a
web based network sniffer that ran on a microcontroller. So I accidentally
started learning more about web development while working at an Embedded shop.
And another really fortunate accident was that one of our lead engineers -
<a href="https://github.com/nbraun">Nathan</a> - was really into git. And me being a subversion fan at the
time resulted in some really interesting discussions which made me look into
git.</p>
<h3 id="enter-twitter-twsh-and-github">Enter Twitter, twsh and GitHub</h3>
<p>I had signed up for Twitter in mid 2007 while I was writing my Bachelor thesis
and literally had no idea what I was supposed to do with it. In my circle of
coworkers and friends who worked with computers I was most of the time one of
the first to explore new things and thus I didn&rsquo;t know a single person with a
twitter account. I don&rsquo;t even remember how I heard about Twitter in a world
that doesn&rsquo;t have Twitter. But I had <a href="https://twitter.com/mrtazz/statuses/130573092">extremely interesting tweets</a>
back then already of course. Fast forward to 2009: my newfound interest in
programming from my new job and the programming I had to do for class
assignments in university meant that I was constantly trying out new things
and experimenting with the concepts I learned. So at some point I decided I
wanted to have a twitter command line client and started to write <a href="https://github.com/mrtazz/twsh">the twitter
shell</a>. It was painful and slow and I had no idea what I was doing. It
was living there in a subversion repository on my Mac mini at home and I
wanted to open source it eventually. I had no idea what that really meant
either. But I had used a lot of open source software and was always fascinated
by the idea that I could just look into how things work.</p>
<p>I first looked into hosting it on Google Code but I found it to be ugly and
weird. And a lot of people in my twitter stream were talking about git and
this new thing called GitHub. Since there was nothing really tying me into
subversion, I moved the code over to git and signed up for a GitHub account on
March 28th 2009.</p>
<h3 id="brave-new-world">Brave New World</h3>
<p>And it <em>literally</em> changed my world. I suddenly found a lot of people who were
doing so many interesting things. And it encouraged me to work on all the
ideas I had floating in my mind but thought were useless or boring. I found
out about the programming communities behind languages like ruby or python.
How to package software to be installed from PyPI. I started thinking about
how to split software into projects as libraries and how to design APIs. I
wrote API wrappers for things like <a href="https://github.com/mrtazz/InstapaperLibrary">Instapaper</a>, <a href="https://github.com/mrtazz/notifo.py">a Notifo
library</a> or a tool to import YAML based groceries lists into the
<a href="http://sophiestication.com/groceries/">iPhone Groceries app</a> and I also heard about continuous
integration for the first time.</p>
<p>And I was fascinated by the idea of having a build system do all those things
I usually ran commands for automatically. I read up on it and tried to get
<a href="https://github.com/integrity/integrity">Integrity</a> up and running, which seemed to be the most accessible
solution to me at the time. And having worked on the notifo library I wanted
it to push to my phone whenever a build would break or work (yes in general it
was quiet enough for me back then that I wanted all of the notifications). So
the time had come to contribute to an Open Source project and figure out how
to ruby and write unit tests and all of those things. I wrote the notifier,
submitted a pull request and after some feedback and improvements, <a href="https://twitter.com/sr">Simon</a>
merged my notifier into master and I was super excited. I was finally able to
have it running on my own CI instance and know the status of my builds with
just a quick glance at my phone:</p>
<p><img src="/images/integrity-notifo.png" alt="integrity pull request merged"></p>
<p>After that I was less terrified of contributing to Open Source and just trying
out things. I kept perusing the GitHub explore page and found all those
interesting projects. I went into something like an Open Source rampage and
tried to contribute to and open source as much as possible. I even signed up
for <a href="https://github.com/blog/178-it-s-a-calendar-about-nothing">Calendar About Nothing</a> and maintained something like 120 days with
consecutive contributions at some point. And whenever I decided to push a new
project I would meet and engage with more people and learn new things. For
example I wrote the <a href="http://www.unwiredcouch.com/2010/04/21/plustache.html">C++ implementation</a> of Mustache templating for
fun and met <a href="https://twitter.com/janl">Jan</a> because of that. Which then led to me meeting a lot of
other awesome people in Berlin and around the internet.</p>
<p>And even though twsh never actually got finished and I lost interest in it, I
can definitely say that GitHub has changed my computing life in amazing ways.
And I probably wouldn&rsquo;t be where I am now without it.</p>
]]></content>
    <link href="https://unwiredcouch.com/2014/03/28/5-years-github.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Backups with rsync and zfs]]></title>
    <published>2014-03-18T00:00:00Z</published>
    <updated>2014-03-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2014/03/18/zfs-rsync-backups.html</id>
    <content type="html"><![CDATA[<p>As I <a href="http://www.unwiredcouch.com/2013/10/30/uncloud-your-life.html">mentioned before</a> I&rsquo;m running my own backups on a server
in my apartment. I didn&rsquo;t really talk a lot about how this works,
other than it is running on a HP Microserver with an encrypted ZFS RAID. So I
wanted to also quickly jot down how the backup works. This is only set up for
a single user right now because I&rsquo;m the only one using it.</p>
<p>For me a backup has two important parts:</p>
<ul>
<li>Have data in a different location</li>
<li>Be able to restore data from the past</li>
</ul>
<p>The time sensitivity of those two properties is pretty different for me. For
example I have chosen for myself that I&rsquo;m happy with only being able to
restore deleted data from the last day. So if I create something and delete it
5 hours later, I&rsquo;m ok with not being able to recover it. On the other hand I&rsquo;m
very aware of the fact that my mailserver can disappear at any given time:</p>
<blockquote class="twitter-tweet" lang="en"><p>that moment when you want to make dinner and your mailserver disappears</p>&mdash; Daniel Schauenberg (@mrtazz) <a href="https://twitter.com/mrtazz/statuses/411689583370592256">December 14, 2013</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>This is why I want to copy data to a remote location as often as possible
(which for me means about every 15 minutes). And my setup is heavily based
around those ideas. The core of the backup system is ZFS and a separate file
system for each machine I want to backup. In order to have the ability to go
back in time I use <a href="http://docs.oracle.com/cd/E19253-01/819-5461/gbcya/index.html">zfs snapshots</a>. Every night the following
script runs on my backup server and creates a snapshot for the day:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#75715e"># simple script to snapshot locations on a ZFS backup pool</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>timestamp<span style="color:#f92672">=</span><span style="color:#e6db74">`</span>date +%Y-%m-%d-%H:%M:%S<span style="color:#e6db74">`</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> volume in <span style="color:#66d9ef">$(</span>ls /backup<span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;Creating snapshot for </span><span style="color:#e6db74">${</span>volume<span style="color:#e6db74">}</span><span style="color:#e6db74"> at date </span><span style="color:#e6db74">${</span>timestamp<span style="color:#e6db74">}</span><span style="color:#e6db74">&#34;</span>
</span></span><span style="display:flex;"><span>  /sbin/zfs snapshot backup/<span style="color:#e6db74">${</span>volume<span style="color:#e6db74">}</span>@<span style="color:#e6db74">${</span>timestamp<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>And to make sure that I really do have snapshots I have this simple nagios
script to tell me if the snapshotting worked last night.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># nagios script to check age of backup snapshots</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>YESTERDAY<span style="color:#f92672">=</span><span style="color:#e6db74">`</span>date -v-1d +%Y-%m-%d<span style="color:#e6db74">`</span>
</span></span><span style="display:flex;"><span>EXITCODE<span style="color:#f92672">=</span><span style="color:#ae81ff">0</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> backup in <span style="color:#66d9ef">$(</span>ls /backup<span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  zfs list -t snapshot | grep <span style="color:#e6db74">${</span>backup<span style="color:#e6db74">}</span> | grep -q <span style="color:#e6db74">${</span>YESTERDAY<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> $? !<span style="color:#f92672">=</span> <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    echo <span style="color:#e6db74">&#34;Snapshot of </span><span style="color:#e6db74">${</span>backup<span style="color:#e6db74">}</span><span style="color:#e6db74"> missing for </span><span style="color:#e6db74">${</span>YESTERDAY<span style="color:#e6db74">}</span><span style="color:#e6db74">.&#34;</span>
</span></span><span style="display:flex;"><span>    EXITCODE<span style="color:#f92672">=</span><span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> <span style="color:#e6db74">${</span>EXITCODE<span style="color:#e6db74">}</span> <span style="color:#f92672">-eq</span> <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;All backup volumes were snapshotted on </span><span style="color:#e6db74">${</span>YESTERDAY<span style="color:#e6db74">}</span><span style="color:#e6db74">.&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>exit <span style="color:#e6db74">${</span>EXITCODE<span style="color:#e6db74">}</span>
</span></span></code></pre></div><p>And I have this check (which runs on all my servers because I have zpools
everywhere) to tell me about the disk health of the backup zpool:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># check for zpool health</span>
</span></span><span style="display:flex;"><span>ZPOOL<span style="color:#f92672">=</span><span style="color:#e6db74">`</span>which zpool<span style="color:#e6db74">`</span>
</span></span><span style="display:flex;"><span>EXITSTATUS<span style="color:#f92672">=</span><span style="color:#ae81ff">0</span>
</span></span><span style="display:flex;"><span>IFS<span style="color:#f92672">=</span><span style="color:#e6db74">$&#39;\n&#39;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> line in <span style="color:#66d9ef">$(</span><span style="color:#e6db74">${</span>ZPOOL<span style="color:#e6db74">}</span> list -o name,health | grep -v NAME | grep -v ONLINE<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  echo $line
</span></span><span style="display:flex;"><span>  EXITSTATUS<span style="color:#f92672">=</span><span style="color:#ae81ff">2</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> $EXITSTATUS <span style="color:#f92672">-eq</span> <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>  echo <span style="color:#e6db74">&#34;All pools are healthy.&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>exit $EXITSTATUS
</span></span></code></pre></div><p>With this setup in place I can simply copy files into the file system that
belongs to that machine and it will get snapshotted every night. And what&rsquo;s an
awesome tool to copy data? That&rsquo;s right, <a href="http://rsync.samba.org">rsync</a>.</p>
<p>My backup script runs once every 15 minutes and looks like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span><span style="color:#75715e">#</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e"># Backup script to pull in changes from remote hosts</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">#</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">for</span> backup in <span style="color:#66d9ef">$(</span>ls /backup<span style="color:#66d9ef">)</span>; <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>  grep -q <span style="color:#e6db74">${</span>backup<span style="color:#e6db74">}</span> ~/.backupexcludes
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">if</span> <span style="color:#f92672">[</span> $? !<span style="color:#f92672">=</span> <span style="color:#ae81ff">0</span> <span style="color:#f92672">]</span>; <span style="color:#66d9ef">then</span>
</span></span><span style="display:flex;"><span>    /usr/local/bin/rsync -e <span style="color:#e6db74">&#39;ssh -o BatchMode=yes -o ConnectTimeout=10&#39;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>--archive --delete --timeout<span style="color:#f92672">=</span><span style="color:#ae81ff">5</span> <span style="color:#e6db74">${</span>backup<span style="color:#e6db74">}</span>:. /backup/<span style="color:#e6db74">${</span>backup<span style="color:#e6db74">}</span>/
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">fi</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">done</span>
</span></span></code></pre></div><p>This allows me to keep machines that I used to back up but that are no
longer online in an excludes list. That way rsync (and ssh) doesn&rsquo;t hang or
error on something that no longer needs to be backed up. And in case a machine
is unavailable or disappears, the timeout settings in that script make sure it
just gets skipped and retried on the next run.</p>
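<p>The excludes file is nothing special: just one hostname per line that the loop&rsquo;s <code>grep -q</code> matches against. A small sketch with made-up hostnames and a temporary path:</p>

```shell
# hypothetical ~/.backupexcludes contents: one hostname per line for
# machines that should no longer be pulled (these names are made up)
cat > /tmp/backupexcludes.example <<'EOF'
oldhost.example.com
retiredbox.example.com
EOF

# the backup loop skips a host whenever grep -q finds it in the list
if grep -q oldhost.example.com /tmp/backupexcludes.example; then
  echo "oldhost.example.com is excluded"
fi
```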
<p>I&rsquo;m pretty happy with the setup: my backup server pulls in data from all my
servers on the internet and stores it (forever?). It is chef&rsquo;d for the most
part (though there is always more to automate) and is pretty simple in my
opinion. The backup situation for my laptop is not ideal yet, as I manually
back it up by running rsync. I want to set the backup server up to also serve
some of the backup filesystems as Time Machine targets, so I can just use
Time Machine on my laptop and have it automatically run the backups.</p>
<p>But in the meantime I can add a new backup with this one weird trick:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>zfs create backup/newhost <span style="color:#f92672">&amp;&amp;</span> chown -R mrtazz:mrtazz /backup/newhost
</span></span></code></pre></div>]]></content>
    <link href="https://unwiredcouch.com/bits/2014/03/18/zfs-rsync-backups.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Co-Workers as Customers]]></title>
    <published>2014-03-10T00:00:00Z</published>
    <updated>2014-03-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/coworkers-as-customers/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/coworkers-as-customers/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Female Eunuch]]></title>
    <published>2014-03-07T00:00:00Z</published>
    <updated>2014-03-07T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/greer-thefemaleeunuch-1970/</id>
    <content type="html"><![CDATA[<p>I started the year off with finally finishing Germaine Greer&rsquo;s feminist classic
from 1970 about the role of women in modern society. I had known about the book
for a couple of years, and after having read Bell Hooks&rsquo; &ldquo;The Will to Change:
Men, Masculinity, and Love&rdquo; last year I decided to finally read it. I definitely
enjoyed it. Especially as a man it opens your eyes to a lot of things you never
encounter in your daily life. It&rsquo;s very graphic at times and there are some
long-winded parts in the middle but I would definitely recommend it to anyone
who&rsquo;s interested in feminism. I also started reading her 1999 follow-up &ldquo;The Whole
Woman&rdquo; this year, the sequel she said she would never write. And so far I
like it, and it&rsquo;s alarming how little has changed since &ldquo;The Female Eunuch&rdquo;.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/greer-thefemaleeunuch-1970/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Development, Deployment &amp; Collaboration at Etsy]]></title>
    <published>2014-03-05T00:00:00Z</published>
    <updated>2014-03-05T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/collaboration-london/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/collaboration-london/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Git - Put the stupid back in stupid content tracker]]></title>
    <published>2014-02-17T00:00:00Z</published>
    <updated>2014-02-17T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2014/02/17/git-stupid.html</id>
    <content type="html"><![CDATA[<p>Git describes itself as a <a href="https://www.kernel.org/pub/software/scm/git/docs/">stupid content tracker</a>. While this
was surely meant as a clever pun and an understatement, there is some truth to
it. When you go to seek out workflows with git and try to understand how
people use it, there is a myriad of flows and recommendations. Most of them
focus on the areas of git which are often considered to be on a higher level
of the usage ladder. Rebasing, squashing, cherry-picking and all those
features. And most often they are considered to be part of your everyday
workflow. And while I&rsquo;m in no position to judge whether those techniques are
ideal for certain teams and their workflows, I think they are most of the time
way too complex to work in larger teams and across people with different skill
levels. Once you tell people to squash/rebase on all branches when merging
upstream, to alias pull to <code>pull --rebase</code> (but not when you have merged a
feature branch and not pushed it yet), and to remember all of this while
writing code and concentrating on the next deploy, you&rsquo;re probably gonna have
a bad time.</p>
<p>See, this is not what you should have to remember to commit the code you just
wrote to disk. I&rsquo;ve seen such recommendations and the following complaints
when the workflow breaks down a couple of times now. It even led me to
compose this fascinating tweet at one point:</p>
<blockquote class="twitter-tweet" lang="en">
<p>Before complaining about git
please consider answering these 2 questions:&#10;1. Were you trying to be
overly clever?&#10;2. Were you expecting svn?
</p>
&mdash; Daniel Schauenberg (@mrtazz)
<a href="https://twitter.com/mrtazz/statuses/341841535165415424">June 4, 2013</a>
</blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<h3 id="the-example">The Example</h3>
<p>But in all seriousness, in the end the most important thing about git is that
it lets you commit changes to disk in a way that your future self can
figure out what changed later. And I want to highlight why I think it makes
more sense to start simple and then figure out if additional features really
make sense for you.</p>
<p>Let&rsquo;s take an example workflow you can find on the internet and take a look:</p>
<blockquote>
<p>git pull --rebase instead of git pull</p>
<p>git rebase -i @{u} before git push</p>
<p>(on “feature”)</p>
<p>git merge master</p>
<p>to make feature compatible with latest master</p>
<p>(on “master”)</p>
<p>git merge --no-ff feature to ship a feature</p>
<p>However if “feature” contains only 1 commit, avoid the merge commit:</p>
<p>(on “master”)</p>
<p>git cherry-pick feature</p></blockquote>
<p>It doesn&rsquo;t matter where this is from because I don&rsquo;t want to disrespect the
author or their ability to choose a workflow that works for them. It just is a
good example (of which you can find hundreds that are similar on the web) of
what a lot of people think a git workflow has to be like and cargo cult it
into their setup. After all, if you&rsquo;re not using all the features of git you must
be missing out on precious productivity, right?</p>
<p>Back to the example. Have you kept track and can remember without looking when
to merge, rebase or cherry-pick? You are basically doing the same thing -
bringing new commits to your branch - and you have to do 3 different things
depending on the circumstances. But this isn&rsquo;t even the most error-prone
part. Let&rsquo;s consider you work on a fairly active project with a lot of people.
You are maybe even doing continuous integration and continuous deployment and
trunk/master gets committed to all the time. You are working on a feature in a
branch (btw while we&rsquo;re at it read <a href="http://whilefalse.blogspot.de/2013/02/branching-is-easy-so.html">Camille Fournier&rsquo;s excellent blog
post</a> to find out why you actually shouldn&rsquo;t do this in a
continuous deployment setup) and want to pull in changes from master. So you
merge master as per the instructions. You notice someone else has pushed to
the feature branch in the meantime. You update the local branch with
<code>pull --rebase</code> as per the instructions again and push up your changes. If you
ever had to do this you won&rsquo;t be surprised that your working tree now looks
something like this:</p>
<p><img src="/images/feature-rebase.png" alt="rebase on the feature branch"></p>
<p>You now have the same commit with two different SHA IDs on different branches.
This might not look like a big problem right now. But I for one don&rsquo;t like to
have multiple IDs for the same thing and also think that having two commits
doing the same thing is not an ideal situation.</p>
<p>But even if you work alone on your feature branch and don&rsquo;t need to rebase
before pushing you will eventually run into a similar situation on your master
branch. You have worked on your feature branch for a while. You opened a pull
request, incorporated feedback and pushed it back up to the branch. People
have linked to your commits in tickets to note that it fixes stuff they&rsquo;ve
been waiting on. Now the big moment has come and you want to integrate with
master. You pull in changes from origin and run <code>git merge --no-ff feature-branch</code> to bring in the changes with a clean merge commit. You run
<code>git push</code> but there have been upstream changes. So you bring them in again as
always with <code>git pull --rebase</code>. Now you can push and everything&rsquo;s fine right?
Well almost, except that last rebase has rewritten all your commits from your
feature branch (if you additionally ran <code>git rebase -i</code> before pushing you are
probably very well aware of this). You might think that this is ok or even
intended since you wanted to clean up your commits anyways. And that this
makes it much cleaner and easier to read. However what you effectively just
did was rewrite (git) history and remove public references to changes.
Everybody who linked to your commits now has links pointing to a nonexistent
resource (they will still be there until you eventually clean up the remote
feature branch, but that&rsquo;s just a detail in my opinion). Pull requests don&rsquo;t
get automatically closed and you sure don&rsquo;t remember to do it by hand and all
references in there are useless anyways. And all just for a little bit of
beautification of how you actually did your work.</p>
<h3 id="so-what-are-you-saying">So what are you saying?</h3>
<p>Does that mean you should never use rebase? Am I saying that only the basic
git commands are supposed to be part of a workflow? Do I call all these
developers using advanced techniques stupid?</p>
<p>N-O-P-E</p>
<p>I do use rebase in some occasions, I don&rsquo;t think it&rsquo;s the devil&rsquo;s work. There
are legitimate reasons to use it and it sure is helpful sometimes. However
when you&rsquo;re starting to get into git or migrating to it and building a
workflow for your team it often makes the most sense to start with the
simplest thing that could possibly work. git add, commit, push, pull, merge.
All these are safe operations and don&rsquo;t break anything for others and most
likely don&rsquo;t bring you into weird situations where you&rsquo;re totally stuck and
lost. There might come a time where you bring rebase and its siblings into the
mix. But then there should be a reason and it hopefully is because it helps
you be more productive and removes confusion in your team. And I sure hope you
know how it works and what you&rsquo;re getting yourself into. Because it can be
what you end up with but it shouldn&rsquo;t be where you start. Keep it stupid
simple.</p>
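<p>To make that concrete, here&rsquo;s a sketch of that simple flow in a throwaway repository (the file names and commit messages are made up, and the merge here happens to fast-forward):</p>

```shell
#!/bin/sh
# sketch of the basic add/commit/merge flow in a throwaway repo
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "you"
trunk=$(git symbolic-ref --short HEAD)  # master or main, depending on git version

# commit something on the trunk branch
echo "hello" > app.txt
git add app.txt
git commit -q -m "add app.txt"

# do some work on a feature branch
git checkout -q -b my-feature
echo "feature" >> app.txt
git commit -q -am "extend app.txt"

# bring it back with a plain merge - no rebase, no rewritten history
git checkout -q "$trunk"
git merge -q my-feature
```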
]]></content>
    <link href="https://unwiredcouch.com/2014/02/17/git-stupid.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Context specific dotfiles]]></title>
    <published>2014-02-03T00:00:00Z</published>
    <updated>2014-02-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2014/02/03/dotoverride.html</id>
    <content type="html"><![CDATA[<p>I have a <a href="https://github.com/mrtazz/muttfiles">collection</a> <a href="https://github.com/mrtazz/vimfiles">of</a> <a href="https://github.com/mrtazz/zshfiles">various</a>
<a href="https://github.com/mrtazz/dotfiles">dotfiles</a> which I use to configure the most important tools I use
every day. Naturally all of those are kept in git and shared between all the
machines I work on. The problem is that there might be things I don&rsquo;t want to
store publicly. This might include shell aliases to hostnames, git user emails
I only use at work, etc. I used to manage this by having a different branch
checked out on machines at work and would just merge in master whenever
something changes. However this was super tedious as I had to remember to
switch to the right branch depending on whether I wanted to make public or
private changes. And after changing something I had to remember to switch back
to the correct branch and not accidentally push the private branch to public
GitHub. What it effectively ended up being was a whole bunch of dirty repos on
different machines that were never in sync and partly had duplicate changes
and partly only worked on that box anyways. And whenever I wanted to bring
them back in sync it was a huge pain.  So I decided to adopt a new strategy
for managing context specific dotfiles.</p>
<p>I added a git repo <code>~/.dotoverrides</code> to all the machines I work on (or at
least most of them) which contains a <code>vimrc</code>, a <code>zshrc</code> and so on.  On my work
machines this is pushed to a repo on our internal GitHub Enterprise instance
so I can easily share it between machines. And all my regular dotfiles now
source those override files at the very end.</p>
<p>So in my regular <code>.vimrc</code> I have something like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-vim" data-lang="vim"><span style="display:flex;"><span><span style="color:#75715e">&#34; source overrides configs</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">if</span> <span style="color:#a6e22e">filereadable</span>($<span style="color:#a6e22e">HOME</span>.<span style="color:#e6db74">&#34;/.dotoverrides/vimrc&#34;</span>)
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">exec</span> <span style="color:#e6db74">&#34;:source &#34;</span>. $<span style="color:#a6e22e">HOME</span> . <span style="color:#e6db74">&#34;/.dotoverrides/vimrc&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">endif</span>
</span></span></code></pre></div><p>In my <code>.zshrc</code> I have this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#f92672">[</span> -f  <span style="color:#e6db74">${</span>HOME<span style="color:#e6db74">}</span>/.dotoverrides/zshrc <span style="color:#f92672">]</span> <span style="color:#f92672">&amp;&amp;</span> source <span style="color:#e6db74">${</span>HOME<span style="color:#e6db74">}</span>/.dotoverrides/zshrc
</span></span></code></pre></div><p>And in git (only works if you have at least v1.7.10) I&rsquo;ve added this stanza:</p>
<pre tabindex="0"><code class="language-config" data-lang="config">[include]
  path = ~/.dotoverrides/gitconfig
</code></pre><p>Now I can easily share and push/pull my regular dotfiles  in public GitHub and
don&rsquo;t have to pay attention whether or not I&rsquo;m on the correct branch and if
I&rsquo;m not accidentally pushing to the wrong remote. Whenever I need to use
different settings on a work machine I just make sure to add it to the
overrides file and have it ready as soon as I open a new shell, run a git
command or open vim again.</p>
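<p>The existence check in those snippets matters because not every machine has an overrides checkout; sourcing the file unconditionally would error on those. The same guard pattern, sketched with a throwaway path and a made-up shell function:</p>

```shell
#!/bin/sh
# demo of the "source only if present" guard from the zshrc snippet
demo_dir=$(mktemp -d)

# a machine without an override file: the guard fails quietly
[ -f "${demo_dir}/zshrc" ] && . "${demo_dir}/zshrc"
echo "no override present, nothing breaks"

# a machine with an override file: its definitions become available
echo 'work_host() { echo "work.internal.example"; }' > "${demo_dir}/zshrc"
[ -f "${demo_dir}/zshrc" ] && . "${demo_dir}/zshrc"
work_host
```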
<p>So much easier!</p>
]]></content>
    <link href="https://unwiredcouch.com/bits/2014/02/03/dotoverride.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Field Guide to Understanding Human Error]]></title>
    <published>2013-12-31T00:00:00Z</published>
    <updated>2013-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/dekker-fieldguidetounderstandinghumanerror-2002/</id>
    <content type="html"><![CDATA[<p>This is a must read book for anyone who’s interested in system resilience and
human factors. I’d consider this the primer to get started and get a broad but
not shallow entry into the world of human factors and resilience engineering.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/dekker-fieldguidetounderstandinghumanerror-2002/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Creating Encrypted Home Directories in FreeBSD]]></title>
    <published>2013-12-28T00:00:00Z</published>
    <updated>2013-12-28T00:00:00Z</updated>
    <id>https://unwiredcouch.com/bits/2013/12/28/encrypted-homedirs.html</id>
    <content type="html"><![CDATA[<p>I run FreeBSD with ZFS on all my servers and I generally want to have my home
directories encrypted. Since ZFS native encryption is not yet in FreeBSD, I
create two ZFS volumes, encrypt them with <a href="http://www.freebsd.org/doc/handbook/disks-encrypting.html">GELI
encryption</a>, and build a new ZFS pool on top of them. This pool is then used as my home
directory. In order to simplify this, I have a shell script that takes the
username and size as input and creates keys and all partitions as well as the
zpool.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/sh
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span>USERHOME<span style="color:#f92672">=</span>$1
</span></span><span style="display:flex;"><span>SIZE<span style="color:#f92672">=</span>$2
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>zfs create -omountpoint<span style="color:#f92672">=</span>/encrypted tank/encrypted
</span></span><span style="display:flex;"><span>zfs create tank/encrypted/keys
</span></span><span style="display:flex;"><span>zfs create -omountpoint<span style="color:#f92672">=</span>none tank/encrypted/zvols
</span></span><span style="display:flex;"><span>zfs create -ocompression<span style="color:#f92672">=</span>on tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>zfs create -V <span style="color:#e6db74">${</span>SIZE<span style="color:#e6db74">}</span>G tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0
</span></span><span style="display:flex;"><span>zfs create -V <span style="color:#e6db74">${</span>SIZE<span style="color:#e6db74">}</span>G tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>zfs create tank/encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>
</span></span><span style="display:flex;"><span>dd <span style="color:#66d9ef">if</span><span style="color:#f92672">=</span>/dev/random of<span style="color:#f92672">=</span>/encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0 bs<span style="color:#f92672">=</span><span style="color:#ae81ff">64</span> count<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>dd <span style="color:#66d9ef">if</span><span style="color:#f92672">=</span>/dev/random of<span style="color:#f92672">=</span>/encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1 bs<span style="color:#f92672">=</span><span style="color:#ae81ff">64</span> count<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span>geli init -s <span style="color:#ae81ff">4096</span> -K /encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0 <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0
</span></span><span style="display:flex;"><span>geli init -s <span style="color:#ae81ff">4096</span> -K /encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1 <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>geli attach -k /encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0 <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0
</span></span><span style="display:flex;"><span>geli attach -k /encrypted/keys/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1 <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>zpool create <span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>-home raidz <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk0.eli <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span>/dev/zvol/tank/encrypted/zvols/<span style="color:#e6db74">${</span>USERHOME<span style="color:#e6db74">}</span>/disk1.eli
</span></span></code></pre></div><p>I try to keep the script updated on <a href="https://github.com/mrtazz/bin/blob/master/create_encrypted_zfs_home.sh">GitHub</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/bits/2013/12/28/encrypted-homedirs.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Will to Change: Men, Masculinity, and Love]]></title>
    <published>2013-11-15T00:00:00Z</published>
    <updated>2013-11-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/hooks-willtochange-2004/</id>
<content type="html"><![CDATA[<p>I bought this book because I wanted to read up more on feminism, structural
sexism, toxic masculinity, and other related topics. I’ve had many discussions
about these before but never made the time to read actual books about them. I
chose Bell Hooks as the first author to read as her name came up
in many discussions about the topic. And I chose that book specifically to
learn more about the role of men in all of this.</p>
<p>And the book was absolutely great. It does a great job of running through many
areas where sexism and the patriarchy do a lot of harm to men as well. And
it made me think a lot about my upbringing, childhood, and what kind of biases
I carry because of it. But more importantly it made me think about the things
that I value and was at times longing for. E.g. the fact that relationships
between men are often very competitive and don’t provide any space for
emotional depth and vulnerability, limiting any chance of properly dealing
with emotional situations in that setting.</p>
<p>I can wholeheartedly recommend this book to anyone (especially men) interested
in learning more about the topic.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/hooks-willtochange-2004/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[My Tmux Setup]]></title>
    <published>2013-11-15T00:00:00Z</published>
    <updated>2013-11-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2013/11/15/my-tmux-setup.html</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve been using <a href="http://tmux.sourceforge.net">tmux</a> as my main terminal multiplexer for about 3 years
now and have refined my configuration over time to fit my daily workflow,
which is usually a mix of writing code, chef recipes, remote logins into
different servers, and various shell tasks. This is a flexible setup that
doesn&rsquo;t concentrate too much on doing a specific thing or replacing an IDE
inside of tmux. The <a href="https://github.com/mrtazz/dotfiles/blob/master/tmux.conf">configuration</a> and <a href="https://github.com/mrtazz/zshfiles/blob/master/zshrc">shell aliases</a> are
up on GitHub if you want to check them out.</p>
<h3 id="the-basics">The Basics</h3>
<p>Let&rsquo;s start with the basics. By default tmux uses <code>ctrl-b</code> as its prefix key
for commands and escaping. But the years of using screen have ingrained in my
muscle memory to use <code>ctrl-a</code>, so I switched with this simple setting:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>unbind C-b
</span></span><span style="display:flex;"><span>set -g prefix C-a
</span></span></code></pre></div><p>I also added a couple of important baseline settings to make tmux in general
look nice in colored terminals and work with unicode in the display window as
well as in the status bar. I also wanted to have the window numbering start at
1, since it doesn&rsquo;t make sense to me for accessing successive windows to start
on the right side of the keyboard and then continue on the left side. And I
also wanted to have a simple shortcut (<code>ctrl-a r</code>) to reload configuration in
a live tmux session whenever I change something.</p>
<pre tabindex="0"><code># force a reload of the config file
unbind r
bind r source-file ~/.tmux.conf

# start window numbering at 1 for easier switching
set -g base-index 1

# colors
set -g default-terminal &#34;screen-256color&#34;

# unicode
setw -g utf8 on
set -g status-utf8 on
</code></pre><p>The next important change was modifying the status bar. There are a lot of
crazy things you can do and overload your tmux status bar with more
information than you could ever need. I try to balance the contents of my
status bar to only have information in there I actually care about. This means
I have the local hostname and the name of the current session on the left side
and then all the windows. The right side contains the current battery status
(when I&rsquo;m on a laptop), the status of my mail (inbox, to read, to answer) and
the time and date, although I see less and less benefit of having the mail
check in there and will probably remove it soon (currently it&rsquo;s only showing
the inbox mail count). I also have the status bar configured to show terminal
bells in red so I always know when there is something that needs attention in
a window (I have weechat and mutt set to alert via terminal bells). For the
colorscheme I use a <a href="https://github.com/seebi/tmux-colors-solarized">solarized light</a> theme as you can see in
the screenshot:</p>
<p><img src="/images/tmux-status.png" alt="tmux status bar"></p>
<p>And the configuration for my status bar looks like this:</p>
<pre tabindex="0"><code># status bar config
set -g status-left &#34;#h:[#S]&#34;
set -g status-left-length 50
set -g status-right-length 50
set -g status-right &#34;⚡ #(~/bin/tmux-battery) [✉#(~/bin/imap_check.py)] %H:%M %d-%h-%Y&#34;
setw -g window-status-current-format &#34;|#I:#W|&#34;
set-window-option -g automatic-rename off

# listen to alerts from all windows
set -g bell-action any
</code></pre><p>This is the base configuration I use for basic project sessions with tmux. I
have two simple shell aliases to make it easier to re-attach to a session and
create new ones based on the current directory I&rsquo;m in:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>alias tma<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;tmux attach -d -t&#39;</span>
</span></span><span style="display:flex;"><span>alias git-tmux<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;tmux new -s $(basename $(pwd))&#39;</span>
</span></span></code></pre></div><p>With those I can run <code>tma &lt;tab&gt;</code> in any shell and get a tab completion list
for all the current sessions running, which is handy when logging into a
machine or generally working in a new shell. The second one I usually use when
I&rsquo;ve checked something out in a local project (which is usually in git, hence the
name of the alias) and then decide that I want a proper workspace but don&rsquo;t
have an existing session yet. The alias will just create a new session on
the spot and name it after the current directory name. This also has the big
advantage that all new shells spawned inside of tmux (e.g. opening a new
window with <code>ctrl-a c</code>) will be started in that directory. Within those open
sessions I have some more important shortcuts I use often. They allow me to
cycle through panes (vertical or horizontal splits in a window created with
<code>ctrl-a V</code> and <code>ctrl-a H</code>) with <code>ctrl-a a</code> and to switch between windows with
<code>ctrl-a &lt;tab&gt;</code>.</p>
<pre tabindex="0"><code># rebind pane tiling
bind V split-window -h
bind H split-window

# quick pane cycling
unbind ^A
bind ^A select-pane -t :.+

# screen like window toggling
bind Tab last-window
bind Escape copy-mode
</code></pre><p>And last but not least in every basic setup - as an avid vim user - movement
commands live on the home row of course. And different panes can be selected
with <code>ctrl-a</code> and the corresponding movement command.</p>
<pre tabindex="0"><code># vim movement bindings
set-window-option -g mode-keys vi
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
</code></pre><h3 id="next-level">Next Level</h3>
<p>I used to use tmux sessions in multiple tabs in iTerm for a long time.
However these sessions became very long-lived and whenever I needed to update
iTerm (which wasn&rsquo;t that often to be honest) I had to recreate the tabs the
way I wanted them. Additionally I felt it was unnecessary to have multiple ways of
doing basically the same thing (iTerm tabs and tmux windows/sessions) when I
can just decide to use one. So I decided to switch to tmux as the main working
environment on the laptop for everything. This means I have a tmux session on
my laptop dedicated to communication which has a window that runs mosh with an
attached tmux session from the server where I run weechat. And another window
that does the same to my work VM which runs my work IRC client. And another
window that just runs mutt for email reading. This means at any given time I
have two nested tmux sessions as you can see in the screenshot below:</p>
<p><img src="/images/nested-tmux.png" alt="nested tmux"></p>
<p>This lets me reattach the communications session even when I accidentally
close my terminal and have it be exactly how I left it. Even when I don&rsquo;t
connect to a remote host, I often have nested tmux sessions locally since I
use it basically like terminal tabs. This is very useful but needs one more
setting in the configuration to work. Since both nested tmux sessions expect
the same meta command, I have this stanza in my configuration:</p>
<pre tabindex="0"><code>bind-key a  send-prefix
</code></pre><p>This sends the command prefix to the inner tmux session when I hit <code>ctrl-a a</code> thus enabling me to execute commands in nested tmux sessions.</p>
<p>In order to easily switch between sessions I mainly use to important
commands. The first one is the tmux built-in <code>ctrl-a s</code> which gives me a
list of all current sessions on the system (the same list the <code>tma</code> tab
completion gives me) and I can easily switch sessions from within a tmux
session. However this means finding the session I want in a list that might
contain 20 or more sessions. And all I really want is to switch to the
session named &ldquo;chef&rdquo;. This is why I added another extremely useful shortcut:</p>
<pre tabindex="0"><code># bind fast session switching
unbind S
bind S command-prompt &#34;switch -t %1&#34;
</code></pre><p>Now when I hit <code>ctrl-a S</code> I get a <code>(switch)</code> prompt where I can enter the name
of the session I want (or just the prefix as long as it is unique) and switch
to that session when hitting <code>Return</code>. This is super helpful since I have most
of my sessions named after the directory/project name anyways. So I usually
know which session to switch to.</p>
<h3 id="we-have-to-go-deeper">We have to go deeper</h3>
<p>But this is not the end yet. I have one more very useful bit of configuration
I use every day which is related to how I log in to remote servers. For this
purpose I have a tmux session on a server called &ldquo;jumpsessions&rdquo; in which I
open a new tmux window whenever I ssh into a server. However this got very
confusing after a while and I had no idea what all those windows were. So I
added this little bit into my <code>~/.ssh/config</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>Host *
</span></span><span style="display:flex;"><span>PermitLocalCommand yes
</span></span><span style="display:flex;"><span>LocalCommand <span style="color:#66d9ef">if</span> <span style="color:#f92672">[[</span> $TERM <span style="color:#f92672">==</span> screen* <span style="color:#f92672">]]</span>; <span style="color:#66d9ef">then</span> printf <span style="color:#e6db74">&#34;\033k%h\033\\&#34;</span>; <span style="color:#66d9ef">fi</span>
</span></span></code></pre></div><p>This runs a local command for each ssh login, with the
effect that it prints the hostname of the server I&rsquo;m connecting to (<code>%h</code>) wrapped in an escape sequence that triggers
tmux to set the window title to that hostname. This means if I now open the
list of windows (<code>ctrl-a w</code>) I can see to which server each window is
connected. And this is also the reason why I have automatic window renaming
turned off.</p>
<p>But of course I don&rsquo;t want to browse through all of those windows
to get to a server, so I just use the &ldquo;find-window&rdquo; command in tmux (<code>ctrl-a f</code>), enter the server name (which is also the window name), and it will
automatically switch to the correct window on hitting enter.</p>
<p>And as the final stage of inception, I often run a screen session on those
servers to execute long running commands. Which means I&rsquo;m now three levels
deep into terminal multiplexers and it still works like a charm.</p>
]]></content>
    <link href="https://unwiredcouch.com/2013/11/15/my-tmux-setup.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Uncloud your Life]]></title>
    <published>2013-10-30T00:00:00Z</published>
    <updated>2013-10-30T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2013/10/30/uncloud-your-life.html</id>
    <content type="html"><![CDATA[<p>There has been a lot of talk lately about privacy in the cloud and owning
your own data. I&rsquo;m not linking any articles here, since there are so many
and I don&rsquo;t think anyone has missed it. However it spurred a new and awesome
debate about hosting your own applications and thinking about where your data
is stored and who manages it. I have thought about writing this for a while
and always felt there wasn&rsquo;t enough to write about. But in the spirit of
sharing and getting back into writing I decided to do it nonetheless.</p>
<p>The setup I&rsquo;m describing has grown pretty organically and is heavily based on
what I use and how I work everyday. This is also probably a bit too technical
to be considered a general purpose manual. But that will have to do for now. I
am also a heavy FreeBSD user, as it makes a lot of things easier and more
enjoyable for me. So my setup is also very biased towards that.</p>
<h3 id="email">Email</h3>
<p>Maybe one of the most important parts is email. I switched from hosted email
providers to self hosted (first on a friend&rsquo;s server) in 2005, when the 12MB
Inbox I had wasn&rsquo;t big enough anymore and before GMail was widely available in
Germany (at least I only knew one person with a GMail account back then). So
nothing has changed for me there. Email is also often considered to be one
of the more painful things to self host, although I don&rsquo;t think this is true. I
run a setup based on <a href="http://www.freebsd.org">FreeBSD</a>, <a href="http://www.sendmail.com/sm/open_source/">sendmail</a> and the <a href="http://www.dovecot.org">Dovecot
IMAP server</a> which is not very complicated to set up. Especially the
FreeBSD/sendmail part literally takes 10 minutes. I don&rsquo;t really run spam
filtering since it hasn&rsquo;t been a problem (I do filter some known spam
addresses in my procmail rules though). I read my email in <a href="http://www.mutt.org">mutt</a> on the
laptop where it is synced with <a href="http://offlineimap.org">offlineimap</a> and also run mutt in
a tmux session on my mailserver to access it from anywhere. On iOS devices I
use the built-in Mail application and have come to love <a href="http://www.triage.cc">Triage</a> for
quickly going through email when I have a minute.</p>
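<p>As a sketch of how the syncing side of this fits together: a minimal
<code>~/.offlineimaprc</code> for pulling mail from a self-hosted Dovecot server into a
local maildir for mutt could look something like the following (the account,
host, and path names here are made up for illustration, not my actual setup):</p>

```ini
# Hypothetical minimal offlineimap configuration; all names are placeholders.
[general]
accounts = personal

[Account personal]
localrepository = personal-local
remoterepository = personal-remote

[Repository personal-local]
type = Maildir
localfolders = ~/Mail/personal

[Repository personal-remote]
type = IMAP
remotehost = mail.example.com
remoteuser = username
ssl = yes
```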
<h3 id="calendars-and-contacts">Calendars and Contacts</h3>
<p>Another very important aspect of my daily synced data are calendars and
contacts. Especially with the iPhone and iPad being in constant use, I want
that data to be synced everywhere. I used to use iCloud for that, which works
beautifully, and I wanted something which works equally flawlessly. After some
trial and error I found <a href="http://owncloud.org">ownCloud</a> which provides CalDav and CardDav
services as well as general WebDAV. The setup guides are really good and
include most of the common clients. I nevertheless ran into some problems
with the initial setup on iOS and OSX clients because of when and where they
expect slashes or protocol headers. However this is a
configuration/documentation issue, which is annoying but can be solved.</p>
<h3 id="file-sync">File sync</h3>
<p>I used to use <a href="http://dropbox.com">Dropbox</a> a lot. I loved the simplicity and being able
to have files in sync everywhere. I even put my git repos in there at some
point so I could continue working from every computer I used. With time I used
it less and less as I simplified my workflows a lot but it was still important
to have a proper file sync solution, mostly for convenience options like
syncing <a href="http://www.alfredapp.com">Alfred</a> preferences. But I wanted to get all my documents out
of a location that somebody else had under control. Thankfully ownCloud also
comes with a client to sync the WebDAV directory between computers. So I
basically set that up and copied everything over from Dropbox. It has been
working really well so far, though I don&rsquo;t have heavy requirements for syncing
and files in there don&rsquo;t change that often.</p>
<h3 id="gtdtodo-tracking">GTD/Todo tracking</h3>
<p>I track everything in <a href="http://www.omnigroup.com/omnifocus">OmniFocus</a>. Literally. Work stuff, personal
stuff, movies I want to watch, books I want to read, it pulls in GitHub issues
and Jira tickets that are assigned to me, I plan blog posts I want to write
and talks I want to give in there. I extensively use custom perspectives to
get data out. It&rsquo;s safe to say that it&rsquo;s an important piece of software for
me. Luckily Omni products have supported syncing via WebDAV for a
while. Thus it was very easy to switch from the hosted Omni Sync Server, which
works flawlessly, to just using the WebDAV endpoint of ownCloud. I have since also
looked around to find out if there are alternatives to OmniFocus if I ever
wanted to switch away from OSX. Sadly it seems to be that self hosting is
rarely an option for any app and I have only found a handful that even
provided a synchronisation mechanism that does not involve DropBox, Apple or
their own cloud sync solution.</p>
<h3 id="note-taking">Note taking</h3>
<p>Note taking was also an important part that had to continue to work for me. I
don&rsquo;t take a lot of notes all the time. But when I need to jot something down,
it must not matter whether I&rsquo;m on my phone or in VIM on my laptop. I was a
very happy <a href="http://simplenote.com">Simplenote</a> customer and still think it&rsquo;s the best
cloud based note taking platform there is. I even wrote a <a href="https://github.com/mrtazz/simplenote.vim">VIM
plugin</a> for it so I&rsquo;d never have to leave my trusty editor.
This also meant a solution that would replace it needed a decent iOS client,
notes I can access from VIM, support for Markdown and a syncing engine that is
ideally based on WebDAV, since I was already running that. And after some
searching I actually found this unicorn of note taking solutions. It&rsquo;s simply
called <a href="http://www.notebooksapp.com">Notebooks</a> and it&rsquo;s a simple app that displays the folders
and files in a WebDAV directory, lets you edit text files and view them in
Markdown mode. And even take and attach pictures. It comes as a Universal App
for iPhone and iPad and has an OSX client in a beta version, which I don&rsquo;t use
because I can just edit all the files in VIM. Which makes me very happy.</p>
<h3 id="password-syncing">Password syncing</h3>
<p>The only application I haven&rsquo;t found a satisfying self hosted solution yet is
password syncing. I use <a href="https://agilebits.com/onepassword">1Password</a> and am very happy with it.
However the only non-LAN solutions for syncing that it provides are Dropbox
and iCloud. So I switched to Wi-fi sync for my passwords. It&rsquo;s not ideal and
there will come a point where I am on my iPad and don&rsquo;t have a password there
and am too lazy to open the laptop to sync. However since all passwords for
my crucial services are already synced this won&rsquo;t be the end of the world and
can very likely wait until I am on another device or have both the laptop and
the iPad open. So I&rsquo;m not 100% happy with it but it is one of those &ldquo;good
enough&rdquo; solutions.</p>
<h3 id="irc-and-instant-messaging">IRC and Instant Messaging</h3>
<p>Being able to idle on IRC and have a proper chat client at hand everywhere has
always been important to me and for that I have run terminal based clients in
a screen or tmux session for years now. Since I (similarly to email) never
used any of the cloud based solutions, I was already running <a href="http://wiki.znc.in/ZNC">ZNC</a> and
<a href="http://www.bitlbee.org/main.php/news.r.html">Bitlbee</a>. And since the changes in GTalk earlier this year which
broke a lot of stuff for me, I also already had a <a href="http://web.jabber.ccc.de">Jabber account</a>
which I was using for chat and OTR.</p>
<h3 id="backup">Backup</h3>
<p>How to handle backups was one of the bigger concerns I had. Now that I would
be hosting all my data I needed a proper plan so when one of my servers dies
I&rsquo;m not losing everything that was on there. Like probably almost every Mac
user, I used to use <a href="http://www.haystacksoftware.com/arq/">Arq</a> to backup my laptop to an encrypted S3 bucket.
However that was only ever the client side. And I was happy with it because
it included my mail folder and thus I had a backup of my email. And when I
stopped using that to not push all my data to S3 I also didn&rsquo;t back up my email
anymore. After some thought it was clear to me that I wanted to have a backup
in a location with as much control as possible. I decided to buy an <a href="http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=5336619#!tab=features">HP
Microserver</a> and put it in my apartment. It runs FreeBSD
(surprise!) and has a 2x2TB encrypted ZFS RAID. The backup location for each
of my machines on that RAID is an independent filesystem so I can snapshot it
regularly and go back in time if I have to. The server pulls in data from my
servers via rsync and that&rsquo;s how I do backups. It&rsquo;s less automated than I want
it to be right now and I still have to configure it to serve as a Time Machine
destination for my laptop. But this is already a pretty good solution for me.</p>
<h3 id="where-i-still-use-the-cloud">Where I still use the cloud</h3>
<p>I&rsquo;ve extensively talked about how I moved my data into self hosted
applications and what I use for those use cases. However that doesn&rsquo;t mean
that I&rsquo;m completely free of cloud based applications. Obviously there are a
variety of applications that don&rsquo;t support this yet or where it&rsquo;s not even
something that would work without changing the product a lot. That means I
still use Dropbox to sync <a href="http://www.papersapp.com">Papers</a> or automatically pull in pictures
from Instagram. Since Google Reader died I switched to <a href="https://feedbin.me">Feedbin</a> and
have no intention to stop using it, I have my Kindle books at Amazon, my music
in the iTunes Cloud, I use a variety of infrastructure software as a service
to <a href="http://www.unwiredcouch.com/2012/09/15/getting-started-with-monitoring.html">monitor my servers</a> and I run my <a href="https://github.com/roidrage/s3itch">public image
sharing</a> and <a href="https://github.com/mrtazz/katana">custom URL shortener</a> on S3 and Heroku and
this blog on GitHub Pages. The difference for me is that I don&rsquo;t necessarily
regard most of this data as private, unlike the data I pulled into my own hosting.
I will probably experiment with how I can do some of this on my own in the
future, but it is less important to me right now.</p>
<h3 id="why-are-you-telling-me-all-this">Why are you telling me all this?</h3>
<p>As I said in the first section, this is not considered a manual of how to host
your own data. While I try to keep my <a href="https://github.com/mrtazz/cookbooks">Chef cookbooks</a> for this
stuff up to date, they are very custom tailored and probably not of great use
for everybody. If you want to get started and host your own data, I highly
recommend checking out <a href="https://github.com/al3x/sovereign">Alex Payne&rsquo;s Sovereign Project</a>. It&rsquo;s an
Ansible project which installs a lot of the things I&rsquo;ve been talking about
here and is definitely much easier to get started with. I do hope though I was
able to share some ideas and make hosting your own data sound a little less
scary.</p>
<p>I also realize that even with an easy to get started guide and a collection of
Chef recipes this is not something every person can run and you need some
understanding of (and tolerance for) running your own services. There has been
some work going on for some time to make it easier to host your own services
and even have decentralized applications. The newest one I am aware of is
called <a href="http://decentralize.it">Grand Decentral Station</a> and looks very promising. I
would love to see some of these ideas flourish and be pushed forward. And
maybe have a future in which we can not only pay people to run services for
us, but also to develop services we can run ourselves as easily as it is to set
up a TV or a Roomba today.</p>
]]></content>
    <link href="https://unwiredcouch.com/2013/10/30/uncloud-your-life.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Scaling Deployments at Etsy]]></title>
    <published>2013-10-09T00:00:00Z</published>
    <updated>2013-10-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/scaling-deployments-cdnyc/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/scaling-deployments-cdnyc/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Feature Flagging your Infrastructure for Fun and Profit]]></title>
    <published>2013-09-10T00:00:00Z</published>
    <updated>2013-09-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/infrastructure-feature-flags/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/infrastructure-feature-flags/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Way to Go: A Thorough Introduction to the Go Programming Language]]></title>
    <published>2013-08-10T00:00:00Z</published>
    <updated>2013-08-10T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/balbaert-thewaytogo-2012/</id>
    <content type="html"><![CDATA[<p>This was a $3 Kindle purchase before I got on a flight. And with the uptick in
popularity of the Go programming language I thought it would be a good thing
to learn about it. I don’t generally enjoy reading programming books but
rather learn by trying to write some code. But this book did a good job of
guiding me through the language and I felt pretty confident in diving in and
giving it a try. Definitely more than worth the money.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/balbaert-thewaytogo-2012/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Infrastructure upgrades with Chef]]></title>
    <published>2013-08-02T00:00:00Z</published>
    <updated>2013-08-02T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2013/08/02/infrastructure-upgrades-with-chef.html</id>
    <content type="html"><![CDATA[<p>I wrote about how we roll out infrastructure upgrades with Chef on Etsy&rsquo;s
<a href="https://codeascraft.com">engineering blog</a>. You can find the post <a href="https://codeascraft.com/2013/08/02/infrastructure-upgrades-with-chef/">here</a>.</p>
]]></content>
    <link href="https://unwiredcouch.com/2013/08/02/infrastructure-upgrades-with-chef.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Devtools at Etsy]]></title>
    <published>2013-05-27T00:00:00Z</published>
    <updated>2013-05-27T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/devtools-at-etsy/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/devtools-at-etsy/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Scaling Deployments at Etsy]]></title>
    <published>2013-04-18T00:00:00Z</published>
    <updated>2013-04-18T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/scaling-deployments-scaleconf/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/scaling-deployments-scaleconf/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[StatsD Workshop]]></title>
    <published>2013-03-29T00:00:00Z</published>
    <updated>2013-03-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/statsd-workshop/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/statsd-workshop/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Chef Workflow at Etsy]]></title>
    <published>2013-01-29T00:00:00Z</published>
    <updated>2013-01-29T00:00:00Z</updated>
    <id>https://unwiredcouch.com/talks/chef-workflow/</id>
    <content type="html"><![CDATA[]]></content>
    <link href="https://unwiredcouch.com/talks/chef-workflow/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Time Management for System Administrators: Stop Working Late and Start Working Smart]]></title>
    <published>2013-01-12T00:00:00Z</published>
    <updated>2013-01-12T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/limoncelli-timemanagementforsysadmins-2005/</id>
    <content type="html"><![CDATA[<p>I read this early on in my career switching to a job where I was occupied with
running production services (as opposed to writing shipped software). And I
took a lot away from reading this book. Not only about time management but
also an understanding of things that are common in the line of systems
administration.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/limoncelli-timemanagementforsysadmins-2005/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[The Riak Handbook]]></title>
    <published>2012-12-31T00:00:00Z</published>
    <updated>2012-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/meyer-riakhandbook-2011/</id>
    <content type="html"><![CDATA[<p>I read this book some time in early 2012 shortly after moving to New York
City. It was a staple on my Kindle while exploring the subway system, parks,
and coffee shops in a new city.</p>
<p><img src="/images/reading/meyer-riakhandbook-2011-east-river.jpg" alt=""></p>
<p>The Riak handbook is a great overview of one of the most popular and
sophisticated NoSQL databases of the early 2010s. The book guides you
through a number of easy to follow examples to learn all the basic properties
of Riak. It uses JavaScript as an easily accessible language to understand the
examples. It shows that Mathias has done deep research into the database and
has put a lot of effort into creating an accessible resource to learn Riak.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/meyer-riakhandbook-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Test-Driven Infrastructure with Chef]]></title>
    <published>2012-12-31T00:00:00Z</published>
    <updated>2012-12-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/reading/nelsonsmith-testdriveninfrastructurewithchef-2011/</id>
    <content type="html"><![CDATA[<p>I wasn’t quite sure if I agree with using cucumber for integration testing of
configuration management. And after reading the book I still don’t. It’s hard
to draw the line where you just test the implementation of the config
management framework versus your own business logic. And I think it’s highly
dependent on the level of advanced logic in your config management code.
Although I definitely have seen code that would benefit from some testing like
that.</p>
]]></content>
    <link href="https://unwiredcouch.com/reading/nelsonsmith-testdriveninfrastructurewithchef-2011/" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[IRC notifications with logstash]]></title>
    <published>2012-11-03T00:00:00Z</published>
    <updated>2012-11-03T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2012/11/03/irc-notifications-with-logstash.html</id>
    <content type="html"><![CDATA[<p>I have spent some time in the last weeks to learn more about
<a href="http://logstash.net/">logstash</a> and used the kind of bad state of my IRC
notifications as a fun side project to get into it. I now have a pretty
useful (well, for me) setup which I thought I&rsquo;d share.</p>
<h3 id="the-irc-setup">The IRC setup</h3>
<p>My basic setup revolves around using the <a href="http://znc.in">ZNC</a> bouncer which
keeps me always connected. I still use <a href="http://www.weechat.org/">weechat</a> in a
remote tmux session most of the time, but like to have the option to switch
clients without losing my connection or backlog. I also use
<a href="http://growl.info/">Growl</a> pretty heavily in combination with OSX
notification center to alert me of special keywords or all messages in certain
channels. Past solutions included running the IRC client locally with a growl
plugin or remote tail-ing a notification logfile. Those solutions were close to
what I wanted but tied too much to the client, when I really wanted to have
notifications directly from my bouncer. And since znc has a <a href="http://wiki.znc.in/Log">module to
log</a> all messages to various logfiles, I decided to get
my notifications from there.</p>
<h3 id="enter-logstash">Enter logstash</h3>
<p>I had read about logstash before and decided to give it a try for this. I won&rsquo;t
go into detail about installing and running it here, but check out the <a href="http://logstash.net/docs/1.1.4/tutorials/getting-started-simple">getting
started</a> for a
good introduction.</p>
<p>For the first important step, we need logstash to listen to changes in the
bouncer&rsquo;s logfiles. This is pretty easy and can be accomplished with the
following logstash configuration bits:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">input</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">file</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">path</span> =&gt; <span style="color:#e6db74">&#34;/home/username/.znc/users/zncuser/moddata/log/*&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">type</span> =&gt; <span style="color:#e6db74">&#34;znclog&#34;</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>By default the log module puts all log files under
<code>users/youruser/moddata/log/</code> and creates one logfile per day, named
after the channel and the date. The logstash input simply reads all files in
that directory and adds a type to the captured events so that subsequent
filters can identify them. The pattern is not ideal, since older logfiles
are irrelevant for notifications but are still kept open. At the moment I
work around that by moving my logfiles to a backup partition every
night, but there might be a better way to do it.</p>
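<p>As an illustration of what such a nightly cleanup could look like, here is a minimal Python sketch (the <code>archive_old_logs</code> helper and the directory layout are hypothetical, not part of my actual setup):</p>

```python
import shutil
import time
from pathlib import Path

def archive_old_logs(log_dir, backup_dir, max_age_days=1):
    """Move logfiles older than max_age_days into a backup directory,
    so the logstash file input stops keeping them open."""
    cutoff = time.time() - max_age_days * 86400
    backup = Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    moved = []
    for logfile in Path(log_dir).iterdir():
        if logfile.is_file() and logfile.stat().st_mtime < cutoff:
            shutil.move(str(logfile), str(backup / logfile.name))
            moved.append(logfile.name)
    return moved
```

<p>Running something like this from cron once a day would keep the file input from holding stale logfiles open.</p>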
<p>The next step is to remove lines which I&rsquo;m never interested in for
notifications, like my own messages and JOIN/QUIT messages for example. For
this the logstash <code>grep</code> filter definitions are very useful:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">filter</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">grep</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">type</span> =&gt; <span style="color:#e6db74">&#34;znclog&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">match</span> =&gt; [<span style="color:#e6db74">&#34;@message&#34;</span>, <span style="color:#e6db74">&#34;\[[0-9:]{8}\](.+?)&lt;USERNAME&gt;&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">negate</span> =&gt; <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">grep</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">type</span> =&gt; <span style="color:#e6db74">&#34;znclog&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">match</span> =&gt; [<span style="color:#e6db74">&#34;@message&#34;</span>, <span style="color:#e6db74">&#34;\*\*\* (Quits|Joins|Parts|.+ sets mode: |.+ is now known as)&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">negate</span> =&gt; <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The grep filter is also very useful for another criterion I want
notifications on, namely all of my private messages. Since
channel names by IRC convention contain a <code>#</code>, we can assume
that logfiles without that character belong to private messages. It is important to
set <code>drop =&gt; false</code> here since we don&rsquo;t want grep to drop the log line (which
is the default behaviour).</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">grep</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">type</span> =&gt; <span style="color:#e6db74">&#34;znclog&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">match</span> =&gt; [<span style="color:#e6db74">&#34;@source&#34;</span>, <span style="color:#e6db74">&#34;#&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">add_tag</span> =&gt; [<span style="color:#e6db74">&#34;pmnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">negate</span> =&gt; <span style="color:#66d9ef">true</span>
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">drop</span> =&gt; <span style="color:#66d9ef">false</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>This also needs to be added to the filter section; it tags all messages coming
from logfiles without a <code>#</code> in the name with <code>&quot;pmnotification&quot;</code>. Now let&rsquo;s get to
the actual parsing of log events. Since there are going to be some repeated
patterns and I wanted an easy way to add new ones, I keep a &lsquo;pattern
library file&rsquo; which is included in the configuration.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">NOTIFYME</span> (<span style="color:#a6e22e">pizza</span><span style="color:#f92672">|</span><span style="color:#a6e22e">cupcakes</span><span style="color:#f92672">|</span><span style="color:#a6e22e">fire</span>)
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">IRCNOTIFY</span> <span style="color:#f92672">%</span>{<span style="color:#a6e22e">DATA</span>}<span style="color:#f92672">%</span>{<span style="color:#a6e22e">NOTIFYME</span>}<span style="color:#f92672">%</span>{<span style="color:#a6e22e">GREEDYDATA</span>}
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">IRCTIME</span> [<span style="color:#ae81ff">0</span><span style="color:#f92672">-</span><span style="color:#ae81ff">9</span><span style="color:#f92672">:</span>]{<span style="color:#ae81ff">8</span>}
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">IRCCHANNELS</span> (<span style="color:#a6e22e">nunagios</span><span style="color:#f92672">|</span><span style="color:#a6e22e">chef</span><span style="color:#f92672">|</span><span style="color:#a6e22e">food</span>)
</span></span></code></pre></div><p>The terms in capital letters can be used as regex placeholders. The interesting
ones are <code>NOTIFYME/IRCNOTIFY</code>, a collection of regexes I want to be
notified about, and <code>IRCCHANNELS</code>, the channels for which I want
notifications for every message. To generate those notifications I set up a
set of grok filters.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">grok</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">match</span> =&gt; [<span style="color:#e6db74">&#34;@source&#34;</span>, <span style="color:#e6db74">&#34;%{IRCCHANNELS}&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">add_tag</span> =&gt; [<span style="color:#e6db74">&#34;channelnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">exclude_tags</span> =&gt; [<span style="color:#e6db74">&#34;pmnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">patterns_dir</span> =&gt; <span style="color:#e6db74">&#39;/home/username/logstash-patterns&#39;</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>This grok ruleset grabs all events from the channels matched by <code>IRCCHANNELS</code>
and tags them with the <code>&quot;channelnotification&quot;</code> tag. PMs are excluded
because they have already been matched by the earlier filter.</p>
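<p>The filename heuristic behind the channel/PM distinction is trivial to check outside of logstash; here is a purely illustrative Python sketch (the function name is mine):</p>

```python
def is_private_message(logfile_name):
    """znc's log module names channel logs after the channel, and
    channel names contain '#', so a logfile name without '#' should
    belong to a private message."""
    return "#" not in logfile_name

print(is_private_message("zncuser_#chef_20121103.log"))   # False
print(is_private_message("zncuser_alice_20121103.log"))   # True
```
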
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">grok</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pattern</span> =&gt; <span style="color:#e6db74">&#34;\[%{IRCTIME:irctime}\](.+?)&lt;%{DATA:ircsender}&gt;%{GREEDYDATA:ircmessage}&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">tags</span> =&gt; [<span style="color:#e6db74">&#34;channelnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">patterns_dir</span> =&gt; <span style="color:#e6db74">&#39;/home/username/logstash-patterns&#39;</span>
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span><span style="color:#a6e22e">grok</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pattern</span> =&gt; <span style="color:#e6db74">&#34;\[%{IRCTIME:irctime}\](.+?)&lt;%{DATA:ircsender}&gt;%{GREEDYDATA:ircmessage}&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">tags</span> =&gt; [<span style="color:#e6db74">&#34;pmnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">patterns_dir</span> =&gt; <span style="color:#e6db74">&#39;/home/username/logstash-patterns&#39;</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>These rulesets extract the timestamp, sender and message for the
notifications into separate fields so they are easily accessible later on. I
have the same ruleset twice, once for channel notifications and once for
private messages, because I didn&rsquo;t find a way to match <em>any</em> of several
tags (the <code>tags</code> setting requires an event to
match all given tags), so I couldn&rsquo;t combine them into one rule. This
seems like something that should be fixable, though.</p>
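<p>For reference, the grok pattern above translates to a plain regular expression; here is an illustrative Python version with named groups (grok&rsquo;s <code>DATA</code> is roughly a non-greedy match and <code>GREEDYDATA</code> a greedy one):</p>

```python
import re

# rough Python equivalent of the grok pattern
# \[%{IRCTIME:irctime}\](.+?)<%{DATA:ircsender}>%{GREEDYDATA:ircmessage}
IRC_LINE = re.compile(
    r"\[(?P<irctime>[0-9:]{8})\]"  # IRCTIME, e.g. 12:34:56
    r"(.+?)"                       # whatever sits between time and nick
    r"<(?P<ircsender>.*?)>"        # DATA: the sending nick
    r"(?P<ircmessage>.*)"          # GREEDYDATA: the message itself
)

m = IRC_LINE.match("[12:34:56] <alice> lunch is here")
print(m.group("ircsender"))  # alice
```
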
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">grok</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">pattern</span> =&gt; <span style="color:#e6db74">&#34;\[%{IRCTIME:irctime}\](.+?)&lt;%{DATA:ircsender}&gt;%{IRCNOTIFY:ircmessage}&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">add_tag</span> =&gt; [<span style="color:#e6db74">&#34;notification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">exclude_tags</span> =&gt; [<span style="color:#e6db74">&#34;pmnotification&#34;</span>]
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">patterns_dir</span> =&gt; <span style="color:#e6db74">&#39;/home/username/logstash-patterns&#39;</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>And finally, the last pattern ruleset matches the notification regexes against
all events and parses them into the fields mentioned before. Notice that all
rulesets include a <code>patterns_dir</code> setting which points to the folder with the
regex definitions file described above.</p>
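<p>Since grok patterns expand to regular expressions, a new keyword can be sanity-checked in plain Python before it goes into the library file; a small sketch of that idea (the substitutions below only mimic what grok does):</p>

```python
import re

# the same definitions as in the pattern library file; grok's
# %{DATA} corresponds to a non-greedy .*? and %{GREEDYDATA} to .*
NOTIFYME = r"(pizza|cupcakes|fire)"
IRCNOTIFY = r".*?" + NOTIFYME + r".*"

print(bool(re.match(IRCNOTIFY, "[12:34:56] <bob> who ordered the cupcakes?")))  # True
print(bool(re.match(IRCNOTIFY, "[12:34:56] <bob> nothing to see here")))        # False
```
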
<p>The last part of the logstash ruleset defines an output for the
notifications. For a while I just appended them to a logfile and tail-ed it
from my laptop over ssh. This worked OK, but I ran into duplicate
notifications when restarting the polling script and wasn&rsquo;t really happy with
the solution. And since I already had Redis running on that host, I thought
I&rsquo;d give that a try.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-javascript" data-lang="javascript"><span style="display:flex;"><span><span style="color:#a6e22e">output</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">redis</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">host</span> =&gt; <span style="color:#e6db74">&#39;localhost&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">data_type</span> =&gt; <span style="color:#e6db74">&#39;list&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">key</span> =&gt; <span style="color:#e6db74">&#39;notifications&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">tags</span> =&gt; [<span style="color:#e6db74">&#34;pmnotification&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">password</span> =&gt; <span style="color:#e6db74">&#39;secret&#39;</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">redis</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">host</span> =&gt; <span style="color:#e6db74">&#39;localhost&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">data_type</span> =&gt; <span style="color:#e6db74">&#39;list&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">key</span> =&gt; <span style="color:#e6db74">&#39;notifications&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">tags</span> =&gt; [<span style="color:#e6db74">&#34;channelnotification&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">password</span> =&gt; <span style="color:#e6db74">&#39;secret&#39;</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">redis</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">host</span> =&gt; <span style="color:#e6db74">&#39;localhost&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">data_type</span> =&gt; <span style="color:#e6db74">&#39;list&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">key</span> =&gt; <span style="color:#e6db74">&#39;notifications&#39;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">tags</span> =&gt; [<span style="color:#e6db74">&#34;notification&#34;</span>]
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">password</span> =&gt; <span style="color:#e6db74">&#39;secret&#39;</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The output config basically just says that for every type of notification log
event, append it to a Redis list named <code>'notifications'</code> on
the instance running on localhost.</p>
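<p>What ends up in the list is the JSON-serialized logstash event; for the 1.1.x event format used here that is roughly a structure like this (the concrete values are made up for illustration):</p>

```python
import json

# an approximation of a logstash 1.1.x event as it arrives in Redis
event = {
    "@source": "/home/username/.znc/users/zncuser/moddata/log/zncuser_#chef_20121103.log",
    "@type": "znclog",
    "@tags": ["channelnotification"],
    "@fields": {
        "irctime": ["12:34:56"],
        "ircsender": ["alice"],
        "ircmessage": [" lunch is here"],
    },
    "@message": "[12:34:56] <alice> lunch is here",
}

# this is the JSON string the client-side script later json.loads()es
payload = json.dumps(event)
print(json.loads(payload)["@fields"]["ircsender"][0])  # alice
```
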
<h3 id="the-client-side">The client side</h3>
<p>The last part is actually getting the notifications into Growl on the OSX
side of things. For this I have Growl set up to forward everything to
notification center and run the following script on my Mac:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-python" data-lang="python"><span style="display:flex;"><span><span style="color:#f92672">import</span> sys
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> gntp
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> json
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> redis
</span></span><span style="display:flex;"><span><span style="color:#f92672">import</span> gntp.notifier
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>r <span style="color:#f92672">=</span> redis<span style="color:#f92672">.</span>StrictRedis(host<span style="color:#f92672">=</span><span style="color:#e6db74">&#39;ircserver&#39;</span>,
</span></span><span style="display:flex;"><span>                      port<span style="color:#f92672">=</span><span style="color:#ae81ff">6379</span>, db<span style="color:#f92672">=</span><span style="color:#ae81ff">0</span>,
</span></span><span style="display:flex;"><span>                      password<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;secret&#34;</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>app <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;irc-growl&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">while</span> <span style="color:#ae81ff">1</span>:
</span></span><span style="display:flex;"><span>    key, logline <span style="color:#f92672">=</span> r<span style="color:#f92672">.</span>blpop(<span style="color:#e6db74">&#34;notifications&#34;</span>)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        log <span style="color:#f92672">=</span> json<span style="color:#f92672">.</span>loads(logline)
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>        title <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;Failure loading logline: &#34;</span> <span style="color:#f92672">+</span> str(logline)
</span></span><span style="display:flex;"><span>        message <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;error(</span><span style="color:#e6db74">{0}</span><span style="color:#e6db74">)&#34;</span><span style="color:#f92672">.</span>format(e)
</span></span><span style="display:flex;"><span>        gntp<span style="color:#f92672">.</span>notifier<span style="color:#f92672">.</span>mini(message, applicationName<span style="color:#f92672">=</span>app, title<span style="color:#f92672">=</span>title)
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">continue</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        channel <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;-&#34;</span><span style="color:#f92672">.</span>join(log[<span style="color:#e6db74">&#34;@source&#34;</span>]<span style="color:#f92672">.</span>split(<span style="color:#e6db74">&#34;/&#34;</span>)[<span style="color:#f92672">-</span><span style="color:#ae81ff">1</span>]<span style="color:#f92672">.</span>split(<span style="color:#e6db74">&#34;_&#34;</span>)[<span style="color:#ae81ff">1</span>:<span style="color:#f92672">-</span><span style="color:#ae81ff">1</span>])
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>        title <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;Failure parsing channel name in: &#34;</span> <span style="color:#f92672">+</span> str(log[<span style="color:#e6db74">&#34;@source&#34;</span>])
</span></span><span style="display:flex;"><span>        message <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;error(</span><span style="color:#e6db74">{0}</span><span style="color:#e6db74">)&#34;</span><span style="color:#f92672">.</span>format(e)
</span></span><span style="display:flex;"><span>        gntp<span style="color:#f92672">.</span>notifier<span style="color:#f92672">.</span>mini(message, applicationName<span style="color:#f92672">=</span>app, title<span style="color:#f92672">=</span>title)
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">continue</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">try</span>:
</span></span><span style="display:flex;"><span>        title <span style="color:#f92672">=</span> (<span style="color:#e6db74">&#34;</span><span style="color:#e6db74">%s</span><span style="color:#e6db74"> in </span><span style="color:#e6db74">%s</span><span style="color:#e6db74">&#34;</span> <span style="color:#f92672">%</span> (log[<span style="color:#e6db74">&#34;@fields&#34;</span>][<span style="color:#e6db74">&#34;ircsender&#34;</span>][<span style="color:#ae81ff">0</span>],
</span></span><span style="display:flex;"><span>                  channel<span style="color:#f92672">.</span>encode(<span style="color:#e6db74">&#34;utf-8&#34;</span>)))
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">except</span> <span style="color:#a6e22e">Exception</span> <span style="color:#66d9ef">as</span> e:
</span></span><span style="display:flex;"><span>        title <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;Failure parsing ircsender in: &#34;</span> <span style="color:#f92672">+</span> str(log)
</span></span><span style="display:flex;"><span>        message <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;error(</span><span style="color:#e6db74">{0}</span><span style="color:#e6db74">)&#34;</span><span style="color:#f92672">.</span>format(e)
</span></span><span style="display:flex;"><span>        print title
</span></span><span style="display:flex;"><span>        print message
</span></span><span style="display:flex;"><span>        gntp<span style="color:#f92672">.</span>notifier<span style="color:#f92672">.</span>mini(message, applicationName<span style="color:#f92672">=</span>app, title<span style="color:#f92672">=</span>title)
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">continue</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    message <span style="color:#f92672">=</span> (log[<span style="color:#e6db74">&#34;@fields&#34;</span>][<span style="color:#e6db74">&#34;ircmessage&#34;</span>][<span style="color:#ae81ff">0</span>])<span style="color:#f92672">.</span>encode(<span style="color:#e6db74">&#34;utf-8&#34;</span>)
</span></span><span style="display:flex;"><span>    gntp<span style="color:#f92672">.</span>notifier<span style="color:#f92672">.</span>mini(message, applicationName<span style="color:#f92672">=</span>app, title<span style="color:#f92672">=</span>title)
</span></span></code></pre></div><p>This uses the Python gntp library to talk to Growl and the redis client to talk
to Redis. For the Redis connection I specifically use <code>blpop</code>, which pops an
element (in our case a notification) from the list and, if there is none, blocks
until the next one comes in. For every notification the script parses out the
timestamp, channel, sender and message from the fields set in the logstash
grok rules, formats them nicely, sends the result to Growl and then gets the
next one or waits for new notifications to come in.</p>
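<p>The slightly cryptic channel extraction in the script relies on the logfile naming scheme; pulled out into its own function (hypothetical, purely for illustration) and assuming filenames of the form <code>user_#channel_YYYYMMDD.log</code>, it looks like this:</p>

```python
def channel_from_source(source):
    """Extract the channel name from the znc logfile path in @source,
    e.g. .../zncuser_#chef_20121103.log -> '#chef'. Channel names that
    themselves contain underscores end up joined with '-'."""
    return "-".join(source.split("/")[-1].split("_")[1:-1])

src = "/home/username/.znc/users/zncuser/moddata/log/zncuser_#chef_20121103.log"
print(channel_from_source(src))  # #chef
```
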
<h2 id="verdict">Verdict</h2>
<p>There are still some improvements I want to make, mostly around moving the old
log files or only reading the newest one, and making the script survive network
disconnects, possibly running it under launchd. Also, if I&rsquo;m not
running the script to pull notifications, they currently pile up in Redis, so
the next time I connect I get an abundance of new notifications.
Notification center batches them nicely so they don&rsquo;t litter the whole screen, and only
the last 20 show up in the sidebar. So it&rsquo;s not really a problem, but I have thought
about running a cron job to prune the list to a maximum of 20 notifications or so.</p>
<p>I now have a setup where I get my notifications directly from the bouncer logs
and can display them on any (OSX) host which has the script set up. It should
also be fairly simple to adapt this to other notification display systems. The
setup is no longer bound to which IRC client I use or whether or not I
constantly have it running on a server. Plus the alerting keywords and channels
are easily extended because I only have to add patterns to the library file
and not touch the config itself.</p>
]]></content>
    <link href="https://unwiredcouch.com/2012/11/03/irc-notifications-with-logstash.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Getting started with monitoring on the cheap and easy]]></title>
    <published>2012-09-15T00:00:00Z</published>
    <updated>2012-09-15T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2012/09/15/getting-started-with-monitoring.html</id>
    <content type="html"><![CDATA[<p>This post started out as a writeup of the tools and services I use to monitor my
small (currently 3) set of personal servers. Thinking about it, however, it made
more sense to structure it as a small guide on how to get started with
monitoring without having to invest too much time, effort and money. Since I
don&rsquo;t use it at the moment, I won&rsquo;t cover instrumentation and monitoring of
application metrics but rather focus on general service availability and machine
level metrics. The prices I mention are (to the best of my knowledge) accurate at
the time of writing, but are of course subject to change.</p>
<h3 id="my-setup">My setup</h3>
<p>I have a small set of servers which I use for basic services. These include a
mail server, IMAP, a backup MX, an <a href="http://wiki.znc.in/ZNC">IRC bouncer</a> and a general
remote shell for running <a href="http://www.mutt.org/">mutt</a>,
<a href="http://www.weechat.org/">weechat</a>, <a href="http://www.newsbeuter.org/">newsbeuter</a>
and other terminal based applications. I recently got around to more or less
properly creating <a href="https://github.com/mrtazz/cookbooks">cookbooks</a> for this, as I
am running <a href="http://opscode.com">chef</a> for configuration management. This also
prompted me to finally set up monitoring and alerting for the services I care
about.</p>
<h3 id="external-service-monitoring">External service monitoring</h3>
<p>Servers are not very useful when their services are not accessible from the
outside world, so you want to monitor this from an external source which
periodically tries to establish a connection to specified TCP ports. The obvious
first service to consider is <a href="http://pingdom.com">pingdom</a>. They provide a great
service with great statistics. However, since I want to monitor more than the
free plan offers (and possibly more than the cheapest paid plan, too), I was
looking for an alternative. Since I already have an account at
<a href="http://zerigo.com">zerigo</a> for some DNS services, I decided to give their
<a href="http://zerigo.com/watchdog">Watchdog service</a> a try. It&rsquo;s $15 per 3 months and
allows 50 service checks for 10 hosts with check intervals down to every 5
minutes. That is more than enough for my needs and comes down to $5 a month.
The only drawbacks are that they only provide email notifications to one user
(which can be somewhat mitigated with <a href="http://ifttt.com">ifttt</a> or the mail to text gateway
of your mobile provider) and a not particularly great statistics
overview. Otherwise it works pretty well.</p>
<h3 id="process-monitoring">Process monitoring</h3>
<p>The next step is to monitor the processes which actually provide those
services. For this I&rsquo;m running a <a href="https://github.com/sensu">Sensu</a> instance on
<a href="http://heroku.com">Heroku</a> in the setup I <a href="http://unwiredcouch.com/2012/07/31/deploy-sensu-heroku.html">described
before</a>. Sensu is
an awesome monitoring framework which provides a lot of flexibility, so it&rsquo;s
definitely worth checking out. Since it runs on two small Heroku instances I
can host the server and API for free, which works pretty well. As basic checks
I test for running sendmail, cron and dovecot processes. If a check fails past its
threshold, an alert is pushed to an IRC channel on my
<a href="http://grove.io">grove.io</a> organization. Admittedly this is a little bit of
overkill since the basic plans for grove.io start at $10, but I like to play
and experiment with chat based interfaces to infrastructure automation and
monitoring. An alternative would be to use <a href="http://campfirenow.com">Campfire</a>,
which is free for a small number of users. I am also playing with the idea of
having a <a href="http://boxcar.io">Boxcar</a> handler, either for Sensu itself or alerting
to Boxcar from IRC. Boxcar is a pretty sweet service which handles push
notifications to mobile phones, and I&rsquo;m already using it for notifications from
my IRC bouncer and <a href="http://ifttt.com">ifttt.com</a>. And since I&rsquo;m also running an
instance of <a href="http://github.com/github/hubot">Hubot</a> (also on a free Heroku
instance) it should be rather trivial to have the bot listen for patterns and
send Boxcar notifications on a match.</p>
<h3 id="log-processing">Log processing</h3>
<p>Since I don&rsquo;t want to log into several servers to quickly check different
logfiles, I&rsquo;m sending all of my log data to
<a href="http://papertrailapp.com">Papertrail</a>. They provide an easy endpoint to send
log lines from various systems such as syslog, rsyslog or directly from an
application with a remote syslog handler. Their basic free plan allows for 100MB of
log data per month with a searchable archive of 1 week. This amount should be
enough for a small set of systems with average log volume. After that, the first
tier of paid plans gets you 1GB of log lines for $7, which is still a decent
deal. The big advantage is that I can now log into a web interface and see
specific log information (for example about chef runs) across all of my
servers.</p>
<h3 id="machine-level-metrics">Machine level metrics</h3>
<p>Additionally, I gather machine level metrics for all of my servers. These
include basic information about CPU and memory usage, disk space and uptime.
All of these metrics are gathered by <a href="http://collectd.org">collectd</a> and its
various plugins and are sent to <a href="http://metrics.librato.com">Librato Metrics</a>
for graphing. This is a lot easier and less hassle than managing your own
<a href="http://graphite.wikidot.com/">Graphite</a> instance, and you only pay for the
metrics you actually send. The data I currently send there consists of basic metrics
from 2 servers plus the number of Sensu check occurrences, and it adds up to
around $5 a month.</p>
<h3 id="verdict">Verdict</h3>
<p>This setup gives me (in my opinion) a pretty good monitoring solution for my
personal infrastructure. Since I don&rsquo;t consume a lot of resources for the
services I depend on, I can usually use the free or cheapest plan available.
With the cheapest options it&rsquo;s around $10 a month, and even adding grove.io and
paid Papertrail into the mix only brings you to a bit more than $25 a month.
Of course, depending heavily on 3rd party services opens a whole new discussion
about <a href="http://whoownsmyavailability.com">availability</a> which you should be
aware of.</p>
<p>For configuration examples for the services mentioned above, you can check out
my <a href="https://github.com/mrtazz/cookbooks">chef cookbooks</a>. They are mostly run
on FreeBSD but should be somewhat easy to adapt to a different environment.</p>
]]></content>
    <link href="https://unwiredcouch.com/2012/09/15/getting-started-with-monitoring.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Deploying Sensu monitoring on Heroku]]></title>
    <published>2012-07-31T00:00:00Z</published>
    <updated>2012-07-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2012/07/31/deploy-sensu-heroku.html</id>
    <content type="html"><![CDATA[<h3 id="sensu---trying-to-unsuck-monitoring">Sensu - trying to unsuck monitoring</h3>
<p>Some months ago I wanted to set up monitoring for a handful of servers I use
for personal stuff. As a first solution <a href="http://nagios.org">Nagios</a> came to mind. However for
several reasons I didn&rsquo;t want to set it up and configure it. And I really
didn&rsquo;t want to dedicate an existing server to do monitoring or get a new one
just for that purpose. Around that time I also read about <a href="https://github.com/sensu/sensu">Sensu</a>, a new
approach to monitoring, which is a result of Nagios not being a good fit for
the monitoring needs at <a href="http://www.sonian.com/">Sonian</a>. Its technology stack is Ruby, Redis and
AMQP. I immediately thought it should be possible to put this on the <a href="https://devcenter.heroku.com/articles/cedar/">Heroku
Cedar stack</a> and run it on an instance there, which would make a nice
solution for monitoring a small number of systems. So I hacked away and with a
lot of help (and patience) from <a href="https://twitter.com/portertech">Sean Porter</a>, the adaptations to make the
server and API part of Sensu deployable on Heroku are in the new <a href="https://github.com/sensu/sensu/tree/v0.9.6">0.9.6
release</a>.</p>
<h3 id="setting-up-the-sensu-repository">Setting up the Sensu repository</h3>
<p>In order to get started and configure your Sensu instance, clone the <a href="https://github.com/mrtazz/sensu-heroku-app">example
repository</a> from Github.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    git clone https://github.com/mrtazz/sensu-heroku-example
</span></span></code></pre></div><p>The example includes a basic folder layout for running a server or API instance
on Heroku. All configuration files can be dropped in the <code>config/</code> folder. They
will be picked up by the process when Sensu starts. The example repo also
includes a basic handler (<code>bin/showme.rb</code>), which prints event data to STDOUT.
There are a lot more handlers in the Sensu <a href="https://github.com/sensu/sensu-community-plugins">community plugins</a> repository on
Github. Since handlers are just Ruby scripts, you can download the handlers you
want and put them in the <code>bin/</code> directory. Don&rsquo;t forget to also add the correct
configuration file for each handler in the <code>config/</code> directory. A great
overview of how to configure Sensu can be found on Joe Miller&rsquo;s <a href="http://joemiller.me">blog</a> and there
is also an official <a href="https://github.com/sensu/sensu/wiki/Install-Guide">install guide</a>.</p>
<h3 id="deployment">Deployment</h3>
<p>In order to deploy Sensu to Heroku, you need to create two apps. One will be
the Sensu API instance and the other one the Sensu server. It doesn&rsquo;t really
matter which one you start with. The important thing is that you only need to
add the RabbitMQ and Redis add-ons once and can then reuse their settings on the
second instance.</p>
<p>So create the first instance on the cedar stack from within the example repo
and add the plugins:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    heroku create --stack cedar awesome-sensu-server
</span></span><span style="display:flex;"><span>    heroku addons:add redistogo
</span></span><span style="display:flex;"><span>    heroku addons:add rabbitmq
</span></span><span style="display:flex;"><span>    heroku config:add API_PORT<span style="color:#f92672">=</span><span style="color:#ae81ff">80</span>
</span></span></code></pre></div><p>You have to add the <code>API_PORT</code> environment variable to the server instance,
since otherwise it will assume it&rsquo;s running the API itself and use the
instance-local port from the <code>PORT</code> environment variable as the API
port. After that is done, push the code to Heroku and scale up a worker
process:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    git push heroku master
</span></span><span style="display:flex;"><span>    heroku ps:scale app<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span></code></pre></div><p>For the API instance create a new branch in the repo or clone the example repo
into a new location. Then initialize the API:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    heroku create --stack cedar awesome-sensu-api
</span></span><span style="display:flex;"><span>    heroku config:add REDISTOGO_URL<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;value from server instance&#34;</span>
</span></span><span style="display:flex;"><span>    heroku config:add RABBITMQ_URL<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;value from server instance&#34;</span>
</span></span></code></pre></div><p>Now change the Procfile to start up the API instead of the Sensu server like
this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    app: sensu-api -v -c config/config.json -d config/
</span></span></code></pre></div><p>Commit the changes and push it to the Heroku app:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    git push heroku-api master
</span></span><span style="display:flex;"><span>    heroku ps:scale app<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
</span></span></code></pre></div><p>Now all you have to do is set up clients and voil&agrave;, you have Heroku-hosted
monitoring. If you&rsquo;re not yet familiar with setting up clients, I highly
recommend Joe Miller&rsquo;s <a href="http://joemiller.me">blog</a> again. He&rsquo;s a strong contributor to Sensu and
has written an abundance of blog posts and tutorials about it. And of course
there is also the <a href="https://github.com/sensu/sensu/wiki/">sensu wiki</a>.</p>
<h3 id="further-improvements">Further improvements</h3>
<p>A definite improvement for plugins and handlers would be the ability to also
read configuration from environment variables. At the moment the way to go is
to add a JSON configuration file in the config folder. This is fine except for
the fact that you&rsquo;d also have API keys committed to the repo.</p>
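<p>One possible shape for such a helper (entirely hypothetical, this is not part of Sensu): look up the value in the environment first, which plays nicely with <code>heroku config:add</code>, and only fall back to the JSON file:</p>

```ruby
require 'json'

# Hypothetical helper: prefer an environment variable over the JSON
# config file, so API keys never need to be committed to the repo.
def handler_setting(name, config_file = 'config/handlers.json')
  ENV.fetch(name.upcase) do
    File.exist?(config_file) ? JSON.parse(File.read(config_file))[name] : nil
  end
end

ENV['PAGERDUTY_API_KEY'] = 'example-key' # stand-in for `heroku config:add`
puts handler_setting('pagerduty_api_key') # prints "example-key"
```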
<p>And obviously more bugs will probably come up once more people run Sensu on
Heroku. I&rsquo;ve been running a low-volume instance for a couple of weeks now and
it works pretty great so far.</p>
]]></content>
    <link href="https://unwiredcouch.com/2012/07/31/deploy-sensu-heroku.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Setting up workstations with Chef (Newbie Edition)]]></title>
    <published>2011-08-25T00:00:00Z</published>
    <updated>2011-08-25T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2011/08/25/setting-up-workstations-with-chef-newbie-edition.html</id>
    <content type="html"><![CDATA[<p>I have wanted to reinstall and cleanly set up my iMac at home for some time
now. And since there was a new release of Mac OS X around the corner, it seemed
like the perfect opportunity to do so. All my past setups and reinstalls were
guided by a useful <a href="https://gist.github.com/513101">gist</a> I forked from <a href="http://kennethreitz.com">Kenneth
Reitz</a> some time ago and adapted to my needs. However this time I
wanted to do it a bit differently: I wanted to take this as an opportunity to
dive into configuration management with <a href="http://opscode.com/chef">Chef</a>. As I prepared my
configuration I found a lot of things confusing and not so well documented for
a complete newbie. Thus I wanted to share my experience and maybe provide an
overview and easier access into the world of Chef. After all, once you have your
setup it is a pretty nice way to keep your workstations&rsquo; configuration in sync
and have a documented record of how you got there.</p>
<p>The setup I am going to describe is based heavily on Joshua Timberman&rsquo;s
<a href="http://jtimberman.posterous.com/managing-my-workstations-with-chef">post</a> about managing Mac OS X workstations with Chef. If you
already know Chef, go read it, it&rsquo;s great. As all my workstations are running
OS X, the steps described are only actually tested on this OS, but should
hopefully apply for any other supported OS as well. And of course the setup
should be installable to the environment of a normal user (no need to wake up
root just because you want to add a plugin to your shell).</p>
<p>However as I am very new to Chef and configuration management, some things may
not be described 100% accurately, so read this post with two big handfuls of salt
(or two cups of coffee).</p>
<h3 id="configu-what">Configu-what?</h3>
<p>If you are not familiar with configuration management, you can go read it on
the <a href="http://en.wikipedia.org/wiki/Configuration_Management">Wikipedias</a>. But in a nutshell it gives you an automated build
with a single build target: &lsquo;set up the machine
production ready&rsquo;. As in a classical automated software build, the system knows
what needs to be done to complete the build target and can track what has
already been done. All steps are therefore idempotent, which means executing a
step multiple times always leads to the same result (and no duplicated
resources). It is important that you treat your configuration in the
same way you would treat your automated build: there are no steps executed
outside the system. If you force yourself to use your configuration management
system for every install and configuration you will see how it simplifies your
life, at least when you set up a new machine again. Chef is one implementation
for such a management system (other popular choices are <a href="http://projects.puppetlabs.com/projects/puppet">Puppet</a> and
<a href="http://cfengine.com">cfengine</a>). Chef is (mainly) written in Ruby and supports cookbooks
written in Ruby itself or the Chef DSL which we will see in a later example.</p>
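<p>The idempotency idea above can be sketched in a few lines of plain Ruby. This mirrors what a <code>not_if</code> guard does in a recipe, it is not Chef code itself:</p>

```ruby
require 'fileutils'
require 'tmpdir'

# An idempotent step: the action only runs when its effect is not
# already in place, so repeating the step changes nothing.
def ensure_directory(path)
  return :skipped if File.directory?(path) # the not_if condition
  FileUtils.mkdir_p(path)
  :created
end

dir = File.join(Dir.mktmpdir, 'oh-my-zsh')
p ensure_directory(dir) # first run performs the action: :created
p ensure_directory(dir) # second run is a no-op: :skipped
```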
<h3 id="to-the-cloud">To the Cloud!!</h3>
<p>Chef comes in two flavours: <a href="http://wiki.opscode.com/display/chef/Chef+Server">Chef Server</a> and <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef
Solo</a>. The main difference here is that with Chef server everything
related to your configuration is managed on a server and machines register on
it to get their configuration and then perform all actions locally
with <code>chef-client</code>. Chef Solo on the other hand is basically a client run where
you have to download your configuration manually beforehand. The downloaded
configuration is then used by the executable to set up your machine. So in a
Solo run there is no external resource involved, but there are also some
features which are only available in the server edition. For managing my own
configuration I decided if I am going to learn Chef I might as well do it with
the full stack. However setting up Chef Server is a real hassle, as many
different technologies are involved, and is not really recommended for someone
new to Chef. Fortunately <a href="http://www.opscode.com">Opscode</a> (the company behind Chef) provides
a so-called &lsquo;Hosted Chef&rsquo; service, which really just means a Chef server in the
cloud. And as it is free for up to 5 nodes, it is a great way to get started
with Chef.</p>
<h3 id="clients-nodes-knife-cookbook-recipe">Clients, nodes, knife, cookbook, recipe?</h3>
<p>The basic terminology can be a bit confusing (especially as half of the search
results usually link to gourmet sites). So let&rsquo;s try to clear some terminology
right upfront:</p>
<ul>
<li>Cookbooks: Basic Chef configuration/distribution unit</li>
<li>Recipe: Subunit of cookbooks. All basic steps are taken in recipes</li>
<li>Client: A client which connects to the Chef server, level at which
certificates are issued</li>
<li>Node: An actual machine which asks the server for its configuration</li>
<li>Roles: Collection of cookbooks which can be assigned to nodes</li>
<li>Knife: Command line client to interact with the Chef server</li>
<li>Data bags: JSON-encoded storage for information which doesn&rsquo;t fit anywhere else</li>
</ul>
<p>This might still be a bit confusing, but let&rsquo;s just start with our configuration
to see how these parts all play together. The big benefit of Chef (I&rsquo;m sure
it&rsquo;s the same with most of the other systems), which is also often discussed
as a weakness, is the fact that everything really is Ruby or
JSON. This means it is source code, which again means we can easily manage it
with an SCM (I will use git in the examples, but it really applies to your
favourite SCM, too). So let&rsquo;s start with creating our configuration repository:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    $ mkdir chef-repo ; cd chef-repo
</span></span><span style="display:flex;"><span>    $ git init .
</span></span></code></pre></div><p>Now that we have our repository set up, we can start to add cookbooks.
There are in general two ways to get cookbooks into your repository.</p>
<ul>
<li>create the files and folder yourself</li>
<li>knife (the command line client, remember?)</li>
</ul>
<p>Knife is definitely the better way as you can create cookbook scaffolds, add
cookbooks directly from the community site or use one of the great plugins
(like pulling cookbooks directly from Github). But to get a better
understanding of the cookbook basics, we&rsquo;ll create everything by hand now.</p>
<h3 id="the-first-cookbook">The first cookbook</h3>
<p>As an example cookbook we&rsquo;ll want to install <a href="https://github.com/robbyrussell/oh-my-zsh">oh-my-zsh</a> with our own
custom <code>.zshrc</code>. Although this is probably not such a common install as <code>git</code>
for example, it is a reasonably easy one and a good example for how to
automate steps which would normally be done manually. The steps we want to
automate are:</p>
<ul>
<li>download and install oh-my-zsh</li>
<li>install our custom <code>.zshrc</code></li>
</ul>
<p>So first of all let&rsquo;s create the basic folder structure:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    $ mkdir -p cookbooks/oh-my-zsh/recipes
</span></span><span style="display:flex;"><span>    $ mkdir -p cookbooks/oh-my-zsh/templates/default
</span></span><span style="display:flex;"><span>    $ touch cookbooks/oh-my-zsh/recipes/default.rb
</span></span><span style="display:flex;"><span>    $ touch cookbooks/oh-my-zsh/templates/default/dot.zshrc.erb
</span></span><span style="display:flex;"><span>    $ touch cookbooks/oh-my-zsh/README.rdoc
</span></span><span style="display:flex;"><span>    $ touch cookbooks/oh-my-zsh/metadata.rb
</span></span></code></pre></div><p>The rough knife equivalent (which creates all the possible folders for the
cookbook) would be <code>knife cookbook create oh-my-zsh -o ./cookbooks</code>. However in
order to get our oh-my-zsh cookbook working, we only need the files and folders
shown above. The <code>README.rdoc</code> and <code>metadata.rb</code> files are just for metadata
about the cookbook and only the Ruby file is directly parsed by the Chef server
for information. But every cookbook should also contain a README which
explains its purpose in natural language (you create README files for all of
your projects, don&rsquo;t you?).</p>
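<p>For reference, <code>metadata.rb</code> only needs a handful of lines; something along these lines, with the values being placeholders of course:</p>

```ruby
# cookbooks/oh-my-zsh/metadata.rb
maintainer  "Your Name"
description "Installs oh-my-zsh with a custom .zshrc"
version     "0.0.1"
```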
<p>In order to set up the cookbook, first insert your current <code>.zshrc</code> into
<code>oh-my-zsh/templates/default/dot.zshrc.erb</code>. This makes it available to our
recipes as a template file. Now we want to configure the actual recipe.
Therefore enter the following into <code>oh-my-zsh/recipes/default.rb</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>    script <span style="color:#e6db74">&#34;oh-my-zsh install from github&#34;</span> <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>      interpreter <span style="color:#e6db74">&#34;bash&#34;</span>
</span></span><span style="display:flex;"><span>      url <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh&#34;</span>
</span></span><span style="display:flex;"><span>      code <span style="color:#e6db74">&lt;&lt;-EOS
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74"></span>        curl <span style="color:#f92672">-</span>sLf <span style="color:#75715e">#{url} -o - | sh</span>
</span></span><span style="display:flex;"><span>        rm <span style="color:#75715e">#{ENV[&#39;HOME&#39;]}/.zshrc</span>
</span></span><span style="display:flex;"><span>      <span style="color:#66d9ef">EOS</span>
</span></span><span style="display:flex;"><span>      not_if { <span style="color:#66d9ef">File</span><span style="color:#f92672">.</span>directory? <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.oh-my-zsh&#34;</span> }
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">end</span>
</span></span></code></pre></div><p>This just executes the shell script passed to the <code>code</code> directive. The used
interpreter is <code>bash</code> and the <code>not_if</code> directive ensures the idempotency of
this step. The script is only executed if the directory <code>~/.oh-my-zsh</code> does not
exist. The shell script just contains the usual oh-my-zsh installer and removes
the generic <code>.zshrc</code> which is important for the next step. As we want to
install our own config file but don&rsquo;t want to do it every time, we use the
following Chef block (written to <code>oh-my-zsh/recipes/default.rb</code> directly after
the install script):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>    template <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.zshrc&#34;</span> <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>      mode   <span style="color:#ae81ff">0700</span>
</span></span><span style="display:flex;"><span>      owner  <span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;USER&#39;</span><span style="color:#f92672">]</span>
</span></span><span style="display:flex;"><span>      group  <span style="color:#66d9ef">Etc</span><span style="color:#f92672">.</span>getgrgid(<span style="color:#66d9ef">Process</span><span style="color:#f92672">.</span>gid)<span style="color:#f92672">.</span>name
</span></span><span style="display:flex;"><span>      source <span style="color:#e6db74">&#34;dot.zshrc.erb&#34;</span>
</span></span><span style="display:flex;"><span>      variables({ <span style="color:#e6db74">:home</span> <span style="color:#f92672">=&gt;</span> <span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span> })
</span></span><span style="display:flex;"><span>      not_if { <span style="color:#66d9ef">File</span><span style="color:#f92672">.</span>exist? <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.zshrc&#34;</span> }
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">end</span>
</span></span></code></pre></div><p>This creates the file given as the template parameter (our zsh config file)
with the given properties. It makes sure the file is owned and only readable by
us, takes the content from the <code>dot.zshrc.erb</code> template and passes <code>variables</code>
to the renderer. As you might have already seen, templates are just <a href="http://ruby-doc.org/stdlib/libdoc/erb/rdoc/classes/ERB.html">ERB</a>.
This means we can use the ERB syntax (<code>&lt;%= var %&gt;</code>) within a template to insert
dynamic content passed from the recipe.</p>
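<p>Outside of Chef, the same rendering can be simulated with plain ERB from the standard library. Chef exposes each entry passed via <code>variables</code> as an instance variable in the template; the little context class and the <code>.zshrc</code> line below are just an illustration of that:</p>

```ruby
require 'erb'

# Simulate the Chef template context: every entry in `variables`
# becomes an instance variable the ERB template can read.
class TemplateContext
  def initialize(vars)
    vars.each { |key, value| instance_variable_set("@#{key}", value) }
  end

  def render(template)
    ERB.new(template).result(binding)
  end
end

# A line you might put in dot.zshrc.erb:
template = 'export ZSH=<%= @home %>/.oh-my-zsh'
puts TemplateContext.new(home: '/Users/alice').render(template)
# prints: export ZSH=/Users/alice/.oh-my-zsh
```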
<p>One additional step we might want to take is to source <code>.profile</code> in our config
file. This is especially useful if you use environment management like
<a href="http://beginrescueend.com/rvm/install/">rvm</a>, <a href="http://pypi.python.org/pypi/virtualenv">virtualenv</a> or <a href="https://github.com/spawngrid/kerl">kerl</a>. These usually need to be
activated in the shell config. In order to make sure that they are present in
every shell the activation step is written into <code>.profile</code>. Therefore we also
want to source it in our zsh config. The <code>not_if</code> guard here also preserves
the idempotency of the step.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>    script <span style="color:#e6db74">&#34;source .profile in .zshrc&#34;</span> <span style="color:#66d9ef">do</span>
</span></span><span style="display:flex;"><span>      interpreter <span style="color:#e6db74">&#34;bash&#34;</span>
</span></span><span style="display:flex;"><span>      code <span style="color:#e6db74">&lt;&lt;-EOS
</span></span></span><span style="display:flex;"><span><span style="color:#e6db74"></span>      echo <span style="color:#e6db74">&#34;source </span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.profile&#34;</span> <span style="color:#f92672">&gt;&gt;</span> <span style="color:#75715e">#{ENV[&#39;HOME&#39;]}/.zshrc</span>
</span></span><span style="display:flex;"><span>      <span style="color:#66d9ef">EOS</span>
</span></span><span style="display:flex;"><span>      not_if <span style="color:#e6db74">&#34;grep </span><span style="color:#ae81ff">\&#34;</span><span style="color:#e6db74">source </span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.profile</span><span style="color:#ae81ff">\&#34;</span><span style="color:#e6db74"> </span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.zshrc&#34;</span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">end</span>
</span></span></code></pre></div><h3 id="the-server-comes-into-play">The server comes into play</h3>
<p>After finishing these steps, we can upload the cookbook to our server.  In
order to be able to do this, the server needs to be set up, so if you haven&rsquo;t
already, <a href="http://www.opscode.com/hosted-chef/">sign up</a> for a free Hosted Chef account. After creating your
organization, put your client and validation certificates in <code>~/.chef</code>. I find
this to be a convenient place for all your Chef related configuration, but you
can of course choose another directory (just make sure that you also adapt
subsequent steps in this post accordingly). Now we can upload our cookbook
with:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    knife cookbook upload oh-my-zsh
</span></span></code></pre></div><p>We have a cookbook on the server now, but no node uses it, yet (we also don&rsquo;t
have nodes set up at the moment but bear with me here).  In order to match
nodes to cookbooks Chef employs the concept of &lsquo;run lists&rsquo;.  These are
basically lists of recipes which can be added to a node so that it knows what
to install. As run lists are mostly very similar between nodes of the same
category, we can set up a role for it in Chef. A role is just a specific set of
attributes and a run list which is mapped to a name. As there may be multiple
machines we use as workstations we create a role &lsquo;workstation&rsquo; in the roles
directory of our Chef repository:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    $ mkdir -p roles
</span></span><span style="display:flex;"><span>    $ touch roles/workstation.rb
</span></span></code></pre></div><p>Again this is just Ruby so we add the following information to
<code>workstation.rb</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>    name <span style="color:#e6db74">&#34;workstation&#34;</span>
</span></span><span style="display:flex;"><span>    description <span style="color:#e6db74">&#34;development workstations&#34;</span>
</span></span><span style="display:flex;"><span>    run_list(
</span></span><span style="display:flex;"><span>      <span style="color:#e6db74">&#34;recipe[oh-my-zsh]&#34;</span>
</span></span><span style="display:flex;"><span>    )
</span></span></code></pre></div><p>Now every node which is assigned the &lsquo;workstation&rsquo; role will know that it has
to install the <code>oh-my-zsh</code> recipe. Let&rsquo;s upload the role to our server:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    $ knife role from file roles/workstation.rb
</span></span></code></pre></div><p>In the management web interface (or via <code>knife</code>) we can now assign the role
&lsquo;workstation&rsquo; to specific nodes. However we first need a client which is
allowed to connect to the server API. Clients and nodes are somewhat the same
in Chef. Theoretically it is possible that a client manages a number of nodes,
but normally every node corresponds to one client. Therefore we create a new
client for our workstation. You can also run <code>chef-client</code> on your node and
provide the validator certificate for your organization. If the node does not
yet exist on the server it is created. However this means that you have to have
the validator certificate (which is the ultimate key to your server) on the
node. This might not be a problem for setting up your development machine, but
is bad security in general. So the better way is to create the client and node
on the server and provide the correct credentials (at least read and update)
for the client on the node. One more advantage is that we can now already
assign roles to our nodes (via the &lsquo;Roles&rsquo; menu) and add the &lsquo;workstation&rsquo; role
to the newly created node. All these steps can of course also be accomplished
with <code>knife</code>, but I find the web management console easier to start with.  When
all this is done, download the client&rsquo;s certificate and also put it in
<code>~/.chef</code>. Theoretically your node is correctly set up already. However Chef
makes the assumption that it is run with root privileges. Therefore the default data
directory is in <code>/etc/chef</code>. As we want to set up our development machine and
not a server, it makes sense to run <code>chef-client</code> as your normal user. In order
to do this, you would now have to make the default directories accessible for
your user. But we can also override the paths used in the client config. I also
keep my paths in <code>~/.chef</code> (everything in one place, remember?) so a good
adaptation of your <code>client.rb</code> might be:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>    base_dir <span style="color:#f92672">=</span> <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span><span style="color:#66d9ef">ENV</span><span style="color:#f92672">[</span><span style="color:#e6db74">&#39;HOME&#39;</span><span style="color:#f92672">]</span><span style="color:#e6db74">}</span><span style="color:#e6db74">/.chef&#34;</span>
</span></span><span style="display:flex;"><span>    run_path <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>base_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/run&#34;</span>
</span></span><span style="display:flex;"><span>    checksum_path <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>base_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/checksum&#34;</span>
</span></span><span style="display:flex;"><span>    file_cache_path <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>base_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/cache&#34;</span>
</span></span><span style="display:flex;"><span>    file_backup_path <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>base_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/backup&#34;</span>
</span></span><span style="display:flex;"><span>    cache_options({<span style="color:#e6db74">:path</span> <span style="color:#f92672">=&gt;</span> <span style="color:#e6db74">&#34;</span><span style="color:#e6db74">#{</span>base_dir<span style="color:#e6db74">}</span><span style="color:#e6db74">/cache/checksums&#34;</span>, <span style="color:#e6db74">:skip_expires</span> <span style="color:#f92672">=&gt;</span> <span style="color:#66d9ef">true</span>})
</span></span></code></pre></div><p>This makes sure that only subdirectories of <code>~/.chef</code> are used for
caching, checksums, etc. After these steps there is only one thing left to do.</p>
<h3 id="sit-back-and-watch">Sit back and watch</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>    $ chef-client -c ~/.chef/client.rb -k ~/.chef/client.pem
</span></span></code></pre></div><p>The above command runs the Chef client with the specified config and client
certificate. It fetches the cookbooks from the server, determines which ones
to execute via the node&rsquo;s run list, and runs them. If everything went well,
you now have oh-my-zsh installed and can go on to add more cookbooks to your
repository.</p>
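<p>If you want to write a cookbook of your own next, a minimal recipe might be sketched like this. This is only an illustrative sketch of the Chef recipe DSL; the cookbook layout, template name and oh-my-zsh repository URL are placeholders, so check the resource documentation for your Chef version before copying it:</p>

```ruby
# recipes/default.rb of a hypothetical oh-my-zsh cookbook.
# The git resource clones the repository; since chef-client runs as
# your normal user here, it can write straight into your home directory.
git "#{ENV['HOME']}/.oh-my-zsh" do
  repository "git://github.com/robbyrussell/oh-my-zsh.git"
  action :sync
end

# drop a zshrc rendered from a template shipped with the cookbook
template "#{ENV['HOME']}/.zshrc" do
  source "zshrc.erb"
  mode "0644"
end
```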
<h3 id="further-reading">Further reading</h3>
<p>You should now be equipped with a basic working setup to manage your
configuration with Chef. Play around with new cookbooks and force yourself to
do everything related to system configuration in terms of cookbooks and data
bags. You&rsquo;ll only learn it by doing it. Once you feel comfortable with this
basic setup, see the following links for some more sophisticated
possibilities.</p>
<ul>
<li><a href="http://jtimberman.posterous.com/managing-my-workstations-with-chef">Managing workstations with Chef</a></li>
<li><a href="http://wiki.opscode.com/display/chef/Encrypted+Data+Bags">Encrypted data bags</a></li>
<li><a href="http://wiki.opscode.com/display/chef/Lightweight+Resources+and+Providers+(LWRP)">Lightweight Resources and Providers (LWRP)</a></li>
</ul>
]]></content>
    <link href="https://unwiredcouch.com/2011/08/25/setting-up-workstations-with-chef-newbie-edition.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Testing couchapps with cucumber]]></title>
    <published>2010-12-30T00:00:00Z</published>
    <updated>2010-12-30T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/12/30/testing-couchapps-with-cucumber.html</id>
    <content type="html"><![CDATA[<h3 id="couchapps-overview">Couchapps overview</h3>
<p><a href="http://couchapp.org">Couchapps</a> are a great way to build web apps hosted directly on a
CouchDB. Thanks to CouchDB&rsquo;s integrated HTTP server, if you can fit your
application into the constraints of HTML/CSS/JavaScript, you get the storage
(almost) for free. At the heart of couchapps is <a href="http://couchapp.org/page/evently">evently</a>, a JavaScript
framework that simplifies the development of event-based web applications.
The development process is accompanied by the <a href="https://github.com/couchapp/couchapp">couchapp python script</a>,
which maps a certain directory structure to the evently application layout.
This makes it easy to develop the source code, which is normally stored as
attachments in the design document, in your favourite editor. However, this
difference between how the application is developed and how it is deployed
makes it a bit more difficult to test the application automatically.</p>
<h3 id="enter-cucumber">Enter cucumber</h3>
<p>Fortunately the ruby world has provided us with a great tool for
acceptance testing web applications: <a href="http://cukes.info">Cucumber</a>. Cucumber is a
testing framework which encourages <a href="http://en.wikipedia.org/wiki/Behavior_Driven_Development">BDD</a>-style development. It
features different drivers for (headless) browser testing and supports an
easy, natural-language-like syntax for creating tests (scenarios, as they
are called in the BDD world). If you don&rsquo;t use it already, give it a try; it
is really great. However, to fully embrace an automated testing approach we
need some helpers for additional work, for example creating and destroying
the testing environments.</p>
<h3 id="we-have-the-technology-we-can-make-him-stronger">We have the technology, we can make him stronger</h3>
<p>The first problem was CouchDB&rsquo;s native authentication db, which couchapps use
to profit from the already existing user management. Fortunately there is <a href="http://lenaherrmann.net/2010/04/29/security-in-couchdb-changing-the-authentication-db">a
way</a> to change the db used for authentication to an arbitrary one. The next
nice-to-have is an easy setup for choosing databases for different
environments such as tests. Fortunately rails already provides a clean scheme
for this, which we can copy. Now we only have to bundle a <a href="https://gist.github.com/738128">simple CouchDB
library</a> and some helper methods and we are ready to go. This is what
<a href="https://github.com/mrtazz/couchapp-cucumber">couchapp-cucumber</a> is about: I bundled all these steps as a simple
cucumber drop-in. I&rsquo;m sure it can be improved in a lot of ways.</p>
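<p>The rails-style per-environment database selection can be sketched in a few lines of ruby. The class, key and database names below are made up for illustration and not taken from couchapp-cucumber:</p>

```ruby
# Minimal sketch of rails-style per-environment database selection.
# A config hash maps each environment to its own CouchDB database, so
# cucumber runs can create and destroy a throwaway test db safely.
class CouchConfig
  DEFAULTS = {
    "development" => "myapp_dev",
    "test"        => "myapp_test",
    "production"  => "myapp",
  }

  def initialize(config = DEFAULTS)
    @config = config
  end

  # pick the database for the given environment, defaulting to test
  def database(env = ENV["COUCH_ENV"] || "test")
    @config.fetch(env) { raise ArgumentError, "unknown environment #{env}" }
  end

  # full URL of the database on a (local) CouchDB instance
  def url(host = "http://127.0.0.1:5984", env = "test")
    "#{host}/#{database(env)}"
  end
end
```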
<h3 id="so-fork-it-hack-away-and-happy-testing">So fork it. Hack away. And happy testing.</h3>
]]></content>
    <link href="https://unwiredcouch.com/2010/12/30/testing-couchapps-with-cucumber.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Introducing Ramrod Command Center]]></title>
    <published>2010-09-19T00:00:00Z</published>
    <updated>2010-09-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/09/19/introducing-ramrod-command-center.html</id>
<content type="html"><![CDATA[<p>I have <a href="http://twitter.com/mrtazz/status/24040433982">finally</a> registered my Master&rsquo;s thesis. This means that in 6
months my university life is over and the hard and cruel reality begins. I am
writing my thesis at the <a href="http://cone.informatik.uni-freiburg.de">Chair of Computer Networks</a> about localization of
nodes without external positioning information. I am going to port the
existing algorithms to Unix platforms and create a server environment to make
the localization usable in networks without direct communication as well.</p>
<p>The resulting (localization) software has to run on different platforms such as
Windows, Unix and the iPhone. This is why I wanted a build system with
continuous integration, where the software is built and tested on all of these
platforms. For continuous integration I normally use <a href="http://integrityapp.com">integrity</a> or
<a href="http://github.com/defunkt/cijoe">cijoe</a>, which are simple but therefore don&rsquo;t support triggering builds on
several platforms. <a href="http://hudson-ci.org">Hudson</a> has support for agents, but as far as I found
out, they have to be Hudson agents. For my Master&rsquo;s project I wanted to be
able to use integrity as well as cijoe as agents, with a simple central
command instance which notifies them.</p>
<h3 id="ramrod">Ramrod</h3>
<p>This is why I built <a href="http://github.com/mrtazz/ramrod">ramrod</a>. It is a small sinatra application which acts
as a sort of CI control center. Ramrod can be notified to build a project via
a simple HTTP POST, and all registered agents are then notified in turn. The
agents have to be configured to report the result back to ramrod, where all
the results are then displayed. This can easily be done with cijoe and
integrity. Ramrod itself has a simple structure and can be deployed to any
ruby hosting platform.</p>
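<p>The fan-out idea can be sketched roughly like this. The class and method names are made up for illustration and the actual HTTP POST is stubbed out via a block; ramrod&rsquo;s real code looks different:</p>

```ruby
# Rough sketch of ramrod's fan-out: one build request comes in, all
# registered agent URLs get a build notification, and results reported
# back by the agents are kept per agent. The HTTP call itself is left
# out and stands in as a block.
class BuildDispatcher
  def initialize
    @agents  = []
    @results = {}
  end

  def register(agent_url)
    @agents << agent_url
  end

  # notify every agent about a project build; the block stands in
  # for the actual HTTP POST to the agent
  def notify_build(project, &post)
    @agents.each { |agent| post.call(agent, project) }
  end

  # agents report their build result back here
  # (in ramrod this would be an HTTP endpoint)
  def report(agent_url, status)
    @results[agent_url] = status
  end

  attr_reader :results
end
```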
<p>The project is still at a very early stage and there are a lot of things which
have to be improved (or even implemented), such as a notification system,
authentication and of course a much better design. But I think the initial
release v0.1.0 is already quite helpful and usable.</p>
]]></content>
    <link href="https://unwiredcouch.com/2010/09/19/introducing-ramrod-command-center.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Chipping in on Textmate to Vim switching]]></title>
    <published>2010-08-09T00:00:00Z</published>
    <updated>2010-08-09T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/08/09/vim.html</id>
<content type="html"><![CDATA[<p>There was some buzz lately about people considering <a href="http://www.vim.org">Vim</a> as their main
editor, and especially about going from <a href="http://macromates.com">Textmate</a> to Vim. I&rsquo;ve tried
several times to go the other way and leave vim as my editor of choice in
favour of Textmate. It never worked.</p>
<h3 id="magic-does-not-come-easy">Magic does not come easy</h3>
<p>I think there is one big mistake a lot of people make when switching. They
expect to start vim and instantly have their fingers dance over the keyboard
and coworkers stunned by the awesome magic. But as with everything you learn,
this is not the case. It is a long way (think years, not weeks) to even come
close to this dance. I started fighting with vim in 2004 and think I have
mastered the first 10% by now. In my opinion Yehuda Katz was right when he
<a href="http://yehudakatz.com/2010/07/29/everyone-who-tried-to-convince-me-to-use-vim-was-wrong/">said</a> that you should really start in insert mode. It turns out that you
can use Vim (especially <a href="http://code.google.com/p/macvim/">MacVim</a>) like any other editor. From there on, if
you force yourself to learn and use a new command every day, you will see
significant speed (and magic) improvements quite fast.</p>
<h3 id="bundle-configuration-with-pathogen">Bundle configuration with pathogen</h3>
<p>Another topic which is very important (and powerful) is bundle support. What
Textmate handles with bundles is split into different types in Vim: there are
plugin, compiler, syntax and some more folders which can be used for
configuration. If you do it the old way, it is really cumbersome. You have to
copy new scripts into the corresponding folders in your <code>~/.vim</code> folder,
check that nothing gets overwritten, and after some time you will lose track
of which plugins you have installed.</p>
<p>Fortunately, Tim Pope wrote the great <a href="http://github.com/tpope/vim-pathogen">pathogen plugin</a>. You only have to
put it into a folder called autoload, enter 3 lines into your <code>~/.vimrc</code>, and
you&rsquo;re done. Now you can create a new folder under <code>~/.vim/bundle</code> for each
plugin you want to install, and pathogen will automatically load it for you.
You can even go further and put your configuration into <a href="http://git-scm.com">git</a>. If you add
all of your plugins as git submodules, you can easily update them and keep
your configuration in sync on all your machines. It&rsquo;s that easy.</p>
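<p>For reference, at the time of writing those 3 lines typically looked like this. This is the setup as pathogen&rsquo;s own instructions gave it back then; check the plugin&rsquo;s README for the current form:</p>

```vim
" load every plugin living in its own folder under ~/.vim/bundle
filetype off
call pathogen#runtime_append_all_bundles()
call pathogen#helptags()
```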
<h3 id="you-mentioned-textmate">You mentioned Textmate?</h3>
<p>Right. I mentioned that I tried several times to switch to Textmate, as all the
smart OSX users seem to use it. And if all the smart people use it, it must be
awesome, right?</p>
<p>After using Textmate for some time, I suffered the same symptoms all the Vim
switchers were talking about. Coding was slow, I had to think far too often
about how to do this and that, shortcuts were way too complicated, and I
tried to find a Vim mode for Textmate that worked for me. I was trying to
instantly unleash the magic. And that just doesn&rsquo;t work.
While I think Textmate is (one of) the best pure OSX editors and I hope that
there will be a version 2 someday, I am still vastly more productive with
Vim. So Textmate is more of a fun-to-use tool, which I use when I am in the
mood.</p>
<p>But there is still this emacs thing out there, I hear. And it also wants to be
learned.</p>
]]></content>
    <link href="https://unwiredcouch.com/2010/08/09/vim.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Notifo]]></title>
    <published>2010-07-11T00:00:00Z</published>
    <updated>2010-07-11T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/07/11/notifo.html</id>
<content type="html"><![CDATA[<p><a href="http://notifo.com">Notifo.com</a> is a web service which enables you to send push
notifications to your mobile device (at the moment there is only support for
the iPhone). This <a href="http://paulstamatiou.com/notifo-yc-w2010-gets-a-co-founder-me">blog post</a> first called my attention to the service. I
have used <a href="http://prowl.weks.net/">prowl</a> (a similar service based on <a href="http://growl.info/">growl</a>) before, and I was
instantly interested in the somewhat more versatile notifo.</p>
<p>On the same evening I decided to build a <a href="http://github.com/mrtazz/notifo.py">python library</a> to be able to
easily notify users from python applications. I also thought it would be cool
to get push notifications from your CI server about your builds, so I
implemented an <a href="http://integrityapp.com/">integrity</a> notifier which is now available in the main
line.</p>
<p>At the moment I personally use notifo to get notified about twitter mentions
via <a href="http://push.ly">push.ly</a>, about website changes via <a href="http://femtoo.com">femtoo</a> and about commits to
some of my github projects via a <a href="http://github.com/github/github-services">service hook</a>. Once I have found
suitable hosting for integrity for my projects, I will also use the integrity
notifier. And I am very excited to see what else will be built upon this
service.</p>
]]></content>
    <link href="https://unwiredcouch.com/2010/07/11/notifo.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Thunk.us Python wrapper]]></title>
    <published>2010-06-04T00:00:00Z</published>
    <updated>2010-06-04T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/06/04/thunk-us.html</id>
<content type="html"><![CDATA[<p>Lately I have become more and more interested in APIs (especially on the web).
As a first manifestation of this interest I converted the simple python script
for adding articles to <a href="http://instapaper.com">instapaper</a>, which I wrote some time ago, into a
usable <a href="http://pypi.python.org/pypi/instapaperlib">python library</a> utilizing the full API.</p>
<p>Then, while reading some blog posts about continuous integration and build
notification, I stumbled upon <a href="http://thunk.us">thunk.us</a>. It is a simple status management
service: you register your thunk there, poke it to set the status, and
everyone who needs to (i.e. knows the ID) can check it. It can easily be used
to notify about succeeded/failed builds, the status of queues or the coffee
supply, or anything else for that matter. Although I had no immediate use for
such a system, I read the API definitions and found them interesting enough
to build something upon them.</p>
<p>Enter <a href="http://github.com/mrtazz/thunkapi.py">thunkapi.py</a>, a python library with a command line client to interact
with the thunk.us service. I built the library with ease of use in mind (and
I think I somewhat succeeded). One can easily pass states, a payload and a
UID (or a list of UIDs where applicable) to the exposed methods and get a
clean python dict object in return.</p>
<p>I will still have to see if I have a productive use for the thunk.us service.
But the fun of building the library was motivation enough. So have fun with
it, use it if you like it, and if you dislike something about it: <a href="http://github.com/mrtazz/thunkapi.py">Fork
me!</a></p>
]]></content>
    <link href="https://unwiredcouch.com/2010/06/04/thunk-us.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Where&#39;s ma ketchup?]]></title>
    <published>2010-05-31T00:00:00Z</published>
    <updated>2010-05-31T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/05/31/ketchupstatus.html</id>
<content type="html"><![CDATA[<p>At our office Mondays are special. This is the one day Bobby drives his good
old trailer to the parking lot of the supermarket next to the office and
sells chicken and fries. Alternatively, it is also possible to get only fries
and grab burgers for the microwave from the supermarket. But there is still
one more important thing: ketchup. There are even weekly fights about which
brand is the best (no doubt, Heinz is where my heart is). Ketchup is a shared
resource for us: one person usually buys a bottle for everybody, and when it
is empty, the next one is bought by someone else.</p>
<p>The problem is that once we are in the supermarket, usually nobody has checked
whether we still have ketchup or not. And so the guessing starts. This is why
I decided to build a simple web application where we can set and check the
current status of our ketchup supply. So last weekend I sat down and, since I
wanted to get into ruby anyway, built a sinatra app. It is quite simple and
works with a token-based API, but it does the job. It is hosted on
<a href="http://ketchupstatus.heroku.com">heroku</a> if you want to check it out. But you can also fork it on <a href="http://github.com/mrtazz/ketchupstatus">the
githubs</a> if you want to improve it or adapt it to something else.</p>
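<p>The token-based API boils down to something like this. A plain-ruby sketch; the class name, status values and route mapping are invented for illustration and the real app&rsquo;s sinatra routes and parameter names may differ:</p>

```ruby
# Sketch of a token-guarded status store like the ketchup app's API:
# anyone may read the status, but updating it requires the shared token.
class KetchupStatus
  def initialize(token)
    @token  = token
    @status = "unknown"
  end

  # the GET /status equivalent: reading is open to everyone
  attr_reader :status

  # the PUT /status equivalent: reject updates without the right token
  def update(new_status, token)
    return false unless token == @token
    @status = new_status
    true
  end
end
```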
]]></content>
    <link href="https://unwiredcouch.com/2010/05/31/ketchupstatus.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[instapaperlib]]></title>
    <published>2010-05-20T00:00:00Z</published>
    <updated>2010-05-20T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/05/20/instapaperlib.html</id>
<content type="html"><![CDATA[<p>For some time now <a href="http://instapaper.com">instapaper.com</a> has been one of the web services I use
most. Together with the iPhone application it is the perfect storage place for
articles and blog posts you get via RSS, twitter or anywhere else. Whenever I
don&rsquo;t have time to read a link right away, I hit my &ldquo;Read later&rdquo; bookmark and
the article is saved in pure text form to my instapaper.com account. The same
goes for twitter links: Tweetie (now Twitter for iPhone) has integrated
instapaper so well that it could not be easier to save links for reading
later.</p>
<p>However, I also wanted to be able to quickly add links that come from other
sources. That is why I wrote a library for instapaper in python and a command
line client using this library. For source code and examples see the
<a href="http://github.com/mrtazz/InstapaperLibrary">github</a> page, and to install use <a href="http://pypi.python.org/pypi/instapaperlib/0.2.0">PyPi</a>.</p>
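<p>For the curious, adding a URL through Instapaper&rsquo;s simple API is essentially a single authenticated POST. Sketched here in ruby for illustration (the library itself is python); endpoint and parameters are as I remember the simple API, and the credentials are obviously placeholders, so double-check against the official docs:</p>

```ruby
require "net/http"
require "uri"

# Build (but do not send) the POST request for Instapaper's simple
# "add" API: username, password and the link go in as form fields.
def build_add_request(username, password, link)
  uri = URI.parse("https://www.instapaper.com/api/add")
  request = Net::HTTP::Post.new(uri.path)
  request.set_form_data(
    "username" => username,
    "password" => password,
    "url"      => link
  )
  request
end
```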
]]></content>
    <link href="https://unwiredcouch.com/2010/05/20/instapaperlib.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[plustache]]></title>
    <published>2010-04-21T00:00:00Z</published>
    <updated>2010-04-21T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2010/04/21/plustache.html</id>
<content type="html"><![CDATA[<p>Some time ago I discovered <a href="http://twitter.com/defunkt">@defunkt&rsquo;s</a> logic-less ruby templating
system <a href="http://mustache.github.com">mustache</a>. I instantly liked it because of its simplicity, its
independence from any frameworks and, admittedly, also because of the name.
After having seen many implementations in <a href="http://github.com/defunkt/pystache">python</a>,
<a href="http://github.com/janl/mustache.js">JavaScript</a> and even <a href="http://github.com/mojombo/mustache.erl">Erlang</a> and
<a href="http://github.com/raycmorgan/Mu">node.js</a>, I decided to port it to a new language as well.
Since I am doing some C++ at work at the moment and wanted to deepen my
knowledge in some non-work-related projects anyway, the decision was
practically made. So I fired up my trusty
<a href="http://code.google.com/p/macvim/">text editor</a> and hacked away.
After some weeks of creating the build system and deciding how to do regular
expressions and unit tests, basic mustache tags, true/false and inverted
sections are working, as well as basic HTML escaping.</p>
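<p>To illustrate what &ldquo;basic mustache tags&rdquo; means, here is the core substitution idea in a few lines of ruby. This is a toy sketch, nothing like the real mustache or plustache implementations, and it handles only variable tags, not sections:</p>

```ruby
require "cgi"

# Toy mustache-style variable substitution: {{name}} is replaced with
# the HTML-escaped value from the context hash, {{{name}}} with the
# raw, unescaped value.
def render(template, context)
  template.gsub(/\{\{(\{?)\s*(\w+)\s*\}?\}\}/) do
    raw, key = $1 == "{", $2
    value = context.fetch(key, "").to_s
    raw ? value : CGI.escapeHTML(value)
  end
end
```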
<p>The results can be seen on <a href="http://github.com/mrtazz/plustache">github</a>, and at the moment I am working on
getting to a point where I am satisfied enough with the code to tag it v0.1.0.
I am excited to port mustache to a more static language and take on the
challenge of still keeping it simple. And I am even more excited about how
this mustache thing will evolve.</p>
]]></content>
    <link href="https://unwiredcouch.com/2010/04/21/plustache.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Almost Endless Screen Real Estate]]></title>
    <published>2009-12-24T00:00:00Z</published>
    <updated>2009-12-24T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2009/12/24/first-weekend-with-endless-screen-real-estate.html</id>
    <content type="html"><![CDATA[<h3 id="ye-olde-setup">Ye olde setup</h3>
<p>Some months ago I realized that my Mac mini, used as a media server in my
livingroom, has been getting a bit long in the tooth. The 1.8 GHz Core (no 2)
Duo with 1GB RAM and an 80GB hard disk was top notch when I bought it around 3
years ago. But in the meantime I have acquired around 300GB of music, TV shows
and movies (thank you, iTunes store and handbrake), and the startup time of
iTunes seemed to increase every week. It was also getting more and more time
consuming to play flash videos, not to speak of HD TV shows.</p>
<p>Another thing that always somehow bugged me was the fact that I had a whole
machine only for serving media content, which just seems a bit too much. Since
I don&rsquo;t watch TV shows all day and also don&rsquo;t listen to music all the time,
the Mac mini was idle for around 75% of the day. Sure, it doesn&rsquo;t consume that
much power when idling, but it always felt like a waste nevertheless.
Additionally there are some Firewire hard disks connected to the Mac mini,
since I ran out of internal storage a long time ago (and didn&rsquo;t feel like
replacing the hard disk). Just more negative contributions to my carbon
footprint.</p>
<p>My main machine for work and university is a black MacBook from late
2007/early 2008. For coding and longer studying, I hooked it up to an external
22-inch Full HD display and an Apple wireless keyboard/mouse. Since I got
tired of plugging and unplugging the MacBook, it happened all too often that I
worked on the small screen longer than was comfortable, or had the MacBook
connected to the display all weekend and missed its mobility. I also had the
feeling more and more often that the external display was a bit too small and
that the combination of 22 inches and a 1920x1080 resolution wasn&rsquo;t right
for me.</p>
<p>So around September this year I started thinking about how to improve my
situation.</p>
<h3 id="choosing-new-hardware">Choosing new hardware</h3>
<p>The first solution I thought about was the new Mac mini, which has two display
outputs. I could hook it up to the display and the TV and have a machine for
working and a multimedia machine at the same time. However, I am not satisfied
with the hard disk size of the Mac mini. It is only a matter of time until my
media data exceeds 500GB, and I am not ready to trade the optical drive for
more storage, as in the Mac mini server.</p>
<p>Then the new 27-inch iMac came out and I was instantly stunned by the display.
The low-end 27-inch also has a 1TB hard disk, which should be enough for some
time, a rather strong CPU and a fairly large amount of RAM (did I mention the
display?). So it was almost decided that the new setup would include a 27-inch
iMac. The following weeks consisted of discussing the solution (thanks
<a href="http://twitter.com/0ktan">@0ktan</a> for pointing out the size of the display
several times a day), calculating how to finance it, and making lists to
justify not going with the Mac mini variant. After this (for me fairly normal)
decision process, I was ready to order, and when I was somewhat surprised by a
christmas bonus, I knew it was time for a christmas present for myself. So I
ordered the low-end 27-inch iMac with Apple Care and the new Apple Remote,
only to be notified that it would be delivered between the 24th and 31st of
December (not in time for christmas). The following days I read about broken
displays, supply shortages and graphics card failures, but I was convinced
that this couldn&rsquo;t happen to me and kept thinking about a name for it (really
the hardest part of all). On the following Saturday I received a shipping
confirmation with the 21st of December as the estimated delivery date. It goes
without saying that I was totally excited that the new machine would arrive
before christmas, although the Apple status site still said 23rd to 29th. And
then, on the 18th, the UPS truck finally arrived.</p>
<h3 id="the-screen-real-estate-paradise">The screen real estate paradise</h3>
<p>After the exciting unboxing, the setup was straightforward as usual. Entering
my MobileMe credentials synchronized contacts, calendars, iDisk and such to
the new iMac. I copied my data and applications from my MacBook and the media
files from the Mac mini, and have used the iMac almost exclusively since then
(except for some trips to the couch with the MacBook).</p>
<p>Working with the iMac is just awesome. I can now have a browser window, IDE
and a terminal conveniently next to each other without overlapping. iTunes
with my whole media library loads in seconds, even when I have a virtual
machine running, and the whole setup isn&rsquo;t even that big. Since I have a
rather small desk, I was a bit worried that the iMac would look too big on it
and that there wouldn&rsquo;t be enough space for books and other stuff. But despite
its large screen it consumes hardly more space than the 22-inch display on
its VESA mount.</p>
<p>The mini DisplayPort to HDMI adapter still has to be delivered, so I don&rsquo;t
know yet whether it is irritating to have the TV permanently connected as a
secondary display. If it is, I will have to disconnect the TV manually when I
am working on the iMac. Also, because my desk isn&rsquo;t that big, I have to turn
my head a little to see the edges of the screen, which can be a bit
uncomfortable.</p>
<p>Since these two things can be easily filed under first world problems, I have
to say that I am totally happy with my decision and that it is (again) the
best Mac I have worked on so far.</p>
]]></content>
    <link href="https://unwiredcouch.com/2009/12/24/first-weekend-with-endless-screen-real-estate.html" rel="alternate"/>
	</entry>
  
	<entry>
    <title type="html"><![CDATA[Google Reader to Instapaper bridge]]></title>
    <published>2009-09-19T00:00:00Z</published>
    <updated>2009-09-19T00:00:00Z</updated>
    <id>https://unwiredcouch.com/2009/09/19/google-reader-instapaper.html</id>
    <content type="html"><![CDATA[<h3 id="the-dilemma">The dilemma</h3>
<p>For some years now I have been using <a href="http://google.com/reader">Google Reader</a> as my
reader of choice for news feeds. I still think it&rsquo;s the best way (for me) to
get daily news, although I do recognize that I don&rsquo;t read every feed as
thoroughly as I used to. I have always used the starred items section as a
sort of &ldquo;save for later&rdquo; storage, so I can open all the unread stories in
browser tabs and read them when I have time. This worked out ok, but in recent
months I got more and more used to the
<a href="http://www.instapaper.com">instapaper</a> service, which provides a simple web
frontend to save websites for later and a great <a href="http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=284942713&amp;mt=8">iPhone
app</a>.
This led to the situation that I still starred items in Google Reader, but
then had to open the pages, manually save them to instapaper.com and unstar
them in Google Reader. An alternative is the built-in Google Reader option to
share to instapaper, which also requires some manual actions, and you still
have to unstar the item afterwards.</p>
<h3 id="the-solution">The solution</h3>
<p>So I decided to build a script which would run nightly on a server and pull
all starred items out of Google Reader and into instapaper. Since I try to
improve my python knowledge whenever I can, I decided to build a python
implementation. And since I also wanted to dig into the (unofficial) Google
Reader API, I decided not to use the very good existing <a href="http://code.google.com/p/pyrfeed">python
framework</a>, but to use the API myself. I
had already written a small <a href="http://github.com/mrtazz/InstapaperLibrary">instapaper
library</a> before, which I mainly
used to save articles for later from the command line.</p>
<p>So I sat down and started to write <a href="http://github.com/mrtazz/instareader.py">the
bridge</a>. All in all it took
me a bit more than a month (due to university life, work and exams) to finally
get a basic version working, which is able to retrieve starred items, save
them to instapaper and then remove the star in Google Reader. The script now
runs every hour on a server, which is enough for my needs of keeping
instapaper up to date. It also gives me more choices of iPhone RSS readers
with Google Reader syncing, since Instapaper support is no longer a
requirement for the app. There are still (as always) some things to do, but
at least the manual open, save, unstar actions are gone now.</p>
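<p>The retrieve/save/unstar core of such a bridge is small. Sketched here in ruby (the script itself is python), with the two client objects as stand-ins for the real Google Reader and Instapaper calls:</p>

```ruby
# Sketch of the bridge's retrieve/save/unstar loop. The two clients
# are placeholders: reader must answer starred_items and unstar,
# instapaper must answer add_url. Only items that were saved
# successfully get unstarred, so failed saves stay starred for the
# next run.
def sync_starred(reader, instapaper)
  saved = []
  reader.starred_items.each do |item|
    next unless instapaper.add_url(item[:url])
    reader.unstar(item[:id])
    saved << item[:url]
  end
  saved
end
```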
]]></content>
    <link href="https://unwiredcouch.com/2009/09/19/google-reader-instapaper.html" rel="alternate"/>
	</entry>
  
</feed>
