<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Octopus blog</title>
  <subtitle>Site description.</subtitle>
  <link href="https://octopus.com/blog/feed.xml" rel="self" />
  <link href="https://octopus.com" />
  <id>https://octopus.com/blog/feed.xml</id>
  <updated>2026-04-07T00:00:00.000Z</updated>

    <entry>
      <title>Continuous Delivery Office Hours Ep.3: Branching strategies</title>
      <link href="https://octopus.com/blog/continuous-delivery-office-hours-e3" />
      <id>https://octopus.com/blog/continuous-delivery-office-hours-e3</id>
      <published>2026-04-07T00:00:00.000Z</published>
      <updated>2026-04-07T00:00:00.000Z</updated>
      <summary>Your branching strategy can support Continuous Delivery, or make it an impossible goal. We discuss the pros and cons of different approaches.</summary>
      <author>
        <name>Steve Fenton, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Your branching strategy can support Continuous Delivery, or make it an impossible goal. You should assess the impact of how you branch on your ability to deliver software at all times. You’ll find that some branching techniques work, while others make software delivery more like walking through a field of rakes in the dark.</p>
<p>Continuous Integration is the practice of integrating code changes frequently, typically multiple times a day. This means you should check in your code multiple times a day and keep your main branch deployable. Continuous Integration is a prerequisite for Continuous Delivery.</p>
<p>The name “Continuous Integration” provides solid hints about the crucial parts of the practice. Everyone should <em>integrate</em> their code into a shared branch (feature branches don’t count) and this should be done <em>continuously</em>, which means you’re doing it all the time: at least once a day, but ideally more often.</p>
<p>You’ll often speak to developers who want to stretch the definition of Continuous Integration, but you can’t escape those foundations. Merging the main branch into a long-lived branch feels like Continuous Integration, but you’ll notice no changes come back for ages until someone finally merges to main. Then you have to perform a large, complex merge, which you should avoid.</p>
<p>The <a href="https://dora.dev/capabilities/continuous-delivery/">DORA research on Continuous Delivery</a> includes the capabilities of Continuous Integration and trunk-based development. The statistics suggest you can get similar benefits as long as you limit yourself to 3 (or fewer) short-lived branches (less than a day old).</p>
<h2 id="watch-the-episode">Watch the episode</h2>
<p>You can watch the episode below, or read on to find some of the key discussion points.</p>
<p><a href="https://www.youtube.com/watch?v=WDHZAHGxpR8">Watch Continuous Delivery Office Hours Ep.3</a></p>
<h2 id="the-worst-branching-strategy">The worst branching strategy</h2>
<p>Since the ability to branch code was invented, developers have applied a great deal of creativity to how they use branches. The most common approach is feature branching, where each feature gets a branch off main that evolves separately until the feature is complete and is merged back into main. Release branching allows development to continue on main by taking a cut of the main branch that will be released. Hotfixes can then be applied to the release branch without it being destabilized by ongoing development.</p>
<p>The utility of branching strategies is often eroded by the coordination overhead of maintaining the separate branches and merging different changes back together. The more complex the branching strategy, the more likely it is that you’ll have merge conflicts and lost bug fixes. For example, you might fix a release branch and forget to merge the fix back to main, so the next release reintroduces the bug.</p>
<p>This is why the worst branching strategy is Gitflow. This is a complicated branching strategy that creates dedicated branches for features, releases, and hotfixes alongside permanent main and develop branches. The overhead of Gitflow vastly outweighs its benefits.</p>
<p>This is where trunk-based development shines, as it removes unnecessary complexity.</p>
<h2 id="the-best-strategy-is-trunk-based-development">The best strategy is trunk-based development</h2>
<p>Trunk-based development is the process of making all commits directly to the main branch. This is complemented by out-of-band reviews that don’t block merging and by feature toggles that decouple deployments from releases. It should be possible to deploy your software from the main branch at all times, even if features aren’t complete.</p>
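<p>A feature toggle can be as simple as a guarded code path. Here is a minimal sketch in Python; the toggle store and names are hypothetical, not a specific toggle framework:</p>

```python
# Minimal feature-toggle sketch: unfinished features ship to production
# disabled, so the main branch stays deployable at all times.
TOGGLES = {
    "new-checkout": False,  # merged to main, but not yet released to users
    "dark-mode": True,      # released to users
}

def is_enabled(toggle: str) -> bool:
    """Return whether a feature is released; unknown toggles default to off."""
    return TOGGLES.get(toggle, False)

def render_checkout() -> str:
    # The new code path is merged and deployed, but hidden until the
    # toggle is flipped. Flipping the toggle is the release.
    return "new checkout" if is_enabled("new-checkout") else "legacy checkout"

print(render_checkout())  # legacy checkout until the team releases the feature
```

Separating the deployment (code is in production) from the release (the toggle is on) is what lets incomplete features live safely on main.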
<p>The DORA research allows for up to 3 short-lived branches, which can be useful for teams working remotely (like open-source project teams), who can use branches and pull requests to coordinate their work. Even so, the goal is to keep branches short-lived and to merge frequently.</p>
<p>Trunk-based development is complemented by automated builds and checks that run when code is committed to the main branch. If a problem is found during these checks, the team should prioritize the fix over other development work.</p>
<h2 id="elite-performance-comes-from-small-batches">Elite performance comes from small batches</h2>
<p>Teams with the best software delivery performance work in small steps. Trunk-based development and Continuous Integration are crucial practices for controlling batch size. Problems are discovered sooner and are easier to fix when you only have a small amount of change to reason about.</p>
<p>Making frequent commits makes it easy to back out a bad change. If a test fails, you can discard changes since the last commit and try again, instead of trying to debug the problem. This is especially true when using AI coding assistants or other code generation techniques.</p>
<p>Ultimately, for trunk-based development to succeed without friction, the entire team must be aligned on the process. It doesn’t work if only some of the team are on board.</p>
<p>Happy deployments!</p>
<div class="hint"><p>Continuous Delivery Office Hours is a series of conversations about software delivery, with Tony Kelly, Bob Walker, and Steve Fenton.</p><p>You can find more episodes on <a href="https://www.youtube.com/playlist?list=PLAGskdGvlaw3CrxkUOAMmiy928lr5D4oh">YouTube</a>, <a href="https://podcasts.apple.com/us/podcast/continuous-delivery-office-hours/id1872101651">Apple Podcasts</a>, and <a href="https://pca.st/hwjaox59">Pocket Casts</a>.</p></div>]]></content>
    </entry>
    <entry>
      <title>Proactive Dependency Security Best Practices</title>
      <link href="https://octopus.com/blog/dependency-security" />
      <id>https://octopus.com/blog/dependency-security</id>
      <published>2026-04-06T00:00:00.000Z</published>
      <updated>2026-04-06T00:00:00.000Z</updated>
      <summary>Learn how to proactively manage and secure your software from vulnerabilities and supply chain attacks.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>With the <a href="https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all">news of the supply chain attack on the axios npm package</a>, more than a few DevOps teams will be scrambling to understand their exposure and identify potentially affected applications. These kinds of attacks are just a fact of life, though, and while serious, can be dealt with in a proactive and pragmatic manner with some simple changes to your deployment pipelines.</p>
<p>In this post, I’ll show you how to proactively manage the risk associated with dependencies and supply chain attacks by running daily security scans of a Software Bill of Materials (SBOM) associated with the production deployment of your applications.</p>
<p>You can complete the steps in this post in around 30 minutes.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="creating-the-sample-application">Creating the sample application</h2>
<p>To demonstrate the process of proactively managing and securing your application, we’ll create one of the sample applications provided by the AI Assistant.</p>
<p>Open the AI Assistant and click the <code>Community Dashboards...</code> link:</p>
<p><img src="/blog/img/dependency-security/community-dashboards.png" alt="AI Assistant menu"></p>
<p>Click the <code>Octopus Easy Mode</code> link:</p>
<p><img src="/blog/img/dependency-security/easy-mode.png" alt="Easy Mode option"></p>
<p>Octopus Easy Mode provides the ability to create sample projects based on best practices. We’ll create the sample Kubernetes project. Select the <code>Kubernetes</code> item and click the <code>Execute</code> button:</p>
<p><img src="/blog/img/dependency-security/easy-mode-dashboard.png" alt="Easy Mode option"></p>
<p>Once you review and approve the changes, this results in a project called <code>My K8s WebApp</code> being created in your space, along with all supporting resources like feeds, targets, environments, lifecycles, and accounts.</p>
<p>This sample project demonstrates proactive dependency management, and it starts with lifecycles and environments.</p>
<h2 id="security-environment-and-lifecycles">Security environment and lifecycles</h2>
<p>The AI Assistant created four environments: <code>Development</code>, <code>Test</code>, <code>Production</code>, and <code>Security</code>.</p>
<p>The first three environments represent the typical infrastructure used to host application deployments. The <code>Security</code> environment is a specialized environment that will host our dependency scanning and vulnerability management steps:</p>
<p><img src="/blog/img/dependency-security/environments.png" alt="Octopus environments"></p>
<p>We then have a <code>DevSecOps</code> lifecycle that has four phases, one for each environment. Deployments are executed automatically to the <code>Development</code> and <code>Security</code> environments. Because the <code>Security</code> phase follows <code>Production</code> and deploys automatically, a successful deployment to the <code>Production</code> environment immediately starts a deployment in the <code>Security</code> environment. This will be important later on:</p>
<p><img src="/blog/img/dependency-security/lifecycles.png" alt="Octopus lifecycles"></p>
<h2 id="the-deployment-process">The deployment process</h2>
<p>The deployment process for the sample application involves scanning the SBOM for a given application version. The final step, called <code>Scan for Vulnerabilities</code>, accepts a package containing an SBOM file and scans it with an open-source dependency-scanning tool.</p>
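<p>To make the idea concrete, here is a hedged Python sketch that extracts package coordinates from a CycloneDX-style SBOM and flags any that appear in an advisory list. The advisory data is invented for illustration; a real pipeline would delegate this check to a dedicated open-source scanner:</p>

```python
import json

# Hedged sketch: extract package coordinates from a CycloneDX-style SBOM
# and flag any that appear in a (hypothetical) advisory list.
sbom = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "axios", "version": "1.6.0"},
    {"name": "left-pad", "version": "1.3.0"}
  ]
}
""")

# Hypothetical advisories mapping a package name to its affected versions.
advisories = {"axios": {"1.6.0"}}

def vulnerable_components(sbom: dict, advisories: dict) -> list[str]:
    """Return name@version strings for components with a matching advisory."""
    return [
        f'{c["name"]}@{c["version"]}'
        for c in sbom.get("components", [])
        if c["version"] in advisories.get(c["name"], set())
    ]

print(vulnerable_components(sbom, advisories))  # ['axios@1.6.0']
```

Because the SBOM captures the exact dependencies of the deployed version, rerunning this check daily catches advisories published after the deployment happened.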
<p>Notably, the steps that deploy the application (<code>Deploy a Kubernetes Web App via YAML</code> in this example) skip the <code>Security</code> environment. This means deployments to all environments perform the security scan, but deployments to the <code>Security</code> environment <em>only</em> scan the SBOM file:</p>
<p><img src="/blog/img/dependency-security/deployment-process.png" alt="Octopus deployment process"></p>
<p>The end result of this deployment process is that every deployment to every environment performs an SBOM security scan, and once a deployment to the <code>Production</code> environment succeeds, a deployment is immediately triggered in the <code>Security</code> environment.</p>
<p>This initial sequence of a deployment to <code>Production</code> followed by a deployment to <code>Security</code> is not particularly useful on its own, as it is unlikely that a new vulnerability will have been disclosed in the seconds between the two deployments.</p>
<p>However, the deployment to the <code>Security</code> environment can then be rerun as part of a trigger.</p>
<h2 id="rerunning-the-security-scan-as-a-trigger">Rerunning the security scan as a trigger</h2>
<p>The sample project also includes a <code>Daily Security Scan</code> trigger. This trigger reruns the deployment in the <code>Security</code> environment once per day:</p>
<p><img src="/blog/img/dependency-security/triggers.png" alt="Octopus triggers"></p>
<p>This results in a daily scan of the dependencies that contributed to the version of your application in the <code>Production</code> environment.</p>
<p>This demonstrates how a Continuous Deployment (CD) tool like Octopus complements Continuous Integration (CI) tools to implement DevSecOps. Because Octopus knows exactly which versions of your applications are currently in production (as opposed to the state of a dependency lock file in Git, which may not reflect code deployed to production), it can scan application dependencies to catch vulnerabilities as soon as they are discovered.</p>
<p>You can then automatically respond to any vulnerability reports with custom steps like email alerts or messages sent to a chat platform, and proactively address any issues in a predictable and controlled manner.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>By following along with this post, you created a sample Kubernetes application with:</p>
<ul>
<li>SBOM security scanning steps</li>
<li>Environments and lifecycles that support proactive security management</li>
<li>A trigger that performs a daily security scan of the dependencies in your production application</li>
</ul>
<p>This pattern provides DevOps teams with a proactive process to respond to known dependency vulnerabilities and complements other security practices such as SAST/DAST scanning at CI-time, dependency management, and prioritization.</p>]]></content>
    </entry>
    <entry>
      <title>Practical Platform Engineering in 5 Lunches: 5. Policies</title>
      <link href="https://octopus.com/blog/platform-engineering-lunch-5" />
      <id>https://octopus.com/blog/platform-engineering-lunch-5</id>
      <published>2026-04-03T00:00:00.000Z</published>
      <updated>2026-04-03T00:00:00.000Z</updated>
      <summary>Learn how to use policies to provide guardrails for projects to ensure they are compliant with organizational standards.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Practical Platform Engineering in 5 Lunches is a series of blog posts that takes you through the process of building a hands-on Internal Developer Platform (IDP) using Octopus Deploy.</p>
<p>This is part 5 of the series. In the <a href="/blog/platform-engineering-lunch-4">previous post</a>, you created a project structure that supports a shared responsibility model, with multiple teams owning different stages of the deployment process.</p>
<p>In this post, you’ll learn about policies, which provide guardrails for projects to ensure they are compliant with organizational standards.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="exploring-the-sample-policy">Exploring the sample policy</h2>
<p>The mock Git repo shares sample policies that you can apply to your projects. Open Platform Hub, click <code>Policies</code>, and click the <code>Manual Intervention Required</code> policy:</p>
<p><img src="/blog/img/platform_engineering_lunch_5/policies.png" alt="Sample manual intervention policy"></p>
<p>This sample policy requires that any deployment include a manual intervention step. To activate the policy, it must be published and set to active:</p>
<p><img src="/blog/img/platform_engineering_lunch_5/publish.png" alt="Publishing a policy"></p>
<p>To trigger a policy violation, you must delete the manual intervention step from the deployment process. Open the process for the <code>K8s Web App</code> project, delete the <code>Manual Intervention</code> step, and save the project:</p>
<p><img src="/blog/img/platform_engineering_lunch_5/delete-step.png" alt="Manual intervention step removed"></p>
<p>Deploy a release. The deployment will fail because the process no longer includes a manual intervention step, which violates the policy you just published:</p>
<p><img src="/blog/img/platform_engineering_lunch_5/error.png" alt="Deployment failed due to policy violation"></p>
<p>This demonstrates how platform teams can enforce organizational standards and provide guardrails for projects using policies. Policies are another example of architectural decisions shared and maintained at scale through Platform Hub to support DevOps teams.</p>
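<p>Conceptually, such a policy is just a predicate over the deployment process. A minimal Python sketch follows; the process representation and the action type name are simplified illustrations, not the actual policy engine:</p>

```python
# Hedged sketch of a "manual intervention required" policy, evaluated
# against a simplified, hypothetical representation of a process.
def violates_policy(process: dict) -> bool:
    """True if no step in the process is a manual intervention step."""
    return not any(
        step.get("actionType") == "Octopus.Manual"
        for step in process.get("steps", [])
    )

# A process with the manual intervention step deleted, as in this post.
process = {
    "steps": [
        {"name": "Deploy a Kubernetes Web App via YAML",
         "actionType": "Octopus.KubernetesDeployRawYaml"},
    ]
}

print(violates_policy(process))  # True: no manual intervention step remains
```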
<p>Teams that encounter an error due to a policy violation can use the <code>Suggest a fix</code> button. This feature scans the deployment logs, passes the content to an LLM, and provides guidance on resolving the issue:</p>
<p><img src="/blog/img/platform_engineering_lunch_5/suggest-a-fix.png" alt="Suggest a fix"></p>
<h2 id="what-just-happened">What just happened?</h2>
<p>At the end of this post, you have:</p>
<ul>
<li>Published a policy that requires all deployment processes to include a manual intervention step.</li>
<li>Modified the deployment process for the <code>K8s Web App</code> project to intentionally violate the policy.</li>
<li>Attempted to deploy a release, which failed because the deployment process violated the policy.</li>
<li>Used the <code>Suggest a fix</code> button to resolve the policy violation.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>Congratulations! You have completed the blog series.</p>
<p>By following along with the series, you have built a hands-on Internal Developer Platform (IDP) using Octopus Deploy. You have also experienced how the strong opinions baked into Platform Hub complement the traditional role of CI servers, enabling platform teams to share and maintain architectural decisions at scale.</p>]]></content>
    </entry>
    <entry>
      <title>Practical Platform Engineering in 5 Lunches: 4. Shared Responsibility of Projects</title>
      <link href="https://octopus.com/blog/platform-engineering-lunch-4" />
      <id>https://octopus.com/blog/platform-engineering-lunch-4</id>
      <published>2026-04-02T00:00:00.000Z</published>
      <updated>2026-04-02T00:00:00.000Z</updated>
      <summary>Learn how to structure projects to support multiple teams owning different parts of the deployment process.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Practical Platform Engineering in 5 Lunches is a series of blog posts that takes you through the process of building a hands-on Internal Developer Platform (IDP) using Octopus Deploy.</p>
<p>This is part 4 of the series. In the <a href="/blog/platform-engineering-lunch-3">previous post</a>, you pushed changes to a shared process template in Platform Hub and saw how those changes were automatically consumed by the project that used the template.</p>
<p>In this post, you’ll build a project that supports a shared responsibility model, with multiple teams owning different stages of a deployment.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="updating-the-project-structure">Updating the project structure</h2>
<p>Open the process for the <code>K8s Web App</code> project. You already have the <code>Platform PreDeploy Hook</code> step template as the first step. This effectively means the platform team owns the start of the deployment process.</p>
<div class="hint"><p>You must publish process templates before they can be used in a project. See the <a href="/blog/platform-engineering-lunch-2">second post</a> for an example of publishing process templates.</p></div>
<p>Add the <code>Security PreDeploy Hook</code> process template next. This gives the security team the opportunity to implement any security checks or processes that must be run before the deployment can proceed.</p>
<p>Add the <code>Security PostDeploy Hook</code> and <code>Platform PostDeploy Hook</code> process templates as the final two steps. This allows the security and platform teams to implement any checks or processes that must run after the deployment completes.</p>
<p>All process template steps default to the name <code>Run a Process Template</code>, but each step must have a unique name before you can save the process. Rename each process template step under the <code>Settings</code> tab by entering a new name in the <code>Name</code> field:</p>
<p><img src="/blog/img/platform_engineering_lunch_4/process-template-name.png" alt="Process template step name"></p>
<p>You must also configure the worker pool parameter for each of the process templates.</p>
<p>With these steps in place, you have modeled your project to support the cross-cutting concerns of both the platform and security teams. This embraces Conway’s law, which posits that organizations design systems that mirror their own communication structure. Conway’s law is sometimes seen as an unintended consequence arising in organizations, but regardless, it is a common pattern that platform teams must support.</p>
<p>By wrapping deployment processes in shared templates like an onion, each team can own and maintain its part of the process. Combine this approach with Git-based workflows for managing changes to process templates and SemVer for quantifying the impact of those changes, and you have a powerful combination of features for managing process templates at scale:</p>
<p><img src="/blog/img/platform_engineering_lunch_4/process-editor.png" alt="Publishing project process"></p>
<p>The onion model is just one example of how process templates can be used. You are, of course, free to create process templates defining more general deployment processes that don’t necessarily fit the pre/post-hook model. The key point is that process templates provide a powerful mechanism for sharing and maintaining architectural decisions, and Platform Hub provides strong opinions to support platform teams in managing those architectural decisions at scale.</p>
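<p>The onion structure described above can be sketched as an ordered list of steps, with the platform and security hooks wrapping the application deployment. The step names here are illustrative, not real process templates:</p>

```python
# Hedged sketch of the "onion" model: platform and security hooks wrap
# the application deployment, so each team owns its layer of the process.
def run_process(steps):
    """Run each step in order and collect its result."""
    return [step() for step in steps]

def platform_pre():  return "platform: pre-deploy checks"
def security_pre():  return "security: pre-deploy scan"
def deploy_app():    return "app team: deploy web app"
def security_post(): return "security: post-deploy scan"
def platform_post(): return "platform: post-deploy checks"

# The process steps, ordered as described in this post.
steps = [platform_pre, security_pre, deploy_app, security_post, platform_post]
for line in run_process(steps):
    print(line)
```

Each team edits only its own hook functions (templates), while the overall ordering stays fixed by the project structure.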
<h2 id="what-just-happened">What just happened?</h2>
<p>At the end of this post, you have:</p>
<ul>
<li>Updated the project structure to support a shared responsibility model, with multiple teams owning different stages of the deployment process.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>You are 80% done with the series. In the <a href="/blog/platform-engineering-lunch-5">next and final post</a>, you’ll explore policies, which provide guardrails for projects to ensure they are compliant with organizational standards.</p>]]></content>
    </entry>
    <entry>
      <title>Practical Platform Engineering in 5 Lunches: 3. Pushing Template Changes</title>
      <link href="https://octopus.com/blog/platform-engineering-lunch-3" />
      <id>https://octopus.com/blog/platform-engineering-lunch-3</id>
      <published>2026-04-01T00:00:00.000Z</published>
      <updated>2026-04-01T00:00:00.000Z</updated>
      <summary>Edit and push changes to process templates in Platform Hub.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Practical Platform Engineering in 5 Lunches is a series of blog posts that takes you through the process of building a hands-on Internal Developer Platform (IDP) using Octopus Deploy.</p>
<p>This is part 3 of the series. In the <a href="/blog/platform-engineering-lunch-2">previous post</a>, you configured a Kubernetes deployment project and began exploring the features of Platform Hub.</p>
<p>In this post, you’ll modify a shared process template in Platform Hub, publish the changes, and consume those changes in your project.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="updating-a-shared-template">Updating a shared template</h2>
<p>Open up Platform Hub, select the <code>Platform PreDeploy Hook</code> process template, edit the script step, and click the <code>Commit Only</code> button:</p>
<p><img src="/blog/img/platform_engineering_lunch_3/update-step-template.png" alt="Editing Process Templates"></p>
<p>This commits your changes to Git. By default, your commits go to the <code>main</code> branch, but you can create or select a different branch to commit your changes to. This allows you to implement Git-based workflows for managing changes to your shared templates.</p>
<p>Leaning into Git-based workflows is an example of a strong opinion implemented by Platform Hub. Any team that has reached the point of defining and distributing shared templates has already adopted Git and is comfortable with pull/merge requests, branching strategies, and so on. By using Git as the underlying mechanism for managing changes to shared templates, Platform Hub allows teams to leverage existing knowledge and workflows instead of forcing them to learn a new system for managing changes:</p>
<p><img src="/blog/img/platform_engineering_lunch_3/branch-selection.png" alt="Editing Process Templates"></p>
<p>You must publish your changes to make them available to projects that wish to consume the shared template.</p>
<p>The use of SemVer to version shared templates is another strong opinion Platform Hub provides for platform teams. When publishing, you must categorize each change as major, minor, or patch, making its impact explicit.</p>
<p>If you recall from the previous blog post, the <code>K8s Web App</code> project is configured to consume either minor or patch updates to the <code>Platform PreDeploy Hook</code> template. In this example, you’ll publish a patch change, which means the change is non-breaking and is safe to apply to all projects.</p>
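<p>The "minor or patch only" rule can be sketched as a simple version comparison. This is an illustration of the idea, not how Platform Hub implements it:</p>

```python
# Hedged sketch: under the "minor or patch only" rule, an update is
# auto-consumed only if it is newer and the major version is unchanged.
def auto_consumable(current: str, candidate: str) -> bool:
    cur = tuple(int(p) for p in current.split("."))
    new = tuple(int(p) for p in candidate.split("."))
    return new > cur and new[0] == cur[0]

print(auto_consumable("1.2.3", "1.2.4"))  # True: patch update
print(auto_consumable("1.2.3", "1.3.0"))  # True: minor update
print(auto_consumable("1.2.3", "2.0.0"))  # False: major (breaking) change
```

A major bump signals a breaking change, so consuming projects must opt in explicitly rather than receive it automatically.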
<p>Click the <code>Publish</code> button, select the <code>Patch</code> option, and click the <code>Publish</code> button:</p>
<p><img src="/blog/img/platform_engineering_lunch_3/publish-changes.png" alt="Publishing Process Templates"></p>
<p>Once published, the new version of the step template is automatically consumed by the <code>K8s Web App</code> project:</p>
<p><img src="/blog/img/platform_engineering_lunch_3/updated-process.png" alt="The updated process template"></p>
<p>Although this example only applied the updates to a process template in a single project, the workflow scales to any number of projects across any number of spaces. This demonstrates the core requirement of an IDP: to provide a repository of architectural decisions (in the form of process templates) and distribute them at scale.</p>
<p>Platform Hub also demonstrates the value of a dedicated CD platform, like Octopus, when paired with traditional CI platforms, offering strong opinions that support platform teams as they create, distribute, and maintain architectural decisions.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>At the end of this post, you have:</p>
<ul>
<li>Updated a shared process template in Platform Hub and published the changes.</li>
<li>Verified that the changes appear in the project that consumes the shared template.</li>
<li>Noted the strong opinions that Platform Hub provides to support platform teams in managing shared templates.</li>
<li>Learned how Platform Hub complements traditional CI platforms.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>You are 60% done with the series. In the <a href="/blog/platform-engineering-lunch-4">next post</a>, you’ll structure the process templates exposed by Platform Hub in a project to support multiple teams that own different parts of the deployment process.</p>]]></content>
    </entry>
    <entry>
      <title>Practical Platform Engineering in 5 Lunches: 2. Your First Project</title>
      <link href="https://octopus.com/blog/platform-engineering-lunch-2" />
      <id>https://octopus.com/blog/platform-engineering-lunch-2</id>
      <published>2026-03-31T00:00:00.000Z</published>
      <updated>2026-03-31T00:00:00.000Z</updated>
      <summary>Create your first project and begin exploring the features of Platform Hub.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Practical Platform Engineering in 5 Lunches is a series of blog posts that takes you through the process of building a hands-on Internal Developer Platform (IDP) using Octopus Deploy.</p>
<p>This is part 2 of the series. In the <a href="/blog/platform-engineering-lunch-1">previous post</a>, you set up your Octopus instance and learned about the core concepts we’ll be using throughout the series.</p>
<p>In this post, you’ll create a simple Kubernetes deployment project and begin exploring the features of Platform Hub.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="what-is-the-ai-assistant">What is the AI Assistant?</h2>
<p>The AI Assistant is an extension of Octopus that enables several AI-powered features. You’ll use the AI Assistant to build a sample Kubernetes project and configure Platform Hub against a mock Git repository.</p>
<h2 id="creating-the-project">Creating the project</h2>
<p>Click the AI Assistant icon in the bottom right-hand corner of the screen to open the AI Assistant chat. Then paste the following prompt into the chat and run it:</p>
<div class="hint"><p>You will need to skip the welcome screen on a fresh instance of Octopus.</p></div>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Create a Kubernetes project called "K8s Web App", and then:</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Use client side apply in the Kubernetes step (the mock Kubernetes cluster only supports client side apply).</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Disable verification checks in the Kubernetes steps (the mock Kubernetes cluster doesn't support verification checks).</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Enable retries on the K8s deployment step.</span></span>
<span class="line"></span>
<span class="line"><span style="color:#800000;font-weight:bold">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a token account called "Mock Token".</span></span>
<span class="line"></span>
<span class="line"><span style="color:#800000;font-weight:bold">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a feed called "Docker Hub" pointing to "https://index.docker.io" using anonymous authentication.</span></span>
<span class="line"></span>
<span class="line"><span style="color:#800000;font-weight:bold">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a Kubernetes target with the tag "Kubernetes", the URL https://mockk8s.octopus.com, using the health check container image "octopusdeploy/worker-tools:6.5.0-ubuntu.22.04" from the "Docker Hub" feed, using the token account, and the "Hosted Ubuntu" worker pool.</span></span></code></pre>
<div class="hint"><p>The document separator (<code>---</code>) is used to split the prompt into multiple sections. Each section is applied sequentially, which allows you to create different types of resources in a single prompt.</p></div>
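<p>As an illustration only (not the AI Assistant’s actual implementation), splitting a prompt into sequentially applied sections on <code>---</code> lines might look like this:</p>

```python
# Illustrative sketch: split a prompt into sections on lines containing
# only "---". This is NOT the AI Assistant's actual implementation.
def split_prompt(prompt: str) -> list[str]:
    """Return the non-empty sections of a prompt, split on '---' lines."""
    sections, current = [], []
    for line in prompt.splitlines():
        if line.strip() == "---":
            sections.append("\n".join(current).strip())
            current = []
        else:
            current.append(line)
    sections.append("\n".join(current).strip())
    return [s for s in sections if s]

prompt = """Create a project.

---

Create a token account.

---

Create a feed."""

# Each section would be applied in order, one after the other.
for section in split_prompt(prompt):
    print(section)
```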
<p>You can review the changes the AI Assistant proposes to make to your Octopus instance before approving them. After you approve the changes, the AI Assistant creates a new project called <code>K8s Web App</code> in your Octopus instance with the specified configuration.</p>
<div class="hint"><p>You can enable the auto-apply option in the AI Assistant settings to automatically apply changes.</p></div>
<div class="hint"><p>You may need to refresh the page to show the newly created project.</p></div>
<p>The mock Kubernetes cluster at <a href="https://mockk8s.octopus.com">https://mockk8s.octopus.com</a> exposes just enough of the Kubernetes API to allow Octopus to complete a deployment. No actual Kubernetes resources are created, and the server resets itself every few minutes. This allows you to explore Octopus’s Kubernetes deployment features without needing access to a real Kubernetes cluster.</p>
<h2 id="configuring-platform-hub">Configuring Platform Hub</h2>
<p>Enter this prompt into the AI Assistant to configure Platform Hub with a mock Git repository:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Configure the Platform Hub git repo to point to https://mockgit.octopus.com/repo/platformhubrepo using the ".octopus" base path.</span></span></code></pre>
<p>The mock Git repository at <a href="https://mockgit.octopus.com/repo/platformhubrepo">https://mockgit.octopus.com/repo/platformhubrepo</a> provides a pre-configured repository with sample Platform Hub resources. The repository contents are reset periodically, allowing you to explore Platform Hub’s features without setting up your own Git repository or providing credentials.</p>
<h2 id="exploring-process-templates">Exploring process templates</h2>
<p>Open Platform Hub from the left-hand menu:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/platform-hub.png" alt="Platform Hub link"></p>
<p>You will then be taken to the list of sample process templates preconfigured in the mock Git repository. Open the <code>Platform PreDeploy Hook</code> process template:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/process-template.png" alt="Process Template link"></p>
<p>You’ll explore the patterns around these sample process templates in a subsequent post. For now, you’ll experience the process of publishing and sharing a template.</p>
<p>Click the <code>Publish</code> button to publish the process template to your Octopus instance:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/publish.png" alt="Publish template"></p>
<p>The first version of a template is always <code>1.0.0</code>. Click the <code>Publish</code> button to publish the template with the selected version:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/publish-dialog.png" alt="Publish template dialog"></p>
<p>Templates must also be shared with a space, or all spaces, before projects can consume them. Click the kebab menu and then click the <code>Share</code> button:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/share.png" alt="Share template"></p>
<p>Select the spaces you wish to share the template with (you can select all spaces if you wish), and then click the <code>Share</code> button:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/share-dialog.png" alt="Share template dialog"></p>
<p>Your process template is now published, shared, and ready to be consumed in a deployment process.</p>
<p>Open the <code>K8s Web App</code> project that you created earlier, and click the <code>Process</code> tab. Click the <code>Add a step</code> button, click <code>Process Templates</code> under the filter category, and click the <code>Add Template</code> button on the <code>Platform PreDeploy Hook</code> template that you just published:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/process-editor.png" alt="Process editor"></p>
<p>You are prompted to select the changes you wish to opt into automatically. Select <code>Accept minor changes</code> to accept any change to the <code>y</code> and <code>z</code> components of the <code>x.y.z</code> SemVer version scheme used by Platform Hub. Alternatively, you can select <code>Accept patches</code> to only accept changes to the <code>z</code> component.</p>
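<p>As a rough sketch of the acceptance rules described above, assuming plain <code>x.y.z</code> version strings (illustrative only, not Platform Hub’s implementation):</p>

```python
# A sketch of the update rules described above, assuming plain "x.y.z"
# version strings. Not Platform Hub's actual implementation.
def parse(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def auto_accept(current: str, candidate: str, policy: str) -> bool:
    """Return True if the candidate version is applied automatically."""
    cur, cand = parse(current), parse(candidate)
    if cand <= cur:
        return False  # never move backwards
    if policy == "minor":
        return cand[0] == cur[0]    # same major: y and z may change
    if policy == "patch":
        return cand[:2] == cur[:2]  # same major and minor: only z may change
    return False                    # anything else needs a manual update

print(auto_accept("1.0.0", "1.1.0", "minor"))  # True
print(auto_accept("1.0.0", "1.0.3", "patch"))  # True
print(auto_accept("1.0.0", "2.0.0", "minor"))  # False: major bumps are manual
```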
<p>The ability to automatically opt into new versions of a process template is a strong opinion of Platform Hub. It allows platform teams to update and distribute architectural decisions, in the form of process templates, at scale with the click of a button:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/add-template-dialog.png" alt="Add template dialog"></p>
<p>The process template exposes a parameter called <code>PostDeployHook.WorkerPool</code> that defines the worker pool on which the step is run. Assign the <code>Hosted Ubuntu</code> worker pool to this parameter:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/template-parameter.png" alt="Process template parameter"></p>
<p>Because this template is designed to run at the start of a deployment process, you need to reorder the steps so the new step comes before the <code>Deploy Kubernetes YAML</code> step:</p>
<p><img src="/blog/img/platform_engineering_lunch_2/reorder-steps.png" alt="Reorder steps"></p>
<p>Save the changes to the project, create a release, and deploy it to see the new step in action.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>At the end of this post, you have:</p>
<ul>
<li>Created a new Kubernetes deployment project using the AI Assistant.</li>
<li>Configured Platform Hub against a mock Git repository containing sample resources.</li>
<li>Published a process template from Platform Hub and shared it with the default space.</li>
<li>Added a step to your deployment process using the shared process template.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>You are 40% done with the series. In the <a href="/blog/platform-engineering-lunch-3">next post</a>, you’ll explore how to update and maintain shared resources in Platform Hub by making changes to the process template in the mock Git repository, publishing the changes with versioning, and consuming those changes in your project.</p>]]></content>
    </entry>
    <entry>
      <title>Practical Platform Engineering in 5 Lunches: 1. Getting Started</title>
      <link href="https://octopus.com/blog/platform-engineering-lunch-1" />
      <id>https://octopus.com/blog/platform-engineering-lunch-1</id>
      <published>2026-03-30T00:00:00.000Z</published>
      <updated>2026-03-30T00:00:00.000Z</updated>
      <summary>Get started with Platform Engineering in this introductory post to the Platform Engineering in 5 Lunches series.</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Practical Platform Engineering in 5 Lunches is a blog post series that walks you through building a hands-on Internal Developer Platform (IDP) with Octopus Deploy.</p>
<p>These blog posts can be completed during your lunch break, with each post taking about 30 minutes. Notably, you don’t need any passwords or any other platforms beyond a free trial instance of Octopus.</p>
<p>By the end of the series, you’ll have a fully functional IDP that you can use as a reference for building your own Platform Engineering solutions.</p>
<p>In this introductory post, we’ll set up the basic prerequisites and introduce some of the core concepts we’ll use throughout the series.</p>
<h2 id="getting-an-octopus-instance">Getting an Octopus instance</h2>
<p>Sign up for a free trial of Octopus Cloud at <a href="https://octopus.com/start">https://octopus.com/start</a>. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.</p>
<p>Then install the Octopus AI Assistant Chrome extension from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</p>
<h2 id="defining-common-terms">Defining common terms</h2>
<p>We’ll need a common language to discuss the concepts we’ll use throughout the series.</p>
<h3 id="what-is-devops">What is DevOps?</h3>
<p>This is perhaps a controversial opinion, but DevOps has lost its meaning. Any term that becomes the target of Search Engine Optimization (SEO) ceases to be a useful term. DevOps is unfalsifiable - literally any practice, tool, or process used by technical teams can be labelled as DevOps.</p>
<p>So I’ll use DevOps as a broad term referring to teams and processes that work with technology.</p>
<h3 id="what-is-platform-engineering">What is Platform Engineering?</h3>
<p>Platform Engineering is the practice of scaling architectural decisions.</p>
<p>That’s it.</p>
<p>I like this definition of architecture from “Objects, Components, and Frameworks With UML: The Catalysis Approach” by Desmond D’Souza and Alan Wills, which defines “architecture” as:</p>
<blockquote>
<p>The set of design decisions about any system (or smaller component) that keeps its implementors and maintainers from exercising needless creativity.</p>
</blockquote>
<p>More simply, Platform Engineering provides common solutions to common problems so your teams can focus on solving valuable problems.</p>
<h3 id="what-is-an-internal-developer-platform-idp">What is an Internal Developer Platform (IDP)?</h3>
<p>An Internal Developer Platform (IDP) comprises the tools, systems, and processes you use to distribute and maintain architectural decisions.</p>
<p>That’s it.</p>
<p>It can be a wiki, a set of scripts in a shared repository, or a fully fledged self-service portal. The specific implementation doesn’t matter, as long as it provides a way to share and maintain architectural decisions.</p>
<p>We’ll use Octopus as a repository for architectural decisions that can be updated and distributed to DevOps teams.</p>
<h2 id="how-does-platform-engineering-relate-to-cicd">How does Platform Engineering relate to CI/CD?</h2>
<p>Continuous Integration and Continuous Delivery (CI/CD) focuses on delivering software to consumers. Platform Engineering streamlines CI/CD processes by providing a common set of tools and processes for building, testing, and deploying software. This allows teams to focus on delivering value to customers rather than building and maintaining every aspect of their CI/CD pipelines.</p>
<h2 id="how-does-octopus-support-platform-engineering">How does Octopus support Platform Engineering?</h2>
<p>Octopus provides <a href="https://octopus.com/use-case/platform-hub">Platform Hub</a>, which is an opinionated solution for defining, maintaining, and distributing shared processes and policies to support the delivery and maintenance of software.</p>
<p>Platform Hub is based on Git and uses Git-based workflows to enable the development, testing, and approval of changes to shared processes and policies.</p>
<p>Platform Hub resources are versioned using SemVer and distributed to consumers in accordance with the semantic meanings of major, minor, and patch versions. Consumers can choose to automatically consume the latest version of a resource or manually update to new versions when they are released.</p>
<p>These features allow Platform Hub to define a set of architectural decisions (in the form of process templates and policies) that are maintained and distributed to consumers at scale.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>At the end of this post, you have:</p>
<ul>
<li>Created a trial Octopus instance and installed the Octopus AI Assistant Chrome extension.</li>
<li>Learned the basic concepts of DevOps, Platform Engineering, and Internal Developer Platforms (IDPs).</li>
<li>Learned the relationship between Platform Engineering and CI/CD.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>You are 20% done with the series. You now have a trial instance of Octopus, the AI Assistant Chrome extension, and an understanding of the core concepts we’ll be using throughout the series.</p>
<p>In the <a href="/blog/platform-engineering-lunch-2">next post</a>, you’ll create a simple Kubernetes deployment project and begin exploring the features of Platform Hub.</p>]]></content>
    </entry>
    <entry>
      <title>Verified Argo CD deployments</title>
      <link href="https://octopus.com/blog/argo-cd-verified-deployments" />
      <id>https://octopus.com/blog/argo-cd-verified-deployments</id>
      <published>2026-03-24T00:00:00.000Z</published>
      <updated>2026-03-24T00:00:00.000Z</updated>
<summary>You can now configure Octopus to verify that Argo CD applications are healthy before a deployment completes</summary>
      <author>
        <name>Frank Lin, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Since <a href="/blog/argo-cd-in-octopus">Argo CD in Octopus</a> was released in <strong>Early Access</strong> in 2025, we’ve been incrementally adding new features to make the integration even more useful. This blog post is a deep dive into the new <strong>step verification</strong> feature that lets you wait for the updated Argo CD applications to be healthy before the step in Octopus completes.</p>
<h2 id="step-verification">Step verification</h2>
<p>Previously, Octopus would consider a step complete once changes were pushed to Git.</p>
<p>New options in the step editor now let you customize this behavior. They’re listed under <strong>Step verification</strong>:</p>
<ul>
<li><strong>Direct commit</strong>: Progress to the next step once changes are pushed to Git (this is the existing behavior)</li>
<li><strong>Pull request merged</strong>: Progress to the next step once pull requests are merged, or fail the step if pull requests are closed or abandoned. This option results in a no-op if changes are committed directly without a pull request (see Git commit method)</li>
<li><strong>Argo CD application is healthy</strong>: Progress to the next step once all the Argo CD applications have synced the new changes and the applications are in a healthy state. Choosing this setting means your Octopus dashboard will accurately reflect the version and status of your applications deployed to the cluster</li>
</ul>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/step-verification-options.png" alt="Step verification options"></p></figure>
<p>The task is paused while Octopus waits for pull requests to be merged or for Argo CD applications to be healthy. This means the task does not count towards your instance task cap.</p>
<h2 id="trigger-sync">Trigger sync</h2>
<p>Turning on this option will trigger Argo CD to explicitly sync applications with the changes committed to Git by this same step.</p>
<p>If the application has auto-sync turned off, then triggering sync ensures Argo CD will look at the latest changes in Git when verifying application health.</p>
<p>If the application has auto-sync turned on, then triggering sync speeds up the deployment because Octopus does not have to wait for the next Argo CD refresh loop.</p>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/trigger-sync-options.png" alt="Trigger sync options"></p></figure>
<h2 id="when-is-the-application-synced-and-healthy">When is the application synced and healthy?</h2>
<p>When verifying that the application is healthy after a change, we first need to check whether it references the changes we just made. Unfortunately, we can’t rely on Argo CD’s sync status alone, since Argo CD doesn’t know what Octopus’s intended changes are.</p>
<p>Let’s go through a few scenarios:</p>
<h3 id="scenario-1-all-synced">Scenario 1: All synced</h3>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/1-same-commit.png" alt="Same commit"></p></figure>
<ol>
<li>Octopus commits <code>97A2</code></li>
<li>Argo CD refreshes to <code>97A2</code> and syncs the changes to the cluster</li>
</ol>
<p>Sync status:</p>
<ul>
<li>Argo CD: <strong>In sync</strong></li>
<li>Octopus: <strong>In sync</strong></li>
</ul>
<p>This is the simplest scenario where all parties are looking at the same commit, so everyone is <strong>In sync</strong>.</p>
<h3 id="scenario-2-out-of-sync">Scenario 2: Out of sync</h3>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/2-argo-out-of-sync.png" alt="Argo out of sync"></p></figure>
<ol>
<li>Octopus commits <code>97A2</code></li>
<li>Argo CD refreshes to <code>97A2</code> but has yet to sync the changes to the cluster</li>
</ol>
<p>Sync status:</p>
<ul>
<li>Argo CD: <strong>Out of sync</strong></li>
<li>Octopus: <strong>Out of sync</strong></li>
</ul>
<p>Even though Octopus and Argo CD are looking at the same commit, the changes have not yet been applied to the cluster, so Octopus still shows <strong>Out of sync</strong>.</p>
<h3 id="scenario-3-octopus-is-ahead-of-argo-cd">Scenario 3: Octopus is ahead of Argo CD</h3>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/3-octopus-ahead.png" alt="Octopus is ahead of Argo"></p></figure>
<ol>
<li>Octopus commits <code>8DEF</code></li>
<li>Argo CD has yet to refresh, so it still considers <code>97A2</code> to be the latest</li>
</ol>
<p>Sync status:</p>
<ul>
<li>Argo CD: <strong>In sync</strong></li>
<li>Octopus: <strong>Git drift</strong></li>
</ul>
<p>In this scenario, Octopus has made a change that Argo CD doesn’t see yet. Here we introduce a concept called <strong>Git drift</strong> - this means even though everything looks up to date from Argo CD’s perspective, the changes made by Octopus aren’t in the cluster.</p>
<h3 id="scenario-4-external-change-overwrites-octopus-generated-changes">Scenario 4: External change overwrites Octopus-generated changes</h3>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/4-user-change-related.png" alt="External change overwrites Octopus-generated changes"></p></figure>
<ol>
<li>Octopus commits <code>97A2</code></li>
<li>Another process commits <code>1123</code> with contents overwriting Octopus-generated changes</li>
<li>Argo CD refreshes to <code>1123</code> and syncs the changes to the cluster</li>
</ol>
<p>Sync status:</p>
<ul>
<li>Argo CD: <strong>In sync</strong></li>
<li>Octopus: <strong>Git drift</strong></li>
</ul>
<p>This scenario also results in <strong>Git drift</strong> because a later commit overwrites Octopus’s changes - an example would be the user updating image tags that Octopus updated.</p>
<h3 id="scenario-5-external-change-is-unrelated-to-octopus-generated-changes">Scenario 5: External change is unrelated to Octopus-generated changes</h3>
<figure><p><img src="/blog/img/argo-cd-verified-deployments/5-user-change-unrelated.png" alt="External change is unrelated to Octopus-generated changes"></p></figure>
<ol>
<li>Octopus commits <code>97A2</code></li>
<li>Another process commits <code>1124</code> with contents unrelated to Octopus-generated changes</li>
<li>Argo CD refreshes to <code>1124</code> and syncs the changes to the cluster</li>
</ol>
<p>Sync status:</p>
<ul>
<li>Argo CD: <strong>In sync</strong></li>
<li>Octopus: <strong>In sync</strong></li>
</ul>
<p>This scenario is similar to the previous one, but here the later commit only contains unrelated changes - an example would be the user updating the replica count after Octopus updates the image tags. Since the Octopus-generated changes are still in the cluster, Octopus displays <strong>In sync</strong>.</p>
<h2 id="how-does-octopus-know-what-changes-are-intended">How does Octopus know what changes are intended?</h2>
<p>Since Octopus pushed the changes to the Git repository, it can keep track of the intended changes.</p>
<p>The two Argo CD steps have different functionality, so the way they record the intended changes is different.</p>
<h3 id="update-argo-cd-application-image-tags">Update Argo CD Application Image Tags</h3>
<p>This step updates the image tags in the manifests. To track changes, Octopus records JSON patches for the files it updates.</p>
<p>When detecting whether these changes have been overwritten later on:</p>
<ol>
<li>Octopus checks out the Git repository files for the commit that Argo CD is looking at</li>
<li>Octopus re-applies the JSON patches to the files it previously updated</li>
<li>If the files have any changes, then it means Octopus’s changes have been overwritten</li>
</ol>
<p>Note that JSON patches have limitations, so if the manifest has been significantly restructured, you might see an unexpected <strong>Git drift</strong> status. Simply redeploy to remove this false positive.</p>
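<p>The re-apply check can be sketched like this, using a toy subset of JSON Patch (a top-level <code>replace</code> operation). Octopus’s recorded patch format isn’t shown in this post, so treat this as illustrative only:</p>

```python
# A minimal sketch of the drift check described above: re-apply the recorded
# patch to the files at the commit Argo CD is looking at; if that produces a
# change, the Octopus-generated edit has been overwritten. The patch format
# here is a toy subset of JSON Patch (RFC 6902 "replace" on a top-level key),
# not Octopus's recorded format.
import copy

def apply_patch(doc: dict, patch: list[dict]) -> dict:
    result = copy.deepcopy(doc)
    for op in patch:
        assert op["op"] == "replace"  # toy example supports "replace" only
        result[op["path"].lstrip("/")] = op["value"]
    return result

# The patch Octopus recorded when it updated the image tag.
recorded_patch = [{"op": "replace", "path": "/image", "value": "web:2.0"}]

# Case 1: nobody touched the tag - re-applying the patch changes nothing.
synced = {"image": "web:2.0", "replicas": 3}
print(apply_patch(synced, recorded_patch) == synced)  # True: in sync

# Case 2: a later commit reverted the tag - re-applying produces a diff.
overwritten = {"image": "web:1.0", "replicas": 3}
print(apply_patch(overwritten, recorded_patch) != overwritten)  # True: Git drift
```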
<h3 id="update-argo-cd-application-manifests">Update Argo CD Application Manifests</h3>
<p>This step generates the manifests that go into the application’s repository. To track changes, Octopus records the file hashes it generates.</p>
<p>When detecting whether these changes have been overwritten later on:</p>
<ol>
<li>Octopus checks out the Git repository files for the commit that Argo CD is looking at</li>
<li>Octopus checks if the file contents have changed by comparing the hashes of the files it generated</li>
<li>If the files have any changes, then it means Octopus’s changes have been overwritten</li>
</ol>
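<p>The hash comparison in the steps above can be sketched like this; the hash algorithm and data shapes are assumptions for illustration, not Octopus internals:</p>

```python
# A sketch of the hash comparison described above. The hash algorithm and
# data shapes are assumptions for illustration, not Octopus internals.
import hashlib

def file_hash(contents: bytes) -> str:
    return hashlib.sha256(contents).hexdigest()

def changes_overwritten(recorded_hashes: dict[str, str],
                        checked_out_files: dict[str, bytes]) -> bool:
    """True if any file Octopus generated differs at the commit Argo CD sees."""
    for path, expected in recorded_hashes.items():
        actual = checked_out_files.get(path)
        if actual is None or file_hash(actual) != expected:
            return True
    return False

# Hashes recorded when Octopus generated the manifests.
generated = {"deployment.yaml": b"image: web:2.0\n"}
recorded = {path: file_hash(data) for path, data in generated.items()}

print(changes_overwritten(recorded, generated))  # False: in sync
print(changes_overwritten(recorded, {"deployment.yaml": b"image: web:1.0\n"}))  # True: Git drift
```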
<h3 id="does-octopus-inspect-the-git-tree">Does Octopus inspect the Git tree?</h3>
<p>While planning this feature, we initially went down the route of inspecting the Git tree to figure out whether Argo CD was including the latest changes deployed by Octopus. We soon realised that we would need to inspect file contents anyway because a later commit could easily overwrite Octopus’s changes.</p>
<p>Other than some small optimizations to skip file comparisons when the commit SHA matches between Octopus and Argo CD, Octopus only looks at the file contents in Git; it doesn’t care whether the commit is in the Git history.</p>
<h2 id="how-to-try-it-out">How to try it out</h2>
<p>The <a href="https://octopus.com/docs/argo-cd/steps#step-verification">step verification</a> functionality is currently available to all customers starting with 2026.1. Pull request merged verification is available from 2026.2 (rolling out in Octopus Cloud).</p>
<p>There’s nothing extra required to enable the feature. Simply open the <strong>Argo CD Instances</strong> section under <strong>Infrastructure</strong>, and connect one of your Argo CD instances to Octopus. From there, you can start modelling deployments that combine GitOps and Continuous Delivery out of the box.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Argo CD is a powerful GitOps tool for Kubernetes, but it wasn’t built to manage the full software delivery lifecycle. Octopus complements Argo CD by adding environment promotions, orchestration across diverse workloads, fine-grained RBAC, and centralized visibility across clusters.</p>
<p>With Argo CD integration, Octopus lets teams combine the strengths of GitOps and Continuous Delivery without building custom automations. You get the reliability of Git-driven deployments and the safety, governance, and flexibility of a full CD platform—all in one place.</p>
<p>Learn more on our <a href="https://octopus.com/use-case/argo-cd-in-octopus">use-case page</a>.</p>
<p>Happy deployments!</p>]]></content>
    </entry>
    <entry>
      <title>Why Octopus invests in developer experience</title>
      <link href="https://octopus.com/blog/octopus-invests-in-developer-experience" />
      <id>https://octopus.com/blog/octopus-invests-in-developer-experience</id>
      <published>2026-03-24T00:00:00.000Z</published>
      <updated>2026-03-24T00:00:00.000Z</updated>
      <summary>Octopus invests in developer experience because great tooling means more productive, engaged developers, better software, happier customers, and a stronger bottom line.</summary>
      <author>
        <name>Steve Fenton, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>When you join Octopus, you get a great laptop and your choice from a range of docks, monitors, webcams, and input devices. You also get a home office allowance, which lets you upgrade your space and get 50% back. You may want a sit/stand desk, an awesome chair, and portrait lighting so everyone can see how delighted you are when they demo their cool new feature.</p>
<p>What we ask in return is that you set yourself up to do your best work. Invest in your <a href="https://www.hanselman.com/blog/brain-bytes-back-buns-the-programmers-priorities">brain, bytes, back, and buns</a>, as Scott Hanselman put it. The essential idea behind this is that when you win, Octopus wins. We all win!</p>
<p>Yet you can throw a soggy cat at a conference, and the chances are it will splash someone from an organization with a completely different approach to developer experience. One that doesn’t value the sustainable delivery of their people’s best work. Why is that?</p>
<h2 id="a-fork-stuck-in-the-road">A fork stuck in the road</h2>
<p>As we’ve seen in the software industry, silos are the source of all kinds of chaos. DevOps was created when someone realized the scale of the problem caused by having two teams with conflicting goals working on opposite sides of the same problem. Leaders pushed developers for feature throughput while holding operations teams to account for reliability.</p>
<p>In hindsight, it’s obvious that this isn’t going to work, but we did it for decades. You can get halfway to DevOps just by aligning the goals. When both teams share the same target, they’ll work out how to get you the rest of the way.</p>
<p>If we zoom out on our organization, we’re likely to find a similar silo/goal conflict between engineering managers and the finance team. We ask the engineering manager to build a high-performing team, while we direct finance to spend as little money as possible. This structural incentive problem leads developers to get laptops and tools that someone has negotiated to be “good enough,” and trust me, they aren’t.</p>
<p>When you look at developer experience through a FinBoss lens, where finance and engineering managers align around the return on investment, you’ll find the optimal spend on developer tools shifts right rapidly.</p>
<p>FinCost organizations reduce costs through standardization, commoditization, and by delaying kit refreshes for as long as possible. They often spend less than $1,000 on a laptop without realizing the true cost of an under-spec’d machine. Meanwhile, FinBoss organizations are seeking the $30,000 annual return on a $3,000 investment.</p>
<p>To achieve this ROI, finance and engineering leadership work together to design policies that allow engineers to choose from a set of well-designed options. The combination of investment in great tools and individual choice in building the right setup is the path to success.</p>
<h2 id="tiny-visible-costs">Tiny visible costs</h2>
<p>FinCost organizations are driving decisions based on small, visible costs. Where there’s a paper trail, there are savings to make: the purchase of a laptop, the training budget, and the conference invoice. These are part of the legible model, so there is no guesswork or experimental method needed to cost them. The numbers are right there on the page.</p>
<p>Once you’re managing costs, it’s not uncommon to dial up the control, too. You limit options, standardize across the organization, and negotiate in bulk on everything to drive costs even lower. Compared to competitors, you could be spending 75% less on laptops, and just look at the savings from those pesky licenses and SaaS tools various teams were using. Hooray.</p>
<p>Instead of letting engineers attend conferences they find relevant to their work, the organization negotiates a bulk purchase for a single conference. Instead of teams buzzing with diverse ideas and approaches, they all sit and watch the same talk. It’s cheaper that way.</p>
<p>A wooden fence encloses my garden. A fence lasts a long time if you give it a protective coat of wood preserver every once in a while. A tin of wood preserver costs about $20, plus a few hours of mindful application each year. Over 5 years, I can save $100 by skipping the treatment. If I do this, my fence lasts about 10 years instead of 30.</p>
<p>The cost of a tin of wood preserver is immediate and visible. The cost of replacing the fence is far higher, but it won’t be visible for years. Additionally, if the fence provides value beyond privacy, like keeping a goat out of my vegetable garden, the cost of the fence failing could be the cost of my whole crop, which is even less legible to my cost control mindset.</p>
<p>If you’re creating software, it’s likely to be more valuable than a fence. In fact, it should be giving you returns multiple times your investment. It’s like a golden goose. You feed it well, and you get regular, valuable deliveries. If you had a golden goose, you wouldn’t try to save on food costs if it reduced the flow of golden eggs.</p>
<p><img src="/blog/img/octopus-invests-in-developer-experience/meanwhile-in-devops-0064-h.png" alt="A cartoon showing a character trying to save money by reducing the cost of feeding a golden goose. The goose doesn&#x27;t survive this experiment."></p>
<p>The same logic plays out with training. A FinCost organization cuts the training budget because the line item is visible and the return isn’t. Individual engineers stop attending relevant courses, and teams miss out on the steady accumulation of skill that comes from ongoing, targeted learning. Nobody raises a purchase order for “capability that didn’t grow this year.”</p>
<p>Then something goes wrong. Delivery is slow, the codebase is a mess, or a competitor pulls ahead. Someone decides the answer is transformation, and that means Scaled Agile, or whatever flavor of desperation is in fashion in exec suites at the time.</p>
<p>So the whole department gets booked onto an external SAFe course. Not one or two people who need it. Everyone. The invoice lands, and it’s eye-watering: many times what a thoughtful, ongoing training program would have cost across the same period. The organization that was too cautious to spend $500 on a course people wanted just spent $50,000 on one nobody asked for.</p>
<p>This is the hidden tax of FinCost thinking. Small, visible costs get cut. The problems that those small costs would have prevented quietly compound. Eventually, the pressure releases all at once, in an expensive, rushed, and hard-to-argue-against way, because now there’s a crisis with a visible price tag.</p>
<p>The tin of wood preserver was the right investment all along.</p>
<h2 id="the-invisible-people-hours">The invisible people-hours</h2>
<p>Developers work around 40 hours a week. Stripe’s Developer Coefficient found that productivity friction can eat up roughly half of those hours. That’s the time that evaporates before a single line of useful code is written. Across the global developer workforce, Stripe put a number on that loss: $300 billion. It’s worth noting that only $85 billion of this is attributable to bad code. The rest is pure friction: slow tools, broken environments, waiting for builds, wrestling with under-spec’d machines. Productivity means more than writing code faster.</p>
<p>That friction has a texture. It’s a developer losing flow because their laptop is thrashing memory, and they’ve lost the thread of what they were thinking.</p>
<p>It’s waiting 10 minutes for a build that would have completed in 2 minutes on a faster machine. It’s the workarounds you built because the approved tool wasn’t good enough, which swallow hours of your week in maintenance. It’s the skilled engineers who start looking for a new role because the environment makes them feel like you don’t take their work seriously.</p>
<p>None of this appears on a purchase order. None of it shows up in a variance report. It just quietly drains the value your software could be creating.</p>
<p>There’s also an upside case, not just a loss case. Software that ships faster captures more market share. Developers who stay in flow write better code with fewer bugs, which means less rework and fewer incidents. A high-performing team attracts other high performers. The return on investing in developer experience isn’t just “we lose less time”; it’s sales opportunities, reduced churn, higher product quality, and a compounding advantage over competitors who are still buying $800 laptops and calling it responsible.</p>
<p>When you focus on cost control, there’s a ceiling on how much you can save. When you look for return on investment, there’s no equivalent ceiling on how much you can gain.</p>
<h2 id="the-cfo-lens">The CFO lens</h2>
<p>When I asked Sammy, our CFO, about why we invest in people, powerful machines, and great tools, he said: “At Octopus, we think about equipment as a productivity tool. Developers are one of our most expensive and constrained resources. If spending a few extra thousand dollars on a fast machine and proper screen space saves time, reduces cognitive load, or keeps someone in flow more often, it pays for itself very quickly.”</p>
<p>That means our finance team doesn’t ask “What’s the cheapest laptop we can buy?” Instead, they ask, “What setup helps this person do their best work?” From a finance perspective, this is about a high-ROI spend, not indulgence.</p>
<p>Octopus is running the FinBoss playbook, which aligns finance and leadership in empowering employees to do their best work. That means expanding what you’re willing to measure so that you can balance the visible costs with traditionally invisible returns.</p>
<p>The responsibility for providing this environment for employees rests with all leaders, including Sammy, and is part of the deal employees have with the company: we’ll give you the best environment and tools if you give us the greatest work of your career.</p>
<h2 id="the-reckoning">The reckoning</h2>
<p>If we’ve ever met, I’ve likely mentioned <a href="https://dora.dev/research/">DORA’s research</a>. I’m a super-fan of their State of DevOps reports, because they provide us with techniques, practices, and outcomes that are all linked through complex relationships. The model shows that transformational leadership, lean product management, and Continuous Delivery drive software delivery performance and organizational outcomes.</p>
<p>While many of the capabilities in the DORA model are technical, many others relate to leadership. Particularly relevant to developer experience are generative organizational culture, transformational leadership, empowering teams to choose tools, team experimentation, and a learning culture. You can read about these and more on the <a href="https://dora.dev/capabilities/">DORA website</a>. Together, they back up the FinBoss concept with tens of thousands of survey samples that support the case.</p>
<p>The question is: Which organization are you now, and which do you want to be? Is it FinCost, operating myopically against the bottom line, or FinBoss, sharing the responsibility for developer experience and employee productivity? The fork in the road was never a choice. We know FinCost is a dead end, and the benefits of FinBoss aren’t soft claims; they’re measurable outcomes.</p>
<p>Happy deployments!</p>]]></content>
    </entry>
    <entry>
      <title>Introducing Starter Policies in Platform Hub</title>
      <link href="https://octopus.com/blog/starter-policies-in-platform-hub" />
      <id>https://octopus.com/blog/starter-policies-in-platform-hub</id>
      <published>2026-03-19T00:00:00.000Z</published>
      <updated>2026-03-19T00:00:00.000Z</updated>
      <summary>Starter policies make it easier to add governance to your deployments. Get started with pre-built, customizable Rego policies for common compliance use cases in Platform Hub.</summary>
      <author>
        <name>Ryan Hall, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>We built our policy engine on Rego, the language of the <a href="https://www.openpolicyagent.org/docs/policy-language">Open Policy Agent (OPA)</a>, to give you control over your deployments and runbooks. Rego is powerful and flexible, but it comes with a learning curve. Until now, getting a policy working meant switching between your Octopus instance and our documentation, with no guarantee a snippet would behave as expected. That kind of friction discourages teams from adopting the guardrails that keep deployments safe.</p>
<p>Starter Policies are designed to change that.</p>
<h2 id="how-it-works">How it works</h2>
<p>A guided wizard takes the guesswork out of writing your first policy. When you add a new policy in Platform Hub, you’ll find a library of starter policies built around common compliance use cases we see across our community.</p>
<figure>
<p><img src="/blog/img/starter-policies-in-platform-hub/policies-create-starter-modal.png" alt="Starter policies in Platform Hub"></p>
</figure>
<p>Open Platform Hub, head to Policies, and select a starter policy that matches your goal. Give it a name, and Octopus takes it from there. It generates the boilerplate code with the name, scope, and conditions already filled in.</p>
<p>From there, it’s yours to customize. The editor includes Rego syntax highlighting to make the logic easier to read, and inline comments explain exactly which parts to modify for your team.</p>
<h2 id="governance-in-platform-hub">Governance in Platform Hub</h2>
<p>Starter policies are a new addition to the Policies feature in Platform Hub, available on the Enterprise tier.</p>
<p>Platform Hub is your central home for software delivery, insights, and governance in Octopus. If you’re on Enterprise, you can use Policies to define and enforce standards at scale across all your deployments and Runbook runs. This includes things like ensuring production deployments always have an approval workflow, or preventing deployments from using outdated package versions.</p>
<p>Starter policies make it easier to get started with all of this, regardless of how familiar you are with Rego.</p>
<h3 id="learning-by-doing">Learning by doing</h3>
<p>We want you to understand the code, not just copy it. Each starter policy includes inline comments explaining how the Rego logic works, highlighting the specific sections you need to modify to customize the policy for your team.</p>
<p>This gives you something you can run immediately, not just read. Direct links to the policy schema are built into the editor too, so if you want to go deeper, the reference material is already there.</p>
<p>To see this in practice, here is a real-world example: enforcing an approval workflow for all deployments and runbook runs in your Production environments.</p>
<p>This policy is made up of two parts.</p>
<p>The <a href="https://octopus.com/docs/platform-hub/policies?q=violation#4-define-the-policy-scope">policy scope</a> defines <em>when</em> the policy applies. In this case, it targets any Environment named <code>Production</code>.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="plaintext"><code><span class="line"><span>package require_manual_intervention</span></span>
<span class="line"><span></span></span>
<span class="line"><span># Default: Do not evaluate unless conditions are met</span></span>
<span class="line"><span>default evaluate := false</span></span>
<span class="line"><span></span></span>
<span class="line"><span>evaluate if {</span></span>
<span class="line"><span>  input.Environment.Name == "Production"</span></span>
<span class="line"><span>}</span></span></code></pre>
<p>The <code>default evaluate := false</code> line is worth noting. Unlike a policy that applies everywhere by default, this one’s off unless the condition <code>input.Environment.Name == "Production"</code> is met. Scoping it to Production means the approval requirement only activates in environments with that name, leaving other environments unaffected.</p>
<p>The <a href="https://octopus.com/docs/platform-hub/policies?q=violation#5-define-the-policy-conditions">policy conditions</a> define <em>what</em> is enforced. It checks that at least one manual intervention step exists in the Deployment or Runbook run, and that none of those steps are being bypassed:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="plaintext"><code><span class="line"><span>package require_manual_intervention</span></span>
<span class="line"><span></span></span>
<span class="line"><span># Default: Deny all</span></span>
<span class="line"><span>default result := {"allowed": false}</span></span>
<span class="line"><span></span></span>
<span class="line"><span># Helper: True if any manual intervention step is present in the skipped steps list</span></span>
<span class="line"><span>manual_intervention_skipped if {</span></span>
<span class="line"><span>  some step in input.Steps</span></span>
<span class="line"><span>  step.Id in input.SkippedSteps</span></span>
<span class="line"><span>  step.ActionType == "Octopus.Manual"</span></span>
<span class="line"><span>}</span></span>
<span class="line"><span></span></span>
<span class="line"><span># Allow: Manual intervention steps exist and none are being bypassed</span></span>
<span class="line"><span>result := {"allowed": true} if {</span></span>
<span class="line"><span>  some step in input.Steps</span></span>
<span class="line"><span>  step.ActionType == "Octopus.Manual"</span></span>
<span class="line"><span>  not manual_intervention_skipped</span></span>
<span class="line"><span>}</span></span>
<span class="line"><span></span></span>
<span class="line"><span># Deny: Block Deployment if manual intervention is bypassed</span></span>
<span class="line"><span>result := {</span></span>
<span class="line"><span>  "allowed": false,</span></span>
<span class="line"><span>  "reason": "Manual intervention steps cannot be skipped in this Environment"</span></span>
<span class="line"><span>} if {</span></span>
<span class="line"><span>  manual_intervention_skipped</span></span>
<span class="line"><span>}</span></span></code></pre>
<p>There are three possible outcomes this policy can produce:</p>
<ul>
<li><strong>Deny by default:</strong> If no manual intervention step is present, the policy is non-compliant. The <code>default result := {"allowed": false}</code> line ensures this is the baseline behavior.</li>
<li><strong>Allowed:</strong> A manual intervention step exists and has not been skipped. The approval workflow is intact, and the Deployment or Runbook run proceeds.</li>
<li><strong>Non-compliant with a reason:</strong> A manual intervention step exists but has been explicitly skipped. Octopus surfaces the reason to the team: <em>“Manual intervention steps cannot be skipped in this Environment.”</em></li>
</ul>
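<p>To make these outcomes concrete, here is a minimal sketch of the same decision logic in Python. The input shape (steps, skipped step IDs, and an environment name) mirrors the fields the Rego policy above reads; the exact schema Octopus supplies may differ, so treat this as an illustration of the logic rather than the real evaluation engine.</p>

```python
def evaluate_policy(inp):
    """Sketch of the manual-intervention policy's three outcomes."""
    manual_steps = [s for s in inp["Steps"] if s["ActionType"] == "Octopus.Manual"]
    skipped_manual = [s for s in manual_steps if s["Id"] in inp["SkippedSteps"]]

    if skipped_manual:
        # Non-compliant with a reason: an approval step is being bypassed
        return {"allowed": False,
                "reason": "Manual intervention steps cannot be skipped in this Environment"}
    if manual_steps:
        # Allowed: the approval workflow is intact
        return {"allowed": True}
    # Deny by default: no manual intervention step is present at all
    return {"allowed": False}

# A deployment whose only approval step is skipped is non-compliant
deployment = {
    "Environment": {"Name": "Production"},
    "Steps": [{"Id": "step-1", "ActionType": "Octopus.Manual"}],
    "SkippedSteps": ["step-1"],
}
print(evaluate_policy(deployment)["allowed"])  # False
```

<p>Removing <code>"step-1"</code> from <code>SkippedSteps</code> flips the result to allowed, and removing the step entirely falls through to the deny-by-default baseline.</p>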
<h3 id="violation-actions">Violation actions</h3>
<p>How Octopus responds to a non-compliant policy (the first and third outcomes above) depends on the configured Violation Action:</p>
<ul>
<li><strong>Block (default):</strong> The Deployment or Runbook run is stopped from progressing until the issue is resolved.</li>
<li><strong>Warning:</strong> The execution continues, but the team is shown a warning so the non-compliance is visible without being a hard stop.</li>
</ul>
<p>The right violation action will depend on your team’s processes and how strictly you need to enforce compliance in an Environment. Either way, the outcome is recorded. Every policy evaluation is captured in your audit log, including the policy name, verdict, action taken, and the reason. When something is blocked, your team can see exactly why and what needs to change.</p>
<h2 id="get-started">Get started</h2>
<p>Starter policies are available now. Head over to the Platform Hub section of your Octopus instance and start building out your guardrails.</p>
<p>We’re continuing to expand policy management in Platform Hub, and your feedback shapes what we build next. If you have suggestions, share them on the <a href="https://roadmap.octopus.com">Octopus Deploy Roadmap</a>.</p>
<p>Happy deployments!</p>]]></content>
    </entry>
    <entry>
      <title>Inside Platform Engineering with Piotr Szwed</title>
      <link href="https://octopus.com/blog/inside-platform-engineering-piotr-szwed" />
      <id>https://octopus.com/blog/inside-platform-engineering-piotr-szwed</id>
      <published>2026-03-18T00:00:00.000Z</published>
      <updated>2026-03-18T00:00:00.000Z</updated>
      <summary>After 25 years in infrastructure automation, Piotr Szwed argues today's platform engineering tools are already a liability, and the API-first, AI-native shift is happening now.</summary>
      <author>
        <name>Matthew Allford, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Platform Engineering means different things to different people, but after 25 years in infrastructure automation, <a href="https://www.linkedin.com/in/szwed/">Piotr Szwed</a> has a clear view of where it’s heading. In this episode of Inside Platform Engineering, Piotr makes the case that the tools and patterns most teams rely on today are already becoming a liability, and that the shift to API-first, AI-native infrastructure isn’t a distant future. It’s happening now.</p>
<h2 id="watch-the-episode">Watch the episode</h2>
<p>You can watch the episode with Piotr below.</p>
<p><a href="https://www.youtube.com/watch?v=5OM3cm8BeHo">Inside Platform Engineering with Piotr Szwed</a></p>
<h2 id="the-shift-from-automation-to-autonomy">The shift from automation to autonomy</h2>
<p>For most of its history, infrastructure automation has been about reducing human toil. Scripting repetitive tasks, templating deployments, and codifying what engineers do by hand. Piotr argues that this framing is now outdated and suggests the goal shouldn’t be to automate your platform team’s work; it should be to build systems that manage themselves. Self-healing, autoscaling, and continuously reconciling their own state without human intervention. This is the core idea behind the Kubernetes operator pattern, and it represents a significant mindset shift for teams that have always reached for automation scripts and manual processes when something needs to change. The difference may seem subtle, but it has profound implications for how you design, build, and staff a platform.</p>
<h2 id="is-terraform-keeping-up-with-platform-teams-needs">Is Terraform keeping up with Platform Teams’ needs?</h2>
<p>Piotr was one of Terraform’s earliest power users, with his teams building over 20 custom providers in its early days. That history gives weight to his argument that Terraform is struggling to keep pace with the modern infrastructure landscape. The number of cloud services and APIs is growing faster than providers can track, and the reconciliation loop that teams bolt on through Jenkins jobs or cron-triggered pipelines is, in his view, one of the biggest anti-patterns in the industry. Teams doing this are essentially rebuilding the Kubernetes controller pattern in a much more fragile way, simulating something that Kubernetes provides for free, while adding the complexity of maintaining yet another layer of tooling around it.</p>
<p>His recommendation isn’t to throw everything out overnight, but to understand that operators support import and read-only modes, making a gradual, non-destructive migration far more achievable than most teams assume.</p>
<h2 id="reference-architectures-are-making-the-path-clearer">Reference architectures are making the path clearer</h2>
<p>One of the practical barriers to adopting API-first, Kubernetes-centric platforms has been the lack of a clear blueprint. That’s changing. Piotr points to a growing set of well-resourced reference architectures, <a href="https://github.com/cnoe-io/reference-implementation-aws">CNOE from Amazon and Cisco</a>, <a href="https://documentation.apeirora.eu/">ApeiroRA from SAP</a>, and <a href="https://cloud.google.com/blog/products/containers-kubernetes/introducing-kube-resource-orchestrator">Google’s KCC with Kro</a>, as evidence that the industry has reached a turning point. Importantly, these aren’t monolithic frameworks that need to be adopted wholesale. They’re designed more like Lego blocks, where teams can cherry-pick the components that fit their organization and swap out the pieces that don’t.</p>
<p>What they all share is more important than their differences: each is API-first and built on the Kubernetes operator pattern. That consistency across hyperscalers and vendors is itself a strong signal that the industry is converging around a direction. With major players now publishing and backing these blueprints, the “we don’t know where to start” barrier is lower than ever.</p>
<h2 id="where-does-this-leave-platform-teams">Where does this leave platform teams?</h2>
<p>The conversation with Piotr covers a lot of ground, but the underlying message is consistent throughout. The move to API-first, autonomous systems isn’t something Piotr is chasing because it’s the newest, shiniest solution in our field. Instead, he’s focused on building platforms that can keep pace with the speed at which cloud services, AI capabilities, and business requirements are evolving.</p>
<p>The good news, depending on how you look at it, is that the ecosystem is maturing quickly. Reference architectures are documented, communities are active, and migration paths exist that don’t require starting from scratch. For teams that see the value in what Piotr is putting forward, starting incrementally now is a much more comfortable place to be than waiting until the shift feels unavoidable and finding yourself staring down another major platform migration.</p>
<p>Happy deployments!</p>
<div class="hint"><p>Inside Platform Engineering is a series of conversations with Matt Allford and a guest, bringing their own experience and perspective from the world of Platform Engineering.</p><p>You can find more episodes on <a href="https://www.youtube.com/playlist?list=PLAGskdGvlaw24Y-7jTcw09jbzsLw5uL9X">YouTube</a>.</p></div>]]></content>
    </entry>
    <entry>
      <title>Continuous Delivery Office Hours Ep.2: Remaining deployable at all times</title>
      <link href="https://octopus.com/blog/continuous-delivery-office-hours-e2" />
      <id>https://octopus.com/blog/continuous-delivery-office-hours-e2</id>
      <published>2026-03-12T00:00:00.000Z</published>
      <updated>2026-03-12T00:00:00.000Z</updated>
      <summary>Continuous Delivery promotes low-risk releases, faster time-to-market, higher quality, lower costs, better products, and happier teams.</summary>
      <author>
        <name>Steve Fenton, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>When you can’t deploy on demand, you’ve lost control of your software. Risk accumulates in unreleased code, and the more changes you store in one place, the more chance they have of triggering overheads, rework, and failures. When you have blockers stopping you from going live, you’ll start to accumulate dangerously high levels of risk unless you prioritize deployability.</p>
<p>By choosing to do work that returns software to a deployable state ahead of any other kinds of work, like feature development, you’ll avoid the toxicity of tangled work batches. Instead, you can smoothly flow changes between test and production and get the feedback you need to be confident that your software works.</p>
<p>Crucially, if you discover a high-severity bug or security problem, there are no roadblocks to getting fixes into production. You don’t need a special “expedited lane” for these changes, which means you don’t skip steps that let bad changes into your codebase.</p>
<p>Let’s take a look at problem indicators that will help you identify and fix common deployability issues. Answering the deployability question is a mile marker, not the destination, but if you can’t answer it, you’re going to waste time looking for landmarks!</p>
<h2 id="watch-the-episode">Watch the episode</h2>
<p>You can watch the episode below, or read on to find some of the key discussion points.</p>
<p><a href="https://www.youtube.com/watch?v=QCVrQ0fjdJw">Watch Continuous Delivery Office Hours Ep.2</a></p>
<h2 id="the-awkward-silence-problem">The awkward silence problem</h2>
<p>When you ask your team, “Are we ready to deploy?”, the correct answer is either “absolutely” or “no way”. The worst possible answer is silence or confusion. If the team doesn’t know whether they can deploy, they’re missing the deployment pipeline that would generate the answer.</p>
<p>When you commit a code change, build errors should be returned to you within a couple of minutes. Within 5 minutes of your commit, a suite of fast-running tests should tell you if you’ve broken functionality or the quality attributes of your application. Dependency checks, code scanning, and other static analysis tools should also have flagged any problems by this point. If you have long-running tests or tests that depend on the new software version being deployed to a test environment, you should know within another 5-20 minutes if they’ve detected a problem.</p>
<p>After all these checks, you should have a version of your software running in a test environment that you have high confidence in. If someone asked you whether you’re ready to deploy, you’d answer “absolutely”.</p>
<p>Deployment automation makes sure the deployment is a non-event. It guarantees the same process is used to deploy to all environments, and makes it trivial to deploy on demand. A solid deployment pipeline contains all the checks you need to know whether you can deploy.</p>
<h2 id="manual-testing-is-a-ceiling-not-a-floor">Manual testing is a ceiling, not a floor</h2>
<p>Teams often treat manual testing as a foundation for verifying that a software version works. In reality, it acts more like a ceiling, limiting your ability to flow changes to production. As your software becomes more complex, the ceiling descends as the testing takes longer.</p>
<p>Long test cycles force you to subordinate your process to the speed at which you can test. You’ll notice when this happens because you’ll keep looping back to fix bugs and restart the test process. From the earliest change you make, through each bug list and all the re-testing, right through to the final software version, you are not deployable.</p>
<p>Automating your tests is the real foundation for your software. This raises the ceiling and moves the constraint away from the test cycle.</p>
<h2 id="your-old-code-wasnt-written-to-be-tested">Your old code wasn’t written to be tested</h2>
<p>If your application is successful, it will have some history. Part of that history is often that it wasn’t written with test automation in mind. That means you need to identify where your risk is, work out how to find the seams that will make it testable, and start adding characterizing tests.</p>
<p>Once some old code is wrapped with tests, it becomes far easier to change the code design, because the tests will fail if you break something.</p>
<h2 id="automation-is-living-documentation">Automation is living documentation</h2>
<p>When developers move on, a portion of your institutional knowledge goes with them. High-quality documentation can help teams distribute this knowledge and reduce its loss, and the best kind of documentation is test automation.</p>
<p>Well-written automation, like tests, deployment automation, and infrastructure as code, performs useful functions while effortlessly documenting them. Because you make all changes through the living documentation, it is always up to date.</p>
<p>The hidden cost of undocumented knowledge becomes painfully clear when you have to deploy without the person who normally handles it. You follow their checklist carefully, confirm every step, and everything looks right; yet the deployment still fails. What you didn’t know is that the checklist stopped being accurate months ago, because the person doing the deployments stopped consulting it. All the new steps they introduced lived in their head, not on paper.</p>
<p>The living documentation built into automation tools is especially valuable when onboarding new developers. Rather than relying on tribal knowledge passed down through conversations and shadowing, a new team member can read the test suite and understand not just what the software does, but what it’s supposed to do and why certain behaviors matter. That’s documentation that keeps pace with the code because it is the code.</p>
<h2 id="the-value-of-long-term-sustainability">The value of long-term sustainability</h2>
<p>Like its Agile predecessors, Continuous Delivery values long-term sustainability. That means you invest a little more effort up front to constrain maintenance costs over the long term. Writing tests may mean a feature takes <strong>20-25%</strong> more time to implement, but the defect density can be <strong>91%</strong> lower than similar features not guided by tests (<a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2009/10/Realizing-Quality-Improvement-Through-Test-Driven-Development-Results-and-Experiences-of-Four-Industrial-Teams-nagappan_tdd.pdf">Microsoft VS</a>).</p>
<p>You could reduce 40 hours of bug fixing to just 3.6 hours by guiding feature development with tests, and you also save on other overheads caused by escaped bugs, like reputational damage, customer churn, support costs, pinpointing and debugging, test cycles, and feature delay.</p>
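<p>A quick back-of-envelope check makes the trade-off clear. Assuming, purely for illustration, a 40-hour feature alongside those 40 hours of bug fixing, and treating the 91% lower defect density as a proxy for bug-fixing time:</p>

```python
bug_fix_hours = 40       # time spent fixing escaped bugs without tests
defect_reduction = 0.91  # 91% lower defect density (Microsoft study)
extra_dev_time = 0.25    # upper end of the 20-25% extra implementation time

# Bug-fixing time that remains when defects drop by 91%
remaining_hours = bug_fix_hours * (1 - defect_reduction)
print(round(remaining_hours, 1))  # 3.6

# A hypothetical 40-hour feature taking 25% longer costs 10 extra hours,
# against 36.4 hours of bug fixing saved
extra_hours = 40 * extra_dev_time
saved_hours = bug_fix_hours - remaining_hours
print(extra_hours, round(saved_hours, 1))  # 10.0 36.4
```

<p>Even at the pessimistic end of the estimates, the up-front investment is repaid several times over, before counting the softer costs of escaped bugs.</p>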
<h2 id="conclusion">Conclusion</h2>
<p>While it takes some effort to set up a strong deployment pipeline, knowing whether a software version is deployable pays dividends. Technical practices like test-driven development and pair programming are needed to keep software economically viable in the long term, even though they require a little more effort up front.</p>
<p>If you can’t answer the question “Is your software deployable?”, you’re sure to run into trouble.</p>
<p>Happy deployments!</p>
<div class="hint"><p>Continuous Delivery Office Hours is a series of conversations about software delivery, with Tony Kelly, Bob Walker, and Steve Fenton.</p><p>You can find more episodes on <a href="https://www.youtube.com/playlist?list=PLAGskdGvlaw3CrxkUOAMmiy928lr5D4oh">YouTube</a>, <a href="https://podcasts.apple.com/us/podcast/continuous-delivery-office-hours/id1872101651">Apple Podcasts</a>, and <a href="https://pca.st/hwjaox59">Pocket Casts</a>.</p></div>]]></content>
    </entry>
    <entry>
      <title>Extended Tag Sets</title>
      <link href="https://octopus.com/blog/extended-tag-sets" />
      <id>https://octopus.com/blog/extended-tag-sets</id>
      <published>2026-03-11T00:00:00.000Z</published>
      <updated>2026-03-11T00:00:00.000Z</updated>
      <summary>You can now tag projects, environments and runbooks with metadata, making your Octopus instance easier to manage - especially at scale.</summary>
      <author>
        <name>Michelle O'Brien, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<h2 id="tag-sets">Tag Sets</h2>
<p>If you’ve used Octopus Deploy for tenanted deployments, you know how valuable tags can be. Tags help segment customers and provide more granular control when deploying.
We’ve expanded tag sets to help solve a wider range of problems:</p>
<ul>
<li>Filtering your dashboard to view the projects you care about</li>
<li>Tagging environments for specific compliance requirements or cloud providers</li>
<li>Organizing runbooks for different operational scenarios</li>
</ul>
<h3 id="whats-changed">What’s changed?</h3>
<figure><p><img src="/blog/img/extended-tag-sets/tag-sets.png" alt="Screenshot showing list of tag sets"></p></figure>
<p>These capabilities are available from version <strong>2026.1.6552</strong>.</p>
<p>In addition to tenants, you can now apply tags to:</p>
<ul>
<li>Projects</li>
<li>Environments</li>
<li>Runbooks</li>
</ul>
<p>There are three types of tag sets that can be created:</p>
<ul>
<li>MultiSelect: Allows selecting multiple predefined tags from the tag set. This is the standard behavior and works for most scenarios.</li>
<li>SingleSelect: Allows selecting only one predefined tag from the tag set. Use this when you need to ensure just one option is chosen - like a cloud provider or deployment tier.</li>
<li>FreeText: Allows entering custom text values without requiring predefined tags. The tag set name must match exactly, but the tag value can be any arbitrary text. Use this when you want dynamic values like region identifiers, customer IDs, or version numbers. When using free text, only one value per tag set is allowed.</li>
</ul>
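<p>The three types boil down to different validation rules on the selected values. As a conceptual sketch (not the actual Octopus implementation), the constraints could be modeled like this:</p>

```python
def validate_tags(tag_set_type, allowed_tags, selected):
    """Conceptual validation rules for the three tag set types."""
    if tag_set_type == "MultiSelect":
        # Any number of values, all drawn from the predefined list
        return all(tag in allowed_tags for tag in selected)
    if tag_set_type == "SingleSelect":
        # Exactly one value, drawn from the predefined list
        return len(selected) == 1 and selected[0] in allowed_tags
    if tag_set_type == "FreeText":
        # One arbitrary text value; no predefined list required
        return len(selected) == 1
    raise ValueError(f"Unknown tag set type: {tag_set_type}")

print(validate_tags("SingleSelect", ["AWS", "Azure"], ["AWS"]))           # True
print(validate_tags("SingleSelect", ["AWS", "Azure"], ["AWS", "Azure"]))  # False
print(validate_tags("FreeText", [], ["customer-1234"]))                   # True
```

<p>SingleSelect rejecting a second value is what makes it suitable for mutually exclusive choices like a cloud provider or deployment tier.</p>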
<h3 id="project-tags">Project Tags</h3>
<p>Project tags let you assign metadata, such as team, product line, criticality, or technology stack, to projects.
You can then use these tags to configure your dashboard to show only the projects you care about.</p>
<figure><p><img src="/blog/img/extended-tag-sets/environment-tag.png" alt="Screenshot showing list of tag sets"></p></figure>
<h3 id="environment-tags">Environment Tags</h3>
<p>Environment tags help you mark environments as production or assign metadata such as cloud provider or region.
You can filter both the deployment dashboard and environments page by using these tags.
Enterprise users can also assign policies to environment tags in Platform Hub. This is particularly helpful when separating production and non-production environments.</p>
<h3 id="runbook-tags">Runbook Tags</h3>
<p>Runbook tags help you categorize runbooks by type and specify which should be triggered by an event.<br>
Runbook triggers have been updated, so you can now trigger specific runbooks or groups of runbooks based on their tag.</p>
<h3 id="best-practices">Best Practices</h3>
<p>To help achieve the best results, our documentation includes guidance on how to design your tag sets. Consider what your organization wants to achieve with tags and how they can work to make filtering and management easier.</p>
<h3 id="whats-next">What’s Next</h3>
<p>We’re currently working on migrating all existing target tags to be managed with tag sets. With the existing target tag design, deployment targets can be tagged, but the tags, once created, cannot be updated or deleted. We will move this functionality into the tag set space to remedy this limitation.
What’s next from there is up to you. Possibilities include:</p>
<ul>
<li>Deployment Freezes by tag</li>
<li>Scoping variables to tags</li>
<li>License usage by tag</li>
</ul>
<p>If any of these would work well for you or if you have other suggestions, let us know <a href="https://survey.octopus.com/t/tUQnnVKW7pus">by completing this survey</a> or in the comments below.</p>
<p>Happy deployments!</p>]]></content>
    </entry>
    <entry>
      <title>Octopus Easy Mode - Kubernetes</title>
      <link href="https://octopus.com/blog/octo-easy-mode-14-k8s" />
      <id>https://octopus.com/blog/octo-easy-mode-14-k8s</id>
      <published>2026-03-09T00:00:00.000Z</published>
      <updated>2026-03-09T00:00:00.000Z</updated>
      <summary>Learn how to create a Kubernetes deployment project</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>In the <a href="/blog/octo-easy-mode-13-lifecycles">previous post</a>, you added a custom lifecycle to your project. In this post, you’ll create a mock Kubernetes deployment.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>An <a href="https://octopus.com/start">Octopus Cloud</a> account. If you don’t have one, you can sign up for a free trial.</li>
<li>The Octopus AI Assistant Chrome extension. You can install it from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</li>
</ul>
<div class="hint"><p>The Octopus AI Assistant works with an on-premises Octopus instance, but it requires extra configuration. The cloud-hosted version needs none, making it the easiest way to get started.</p></div>
<h2 id="creating-the-project">Creating the project</h2>
<p>This example builds on the concepts introduced in previous posts to construct an end-to-end deployment solution to a mock Kubernetes cluster.</p>
<p>Paste the following prompt into the Octopus AI Assistant and run it:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Create a Kubernetes project called "K8s Web App", and then:</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Use client side apply in the Kubernetes step (the mock Kubernetes cluster only supports client side apply).</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Disable verification checks in the Kubernetes steps (the mock Kubernetes cluster doesn't support verification checks).</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Enable retries on the K8s deployment step.</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a token account called "Mock Token".</span></span>
<span class="line"></span>
<span class="line"><span style="color:#800000;font-weight:bold">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a feed called "Docker Hub" pointing to "https://index.docker.io" using anonymous authentication.</span></span>
<span class="line"></span>
<span class="line"><span style="color:#800000;font-weight:bold">---</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">Create a Kubernetes target with the tag "Kubernetes", the URL https://mockk8s.octopus.com, using the health check container image "octopusdeploy/worker-tools:6.5.0-ubuntu.22.04" from the "Docker Hub" feed, using the token account, and the "Hosted Ubuntu" worker pool.</span></span></code></pre>
<div class="hint"><p>The document separator (<code>---</code>) is used to split the prompt into multiple sections. Each section is applied sequentially, which allows you to create different types of resources in a single prompt.</p></div>
<p>This prompt creates a new project with a Kubernetes deployment step deploying a sample application to a mock Kubernetes server defined in a <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-api">Kubernetes target</a>. A deployment process defines <em>how</em> an application is deployed, while targets define <em>where</em> the application is deployed.</p>
<p>The mock Kubernetes server exposes just enough of the Kubernetes API to allow the deployment step to execute successfully, but it doesn’t actually create any Kubernetes resources. This allows you to test the deployment process without needing access to a real Kubernetes cluster.</p>
<p><img src="/blog/img/octo-easy-mode-14-k8s/k8s-target.png" alt="Kubernetes Target"></p>
<p>All of these steps run in the <a href="https://octopus.com/docs/infrastructure/workers/worker-pools">Hosted Ubuntu worker pool</a> provided as part of the Octopus cloud platform.</p>
<div class="hint"><p>You can interact with the mock Kubernetes server locally by running the following Docker command:</p><p><code>docker run -p 48080:48080 --entrypoint java ghcr.io/octopussolutionsengineering/k8s-mockserver -jar /k8smock/mockk8s.jar</code></p><p>Save the following kubeconfig file to connect to the mock server:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#800000">apiVersion</span><span style="color:#000000">: </span><span style="color:#0000FF">v1</span></span>
<span class="line"><span style="color:#800000">kind</span><span style="color:#000000">: </span><span style="color:#0000FF">Config</span></span>
<span class="line"><span style="color:#800000">current-context</span><span style="color:#000000">: </span><span style="color:#0000FF">local</span></span>
<span class="line"><span style="color:#800000">clusters</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">default-cluster</span></span>
<span class="line"><span style="color:#800000">  cluster</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000">    server</span><span style="color:#000000">: </span><span style="color:#0000FF">http://localhost:48080</span></span>
<span class="line"><span style="color:#800000">users</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">default-user</span></span>
<span class="line"><span style="color:#800000">  user</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000">    token</span><span style="color:#000000">: </span><span style="color:#0000FF">anything_will_work</span><span style="color:#008000"> # the mock server accepts any token</span></span>
<span class="line"><span style="color:#800000">contexts</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">local</span></span>
<span class="line"><span style="color:#800000">  context</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000">    cluster</span><span style="color:#000000">: </span><span style="color:#0000FF">default-cluster</span></span>
<span class="line"><span style="color:#800000">    user</span><span style="color:#000000">: </span><span style="color:#0000FF">default-user</span></span>
<span class="line"><span style="color:#800000">    namespace</span><span style="color:#000000">: </span><span style="color:#0000FF">default</span></span></code></pre><p>Create a pod using the mock server:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">kubectl</span><span style="color:#A31515"> run</span><span style="color:#A31515"> nginx-test</span><span style="color:#0000FF"> --image=nginx</span><span style="color:#0000FF"> --restart=Never</span></span></code></pre><p>List the pods using the mock server:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">kubectl</span><span style="color:#A31515"> get</span><span style="color:#A31515"> pods</span><span style="color:#0000FF"> -A</span></span></code></pre></div>
<p>You can now <a href="https://octopus.com/docs/releases/creating-a-release">create a release and deploy it</a> to the first environment. The Kubernetes deployment step deploys a Kubernetes deployment resource to the mock Kubernetes server. Of course, no actual Kubernetes resources are created, but Octopus interacts with the mock server as if it were a real Kubernetes API server.</p>
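<p>For context, the raw YAML that a step like this deploys is an ordinary Kubernetes Deployment manifest. The sample project’s exact manifest may differ; a minimal version looks like this (the names and image are illustrative):</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app           # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.27   # illustrative image
</code></pre>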
<p>You will notice that the sample Kubernetes project includes some additional steps. Specifically, it includes a step to scan the <a href="https://www.cisa.gov/sbom">Software Bill of Materials (SBOM)</a> associated with the sample project. The results of this scan show you if your deployment has any security vulnerabilities.</p>
<p><img src="/blog/img/octo-easy-mode-14-k8s/sbom-scan.png" alt="SBOM Scan Results"></p>
<p>The security scan is executed daily in a dedicated environment called <code>Security</code> using a <a href="https://octopus.com/docs/projects/project-triggers">trigger</a>. This means the SBOM scan result is updated to reflect any new vulnerabilities discovered after a production deployment.</p>
<p><img src="/blog/img/octo-easy-mode-14-k8s/security-scan-trigger.png" alt="Project Trigger"></p>
<p>Once the deployment is complete, the execution container is discarded, along with the mock server and any deployed resources.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>You created a sample project with:</p>
<ul>
<li>A Kubernetes deployment step <a href="https://octopus.com/docs/kubernetes/steps/yaml">deploying raw YAML</a> inside an <a href="https://octopus.com/docs/projects/steps/execution-containers-for-workers">execution container</a></li>
<li>A <a href="https://octopus.com/docs/projects/built-in-step-templates/manual-intervention-and-approvals">manual intervention</a> with <a href="https://octopus.com/docs/projects/steps/conditions">step conditions</a> to only run in the <code>Production</code> environment</li>
<li><a href="https://octopus.com/docs/projects/steps/conditions">Step conditions</a> to exclude the deployment step from the <code>Security</code> environment</li>
<li>A step to scan the <a href="https://www.cisa.gov/sbom">SBOM</a> associated with the sample application</li>
<li>A <a href="https://octopus.com/docs/projects/project-triggers">trigger</a> to rerun the security scan daily</li>
<li>A <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-api">Kubernetes target</a> referencing the mock Kubernetes server</li>
</ul>]]></content>
    </entry>
    <entry>
      <title>Octopus Easy Mode - Lifecycles</title>
      <link href="https://octopus.com/blog/octo-easy-mode-13-lifecycles" />
      <id>https://octopus.com/blog/octo-easy-mode-13-lifecycles</id>
      <published>2026-03-04T00:00:00.000Z</published>
      <updated>2026-03-04T00:00:00.000Z</updated>
      <summary>Learn how to add lifecycles used by a project</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>In the <a href="/blog/octo-easy-mode-12-channels">previous post</a>, you added a custom channel to the project. In this post, you’ll define custom lifecycles.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>An <a href="https://octopus.com/start">Octopus Cloud</a> account. If you don’t have one, you can sign up for a free trial.</li>
<li>The Octopus AI Assistant Chrome extension. You can install it from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</li>
</ul>
<div class="hint"><p>The Octopus AI Assistant works with an on-premises Octopus instance, but it requires extra configuration. The cloud-hosted version needs none, making it the easiest way to get started.</p></div>
<h2 id="creating-the-project">Creating the project</h2>
<p>Paste the following prompt into the Octopus AI Assistant and run it:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Create a Script project called "13. Script App with Lifecycle", and then:</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Define a lifecycle called "Auto Deploy" that includes:</span></span>
<span class="line"><span style="color:#0451A5">  *</span><span style="color:#000000"> The "Development" phase with the "Development" environment as the first phase set to automatically deploy</span></span>
<span class="line"><span style="color:#0451A5">  *</span><span style="color:#000000"> The "Test" phase with the "Test" environment as the second phase</span></span>
<span class="line"><span style="color:#0451A5">  *</span><span style="color:#000000"> The "Production" phase with the "Production" environment as the third phase</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Define a channel to the project called "Application" that uses the "Auto Deploy" lifecycle, and make this the default channel</span></span>
<span class="line"></span></code></pre>
<p>You can now <a href="https://octopus.com/docs/releases/creating-a-release">create a release</a> and see it automatically deployed to the <code>Development</code> environment as part of the <code>Auto Deploy</code> lifecycle. This is because the first phase of the lifecycle is set to deploy automatically.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>You created a sample project with:</p>
<ul>
<li>A custom <a href="https://octopus.com/docs/releases/lifecycles">lifecycle</a> to automatically deploy to the <code>Development</code> environment when a release is created, followed by the <code>Test</code> and <code>Production</code> environments.</li>
<li>A new <a href="https://octopus.com/docs/releases/channels">channel</a> called <code>Application</code> using the <code>Auto Deploy</code> lifecycle as the default channel for the project.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>The <a href="/blog/octo-easy-mode-14-k8s">next step</a> is to create a mock Kubernetes deployment project.</p>]]></content>
    </entry>
    <entry>
      <title>Octopus Easy Mode - Channels</title>
      <link href="https://octopus.com/blog/octo-easy-mode-12-channels" />
      <id>https://octopus.com/blog/octo-easy-mode-12-channels</id>
      <published>2026-03-02T00:00:00.000Z</published>
      <updated>2026-03-02T00:00:00.000Z</updated>
      <summary>Learn how to add channels to projects</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>In the <a href="/blog/octo-easy-mode-11-community">previous post</a>, you added a community step template to the project. In this post, you’ll define project channels.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>An <a href="https://octopus.com/start">Octopus Cloud</a> account. If you don’t have one, you can sign up for a free trial.</li>
<li>The Octopus AI Assistant Chrome extension. You can install it from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</li>
</ul>
<div class="hint"><p>The Octopus AI Assistant works with an on-premises Octopus instance, but it requires extra configuration. The cloud-hosted version needs none, making it the easiest way to get started.</p></div>
<h2 id="creating-the-project">Creating the project</h2>
<p>Paste the following prompt into the Octopus AI Assistant and run it:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Create a Script project called "12. Script App with Channel", and then:</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Define a lifecycle called "Hot Fix" that includes the "Production" environment as the only phase</span></span>
<span class="line"><span style="color:#0451A5">*</span><span style="color:#000000"> Add a channel to the project called "Hot Fix" that uses the "Hot Fix" lifecycle</span></span></code></pre>
<p>You can now <a href="https://octopus.com/docs/releases/creating-a-release">create a release and deploy it</a> to the new “Hot Fix” channel. This channel deploys directly to the “Production” environment, skipping the earlier environments. This is useful for deploying urgent fixes directly to production without going through the standard deployment pipeline.</p>
<p><img src="/blog/img/octo-easy-mode-12-channels/channels.png" alt="Channels screenshot"></p>
<h2 id="what-just-happened">What just happened?</h2>
<p>You created a sample project with:</p>
<ul>
<li>A new <a href="https://octopus.com/docs/releases/channels">channel</a> called <code>Hot Fix</code> that uses a custom <a href="https://octopus.com/docs/releases/lifecycles">lifecycle</a> to deploy directly to the <code>Production</code> environment.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>The <a href="/blog/octo-easy-mode-13-lifecycles">next step</a> is to define a custom lifecycle.</p>]]></content>
    </entry>
    <entry>
      <title>Inside Platform Engineering with Matt Gowie</title>
      <link href="https://octopus.com/blog/inside-platform-engineering-matt-gowie" />
      <id>https://octopus.com/blog/inside-platform-engineering-matt-gowie</id>
      <published>2026-02-27T00:00:00.000Z</published>
      <updated>2026-02-27T00:00:00.000Z</updated>
      <summary></summary>
      <author>
        <name>Matthew Allford, Octopus Deploy</name>
      </author>
<content type="html"><![CDATA[<p>Infrastructure as code sounds simple until it isn’t. Matt Gowie, founder of IaC consulting firm MasterPoint, joined me on Inside Platform Engineering to share what he’s learned helping organizations build sustainable, scalable platforms with Terraform and OpenTofu, including some costly mistakes he sees teams make time and time again.</p>
<p>One of my favorite topics was talking with Matt about whether to do it yourself and “reinvent the wheel”, or lean on open-source and community modules for building your IaC. I can see value in both approaches and have mostly been on the side of using open source modules where possible. Your infrastructure probably isn’t that different from someone else’s. With that said, there are times when you need to consider the risks of using these modules and whether their support and community are responsive enough to meet your demands when critical changes are needed.</p>
<p>While the discussion focused on Matt’s specialty, IaC, I found all of the discussion points apply broadly across Platform Engineering.</p>
<h2 id="watch-the-episode">Watch the episode</h2>
<p>You can watch the episode with Matt below.</p>
<p><a href="https://www.youtube.com/watch?v=V2UFz9_IjAw">Inside Platform Engineering with Matt Gowie</a></p>
<h2 id="pick-one-tool-and-go-deep">Pick one tool and go deep</h2>
<p>One of the first traps Matt sees platform teams fall into is spreading their expertise across too many IaC tools. Whether it’s Terraform, Bicep, CloudFormation, or Pulumi, the instinct to keep options open actually slows teams down and breeds inconsistency. His advice is to consolidate, but do so mindfully. Vendor-specific tools like Bicep and CloudFormation lock you into a single cloud, and the moment you need to automate something outside that ecosystem (a DNS provider, a monitoring tool, a SaaS platform), you’re hacking around the edges and accumulating technical debt. Pick the tool that gives you the most reach, build expertise around it, and create practices that scale.</p>
<h2 id="stop-reinventing-the-wheel">Stop reinventing the wheel</h2>
<p>If your platform team is writing every Terraform resource by hand, you’re burning time your competitors aren’t. Matt is a strong advocate for the open source module ecosystem and pushes back on the common instinct to build everything internally. A well-maintained, focused open-source module delivers great security defaults, community-vetted patterns, and ongoing updates that most internal teams simply can’t match.</p>
<h2 id="the-hidden-cost-of-building-it-yourself">The hidden cost of building it yourself</h2>
<p>The same logic applies to the operational layer around your Platform. Many teams build their own Jenkins or GitHub Actions pipelines to run Terraform and assume it saves money because the work is done in-house. But Matt argues this rarely pencils out. At scale, managing state files, enforcing policy, handling environment-specific approvals, and maintaining all of that custom pipeline code is a significant ongoing burden, and when the person who built it leaves, that cost compounds. Matt’s take is to evaluate the vendor tooling available to solve your problem and be honest about what an engineer’s time is actually worth when measured against a vendor invoice.</p>
<p>Happy deployments!</p>
<div class="hint"><p>Inside Platform Engineering is a series of conversations with Matt Allford and a guest, bringing their own experience and perspective from the world of Platform Engineering.</p><p>You can find more episodes on <a href="https://www.youtube.com/playlist?list=PLAGskdGvlaw24Y-7jTcw09jbzsLw5uL9X">YouTube</a>.</p></div>]]></content>
    </entry>
    <entry>
      <title>Hidden Gems in Octopus Deploy: Resources You Might Have Missed</title>
      <link href="https://octopus.com/blog/hidden-gems-in-octopus-deploy" />
      <id>https://octopus.com/blog/hidden-gems-in-octopus-deploy</id>
      <published>2026-02-26T00:00:00.000Z</published>
      <updated>2026-02-26T00:00:00.000Z</updated>
      <summary>Explore some hidden gems in Octopus Deploy.</summary>
      <author>
        <name>Robert A. McCarther, II, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Octopus Deploy includes powerful features that simplify complex deployments and runbook automation. Beyond the core platform, there are resources and tools that can make your work even smoother. Some of these don’t always get much attention, so this post highlights a few hidden gems you might have missed.</p>
<h2 id="kubernetes-agent">Kubernetes Agent</h2>
<p>The <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-agent">Octopus Kubernetes Agent</a> makes it easy to connect and manage clusters with Octopus. You don’t need to open firewall rules or manage complex networking. The agent uses a secure outbound connection, so setup is quick and safe. It works a lot like an <a href="https://octopus.com/docs/infrastructure/deployment-targets/tentacle">Octopus Tentacle</a>: a small, lightweight application installed directly in the cluster.</p>
<h2 id="resilient-tentacle-communications">Resilient Tentacle communications</h2>
<p>Tentacle communication issues can be frustrating, especially in cloud or restricted networks. We published a <a href="https://octopus.com/blog/tentacle-communication-resiliency">blog post that explains our recent improvements to Tentacle resiliency</a>. These changes help Tentacles stay connected, even in less-than-ideal network conditions.</p>
<h2 id="terraform-provider--octoterra">Terraform Provider &#x26; OctoTerra</h2>
<p>Infrastructure as Code users will like the <a href="https://registry.terraform.io/providers/OctopusDeploy/octopusdeploy/latest">Octopus Terraform provider</a>. It lets you define and manage Octopus resources in Terraform. For even more control, take a look at <a href="https://octopus.com/integrations/octopus/octopus-serialize-space-to-terraform">OctoTerra</a>. It can serialize your Octopus space into Terraform configuration, making it easier to version-control and replicate your setup.</p>
<h2 id="octopus-api-examples-github-repository">Octopus API examples GitHub repository</h2>
<p>If you need to automate Octopus outside the UI, our <a href="https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master">API examples repository</a> can help. It includes ready-to-use scripts for PowerShell, Bash, Python, and more. You’ll find examples for tasks like bulk project updates and advanced deployment automation. It’s a great place to start when building your own scripts.</p>
<h2 id="octopus-samples-instance">Octopus Samples Instance</h2>
<p>Want to explore Octopus features without creating your own test environment? The <a href="https://samples.octopus.app">Octopus Samples Instance</a> is a fully functional playground. You can click around, test deployments, and see best practices in action. It’s divided into separate spaces for different use cases. This should make it easy to find examples that match the scenarios you’re working on.</p>
<h2 id="octopus-go-api-client">Octopus Go API client</h2>
<p>Curious how our Go API client works behind the scenes, or want to contribute improvements yourself? The open-source <a href="https://github.com/OctopusDeploy/go-octopusdeploy">go-octopusdeploy</a> repository is a great place to start. It shows how the Octopus Go API client works and lets you extend, automate, or contribute to the platform.</p>
<h2 id="kubernetes-yaml--github-actions-generators">Kubernetes YAML &#x26; GitHub Actions generators</h2>
<p>Speed up your setup with a pair of helpful online generators from Octopus:</p>
<ul>
<li><a href="https://k8syaml.com/">Kubernetes YAML generator</a> to create clean, valid Kubernetes manifests.</li>
<li><a href="https://githubactionsworkflowgenerator.octopus.com/#/">GitHub Actions workflow generator</a> to generate GitHub Actions workflows for Octopus deployments.</li>
</ul>
<p>These tools save you time and help reduce errors when building or automating your CI/CD pipelines.</p>
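<p>To give a taste of what the GitHub Actions generator produces, a workflow that creates an Octopus release typically looks something like the sketch below. Treat the action name, version, secret names, and project name as assumptions, and prefer the generator’s own output:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code># Illustrative sketch; generate your own workflow with the generator above
name: Deploy
on:
  push:
    branches: [main]
jobs:
  create-release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create a release in Octopus   # action name/version are assumptions
        uses: OctopusDeploy/create-release-action@v3
        with:
          project: 'My Web App'             # illustrative project name
        env:
          OCTOPUS_URL: ${{ secrets.OCTOPUS_URL }}
          OCTOPUS_API_KEY: ${{ secrets.OCTOPUS_API_KEY }}
          OCTOPUS_SPACE: 'Default'
</code></pre>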
<h2 id="conclusion">Conclusion</h2>
<p>Octopus Deploy is full of hidden gems. We hope these resources spark new ideas for your automation journey.</p>
<p>Did we miss one of your favorites? We’d love to hear about it! Share it in the <a href="https://octopususergroup.slack.com/signup#/domain-signup">Octopus Community Slack</a> or explore more in our <a href="https://octopus.com/docs">documentation</a>. Whether you’re a seasoned user or just getting started, there’s always something new to discover.</p>
<p>Happy deployments!</p>]]></content>
    </entry>
    <entry>
      <title>Octopus Easy Mode - Community Step Templates</title>
      <link href="https://octopus.com/blog/octo-easy-mode-11-community" />
      <id>https://octopus.com/blog/octo-easy-mode-11-community</id>
      <published>2026-02-25T00:00:00.000Z</published>
      <updated>2026-02-25T00:00:00.000Z</updated>
      <summary>Learn how to add community step templates to a project</summary>
      <author>
        <name>Matthew Casperson, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>In the <a href="/blog/octo-easy-mode-10-project-templates">previous post</a>, you defined tenant templates directly against a project. In this post, you’ll create a project that includes a community step template.</p>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>An <a href="https://octopus.com/start">Octopus Cloud</a> account. If you don’t have one, you can sign up for a free trial.</li>
<li>The Octopus AI Assistant Chrome extension. You can install it from the <a href="https://chromewebstore.google.com/detail/octopus-ai-assistant/agfpjjibnieiihjoehophlbamcifdfha">Chrome Web Store</a>.</li>
</ul>
<div class="hint"><p>The Octopus AI Assistant works with an on-premises Octopus instance, but it requires extra configuration. The cloud-hosted version needs none, making it the easiest way to get started.</p></div>
<h2 id="creating-the-project">Creating the project</h2>
<p>Paste the following prompt into the Octopus AI Assistant and run it:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="markdown"><code><span class="line"><span style="color:#000000">Create a Script project called "11. Script App with Community Step Template". </span></span>
<span class="line"><span style="color:#000000">Modify the deployment process to include the community step template with the website "https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5" as the first step with the name "Calculate Deployment Mode".</span></span></code></pre>
<p>This adds the community step template found <a href="https://library.octopus.com/step-templates/d166457a-1421-4731-b143-dd6766fb95d5">in the Octopus community step template library</a> as the first step in the deployment process. Community step templates use the URL as an ID to uniquely identify them.</p>
<p>Community step templates are contributed by the Octopus community and can be found in the <a href="https://library.octopus.com">Octopus Step Template Library</a>. They provide additional functionality that you can easily add to your projects.</p>
<p>You can now <a href="https://octopus.com/docs/releases/creating-a-release">create a release and deploy it</a> to the first environment. The step will print the deployment details, indicating whether it is a new deployment or a redeployment, along with other helpful information.</p>
<h2 id="what-just-happened">What just happened?</h2>
<p>You created a sample project with:</p>
<ul>
<li>A step sourced from the community step template library that calculates and prints deployment mode information.</li>
</ul>
<h2 id="whats-next">What’s next?</h2>
<p>The <a href="/blog/octo-easy-mode-12-channels">next step</a> is to define project channels.</p>]]></content>
    </entry>
    <entry>
      <title>Continuous Delivery Office Hours Ep.1: Continuous Delivery should be your top priority</title>
      <link href="https://octopus.com/blog/continuous-delivery-office-hours-e1" />
      <id>https://octopus.com/blog/continuous-delivery-office-hours-e1</id>
      <published>2026-02-24T00:00:00.000Z</published>
      <updated>2026-02-24T00:00:00.000Z</updated>
      <summary>Continuous Delivery promotes low-risk releases, faster time-to-market, higher quality, lower costs, better products, and happier teams.</summary>
      <author>
        <name>Steve Fenton, Octopus Deploy</name>
      </author>
      <content type="html"><![CDATA[<p>Continuous Delivery promotes low-risk releases, faster time-to-market, higher quality, lower costs, better products, and happier teams. Software is at the core of everything a business does today, so organizations must be able to respond to customer needs more quickly than ever.</p>
<p>Taking a quarter or a month to deliver new functionality puts companies behind their competition and prevents them from serving their customers. Few practices offer as much return on investment as Continuous Delivery, but many organizations continue to resist it, often making their deployment problems worse in the process.</p>
<p>Understanding why Continuous Delivery matters and how to implement it effectively can transform not only your deployment process but also your entire software development approach.</p>
<h2 id="watch-the-episode">Watch the episode</h2>
<p>You can watch the episode below, or read on to find some of the key discussion points.</p>
<p><a href="https://www.youtube.com/watch?v=V67ASNnUGDs">Watch Continuous Delivery Office Hours Ep.1</a></p>
<h2 id="what-is-continuous-delivery">What is Continuous Delivery?</h2>
<p>At its core, Continuous Delivery means you can deploy your software at any time. A good indication of whether a team practices Continuous Delivery is whether they prioritize work that keeps software deployable. Other development styles usually continue working on features and return to deployability issues later.</p>
<p>That means teams must have fast, automated feedback for every change, highlighting when the software has an issue that would prevent its deployment. Deployments to all environments must be automated, with artifacts and deployment processes pinned to avoid unexpected changes between deployments.</p>
<h2 id="the-big-three-time-risk-and-money">The big three: Time, risk, and money</h2>
<p>The longer the interval between deployments, the more risk you accumulate and the longer you delay the value those changes will realize. If you wait six months between deployments, you’re more likely to get caught in a firefighting loop, spending more time pinpointing bug sources because of the volume of changes.</p>
<p>Crucially, until you place new features in users’ hands, you accumulate market risk that the changes won’t solve the underlying problem in a way users accept.</p>
<h2 id="the-deployment-paradox">The deployment paradox</h2>
<p>Human psychology works against us when deployments go wrong. Having waited six months to deploy, the pain of the firefighting stage and the increased risk of deploying large batches of changes mean people develop an aversion to deployments.</p>
<p>When a process is stressful and goes wrong, we naturally want to do it less often. You might think: “If we do fewer deployments, we’ll have less pain.” But this is precisely backwards.</p>
<p>Decreasing deployment frequency increases batch size, making the next deployment more likely to go wrong and cause pain. This is like avoiding the dentist after a painful checkup; the longer you leave it, the worse the next visit will be.</p>
<p>Risk-averse organizations have instincts that work against their goal of safety. The solution isn’t to deploy less often; it’s to deploy more frequently with smaller batches of changes.</p>
<h2 id="keeping-software-deployable-during-feature-development">Keeping software deployable during feature development</h2>
<p>Another objection to Continuous Integration and Delivery is that features take time to build, so you can’t deploy while a feature is in flight. With an endless stream of overlapping feature development, this reasoning leads to never deploying (or, more likely, to work taking place in long-lived branches).</p>
<p>The solution is to separate deployments from feature release. Trunk-based development (integrating changes into the main branch every day, often many times each day) and feature toggles make it possible to work from a shared code base without making in-flight features visible to users.</p>
<p>There are many benefits to feature toggles beyond supporting Continuous Delivery. They also let you share features early with specific user segments or roll them out progressively rather than all at once.</p>
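<p>As a rough illustration (a minimal sketch in Python; the toggle names and the in-memory toggle store are hypothetical, not a specific feature-flag product or an Octopus API), a feature toggle is essentially a guarded branch in the code, so an in-flight feature can ship in a deployment without being visible to users:</p>

```python
# Minimal feature-toggle sketch. The toggle store and feature names
# are hypothetical, for illustration only.

FEATURE_TOGGLES = {
    "new-checkout": False,  # deployed to production, but hidden until switched on
    "dark-mode": True,      # already released to users
}

def is_enabled(feature: str) -> bool:
    """Return whether a feature is switched on; unknown features default to off."""
    return FEATURE_TOGGLES.get(feature, False)

def checkout() -> str:
    # The new code path ships with every deployment, but only runs once
    # the toggle is flipped, separating deployment from release.
    if is_enabled("new-checkout"):
        return "new checkout flow"
    return "classic checkout flow"
```

<p>In practice, teams usually read toggle state from a configuration store or a dedicated feature-flag service rather than a hard-coded dictionary, so features can be switched on (or off) without redeploying.</p>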
<h2 id="changing-what-deployment-success-means">Changing what deployment success means</h2>
<p>When you separate deployment from release, you also transform how you measure deployment success. You’re no longer testing whether new functionality works during deployment. You’re only verifying that the application is running and healthy. This focus makes deployments faster and less stressful.</p>
<p>Feature toggles reduce the stress and burden of deployments because you’ll no longer miss deployment issues while checking functionality or miss functionality problems while monitoring deployments. Separating these concerns means each gets proper attention.</p>
<h2 id="solving-dependency-challenges">Solving dependency challenges</h2>
<p>Feature toggles also address one of the most complex problems in microservices: deployment dependencies. Despite the promise of independently deployable services, teams often create elaborate deployment choreographies to ensure services are deployed in a specific order. Sometimes they give up entirely and deploy everything simultaneously, accepting unpredictable behavior during the deployment or directing users to a holding page until it’s complete.</p>
<p>When deployments form a chain of dependencies, the architecture isn’t truly microservices but a distributed monolith. Real microservices should deploy independently. Feature toggles make this possible. Deploy all services when ready, then switch on functionality once dependencies are in place.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Continuous Delivery isn’t just about deploying more often. It’s about reducing risk through smaller changes, separating deployment from release, maintaining deployable code at all times, and giving teams the confidence to move quickly and safely.</p>
<p>The instinct to slow down after problems is natural, but it’s counterproductive. The path to safer deployments runs through more frequent deployments, not fewer. Organizations that embrace this counterintuitive truth gain a competitive advantage through faster feedback, lower risk, and ultimately, better software.</p>
<p>Happy deployments!</p>
<div class="hint"><p>Continuous Delivery Office Hours is a series of conversations about software delivery, with Tony Kelly, Bob Walker, and Steve Fenton.</p><p>You can find more episodes on <a href="https://www.youtube.com/playlist?list=PLAGskdGvlaw3CrxkUOAMmiy928lr5D4oh">YouTube</a>, <a href="https://podcasts.apple.com/us/podcast/continuous-delivery-office-hours/id1872101651">Apple Podcasts</a>, and <a href="https://pca.st/hwjaox59">Pocket Casts</a>.</p></div>]]></content>
    </entry>
</feed>