<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Benjamin Cane on Medium]]></title>
        <description><![CDATA[Stories by Benjamin Cane on Medium]]></description>
        <link>https://medium.com/@madflojo?source=rss-96013faddf78------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*mu9eLLugJ68QrlRwLPBmwA@2x.jpeg</url>
            <title>Stories by Benjamin Cane on Medium</title>
            <link>https://medium.com/@madflojo?source=rss-96013faddf78------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 11 Apr 2026 01:47:06 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@madflojo/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Generating Code Faster Is Only Valuable If You Can Validate Every Change With Confidence]]></title>
            <link>https://itnext.io/generating-code-faster-is-only-valuable-if-you-can-validate-every-change-with-confidence-5148a37c2320?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/5148a37c2320</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 26 Mar 2026 00:00:28 GMT</pubDate>
            <atom:updated>2026-04-03T20:26:53.854Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*thcsa3J0HrkMymmk" /><figcaption>Photo by <a href="https://unsplash.com/@alexkondratiev?utm_source=medium&amp;utm_medium=referral">Alex</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Generating code faster is only valuable if you can validate every change with confidence.</p><p>Software engineering has never really been about writing code. Coding is often the easy part.</p><p>Testing is harder, and many teams struggle with it.</p><p>As tools make it easier to generate code quickly, that gap widens. If you can produce changes faster than you can validate them, you eventually create more code than you can safely operate.</p><p>Which begs the question: What does good testing actually look like?</p><h3>🔍 What Good Looks Like</h3><p>One of the biggest challenges I see is that teams struggle to understand what “good” testing means and never define it.</p><p>Pipelines are often built early in a project, when the team is small, and they rarely keep pace with the system and organization as they grow.</p><p>My starting principle is simple:</p><ul><li>At pull request time, you should have strong confidence that the change will not break the service or platform being modified.</li><li>Within a day of merging, you should have strong confidence that the change hasn’t broken the full customer journey that the platform supports.</li></ul><h3>🔁 On Pull Request</h3><p>For backend platforms, I like to see three levels of automated testing before merging.</p><h3>Code Tests (Unit Tests)</h3><p>This level is the foundation. Unit tests validate internal logic, error handling, and edge cases. Techniques such as fuzz testing and benchmarking also reveal issues early. 
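</p><p>As a minimal sketch, a table-driven Go unit test at this level might look like the following (the ParsePort function and its cases are hypothetical, for illustration only):</p>

```go
package main

import (
	"errors"
	"fmt"
)

// ParsePort is a hypothetical function under test: it validates a TCP port number.
func ParsePort(p int) (int, error) {
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	// The table covers the happy path, both boundaries, and invalid input.
	cases := []struct {
		in      int
		wantErr bool
	}{
		{in: 8080, wantErr: false},
		{in: 1, wantErr: false},
		{in: 65535, wantErr: false},
		{in: 0, wantErr: true},
		{in: 70000, wantErr: true},
	}
	for _, c := range cases {
		_, err := ParsePort(c.in)
		if (err != nil) != c.wantErr {
			panic(fmt.Sprintf("ParsePort(%d): unexpected error state", c.in))
		}
	}
	fmt.Println("all cases passed")
}
```

<p>Each row adds a boundary or error case next to the happy path, which is exactly where unit-level validation earns its keep.</p><p>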
As the test pyramid tells us, this is where the majority of testing and logic validation should take place.</p><h3>Service-Level Functional Tests</h3><p>Too many teams stop at unit tests for pull requests. Functional tests should also be run in CI for every pull request.</p><p>Services should be tested in isolation with functional tests. Dependencies can be mocked, but things like databases should ideally run for real (Dockerized).</p><p>This is where API contracts are validated and regressions can be identified without wondering whether the issue came from this change or another service.</p><h3>Platform-Level Functional Tests</h3><p>Testing a service alone isn’t enough. Changes can break upstream or downstream dependencies. Platform-level tests spin up the entire platform in CI and validate that services interact correctly.</p><p>These tests ensure the platform continues to work as a system.</p><p>For platforms with strict latency or resiliency requirements, I recommend introducing light stress tests at both the service and platform levels. These aren’t full performance tests, but they act as early indicators of performance regressions.</p><p>If these three layers pass, you should have high confidence in the change. But not complete confidence.</p><h3>🌙 Nightly Testing</h3><p>Some failures take time to appear.</p><p>Memory leaks, performance degradation, and cross-platform integration issues may not show up immediately.</p><p>That’s why I like to run a nightly build (or every few hours).</p><p>This environment runs end-to-end customer journey tests, performance tests, and chaos tests.</p><p>These are typically the same tests used during release validation, but running them continuously accelerates feedback. 
If something breaks, you learn about it early, before the pressure of a release.</p><h3>🧠 Final Thoughts</h3><p>There is no universal approach everyone can follow.</p><p>Different systems have different needs; mission-critical systems may focus heavily on correctness and resilience. Non-mission-critical systems may focus more on validating core functionality.</p><p>Your testing strategy depends heavily on architecture, dependencies, and operational constraints. But if your organization is increasing its ability to generate code quickly, your testing capabilities must evolve at the same pace.</p><p>AI-generated code becomes much easier to review when you already have high confidence in your testing.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-26/"><em>https://bencane.com</em></a><em> on March 26, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5148a37c2320" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/generating-code-faster-is-only-valuable-if-you-can-validate-every-change-with-confidence-5148a37c2320">Generating Code Faster Is Only Valuable If You Can Validate Every Change With Confidence</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When You Go to Production with gRPC, Make Sure You’ve Solved Load Distribution First]]></title>
            <link>https://itnext.io/when-you-go-to-production-with-grpc-make-sure-youve-solved-load-distribution-first-2f5042bfe4f1?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/2f5042bfe4f1</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 19 Mar 2026 00:00:09 GMT</pubDate>
            <atom:updated>2026-03-28T14:58:00.000Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gEuw3nu4Jgyzzbjxg_JQ3w@2x.jpeg" /><figcaption>Photo by <a href="https://www.buymeacoffee.com/mikevandenbos">Mike van den Bos</a> on <a href="https://unsplash.com/?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>When you go to production with gRPC, make sure you’ve solved load distribution first.</p><p>I was recently talking with another engineer who is rolling out gRPC into production. He asked what the biggest gotchas were.</p><p>My first answer: Load Distribution.</p><h3>🚦 HTTP/1 vs. HTTP/2</h3><p>Most teams first implement services using REST over HTTP/1 and then migrate to gRPC as they seek its performance benefits.</p><p>That shift introduces a subtle but important change in how traffic gets distributed across instances.</p><p>With HTTP/1, requests are generally tied closely to connections. A client opens a connection, sends a request, waits for the response, and then sends another (if connection re-use is enabled).</p><p>HTTP/2 (which underpins gRPC) works differently.</p><p>HTTP/2 multiplexes requests over persistent connections. A client can send many requests over the same connection without waiting for responses.</p><p>This is one of the reasons gRPC provides a performance boost, but it can create unexpected load distribution issues.</p><p>If your infrastructure isn’t built for an HTTP/2 world, you’ll quickly find traffic becoming unevenly distributed.</p><h3>🏗️ Infrastructure Support</h3><p>In an HTTP/1 world, load balancing at the connection (Layer 4) level often works well enough. 
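</p><p>A toy, pure-Go simulation (the backend count and request volumes are invented for illustration) shows the difference between balancing connections and balancing requests:</p>

```go
package main

import "fmt"

// connectionLevel models a Layer 4 balancer: each client's single long-lived
// connection is pinned to one of three backends at connect time, so every
// request on that connection lands on the same backend.
func connectionLevel(reqs []int) []int {
	counts := make([]int, 3)
	for i, n := range reqs {
		counts[i%3] += n
	}
	return counts
}

// requestLevel models client-side (or HTTP/2-aware) load balancing: each
// individual request is spread round-robin across the three backends.
func requestLevel(reqs []int) []int {
	counts := make([]int, 3)
	next := 0
	for _, n := range reqs {
		for j := 0; j < n; j++ {
			counts[next]++
			next = (next + 1) % 3
		}
	}
	return counts
}

func main() {
	reqs := []int{1000, 10} // one chatty client, one quiet client
	fmt.Println("connection-level:", connectionLevel(reqs)) // one backend takes nearly all traffic
	fmt.Println("request-level:   ", requestLevel(reqs))    // traffic spreads almost evenly
}
```

<p>With one chatty client and one quiet client, pinning each long-lived connection to a backend leaves instances idle, while request-level balancing spreads the same traffic almost evenly.</p><p>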
But with HTTP/2, connections live much longer and carry far more concurrent traffic.</p><p>If your load balancer distributes traffic based only on connections, a busy client may hammer a single instance while others sit idle.</p><p>Unfortunately, much of the infrastructure in use today still doesn’t fully support HTTP/2-aware load balancing.</p><p>Depending on your environment, your load balancers or ingress controllers may operate primarily at Layer 4. That works fine for HTTP/1, but once you introduce HTTP/2 via gRPC, the effectiveness of connection-based balancing drops significantly.</p><h3>⚙️ Supporting gRPC</h3><p>To get the most out of gRPC, the best approach is to use infrastructure that understands HTTP/2 and load-balances requests rather than just connections.</p><p>If that’s not possible, another option is client-side load balancing.</p><p>Many gRPC clients support opening a pool of connections and distributing requests across them. You still benefit from HTTP/2’s persistent connections, but you avoid concentrating all traffic on a single backend instance.</p><h3>🧠 Final Thoughts</h3><p>gRPC offers many advantages, including performance, strongly typed contracts, and efficient communication. 
But it also introduces different networking behavior.</p><p>If you’re rolling out gRPC into production, make sure your load balancing infrastructure is ready for an HTTP/2 world.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-19/"><em>https://bencane.com</em></a><em> on March 19, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2f5042bfe4f1" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/when-you-go-to-production-with-grpc-make-sure-youve-solved-load-distribution-first-2f5042bfe4f1">When You Go to Production with gRPC, Make Sure You’ve Solved Load Distribution First</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You may be building for availability, but are you building for resiliency?]]></title>
            <link>https://itnext.io/you-may-be-building-for-availability-but-are-you-building-for-resiliency-c49f6e45c883?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/c49f6e45c883</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 12 Mar 2026 00:00:50 GMT</pubDate>
            <atom:updated>2026-03-28T14:27:23.508Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vdkYMgPbZOiqOFu7DZDM2w@2x.jpeg" /><figcaption>Photo by <a href="https://instagram.com/rawan_aahmed?igshid=YmMyMTA2M2Y=">Rawan Ahmed</a> on <a href="https://unsplash.com/?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>You may be building for availability, but are you building for resiliency? Many teams design for availability. Far fewer design for resiliency.</p><p>A concept that took me a while to really grasp is that building highly available systems and highly resilient systems is not the same thing.</p><p>The difference is how the system reacts to failure.</p><h3>🚄 High Availability</h3><p>When you build for high availability, the goal is simple: ensure there is always another path.</p><p>If something fails, traffic can be redirected somewhere else.</p><p>For example, a service might run across multiple availability zones or regions. If one fails, traffic is routed to another.</p><p>Detecting failures and redirecting traffic are core elements of building for high availability.</p><p>Availability is about rerouting traffic when something fails.</p><h3>🚂 High Resiliency</h3><p>Building for resiliency is different.</p><p>The solution to failure isn’t another path; it’s how the system handles the error.</p><p>When a dependency fails, the decision becomes:</p><p>Do we retry? Do we continue without that dependency? Do we degrade functionality? Do we stop processing altogether?</p><p>Resiliency is about defining what happens when things go wrong.</p><p>Sometimes you can continue processing. 
Sometimes you can defer work and fix it later.</p><p>Resiliency is absorbing failure instead of avoiding it.</p><h3>🧩 A Simple Example</h3><p>When you design systems with resiliency in mind, you tend to treat dependencies differently.</p><p>A simple example is configuration.</p><p>Many systems use distributed configuration services so that runtime behavior can change without redeployment.</p><p>But that configuration service then becomes a dependency. To avoid turning it into a hard dependency, many systems cache the configuration in memory.</p><p>When updates occur, the system fetches the new configuration and switches only after it’s fully loaded into memory.</p><p>If configuration refresh fails, the system continues operating with the last known configuration. Transient failures don’t bring the system down.</p><p>That’s resiliency.</p><h3>🧠 Final Thoughts</h3><p>When I talk about non-functional requirements, you’ll hear me say:</p><p>“Highly available and resilient systems”</p><p>I separate them intentionally because the approaches are different.</p><p>Availability ensures there is always another path. Resiliency ensures the system can continue operating when failures occur.</p><p>Availability routes around failure. Resiliency survives failure. You need both.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-12/"><em>https://bencane.com</em></a><em> on March 12, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c49f6e45c883" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/you-may-be-building-for-availability-but-are-you-building-for-resiliency-c49f6e45c883">You may be building for availability, but are you building for resiliency?</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When your coding agent doesn’t understand your project, you’ll get junk]]></title>
            <link>https://itnext.io/when-your-coding-agent-doesnt-understand-your-project-you-ll-get-junk-8e0d789986fd?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e0d789986fd</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 00:00:28 GMT</pubDate>
            <atom:updated>2026-03-14T19:57:24.648Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>When your coding agent doesn’t understand your project, you’ll get junk.</p><p>Junk in, junk out.</p><p>One of the best ways to get more from agentic coding tools is to give the agent context.</p><p>The more an agent understands your project, the better its work will be.</p><p>If you ask an agent to add a method to a class, it will. It might read the file. It might infer some structure. But it won’t understand the project’s intent.</p><p>If you asked a human engineer to make the same change, they would have questions.</p><p>What is the purpose of this project? How is it used? What constraints exist?</p><p>If they skipped that step, you’d get exactly what you asked for, even if it was wrong.</p><p>That’s the same challenge many face with coding agents. A lack of context means it only does what it’s told — which isn’t always what you actually need.</p><p>But when it understands a project, it operates with far more clarity.</p><h3>🧙‍♂️ My “Old School” Method</h3><p>Before I start serious work with an agent, I have it learn the project.</p><p>Read the docs 📚 Review the codebase ⚙️ Understand the architecture 🏙️ Learn how to build, test, and run the project locally 👩‍🔧</p><p>I even ask the agent to summarize its understanding back to me.</p><p>This started as a saved prompt, turned into a slash command, and is now a skill.</p><p>This step is a huge productivity boost.</p><h3>🤖 Agents Files (AGENTS.md)</h3><p>Over the past year, an open standard for providing agents with structured context has emerged.</p><p>Instead of prompting the agent to rediscover your project every time, document that context once — and the agent will reference it going forward.</p><p>Most modern agents support an Agents.md file and reference it during each interaction.</p><h3>💽 What Goes in an Agents File?</h3><p>Think of the Agents file as onboarding 
documentation, but for an agent.</p><p>Project context:</p><ul><li>Purpose and architecture</li><li>Project structure and key documentation</li><li>How to build, test, and run the project locally</li></ul><p>Team context:</p><ul><li>Code style preferences</li><li>Testing philosophy (TDD or YOLO)</li><li>Tech stack constraints</li></ul><p>Any tribal knowledge you’d expect a new team member to learn belongs in an Agents file.</p><h3>👨‍💻 Personal Agent Files</h3><p>Many tools also support a personal Agents file in your home directory.</p><p>That’s where your workflow preferences live. Are you a two-space tabs person? Do you want your agent to prefer table tests?</p><p>If you have preferences that are unique to you but should apply to every project, they go in the personal Agents file.</p><h3>🧠 Final Thoughts</h3><p>Using an Agents file dramatically improves agent quality.</p><p>Even then, I still use my “learn-this” slash command — sometimes that extra context makes a difference.</p><p>If you wouldn’t drop a new engineer into a project without context, don’t do it to your agents.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-05/"><em>https://bencane.com</em></a><em> on March 5, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8e0d789986fd" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/when-your-coding-agent-doesnt-understand-your-project-you-ll-get-junk-8e0d789986fd">When your coding agent doesn’t understand your project, you’ll get junk</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You can have 100% Code Coverage and still have ticking time bombs in your code.]]></title>
            <link>https://itnext.io/you-can-have-100-code-coverage-and-still-have-ticking-time-bombs-in-your-code-b66a49fd955d?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/b66a49fd955d</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 26 Feb 2026 00:00:03 GMT</pubDate>
            <atom:updated>2026-03-07T22:17:49.850Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>You can have 100% Code Coverage and still have ticking time bombs in your code. 💣</p><p>I was listening to a team recently, and an engineer was discussing how a coding agent added additional tests to a project that already had 100% code coverage.</p><p>The conversation reminded me that coverage is directional and often mistaken for quality. Just because your coverage shows 100% doesn’t mean your software is fully tested.</p><h3>👨‍🏫 Understanding How Coverage Is Measured</h3><p>Code Coverage measures the percentage of executable lines that run during code tests. Executed doesn’t mean well-tested.</p><p>Just because every function runs doesn’t mean it’s free of logic errors or safe.</p><h3>😃 Happy Path Testing</h3><p>A common challenge teams face with testing is focusing too much on the happy path.</p><p>Suppose you have a function that accepts an array. In your tests, you always pass 5 elements — because that’s the expected usage. Coverage shows all branches executed. You’re good, right?</p><p>What happens if you pass 4 elements? Or 0?</p><p>If you never test fewer than 5, how do you know? You may say: “But wait, it’s only ever called with 5 elements.” That may be true, for now.</p><h3>⚠️ Protecting Against Your Future Self</h3><p>Code is rarely static; someone will come along and change things. That might be you, it might be someone else.</p><p>Eventually someone changes that function. Will they add tests for new edge cases? Maybe. Assume they won’t.</p><p>When you write tests, don’t just focus on how you know a function is going to be used; also include tests that misuse the function.</p><p>Rather than sending an array with 5 elements, send one with 4, 0, and send a nil value.</p><p>Rather than sending strings that match an expected pattern, send junk that doesn’t.</p><p>Does the function still behave correctly? 
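</p><p>A minimal sketch of such misuse tests in Go (the Sum function and its contract are hypothetical):</p>

```go
package main

import "fmt"

// Sum is a hypothetical function under test: it totals a slice of readings.
// The boolean distinguishes "no input" from a legitimate zero total, making
// the empty and nil cases an explicit part of the contract.
func Sum(readings []int) (int, bool) {
	if len(readings) == 0 {
		return 0, false // handle empty and nil explicitly instead of a silent zero
	}
	total := 0
	for _, r := range readings {
		total += r
	}
	return total, true
}

func main() {
	// The happy path (5 elements) plus the misuse cases: 4 elements, 0, and nil.
	cases := [][]int{{1, 2, 3, 4, 5}, {1, 2, 3, 4}, {}, nil}
	for _, c := range cases {
		total, ok := Sum(c)
		fmt.Println(len(c), total, ok)
	}
}
```

<p>The empty and nil cases force a decision about what the function should do outside the happy path, instead of leaving it to whoever changes the caller next.</p><p>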
Should it?</p><p>The more you test outside the happy path, the more resilient your code becomes — and the less likely it is to break later.</p><h3>🧠 Final Thoughts</h3><p>Code coverage is a guide; don’t let it give you false confidence. Test the happy path and the unexpected ones. Validate function outputs against the inputs you provide.</p><p>100% Coverage is easy. Writing reliable code is not.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-02-26/"><em>https://bencane.com</em></a><em> on February 26, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b66a49fd955d" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/you-can-have-100-code-coverage-and-still-have-ticking-time-bombs-in-your-code-b66a49fd955d">You can have 100% Code Coverage and still have ticking time bombs in your code. 💣</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting More Out of Agentic Coding Tools]]></title>
            <link>https://itnext.io/getting-more-out-of-agentic-coding-tools-177df02690ab?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/177df02690ab</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 19 Feb 2026 00:00:41 GMT</pubDate>
            <atom:updated>2026-02-28T23:38:42.251Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>Are you getting the most out of Agentic Coding Tools?</p><p>Software engineering is changing fast.</p><p>Agentic coding tools became widely available last year, and if you’re not using them today, you’re already behind. But many still struggle to move beyond the “fancy chat” experience.</p><p>Just like any tool in our engineering tool belts, knowing how to use it effectively matters.</p><h3>🤖 Agents Are More Than A Better Chat</h3><p>Last year, most were using tab-complete with a useful chat interface where you could ask questions, get suggestions, and maybe copy/paste into your code.</p><p>But agents can do much more than make suggestions — they can understand your codebase and act.</p><p>Instead of asking an agent:</p><blockquote><em>“Can you suggest additional tests?”</em></blockquote><p>Tell your agent:</p><blockquote><em>“Create additional test cases, then run make tests and validate they pass.”</em></blockquote><p>An agent can create tests, run them, inspect failures, adjust the implementation, and re-run the suite until it passes.</p><p>This isn’t about suggestions anymore; agents have more autonomy.</p><p>I think of coding agents as assistants working toward a shared goal. They do some work, you do some, and you iterate together.</p><h3>🏆 Moving from Direction to Outcomes</h3><p>A big mental shift is moving away from simple directions to defining an outcome with guidance &amp; guardrails.</p><p>Agents don’t just perform a single task; they can execute multiple steps (and even parallelize them). 
You don’t need to spoon-feed each directive one by one.</p><p>Instead, define the outcome you want, along with guidance and guardrails.</p><p>The clearer you are on the outcomes, constraints, and context around what you are trying to do, the better the agent will perform.</p><h3>📋 Examples: Real-world tasks I’ve asked Agents to handle</h3><blockquote><em>“Using the existing DB Driver X as a reference, create a set of table tests for driver Y. The tests should be structured similarly to the existing driver, surface any logic issues, concurrency issues, and act as a clear insurance against the defined interface.”</em></blockquote><blockquote><em>“Update CI workflows to Go 1.26.0, find and update any references to 1.25.6, then run tests to ensure everything still builds and passes”</em></blockquote><p>I also use agents for mundane work like git commits and opening pull requests. They consistently produce better commit messages and PR descriptions than I would.</p><p>Agents don’t always get it exactly right, but with a bit of feedback and occasional adjustment, you can get a lot done quickly.</p><p>Avoid going down the rabbit hole of endless refinement; sometimes it’s better to reset with a clearer prompt.</p><h3>👨‍🏫 Context is Key</h3><p>If you want the best results from agents, you need to give them context.</p><p>Before I do serious work on a project, I have the agent:</p><ul><li>Read the Docs 📚</li><li>Review the Architecture 🏙️</li><li>Understand the Project Structure 📐</li><li>Understand how to build, test, and run the application locally 👩‍🔧</li></ul><p>The same steps that a human would take. Agents are no different.</p><p>(I’ll dive deeper into Agent files, skills, and effective ways to provide more context in a future post)</p><h3>🧠 Final Thoughts</h3><p>Engineers are doing amazing things with agents, and new capabilities are being added daily. 
But you don’t need to be at the bleeding edge to get more out of them (I certainly am not).</p><p>Don’t worry about the hype. Understand what these tools can do; small adjustments in how you use them can drastically change what you get back.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-02-19/"><em>https://bencane.com</em></a><em> on February 19, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=177df02690ab" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/getting-more-out-of-agentic-coding-tools-177df02690ab">Getting More Out of Agentic Coding Tools</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why is Infrastructure-as-Code so important? Hint: It’s correctness]]></title>
            <link>https://itnext.io/why-is-infrastructure-as-code-so-important-hint-its-correctness-b59242f70659?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/b59242f70659</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[software-engineering]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 12 Feb 2026 00:00:54 GMT</pubDate>
            <atom:updated>2026-02-21T20:06:45.704Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>Why is Infrastructure-as-Code so important? Hint: It’s correctness.</p><p>I’ve worked on many systems in my career, and one thing that I’ve noticed is that those that leverage infrastructure-as-code tend to be more stable than those that don’t.</p><h3>🤔 But wait, isn’t everyone using IaC these days?</h3><p>You may be thinking, “Why are we still talking about IaC in 2026? Isn’t this just the de facto standard at this point?”</p><p>My hope is yes, everyone does this, but I’m sure many don’t invest the time into it.</p><p>I’m not here to tell you to use IaC; I’m here to tell you why it’s important, and it’s not necessarily about the speed of deployment.</p><h3>🏎️ Fast is great, but it’s not the biggest benefit</h3><p>A very clear and correct reason people leverage IaC is the speed of infrastructure provisioning.</p><p>Provisioning with IaC is much faster; it enables you to scale quickly and lets you do cool things like ephemeral environments.</p><p>But the biggest benefit of IaC, in my mind, is correctness.</p><h3>⚠️ IaC reduces human error</h3><p>Humans make mistakes. When you ask humans to click the same buttons in the same sequence every time, you’ll get mixed results.</p><p>Steps get missed — especially when time passes or people rely on memory instead of process.</p><p>Documentation helps, but there are those of us who think, “I’ve done this a million times, I don’t need instructions.”</p><p>This attitude is the same reason one of my kids’ desks wobbles and the other one doesn’t…</p><p>IaC is a contract. 
Once defined, every environment is created from the same source of truth.</p><h3>✅ Consistency is essential to production stability</h3><p>The consistency of IaC is what brings production stability.</p><p>When your performance testing environment matches production, your tests become more accurate.</p><p>If one service has a larger memory footprint in testing than it does in production, you might find yourself surprised by out-of-memory errors, especially if heap sizes are configured based on your test environment and not your production environment (because, of course, they would be the same, right?).</p><p>When I come across platforms that use IaC, I see fewer mistakes and fewer incorrect assumptions. And production tends to be more stable, at least with respect to infrastructure and capacity-related issues.</p><h3>🧠 Final Thoughts</h3><p>So, to answer the question, why is IaC so important? It’s not the speed of provisioning; it’s the correctness of the environments.</p><p>In production systems, correctness beats speed every time.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-02-12/"><em>https://bencane.com</em></a><em> on February 12, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b59242f70659" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/why-is-infrastructure-as-code-so-important-hint-its-correctness-b59242f70659">Why is Infrastructure-as-Code so important? Hint: It’s correctness</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Optimizing the team’s workflow can be more impactful than building business features]]></title>
            <link>https://itnext.io/optimizing-the-teams-workflow-can-be-more-impactful-than-building-business-features-5ebf7645b538?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/5ebf7645b538</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 05 Feb 2026 00:00:18 GMT</pubDate>
            <atom:updated>2026-02-16T21:05:10.047Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>Optimizing the team’s workflow can be more impactful than building business features. It defies logic, but it’s true.</p><p>I work with and talk to a lot of engineers, and to explain my point, I’ll describe two engineers on the same team.</p><h3>💪 Engineer 1</h3><p>The first engineer churns out a lot of code and user stories. They’re focused, consistently finishing on time, and often doing more than they’re assigned.</p><p>When it comes to shipping business features, this person does a great job.</p><p>But this person is also more than happy to let their build run for 3 hours.</p><h3>🦾 Engineer 2</h3><p>The second engineer completes their assigned user stories, but when they encounter inefficiencies, they spend time fixing them. Sometimes it’s improving the build pipeline, fixing flaky tests, making code more maintainable, etc.</p><p>While this engineer may finish fewer user stories because they are distracted by these “side quests,” they make a bigger impact.</p><h3>🏋️ Enabling Others</h3><p>Without leaning on the 10x engineer trope, I’d argue Engineer 2 has a bigger impact because they resolve issues affecting the whole team.</p><p>A slow pipeline slows everyone’s work.</p><p>Open a single change, then wait 3 hours. A test fails? Wait another 3 hours. 
Feedback comes in? Wait 3 more.</p><p>Broken workflows turn simple changes into long, inefficient endeavors.</p><p>By fixing these issues not just for themselves but for everyone, they help the whole team ship code faster.</p><h3>📈 Invest in Workflows</h3><p>Investing time in optimizing your workflow and the team’s workflow usually pays dividends.</p><p>Sometimes it’s hard to quantify, but the smallest optimizations can be huge.</p><p>Someone on the team who gets frustrated with inefficiencies and decides to fix them is incredibly valuable.</p><h3>👩‍🔧 Do you take ownership of your codebase?</h3><p>If you want to make a greater impact, look at how you work.</p><p>When you fix a bug, do you search the codebase for the same bug elsewhere?</p><p>When your build pipeline is slow, or you have flaky tests, do you fix them or live with them, complaining while nothing changes?</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-02-05/"><em>https://bencane.com</em></a><em> on February 5, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5ebf7645b538" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/optimizing-the-teams-workflow-can-be-more-impactful-than-building-business-features-5ebf7645b538">Optimizing the team’s workflow can be more impactful than building business features</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I follow an architecture principle I call The Law of Collective Amnesia]]></title>
            <link>https://itnext.io/i-follow-an-architecture-principle-i-call-the-law-of-collective-amnesia-25338e7801b1?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/25338e7801b1</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 29 Jan 2026 00:00:53 GMT</pubDate>
            <atom:updated>2026-02-15T19:53:10.744Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>I follow an architecture principle I call The Law of Collective Amnesia.</p><p>Over time, everyone (including yourself) forgets the original intention of the system’s design as new requirements emerge.</p><p>This law applies at every level, from system design to <em>microservices</em> to individual libraries.</p><h3>🧬 Systems Evolve (and Intent Fades)</h3><p>When building new platforms/services/whatever, we create a system design with a deliberate structure.</p><p>Different components have distinct responsibilities; they interact clearly with the rest of the system, and there is a plan.</p><p>But as time progresses, new people may not understand the original intentions of the design.</p><p>As new requirements come in, the pressure to deliver may push you or others down a path that doesn’t align with the original plan.</p><p>When the architecture’s intent is understood, additions can be beneficial. 
When it’s forgotten, they start to feel duct-taped on.</p><p>Duct-taped solutions turn into technical debt or operational/management complexity that starts to weigh the system down.</p><h3>📠 How Good Systems Become Legacy Nightmares</h3><p>We’ve all seen the legacy platform that feels brittle, does too much, and is daunting to refactor.</p><p>It didn’t start that way.</p><p>At the time, it was probably a great design, but over time, new features and capabilities turned it into Frankenstein’s monster.</p><h3>👮 How to Defend Architecture from Collective Amnesia</h3><p>While you may not be able to prevent the system from devolving forever, you can reduce the need for duct-taped solutions by designing for change.</p><h3>📜 Roles and Responsibilities</h3><p>An important, but not always effective, step is to document and define the roles and responsibilities of components within the system.</p><p>When a system is broken down into components with distinct roles and responsibilities, it becomes easier for people to make informed decisions about where new capabilities should reside.</p><p>The documentation “should” influence how change is implemented.</p><p>But it relies on people following that documentation, which is the fundamental flaw.</p><h3>🚧 Architectural Guardrails: Make the Right Path the Easy Path</h3><p>When I say “architectural guardrails,” you probably think of review boards and ADRs. 
These processes are essential, but they don’t always work as prevention.</p><p><em>Instead, I mean designing the system so that the correct placement of functionality is the path of least resistance.</em></p><h3>🔏 Contracts as Constraints, Not Convenience</h3><p>In general, I feel like back-end APIs should provide as much data as possible, and it should be up to the clients to use what&#39;s relevant.</p><p>But sometimes contracts can be used to enforce design behaviors.</p><p>Systems can’t act unless they receive the data required to act.</p><h3>🚪 Control Ingress and Egress to Control Evolution</h3><p>Ensuring that only specific systems serve as entry and exit points helps direct future design decisions.</p><p>It’s often easier to add a new endpoint than to add a new platform that serves as an entry point.</p><p>Knowing this allows you to put processing in place at those entry and exit points that ensures future capabilities follow specific patterns.</p><h3>🧩 Design for Change, Not Today’s Requirements</h3><p>When you are first building a system, it’s tempting to build it quickly based on the requirements in front of you.</p><p>But when you know a platform will evolve, it’s beneficial to take time and implement interfaces that make the system more modular.</p><p>Within a <em>microservice</em>, this can be how you structure the application and how you create packages that can be extended, even though you don’t need that flexibility on day one.</p><p>At a platform level, it could be the decision between <em>monolith</em> and <em>microservices</em>. If you know change will be rapid, it may make sense to leverage <em>microservices</em>. 
If you know change will be slow, start with a <em>monolith</em>.</p><h3>🧠 Final Thoughts: Assume Intent Will Be Forgotten</h3><p>The above examples are just a subset of the ways you can enforce a design that aligns with your intentions.</p><p><strong>The key lesson:</strong> don’t build a plan that relies on people to follow your intentions. They won’t.</p><p>You have to assume the next person won’t design systems the way you do, won’t understand the reasons behind your design, and will be under pressure to deliver.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-01-29/"><em>https://bencane.com</em></a><em> on January 29, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=25338e7801b1" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/i-follow-an-architecture-principle-i-call-the-law-of-collective-amnesia-25338e7801b1">I follow an architecture principle I call The Law of Collective Amnesia</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Performance testing without a target is like running a race with no finish line]]></title>
            <link>https://madflojo.medium.com/performance-testing-without-a-target-is-like-running-a-race-with-no-finish-line-ab9e9ed03595?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/ab9e9ed03595</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 22 Jan 2026 00:00:13 GMT</pubDate>
            <atom:updated>2026-01-31T18:26:23.062Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>Performance testing without a target is like running a race with no finish line.</p><p><em>Did you win or did you stop early?</em></p><p>I previously shared my thoughts on benchmark and endurance tests, but before ever running a test, a target must be defined.</p><h3>🎯 Why Set Targets?</h3><p><em>Without a target, how do you know what good looks like?</em></p><p>I’ve often come across teams that had incorporated performance testing into their releases (which is excellent), but they had no targets defined.</p><p>No production baseline.</p><p>No service-level objectives from the business.</p><p><em>How did they know whether the system was meeting expectations?</em> They didn’t.</p><p>In some cases, after targets were defined, the system was performing as needed.</p><p>In others, it clearly wasn’t, and the team had no idea until targets were defined and compared with production.</p><h3>🏆 Defining Targets</h3><p>It’s easier to define targets for existing systems (and modernization projects) than for a brand-new system.</p><p>Existing platforms have production numbers you can reference, user expectations, and service-level objectives that can be translated into performance targets.</p><p>New systems rarely have much to baseline from.</p><p>For a brand-new system, I like to work with the product/business team and understand their goals.</p><ul><li><em>📈 What is the expected growth? Slow and steady, or fast and unpredictable?</em></li><li><em>🚨 What is the criticality of the platform? If it fails to respond, is it a problem or an inconvenience?</em></li><li><em>🌟 What unique constraints or features of the platform might influence performance requirements?</em></li></ul><p>Once defined, targets should not be treated as static.</p><p>As real traffic starts to arrive, you can adjust targets accordingly. 
Maybe they end up higher, perhaps lower.</p><h3>🪫 Leave Some Buffer</h3><p>Once a target is agreed upon, I like to add a bit of buffer.</p><p>If the requirement is 100ms, I’ll target closer to 75ms, or lower, depending on the system and its purpose.</p><p><em>Why?</em> Adding capacity or tuning the system takes time.</p><p>Things change, sometimes in unexpected ways.</p><p>Sometimes unexpected changes can be handled by automatic/manual scaling, but not always.</p><p>It’s important to give yourself a bit of buffer to respond to those changes.</p><h3>🧠 Final Thoughts</h3><p>I’ve talked a lot about setting targets and their importance. But one of the most important aspects of having targets is monitoring and measuring production.</p><p>Having visibility into production helps validate that your targets are realistic.</p><p>Maybe they are too high, and you have infrastructure reserved that goes to waste.</p><p>Perhaps they are too low, and you won’t be able to survive the next traffic spike.</p><p>Traffic changes over time, and application performance naturally drifts as new capabilities are added.</p><p>Clear visibility into traffic and latency patterns is essential for anyone operating mission-critical, large-scale systems.</p><p>But it’s also a foundational practice for most platforms.</p><p><em>Do you have performance targets for your platform? Are they grounded in production measurements? Should they be?</em></p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-01-22/"><em>https://bencane.com</em></a><em> on January 22, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab9e9ed03595" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>