<?xml version="1.0" encoding="UTF-8" standalone="no"?><!-- generator="FeedCreator 1.8" --><rss version="2.0">
    <channel xmlns:g="http://base.google.com/ns/1.0">
        <title>Andreas Gohr: Linkblog [splitbrain.org]</title>
        <description>Noteworthy or interesting links collected by Andreas Gohr.</description>
        <link/>
        <lastBuildDate>Sat, 11 Apr 2026 06:04:11 +0000</lastBuildDate>
        <generator>FeedCreator 1.8</generator>
        <item>
            <title>Eight years of wanting, three months of building with AI - Lalit Maganti</title>
            <link>https://lalitm.com/post/building-syntaqlite-ai/</link>
            <description>&lt;section&gt;&lt;p&gt;For eight years, I’ve wanted a high-quality set of devtools for working with
SQLite. Given how important SQLite is to the industry&lt;sup id="sn-ref-sqlite-industry"&gt;&lt;a href="#sn-sqlite-industry"&gt;1&lt;/a&gt;&lt;/sup&gt;, I’ve long been puzzled that no one has invested in building
a really good developer experience for it&lt;sup id="sn-ref-devtools"&gt;&lt;a href="#sn-devtools"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;p&gt;A couple of weeks ago, after ~250 hours of effort over three months&lt;sup id="sn-ref-hours"&gt;&lt;a href="#sn-hours"&gt;3&lt;/a&gt;&lt;/sup&gt; on evenings, weekends, and vacation days, I finally
&lt;a href="/post/syntaqlite/"&gt;released syntaqlite&lt;/a&gt;
(&lt;a href="https://github.com/LalitMaganti/syntaqlite"&gt;GitHub&lt;/a&gt;), fulfilling this
long-held wish. And I believe the main reason this happened was because of AI
coding agents&lt;sup id="sn-ref-codingtools"&gt;&lt;a href="#sn-codingtools"&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;p&gt;Of course, there’s no shortage of posts either claiming that AI one-shotted
their project or pushing back and declaring that AI is all slop. I’m going to take a very
different approach and, instead, systematically break down my experience
building syntaqlite with AI, both where it helped &lt;em&gt;and&lt;/em&gt; where it was
detrimental.&lt;/p&gt;&lt;p&gt;I’ll do this while contextualizing the project and my background so you can
independently assess how generalizable this experience was. And whenever I make
a claim, I’ll try to back it up with evidence from my project journal, coding
transcripts, or commit history&lt;sup id="sn-ref-evidence"&gt;&lt;a href="#sn-evidence"&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;h2 id="why-i-wanted-it"&gt;Why I wanted it&lt;/h2&gt;&lt;p&gt;In my work on &lt;a href="https://docs.perfetto.dev"&gt;Perfetto&lt;/a&gt;, I maintain a SQLite-based
language for querying performance traces called
&lt;a href="https://perfetto.dev/docs/analysis/perfetto-sql-getting-started"&gt;PerfettoSQL&lt;/a&gt;.
It’s basically the same as SQLite but with a few extensions to make the trace
querying experience better. There are ~100K lines of PerfettoSQL internally in
Google and it’s used by a wide range of teams.&lt;/p&gt;&lt;p&gt;Having a language which gets traction means your users also start expecting
things like formatters, linters, and editor extensions. I’d hoped that we could
adapt some SQLite tools from open source but the more I looked into it, the more
disappointed I was. What I found either wasn’t reliable enough, fast
enough&lt;sup id="sn-ref-speed-comparison"&gt;&lt;a href="#sn-speed-comparison"&gt;6&lt;/a&gt;&lt;/sup&gt;, or flexible enough to adapt to PerfettoSQL. There was
clearly an opportunity to build something from scratch, but it was never the
“most important thing we could work on”. We’ve been reluctantly making do with
the tools out there but always wishing for better.&lt;/p&gt;&lt;p&gt;On the other hand, there &lt;em&gt;was&lt;/em&gt; the option to do something in my spare time. I
had built lots of open source projects in my teens&lt;sup id="sn-ref-holoirc"&gt;&lt;a href="#sn-holoirc"&gt;7&lt;/a&gt;&lt;/sup&gt; but this
had faded away during university when I felt that I just didn’t have the
motivation anymore. Being a maintainer is much more than just “throwing the code
out there” and seeing what happens. It’s triaging bugs, investigating crashes,
writing documentation, building a community, and, most importantly, having a
direction for the project.&lt;/p&gt;&lt;p&gt;But the itch of open source (specifically freedom to work on what I wanted while
helping others) had never gone away. The SQLite devtools project was eternally
in my mind as “something I’d like to work on”. But there was another reason why
I kept putting it off: it sits at the intersection of being both hard &lt;em&gt;and&lt;/em&gt;
tedious.&lt;/p&gt;&lt;h2 id="what-makes-it-hard-and-tedious"&gt;What makes it hard and tedious&lt;/h2&gt;&lt;p&gt;If I was going to invest my personal time working on this project, I didn’t want
to build something that only helped Perfetto: I wanted to make it work for &lt;em&gt;any&lt;/em&gt;
SQLite user out there&lt;sup id="sn-ref-ambition"&gt;&lt;a href="#sn-ambition"&gt;8&lt;/a&gt;&lt;/sup&gt;. And this means parsing SQL &lt;em&gt;exactly&lt;/em&gt;
like SQLite.&lt;/p&gt;&lt;p&gt;The heart of any language-oriented devtool is the parser. This is responsible
for turning the source code into a “parse tree” which acts as the central data
structure anything else is built on top of. If your parser isn’t accurate, then
your formatters and linters will inevitably inherit those inaccuracies; many of
the tools I found suffered from having parsers which approximated the SQLite
language rather than representing it precisely.&lt;/p&gt;&lt;p&gt;Unfortunately, unlike many other languages, SQLite has no formal specification
describing how it should be parsed. It doesn’t expose a stable API for its
parser either. In fact, quite uniquely, in its implementation it doesn’t even
build a parse tree at all&lt;sup id="sn-ref-no-parse-tree"&gt;&lt;a href="#sn-no-parse-tree"&gt;9&lt;/a&gt;&lt;/sup&gt;! The only reasonable approach
left in my opinion is to carefully extract the relevant parts of SQLite’s source
code and adapt it to build the parser I wanted.&lt;/p&gt;&lt;p&gt;This means getting into the weeds of SQLite source code, a fiendishly difficult
codebase to understand. The whole project is written in C in an
&lt;a href="https://sqlite.org/src/file?name=src/vdbe.c&amp;amp;ci=trunk"&gt;incredibly dense style&lt;/a&gt;;
I’ve spent days just understanding the virtual table
&lt;a href="https://www.sqlite.org/vtab.html"&gt;API&lt;/a&gt;&lt;sup id="sn-ref-vtab-nuance"&gt;&lt;a href="#sn-vtab-nuance"&gt;11&lt;/a&gt;&lt;/sup&gt; and
&lt;a href="https://sqlite.org/src/file?name=src/vtab.c&amp;amp;ci=trunk"&gt;implementation&lt;/a&gt;. Trying
to grasp the full parser stack was daunting.&lt;/p&gt;&lt;p&gt;There’s also the fact that there are &amp;gt;400 rules in SQLite which capture the full
surface area of its language. I’d have to specify in each of these “grammar
rules” how that part of the syntax maps to the matching node in the parse tree.
It’s extremely repetitive work; each rule is similar to all the ones around it
but also, by definition, different.&lt;/p&gt;&lt;p&gt;And it’s not just the rules but also coming up with and writing tests to make
sure it’s correct, debugging if something is wrong, triaging and fixing the
inevitable bugs people would file when I got something wrong…&lt;/p&gt;&lt;p&gt;For years, this was where the idea died. Too hard for a side project&lt;sup id="sn-ref-complexity"&gt;&lt;a href="#sn-complexity"&gt;12&lt;/a&gt;&lt;/sup&gt;, too tedious to sustain motivation, too risky to invest months
into something that might not work.&lt;/p&gt;&lt;h2 id="how-it-happened"&gt;How it happened&lt;/h2&gt;&lt;p&gt;I’ve been using coding agents since early 2025 (Aider, Roo Code, then Claude
Code since July) and they’d definitely been useful but never something I felt I
could trust a serious project to. But towards the end of 2025, the models seemed
to make a significant step forward in quality&lt;sup id="sn-ref-agents-got-good"&gt;&lt;a href="#sn-agents-got-good"&gt;13&lt;/a&gt;&lt;/sup&gt;. At the
same time, I kept hitting problems in Perfetto which would have been trivially
solved by having a reliable parser. Each workaround left the same thought in the
back of my mind: maybe it’s finally time to build it for real.&lt;/p&gt;&lt;p&gt;I got some space to think and reflect over Christmas and decided to really
stress test the most maximalist version of AI: could I vibe-code the whole thing
using just Claude Code on the Max plan (£200/month)?&lt;/p&gt;&lt;p&gt;Through most of January, I iterated, acting as semi-technical manager and
delegating almost all the design and all the implementation to Claude.
Functionally, I ended up in a reasonable place: a parser in C extracted from
SQLite sources using a bunch of Python scripts, a formatter built on top,
support for both the SQLite language and the PerfettoSQL extensions, all exposed
in a web playground.&lt;/p&gt;&lt;p&gt;But when I reviewed the codebase in detail in late January, the downside was
obvious: the codebase was complete spaghetti&lt;sup id="sn-ref-spaghetti"&gt;&lt;a href="#sn-spaghetti"&gt;14&lt;/a&gt;&lt;/sup&gt;. I didn’t
understand large parts of the Python source extraction pipeline, functions were
scattered in random files without a clear shape, and a few files had grown to
several thousand lines. It was &lt;em&gt;extremely&lt;/em&gt; fragile; it solved the immediate
problem &lt;em&gt;but&lt;/em&gt; it was never going to cope with my larger vision, never mind
integrating it into the Perfetto tools. The saving grace was that it had proved
the approach was viable and generated more than 500 tests, many of which I felt
I could reuse.&lt;/p&gt;&lt;p&gt;I decided to throw away everything and start from scratch while also switching
most of the codebase to Rust&lt;sup id="sn-ref-rust-not-c"&gt;&lt;a href="#sn-rust-not-c"&gt;15&lt;/a&gt;&lt;/sup&gt;. I could see that C was going
to make it difficult to build the higher level components like the validator and
the language server implementation. And as a bonus, it would also let me use the
same language for both the extraction and runtime instead of splitting it across
C and Python.&lt;/p&gt;&lt;p&gt;More importantly, I completely changed my role in the project. I took ownership
of all decisions&lt;sup id="sn-ref-took-control"&gt;&lt;a href="#sn-took-control"&gt;16&lt;/a&gt;&lt;/sup&gt; and used it more as “autocomplete on
steroids” inside a much tighter process: opinionated design upfront, reviewing
every change thoroughly, fixing problems eagerly as I spotted them, and
investing in scaffolding (like linting, validation, and non-trivial
testing&lt;sup id="sn-ref-scaffolding"&gt;&lt;a href="#sn-scaffolding"&gt;17&lt;/a&gt;&lt;/sup&gt;) to check AI output automatically.&lt;/p&gt;&lt;p&gt;The core features came together through February and the final stretch (upstream
test validation, editor extensions, packaging, docs) led to a 0.1 launch in
mid-March.&lt;/p&gt;&lt;p&gt;But in my opinion, this timeline is the least interesting part of this story.
What I really want to talk about is what wouldn’t have happened without AI and
also the toll it took on me as I used it.&lt;/p&gt;&lt;h2 id="ai-is-why-this-project-exists-and-why-its-as-complete-as-it-is"&gt;AI is why this project exists, and why it’s as complete as it is&lt;/h2&gt;&lt;h3 id="overcoming-inertia"&gt;Overcoming inertia&lt;/h3&gt;&lt;p&gt;I’ve &lt;a href="https://lalitm.com/llm-motivation-via-emotions/"&gt;written in the past&lt;/a&gt;
about how one of my biggest weaknesses as a software engineer is my tendency to
procrastinate when facing a big new project. Though I didn’t realize it at the
time, it could not have applied more perfectly to building syntaqlite.&lt;/p&gt;&lt;p&gt;By giving me very concrete problems to work on, AI basically let me put aside
my doubts about technical calls, my uncertainty about whether I was building the
right thing, and my reluctance to get started. Instead of “I need to understand how SQLite’s
parsing works”, it was “I need to get AI to suggest an approach for me so I can
tear it up and build something better”&lt;sup id="sn-ref-inertia-journal"&gt;&lt;a href="#sn-inertia-journal"&gt;18&lt;/a&gt;&lt;/sup&gt;. I work so much
better with concrete prototypes to play with and code to look at than endlessly
thinking about designs in my head, and AI lets me get to that point at a pace I
could not have dreamed about before. Once I took the first step, every step
after that was so much easier.&lt;/p&gt;&lt;h3 id="faster-at-churning-code"&gt;Faster at churning code&lt;/h3&gt;&lt;p&gt;AI turned out to be better than me at the act of writing code itself, assuming
that code is obvious. If I can break a problem down to “write a function with
this behaviour and parameters” or “write a class matching this interface,” AI
will build it faster than I would and, crucially, in a style that might well be
more intuitive to a future reader. It documents things I’d skip, lays out code
consistently with the rest of the project, and sticks to what you might call the
“standard dialect” of whatever language you’re working
in&lt;sup id="sn-ref-standard-dialect"&gt;&lt;a href="#sn-standard-dialect"&gt;19&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;p&gt;That standardness is a double-edged sword. For the vast majority of code in any
project, standard is exactly what you want: predictable, readable, unsurprising.
But every project has pieces that are its edge, the parts where the value comes
from doing something non-obvious. For syntaqlite, that was the extraction
pipeline and the parser architecture. AI’s instinct to normalize was actively
harmful there, and those were the parts I had to design in depth and often
resorted to just writing myself.&lt;/p&gt;&lt;p&gt;But here’s the flip side: the same speed that makes AI great at obvious code
also makes it great at refactoring. If you’re using AI to generate code at
industrial scale, you &lt;em&gt;have&lt;/em&gt; to refactor constantly and
continuously&lt;sup id="sn-ref-refactoring-journal"&gt;&lt;a href="#sn-refactoring-journal"&gt;20&lt;/a&gt;&lt;/sup&gt;. If you don’t, things immediately get
out of hand. This was the central lesson of the vibe-coding month: I didn’t
refactor enough, the codebase became something I couldn’t reason about, and I
had to throw it all away. In the rewrite, refactoring became the core of my
workflow. After every large batch of generated code, I’d step back and ask “is
this ugly?” Sometimes AI could clean it up. Other times there was a large-scale
abstraction that AI couldn’t see but I could; I’d give it the direction and let
it execute&lt;sup id="sn-ref-refactor-pattern"&gt;&lt;a href="#sn-refactor-pattern"&gt;21&lt;/a&gt;&lt;/sup&gt;. If you have taste, the cost of a wrong
approach drops dramatically because you can restructure
quickly&lt;sup id="sn-ref-refactor-taste"&gt;&lt;a href="#sn-refactor-taste"&gt;22&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;h3 id="teaching-assistant"&gt;Teaching assistant&lt;/h3&gt;&lt;p&gt;Of all the ways I used AI, research had by far the highest ratio of value
delivered to time spent.&lt;/p&gt;&lt;p&gt;I’ve worked with interpreters and parsers before but I had never heard of
Wadler-Lindig pretty printing&lt;sup id="sn-ref-wadler-lindig"&gt;&lt;a href="#sn-wadler-lindig"&gt;23&lt;/a&gt;&lt;/sup&gt;. When I needed to build
the formatter, AI gave me a concrete and actionable lesson from a point of view
I could understand and pointed me to the papers to learn more. I could have
found this myself eventually, but AI compressed what might have been a day or
two of reading into a focused conversation where I could ask “but why does this
work?” until I actually got it.&lt;/p&gt;&lt;p&gt;This extended to entire domains I’d never worked in. I have deep C++ and Android
performance expertise but had barely touched Rust tooling or editor extension
APIs. With AI, it wasn’t a problem: the fundamentals are the same, the
terminology is similar, and AI bridges the gap&lt;sup id="sn-ref-lateral-moves"&gt;&lt;a href="#sn-lateral-moves"&gt;24&lt;/a&gt;&lt;/sup&gt;. The VS
Code extension would have taken me a day or two of learning the API before I
could even start. With AI, I had a working extension within an hour.&lt;/p&gt;&lt;p&gt;It was also invaluable for reacquainting myself with parts of the project I
hadn’t looked at for a few days&lt;sup id="sn-ref-context-reacquisition"&gt;&lt;a href="#sn-context-reacquisition"&gt;25&lt;/a&gt;&lt;/sup&gt;. I could control
how deep to go: “tell me about this component” for a surface-level refresher,
“give me a detailed linear walkthrough” for a deeper dive, “audit unsafe usages
in this repo” to go hunting for problems. When you’re context switching a lot,
you lose context fast. AI let me reacquire it on demand.&lt;/p&gt;&lt;h3 id="more-than-id-have-built-alone"&gt;More than I’d have built alone&lt;/h3&gt;&lt;p&gt;Beyond making the project exist at all, AI is also the reason it shipped as
complete as it did. Every open source project has a long tail of features that
are important but not critical: the things you know theoretically how to do but
keep deprioritizing because the core work is more pressing. For syntaqlite, that
list was long: editor extensions, Python bindings, a WASM playground, a docs
site, packaging for multiple ecosystems&lt;sup id="sn-ref-last-mile-list"&gt;&lt;a href="#sn-last-mile-list"&gt;26&lt;/a&gt;&lt;/sup&gt;. AI made these
cheap enough that skipping them felt like the wrong trade-off.&lt;/p&gt;&lt;p&gt;It also freed up mental energy for UX&lt;sup id="sn-ref-ux-focus"&gt;&lt;a href="#sn-ux-focus"&gt;27&lt;/a&gt;&lt;/sup&gt;. Instead of spending
all my time on implementation, I could think about what a user’s first
experience should feel like: what error messages would actually help them fix
their SQL, how the formatter output should look by default, whether the CLI
flags were intuitive. These are the things that separate a tool people try once
from one they keep using, and AI gave me the headroom to care about them.
Without AI, I would have built something much smaller, probably no editor
extensions or docs site. AI didn’t just make the same project faster. It changed
what the project &lt;em&gt;was&lt;/em&gt;.&lt;/p&gt;&lt;h2 id="where-ai-had-its-costs"&gt;Where AI had its costs&lt;/h2&gt;&lt;h3 id="the-addiction"&gt;The addiction&lt;/h3&gt;&lt;p&gt;There’s an uncomfortable parallel between using AI coding tools and playing slot
machines&lt;sup id="sn-ref-addiction"&gt;&lt;a href="#sn-addiction"&gt;28&lt;/a&gt;&lt;/sup&gt;. You send a prompt, wait, and either get something
great or something useless. I found myself up late at night wanting to do “just
one more prompt,” constantly trying AI just to see what would happen even when I
knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at
it even in tasks it was clearly ill-suited for, telling myself “maybe if I
phrase it differently this time.”&lt;/p&gt;&lt;p&gt;The tiredness feedback loop made it worse&lt;sup id="sn-ref-tiredness-loop"&gt;&lt;a href="#sn-tiredness-loop"&gt;29&lt;/a&gt;&lt;/sup&gt;. When I had
energy, I could write precise, well-scoped prompts and be genuinely productive.
But when I was tired, my prompts became vague, the output got worse, and I’d try
again, getting more tired in the process. In these cases, AI was probably slower
than just implementing something myself, but it was too hard to break out of the
loop&lt;sup id="sn-ref-ai-slower"&gt;&lt;a href="#sn-ai-slower"&gt;30&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;&lt;h3 id="losing-touch"&gt;Losing touch&lt;/h3&gt;&lt;p&gt;Several times during the project, I lost my mental model of the
codebase&lt;sup id="sn-ref-losing-touch"&gt;&lt;a href="#sn-losing-touch"&gt;31&lt;/a&gt;&lt;/sup&gt;. Not the overall architecture or how things
fitted together. But the day-to-day details of what lived where, which functions
called which, the small decisions that accumulate into a working system. When
that happened, surprising issues would appear and I’d find myself at a total
loss to understand what was going wrong. I hated that feeling.&lt;/p&gt;&lt;p&gt;The deeper problem was that losing touch created a communication
breakdown&lt;sup id="sn-ref-communication-breakdown"&gt;&lt;a href="#sn-communication-breakdown"&gt;32&lt;/a&gt;&lt;/sup&gt;. When you don’t have the mental
thread of what’s going on, it becomes impossible to communicate meaningfully
with the agent. Every exchange gets longer and more verbose. Instead of “change
FooClass to do X,” you end up saying “change the thing which does Bar to do X”.
Then the agent has to figure out what Bar is, how that maps to FooClass, and
sometimes it gets it wrong&lt;sup id="sn-ref-manager-analogy"&gt;&lt;a href="#sn-manager-analogy"&gt;33&lt;/a&gt;&lt;/sup&gt;. It’s exactly the same
complaint engineers have always had about managers who don’t understand the code
asking for fanciful or impossible things. Except now you’ve become that manager.&lt;/p&gt;&lt;p&gt;The fix was deliberate: I made it a habit to read through the code immediately
after it was implemented and actively engage to see “how would I have done this
differently?”.&lt;/p&gt;&lt;p&gt;Of course, in some sense all of the above is also true of code I wrote a few
months ago (hence the
&lt;a href="https://text-incubation.com/AI+code+is+legacy+code+from+day+one"&gt;sentiment that AI code is legacy code&lt;/a&gt;),
but AI makes the drift happen faster because you’re not building the same muscle
memory that comes from originally typing it out.&lt;/p&gt;&lt;h3 id="the-slow-corrosion"&gt;The slow corrosion&lt;/h3&gt;&lt;p&gt;There were some other problems I only discovered incrementally over the three
months.&lt;/p&gt;&lt;p&gt;I found that AI made me procrastinate on key design
decisions&lt;sup id="sn-ref-procrastination"&gt;&lt;a href="#sn-procrastination"&gt;34&lt;/a&gt;&lt;/sup&gt;. Because refactoring was cheap, I could
always say “I’ll deal with this later.” And because AI could refactor at the
same industrial scale it generated code, the cost of deferring felt low. But it
wasn’t: deferring decisions corroded my ability to think clearly because the
codebase stayed confusing in the meantime. The vibe-coding month was the most
extreme version of this. Yes, I understood the problem, but if I had been more
disciplined about making hard design calls earlier, I could have converged on
the right architecture much faster.&lt;/p&gt;&lt;p&gt;Tests created a similar false comfort&lt;sup id="sn-ref-tests-insufficient"&gt;&lt;a href="#sn-tests-insufficient"&gt;35&lt;/a&gt;&lt;/sup&gt;. Having 500+
tests felt reassuring, and AI made it easy to generate more. But neither humans
nor AI are creative enough to foresee every edge case you’ll hit in the future;
there were several times in the vibe-coding phase when I’d come up with a test
case and realise the design of some component was completely wrong and needed to
be totally reworked. This was a significant contributor to my lack of trust and
the decision to scrap everything and start from scratch.&lt;/p&gt;&lt;p&gt;Basically, I learned that the “normal rules” of software still apply in the AI
age: if you don’t have a fundamental foundation (clear architecture,
well-defined boundaries) you’ll be left eternally chasing bugs as they appear.&lt;/p&gt;&lt;h3 id="no-sense-of-time"&gt;No sense of time&lt;/h3&gt;&lt;p&gt;Something I kept coming back to was how little AI understood about the passage
of time&lt;sup id="sn-ref-no-sense-of-time"&gt;&lt;a href="#sn-no-sense-of-time"&gt;36&lt;/a&gt;&lt;/sup&gt;. It sees a codebase in a certain state but
doesn’t &lt;em&gt;feel&lt;/em&gt; time the way humans do. I can tell you what it feels like to use
an API, how it evolved over months or years, why certain decisions were made and
later reversed; AI sees only the current snapshot.&lt;/p&gt;&lt;p&gt;The natural problem arising from this lack of understanding is that you either make the
same mistakes you made in the past and have to relearn the lessons &lt;em&gt;or&lt;/em&gt; you fall
into new traps which were successfully avoided the first time, slowing you down
in the long run. In my opinion, this is a similar problem to why losing a
high-quality senior engineer hurts a team so much: they carry history and
context that doesn’t exist anywhere else and act as a guide for others around
them.&lt;/p&gt;&lt;p&gt;In theory, you can try to preserve this context by keeping specs and docs up to
date. But there’s a reason we didn’t do this before AI: capturing implicit
design decisions exhaustively is incredibly expensive and time-consuming to
write down. AI can help draft these docs, but because there’s no way to
automatically verify that it accurately captured what matters, a human still has
to manually audit the result. And that’s still time-consuming.&lt;/p&gt;&lt;p&gt;There’s also the context pollution problem. You never know when a design note
about API A will echo in API B. Consistency is a huge part of what makes
codebases work, and for that you don’t just need context about what you’re
working on right now but also about other things which were designed in a
similar way. Deciding what’s relevant requires exactly the kind of judgement
that institutional knowledge provides in the first place.&lt;/p&gt;&lt;h2 id="relativity"&gt;Relativity&lt;/h2&gt;&lt;p&gt;Reflecting on the above, the pattern of when AI helped and when it hurt was
fairly consistent.&lt;/p&gt;&lt;p&gt;When I was working on something I already understood deeply, AI was excellent. I
could review its output instantly, catch mistakes before they landed and move at
a pace I’d never have managed alone. The parser rule generation is the clearest
example&lt;sup id="sn-ref-parser-rules"&gt;&lt;a href="#sn-parser-rules"&gt;37&lt;/a&gt;&lt;/sup&gt;: I knew exactly what each rule should produce, so
I could review AI’s output within a minute or two and iterate fast.&lt;/p&gt;&lt;p&gt;When I was working on something I could describe but didn’t yet know, AI was
good but required more care. Learning Wadler-Lindig for the formatter was like
this: I could articulate what I wanted, evaluate whether the output was heading
in the right direction, and learn from what AI explained. But I had to stay
engaged and couldn’t just accept what it gave me.&lt;/p&gt;&lt;p&gt;When I was working on something where I didn’t even know what I wanted, AI was
somewhere between unhelpful and harmful. The architecture of the project was the
clearest case: I spent weeks in the early days following AI down dead ends,
exploring designs that felt productive in the moment but collapsed under
scrutiny. In hindsight, I have to wonder if it would have been faster just
thinking it through without AI in the loop at all.&lt;/p&gt;&lt;p&gt;But expertise alone isn’t enough. Even when I understood a problem deeply, AI
still struggled if the task had no objectively checkable answer&lt;sup id="sn-ref-verifiability"&gt;&lt;a href="#sn-verifiability"&gt;38&lt;/a&gt;&lt;/sup&gt;. Implementation has a right answer, at least at a local level:
the code compiles, the tests pass, the output matches what you asked for. Design
doesn’t. We’re still arguing about OOP decades after it first took off.&lt;/p&gt;&lt;p&gt;Concretely, I found that designing the public API of syntaqlite was where this
hit home the hardest. I spent several days in early March doing nothing but API
refactoring, manually fixing things any experienced engineer would have
instinctively avoided but AI made a total mess of. There’s no test or objective
metric for “is this API pleasant to use” and “will this API help users solve
the problems they have” and that’s exactly why the coding agents did &lt;em&gt;so badly&lt;/em&gt;
at it.&lt;/p&gt;&lt;p&gt;This takes me back to the days I was obsessed with physics and, specifically,
relativity. The laws of physics look simple and Newtonian in any small local
area, but zoom out and spacetime curves in ways you can’t predict from the local
picture alone. Code is the same: at the level of a function or a class, there’s
usually a clear right answer, and AI is excellent there. But architecture is
what happens when all those local pieces interact, and you can’t get good global
behaviour by stitching together locally correct components.&lt;/p&gt;&lt;p&gt;Knowing where you are on these axes at any given moment is, I think, the core
skill of working with AI effectively.&lt;/p&gt;&lt;h2 id="wrap-up"&gt;Wrap-up&lt;/h2&gt;&lt;p&gt;Eight years is a long time to carry a project in your head. Seeing these SQLite
tools actually exist and function after only three months of work is a massive
win, and I’m fully aware they wouldn’t be here without AI.&lt;/p&gt;&lt;p&gt;But the process wasn’t the clean, linear success story people usually post. I
lost an entire month to vibe-coding. I fell into the trap of managing a codebase
I didn’t actually understand, and I paid for that with a total rewrite.&lt;/p&gt;&lt;p&gt;The takeaway for me is simple: AI is an incredible force multiplier for
implementation, but it’s a dangerous substitute for design. It’s brilliant at
giving you the right answer to a specific technical question, but it has no
sense of history, taste, or how a human will actually feel using your API. If
you rely on it for the “soul” of your software, you’ll just end up hitting a
wall faster than you ever have before.&lt;/p&gt;&lt;p&gt;What I’d like to see more of from others is exactly what I’ve tried to do here:
honest, detailed accounts of building real software with these tools; not
weekend toys or one-off scripts but the kind of software that has to survive
contact with users, bug reports, and your own changing mind.&lt;/p&gt;&lt;/section&gt;</description>
            <pubDate>Tue, 07 Apr 2026 06:12:28 +0000</pubDate>
            <guid>https://lalitm.com/post/building-syntaqlite-ai/</guid>
        </item>
        <item>
            <title>Github repo dszendrei/playwright-zoom</title>
            <link>https://github.com/dszendrei/playwright-zoom</link>
            <description>&lt;div class="star js-feed-item-view"&gt;&lt;div class="body"&gt;
&lt;!-- watch --&gt;
&lt;div class="d-flex flex-items-baseline tmp-py-4"&gt;
  &lt;div class="d-flex flex-column width-full"&gt;
      &lt;div&gt;
        &lt;div class="d-flex flex-items-baseline"&gt;
          &lt;div class="color-fg-muted"&gt;
              &lt;span class="mr-2"&gt;&lt;a class="d-inline-block" href="https://github.com/splitbrain" rel="noreferrer"&gt;&lt;img class="avatar avatar-user" src="https://avatars.githubusercontent.com/u/86426?s=64&amp;amp;v=4" width="32" height="32" alt="@splitbrain"&gt;&lt;/a&gt;&lt;/span&gt;
            &lt;a class="Link--primary no-underline wb-break-all" href="https://github.com/splitbrain" rel="noreferrer"&gt;splitbrain&lt;/a&gt;
            starred
            &lt;a class="Link--primary no-underline wb-break-all" href="https://github.com/dszendrei/playwright-zoom" rel="noreferrer"&gt;dszendrei/playwright-zoom&lt;/a&gt;
            &lt;span&gt;
              · &lt;relative-time tense="past" datetime="2026-03-30T04:58:37-07:00" data-view-component="true"&gt;March 30, 2026 04:58&lt;/relative-time&gt;
            &lt;/span&gt;
          &lt;/div&gt;
        &lt;/div&gt;
      &lt;/div&gt;

      &lt;div class="Box tmp-p-3 mt-2 color-shadow-medium color-bg-overlay"&gt;
        &lt;div&gt;
          &lt;div class="f4 lh-condensed text-bold color-fg-default"&gt;
            &lt;a class="Link--primary text-bold no-underline wb-break-all d-inline-block" href="https://github.com/dszendrei/playwright-zoom" rel="noreferrer"&gt;dszendrei/playwright-zoom&lt;/a&gt;
          &lt;/div&gt;
          &lt;div class="dashboard-break-word color-fg-muted mt-1 mb-0 repo-description"&gt;
            &lt;p&gt;A TypeScript library to enhance Playwright with zoom functionality.&lt;/p&gt;
          &lt;/div&gt;

          &lt;p class="f6 color-fg-muted mt-2 mb-0"&gt;
              &lt;span class="d-inline-block color-fg-muted tmp-mr-3"&gt;
                &lt;span class="ml-0"&gt;
  &lt;span class="repo-language-color"&gt;&lt;/span&gt;
  &lt;span itemprop="programmingLanguage"&gt;TypeScript&lt;/span&gt;
&lt;/span&gt;

              &lt;/span&gt;

              &lt;span class="d-inline-block tmp-mr-3"&gt;
                  &lt;a class="Link--muted" href="https://github.com/dszendrei/playwright-zoom/stargazers" rel="noreferrer"&gt;&lt;svg class="octicon octicon-star mr-1" viewbox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"&gt;&lt;path d="M8 .25a.75.75 0 0 1 .673.418l1.882 3.815 4.21.612a.75.75 0 0 1 .416 1.279l-3.046 2.97.719 4.192a.751.751 0 0 1-1.088.791L8 12.347l-3.766 1.98a.75.75 0 0 1-1.088-.79l.72-4.194L.818 6.374a.75.75 0 0 1 .416-1.28l4.21-.611L7.327.668A.75.75 0 0 1 8 .25Zm0 2.445L6.615 5.5a.75.75 0 0 1-.564.41l-3.097.45 2.24 2.184a.75.75 0 0 1 .216.664l-.528 3.084 2.769-1.456a.75.75 0 0 1 .698 0l2.77 1.456-.53-3.084a.75.75 0 0 1 .216-.664l2.24-2.183-3.096-.45a.75.75 0 0 1-.564-.41L8 2.694Z"&gt;&lt;/path&gt;&lt;/svg&gt;5&lt;/a&gt;
              &lt;/span&gt;


              &lt;span&gt;Updated Apr 15, 2025&lt;/span&gt;
          &lt;/p&gt;
        &lt;/div&gt;
      &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/div&gt;</description>
            <pubDate>Mon, 30 Mar 2026 11:58:37 +0000</pubDate>
            <guid>https://github.com/dszendrei/playwright-zoom</guid>
        </item>
        <item>
            <title>Why craft-lovers are losing their craft</title>
            <link>https://writings.hongminhee.org/2026/03/craft-alienation-llm/</link>
            <description>&lt;blockquote&gt;Finally a Marxist view on AI&lt;hr&gt;&lt;/blockquote&gt;&lt;article&gt;
      &lt;time datetime="2026-03-21T09:30:00.000Z"&gt;March 21, 2026&lt;/time&gt;
      
&lt;p&gt;&lt;a href="https://blog.lmorchard.com/2026/03/11/grief-and-the-ai-split/"&gt;Les Orchard&lt;/a&gt; made a quiet observation recently that I haven't been able to
shake. Before LLM coding assistants arrived, the split between developers was
invisible:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Craft-lovers and make-it-go people sat next to each other, shipped the same
products, looked indistinguishable. The &lt;em&gt;motivation&lt;/em&gt; behind the work was
invisible because the &lt;em&gt;process&lt;/em&gt; was identical.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The tools didn't create a division; they simply revealed an existing one.&lt;/p&gt;
&lt;p&gt;Orchard himself belongs to the first camp. He learned BASIC at age seven not
because BASIC was beautiful but because he wanted things to happen on screen.
For him, LLM coding assistants are just another rung on the same ladder he's
always been climbing. The puzzle didn't disappear; it moved to a higher level
of abstraction. He grieves, but what he grieves is the ecosystem around the
work, not the work itself.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nolanlawson.com/2026/02/07/we-mourn-our-craft/"&gt;Nolan Lawson&lt;/a&gt; grieves differently:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We'll miss the feeling of holding code in our hands and molding it like clay
in the caress of a master sculptor. We'll miss the sleepless wrangling of
some odd bug that eventually relents to the debugger at 2 AM. We'll miss
creating something we feel proud of, something true and right and good.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;His post reads like an elegy, and the grief in it is real. What he's mourning
is the act itself.&lt;/p&gt;
&lt;p&gt;Two developers, both thoughtful, both honest, looking at the same moment and
feeling different things. That asymmetry is worth taking seriously, because it
points to something the “just adapt” conversation keeps missing.&lt;/p&gt;
&lt;h2&gt;Alienated from the act&lt;/h2&gt;
&lt;p&gt;Marx identified four dimensions of alienated labor: separation from the product
of one's work, from the act of working itself, from other people, and from
one's own human capacities. In the context of LLM coding assistants, the second
of these is doing most of the work.&lt;/p&gt;
&lt;p&gt;What Marx meant by separation from the act is something like this. Humans,
unlike other animals, can imagine what they want to make before they make it
and then shape the material world to match that image. This capacity for
conscious, intentional creation is close to what Marx considered distinctively
human. When work is reduced to something mechanical, coerced, endured rather
than inhabited, that capacity goes unused. The activity is still happening; the
person is just no longer really present in it.&lt;/p&gt;
&lt;p&gt;The craft-lovers mourning their work fit this description. What they valued
wasn't the output. It was the process of building something, the hours of close
attention, the feeling of understanding a system well enough to reshape it.
Lawson says as much: the GitHub repo that says “I made this.” Not “something
was made” but &lt;em&gt;I&lt;/em&gt; made it.&lt;/p&gt;
&lt;p&gt;This also explains why the two camps feel so differently about the same tools.
Orchard never invested his sense of self in the act of writing code. He
invested it in the result. When LLM coding assistants let him get to the result
faster, nothing essential is lost for him. For Lawson, the act was where the
meaning lived. LLM coding assistants don't bypass the output; they bypass the
part he cared about. Marx's distinction between objective alienation (a
condition that exists regardless of whether you feel it) and subjective
alienation (experiencing the loss) maps almost exactly onto this split. Orchard
isn't subjectively alienated because he was never objectively attached to the
act in the first place. Lawson is both.&lt;/p&gt;
&lt;p&gt;The usual response is that this is nostalgia, or that new crafts will emerge to
replace the old ones. Maybe. But that response sidesteps the actual question:
why are people who love coding being pushed away from coding? Nobody is
stopping them from writing code by hand. The market is penalizing them for it.&lt;/p&gt;
&lt;h2&gt;Who's doing the penalizing&lt;/h2&gt;
&lt;p&gt;In &lt;em&gt;Capital&lt;/em&gt;, Marx wrote about the Luddite movement:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It took time and experience before the workers learnt to distinguish between
machinery and its employment by capital, and to transfer their attacks from
the material instruments of production to the form of society which utilises
those instruments.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The workers who destroyed the looms weren't wrong to be angry. The direction
was off. Capital extended working hours, not the loom. Capital turned workers
into appendages of the machine, not the loom. The distinction matters because
it changes what the actual problem is, and therefore what might be done about
it.&lt;/p&gt;
&lt;p&gt;When a developer explains that their productivity is being measured against
colleagues who use LLM coding assistants, and that they're using them because
they need the job, not because they want to, the source of alienation is plain.
It isn't the LLM coding assistant. It's the structure that ties livelihood to a
metric, and that metric now favors whoever produces the most output the
fastest. The LLM coding assistant is the lever; the market is the mechanism.&lt;/p&gt;
&lt;p&gt;One caveat matters here. The tension between craft and efficiency doesn't
disappear if you remove capitalism from the picture. LLM coding assistants
produce faster results whether anyone is being paid or not, and any community,
however it's organized, will eventually have to reckon with what to do with
that speed difference. Capitalism gives the harshest possible answer to that
question: the slower worker loses their livelihood. But the question itself
would survive capitalism. Other forms of social organization might answer it
more gently, but they'd still have to answer it.&lt;/p&gt;
&lt;h2&gt;What my situation reveals&lt;/h2&gt;
&lt;p&gt;I maintain open source software full time. My income comes entirely from public
funding: grants, foundations, institutional support. I have no employer who can
tell me to use LLM coding assistants or lose my job. No quarterly review where
my output gets compared to a colleague's.&lt;/p&gt;
&lt;p&gt;Under these conditions, my relationship with LLM coding assistants is nothing
like what Lawson describes. I still write the code I find interesting by hand.
The parts I don't want to do, the verbose test scaffolding and boilerplate I've
written a hundred times, I hand to the model. The division follows a line I
drew myself, between work that expresses something and work that just needs to
happen.&lt;/p&gt;
&lt;p&gt;This is close to what Marx imagined machinery could do in conditions other than
capitalism: relieve people of repetitive labor so they could do something more
fully human with the time. The same tool can feel liberating in one set of
conditions and alienating in another.&lt;/p&gt;
&lt;p&gt;My situation is unusual, and it exists inside a capitalist economy, a partial
shelter rather than an escape. I'm not presenting it as a solution, only as
evidence of that difference.&lt;/p&gt;
&lt;h2&gt;Where the grief should look&lt;/h2&gt;
&lt;p&gt;Knowing the source of a problem doesn't dissolve it. The developers being
pushed toward LLM coding assistants they don't want to use are facing a real
constraint right now, and a structural analysis doesn't help them this
afternoon.&lt;/p&gt;
&lt;p&gt;But it does change the question. If the grief Lawson describes is real (and I
think it is), and if its deeper cause lies in the social relations around the
technology rather than the technology itself, then the right target for that
grief isn't the LLM coding assistant. It's whatever forces people to use tools
they don't want to use, on terms they didn't choose.&lt;/p&gt;
&lt;p&gt;Lawson gestures in this direction too:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I don't celebrate the new world, but I also don't resist it. The sun rises,
the sun sets, I orbit helplessly around it, and my protests can't stop it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That's honest. But I'd like to think resignation isn't the only option.&lt;/p&gt;

    &lt;/article&gt;</description>
            <pubDate>Sun, 22 Mar 2026 09:15:43 +0000</pubDate>
            <guid>https://writings.hongminhee.org/2026/03/craft-alienation-llm/</guid>
        </item>
        <item>
            <title>graydon2 | LLM time</title>
            <link>https://graydon2.dreamwidth.org/322732.html</link>
            <description>&lt;blockquote&gt;&amp;quot;I'm not writing this to come to any particular conclusion, just to note that it's happened, that it's a set of events that I've experienced as they're happening. This is a journal and sometimes all I can do with it is log events. I don't know how this is going to end, or what to make of it all, I really don't.&amp;quot;&lt;hr&gt;&lt;/blockquote&gt;&lt;div&gt;&lt;p&gt;Note: this is not a thinkpiece and there is no need to debate it or repost it or comment about it. It offers no conclusions and takes no sides besides the one I've already admitted publicly (a reluctant but fatalistic willingness to use LLMs day-to-day, because they seem to work). It's mostly just a journal entry noting the occurrence of a significant change in the nature of my profession. I've turned off comments as I do usually for "things people are likely to heckle me about pointlessly anyways" because I'm tired and don't have patience for that.&lt;/p&gt;&lt;p&gt;With that out of the way: 2025 (particularly near the end of it) and early 2026 have been, for my corner of the software industry, extremely unusual times.&lt;/p&gt;&lt;p&gt;LLMs turned a corner. I'm not sure how else to put it. If you are not interacting with them yet in your day job, you are perhaps lucky, perhaps unlucky, I'm not sure how to judge that but you are definitely operating in some level of ignorance of what has occurred. You may be seeing the 2nd order effects and hiding. You may be telling yourself nothing's changed and it's all just smoke and mirrors, a marketing campaign by con artists aimed at the gullible. I wish it was. But as far as I can tell this is not so: LLMs really, really turned a corner.&lt;/p&gt;&lt;p&gt;Their capabilities expanded a lot. Coding capability seemed like the first bump (especially around the late fall / early winter: the opus 4.5 / gemini 3 / gpt 5.2 series). 
But it was quickly clear that the capability also extended to something much worse: vulnerability hunting. They can break software even better than they can write it -- I guess because "you only need to be right sometimes" with vulnerability seeking -- and "breaking" has even more people eager for the new capability.&lt;/p&gt;&lt;p&gt;The change has felt, to me, very sudden and very severe. In a matter of months a lot of people I know personally switched from "playing around seeing what I can do" to "I literally never write code by hand anymore" to "my boss is asking whether I can write 100x more code per day and/or firing me" to "help help my team is under attack by hundreds of new security vulnerabilities and can barely keep up".&lt;/p&gt;&lt;p&gt;I still write some code, but less and less, and more of it is around the margins: touchups, sketches of APIs and data structures, subtle stuff it's easy to be subtly-wrong about, or perhaps LLM-supervisory bits. Because the LLM really does often write the main logic as well as I would at this point, and faster, and more persistently. And also I'm now busy responding to all the damn vulnerabilities. There is an arms race, and I'm now plainly in it.&lt;/p&gt;&lt;p&gt;This is the fastest and most violent change to working conditions and assumptions I've witnessed in my career, including the arrival of the internet and open source and distributed version control and cloud computing and all of that. Nothing else is in the same ballpark.&lt;/p&gt;&lt;p&gt;Software projects have tried to adapt. Some are trying to embrace the tools, some are firmly rejecting them. Some have closed their issue trackers to new submissions which were all slop. Some maintainers have quit, some contributors have been banned, some dependencies have been rolled back or severed, some forks are emerging. A lot of people are re-evaluating (and some rebuilding) their entire software stacks. 
A lot of people are debating licenses again, with even more fury than they did during the drafting of GPLv3.&lt;/p&gt;&lt;p&gt;Thinkpieces on this event proliferated, many very sour. People wrote about mourning their loss of identity as programmers. People wrote about fear for their loss of jobs. People wrote a lot about their personal disgust with the slop, their fury at the billionaires, their sense that all this is part of the fascist turn of America. The level of anger in the community of programmers is unlike anything I've ever seen before. People are making lists of who's been infected by the menace and who's still clean. The community is tearing itself apart. Professional and volunteer relationships ended, friendships lost, battle lines drawn.&lt;/p&gt;&lt;p&gt;I'm not writing this to come to any particular conclusion, just to note that it's happened, that it's a set of events that I've experienced as they're happening. This is a journal and sometimes all I can do with it is log events. I don't know how this is going to end, or what to make of it all, I really don't. It's sort of interesting, deeply confusing, sometimes sort of fun, mostly sort of horrifying, sort of miserable. The unit economics of making and breaking software in 2026 are completely different than they were in 2025. More than anything, it's just weird.&lt;/p&gt;&lt;p&gt;This time next year we could all be out of work, or dead from a nuclear war, or even-more-burnt-out from sustained 100x higher velocity of code and vulnerabilities with teams of adversarial LLMs, or .. the whole thing could collapse because maybe, just maybe, it really is "all just a bubble" pushed by VCs on credulous rubes like myself, and it'll vanish like a bad dream. I'm not presently betting on that, but I couldn't have predicted this year, so I'm not going to make any predictions about the next.&lt;/p&gt;&lt;p&gt;I guess I'm sorry to anyone who thinks I'm infected, or facilitating the fascists, or whatever. 
I'm just trying to adapt. I hope you can see me as a human again someday. I miss the past too. I don't see a way to go back to it, but I'd like it too if there were one.&lt;/p&gt;&lt;/div&gt;</description>
            <pubDate>Mon, 16 Mar 2026 08:52:21 +0000</pubDate>
            <guid>https://graydon2.dreamwidth.org/322732.html</guid>
        </item>
        <item>
            <title>PSA: Think hard before you deploy BookLore : selfhosted</title>
            <link>https://www.reddit.com/r/selfhosted/comments/1rs275q/psa_think_hard_before_you_deploy_booklore/</link>
            <description></description>
            <pubDate>Sun, 15 Mar 2026 05:44:53 +0000</pubDate>
            <guid>https://www.reddit.com/r/selfhosted/comments/1rs275q/psa_think_hard_before_you_deploy_booklore/</guid>
        </item>
    </channel>
</rss>