<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Zef Hemel]]></title><description><![CDATA[Engineering leadership coach and consultant]]></description><link>https://zef.me/</link><image><url>https://zef.me/favicon.png</url><title>Zef Hemel</title><link>https://zef.me/</link></image><generator>Ghost 6.9</generator><lastBuildDate>Thu, 19 Mar 2026 09:35:12 GMT</lastBuildDate><atom:link href="https://zef.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Coming soon]]></title><description><![CDATA[<p>This is Zef Hemel, a brand new site by Zef Hemel that&apos;s just getting started. Things will be up and running here shortly, but you can <a href="#/portal/">subscribe</a> in the meantime if you&apos;d like to stay up to date and receive emails when new content is published!</p>]]></description><link>https://zef.me/coming-soon/</link><guid isPermaLink="false">692977294871d400016522cf</guid><category><![CDATA[News]]></category><dc:creator><![CDATA[Zef Hemel]]></dc:creator><pubDate>Fri, 28 Nov 2025 10:19:21 GMT</pubDate><media:content url="https://static.ghost.org/v4.0.0/images/feature-image.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://static.ghost.org/v4.0.0/images/feature-image.jpg" alt="Coming soon"><p>This is Zef Hemel, a brand new site by Zef Hemel that&apos;s just getting started. Things will be up and running here shortly, but you can <a href="#/portal/">subscribe</a> in the meantime if you&apos;d like to stay up to date and receive emails when new content is published!</p>]]></content:encoded></item><item><title><![CDATA[What's Next]]></title><description><![CDATA[<p>It&#x2019;s been quiet for a little while around here. 
Let me catch you up in broad strokes on what has been going on, and more importantly: what&#x2019;s next. </p><p><strong>Spoiler alert:</strong> career shift ahead!</p><hr><p>&#x201C;Hey, didn&#x2019;t you do a pivot to AI some time</p>]]></description><link>https://zef.me/whats-next/</link><guid isPermaLink="false">692978164871d40001652528</guid><dc:creator><![CDATA[Zef Hemel]]></dc:creator><pubDate>Mon, 28 Jul 2025 07:00:00 GMT</pubDate><content:encoded><![CDATA[<p>It&#x2019;s been quiet for a little while around here. Let me catch you up in broad strokes on what has been going on, and more importantly: what&#x2019;s next. </p><p><strong>Spoiler alert:</strong> career shift ahead!</p><hr><p>&#x201C;Hey, didn&#x2019;t you do a pivot to AI some time ago?&#x201D;</p><p>Good memory! <a href="https://zef.me/the-ai-engineering-nut/" rel="noreferrer">Indeed I did</a>.</p><p>I willingly threw myself into the world of <em>Generative AI</em>, fully. I was excited. I thought I knew what I was getting myself into. Of course, I knew there was a hype cycle going on; I knew there would be hyperbole and grifters. But that was the challenge: I&#x2019;m rational, I can cut through this! I can dig and find what&#x2019;s real and what is not! I have a PhD, for crying out loud; if that equips you with anything, it&#x2019;s the skill of critical thinking and rigor.</p><p>To quote myself in my announcement post:</p><blockquote>This is going to be awesome.</blockquote><p>I was wrong.</p><hr><p>As it turned out, after only a few months operating so close to the &#x201C;heat&#x201D; of a booming industry, I was anything but energized.</p><p>Expectations are enormous; promises borderline absurd; and reality <em>extremely</em> hard to evaluate.</p><p>I found that a <em>lot</em> of this industry runs on beliefs, echo chambers, and FOMO. Everything is presented as a constantly moving target. 
The moment you settle on some strategy in one area, some new product launches, or some vendor&#x2019;s executive makes an even more outrageous claim on a podcast, and the circus continues.</p><p>Some may thrive in an environment like this. I do not, as became painfully clear.</p><p>The deeper I dug, the more disillusioned I became. I devolved from an optimistic, trusting person to feeling like <a href="https://en.wikipedia.org/wiki/Buffy_the_Vampire_Slayer?ref=zef.me" rel="noreferrer">Zeffy the AI Snake-Oil Salesman Slayer</a>, drowning in a dystopia built on poorly-disguised doomsday-cults masquerading as shamelessly exploitative unicorn companies.</p><p>Once I noticed it started to affect my personal life, I knew I needed out.</p><p>It happens. It&#x2019;s not anybody&#x2019;s fault. I&#x2019;m alright now.</p><p>We live and learn.</p><hr><p>So, what&#x2019;s next?</p><p>I did some soul searching: what gives me energy, what do I enjoy? Am I going to roll the dice again, or take a different approach this time? While I likely wouldn&#x2019;t say no if the perfect opportunity landed in my lap, let&#x2019;s not assume that scenario.</p><p>Thus far, whenever companies hired me as a <em>Head Of</em>, <em>Director</em>, or <em>VP</em>, I was made responsible for a part of the organization and got to work on interesting engineering leadership challenges:</p><ul><li>Adjust the organizational structure and size</li><li>Identify and coach a next crop of leaders</li><li>Increase product mindset and change engineering culture</li><li>Increase the pace of delivery</li><li>Establish practices around recruitment, people growth, incentives, goal setting</li></ul><p>I was somewhat shocked to realize that over the past decade and a half, I&#x2019;ve now done this in <strong>eight</strong> different companies, ranging from small start-ups, to scale-ups, to organizations that are part of large multinationals. Every time I saw some things that were unique, and many that were not. 
I saw similar patterns emerge, and familiar traps. I applied and wrote about what I learned, and tried new things.</p><p>While I never joined any company intending to leave it, over the years I had been around. As a result, I had gotten skilled at being airdropped into a new organization, quickly grasping the lay of the land, and starting to contribute.</p><p>Perhaps it made sense to double down on this skill and experience. While I never thought I&#x2019;d say this, perhaps the answer was...</p><p><em>Consultancy</em></p><p>I love helping people and organizations with their problems. I&#x2019;ve seen a lot by now, so I bring experience to the table. I&#x2019;ve actually done the work, so I should be credible. I&#x2019;d like to believe I communicate well. I&#x2019;ve established a network over the years. And essentially all my work engagements have ended on a good note.</p><p>Sounds like a pretty solid foundation for starting a consultancy business to me.</p><p>This is part one.</p><hr><p>The second part is that I love <em>writing</em> about all this stuff. Over the years I&#x2019;ve written many dozens of essays and articles, many of which have made an impact on people&#x2019;s approach to their work (or so they claim). Many have suggested I should write a book, or otherwise commercialize this somehow.</p><p>So this is the second part &#x2014; the potentially scalable one, which should nicely synergize (look at me, whipping around <em>managementese</em>) with consultancy work.</p><p>I&#x2019;m in the process of restructuring my content and publishing it on a rebranded site,&#xA0;which I think can be a valuable resource for (aspiring) leaders in tech. One worth paying for. More on this soon.</p><hr><p>As I do, I shopped these ideas around a bit. The most common question I received was: <em>how can I help?</em></p><p>Obviously, the most <em>obvious way</em> to help me short term, <a href="https://zef.me/" rel="noreferrer">is to hire me</a> as a consultant. 
So, let me abuse you, my audience, for a bit of a pitch:</p><p>Do you need <a href="https://zef.me/services/">my help</a>? Would you, or people in your organization, benefit from leadership coaching or mentoring? Are you looking at a tough reorg, struggling to keep people on mission, and could use some fresh, outside perspective on an interesting management challenge? Do you have a temporary engineering management gap to fill? <a href="https://zef.me/intro/">Let&#x2019;s have a chat</a>.</p><p>The <em>second way</em> would be to tell your network about me and share my content and services. This can also take the shape of a <a href="https://www.linkedin.com/in/zefhemel/?ref=zef.me">recommendation on LinkedIn</a>.</p><p>The <em>third way</em> will come a bit later, as I&apos;ll launch something around my writing content soon.</p><p>Thank you for being on this journey with me. Interesting times ahead! </p>]]></content:encoded></item><item><title><![CDATA[The AI Engineering Nut]]></title><description><![CDATA[<p><strong>Disclaimer:</strong> Some things have changed since making this announcement, <a href="https://zef.me/whats-next/">see this post for details</a>.</p>
<hr>
<p>For the past few years I&#x2019;ve been spending a lot of my time thinking (and writing) about the human side of engineering. Topics like leadership, management, psychology, humanity. The trivial things.</p>
<p>But I cracked that</p>]]></description><link>https://zef.me/the-ai-engineering-nut/</link><guid isPermaLink="false">692978164871d40001652527</guid><dc:creator><![CDATA[Zef Hemel]]></dc:creator><pubDate>Thu, 03 Apr 2025 05:46:00 GMT</pubDate><content:encoded><![CDATA[<p><strong>Disclaimer:</strong> Some things have changed since making this announcement, <a href="https://zef.me/whats-next/">see this post for details</a>.</p>
<hr>
<p>For the past few years I&#x2019;ve been spending a lot of my time thinking (and writing) about the human side of engineering. Topics like leadership, management, psychology, humanity. The trivial things.</p>
<p>But I cracked that nut. Done. There&#x2019;s just not <a href="https://en.wikipedia.org/wiki/Fermat&apos;s_Last_Theorem?ref=zef.me">enough space in this post to fit the answer</a>.</p>
<p>So it was time to move on to something <em>actually</em> challenging.</p>
<p>Since the opportunity arose, I decided to switch my role to become <em>Head of AI Engineering</em>. While there are definitely still management aspects to this role, only a minority of subjects to my management shall henceforth be humans.</p>
<p>Bow before me, ye AI subjects.</p>
<p><em>I wonder if at some point such jokes will be considered speciesism. I may be setting myself up to be canceled or worse in a few years. If you never hear from me again, call my wife.</em></p>
<p>In all seriousness, the role is all about accelerating the company&#x2019;s transition to fully embrace AI in every sensible way. In the product, to the customers&#x2019; benefit, but also internally. A big part is to advance engineering practice using AI tools, and ultimately to leverage AI in every part of the organization. Quite the nut.</p>
<hr>
<p>One of the (many) intriguing aspects to explore in this space is that LLMs (Large Language Models) have a surprising amount of overlap in behavior with humans. More so than any &#x201C;traditional&#x201D; computer algorithm thus far.</p>
<p>Still, some of you will say: &#x201C;LLMs ain&#x2019;t people, they&#x2019;re just stochastic, probabilistic token predictors; fancy autocomplete. What&#x2019;s the point of comparing them to humans?&#x201D;</p>
<p>Of course, you would be right. Academically. But who wants to be <em>academically</em> right? You can dismissively describe many things in a similar fashion: computers are <em>just</em> fancy calculators. Humans are <em>just</em> thermoregulated biological units optimized for pattern detection. <em>Burn! Oh no he didn&#x2019;t!</em></p>
<p>While all technically accurate (let&#x2019;s say), not super helpful.</p>
<p>Ultimately, the question is not <em>what it is</em>, but how we use it in valuable ways. And AI &#x2014; love it or hate it &#x2014; has value to provide.</p>
<hr>
<p>Working more with LLMs over the last months has clarified a couple of left-over issues I&#x2019;ve had with humans.</p>
<p>One example: the challenge of <em>context window management</em>. LLMs have a hard cap on how many tokens (think: words) can fit into a conversation. This is called their context window.</p>
<p>Humans have similar constraints (although it&#x2019;s harder to identify the exact limit): how much stuff can we productively keep in our heads? How much context do we need, what is too much, when do we start to get confused? With LLMs it&#x2019;s super obvious that we have to strategically manage this. I would argue that with humans we do too, but do we ever really think of it that way? Thus far, I have naively argued that &#x201C;more context is better.&#x201D; But is it? If you keep dumping more context information on people, do they not simply get lost in <a href="https://dictionary.cambridge.org/dictionary/english/tmi?ref=zef.me">TMI</a>? Perhaps we should also more strategically curate this? &#x1F914;</p>
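<p>For the curious, here is what that strategic curation can look like on the LLM side. This is a minimal, hypothetical sketch (not tied to any particular framework, and approximating tokens by word count rather than a real tokenizer): keep only the newest messages that still fit within a fixed token budget.</p>

```python
# Minimal sketch of context-window management: keep the newest
# messages whose combined "token" count fits a fixed budget.
# Token counts are approximated by word count here; real systems
# use a proper tokenizer.

def count_tokens(text):
    """Crude token estimate: whitespace-separated words."""
    return len(text.split())

def trim_history(messages, budget):
    """Keep the most recent messages that fit within the token budget."""
    kept = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                    # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = [
    "hello there",                       # oldest
    "summarize the quarterly report",
    "now translate it to Dutch",         # newest
]
print(trim_history(history, budget=9))   # drops the oldest message
```

<p>Real applications layer fancier strategies on top of this (summarizing older messages, pinning a system prompt), but the core constraint is the same one the paragraph above describes for humans: you have to decide what stays in and what falls out.</p>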
<p>Not my problem anymore. I&#x2019;m all about AI now.</p>
<p>You get the point, though. There&#x2019;s stuff to learn here. Both on the AI side, the human side, and their overlap.</p>
<p>Welcome to my (new) world.</p>
<hr>
<p>This is a head (of AI)&#x2019;s up that I will be talking, writing, and <a href="https://youtu.be/hmSsFVPmHKA?ref=zef.me">videotaping myself talking</a> about AI a lot more going forward.</p>
<p>And obviously, you can also expect a ton more absolute banger AI jokes. None of them LLM generated, because thus far LLMs are terrible, <em>terrible</em> joke tellers. That&#x2019;s why the world still needs me.</p>
<p>Still.</p>
<p>Anyway, that&#x2019;s some context for your window (see? <em>banger</em> AI jokes).</p>
<p>I hope you&#x2019;ll join me on this journey. Nevertheless, if you&#x2019;re somehow AI allergic, this may be a good time to (mentally) unsubscribe. Your loss though. Because I repeat: AI jokes. I mean come on.</p>
<p>This is going to be awesome.</p>
<h2 id="meta-comment">Meta comment</h2>
<p>Since people are going to ask: none of this post was written by AI, although I will admit I did have ChatGPT generate a first version of it.</p>
<p>After ChatGPT&#x2019;s first try, I proceeded to delete 99% (I kept the &#x201C;human side of engineering&#x201D; phrase at the beginning &#x2014;&#xA0;I liked that), and crafted everything by hand. Like an animal. I then had AI critique it &#x2014;&#xA0;devil&#x2019;s advocate style &#x2014; but it completely missed my signature sarcasm &#x2014;&#xA0;so no dice.</p>
<p>Famous last words, but I intend to keep it this way. I think (based on what people have been telling me <em>to my face</em>) that the reason everybody (and I mean <em>everybody</em>) reads my stuff is because it&#x2019;s <em>me</em>. I am (and I quote:) <em>cool</em>. While little of what I&#x2019;m proclaiming in these posts is technically new, what adds value is my particular way of framing things, with my signature blend of hilarious dad jokes. Try that, ChatGPT. You personality-less... LLM, you.</p>
<p>That said, the bland draft that ChatGPT produced did encourage me to finally write this post. I find this to be a common pattern in what AI tools have to offer: even if you end up using little or none of what they produce, they are a surprisingly effective way to muster the energy to just start, to do things you would otherwise not even attempt. <a href="https://www.theringer.com/2016/09/07/tech/why-did-apple-remove-the-headphone-jack-courage-592a412412b5?ref=zef.me">Courage</a>.</p>
<p>AI. Quite the nut. Let&#x2019;s get cracking.</p>
]]></content:encoded></item></channel></rss>