<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-gb" xmlns="http://www.w3.org/2005/Atom"><title>Decade City</title><link href="https://decadecity.net/" rel="alternate"/><link href="https://decadecity.net/index.xml" rel="self"/><id>https://decadecity.net/</id><updated>2026-02-12T00:00:00+00:00</updated><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><rights>Attribution 3.0 Unported (CC BY 3.0) 2026 Orde Saunders</rights><entry><title>The Gell-Mann amnesia of AI</title><link href="https://decadecity.net/blog/2026/02/12/the-gell-mann-amnesia-of-ai" rel="alternate"/><published>2026-02-12T00:00:00+00:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:0706fef0-617c-4f9e-afc5-d58276a9f8f2</id><summary type="html">&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amnesia_effect"&gt;Gell-Mann amnesia effect&lt;/a&gt; occurs when assessing the utility of a source for a subject we know about compared to one that we don't.  Even if it is obviously low quality in our known subject matter, we will still tend to trust it to inform us in areas we are not familiar with.&lt;/p&gt;
&lt;p&gt;I get the impression that this is at work with assessments of the utility of generative "AI" where people will say it is good at things where they have limited domain expertise whilst simultaneously critiquing its effectiveness for things they know how to do.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=The+Gell-Mann+amnesia+of+AI&amp;url=%2Fblog%2F2026%2F02%2F12%2Fthe-gell-mann-amnesia-of-ai" style="border:0" alt="" /&gt;</summary><category term="AI"/><category term="article"/></entry><entry><title>Willpower</title><link href="https://decadecity.net/blog/2026/02/07/willpower" rel="alternate"/><published>2026-02-07T00:00:00+00:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:eb102d66-b759-4ff7-8141-53c96ab52963</id><summary type="html">&lt;p&gt;Willpower itself isn't particularly effective but conscious decision making coupled with establishing structures that facilitate the alignment of those decisions with your desired outcomes is effective.  However, that process isn't visible to others so they perceive it as willpower.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Willpower&amp;url=%2Fblog%2F2026%2F02%2F07%2Fwillpower" style="border:0" alt="" /&gt;</summary></entry><entry><title>Geopolitical evolution vs revolution</title><link href="https://decadecity.net/blog/2026/01/27/geopolitical-evolution-vs-revolution" rel="alternate"/><published>2026-01-27T00:00:00+00:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:9c4a5b9f-bc51-4acb-b5a7-8ae9b38b7ef1</id><summary type="html">&lt;p&gt;I've been following the commentariat's response to Carney's speech and I'm seeing the short term (electoral cycle; revolutionary) and long term (arc of history; evolutionary) theses develop.&lt;/p&gt;
&lt;p&gt;Revolutionary: Trump is an aberration who won't last long, and moving away from US hegemony is too complex to be achievable.&lt;/p&gt;
&lt;p&gt;Evolutionary: Trump is the foreseeable product of a system that remains in place, and moving away from US hegemony is complex to achieve, so it has to be done incrementally.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Geopolitical+evolution+vs+revolution&amp;url=%2Fblog%2F2026%2F01%2F27%2Fgeopolitical-evolution-vs-revolution" style="border:0" alt="" /&gt;</summary><category term="article"/></entry><entry><title>Overview of the AI landscape (Q4 2025)</title><link href="https://decadecity.net/blog/2025/11/21/overvew-of-the-ai-landscape" rel="alternate"/><published>2025-11-21T00:00:00+00:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:0327ae9a-d1ac-4bf6-9c33-ff49f299ded2</id><summary type="html">&lt;p&gt;AI is creating interesting times which can be hard to follow.  As a by-product of organising my thoughts I've written a summary.&lt;/p&gt;
&lt;h2&gt;AI does not mean what you think it means&lt;/h2&gt;
&lt;p&gt;The "AI" we have is mostly LLMs (Large Language Models) so they are the golden hammer.  We also have image generators as an example of a different model for a non-language job (even if they use LLMs to process text input from prompts).  We are able to use "AI" to detect breast cancer much earlier in mammograms and that's because it's using a dedicated model trained on a large data set of historical mammograms, not an LLM "AI".  We don't have a Large Spreadsheet Model which is why we get warnings from Microsoft not to use Copilot's LLM "AI" in Excel for anything that needs accurate numbers...&lt;/p&gt;
&lt;h2&gt;Monopoly Capitalism&lt;/h2&gt;
&lt;p&gt;The aim of the game is that the winner will take all, become a monopoly on which everyone depends, and then charge whatever it wants.  In order to become that monopoly, the real costs of the models will be subsidised with outside money (equity, debt, venture, free cash, &amp;amp;c.) so consumers will think it's cheap (or even free) meaning they use it for things that are only value for money when there's very little money.  Once they are reliant on the value - and there's no competition left - then the winner can make the money go up.  If you're doing things in the "AI" space right now then you'd be daft not to make use of this subsidised value, otherwise you're effectively competing with the biggest companies on the planet.  This is why we're seeing LLM "AI" products rather than dedicated models since the former is subsidised and the latter requires non-trivial investment to train.&lt;/p&gt;
&lt;h2&gt;Runners and riders&lt;/h2&gt;
&lt;p&gt;Talking of the biggest companies on the planet, here's how I see the major players in the race to be the winner who takes all:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenAI and Anthropic are LLM companies that will rent their models out to you at heavily subsidised rates.  They are trying to build useful products round their models in order to gain users (even if the unit economics are unprofitable).  If they find you are building a useful product on their service they have a good incentive to build their own version and undercut you.&lt;/li&gt;
&lt;li&gt;Google and Microsoft have products and users so they are shotgunning LLMs into them trying to find a market fit.  They are also sitting on huge piles of cash they don't know what to do with and people will lend them money at low interest rates.&lt;/li&gt;
&lt;li&gt;Amazon are going to keep selling you compute in whatever way they can and, handily, LLMs need a lot of compute.  They have plenty of money and continue to enshittify their side hustle of running the biggest shop on the planet.&lt;/li&gt;
&lt;li&gt;Meta are using generative "AI" to create bottomless doomscrolls of slop on Facebook that they can infuse with adverts to monetise people's partial attention.  They need to do this because nobody is using the Metaverse, which was their last great idea to burn money in order to monetise people, and TikTok is doing a better job of capturing people's partial attention.&lt;/li&gt;
&lt;li&gt;Apple are eventually going to do something that is a pale imitation of something else that is already successful and describe it as "revolutionary" in their product launch; its form will eclipse its function, and in the meantime people who never use anything other than Apple products will be saying "wait until we see what Apple does, it'll be revolutionary".&lt;/li&gt;
&lt;li&gt;𝕏 is building the Torment Nexus from the classic science fiction novel "Don't build the Torment Nexus".&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Jade Empire&lt;/h2&gt;
&lt;p&gt;Although it's the Human Sphere's #2 hyperpower, I've not been paying much attention to what's coming out of China, but I think they're going for releasing open weight models that are (comparatively) cheap to deliver.  Their competitive advantage looks to be that you can get a model that scores 90% as well as the top models on benchmarks, that you can tune yourself, at a fraction of the training and inference costs.  If the bubble pops due to limited demand for LLMs and we're left with lots of data centres full of, by then, out of date GPUs, they could end up running Chinese models to easily fulfil that limited demand.&lt;/p&gt;
&lt;h2&gt;Finances&lt;/h2&gt;
&lt;p&gt;Let me explain... no, there is too much, let me sum up: the finances seem to be based on significantly increased revenue coupled with significantly reduced expenditure in 2030.&lt;/p&gt;
&lt;h2&gt;What could possibly go right?&lt;/h2&gt;
&lt;p&gt;LLMs get significantly better at things that are valuable, capex goes down significantly once the data centres get built, and revenue skyrockets.  It will help things greatly if companies can become dependent on rented "AI" to increase their efficiency (i.e. getting rid of a lot of their employees to cut their opex).  All in the next five years.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Overvew+of+the+AI+landscape+%28Q4+2025%29&amp;url=%2Fblog%2F2025%2F11%2F21%2Fovervew-of-the-ai-landscape" style="border:0" alt="" /&gt;</summary><category term="AI"/></entry><entry><title>You won't lose your job to AI</title><link href="https://decadecity.net/blog/2025/09/29/you-wont-lose-your-job-to-ai" rel="alternate"/><published>2025-09-29T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:387adc8a-40db-445e-a502-6d78ec1776d2</id><summary type="html">&lt;p&gt;You'll lose your job to a "yes man" the management have hired to validate their expenditure on AI.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=You+won%27t+lose+your+job+to+AI&amp;url=%2Fblog%2F2025%2F09%2F29%2Fyou-wont-lose-your-job-to-ai" style="border:0" alt="" /&gt;</summary><category term="AI"/></entry><entry><title>Shaving AI hype with Hanlon's razor</title><link href="https://decadecity.net/blog/2025/09/14/shaving-ai-hype-with-hanlons-razor" rel="alternate"/><published>2025-09-14T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:60c530a3-b237-48b0-b5a4-7866e5edefb1</id><summary type="html">&lt;p&gt;I have a theory about the current "AI" situation.&lt;/p&gt;
&lt;p&gt;If you're not very good at something then LLMs make you seem "kind of OK" at it.&lt;sup&gt;&lt;a href="/blog/2024/11/10/aiing-fast-and-slow"&gt;1&lt;/a&gt;&lt;/sup&gt;  Conversely, if you are good at something they make you seem "kind of OK" at it.&lt;sup&gt;&lt;a href="/blog/2025/03/07/extracting-text-from-pdfs-using-python-and-tesseract"&gt;
2&lt;/a&gt;,&lt;a href="/blog/2025/05/22/still-not-convinced-by-llms"&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;p&gt;Now imagine you are a senior manager in a corporate job.  Using LLMs gives you the chance to experience being "kind of OK" at something for the first time in your existence so, naturally, you would think this technology was world changing...&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Shaving+AI+hype+with+Hanlon%27s+razor&amp;url=%2Fblog%2F2025%2F09%2F14%2Fshaving-ai-hype-with-hanlons-razor" style="border:0" alt="" /&gt;</summary><category term="AI"/></entry><entry><title>Minimal CSS view transitions</title><link href="https://decadecity.net/blog/2025/08/28/minimal-css-view-transitions" rel="alternate"/><published>2025-08-28T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:b56fdbff-2e03-448f-9330-bacdffcd8115</id><summary type="html">&lt;p&gt;With &lt;a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_view_transitions"&gt;CSS View Transitions&lt;/a&gt; it is possible to instruct the user agent to show transitions during page navigation.  Whilst complex transitions are possible, a minimal set of transitions will cover a basic site layout.&lt;/p&gt;
&lt;p&gt;If we have a page template that contains a header, main content area, and a footer which are semantically marked up:&lt;/p&gt;
&lt;pre class="code"&gt;&lt;code&gt;&amp;lt;body&amp;gt;
    &amp;lt;header&amp;gt;…&amp;lt;/header&amp;gt;
    &amp;lt;main&amp;gt;…&amp;lt;/main&amp;gt;
    &amp;lt;footer&amp;gt;…&amp;lt;/footer&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can enable CSS view transitions and declare a transition name for each:&lt;/p&gt;
&lt;pre class="code"&gt;&lt;code&gt;@view-transition {
    navigation: auto;
}
header {
    view-transition-name: header;
}
main {
    view-transition-name: main;
}
footer {
    view-transition-name: footer;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates a separate snapshot for each element, meaning they animate independently of each other and consistently within themselves.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Minimal+CSS+view+transitions&amp;url=%2Fblog%2F2025%2F08%2F28%2Fminimal-css-view-transitions" style="border:0" alt="" /&gt;</summary><category term="CSS"/><category term="article"/></entry><entry><title>Recreating jQuery's ready() with a proxy</title><link href="https://decadecity.net/blog/2025/06/03/recreating-jquerys-ready-with-a-proxy" rel="alternate"/><published>2025-06-03T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:e94fca8c-04e3-4b80-ad87-12529b7322f6</id><summary type="html">&lt;p&gt;jQuery's &lt;a href="https://api.jquery.com/ready/"&gt;&lt;code&gt;ready()&lt;/code&gt;&lt;/a&gt; is a handy way to ensure code is executed once the DOM is ready: it queues the code if called before the DOM is ready, or runs it immediately if it already is.  This differs from the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Document/DOMContentLoaded_event"&gt;&lt;code&gt;DOMContentLoaded&lt;/code&gt;&lt;/a&gt; event, whose listeners only run if they are added before the DOM becomes ready.  Recently I wanted to use the "fire and forget" mode of operation provided by &lt;code&gt;.ready()&lt;/code&gt; so I recreated it using a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy"&gt;&lt;code&gt;Proxy&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this version we have an array, imaginatively called &lt;code&gt;window.domReady&lt;/code&gt;, that we &lt;code&gt;.push()&lt;/code&gt; functions into: that's the interface.&lt;/p&gt;
&lt;pre class="code"&gt;&lt;code&gt;window.domReady.push(() =&gt; console.log("Hello world!"));&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The implementation lives as an inline &lt;code&gt;&amp;lt;script/&amp;gt;&lt;/code&gt; in the head of the document.  Really, it can go anywhere that runs before &lt;code&gt;DOMContentLoaded&lt;/code&gt; fires; or you could add a check of &lt;code&gt;document.readyState&lt;/code&gt; if you wanted to make it truly portable.&lt;/p&gt;
&lt;pre class="code"&gt;&lt;code&gt;window.domReady = [];
document.addEventListener("DOMContentLoaded", () =&gt; {
  const safeRun = (f) =&gt; {
    try {
      f();
    } catch (e) {
      console.error(e);
    }
  };
  window.domReady.forEach((f) =&gt; {
    safeRun(f);
  });
  window.domReady = new Proxy([], {
    set: (target, property, value) =&gt; {
      if (typeof value == "function") {
        safeRun(value);
      }
      return true;
    },
  });
});
&lt;/code&gt;&lt;/pre&gt;

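As a sketch of the "truly portable" variant mentioned above, checking document.readyState first means the queue also works when the script runs after the DOM is already ready (hypothetical, not from the original post; the shim at the top is only there so the sketch can run outside a browser):

```javascript
// Shim for running this sketch outside a browser (e.g. under Node);
// in a real page window and document already exist, so this is skipped.
if (typeof window === "undefined") {
  globalThis.window = {};
  globalThis.document = { readyState: "complete", addEventListener: () => {} };
}

window.domReady = window.domReady || [];

// Run a function without letting its exceptions halt everything else.
const safeRun = (f) => {
  try {
    f();
  } catch (e) {
    console.error(e);
  }
};

// Run everything queued so far, then swap in a proxy that executes any
// function pushed from now on immediately.
const drainAndProxy = () => {
  window.domReady.forEach(safeRun);
  window.domReady = new Proxy([], {
    set: (target, property, value) => {
      if (typeof value === "function") {
        safeRun(value);
      }
      return true;
    },
  });
};

if (document.readyState === "loading") {
  // DOM not parsed yet: defer until it is.
  document.addEventListener("DOMContentLoaded", drainAndProxy);
} else {
  // DOM already ready ("interactive" or "complete"): switch straight over.
  drainAndProxy();
}
```

In a browser the shim never triggers and the behaviour matches the original: early pushes queue until DOMContentLoaded, later pushes run immediately.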
&lt;p&gt;In this implementation we use the &lt;code&gt;DOMContentLoaded&lt;/code&gt; event to run all the functions that have been pushed into &lt;code&gt;window.domReady&lt;/code&gt; when the DOM becomes ready, then replace &lt;code&gt;window.domReady&lt;/code&gt; with a proxy array that immediately runs any function that is pushed into it.  The &lt;code&gt;safeRun()&lt;/code&gt; wrapper is there to stop exceptions in executed functions from blowing up and bringing the whole show to a halt; ask me how I know I need this...&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Recreating+jQuery%27s+ready%28%29+with+a+proxy&amp;url=%2Fblog%2F2025%2F06%2F03%2Frecreating-jquerys-ready-with-a-proxy" style="border:0" alt="" /&gt;</summary><category term="article"/><category term="JavaScript"/></entry><entry><title>Still not convinced by LLMs</title><link href="https://decadecity.net/blog/2025/05/22/still-not-convinced-by-llms" rel="alternate"/><published>2025-05-22T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:b5f636ed-a0e2-45fb-8d67-2b26294d68a0</id><summary type="html">&lt;p&gt;As a trial I used ChatGPT to do a task I have professional domain experience of (from a job I did for a couple of years, 25 years ago) and that I now pay a professional advisor to handle for me.&lt;/p&gt;
&lt;p&gt;Doing it myself (i.e. compiling a 13 row, 11 column spreadsheet to compare the options), talking to my advisor, and using ChatGPT all took about the same time (~30 minutes).  However, ChatGPT made numerous mistakes trying to compile the data, getting some of it meaningfully wrong and omitting a number of the options, so I gave it the spreadsheet I had manually created to analyse.  Despite telling me that it had identified the relevant field, it then failed to account for the length of time, meaning its analysis was completely inverted.  When I asked it to specifically account for that, it came to the same conclusion as my own analysis and that of my advisor (accompanied by some patronising "no shit Sherlock" high level text about the problem domain).&lt;/p&gt;
&lt;p&gt;So far I've only tried LLMs for things I know how to do myself and the results have been so bad I don't trust them to do things I don't know about.  Also doing the work is what gives me a lot of the value from the task, in this case compiling the spreadsheet, so by using the LLM to do the work I'm getting an output that has little value whilst missing out on the real value which comes from doing the work.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Still+not+convinced+by+LLMs&amp;url=%2Fblog%2F2025%2F05%2F22%2Fstill-not-convinced-by-llms" style="border:0" alt="" /&gt;</summary><category term="article"/></entry><entry><title>Technical debt</title><link href="https://decadecity.net/blog/2025/04/11/technical-debt" rel="alternate"/><published>2025-04-11T00:00:00+01:00</published><author><name>Orde Saunders</name><uri>https://decadecity.net/</uri></author><id>urn:uuid:37cc20c7-e982-4bff-aab7-2ae3d687a95f</id><summary type="html">&lt;p&gt;You borrow time from the future in order to get something faster in the present.  Then, when the future becomes the present, you find yourself slowed down by having to pay back that borrowed time - so you borrow yet more time from the future.&lt;/p&gt; &lt;img src="https://stats.decadecity.net/piwik.php?idsite=13&amp;rec=1&amp;action_name=Technical+debt&amp;url=%2Fblog%2F2025%2F04%2F11%2Ftechnical-debt" style="border:0" alt="" /&gt;</summary></entry></feed>