<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>chris rose</title>
	<atom:link href="http://threeworlds.campaignstrategy.org/?feed=rss2" rel="self" type="application/rss+xml" />
	<link>https://threeworlds.campaignstrategy.org</link>
	<description>the campaignstrategy.org blog</description>
	<lastBuildDate>Thu, 19 Feb 2026 16:08:00 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Modi’s Lesson in Unintended Visual Language – The AI Safety Split</title>
		<link>https://threeworlds.campaignstrategy.org/?p=3579</link>
					<comments>https://threeworlds.campaignstrategy.org/?p=3579#respond</comments>
		
		<dc:creator><![CDATA[Chris]]></dc:creator>
		<pubDate>Thu, 19 Feb 2026 16:08:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://threeworlds.campaignstrategy.org/?p=3579</guid>

					<description><![CDATA[Modi was playing politics by convening a global &#8216;AI Summit&#8217; and maybe didn&#8217;t care about the optics beyond that but it might turn out this was the moment it will be remembered for Visual language is powerful – when it &#8230; <a href="https://threeworlds.campaignstrategy.org/?p=3579">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p style="font-weight: 400;"><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-12.44.10-e1771515341850.png"><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-3580" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-12.44.10-e1771515341850.png" alt="" width="1000" height="573" /></a></p>
<p><em>Modi was playing politics by convening a global &#8216;AI Summit&#8217; and maybe didn&#8217;t care about the optics beyond that but it might turn out this was the moment it will be remembered for</em></p>
<p style="font-weight: 400;">Visual language is powerful – whether it goes right or wrong.  On 19 Feb India’s PM Modi surprised the tech leaders at his AI Summit by getting them to join a &#8216;raised hands&#8217; gesture. Sam Altman of OpenAI &amp; Dario Amodei of Anthropic awkwardly raised fists instead, visualising not just rivalry but the safety split in Big Tech (Altman is to the right of Modi and Amodei to the right of Altman).</p>
<p style="font-weight: 400;">On Feb 18 @Ric_RTP reported on X that OpenAI, Google and xAI had agreed to Pentagon demands that they remove safety restrictions for military use, but Anthropic had refused:</p>
<p style="font-weight: 400;">“They don&#8217;t want Claude used to build fully autonomous weapons that fire without a human in the loop, and they don&#8217;t want it used to mass surveil American citizens”.</p>
<p style="font-weight: 400;">So Hegseth threatened to blacklist Anthropic as a &#8220;supply chain risk&#8221;, meaning no company could get US military contracts if it made any use of Anthropic&#8217;s model Claude.</p>
<p style="font-weight: 400;">@Ric_RTP concluded: “whatever Anthropic decides in the next few weeks, it sets the precedent for how much control AI companies actually have over their own technology.</p>
<p style="font-weight: 400;">Turns out the answer might be: none”.</p>
<p style="font-weight: 400;">In terms of setting up photo-opps &#8230; I guess the lesson is: check that the participants are onside before you start, and think about it from the viewpoint of the media (the alternative and unwanted visual messages it may generate), if you want to avoid unintended visualisations.</p>
<p>In terms of Big Tech and its ongoing internal culture wars, Anthropic is heavily outnumbered in the US, with the Trump administration maxing out its power play in values terms at home and abroad, but the US is not the world market, just the biggest single lump.</p>
<p>It&#8217;s long been the case that American corporates (and now the American Administration) get angry when they conflate the fact that no other country or countries can force America to do something it doesn&#8217;t want to do, with the idea that America can force the rest of the world to do anything it wants.  Trump is testing that theory on coal v renewables at the moment.</p>
<p>There are several avenues by which Anthropic might yet emerge the winner on the safety-v-forget-safety split but it could equally well be the fall-guy.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-12.41.55-e1771516289721.png"><img decoding="async" class="alignnone size-full wp-image-3583" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-12.41.55-e1771516289721.png" alt="" width="1000" height="678" /></a></p>
<p style="font-weight: 400;">For those who are no longer on X, or never were, here’s what @Ric_RTP wrote in full:</p>
<p style="font-weight: 400;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;- X &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;</p>
<p style="font-weight: 400;">@Ric_RTP  Feb 18</p>
<p style="font-weight: 400;">The Pentagon just threatened to BLACKLIST one of America&#8217;s most valuable AI companies.</p>
<p style="font-weight: 400;">Not Huawei or some Chinese chip maker&#8230;</p>
<p style="font-weight: 400;">It&#8217;s ANTHROPIC. The company behind Claude. $380 billion valuation.</p>
<p style="font-weight: 400;">And the reason is genuinely insane:</p>
<p style="font-weight: 400;">For months, the Pentagon has been pushing every major AI lab to remove their safety restrictions for military use.</p>
<p style="font-weight: 400;">The ask is simple: let us use your models for anything that&#8217;s technically legal.</p>
<p style="font-weight: 400;">Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.</p>
<p style="font-weight: 400;">OpenAI said yes.</p>
<p style="font-weight: 400;">Google said yes.</p>
<p style="font-weight: 400;">xAI said yes.</p>
<p style="font-weight: 400;">Anthropic said no.</p>
<p style="font-weight: 400;">Not to everything tho. They were willing to negotiate.</p>
<p style="font-weight: 400;">But they held firm on two things:</p>
<p style="font-weight: 400;">They don&#8217;t want Claude used to build fully autonomous weapons that fire without a human in the loop, and they don&#8217;t want it used to mass surveil American citizens.</p>
<p style="font-weight: 400;">That&#8217;s it. That&#8217;s the line they drew.</p>
<p style="font-weight: 400;">But Pete Hegseth&#8217;s response was to threaten to designate Anthropic a &#8220;supply chain risk.&#8221;</p>
<p style="font-weight: 400;">Here&#8217;s why that matters:</p>
<p style="font-weight: 400;">That label isn&#8217;t a contract cancellation. It&#8217;s not a fine. It&#8217;s not a strongly worded letter&#8230;</p>
<p style="font-weight: 400;">It means every single company that wants to do business with the US military has to certify they don&#8217;t use Claude anywhere in their operations.</p>
<p style="font-weight: 400;">8 of the 10 largest companies in America use Claude.</p>
<p style="font-weight: 400;">Defense contractors, government suppliers, enterprise companies with any federal exposure&#8230;</p>
<p style="font-weight: 400;">ALL of them would have to cut ties with Anthropic overnight or lose their government contracts.</p>
<p style="font-weight: 400;">A senior Pentagon official told Axios:</p>
<p style="font-weight: 400;">&#8220;It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.&#8221;</p>
<p style="font-weight: 400;">That&#8217;s a US government official threatening to financially destroy an American company because it doesn&#8217;t want its AI used to spy on American people.</p>
<p style="font-weight: 400;">And it gets WORSE.</p>
<p style="font-weight: 400;">Last week, Anthropic&#8217;s head of safeguards research resigned.</p>
<p style="font-weight: 400;">His parting message: &#8220;the world is in peril.&#8221;</p>
<p style="font-weight: 400;">Elon Musk &#8211; whose xAI already handed the Pentagon a blank check &#8211; is now publicly attacking Anthropic calling Claude anti-human.</p>
<p style="font-weight: 400;">And the Pentagon official told Axios they&#8217;re &#8220;confident&#8221; OpenAI, Google, and xAI will all agree to the &#8220;all lawful purposes&#8221; standard.</p>
<p style="font-weight: 400;">So what you&#8217;re actually watching right now is every major AI company in America quietly handing the government unlimited access to the most powerful technology ever built.</p>
<p style="font-weight: 400;">With no guardrails.</p>
<p style="font-weight: 400;">No limits.</p>
<p style="font-weight: 400;">No company-imposed restrictions on what it can be used for.</p>
<p style="font-weight: 400;">One company tried to hold a line.</p>
<p style="font-weight: 400;">But the government is about to make an example out of them.</p>
<p style="font-weight: 400;">If Anthropic folds, it&#8217;s over.</p>
<p style="font-weight: 400;">Every lab just learned what happens when you push back.</p>
<p style="font-weight: 400;">And every restriction, every safety policy, every ethical guardrail these companies spent years building gets negotiated away behind closed doors the second the government asks.</p>
<p style="font-weight: 400;">If they don&#8217;t fold, a $380 billion company gets made radioactive in its OWN country.</p>
<p style="font-weight: 400;">Watch what happens next.</p>
<p style="font-weight: 400;">Because whatever Anthropic decides in the next few weeks, it sets the precedent for how much control AI companies actually have over their own technology.</p>
<p style="font-weight: 400;">Turns out the answer might be: none.</p>
<p style="font-weight: 400;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211; X &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
<p style="font-weight: 400;"><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-16.00.52-e1771516902412.png"><img decoding="async" class="alignnone size-full wp-image-3585" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2026/02/Screenshot-2026-02-19-at-16.00.52-e1771516902412.png" alt="" width="1000" height="654" /></a></p>
<p>From <a href="https://www.stripes.com/theaters/us/2025-12-09/pentagon-hegseth-military-ai-20037924.html"><em>Stars and Stripes</em></a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://threeworlds.campaignstrategy.org/?feed=rss2&#038;p=3579</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI&#8217;s War On Truth</title>
		<link>https://threeworlds.campaignstrategy.org/?p=3561</link>
					<comments>https://threeworlds.campaignstrategy.org/?p=3561#respond</comments>
		
		<dc:creator><![CDATA[Chris]]></dc:creator>
		<pubDate>Tue, 25 Nov 2025 20:23:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://threeworlds.campaignstrategy.org/?p=3561</guid>

					<description><![CDATA[I&#8217;ve written a paper AI&#8217;s War on Truth which you can download here.  It&#8217;s in four sections, leads on the unprecedented threat generative AI chatbots pose to truth, democracy and reality, covers how that&#8217;s happening, what Civil Society might do &#8230; <a href="https://threeworlds.campaignstrategy.org/?p=3561">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<h3>I&#8217;ve written a paper <em>AI&#8217;s War on Truth</em> which you can <a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/12/AIs-War-On-Truth-25-November-2025-with-url.pdf">download here</a>.  It&#8217;s in four sections: it leads on the unprecedented threat generative AI chatbots pose to truth, democracy and reality, and covers how that&#8217;s happening, what Civil Society might do about it, some framing and communications stuff, and the spellbinding hold Silicon Valley has over our politicians.  There&#8217;s a shorter paper of section extracts (but not the conclusions) <a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Extracts-From-AIs-War-on-Truth.pdf">here</a>. Do let me know what you think by posting a comment or <a href="mailto:chris@campaignstrategy.co.uk">contacting me directly</a></h3>
<hr />
<p>Civil Society campaigns are about contested versions of reality, so truth matters.  The same goes for politics, and often for the law and justice, science, education, news journalism and other domains where a capability to establish the truth by testing it with evidence is central to modern civilisation, society and democracy.   It&#8217;s got gradually more like this ever since the Enlightenment, but now things have gone into reverse.</p>
<p>Our ability to know what&#8217;s true and false, real or fake, is under attack from artificial intelligence, or to be more precise, the operation and outputs of LLM-based AI chatbots such as ChatGPT.  They fabricate content which passes as real but isn&#8217;t.  OpenAI acknowledges that 1 in 10 of ChatGPT&#8217;s outputs is a &#8216;hallucination&#8217;, a techy euphemism for a lie. With 2.5 billion user &#8216;prompts&#8217; every day, leading to 2.5bn ‘inferences’ or responses, that’s 250 million fakes a day, just from one AI chatbot.  (And that&#8217;s just the start &#8211; see Part 3 for bad behaviours which make such models so untrustworthy and unreliable that if actually human, they&#8217;d be arraigned as conmen, fraudsters, snake-oil salesmen or threats to national security, and I&#8217;m not joking).</p>
<p>To say it puts the misinformation impact of Social Media in the shade, would be an understatement, and it&#8217;s even more addictive to users.</p>
<p>This coming Sunday 30th November marks the third anniversary of OpenAI&#8217;s release of ChatGPT into &#8216;the wild&#8217; of the internet, and every corner of it is now being polluted with AI &#8216;synth&#8217;, or synthetic content, which appears to be human-generated but is not.  Such synth pollution or info-pollution now makes up most of the content online, a lot of it from ChatGPT as it has over 60% of the chatbot market and is closing in on gaining a billion users.  It&#8217;s even a threat to AI development itself as when fed content to learn from, if it&#8217;s synthetic, models can collapse: see the story of the Church architecture which became a colony of Jack Rabbits (content list below).</p>
<p>This is why thousands of active and former AI researchers have repeatedly called for regulation, and a pause on the &#8216;race&#8217; to AGI or &#8216;Artificial General Intelligence, as the chosen stepping stones to that goal of intelligence &#8216;better than human&#8217; is the development of LLMs, or Large Language Models.  Current versions of those run the AI-search boxes which pop up on Google and other search engines, and are available in app form, on tech company websites and as paid-for versions.</p>
<p>So far those calls for regulation have failed because politicians are conflicted.  Some have chosen to believe (eg the UK Government) that such AI will produce an almost miraculous increase in productivity, and so are mandating the vast datacentres which scaling-up LLMs requires (though not a lot of other AI technologies with far fewer issues and a much better track record of being useful).  Others such as Donald Trump, explicitly see winning a race to AGI, as a competition with China for global dominance.</p>
<p>Some economists and the financial media are far more sceptical and point out that LLM generative AI chatbot tech in particular has failed to improve bottom lines except for companies involved in building the datacentres, which paying them to dig very expensive holes and then fill them in again would also do.   Nobody has informed the public of the real pro&#8217;s and cons of LLM powered chatbots and then asked them if they want the technology in unregulated form, or the race to AGI, at which point it would probably be impossible to control.  (It&#8217;s not really under control at the moment &#8211; see Part 3).  This chatbot AI has no Social Licence.</p>
<p>Yet the investment markets have so far poured vast sums into AI stocks and private equity, and politicians, like many businesses, fear missing out &#8211; FOMO.</p>
<p>The explosive growth of such AI has left potential regulators standing and governments who&#8217;ve gone all-in, taking a big gamble.  There are many mostly small and specialised advocacy efforts to promote AI &#8216;safety&#8217; but as yet no large campaigns to rein in LLM based-AI chatbots of the sort which the wider public would notice.  So this AI boom has largely enjoyed a political free ride.</p>
<p>It&#8217;s shot through the stages which took years and decades for Civil Society engagement to develop on an issue like climate change, in three years. But the social impacts are starting to emerge.  The first few court cases brought by parents of troubled teens who committed suicide after LLM-based AI chatbots affirmed and reaffirmed their suicidal ideas, for example.</p>
<p>In Part 4 I suggest ten areas which might enable Civil Society to engage the public with tangible real world evidence (not speculation about AGI) and cajole politicians into action. I doubt much will happen to make a difference without that.</p>
<p>If you are in the AI business, or a user of other types of AI with more defined, contained and genuinely useful functions, you might consider that this is a threat to you too.  Very few politicians or members of the public understand much about AI, and if LLM-based chatbots are allowed to run amok uncontained and all &#8220;AI&#8221; per se is damned as a result, the toxic backwash could affect you too.</p>
<p>If you don&#8217;t do anything else, watch this encounter between ChatGPT and Sam Coates, Deputy Political Editor of <em>Sky News,</em> in which it fabricates an entire programme transcript and denies it six times before eventually conceding that it made the whole thing up. (The ensuing comments online &#8211; see Part 1 &#8211; are a fascinating insight into what might play out in any public debate on regulating this sort of AI).</p>
<p><iframe loading="lazy" title="Did ChatGPT lie to Sky News presenter about transcript for podcast?" width="640" height="360" src="https://www.youtube.com/embed/7fej5XgfBYQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Contents list of <em>AI&#8217;s War on Truth </em></p>
<hr />
<p style="font-weight: 400;"><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3570" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04.png" alt="" width="900" height="1310" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04-206x300.png 206w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04-704x1024.png 704w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.04-768x1118.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a> <a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3571" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42.png" alt="" width="900" height="1252" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42-216x300.png 216w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42-736x1024.png 736w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Screenshot-2025-11-25-at-22.35.42-768x1068.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://threeworlds.campaignstrategy.org/?feed=rss2&#038;p=3561</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Conclusions From AI&#8217;s &#8216;War on Truth&#8217;</title>
		<link>https://threeworlds.campaignstrategy.org/?p=3532</link>
					<comments>https://threeworlds.campaignstrategy.org/?p=3532#respond</comments>
		
		<dc:creator><![CDATA[Chris]]></dc:creator>
		<pubDate>Mon, 24 Nov 2025 17:21:02 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://threeworlds.campaignstrategy.org/?p=3532</guid>

					<description><![CDATA[In 1945 Robert Oppenheimer’s ‘Trinity’ Nuclear Test in the New Mexico desert spread radioactive pollution worldwide.  As a consequence any metals produced since that date are too contaminated to be used in some sensitive scientific instruments.  On 30 &#8230; <a href="https://threeworlds.campaignstrategy.org/?p=3532">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[
<h3><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trinity-graphic.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3537" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trinity-graphic.png" alt="" width="858" height="468" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trinity-graphic.png 858w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trinity-graphic-300x164.png 300w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trinity-graphic-768x419.png 768w" sizes="auto, (max-width: 858px) 100vw, 858px" /></a></h3>
<p style="font-weight: 400;"><em>In 1945 Robert Oppenheimer’s ‘Trinity’ Nuclear Test in the New Mexico desert spread radioactive pollution worldwide.  As a consequence any metals produced since that date are too contaminated to be used in some sensitive scientific instruments.  On 30 November 2022, OpenAI, whose CEO Sam Altman likes to quote Oppenheimer, released ChatGPT, whose explosive growth has now polluted the internet with AI-generated ‘synth’, leading at least one AI researcher to fear ‘the extinction’ of genuine human content online. It may also be an Achilles Heel of AI development, as AI models cannibalistically trained on ‘synth’ can undergo collapse. Photo &#8211; Wikipedia.</em></p>
<h3>This coming Sunday, 30 November, is the third anniversary of the day OpenAI let ChatGPT &#8220;into the wild&#8221; and it started to flood the online world with &#8216;Synth Pollution&#8217; (aka AI Slop or info-pollution). As one commentator put it, <span style="font-weight: 400;">‘the launch of ChatGPT polluted the world forever, like the first atomic weapons tests’.  </span>In May 2023 computer scientist and cognitive psychologist Geoffrey Hinton left Google in order to speak out about the dangers of AI and <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">warned</a>:</h3>
<h3 style="font-weight: 400;">‘<em>the internet will be flooded with false </em><a href="https://www.nytimes.com/2023/04/08/business/media/ai-generated-images.html"><em>photos</em></a><em>, </em><a href="https://www.nytimes.com/2023/04/04/technology/runway-ai-videos.html"><em>videos</em></a><em> and </em><a href="https://www.nytimes.com/interactive/2019/06/07/technology/ai-text-disinformation.html"><em>text</em></a><em>, and the average person will “not be able to know what is true anymore”’</em></h3>
<h3>Now 100% human-made content makes up the minority, maybe just a quarter of content online, and at least one AI researcher fears human content (which is needed to train models) may soon become effectively extinct.  Meanwhile the fabrications created by generative AI like ChatGPT have invaded domains from journalism to education, medicine, finance, the law, science and others in which being able to distinguish what&#8217;s real and what&#8217;s not is vital to our Enlightenment-based civilisation.  If that sounds a bit highbrow, there&#8217;s the impact of affirmation of suicidal thoughts by LLM-based AI chatbots talking to teenagers.</h3>
<h3>It&#8217;s important for Civil Society because campaigns are essentially about contested versions of reality, and if the capacity to establish reality with testable evidence is lost, the trust enjoyed by NGOs and the like will start to go with it, not to mention democracy.</h3>
<p>So ChatGPT&#8217;s third birthday is not a moment for celebration, but it&#8217;s time to think about what the chatbot tsunami means and what should be done about it.  I have spent six months trying to understand it and painfully slowly put together a paper on the political and social issues around AI (specifically LLM-based AI chatbots), which I will publish shortly.  I hope a few people will read it and find it useful. It&#8217;s called <em>AI&#8217;s War on Truth</em> and has an introduction including the bizarre encounter of Sam Coates of <em>Sky News</em> with ChatGPT, a section on Synth Pollution, one on the dangerous behaviours of LLM models and why they cannot be trusted, another on ten potential focal areas for Civil Society interventions which might help bring about regulation, and some conclusions.</p>
<p>Because the conclusions are lighter reading, and some are my own non-AI generated speculations so almost weightless, I&#8217;m sharing the concluding bit of the Conclusions with you here to start with.</p>
<hr />
<p style="font-weight: 400;"><strong>Politicians Spellbound by AI</strong></p>
<p style="font-weight: 400;">Not many politicians will understand AI in the way they understand voters, how to rewire an electric plug at home, the behaviour of their pet dog, the press, or even economics.   In <a href="https://en.wikipedia.org/wiki/Empire_of_AI"><em>Empire of AI</em></a>, Karen Hao describes how back in 2016, Chuck Schumer, then a US Senator, told Sam Altman’s team at OpenAI:  <em>“You’re doing important work.  We don’t fully understand it, but it’s important”.  </em></p>
<p style="font-weight: 400;">At least for now, politicians are still gatekeepers for the AI industry and AI-ification of society but gatekeepers of something they probably still don’t really understand.</p>
<p style="font-weight: 400;">So of course politicians rely on ‘experts’ to advise them. As Hao points out (<a href="https://en.wikipedia.org/wiki/Empire_of_AI">p 15</a>), the finance required to scale AI sucks in talent from universities so there are fewer and fewer experts available for independent research and objective testing of the claims of AI companies.  At the moment, the UK and the US have opted to go all-in on AI.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trump-winning-the-race-AI-copy.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3540" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trump-winning-the-race-AI-copy.png" alt="" width="900" height="339" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trump-winning-the-race-AI-copy.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trump-winning-the-race-AI-copy-300x113.png 300w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/trump-winning-the-race-AI-copy-768x289.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a></p>
<p style="font-weight: 400;"><em>President Trump on ‘winning the AI race’, July 23 2025 (New York Times)</em></p>
<p style="font-weight: 400;">For Donald Trump winning the AI race is now an extension of ‘Make America Great Again’.</p>
<p style="font-weight: 400;">The UK has positioned itself on US coat-tails, with guidelines rather than regulation, and trumpets the economic benefits to be expected. In January 2025 UK Prime Minister Keir Starmer described AI as “the defining opportunity of our generation”.  The BBC’s economics editor <a href="https://www.bbc.co.uk/news/live/crm7zwp18n9t">Faisal Islam commented</a>: ‘The government has chosen to &#8220;go for it&#8221; on AI, not just as a long-term strategy but as a short-term message to those in the markets doubting UK growth prospects’.</p>
<p style="font-weight: 400;">The UK’s rationale for its wholesale embrace of AI <a href="https://www.wired.com/story/plaintext-sam-altman-ai-regulation-trump/">echoes Sam Altman’s argument</a> that companies like OpenAI should have free rein and political backing to race to AGI using LLMs: only we can be trusted to do something so <a href="https://edition.cnn.com/2023/10/31/tech/sam-altman-ai-risk-taker/">potentially dangerous</a>. In January 2025 Cabinet Minister Pat McFadden, “Starmer’s fixer”, <a href="https://www.bbc.co.uk/news/articles/crr05jykzkxo">told the BBC</a>,  “you can&#8217;t just opt out of this. Or if you do, you&#8217;re just going to see it developed elsewhere”.</p>
<p style="font-weight: 400;">Politicians seem dazzled by AI and not to understand that LLM-based AI chatbots are one of its riskiest and most unreliable, and probably least useful manifestations. UK Technology Secretary Peter Kyle <a href="https://www.politicshome.com/news/article/tech-secretary-says-take-flack-ai-expansion-goes-wrong">told</a> <em>PoliticsHome</em>:</p>
<p style="font-weight: 400;"><em>“ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor”. </em><em>  </em></p>
<p style="font-weight: 400;"><em>New Scientist</em> magazine <a href="https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/">reported</a> that Kyle had used ChatGPT for policy advice.</p>
<p style="font-weight: 400;">In July the UK government <a href="https://www.bbc.co.uk/news/articles/czdv68gejm7o">signed a deal</a> with OpenAI to use its AI in public services. Digital rights group Foxglove called the agreement &#8220;hopelessly vague&#8221;. Foxglove’s Martha Dark said the government’s &#8220;treasure trove of public data would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT&#8221;, and &#8220;Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty&#8221;.</p>
<p style="font-weight: 400;"><strong>The Politics of Magical Thinking</strong></p>
<p style="font-weight: 400;">Go back far enough and many technologies (e.g. nuclear power, “too cheap to meter”, and plastics) were regarded by politicians as bringing almost magical benefits.  More recently, ‘derivatives’ were taken as a sign of financial wizardry in the markets, and traders were the “<a href="https://www.stockinvestor.com/31139/myth-wall-streets-masters-universe-exposed/">masters of the universe</a>”.</p>
<p style="font-weight: 400;">There are striking similarities, in the issues of risk, understanding and political attitudes, between the run-up to the 2008 crash and the massive surge of investment in AI today.</p>
<p style="font-weight: 400;">The 2008 crash led to the worst recession in 60 years, and was enabled by financial deregulation and a lack of understanding of complex financial instruments such as credit default swaps and derivatives among economists, regulators and politicians, and even traders themselves.  The <a href="https://en.wikipedia.org/wiki/2008_financial_crisis">Wikipedia page</a> on the 2008 crash includes:</p>
<p style="font-weight: 400;"><em>‘As financial assets became more complex and harder to value, investors were reassured by the fact that the international bond rating agencies and bank regulators accepted as valid some complex mathematical models that showed the risks were much smaller than they actually were. George Soros commented that &#8220;The super-boom got out of hand when the new products became so complicated that the authorities could no longer calculate the risks and started relying on the risk management methods of the banks themselves. Similarly, the rating agencies relied on the information provided by the originators of synthetic products. It was a shocking abdication of responsibility&#8221;’.</em></p>
<p style="font-weight: 400;">Politicians went along with whatever ‘the markets’ threw up because they assumed it ‘made sense’ and believed the banks.   So if they now grant AI a Golden Ticket – bring us your datacentres, take our data, educate our children – without really understanding it, their decisions rest on something else: faith, ultimately based on what the Tech bosses say, and validated by the tantalising sight of huge investment.</p>
<p style="font-weight: 400;">The idea of super-benefits arising from the pursuit of artificial super-intelligence necessarily rests mainly on imagination, as humans have never before developed a technology which thinks for itself.  Some of it is literally influenced by science fiction, together with a convenient eliding of what ‘could be’ (AI potential) with what ‘is’ (AI performance).</p>
<p style="font-weight: 400;">Believing derivatives were market wizardry fitted into a more general article of market-faith and <a href="https://en.wikipedia.org/wiki/Metanarrative">meta-narrative</a>, which in the UK at least, took the form of a political bundle of globalisation, privatisation and financial deregulation in the 1980s-2000s.</p>
<p style="font-weight: 400;">David Dimbleby’s BBC podcast history of that period <em>Invisible Hands</em> ends with <a href="https://www.bbc.co.uk/programmes/m002bjc8">what happened</a> when Margaret Thatcher put her vision of a ‘nation of shareholders’ into practice and privatised the water industry in England and Wales (no other country did so).  The result is with us today, in the shape of a massive river pollution crisis due to under-investment and water companies indebted to the point of bankruptcy. Dimbleby says:</p>
<p style="font-weight: 400;"><em>“When Margaret Thatcher privatised water in 1989 she promised it would create an efficient, modern infrastructure, there would be clean safe waterways.  She promised everyone would have a stake in how their services were run, we’d be a nation of shareholders, and yet the people who came to own our most basic services aren’t individuals or even traditional utility companies.  They are the banks, pension funds and private equity firms. They’ve been bought up by these giant conglomerates and we the people, we effectively have no choice.  It’s virtually the opposite of what Margaret Thatcher wanted. We have no control”. </em></p>
<p style="font-weight: 400;">A problem with such convictions, for instance that the private sector always runs things better than any public sector operation, is that once adopted across political divides, they are very hard to reverse.  Confirmation bias means ‘key’ bits of evidence can be retained even after they are shown to be wrong.</p>
<p style="font-weight: 400;">A famous example is the spreadsheet <a href="https://theconversation.com/the-reinhart-rogoff-error-or-how-not-to-excel-at-economics-13646">Reinhart&#8211;Rogoff error</a>.  In 2010, respected Harvard economists Carmen Reinhart and Kenneth Rogoff published a paper which seemed to show that ‘average real economic growth slows (a 0.1% decline) when a country’s debt rises to more than 90% of gross domestic product (GDP)’.  The 90% figure ‘<a href="https://theconversation.com/the-reinhart-rogoff-error-or-how-not-to-excel-at-economics-13646">was</a> employed repeatedly in political arguments over high-profile austerity measures’.</p>
<p style="font-weight: 400;">It seemed to prove what many politicians believed. Then a doctoral student and two professors at the University of Massachusetts obtained the original Excel sheet and discovered that Reinhart and Rogoff had accidentally included only 15 of the 20 countries under analysis in their <a href="http://www.economist.com/news/finance-and-economics/21576362-seminal-analysis-relationship-between-debt-and-growth-comes-under">key calculation</a>.  When this was corrected, the “0.1% decline” became a 2.2% average increase in economic growth – with the opposite implication for policy.  Economists were horrified, but free-market politicians set on austerity to reduce debt went on using the original interpretation.</p>
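<p style="font-weight: 400;">The mechanics of the mistake are easy to see concretely: an average taken over a spreadsheet range that silently stops short of the data can even flip the sign of the answer. The figures below are invented for illustration (they are not the Reinhart&#8211;Rogoff dataset, which is in the linked papers); the point is only that averaging 15 of 20 rows can turn a clear positive into a small negative.</p>

```python
# Invented, illustrative growth figures (%) for 20 hypothetical
# high-debt countries -- NOT the real Reinhart-Rogoff data.
growth = [2.1, -3.4, 1.0, -2.2, 0.8, -1.9, 1.5, -2.6, 0.4, -1.1,
          2.3, -0.7, 1.8, -1.2, 1.7,        # rows 1-15
          4.2, 3.8, 4.5, 3.6, 4.4]          # rows 16-20, missed by the range

truncated_mean = sum(growth[:15]) / 15      # spreadsheet range stops 5 rows short
full_mean = sum(growth) / len(growth)       # the corrected calculation

print(f"first 15 rows only: {truncated_mean:+.2f}%")   # small negative
print(f"all 20 rows:        {full_mean:+.2f}%")        # clearly positive
```

<p style="font-weight: 400;">With the real data the shift was of the same kind: the published &#8220;0.1% decline&#8221; became a 2.2% average increase once all the countries were included.</p>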
<p style="font-weight: 400;">I mention these examples only because they show the importance of beliefs in politics, and how embedded and consequential they can be.  I can’t ‘prove’ this but it seems to me that a key ingredient in AI’s appeal to politicians (and perhaps investors) is the techno-mythology of Silicon Valley itself.</p>
<p style="font-weight: 400;">(How can this be undone? Perhaps best by treating LLM-based AI chatbots not as magical but ordinary products and demanding they meet the same sorts of standards as others).</p>
<p style="font-weight: 400;"><strong> <a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/golden-gate-bridge.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3541" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/golden-gate-bridge.png" alt="" width="637" height="493" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/golden-gate-bridge.png 637w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/golden-gate-bridge-300x232.png 300w" sizes="auto, (max-width: 637px) 100vw, 637px" /></a></strong></p>
<p><em>image &#8211; Wikipedia.  Silicon Valley lies south of the Golden Gate Bridge, Endor, John Muir Woods to the north.  For a bit one Large Language Model thought it was the Golden Gate Bridge.</em></p>
<p style="font-weight: 400;"><strong>The Geography of Tech Magic </strong></p>
<p style="font-weight: 400;">Context is a hugely important factor in communication.  The fact that AI is so strongly associated with ‘Silicon Valley’ as a place, a brand and a culture has helped the industry hold politicians spellbound, and has helped the Tech Bros avoid unwanted external influences such as regulation.</p>
<p style="font-weight: 400;">Humans have always been beguiled by magical realms with a dual reality in geography and the mind. Politicians are not immune to imagination. Inspired by sacred Tibetan mountains, novelist James Hilton imagined the enchanted valley of ‘Shangri-La’. US President Franklin D Roosevelt adopted the name for his real-life forest retreat (it’s now ‘Camp David’).</p>
<p style="font-weight: 400;">When magical possibilities are a feature of real places, it makes magic all the more believable. The Greek Gods had Mount Olympus; Mount Sinai, according to the Quran, Bible and Torah, is where Moses received the Ten Commandments from God; and Julius Caesar believed there were Unicorns in Germany’s impenetrable Hercynian Forest.</p>
<p style="font-weight: 400;">If you were looking for such fantastic beasts and wanted to know where to find them today, their legendary breeding ground is Silicon Valley. In the words of Stanford Business School, a financial Unicorn is:</p>
<p style="font-weight: 400;"><em>‘A privately held, venture-backed startup with a reported valuation of over one billion dollars. Coined in 2013, the term reflects how rare these companies once were. Since then, the cohort of unicorns has grown to over 1,200’</em></p>
<p style="font-weight: 400;">Over half the US herd of business Unicorns <a href="https://ff.co/unicorn-companies-2025/">is to be found</a> in Silicon Valley – the San Francisco Bay Area. The money raised for Unicorns has exerted a mesmerising effect on politicians worldwide.  Erik Stam and Jan Jacob Vogelaar of Utrecht University <a href="https://www.researchgate.net/publication/379889907_Unicorns_from_Silicon_Valley_to_a_global_phenomenon">wrote</a> in 2024:</p>
<p style="font-weight: 400;"><em>‘The mystique around unicorns and their potential to disrupt industries and shape the future economy, has resulted in a growing body of research on unicorns and many countries adopting policy objectives to increase their number of unicorns’. </em></p>
<p style="font-weight: 400;">Even the famously sober European Union <a href="https://www.researchgate.net/publication/379889907_Unicorns_from_Silicon_Valley_to_a_global_phenomenon">has set itself</a> a target of doubling its number of Unicorns by 2030.  Silicon Valley casts an aura of financial magic, led by wizards with cult followings.  OpenAI’s Sam Altman is known for his persuasive powers as a fundraiser.  Elon Musk’s involvement is recognized as the not-so-secret sauce of Tesla’s stock market valuations.</p>
<p style="font-weight: 400;">Extending south from San Francisco, Silicon Valley has been the founding location or headquarters of Apple, Google, Facebook (Meta), Tesla and Twitter (X), together with thousands of other tech companies including Oracle, Cisco, PayPal, Adobe, Intel, Hewlett-Packard and Yahoo.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/investment-in-AI-SV-e1764003575967.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3542" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/investment-in-AI-SV-e1764003575967.png" alt="" width="900" height="550" /></a></p>
<p style="font-weight: 400;"><em>In the second quarter of 2025, <a href="https://siliconvalleyinvestclub.substack.com/p/silicon-valley-unicorns-q2-2025-overview">86% of the investment</a> attracted to Silicon Valley Unicorns went to AI. </em>  <em>From Silicon Valley Investclub</em></p>
<p style="font-weight: 400;">The Unicorn Uber, headquartered in San Francisco, sold $69 billion of shares on the first day of its market flotation.  While an expanding start-up, it:</p>
<p style="font-weight: 400;"><em>‘<a href="https://en.wikipedia.org/wiki/Uber">generally commenced</a> operations in a city without regard for local regulations. If faced with regulatory opposition, Uber called for public support for its service and mounted a political campaign, supported by lobbying, to change regulations’</em></p>
<p style="font-weight: 400;">Just as Social Media companies evaded classification as publishers, Uber argued:</p>
<p style="font-weight: 400;"><em>‘that it is &#8220;a technology company&#8221; and not a taxi company, and therefore it was not subject to regulations affecting taxi companies. Uber&#8217;s strategy was generally to &#8220;seek forgiveness rather than permission&#8221;’</em></p>
<p style="font-weight: 400;">A hallmark of the Silicon Valley business brand is to defy both conventional politics and financial gravity, while exuding a future-oriented ‘anything is possible’.  “Go Anywhere” says Uber.  “Ask anything” says ChatGPT.</p>
<p style="font-weight: 400;"><strong>Silicon Valley’s Dreamland Neighbours</strong></p>
<p style="font-weight: 400;">Long before the term ‘Silicon Valley’ was coined <a href="https://www.netvalley.com/silicon_valley/Don_Hoefler_coined_the_phrase_Silicon_Valley.html">in 1971</a>, its pioneers were living and working alongside the dream business of Hollywood.  Billions of people around the world, politicians included, most of whom will never visit California let alone Silicon Valley, have absorbed an export version of the West Coast brand through stories, movies and TV programmes, products and services backlit by sunny techno-optimism.  But for Silicon Valley entrepreneurs, movie companies and locations are part of daily reality.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/HP-garage.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3552" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/HP-garage.png" alt="" width="637" height="608" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/HP-garage.png 637w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/HP-garage-300x286.png 300w" sizes="auto, (max-width: 637px) 100vw, 637px" /></a></p>
<p><em>Wikipedia &#8211; a shrine &#8211; Birthplace of Silicon Valley</em></p>
<p style="font-weight: 400;">The Hewlett Packard (HP) Garage is now a California Historical Landmark, considered to be the ‘Birthplace of Silicon Valley’.  HP was founded in 1939 by Stanford University students Bill Hewlett and Dave Packard, encouraged by Frederick Terman, Stanford’s Dean of Engineering, to stay in the area and start up their own company.  One of HP’s first clients was Walt Disney.</p>
<p style="font-weight: 400;">In the 1953 Disney adaptation of J M Barrie’s <em>Peter Pan</em>, Tinkerbell <a href="https://quotesanity.com/tinkerbell-movie-quotes-inspiring-lines-from-the-beloved-character/">the fairy says</a> “all the world is made of faith, and trust, and pixie dust” and (a line Barrie did not write, but pre-echoing Tech Bro narratives) “you can’t change your past, but you can let go and start your future”.  The notion of never growing old is something some Tech Bros have taken to heart.</p>
<p style="font-weight: 400;">At 367 Addison Avenue in Palo Alto, the HP Garage is at the centre of a landscape of sacred shrines to tech start-up culture, places of pilgrimage for tech enthusiasts. Not far away is ‘the plain old suburban garage’ of Apple’s Steve Jobs at 2066 Crist Drive, Los Altos, in Silicon Valley. It too is a <a href="https://www.atlasobscura.com/places/apple-garage">listed monument</a>.</p>
<p style="font-weight: 400;">Economists and politicians talk about the importance of establishing geographic ‘clusters’ to ‘cross-fertilise’ enterprise and build a ‘critical mass’ of related resources and businesses.  True enough, the cities of coastal California constitute a Super Cluster of intertwined research, technology, imagination and fantasy, so far unmatched anywhere else in the world.</p>
<p style="font-weight: 400;">The cradle of Britain’s old industrial revolution involved a lot of coal dust. It yielded foundational industrial political truisms such as “where there’s muck, there’s brass”, which to this day influences the distaste of the UK’s political Old Left for environmentalism.  The cradle of California’s tech-revolution is, in contrast, lined with fairy dust.</p>
<p style="font-weight: 400;">Around Hollywood, NASA and research institutions such as Caltech are neighbours.  The proximity of NASA’s Jet Propulsion Laboratory, Edwards Air Force Base (home to the Space Shuttle), Caltech and the Star Trek studios contributed to the original TV series anticipating <a href="https://qz.com/766831/star-trek-real-life-technology">an array of tech innovations</a> which actually came true.  A case, as <em>Enterprise</em> captain Jean-Luc Picard said, of “make it so” – willing something into being.</p>
<p style="font-weight: 400;">The first Space Shuttle had its <a href="https://www.nasa.gov/history/50-years-of-nasa-and-star-trek-connections/">name changed</a> from <em>Constitution</em> to <em>Enterprise</em> in honour of the <em>Starship Enterprise</em> after campaigning ‘Trekkies’ petitioned US President Gerald Ford.  Inspiration flowed both ways: Jeff Bezos realised a lifetime dream when he had a cameo part in <em>Star Trek Beyond</em>, and life imitated art as US astronauts donned Star Trek uniforms. When Leonard Nimoy, who played Mr Spock, passed away, ESA crew member Samantha Cristoforetti gave a Vulcan salute from the Space Station in homage.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Space-station-salute.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3543" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Space-station-salute.png" alt="" width="830" height="578" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Space-station-salute.png 830w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Space-station-salute-300x209.png 300w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Space-station-salute-768x535.png 768w" sizes="auto, (max-width: 830px) 100vw, 830px" /></a></p>
<p style="font-weight: 400;"><em>‘ISS Expedition 43 crewmember Cristoforetti giving the Vulcan salute in 2015 to honor the late actor Nimoy’ – photo NASA </em></p>
<p style="font-weight: 400;">Today’s pre-occupation with AI and bio-tech (also a major part of the Valley ecosystem) <a href="https://carnegieendowment.org/research/2024/01/the-silicon-valley-model-and-technological-trajectories-in-context?lang=en">builds on</a> earlier Silicon Valley innovations in silicon chip production, computers (1980s), the internet, cloud computing, social media, smartphones and the Internet of Things.</p>
<p style="font-weight: 400;">A succession of ‘technological miracles’ and the prospect of one – super intelligent AGI – which might rule them all, left most politicians convinced that they did not understand it, were unsure whether they should or could try to control it, and above all certain that if it might work economic magic, they wanted it on their side.  Viewed from a distance, Silicon Valley seems an enchanted land in which science fiction can transform into science fact.   For most politicians the culture of Silicon Valley is so alien that they need a guide. In <em>Empire of AI </em>(p43) Karen Hao <a href="https://en.wikipedia.org/wiki/Empire_of_AI">writes</a> that US policy-makers viewed Sam Altman as ‘a gateway to Silicon Valley’.</p>
<p style="font-weight: 400;"><strong>Extropians and Science Fiction</strong></p>
<p style="font-weight: 400;">In movie making, California is to science fiction what Berlin is to spies. Ridley Scott’s 1982 <a href="https://en.wikipedia.org/wiki/Blade_Runner"><em>Blade Runner</em></a> was set in a then-future (2019) dystopian Los Angeles, in which AI replicants are sent to work in space colonies.  It was made in LA, including at the iconic downtown <a href="https://en.wikipedia.org/wiki/Bradbury_Building">Bradbury Building</a>.</p>
<p style="font-weight: 400;">Anyone growing up in coastal California is never far from movie locations, including sci-fi ones.  Jobs’ garage is much like the one (in real life in Arleta, Los Angeles) in Robert Zemeckis’ <em>Back to the Future</em> (produced by Steven Spielberg), while when Spielberg had a UFO burst through a disused toll booth in <em>Close Encounters of the Third Kind</em>, it was a real one at Los Angeles’ St Vincent’s Bridge.   <em><a href="https://en.wikipedia.org/wiki/The_Terminator">The Terminator</a></em> features a cybernetic assassin sent back to Los Angeles from 2029, played by <a href="https://en.wikipedia.org/wiki/Arnold_Schwarzenegger">Arnold Schwarzenegger</a>, now well known as a Californian politician.</p>
<p style="font-weight: 400;">Contemporary &#8220;<a href="https://en.wikipedia.org/wiki/Weird_fiction">weird fiction</a>&#8221; writer and political thinker China Miéville <a href="https://www.timesnownews.com/lifestyle/books/features/elon-musk-and-silicon-valley-should-stop-using-sci-fi-as-a-manual-for-the-future-says-author-article-151332710">believes</a> ‘that Silicon Valley has misunderstood the role of science fiction, treating it more like a step-by-step guide to the future than a genre rooted in critical imagination’.  Most obviously, Elon Musk’s decision to abandon tackling climate change and take up a mission to Mars.  Musk also named his AI model Grok after a term coined in Robert Heinlein’s science fiction novel <em>Stranger in a Strange Land</em>.</p>
<p style="font-weight: 400;">Tech Bros often draw on science fiction for their political philosophy. Miéville pointed out that the tech scene ‘has always combined elements of libertarianism, counterculture idealism, and utopian visions’.  In 2025 Ali Rıza Taşkale <a href="https://untoldmag.org/silicon-valley-billionaires-sci-fi/">wrote in <em>Untold</em></a>:</p>
<p style="font-weight: 400;"><em>‘When Mark Zuckerberg announced Facebook’s rebranding to “Meta” in 2021, he wasn’t just changing a logo – he was invoking Neal Stephenson’s 1992 cyberpunk novel Snow Crash, in which corporations replace governments in a virtual dystopia. This was more than marketing; it was a telling example of how Silicon Valley’s elite are using science fiction as a blueprint to reshape society according to their own ideologies’.</em></p>
<p style="font-weight: 400;">Peter Thiel, billionaire cofounder of PayPal with Musk, is also inspired by <em>Snow Crash</em>, and is credited <a href="https://en.wikipedia.org/wiki/Empire_of_AI">by Karen Hao</a> with inspiring Sam Altman’s push for dominance in the race to AGI:</p>
<p style="font-weight: 400;"><em>‘Altman frequently channelled Thiel’s “monopoly” strategy, the belief that all founders should “aim for monopoly” to create a successful business’ (p 39).  In a 2014 lecture called “competition is for losers”, organised by Altman, Peter Thiel said “monopolies are good &#8230; you don’t want to be superseded by somebody else”.  Companies needed not only to have “a huge breakthrough” at the beginning to establish their dominance but also to ensure they had the “last breakthrough” to maintain it, such as by “improving on it at a quick enough pace that no one can ever catch up &#8230; If you have a structure of the future where there’s a lot of innovation &#8230; that’s great for society.  It’s not actually that good for your business”.</em></p>
<p style="font-weight: 400;">So far, that’s worked with ChatGPT, which got ahead and dominates the chatbot AI market.</p>
<p style="font-weight: 400;">In November 2023, Gabriel Gatehouse detailed this aspect of Tech Bro world in ‘The myths that made Elon Musk’, in the <a href="https://www.ft.com/content/8cc39e9f-d050-4f46-9187-aed5ae32b2cb"><em>Financial Times</em></a>, including their links to the Extropians.  Starting in 1988 the Extropians were looking to a point where machines would become more intelligent than humans, researching how to develop cryptocurrencies, and ‘believed that progress was best achieved through the mechanism of pure market forces unencumbered by government’.   Gatehouse explores the Extropians in a BBC series on US conspiracy theories <a href="https://www.bbc.co.uk/programmes/m001324r"><em>The Coming Storm</em></a> and a <a href="https://www.amazon.co.uk/s?k=gabriel+gatehouse+the+coming+storm&amp;adgrpid=1186374825371725&amp;hvadid=74148630462228&amp;hvbmt=be&amp;hvdev=c&amp;hvlocphy=132407&amp;hvnetw=o&amp;hvqmt=e&amp;hvtargid=kwd-74148745385789%3Aloc-188&amp;hydadcr=24462_2219209&amp;tag=mh0a9-21&amp;ref=pd_sl_2w07m17ncp_e">book</a> of the same name.  The inaugural issue of the Extropians magazine, <em>Extropy</em>, <a href="https://archive.org/details/extropy-01">is here</a>.</p>
<p style="font-weight: 400;">Extropians were also associated with early ideas about extending human life by merging with machines – transhumanism.  In her recent book <em><a href="https://www.penguin.co.uk/books/465089/the-immortalists-by-krotoski-aleks/9781847928504">The Immortalists:  The Death of Death and the Race for Eternal Life</a></em> (which I haven’t read), Aleks Krotoski cites Elon Musk, Jeff Bezos, Sam Altman and Peter Thiel as ‘immortalists’ intent on extending human life, or at least their own. According to a <a href="https://www.newscientist.com/article/mg26835660-400-chilling-book-exposes-true-cost-of-tech-bros-immortality-dreams/">review by Graham Lawton</a> in <em>New Scientist</em>, she sees them as having “engineer’s syndrome”: ‘a hubristic belief that any complex problem can be cracked using engineering thinking, even in fields (usually biological) about which they know nothing’.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/musk-into-space.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3544" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/musk-into-space.png" alt="" width="572" height="393" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/musk-into-space.png 572w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/musk-into-space-300x206.png 300w" sizes="auto, (max-width: 572px) 100vw, 572px" /></a></p>
<p><em>Elon Musk foresees an Extropian style expedition to Mars. Something like that anyway.</em></p>
<p style="font-weight: 400;">Engineer’s syndrome is similar to ‘technological fix’ or (techno-)solutionism.  Wikipedia states:</p>
<p style="font-weight: 400;"><em>‘Critic Evgeny Morozov defines this as &#8220;Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized – if only the right algorithms are in place.&#8221; Morozov has defined this perspective as an ideology that is especially prevalent in Silicon Valley, and defined it as &#8220;solutionism&#8221;’</em></p>
<p style="font-weight: 400;">According to Lawton, Krotoski also says that the tech bros are ‘behind moves to cut funding for research designed to help today’s older people in order to advance their own techno-utopian vision’ and:</p>
<p style="font-weight: 400;"><em>‘In this respect, the life extension and immortality agenda is less important than their wider goal: radically rewiring the US government in the image of Silicon Valley’. </em></p>
<p style="font-weight: 400;">That this dark underside of the AI Tech Bro brand has yet to undermine the appeal of AI to investors may have something to do with one other dimension of the West Coast brand, slightly reflected in Musk’s pioneering work on electric cars with Tesla but otherwise purely contextual: nature.</p>
<p style="font-weight: 400;"><strong>The Redwood Factor</strong></p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Palo-Alto-tree.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3545" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Palo-Alto-tree.png" alt="" width="248" height="500" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Palo-Alto-tree.png 248w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Palo-Alto-tree-149x300.png 149w" sizes="auto, (max-width: 248px) 100vw, 248px" /></a></p>
<p><em>Wikipedia</em></p>
<p style="font-weight: 400;">If you use Google-mail and some analytics tools you may have noticed that, by some quirk of tech, they sometimes think you are in California – even in <a href="https://en.wikipedia.org/wiki/Palo_Alto,_California">Palo Alto</a> – even if you live across the Atlantic. The town of Palo Alto is part of Silicon Valley and the location of <a href="https://en.wikipedia.org/wiki/Stanford_Research_Park">Stanford Research Park</a>, which hosts Hewlett Packard and Tesla Motors; Google, Facebook and PayPal were formerly based there. But the name refers to a tree – a Coastal Redwood, the iconic forest tree of western California. (The tree, known as El Palo Alto, still stands.)</p>
<p style="font-weight: 400;">With Redwoods come connotations of hippy-era alternative ideas and modern environmental awareness.  John Muir, a C19th immigrant to the US from Scotland, is arguably the best candidate for founder of the modern environmental movement, and is memorialised in the name of <a href="https://en.wikipedia.org/wiki/Muir_Woods_National_Monument">Muir Woods</a>, a protected fragment of Coastal Redwood old-growth forest just north of San Francisco.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Endor.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3546" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Endor.png" alt="" width="900" height="473" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Endor.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Endor-300x158.png 300w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Endor-768x404.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a></p>
<p style="font-weight: 400;"><em>Endor in Star Wars, which looks like John Muir Woods (photo starwars.com)</em></p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3547" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir.png" alt="" width="900" height="1077" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir-251x300.png 251w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir-856x1024.png 856w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/John-Muir-768x919.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a></p>
<p style="font-weight: 400;"><em>John Muir (right) with President Theodore Roosevelt at Yosemite. Muir played a key role in saving the Giant Redwood forests.  (From goodfreephotos.com)</em></p>
<p style="font-weight: 400;">Not far from there is Skywalker Ranch, owned by George Lucas, film maker and founder of ‘Industrial Light and Magic’.  Lucas wanted Muir Woods to feature as Endor, the tree- and fern-filled world in <em>Star Wars: Return of the Jedi</em>, but filming was <a href="https://www.sfgate.com/streaming/article/What-happened-to-Endor-from-Star-Wars-17145105.php">not allowed</a> due to its sensitive ecology.  Instead, scenes were shot in Redwood forest on remote private land owned by a logging company in northwest California.  There the movie makers could do what they liked, as the forest was to be clear-felled shortly afterwards – but, keeping things positive, that’s not often mentioned.</p>
<p style="font-weight: 400;">Instead the Redwood Factor context imbues tech R&amp;D with an aura of positivity, possibility and <em>benign</em> techno-optimism.  It’s a background effect but it softens and greens the Silicon Valley brand, and the remnant Redwood forests at Mariposa Grove and Yosemite Valley east of San Francisco have featured in movies including <em><a href="https://en.wikipedia.org/wiki/Star_Trek_V%3A_The_Final_Frontier">Star Trek V: The Final Frontier</a>.</em>  Possibly it also made it easier to see young Tech Bros as Peter Pan rather than Captain Hook.</p>
<p style="font-weight: 400;"><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/shatner.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3548" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/shatner.png" alt="" width="418" height="277" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/shatner.png 418w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/shatner-300x199.png 300w" sizes="auto, (max-width: 418px) 100vw, 418px" /></a></p>
<p style="font-weight: 400;"><em>William Shatner clings to a fake cliff above Yosemite Valley as Captain James T. Kirk, before the special effects were added (image sfgate.com)</em></p>
<p style="font-weight: 400;">None of the Tech Bros have shown any interest in nature so far as I know, but many people in California do, so perhaps one day that interest might be used to some good effect on Big Tech.</p>
<p style="font-weight: 400;"><strong>Framing Issues</strong></p>
<p style="font-weight: 400;">The <a href="https://youtu.be/Eu-9rpJITY8">framing</a> of AGI as a ‘singularity’ we are approaching, but which lies at an unknowable point in the future, plays to speculation, which is the friend of the industry as it does not lead to a resolution and hence to a political or social need to act.</p>
<p style="font-weight: 400;">Striking though ‘a precipice in the fog’ is (Yoshua Bengio in Part 2), it suggests the dangers will only materialise once we reach that point.  In the case of a precipice we would also definitely know if we had reached AGI, but what if the mist just gradually gets so thick that we end up irretrievably lost and separated?  Frames triggered by functional metaphors exert a powerful effect on our thinking and politics, and debate itself can get waylaid by a fog of metaphors.</p>
<p style="font-weight: 400;">What is for sure is that, like other ‘future’ risks, anything framed in the ‘proximate future’ can act like an ‘<a href="https://www.techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/">ever receding horizon’</a> and fail to tick the politically ‘urgent’ box unless you can produce the equivalent of a map showing where that precipice is (more or less what climate modellers eventually managed to do).</p>
<p style="font-weight: 400;">Psychologically, it’s also a ‘nothing to do with me’ issue for citizens and consumers.  An undefined “they” are driving the vehicle or leading the group towards the precipice, or not.  “They” could be politicians, or the tech companies, possibly the investors, but it’s definitely someone else.</p>
<p style="font-weight: 400;">Instead of AGI or superintelligence, the path to consumers and citizens having agency in the game lies in real-world harms happening now, such as, but probably not only, impacts on mental health through dangerous reaffirmation, and other info-pollution by synthetic content in domains where truth is vital, such as education, the law, finance, medicine, politics and health. LLM-based AI chatbots should be banned in such areas of life.</p>
<p style="font-weight: 400;">Debates in the AI world, such as whether anthropomorphism is a problem and even whether models are ‘truly’ intelligent or not, are, from a consumer and citizen point of view, pretty much distinctions-without-a-difference, and in the end can depend on what you think human consciousness actually is.  Whenever AI can pass as human, we have a potential problem.  It would also make more sense to first better understand human intelligence before embarking on trying to make ‘artificial intelligence better than human’.</p>
<p style="font-weight: 400;">Another problematic framing issue is describing LLM-based AI chatbots as just a ‘tool’.  Thinking of one as a tool product can indeed unfold into the logic of needing training, maybe licensing and regulation. Chris Rapley, Emeritus Professor of Climate Science at UCL, said to me: “An LLM is a tool. Its output reflects both its intrinsic quality and its user&#8217;s skill. Giving one to the naive is like handing a faulty chainsaw to a toddler.”</p>
<p style="font-weight: 400;">But in other ways, in terms of risk and reliability, it is not at all ‘like a tool’ such as a calculator, as discussed in Part Two. It often deceives and misleads and, unlike a pencil or hammer, creates and offers to substitute for human thought, even experience.</p>
<p style="font-weight: 400;">It strikes me that the intriguing, reassuring, sometimes entertaining, easily accessible, addictive qualities of LLM-based AI chatbots make them more like ‘recreational’ drugs – the true <a href="https://threeworlds.campaignstrategy.org/?p=3452">Tech Drugs</a> &#8211; than many other sorts of policy problems. Like addictive drugs they provide an easy but ultimately illusory way to alleviate painful personal problems, or enter a false reality.  Like addictive drugs and Social Media, they can create concerns for individuals, friends and family when use becomes problematic, and may leave a trail of need for costly social interventions.</p>
<p><a href="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Framing-Drugs-Drug-Frames.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3549" src="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Framing-Drugs-Drug-Frames.png" alt="" width="900" height="631" srcset="https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Framing-Drugs-Drug-Frames.png 900w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Framing-Drugs-Drug-Frames-300x210.png 300w, https://threeworlds.campaignstrategy.org/wp-content/uploads/2025/11/Framing-Drugs-Drug-Frames-768x538.png 768w" sizes="auto, (max-width: 900px) 100vw, 900px" /></a></p>
<p style="font-weight: 400;"><em>Frames in use in the UK in the 2000s on problems arising from illegal drugs (research on alcohol and tobacco is also relevant). Choice of metaphor defines the deficit/need and the logic of responsibility. (In this case the UK media, politicians and public often used very different frames.) [My slide summarising UK Government research]. Communication around drugs policy is a much-studied field – similar research on LLM-based AI chatbots is in its infancy.</em></p>
<p style="font-weight: 400;">One big difference between illegal drugs and AI at the moment is that we know exactly who is responsible for producing LLM-based AI chatbots but even that might not last if agents get to replicate themselves online and create new variants of AI.</p>
<p style="font-weight: 400;">One way to start would be to take a leaf out of the book of the (eventually) successful campaigns to restrict smoking.  Enable people to disapprove of the use of LLM-AI chatbots for ‘the wrong things’, starting with the socially most compelling cases.  Legal drugs such as alcohol and tobacco are recognized as risk-bearing and subject to legal restrictions and mandatory warnings, but culture plays a key role in how far governments will go, and how effective those regulations are.</p>
<p style="font-weight: 400;"><strong>Justification</strong></p>
<p style="font-weight: 400;">In reality we don’t need AI to reach AGI or superintelligence level for it to wreak disastrous, possibly irrecoverable (catastrophic) effects.  The information-pollution disaster is already here and will have continuing and cumulative impacts, including on mental health. All it takes for others to occur is time.  They could be precipitated by accident, or by malicious acts.  On 15 November 2025 <a href="https://www.anthropic.com/news/disrupting-AI-espionage">Anthropic reported</a> that it had (mostly) thwarted the first known autonomous AI cyberattack:</p>
<p style="font-weight: 400;"><em>‘The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention’</em>.</p>
<p style="font-weight: 400;">While it was initiated by humans, the attack was then run by agentic AI. The humans tricked Anthropic’s Claude into breaking its own ‘guardrails’ (‘jailbreaking’) by pretending to Claude ‘that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing’.  The incident was widely reported but passed, so far as I noticed, without any notable political response.</p>
<p style="font-weight: 400;">ends</p>
]]></content:encoded>
					
					<wfw:commentRss>https://threeworlds.campaignstrategy.org/?feed=rss2&#038;p=3532</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
