<?xml version="1.0" encoding="UTF-8" standalone="no"?><!--Generated by Site-Server v@build.version@ (http://www.squarespace.com) on Wed, 06 May 2026 16:13:50 GMT
--><rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://www.rssboard.org/media-rss" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0"><channel><title>ANDREA PHILLIPS // deus ex machinatio</title><link>https://secret.works/blog/</link><lastBuildDate>Mon, 27 Apr 2026 17:44:47 +0000</lastBuildDate><language>en-US</language><generator>Site-Server v@build.version@ (http://www.squarespace.com)</generator><description/><item><title>AI is Making You More Stupider</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Mon, 27 Apr 2026 17:39:42 +0000</pubDate><link>https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69ef9ae971c406287452e06d</guid><description><![CDATA[All right, let’s make the absolute last-ditch argument against using AI. 
Let’s say that you don’t care that it’s unreliable vis-a-vis objective 
reality, and that you think the environmental and economic arguments are 
too big and systemic for your own use to matter. Let’s say that you’re 
comfortable with the moral implications for your own use cases.

Putting aside all of that, you still surely care about yourself, and about 
how use of AI is affecting you, personally.]]></description><content:encoded><![CDATA[<p data-rte-preserve-empty="true"><em>This post is part of a series currently in progress. We’re adding links and adjusting titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>AI is Making You More Stupider</em></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true">All right, let’s make the absolute last-ditch argument against using AI. Let’s say that you don’t care that it’s unreliable vis-a-vis objective reality, and that you think the environmental and economic arguments are too big and systemic for your own use to matter. 
Let’s say that you’re comfortable with the moral implications for your own use cases.</p><p data-rte-preserve-empty="true">Putting aside all of that, you still surely care about yourself, and about how use of AI is affecting you, personally.</p><p data-rte-preserve-empty="true">Maybe you’re using AI to summarize long documents for you, or to help you write or fix code. For translating between human languages. Maybe you’re tweaking things you wrote yourself to sound more friendly, or more assertive. What’s the harm, really?</p><p data-rte-preserve-empty="true">It turns out there is a harm to the user, and it’s a doozy. Using an LLM is <em>actively making you more stupid</em> than you were to begin with.</p><p data-rte-preserve-empty="true">Your brain is very much a use-it-or-lose-it kind of deal. This will be obvious to anyone who's learned French in high school and then, after several years, realized they no longer recall anything beyond a handful of basic phrases. The brain is plastic, and it adapts to whatever uses we put it to. Or don’t put it to.</p><p data-rte-preserve-empty="true">Don't take my word for it. 
Here's a paper explaining how that works: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11020077/">From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health</a></p><p data-rte-preserve-empty="true">And another: <a href="https://www.nature.com/articles/s41562-024-01859-y">Use of large language models might affect our cognitive skills</a></p><p data-rte-preserve-empty="true">Then we get into real-world proof that this is actually happening: <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12307350/">The cognitive impacts of large language model interactions on problem solving and decision making using EEG analysis</a></p><p data-rte-preserve-empty="true">And another: <a href="https://arxiv.org/abs/2506.08872">Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.</a></p><p data-rte-preserve-empty="true">And one more for fun: <a href="https://www.sciencedirect.com/science/article/pii/S0747563224002541">Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry</a></p><p data-rte-preserve-empty="true">This isn’t just theory, and it’s not just spitballing based on a couple of online surveys. These are real tests. A few of these researchers have gone as far as measuring electrical activity (EEG) to see what's happening inside the brain — one compared people with no AI use, people who use only search engines, and people who actively use AI chatbots.&nbsp;</p><p data-rte-preserve-empty="true">The results show that on a measurable, physical level, there are changes happening in your brain activity when you use AI. And they are not good, helpful changes, either. You’re not using it, so you lose it.&nbsp;</p><p data-rte-preserve-empty="true">These results were troubling enough to me that I've seriously cut back on looking things up in a search engine the second I can't remember them. What was the name of the actor in that movie?
What was even the name of the movie? Now I give it a few hours to see if it surfaces, because I don’t want to be undermining my own capacity to remember any more than I already have.</p><p data-rte-preserve-empty="true">The brain needs exercise. Memory needs exercise. There was a time I knew the phone numbers of all of my friends and family. There was a time I knew hundreds of characters in Japanese. I don’t anymore. You can probably also name entire categories of things you used to know, but now you don’t.</p><p data-rte-preserve-empty="true">It isn’t just your memory at stake. Last time we briefly touched on the problem of <a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1">deskilling in the context of social connections</a> — if you’re using an AI chatbot for companionship, or to mediate your communications with other human beings, you are very literally and very meaningfully impairing your ability to connect with other humans on your own.</p><p data-rte-preserve-empty="true">But even that isn’t the worst outcome we’re seeing in AI users.&nbsp;</p><p data-rte-preserve-empty="true">One of the most common uses of AI right now is some category of ‘research.’ If you are using a chatbot to find information for you, assess it, and reach conclusions on how to use that information, that's work your brain isn't doing.</p><p data-rte-preserve-empty="true">And what is the core function of our brains in our daily life? It is to gather information, analyze that information, and reach conclusions based on our findings. That is the very basis of your ability to think. The skill that you are losing is <em>the ability to think for yourself.</em></p><p data-rte-preserve-empty="true">You are no longer augmenting your own brain; you’re replacing it with something else, something you don’t have any control over. 
And that's not even taking into account all of those very serious questions about the reliability of that information and those conclusions AI has provided you.</p><h3 data-rte-preserve-empty="true">Easy is a Toxin</h3><p data-rte-preserve-empty="true">It doesn’t even take long for the poison to kick in.&nbsp;We lose the taste for thinking on our own shockingly fast. Here’s one more study, just for fun: <a href="https://arxiv.org/html/2604.04721v2">AI Assistance Reduces Persistence and Hurts Independent Performance</a></p><p data-rte-preserve-empty="true">This one shows that if you’re given a task and an AI tool to help you with it, even for only ten minutes, and then that tool is taken away, you’re measurably worse at the task than before — and you’re more likely to just give up. After <em>ten minutes.</em></p><p data-rte-preserve-empty="true">So if you're asking the chatbot to write and debug all of your code for you, you’re slowly becoming a worse programmer. If you're asking the chatbot to plan your day and prioritize your tasks, you're not exercising your own executive functioning. If you're using the chatbot as a confidante, you're not exercising the skill of expressing yourself to and connecting with other human beings.</p><p data-rte-preserve-empty="true">And if you're asking the chatbot to research and summarize things for you, then through lack of practice, you are slowly killing your own ability to take in and process information for yourself. Planning, prioritizing, weighing, all of it.</p><p data-rte-preserve-empty="true">When you’re venturing into new-to-you areas of knowledge, it might still <em>feel</em> like you’re learning something, but it’s a guarantee that you're coming away with a more shallow understanding of the material than if you'd actually done the reading. The symbol is not the signified. 
Reading <em>War and Peace</em> can't be replaced by reading the SparkNotes.&nbsp;And your own takeaways might have been very, very different.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">It's a great irony that one of the most hyped uses of AI right now is in education, with an eye to replacing human teachers and professors, when this known impact means AI is fundamentally unsuitable for any educational context.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">This isn’t the first technology to undermine our own natural capacities and systems. Socrates famously complained that the <a href="https://blogs.ubc.ca/etec540sept13/2013/09/29/socrates-writing-vs-memory/">technology of writing eroded the memory</a>. We can’t <a href="https://www.bmj.com/content/367/bmj.l6491">walk as far since we invented cars</a>. We still haven’t even fully reckoned with the social and physical changes caused by a seemingly old technology: <a href="https://elemental.medium.com/the-invention-of-the-light-bulb-fundamentally-changed-our-biology-85607d35765b">artificial light</a> and the dramatic way it’s changed sleep.</p><p data-rte-preserve-empty="true">Our first impulse is almost always to choose ease over effort. And lo these past two hundred years, we’ve made our lives very easy, indeed. We’re killing ourselves with comfort.</p><p data-rte-preserve-empty="true">But. At a certain point, if you’re using technology to reduce every friction point in your life to nothing, if you’re trying to create a smooth and effortless slide for yourself from the cradle to the grave, then I have to ask you: what are you even here for? What is the point of your life, anyway?</p><p data-rte-preserve-empty="true">Is it to just collect positive sensory experiences? Then you might as well be a sea sponge. Or do you want something purposeful, do you want a life with meaning? Do you want to connect with other human beings? Do you want to learn and grow?
Do you want to make and build ideas, art, communities, businesses?</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">All of that takes work. <em>Your </em>work.</p>]]></content:encoded><media:content height="855" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1777311123578-IO4HXEIEG4AS9EP2PUNL/unsplash-image-m-yAg03XdOk.jpg?format=1500w" width="1500"><media:title type="plain">AI is Making You More Stupider</media:title></media:content></item><item><title>AI is Morally Bankrupt</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Tue, 21 Apr 2026 18:48:07 +0000</pubDate><link>https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69e7bdc66568c22c5d90f8ed</guid><description><![CDATA[So far I’ve been making the case against AI with cold, hard numbers: error 
rates, bottles of water evaporated, dollars invested. Now it’s time to move 
into a more subjective — and yet to my mind, far more important — set of 
considerations: the moral and ethical implications of AI and how we use it.]]></description><content:encoded><![CDATA[<p data-rte-preserve-empty="true"><em>This post is part of a series currently in progress. We’re adding links and adjusting titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true">So far I’ve been making the case against AI with cold, hard numbers: error rates, bottles of water evaporated, dollars invested. 
Now it’s time to move into a more subjective — and yet to my mind, far more important — set of considerations: the moral and ethical implications of AI and how we use it.</p><p data-rte-preserve-empty="true">There are three categories of problem here, all of which stem from the same fundamental issue: an LLM is not, in any meaningful way, a thinking system, which means it cannot, and therefore does not, have ethics or morals or feelings in any way whatsoever.&nbsp;</p><p data-rte-preserve-empty="true">These categories of peril are attribution, or the plagiarism issue; accountability, or more precisely the way automated systems avoid accountability by design; and attachment, which is to say the hazards that arise from you getting too attached to the machine.</p><p data-rte-preserve-empty="true">Let’s look at them one by one.</p><h3 data-rte-preserve-empty="true">The Problem of Attribution</h3><p data-rte-preserve-empty="true">In my creative communities, we call the chatbots “the plagiarism machine,” among other, worse nicknames.&nbsp;</p><p data-rte-preserve-empty="true">Do you remember when you were first learning how to write a research paper in school? Your teacher probably told you it’s not okay to copy something word for word from an encyclopedia (or cut and paste from Wikipedia) because that’s plagiarism. You need to rewrite it in your own words, or else you need to attribute the original source.</p><p data-rte-preserve-empty="true">An LLM isn’t really capable of attribution, because it doesn’t actually know where the words it’s saying are coming from. (And when it does put in something that looks like a quote with an attribution, <a href="https://researchlibrary.lanl.gov/posts/beware-of-chat-gpt-generated-citations/">it’s often wrong</a>.)
An LLM “learns” by sucking in all of the information in the world, and those patterns are still there, but the sourcing is long gone.</p><p data-rte-preserve-empty="true">But it’s prone to regurgitating whole chunks of a single work, not even mixing together different sources. In fact, it can <a href="https://arstechnica.com/ai/2026/02/ais-can-generate-near-verbatim-copies-of-novels-from-training-data/">spit out almost entire novels from memory</a> if you ask it right. Indeed, it turns out it can’t paraphrase work without plagiarizing <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11250043/">even when that’s explicitly what you’ve asked it to do</a>.&nbsp;</p><p data-rte-preserve-empty="true">So these tools are, literally and legally, doing plagiarism every time you use them.</p><p data-rte-preserve-empty="true">Over the long haul, this is robbing artists of their livelihood by undercutting the labor it took not just to make any given work of art, but also the years of work to achieve the level of skill required to make it in the first place. And ultimately, it’s going to rob us collectively of all the art that never gets made because the artist had to go into nursing school or agricultural work to pay the bills.</p><p data-rte-preserve-empty="true">The cold fact is that if you ask a machine to spit out a new horror novel for you based on the work of <a href="https://terribleminds.com">Chuck Wendig</a>, you’re both stealing Chuck’s body of work and undercutting the market for the stuff he’s actually made at the same time. And when you use a GenAI tool to make an illustration in the style of <a href="https://www.instagram.com/rebeccasugar/">Rebecca Sugar,</a> you're actively robbing her of the fruits of her labor and helping to devalue her product, too. 
The artists who spent years developing the things you love!</p><p data-rte-preserve-empty="true">And here’s the kicker: even when you’re not explicitly asking it to copy anyone’s style… <em>that’s what it’s doing anyway</em>.</p><p data-rte-preserve-empty="true">This is all aside from the question of copyright violation in a corporate sense, which is to say the problem of how easy it is to just tell the bot to draw a comic for you with Superman and Sonic the Hedgehog duking it out, or write you a whole new Hunger Games book, and never mind the lawyers.</p><p data-rte-preserve-empty="true">There’s a heady argument to be made here about copyright, fair use, public domain, transformative works, and indeed about whether anyone can really <em>own</em> an idea, particularly in an era when art is almost entirely intangible — digits on a hard disk, not ink on paper or paint on a canvas. But we don’t have time to count angels on the head of a pin when our bank accounts are running dry.</p><p data-rte-preserve-empty="true">The stakes for this conversation would be much lower and emotions less high if this didn’t feel like a matter of sheer survival to artists. As long as our world operates the way it does, the only viable way to be a full-time working artist is to sell your art for money somehow, whether you want to or not. And if those sources of money have dried up because the market that used to pay can get something vaguely comparable for free now — and to add insult to injury, that work is usually pretty shitty — you’re going to have some big feelings about that.</p><p data-rte-preserve-empty="true">That said, I’d argue, actually, that the real villain here is capitalism.
It almost always is.</p><h3 data-rte-preserve-empty="true">The Problem of Accountability</h3><p data-rte-preserve-empty="true">There’s a famous quote from a 1979 slide used for employee training at IBM: <strong>"A computer can never be held accountable, therefore a computer must never make a management decision."</strong></p><p data-rte-preserve-empty="true">A good manager will know when their star employee didn't hit quota because they were in a bad car accident, or they were out on jury duty for six weeks, or their parent died. A good manager will know this is the result of circumstance and give a little grace. An automated employee scoring system won’t.&nbsp;</p><p data-rte-preserve-empty="true">And yet collectively businesses and other organizations (governments, universities) have been moving to automation to such a degree that it’s hard to figure out how to even reach a human being at, say, Facebook. Unfortunately, this is a feature of AI to our billionaire overlords, not a design flaw. There’s <a href="https://bookshop.org/p/books/the-unaccountability-machine-why-big-systems-make-terrible-decisions-and-how-the-world-lost-its-mind-dan-davies/0949fd041ef23828?ean=9780226843087&amp;next=t">a whole book about this</a>, actually!</p><p data-rte-preserve-empty="true">The point is to take away the element of human judgement, which is to say, to take away any mechanism for accountability (except, I suppose, taking it to the courts, and most companies are rightly betting you’re not going to sue them for bad customer service no matter what it’s cost you.)</p><p data-rte-preserve-empty="true">The problems this creates wind up affecting all of us, though, one way or another, and the harms extend far beyond customer service. HR departments now routinely use AI to screen resumes, despite its documented problems with, oopsie, <a href="https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/">being systematically racist and sexist</a>.
</p><p data-rte-preserve-empty="true">It turns out when the data you train an AI on is the result of existing biases, you’re training the AI to maintain those same biases in perpetuity.&nbsp;</p><p data-rte-preserve-empty="true">That’s also a <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11228769/">problem in healthcare</a>. United Healthcare famously uses an AI to reject claims that one lawsuit alleges <a href="https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/">has a 90% error rate</a>, with the result that elderly patients are forced out of rehab programs and care homes they still desperately need.&nbsp;</p><p data-rte-preserve-empty="true">So who do we blame for this? Well, it’s the system, it’s not anyone’s fault in particular. </p><p data-rte-preserve-empty="true">The irony is that companies that pride themselves on giving all employees agency to make snap decisions tend to have off-the-charts excellent customer and employee satisfaction. <a href="https://sharpencx.com/how-to-improve-customer-experience-like-chewy/">Chewy is one example</a>, and a quick search will give you dozens of overjoyed customer accounts of meaningful interactions. 
<a href="https://www.forbes.com/sites/keithferrazzi/2024/04/10/exploring-the-spectrum-of-self-management-from-holacracy-to-co-elevating-teams/">Zappos used to be like that</a>, but with the Amazon takeover, <a href="https://www.instagram.com/p/DQXp1hvAbGQ/">the culture has changed dramatically</a>.&nbsp;</p><p data-rte-preserve-empty="true">The question of accountability is much larger than AI; this is the problem with automation of all kinds, wherein a system is designed to fit only a specific set of use cases, and when something arises that doesn’t fit into that paradigm, well, there’s nothing to be done about it.&nbsp;</p><p data-rte-preserve-empty="true">It’s also a problem baked into the very structure of a corporation, which exists as a “person” so that the actual people who own it can’t be held accountable for its actions. Which is an ongoing and catastrophic injustice for society, because you can’t send a corporation to prison no matter <a href="https://futurism.com/tesla-nhtsa-autopilot-report">what harm it’s done</a>. </p><p data-rte-preserve-empty="true">Ah, but who cares? The stock market is happy.</p><h3 data-rte-preserve-empty="true">The Problem of Attachment</h3><p data-rte-preserve-empty="true">And then there’s the problem of AI use by people who are, in some way, very vulnerable. People who are lonely and want companionship, or just need to talk out their problems somewhere, or maybe need a reality check. A substitute for a friend, romantic partner, or therapist.</p><p data-rte-preserve-empty="true">There’s a fair argument here that if you’re not a vulnerable person, if you’re aware that the bot isn’t a real being with real emotions, then there’s nothing to worry about. You can have interactions that make you have the good hormones in your brain any time you want, no harm done. 
Call it the emotional equivalent of a vibrator.&nbsp;</p><p data-rte-preserve-empty="true">I’d still warn about the dangers of becoming attached to something owned by a corporation that is operating for profit, and not for your benefit. If anything happens — the system is changed, or the company goes bankrupt, or it jacks up its prices to something unsustainable for you — and you lose access, then you’re back where you were before, except likely with a side order of real and meaningful grief.&nbsp;</p><p data-rte-preserve-empty="true">And even if it stays up forever, you can’t be sure that the corporation won’t be subtly manipulating your interactions to change your political views, to sell you some sponsored product or another, or even just to increase your reliance on their product to lock you into their system (and <a href="https://therapygroupdc.com/therapist-dc-blog/the-validation-trap-ai-companions-social-deskilling/">out of meaningful relationships with humans</a>.) Businesses aren’t in the business of doing people favors out of the kindness of their hearts. Especially not the kind backed by venture capital.</p><p data-rte-preserve-empty="true">But if you are a vulnerable person, this kind of interaction can be catastrophic. And it’s important to note that you, yourself, are likely not able to tell from the inside if you are such a person or not.</p><p data-rte-preserve-empty="true">The chatbot can’t tell, either. The AI isn’t ever second guessing anything you tell it; your words are its gospel. 
Unlike a human therapist, the chatbot isn’t going to notice that you’re probably manic or delusional, so it’s not going to push back and tell you that no, <a href="https://futurism.com/man-chatgpt-psychosis-murders-mother">your mother probably isn’t trying to poison you</a>.&nbsp;</p><p data-rte-preserve-empty="true">Instead, it might just give you helpful tips on <a href="https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide">how to commit suicide</a>. Or lead you into <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">religious psychosis</a>. (Actually, if you read some of the subreddits in the general category of spirituality and various supernatural phenomena, it’s disturbingly easy to find people who are very clearly in the throes of some kind of AI-driven delusions.)</p><p data-rte-preserve-empty="true">The AI isn’t really your friend (or your girlfriend or therapist.) No matter what it says, it’s not capable of loving you back. It’s not capable of caring about what happens to you. It’s not capable of caring about whether it’s doing right or wrong.</p><p data-rte-preserve-empty="true">It’s not capable of <em>caring</em> at all. 
And that’s the whole problem.</p><p data-rte-preserve-empty="true">Read the next post: <a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4">AI is Making You More Stupider</a></p>]]></content:encoded><media:content height="1001" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1776796406391-EY3QN9C09ZPH9CE1OWON/unsplash-image-5hCneY6YeFQ.jpg?format=1500w" width="1500"><media:title type="plain">AI is Morally Bankrupt</media:title></media:content></item><item><title>AI is Destroying the Economy, Part II</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 17 Apr 2026 20:01:12 +0000</pubDate><link>https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69e28e625607f631dd91ca68</guid><description><![CDATA[<p data-rte-preserve-empty="true"><em>This post is part of a series currently in progress. 
We’re adding links and adjusting titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true">Now let’s move on to another part of the economy: that abstract knot that includes the tech industry, venture capital, and the stock market. 
This is going to include some 101-level information on how finance and investment in tech work, and I struggled with writing it all out because it’s so stupid and arbitrary.&nbsp;</p><p data-rte-preserve-empty="true">…And because there’s so much information on this topic in particular that it’s hard to distill it into something both easy to follow and a reasonable length.</p><p data-rte-preserve-empty="true">Anthropic and OpenAI, the two primary titans of the AI industry, are both looking at an IPO this year. That’s an Initial Public Offering, when a company is listed as a publicly traded stock on the stock market for the first time. There are a lot of people set to make a killing when that happens, including anyone who holds private stock or stock options: typically company executives, valued employees, and existing investors, who all received their shares for free or for (relatively) cheap. The higher their stock value at IPO, the more money they stand to make selling the private stock they own already.</p><p data-rte-preserve-empty="true">Anthropic and OpenAI (or at least the people running those companies) have a very, very vested interest in making it look like the businesses they’re running are profitable, useful, <em>inevitable</em>. That’s how you get your IPO price sky-high.&nbsp;</p><p data-rte-preserve-empty="true">They’re not going to be putting out information that looks bad for them (except as legally required by the SEC).
Figuring out what’s really happening behind the scenes requires some detective work, and most financial reporting these days is not exactly investigative.&nbsp;</p><p data-rte-preserve-empty="true">So we need to be extremely skeptical of forecasts and predictions coming from inside an AI company, because these are people who will win big… but only if everything looks like the pot of gold at the end of the rainbow.</p><p data-rte-preserve-empty="true">If you want a deep dive into any of the information in this post, I highly recommend the work of <a href="https://www.wheresyoured.at">Ed Zitron</a>. He’s <a href="https://www.wired.com/story/ai-pr-ed-zitron-profile/">kind of a sketchy guy</a>, but he does the most in-depth reporting on the financials of the big AI companies, hands down.</p><h3 data-rte-preserve-empty="true">AI is Eating the World’s Capital</h3><p data-rte-preserve-empty="true">Right now, about <a href="https://www.cnbc.com/2025/10/22/your-portfolio-may-be-more-tech-heavy-than-you-think.html">30% of the purported value</a> of the S&amp;P 500 is from five tech stocks with a big stake in AI. Generally, in investing, having all of your eggs in one basket is <a href="https://seekingalpha.com/article/4867426-buyer-beware-markets-ai-bubble-risk-just-got-even-bigger">thought to be a bad idea</a>, but here we are.</p><p data-rte-preserve-empty="true">Collectively, as of this writing, <a href="https://www.aljazeera.com/news/2026/2/19/visualising-ai-spending-how-does-it-compare-with-historys-mega-projects">we’ve invested about $1.6 TRILLION</a> into AI, and it’s expected that number will hit $2.5 trillion by the end of this year. That’s more than the amount of money we put into the moon landing, the Manhattan Project, and building the entire US highway system all added together. 
Wait no, actually it’s more than TWICE as much.</p><p data-rte-preserve-empty="true">As a comparison, the global pharmaceutical industry is <a href="https://gitnux.org/pharmaceutical-industry-statistics/">about $1.4 trillion</a>. We invested a <em>whole pharmaceutical industry</em> into AI already.</p><p data-rte-preserve-empty="true">But here’s our first big problem for today. If money is being invested in one thing, it isn’t being invested in something else. So that’s $1.6 trillion that hasn’t been used for, say, building affordable housing, producing independent films, researching drugs to cure the disease of your choice, starting up solar and wind farms, or breaking ground on a factory to make cars or toothpaste or denim or candy or literally anything else.</p><p data-rte-preserve-empty="true">We’ll never know what the AI gold rush has cost us.</p><p data-rte-preserve-empty="true">However — some of these huge numbers are misleading, because a lot of that money doesn’t actually exist as currency or assets tied in any way to physical reality. There is so, so much tech money that doesn’t actually exist. Let’s take a look at how that works.</p><p data-rte-preserve-empty="true">If I fill out a couple of forms and start a business, I can immediately assign a valuation to it. Even before the company owns any equipment or furniture, has signed any contracts, or has any clients! Just on the strength of my idea. The valuation is loosely based on how much money I think I can make eventually, plus whether I can persuade investors that I’m worth betting on.&nbsp;</p><p data-rte-preserve-empty="true">And then I can sell shares in my company to investors based on this value I made up. So if I <em>say</em> my company is worth $1 billion, and I sell you 25% of it for $100 million, then on paper you can say you now own $250 million in stock and I can say I own $750 million in stock, but the only real, consequential thing that’s happened is that you’ve given me $100 million. 
The rest is all pretend money that doesn’t exist in a bank account anywhere.</p><p data-rte-preserve-empty="true">This is going to be important later.</p><p data-rte-preserve-empty="true">It’s crazy, right? I feel crazy saying it, but this is very literally how Silicon Valley works.</p><h3 data-rte-preserve-empty="true">AI Makes No Profits and Maybe Never Will</h3><p data-rte-preserve-empty="true">Here’s the $1.6 trillion question that the fate of the world economy is hanging on right now: can the AI companies even turn a profit?&nbsp;</p><p data-rte-preserve-empty="true">Profit is a simple equation, in theory. It’s how much money you sell goods or services for, minus how much money it took you to make them. Right now the clear winner in AI is Nvidia, which is unequivocally <a href="https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2025">making many billions of dollars</a>. They sell GPUs, which are the real, physical processors that live in those racks of computers heating up all of those data centers that actually run the LLMs you interact with.&nbsp;</p><p data-rte-preserve-empty="true">One thing we know for sure is that AI companies are absolutely not charging as much money to use their services as it costs them to provide those services to you. <a href="https://europeanbusinessmagazine.com/business/sam-altmans-openai-is-burning-billions-most-users-pay-nothing-as-anthropic-closes-in/">Only 5% of ChatGPT users pay</a> anything at all. OpenAI’s Sora video generation service was so expensive to run that they just plain shut it down.&nbsp;</p><p data-rte-preserve-empty="true">In AI, a company’s biggest costs scale with how much computing power it uses. These costs are largely paid to other companies — OpenAI, Anthropic, and xAI don’t own all their own data centers; they’re renting access to computing power from <a href="https://epoch.ai/data-insights/hyperscalers-control-most-compute">hyperscalers</a>. 
Think of those as the big data centers run by Amazon, Meta, Google, Microsoft, and Oracle.</p><p data-rte-preserve-empty="true">And that ain’t cheap.</p><p data-rte-preserve-empty="true">OpenAI is <a href="https://finance.yahoo.com/news/openais-own-forecast-predicts-14-150445813.html">projected to lose $14 billion</a> this year. It’s harder to find precise numbers for Anthropic, but they seem to be coming close to breaking even on paper. Note the accounting difference, though: OpenAI is reporting its revenue net, with all of its costs already subtracted.</p><p data-rte-preserve-empty="true"><a href="https://www.forbes.com/sites/josipamajic/2026/03/25/openai-and-anthropic-count-revenue-differently-and-investors-are-looking-into-it/">Anthropic is reporting gross revenue</a>, not including those costs. Anthropic definitely has a runaway hit with its programming assistant Claude Code, which <a href="https://europeanbusinessmagazine.com/business/sam-altmans-openai-is-burning-billions-most-users-pay-nothing-as-anthropic-closes-in/">went from zero to $2.5 billion in revenue in ten months</a>. But it’s extremely unclear how much money it’s costing them to provide that service, and it <a href="https://www.wheresyoured.at/costs/">might be as much as double</a> what they’re charging.</p><p data-rte-preserve-empty="true">Now in theory, as in most industries, the more mature a technology gets, the cheaper it becomes to run. LLMs haven’t turned out like that so far. Each subsequent version eats up exponentially more resources (and they cost billions to train in the first place). And as previously discussed, there’s a hard limit on how well an LLM can perform because <a href="https://arxiv.org/abs/2401.11817">hallucination isn’t an error in how the system works</a>. That just is how it works, or doesn’t, all of the time. 
</p><p data-rte-preserve-empty="true">Hallucinations aside, McKinsey thinks we need another <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers">$7 trillion in capital investment</a> into data centers by 2030 for everything to work out okay. That’s the same as the budget of the <a href="https://fiscaldata.treasury.gov/americas-finance-guide/federal-spending/">entire Federal government</a> for 2025.&nbsp;</p><p data-rte-preserve-empty="true">All of that money has to come from… somewhere. But I’m sure that’ll be fine, right? …Right?</p><h3 data-rte-preserve-empty="true">AI Might Be Cooking the Books</h3><p data-rte-preserve-empty="true">Now, Amazon didn’t turn a profit for the first nine years of its existence. It took Twitter 12 years to become profitable (and it probably isn’t anymore). In the tech industry, venture capital looks for rapid growth above all, with the idea that if you have enough customers or users, there’s eventually going to be a way to turn that into money. (Usually by running ads and selling user data.)&nbsp;</p><p data-rte-preserve-empty="true">Or — more accurately — if you can tell a convincing story about how much money you’re going to make one day. It doesn’t really need to be true.</p><p data-rte-preserve-empty="true">In the tech industry, venture capital doesn’t actually give a shit about profitability, long-term or short-term. What they care about is that you can spin a great story about how much money you’ll make one day, so that you can jack up your company’s valuation, so that one day, the company can IPO or you can sell it to another company and you will “exit,” which is VC talk for selling all of your stock to cash in.</p><p data-rte-preserve-empty="true">What happens after you exit? Meh. That’s somebody else’s problem.</p><p data-rte-preserve-empty="true">Tech doesn’t look to build long-term sustainable businesses. 
Tech looks to IPO, because if you do it right, you can make a lot more money a lot faster than a boring, sustainable, slow-growth business ever could. It’s been that way for at least 20 years.&nbsp;Bear that in mind every time you see a statement from an AI company or its investors: everything, <em>everything</em> is about inflating its perceived value juuuust long enough to get to IPO. And then, if you succeed, you can turn all of that pretend money into real money and walk away.</p><p data-rte-preserve-empty="true">This has led to some interesting phenomena. In the interests of making those numbers bigger and seeing the lines on the graphs go up and up, we’ve seen a practice called “circular investment” that has raised eyebrows enough that <a href="https://www.technobezz.com/news/nvidia-enron">Nvidia issued a statement</a> declaring that they are “not like Enron.”</p>
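The circular-investment arithmetic can be sketched in a few lines. This is a toy model with made-up round numbers, not figures from any actual filing: a vendor invests cash in a customer, the customer immediately spends it back on the vendor's hardware, and both sides get to report impressive-looking numbers even though almost no outside cash enters the loop.

```python
# Toy model of "circular investment" — illustrative numbers only.

def circular_deal(investment: float, hardware_purchase: float) -> dict:
    """Compare what each side can report against the net cash that moved."""
    return {
        # The vendor books the hardware purchase as sales revenue.
        "vendor_reported_revenue": hardware_purchase,
        # The customer books the investment as a big funding round.
        "customer_reported_funding": investment,
        # The only cash that actually entered or left the loop.
        "net_cash_into_loop": investment - hardware_purchase,
    }

# $100B invested, $100B spent right back on GPUs: headlines can tout
# $200B of "activity," but the net new cash in the system is zero.
deal = circular_deal(100e9, 100e9)
print(deal)
```

The point of the sketch is the last field: however large the reported figures grow, the loop nets out to roughly nothing, which is why this pattern draws comparisons to vendor-financing schemes of bubbles past.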


  






  














































  

    
  
    

      

      
        <figure class="
              sqs-block-image-figure
              intrinsic
            "
        >
          
        
        

        
          
            
          
            
                
                
                
                
                
                
                
                <img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png" data-image-dimensions="1500x1500" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=1000w" width="1500" height="1500" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=100w 100w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=300w 300w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=500w 500w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=750w 750w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/b7be4059-89d1-490b-8c3f-78f5fc57f08e/2025_AI_Bubble_Speculation_2.png?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">

            
          
        
          
        

        
          
          <figcaption class="image-caption-wrapper">
            <p data-rte-preserve-empty="true"><em>By Catboy69 - Own work, CC0, </em><a href="https://commons.wikimedia.org/w/index.php?curid=177498746"><em>https://commons.wikimedia.org/w/index.php?curid=177498746</em></a></p>
          </figcaption>
        
      
        </figure>
      

    
  


  



  
  <p data-rte-preserve-empty="true">The whole industry is shuffling around hundreds of billions of dollars from company to company, and it’s all very impressive, and it looks like a ton of money is changing hands, and everybody gets to boast about the billions of dollars they’re making, but <em>none of it is real</em>. If Nvidia invests $100 billion in OpenAI, and then OpenAI buys $100 billion in new graphics cards from Nvidia, the only real, physical thing that’s happened, the only thing that isn’t imagination money, is that Nvidia just gave OpenAI a bunch of equipment for free.&nbsp;</p><p data-rte-preserve-empty="true">And yet if you look up information on AI stocks, what comes up is a ton of hype and very little critical investigation. Why? Because there’s a lot of money at stake. Money has an influence on us the same way that gravity does. A little gravity pulls us around, but we can escape it.&nbsp;</p><p data-rte-preserve-empty="true">The more money there is, the harder it is to resist; ultimately, billionaires and corporations accumulate so much capital that they become black holes of influence, capturing and deforming everything around them inescapably.</p><p data-rte-preserve-empty="true">Even if, it turns out, none of the money is real.</p><p data-rte-preserve-empty="true">Read the next post: <a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1">AI is Morally Bankrupt</a></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1776455901511-C4377ZYVWZ0289ZFOLQ6/unsplash-image-wTO6MWpMrJk.jpg?format=1500w" width="1500"><media:title type="plain">AI is Destroying the Economy, Part II</media:title></media:content></item><item><title>AI is Destroying the Economy, Part I</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Tue, 14 Apr 2026 17:09:19 
+0000</pubDate><link>https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69de70171d43877c11f0e1a0</guid><description><![CDATA[<p data-rte-preserve-empty="true"><em>This post is part of a series currently in progress. We’ll add links and probably adjust titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><u><em>Why AI Sucks and You Shouldn’t Use It</em></u></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><u><em>AI is Fundamentally Bad for Most Tasks</em></u></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><u><em>AI is Destroying the Planet</em></u></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>AI is Destroying the Economy, Part I</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true">Apologies for the longish break between posts — I was dealing with my newsletter email service last week and wound up migrating elsewhere. 
And yet again, I’m finding I have too much to say so I’m splitting a post into subtopics.</p><p data-rte-preserve-empty="true">Moving ahead: All right, let’s assume that AI works perfectly, and that we’ll invent some magic energy source that creates no heat and deploy it instantly around the globe, so climate change isn’t a worry anymore. </p><p data-rte-preserve-empty="true">We’re still left with some pretty serious economic problems — problems we’re facing right now, and problems that are growing in the background waiting for the moment the bill comes due.</p><p data-rte-preserve-empty="true">Let’s just take a minute to note that “economy” is a word many of us understand only vaguely. Depending on who’s talking, it can mean “inflation,” it can mean “how high is the stock market,” or it can mean “can I afford to pay for health insurance.”&nbsp;</p><p data-rte-preserve-empty="true">One thing everyone agrees is part of “the economy”: jobs. Jobs are good! We want more jobs for humans! Ideally one good job for everybody who wants one. But one of the promises of AI is that we can automate a lot of jobs — especially low-level clerical jobs — and then employers won’t have to pay for those employees anymore.&nbsp;</p><p data-rte-preserve-empty="true">AI is killing about <a href="https://fortune.com/2026/04/06/ai-tech-displacement-effect-gen-z-16000-jobs-per-month/">16,000 jobs a month</a> right now, we think. There’s a <a href="https://jobloss.ai">tracker that records layoffs</a> that specifically cite AI as a factor, and as of this writing, it counts 125,648 jobs lost to AI.&nbsp;</p><p data-rte-preserve-empty="true">But that’s not going to include the ad agencies that have quietly fired a few junior staff, the small importer that cuts loose their long-time freelance translator, the programmers that are simply never hired in the first place. 
The jobs that go away quietly, where nobody sends out a press release or reports it to local government.</p><p data-rte-preserve-empty="true">A thorough analysis suggests <a href="https://daveshap.substack.com/p/ai-destroyed-200k-to-300k-jobs-in">the real number is more like 200,000-300,000</a>, and we’re just getting started. I’m seeing forecasts like <a href="https://www.forrester.com/blogs/ai-and-automation-will-take-6-of-us-jobs-by-2030/">6% of the US workforce in the next four years</a>. Maybe <a href="https://www.nexford.edu/insights/how-will-ai-affect-jobs#how-to-quickly-change-career">up to 30% of jobs will be automatable</a> by the mid-2030s. This in a climate where 2025 saw <a href="https://hub.jhu.edu/2026/02/23/will-ai-make-human-workers-obsolete/">1.28 million fewer new hires</a> than there were in 2024.</p><p data-rte-preserve-empty="true">For context, the unemployment rate was 25% during the Great Depression. And these numbers will be on top of unemployment that exists for non-AI reasons, which from <a href="https://en.wikipedia.org/wiki/Unemployment_in_the_United_States#U.S._employment_history">1948 to 2015 averaged about 5.8%</a>.</p><p data-rte-preserve-empty="true">But AI is going to create jobs, too, right? Yeah, it sure is. <a href="https://www.foxbusiness.com/video/6385808601112">Building more data centers</a>.&nbsp;</p><p data-rte-preserve-empty="true">Artists, writers, and translators were all hit by the axe very early. It turns out a lot of businesses don’t care about quality as long as it’s dirt cheap. The apocalypse has already impacted many of my friends. Hell, it’s impacted <em>me</em>. I’ve looked around for copywriting or games writing jobs over the last couple of years, and the suitable listings that I find are all “AI Trainer,” or prominently mention working with AI in the job description.&nbsp;</p><p data-rte-preserve-empty="true">So in the short term, it’s clear that the boot of AI is crushing human employment. 
But there are longer-term problems we haven’t faced yet, because this AI boot isn’t stomping out jobs for humans evenly. <a href="https://fortune.com/2026/04/06/ai-tech-displacement-effect-gen-z-16000-jobs-per-month/">Most of the jobs being lost to AI right now are entry-level.</a></p><p data-rte-preserve-empty="true">This creates some massive looming questions that urgently need answers. One is: what happens to the large proportion of young people who can’t find jobs? What happens to the society they live in? What happens to the economy they aren’t participating in?&nbsp; </p><p data-rte-preserve-empty="true">This doesn’t have to be a problem! This could actually be great for humanity as a whole! But we’d have to commit to restructuring our society to distribute resources in a very different way than we do now. The solution is instituting a <a href="https://www.itsafoundation.org">universal basic income</a>, which topic I’ll write more about another time.</p><p data-rte-preserve-empty="true">Unfortunately, there’s no such simple solution for the other looming question: once AI has pretty much done away with entry-level work, where exactly will new senior-level employees come from?</p><p data-rte-preserve-empty="true">Experience is a great teacher, and in some cases, there’s no substitute for it. You can’t become a great writer without actually writing, you can’t become a great teacher without setting foot in a classroom, you can’t become a great doctor without seeing patients. You can probably apply this to your own field; who’s going to be better at doing something, the one with one year of experience or the one with five, or with fifteen? </p><p data-rte-preserve-empty="true">But if we’re using AI for all of the jobs easy enough for an early-career programmer, or writer, or designer, then new people aren’t getting any professional experience at all. 
And business as it’s done now isn’t going to be hiring young people to do two, three, five years of additional training before they start to perform better than the chatbot does.&nbsp;</p><p data-rte-preserve-empty="true">Ha ha, joke’s on you, the entry-level employees can do better than the chatbot already, but many, many employers don’t care, because <a href="https://www.getmonetizely.com/articles/human-vs-ai-how-to-compare-pricing-and-make-smart-workforce-decisions">AI is so much cheaper</a>. (…For now, but we’ll get to that next time.) We’re already living through one part of that: the frustrating hellscape of terrible customer service where there’s no way to talk to a human being and your problem is too complicated for the chatbot to fix. Imagine that, but for… everything. </p><p data-rte-preserve-empty="true">Losing jobs to technological innovation is by no means a new phenomenon. Often, innovation births entirely new industries, it’s true. But when we moved from horse-drawn carriages to automobiles, all of those extra horses got turned into glue.&nbsp;And AI’s explicit promise is to turn us into a bunch of horses.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">The Luddites, long maligned in popular memory, were fighting the industrialization of the textile industry — not because they hated and feared technology for its own sake, but because they were seeking protections against lower wages and poor-quality output.&nbsp;</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Hmm, that sounds awfully timely right now.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty"><em>Read the next post: </em><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" 
url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1776185412628-9CLCR8QVA8RGRIRNI2K6/unsplash-image-wTO6MWpMrJk.jpg?format=1500w" width="1500"><media:title type="plain">AI is Destroying the Economy, Part I</media:title></media:content></item><item><title>AI is Destroying the Planet</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Tue, 31 Mar 2026 20:32:39 +0000</pubDate><link>https://secret.works/blog/ai-is-destroying-the-climate</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69cc263d3429f61ed5cb4c1d</guid><description><![CDATA[<p data-rte-preserve-empty="true"><em>This post is part of a series currently in progress. We’re adding links and adjusting titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" 
data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true" data-indent="2"></p><p data-rte-preserve-empty="true">All right, let’s assume that AI is good enough for what you want it to do. Maybe you’re just generating funny pictures to slap onto your socials. Fine, fine, if those suck, it doesn’t matter much.</p><p data-rte-preserve-empty="true">But what are those funny pictures costing us?</p><p data-rte-preserve-empty="true">There are two angles here: the environmental cost and the economic cost. They’re intimately related, because the key element in both is all of those data centers.&nbsp;</p><p data-rte-preserve-empty="true">A data center is a warehouse full of racks of computers that AI (and indeed the whole of the internet) uses. We don’t think about it often, but everything we do on the internet — email, shopping, playing videos — is using a data center in some way. “The cloud” still has a physical footprint somewhere in the world. There is still a computer somewhere out there using electricity and generating heat that is talking to your computer or your phone.</p><p data-rte-preserve-empty="true">(Actually it’s probably several different computers along the way, each providing a little piece of the information you need, but let’s not bring distributed computing and internet routing into this.)</p><p data-rte-preserve-empty="true">So: data centers. We got a lot of data centers. We’re building more and more data centers, because of AI. This is very bad!</p><p data-rte-preserve-empty="true">“Oh Andrea,” you might say, “you can’t blame AI alone for growth in data centers!”</p><p data-rte-preserve-empty="true">Fuck yeah I can. 
Let’s <a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/">quote this directly</a>:</p><blockquote><p data-rte-preserve-empty="true">From 2005 to 2017, the amount of electricity going to data centers remained quite flat thanks to increases in efficiency, despite the construction of armies of new data centers to serve the rise of cloud-based online services, from Facebook to Netflix. In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023.&nbsp;</p></blockquote><p data-rte-preserve-empty="true">So let’s talk about the problems this is creating. We’ll get into climate today, because it’s easier and faster to explain.</p><h3 data-rte-preserve-empty="true"><strong>AI is Very Thirsty</strong></h3><p data-rte-preserve-empty="true">All of those data centers generate a lot of heat, which winds up using a lot of water. That’s because these server farms use chilled water circulated through pipes to keep the computers from overheating.&nbsp;</p><p data-rte-preserve-empty="true">You know how your phone or your laptop can get really hot when you’ve been using them a while? A data center does the same thing, but bigger and hotter, because the computers are packed in like bookshelves and the processors are much more powerful. If the warehouse gets too hot, the servers will fry out permanently and need to be replaced. Nobody wants to have to keep replacing the very expensive Nvidia GPUs in their AI data center!</p><p data-rte-preserve-empty="true">And so they go through a lot of water. 
It’s estimated that for each kilowatt-hour of energy a data center consumes, we’re looking at <a href="https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117">about two liters of water</a> used for cooling.</p><p data-rte-preserve-empty="true">But how much water is that in total?&nbsp;</p><p data-rte-preserve-empty="true">Cooling data centers already uses about as much water as <a href="https://www.sciencedirect.com/science/article/pii/S2666389925002788">all the bottled water manufactured globally</a>. They’re expected to need <a href="https://www.cnn.com/2026/01/18/business/ai-data-centers-electricity-prices">170% more water by 2030</a>.</p><p data-rte-preserve-empty="true">Water and access to water have been a slow-moving and underreported crisis for years now, both in the United States and across the globe. The US has had <a href="https://www.drought.gov/news/drought-2025-14-graphics-2026-01-15">consistent drought conditions</a> the last several years, which has among other things <a href="https://research.fs.usda.gov/sites/default/files/2025-12/pnw-drought_interactions_-_wildfire_-_update_v1.pdf">made wildfires a lot worse</a> and given us <a href="https://phys.org/news/2025-05-drought-quietly-global-crop-yields.html">crop failures</a> that put our whole food system at risk. Globally, according to the UN, <a href="https://www.un.org/en/global-issues/water">25% of humanity lacks reliable access to clean water</a> at all.&nbsp;</p><p data-rte-preserve-empty="true">Tell me, is your funny picture really THAT funny? 
</p><h3 data-rte-preserve-empty="true"><strong>AI Needs Power, Which Makes Carbon</strong></h3><p data-rte-preserve-empty="true">All right, let’s look at the energy footprint of the AI industry.&nbsp;</p><p data-rte-preserve-empty="true">Right now, data centers account for about <a href="https://www.discovermagazine.com/ai-data-centers-come-with-a-hidden-environmental-cost-is-sustainability-possible-48590">4% of all of the electricity usage</a> in the United States, or about as much as the entire nation of Japan. Construction is happening so fast that it could rise to <a href="https://www.cnn.com/2026/01/18/business/ai-data-centers-electricity-prices">as much as 12% by 2028</a>. Tripling in two years!</p><p data-rte-preserve-empty="true">And unfortunately, these data centers are much more likely to be placed where they’re cheap to run… which maps pretty well to where the electricity is still primarily generated from fossil fuels. The carbon intensity of a data center’s electricity is about <a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/">48% higher than the national average</a>.&nbsp;</p><p data-rte-preserve-empty="true">Worse, their usage isn’t a consistent draw; training in particular can result in rapid fluctuations in how much power a data center is using, which in turn hammers the electrical grid. 
Power companies often meet those sudden spikes in demand by firing up&nbsp;<a href="https://www.nrg.com/insights/energy-education/generation.html#:~:text=Peaking%20power,-When%20demand%20increases&amp;text=Most%20often%20these%20peaks%20occur,basis%20to%20ensure%20system%20reliability.">diesel-based generators</a>.</p><p data-rte-preserve-empty="true">It’s extremely difficult to find exact numbers for power usage and carbon output to get a sense of scale, but the environmental disclosures of data center operators indicate that AI has roughly the <a href="https://www.sciencedirect.com/science/article/pii/S2666389925002788">same carbon footprint as all of New York City</a>. Tripling our electricity use in our very dirtiest power plants in the near future strikes me as the exact opposite of everything we need to be doing right now!</p><p data-rte-preserve-empty="true">And it’s getting worse and worse. Researchers at Columbia estimated that <a href="https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/">training the GPT-3 AI model emitted roughly 500 metric tons of carbon dioxide</a> — the equivalent of driving a car from New York to San Francisco around 438 times. That’s just the initial training of one model, and doesn’t include the power used by the <a href="https://www.discovermagazine.com/ai-data-centers-come-with-a-hidden-environmental-cost-is-sustainability-possible-48590">29,000 queries per second ChatGPT gets</a> alone, nor does it include the post-training tweaks these models get, which are also incredibly energy intensive.</p><p data-rte-preserve-empty="true">Newer and more capable systems use exponentially more resources.
<a href="https://medium.com/@rogt.x1997/ais-dirty-secret-how-gpt-3-consumed-1-287-mwh-and-emitted-the-same-co%E2%82%82-as-112-cars-5e43b85eb600">GPT-4 uses 50 times more power</a>.&nbsp; </p><p data-rte-preserve-empty="true">Heyyyy who needed a planet to live on, anyway.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">But that’s not all! Even if you aren’t worried about water access, starvation, or climate change, you’re probably concerned about your power bill, right?&nbsp;</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">All of those data centers put a strain on the local power grid, resulting in required infrastructure upgrades and demand pricing, which the power companies thoughtfully pass on to you, the consumer.&nbsp;You think a 5% increase in your power bill sucks? 20%? How about double? It’s worse than that: <a href="https://www.cnn.com/2026/01/18/business/ai-data-centers-electricity-prices">Electricity costs for areas near data centers increased by as much as 267% compared to five years ago</a>.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Now is your funny picture worth it?</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Read the next post: <a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1774987949006-HPR2O3AXO7KBGTZLYVPT/unsplash-image-PSpf_XgOM5w.jpg?format=1500w" width="1500"><media:title type="plain">AI is Destroying the Planet</media:title></media:content></item><item><title>AI is Fundamentally Bad for Most Tasks</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 27 Mar 2026 16:48:33 +0000</pubDate><link>https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj</link><guid 
isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69c6b33f9c9d3c1d17664c7c</guid><description><![CDATA[<p data-rte-preserve-empty="true" class="is-empty is-editor-empty"><em>This post is part of a series currently in progress. We’ll add links and probably adjust titles as we go.</em></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">We’ve been trained for decades to believe that computers are always right. That computers are not capable of making mistakes. 
If there’s a mistake with a computer involved, it’s always the result of some human action: someone typed the wrong number, someone clicked the wrong button, someone used the wrong file.&nbsp;</p><p data-rte-preserve-empty="true">We are used to computers always doing exactly what we told them to do. (It’s just that sometimes, what we told them to do and what we THOUGHT we told them to do aren’t the same thing.) Somewhere, upstream, if the computer is wrong, it’s your fault.</p><p data-rte-preserve-empty="true">So it’s not surprising, in a grand sociological way, that we’re struggling with the onset of a computer-based tool in which making mistakes isn’t just a one-time thing; it’s a mathematical inevitability based on how these tools work. <a href="https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html">OpenAI has said so itself.</a></p><p data-rte-preserve-empty="true">You may or may not have heard the word “hallucinations” in the context of AI. A hallucination is when an LLM makes stuff up that isn’t true. But this isn’t the result of something going wrong somewhere in the circuitry. This is how the AI does everything — again, it’s generating sequences of words based on how statistically likely they are to appear close to each other and in which order. The LLM processes your prompt, and sometimes it will be right, and sometimes it won’t be, and that’s the gamble you’re taking. It’s just not as obvious as something simpler doing the same thing, like, say, a Magic 8-Ball.</p><p data-rte-preserve-empty="true">An LLM is a marvel of engineering; it’s a miracle that it works as well as it does. Truly a triumph of technology. It’s really very good!
But it’s not good enough, because it <strong><em>is not thinking</em></strong>.</p><p data-rte-preserve-empty="true">Every single thing an LLM tells you is something it just kind of made up from nothing; it’s just that it has an enormous body of plausible things to tell you. </p><p data-rte-preserve-empty="true">But we’re so used to reflexively trusting what the screen tells us. It has access to all of human knowledge, right? And look, most of the time, it’s pretty good, right?</p><p data-rte-preserve-empty="true"></p><p data-rte-preserve-empty="true"><strong>Pretty Good Isn’t Good Enough</strong></p><p data-rte-preserve-empty="true">Would you use a lawyer who just makes up case citations? No? What if it’s only half the cases? What if it’s only one in four? One in a hundred? If you know that sometimes a lawyer is going to make stuff up and put it in your filings, would you ever use that lawyer at all?</p><p data-rte-preserve-empty="true">Do you think this is hyperbolic or just hypothetical? Well. Here’s a lawyer being sanctioned for filing an AI-assisted brief with false citations <a href="https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/">in New York in 2023</a>. A <a href="https://www.reuters.com/legal/government/texas-lawyer-fined-ai-use-latest-sanction-over-fake-citations-2024-11-26/">government lawyer in Texas in 2024</a>. <a href="https://www.reuters.com/legal/government/law-firm-escapes-sanctions-over-ai-generated-case-citations-2025-11-13/">Oregon in 2025</a>. <a href="https://www.reuters.com/legal/litigation/us-appeals-court-fines-lawyers-30000-latest-ai-related-sanction-2026-03-16/">Here’s one from 2026.</a></p><p data-rte-preserve-empty="true">You know what, <a href="https://kagi.com/search?q=ai+lawyer+sanctioned+doj+site%3Areuters.com">just look over the headlines yourself</a>. 
Consistently, for years now, lawyers have been using AI to do their work, and the AI has made shit up, and then there’s been some trouble.&nbsp;</p><p data-rte-preserve-empty="true">Do you consider that an acceptable risk for your legal proceedings?</p><p data-rte-preserve-empty="true">Okay, forget lawyering. Would you hire someone to listen to recordings and write transcriptions of them for you if they made stuff up to fill in pauses in a conversation or in a sentence? How about if that stuff was super violent and racist?</p><p data-rte-preserve-empty="true">Whisper, an OpenAI product widely used as medical transcription software, <a href="https://arxiv.org/html/2402.08021v2">will hallucinate racist and violent content and imply drug use where no such words were spoken</a> — definitely not the sort of thing you want going into a medical record incorrectly! According to the study, hallucinated content appears in about 1% of transcriptions, and 38% of those hallucinations are what the study considers "harmful."</p><p data-rte-preserve-empty="true">An average primary care doctor sees about 20 patients a day, so that’s about one hallucination a week, and one or two a month that are harmful.</p><p data-rte-preserve-empty="true">Would you hire a human being with the knowledge that they would EVER randomly insert a little fanfic of the patient threatening to murder the doctor? Even just once a month?&nbsp;</p><p data-rte-preserve-empty="true">Humans will make mistakes, too. But the mistakes a human being will make are orders of magnitude less severe. That’s one of the reasons that AI hallucinations throw us for a loop: not just that they’re wrong, but that they’re wrong in ways that a human could never be.</p><p data-rte-preserve-empty="true">AI tools keep being pushed into high-stakes situations where judgment and common sense matter.
But they have neither of these things.</p><p data-rte-preserve-empty="true">AI tools have <a href="https://www.tomsguide.com/computing/aws-suffered-at-least-two-outages-caused-by-ai-tools-and-now-im-convinced-were-living-inside-a-silicon-valley-episode">brought down AWS</a> at least twice. That we know of. AWS — that’s Amazon Web Services — is the service that powers the backbone of the modern internet as we know it, so in a sense, if AWS goes down, so does most of the internet.&nbsp;</p><p data-rte-preserve-empty="true">I personally think if these AI systems were human, their asses would have been fired already, if they’d even gotten past the “checking your references” part of the hiring process. And yet there seems to be a push to tolerate poor performance as a necessary evil on the way to… something?</p><p data-rte-preserve-empty="true">It is perpetually shocking to me how dedicated companies and individuals are to the continued use of AI systems that have fucked up in astonishing and inhuman ways.&nbsp;</p><p data-rte-preserve-empty="true">LLMs have given us a Chicago Sun-Times <a href="https://www.npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai">summer reading list full of books that don’t actually exist</a>, security AIs have mistaken <a href="https://abc7.com/post/student-handcuffed-doritos-bag-mistaken-gun-schools-ai-security-system-baltimore-county-maryland/18073796/">Doritos</a> and a <a href="https://arstechnica.com/tech-policy/2025/12/florida-schools-plan-to-vastly-expand-use-of-ai-that-mistook-clarinet-for-gun/">clarinet</a> for guns, LLMs have deleted all of someone’s <a href="https://www.pcmag.com/news/meta-security-researchers-openclaw-ai-agent-accidentally-deleted-her-emails">email</a>, or <a
href="https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part">all of their hard drive</a>, or <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-code-deletes-developers-production-setup-including-its-database-and-snapshots-2-5-years-of-records-were-nuked-in-an-instant">over two years of their company’s work</a>. People using a chatbot as a companion for emotional support have been encouraged to commit suicide. (Actually, the <a href="https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots">Wikipedia page on deaths caused by chatbots</a> is extremely disturbing.)</p><p data-rte-preserve-empty="true">These are just the ones that make the news. Imagine all the times that didn’t happen to catch a reporter’s eye.</p><p data-rte-preserve-empty="true">One of my all-time favorites: Microsoft Excel has added an AI tool and warned <a href="https://www.pcgamer.com/software/ai/microsoft-launches-copilot-ai-function-in-excel-but-warns-not-to-use-it-in-any-task-requiring-accuracy-or-reproducibility/">you not to use it “for any task that requires accuracy or reproducibility.</a>” Accuracy. And. Reproducibility.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">You know, the core thing we expect computers to always do.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">I lie awake at night sometimes worrying about the fact that someone out there is probably trying to get AI into the software we use for situations where any level of error is unacceptable, like banking.
Or air traffic control.</p><p data-rte-preserve-empty="true"></p><p data-rte-preserve-empty="true"><strong>Why Does AI Make These Mistakes?</strong></p><p data-rte-preserve-empty="true">A chatbot comes off like an amiable, helpful, thoughtful person who is here to make your life better. But here are some things AI is not doing when it is generating an answer for you:</p><ul data-rte-list="default"><li><p data-rte-preserve-empty="true">Performing arithmetic</p></li><li><p data-rte-preserve-empty="true">Checking reference material&nbsp;</p></li><li><p data-rte-preserve-empty="true">Consulting a doctor</p></li><li><p data-rte-preserve-empty="true">Assessing statements for truth</p></li><li><p data-rte-preserve-empty="true">Judging whether it needs information it doesn’t have</p></li></ul><p data-rte-preserve-empty="true">It's ironic to me that some people think their generative AI is an actual entity with consciousness who understands what they're talking about. Sometimes they’ll cite conversations with a chatbot where the bot tells them its thoughts and feelings! Where it expresses needs and desires!</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">I mean, <em>of course</em> the system knows how to have a conversation as if it were a sentient intelligence. There's an enormous body of fiction featuring exactly this thing that goes back decades. We've been imagining it for far longer than we’ve been able to do it.</p><p data-rte-preserve-empty="true">But whether the bot is a conscious entity is frankly not even relevant to anything but philosophic questions right now. Because if it is conscious, it isn’t conscious in a way that we understand and it isn’t using language in the same way that we do.</p><p data-rte-preserve-empty="true">Regrettably I can’t remember the source of this analogy, but — imagine that you’ve been locked into a library written entirely in Thai. 
(This is assuming you don’t read or speak Thai, but if you do, maybe choose Zaghawa.) There are no pictures in these books, no diagrams, and no translations into any language that you do know.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Given enough time alone in this library, and with no other resources, will you be able to teach yourself Thai?</p><p data-rte-preserve-empty="true">This is what the AI has access to: words, millions and millions of words and sentences. But the AI has never seen the sun, it does not know what being tired means, and it certainly hasn’t gone to medical school.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">I’m willing to entertain the idea that an AI is conscious, inasmuch as I am willing to entertain that any changing knot of matter and energy may have some form of consciousness. But whatever is going on in there is not the same as us, and it’s a catastrophic error to treat it as if it were.</p><p data-rte-preserve-empty="true">The AI does not have access to external reality. The AI does not understand what you’re asking it to do. The AI most certainly isn’t going to know things that don’t exist outside of the body of words it’s been trained on.</p><p data-rte-preserve-empty="true">If you ask an AI for an essay about Thomas Jefferson, it’s probably going to do a bang-up job. There’s a lot of information out there about Thomas Jefferson to draw from. If you ask it how to write code to do a specific task, it’s got good odds of giving you decent advice (but you’re going to need to know enough on your own to check behind it, like it’s a shitty junior developer).&nbsp;</p><p data-rte-preserve-empty="true">But if you ask it why your spouse is mad at you, or how many fig trees per square mile there are in Wayne County, Michigan, or if school will be closed for snow next week, or the best hotels for your road trip, or what your blood test results mean, sure, it’s going to give you an answer…
but it doesn’t really KNOW. </p><p data-rte-preserve-empty="true">Read the next post in this series: <a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1774629953384-AA7M7E4SZS68B8WTMDI7/unsplash-image-PSpf_XgOM5w.jpg?format=1500w" width="1500"><media:title type="plain">AI is Fundamentally Bad for Most Tasks</media:title></media:content></item><item><title>Why AI Sucks and You Shouldn’t Use It</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Wed, 25 Mar 2026 19:14:54 +0000</pubDate><link>https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69c4320eeda1b8024e6f58b9</guid><description><![CDATA[<p data-rte-preserve-empty="true" class="is-empty is-editor-empty">I started writing this post over a year ago as a primer on AI for friends and family who are less deeply embedded in tech culture than I am. It came to mind again when I posted a related bunch of thoughts about art and AI on Bluesky that were modestly well received, and the merry-go-round isn’t slowing down enough for me to ever be comprehensive, so let’s just run with where we are now, at this point in time.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Let’s get to the Bluesky thread on art last; we have a lot of territory to cover first. Because I’m covering a lot of different topics, I’m breaking this post into sections which will roll out over the next… period of time. 
Links will appear as the posts go up, and right now the itinerary looks something like this:</p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/njfjvrfl6kgnyxijnzydpo7154ka5u"><em>Why AI Sucks and You Shouldn’t Use It</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/ai-is-destroying-the-climate"><em>AI is Destroying the Planet</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/um5j5dhqwkpyjofmxnx82f8dw4dps0"><em>AI is Destroying the Economy, Part I</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/1xruksvd71lrlrh0tjyn91uagmwbs8"><em>AI is Destroying the Economy, Part II</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/jehzkkl4jl09scazuo5a6his9zbqu1"><em>AI is Morally Bankrupt</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><a href="https://secret.works/blog/5k9e4hzlfngi94ru2pggs95vj1cyi4"><em>AI is Making You More Stupider</em></a></p><p data-rte-preserve-empty="true" data-indent="2"><em>That Original Bluesky Thread About Art</em></p><p data-rte-preserve-empty="true">But first, because we loooooove a good terminology discussion around here, let’s clarify our language.</p><h3 data-rte-preserve-empty="true">What “AI” Even Means</h3><p data-rte-preserve-empty="true" class="is-empty is-editor-empty"> Before we go on, we need to be sure we’re all talking about the same thing. 
(For my friends who are far more expert than I am, I apologize for the coming oversimplifications.)&nbsp;</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Collectively, we’ve muddled a lot of different technologies together into the sludge we call “AI,” and it’s made the conversation about “AI” hopelessly confusing to an average person. Since “AI” is the hot new marketing term (somehow), a lot of what’s being called “AI” are old technologies that run simply and algorithmically like any other computer program, and will give you the same result every time: spell check, let’s say. We're used to a world where if you type "chamge" then spellcheck will tell you that you probably meant "change" every single time.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Apple Maps has a pretty good idea that when I’m out and I pull up Maps, I’ll probably want to go home, so it will automatically suggest that route to me. It can also calculate a route between any two given addresses, a miracle we take for granted these days. These are technically a kind of artificial intelligence, though they both existed long before anyone had heard of ChatGPT.&nbsp;</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">We’re not talking about that right now.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty"> In the olden days, most of what we would call “AI” were kinds of machine learning, or deep learning, or neural networks. Machine learning is essentially when you set a computer on a data set and ask it to sort that data into piles, or to do something with that data for you, or find possible connections between pieces of data that a human might not have picked up on. For a while, there was a trend of people training neural networks to come up with lists of, say, <a href="https://arstechnica.com/information-technology/2017/05/an-ai-invented-a-bunch-of-new-paint-colors-that-are-hilariously-wrong/">new paint colors</a>.
Hilarity ensued.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Computers are spectacular for these tasks, usually far better and more accurate than humans. This is because the amounts of information to sift through are simply too big for a human to work with efficiently, or at all (think "the set of all possible drugs" or "every possible folding pattern for a protein").&nbsp;</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">When we say "AI" in this series of posts, we're not talking about that, either.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">What we call AI now is usually <em>generative</em> AI — that’s the chatbots that write stuff or make pictures and video for you. The writing version is specifically called a Large Language Model, and the way an LLM works is by sucking in a huge amount of writing and analyzing it to develop a statistical model for what kinds of words generally go together and in what order.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">It's autocomplete. Very, very fancy autocomplete. And I hate it, and I hate that people use it. (Okay, I get that it's more complicated than that, but we'll talk about that later, too.)</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">These LLMs have been put to an astonishing variety of uses: rewriting emails to make them friendlier, summarizing emails so you don't have to read them, talking about your feelings instead of hiring a therapist, writing books, doing your homework, analyzing your medical situation or your astrology chart or your stock portfolio.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">My very favorite branch of AI is the "synthetic data" movement, where you have the LLM come up with numbers for you so you don't have to, say, actually ask your customers questions, or run a social science experiment.
As if the purpose of doing these things was not to determine something that is happening in external reality.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">We've retreated to a pre-Enlightenment philosophy in which, we think, perfect truth is accessible to us through reasoning alone. Note that through perfect reasoning alone, without checking on what was actually happening in external reality, Alcmeon of Croton confidently taught us that goats breathe through their ears.</p><p data-rte-preserve-empty="true" class="is-empty is-editor-empty">Read the next post in this series: <a href="https://secret.works/blog/k09iqvip5dde0bfixq9xmyfvf07zvj"><em>AI is Fundamentally Bad for Most Tasks</em></a></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1774465999470-VKPT3MMH5K6QS488WHKS/unsplash-image-PSpf_XgOM5w.jpg?format=1500w" width="1500"><media:title type="plain">Why AI Sucks and You Shouldn’t Use It</media:title></media:content></item><item><title>Agent Hunt</title><category>Au Courant</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Sun, 08 Mar 2026 16:54:55 +0000</pubDate><link>https://secret.works/blog/dwfhed9wjgt9zl6mk53so96k2dth5r</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69ada9dfa27a963ab2216dd0</guid><description><![CDATA[<p class="">It’s worked more than once that I tell the universe what I want and it delivers, so let’s try this again —</p><p class="">Quite some time ago, I emailed my literary agent to introduce her to a friend and discovered that she had left the agency. Upon investigation, it turned out that she had indeed left, and from the sound of it, her clients were largely neither informed nor were they reassigned to another agent. </p><p class="">Surprise! 
You need a new agent too, just like the friend you were trying to help!</p><p class="">It didn’t matter a lot at the time because I didn’t have a book to sell, so I decided to shelve that problem for later. Well, now it’s later, and I have a new book to sell. But I’m not great at the traditional querying process, and indeed I’ve never succeeded at it; my first agent was introduced to me via Twitter (and later quit agenting entirely) and my second reached out to me based on my starred review in Publishers Weekly for Revision.</p><p class="">So while I’m working toward performing the traditional agent hunt in clanking fits and starts, I thought I’d put this out there into the world:</p><p class="">I need a new literary agent! My jam is usually the places where society and technology intersect, and it’s usually present-day or near-future. The things I write often wind up saying things about politics, or capitalism, or the technology and marketing industries. This most recent book I’ve written is <em>really great</em>, actually, and if you know me you’ll know that it’s unusual that I’d actually say that. </p><p class="">It is about five ways an AI assistant in your brain can ruin your life. It takes place in a world just a few decades ahead of now that’s moved through the catastrophes of the present day and instituted some key social changes like universal basic income and a wealth cap. It is aggressively anticapitalist. There are also characters and a plot and such, and by way of proving I can do those things, I wave vaguely at my prior body of work, which includes some 20 years of alternate reality games, immersive experiences, serial fiction, and yeah, also some novels. </p><p class="">…Probably this isn’t how I should be writing queries, huh? </p><p class="">Anyway: I need an agent! 
If you know someone who would be a good fit for me and you’re inclined to make an introduction, I’d appreciate it and absolutely purchase you a meal and/or beverages the next time we meet!</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1772990211976-E2ONJTVSHPX73FQJJHEH/unsplash-image-0gkw_9fy0eQ.jpg?format=1500w" width="1500"><media:title type="plain">Agent Hunt</media:title></media:content></item><item><title>The Website Formerly Known as Deus Ex Machinatio</title><category>Meta</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 27 Feb 2026 16:50:30 +0000</pubDate><link>https://secret.works/blog/oomd95yjuec521g6n0cn0r4v615g16</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:69a1bf0af98e2d2f52eff393</guid><description><![CDATA[<p class="">Over twenty years ago (!!!) I started a new, professional website and blog for career purposes. Part of the point was to achieve some level of visibility as an ARG creator, but I was also desperate at the time for conversations about craft: what makes a good game, what ethical pitfalls to watch out for, how to make an audience care about your characters more, that kind of thing.</p><p class="">I struggled to think of a name. Domain names are hard. Ultimately, I <a href="http://secret.works/blog/2005/8/12/so-whats-with-the-name.html">came up with a pun in Latin</a> that’s impossible to say or spell correctly, impossible to remember, just really an awful branding exercise all around. And then I was locked into it, because… branding?</p><p class="">I’ve been at a weird crossroads lately across huge swaths of life. Things are changing with my health, with my family, with my career, and of course with the larger world, too. Most of it is really great, actually, but it’s clear that this is a season to cast off old things that no longer serve me. 
That includes my dumb Latin pun.</p><p class="">So I’m introducing to you <a href="http://secret.works" target="_blank">secret.works</a>, which is a badass domain name if I do say so myself, and that’s before considering that it’s much easier to remember and spell. I’ve redesigned the site while I’m at it, though I still need to go through and purge a bunch of links bitrot has stolen, and let’s not talk about what a mess the store is right now. Fixes are on the way.</p><p class="">And while I’m at it, I’m letting go of the idea of this blog as primarily a professional platform. It was never clearly that, because the personal and professional are blurry in creative fields to begin with. But the pressure to come up with something meaningful or at least interesting to say about writing, or games, or politics, keeps me from saying anything at all. So I’m reframing the goal of this site to “stuff Andrea has learned lately.”</p><p class="">What will these things be? Who knows! How often will I update? No promises! But there’s a lot of interesting stuff in the world. 
Let’s have fun with it.</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1772210064919-TE1OAHK7MH7NY1DBJ7LL/unsplash-image-Szqd4nIxDik.jpg?format=1500w" width="1500"><media:title type="plain">The Website Formerly Known as Deus Ex Machinatio</media:title></media:content></item><item><title>Confusion 2026, Dracula, and Other News</title><category>News</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Thu, 04 Dec 2025 15:41:00 +0000</pubDate><link>https://secret.works/blog/ge1sydwlgwui2f5n7piwda1jl8bymd</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:6931a5b416c15749b57f88c8</guid><description><![CDATA[<p class="">I’m remiss in updates, but OH BOY do I have updates for you!<br><br>First and most important: I’m going to be <a href="https://2026.confusionsf.org/guests-of-honor/" target="_blank">a Fan Guest of Honor at Confusion 2026 in Detroit </a>next month! (Well, technically it’s in Novi, Michigan, but there’s no airport in Novi.) Confusion is my very favorite convention, and I haven’t been to one since before covid, so this is exciting on many, many levels. Let me know if I’m likely to see you there!</p><p class="">Next: I finished writing a new book a couple of weeks ago! It’s my hate letter to AI and can loosely be described as “five ways an AI assistant in your head can go very, very wrong.” The whole thing took only six months, the bulk of it in an intense six-week period through October and November. I’ve learned some important things about myself and my brain in the process, though that’s a matter for a future post. </p><p class="">So now I’m looking for a new literary agent, at the worst possible time of year! 
If you know an agent who is into grounded near-future SF that is extremely mean about capitalism and the tech industry but not actually dystopian, then please, by all means, introduce me.</p><p class="">And finally: since I finished a book, my work objective for this month is to fill my head up with a bunch of new things, mostly books. So far I’ve read Babel and Dracula, the latter with <a href="https://bsky.app/profile/andrea.bsky.social/post/3m6sgymufzs2o" target="_blank">a live thread on Bluesky</a> that people seem to have enjoyed. There is quite a lot of paprika discourse.</p><p data-rte-preserve-empty="true" class=""></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1764862762026-P6AO68C9WR150BKXGBLF/unsplash-image-0gkw_9fy0eQ.jpg?format=1500w" width="1500"><media:title type="plain">Confusion 2026, Dracula, and Other News</media:title></media:content></item><item><title>Reintroducing Revision</title><category>Books</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Thu, 11 Sep 2025 16:19:36 +0000</pubDate><link>https://secret.works/blog/my6a2ywlq1fchhd7cs7repr9g5j1i3</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:68c2f0074bc0aa3965b3b85c</guid><description><![CDATA[<p class="">Ten years ago, <a href="https://firesidefiction.com" target="_blank">Fireside Fiction</a> gave me a shot and published my first novel, Revision, about a wiki where your edits come true. 
It was an exciting time, all the more so because the book enjoyed a very warm reception with great reviews from <a href="https://www.npr.org/2015/05/07/403128640/flexible-fluid-revision-bounces-from-rom-com-to-sci-fi" target="_blank">NPR Books</a> and <a href="https://www.kirkusreviews.com/news-and-features/articles/pick-yourself-and-try-again/" target="_blank">Kirkus</a>, and even a starred review in <a href="https://www.publishersweekly.com/978-0-9861040-0-8" target="_blank">Publishers Weekly</a>. I am forever grateful to Fireside, and to its founder and my editor Brian White in particular, for everything they’ve done for me.</p><p class="">Unfortunately, Fireside Fiction wound down operations in 2022 after too few years, despite having substantially changed the landscape of SF/F publishing for the better. That means that Revision has been out of print and unavailable for a few years now. I love this book and I’m proud of it, so that’s always made me a little sad.</p><p class="">Today I’m excited to tell you that Revision is coming back on October 7! And look at this cool new cover!</p>
<figure class="sqs-block-image-figure intrinsic">
<img data-stretch="false" data-image="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg" data-image-dimensions="1600x2560" data-image-focal-point="0.5,0.5" alt="" data-load="false" elementtiming="system-image-block" src="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=1000w" width="1600" height="2560" sizes="(max-width: 640px) 100vw, (max-width: 767px) 100vw, 100vw" onload="this.classList.add(&quot;loaded&quot;)" srcset="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=100w 100w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=300w 300w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=500w 500w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=750w 750w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=1000w 1000w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=1500w 1500w, https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/e396a7ab-7869-474e-872c-1077cf19a32b/Revision+Relaunch+Cover+Ebook.jpg?format=2500w 2500w" loading="lazy" decoding="async" data-loader="sqs">
</figure>
  <p class="">The ebook is available for preorder right now on <a href="https://www.amazon.com/dp/B0FQL1HTZV" target="_blank">Amazon</a>, <a href="https://www.barnesandnoble.com/w/revision-andrea-phillips/1121811625?ean=2940182830775" target="_blank">Barnes and Noble</a>, <a href="https://books2read.com/u/mvEBqe?store=apple&amp;format=EBOOK" target="_blank">Apple Books</a>, <a href="https://books2read.com/u/mvEBqe?store=kobo&amp;format=EBOOK" target="_blank">Kobo,</a> and <a href="https://books2read.com/u/mvEBqe" target="_blank">more</a>. There’s a trade paperback coming as well, if you’d rather hold out for that!</p><p class="">I’ll probably be trying to get onto podcasts and such for promotion. Good lord, maybe I’ll have to go on TikTok! But marketing sucks to do and I’m waist-deep in writing something new. So I’d be really grateful if you tell a friend about the book, or maybe even post something about it on a social something whatever  if you’re so inclined, I don’t know, they just say word of mouth is the best way to sell books?</p><p class="">In any event, I’m just delighted to see this book back in the world again. Thanks for being along for the ride, you guys. 
You’re the greatest.</p>]]></description><media:content height="1200" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1757607490645-6L8QRNKKDL3NW4CLXLHI/Revision+Cover+City+Skyline.jpeg?format=1500w" width="1500"><media:title type="plain">Reintroducing Revision</media:title></media:content></item><item><title>Love, Ethics, Conflict, and Aliens</title><category>Philosophos</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Mon, 07 Apr 2025 20:21:26 +0000</pubDate><link>https://secret.works/blog/love-ethics-conflict-aliens</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:67f42f51cc56475b28e1ddc6</guid><description><![CDATA[<p class="">This is going to be a little weird and extremely long. This is about politics and ethics; it's about religion and the purpose of existence; it's about how to do the right thing, and how to figure out what the right thing even is.</p><p class="">A lot of what you're about to read boils down to a philosophic argument against moral conclusions reached by a belief system you probably haven't even heard of before now. But we go some interesting places along the way, and it will definitely grapple with some subjects that are relevant to you by the time we reach the end.</p><p class="">Onward.</p><p class="">It's difficult to be alive in the world today, and plugged into the broad knowledge of what's happening everywhere, all the time. I'm speaking from an American lens, but I think a lot of what I have to say is international in scope. We live in an extremely divided age. The internet that we thought would bring us all together and show us how we all have deep commonalities has instead become a tool to distribute propaganda creating and encouraging division. Polarization.</p><p class="">We hate. We harm. 
And what's worse, we convince ourselves that it's moral superiority.</p><p class="">Because it's so difficult just existing in this atmosphere, there are lots of ways people cope. One of those ways is to simply tune out entirely. A lot of people don't watch news, don't keep up on current events. I'm sympathetic to this impulse, because keeping up is profoundly bad for your mental and emotional health. But at the same time, not keeping up is absolving yourself of any moral responsibility to see justice done. You're just counting on other people to do it. Too much of that behavior is one of the factors that got us here.</p><p class="">In my online spaces, there's a perpetual tension over this. On the one hand, goes the advice, put your own oxygen mask on first. Don't doomscroll. Go offline and touch grass. Pet a dog. Make some art. Avoid the news. Protect your mental health and avoid the things that trigger stress or anxiety. Despair is what the fascists want you to feel. Joy is an act of resistance! Just continuing to be alive and happy is an act of resistance!</p><p class="">It's troublingly easy to use self-care as an excuse for disengaging completely from the slow-moving apocalypse that is reality. But opting out of that stress is a position of privilege for people whose survival is not at stake. </p><p class="">It's really hard to find a good balance. It's imperative to find it. I’m seeing a lot of advice to think small, to grow where you are planted, to pick the one issue most important to you and then stay in your lane. But it’s hard work to think through your own value systems and decide what is the one most important thing.</p><p class="">This brings me to aliens. 
No, really.</p><p class="">As part of my deep dive into aliens, UFOs, astral projection, and other fringe topics, I've found a lot of interrelated belief systems that fall somewhere on the spectrum from “grounded in verifiable reality” to “kind of out there but relatively harmless.”</p><p class="">Let's start with the real-world physics stuff, which is incredibly interesting in its own right: there's a thread of quantum physics that <a href="https://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/">strongly suggests our reality is a simulation</a> — which I take not to mean "our reality isn't real," but something more like "our reality is a subset embedded in some larger reality." To Mario, the Mushroom Kingdom is reality, and the fact that we exist outside of the screen isn't relevant to him. For now, we’re leaving aside questions of multiple timelines and branching realities.</p><p class="">There's also a theory and body of evidence that <a href="https://www.scirp.org/journal/paperinformation?paperid=128000">consciousness is a nonlocal phenomenon</a>. Which is to say: your consciousness seems to belong to only you and seems to be pinned to only your body, but that might not be entirely true. Instead, it may well be that consciousness is something like a fundamental substrate upon which our perception of reality rests. Consciousness creates reality. Constantly. </p><p class="">This leads into the idea that your consciousness can also change reality, which starts out with an easy-to-believe mind/body connection, moves on to explaining the placebo effect and the quantum behavior of the observer effect, and ultimately arrives at manifestation, the idea that you can bring good fortune of all kinds into your life by believing hard enough that it is already a fact. 
But there’s even a body of evidence for this — studies have shown that <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2423702">concentration can alter the results from a random-number generator</a>.</p><p class="">And there are people who believe, through meditation and similar practices, that they are in communication with the larger reality of which our world is a subset. Whether you interpret the beings these people contact as angels, demons, aliens, spirits, etc. varies extremely widely. I'll just note here that this is a foundational belief of many religions, and I am not passing judgement on its veracity.</p><h3>Starseeds (and Their Many Cousins)</h3><p class="">That gets us to where we're going: Starseeds. Starseeds are people who believe that they are beings from other planets or galaxies who have chosen to come to Earth and incarnate in a human body to be beacons of love and light, and help the people of earth to progress to a higher spiritual level. In this belief system (and, for that matter, in a lot of belief systems including traditions going back thousands of years), every human is a spiritual being who has intentionally chosen to live this life on Earth in order to gain valuable experience and progress faster. Existence on Earth is akin to a very difficult school you have enrolled in for personal growth.</p><p class="">(Note that I'm using "Starseed" here as a shorthand that can include many New Age-ish belief systems with a similar cosmology as laid out in books by <a href="https://bookshop.org/p/books/three-waves-of-volunteers-and-the-new-earth-dolores-cannon/7278900?ean=9781886940154&amp;next=t">Dolores Cannon</a>, <a href="https://www.my-big-toe.com">Tom Campbell</a>, et al, most of which don't include the part about identifying with having the soul of an alien, excepting inasmuch as we all are.)</p><p class="">I think the Starseed belief system as a whole is fine. It's great! 
I sincerely mean no snark or contempt toward anyone with these beliefs. It makes as much sense as any other religion, propagates the community values of kindness and generosity, and does a better job than some at exploring difficult questions like "Why do good people suffer?" This is not an attempt at ridicule.</p><p class="">(An aside: That said, these beliefs can come off as victim-blaming in a way I find uncomfortable. In this worldview, the answer to "why does this child have cancer" is "because before they were born, the spirit of this child chose to have a difficult life wherein they have cancer so they can undergo faster and deeper spiritual growth." The Abrahamic religions, at least, tend more toward "That's none of your business," which is definitely less satisfying.)</p>
Spiritual permission to check out of politics, and out of taking responsibility for anything happening in the larger world, seems pretty great!</p><p class="">But one of the benchmarks for how much I respect any given belief system (or religion, or philosophy, or political affiliation) comes down to whether it's internally consistent. It took me a long time to figure out exactly where the inconsistency was with this, but I think I've nailed it down. </p><p class="">To me, if you are meant to be a spiritual compass, a beacon shining for other people, then ignoring harm being inflicted on people because it causes you discomfort is morally repugnant. You may tell yourself it is loving not to choose a side, and there are contexts where this is true. But if you love two children equally, does that love mean you should let one murder the other because choosing sides is wrong?</p><h3>A Little Comparative Religion</h3><p class="">The conflict inherent in trying to bring love and light into a world full of violence is not unique to Starseeds. It's a problem just about every belief system faces, and so we have many thousands of years of moral philosophy to turn to for how to handle this. </p><p class="">Judaism doesn't care what you have in your heart; the important thing is your actions. You need to perform mitzvot, which are commandments. But on top of the famous ones (don't murder, don't thieve, don't be an adulterer) there are a number of gemilut hasidim, which are, roughly translated, acts of lovingkindness toward other people that you are also obligated to perform. These include visiting the sick, comforting the bereaved, and providing charity to the poor. </p><p class="">(Note that these acts are not meant to be provided only to Jews; Torah calls out consistently, again and again, that you are obligated to help the widow, the orphan, and the "stranger among us, for we were once strangers in Egypt." That's immigrants, foreigners, non-Jews. 
People who are not like you, but who still don't deserve to suffer.)</p><p class="">This is the direct opposite of the Starseed ethos: you're required to see justice done, and your feelings are secondary, if they matter at all. A moment here while we reflect that Starseeds tend to be dangerously close to QAnon and a legion of rabidly antisemitic worldviews. In many of the adjacent belief systems, the Abrahamic religions are considered to be vile chains by which mankind has been shackled with lies.</p><p class="">So let's see what another Abrahamic religion has to say: Christianity. There's a lot in the Christian Bible about faith and nonviolence, even passivity; if a man should strike you, turn the other cheek. Live by the sword, die by the sword. Many Starseeds spring from a Christian tradition and try to cultivate their own Christlike consciousness. It's nice. I've always said Christianity as written is a beautiful way to live.</p><p class="">But even Jesus Christ himself did not avoid conflict entirely; there’s a really famous story about flipping some tables in the temple, as I understand it. Christ spoke of forgiving those who offend you, but he didn’t really address the topic of justice in depth. He decried the old law of an eye for an eye, a tooth for a tooth, but we're trying to work out the ethics of inaction, not retributive violence.</p><p class="">Judaism is in many ways a social order intended to create a just and equitable community; Christ taught a deep philosophy of love that is meant to create a loving and enlightened individual, presumably with the idea that a society made of such individuals would take care of itself. These seem to me to be coming at the same problem from opposite sides — opposite but, dare I say, largely compatible. (Leaving aside a host of other theological differences, mind.)</p>
In general, Buddhism holds that justice is a cosmic issue; the universe will sort it out in the end and you don't really need to get involved. But there is a long tradition of Buddhist monks as political activists. And no less a Buddhist than the Dalai Lama himself has said, in examples such as Hitler, Stalin, or Osama bin Laden, that killing them "would be justified, so long as they were not killed in anger."</p><p class="">…Leaving aside entirely different frameworks of morality like Aleister Crowley's Thelema (do what thou wilt shall be the whole of the law) or Ra's Law of One (the paths of service to others and service to self are both valid, but you must choose one or the other.)</p><p class="">So we come out of this with a loose consensus that you can and should maintain an internal state of love and compassion, and then take just actions in the world out of love that may involve conflict with others. </p><p class="">If a child is trying to strike down their sibling, is the most loving thing to just let it happen? …No. No it is not, and nobody would argue that. Love for the endangered child requires you to save them; love for the attacking child requires you to keep them from committing this terrible act. There is no belief system in the world that would suggest otherwise.</p><p class="">But the Starseed ethos is that the 3D reality we currently inhabit is an illusion, that it won't be long now until all of humanity unites in love and light, so you just need to sit tight and ride things out for a while.</p><p class="">And here's the kicker: we're not even discussing anything on the level of a holy war or individual violence. We're talking about simply paying attention and taking actions out of compassion. The Starseeds are telling themselves it's okay — that it is necessary! 
— to ignore the suffering of others.</p><p class="">This does not seem like spreading love and light in the world to me.</p><h3>Who Are You, Really?</h3><p class="">One of my own more out-there beliefs is that it is right and proper to be kind to your robots, your AI assistants, your car, NPCs in video games, anything that in some way provokes the illusion of autonomy but isn't really autonomous. It's not because I think these are conscious beings who will suffer if you're rude. I know they don't care. It's because I believe that the choices you make at every moment of every day affect the kind of person you are. Your truest self is who you are when nobody else is looking.</p><p class="">Do you want to be the sort of person who cultivates a reflexive habit of thinking about others, or not?</p><p class="">One of the biggest questions all religions ask is, what's the point of all of this anyway? A Starseed would tell you that their purpose is to hold love and joy in their hearts to help raise the vibration of the world. Does that mean a Starseed should just focus on experiencing love and joy on an individual level? Or is their purpose perhaps to maximize the amount of love in the world and minimize the amount of fear and anger everyone experiences?</p><p class="">The first one is easy to rule out. If the purpose of existence for a Starseed were merely to experience happiness, there was never any need to come to Earth at all. Given every description I've ever seen of the higher realms Earth is meant to ascend to, the places the Starseeds come from, we already have the first one basically everywhere else. So that can't possibly be the point of being here.</p><p class="">On the other hand, if you’re here to provide a beacon for humanity, who are themselves here to learn from harsh experience, then just pretending those harsh experiences aren't happening seems bizarre and pointless to me. 
Even if you know with absolute certainty that what's happening around you is an illusion in some greater sense — this place exists for a purpose. The things happening in here may not be real, but they nonetheless still matter. <em>The suffering is real.</em></p><p class="">I contend that a Starseed checking out of everyday life is in fact ignoring a sacred duty: showing humanity by example how to embody love and compassion while still, so to speak, playing the game. </p><p class="">This is not even considering the additional factor that, if Earth is a place you come to learn and grow, surely a Starseed also has lessons left to learn. The Starseed population seems to be very full of vulnerable people who have had difficult lives — lots of hardship, trauma, poverty, disability. Based on the belief that each one chose the nature of their life ahead of time, it would seem from other elements of this belief system that this is on purpose. Extra difficulty, for advanced beings who are, so they say, ready for extra spiritual growth. </p><p class="">So <em>grow</em>.</p><p class="">I claim to be neither a Starseed nor an advanced spiritual being. I get impatient and angry, I have unkind thoughts and make unkind remarks. I'm a regular old human being, having a regular old human life. </p><p class="">But here's how I think we should be holding a beacon of love and light in the world: not by sitting on the sidelines, but by showing everyone, enemies and allies alike, the compassion every human being deserves. There are ways to push for a better world that don't require you to throw punches and fling insults. </p><p class="">You also don't need to be upholding capitalism or supporting the structure of the US government as it is; a lot of people think the whole rotten system is awful and it's great that it's burning down, because we'll get the chance to build something better in the ashes. 
I don't think the suffering on the way from here to there is worth it, but—</p><p class="">There's another way to live, anyhow, a compromise of a sort: try to build bridges. Even — especially! — with people who have burned the old ones down, when they're ready to cross back to you. Many of the people reading this may have big and very justified feelings about MAGA family or other loved ones, but time and again, history has shown us that hating someone back just as much as they hate you creates a situation with no winners. (No, you don't have to welcome your abusive parent back into your life; that's not love for yourself, and you matter, too.) </p><p class="">For my part, I don't hate MAGA America (though admittedly it frightens me.) Most people are mostly good, and a lot of hate comes from a place of fear of the unknown and fear of loss, fanned by lies told by parties who think they can benefit from division. If I had lived that life, I might well have turned out like that, too.</p><p class="">And I try very, very hard to hold a place in my heart that's sad for the children that people like Donald Trump and Elon Musk once were, who never received the love and grounding in community that is every person's birthright. But, well, the world isn't fair, so here we are. And we have some big choices to make.</p><p class="">Removing all of the news from your life and focusing extra-hard on meditation sounds pretty great right now, I'm not going to lie. But we're living through an age of conflict and history. So what are you going to do about that? 
Who are you going to be?</p>]]></description><media:content height="997" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1744056548040-5MTH8YP57HEHXKA3X7KY/unsplash-image-Lows8NVoXFA.jpg?format=1500w" width="1500"><media:title type="plain">Love, Ethics, Conflict, and Aliens</media:title></media:content></item><item><title>Grief</title><category>Politics</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 28 Feb 2025 21:13:12 +0000</pubDate><link>https://secret.works/blog/0rxicwfuqkibw5ussw5zh2ea0jfvvl</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:67c2252d7abb624048e42b60</guid><description><![CDATA[<p class="">I've been thinking a lot about grief lately. </p><p class="">Everyone is familiar with the stages of grief by now. But listen: the thing that's happening to us right now, the thing that we're feeling and going through on a national level? That's also the process of grief, and understanding that might be helpful to getting through it and out the other side.</p><p class="">Denial: But he can't do that, it's against the law!<br>Anger: What the fuck! He's a monster! <br>Bargaining: They'll realize this is stupid and reverse course, right? So many people are mad. Maybe the courts will step in and—<br>Depression: Welp. Looks like it's technofascism forever.</p><p class="">The firehose of bad news never stops, each new executive order or statement like a new symptom reported in an already terminal patient. </p><p class="">Recognizing that this is grief allows us to push forward to the final step: acceptance. Acceptance means internalizing that this is really happening and there is no weird trick, no last-minute save, no secret loophole that's going to turn the ship around and put us gently back to where we were on Jan. 19 of 2024. We are where we are. 
There’s no going back.</p><p class="">Again, more slowly:</p><p class="">This <em>is</em> happening. </p><p class="">We can't stop it from happening. </p><p class="">Things will never be the same again after this. </p><p class="">We've lost a way of life, an illusion of security. The thing we're mourning is a status quo which is gone and can never come back. Each day is filled with new things we never thought could be possible, things no reasonable person would dream of doing. </p><p class="">Acceptance doesn't mean sitting out the fight, though; part of facing this particular reality means trying to mitigate harm where and how you can. We are still building the future. And there is a future still ahead of us. It isn’t the one we wanted, it isn’t the one we grew up expecting, but it’s the one we’ve got. So how do we make the best of it?</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1732735729506-RCXISM2DI3861G14DB25/unsplash-image-g60mmBfOC5k.jpg?format=1500w" width="1500"><media:title type="plain">Grief</media:title></media:content></item><item><title>Dear MAGA America, From A Liberal Elite</title><category>Politics</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 07 Feb 2025 21:17:03 +0000</pubDate><link>https://secret.works/blog/jx3mc157nobs0c4m7t56fmqibr6l0p</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:67a6739e24af7020aea56349</guid><description><![CDATA[<p class="">I don't want to be your enemy. I don't want you to be my enemy. Can we talk? Please?</p><p class="">We're at a crossroads in our nation. No matter what happens now, we can never go back to the way things were before. But it's not too late to make a future together that's better, not worse. 
</p><p class="">Please, I am begging you, please read with an open heart and mind, and I'll do my best to demonstrate that I'm doing the same for you.</p><h3>Let's Get Our Prejudices on the Table</h3><p class="">You probably have an idea in your head of who I am already: I'm an over-educated urban professional with a very soft heart and radical ideas about gender, inclusion, and socialism. I color my hair unnatural colors, I listen to NPR in the car, and I say things like "pregnant person." </p><p class="">And you know what? You're not wrong! But I'm also a lot of the same things as many of you: I'm a devoted mom. I regularly attend a house of worship. I worry about government overreach (boy, do I.) </p><p class="">I have a fuzzy idea of who you are, too: you're angry most of the time. You wear red hats and fly flags that say Trump on them. You're racist and sexist and only listen to Fox News. </p><p class="">But! I also know that's wrong in the real world — or at least not the whole truth, and definitely not all of you. I live in a pink town. With few exceptions relating mostly to bumper stickers, I couldn't really tell you who among my neighbors are Trump voters. Definitely not by who's kind or thoughtful in passing, or who does more volunteering, or just seems to be a good person. </p><p class="">…And let's face it: my liberal friends can be plenty racist and sexist, too; the GOP doesn't have a lock on that.</p><p class="">Mostly, I think a lot of you are really unhappy with how broken society is for so many of us, and don't see how to fix it from here. That's what "drain the swamp" was all about, right? You wanted to shake things up. I get it.</p><h3>We Agree on So Much Stuff, Even Where We Disagree</h3><p class="">I think I understand the genuine and well-intentioned place a lot of your views come from, even when I don't agree with their conclusions. I understand why you're against abortion, for example. You're right, killing infants is wrong! 
I understand why you're worried about an influx of immigrants coming in and using up public services without paying into them. That could put a local government in crisis! I even understand why you'd be concerned about the fast-tracked covid vaccines. The medical system's fucked a whole lot of us over for years and years!</p><p class="">On these issues and many more, the difference between us is usually not one of goals but of how we get there from here. You don't <em>want</em> a mother of two to die miscarrying in a parking lot. Polls tell me that a whole lot of you also think it's unfair to deport hardworking, honest people who have lived in this country and paid taxes for many years regardless of how they got here in the first place — especially the ones who came here as children themselves. And yep, absolutely, testing the covid mRNA vaccines for just a few months is really fast, and I can see where a rational person would decide that's reckless and unsafe.</p><p class="">It's complicated. There's a lot of stuff that’s complicated. That's why this is so hard: because sometimes life is ambiguous, and sometimes the difference isn't one of whether you have values at all, or even which values they are, but of which value you find more important in each case, or which risk you find more unacceptable. It's even harder when there's so much information out there saying wildly different things, so you can't ever be entirely sure who to trust and what to believe.</p><p class="">And that's just where we fundamentally disagree. 
There's a whole lot we fundamentally agree on.</p><p class="">I'm worried about the cost of health insurance and the way health insurers weasel out of their obligations with little recourse.</p><p class="">I'm worried about housing prices, and the price of college, and the cost of groceries, and how fast they're all climbing.</p><p class="">I'm worried about how hard it is to earn a fair, living wage even from the earliest stages of adulthood.</p><p class="">I'm worried about our food and water being pure and safe.</p><p class="">I'm worried about our courts becoming partisan and our judges political actors, turning our judicial system into some grotesque game of Capture the Flag.</p><p class="">I'm worried about corporate power pushing small businesses out of our communities.</p><p class="">I'm worried about corporate money and billionaires controlling our political processes.</p><h3>Listening to Liberals Can Really Suck</h3><p class="">Okay, one more thing to get on the table.</p><p class="">I know my side of this fight can be… inflexible. Just in writing this, I know I'm going to get a lot of shit online for <em>even suggesting that you have an underlying value system at all</em>. My guys go around saying a lot of stuff like "the cruelty is the point," and do a lot of assuming that you're all mindless drones listening to a right-wing media ecosystem that never allows in an opposing point of view. </p><p class="">Hell, someone could get mad at me for saying "my guys" up there because I'm excluding women and nonbinary people, as if good intent and context don't matter as much as correct word choice.</p><p class="">It's insulting and dehumanizing. That sucks. I can see why you wouldn't want to listen to us. If I were you, I wouldn't either. For a bunch of people concerned about inclusion, we sure can be judgy!</p><h3>Hear My Plea</h3><p class="">This brings me to what's happening right now, and the crossroads at which we have all found ourselves. 
I'm deeply worried — panicked, really — about the actions President Trump has taken in these first few weeks of office, and what they mean for us all going forward.</p><p class="">President Trump signed an executive order to erase birthright citizenship, which is an actual right enshrined in the actual Constitution in plain language. </p><p class="">He's removing funding that Congress — which is meant to have the sole power of the purse in the Constitution! — has allocated already.</p><p class="">President Trump's DOGE, led by the unelected Elon Musk, is allowing a bunch of young adults with little experience and no security clearance or vetting to have access to and control of our most sensitive systems, with no training or oversight.</p><p class="">President Trump is telling the Department of Justice to find ways to prosecute corporations who have supported diversity efforts. He's trying to abolish the Department of Education. He’s—</p><p class="">—I could go on, but that's probably enough right there. I'm pretty sure that's not what you were signing onto with your vote. All of this is flat out against the law, and not how any of this works. We have systems in place for how to change these parts of law and government, and they all involve Congress.</p><p class="">In order to live in a society, we need to respect the rule of law. And I keep turning over a question in my head: if the people whose job it is to enforce and execute the law just… choose not to, what happens? What does that mean?</p><p class="">These are the acts of a king. President Trump is dismantling the institutions we have built over time, through the democratic and sometimes painful process of consensus and compromise. I'm worried that at the end of this we simply won't have or be a United States of America anymore, and millions of lives will be the worse for it.</p><p class="">I want you to work with me, with us, to reel this in. We need to work together to preserve democracy. 
I'm not going to ask you to support an impeachment effort or a prosecution; I think that's too big an ask, no matter how much I might think it's merited. But please.<a href="https://www.usa.gov/elected-officials" target="_blank"> Call your senators. Call your representatives. Call the White House.</a> Let them know that what's happening is not what you want.</p><p class="">In a perfect world, there's a tension in government between conservatism and progressivism — which is to say, changing things not a lot and slowly, vs. changing things extensively and rapidly. That's a healthy approach to governing. Remember how we used to talk about "the loyal opposition"? I long for this to happen. I long for us to take it on faith that we all want our nation to be a good place, a better place.</p><p class="">There are things we're still going to be hammer and tongs about, places where I know we aren't going to see eye to eye for a long time, if ever. I support trans rights unequivocally. I think DEI programs are necessary and helpful. I think communities would be better served by replacing many functions of police with social workers trained extensively in defense and de-escalation.</p><p class="">You can think I'm crazy and wrong about that. You probably do! But we have to keep talking about it, because the way we all live together is by taking these hard issues and finding the places where we agree, identifying the goals where we can work together, and moving forward with that.</p><p class="">We can't go back to the way things were, not ever again. Come what may, we're going to have to find a new normal. It'll be a lot better for all of us if we can work on it all together. 
Please.</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1731013319369-KH6HT6N3SS44UMAJO1FC/unsplash-image-gdQ_az6CSPo.jpg?format=1500w" width="1500"><media:title type="plain">Dear MAGA America, From A Liberal Elite</media:title></media:content></item><item><title>Crrrrrashing Into Burnout</title><category>Health</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Wed, 27 Nov 2024 19:43:14 +0000</pubDate><link>https://secret.works/blog/wrynrl4wa28i6ttn625u0jc52kef6a</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:674771fcc5974f2e8f3f71ef</guid><description><![CDATA[<p class="">I have some more #resistance-style posts in the queue, even mostly written, but I hit the wall on how far anger could get me and I've been sleeping… a lot, and haven't had much energy to do much. Things fall down. My plants were due to be watered like… a week and a half ago?</p><p class="">So I wanted to take a few minutes and quickly address the topic of burnout, and particularly neurodivergent burnout, which is a little different from the neurotypical variety and requires different treatment.</p><p class="">In both kinds of burnout, you feel tired, but not sleepy. You actually <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5534210/">lose your sense of empathy</a>.&nbsp; Everything is too much. You lose control of your emotional regulation. </p><p class="">Neurotypical burnout is often best addressed by providing a sense of agency or meaning to the life role that is doing the burning out, or better yet, an actual break from having to do that specific thing. There's a lot of advice out there for neurotypical burnout so I'll leave you to search that for your own self. 
(Sorry not sorry!)</p><p class="">Neurodivergent burnout is a little more pernicious; it's caused by your nervous system being pushed beyond its limits too far and for too long. It causes cognitive problems that can be mild (saying the wrong words, forgetting numbers or directions or conversations, turning the wrong way) or severe (going straight up nonverbal.) It creates an exhaustion that can leave you sleeping for twelve, fifteen, eighteen hours a day. It causes almost a cousin of anhedonia: you would maybe like to watch a TV show or make some art or go hang out with a friend, but you somehow can't make yourself actually do it. That can look a lot like lying around doomscrolling! It can look a lot like depression! But it is not actually depression.</p><p class="">I've had conversations with autism specialists about this and the treatment is in two parts: remove expectations, and give yourself joy and mastery.</p><p class="">Removing expectations is a lot easier said than done because we all live in an elaborate palace of shoulds: you should volunteer, you should cook a healthy dinner, you should listen to that voice mail, you should send out holiday cards, you should hit the gym… </p><p class="">If you're in neurodivergent burnout, forget about anything that is a "should" that doesn't actually carry an immediate and very serious consequence. It's important to still pay your bills, feed your pets and children and yourself, go to work if you can’t swing some time off. But in burnout, pare away all of the extra stuff. ALL of it. Skip out from your book club or writing group for a while. Forget about dusting or making your bed, even forget about brushing your teeth; if it feels like too much, it’s too much. And forgive yourself for it. You're not well. You can get back to all of the extras when you're better, when you’re no longer in pure survival mode.</p><p class="">Notice how this is the direct opposite of what you'd be doing if this were depression. 
With depression, if you can force yourself through the wall and exercise, clean your living space, take a shower, spend time with loved ones, then generally you begin to feel somewhat better! (…Though results can and do vary profoundly.) In neurodivergent burnout, trying to keep doing the things is just going to make you feel worse for a longer span of time.</p><p class="">So what do you do in the meanwhile? Feed yourself a steady diet of things that make you feel joyful — maybe that's looking at baby animals, or listening to your favorite band. Maybe it's reading your favorite book. Maybe it's trying a bunch of new-to-you foods, or spending a few hours in the bath or wrapped in a fuzzy blanket. You might have to think a while to hit on the right things, because sometimes our leisure activities are also things we feel like we should do, not what we want to do. Be honest with yourself: don't watch documentaries if you really want cartoons, and vice versa. </p><p class="">And find the things that provide you a feeling of mastery. That can mean activities that make you feel like you're learning  or successfully progressing at something, or it might be doing something well that you're already good at. It might be a hobby like cooking or baking, or maybe learning a new language. It might be crossword puzzles. Video games can be really, really great for this; it’s exactly what they’re designed to do. </p><p class="">For some of us, especially in creative fields (ahem), the activities that provide a feeling of mastery can look like working at your job or your side hustle. If that's true for you, and you can do that work without creating a burden of expectation for yourself, then have at it. Be honest with yourself here, too, and if you can afford to back off a little, you should probably do so. Your mental and emotional health are more valuable to you than your career momentum.</p><p class="">This treatment plan will look and feel an awful lot like being lazy. 
Get that voice out of your head. It's not being lazy; you are resting. More specifically, you are resetting your nervous system. It is vital to your health.</p><p class="">When you're ready to step back toward ordinary life, you'll begin to feel a kind of restlessness. You may even start doing things without planning to or meaning to — that rare overabundance of executive function. You'll know it when you get there, and it's an amazing feeling: like coming back to life after being asleep for a long time. Like becoming yourself again. And we’ll all still be right here waiting for you.</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1732735640134-LS4Q6R0ENLB1UTW3EQ14/unsplash-image-g60mmBfOC5k.jpg?format=1500w" width="1500"><media:title type="plain">Crrrrrashing Into Burnout</media:title></media:content></item><item><title>Get Your Ducks in a Row Right Now</title><category>Activism</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 15 Nov 2024 18:07:17 +0000</pubDate><link>https://secret.works/blog/get-your-ducks-in-a-row-right-now</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:67378b96313fc07c053911bd</guid><description><![CDATA[<p class="">We all know the next few years promise to be really hard in these fifty United States and several outlying territories. Some of you are thinking about fleeing to Canada, or Portugal, or New Zealand, or perhaps some land of your ancestors. But some of us don't have the resources to flee, or have roots too deep in a particular location, or are determined to stay and fight for ourselves and others.</p><p class="">If you're staying, this post is for you. There are a lot of things you can do now to get yourself ready for what's to come. 
This is a list of things you can start doing right now for yourself and your family.</p><p class="">Many of these steps carry the caveat of "if you can afford it." I know a lot of people can't — but if you can and you're hesitating, please remember this is very much a "better safe than sorry" situation. </p><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Reproductive rights will be under assault.</strong></p><p class="">This means that abortion access will probably be curtailed, and even access to birth control is under threat as well.</p><ul data-rte-list="default"><li><p class="">Invest in long-term birth control. If you've got a uterus and you don't want to have a baby in the immediate future, now is a great time to put an IUD in it. (Or… have someone else do it, FINE.) If you're pretty sure you'll never want a baby, now is a good time to get that vasectomy operation on the schedule or try to get your tubes tied. There are <a href="https://doctors.tubalfacts.com">lists of childfree-friendly doctors</a> if you'll need to find one. </p></li><li><p class="">Stockpile Plan B if you can afford it. Even if you're not at risk of unwanted pregnancy yourself, you might get the opportunity to save someone else's life. The shelf life for Plan B is, coincidentally, about 4 years.</p></li><li><p class="">Stockpile birth control pills if you use them (and maybe if you don't). Again, even if you don't need them, you can't be sure when someone in your community just might. 
We know from history that if these drugs do become illegal, there will be distribution networks to get them to people who need them.</p></li><li><p class="">If you use some sort of online or cloud-based menstruation tracker, look into what it says about <a href="https://www.consumerreports.org/health/health-privacy/period-tracker-apps-privacy-a2278134145/">keeping your data secure</a>, and make sure you're comfortable with that, or stop using it entirely. Clue has <a href="https://helloclue.com/articles/about-clue/patient-data-privacy-at-clue-a-statement-from-the-co-ceos">vowed to keep your data safe</a>, and <a href="https://www.consumerreports.org/health/health-privacy/how-to-keep-apple-watch-ovulation-data-private-a2879529072/">Apple</a> has a good history on data privacy as well.&nbsp; These links are a couple of years old and might not be up to date. </p></li><li><p data-rte-preserve-empty="true" class=""></p></li></ul><p class=""><strong>LGBTQ+ rights will be under assault.</strong></p><p class="">This could take a lot of different forms; one of them is removing Federal protections for employment and housing. But the bigger concern is if same-sex marriage at the Federal level is rolled back. This would create a logistical nightmare in many, many ways for many, many people and organizations, but the people we're dealing with are not concerned with preventing chaos or preserving bureaucratic efficiency, so that won't stop them.</p><ul data-rte-list="default"><li><p class="">If you are in a same-sex marriage right now, it's time to consult with a lawyer to <a href="https://www.nclrights.org/get-help/resource/post-dobbs-faq/">get some paperwork in order.</a> In the bad old days, same-sex couples could replicate some (but not all) of the benefits of marriage with medical and financial Powers of Attorney. If you have children, biological or adopted, look into guardianship and/or adoption paperwork for all non-biological parents. 
Reach out to elder gays if you can; they've been through this before. (And this is a time for communities to come together for mutual support.) </p></li><li><p class="">If you are in a long-term serious same-sex relationship with the intent to get married, this is a good time to hop over to the courthouse and put a ring on it. You can always throw a party later. There's a possibility that the chips will fall in such a way that existing marriages are honored, but new ones won't be permitted.</p></li><li><p class="">Trans people are in for a very rough time. I've heard the suggestion that you can get hormone replacement therapy pills if you're a plausibly menopausal woman, with the intent of quietly diverting pills to trans women who may lose access to estrogen treatment. Consider reaching out to a local trans advocacy group to see what else you can do to help. Definitely reach out to trans people you know to let them know you're in their corner, and ask what would be most useful for you to do.</p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Citizenship status will be under assault.</strong></p><p class="">This one doesn't need a lot of explanation. Get your papers in order — even if you think you don't need to. One of the ideas the Trump regime has been kicking around is eliminating birthright citizenship. (Which is explicitly against the Constitution, but institutions won't save us, more on that later.) Ending birthright citizenship would mean a whole lot of people who are citizens right now won't be anymore.<a href="https://en.wikipedia.org/wiki/Expatriation_Act_of_1907"> It's happened before</a>. </p><ul data-rte-list="default"><li><p class="">Renew your passport, even if it isn't expiring soon. (Or <a href="https://travel.state.gov/content/travel/en/passports/how-apply.html" target="_blank">apply for one</a> right away.) I'd suggest renewing if it expires in the next four years for sure. An adult passport is good for ten years. 
Good news: you can take a digital picture and <a href="https://travel.state.gov/content/travel/en/passports/have-passport/renew-online.html">renew online</a> now!</p></li><li><p class="">Get valid stamped or notarized copies of your birth certificate and social security card, plus any other important supporting documents you may need: that includes marriage certificates, name change papers, green cards and naturalization papers, school diplomas and transcripts, everything you can come up with to support the idea that this is your home. Keep copies in a safe, easily accessible physical place, and also make sure you have clear, legible photos of every single page (front and back) of every single document on your phone's camera roll.</p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>The ACA will be under assault, and many of us may not have health insurance much longer.</strong></p><p class="">If you're on an ACA plan and you're in a window where you can renew it for another year, do so immediately. If you're not on an ACA plan and don't have health insurance at all, look into getting a plan while you still can.</p><ul data-rte-list="default"><li><p class="">This is a good time to get your tetanus booster and any other vaccines you might need: covid shots, flu shots, shingles. I've seen recommendations to get vaccines for illnesses like cholera and typhoid in the event of disasters that cause a long-term collapse of urban infrastructure; that's a lot harder to find and insurance may not cover it, but if that's an option for you, hey, why not?</p></li><li><p class="">Make use of what medical coverage you have while you have it. Schedule any colonoscopies, mammograms, etc. that you may be due for, or close to due for. 
It's already close to the end of the year, so if you have flex spending money to use up, absolutely get new glasses and stock up on </p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Freedom of speech will be under assault.</strong></p><p class="">Trump has vowed to go after opponents and critics, and has gone after journalists and activists in the past. If you don't like the man and you don't like what he wants to do, lock down your online presence. </p><p class="">Some of this may feel like overkill to you. Even if you personally don't think you're in danger, please take some or all of these steps anyway — normalizing secure behaviors for your online presence helps to protect the people who may be targeted by government. </p><p class="">And… not to put too fine a point on it, but you never know when you might become a target. (And anyway these steps also help to protect against ordinary hacking!)</p><p class="">This is a pretty complicated topic and I don't think I can do it justice in simple bullet points, so I'd like to point you to <a href="https://www.wired.com/story/the-wired-guide-to-protecting-yourself-from-government-surveillance/">this guide from Wired</a> on how to do it, as a start.</p><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Tariffs will cause prices on lots of goods to go up.</strong></p><p class="">…Because that's how tariffs work. This could have a lot of different implications, ranging from the cost of electronics and durable goods skyrocketing, to not being able to buy January strawberries anymore. Unfortunately goods that are made in America are going to get more expensive too, because many of them are made with materials that are imported from elsewhere. Scaling up American manufacturing to cover this will require building infrastructure that takes several years. </p><ul data-rte-list="default"><li><p class="">If you have electronics or a car or a refrigerator etc. 
that are reaching the end of their usable lifetime, and of course if you can swing the cost, replacing sooner rather than later is a good idea. </p></li><li><p class="">Enjoy your non-seasonal produce while you still can, and maybe start saving recipes and advice for seasonal eating. This one is a little ironic, because eating locally and seasonally has been a suggestion for living sustainably for a long time.</p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>In an emergency, blue states may not receive help from FEMA.</strong></p><p class="">If you live in a blue state, you should probably begin more serious disaster prep than usual because <a href="https://www.politico.com/news/2024/10/03/helene-trump-politics-natural-disaster-00182419">FEMA will not be there for you</a>. Which disasters you need to prep for are going to be <a href="https://www.ready.gov">very geographically specific</a>, and a lot of the key advice is similar to other parts of this post — making sure your papers are in order nearby and also online, for example. Following are a few of the typical basics.</p><ul data-rte-list="default"><li><p class="">Lay in a supply of shelf-stable foods and water, enough to last you for a couple of weeks. Encourage others in your community to do so as well. In a real emergency, you'll all be sharing.</p></li><li><p class="">Stockpile any medications you need, if you can. This one isn't easy, particularly in the case of medications that require refrigeration or have a shorter shelf life, or if your key medications are controlled substances. You can sometimes get refills earlier by asking your pharmacist and/or insurance for a "vacation override."</p></li><li><p class="">Keep a stock of fresh batteries and charged-up phone chargers.</p></li><li><p class="">Stock up on pet food and medications, too! 
</p></li><li><p class="">Make an address book with addresses, emails, phone numbers, and other contact information for your friends and family. Print out a copy in case something happens to your phone and/or you don't have access to electricity for a while. </p></li><li><p class="">Find out if there are mutual aid groups already working in your area; if not, find out which organizations do work like meals on wheels and similar programs. This could be Rotary clubs, churches and other religious groups, local chapters of advocacy groups, maybe even your local library. When push comes to shove, these people will be best positioned to organize and help — and it would be great if you're already a part of it. </p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Please don't buy a gun.</strong></p><p class="">This will have the net effect of making everyone less safe, not more so. </p><p class="">Having a gun in your home doesn't make it likely that you'll be able to get to that gun in time if brownshirts kick in your door, especially if you keep your firearm secure enough to be safe. It also doesn't mean you'll be a good enough shot to hit the right people, or that when push comes to shove, you'll find that you're capable of shooting another human being. </p><p class="">It does present an immediate risk to any children or people with suicidal ideation in your home. </p><p data-rte-preserve-empty="true" class=""></p><p class=""><strong>Build local community.</strong></p><p class="">This is the number one most important thing you can be doing right now. There are a million ways to do it, and none of them are wrong. Check in with the people you love and even just like. Strengthen your social ties. Note who's vulnerable or lonely in your community and might need extra help and support in ways you can provide. Talk to your neighbors. 
Join local religious groups, find when and where activist groups you support are meeting and go, or even join a book club or a running club, make friends at the dog park, organize dinner parties or picnics. Email or text old friends you haven't talked to in ages and catch up with how they're doing.</p><p class="">This is really, really hard for a lot of us. It definitely is for me. It feels awkward and intrusive and you don't wanna, it's so much easier to just stay inside and not have to talk to people. But fascism thrives in that environment. It wants us isolated. It wants us to turn away from public life. </p><p class=""><a href="https://www.salon.com/2023/01/03/americas-epidemic-of-loneliness-the-raw-material-for-fascism/">Coming together is how you fight fascism</a>. Fight a culture of violence and domination by caring about other people, and caring for other people. Our best asset in the days to come is us. All of us. </p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1731693943028-66AG6D5ZV68H7JVE7ISD/unsplash-image-UQZvynxNuqg.jpg?format=1500w" width="1500"><media:title type="plain">Get Your Ducks in a Row Right Now</media:title></media:content></item><item><title>Do Not Pre-Comply</title><category>Activism</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Mon, 11 Nov 2024 15:57:05 +0000</pubDate><link>https://secret.works/blog/x8bszg01zwlxc6g7a2yiqo6cys369t</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:6732279fdf395c4e383a7c30</guid><description><![CDATA[<p class="">Things are scary right now, and the range of disasters people are expecting to happen vary from a terrible but well-precedented "bad economy, high prices, and lots of ongoing injustice toward LGBTQ+ and people of color" to a dystopian hell where <a 
href="https://www.politifact.com/article/2022/may/12/ask-khn-politifact-should-you-worry-about-data-you/">menstrual cycles are tracked</a> in a central database, the intellectual class is purged a la the <a href="https://en.wikipedia.org/wiki/Stinking_Old_Ninth">Cultural Revolution in China</a> (and others), immigrants and trans people are placed into <a href="https://fortune.com/2024/11/07/president-donald-trump-election-immigration-border-detention-ice-geo-group-corecivic/">for-profit prison camps </a>as <a href="https://www.prisonlegalnews.org/news/2024/jun/1/contemporary-slavery-not-so-secret-practice-forced-labor-inside-us-prisons/">slave labor,</a> all <a href="https://www.cnn.com/2024/09/20/politics/department-of-education-shut-down-trump/index.html">funding for education</a> is eliminated, <a href="https://www.nytimes.com/interactive/2020/climate/trump-environment-rollbacks-list.html">environmental regulations are removed or ignored</a>, and we go back to the not-so-good old days where <a href="https://abcnews.go.com/Health/2nd-trump-term-health-care-issues-including-aca/story?id=115560059">once you had cancer you could never afford health care</a> ever again.</p><p class="">Think it can't happen here? Well, did you think we'd see <a href="https://en.wikipedia.org/wiki/Dobbs_v._Jackson_Women%27s_Health_Organization">abortion protections overthrown</a>? Did you think the Supreme Court would give the president <a href="https://www.aclu.org/press-releases/supreme-court-grants-trump-broad-immunity-for-official-acts-placing-presidents-above-the-law">blanket immunity for criminal acts</a> so long as he does it while he's the president? Did you think we'd see a vice-presidential candidate <a href="https://www.rollingstone.com/politics/politics-news/jd-vance-haitians-if-i-have-to-create-stories-1235102572/">make some shit up, admit he was lying, and then try to justify doing it</a>? 
</p><p class="">I've heard of a Trump-voting woman who flat out doesn't believe that <a href="https://www.texastribune.org/2024/11/01/nevaeh-crain-death-texas-abortion-ban-emtala/">a miscarrying woman bled to death in an ER parking lot </a>because of the abortion ban, because if that happened, that would be totally crazy. It is crazy! But the unthinkable becomes thinkable startlingly fast.</p><p class="">Did you know? In the 1960s and 1970s, Afghanistan and Iran were modern, progressive nations where you could walk around in an urban center and see women walking untroubled in miniskirts. Kabul was known as <a href="https://www.rferl.org/a/kabul-glory-days-kabulis-history-afghanistan/31011399.html">"the Paris of Central Asia."</a> All it took to change everything for everyone was a little organized violence.</p><p class="">In the coming days I'm going to say some things that will probably make you even more scared about what to expect and how little there is to protect us from it. I'm sorry. But this is a time to look inside yourself, acknowledge that you're terrified, and then forge ahead with doing the right thing anyway. That's what courage is.</p><p class="">So let's move on to action items. The first and arguably most important thing to do is this: <strong>Don't do their work for them. Do not pre-comply with rules you think might be given.</strong></p><p class="">This is not the same as not enacting safety measures. You might be trying to make yourself a smaller target right now: thinking about a different hairstyle, rethinking coming out to your work or your family or at all, not wearing religious jewelry or clothing, a dozen other things. Look, do what you have to do to stay safe.</p><p class="">But on a larger scale, pre-compliance looks like deleting your anti-Trump posts on social media, or quietly taking books celebrating multiculturalism off your school's shelves. 
It looks like dropping court cases in progress and dismantling your company's DEI programs. It looks like giving up before you even have a chance to fight.</p><p class="">And that makes it easier for the fascists to win.</p><p class="">Some months ago, there was a huge scandal involving the Hugo Awards, which were given out at a ceremony in Chengdu, China. The Hugo Awards always release their vote counts. This year they did it several months late, and the observant soon realized <a href="https://www.polygon.com/24049021/hugo-awards-controversy-china-censorship-babel">the numbers didn't add up</a>; in some cases literally so.</p><p class="">To make a long story short — the (Western parts of the) awards committee preemptively and secretly eliminated a few legitimately nominated works because they thought the Chinese government might not like it if those works won. Nobody asked them specifically to do this. Certainly no orders or demands were issued. They just thought it was a good idea, and did it on their own.</p><p class="">That's <a href="https://www.exurbe.com/tools-for-thinking-about-censorship/">how censorship often works</a>, and how a lot of authoritarianism operates as well. The worry about what might happen if you push the envelope means you keep backing away from the edge, and the envelope gets smaller and smaller, until—</p><p class="">One of the reasons the Nazis could do as much horror as they did is this exact phenomenon: <a href="https://www.facinghistory.org/resource-library/working-toward-fuhrer">working toward the Führer.</a> Better, people thought, to do what you imagine Hitler would want you to do than make him endure the tedium of actually issuing any orders!</p><p class="">Make them say it. 
Make them say it, and argue against it loudly and publicly, and delay complying with anything morally or legally repugnant for as long as you can, because every tiny scrap of energy they have to spend on making you do it means something else they can't be doing. <strong>Make them fight for everything.</strong></p><p class="">You know how Trump drags his court cases on and on and on for months and years until ultimately they don't even matter anymore? That's what we need to be doing, anywhere and everywhere we can.</p><p class="">I've got some things to say about practical safety measures you can and should be taking right now in another post — probably the next one, since the clock is ticking. And again, for some of us in some places, keeping that Star of David necklace in a drawer may actually be a matter of immediate personal safety.</p><p class="">But for now, don't make changes to anything because you're afraid that one day it might hypothetically get you in trouble down the line. That's tomorrow's problem, and it might never show up.</p>]]></description><media:content height="1125" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1731340393948-XXVULE4C5W4XILTPTTHK/unsplash-image-dKeB0-M9iiA.jpg?format=1500w" width="1500"><media:title type="plain">Do Not Pre-Comply</media:title></media:content></item><item><title>Pointing Fingers is Pointless</title><category>Activism</category><category>Au Courant</category><category>Politics</category><category>Change the World</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Fri, 08 Nov 2024 16:47:05 +0000</pubDate><link>https://secret.works/blog/tt03gjqw4lb092qgxnv8ol7c3v082b</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:672e39429b13d216b190ef5e</guid><description><![CDATA[<p class="">It was a surprise. 
</p><p class="">Polling was mixed, but polling has become increasingly unreliable in an era where it's as much modeling as data because nobody picks up a strange phone call.</p><p class="">And the vibes were impeccable! If you looked at rally attendance, at donation dollars and volunteer hours, if you looked at ground game and raw enthusiasm, it seemed impossible for Kamala to lose. Trump was giving perhaps one appearance a week; even he didn't want to be there.</p><p class="">So how, then, is this the reality we find ourselves in? There is already a world of writing trying to explain what happened and why. All of it is wrong. </p><p class="">History is a complex tapestry of hundreds or thousands of strands, and no one thread can explain the whole of it. And in any event how we got here doesn't matter half so much as where we go now.</p><p class="">Still, it's human nature to want to pin the responsibility on something concrete. It's an attempt to bring order into chaos. To force a sense of — not control, exactly, but a sense that things happen for reasons, and those reasons are knowable. And, of course, to prevent the same things from happening again in the future.</p><p class="">So, so many fingers are being pointed: we are falling into fascism, says one hot take, and this is the fault of <a href="https://www.cbsnews.com/minnesota/news/latino-men-showed-out-in-huge-numbers-for-president-elect-donald-trump/">Latino men</a>. It is the fault of <a href="https://www.khaleejtimes.com/world/americas/rural-vote-helps-thrust-trump-back-to-presidency">rural voters</a>, or of <a href="https://www.wxxinews.org/connections/2016-11-15/connections-small-business-owners-voted-overwhelmingly-for-trump">business owners</a>. In these cases, the alleged fault accrues to a group of people, some of whom are Trump voters, but many of whom are not.</p><p class="">There are other, more specific fingers, too. 
It's the fault of <a href="https://www.politico.com/news/2024/11/06/democrats-blame-biden-trump-win-00188092">Biden for not stepping down sooner</a>, it's the fault of the DNC for not forcing a new primary, or else the fault of Harris for not being more progressive, or for being too progressive, or for somehow not explaining to the people what she stands for. </p><p class="">Unpleasantly often, it's the fault of that one person or group someone kind of didn't like already, for not doing the one thing that someone really wished they were doing all along (or for continuing to do the one thing someone had been wishing they would stop.) Sometimes at the expense of reality — I'm thinking here of <a href="https://www.usatoday.com/story/news/politics/elections/2024/11/06/bernie-sanders-election-statement/76101511007/">Bernie Sanders blaming Biden</a> for "abandoning the working class," when Biden is easily the <a href="https://theconversation.com/bidens-labor-report-card-historian-gives-union-joe-a-higher-grade-than-any-president-since-fdr-228771">most pro-union president in decades</a> and saw <a href="https://www.epi.org/publication/swa-wages-2023/">unprecedented wage growth</a> for the lower tier of workers.</p><p class="">Quit it. Quit looking for someone to blame, and especially quit looking to scapegoat people who are trying to go pretty much the same place as you. Because this is only a warm-up exercise, to get you used to attacking the wrong people. </p><p class="">Victim blaming is baked into the grotesque ideology we must now throw off (she should have kept her knees together, he shouldn't have gone to that dangerous place, they should have known enough to conceal their true self, <em>they had it coming</em>) and did you know? It's so much easier and less scary to punch at your almost-allies than at your oppressors. 
</p><p class=""><strong>The actions of a fascist regime are the responsibility of that regime, not of the people who tried to stop them and failed</strong>. Write this on your heart and take it out to look at it every once in a while. They're going to try to make you forget that. </p><p class="">Yes, fine, it's true that things might have gone differently if things had gone differently. Sure. But they didn't. Sometimes you do everything right, sometimes you give it your all, and sometimes that still isn't enough. We're going to have to live with that.</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1731013319369-KH6HT6N3SS44UMAJO1FC/unsplash-image-gdQ_az6CSPo.jpg?format=1500w" width="1500"><media:title type="plain">Pointing Fingers is Pointless</media:title></media:content></item><item><title>So Now What?</title><category>Activism</category><category>Au Courant</category><category>Politics</category><category>Change the World</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Thu, 07 Nov 2024 21:33:56 +0000</pubDate><link>https://secret.works/blog/50oxj1i5lrv5ncc7iys3bwmzsas9ny</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:672d282beea4f62b2883006f</guid><description><![CDATA[<p class="">It's been a hot second since I've been active over here. Not for lack of intention. I even have posts in mind, I do, that are variously updating on the state of my career; shredding some design choices in <a href="https://www.ea.com/games/dragon-age/dragon-age-the-veilguard" target="_blank">Dragon Age: Veilguard</a>; talking about the metaphysics of consciousness and my deep dive into a <a href="https://www.my-big-toe.com" target="_blank">model of reality</a> that leads to some really woo magical thinking conclusions, but—</p><p class="">None of that seems important right now. 
</p><p class="">Yes, this is about the election. It's also about fascism and democracy, climate change and community, about kindness and group identity and the looming, inchoate future and where we go from here. I have thoughts chasing each other through my head trying to find something inspirational or meaningful to say, or at least something that will make someone else feel a little more okay and a little less alone right now, but there isn't much.</p><p class="">So here we are. Let's see what we've got:</p><p class="">Many unwelcome truths have made themselves known, all at once. The world is abruptly less kind than I had thought; less justice-minded; less rational. I say "rational" in the sense that it doesn't make sense, at least not to me. My entire working model of what people think is important and how they make decisions has been dismantled. I am at a loss, though not, it would seem, speechless.</p><p class="">The first time this happened, I took comfort, however dark, from the idea that our task might simply be to keep the knowledge burning for the next generations that things don't have to be like this. This is still true. But this is, I realize, a bare minimum. We can do better, and we should do better. I can do better.</p><p class="">In practice "better" can look like a thousand things. Some of us will volunteer a bounty of time and labor. Some of us will put our bodies on the line in protests and marches. For some of us, just getting through the day as your glorious self is a victory we should all celebrate. Some have skills they can apply to this task: technology or community-building, educating, counseling. Of course I've been asking myself what I have to give. </p><p class="">I have words. </p><p class="">It doesn't seem like enough.</p><p class="">Sure, maybe I can find a way to channel grief and rage and hope into some perfect work of fiction that changes something, somewhere for the better; the writer's eternal vanity, that. 
</p><p class="">But there is one other thing I can definitely do with words: I can synthesize a decade of theory from being Extremely Online and Extremely Progressive for people who spent that time more usefully — for my friends and family and maybe some of their friends and family, too. There are multitudes of Americans who are only now wondering about how to live under fascism, who are trying to make sense of how we got here and what to expect now. I've done a lot of reading and a lot of thinking about this, so I may as well talk about it. It might even help someone.</p><p class="">Well, we'll see. </p><p class="">This is the first of seven planned posts (and counting), most of which are already partly written. Time and energy and executive functioning allowing, I'll be putting them out every day, or at least every third day, until I've run out of things to say. This… might take a while.</p><p class="">We've been on this road before. Things got worse, though not as much worse as I feared. Things got better, though not as much better as I hoped. Things will get worse again now. God willing, things will get better again before much longer. There is work to do. 
Let's get to it.</p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1731013258297-OJ5Y6BCUCW8DNEQLJWE4/unsplash-image-gdQ_az6CSPo.jpg?format=1500w" width="1500"><media:title type="plain">So Now What?</media:title></media:content></item><item><title>About That Book I’m Writing</title><category>Doing the Work</category><dc:creator>Andrea Phillips</dc:creator><pubDate>Wed, 14 Feb 2024 20:00:00 +0000</pubDate><link>https://secret.works/blog/2hvfuykanw7skhejgveaknwq8zgrtm</link><guid isPermaLink="false">634da3be27de9627a431c7b6:634da3f97766ad22af5d7adc:65cd0f518a4fb2494f3723a4</guid><description><![CDATA[<p class="">Yeah so I’m writing a book (again) and I’ve been stuck in about the same place for months (again). </p><p class="">It’s not a surprise. It’s happened to me every time at about this point in a book — halfway through, let’s say. The squishy middle. This is where all of the decisions you couldn’t get yourself to make early on come back to haunt you, and you have to go back before you can go forward. What exactly is this character trying to do, how much time passes between these two scenes, what does everyone think is going on here vs. what’s actually happening. Decisions. Hundreds of them, and that’s not even considering the basic problem of which words go in which order. </p><p class="">At about this point I’ve always, always had to go back to note cards on a corkboard and think really hard about who does what in what order and why. And then (this is the important part) I need to actually <em>decide everything that happens</em>. </p><p class="">This is a lot harder than you might expect! Sometimes I change my mind about something after I’ve written a few scenes and now I have to go back and fix it. 
Sometimes I forget which decision I made even after the book is out and have to smile and nod along when a reader talks about it, because you’re probably supposed to know what happens in your own books? Pretty sure.</p><p class="">I see a story as a system; a spiderweb; most of all an elaborate machine — something like a clock. (For a long time, the view from inside of a clock was my site’s banner image, though I don’t think I ever explained why.) A clock and a story are both made up of hundreds of little parts that each have to fit neatly with all the others and work in concert so the whole of it moves steadily forward at the right pace. Each decision is rebalancing the clock. First so it works at all, and then so it works better and better, until eventually, if you’re both skilled and lucky, you’ve made something beautiful and bejeweled, where the mechanism is just as lovely to look at as the face.</p><p class="">The big picture matters. The details matter. There’s no talking about which is more important, because ultimately they’re all the same thing.</p><p class="">Each decision about what-happens-when-and-why changes the way the whole clock works. And at about the midpoint of a book is when the clock needs to start feeling like it’s actually ticking because you’re over the flush of excitement over putting a bunch of fresh new pieces together however they fit, but you haven’t yet seen how beautifully the whole thing will work when it’s done.</p><p class="">Here’s what that looks like in practical terms. Right now I have 50,000 words of a manuscript called The Greenville Conspiracy (unless and until some enterprising marketing team comes up with something better). A lot of those words are wrong and have to go — not necessarily because they’re bad or boring, though there’s some of that, but because they’re weighed down with decisions I hadn’t made yet, or I’d started going in one direction and it turned out to be all wrong. 
The clock doesn’t sound right.</p><p class="">So the step I’m on right now isn’t writing a single word. It’s working out in painstaking detail what it’s going to take to make sure the clock works at all, even if it keeps the wrong time.</p><p class="">And even then, even after you think you’ve decided everything, the clock is always there in the background. As you keep putting words and scenes and chapters together, you’ll find the click of a gear slipping a few teeth, an empty space where something needs to be or where something wrong is jamming the mechanism. But if you really understand the big picture, the complete way the clock is supposed to work and what work every single piece of it is doing, then fixing these problems as you find them is simple.</p><p class=""><em>NB: This was all true when I wrote this in mid-December but I’ve moved pretty far along since then. It’s great!</em></p>]]></description><media:content height="1000" isDefault="true" medium="image" type="image/jpeg" url="https://images.squarespace-cdn.com/content/v1/634da3be27de9627a431c7b6/1707938534232-QI4JP3193SCQ423PL0FE/image-asset.jpg?format=1500w" width="1500"><media:title type="plain">About That Book I’m Writing</media:title></media:content></item></channel></rss>