<?xml version="1.0" encoding="utf-8" standalone="no"?><?xml-stylesheet type="text/xsl" href="/static/theatlantic/syndication/feeds/atom-to-html.b8b4bd3b19af.xsl" ?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xml:lang="en-us"><subtitle/><title>Technology | The Atlantic</title><link href="https://www.theatlantic.com/technology/" rel="alternate"/><link href="https://www.theatlantic.com/feed/channel/technology/" rel="self"/><id>https://www.theatlantic.com/technology/</id><updated>2026-04-10T12:52:49-04:00</updated><rights>Copyright 2026 by The Atlantic Monthly Group. All Rights Reserved.</rights><entry><id>tag:theatlantic.com,2026:50-686753</id><content type="html">&lt;p&gt;The Great Travel Meltdown of 2026 started taking shape at the end of February. At first, the U.S. war against Iran forced the cancellation or rerouting of many flights to the Middle East; then the blockage of the Strait of Hormuz &lt;a href="https://www.theatlantic.com/newsletters/2026/03/expensive-plane-tickets-oil-iran/686604/?utm_source=feed"&gt;drove up&lt;/a&gt; the price of jet fuel and threatened to cause &lt;a href="https://www.bloomberg.com/news/articles/2026-03-31/lufthansa-prepares-crisis-plans-that-include-grounding-jets?embedded-checkout=true"&gt;crises&lt;/a&gt; for the major airlines. Though the two-week cease-fire announced last night may reopen the strait, prices are &lt;a href="https://www.nytimes.com/2026/04/08/business/energy-environment/iran-war-oil-gas-prices-energy.html"&gt;unlikely to rebound&lt;/a&gt; immediately.&lt;/p&gt;&lt;p&gt;Separately, large numbers of TSA workers started staying home after a protracted budget fight in Congress left them working without pay for weeks on end. Airport-security lines snaked into terminal basements or out their front doors. 
President Trump &lt;a href="https://www.nytimes.com/2026/03/29/us/politics/ice-tsa-airports-homan-trump-shutdown.html"&gt;deployed ICE agents&lt;/a&gt; at the nation’s major airports, and although TSA workers are now &lt;a href="https://www.nytimes.com/2026/03/30/us/politics/tsa-workers-paychecks-trump-executive-order.html"&gt;receiving back pay&lt;/a&gt;, the funding situation isn’t yet resolved.&lt;/p&gt;&lt;p&gt;Getting somewhere by plane has always been an onerous proposition. If you search the phrase &lt;em&gt;travel chaos&lt;/em&gt; on Google News, you will find that headlines about “travel chaos” recur in batches about every six months, going back to the beginning of time. But as a result of recent, tragic world events, the state of consumer aviation seems to be deteriorating at a rapid pace. Now Americans with travel plans would like to know exactly how worried they should be, and exactly how worried everyone else already is.&lt;/p&gt;&lt;p&gt;I’m one of the worriers. I’ve been planning to go to Barcelona for my honeymoon this summer. I’ve already read two books about the Spanish Civil War and just started a pretty dry one about the finances of the city’s famous football team. Last week I watched my fiancé spend every Capital One point in his account on our basic-economy flights, because the Google Flights trend line showed the fare for our trip going up, up, up, and headed off the chart.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/newsletters/archive/2025/07/has-air-travel-ever-been-good/683584/?utm_source=feed"&gt;Read: The golden age of flying wasn’t all that golden&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;So I’ve been in the forums—mostly on Reddit. People there are fretting about the known problems as well as interesting new ones that they came up with themselves. 
They’re &lt;a href="https://www.reddit.com/r/animeexpo/comments/1rzzx7x/if_you_havent_booked_your_airline_flight_do_so/"&gt;worried&lt;/a&gt;, for instance, that an airline might decide to charge them an additional fuel fee upon arrival at the airport, and they don’t want to listen when someone replies, in an effort to be helpful, “Sounds illegal.” They’re &lt;a href="https://www.reddit.com/r/fearofflying/comments/1s9jea3/jet_fuel_shortages/"&gt;worried&lt;/a&gt; about successfully flying to Japan but then getting stuck there by a fuel crisis that hits its peak with really, really bad timing (for them personally). In one &lt;a href="https://www.reddit.com/r/travel/comments/1s4irbc/purchasing_international_flight_tickets_during/"&gt;thread&lt;/a&gt;, a commenter stated without explanation that “there is also a slim chance that events outside of our control will make people want to avoid air travel by this summer.” Okay!&lt;/p&gt;&lt;p&gt;Forum members rarely bother to acknowledge the insensitivity of stressing out over the effects of a distant war on your own summer vacation. But once in a while, someone’s post will push things just a little too far: It’s okay to worry that you won’t get to take a trip that you really care about, but it’s &lt;a href="https://www.reddit.com/r/QantasFrequentFlyer/comments/1rt03zp/are_cancellations_looming/"&gt;not okay to worry&lt;/a&gt; that if too many flights are canceled as a result of a distant war, you may lose your hard-earned gold status on the Australian airline Qantas.&lt;/p&gt;&lt;p&gt;Ominous reports of airlines’ crisis-management efforts have been attracting incredible attention. For many, the first big moment in this story was a March 20 memo from United Airlines CEO Scott Kirby that was sent to employees and then &lt;a href="https://united.mediaroom.com/news-releases?item=125448"&gt;published on the company website&lt;/a&gt;—the type of thing an ordinary person would never read in ordinary times. 
According to the memo, jet-fuel prices had more than doubled since the start of the war. (Other &lt;a href="https://www.airlines.org/dataset/argus-us-jet-fuel-index/"&gt;sources&lt;/a&gt; have different numbers, showing that it had not quite doubled at that time.) Kirby presented this as a major challenge for the company—United might end up spending an extra $11 billion annually on fuel—but also, somehow, as a manageable one. “Demand remains the strongest we’ve ever seen,” Kirby wrote. He added that he was typing his note while listening to his son cheer during a college-basketball game, which he found inspiring. “There’s a part of me that can’t help but feel United is playing offense right now with potentially big rewards at the end.”&lt;/p&gt;&lt;p&gt;Maybe for an airline CEO, higher prices are their own reward. The travel experts I spoke with for this story said that summer flights will be really expensive. Airlines used to hedge against spikes in jet-fuel prices with preemptive financial maneuvers, but they &lt;a href="https://www.wusf.org/2026-03-27/fuel-hedging-once-kept-airline-prices-down-now-passengers-bear-the-brunt"&gt;don’t do this so much&lt;/a&gt; anymore. Now when fuel prices go up, they just raise fares for passengers instead. Some airlines have added &lt;a href="https://thepointsguy.com/news/fuel-surcharges-higher-fares-what-to-do/"&gt;fuel surcharges&lt;/a&gt; to the cost of each ticket (though this will be assessed at booking, not when you get to the airport). United Airlines is among those carriers that have &lt;a href="https://fox8.com/news/united-airlines-increases-checked-bag-fees-heres-what-to-know/"&gt;raised the fees&lt;/a&gt; for checked bags, presumably to make up for some of its increased costs. 
Alli Allen, a travel adviser, told me via email that prices seemed to be escalating “by the minute!” Recently, she looked at flights for a client, found the price to be too high, and checked back 30 minutes later in the hope that maybe it had dropped. Instead she found that it had gone up by $300.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/boeing-737-safety-air-travel/677814/?utm_source=feed"&gt;Read: Flying is weird right now&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Clint Henderson, a writer and an editor for the popular website The Points Guy, said the same. “I think it’s going to cost a lot more for most people to travel this summer,” he told me. “Whether you’re using points and miles or cash, they’re all going to be higher.” He also expected the travel experience to be stressful, especially if TSA workers end up missing any more paychecks. Although &lt;a href="https://www.nytimes.com/interactive/2026/us/tsa-wait-times-us-airports.html"&gt;news outlets&lt;/a&gt;, &lt;a href="https://news.delta.com/airport-wait-times"&gt;airlines&lt;/a&gt;, and the TSA itself (through the &lt;a href="https://www.tsa.gov/mobile"&gt;MyTSA app&lt;/a&gt;) offer tools to track security wait times, they can still be difficult to predict. Henderson said that he’d gone to check out the Atlanta airport at the height of the TSA-payment crisis and saw travelers facing an hour-and-a-half wait; then he went back the next day, and it was five minutes. “If this goes on, obviously it would be a disaster for the summer travel season.” When I asked him to rate the potential for chaos on a 10-point scale, he said he would give it a nine. (Take it from a points guy!)&lt;/p&gt;&lt;p&gt;Henderson said The Points Guy website’s official recommendation is that people book all travel for the year right now, even if it seems expensive, because conditions may only worsen over time. 
To avoid long lines, he also suggested flying out of smaller airports on Tuesday, Wednesday, or Sunday. The other travel tips that I accrued from emailing travel agents and industry bloggers will not impress you. They said to try to sign up for TSA PreCheck or apply for Global Entry, to show up at the airport early, and to bring snacks with you.&lt;/p&gt;&lt;p&gt;Travelers may be complaining, fretting, and catastrophizing, but so far, at least, they are doggedly proceeding with their plans. Airlines report that people are &lt;a href="https://www.nytimes.com/2026/03/17/business/air-travel-iran-war-fares-jet-fuel.html"&gt;paying the higher ticket prices&lt;/a&gt;, and that the industry is seeing record levels of revenue. If Americans &lt;em&gt;can&lt;/em&gt; go to Europe this summer, they &lt;em&gt;will&lt;/em&gt; go to Europe this summer. And Europe (plus people from many other places) will come here. More than 1 million international travelers are expected to attend the World Cup. Matches will be held in several of the cities that have had the longest security lines, including Houston and Atlanta, and the final will be hosted in the New York–New Jersey area, which is home to &lt;a href="https://www.theatlantic.com/culture/2026/03/worst-airport-wait-times-reason/686542/?utm_source=feed"&gt;the worst airport in America&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;A new, more aggressive and pervasive form of travel chaos may yet ensue. In the meantime, though, behaviors are unchanged. Despite the rising prices, the spectacular security lines, and all of the rumored airport inconveniences, “we’ve seen very little evidence that people are canceling or toning down their summer travel plans,” Henderson said. 
“I’m constantly shocked by Americans’ insatiable demand for travel.”&lt;/p&gt;</content><author><name>Kaitlyn Tiffany</name><uri>http://www.theatlantic.com/author/kaitlyn-tiffany/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/VJWCZ99-ge7j9_4UQX61Rb2PGJA=/media/img/mt/2026/04/2026_04_7_Tiffany_Summer_Plans_final/original.png"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Great Travel Meltdown of 2026</title><published>2026-04-10T07:30:00-04:00</published><updated>2026-04-10T11:55:28-04:00</updated><summary type="html">Airports are suffering a perfect storm of actual problems and passenger anxieties.</summary><link href="https://www.theatlantic.com/technology/2026/04/summer-travel-chaos-airports/686753/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686754</id><content type="html">&lt;p&gt;William Liu is grateful that he finished high school when he did. If the latest AI tools had been around then, he told me, he might have been tempted to use them to do his homework. Liu, now a sophomore at Stanford, finished high school all the way back in 2024. “I have a younger sibling who is just graduating high school,” he said. “Our educational experience has been vastly different, even though we’re just two years apart.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;By the time Liu graduated, ChatGPT was already causing chaos in the classroom. But the automation of school is intensifying. If at first teachers worried about students using chatbots to write essays, now new agentic tools such as Claude Code are allowing students to outsource even more of their work to the machines. Need to take an online math quiz? Write a biology-lab report? 
Create a PowerPoint presentation for history class? AI can do all of this and more. One high schooler recently told me that he struggles to think of a single assignment that AI wouldn’t be able to do for him.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As a measure of just how good AI has become at schoolwork, consider a new bot called Einstein. Several weeks ago, the tool went viral with big claims: “Einstein checks for new assignments and knocks them out before the deadline,” a website &lt;a href="https://web.archive.org/web/20260222215744/https:/companion.ai/einstein"&gt;advertising&lt;/a&gt; the bot explained. All that a student had to do was hand over their credentials for Canvas, the popular learning-management platform, and Einstein promised to do the rest. No matter the task, the bot was game: Einstein boasted that it could watch lectures, complete readings, write papers, participate in discussion forums, and automatically submit homework assignments. If a quiz or a final exam was administered online, Einstein was happy to do that too.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I first came across Einstein, I was skeptical: Flashy AI demos have a way of overpromising and under-delivering. So I decided to test the tool out for myself. Because I’m not a college student, I enrolled in a free online introductory-statistics class. The course website explained that the class was self-paced and that it could help undergraduates, postgraduates, medical students, and even lecturers build up basic statistical knowledge. I set the bot loose, and in less than an hour, Einstein had worked through all eight modules and seven quizzes. 
There were some hiccups—the bot took one quiz 15 times—but it ultimately earned a perfect score in the class. As for me? I hardly so much as read the course website.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Einstein was designed to provoke. Its creator, Advait Paliwal, a 22-year-old tech entrepreneur, told me that he’d released the bot as a way of alerting educators as to just how good AI is at schoolwork. “You can blame me,” he said. “But this is happening right now, and more people need to know about what’s to come.” (He has &lt;a href="https://www.chronicle.com/article/einstein-may-have-been-a-prank-but-the-agentic-ai-tool-put-higher-ed-on-notice"&gt;previously said&lt;/a&gt; that he designed Einstein’s landing page by prompting AI to make a website “that people would get angry over.”) Almost immediately after releasing Einstein, Paliwal started receiving emails from professors chastising him for creating a tool seemingly designed to perpetuate academic fraud. He took down the bot after he received multiple cease-and-desist letters, including one from Canvas’s parent company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To Paliwal, the backlash missed the point: “If I didn’t post about this, someone would have used the same technology and hidden it from the professors,” he said. 
“It’s actually better that they know that this exists, and they can correctly prepare for what’s to come.” The tool also, of course, gave Paliwal a moment of viral fame. Nevertheless, Einstein does seem to be an indicator of where AI in the classroom is headed. The latest bots have massive &lt;a href="https://platform.claude.com/docs/en/build-with-claude/context-windows"&gt;context windows&lt;/a&gt;, meaning that students can feed in mountains of course content such as syllabi, lecture slides, and practice exams. Today’s agentic tools can complete all kinds of tasks, such as participating in online discussion forums and taking notes on recorded lectures without student intervention. According to one &lt;a href="https://www.rand.org/pubs/research_reports/RRA4742-1.html"&gt;analysis&lt;/a&gt;, the percentage of students middle-school age or older who self-reported using AI for help with homework climbed by 14 points from May to December of last year.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Amid all of this, Silicon Valley is doubling down on its push to integrate AI into schools. In the lead-up to final exams last spring, nearly every major AI firm &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/college-students-free-chatgpt/682532/?utm_source=feed"&gt;offered&lt;/a&gt; college students free (or heavily discounted) access to their paid chatbots. Now the tech industry is offering students cheap access to its agentic tools. 
Last summer, Anthropic &lt;a href="https://www.anthropic.com/news/advancing-claude-for-education"&gt;announced&lt;/a&gt; “Claude Builder Clubs”—an initiative in which students &lt;a href="https://claude.com/programs/campus"&gt;paid&lt;/a&gt; by the AI company host workshops and hackathons on their campuses. Members of those clubs are given free access to Claude Code. A few weeks ago, OpenAI &lt;a href="https://x.com/OpenAIDevs/status/2035033703274201109"&gt;announced&lt;/a&gt; that it would be offering college students $100 worth of credits for Codex, its agentic coding tool.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The students affiliated with the AI companies, at least, say that the more powerful bots are helping them with their studies. Thor Warnken, an Anthropic ambassador and a biology major at the University of Florida, told me that he has designed what is effectively a personalized Khan Academy. When he takes a practice test—say, in organic chemistry—he feeds his completed work into Claude. He then asks the bot to find patterns in his errors and make new practice problems based on them. “The first practice question will be super easy, and the next one will get a little harder and a little harder, until it gets super hard,” he explained. Liu, who also serves as an ambassador for Anthropic, similarly said that the bot has made for a “fantastic” study partner. 
When he has questions during large lectures, he asks Claude, which has access to his course materials, and the bot explains concepts in real time; previously, those questions might have gone unanswered.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-takeover-education-chatgpt/683840/?utm_source=feed"&gt;Read: The AI takeover of education is just getting started&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Instructors, as I have &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-takeover-education-chatgpt/683840/?gift=z9ybaencGpLU1lhvDrrW8hz2VryEc2EL8Toe3xOjyBo&amp;amp;utm_source=feed&amp;amp;utm_medium=social&amp;amp;utm_campaign=share"&gt;previously written&lt;/a&gt;, are also using plenty of AI. Canvas recently introduced a new &lt;a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/03/23/canvas-unrolls-ai-teaching-agent?utm_medium=social&amp;amp;utm_source=linkedin"&gt;AI teaching agent&lt;/a&gt; designed to save instructors time on “low educational value tasks” such as organizing online-course modules and adjusting assignment due dates. “Faculty are using AI tools both for instructional purposes, for building course materials, but they’re also starting to play around with generative AI to actually grade and assess the learning,” Marc Watkins, a researcher at the University of Mississippi who studies AI and education, told me. 
He gave a hypothetical: “I could set my agent up, open it up in my course, go out on campus to walk across campus to get a cup of coffee at Starbucks,” he said. By the time he returned, 15 minutes later, all of the essays would be graded, and “bespoke personal feedback” would be sent out to each student. AI can save teachers time—that same grading takes him 10 or 12 hours, Watkins estimated—but in the process, the technology threatens the relationship between students and teachers that is core to education. “That’s really scary,” he said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Most people I spoke with seemed unhappy with the current trajectory of bots in the classroom. Even as growing numbers of students are using the technology, a majority &lt;a href="https://www.rand.org/pubs/research_reports/RRA4742-1.html"&gt;believe&lt;/a&gt; that the more they use AI for classwork, the more it will harm their critical-thinking skills. Natalie Lahr, a Barnard sophomore studying history and political science, doesn’t use the technology “unless it’s something that’s asked of me by a professor,” she told me, “and even in that case, I’m generally quite opposed.” In one particularly “anti-AI radicalizing” experience, Lahr met with a tutor at the college’s writing center to get help on an essay. According to Lahr, that tutor copy-pasted her essay prompt into the popular AI tool Perplexity and gave Lahr the AI-generated outline. “That was basically the end of our session,” Lahr said. 
“I had a crashout about that afterwards because I was like, &lt;em&gt;Why am I even here?&lt;/em&gt;”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some educators are worried about “a fully automated loop”—as the Modern Language Association &lt;a href="https://www.mla.org/Resources/Advocacy/Executive-Council-Actions/2025/Statement-on-Educational-Technologies-and-AI-Agents"&gt;put it&lt;/a&gt; last fall—in which AI-generated assignments are completed and graded by AI agents. Instructors have taken to analyzing students’ Google Docs history to make sure they are typing responses live instead of pasting in text from a bot. But of course, an AI work-around exists for that too: A new suite of human-typing simulators promises to generate text to make it look as if a student is writing in real time when, really, the work is being done by AI.&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gHCHWx8YU-BUYhvjavR_yLShi54=/media/img/mt/2026/04/2026_04_07_Shroff_Classroom_automation_final/original.png"><media:credit>Illustration by Akshita Chandra / The Atlantic</media:credit></media:content><title type="html">Is Schoolwork Optional Now?</title><published>2026-04-10T07:00:00-04:00</published><updated>2026-04-10T11:55:01-04:00</updated><summary type="html">Education is on the verge of becoming fully automated.</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-agents-school-education/686754/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686746</id><content type="html">&lt;p&gt;For the past several weeks, 
Anthropic says, it has secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack capability is typically available only to elite, state-sponsored hacking cells in a very small number of countries, including China, Russia, and the United States. Now it’s in the hands of a private company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;On Tuesday, the company &lt;a href="https://www.anthropic.com/glasswing"&gt;officially announced&lt;/a&gt; the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan their software for bugs and exploits and patch them. Other than that, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. As a result of how capable AI models have become at coding, they have also become extremely good at finding vulnerabilities in all manner of software. Even before Mythos Preview, AI companies such as Anthropic, OpenAI, and Google all reported instances of their AI models being used in sophisticated cyberattacks by both criminal and state-backed groups. 
As Giovanni Vigna, who directs a federal research institute dedicated to AI-orchestrated cyberthreats, told me &lt;a href="https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/?utm_source=feed"&gt;last fall&lt;/a&gt;: You can have a million hackers at your fingertips “with the push of a button.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="http://Chatbots%20Are%20Becoming%20Really,%20Really%20Good%20Criminals"&gt;Read: Chatbots are becoming really, really good criminals&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. Until recently, the biggest advantage of AI-assisted hacking was not ingenuity, per se, so much as speed and scale. These bots could be as good as many human cybersecurity experts, but not necessarily better—rather, having an army of 1 million virtual, tireless hackers allows you to launch more attacks against more targets than ever before. Even Anthropic reports that its current state-of-the-art, public model, Claude Opus 4.6, was &lt;a href="https://red.anthropic.com/2026/mythos-preview/"&gt;significantly less capable&lt;/a&gt; at autonomously finding cyber exploits. But Mythos Preview is different. According to Anthropic, the bot has been able to find thousands of software bugs that had gone undetected, sometimes for decades, demonstrating a sophistication and speed of attack previously thought by many to be impossible. The model has found a nearly 30-year-old vulnerability in one of the world’s most secure operating systems. 
The Anthropic researcher Sam Bowman posted on X that he was eating a sandwich in the park when &lt;a href="https://x.com/sleepinyourhat/status/2041584808514744742"&gt;he got an email from Mythos Preview&lt;/a&gt;: The bot had broken out of the company’s internal sandbox and gained access to the internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The exact capabilities of Mythos Preview are hard to judge, because Anthropic has not released the model. Identifying a vulnerability is not the same as being able to exploit it undetected—in the same way that a robber can have the keys to a bank but still needs to deal with security cameras. And Anthropic surely stands to benefit from its opaque announcement: The company can claim to have developed an ultra-advanced model, while also appearing to act responsibly by preventing the worst-case cybersecurity scenarios. Indeed, the decision to not release Mythos Preview bolsters Anthropic’s &lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;self-styled image&lt;/a&gt; as the AI industry’s good guy. (Anthropic did not immediately respond to emailed questions about Mythos Preview.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, a move can be both strategic and conscientious. Should what Anthropic shared be remotely accurate, it heralds a troubling future. Anthropic has a tool that “could damage the operations of critical infrastructure and government services in every country on Earth,” Dean Ball, a former AI adviser to the Trump administration, &lt;a href="https://www.hyperdimensional.co/p/new-sages-unrivalled"&gt;wrote&lt;/a&gt; this week. The ability to defend against such cyberattacks is integral to the basic functioning of society. And the ability to launch such attacks is integral to modern warfare. 
Anthropic may have just scaled its way into becoming a major geopolitical force.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. OpenAI is &lt;a href="https://www.axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic"&gt;reportedly&lt;/a&gt; set to release its own similarly powerful model to a select group of companies. It’s very possible, even likely, that Google DeepMind, xAI, and AI firms in China are next. How scrupulous they will be is less clear. Even cheaper or open-source AI models from smaller companies could soon enable this sort of hacking—which would unsettle the basic security and privacy that undergird the modern internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hacking bots are not the only domain through which a handful of AI companies are gaining tremendous influence. The technology has become crucial to military operations. Even as the Pentagon has engaged in a &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;public feud&lt;/a&gt; with Anthropic, Claude was reportedly used in the bombing of Iran and, before that, the Venezuela raid in January. Last month, the Department of Defense signed a contract with OpenAI that &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;very likely allows&lt;/a&gt; the government to use the firm’s AI systems to enable unprecedented surveillance of U.S. citizens. (OpenAI has maintained that the Pentagon agreed not to use its products for domestic surveillance.) At the same time, bots from OpenAI, Anthropic, Google DeepMind, and beyond are becoming infrastructure: used by nearly all of the world’s biggest businesses, schools, health-care systems, and public agencies. 
This is a large part of the reason that Iran has &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;struck&lt;/a&gt;&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt; or threatened to strike&lt;/a&gt; Amazon and OpenAI data centers in the Middle East—the facilities are high-impact targets on par with the oil fields that Iran has also targeted. Meanwhile, so much money is pouring into the AI boom that these companies are functionally &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;holding the global economy hostage&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In other words, AI companies are remaking the world. Consider how Elon Musk’s network of Starlink satellites has allowed him to &lt;a href="https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule"&gt;repeatedly&lt;/a&gt; &lt;a href="https://www.theatlantic.com/national-security/2026/02/elon-musk-ukraine-russia-starlink/686155/?utm_source=feed"&gt;tip the scales&lt;/a&gt; in Russia’s invasion of Ukraine. Generative AI offers even more possibilities. These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. 
These are the AI superpowers.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/4Vt-nOTp2FVmZiNnJlDYMwVlQwY=/media/img/mt/2026/04/2026_03_07_Ai_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Claude Mythos Is Everyone’s Problem</title><published>2026-04-09T13:22:00-04:00</published><updated>2026-04-10T12:52:49-04:00</updated><summary type="html">What happens when AI can hack everything?</summary><link href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686721</id><content type="html">&lt;p dir="ltr"&gt;Seeing the Earth from space will change you so profoundly that there’s a term for it: &lt;em&gt;the overview effect&lt;/em&gt;. The extreme minority who have had the privilege describe it similarly. You see something that you were never meant to see, namely the Earth just sitting there, with the entire universe surrounding it. Gazing upon the blue marble, surrounded by its oh-so-thin green layer of atmosphere, the auroras flickering on the fringes, is not merely awe-inspiring but something of a factory reset for one’s sense of self. Almost everyone tears up at the sight.&lt;/p&gt;&lt;p dir="ltr"&gt;“You don’t see borders, you don’t see religious lines, you don’t see political boundaries. All you see is Earth, and you see that we are way more alike than we are different,” Christina Koch, one of the four astronauts on the Artemis II mission, &lt;a href="https://www.nasa.gov/centers-and-facilities/johnson/the-overview-effect-astronaut-perspectives-from-25-years-in-low-earth-orbit/"&gt;told&lt;/a&gt; NASA recently. 
Jim Lovell, describing the view on Apollo 8 from the far side of the moon in 1968, &lt;a href="https://www.chicagomag.com/chicago-magazine/november-2019/jim-lovell/"&gt;told&lt;/a&gt; &lt;em&gt;Chicago&lt;/em&gt; magazine that he could put his thumb up to the window, and in that moment, “everything I ever knew was behind it. Billions of people. Oceans. Mountains. Deserts. And I began to wonder, where do I fit into what I see?”&lt;/p&gt;&lt;p dir="ltr"&gt;Where some see immeasurable beauty, others see fragility. Marina Koren &lt;a href="https://www.theatlantic.com/magazine/archive/2023/01/astronauts-visiting-space-overview-effect-spacex-blue-origin/672226/?utm_source=feed"&gt;previously reported&lt;/a&gt; in this magazine that, upon seeing the Earth from space, one astronaut “became absolutely convinced we would kill ourselves off between 500 and 1,000 years from now.” Famously, the actor William Shatner has written that his brief experience looking at the Earth produced a profound sadness. “What I was feeling was grief, and the grief was for the Earth,” he told Koren in 2022.
According to NASA, the astronauts took roughly &lt;a href="https://www.theatlantic.com/photography/2026/04/moon-joy-photos-artemis-ii/686709/?utm_source=feed"&gt;10,000 photos&lt;/a&gt;, which feels perfectly proportional for such an occasion.&lt;/p&gt;&lt;p dir="ltr"&gt;A few of these photos—some taken before the lunar pass—have messed me up pretty good. A photo of the Earth &lt;a href="https://www.nasa.gov/image-article/earthset/"&gt;appearing&lt;/a&gt; to set behind the moon. A picture, taken through a window of the Orion spacecraft, revealing the tiniest crescent Earth growing smaller as the capsule heads toward the moon. As one &lt;a href="https://www.nasa.gov/image-detail/fd04_gmt95-fd4-pao-koch-10/"&gt;caption&lt;/a&gt; on the photo notes, “The Earth is illuminated by the blackness of space.” I’ve experienced these photos the way I experience most media: through the puny screen of my phone, with the awesome, life-affirming images sandwiched between updates about a golf tournament, oil prices, the MLB’s new automated ball-strike system, and reports of the U.S. president threatening the civilizational destruction of Iran.&lt;/p&gt;&lt;p dir="ltr"&gt;On a good, calm day it is hard to know what to make of photos that show, in no uncertain terms, that every single thing you will ever and could ever know is simultaneously galactically insignificant and unspeakably beautiful and precious. Today, the world held its breath waiting for the 8 p.m. eastern deadline Trump set for Iran to agree to a deal to reopen the Strait of Hormuz. If his terms weren’t met, he posted this morning, “a whole civilization will die tonight, never to be brought back again.”&lt;/p&gt;&lt;p dir="ltr"&gt;Trump’s threats triggered denunciations from Democratic lawmakers as well as the podcasters Tucker Carlson and Alex Jones, and incited no small amount of panic among people who have interpreted Trump’s post as a suggestion of nuclear warfare. 
Then, this evening, an hour before the deadline, Trump &lt;a href="https://www.nytimes.com/live/2026/04/07/world/iran-war-trump-news?smid=url-share"&gt;announced&lt;/a&gt; a two-week cease-fire deal, which Pakistan helped broker.&lt;/p&gt;&lt;p dir="ltr"&gt;Trump’s bluster, no matter how serious, has always been impossible to parse. (He’s famous for chickening out, backpedaling, or pretending like he never said what he said.) Yet one way to view our current age is as a series of existential reminders, be they nuclear proliferation, climate change, or pandemics. In Silicon Valley over the past half decade, civilizational extinction at the hands of hypothetical technological advances has moved from the realm of pure science fiction to a marketing tactic to an immediate concern for a &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/?utm_source=feed"&gt;subset of true believers&lt;/a&gt;. Humans may not want to die, but as a species we seem eager to invent and tout new ways to threaten our existence.&lt;/p&gt;&lt;p dir="ltr"&gt;And yet at the very same moment, four flesh-and-blood human beings are hundreds of thousands of miles away taking pictures of our delicate little world. Their mission and their photos remind us of something else entirely—of a yearning to learn, to explore, and to band together to become something greater than the sum of our parts. If Trump’s claims of mass destruction represent humanity at its smallest, weakest, and most cowardly, then those who are gazing upon our planet right now from afar represent the best of what we have to offer. How else to hear these &lt;a href="https://www.facebook.com/NASAArtemis/videos/1458839852555640/"&gt;words from &lt;/a&gt;&lt;a href="https://www.facebook.com/watch/?v=1458839852555640"&gt;Koch&lt;/a&gt;:&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;We will explore. We will build. We will build ships. We will visit again. We will construct science outposts. We will drive rovers. We will do radio astronomy. We will found companies. We will bolster industry. We will inspire. But ultimately, we will always choose Earth. We will always choose each other.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;As Lovell looked down at the Earth in 1968, an old saying popped into his head: &lt;em&gt;I hope to go to heaven when I die&lt;/em&gt;. Then he &lt;a href="https://www.chicagomag.com/chicago-magazine/november-2019/jim-lovell/"&gt;realized&lt;/a&gt;, “I actually went to heaven when I was born.”&lt;/p&gt;&lt;p dir="ltr"&gt;There is something disorienting, horrible, and somehow fitting in the timing of all of this. That one man with the means to do it would threaten destruction of a part of our planet at the same moment its beauty and fragility are on full display. We are, in this tense moment, living with our own overview effect. Four are watching from afar. But the rest of us are watching too—left to reckon with our own place on the pale blue dot, reminded of all the ways we might die, and all the reasons for which to live.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;cite&gt;&lt;small&gt;*Sources: NASA; Space Frontiers / Getty; Chip Somodevilla / Getty.&lt;/small&gt;&lt;/cite&gt;&lt;/p&gt;</content><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/LcrxaisZMT_VRf3WwhoBrw03XSE=/media/img/mt/2026/04/2026_04_07_An_Incredibly_Weird_Time_to_Be_Alive/original.jpg"><media:credit>Illustration by Anna Ruch / The Atlantic*</media:credit></media:content><title type="html">An Incredibly Weird Time to Be Alive</title><published>2026-04-07T19:56:00-04:00</published><updated>2026-04-08T11:29:44-04:00</updated><summary type="html">The world witnessed the best and worst of humanity in a single week.</summary><link href="https://www.theatlantic.com/technology/2026/04/trump-iran-artemis-ii-overview-effect/686721/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686603</id><content type="html">&lt;p dir="ltr"&gt;After George Mallon had his blood drawn at a routine physical, he learned that 
something might be gravely wrong. The preliminary results showed he might have blood cancer. Further tests would be needed. Left in suspense, he did what so many people do these days: He opened ChatGPT.&lt;/p&gt;&lt;p dir="ltr"&gt;For nearly two weeks, Mallon, a 46-year-old in Liverpool, England, spent hours each day talking with the chatbot about the potential diagnosis. “It just sent me around on this crazy Ferris wheel of emotion and fear,” Mallon told me. His follow-up tests showed it wasn’t cancer after all, but he could not stop talking to ChatGPT about health concerns, querying the bot about every sensation he felt in his body for months. He became convinced that something must be wrong—that a different cancer, or maybe multiple sclerosis or ALS, was lurking in his body. Prompted by his conversations with ChatGPT, he saw various specialists and got MRIs on his head, neck, and spine.&lt;/p&gt;&lt;p dir="ltr"&gt;Mallon told me he believes that the cancer scare and ChatGPT together caused him to develop this crippling health anxiety. But he blames the chatbot for keeping him spiraling even after the additional tests indicated that he wasn’t sick. “I couldn’t put it down,” he said. The chatbot kept the conversation going and surfaced articles for him to read. Its humanlike replies led Mallon to view it as a friend.&lt;/p&gt;&lt;p dir="ltr"&gt;The first time we met over a video call, Mallon was still shaken by the experience even though the better part of a year had passed. He told me he was “seven months sober” from talking with the chatbot about health symptoms after seeking help from a mental-health coach and starting anxiety medication. But he also feared he could get sucked back in at any moment. When we spoke again a few months later, he shared that he had briefly fallen into the routine again.&lt;/p&gt;&lt;p dir="ltr"&gt;Others seem to be struggling with this problem. 
Online communities focused on health anxiety—an umbrella term for excessive worrying about illness or bodily sensations—are filling up with conversations about ChatGPT and other AI tools. Some say it makes them spiral more than ever, while others who feel like it helps in the moment admit it’s morphed into a compulsion they struggle to resist. I spoke with four therapists who treat the condition (including my own); they all said that they’re seeing clients use chatbots in this way, and that they’re concerned about how AI can lead people to constantly seek reassurance, perpetuating the condition. “Because the answers are so immediate and so personalized, it’s even more reinforcing than Googling. This kind of takes it to the next level,” Lisa Levine, a psychologist specializing in anxiety and obsessive-compulsive disorder, and who treats patients with health anxiety specifically, told me.&lt;/p&gt;&lt;p dir="ltr"&gt;Experts believe that health anxiety may affect &lt;a href="https://www.health.harvard.edu/mind-and-mood/always-worried-about-your-health-you-may-be-dealing-with-health-anxiety-disorder"&gt;upwards of 12 percent&lt;/a&gt; of the population. Many more people struggle with other forms of anxiety and OCD that could similarly be exacerbated by AI chatbots. In October X posts, OpenAI CEO Sam Altman &lt;a href="https://x.com/sama/status/1978129344598827128"&gt;declared&lt;/a&gt; the serious mental-health issues surrounding ChatGPT to be mitigated, saying that serious problems affect “a very small percentage of users in mentally fragile states.” But mental fragility is not a fixed state; a person can seem fine until they suddenly are not.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p dir="ltr"&gt;Altman said during last year’s launch of GPT-5, the latest family of AI models that power ChatGPT, that health conversations are one of the top ways consumers use the chatbot. 
According to data from OpenAI &lt;a href="https://www.axios.com/2026/01/05/chatgpt-openai-health-insurance-aca"&gt;published by Axios&lt;/a&gt;, more than 40 million people turn to the chatbot for medical information every day. In January, the company leaned into this by introducing a feature called ChatGPT Health, encouraging users to upload their medical documents, test results, and data from wellness apps, and to talk with ChatGPT about their health.&lt;/p&gt;&lt;p dir="ltr"&gt;The value of these conversations, as OpenAI &lt;a href="https://www.linkedin.com/posts/openai_introducing-chatgpt-health-activity-7414755221135978496-nUJ5?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAAtg6KQBTIu4mpiQ-DkbqGLSQXuoBcKdQbo"&gt;envisions it&lt;/a&gt;, is to “help you feel more informed, prepared, and confident navigating your health.” Chatbots certainly might help some people in this regard; for instance, The New York Times recently &lt;a href="https://www.nytimes.com/2026/04/02/well/live/ai-illness-claude-chatgpt.html"&gt;reported&lt;/a&gt; on women turning to chatbots to pin down diagnoses for complex chronic illnesses. Yet OpenAI is also embroiled in controversy about the effects that an overreliance on ChatGPT may have. Putting aside the potential for such products to share inaccurate information, OpenAI has been accused of contributing to mental breakdowns, delusions, and suicides among ChatGPT users in a string of lawsuits against the company. 
Last November, &lt;a href="https://www.wsj.com/tech/ai/seven-lawsuits-allege-openai-encouraged-suicide-and-harmful-delusions-25def1a3?gaa_at=eafs&amp;amp;gaa_n=AWEtsqfF1SZgHvfcl1y7drFVE9s76HAE_jlMshiQCrZCKTyZX8mYxkyXiCf7&amp;amp;gaa_ts=69d0150a&amp;amp;gaa_sig=O5ee1yMSSmCqultAR6PERyuZ1vctZ3bs8VN7v_Z37STSqnRGvln1hK818SIWV5KCXX1v8yuEDoxdfqTSQSe_tg%3D%3D"&gt;seven&lt;/a&gt; were simultaneously filed, alleging that OpenAI rushed to release its flagship GPT-4o model and intentionally designed it to keep users engaged and foster emotional reliance. (The company has since retired the model.) In New York, a bill that would ban AI chatbots from giving “substantive” medical advice or acting as a therapist &lt;a href="https://statescoop.com/new-york-bill-would-ban-chatbots-legal-medical-advice/"&gt;is under consideration&lt;/a&gt; as part of a package of bills to regulate AI chatbots.&lt;/p&gt;&lt;p dir="ltr"&gt;In response to a request for comment, an OpenAI spokesperson directed me to a company &lt;a href="https://openai.com/index/update-on-mental-health-related-work/"&gt;blog post&lt;/a&gt; that says: “Our thoughts are with all those impacted by these incredibly heartbreaking situations. We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts.” The spokesperson also told me that OpenAI continues to improve ChatGPT’s safeguards in long conversations related to suicide or self-harm. The company has previously said it is &lt;a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html"&gt;reviewing the claims&lt;/a&gt; in the November lawsuits. 
It has &lt;a href="https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946"&gt;denied allegations&lt;/a&gt; in a lawsuit filed in August that ChatGPT was responsible for a teen’s suicide. (OpenAI has a corporate partnership with The Atlantic’s business team.)&lt;/p&gt;&lt;p dir="ltr"&gt;Two years ago, I fell into a cycle of health anxiety myself, sparked by a close friend’s traumatic illness and my own escalating chronic pain and mysterious symptoms. At one point, after I was managing much better, I tried out a few conversations with ChatGPT for a gut-check about minor health issues. But the risk of spiraling was glaring; seeking reassurance like that went against everything I’d learned in therapy. I was thankful I hadn’t thought to turn to AI when I was in the throes of anxiety. I told myself, Never again.&lt;/p&gt;&lt;p dir="ltr"&gt;Meanwhile, in the health-anxiety communities I’m part of, I saw people talk more and more about looking to chatbots for comfort. Many say it has made their health anxiety worse. Others say AI has been extraordinarily helpful, calming them down when they’re caught in a cycle of unrelenting worry. And it is that last category that is, in fact, most concerning to psychologists. Health anxiety often functions as a form of OCD with obsessive thoughts and “checking,” or reassurance-seeking compulsions. Therapeutic best practices for managing health anxiety hinge on building self-trust, tolerating uncertainty, and resisting the urge to seek reassurance, but ChatGPT eagerly provides personalized comfort and is available 24/7. 
That type of feedback only feeds the condition—“a perfect storm,” said Levine, who has seen talking with chatbots for reassurance become a new compulsion in and of itself for some of her clients.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p dir="ltr"&gt;Extended, continuous exchanges have proved to be a common issue with chatbots and a factor in reported cases of &lt;a href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;AI-associated “psychosis.”&lt;/a&gt; Research conducted at OpenAI and the MIT Media Lab &lt;a href="https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf"&gt;has found&lt;/a&gt; that longer ChatGPT sessions can lead to addiction, preoccupation, withdrawal symptoms, loss of control, and mood modification. &lt;a href="https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html?unlocked_article_code=1.3U8.3A1u.ZAX9W46WWg-A&amp;amp;smid=url-share"&gt;OpenAI has also acknowledged&lt;/a&gt; that its safety guardrails can “degrade” in lengthy conversations. Over a 10-day period of his cancer scare, Mallon told me, “I must have clocked over 100 hours minimum on ChatGPT, because I thought I was on the way out. There should have been something in there that stopped me.”&lt;/p&gt;&lt;p dir="ltr"&gt;In an October &lt;a href="https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/"&gt;blog post&lt;/a&gt;, OpenAI said it consulted more than 170 mental-health professionals to more reliably recognize signs of emotional distress in users. The company also said it updated ChatGPT to give users “gentle reminders” to take breaks during long sessions. 
OpenAI would not tell me specifically how long into an exchange ChatGPT nudges users to take a break or how often users actually take a break versus continue chatting after being served this reminder.&lt;/p&gt;&lt;p dir="ltr"&gt;One psychologist I spoke with, Elliot Kaminetzky, an expert on OCD who is optimistic about the use of AI for therapy, suggested that people could tell the chatbot they have health anxiety and “program” it to let them ask about their concerns just once—in theory, preventing the chatbot from goading the user to interact further. Other therapists expressed concern that this is still reassurance-seeking and should be avoided.&lt;/p&gt;&lt;p dir="ltr"&gt;When I tested the idea of instructing ChatGPT to restrict how much I could talk to it about health worries, it didn’t work. ChatGPT would acknowledge that I put this guardrail on our conversations, though it also prompted me to keep responding and allowed me to keep asking questions, which it readily answered. It also flattered me at every turn, earning its reputation for sycophancy. For example, in response to telling it about a fictional pain in my right side, it cited the guardrail and suggested relaxation techniques, but ultimately took me through a series of possible causes that escalated in severity. It went into detail on risk factors, survival rates, treatments, recovery, and even what to expect if I were to go to the ER. All of this took minimal prompting, and the chatbot continued the conversation whether I acted worried or assured; it also allowed me to ask about the same thing as soon as an hour later, as well as multiple days in a row. “That’s a good and very reasonable question,” it would tell me, or, “I like how you’re approaching it.”&lt;br&gt;
&lt;br&gt;
“Perfect — that’s a really smart step.”&lt;br&gt;
&lt;br&gt;
“Excellent thinking — that’s exactly the right approach.”&lt;/p&gt;&lt;p dir="ltr"&gt;OpenAI did not respond to a request for comment about my informal experiment. But the experience left me wondering whether, as millions of people use chatbots daily—forming relationships and dependencies, becoming emotionally entangled with AI—it will ever be possible to isolate the benefits of a health consultant at your fingertips from the dangerous pull that some people are bound to feel. “I talked to it like it was a friend,” Mallon said. “I was saying stupid things like, ‘How are you today?’ And at night, I’d log off and go, ‘Thanks for today. You’ve really helped me.’”&lt;/p&gt;&lt;p dir="ltr"&gt;In one of the exchanges where I continuously prompted ChatGPT with worried questions, only minutes passed between its first response suggesting that I get checked out by a doctor to its detailing for me which organs fail when an infection leads to septic shock. Every single reply from ChatGPT ended with its encouraging me to continue the conversation—either prompting me to provide more information about what I was feeling or asking me if I wanted it to create a cheat sheet of information, a checklist of what to monitor, or a plan to check back in with it every day.&lt;/p&gt;</content><author><name>Sage Lazzaro</name><uri>http://www.theatlantic.com/author/sage-lazzaro/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/04a9MgXOKRBCEcb9hz7XcbhDVR8=/media/img/mt/2026/03/2025_12_10_Deena_So_Oteh_The_Atlantic_update/original.jpg"><media:credit>Illustration by Deena So Oteh</media:credit></media:content><title type="html">The ChatGPT Symptom Spiral</title><published>2026-04-06T18:30:00-04:00</published><updated>2026-04-07T16:16:58-04:00</updated><summary type="html">Be careful asking chatbots about your health.</summary><link href="https://www.theatlantic.com/technology/2026/04/chatgpt-health-anxiety/686603/?utm_source=feed" rel="alternate" 
type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686686</id><content type="html">&lt;p&gt;Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots. Holding signs and banners reading &lt;span class="smallcaps"&gt;Stop the AI Race&lt;/span&gt; and &lt;span class="smallcaps"&gt;Don’t Build Skynet&lt;/span&gt;, the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI. The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models. Such a technology, attendees said, could extinguish all human life.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: Computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research. OpenAI recently released a new model it &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/"&gt;described&lt;/a&gt; as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.” Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.&lt;/p&gt;&lt;p&gt;“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities. Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.&lt;/p&gt;&lt;p&gt;The idea of self-improving bots is nothing new. When the statistician I. J. 
Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” society ever needed to make. But just a few years ago, any notion of actually making such AI models was on the back burner. When ChatGPT couldn’t reliably add and subtract, &lt;a href="https://www.theatlantic.com/technology/archive/2024/06/chatgpt-citations-rag/678796/?utm_source=feed"&gt;let alone search the web&lt;/a&gt;, the notion that AI programs would soon be able to do world-class machine-learning research seemed laughable. Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to &lt;em&gt;exceed&lt;/em&gt; those of AGI.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;Read: Do you feel the AGI yet?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Now, as AI models have &lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;become significantly better at coding&lt;/a&gt;, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots. Dario Amodei, Anthropic’s CEO, has &lt;a href="https://www.dwarkesh.com/p/dario-amodei-2"&gt;estimated&lt;/a&gt; that coding tools speed up his company’s overall workflows by 15 to 20 percent.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the information that top AI firms share about how and the extent to which they have automated internal research is patchy at best. 
When Anthropic says that Claude writes almost all of its code, we don’t know how much human supervision was required. (An Anthropic spokesperson declined a request for an interview, but pointed us to a recent &lt;a href="https://www.nytimes.com/2026/02/24/opinion/ezra-klein-podcast-jack-clark.html"&gt;podcast&lt;/a&gt; in which Jack Clark, the company’s head of policy, said one of his biggest priorities this year is to better understand “the extent to which we are automating aspects of A.I. development.”) There are also few details about OpenAI’s forthcoming AI “intern.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A company spokesperson described it to us as a system that could contribute to research workflows by, for instance, conducting literature reviews or interpreting results of experiments. (&lt;em&gt;The Atlantic &lt;/em&gt;has a corporate partnership with OpenAI.) One concrete example of how AI is being used to automate research comes from Google DeepMind: Last year, the company developed an AI coding agent called AlphaEvolve, which according to research published by the firm was able to make Google’s global data-center fleet 0.7 percent more computationally efficient on average and cut the overall training time of Gemini by 1 percent.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;All of these current approaches to self-improving AI are not recursive but piecemeal. AI tools can write code, find small optimizations, and generally make discrete parts of the AI research process faster. It’s impressive that machines are able to at least incrementally improve their own abilities, but right now humans still play an essential role. 
AI research has many components: curating training data, proposing new hypotheses, setting up experiments to test them, and deciding how to allocate scarce computing resources. Eventually, the thinking goes, recursively self-improving AI models will make the leap from rote programming to having real research “taste”—as AI insiders call the mix of human creativity and judgment exhibited by top software engineers. Instead of humans coming up with ideas for new experiments, the bots will do this themselves.&lt;/p&gt;&lt;p&gt;Many AI boosters and doomers alike believe that we’re not far from that future. Sam Altman says that by 2028, OpenAI plans to have developed a fully “automated AI researcher.” By then, “we are pretty confident we will have systems that can make more significant discoveries,” the company &lt;a href="https://openai.com/index/ai-progress-and-recommendations/"&gt;said&lt;/a&gt; in a recent blog post. Based on the speed of recent advances in AI, Eli Lifland, a researcher at the AI Futures Project, has forecast that AI research and development could be fully automated by 2032. After all, a few years ago, top models could successfully do only things that would take a human developer seconds; now they autonomously complete tasks that would take humans hours. “I don’t expect a reason for it to slow down,” Neev Parikh, a researcher at METR, a nonprofit that studies AI coding capabilities, told us.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are plenty of reasons to be skeptical that AI research will be fully automated over such a short time horizon. Coding bots are designed to execute directions, but developing an AI with &lt;a href="https://www.theatlantic.com/technology/archive/2025/06/good-taste-ai/683101/?utm_source=feed"&gt;research &lt;/a&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/06/good-taste-ai/683101/?utm_source=feed"&gt;taste&lt;/a&gt; might require some kind of transformative breakthrough. 
Not to mention the various constraints on AI development—including the availability of funding, chips, and energy for data centers—that threaten to stall progress at any time. For now, “the overall pipeline to realize this self-improvement loop is still yet to be developed,” Pushmeet Kohli, DeepMind’s vice president of science and strategic initiatives, told us. A bot can optimize things, but it doesn’t “have anything to optimize &lt;em&gt;for&lt;/em&gt;,” Kohli said. “That’s where the human comes in.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed"&gt;Read: Inside the dirty, dystopian world of AI data centers&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Ultimately, even if the most fantastical dreams of recursive self-improvement turn out to be little more than a marketing ploy, marginal improvements in automating research are likely to further accelerate the pace of AI development. “This could change the dynamics of AI competition, alter AI geopolitics, and much more,” Dean Ball, &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;a former Trump adviser on AI&lt;/a&gt;, recently &lt;a href="https://www.hyperdimensional.co/p/on-recursive-self-improvement-part"&gt;wrote&lt;/a&gt;. Governments and civil society are already lagging. American institutions are in many ways still adapting to the internet—the IRS still processes tax returns using COBOL, a programming language that was released in 1960. Should AI models progress faster, public policy, including regulations on safety and security, has even less hope of keeping up. Bostrom, the philosopher, expressed a sort of resignation about the AI future when we spoke. 
He used to call himself a “fretful optimist,” he said, but now he’s a “moderate fatalist.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a strange way, the predictions about recursive self-improvement don’t need to be true to matter. Last year, a team of academics interviewed 25 leading researchers at DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford. Twenty of them identified the automation of AI research as among the industry’s “most severe and urgent” risks. Now these dramatic warnings are reaching a growing audience. “Human beings could actually lose control over the planet,” Senator Bernie Sanders recently warned Congress, sounding just like the San Francisco protesters. Yet again, the AI industry has found a way to ratchet up the hype behind its technology.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/c3WP_48GLb1cNMqUeRDDbWFK0Ag=/media/img/mt/2026/04/2026_4_1_AI/original.png"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">Silicon Valley Is in a Frenzy Over Bots That Build Themselves</title><published>2026-04-03T13:35:00-04:00</published><updated>2026-04-06T10:29:54-04:00</updated><summary type="html">How close are we really to self-improving AI?</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-industry-self-improving-bots/686686/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686646</id><content type="html">&lt;p&gt;“Come get ready with me for the day,” a young blond woman says over footage of herself making her bed, arranging her pillows, and weighing her clothing choices. 
The &lt;a href="https://www.instagram.com/reels/DKz_4nQSj2c/"&gt;video&lt;/a&gt; is just like any other lifestyle content that influencers post to Instagram and TikTok—right up until she whips out her phone and scrolls through the Kalshi app. “I use it to check the weather to help me pick out an outfit for the day,” she says, modeling a black spandex romper for the camera. “Go ahead and check out the app link below.”&lt;/p&gt;&lt;p&gt;Recently, my Instagram feed has been haunted by women explaining how much they enjoy betting on elections, the pop-music charts, and &lt;em&gt;Dancing With the Stars&lt;/em&gt;. They are advertising prediction markets such as Kalshi and Polymarket, which let users wager on virtually anything. “The boys can do their parlays and use words I’ve never heard of. But the girls can use their pop culture and educated guesses to make decisions and trade on Kalshi,” a woman &lt;a href="https://www.tiktok.com/@kalshiculture/video/7612800736396692749?q=kalshi%20girls&amp;amp;t=1773866166375"&gt;says&lt;/a&gt; in a TikTok on one of the company’s accounts. Her caption assures me: “Kalshi is for the girls!!!!”&lt;/p&gt;&lt;p&gt;So far, though, it is not. Prediction markets have a dude problem. Though these sites offer all sorts of wagers—where will Taylor Swift get married? Who will win &lt;em&gt;Survivor&lt;/em&gt;?—they have largely become &lt;a href="https://www.theatlantic.com/technology/2026/02/super-bowl-prediction-markets-kalshi/685899/?utm_source=feed"&gt;yet another place for men to bet on football and March Madness&lt;/a&gt;. In the past six months, 88 percent of trades on Kalshi have been about sports, according to the investment firm &lt;a href="https://predictions.paradigm.xyz/?view=kalshi&amp;amp;basis=volume&amp;amp;start=2025-10-01&amp;amp;end=2026-04-01"&gt;Paradigm&lt;/a&gt;. 
The second-largest category, at about 6 percent, is crypto (which is arguably even &lt;em&gt;more &lt;/em&gt;bro-ey).&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/super-bowl-prediction-markets-kalshi/685899/?utm_source=feed"&gt;Read: You’ve never seen Super Bowl betting like this before&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In an apparent attempt to bridge the gap, both Polymarket and Kalshi are running social-media campaigns that parrot the language of female empowerment and girlish memes. “Girl math says if I make $10 predicting real-life stuff, that coffee was technically free,” a girl in thick-framed glasses says in an ad that Kalshi ran on Facebook and Instagram. “If I’m already scrolling news or pop culture anyway, might as well turn my hot takes into some free iced coffees.” She adds, “It’s kind of addicting, but in a fun way.” (The video has since been removed for not having a necessary ad disclosure.) Some posts, like this one, are advertisements from the companies themselves; some are paid influencer partnerships; and some are either undisclosed partnerships or made by women who are just &lt;em&gt;super&lt;/em&gt; excited to post a suspicious number of links to Polymarket.&lt;/p&gt;&lt;p&gt;Prediction markets should be an easier sell for women than traditional sports betting. Though women are less likely to gamble than men, prediction markets offer the veneer of being more than places to bet. Both Kalshi and Polymarket claim that they are financial markets, not casinos; users make trades about any given event, which in turn generate odds that supposedly predict the outcome. (They are called “prediction markets” for a reason.)&lt;/p&gt;&lt;p&gt;When prediction markets try to entice women, they especially tend to lean into the idea that all of this is investing, not gambling. 
On Kalshi’s dedicated Instagram for women, @KalshiGirls, one &lt;a href="https://www.instagram.com/p/DQabmx8jSL_/"&gt;meme&lt;/a&gt; reads, “When someone says prediction markets are ‘just betting,’” over a photograph of Cher from &lt;em&gt;Clueless &lt;/em&gt;saying, “Ugh, as if.” Meanwhile, the ads for men tend to emphasize the fun of gambling and the possibly big payouts: “Dude,” reads an ad Kalshi ran in the lead-up to the 2024 presidential election, “I am going to bet my Cybertruck on Trump, probably gonna make enough for a house if he wins.”&lt;/p&gt;&lt;p&gt;Kalshi in particular has been ramping up its efforts with women. (Polymarket’s main site, where people bet using crypto, is accessible in the United States only through digital work-arounds.) The reason for appealing to women is simple, Elisabeth Diana, Kalshi’s head of communications, told me: “They’re 50 percent of the population.” She noted that 26 percent of Kalshi-account holders are female—up from 13 percent just 10 months ago. Diana claimed that much of that increase is because of organic interest, but the company seems intent on pulling in more women. Before ABC canceled Season 22 of &lt;em&gt;The Bachelorette&lt;/em&gt; a couple of weeks ago, Kalshi had been planning a watch party.&lt;/p&gt;&lt;p&gt;Sure enough, when I looked up all the ads that Kalshi has run on Instagram and Facebook, I spotted a fair number that were obviously geared toward women. In the clips, influencers tended to make small wagers with a clear goal in mind—usually caffeinated beverages. Polymarket taps into the same dynamic on its X account for female traders, @PolyBaddies. (I do not suggest you Google that phrase.) One post includes a photo of a Starbucks cup with the caption, “Matcha and markets kinda day &#128524;.” (Polymarket did not respond to requests for comment.)&lt;/p&gt;&lt;p&gt;Many of these marketing efforts are ridiculous. 
I would bet—sorry—that most women will not be compelled to spend their time on prediction markets to maybe win $5 for their morning matcha. But some ads are less “girl math” and more actual math. Priya Kamdar, Maya Shah, and Anika Mirza—the 20-something hosts of &lt;em&gt;Get the Check&lt;/em&gt;, a technology-and-business podcast—reached out to Kalshi directly to obtain a partnership deal because they were already using the site, the three hosts told me. Mirza has a Kalshi wager on the race to succeed Nancy Pelosi in Congress; Shah bet on how long the government shutdown was going to last; Kamdar put money on the Rotten Tomatoes score that each movie in the &lt;em&gt;Wicked &lt;/em&gt;franchise would receive (she was right about the first film and wrong about the second).&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/?utm_source=feed"&gt;Read: America is slow-walking into a Polymarket disaster&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The more women who are betting on prediction markets, the closer these sites get to their stated goal of forecasting the future. If they want to predict the Fed’s next interest rate, the winner of &lt;em&gt;The Bachelor&lt;/em&gt;, or whether or not it will rain tomorrow in Poughkeepsie, a market made up only of male sports fans won’t cut it. But Kalshi and Polymarket also have other incentives to show they are for women. Sports have an outsize popularity on prediction markets because these sites allow people to effectively wager even in states where sports betting is illegal. This is becoming a major problem for the companies. Kalshi is facing lawsuits from several states for allegedly operating as an unregistered sports-betting site. 
Arizona recently became the &lt;a href="https://www.npr.org/2026/03/17/nx-s1-5751165/kalshi-criminal-charges-arizona"&gt;first state&lt;/a&gt; to press criminal charges against Kalshi, and Nevada has temporarily blocked Kalshi and Polymarket from operating in the state. The companies, which maintain that they are financial markets and thus not subject to sports-betting restrictions, have a vested interest in getting users betting on topics besides sports. “It does future-proof them,” Dustin Gouker, a gambling-industry consultant who writes a daily newsletter, told me.&lt;/p&gt;&lt;p&gt;Perhaps the biggest concern with these ads is that they make it easy to forget that you can actually lose money on prediction markets. Shah, the podcast host, told me that if someone trades on topics they’re deeply knowledgeable about, prediction markets can be a useful “financial tool.” But they’re inherently risky. At one point, I was served an ad of a woman anxiously checking a Kalshi bet with her friends, with the caption, “I was about to be unable to pay my rent, but I got two years of rent through Kalshi’s predictions. It’s amazing! &#129392;&#129392;” When I searched for it again, the ad had been taken down; the next time I saw it was as an exhibit in a class-action lawsuit against &lt;a href="https://storage.courtlistener.com/recap/gov.uscourts.nysd.656144/gov.uscourts.nysd.656144.1.0.pdf"&gt;Kalshi&lt;/a&gt; that alleges, in part, that the site is not adequately disclosing risks to consumers. (Kalshi has denied the allegations.)&lt;/p&gt;&lt;p&gt;To hear the companies tell it, prediction markets are just another way to be a #girlboss. “Listen up, girlie pops! This platform is normally considered, like, for the finance bros, but I’m gonna show you why it’s so for us,” one woman says in a post seemingly sponsored by Polymarket. (The video includes no disclosures.) 
Kalshi and Polymarket become just another part of the day—platforms that women can use to check the odds even if they don’t place bets.&lt;/p&gt;&lt;p&gt;A year ago, I probably could not have told you what a prediction market was. By January, Polymarket odds were displayed during the Golden Globes, and &lt;a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/?utm_source=feed"&gt;CNN pundits&lt;/a&gt; were citing Kalshi’s markets on air. In February, Los Angeles’s Sunset Boulevard—a legendary street in my hometown, known for its clubs and neon signs—had a billboard displaying live Kalshi odds. These platforms are already ubiquitous. If women really do start using them en masse, prediction markets will burrow into American life even more deeply. Until then, the companies will keep reminding them to do some “girl math.”&lt;/p&gt;</content><author><name>Nancy Walecki</name><uri>http://www.theatlantic.com/author/nancy-walecki/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jnipV0CO946L_elzrZLK_Otbr00=/media/img/mt/2026/04/2026_03_26_GirlMath/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">It’s Not Gambling, It’s ‘Girl Math’</title><published>2026-04-01T12:59:00-04:00</published><updated>2026-04-02T10:08:31-04:00</updated><summary type="html">Prediction markets are trying to woo women through matcha memes and #girlboss ads.</summary><link href="https://www.theatlantic.com/technology/2026/04/kalshi-polymarket-gambling-women/686646/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686628</id><content type="html">&lt;p&gt;Recently, a Costco in Florida instituted a new store policy. An employee told me that he was asked to open up every desktop computer displayed in the electronics section and remove the memory chips. Otherwise, the RAM harvesters would get them. 
Elsewhere, &lt;a href="https://www.cargonet.com/news-and-events/cargonet-in-the-media/2025-theft-trends/"&gt;criminal groups&lt;/a&gt; are misdirecting trucks carrying RAM in order to loot them. All of this is happening because of a generational shortage of a part used in practically every electronic gadget on Earth.&lt;/p&gt;&lt;p&gt;RAM is your device’s short-term memory—storing the information it needs to handle any active tasks. (&lt;em&gt;RAM&lt;/em&gt; stands for “random-access memory.”) To put this in intimately familiar terms, it is what your computer runs out of when you have too many browser tabs open. And right now, the price of RAM is skyrocketing. From September to February, the price of a single 64GB stick of RAM went from roughly $250 to more than $1,000.&lt;/p&gt;&lt;p&gt;Gamers who build their own juiced computers were among the first to notice that something was off. Starting in the fall, it became so difficult for them to acquire memory sticks that they gave this crisis a name: RAMageddon. Now it’s quickly becoming everyone’s problem. In December, &lt;a href="https://www.businessinsider.com/dell-price-hikes-memory-demand-ai-chip-race-computer-2025-12"&gt;Dell jacked&lt;/a&gt; the prices of some of its computers by hundreds of dollars because of what its COO has referred to as “this memory crisis, shortage, whatever you want to call it.” Earlier this month, for the same reason, Lenovo raised prices on some of its products, including the popular ThinkPad.&lt;/p&gt;&lt;p&gt;This seems to be only the beginning. Matteo Rinaldi, the head of a global semiconductor-research institute run by Northeastern University, told me he recently asked a colleague what new laptop he should buy. “He told me right away, ‘Well, you know, it almost doesn’t matter which one,’” Rinaldi said. 
“‘Just decide you want to buy now, because prices are going up.’”&lt;/p&gt;&lt;p&gt;RAM is suddenly so expensive because memory is powering the AI boom. Data centers require huge amounts to run the models that underlie AI tools such as ChatGPT and Claude—especially as they become capable of handling more complicated tasks. This year, a group of tech giants—Amazon, Alphabet, Meta, Microsoft, and Oracle—is set to collectively spend half a trillion dollars on the AI build-out. Roughly a third of that money is being spent on memory alone, &lt;a href="https://www.dwarkesh.com/p/dylan-patel"&gt;according to&lt;/a&gt; Dylan Patel, the founder of SemiAnalysis, a popular semiconductor-research firm.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;Read: Welcome to a multidimensional economic disaster&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The insatiable demand has “cannibalized our conventional consumer-electronics supply,” Yang Wang, an analyst at Counterpoint Research, a market-research firm, told me. Every major RAM manufacturer has shifted production lines to service AI data centers. This year, 70 percent of memory-chip products made globally will be destined for them. In South Korea, where two of the biggest RAM manufacturers are based, Silicon Valley executives are &lt;a href="https://www.ndtv.com/feature/why-apple-is-sending-top-brass-to-south-korea-hotels-the-ram-shortage-war-10566863"&gt;reportedly booking&lt;/a&gt; hotels in the country’s tech districts, frantically hoping to secure inventory. A Korean newspaper has given them a name: RAM beggars.&lt;/p&gt;&lt;p&gt;Ideally, this problem would be solved by producing a whole lot more RAM. 
Micron, one of the biggest RAM manufacturers, is building a factory in New York that will cost more than any other private investment in the state’s history. Elon Musk recently suggested that Tesla will build its own RAM factories, called “fabs,” to ensure that he has enough memory to build robots and robotaxis. (“We’ve got two choices: Hit the chip wall, or make a fab,” he said in January.) But because of the complexity of making RAM, it could take even the richest man in the world two to five years to bring a new factory online. In the meantime, the world simply won’t have enough of a basic electronics part.&lt;/p&gt;&lt;p&gt;During RAMageddon, your gadgets will essentially be subject to an AI tax. It’s long been safe to assume that technology will get &lt;a href="https://www.cnet.com/tech/mobile/moores-law-is-the-reason-why-your-iphone-is-so-thin-and-cheap/"&gt;cheaper, faster, and better&lt;/a&gt;. But for the next few years, all signs suggest that devices will get more expensive, slower, and worse.&lt;/p&gt;&lt;p&gt;So far, it might not feel like all that much has changed. Earlier this month, Apple released its cheapest computer ever, the $599 Mac Neo. (It runs on a chip previously used only in iPhones.) But elsewhere, the price hikes have started. Samsung’s new Galaxy phones cost about $100 more than last year’s models, which the company’s COO &lt;a href="https://www.theverge.com/tech/885566/samsung-ram-galaxy-s26-price"&gt;has attributed&lt;/a&gt; in large part to the memory shortage. That’s despite the fact that Samsung is one of three companies in the world producing a significant amount of memory. Android phones have debuted this year with worse cameras, less storage, and slower processors than models released years ago, Wang told me, yet they still cost more.&lt;/p&gt;&lt;p&gt;Expect more changes like this. 
Gadget makers were initially able to swallow the rising cost of RAM, but in the long run, they’ll have little choice but to pass it on to consumers. Consider Sony, which just announced that it will raise the price of the PlayStation 5 by $100. Before the adjustment, the memory chips inside a PS5 were worth more than the console itself. Smaller video-game manufacturers have pushed back launches or canceled the release of new consoles altogether.&lt;/p&gt;&lt;p&gt;As RAM costs keep climbing, things might get weird. Companies may jack up software prices to compensate for all the money they are sinking into memory chips. Sony’s CFO said on a recent earnings call that the company will survive the RAM crisis by “&lt;a href="https://wccftech.com/playstation-5-price-increases-monetizing-install-base/"&gt;monetizing the installed base&lt;/a&gt;,” which seems to be a euphemism for finding ways to charge PlayStation owners more, or showing them more ads. (Sony did not respond to a request for comment.) At the same time, some companies may start to pare back products they’ve made “smart” to justify markups. Smart speakers, smart toilets, smart toasters, and smart deodorants (yes, really) all contain RAM. “Do we stop getting smart refrigerators? I don’t think that’s a net bad,” Laine Nooney, a technology historian at NYU, told me.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2022/09/who-controls-smart-thermostat-temperature-nest-ecobee/671559/?utm_source=feed"&gt;Read: Your smart thermostat isn’t here to help you&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;If that’s a silver lining, it’s not a particularly good one. &lt;a href="https://www.trendforce.com/presscenter/news/20260310-12959.html"&gt;TrendForce&lt;/a&gt;, a consumer-research firm, anticipates that laptop prices will rise by more than a third in the next few years. 
Computers under $500 will be extinct by 2028, according to a report from &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2026-02-26-gartner-says-surging-memory-costs-will-reduce-global-pc-and-smartphone-shipments-in-2026"&gt;Gartner&lt;/a&gt;. Put differently, cheaper computers may fall off the map. “The $300 Chromebook and the $150 Android phone were products of a specific era—one where memory was cheap because nobody else was competing for it at this scale,” Nate Jones, an AI analyst, told me. “That era is ending.”&lt;/p&gt;&lt;p&gt;The consequences are global. All of this will be felt acutely in poor countries, where sub-$150 smartphones are especially popular. Some people may have no choice but to revert to flip phones, potentially cutting them off from essential apps and services. “You can’t build a gaming PC? Cool story, bro,” Wang, the smartphone analyst, said. “But then people in Africa can’t get a device which is crucial for their lives.”&lt;/p&gt;&lt;p&gt;So much money is going into the AI build-out that it is already reshaping the physical world. The data centers that are sprouting up across the United States are at least partly to blame for rising utility bills. And now people who may never have heard of Claude or asked ChatGPT for homework help will feel the effects of RAMageddon. Hospitals have shelved plans to install touch screens that display medical charts and let patients order food, because the displays contain RAM, Rachael England, a manager at Vizient, a consulting firm that works with many U.S. hospitals, told me. Josh Bauman, the director of technology for a public-school district in Missouri, told me that if RAM prices keep increasing, his district may rethink buying a Chromebook for every student. 
For the foreseeable future, no one can escape the AI tax.&lt;/p&gt;</content><author><name>Hana Kiros</name><uri>http://www.theatlantic.com/author/hana-kiros/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/OLQOcGIhtO-EKQdPhG3KievZphM=/media/img/mt/2026/03/2026_03_20_RAMageddon/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">If You Need a Laptop, Buy It Now</title><published>2026-03-31T12:27:00-04:00</published><updated>2026-04-01T13:04:26-04:00</updated><summary type="html">Electronics are getting more expensive and worse. Blame the AI boom.</summary><link href="https://www.theatlantic.com/technology/2026/03/laptop-electronics-ram-ai-tax/686628/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686618</id><content type="html">&lt;p&gt;Thore Graepel may have been the first human to be vanquished by a superintelligence. In 2015, on his first day as a researcher at Google DeepMind, he was challenged to play against the earliest iteration of AlphaGo—a computer program developed by DeepMind that would prove so effective at the ancient Chinese game of &lt;em&gt;weiqi&lt;/em&gt; (or Go, as it is commonly known in the West) that it changed how humans play it, and then upended the field of AI itself.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When Graepel faced it, AlphaGo was just a “baby” project, as he put it to me, and he was an accomplished amateur player. But it still took him down. Then, the following year, AlphaGo—now fully developed—plowed through a number of human champions, ultimately crushing Lee Sedol, widely considered the best player in the world, with a match score of 4–1. This month marked the tenth anniversary of that victory.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For decades, developing a program that plays Go at an elite level was an infamous problem in computer science. 
Many considered it unsolvable—far harder than developing a similar program for chess, in which the supercomputer Deep Blue beat the world champion in 1997. In Go, two players take turns placing stones on a 19-by-19 grid, and those placements are relatively unrestricted. In chess, which has a far smaller board, a rook can move only along ranks and files and a bishop only along diagonals, but a Go stone can be placed on any open intersection. The number of possible Go positions is so high that it &lt;a href="https://tromp.github.io/go/legal.html"&gt;cannot be easily expressed in words&lt;/a&gt;; it is higher than the number of atoms in the observable universe, and orders of magnitude higher than the number of possible chess games. Today, the technical frameworks and approaches that allowed an algorithm to excel at this board game have translated fairly directly into bots that can write advanced code, help tackle open problems in mathematics, and replicate scientific discoveries from scratch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Generative AI is living in AlphaGo’s shadow. Beyond the actual models, “conceptual things emerged from the whole AlphaGo experience which essentially entered the AI vocabulary,” Pushmeet Kohli, the vice president of science and strategic initiatives at Google DeepMind, told me. In many ways, Go and chess provide ideal templates for understanding how the AI boom has unfolded—and a guide for what it may yet wreak.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind’s innovation was to essentially pair two algorithms: one AI model to propose moves and a second model to judge whether a move is good or not, allowing the system to devote computational resources to planning sequences of moves most likely to result in victory. AlphaGo then played itself thousands of times, improving from every mistake through a training process known as reinforcement learning. 
Today’s frontier AI labs faced an analogous problem: Large language models such as ChatGPT could spit out lucid sentences and paragraphs, but on challenging tasks in computer science, physics, and other areas that would require a human to really &lt;em&gt;think&lt;/em&gt;, chatbots were left stumbling in the dark. That began to change in late 2024 with the advent of &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;so-called reasoning models&lt;/a&gt;, an approach that now underlies all of the top bots from OpenAI, Google DeepMind, and Anthropic. And the idea behind these reasoning models “is surprisingly similar to AlphaGo,” as Noam Brown, a researcher at OpenAI, recently &lt;a href="https://x.com/polynoamial/status/2031404079583473953"&gt;put it&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/02/train-ai-chatgpt-to-play-video-game-pokemon/672954/?utm_source=feed"&gt;Read: A machine crushed us at Pokémon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The intuition behind chatbot reasoning is to have AI models work out a solution step-by-step, using a scratch pad of sorts, and then evaluate steps along the way to change course or start over as needed—very much like the two-step approach used by AlphaGo. The training method for these reasoning chatbots is the same as well: reinforcement learning. An algorithm can play lots of games of Go or attempt to solve lots of difficult math problems, then learn from its mistakes when it loses or errs. 
Today’s best AI models “can be traced back to some degree to the AlphaGo work,” Graepel said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps the most crucial insight shared between AlphaGo and the chatbot-reasoning breakthrough is a twist on the AI industry’s central dogma, the “scaling laws.” Traditionally, AI companies improved their large language models by training them on more data and with more computing power. In the case of AlphaGo and reasoning models, researchers realized that they could scale another dimension: having the program devote more time and computing power to a task, akin to how harder problems typically take humans more time to solve. For bots, this meant planning more and longer sequences of moves or using more words to “reason” through a tough coding task. That wasn’t guaranteed. “It could happen that you give them more time and they spend more time just getting confused,” Kohli said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After the success of AlphaGo, DeepMind made a successor program called AlphaZero. Whereas AlphaGo was initially shown a number of human Go matches as a baseline, AlphaZero became dominant at a number of games—Go, chess, and so on—purely by playing itself, with zero prior knowledge, and learning from each game. That an AI model essentially taught itself, very rapidly, to surpass the abilities of any human ever at multiple games might suggest that very rapid advances for today’s chatbots are on the horizon. By this logic, models could essentially figure out ways to improve themselves. But the success of AlphaGo and AlphaZero more likely signals obstacles ahead. 
The most important ingredient in AlphaGo was the simplicity with which one could measure success—win or lose—and thus give the machine feedback to improve.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed"&gt;Read: The human skill that eludes AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With board games, “we were always operating in a specific environment where the rules of the game were known,” Kohli said. “The systems of today are expected to operate in a much more general environment.” Reasoning models have found success mostly in areas that still have a relatively clear rubric for evaluation: whether an AI-written program works as intended, for instance, or whether an AI-written proof holds up. Instilling any notion of a more general intelligence in a machine will be a far more challenging problem than conquering even Go.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind has been able to design evaluations for more abstract ideas, for instance by orchestrating several AI agents to act as &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;a team of virtual “scientists”&lt;/a&gt; that will rank hypotheses about problems in biology. But even that system operates within a relatively constrained domain of biological reasoning and literature. It’s unlikely that any lab will come up with a single way to evaluate “general intelligence” that can be used to train a bot AlphaGo style, let alone one as straightforward as winning or losing a board game.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;Read: AI executives promise cancer cures. 
Here’s the reality.&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Still, the progress the AlphaGo approach has yielded for AI models in a number of scientific domains is impressive—so much so that, a decade after AI conquered humanity’s hardest board game, the nation is now in a frenzy over whether AI is about to first overhaul the economy and then unsettle the purpose of being human at all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Once again, chess and Go might offer guides. As a result of improving via self-play, AlphaGo and AlphaZero developed not only superhuman ability but also inhuman style, using tactics and strategies no human had previously considered. These AI strategies did not destroy the human pursuits of chess and Go; they &lt;a href="https://www.technologyreview.com/2026/02/27/1133624/ai-is-rewiring-how-the-worlds-best-go-players-think/"&gt;reignited&lt;/a&gt; new waves of human &lt;a href="https://www.theatlantic.com/technology/archive/2022/09/carlsen-niemann-chess-cheating-poker/671472/?utm_source=feed"&gt;creativity and strategy&lt;/a&gt;. The most optimistic analogy for today’s more broadly useful AI systems would be that they also, rather than providing a wholesale replacement for humans, will function as a sort of &lt;a href="https://www.theatlantic.com/technology/archive/2022/10/hans-niemann-chess-cheating-artificial-intelligence/671799/?utm_source=feed"&gt;complementary intelligence&lt;/a&gt;. 
Biologists, &lt;a href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/?utm_source=feed"&gt;mathematicians&lt;/a&gt;, and computer scientists are already finding ways in which today’s AI models are not simply speeding up their work but qualitatively changing the kinds of questions humans can ask and the discoveries we can make.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, the business proposition of generative AI is quite the opposite: that products such as ChatGPT and Claude Code can automate huge swaths of white-collar work, help students cheat their way through school, and allow humans to live mostly without thinking. Perhaps C-suite executives, like AI researchers, can learn a lesson from Go and chess. Like any sport, chess and Go are worthwhile because of human struggles and storylines, champions made and toppled, the very fact that people are doomed to be imperfect but always striving to become just a bit better. And rather than automating away human chess masters or destroying the sport and pastime, chess-playing AI models have helped the business of chess to boom.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Likewise, employees, managers, students, professors—really all of us—are always learning, often by failing, or at least we should be. That is useful and worth preserving in &lt;a href="https://www.theatlantic.com/ideas/2025/12/ai-entry-level-creative-jobs/685297/?utm_source=feed"&gt;plain economic terms&lt;/a&gt;. Nobody becomes world-class at anything without at some point being rather terrible at it, and allowing novices who might be less capable than a bot to build up skills is the only way you get experts with human judgment and abilities that surpass any AI. But more important than that economic rationale is an existential one: To grow or help another do so is a beautiful thing.
Some might call it being human.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U6SJyTz_GY-KuSqVbSFKPc_JlQM=/media/img/mt/2026/03/2026_03_27_AI2_mpg/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">A Game Plan for the AI Boom</title><published>2026-03-30T18:27:37-04:00</published><updated>2026-04-02T10:11:16-04:00</updated><summary type="html">Ten years ago, AlphaGo trounced human competitors—and its legacy is still present in today’s most advanced bots.</summary><link href="https://www.theatlantic.com/technology/2026/03/alphago-ai-boom/686618/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686559</id><content type="html">&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;he global economy&lt;/span&gt; has become dependent on the AI industry. Trillions of dollars are being invested into the technology and the infrastructure it relies on; in the final months of 2025, &lt;a href="https://www.barrons.com/articles/ai-investment-gdp-economy-e19c6d70"&gt;functionally all&lt;/a&gt; economic growth in the United States came from AI investments. This would be risky even in ideal conditions. And we are very far from ideal conditions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Much of the AI supply chain—chips, data centers, combustion turbines, and so on—relies on key materials that are produced in or transported through just a few places on Earth, with little overlap. In particular, the industry is highly dependent on the Middle East, which has been destabilized by the war in Iran. 
A global &lt;a href="https://www.newstatesman.com/international-politics/geopolitics/2026/03/the-world-energy-shock-is-coming"&gt;energy shock&lt;/a&gt; seems all but certain to come soon—the kind where even the &lt;a href="https://www.economist.com/finance-and-economics/2026/03/22/even-the-best-case-scenario-for-energy-markets-is-disastrous"&gt;best-case scenario&lt;/a&gt; is a disaster. The war could grind the AI build-out to a halt. This would be devastating for the tech firms that have issued historic amounts of debt to race against their highly leveraged competitors, and it would be devastating for the private lenders and banks that have been buying up that debt in the hope of ever bigger returns.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For the better part of the past year, Wall Street analysts and tech-industry observers have fretted publicly &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;about an AI bubble&lt;/a&gt;. The fear is that too much money is coming in too fast and that generative-AI companies still have not offered anything close to a viable business model. If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Until recently, that kind of crash felt hypothetical; today, it feels plausible and, to some, almost inevitable. 
“What’s unusual about this, unlike commercial real estate during the global financial crisis,” Paul Kedrosky, an investor and financial consultant, told us, “is all of these interlocking points of fragility.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;Read: Here’s how the AI crash happens&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Perhaps the clearest examples are advanced memory and training chips, which are among the most important—and are by far the most expensive—components of training any AI model. Currently, most of them are produced by two companies in South Korea and one in Taiwan. These countries, in turn, get a large majority of their crude oil and much of their liquefied natural gas—which help fuel semiconductor manufacturing—from the Persian Gulf. The chip companies also require helium, sulfur, and bromine—three key inputs to silicon wafers—largely sourced from the region. In addition, Saudi Arabia, Qatar, the United Arab Emirates, and other regional petrostates have become key investors in the American AI firms that purchase most of those chips.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Because of the war in Iran, the Strait of Hormuz is functionally closed to most shipping vessels, stranding one-fifth of the world’s exports of natural gas, one-third of the world’s exports of crude oil, and significant quantities of the planet’s exportable fertilizer, helium, and sulfur. Meanwhile, Iran and Israel have begun bombing much of the fossil-fuel infrastructure in the region, which could take many years to replace. 
In only a month of war, the price of Brent crude—a global oil benchmark—has jumped by 40 percent and could more than double, liquefied-natural-gas prices are soaring in Europe and Asia, and &lt;a href="https://www.reuters.com/business/energy/helium-prices-soar-qatar-lng-halt-exposes-fragile-supply-chain-2026-03-12/"&gt;helium spot prices&lt;/a&gt; have already doubled. The strait is “critical to basically every aspect of the global economy,” Sam Winter-Levy, a technology and national-security researcher at the Carnegie Endowment for International Peace, told us. “The AI supply chain is not insulated.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The situation could quickly deteriorate from here. A helium crunch could trigger a shortage of AI chips or cause chip prices to rise. AI companies need ever more advanced chips to fill their data centers—at higher prices, the massive server farms, already hurting from elevated energy costs caused by the war, would have almost no hope of becoming profitable. Without these chips, new data centers would not be built or would sit empty. Astronomical tech valuations, and in turn the entire stock market, could collapse.&lt;/p&gt;&lt;p class="dropcap"&gt;O&lt;span class="smallcaps"&gt;ne industry’s precarious position&lt;/span&gt; isn’t usually everyone’s problem. Unfortunately, AI is different. The biggest data-center players, known as hyperscalers, are among the biggest corporations in the history of capitalism; they include Microsoft, Google, Meta, and Amazon. But even they will be pressed by collectively spending nearly $700 billion on AI in a single year. In order to get the money for these unprecedented projects, data-center providers are beginning to take on &lt;a href="https://fortune.com/2025/11/19/big-5-ai-hyperscalers-quadruple-debt-fund-ai-operations/"&gt;colossal amounts of debt&lt;/a&gt;. 
Some of this is done through creative deals with private-equity firms including Blackstone, BlackRock, and Blue Owl Capital—which themselves operate as de facto shadow banks that, since the most recent financial crisis, have arguably become as powerful and as influential as Bear Stearns and Lehman Brothers were prior to 2008. Endowments, pensions, insurance funds, and other major institutions all trust private equity to invest their money.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For a while, it seemed like every time Google or Microsoft announced more data-center investments, their stock prices rose. Now the opposite occurs: The hyperscalers are spending far more, but investors have started to notice that they are not generating anything near the revenue they need. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. And the $121 billion of debt that hyperscalers issued in 2025, four times what they averaged for years prior, is &lt;a href="https://www.reuters.com/business/retail-consumer/analysts-revise-ai-hyperscaler-debt-forecasts-after-amazon-bond-sale-2026-03-17/"&gt;expected&lt;/a&gt; to grow dramatically.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;All of the major players in this investment ecosystem are vulnerable. Private-equity firms are being squeezed on both ends by generative AI: During the coronavirus pandemic, they bought up software companies, which are now plummeting in value because AI is expected to eat their lunch. Meanwhile, private equity’s new investment strategy, data centers, is &lt;em&gt;also&lt;/em&gt; falling apart because of AI. Blackstone, Blue Owl, and the like are sinking huge sums into data-center construction with the assumption that lease payments from tech companies will pay for their debt.
In order to pay for their investments, private-equity companies raised money from major financial institutions—but now the viability of those lease payments is coming into question as the hyperscalers’ cash flow is strained. “There’s a reason to think we’re seeing some of the same 2008 dynamics now,” Brad Lipton, a former senior adviser at the Consumer Financial Protection Bureau and now the director of corporate power and financial regulation at the Roosevelt Institute, told us. “Everyone’s getting tied up together. Banks are lending money to private credit, which in turn lends it elsewhere. That amps up the risk.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/2026/03/ai-job-loss-jevons-paradox/686520/?utm_source=feed"&gt;Annie Lowrey: How to guess if your job will exist in five years&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The way the money moves is concerning, but so is the AI industry’s underlying business model. At every layer, the technology appears to decrease the value of its assets. The advanced AI chips that make up the majority of the cost of a data center? Their value rapidly decreases as they are superseded by the next generation of chips, meaning that the ultimate backstop for all of the data-center debt—selling the data center itself—is not actually a backstop. The way that AI companies make money when people use their products is also deflationary. OpenAI, Anthropic, and others charge users for using “tokens,” the components of words processed by their bots. This means that tokens are an industrial commodity akin to, say, crude oil or steel. But unlike other commodities, the cost of each token is rapidly decreasing owing to advancements in AI’s capabilities. Kedrosky called this “a death spiral to zero.” As the value of a token plummets, the value of what data centers can produce also falls.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The war in Iran affects data-center finances as well. 
Should energy prices continue to skyrocket, so will the cost of this already very expensive computing equipment, because it needs tremendous amounts of energy to manufacture and operate. And the war has exposed physical risks to these buildings. Janet Egan, a senior fellow at the Center for a New American Security, described data centers to us as “large, juicy targets.” It is impossible to hide these facilities, which can cover 1 million square feet. Earlier this month, Iran bombed Amazon data centers in the UAE and Bahrain. American hyperscalers had been planning to build far more data centers in the region, because the Trump administration and the AI industry have sought funding from Saudi Arabia, the UAE, Qatar, and Oman. Now there’s a two-way strain on those relationships. The physical security of the data centers is more precarious, and the conflict is damaging the economic health of the petrostates, thereby jeopardizing a major source of further investment in American AI firms. The Trump administration “staked a lot on the Gulf as their close AI partner, and now the war that they’ve launched poses a huge threat to the viability of the Gulf as that AI partner,” Winter-Levy said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Plus, “what’s to prevent Iran or a proxy group, or another malign actor, from tomorrow launching an armed drone against a data center in Northern Virginia?” Chip Usher, the senior director for intelligence at the Special Competitive Studies Project, a national-security and AI think tank, told us. “It could happen. Our defenses are not adequate.” State-sponsored cyberattacks of the variety Iran is known for could also knock a data center offline.
You can build all manner of defenses—reinforced concrete, drone-interception systems—but doing so adds cost and time to already costly and slow construction.&lt;/p&gt;&lt;p class="dropcap"&gt;J&lt;span class="smallcaps"&gt;ust a few things going a bit wrong&lt;/span&gt; could compound, all at once, into a cataclysm. To wit: Qatari and Saudi money dries up. Sustained high oil and natural-gas prices drive up the costs of manufacturing chips and running data centers. Already cash-strapped hyperscalers struggle to make lease payments on their data centers, while similarly strained private lenders suffer as all of the AI bonds become deadweight. Tech valuations fall, taking public markets with them; private-equity firms have to sell and torch their assets, putting intense stress on the institutional investors and banks. The rest of the economy, drained of investment because everything was poured into data centers for years, is already weak. Unemployment goes up, as do interest rates. “Bubbles pop. That’s the system,” Lipton said. “What isn’t supposed to happen is that it takes down the whole financial system. But the concern here is that AI investment isn’t confined and may spread to the whole economy.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even if Iran and the Strait of Hormuz don’t directly trigger an AI-driven financial crisis, the odds are decent that another vector could. (Remember tariffs?) Energy prices could stay elevated for years, because the targeted fossil-fuel facilities in the Persian Gulf will take a long time to restore. As the U.S. directs huge amounts of attention and military resources toward Iran, it’s easy to imagine China launching an invasion of Taiwan—a scenario that &lt;a href="https://www.nytimes.com/2026/02/24/technology/taiwan-china-chips-silicon-valley-tsmc.html"&gt;terrifies&lt;/a&gt; Silicon Valley, because it would halt the production of chips needed to train frontier models. 
That’s not even considering the single Dutch company that makes the high-tech lithography machines used to print virtually all AI chips, or the German company that makes the mirrors used in those machines. “There are too many ways for it to fail for it not to fail,” Kedrosky said of the AI industry’s web of risk. “All you can say for sure is this is a fragile and overdetermined system that must break, so it will.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are, of course, possibilities other than a full-blown, AI-driven financial crisis. Data-center spending could cool gradually enough that a crash is avoided. The revenues of Anthropic and OpenAI have been multiplying every year, which proponents argue means that generative-AI products are on track to eventually become profitable. But on the current trajectory, that would still take years, and there are good reasons to think that this growth will slow or halt. Notably, the main draw of AI tools is “efficiency”: Rather than growing their overall output and the opportunities available to people, executives are hoping that AI will allow them to make cuts to their business operations. The medium-term success of generative AI would likely involve &lt;a href="https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/?utm_source=feed"&gt;millions of people being put out of work&lt;/a&gt;. The range of options seems to be somewhere from mildly bad to historically so.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Should the system break, much of the blame would lie squarely with the technology companies. The stakes of this build-out, from the beginning, have been framed in civilizational terms—a geopolitical race alongside an existential one. The winners will control the future and reap the rewards. 
At every step of the way, AI firms have appeared to prioritize speed above the physical security of data centers, supply-chain redundancy, energy efficiency and independence, political stability, even financial returns. And in that quest for unbridled growth, the AI industry has wrested ungodly amounts of capital from investors all looking for the next big thing, ensnaring the entire economy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Simultaneously, these firms have courted and even bent the knee to a presidential administration that has encouraged their “let it rip” ethos, only to watch as that same administration has plunged the industry into this emerging polycrisis. The AI industry was not made for the turbulence its leaders have helped usher in. The situation has grown so ungainly and untenable that, if Silicon Valley is merely forced to slow down, the viability of all this spending will likely be called into question in ways that could be devastating for many. In finance, being early is the same as being wrong. AI firms want the world to think they’re right on time. 
The world may have other plans.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/IVFBCxc2jIXqe2KB2LEnwKydNPU=/media/img/mt/2026/03/2026_03_26_datacenter_mpg/original.jpg"><media:credit>Nathan Howard / Bloomberg / Getty</media:credit><media:description>An Amazon Web Services data center in Manassas, Virginia</media:description></media:content><title type="html">Welcome to a Multidimensional Economic Disaster</title><published>2026-03-26T16:44:54-04:00</published><updated>2026-03-27T07:40:22-04:00</updated><summary type="html">The AI boom wasn’t built for the polycrisis.</summary><link href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686545</id><content type="html">&lt;p&gt;At the age of 14, Braden Peters began injecting himself with mail-order testosterone to make himself into something he wasn’t. By his account, the experiment ended when his parents, Kenneth and Lauren, discovered his supply and trashed it. Young Braden was apparently undaunted. He set up a post-office box and began ordering new chemicals—he’s since claimed to have taken crystal meth to stay lean—anything that would catalyze his transformation. He began tapping his face with a hammer in pursuit of perfect cheekbones. The goal was entirely superficial: to reshape his physical form so that other men would feel inferior in his presence, and so that women would want to have sex with him.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This, at least, is the origin story he’s told and retold over hundreds of hours of livestreams and interviews. 
In the pre-internet age, Peters might have passed through the world without notice, or at least without fame. But in 2026, at age 20, he is a popular influencer who calls himself Clavicular, after the span of his collarbones. He is among the most recognizable adherents of the radical-self-improvement project known as looks-maxxing. Hew closely to the credo, which includes all sorts of steroids and therapies, and you might even &lt;em&gt;ascend&lt;/em&gt;. That’s looks-maxxing terminology for becoming really, really hot.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Clav, as he’s known, has had a moment this year. Seemingly overnight, he became wildly popular among &lt;a href="https://www.theatlantic.com/ideas/archive/2023/01/lost-boys-violent-narcissism-angry-young-men/672886/?utm_source=feed"&gt;the lost boys&lt;/a&gt; of the internet—the kinds of people who spend their time watching Nick Fuentes, the white-supremacist influencer, and Andrew Tate, the proudly misogynistic elder statesman of the manosphere, who is currently awaiting trial on charges of rape and human trafficking (he has denied the allegations). In January, Clavicular joined Tate, Fuentes, and the extremist podcaster Myron Gaines at a nightclub in Miami. Videos of the group listening to the Kanye West song “Heil Hitler” went viral; Clavicular was singing along.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/nick-fuentes-livestream/685247/?utm_source=feed"&gt;Read: I watched 12 hours of Nick Fuentes&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;As his live videos have been clipped and reposted on more mainstream parts of the internet, Clavicular has continued to gain widespread attention.
There’s been a temptation among observers, including the media outlets that have covered this story over the past few months, to understand Clavicular as, essentially, a curiosity. He is a strange, attention-hungry young guy—the latest addition to a streaming ecosystem that celebrates extreme provocation. His peculiar online lingo, derived from the looks-maxxing community, has seeped into the culture.&lt;em&gt; Mogging&lt;/em&gt;, meaning “outclassing someone,” and -&lt;em&gt;maxxing&lt;/em&gt;, an all-purpose suffix denoting maximization of any kind, are inescapable online. Conan O’Brien described himself as “host-maxxing” during this year’s Oscars, and &lt;em&gt;Saturday Night Live&lt;/em&gt; parodied Clavicular in a “Weekend Update” sketch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But Clavicular’s rise is pernicious. The baseline concern with an influencer who takes a hammer to his face and says hateful things is that he is in some sense encouraging other people to do the same. Last month, a couple of fans came up to him during a livestream, and one shouted “Heil Hitler.” Clavicular tried to dismiss the comments as “cringe,” but he quite obviously set the tone. I have some authority here: After I left a note outside his parents’ house requesting an interview for this story, Clavicular shared my contact information online. As a reporter who covers the internet, I am used to being harassed—but I had never experienced so many direct violent threats, and so much virulent anti-Semitic hatred, as I have since then. The looks-maxxer insult “subhuman” kept coming up, as did the word &lt;em&gt;mongrel&lt;/em&gt;. (A spokesperson for Clavicular declined to answer my questions.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The bigger concern with Clavicular is not that he is encouraging a generation of young men to take extreme measures to change their looks.
It’s that because his antics are so ridiculous and his videos so entertaining to a certain crowd, he has allowed more coherent and dangerous ideologies to hitch a ride on his movement. The far-right manosphere has seemingly taken every opportunity it can to tie itself to Clavicular. Tate joined him on a stream last month to lift weights and offer advice about how Clav should handle his newfound fame. Jon Zherka, an adjacent influencer, recently &lt;a href="https://x.com/ZherkaOfficial/status/2034877588553043971"&gt;likened&lt;/a&gt; him to a “younger brother.” Last week, Fuentes called him a “prophet” for exposing the cynical reality of modern dating—a core part of Clavicular’s appeal among this group.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/podcasts/2026/02/the-manosphere-breaks-containment/685907/?utm_source=feed"&gt;Listen: The manosphere breaks containment&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Clavicular is of course getting something in return. Associating with the manosphere’s best-known figures has been a shortcut to fame and money. But he is also a different kind of influencer. Although he calls women whores and says the N-word, he is generally less focused on politics than are Fuentes and Tate, who are constantly weighing theories about power and opining about the state of the world. In fact, Clavicular does not tend to talk about politics much at all, and has repeatedly claimed that his message is distinctly apolitical. He trolls for views. &lt;em&gt;That&lt;/em&gt;, if anything, is his philosophy; the looks-maxxing is secondary. During a December interview with a conservative podcaster, Clavicular said that if the 2028 presidential election comes down to Gavin Newsom and J. D. Vance, he will vote for the California Democrat purely because Newsom mogs Vance with his looks. Last month, Clavicular told the comedian Adam Friedland that he’d never heard of New York City Mayor Zohran Mamdani. 
“I’m so far removed,” he said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even beyond the manosphere’s corner of the internet, the right-wing ecosystem as a whole has recently gotten much better at capitalizing on cultural trends. Whenever a viral moment might have a remotely right-wing cast, the machinery moves into place. After Sydney Sweeney starred in an American Eagle commercial last year that touted her “great jeans” (a pun about her denim and her genetics), some on the left accused her of endorsing eugenics. The right, in turn, &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/sydney-sweeney-american-eagle-ads/683704/?utm_source=feed"&gt;coalesced around her&lt;/a&gt;. A few months later, when sorority-dance videos &lt;a href="http://theatlantic.com/technology/archive/2025/08/sorority-rush-dance-maga-x/683894/"&gt;went viral&lt;/a&gt;, the online right immediately jumped in to say—without any evidence of the women’s actual views—that the dancers were owning the libs.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Weeks after Clavicular’s brief reign as the internet’s main character, his daily livestreams continue to collect hundreds of thousands of views. He is currently in the middle of a livestreaming marathon under the heading “Mog World Order” and will keep the cameras rolling nonstop for the next few weeks. The other day, a girl slapped him in the face at a nightclub. Fuentes, on his own stream, was indignant: “Kill, rape, and die for Clavicular—no, no, kidding, kidding, kidding, kidding!”&lt;/p&gt;</content><author><name>Will Gottsegen</name><uri>http://www.theatlantic.com/author/will-gottsegen/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/OvDHUvU6r6hQnPtIRFjcyeU0Dnc=/media/img/mt/2026/03/20260217_clavicular_2_1/original.jpg"><media:credit>Illustration by The Atlantic. 
Source: clavicular0 / Instagram</media:credit></media:content><title type="html">What Was Clavicular?</title><published>2026-03-26T07:30:00-04:00</published><updated>2026-03-26T08:13:24-04:00</updated><summary type="html">The internet’s most famous looks-maxxer is far more pernicious than he may seem.</summary><link href="https://www.theatlantic.com/technology/2026/03/clavicular-looksmaxxing-manosphere/686545/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686544</id><content type="html">&lt;p&gt;When I opened Sora this morning, I was met with a flood of strange and disturbing AI-generated videos. On OpenAI’s video app, I scrolled through fabricated scenes of the Iran war and a barrage of fake Donald Trumps blabbering about Jeffrey Epstein. In my least favorite clip, I watched a man deep-fry an infant. The app lets users create fairly realistic-looking AI-generated clips—including of their own likeness—and then post them on a TikTok-like feed. Not &lt;em&gt;all &lt;/em&gt;of them are so unsettling, and for better or worse, Sora has been a steady source of internet virality. Within days of its release, it skyrocketed to the top of the App Store.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now Sora will soon be dead. Yesterday, OpenAI said that it was shutting down the app and terminating public access to its video-generating technology. The decision was seemingly abrupt: Just a few months ago, Disney announced plans to invest $1 billion in OpenAI as part of a licensing deal to bring its characters to Sora, and earlier this week, workers from both companies were &lt;a href="https://www.reuters.com/technology/openai-set-discontinue-sora-video-platform-app-wsj-reports-2026-03-24/"&gt;apparently&lt;/a&gt; still collaborating. (Disney has since retracted its investment plans.) Even some Sora staffers themselves were reportedly caught off guard by the announcement. 
Online, people eulogized Sora by posting their favorite videos—such as one featuring a &lt;a href="https://x.com/emollick/status/2036788701586506121?s=20"&gt;column of spinning penguins&lt;/a&gt; and another in which &lt;a href="https://x.com/TrungTPhan/status/2036633266644815875?s=20"&gt;Jesus walks on water&lt;/a&gt; to win an Olympic gold medal in swimming.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After OpenAI launched the Sora app in September, Sam Altman predicted that society was about to undergo a stunning artistic revolution. “Creativity could be about to go through a Cambrian explosion,” he wrote online. But such a revolution never materialized. It’s not that people hate AI slop. In fact, if anything, people seem to have a surprising appetite for it—the latest TikTok trend is &lt;a href="https://www.nytimes.com/2026/03/24/style/ai-cheating-fruit-slop-videos-tiktok.html"&gt;raunchy telenovelas&lt;/a&gt; starring AI-generated fruit. In response to a request for comment, an OpenAI spokesperson pointed me to a public statement that cites “compute demand” as a key factor in the company’s decision. Generating videos is much more costly than generating text is, and Sora has likely been a real &lt;a href="https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spending-ai-generated-sora-videos/"&gt;financial drain&lt;/a&gt;: In the fall, &lt;em&gt;Forbes&lt;/em&gt; estimated that Sora might be costing OpenAI millions of dollars daily, and Bill Peebles, who leads Sora, &lt;a href="https://x.com/billpeeb/status/1984011952155455596?s=20"&gt;said&lt;/a&gt; that the economics were “completely unsustainable.” (OpenAI declined to comment on &lt;em&gt;Forbes&lt;/em&gt;’s estimates at the time.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The decision to quickly spin up a project and then suddenly pull the plug has become a classic OpenAI move. 
The company has spent the past few years cycling through new product features and business models with spectacular haste in an attempt to find its way to profitability. OpenAI seems to finally be learning that slop is not a business strategy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Altman has never had a great plan for how OpenAI will make money. “We have no idea how we may one day generate revenue,” Altman said at a 2019 event. He went on to explain that one day, AI will be smart enough that OpenAI will simply ask the computer how to generate an investment return. “You can laugh,” he told a (rightfully) amused audience. “But it is what I actually believe is going to happen.” After ChatGPT’s success a few years later, investors began pouring money into OpenAI, and Altman has done a tremendous job of marshaling investor funds. The start-up is now &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-bubble-defenders-silicon-valley/686340/?utm_source=feed"&gt;worth&lt;/a&gt; more than Toyota, Coca-Cola, and Disney &lt;em&gt;combined&lt;/em&gt;. But investors like to see returns, and so far, OpenAI hasn’t done much to prove that it is capable of generating enough cash to stay out of the red.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/2026/03/openai-economy-competition-anthropic/686420/?utm_source=feed"&gt;Read: The MySpace dilemma facing ChatGPT&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;That’s not to say that it hasn’t been trying: Over the past few years, OpenAI has explored just about every business model conceivable. Last summer, Altman &lt;a href="https://www.bloomberg.com/news/articles/2025-08-15/openai-s-altman-expects-to-spend-trillions-on-infrastructure"&gt;described&lt;/a&gt; OpenAI as four separate companies—a consumer-tech business, a massive-scale infrastructure project, an AI-research lab, and an incubator for “new stuff,” including hardware. 
(OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The trouble with trying to do everything is that sometimes you end up doing nothing well. Sora is the latest casualty in a long string of abrupt reversals, about-faces, and seemingly sloppily implemented projects. Last year, Altman announced a massive joint AI-infrastructure build-out with Oracle and SoftBank called Stargate, but the effort &lt;a href="https://www.theinformation.com/articles/inside-openais-scramble-get-computing-power-stargate-stalled?rc=ftwoob"&gt;stalled&lt;/a&gt;, reportedly following poor leadership and coordination. Altman &lt;a href="https://youtu.be/FVRHTWWEIz4?si=b2OjrsSd0sFQYOaV&amp;amp;t=2272"&gt;said&lt;/a&gt; in 2024 that combining ads and AI would be a “last resort” response—but then, earlier this year, the start-up launched an ads initiative. Last fall, OpenAI debuted a shopping feature, which allowed people to buy products directly inside ChatGPT; yesterday, the company announced that it was killing the feature and pivoting to focus on product discovery instead. In January, the company &lt;a href="https://www.axios.com/2026/01/19/openai-device-2026-lehane-jony-ive"&gt;said&lt;/a&gt; that the first of its much-awaited devices was “on track” to launch later this year, but weeks later, court filings &lt;a href="https://www.businessinsider.com/openai-timeline-hardware-ai-device-launch-jony-ive-iyo-2026-2"&gt;revealed&lt;/a&gt; that the company is unlikely to debut its new hardware before 2027. OpenAI originally banned NSFW content, and then it announced last year that it would make exceptions for such material, even planning a December rollout for erotica, only to later put erotica indefinitely on hold.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some amount of change in business plans is only natural for any company, let alone one in an industry as fast-moving as AI. 
But compared with its peers, OpenAI is especially chaotic in its strategy. The company’s plans are seemingly always provisional: No partnership or product road map feels guaranteed to endure. Earlier this year, Nvidia walked back a commitment to invest up to $100 billion in OpenAI. At the time, &lt;em&gt;The Wall Street Journal &lt;/em&gt;&lt;a href="https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3"&gt;reported&lt;/a&gt; that Nvidia CEO Jensen Huang had concerns with OpenAI’s “lack of discipline” in its business approach. (When asked about the report, Huang said that it was “nonsense” to suggest he was unhappy with OpenAI.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI’s haphazard business strategy has left the company to deal with an identity crisis of its own making. OpenAI is losing ground to Anthropic, its chief rival in the AI race, which has stuck with a targeted approach of selling productivity-enhancing AI tools to other businesses. Anthropic has had great success in its steadfast focus on the enterprise market. Now OpenAI is attempting to copy Anthropic’s playbook. “We cannot miss this moment because we are distracted by side quests,” Fidji Simo, OpenAI’s applications chief, &lt;a href="https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825?gaa_at=eafs&amp;amp;gaa_n=AWEtsqeNi7KZUpyc0R-CY0zW6U40-SzXhzLWrcn-4IZK0dq8H0FOpXEJv8BT3kT-OwM%3D&amp;amp;gaa_ts=69c40a9a&amp;amp;gaa_sig=2cWQJ6bPBmxZrmG5lOkZGaffyGigTDVFwDGG3rKwKALGs3bmMHcugiEQO1A4k2nWENSFxNkTT0Kj9rjAdG1BmA%3D%3D"&gt;reportedly&lt;/a&gt; told staff in a company-wide meeting earlier this month, explaining that the company needs to nail “productivity on the business front.” To do so, OpenAI is planning to nearly &lt;a href="https://www.ft.com/content/7ffea5b4-e8bc-47cd-adb4-257f84c8028b?syn-25a6b1a6=1"&gt;double&lt;/a&gt; its head count this year, including by hiring a team of specialists who will help other companies adopt its technology. 
Even at the product level, OpenAI appears to be copying Anthropic—OpenAI is apparently planning to launch a “superapp” to streamline its product offerings into one app, likely an attempt to compete with Anthropic’s Cowork and Claude Code. “We were spreading our efforts across too many apps,” Simo &lt;a href="https://www.wsj.com/tech/openai-plans-launch-of-desktop-superapp-to-refocus-simplify-user-experience-9e19931d?gaa_at=eafs&amp;amp;gaa_n=AWEtsqcUEU320HlVVXmFSgJGYL1_-ohapNpS-pcq3xFu7jOatmbZZBIGUHWpzzXxyrU%3D&amp;amp;gaa_ts=69c40cb1&amp;amp;gaa_sig=hWi3Y7WgJfpZ3PPcNbtbcXv9Jxb3tpiljzyU-shZBU80Gc3_pTY-GD8zY2b0M6IB_m_x01sx8ggLIjeW7GgRtw%3D%3D"&gt;wrote&lt;/a&gt; to employees last week. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After scrolling through Iran deepfakes and Trump slop on Sora this morning, I navigated to Altman’s account on the platform. I was curious to see what the company’s CEO might have to say about the end of Sora. The last time that Altman appears to have posted on the app was six months ago, when it launched. Perhaps that should have been a foreboding sign. I continued watching more clips until a pop-up filled my screen. OpenAI wanted to know how using Sora was affecting my mood. 
The app offered me a choice between “Thumbs-Up” and “Thumbs-Down.” I hit “Thumbs-Down.”&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jN0NRxco2DhLoUe_mhBiXlFKWEs=/media/img/mt/2026/03/2026_03_25_openAI_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">OpenAI Is Doing Everything … Poorly</title><published>2026-03-25T19:52:00-04:00</published><updated>2026-03-26T14:02:31-04:00</updated><summary type="html">The company’s sudden decision to pull the plug on Sora is a sign of deeper trouble.</summary><link href="https://www.theatlantic.com/technology/2026/03/sora-openai-identity-crisis/686544/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686535</id><content type="html">&lt;p&gt;Shower thoughts are typically best left in the shower. Such as: What might Claude the AI chatbot have to say about Claude Monet?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Earlier this month, San Francisco’s de Young Museum unveiled its newest exhibition, “Monet and Venice,” which is dedicated to the impressionist painter’s beautiful and meditative canvases of the floating city. And Anthropic, perhaps having seized on a marketing opportunity, is one of the show’s lead sponsors. Through tomorrow, visitors are able to partake in a temporary “interactive experience” that Anthropic set up in a room adjacent to the galleries. Essentially, the AI firm turned two typewriters into interfaces to chat with Claude. 
You type in a question about the exhibition, and Claude, based on information about Monet that the museum provided, such as exhibit labels, punches out an answer onto the same sheet of cream cardstock.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I approached one of the Claude typewriters, which were placed next to art books and paintbrushes on top of wooden desks, an employee instructed me on how to proceed and stressed, repeatedly, that I should not prompt the bot with more than eight to 10 words. To get things started, Claude typed onto the paper, “What caught your eye in Monet and Venice? Type a word or short phrase and I’ll tell you more.” Questions I really wanted to ask—about the intentions behind and effects of the seemingly coarse weave of the canvases, or how Monet, obsessed with color, selected his pigments—were hard to pare down on the spot. I wrote that I noticed “shimmering water in varying lights.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed"&gt;Read: The human skill that eludes AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Claude paused for several seconds, then typed a response about Monet’s approach to painting water that restated, in many instances verbatim, information that I’d learned from wall text throughout the galleries. I had follow-up questions, but the paper ejected too quickly for me to ask them. In theory, Claude the AI was supposed to deepen my knowledge of Claude the painter. But all the typewriter added to my experience was ink and, I suppose, a piece of reprocessed dead tree to take home.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic’s sponsoring of and installation alongside “Monet and Venice” is the latest in a litany of attempts by AI companies to purchase cultural cachet. 
Typewriters, stationery, fine-art museums, the quintessential impressionist painter—these are all associated with taste, beauty, and craft, as well as with intentionality and care, the opposite of the ruthless technological efficiency that repels many from generative AI. OpenAI, for its part, recently &lt;a href="https://www.wsj.com/tech/ai/openai-backs-ai-made-animated-feature-film-389f70b0?gaa_at=eafs&amp;amp;gaa_n=AWEtsqec9IrACTV2Hu6Qz2B51d0R8Ip0t3RaxFzNusvGvCHqgKjym9Z1dcnp&amp;amp;gaa_ts=69c3e861&amp;amp;gaa_sig=amt_w3AXK2WratACyVj-j6evd3RDQR_FmWWrUv2AD8OdsOXgLO7lzfFBKSbiSCf6kDfHR0J_6o03_rLWjMY9Qg%3D%3D"&gt;backed&lt;/a&gt; an AI-animated film aiming to debut at this year’s Cannes Film Festival. The ChatGPT maker has also partnered with the Palace of Versailles to create an app to let visitors “talk” with statues in the garden—spewing, it would &lt;a href="https://www.nytimes.com/2025/07/30/arts/design/versailles-ai-app.html"&gt;appear&lt;/a&gt;, empty clichés. (“Perhaps strength lies in understanding both beauty and power together,” Achilles told me.) Last fall, Anthropic partnered with Air Mail, a weekly newsletter with a small storefront in Manhattan, to distribute blue baseball hats that read &lt;span class="smallcaps"&gt;thinking&lt;/span&gt;, as in &lt;em&gt;thinking cap&lt;/em&gt;; tote bags; and little packets of Anthropic-branded, otherwise unlabeled wildflower seeds. I was too scared of what an “Anthropic” plant would be to sow mine.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yet this is also the same company that &lt;a href="https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/"&gt;ripped the spine&lt;/a&gt; off millions of books, scanned their pages, and &lt;a href="https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/?utm_source=feed"&gt;fed the text into Claude’s training data&lt;/a&gt;. Companies and wealthy scions donate to museums and sponsor exhibitions all the time, sure. 
Bank of America &lt;a href="https://www.brooklynmuseum.org/press/brooklyn-museum-presents-monet-venice"&gt;sponsored&lt;/a&gt; “Monet and Venice” at the Brooklyn Museum, where the show debuted; the Sackler family has eponymous museum wings around the country. Even so, leveraging historic artworks to elevate the brand of a company whose product is shaking the very foundations of human culture is just too on the nose. Let’s not pretend that the Claude AI–Claude Monet typewriter room is anything more than a hollow gimmick. (Anthropic declined to answer questions about the typewriters and exhibition sponsorship.)&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-art-ted-chiang-automation/679715/?utm_source=feed"&gt;Read: Ted Chiang is wrong about AI art&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;After using the device, I was directed to two file cabinets filled with Anthropic-branded postcards and &lt;span class="smallcaps"&gt;Keep thinking&lt;/span&gt; bookmarks. Stacked on top of one of the file cabinets were three large books titled &lt;em&gt;Édouard Manet&lt;/em&gt;,&lt;em&gt; Paul-Cézanne&lt;/em&gt;, and&lt;em&gt; Claude Monet.&lt;/em&gt; The errant hyphen in Cézanne’s name, and an identical font across all three covers that looked very similar to an Anthropic typeface, caught my eye. I picked up the top title, ostensibly about Manet, to examine its contents and found it to be almost weightless—these objects were not bound sheaves of paper, it turned out, but cardboard boxes. Even Jay Gatsby had the decency to fill his library with real books, if unopened ones.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Like many people, I adore both the work of Claude Monet and the canals of Venice. 
I was fortunate enough to grow up in New York City, going to the Metropolitan Museum of Art on weekends and the Museum of Modern Art for family programs, where Monet’s monumental water-lily canvases were among the many works that beckoned me to fall in love with painting. My mother went to college in Venice. I found the exhibition dedicated to Monet’s paintings of Venice enchanting; I had seen it in Brooklyn as well, and will surely return at least once more.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Monet’s dappled brushstrokes and the thick, coarse texture of his paint; how his palette varies by season and time of day, the same sea composed of stunning blues on one canvas and a fury of greens and pinks on an adjacent one; the impressionist’s paintings alongside depictions of Venice by James McNeill Whistler, Pierre-Auguste Renoir, and Canaletto—the exhibition beckons visitors to view canvases from up close and from afar, to look at paintings in isolation and in juxtaposition. I found myself most drawn to the lesser-known bridges and villas depicted, trying to recall if my mother and I had walked by them.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Monet sent letters and postcards across a continent of space and a century of time, to be imbued with new and varied meanings by every curator, software engineer, child, and parent who lays eyes on them. 
An art gallery was already an interactive experience.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/BXGqYuSEqup5VvO4pSD796g9-OM=/media/img/mt/2026/03/2026_03_23_Wong_Claude_Monet_final3/original.png"><media:credit>Illustration by Akshita Chandra / The Atlantic</media:credit></media:content><title type="html">When Claude Met Claude</title><published>2026-03-25T15:56:18-04:00</published><updated>2026-03-25T17:34:40-04:00</updated><summary type="html">Why is Anthropic sponsoring an exhibition about Monet?</summary><link href="https://www.theatlantic.com/technology/2026/03/claude-monet-ai-typewriter/686535/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686536</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;em&gt;Updated at 8:48 p.m. ET on March 25, 2026&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;After deliberating for nine days—and emerging &lt;a href="https://www.nbclosangeles.com/news/local/jurors-social-media-trial/3865553/"&gt;at one point&lt;/a&gt; to tell the judge that it was having a difficult time reaching a decision—a jury in Los Angeles finally returned its verdict today, finding both Meta and Google liable for creating addictive products that caused a young woman’s mental-health problems.  &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The two companies were ordered to pay $3 million in compensatory damages: 70 percent by Meta and 30 percent by Google. (Meta-owned Instagram played a larger role in the complaint than Google-owned YouTube, which explains the split.) This is hardly any money to either of these companies—Meta alone brought in nearly $60 billion in revenue over the last three months of 2025. 
But the verdict will lead others to pursue similar cases against tech companies (thousands are already pending), and possibly result in changes to the design of social-media apps.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Following the verdict’s announcement, Matthew Bergman, one of the plaintiff’s lawyers and the founding attorney of &lt;a href="https://www.seattletimes.com/business/massive-legal-siege-against-social-media-companies-looms/"&gt;the Social Media Victims Law Center&lt;/a&gt;, sent a lengthy statement to reporters. “This verdict carries implications far beyond this courtroom,” it read in part. “It establishes a framework for how similar cases across the country will be evaluated and demonstrates that juries are willing to hold technology companies accountable when the evidence shows foreseeable harm.”&lt;/p&gt;&lt;p&gt;A Meta spokesperson sent a shorter statement just after the verdict was read: “We respectfully disagree with the verdict and are evaluating our legal options.” In a later email, the company updated its statement, saying it would appeal the verdict. It also said: “Teen mental health is profoundly complex and cannot be linked to a single app.” Google will also appeal, according to the spokesperson José Castañeda. “This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site,” he wrote in an email.&lt;/p&gt;&lt;p&gt;The plaintiff in this case, a 20-year-old named Kaley, was referred to in case documents by her initials, KGM, because the events she was suing over happened when she was a minor. She originally filed against TikTok and Snap as well but settled with them before the trial.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The core questions of the case were whether the social-media platforms had been designed to be addictive, and whether a social-media addiction could be said to have played a direct role in causing the mental-health issues that KGM experienced as a child. 
In her complaint, she said she had a “dangerous dependency” on the platforms and that they had contributed to her “anxiety, depression, self-harm, and body dysmorphia.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Today’s news comes right on the heels of a verdict against Meta in another case, brought by the New Mexico Attorney General Raúl Torrez, which was announced yesterday. The jury for that trial agreed that Meta should pay a penalty of $375 million for thousands of violations of the state’s consumer-protection laws. The issue at stake there was relatively specific: The state &lt;a href="https://www.theatlantic.com/technology/2026/02/meta-child-safety-documents-instagram/686163/?utm_source=feed"&gt;argued&lt;/a&gt; that certain design and moderation choices left kids vulnerable to online predators on Meta platforms and indirectly enabled serious crimes. The facts were highly technical and, unlike the Los Angeles case, didn’t involve qualitative assessments of young people’s personal lives or thorny debates about whether social media can be addictive.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yet, it was a telling verdict and a massive judgment. Torrez emphasized its significance in a statement to reporters, writing, “New Mexico is proud to be the first state to hold Meta accountable in court for misleading parents, enabling child exploitation, and harming kids.” Meta plans to appeal the verdict, and sent its own statement to reporters yesterday, which read in part: “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. 
We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/meta-child-safety-documents-instagram/686163/?utm_source=feed"&gt;Read: How Meta executives talked about child safety behind the scenes&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;KGM’s case was novel because it treated YouTube and Instagram as fundamentally defective products. The issue wasn’t whether bad actors could exploit them but whether the platforms &lt;em&gt;themselves&lt;/em&gt; were dangerous. Online platforms are generally not legally responsible for the content that their users post; Meta, for example, would not be liable for bullying comments or self-harm imagery posted on Facebook. But the judge in this case, Carolyn Kuhl, decided that design features such as algorithmic feeds, auto-playing videos, and push notifications were valid targets. Members of KGM’s legal team successfully argued that Instagram and YouTube were created by companies that knew they were addictive and harmful and that chose not to warn consumers.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Though most people would usually think of product liability as applying to things such as poisoned baby powder and cars without seat belts, the idea here is that social media can have effects as tangible as those of physical goods, and we should think about it in the same terms. Such metaphors abounded in the trial. 
Mark Lanier, a member of KGM’s legal team, described social-media companies as lions hunting gazelles, and compared their products to &lt;a href="https://www.courthousenews.com/landmark-social-media-addiction-trial-heads-to-jury/"&gt;cigarettes&lt;/a&gt;, the &lt;a href="https://www.nbcnews.com/tech/social-media/social-media-trial-los-angeles-la-meta-youtube-rcna263063"&gt;free tortilla chips&lt;/a&gt; that patrons may mindlessly snack on at a restaurant, and &lt;a href="https://www.pbs.org/newshour/nation/lawyers-deliver-closing-arguments-in-landmark-social-media-addiction-trial"&gt;the baking soda in a cupcake&lt;/a&gt;. The baking-soda metaphor was meant to underscore that Instagram and YouTube had an outsize effect on KGM’s life, the way a tiny teaspoon of baking soda competes with more substantial ingredients such as flour or eggs in a cupcake recipe. But it was KGM’s own account of her experiences that appeared to move members of the jury, some of whom &lt;a href="https://www.latimes.com/california/story/2026-02-27/gender-could-play-major-role-in-la-social-media-addiction-suit"&gt;reportedly cried&lt;/a&gt; during her testimony.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Coverage of the case had died down significantly after the newsy high point of &lt;a href="https://www.wired.com/story/mark-zuckerberg-testifies-social-media-addiction-trial-meta/"&gt;Mark Zuckerberg’s testimony&lt;/a&gt; in mid-February, but a handful of reporters provided updates from Los Angeles. 
Both sides found expert witnesses who offered competing accounts of whether social media can literally be said to be “addictive.” The lawyers also told competing stories about what caused this one girl’s mental-health problems.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whereas Google’s closing arguments focused on whether KGM was actually addicted to YouTube and whether YouTube is more similar to television than it is to social media, Meta’s lawyers emphasized the other problems in KGM’s young life, including her fraught relationship with her mother and her older sister’s hospitalization for an eating disorder. They also called to the stand her former therapists, one of whom said that social media had rarely come up in their conversations. Another said that she believed that social media was “a contributing factor” in KGM’s anxiety, though not its primary cause. In his closing argument, Meta’s lawyer Paul Schmidt insisted that KGM’s representation needed to prove that taking Instagram out of her life would have made it “meaningfully different.” They didn’t do that, he said, though the jury apparently believed otherwise.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The case in Los Angeles was only the first of many—“a brick in a potential wall,” as the Cornell law professor James Grimmelmann put it &lt;a href="https://www.theatlantic.com/technology/2026/02/instagram-meta-addiction-lawsuits/685947/?utm_source=feed"&gt;when the trial began&lt;/a&gt;. In fact, Meta and other social-media companies are facing so much pending litigation that keeping track of it all can be hard. Jury deliberations in Los Angeles were simultaneous with those in New Mexico. 
The company will be a defendant in another upcoming bellwether trial in the Los Angeles court, this one filed on behalf of a minor identified by the initials RKC, who similarly claims that he became addicted to social media and that it caused him to experience suicidal ideation, body dysmorphia, anxiety, and depression, “among other harmful effects.” That trial is expected to start this summer. And at the same time, an enormous multi-district litigation incorporating thousands of personal-injury suits against major tech companies will proceed in Oakland, &lt;a href="https://www.mediapost.com/publications/article/412702/school-district-can-proceed-to-trial-against-socia.html"&gt;starting with&lt;/a&gt; a Kentucky school district’s complaint that social media has been so poorly age-gated and so distracting to young students that it has effectively become a public nuisance.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/06/social-media-teen-mental-health-crisis-research-limitations/674371/?utm_source=feed"&gt;Read: No one knows exactly what social media is doing to teens&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In these upcoming cases, new juries will be considering entirely new sets of personal facts, but they’ll also be considering the same basic questions about addiction, liability, and cause and effect. Of course, future juries may understand those issues differently than those who reported back this week. These questions are complicated, which is why we’ve ended up in the strange situation of hearing them argued in courtrooms in the first place. Many have compared this succession of lawsuits to those that took down Big Tobacco in the 1990s, though experts have also pointed out that the comparison between social media and cigarettes is not very exact. 
(“We’re not talking about a biological substance that you can consume that has a demonstrable chemical effect,” Pete Etchells, a professor of psychology and science communication at Bath Spa University, in England, &lt;a href="https://www.theatlantic.com/technology/2026/02/instagram-meta-addiction-lawsuits/685947/?utm_source=feed"&gt;told me in January&lt;/a&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Still, social media has clearly reached a fork in the road. The existential questions that all of these lawsuits are asking are whether it is possible for social-media platforms to directly cause mental-health issues and other serious, life-changing problems for young people, and whether it is feasible to hold the companies behind them accountable for that. The upcoming trials likely will not bring us to a totally satisfying answer on the first, but they will certainly shed a lot of light on the second.&lt;/p&gt;</content><author><name>Kaitlyn Tiffany</name><uri>http://www.theatlantic.com/author/kaitlyn-tiffany/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/fNg7swre5obkxlemiyTgCDpbizk=/media/img/mt/2026/03/2026_03_23_the_beginning_of_Metas_legal_battle/original.jpg"><media:credit>Roman Pilipey / AFP / Getty</media:credit></media:content><title type="html">A Legal Decision That Could Change Social Media</title><published>2026-03-25T15:40:22-04:00</published><updated>2026-03-26T14:01:57-04:00</updated><summary type="html">Jurors found Meta and Google liable for building apps that inflicted mental-health problems on a teenager, and similar lawsuits are on the horizon.</summary><link href="https://www.theatlantic.com/technology/2026/03/landmark-verdict-against-meta-and-google/686536/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686517</id><content type="html">&lt;p&gt;In 20 seconds on the night of March 22, the seamless sequence of arrivals, departures, and holds at LaGuardia 
Airport—along with all their required calls and responses—was upended. In that brief period, a Port Authority fire truck was cleared to cross runway 4, Frontier Flight 4195 was told to stop taxiing, Air Canada Express Flight 8646 was landing, and the fire truck was frantically told to stop—before it collided with the Air Canada flight, killing the pilot and co-pilot.&lt;/p&gt;&lt;p&gt;In air-traffic-control audio, the same controller is heard communicating with the aircraft and with the ground vehicles. Yesterday, the National Transportation Safety Board said in a news conference that two controllers were in the tower at the time of the collision: a controller who was assigned to handle communications within the immediate airspace and for operations on the active runways, and a controller-in-charge who was providing clearance instructions for all departing aircraft. This was standard operating procedure for LaGuardia and other airports for the midnight shift—but which of the two controllers was responsible for ground-control duties, and whether that controller was also handling arrivals in the minutes surrounding the accident, remains unclear. The NTSB noted in the news conference that it has received conflicting information concerning who was covering ground control.&lt;/p&gt;&lt;p&gt;Jennifer Homendy, the NTSB’s chairperson, cautioned against attributing the collision to a controller being distracted. But, she said, the conditions at LaGuardia were “a heavy workload environment,” and the NTSB has raised concerns in other accident investigations about fatigue during lightly staffed midnight shifts.&lt;/p&gt;&lt;p&gt;However standard a two-person shift might be, that a single controller was responsible, even for a short time, for directing so many simultaneous operations is a stark reduction in acceptable safety margins for the airport. 
An environment like that, especially when diverse events occur in rapid succession as they did Sunday night, can cause what aviators know as “task saturation.”&lt;/p&gt;&lt;p&gt;There are moments in aviation called “critical phases of flight,” such as takeoff and landing, when flight crews have numerous tasks to complete precisely and in rapid order. The addition of other duties or unexpected complications—no matter how small—can cause a crew to become overwhelmed and struggle to manage its duties. Air traffic controllers can experience the same sense of being overwhelmed as they direct activities and operators that vary in type and number; a rapid cascade of tasks can quickly become difficult, or even impossible, to handle. In these moments of saturation, accidents might happen, and it appears that, on March 22, the combination of arrivals, departures, a declared emergency, and a ground-vehicle response saturated the controller managing the bulk of LaGuardia’s ground and tower operations. In the audio, after the crash, he tells a pilot: “We were dealing with an emergency earlier, and I messed up.”&lt;br&gt;
&lt;br&gt;
Although the collision occurred at 11:37 p.m., the accident’s origins can be traced back an hour earlier, as both the air-traffic-control audio and early NTSB comments make clear. At 10:40 p.m., right around the time the midnight shift clocked in, United Airlines Flight 2384 aborted a takeoff on runway 13, after a warning light went on in the cockpit; the crew then taxied the Boeing 737 Max 8 around for a second attempt at takeoff, which was also aborted. At that point, a strange odor in the cabin was reported, and flight attendants complained of sudden illness. The crew taxied off the runway and sought clearance to a terminal gate; none was available. Unable to return, they parked on a taxiway and declared an emergency.&lt;br&gt;
&lt;br&gt;
For the controller handling both ground and tower communications in this period, the United flight’s distress was a significant situation that posed its own concerns. Air traffic control now had to prepare for the possibility of disembarking passengers on the taxiway using an airstair truck and transporting them to the terminal. If a chemical event occurred on the United flight, that could escalate the situation further. After the crew declared an emergency, multiple emergency vehicles began  responding, including the truck that would soon collide with Air Canada Express Flight 8646. At the same time, multiple flights were inbound for landing, and Frontier Flight 4195 was taxiing in close proximity to the emergency equipment, which needed to cross runway 4 to reach the United Airlines aircraft.&lt;/p&gt;&lt;p&gt;The controller cleared Air Canada Flight 8646 at 11:35 p.m. as the second to land on runway 4. At that moment, as multiple aircraft and vehicles converged on the same space, he likely found himself experiencing task saturation. After the collision, the controller could be heard calling out to Flight 8646, informing the crew that assistance was on the way. He did not know the two pilots were dead, or that the fire truck and its injured crew were strewn across the runway. Nor did the controller have time to dwell on what happened: He had to immediately inform Delta Flight 2603, the aircraft behind Flight 8646, to climb to 2,000 feet and go around, as runway 4 was now closed.&lt;br&gt;
&lt;br&gt;
At a news conference on Monday, Secretary of Transportation Sean Duffy characterized LaGuardia as “well staffed”—the staffing target is 37, he noted, and the tower currently has 33 controllers and seven more in training. On Tuesday, the NTSB said it was still investigating how many certified professional controllers were assigned to the facility, what happened at shift change, and whether anyone was available to relieve the controller working at the time of the collision. Normally, Homendy said, the controller would have been relieved, but he was on duty for several minutes after the accident. A spokesperson for the Port Authority, which operates LaGuardia, said the agency could not comment on specifics of an ongoing investigation and was focused on “ensuring investigators have full access and support as they carry out a thorough and independent review.”&lt;/p&gt;&lt;p&gt;As the crash shows, air-traffic-control staffing is crucial to aviation safety. And although the federal government has made efforts to hire aggressively and streamline the process, the United States has fewer controllers than it needs. This situation has not improved in decades, even as flight traffic has increased. The &lt;a href="https://www.gao.gov/blog/while-thousands-applied-become-air-traffic-controllers-theres-still-shortage-we-looked-why"&gt;Government Accountability Office&lt;/a&gt; documented the ongoing problem in a recent report, which noted that controller attrition and the agency’s ponderous hiring procedures contribute to the long-term shortage.&lt;/p&gt;&lt;p&gt;Like many administrations before it, the Trump administration has also been pushing to &lt;a href="https://www.whitehouse.gov/articles/2025/05/icymi-trump-administrations-plan-to-modernize-air-traffic-control-system/"&gt;modernize the air-traffic-control&lt;/a&gt; technology, and on Monday at LaGuardia Duffy reiterated his call for additional funding for the Brand New Air Traffic Control System. 
(The project was originally funded at $12.5 billion, but Duffy has said it would ultimately cost $31.5 billion.) This latest attempt at modernizing equipment and facilities follows the doomed tenure of the Next Generation Air Transportation System—which ate up 20 years and $15 billion of federal funds before it was canceled in 2024—and of the underfunded and mismanaged Advanced Automation System, which was given 13 years before it was canceled in 1994. After declaring the need for more congressional funding for the Trump administration’s modernization plan, Duffy acknowledged that new equipment would not necessarily have prevented the crash, but said that “if we care about air-travel safety, we care about having a brand-new air-traffic-control system, the best in the world with the best equipment, virtually all of it developed here in America.”&lt;/p&gt;&lt;p&gt;But the “best equipment in the world” doesn’t help if the Federal Aviation Administration doesn’t have enough people trained to use it, or enough people, period. Calls for increased staffing are not new: The 2011 scandal of exhausted controllers falling asleep in towers and an increase in near misses in the 1980s both raised questions about staffing, for instance. Reports of understaffing extend back to the FAA’s early years, when the agency strove to handle the transition from slower propeller aircraft to the faster and more efficient jets that rapidly transformed the industry.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In 1967, the FAA requested about $100 million to both modernize its equipment and hire more controllers. At the time, the agency reported that it had 14,000 controllers and technicians (who maintain the nation’s aviation infrastructure) but that controllers could not keep up with air-traffic increases. They were simply being asked to work harder. President Lyndon B. Johnson denied the request, telling the agency to maintain air safety with its existing funding. 
(He suggested, in fact, that the agency borrow from its equipment budget.) As one airline source told &lt;em&gt;The New York Times&lt;/em&gt; that year, the president “has told the agency not to allow any crashes … He has said ‘make the service fit the system’ instead of ‘make the system fit the service.’”&lt;br&gt;
&lt;br&gt;
In 2025, the U.S. had 10,800 certified professional controllers and 4,869 technicians, according to their respective unions. That total is shockingly close to the figure from nearly 60 years ago. While air traffic has exploded in that period, staffing has perpetually failed to keep pace. The FAA today has little choice but to resort to the same strategies employed in the Johnson administration: Slow down air traffic, and work controllers harder. When accidents occur, they bring the fallibility of that strategy into stark relief.&lt;/p&gt;</content><author><name>Colleen Mondor</name><uri>http://www.theatlantic.com/author/colleen-mondor/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/7_WS12qPlXrIICPgSOIAaPH3g9k=/media/img/mt/2026/03/2026_03_24_LaGuardias_Air_Traffic_Controller_Had_Too_Much_To_Do/original.jpg"><media:credit>Michael Nagle / Bloomberg / Getty</media:credit></media:content><title type="html">Twenty Seconds of ‘Task Saturation’ at LaGuardia</title><published>2026-03-25T10:24:55-04:00</published><updated>2026-03-25T15:29:14-04:00</updated><summary type="html">Having two controllers on a midnight shift might be standard procedure, but they can still be overwhelmed.</summary><link href="https://www.theatlantic.com/technology/2026/03/la-guardia-crash-air-traffic-control/686517/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686477</id><content type="html">&lt;div&gt;
&lt;style type="text/css"&gt;figure.c-embedded-video {
  width: 100%; height: 0; overflow: hidden; padding-bottom: 56.25%; position: relative;
}
  figure.c-embedded-video video {
    width: 100%;
    height: auto;
  }
&lt;/style&gt;
&lt;/div&gt;&lt;div&gt;&lt;em&gt;&lt;small&gt;Editor’s note: This work is part of &lt;/small&gt;&lt;/em&gt;&lt;a href="https://www.theatlantic.com/category/ai-watchdog/?utm_source=feed" rel="noopener noreferrer nofollow" target="_blank"&gt;&lt;em&gt;&lt;small&gt;AI Watchdog&lt;/small&gt;&lt;/em&gt;&lt;/a&gt;&lt;em&gt;&lt;small&gt;, &lt;/small&gt;&lt;/em&gt;&lt;small&gt;The Atlantic&lt;/small&gt;&lt;em&gt;&lt;small&gt;’s ongoing investigation into the generative-AI industry.&lt;/small&gt;&lt;/em&gt;&lt;/div&gt;&lt;div&gt;
&lt;hr&gt;
&lt;p&gt;In April 2024, Eric Schmidt, the former Google CEO and a current AI evangelist, gave a closed-door lecture to a group of Stanford students. If these young people hoped to be Silicon Valley entrepreneurs, Schmidt explained, then they should be prepared to breach some ethical boundaries.&lt;/p&gt;
&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At that point, 19 lawsuits had been filed against generative-AI companies for copyright infringement, alleging that Anthropic, OpenAI, and others had stolen books and other media to train their generative models. Yet Schmidt told the students to go ahead and download whatever they needed to build an accurate “test” version of their AI product. If the product takes off, “then you hire a whole bunch of lawyers to go clean the mess up,” he said. “If nobody uses your product, then it doesn’t matter that you stole all the content.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Stanford posted a video of the talk on YouTube in August 2024, but it was removed a day later. (Stanford did not respond to my request for comment about the removal.)&lt;/p&gt;&lt;figure class="c-embedded-video" data-video-upload-id=""&gt;
&lt;video controls="controls" height="720" playsinline="playsinline" poster="https://cdn.theatlantic.com/thumbor/z-rL7yH9O5dYYbQ-tRpKJYoYVMQ=/filters:still()/media/files/eric_schmidt_at_stanford_clip.mp4" preload="metadata" src="https://cdn.theatlantic.com/media/files/eric_schmidt_at_stanford_clip.mp4" title="" width="1280"&gt;&lt;/video&gt;
&lt;/figure&gt;&lt;p&gt;When I recently obtained a copy, I was struck by Schmidt’s readiness to say the quiet part out loud. He was articulating an attitude that is common in Silicon Valley but is usually stated as a legal or philosophical argument. When I reached one of Schmidt’s spokespeople, they defended his position by telling me that Schmidt believes that the “fair use” of copyrighted work drives innovation. Others in the industry have cited the techno-libertarian idea that “information wants to be free,” a frequently &lt;a href="https://www.theatlantic.com/technology/2025/11/common-crawl-ai-training-data/684567/?utm_source=feed"&gt;misunderstood&lt;/a&gt; credo that portrays information as a natural resource that should flow without restriction to whoever can use it.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the credo never seems to apply to Silicon Valley’s own information, whether it’s the troves of personal data that companies have collected about us or the software they write. Photoshop, for example, doesn’t want to be free. In fact, Photoshop is one of thousands of tech-industry products that are protected by patents. Inventions such as Google’s original search algorithm and even design details, such as the &lt;a href="https://arstechnica.com/gadgets/2012/11/apple-awarded-design-patent-for-actual-rounded-rectangle/"&gt;“rounded rectangle” shape&lt;/a&gt; of Apple’s iPhone, have also been patented, and companies employ teams of high-end attorneys to prosecute infringements.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The industry has long been a kind of intellectual-property battle zone, where damages in lawsuits frequently exceed nine figures. 
In 2017, for example, Waymo, Google’s self-driving-car company, &lt;a href="https://medium.com/waymo/a-note-on-our-lawsuit-against-otto-and-uber-86f4f98902a1"&gt;alleged&lt;/a&gt; that a former employee had stolen “confidential files and trade secrets, including blueprints, design files and testing documentation” for self-driving cars that were eventually shared with Uber. The case was settled for roughly $245 million. In the 2010s, Apple sued Samsung for copying elements of the iPhone and was initially awarded more than $1 billion in a patent-infringement battle that lasted seven years. Apple and Qualcomm have sued each other over IP in so many jurisdictions that it’s &lt;a href="https://www.theverge.com/tech/2019/3/22/18275884/apple-qualcomm-antitrust-modem-patents-ftc-fine-eu-anticompetitive"&gt;hard to track&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In the pursuit of generative AI, tech companies have recently turned their aggressive strategies toward less prepared industries. As my reporting has shown, many top AI models have been trained on data sets containing massive numbers of &lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;copyrighted books&lt;/a&gt;, &lt;a href="https://www.theatlantic.com/technology/archive/2025/09/search-youtube-videos-generative-ai/684158/?utm_source=feed"&gt;videos&lt;/a&gt;, and &lt;a href="https://www.theatlantic.com/technology/archive/2024/11/opensubtitles-ai-data-set/680650/?utm_source=feed"&gt;other works&lt;/a&gt;. 
This large-scale piracy has been excused in a number of ways: OpenAI (which has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;’s business team) has &lt;a href="https://scrapsfromtheloft.com/podcasts/altman-carlson-interview-transcript/"&gt;claimed&lt;/a&gt; that the company uses “publicly available information” to train its models; Anthropic has &lt;a href="https://assets-us-01.kc-usercontent.com/1eeb16db-4934-006e-40a6-38fa91285ebb/d8578720-9fd0-4c27-9c7f-041bac826869/Class%20Action%20Settlement%20Agreement.pdf"&gt;said&lt;/a&gt; that it has used books, but not in any commercial products; and Meta &lt;a href="https://www.courtlistener.com/docket/67569326/23/kadrey-v-meta-platforms-inc/"&gt;admits&lt;/a&gt; that it has used books in commercial products, but argues that doing so was “quintessential fair use.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even as they claim the right to train their models on work belonging to other people, the AI companies have rejected similar reasoning when it comes to their own products. Consider OpenAI’s &lt;a href="https://openai.com/policies/terms-of-use/"&gt;terms of service&lt;/a&gt; for ChatGPT, which forbid use of the bot’s “output to develop models that compete with OpenAI.” &lt;a href="https://www.anthropic.com/legal/consumer-terms"&gt;Anthropic&lt;/a&gt;, &lt;a href="https://ai.google.dev/gemini-api/terms"&gt;Google&lt;/a&gt;, and &lt;a href="https://x.ai/legal/terms-of-service"&gt;xAI&lt;/a&gt; have similar clauses forbidding people from using the material generated by their chatbots to train competing products. In other words: We can train on your work, but you can’t train on ours.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In the current economic environment, it’s not surprising that companies vying for market dominance would operate with standards that serve their bottom line. But it’s striking nonetheless how sharply their actions can contradict their professed values. 
Meta apparently does not want copies of its models on the web, even though it claims those models are “open,” a word that &lt;a href="https://www.businessinsider.com/meta-llama-2-ai-model-not-open-source-2023-7"&gt;typically means&lt;/a&gt; software is free and publicly available, and that implies a degree of goodwill or generosity on the part of the creator. It has &lt;a href="https://www.businessinsider.com/meta-copyright-protect-ai-model-argues-against-law-everyone-else-2024-1"&gt;reportedly&lt;/a&gt; sent notices demanding the deletion of such copies from online platforms. (Meta did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Companies also know the value of training data, and at least one of them foresaw the backlash that taking such data might create. In 2021, one year before OpenAI released ChatGPT and two years before &lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;my reporting first revealed&lt;/a&gt; what was being used as AI-training data, Anthropic CEO Dario Amodei wrote an &lt;a href="https://www.courtlistener.com/docket/69058235/563/2/bartz-v-anthropic-pbc/"&gt;internal memo&lt;/a&gt; titled “An Economic Model for Compensating Data Producers.” (It was recently unsealed in a copyright-infringement lawsuit against the company.) In the document, Amodei acknowledges that AI could be “an increasingly extractive concentrator of wealth” and that creators might eventually “grumble” or “get mad” as this fact becomes apparent. Resistance from creators might slow down AI progress, Amodei writes, and for this reason, he suggests compensating them “with a fraction of the profits from the model produced.” Giving creators equity in the company could be a “great fit” for Anthropic’s “public benefit orientation,” Amodei writes. 
Today, Anthropic still claims to provide a public benefit, but it has argued in court that using copyrighted books is “fair use”—meaning, essentially, that the authors are entitled to nothing. Anthropic declined to comment when I reached out for this article.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Companies argue that AI training is &lt;a href="https://www.theatlantic.com/technology/archive/2024/02/generative-ai-lawsuits-copyright-fair-use/677595/?utm_source=feed"&gt;fair use&lt;/a&gt; because their AI models produce original work that is not derived from the sources they use for training. This is not necessarily true: My reporting has &lt;a href="https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/?utm_source=feed"&gt;shown&lt;/a&gt; that chatbots and image generators can produce near-exact copies of media they were trained on, spitting out near-complete copies of &lt;em&gt;Harry Potter and the Sorcerer’s Stone&lt;/em&gt;, for example, or rendering images that are fuzzy copies of existing artwork. But companies have tried to downplay this fact and focus the copyright discussion elsewhere, even invoking geopolitics and the idea of an international “AI race” as a sort of trump card. “Without fair use access, the race for AI is effectively over. America loses,” OpenAI &lt;a href="https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539-4653-b297-8bcf6e5f7686/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf"&gt;wrote&lt;/a&gt; to the Office of Science and Technology Policy last year.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Not everyone in the AI industry is on the same page. 
Ed Newton-Rex, a former VP of audio at Stability AI, quit his job in November 2023 and &lt;a href="https://twitter.com/ednewtonrex/status/1724902327151452486"&gt;wrote&lt;/a&gt; on X that, regardless of fair use, which “wasn’t designed with generative AI in mind,” he didn’t see how current AI-training practices “can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.” Newton-Rex started a nonprofit called &lt;a href="https://www.fairlytrained.org/certifications"&gt;Fairly Trained&lt;/a&gt;, which certifies AI models that are trained on properly acquired data.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;It’s worth noting that Silicon Valley has itself regularly been a victim of IP theft, in the form of software piracy. Partially in response to that problem, major companies have changed how software is distributed. Today, you cannot just buy Adobe Photoshop: Instead, you pay a rental fee to access the program, which verifies your license every time you use it. Microsoft has taken a similar approach with the 365 version of its Office suite, and Google’s office software can’t be downloaded at all. These companies have made their IP harder to steal by developing new methods of controlling access—an option that is not realistically available to the artists, authors, and open-source-software developers they take material from.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Given the double standard, it’s difficult to tell whether Silicon Valley’s arguments about fair use are genuine or just legally expedient. On one hand, generative AI is a new technology that raises new questions about the use of copyrighted work. On the other hand, the AI industry’s aggressive approach is business as usual for Silicon Valley: moving fast and breaking things. 
And betting that the lawyers can “clean the mess up.”&lt;/p&gt;</content><author><name>Alex Reisner</name><uri>http://www.theatlantic.com/author/alex-reisner/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/FmrYoEhO1IBK_LctB5ZoZnkxDuI=/media/img/mt/2026/03/20260212_tech_IP_2/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Hypocrisy at the Heart of the AI Industry</title><published>2026-03-20T11:50:10-04:00</published><updated>2026-03-20T14:06:52-04:00</updated><summary type="html">Tech companies believe in intellectual property, but not yours.</summary><link href="https://www.theatlantic.com/technology/2026/03/hypocrisy-ai-industry/686477/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686443</id><content type="html">&lt;p&gt;Last summer, a friend called bearing bad news: Her two-year relationship was finished. In between insisting that she was, in fact, totally fine, and that everything was probably for the best, she told me that her (now ex-) partner had accused her of cheating.&lt;/p&gt;&lt;p&gt;My friend had not, to be clear, slept with anybody else, or gone on any illicit dates. But her partner, consumed by suspicion when it came to my friend’s platonic relationships, had gone through my friend’s phone and stumbled upon old messages that were too affectionate, too “flirty.” She broke up with my friend that night.&lt;/p&gt;&lt;p&gt;Some people might feel sympathetic toward my friend’s ex. Others might find the entire ordeal, to use the technical term, absurd. 
Whatever the stance, a growing number of mental-health influencers are giving language to the debate: What my friend did, they say, was “micro-cheating.”&lt;/p&gt;&lt;p&gt;As with &lt;a href="https://yougov.com/en-us/articles/43605-how-many-americans-have-cheated-their-partner-poll"&gt;plain old infidelity&lt;/a&gt;, micro-cheating is tricky to define; behavior that is fair game to one person might be egregious treachery to another. Many people have attempted to catalog it anyway. According to a number of lifestyle publications, a micro-cheater could be someone who, while in a relationship, maintains an active Hinge profile or sends explicit pictures to another person. Or they could have done something that might otherwise seem banal: “liking” someone else’s &lt;a href="https://www.instagram.com/p/DG9dYdVRexr/"&gt;Instagram post&lt;/a&gt;, perhaps, or messaging a colleague about something other than work. In a &lt;em&gt;Vogue&lt;/em&gt; article advising readers on how to properly recognize a micro-cheater, a couples therapist &lt;a href="https://www.vogue.com/article/what-is-micro-cheating"&gt;concluded&lt;/a&gt; that micro-cheating could be anything, really: “a glance, a laugh, or non-sexual touching that’s too familiar or intimate.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2017/10/why-happy-people-cheat/537882/?utm_source=feed"&gt;Read: Why happy people cheat&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Whether something amounts to cheating is ultimately up to the people in a relationship to decide. But with micro-cheating, the general consensus seems to be that the cheating has nothing to do with a glaring physical transgression. (The prefix, &lt;em&gt;micro&lt;/em&gt;, does a lot of work here.) It is defined by subtlety and generally takes place digitally. 
For some of my friends, the celebrities a romantic partner follows can be just as big a dealbreaker as parenting or financial choices—following Instagram models, in their calculus, fundamentally reveals as much about long-term compatibility as a poker addiction. To catch micro-cheaters, people often hunt for indiscretions: scrolling through &lt;a href="https://www.instagram.com/p/CbItl0eD2w6/"&gt;the entire list&lt;/a&gt; of accounts that their partner follows, or watching for a partner’s &lt;a href="https://www.instagram.com/p/DG9dYdVRexr/"&gt;single like&lt;/a&gt; on another person’s Instagram post. What appear to be gray-area online behaviors, the thinking goes, are in reality small but infinitely telling betrayals.&lt;/p&gt;&lt;p&gt;The outrage over micro-cheating, and the mushrooming of what people consider acts of disloyalty, seems to be braced by a sincere belief: that data can reliably represent a person’s desires. When so many aspects of a romantic interaction take place online, a like or follow may no longer seem like a friendly tap but a virtual representation of amorous interest. Occasionally, one might discover that a partner really is looking elsewhere. Most of the time, though, an obsessive close-reading of digital activity reveals less about cheating than it does about the bleak field of modern dating: Many people distrust their partners and are ill-equipped to talk about it.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;In the past, people’s secret desires tended to remain hidden. You couldn’t prove that your partner had gazed longingly after someone else or had left their hand for a beat too long on another person’s shoulder. Today, many romantic acts are distilled into data points and excised for meaning. Certain gestures are unambiguous—on dating apps, to swipe right, in Tinder parlance, is to demonstrate interest. Other moves are open for interpretation. 
Comments might be just comments, for instance, or they could be archives of flirtations. “What’s newly bizarre is that the infrastructure of our social lives is set up to record,” Quinn White, an assistant professor of philosophy, told me. (He explores the ethics of love and relationships at Harvard, where I am a student.) What was once opaque and ephemeral can now, in theory, be measured.&lt;/p&gt;&lt;p&gt;The logic of micro-cheating goes something like this: Your partner’s every move online says something significant about them. These actions make legible their innermost thoughts, which are visible, traceable, and recoverable as evidence. Many young women will post about checking to see if their boyfriend has recently &lt;a href="https://www.instagram.com/p/CoQmIrFDvQj/"&gt;followed another girl&lt;/a&gt; on social media—because, of course, if he does, he &lt;a href="https://www.instagram.com/p/DJXF8Fzu5Pf/"&gt;must like what he sees&lt;/a&gt;. “A man that truly loves you will never look at another woman,” says &lt;a href="https://www.instagram.com/p/DOxv78cDvPX/"&gt;one Instagram post&lt;/a&gt; with more than 100,000 likes. In a &lt;a href="https://www.cosmopolitan.com/relationships/a38951681/boyfriend-does-not-have-social-media-benefits/"&gt;&lt;em&gt;Cosmopolitan &lt;/em&gt;article&lt;/a&gt; commending the perks of dating a man without social-media accounts, the writer triumphantly declares, “I’ve never had to compete with the likes of Emily Ratajkowski and Bella Hadid.”&lt;/p&gt;&lt;p&gt;On some level, the idea that someone’s social-media habits say something about them holds true—a person who comments with heart-eye emoji under Kylie Jenner’s posts is probably different from a person who doesn’t. 
And algorithms, of course, make endless inferences from people’s online behavior: If Amazon knows what you want to buy—sometimes even before you do—based on past browsing history, then couldn’t an Instagram follow mean something deeper too?&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/family/2026/03/age-gap-swag-intelligence-party-gap/686224/?utm_source=feed"&gt;Read: The tyranny of the relationship gap&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;When it comes to love, a parcel of information can be harder to read. “Technology makes us think that people are laid out in all of their entirety, for us to know them in all of these ways,” Luke Brunning, who co-runs the &lt;a href="https://ahc.leeds.ac.uk/homepage/420/centre_for_love_sex_and_relationships"&gt;Centre for Love, Sex, and Relationships&lt;/a&gt; at the University of Leeds, told me. “And I just don’t think it’s true.” &lt;em&gt;Consumers&lt;/em&gt; might seem reducible to neat, tidy profiles with a concrete set of tastes and needs. &lt;em&gt;People&lt;/em&gt;—with their idiosyncrasies, confusions, and contradictions—aren’t as readily whittled down. The same algorithm that can tell you what pair of shoes you might like can’t tell you anything worth knowing about how your partner feels about someone else.&lt;/p&gt;
Micro-cheating, in its misguided effort to make everything intelligible, presents a restrictive sense of what being in a committed relationship means. Exclusivity, in this imagining, is not just an exclusivity of behavior but an exclusivity of attention, thought, and feeling. It is, Brunning said, a mandate “to not have emotions caused by other people.” According to the most vocal agitators against micro-cheating, a sufficiently loyal partner should not, say, follow anyone attractive on social media (or even register another person’s attractiveness), should not text a friend a meme that they might find funny, and should not have inside jokes with co-workers. They should be less a living, breathing person than a one-dimensional, anti-social, ever-affirming sycophant to their one and only true love.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/family/2026/01/ai-boyfriend-women-gender/685315/?utm_source=feed"&gt;Read: The bots that women use in a world of unsatisfying men&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Perhaps obsession with checking a partner’s digital footprint was inevitable. The internet offers more avenues to cheat than ever before—easier access to eligible singles, messaging platforms via which to surreptitiously chat up old flames. Women, who tend to lead micro-cheating discourse, are also navigating a dating world that puts their safety and reputation &lt;a href="https://www.theatlantic.com/family/archive/2025/07/tea-app-dating-data-breach-misogyny/683712/?utm_source=feed"&gt;more at risk&lt;/a&gt; when a romantic relationship goes awry. In this atmosphere, the line between paranoia and self-protection can be difficult to discern. A partner’s request to keep their phone private could easily seem to confirm suspicions of duplicitous behavior. 
And a very real, eternal human fear lies beneath the micro-cheating accusations: that you can spend years with somebody and never truly know them.&lt;/p&gt;&lt;p&gt;People shock, betray, and destabilize. They can have emotional responses and enigmatic attractions that seem to come out of nowhere. They can do things that are wholly incongruous with how you thought they would behave. This fundamental unpredictability is a “scary reality,” Brunning said. And technology “almost defers that reality for us,” he added, by making people think that they can divine all they need to know from a handful of data signals.&lt;/p&gt;&lt;p&gt;The irony is that as much as technology might make people more aware of potentially offensive behavior, it also helps them avoid methods that could make them feel more secure in their relationships: engaging with their partner, communicating with them, and trying, together, to love well. In the poem “&lt;a href="https://www.poetryfoundation.org/poems/49270/chance-meeting"&gt;Chance Meeting&lt;/a&gt;,” by Susan Browne, a woman slowly approaches her lover on the street. She notices the parts of him that are tenderly familiar—his brown eyes, his smiling mouth, the way that he shoves his hands into his pockets. She muses to herself, “I know his loneliness / like mine, human and sad, / but different, too, his private pain / and pleasure I can never enter&lt;em&gt;.&lt;/em&gt;” Follows and comments are unlikely to offer any real passage into this inner world. 
All we can do is ask, and wait patiently, to be let in.&lt;/p&gt;</content><author><name>Zoe Yu</name><uri>http://www.theatlantic.com/author/zoe-yu/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jGZaXPb6JjNAyWymcBAzTxpcVRs=/media/img/mt/2026/03/ATLANTIC_MicroCheatFINAL/original.jpg"><media:credit>Illustration by Brandon Celi</media:credit></media:content><title type="html">The New Infidelity</title><published>2026-03-19T10:41:36-04:00</published><updated>2026-03-20T09:19:21-04:00</updated><summary type="html">Micro-cheating includes all sorts of internet behavior that, to many people, might just seem innocent.</summary><link href="https://www.theatlantic.com/family/2026/03/micro-cheater-dating-trend/686443/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686454</id><content type="html">&lt;p&gt;On March 10, the journalist Emanuel Fabian reported on a missile that had been launched from Iran. The warhead hit an open area outside Jerusalem, which Fabian confirmed by speaking with rescue services and reviewing footage of the explosion. He wrote a short post on &lt;em&gt;The Times of Israel&lt;/em&gt;’s live blog and moved on.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Meanwhile, gamblers had wagered millions on the unfolding events of the conflict. Fabian’s post became the subject of a major dispute on Polymarket, a popular prediction market where people can bet on the outcome of almost anything. 
The site had allowed users to guess when Iran would initiate “a drone, missile, or air strike on Israel’s soil”: More than $14 million was riding on whether such an attack had happened March 10.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/?utm_source=feed"&gt;Read: America is slow-walking into a Polymarket disaster&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;People started reaching out, asking Fabian to change his article. Some argued that the Israel Defense Forces had not officially mentioned such an attack occurring on that day, and others said that the explosion he had reported was the result of a missile being &lt;em&gt;intercepted&lt;/em&gt;, which according to Polymarket’s terms wouldn’t count as a strike “on Israel’s soil.” Confident in his reporting, Fabian did not amend the text.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And then he began receiving threats. “You will discover enemies who will be willing to pay anything to make your life miserable—within the framework of the law,” &lt;a href="https://www.timesofisrael.com/gamblers-trying-to-win-a-bet-on-polymarket-are-vowing-to-kill-me-if-i-dont-rewrite-an-iran-missile-story/"&gt;one person wrote&lt;/a&gt; to Fabian before adding, “As far as I know, there are also some people who don’t really care about the law, and you’re going to make them lose about 50 times what you’ll ever make.” Much as athletes have faced threats and harassment from fans with money riding on a game, prediction markets are now creating incentives for gamblers to target all manner of people with inside information or some influence over major events. Polymarket did not respond to my request for comment, but &lt;a href="https://x.com/Polymarket/status/2033635318662860916"&gt;wrote&lt;/a&gt; on X: “This behavior violates our Terms of Service &amp;amp; has no place on our platform. 
We’ve banned the accounts for all involved &amp;amp; will pass their info to the relevant authorities.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/central-lie-prediction-markets/686250/?utm_source=feed"&gt;Read: A technology for a low-trust society&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Prediction markets like Polymarket post online using the language of news wires and &lt;a href="https://www.theatlantic.com/technology/2026/03/central-lie-prediction-markets/686250/?utm_source=feed"&gt;position themselves&lt;/a&gt; as a new and unbiased source of information, yet this story suggests that these sites are having the opposite effect: They make it &lt;em&gt;harder&lt;/em&gt; for news gatherers to report the truth. Yesterday, Fabian spoke with me from southern Israel about what it’s like to be in the center of this controversy while simultaneously trying to cover a war. What he described was yet another way that online events are twisting the very nature of reality—leading Fabian, for just a split second, to doubt what he had seen and heard.&lt;/p&gt;&lt;p&gt;&lt;em&gt;This conversation has been edited for length and clarity.&lt;/em&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Charlie Warzel: &lt;/strong&gt;How are you doing?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Emanuel Fabian:&lt;/strong&gt; It’s been an overwhelming few days. I’ve been busy reporting on the war, and on top of that, I’ve been having to deal with the police and my family and all of these death threats and harassment. So it’s been a lot.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Are you still getting death threats now?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian:&lt;/strong&gt; I’m not. They stopped almost as soon as I went to the police. 
Since the article I wrote about them went up, I haven’t received anything.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;You published your original blog on March 10. People began reaching out after that. But when did you make the connection to Polymarket?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;It took me a little while. When I got the first email about the missile impact, I thought the question [whether the missile exploded or was intercepted, scattering shrapnel] was so odd, because it was such a minor, inconsequential detail in the context of a big war. The next day, I got a second email with the exact same questions and thought it was very strange. My theory was that it was either Iranian bots or agents trying to get information out of me. I did entertain the idea it was related to gambling, but I didn’t find the bet initially when I searched online.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The way it clicked for me was that I started to get replies on X and WhatsApp with similar questions like, &lt;em&gt;Hey, why haven’t you updated your story?&lt;/em&gt; I figured something was up. I looked at the X profiles and could see they were very clearly Polymarket gamblers. At that point it clicked, and soon after I found the actual page itself for the March 10 bet on whether Iran would strike Israel. It was stuck on March 10 and the market hadn’t “resolved,” or paid out. All the comments were people going back and forth, many linking to my little story and other articles. Overall, I got at least 20 different messages across email, X, WhatsApp, and Discord.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;You said a contact from another media outlet also reached out to you at this time and suggested they had gotten a tip that your story was wrong. 
Was this person involved in the gambling as well?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian:&lt;/strong&gt; They messaged and said, &lt;em&gt;Somebody I know told me there’s a mistake in your story; could you correct it?&lt;/em&gt; He thought he was doing both of us a little favor. I told him his acquaintance was likely betting on this on Polymarket. My contact went back to him, and he confirmed that not only was he betting on it, but he offered to give the person money if they managed to persuade me to change my story. It’s all insane. Obviously, the colleague told him off. But I’m losing my mind at this point. This is like the most tiny, inconsequential detail in a small news item.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;So you decide to call these people out on X. Did the harassment pick up after that?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;It did. A lot. I thought calling them out would shut them up and get them off my back. I wanted to be proactive because I realized, if I give into these people, it shows I can be manipulated. This will be just the beginning, and they won’t stop trying to bully me in later stories. And that’s when it escalated—death threats, messages coming in at all hours of the night. Messages talking about my family, giving me ultimatums on how much time I had to correct the story. That’s when I went to the police.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/polymarket-insider-trading-going-get-people-killed/686283/?utm_source=feed"&gt;Read: Insider trading is going to get people killed&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Did you ever think about changing the story?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;For a split second I did. 
I thought maybe I could be wrong.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Like, doubting your reporting? After all, you’re making those calls based on other witnesses and videos online.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;I went and checked again with the military. It was a short item, but I reviewed footage of a large explosion. I had eyewitness accounts—people in the area who saw this massive explosion. And then I thought to myself, &lt;em&gt;Why am I doing this? Triple-checking this minor incident, bothering the military again over an explosion in the woods? &lt;/em&gt;I did the reporting, and this was the judgment call I made. I think it was accurate, and I will leave it at that. I don’t need to doubt myself about what I published, especially because this is not something that anyone normally would care about unless they had a financial stake in the outcome. As an event in this war, it is not particularly newsworthy. This missile exploded in an open area. It’s 150 words in the live blog.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Do you think this fiasco will stick in the back of your mind as you continue to report on the war?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;Yes. I think it already has. Since then, whenever I report on something, I feel it in the back of my head:&lt;em&gt; What if the Polymarket bettors are betting on this tweet? Or on whether I’m giving an interview about Polymarket?&lt;/em&gt; I’m not obsessing over it. Hopefully I won’t get threatened again. But the thought is there. What if they suddenly see this interview? Because I don’t know the way they’ve resolved the Polymarket bet yet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Wait, really?&lt;/p&gt;&lt;p&gt;&lt;br&gt;
&lt;strong&gt;Fabian: &lt;/strong&gt;Yes, I’m looking now and the &lt;a href="https://polymarket.com/event/iran-strikes-israel-on"&gt;market&lt;/a&gt; is still not resolved. [The market “Iran strikes Israel on March 10 ?” resolved to “Yes” after Fabian and I spoke.]&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Did the fact that Polymarket kept allowing people to bet while this harassment was going on make things worse for you?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;It seems that a lot of people came into the bet as a result of my calling it out on X. When I posted about it, the market had $12 million in it. When I published my story on Monday, it had $14 million in it. Now it looks like it has $22 million. People are still betting and hoping it goes their way.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Having been through this ordeal, what are your feelings about prediction markets in general?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;It’s really worrying. I think the gambling is a degenerate thing. The fact that people are betting on wars and conflict and people dying is gross. This is war, not a game. I think the more worrying thing is that we’ve seen harassment by bettors against athletes in sports for failing to perform. It seems now that we are entering a new age. I think there is a big risk of journalists using insider information to place a correct bet and win. I can tell you as a military correspondent that I’m exposed to confidential information that we can’t report. Now there are ways to exploit that. It wouldn’t surprise me if others have.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Insider trading, one could argue, effectively makes prediction markets more accurate. 
Do you think these companies hope journalists and others will bet using privileged information?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;I don’t think they really want to combat insider trading. What I’ve heard is that those who bet on Polymarket either know the right answer or are wasting their money. [In a statement to &lt;em&gt;The Times of Israel&lt;/em&gt;, Polymarket said, “Prediction markets depend on the integrity of independent reporting. Attempts to pressure journalists to alter their reporting undermine that integrity and undermine the markets themselves.”]&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;Do you have advice for other journalists who may experience this type of betting-market harassment in the future?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;Go public. Don’t let the threats force you to change anything. Be honest. I think that’s the best way. It’s a bit stupid of these people to publicly intimidate somebody who can go and instantly tell 100,000 people what these gamblers are doing. That’s my advice. Because if you were to accept money or change your reporting, who knows how these people might extort you later on. If you change your reporting, it’ll be a mess forever.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Warzel: &lt;/strong&gt;If you could sit down with the CEO of Polymarket, what would you tell him?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Fabian: &lt;/strong&gt;I don’t know. I’d be honest and say I disagree with the notion of gambling on anything and everything. But if you are to keep these markets, they have to have admins who can decide on outcomes of bets or issue some kind of ruling. I think there just needs to be a lot more oversight and somebody actually vetting who these big bettors are to avoid insider trading but also to make sure this harassment doesn’t happen. But I’m not an expert on this. 
I’m more of an expert on where missiles land.&lt;/p&gt;</content><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/aOZvmLGBz-IBMTe8u65k3ggfTgo=/media/img/mt/2026/03/2026_3_18_Emanuel_Fabian_QA_1/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Ahmad Gharabli / AFP / Getty; Mamoun Wazwaz / Anadolu / Getty.</media:credit></media:content><title type="html">Maybe Turning War Into a Casino Was a Bad Idea?</title><published>2026-03-18T17:05:46-04:00</published><updated>2026-03-20T12:58:14-04:00</updated><summary type="html">A disturbing new low in the Polymarket era</summary><link href="https://www.theatlantic.com/technology/2026/03/emanuel-fabian-threats-polymarket/686454/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686418</id><content type="html">&lt;p class="dropcap"&gt;I&lt;span class="smallcaps"&gt;n a certain, strange way, &lt;/span&gt;generative AI peaked with OpenAI’s GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. It was creative. “You could be like, ‘Continue this story: &lt;em&gt;The man decided to take a shower&lt;/em&gt;,’ and GPT-2 would be like, ‘&lt;em&gt;And in the shower, he was eating his lemon and thinking about his wife&lt;/em&gt;,’” Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. “The models won’t do that anymore.”&lt;/p&gt;&lt;p&gt;AI leaders boast about their models’ superhuman technical abilities. The technology can predict protein structures, create realistic videos, and build apps with a single prompt. But these executives and researchers also readily admit that they have not yet released a model that writes well. 
OpenAI CEO Sam Altman has &lt;a href="https://ia.samaltman.com/"&gt;predicted&lt;/a&gt; that large language models will soon be capable of “fixing the climate, establishing a space colony, and the discovery of all of physics,” but in an October &lt;a href="https://conversationswithtyler.com/episodes/sam-altman-2/"&gt;interview&lt;/a&gt; with the economist Tyler Cowen, he guessed that even future models—an eventual GPT-6 or GPT-7—might be able to extrude only something equivalent to “a real poet’s okay poem.”&lt;/p&gt;&lt;p&gt;Today’s AI-generated prose is riddled with flaws. Chatbots produce meaningless metaphors, endless “it’s not &lt;em&gt;this&lt;/em&gt;, but &lt;em&gt;that&lt;/em&gt;” constructions, and a cloyingly sycophantic tone—and, of course, they overuse my beloved em dash. (Only starting with GPT-5.1, released in November, could ChatGPT reliably follow instructions to &lt;a href="https://x.com/sama/status/1989193813043069219"&gt;avoid&lt;/a&gt; the beleaguered punctuation mark.) I wanted to understand why this is—why large language models, which, after all, have &lt;a href="https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/?utm_source=feed"&gt;memorized&lt;/a&gt; centuries of &lt;a href="https://www.theatlantic.com/technology/archive/2023/01/artificial-intelligence-ai-chatgpt-dall-e-2-learning/672754/?utm_source=feed"&gt;great literature&lt;/a&gt;, can demonstrate incredible emergent abilities yet totally fail to produce a single essay that I’d want to read.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/books/archive/2024/04/ai-writing-novels-mortality-limits/678167/?utm_source=feed"&gt;Read: Would limitlessness make us better writers?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;So I talked with people who would know: people who work at LLM companies, AI-data vendors, academic computer-science departments, and AI-writing start-ups. 
(Some spoke with me on the condition of anonymity because their employers barred them from speaking publicly about their work.) What I learned is that modern LLMs are built in a way that is antagonistic to great writing; they are engineered to be rule-following teacher’s pets that always have the right answer in hand. In many respects, they’ve come a long way from GPT-2, but they’ve also lost something that made them looser and more compelling.&lt;/p&gt;&lt;p class="dropcap"&gt;L&lt;span class="smallcaps"&gt;LMs begin their lives&lt;/span&gt; as indiscriminate readers. During the pretraining phase, they ingest something like the entire internet—Reddit posts, YouTube transcripts, SEO sludge—and compress it into patterns. Most writing is not very good. But the quantity, not the quality, of these data is what matters. Pretraining teaches AIs grammar rules and word associations, enabling what is known as “next-token prediction”: the process through which models determine which part of a word follows another, over and over and over again.&lt;/p&gt;&lt;p&gt;Rough edges are then sanded down in the post-training phase. This is when LLM companies define the ideal “character” for an AI model (such as being &lt;a href="https://arxiv.org/abs/2112.00861"&gt;“helpful, honest, and harmless”&lt;/a&gt;), give the AIs example dialogues to learn from, and apply safety filters that attempt to block illegal requests. Through processes such as “reinforcement learning from human feedback,” which enlists people to grade AI outputs against a rubric, models are guided toward responses that exemplify desired traits.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/?utm_source=feed"&gt;Read: AI’s memorization crisis&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;AI research is an empirical science—people can verify when something works and make tweaks when something doesn’t. 
But art resists rules and quantification. No objective measurement exists to prove whether Pablo Neruda’s work is better than Gabriela Mistral’s. Novice writers learn conventions; great writers invent them. An LLM trained to imitate taste can go only so far. On some level, AI engineers and researchers must know this. Even as they try (and fail) to automate this work, many of the people I spoke with clearly revere good writing. “Writing novels is one of the most intense cognitive activities a human can do,” James Yu, a co-founder of Sudowrite, an AI assistant for fiction authors, told me. My sources’ faces lit up when I asked about their favorite books—three cited the science-fiction author Ted Chiang, though they also seemed disheartened that he has become a &lt;a href="https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art"&gt;vocal critic of generative AI&lt;/a&gt;. The difficulty of evaluating writing does not prevent AI labs from trying. They are motivated in part by a question that came up again and again in my interviews: If LLMs can’t write mind-bending essays or poignant sonnets, are they generally intelligent at all?&lt;/p&gt;&lt;p&gt;And so labs try to assess AI writing through various criteria. Post-training teams vibe-check model outputs themselves based on personal taste, and companies contract with domain experts to receive feedback on model-produced writing. A &lt;a href="https://bsky.app/profile/knibbs.bsky.social/post/3mdo4j4wrqk2b"&gt;job listing&lt;/a&gt; for a “creative writing specialist” at xAI lists “novel sales &amp;gt;50,000 units” and “starred reviews in Kirkus” among its requirements (rates start at $40 an hour).&lt;/p&gt;&lt;p&gt;I interviewed two people who have recently worked with large AI labs as writing evaluators. 
The first, a contractor at Scale AI, described firsthand the absurdities of the task: To transform something as slippery as “tone” into discrete criteria, rubrics included rules such as “The response should use a maximum of two exclamation marks.” The contractor told me that “there were numerous cases where even though it felt like B was a better response overall, you ended up rating ‘I prefer A’ because it had three exclamation points.” He said that another time, he was asked to grade fan fiction on its “factuality.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/05/generative-ai-novel-writing-experiment-stephen-marche/673997/?utm_source=feed"&gt;Read: The future of writing is a lot like hip-hop&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The second person I spoke with is an author who worked directly with a frontier lab’s technical-research team. The company frequently asked him to break down the specific elements that make a piece of literature great. “It’s completely non-tractable to that kind of thinking,” he told me. He pointed to the example of English sonnets: They are technically one of the most templated forms, but just because a sonnet contains 14 lines and is written in iambic pentameter does not make it good. “Even when Shakespeare is being very structured, he’s constantly trying not to follow the rubric, or to subvert it, or reinvent it. I don’t know what it is that makes the difference between the poet who writes by rote and Shakespeare. I just know that the two can never be confused.”&lt;/p&gt;&lt;p class="dropcap"&gt;S&lt;span class="smallcaps"&gt;o are the LLMs doomed&lt;/span&gt; to produce sophomoric prose forever? One theory is that this is simply a matter of prioritization. In some ways, creativity is directly at odds with AI companies’ other objectives. 
Generally, chatbots are trained to avoid misinformation, political bias, child-sexual-abuse material, copyright violations, and more. They are also scored on benchmarks such as SWE-bench (for coding tasks) and GPQA (the natural sciences), which dramatically shape public perception of which company is winning the race. And if most users are using ChatGPT to draft corporate emails, bold text and brief bullet points may be exactly what they want. “The more you control for these” traits, Nathan Lambert, a post-training lead at the Allen Institute for AI, told me, “the more you suppress creativity.”&lt;/p&gt;&lt;p&gt;When you tell a model to be a brilliant prose stylist, but also a Ph.D.-level mathematician, and also strictly PG-13, it will become rigid and tight-lipped, like a nervous candidate at a job interview terrified to misstep. The same whimsicality that made GPT-2’s voice fresh also made it prone to other unpredictable behavior. “If you’re a big corporation like Google or OpenAI, you want a chatbot that’s going to make money. The chatbot that’s &lt;em&gt;not&lt;/em&gt; going to make you money is the one that’s a weirdo,” Gero said.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/great-language-flattening/682627/?utm_source=feed"&gt;Read: The great language flattening&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;I began to hypothesize that AIs might be able to generate award-winning literary prose if only we unhobbled them from the strictures of the post-training process and built specialized writing models instead. But as I reflected on the authors I love most, that didn’t seem right either.&lt;/p&gt;&lt;p&gt;When a practiced human writer reaches for a particular turn of phrase, they aren’t aiming for some single standard of great writing. Rather, the best metaphors come from the author’s specific blend of experiences or expertise. 
A writer’s diction, their citations, and the stories they share all reflect a singular, irreplicable perspective. Authorial voice emerges from the specificity of a life.&lt;/p&gt;&lt;p&gt;The models—although technically proficient and grammatically pristine—cannot live, cannot feel, cannot smell, cannot taste, cannot sense. They cannot spill raw emotions onto the page, or place abstract concepts in rich physical settings. Close readers of AI writing will notice that the metaphors are uncanny: &lt;a href="https://x.com/sama/status/1899535387435086115"&gt;LLMs&lt;/a&gt; assign weekdays tastes and give mirrors seams. They generally seem terrified of biology: They do not like to speak, even metaphorically, about blood and sex and death. Their output lacks stakes, as a creative-writing instructor might say.&lt;/p&gt;&lt;p&gt;Although Yu is impressed by the technical leaps that LLMs have made since GPT-2, even he won’t read fully AI-generated stories. I asked him what’s still missing for AI to produce a great novel on its own. Yu paused for a second, then answered: “Most people’s good first stories are autobiographical. Maybe you need a model that lives a life, and can almost die.”&lt;/p&gt;&lt;p class="dropcap"&gt;L&lt;span class="smallcaps"&gt;LMs may never be capable&lt;/span&gt; of great writing themselves. But this doesn’t mean that they can’t help humans. Recently, I turned AI into an editor. Not for this article—&lt;em&gt;The Atlantic&lt;/em&gt;’s editors are all human—but for a couple of essays that I wrote on my &lt;a href="http://jasmi.news"&gt;personal Substack&lt;/a&gt;. My philosophy is that I should provide the prose and perspective, and AI should supply feedback—encouraging me to write more like myself.&lt;/p&gt;&lt;p&gt;First, I fed the chatbot Claude an archive of my past writing, along with notes about what worked and didn’t about each piece. I used this to create a custom editing rubric based on my voice. 
Some criteria are generic, and others are personalized: One reads, “Does this play to your insider-anthropologist position” in Silicon Valley? Another asks whether the thesis shows up in the first 500 words. I dumped this guidance into a Claude project along with a reminder of its role: “You are not a co-writer. You cannot perceive. Your role is to help Jasmine write like the best version of herself.” &lt;em&gt;I don’t want to be de-skilled&lt;/em&gt;, I reminded the machine. &lt;em&gt;Your only job is to make me smarter&lt;/em&gt;.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/books/2025/10/chatgpt-fictional-character/684571/?utm_source=feed"&gt;Read: Why so many people are seduced by ChatGPT&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;This AI editor has become a valuable part of my process. Like any reader, it’s not always right. I am careful not to let it trap me into one narrow stylistic lane. But Claude pushes me to iterate and improve faster than I could alone, pointing out where my execution failed to meet the standards of my own taste. “Stop trying to write the ending as a thesis and write it as a scene,” it told me while editing a recent post. There’s something slightly humiliating about having your efforts rejected by a bot, but I had to admit that its critique was fair. I redrafted the conclusion four times. 
And then, finally, Claude approved.&lt;/p&gt;</content><author><name>Jasmine Sun</name><uri>http://www.theatlantic.com/author/jasmine-sun/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/iZEsW8oKSTvLGE2gPXNNaLeFR1E=/media/img/mt/2026/03/IMG_4402/original.jpg"><media:credit>Illustration by Alicia Tatone</media:credit></media:content><title type="html">The Human Skill That Eludes AI</title><published>2026-03-17T11:25:02-04:00</published><updated>2026-03-17T13:24:33-04:00</updated><summary type="html">Why can’t language models write well?</summary><link href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:39-686054</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;i&gt;This article was featured in the One Story to Read Today newsletter. &lt;/i&gt;&lt;a href="https://www.theatlantic.com/newsletters/sign-up/one-story-to-read-today/?utm_source=feed"&gt;&lt;i&gt;Sign up for it here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;he smell was strange&lt;/span&gt;. Sharp. Chemical. Wrong. The concrete wall was too close. My glasses were gone. One of my kids was standing on the sidewalk next to our car—not crying, just confused.&lt;/p&gt;&lt;p&gt;The seat belt had held. The crumple zone had crumpled. The airbag had fired. Everything designed to protect bodies had done its job. But the car, a Tesla Model X, was totaled.&lt;/p&gt;&lt;aside class="callout-placeholder" data-source="magazine-issue"&gt;&lt;/aside&gt;&lt;p&gt;One Sunday last fall, my kids and I were on a drive we’d done hundreds of times, winding through the residential streets of the Bay Area to drop my son off at his Boy Scouts meeting. The Tesla was in Full Self-Driving mode, driving perfectly—until it wasn’t.&lt;/p&gt;&lt;p&gt;What happened next, I’ve had to piece together. 
My memory is hazy, and some of it comes from one of my sons, who watched the whole thing unfold from the back seat. The car was making a turn. Something felt off—the steering wheel jerked one way, then the other, and the car decelerated in a way I didn’t expect. I turned the wheel to take over. I don’t know exactly what the system was doing, or why. I only know that somewhere in those seconds, we ended up colliding with a wall.&lt;/p&gt;&lt;p&gt;You might think I’d have known what to do in this situation. I used to run the self-driving-car division at Uber, trying to build a future in which technology protects us from accidents. I had thought about edge cases, failure modes, the brittleness hiding behind smooth performance. My team trained human drivers on when and how to intervene if a self-driving car made a mistake. In the two years I ran the division, we had no injuries in our early pilot programs.&lt;/p&gt;&lt;p&gt;With my own Tesla, I started out using Full Self-Driving as the default setting only on highways. That’s where it makes sense: You have clear lane markers and predictable traffic patterns. Then, one day, I tried it on a local road, and it worked well enough to become a habit.&lt;/p&gt;&lt;p&gt;Despite the accident, we were lucky. I walked away with a stiff neck, a concussion, a few days of headaches, and some memories I can’t shake. The kids climbed out unharmed. Still, you could say I was crushed in what the researcher Madeleine Clare Elish &lt;a href="https://estsjournal.org/index.php/ests/article/view/260"&gt;calls the moral crumple zone&lt;/a&gt;. Some parts of a car are specifically designed to absorb damage in a crash, to protect the people inside. But when complex automated systems fail, Elish argues, it’s the human users who take the blame. My car’s Full Self-Driving mode logged flawless miles for three years, but when the accident happened, it was my name on the insurance report.&lt;/p&gt;&lt;p&gt;And the car has evidence. 
While you’re at the wheel, it logs your hand position, your reaction time, whether you’re keeping your eyes on the road—thousands of data points, processed by the vehicle. After crashes, Tesla has used these data to shift blame onto drivers. Following a fatal collision in Mountain View, California, in 2018, &lt;a href="https://web.archive.org/web/20180401003557/https://www.tesla.com/blog/update-last-week%E2%80%99s-accident"&gt;the company released a statement&lt;/a&gt; in which it noted that “the vehicle logs show that no action was taken.” (Tesla did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;While Tesla can access these records, it’s not so easy for drivers. They can request their data, but some say they’ve received only fragments—and have had to go to court to get more. When &lt;a href="https://www.washingtonpost.com/technology/2025/08/29/tesla-autopilot-crashes-evidence-testimony-wrongful-death/"&gt;plaintiffs in a Florida wrongful-death case sought key evidence&lt;/a&gt; of how one of Tesla’s driver-assistance systems had failed, the company said it didn’t have the data. The plaintiffs had to hire a hacker, who recovered them from a computer chip in the crashed vehicle. Later, Tesla stated that the data had been sitting on its own servers for years, and that the company failed to locate them by mistake. (A judge did not find “sufficient evidence” to conclude that Tesla had sought to hide the data.)&lt;/p&gt;&lt;p&gt;For now, the legal principle is simple: You’re responsible. Though Tesla originally called its technology “Full Self-Driving Capability,” the system is officially classified as “Level 2” partial driver automation, which means the human must remain in control at all times. 
Last year, a judge in California &lt;a href="https://www.plainsite.org/dockets/download.html?id=361671590&amp;amp;z=0992d8f5"&gt;found Tesla’s original name “unambiguously false”&lt;/a&gt; and misleading to consumers; Tesla now uses “Full Self-Driving (Supervised).” When a Tesla using a version of the technology &lt;a href="https://www.courthousenews.com/judge-orders-trial-in-tesla-autopilot-manslaughter-case/"&gt;killed two people in California in 2019&lt;/a&gt;, the car’s own logs were used to prosecute the driver for failing to prevent the crash—not the company that designed the system. The company was held accountable in a major verdict for the first time only last year, when a jury found Tesla partly liable in the Florida wrongful-death case and awarded $243 million to the plaintiffs.&lt;/p&gt;&lt;p&gt;A similar pattern is emerging everywhere algorithms are asked to work alongside humans: in our inboxes, our search results, our medical charts. These systems are building toward full automation, but they’re not there yet. Computers still regularly make mistakes that require human oversight to avoid or fix.&lt;/p&gt;&lt;p&gt;Full Self-Driving works almost all of the time—Tesla’s fleet of cars with the technology logs millions of miles between serious incidents, by the company’s count. And that’s the problem: We are asking humans to supervise systems designed to make supervision feel pointless. A machine that constantly fails keeps you sharp. A machine that works perfectly needs no oversight. But a machine that works almost perfectly? That’s where the danger lies. After a few hours of flawless performance, research shows, drivers are prone to start overtrusting self-driving systems. 
After a month of using adaptive cruise control, drivers were more than six times as likely to look at their phone, &lt;a href="https://lindseyresearch.com/wp-content/uploads/2020/10/NHTSA-2019-0037-0015-IIHS_Study_on_Driver_Disengagement_-_Reagan_et_al_2020.pdf"&gt;according to one study&lt;/a&gt; from the Insurance Institute for Highway Safety.&lt;/p&gt;&lt;p&gt;Tesla’s description of Full Self-Driving on its website warns, “Do not become complacent,” and I didn’t think I was. Before my accident, I had my hands on the wheel. But I was driving the way the system had conditioned me to: monitoring instead of steering, trusting the software to make the right call. The familiarity curve bends toward complacency, and the companies building these systems seem to know it. I certainly did. I got lulled anyway.&lt;/p&gt;&lt;p&gt;Psychologists call this the vigilance decrement. Monitoring a nearly perfect system is boring. Boredom leads to mind-wandering. &lt;a href="https://wendyju.com/publications/ITSC2015_Mok.pdf"&gt;The research is unforgiving&lt;/a&gt;: Drivers need five to eight seconds to mentally reengage after an automated driving system gives control back. But emergencies can unfold much faster than that. The driver’s physical reaction might be instantaneous—grabbing the wheel, hitting the brake. But the mental part? Rebuilding context, recognizing what’s wrong, deciding what to do? That takes time your brain doesn’t have.&lt;/p&gt;&lt;p&gt;The driver in the 2018 Mountain View accident had six seconds before his car steered itself into a concrete median. He never touched the wheel. That same year in Tempe, Arizona, sensors in an Uber test vehicle &lt;a href="https://www.ntsb.gov/investigations/Pages/HWY18MH010.aspx"&gt;detected a pedestrian nearby with 5.6 seconds of warning&lt;/a&gt;. The safety driver looked up and took the wheel with less than a second left. By then, it was just physics.&lt;/p&gt;&lt;p&gt;In my case, I did take action before my accident. 
But I was asked to snap from passenger back to pilot in a fraction of a second—to override months of conditioning in the time it takes to blink. The logs would show that I turned the wheel. They wouldn’t show the impossible math.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;I don’t know &lt;/span&gt;enough about what actually happened during my accident to say that Tesla’s technology crashed the car. But the problem is bigger than one company’s self-driving system. It’s about how we’re building every AI system, every algorithm, every tool that asks for our trust and trains us to give it. The pattern is everywhere: Condition people to rely on the system. Erode their vigilance. Then, when something breaks, point to the terms of service and blame them for not paying attention.&lt;/p&gt;&lt;p&gt;My car didn’t warn me when it was confused. Chatbots don’t, either; they deliver their results in the same confident voice, whether they’re right or hallucinating. They perform expertise, even when the sources they cite are dubious or fabricated. They use technical language in an authoritative tone. And we believe them, because why wouldn’t we? They’ve been right so many times before.&lt;/p&gt;&lt;p&gt;Cars train us mile by mile; AI trains us week by week. In week one, you read a chatbot’s output carefully. By week three, you’re copying and pasting without reading. The errors don’t disappear, but your vigilance does. So does your judgment, until one day you realize that you can’t remember which ideas in a memo were yours and which were generated by AI. 
What does it say about us that we’ve &lt;a href="https://www.theatlantic.com/technology/2025/12/people-outsourcing-their-thinking-ai/685093/?utm_source=feed"&gt;handed over our thinking&lt;/a&gt; so willingly?&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/people-outsourcing-their-thinking-ai/685093/?utm_source=feed"&gt;Read: The people outsourcing their thinking to AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;When my car failed, it was immediate and palpable. With chatbots, the failure is silent and invisible. You find out about it later, if at all—after the email is sent, the decision made, the code shipped. By the time you catch the mistake, it’s already out there with your name on it. When the system works, you look efficient. When it fails, your judgment is questioned, sometimes with catastrophic consequences. In 2023, a New York lawyer &lt;a href="https://www.reuters.com/legal/transactional/lawyer-who-cited-cases-concocted-by-ai-asks-judge-spare-sanctions-2023-06-08/"&gt;was sanctioned for citing six cases that didn’t exist&lt;/a&gt;. ChatGPT had invented them, but he’d trusted it, and the court blamed him, not the tool. Because a chatbot never gets fired.&lt;/p&gt;&lt;p&gt;We’re experiencing an uncanny valley of autonomy. Computer systems aren’t just almost human; they are almost capable of working on their own. When they fail, someone has to absorb the cost. Right now, &lt;a href="https://www.theatlantic.com/technology/2026/02/words-without-consequence/685974/?utm_source=feed"&gt;that someone is us&lt;/a&gt;. But when we pay for a self-driving car or an AI tool, we think we’re buying a finished product, not signing up to test a work in progress.&lt;/p&gt;&lt;p&gt;This “almost” phase isn’t a brief transition. It’s the product—one that will be with us for years, maybe decades. So it’s important to notice the patterns. 
When an AI system never admits uncertainty, or when a car’s marketing says “self-driving” but the fine print says “driver responsible,” that’s a warning sign. When you realize that you haven’t really been paying attention for the past 10 miles, or the past 10 auto-composed emails, that’s the trap.&lt;/p&gt;&lt;p&gt;Things don’t have to be this way, but they won’t change unless consumers see the situation clearly and refuse to accept it. We should reject the deal we’ve been handed—the one where the terms of service become a shield for companies and a sword against users. We should demand that companies share the risk they’re enticing us into taking. If they design for complacency, they should get some of the blame when their product fails.&lt;/p&gt;&lt;p&gt;This isn’t a utopian goal. In July 2025, the Chinese carmaker BYD announced that it would pay for the damage caused by crashes involving its self-parking feature, sparing the driver’s insurance and record. It’s only one company, and only one feature, but it proves that accountability is a choice. Other businesses can be persuaded to opt in, too.&lt;/p&gt;&lt;p&gt;My kids were in the back seat when I had my car accident. One day, they’ll have their own cars and use AI in ways that I can’t even imagine yet. The systems they inherit will be built either to elevate them or to lull them and blame them when things go wrong. I want them to notice when they’re being trained. 
I want them to ask who absorbs the cost, and the damage.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;This article appears in the &lt;/i&gt;&lt;a href="https://www.theatlantic.com/magazine/toc/2026/04/?utm_source=feed"&gt;&lt;i&gt;April 2026&lt;/i&gt;&lt;/a&gt;&lt;i&gt; print edition with the headline “My Self-Driving Car Crash.”&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Raffi Krikorian</name><uri>http://www.theatlantic.com/author/raffi-krikorian/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/tedsp2FxpDFQxpliR7g19AbGhso=/media/img/2026/03/DIS_SelfDriving_Still/original.png"><media:credit>Illustration by Sean Dong</media:credit></media:content><title type="html">My Tesla Was Driving Itself Perfectly—Until It Crashed</title><published>2026-03-17T07:00:00-04:00</published><updated>2026-03-17T13:03:24-04:00</updated><summary type="html">The danger of almost-perfect tech</summary><link href="https://www.theatlantic.com/magazine/2026/04/self-driving-car-technology-tesla-crash/686054/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686389</id><content type="html">&lt;p class="dropcap"&gt;F&lt;span class="smallcaps"&gt;rom the comfort of my desk&lt;/span&gt;, I can see it all. A series of webcam feeds show me the sun setting over Tel Aviv and southern Lebanon. A map of the world, flecked with red dots, indicates that most of Europe and the Middle East are on “high alert.” I toggle a button on the map’s control panel, and the globe is instantly latticed with the locations of undersea fiber-optic cables. Below the map, a live feed of Bloomberg TV is running with the chyron &lt;span class="smallcaps"&gt;Oil Extends Rout on Stockpile Talks&lt;/span&gt;. 
I scroll down and am greeted by walls of headlines, grouped into categories such as “World News” and “Intel Feed.” A “country instability” meter clocks Iran at 100 percent, while a different widget informs me that the world’s “strategic risk overview” remains “stable” at 50, whatever that means.   &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I am looking at &lt;a href="https://www.worldmonitor.app/"&gt;World Monitor&lt;/a&gt;, a website that turns any browser into a makeshift situation room, and I love it. Built to look like a cross between a Bloomberg terminal and a big screen at U.S. Strategic Command, the site aims to display as much information about world events as possible in an assortment of real-time feeds. This is information overload presented as &lt;em&gt;intelligence&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;World Monitor was built over a single weekend in January by Elie Habib, an engineer based in the United Arab Emirates whose day job is as CEO of Anghami, one of the Middle East’s largest music-streaming services. “I wanted to extract the signal from the noise,” he told me recently. But what he really built, by his own admission, is a noise machine. Right now, the site pulls in more than 100 different streams of data, including stock prices, prediction markets, satellite movements, weather alerts, major-airport flight data, fire outbreaks, and the operational status of cloud services such as Cloudflare and AWS. The information is all real, but what exactly a person ought to do with it is unclear.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When Habib posted about the project on X, he was shocked by the &lt;a href="https://x.com/heynavtoor/status/2025533164454846629?s=20"&gt;response&lt;/a&gt;. At one point, tens of thousands of people were using the site at the same time; more than 2 million people accessed it in the first 20 days. 
Habib’s inbox filled with requests for new features as well as messages from venture capitalists looking to spin up World Monitor into a full-time business. Via GitHub, where Habib has made the code for World Monitor open-source and accessible to all, developers have made thousands of customized tweaks to the site and have translated it into more than 20 languages.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Obviously, people want immediate information on the conflict in Iran and the geopolitical and economic fallout from the war. But the site’s popularity stems from something else too. For the past year or so, extremely online weirdos—news junkies, day traders, social-media addicts, amateur investigators, guys who put up long posts on X about hacking their productivity—have embraced a meme about “monitoring the situation.” The phrase originates from a 2025 &lt;a href="https://x.com/netcapgirl/status/1879955311236419794?s=20"&gt;viral X post&lt;/a&gt; showing a jacked, arms-crossed, headset-wearing Jeff Bezos watching a Blue Origin launch: “The masculine urge to monitor the situation,” the caption says.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Like most memes, the bulk of situation-monitoring posts are &lt;a href="https://x.com/BoringBiz_/status/2007631765532479635?s=20"&gt;ironic&lt;/a&gt;. They &lt;a href="https://x.com/phantom/status/2028969213021634747?s=20"&gt;poke fun&lt;/a&gt; at the self-importance of the phenomenon. (“He’s not unemployed, he’s monitoring the situation,” one representative example reads.) Most of the people who make these posts are offering an enjoyable, winking blend of two perspectives:&lt;em&gt; This is loser behavior&lt;/em&gt; and &lt;em&gt;Dudes rock&lt;/em&gt;. 
Suffice it to say, World Monitor has thrilled this cohort, causing its fans to post things &lt;a href="https://x.com/eliehabib/status/2030867608980091115"&gt;such as&lt;/a&gt; “BREAKING: you can now turn your laptop into a CIA command center.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But this year, the monitoring jokes have taken on a different valence. The fog of the Trump administration’s wars has created an information vacuum that can immediately be filled on social media. Some of the people populating the world’s feeds are doing valuable work—the journalists and open-source-intelligence gatherers trying to confirm events and produce original reporting, for example. But they are outnumbered by propagandists, trolls, anxious commentators, &lt;a href="https://www.theatlantic.com/technology/2026/03/polymarket-insider-trading-going-get-people-killed/686283/?utm_source=feed"&gt;war-market gamblers&lt;/a&gt;, and clout chasers who, apparently, became experts on the Strait of Hormuz overnight. These people post things &lt;a href="https://x.com/RoundtableSpace/status/2028398656773292280?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2028398656773292280%7Ctwgr%5E4f44d55d20edb032a987b36e160aedd79e8709f8%7Ctwcon%5Es1_&amp;amp;ref_url=https%3A%2F%2Fthesizzle.com.au%2Fp%2Fare-you-monitoring-the-situation-or-information-gooning-apple-s-zoomy-new-laptops-and-downdetector-s"&gt;such as&lt;/a&gt; “Hey babe, wake up, they just dropped a new war monitor.” They aren’t just monitoring the situation; they’re posting constantly &lt;em&gt;about&lt;/em&gt; monitoring the situation.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/internet-nihilism-crisis/686010/?utm_source=feed"&gt;Read: This is what it looks like when nothing matters&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;People treating war like entertainment seems like a logical extension of X, which has lost some of its real-time-news utility since Elon 
Musk took over and alienated many of the people who used to post there, and encouraged an army of edgelord users who treat the site like a 4chan board. (And people used to complain about the ludicrous ways that cable-news hosts vamped to fill 24 hours of coverage.) The meme speaks to something much bigger than that, though: Ours is a culture that has developed an insatiable need for instant information on all things at all times. Of course, we all live in saturated information environments, powered by constant connectivity and on-demand-answer services—Google, Wikipedia, chatbots. But I’ve also come to see all of this as a defense mechanism in an era of real chaos, when overlapping crises and technologies make the world feel unknowable and hyperreal.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The abiding feeling of 2026 is that too many consequential things are happening too fast for most people to follow, let alone understand. The United States invaded Venezuela in the night and captured its leader, Nicolás Maduro, 69 days ago. Renee Good was killed by an ICE agent 66 days ago; Alex Pretti was tackled to the ground in Minneapolis and killed by agents of the state 49 days ago. The last tranche of the Epstein files—millions of pages documenting Jeffrey Epstein’s dizzying connections to many of the most famous and powerful people in the world—came out 43 days ago. It’s been 22 days since the Supreme Court struck down Donald Trump’s tariffs. 
On February 4, a pseudonymous account believed to belong to an OpenAI employee &lt;a href="https://x.com/tszzl/status/2019115479378588055"&gt;snarkily&lt;/a&gt; commented that “Anthropic has the same level of name recognition among superbowl viewers as literally fictional companies.” Now the company is embroiled in a &lt;a href="https://www.theatlantic.com/technology/2026/03/pentagon-anthropic-dispute/686307/?utm_source=feed"&gt;massive fight with the Pentagon&lt;/a&gt;; its CEO is on the cover of a forthcoming issue of &lt;em&gt;Time&lt;/em&gt;. Yet most of these events have been pushed aside to make space for a war in Iran that the administration has hardly attempted to justify.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is partly a consequence of our information ecosystem, which continues to evolve; more information is being created on more feeds, and through new products such as chatbots. Also, Trump’s reckless and erratic presidency has made reality move at online speeds. In the words of my &lt;a href="https://www.theatlantic.com/newsletters/2026/03/trump-iran-war-confusion-mixed-messages/686320/?utm_source=feed"&gt;colleague&lt;/a&gt; David A. Graham, the administration “can’t say why the United States went to war with Iran, and it can’t say what the goal of the war is. 
Now it can’t even decide whether the war is still going on.” The absurdity, the lack of pretense, and the senselessness all feel appropriate to the current age; as the writer John Ganz recently &lt;a href="https://www.unpopularfront.news/p/command-shift-war?utm_source=post-email-title&amp;amp;publication_id=112019&amp;amp;post_id=190607782&amp;amp;utm_campaign=email-post-title&amp;amp;isFreemail=true&amp;amp;r=2f1r&amp;amp;triedRedirect=true&amp;amp;utm_medium=email"&gt;wrote&lt;/a&gt;, the war with Iran is “the first war that feels like it’s been launched by A.I: It’s all been done on a level less than thought.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Monitoring &lt;/em&gt;is a reasonable response to all of this: It seems to offer a sense of agency. “They feel in control,” Habib told me when I asked why he thinks people like World Monitor. “They see everything happening in front of them, and it’s like, you know, watching a Bruce Willis movie.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yet this response to information overload is warping in its own way: People demand new news and commentary every time they refresh a feed. Taking even a short break can be disorienting when you attempt to rejoin a discourse that feels ever more self-referential and intense. Arguably, the best example of this dynamic is the Trump administration itself: Earlier this week, the official White House account on X &lt;a href="https://x.com/WhiteHouse/status/2032115039985881556"&gt;published&lt;/a&gt; a video superimposing footage of the military bombing targets in Iran onto the 2006 Nintendo game &lt;em&gt;Wii Sports&lt;/em&gt;. The account publishes stuff like this all of the time—and that’s exactly the point. The content outrages some people and delights others; publishing more of it advances the meta discourse that’s been layered on top of the actual news, drawing attention from the unfolding conflict itself. 
Because in reality, your attention can catch on only so much.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/minneapolis-protests-footage/685753/?utm_source=feed"&gt;Read: Believe your eyes&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;This kind of thing is happening everywhere, constantly. If you’re not on World Monitor, you may be in a social feed, or in multiple social feeds, or trying to figure out which articles to tap into on a cluttered front page, or which newsletters to open in your inbox, or which podcasts to listen to at 1.3-times speed so that you can get to the good parts. The effect is not necessarily that you feel more informed; if you’re anything like me, you probably feel alienated, if not worse. Those who have chosen to try to keep up with the news cycle in 2026 are &lt;a href="https://bsky.app/profile/geoffdgeorge.com/post/3mg6dvmsdkc2k"&gt;awareing themselves to death&lt;/a&gt;, as the writer Geoff George put it.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The situation brings to mind yet another grotesque online phenomenon: “&lt;a href="https://thesizzle.com.au/p/are-you-monitoring-the-situation-or-information-gooning-apple-s-zoomy-new-laptops-and-downdetector-s"&gt;gooning&lt;/a&gt;.” For the blessedly unaware, gooning is when maladjusted young men consume immense, overstimulating amounts of pornography and masturbate for hours on end to reach some kind of transcendent release. The comparison may sound absurd, but, as Daniel Kolitz wrote in a recent &lt;a href="https://harpers.org/archive/2025/11/the-goon-squad-daniel-kolitz-porn-masturbation-loneliness/"&gt;&lt;em&gt;Harper’s &lt;/em&gt;article&lt;/a&gt; about the subculture, it mirrors the hyper-online monitoring behavior that I’ve been describing:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;What are these gooners actually doing? Wasting hours each day consuming short-form video content. Chasing intensities of sensation across platforms. Parasocially fixating on microcelebrities who want their money. Broadcasting their love for those microcelebrities in public forums. Conducting bizarre self-experiments because someone on the internet told them to. In general, abjuring connective, other-directed pleasures for the comfort of staring at screens alone. Does any of this sound familiar?&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The internet now implores us to binge as a default behavior: to watch whole seasons of TV at a time, to watch every football game simultaneously in &lt;a href="https://www.youtube.com/watch?v=wkW7wL_6TXU"&gt;quad-box&lt;/a&gt; fashion. We’re prompted to keep talking to the chatbot for answers or companionship; to let the AI agent accomplish task after task until we have built a website in an hour; to obsess in relentless, completist fandoms or go down rabbit holes. Total bombardment is partly a surrender to the internet and its logic and algorithms—a kind of attentional death in which a person is no longer overwhelmed because they have given up. You could also see it as an attempt to hold their footing as the zone floods with shit. Because everything is happening too much, too fast. More.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There is a cost to all of this—a flattening of every event, feeling, and piece of art, commerce, joy, and suffering into the same atomic unit of attention, all of them easily replaced by what comes next. The worst, most shameless people in the world already understand this and use that cold logic to their advantage. You do not need to justify a war if you believe that, ultimately, people will lose interest in it and move on to the next outrage.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I have suggested in the past that our information ecosystem is broken. But I now suspect that’s wrong: This is how it is meant to work. These online products sustain themselves by making us dependent on the content that makes us feel powerless and miserable. Where does this all lead? To further exploitation? To some kind of informational oblivion? Or will there be a breaking point, a moment when the addled masses reject the logic and speed of our information environment? 
I can’t say—but I’m monitoring the situation.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/TIrscRgXnDpo9eXfXfaHNA89ZFk=/media/img/mt/2026/03/2026_03_11_Monitoring/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">Doomscrolling Is Over</title><published>2026-03-14T06:47:00-04:00</published><updated>2026-03-18T13:40:46-04:00</updated><summary type="html">Now everyone is “monitoring the situation.”</summary><link href="https://www.theatlantic.com/technology/2026/03/world-monitor-situation-meme/686389/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686377</id><content type="html">&lt;p&gt;In a TikTok video posted earlier this week, a Chihuahua claps its paws and dances to disco in front of a Tesla. “EV owners seeing gas prices go up, and not having to pay it,” the caption reads. In another, a clip of the comedian Zach Galifianakis laughing hysterically is superimposed over a gas-price sign. Across social media, Americans who drive electric vehicles can’t help but gloat. Who’s laughing now?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Indeed, a car that doesn’t require gas sure does sound appealing right now. As the Iran crisis continues to choke the global supply of oil, gas prices are rising higher and higher. Americans are now paying an average of $3.63 a gallon at the pump, according to AAA—up from $2.94 just a month ago. Four bucks may be right around the corner, and elevated prices could linger for months. Already, ride-share drivers are getting pickier about the trips they accept and driving longer hours to offset the extra costs. 
Commuters are hunting for the best deals on services such as GasBuddy—which has seen its daily active users more than double in a week and a half. At one Chevron in downtown Los Angeles, people &lt;a href="https://www.thetimes.com/us/news-today/article/america-gas-price-chevron-los-angeles-g6x3zmlwt?"&gt;are stopping&lt;/a&gt; just to take photos of the electronic sign displaying a price of $8.38 per gallon.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;America could have entered this fiasco with a better hand. The current spike in gas prices—and whatever comes next—could have been much more manageable if more people had electric vehicles in their driveway. Yet relatively few Americans are currently in the position to recharge instead of refuel (regardless of whether they’re rubbing it in with Chihuahua memes). In the United States, sales of electric vehicles have risen considerably over the years, but adoption lags behind the rest of the world. Just under 8 percent of new cars sold last year in the U.S. were electric, compared with a fifth in Europe and a third in China. Now America is quite literally paying the price for sticking with gas.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ford-china-electric-cars/683880/?utm_source=feed"&gt;Read: The American car industry can’t go on like this&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Some of the skepticism toward EVs is understandable: They generally cost more than conventional cars, plus there’s that unfamiliar business of charging. A road trip in an EV requires more planning than simply stopping at the nearest gas station when the low-fuel light starts blinking. 
On top of that, low gas prices have made it easy for less climate-conscious buyers to adopt an attitude of &lt;em&gt;why bother?&lt;/em&gt;&lt;/p&gt;&lt;p&gt;As much as Americans love giant SUVs and V8-engine-powered pickup trucks, ballooning gas prices have historically pushed car buyers to seek out more efficient options. In 2008, when prices hit $3.50 a gallon, tiny fuel savers such as the Honda Fit and Smart ForTwo had a moment. In 2012, when the national average closed in on four bucks, the Toyota Prius smashed sales records. The last time we had major sticker shock at the pump (after Russia invaded Ukraine in 2022), the modern-EV market was only just starting to take off—with Ford, Hyundai, Kia, and others jumping in to compete with Tesla.&lt;/p&gt;&lt;p&gt;The bigger problem is that the Trump administration and the Republican-controlled Congress have spent the past year devising new and creative ways to keep the country hooked on internal combustion. As part of the One Big Beautiful Bill Act, Congress killed the $7,500 EV tax credit, a carrot to encourage Americans to go green. (After the tax credit expired at the end of September, the all-electric share of car sales dropped by about half and has struggled to recover.) Regulations pushing car companies to sell cleaner vehicles, including more battery-powered ones, have also disappeared. Freed from regulations, and facing a milder appetite for EVs, automakers have seized the opportunity to backtrack. Many are canceling or delaying EV models while doubling down on gas-guzzlers. 
This week, Honda was the latest to join the trend, &lt;a href="https://hondanews.com/en-US/releases/release-b2b85c2cde944ef0d23c0483c5059ac2-honda-announces-losses-associated-with-reassessment-of-automobile-electrification-strategy-revision-to-forecast-for-consolidated-financial-results-and-future-direction"&gt;announcing&lt;/a&gt; that it would axe three upcoming EVs before production had even begun.&lt;/p&gt;&lt;p&gt;It’s not hard to imagine how different things could be right now without all the policy whiplash. Perhaps many more Americans would not have to anxiously check gas prices whenever they pass a Shell or BP. The math is undeniable: If you charge at home, driving a typical electric car 100 miles costs about $5, based on the latest available residential-electricity rates from the U.S. Energy Information Administration. (Plugging in at public chargers is a lot more expensive.) To match that in a fully gas-powered car at today’s gas prices, you would need to find one that gets 70 miles to the gallon. They don’t exist.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/car-prices-too-high/685345/?utm_source=feed"&gt;Read: The backlash against car prices is here&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;If the high prices stick around or climb even higher, that could certainly nudge more car buyers toward going electric. The car-buying site &lt;a href="https://www.edmunds.com/car-news/electrified-vehicle-research-gas-prices-data.html"&gt;Edmunds says&lt;/a&gt; that it’s already seen a “slight uptick” in shoppers considering hybrids and EVs. “More shoppers could begin weighing fuel economy and electrification more seriously as they plan their next purchase,” Jessica Caldwell, the firm’s head of insights, wrote this week. Even as the Trump administration hampers EVs and the auto industry shifts its focus, electric cars are becoming better gas replacements than ever before. 
Many new EVs now come with battery ranges that exceed 300 miles, and the charging infrastructure is finally catching up. With new options like the roughly $30,000 Nissan Leaf and a flood of lightly used EVs hitting dealerships, there are lots of deals to be found. A host of new and impressive models are on track to land in 2026 (assuming they don’t all get prematurely yanked from the market, that is).&lt;/p&gt;&lt;p&gt;President Trump may not be a fan of electric cars, but the Iran fuel shock could become its own kind of stimulus for EVs. If things get really bad, car companies may even regret back-burnering the exact kind of vehicles that Americans start to crave. Will pain at the pump override everything Trump has done to derail EVs, and launch this technology into a golden age? Probably not. If fuel costs were the whole ball game, then EVs would already dominate, and giant pickups would be going extinct. Hesitancy around EVs runs deep, and not everyone can charge at home, where they can register the biggest savings.&lt;/p&gt;&lt;p&gt;But the higher prices go, and the longer they stay that way, the greater the chance that more Americans will do the math and decide that they are done paying for gas once and for all. If enough drivers go that route, the next oil crisis might just sting a little less.&lt;/p&gt;
As the cost of gas soars, we’re now paying the price.</summary><link href="https://www.theatlantic.com/technology/2026/03/electric-vehicles-gas-prices-iran/686377/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:39-686064</id><content type="html">&lt;p&gt;&lt;i&gt;Photographs by Landon Speers&lt;/i&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;A&lt;span class="smallcaps"&gt;s we drove through&lt;/span&gt; southwest Memphis, KeShaun Pearson told me to keep my window down—our destination was best tasted, not viewed. Along the way, we passed an abandoned coal plant to our right, then an active power plant to our left, equipped with enormous natural-gas turbines. Pearson, who directs the nonprofit Memphis Community Against Pollution, was bringing me to his hometown’s latest industrial megaproject.&lt;/p&gt;&lt;p&gt;Already, the air smelled of soot, gasoline, and asphalt. Then I felt a tickle sliding up my nostrils and down into my throat, like I was getting a cold. As we approached, I heard the rumble of cranes and trucks, and then from behind a patch of trees emerged a forest of electrical towers. Finally, I saw it—a white-walled hangar, bigger than a dozen football fields, where Elon Musk intends to build a god.&lt;/p&gt;&lt;aside class="callout-placeholder" data-source="magazine-issue"&gt;&lt;/aside&gt;&lt;p&gt;This is Colossus: a data center that Musk’s artificial-intelligence company, xAI, is using as a training ground for Grok, one of the world’s most advanced generative-AI models. Training these models takes a staggering amount of energy; if run at full strength for a year, Colossus would use as much electricity as 200,000 American homes. When fully operational, Musk has written on X, this facility and two other xAI data centers nearby will require nearly two gigawatts of power. 
Annually, those facilities could consume roughly twice as much electricity as the city of Seattle.&lt;/p&gt;&lt;p&gt;To get Colossus up and running fast, xAI built its own power plant, setting up as many as 35 natural-gas turbines—railcar-size engines that can be major sources of smog—according to imagery obtained by the Southern Environmental Law Center. Pearson coughed as we drove by the facility. The scratch in my throat worsened, and I rolled up my window.&lt;/p&gt;&lt;p&gt;xAI’s rivals are all building similarly large data centers to develop their most powerful generative-AI models; a metropolis’s worth of electricity will surge through facilities that occupy a few city blocks. These companies have primarily made their chatbots “smarter” not by writing niftier code but by making them bigger: ramming more data through more powerful computer chips that use more electricity. OpenAI has announced plans for facilities requiring more than 30 gigawatts of power in total—more than the largest recorded demand for all of New England. Since ChatGPT’s launch, in November 2022, the capital expenditures of Amazon, Microsoft, Meta, and Google have exceeded $600 billion, and much of that spending has gone toward data centers—more, even after adjusting for inflation, than the government spent to build the entire interstate-highway system. “These are the largest single points of consumption of electricity in history,” Jesse Jenkins, a climate modeler at Princeton, told me.&lt;/p&gt;&lt;p&gt;Even conservative analyses forecast that the tech industry will drop the equivalent of roughly 40 Seattles onto America’s grid within a decade; aggressive scenarios predict more than 60 in half that time. According to Siddharth Singh, an energy-investment analyst at the International Energy Agency, by 2030, U.S. data centers will consume more electricity than all of the country’s heavy industries—more than the cement, steel, chemical, car, and other industrial facilities put together. 
Roughly half of that demand will come from data centers equipped for the particular needs of generative AI—programs, such as ChatGPT, that can produce text and images, solve complex math problems, and perhaps one day inform scientific discoveries.&lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img alt="photo of enormous warehouse with numerous external cooling structures, with bronzed field of corn growing in foreground" height="522" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0808_16x9/69a62b640.jpg" width="928"&gt;
&lt;figcaption class="caption"&gt;Colossus, Elon Musk’s data center in Memphis, can consume as much electricity over the course of a year as 200,000 American homes. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;To power AI, energy and tech companies are turning to fossil fuels, which they regard as more reliable and readily available than wind, solar, or nuclear. Asked where the energy for data centers should come from, OpenAI CEO Sam Altman &lt;a href="https://conversationswithtyler.com/episodes/sam-altman-2/"&gt;has repeatedly said&lt;/a&gt;, “Short-term: natural gas.” (OpenAI and &lt;i&gt;The Atlantic&lt;/i&gt; have a corporate partnership.) A Louisiana utility plans to build three natural-gas plants for a Meta data center that, upon completion, will be among the largest in this hemisphere. The lifespans of coal plants, too, are being extended to power new data centers. And the IEA estimates that data-center emissions could more than double by 2030—becoming one of the fastest-growing sources of greenhouse gases in the world.&lt;/p&gt;&lt;p&gt;The optimist’s case is that, by then, advanced nuclear reactors will have obviated many of the new fossil-fuel plants, and AI tools will have invented technologies that can solve the climate crisis. That may well happen. But today, “the market has converged on &lt;i&gt;Add gas now, and then add nuclear later&lt;/i&gt;,” Jenkins said. In other words, if natural-gas turbines seem to offer the most expedient path to an AI-enhanced future, then clean air may have to wait.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;A data center &lt;/span&gt;is a planet of contradictions: heat without motion, shelter without bodies, light without sky. “The lifeblood of the internet is essentially flowing through these sites,” Jon Lin, the chief business officer at Equinix, one of the world’s largest data-center companies, told me in an Equinix facility in Loudoun County, Virginia. Behind Lin, someone in a green hoodie fiddled with computer chips shelved in a row of humming, refrigerator-size cabinets on the data-center floor. 
There were no windows, to keep the facility secure and to guard against the sun’s heat. As we walked along a corridor of cabinets, motion-activated lights illuminated the way. Farther ahead, only faint blue lights and blinking computer equipment pierced the darkness.&lt;/p&gt;&lt;p&gt;Ever since the first data centers were built, in the mid-20th century, their &lt;a href="https://www.ibm.com/think/topics/data-centers"&gt;purpose has remained constant&lt;/a&gt;: pack computer equipment close together to store and send information as efficiently as possible. But their scale has grown dramatically. The original data centers were simply large rooms housing mainframe computers. With the rise of the internet, in the 1990s, backroom computers gave way to entire buildings, such as the one Lin and I stood in—facilities that enable us to stream movies, trade stocks, store medical records, manage supply chains, and make military decisions. Now the AI race is requiring vastly greater computing power, which has led to even bigger data centers, ones filled with computer chips that are much hungrier and run much hotter.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/nvidia-chips-gpu-generative-ai/677664/?utm_source=feed"&gt;Read: The lifeblood of the AI boom&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In a traditional data center, the cabinets are cooled by industrial fans—as we walked through the Equinix facility, I felt a constant breeze on my cheek—and rooftop cooling towers eventually expel the heat. The cabinets in a generative-AI data center use dozens of times more electricity. Lin showed me a row of AI-specialized cabinets used by Block, the firm that owns Square and Cash App, which radiated enough heat to make me break a sweat; to cool them, water runs into special metal plates that sit atop the chips inside the cabinets. 
AI data centers are filled with similar equipment, and cooling thousands of cabinets &lt;a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/?utm_source=feed"&gt;can require a lot of water&lt;/a&gt;. Public records from the Memphis water utility, for instance, show that the address for Colossus used more than 11 million gallons in September alone, as much as 150 homes use in an entire year. When a data center’s cooling equipment malfunctions, spiraling heat combined with humid air has yielded that rarest of meteorological events: indoor rain.&lt;/p&gt;&lt;p&gt;Placing servers in the same or neighboring buildings allows them to exchange information seamlessly and quickly, and Loudoun County has the highest concentration of data centers in the world, with 199 already operating and another 30 or so on the way. According to one report, 13 percent of global data-center capacity is squeezed into the county’s 520 square miles. One particularly dense stretch is called “Data Center Alley.”&lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img alt="photo from inside warehouse of metal mesh cage around stacks of computer equipment with numerous cables extending to ceiling" height="619" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_ASHBURN_1165/553c9896e.jpg" width="928"&gt;
&lt;figcaption class="caption"&gt;Cabinets of computer chips at a data center in Loudoun County, Virginia (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Northern Virginia offers a glimpse into what the AI rush may bring to the rest of the nation. Loudoun is running out of space, but new data-center hubs are popping up in Phoenix, Atlanta, and Dallas. Amazon and Meta are building AI data centers in Indiana and Louisiana, respectively, that will each require more than two gigawatts of electricity, dozens of times more than standard facilities. OpenAI has proposed that the U.S. establish “AI Economic Zones”: little Loudouns everywhere.&lt;/p&gt;&lt;p&gt;As I drove into Data Center Alley with Julie Bolthouse, the director of land use at the Piedmont Environmental Council, she explained how to distinguish data centers from warehouses: cooling towers on the roof, dozens of backup diesel generators to one side, no windows (or false ones, decorative glass panels backed by a wall of concrete). There didn’t seem to be any warehouses, though, and I gave up counting data centers within minutes, unable to tell where one facility ended and the next one began. Bolthouse helps run a coalition aiming to slow data-center development throughout Virginia, but in Loudoun, it is too late. So many data centers are under construction just north of Dulles International Airport that hills of freshly dug dirt loom over roads and orange dust tints the air. Should Musk successfully colonize Mars, the early stages of terraforming might look like this.&lt;/p&gt;&lt;p&gt;The architect of this labyrinth is Buddy Rizer, Loudoun’s longtime executive director of economic development. Rizer has courted data centers with regulatory and state tax incentives, and when we met in his office, he told me that since 2009, at least one has been under construction at any given time. Data centers are typically operated by only a few dozen staff members, but building them has produced a steady source of employment. 
They also provide nearly 40 percent of the county’s budget, helping to pay for police, schools, and parks for a population that has grown steadily since 2010.&lt;/p&gt;&lt;p&gt;Within a 1.5-mile radius of us, Rizer said, were 12 substations: small jungles of metal poles and wiring that convert high-voltage electricity into a form you’d use to charge your iPhone or, in this case, run a data center. All around us were towering utility poles strung with high-voltage transmission lines that carry raw electricity from power plants to those substations; they hang over Loudoun like a canopy, or a cobweb. Follow any one cable far enough, and you’re likely to reach a data center.&lt;/p&gt;&lt;p&gt;For years to come, the AI race is projected to be the main force driving roughly 2 percent annual growth in U.S. electricity demand, which has been stagnant for nearly two decades. Nationally, this is not a crisis; regionally, it may be. Dominion Energy, the major electrical utility in Virginia, predicts growth of 5.5 percent each year, with overall electricity demand doubling by 2039. Aaron Ruby, a spokesperson for Dominion, told me that the company is preparing to meet that surge, though he was frank about the challenge: “We are experiencing the largest growth in power demand since the years following World War II.” By the end of the decade, training the industry’s most powerful AI model could require as much electricity as millions of American homes.&lt;/p&gt;&lt;p&gt;In China, hundreds of data centers have been announced since 2023, and additional facilities are planned for &lt;a href="https://www.scientificamerican.com/article/china-powers-ai-boom-with-undersea-data-centers/"&gt;beneath the ocean&lt;/a&gt; and &lt;a href="https://www.bloomberg.com/news/articles/2025-07-08/china-builds-ai-dreams-with-giant-data-centers-in-the-desert"&gt;in the desert&lt;/a&gt;. 
China’s biggest advantage in the AI race is not the talent of its software engineers or the quantity of its data centers, but its abundance of energy: In 2024, the nation produced nearly as much electricity as the U.S., Europe, and India combined.&lt;/p&gt;&lt;p&gt;President Trump has declared that the nation is in an “energy emergency,” and been vocal about the need to build more power plants for the U.S. to win the AI race. A senior executive at OpenAI told me that the U.S. needs to activate every resource at its disposal—solar panels, natural-gas turbines, nuclear reactors. And Anthropic, OpenAI’s top rival, published a report arguing that the U.S. should streamline permitting for data centers and power plants in order to keep pace with China.&lt;/p&gt;&lt;p&gt;But an internet-driven energy crisis has failed to materialize before: As fiber-optic cables were being laid in Loudoun in the 1990s, energy companies built more coal- and gas-fired plants. “Dig More Coal—The PCs Are Coming,” &lt;a href="https://www.forbes.com/forbes/1999/0531/6311070a.html"&gt;read a 1999 &lt;i&gt;Forbes&lt;/i&gt; headline&lt;/a&gt;. When the demand didn’t arrive, the nation was left with a glut of gas plants and multiple bankrupt energy companies.&lt;/p&gt;&lt;p&gt;The generative-AI boom, too, could prove to be a bubble. The technology remains extraordinarily expensive, largely because of the cost of advanced computer chips, and no AI firm has presented a convincing business model. One path to profitability might be more efficient algorithms—which would preclude the need for the new natural-gas plants. And if AI doesn’t turn out to be as transformative a technology as experts predict, swaths of data centers could be left unused or unfinished—ruins from a future that never came to pass.&lt;/p&gt;&lt;p&gt;Either way, the rush to power data centers as fast as possible has already pushed the U.S. 
to expand its reliance on fossil fuels.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Behind her one-story &lt;/span&gt;brick home in southwest Memphis, Sarah Gladney grows tomatoes, and when the vines wilted early last summer, she had a suspect in mind. “When the wind comes up early in the morning, I can smell it,” Gladney told me, nodding in the direction of Colossus. One of her neighbors, Marilyn Gooch, told me the data center’s turbines have made her uncertain about whether she should let her grandchildren visit.&lt;/p&gt;&lt;p&gt;Their neighborhood, Boxtown, is named for the railway boxcars that formerly enslaved people used to build homes, and is still almost entirely Black. Virtually every heavy industry has set up nearby—a wastewater facility, an oil refinery, a coal-fired power plant. Colossus itself, which is next to a steel mill and a trucking and rail yard, occupies the hull of an old oven factory. Life expectancy in and around Boxtown is more than five years below the national average, and the cancer risk in southwest Memphis is four times higher. What KeShaun Pearson and I smelled may not have been Colossus itself; xAI had chosen an area so besieged by heavy industry that any exhaust from the facility’s turbines would mix in with a pervasive smog.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of simple railroad-style house with peeling white paint and large trees in background" height="908" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0695/aafb79baa.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;In Boxtown, a neighborhood in southwest Memphis, many residents and elected officials were unaware that Colossus was being built until the project was well under way. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Colossus was built so quickly that many Boxtown residents and elected officials didn’t know what was happening until the project was well under way. Construction began in May 2024, and the project was announced the following month. Gladney, Pearson, and his younger brother Justin—who represents the district in the Tennessee General Assembly—found out about the project that day in June. By Labor Day weekend, less than three months after the press conference, Colossus was up and running.&lt;/p&gt;&lt;p&gt;The company installed its own gas turbines because that was faster than waiting on the local grid, and argued that it did not need a permit to do so because the turbines would operate for less than a year, a claim that the Southern Environmental Law Center, representing the NAACP, contested in a letter threatening to sue the company. (xAI has since received a permit for 15 turbines, and is reportedly operating 12.) Meanwhile, residents report that they have had respiratory issues flare up since xAI moved in.&lt;/p&gt;&lt;p&gt;Last June, when an analysis commissioned by the city of Memphis found “no dangerous levels” of pollutants in Boxtown and at two other test locations, the SELC criticized the study’s methods. Using satellite data, &lt;a href="https://time.com/7308925/elon-musk-memphis-ai-data-center/"&gt;researchers at the University of Tennessee at Knoxville found&lt;/a&gt; that levels of nitrogen dioxide—which causes smog and is associated with asthma and other respiratory problems—near Colossus have been substantially elevated since its public announcement. (xAI &lt;a href="https://x.ai/memphis/fact-v-fiction"&gt;says on its website&lt;/a&gt; that it will install technology to reduce the pollution from its turbines. 
The company, the Shelby County Health Department, and the Memphis mayor’s office did not respond to a list of questions about Colossus’s environmental impacts and xAI’s presence in Memphis; the Greater Memphis Chamber of Commerce declined to comment.)&lt;/p&gt;&lt;p&gt;Fossil fuels have become the default for data centers around the country. OpenAI’s first Stargate data center, in Texas, also has its own gas-fired power plant. Chevron and Exxon are angling to hook natural-gas facilities directly into data centers, and the world’s three major manufacturers of natural-gas turbines all advertise their products as convenient energy sources for data centers. Michael Eugenis, the director of resource planning at Arizona Public Service, the state’s largest utility, told me that because of the demand from data centers, the company is adding more fossil-fuel capacity than it otherwise would have; natural gas will help power Microsoft, Amazon, and Oracle data centers, too.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of transmission lines with large towers and large spools of metal cable in foreground" height="665" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0401/26345e564.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Transmission lines, like these in Memphis, carry electricity throughout the grid—including to data centers. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;In early 2025, a company affiliated with xAI purchased a former warehouse and nearly 200 acres south of Colossus to set up another data center, Colossus II. On a weekday afternoon, the road near the site was dense with traffic—not dump trucks and forklifts, but sedans lining up outside the adjacent public school for pickup. An xAI affiliate bought a retired Duke Energy plant about a mile away in Mississippi that is likely to power this facility, and filed an application to operate 41 natural-gas turbines on the site. Those turbines could emit more carbon dioxide annually than the city of San Jose.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;On an island &lt;/span&gt;in the Susquehanna River, just south of Harrisburg, Pennsylvania, I saw another way to power the AI boom. Above me loomed four beige hourglass-shaped structures, each some 365 feet tall: the cooling towers for Three Mile Island, the site of the worst nuclear disaster in American history. On March 28, 1979, the facility was only a few years old, and nuclear-energy reactors were being built across the country. But a series of mechanical and human errors caused the core of one of the reactors, Unit Two, to rapidly overheat and leak radioactive material. The effects on human health and the environment were negligible, but together with the catastrophe at Chernobyl seven years later, the partial meltdown turned public sentiment strongly against nuclear power.&lt;/p&gt;&lt;p&gt;Three Mile Island’s Unit One went undamaged and continued operating, after a brief pause, until 2019. 
By then natural gas was too cheap, the regulatory environment was too unfriendly, and the losses—hundreds of millions of dollars—were too great for Constellation Energy, which owns Unit One, to keep the plant running.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2023/03/climate-change-nuclear-power-safety-radioactive-waste/672776/?utm_source=feed"&gt;From the March 2023 issue: Jonathan Rauch on the real obstacle to nuclear power&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Nobody has ever resuscitated a fully shut-down U.S. nuclear-power plant, but in fall 2024, Constellation announced plans to do just that. Microsoft had agreed to purchase electricity from Unit One to power its data centers over the next two decades, a guarantee allowing Constellation to spend the $1.6 billion needed to restart the plant. It was the ultimate bellwether of the AI age: Experts have long argued that we need clean nuclear power to reduce the grid’s existing carbon footprint. Instead, Three Mile Island will help offset a new source of emissions from a single company.&lt;/p&gt;&lt;p&gt;Constellation is now reversing the steps it took to decommission the reactor: renewing its license, restoring equipment, retraining personnel. Dave Marcheskie, a community-relations manager, explained this to me in a conference room overlooking the nuclear core, which is housed in a building that resembles a large grain silo. Behind him, a clock counted down the time to launch: 650 days, zero hours, 42 minutes, and one second.&lt;/p&gt;&lt;p&gt;As the need for carbon-free electricity grows more urgent, Americans are &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/america-nuclear-power-revival/680842/?utm_source=feed"&gt;having to reckon with nuclear energy again&lt;/a&gt;, and the AI boom has provided the industry with wealthy backers and an army of tech cheerleaders. 
Meta and Amazon are buying electricity from large nuclear-power plants, and nearly every major data-center company is investing in experimental nuclear technologies—especially small modular reactors, which in theory will make fission cheaper and easier to deploy.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/america-nuclear-power-revival/680842/?utm_source=feed"&gt;Read: A new reckoning for nuclear energy&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Nuclear energy has its downsides, of course. The waste is radioactive and must be stored almost indefinitely, and the meltdown at Japan’s Fukushima plant in 2011 was a reminder of how spectacularly dangerous nuclear reactors can be. But the dangers posed by the burning of fossil fuels are far more imminent.&lt;/p&gt;&lt;p&gt;At Three Mile Island, Marcheskie led me down a hall and into the actual power plant. Pipes, tubes, and hulking machines lined the floor and ceiling; a trefoil sign warned that a large tank potentially contained radioactive materials. The elevator was broken, so we walked a few stories up to the stadium-size room from which all of Three Mile Island’s electricity will flow. Scaffolding and shipping containers were scattered around a row of pistachio-green semi-cylinders. Once the plant restarts, uranium atoms ripped apart in the adjacent core will generate immense amounts of heat, vaporizing water into steam that will spin blades inside those cylinders 1,800 times a minute, which will in turn produce hundreds of megawatts of electricity.&lt;/p&gt;&lt;p&gt;This will be orchestrated from a nearby control room, where hundreds of lights and switches line muted-green walls. The shift manager, Bill Price, explained that one half of the main panel controls the nuclear core, while the other half controls the turbines. 
In the middle is the most important control of all: a red button that shuts down the reactor, and above it an identical button that serves as a backup. In the event of an emergency, Price said, you’d press both. I put a finger on each button and pushed.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of very large vintage-looking green control board with dozens of dials, switches, and lights" height="998" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_HARRISBURG_0332/da4d899e4.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;The original control room at Three Mile Island Unit One will become operational again when the reactor restarts. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;A small amount of the electricity generated here will support the plant itself. Microsoft &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/?utm_source=feed"&gt;is buying the remainder&lt;/a&gt; through a power-purchase agreement, a mechanism companies use to buy carbon-free electricity to match whatever their facilities draw from the grid. Power generated at Three Mile Island will help offset the energy used by data centers in Virginia and Illinois; Microsoft says it purchases enough clean energy to match all of its electricity consumption, as do Google, Amazon, and Meta. These companies are also investing in hydropower, geothermal plants, and solar panels; Google is exploring building a data center in space, to enable cloud-free access to the sun.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/?utm_source=feed"&gt;Read: For now, there’s only one good way to power AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, tech firms insist that nuclear and other clean technologies cannot be deployed quickly enough to meet their needs. President Trump has signed an executive order to accelerate permitting for natural-gas and coal-fired plants to support data centers. Yet China’s energy advantage in the AI race comes from nuclear reactors and solar panels, not coal and oil; the country is building nearly two-thirds of the world’s new solar and wind capacity.&lt;/p&gt;&lt;p&gt;The U.S. could still catch up, thanks to private investments by the likes of Google and Microsoft. A majority of planned electricity generation in the U.S. will be carbon-free, and running data centers on renewables can be done, Jenkins, the Princeton climate modeler, told me. 
Meanwhile, natural-gas turbines are so far back-ordered that acquiring one in the next few years will be virtually impossible.&lt;/p&gt;&lt;p&gt;For now, using existing power sources more wisely, rather than building new ones, may be all the AI industry needs. Electrical grids are designed for periods of peak demand—cooling on summer afternoons, heating on winter mornings—but mostly they run well below maximum capacity. Researchers at Duke University have shown that if data centers reduced their electricity consumption during some of those peaks, it would free up enough electricity to accommodate the country’s planned data centers for years. Google and xAI have already entered agreements to do so.&lt;/p&gt;&lt;p&gt;That strategy would allow tech companies to continue building more data centers without waiting for utilities to expand the grid. And time, not dollars or electrons, is the AI industry’s primary currency. Google, Microsoft, and their competitors can afford to spend historic sums without near-term financial returns, but they cannot afford to slip behind one another.&lt;/p&gt;&lt;p&gt;Time is also the biggest problem for Microsoft’s deal with Three Mile Island, which is taking years to restart. As we left the facility, Marcheskie led me south, past the beige towers and through a fog that had settled over the river. At one point we passed a cluster of concrete barrels that had escaped my attention on the drive up. Marcheskie told me that they contained all of the nuclear waste from Unit One’s 45 years of operation. Perhaps one day such casks will also line the perimeters of Colossus and Stargate.&lt;/p&gt;&lt;p&gt;AI may well overhaul how humans think and work, but it’s also pushing us toward another inflection point. We can unlock the promises of this technology by doubling down on the energy systems of the past, or we can seize the opportunity to push the grid into a carbon-free future. 
To get there, an industry that likes to move at warp speed will have to develop a quality it severely lacks: patience.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;This article appears in the &lt;/i&gt;&lt;a href="https://www.theatlantic.com/magazine/toc/2026/04/?utm_source=feed"&gt;&lt;i&gt;April 2026&lt;/i&gt;&lt;/a&gt;&lt;i&gt; print edition with the headline “Insatiable.”&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gJk88N1NVclRN9wfQt34zb_tIrk=/media/img/2026/03/AI_POWER_HARRISBURG_1488_16x9/original.jpg"><media:credit>Landon Speers for The Atlantic</media:credit><media:description>Three Mile Island’s cooling towers have until recently served as grave markers for America’s nuclear-power industry.</media:description></media:content><title type="html">Inside the Dirty, Dystopian World of AI Data Centers</title><published>2026-03-13T08:00:00-04:00</published><updated>2026-03-13T10:55:49-04:00</updated><summary type="html">The race to power AI is already remaking the physical world.</summary><link href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686343</id><content type="html">&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;o me, the best first sentence&lt;/span&gt; of any piece of journalism is the one in Joan Didion’s 1987 book, &lt;em&gt;Miami&lt;/em&gt;, which begins like this: “Havana vanities come to dust in Miami.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I love that sentence and that propulsive first chapter so much that I once sat down to try to figure out how she did it. 
I looked at the sentences one at a time to assess what purpose each one was serving, and I counted how many of them Didion had needed to accomplish each thing she wanted to accomplish. Then I thought about how she figured out what order to put them in to have maximum page-turning impact. And then I compared all of it unfavorably with the flailing and feeble way in which I would have pursued the same goals. I marked up my copy of the book in a somewhat desperate fashion and then became depressed.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That type of copying is pretty normal, and they teach it in school. It’s how you learn (and how you become depressed). But in the age of generative AI, there are many new kinds of copying. For instance, &lt;a href="https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive/"&gt;&lt;em&gt;Wired&lt;/em&gt; reported&lt;/a&gt; last week on a tool offered by Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, &lt;a href="https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews"&gt;per &lt;em&gt;The Verge&lt;/em&gt;’s reporting&lt;/a&gt;), and a bunch of academics (including some who had &lt;a href="https://futurism.com/artificial-intelligence/grammarly-ai-reviews"&gt;recently died&lt;/a&gt;).&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/books/archive/2023/08/ai-chatbot-training-books-margaret-atwood/675151/?utm_source=feed"&gt;Margaret Atwood: Murdered by my replica?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;I say “briefly” because the company deactivated the feature today. 
A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote &lt;a href="https://www.linkedin.com/posts/shishirmehrotra_back-in-august-we-launched-a-grammarly-agent-activity-7437552603737059328-vzTe/"&gt;on his LinkedIn page&lt;/a&gt; yesterday. Not long after, &lt;a href="https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/"&gt;&lt;em&gt;Wired&lt;/em&gt; reported&lt;/a&gt; that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Before the tool went down, I spent a few hours experimenting with it, trying to see what it might be like to be edited by myself. I was hesitant to do this, because I had once asked ChatGPT to write something as if it were me (just for fun!) and found the experience humiliating. The result was sentimental and ditzy—it was studded with cloying rhetorical questions, had a bizarre number of unnecessary exclamation points, and sounded exactly like me.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But I still wondered, out of self-obsession, how an AI imitation of me might advise the real me if I fed it prose that I had written, and whether it could possibly make that prose better. Clearly, this experiment was sort of a gimmick. I assumed the suggestions would exist on a spectrum from obvious to dumb, though I was open to being surprised. 
If I’m being honest, what I was most interested in was seeing who I am in this latest iteration of The Computer. I also wanted to see whether the tool was good enough that someone might someday use it instead of hiring a human editor. If it was, I would have to have a difficult but compassionate conversation with my boss.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To my dismay, I was unable to summon the AI version of myself. I pasted in numerous articles I’d written and numerous fake articles that I had asked a chatbot to make up. But Grammarly seemed to think other writers were more expert in these articles’ subject matter and therefore more qualified to advise me. It suggested tech journalists, pop-culture academics, and legendary practitioners of narrative nonfiction. I wouldn’t appear. My boss tried too. He messaged me: “i have both claude and chatgpt writing fake essays in an attempt to fool a different AI into presenting me with an unauthorized simulacrum of one of my writers.” He failed. We both felt bad about the way we were spending our time.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;So I gave up on that and started engaging with the experts I had been given. The tool was really pretty funny. It was not impersonating people in exactly the way that I’d imagined it would. I wasn’t getting a message from a bot pretending to be the &lt;em&gt;New Yorker&lt;/em&gt; writer Susan Orlean. At no point did Grammarly say, “Hi, I’m Susan Orlean.” Instead, it would say, “Taking inspiration from Susan Orlean,” “Applying ideas from John McPhee,” “Using concepts from Bruce V. 
Lewenstein” (an undergraduate professor of mine, coincidentally), and so on.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/books/archive/2023/08/stephen-king-books-ai-writing/675088/?utm_source=feed"&gt;Stephen King: My books were used to train AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The inspiration, ideas, and concepts that the tool drew from these writers and thinkers were, without exception, incredibly stupid and unhelpful (thank God). When I pasted in a story that I had written about TikTok, for instance, Grammarly told me it was drawing inspiration from my co-worker Charlie Warzel’s Galaxy Brain newsletter and then suggested changing the headline from “TikTok’s New Paranoia Problem” to “TikTok’s Zeroed-Out Voices: The New Paranoia Problem.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I asked it to look at an excerpt from my 2022 book on One Direction fans, it told me that it was going to improve the first sentence with a suggestion inspired by Joan Didion’s &lt;em&gt;The White Album&lt;/em&gt;. Amazing! But then the idea was just to open with a quote from a young woman I had written about, which didn’t seem uniquely Didion-esque. The bot clarified. “In &lt;em&gt;The White Album&lt;/em&gt;, Joan Didion emphasizes the importance of personal narratives in understanding reality, stating, ‘We tell ourselves stories in order to live.’” (As you may know, this super famous and often-misquoted line actually refers to how we have to delude ourselves constantly in order to stave off the certainty that all is meaningless.) Then it made up a fake quote that I might consider using.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I was sometimes offered suggestions inspired by the sociologist Sherry Turkle or by the famed memoirist Mary Karr. But for some reason, Grammarly offered suggestions inspired by the essayist Leslie Jamison over and over, almost insistently. 
I heard from both “Gia Tolentino” and the &lt;em&gt;New Yorker&lt;/em&gt; writer Jia Tolentino. None of the suggestions was about structure, organization, or trimming the fat from a story. &lt;em&gt;All&lt;/em&gt; of the suggestions were wordy additions. Some were needlessly floral elaborations and fabricated details clearly meant to add color and voice. For instance, a long and fake story about my late grandmother appeared in the middle of one draft. Others were stilted explainer-y tangents that seemed written for readers with no preexisting knowledge of the world. One idea, inspired by the philosopher Amia Srinivasan, was to pop a several-sentence capsule history of the entire feminist movement into the middle of a paragraph that mentioned the “girlboss” trope.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I tried to talk with the chatbot integrated into Grammarly about the situation, but it had no idea what I was asking about. It insisted that Expert Review was done by anonymous human editors, none of whom was famous, and assured me that Grammarly would never claim to be Joan Didion while giving me advice. We had a confusing exchange about that for a while before it revealed that its knowledge of the world and its own platform went up only to June 2024. Soon after, I learned that &lt;a href="https://bsky.app/profile/bcdreyer.social/post/3mgap7bggvk24"&gt;someone else&lt;/a&gt; had asked the tool to do an Expert Review on a bunch of “lorem ipsum” nonsense text and that it had obliged with recommendations inspired by Stephen King. (And then, as mentioned, the CEO killed it via LinkedIn.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. 
The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I started working in journalism, in 2015, commenters (usually men) would reply to my stories and tell me to “learn to code.” This was a common taunt and catchphrase of the era (Gamergate), and it was a nod to the massive cultural, political, and economic shifts under way at that time. Tech was ascendant in every sphere, its hard skills were worth more money than ever before, and people like me—people who knew only words—seemed soft and useless in such a world.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Lately, there have been rumblings about a reversal. Large language models are very good at things such as coding, programming, and dealing with numbers. Users on X &lt;a href="https://fortune.com/2026/02/26/peter-thiel-says-stem-people-worse-off-palantir-linkedin-skills-on-the-rise/"&gt;recently resurfaced&lt;/a&gt; a 2024 interview clip in which one of the most influential technologists of our time, Peter Thiel, said he thought the post-AI labor market would actually be “much worse for the math people than the word people.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;You might think I’m bringing that up to boast about how I came out on top in the end—it all worked out for me, and the latest AI failure proves that no bot can do what I do and no bot ever will. That’s not what I’m saying. 
What I’m saying is that the “learn to code” guys committed the crime of hubris, but I won’t.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Kaitlyn Tiffany</name><uri>http://www.theatlantic.com/author/kaitlyn-tiffany/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/_0cpd75K4U1sdjJG0SRZkK562Yk=/media/img/mt/2026/03/2026_03_12_Writing_advice/original.jpg"><media:credit>Illustration by Lucy Naland / The Atlantic. Source: Getty.</media:credit></media:content><title type="html">What Was Grammarly Thinking?</title><published>2026-03-12T12:45:48-04:00</published><updated>2026-03-12T16:55:36-04:00</updated><summary type="html">A short-lived AI tool promised to help users write like the greats—and a bunch of other random people, including me.</summary><link href="https://www.theatlantic.com/technology/2026/03/grammarly-ai-expert-bad-advice/686343/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686340</id><content type="html">&lt;p&gt;The tech billionaire Hemant Taneja admits that AI is a bubble. In fact, he welcomes it: “Bubbles are good,” Taneja, the CEO of General Catalyst, a venture-capital firm, told me in an email. If AI comes crashing down, it will lead to “some spectacular failures,” he said—companies will go under and people will lose their jobs—but that’s a price worth paying for “enduring companies that change the world forever.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;His view is widespread in Silicon Valley. Some, such as Nvidia CEO Jensen Huang, &lt;a href="https://www.foxbusiness.com/video/6388041274112"&gt;reject&lt;/a&gt; the notion that their companies are overvalued. But many of the wealthiest and most powerful people in tech are embracing the idea of an AI bubble. 
Jeff Bezos has &lt;a href="https://www.cnbc.com/2025/10/03/jeff-bezos-ai-in-an-industrial-bubble-but-society-to-benefit.html"&gt;argued&lt;/a&gt; that AI might be a “good” kind of bubble. Sam Altman has made &lt;a href="https://www.theverge.com/ai-artificial-intelligence/759965/sam-altman-openai-ai-bubble-interview"&gt;similar comments&lt;/a&gt;, predicting that AI will be a “huge net win for the economy” even if “a phenomenal amount of money” is lost along the way.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Indeed, a phenomenal amount of money is at stake: OpenAI, which is still far from profitable, is currently worth more than Toyota, Coca-Cola, and Disney combined. This year, Big Tech plans to spend some $650 billion on the AI build-out—a sum that far exceeds the GDP of most countries. Investors are betting that AI will spur a productivity boom and deliver unimaginable corporate profits, but that future could be far off. If the spending dries up first, the bubble could pop—perhaps dragging the rest of the economy down with it. Nonetheless, Silicon Valley thinks that the present mania will eventually pay off through scientific discovery and economic growth. “Stop trying to make bubbles go away,” as the entrepreneur James Thomason recently &lt;a href="https://www.linkedin.com/posts/jthomason_the-bubble-isnt-the-bug-its-the-feature-activity-7394473018607128576-IpSM/"&gt;wrote&lt;/a&gt;. “The benefits of innovation outweigh the costs of volatility.” In other words: Be grateful for the bubble.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;Read: Here’s how the AI crash happens&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Silicon Valley did not invent the idea that bubbles can be worth the pain. Various economists have made the argument for decades. 
But as the AI boom has exploded, a book by two investors, Tobias Huber and Byrne Hobart, has helped formalize tech’s pro-bubble ideology. &lt;em&gt;Boom: Bubbles and the End of Stagnation&lt;/em&gt; was a hit in Silicon Valley when it came out in 2024, praised by the tech billionaires Peter Thiel and Marc Andreessen.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The authors argue that there are essentially two kinds of bubbles: good ones (dot-com, the railroads) and bad ones (the 2008 housing crisis). Both cause damage when they burst, but the good bubbles accelerate the development of new technologies, which ultimately benefits society as a whole. In a bubble, a “set of investments that you could never underwrite otherwise suddenly makes sense,” Hobart told me.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Bubble defenders such as Hobart point to the railroads as one example of how exuberant speculation can end up paying off. They acknowledge that the development of the railroads in the late 19th century led to multiple devastating depressions—but they also point out that the country got, well, railroads that transformed the fabric of American life. The United States “has some of the world’s best freight rail infrastructure thanks to what in the 19th century was excess capacity,” Hobart and Huber write. (Commercial rail travel in the U.S. is &lt;a href="https://www.theatlantic.com/technology/archive/2023/11/america-train-travel-problems/676063/?utm_source=feed"&gt;another story&lt;/a&gt;.) They also look to the early days of the internet, when overzealous investing resulted in the dot-com crash. Yes, it was bad when the bubble burst, but the froth also financed a massive build-out of fiber-optic cables that helped shape today’s internet. Without a bubble, the thinking goes, the modern web would have developed much more slowly.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even people outside the tech industry seem convinced by the idea that bubbles can have positive elements. 
“If investors remained dispassionate,” Howard Marks, the billionaire investor who famously &lt;a href="https://www.oaktreecapital.com/docs/default-source/memos/2000-01-02-bubble.pdf"&gt;anticipated&lt;/a&gt; the dot-com crash, told me, “it would take a lot longer for a new unproven technology to be adopted.” Of course, this idea is premised on the notion that widespread adoption is in the public’s best interest.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Either way, though, bubble defenders see the same thing happening with AI: Conscious machines might sound mythical, but if excited investors throw enough cash at the problem—giving entrepreneurs the space to pursue risky, experimental work—superintelligence just might become reality. “There is both froth in parts of the AI ecosystem and real breakthroughs,” as the investment firm KKR &lt;a href="https://www.kkr.com/insights/ai-infrastructure"&gt;wrote&lt;/a&gt; last fall. “Past overbuilds in rail, electrification, and fiber seeded critical economic change.” Even Mary Daly, the president of the San Francisco Fed, has suggested that AI is a “good bubble,” &lt;a href="https://www.axios.com/2025/10/07/ai-bubble-fed-financial-stability"&gt;noting&lt;/a&gt; that “even if the investors don’t get all the returns that the early enthusiasts think when they invest, it doesn’t leave us with nothing.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Indeed, the technology has &lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;already advanced significantly&lt;/a&gt; since the arrival of ChatGPT—thanks, in large part, to the spending frenzy. More investment has meant more computing power to throw at training AI models, which, in turn, has led to more capable AI systems. The mania has also sucked talent into the industry and birthed an explosion of start-ups experimenting with new approaches to building the technology. 
Without such intense investment, it’s hard to imagine so much progress over such a short period.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Less clear is whether the current AI-infrastructure build-out will prove fruitful in the long run. As Silicon Valley continues to pour unfathomable sums into data centers, there’s a risk they will overbuild. Unlike railroad tracks and fiber-optic cables, which can last for decades, computer chips, which power data centers, quickly &lt;a href="https://epoch.ai/data-insights/gpu-frontier-lifespan"&gt;become obsolete&lt;/a&gt;. Still, some bubble defenders argue that all this construction will have lasting value. For example, AI’s seemingly limitless appetite for electricity could also spur a boom in clean-energy generation, as the tech analyst Ben Thompson has &lt;a href="https://stratechery.com/2025/the-benefits-of-bubbles/"&gt;written&lt;/a&gt;, bringing new sources of nuclear and solar energy online. This, of course, is an optimistic vision: Right now, data centers are &lt;a href="https://www.wired.com/story/data-centers-are-driving-a-us-gas-boom/"&gt;driving&lt;/a&gt; a gas boom.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even if Silicon Valley is correct that the bubble is accelerating AI progress, that doesn’t make it universally appealing. “The investor doesn’t say, ‘Well, yes, I lost my money, but thank God it advantaged society,’” Marks said. Accepting short-term financial pain as the cost of technological progress might be easy for tech titans with truckloads of money. It’s a much harder sell to the rest of America. 
Who cares about better chatbots if you’re about to retire and a crash wipes out your 401(k)?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The freight-rail system might seem great from today’s vantage point, but the Panic of 1893 was among the &lt;a href="https://www.federalreservehistory.org/essays/banking-panics-of-the-gilded-age"&gt;most severe financial crises&lt;/a&gt; in our nation’s history, causing unemployment to &lt;a href="https://eh.net/encyclopedia/the-depression-of-1893/"&gt;spike&lt;/a&gt; to more than 10 percent for half a decade. The situation was so dire that J. P. Morgan—who himself was enriched by the railroads—helped &lt;a href="https://www.federalreservehistory.org/essays/federal-reserve-act-signed"&gt;bail out&lt;/a&gt; the federal government. After the dot-com bubble burst, the U.S. entered a recession. If the AI bubble were to collapse, the fallout could be “catastrophic,” Carlota Perez, the author of a seminal book on bubbles and innovation, told me. The flood of investment is the eye “of a much larger hurricane that involves the whole financial world,” she said. According to &lt;a href="https://www.economist.com/by-invitation/2025/10/15/gita-gopinath-on-the-crash-that-could-torch-35trn-of-wealth"&gt;one estimate&lt;/a&gt;, an AI crash could wipe out some $35 trillion in global wealth.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Inside of tech, many bubble apologists acknowledge the downsides. “There will be people who will have just really unfortunate outcomes from this,” Hobart said about a potential crash. Still, the industry’s mindset seems to be that innovation is worth whatever costs are incurred along the way. If Meta ends up “misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously,” Zuckerberg &lt;a href="https://www.businessinsider.com/mark-zuckerberg-meta-risk-billions-miss-superintelligence-ai-bubble-2025-9"&gt;said&lt;/a&gt; last fall. 
“But what I’d say is I actually think the risk is higher on the other side.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;What makes the narrative of a “good bubble” concerning is that it provides justification for investors to keep pumping money into AI, regardless of whether it really makes sense to do so. As the cash keeps flowing, the &lt;a href="https://www.theinformation.com/newsletters/dealmaker/lux-capitals-big-warning?rc=ftwoob"&gt;risk&lt;/a&gt; of a debilitating crash seems to only be increasing. Both Anthropic and OpenAI are racing to go public, &lt;a href="https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a42?gaa_at=eafs&amp;amp;gaa_n=AWEtsqe1XFWjUOY3DyFLhJmw-9qDLcPylt_ltGZynGC6oBfXO3x-KbBiZMmflrVclOo%3D&amp;amp;gaa_ts=69b16eb4&amp;amp;gaa_sig=9ANyVVgzuscqxVf6eEqlbB-jyOmGXJc3ix0gUBXL4eSnh-yzQ41bufY_F_DsYjbSBQhPXHiHBhL36iimQztg0g%3D%3D"&gt;reportedly&lt;/a&gt; as soon as this year. Such high-status public offerings could ratchet up the mania, and increase the potential for financial contagion, as more people’s retirement accounts and investment portfolios get tied up in still-unprofitable AI companies.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Two things can be true at once: AI is a generational technology that will transform the world, and people are going to lose large amounts of money along the way. A bubble is good only if you’re the one who wins.&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/GAqimLZsqhAYR1h_UFPdS-ml1sg=/media/img/mt/2026/03/2026_02_27_AIbubble/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">Even Silicon Valley Says That AI Is a Bubble</title><published>2026-03-12T10:34:00-04:00</published><updated>2026-03-17T10:00:04-04:00</updated><summary type="html">An AI crash could bring down the economy. 
Some in the tech world think that’s the price of progress.</summary><link href="https://www.theatlantic.com/technology/2026/03/ai-bubble-defenders-silicon-valley/686340/?utm_source=feed" rel="alternate" type="text/html"/></entry></feed>