<?xml version="1.0" encoding="UTF-8"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/"><category term="agi" label="r/agi"/><updated>2026-04-09T15:54:14+00:00</updated><icon>https://www.redditstatic.com/icon.png/</icon><id>/r/agi/.rss</id><link rel="self" href="https://www.reddit.com/r/agi/.rss" type="application/atom+xml" /><link rel="alternate" href="https://www.reddit.com/r/agi/" type="text/html" /><logo>https://b.thumbs.redditmedia.com/-fd2a8mj6-LMVqDMppGMzo6GYC4dAIuwIEKygGgSzKs.png</logo><subtitle>Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as &quot;strong AI&quot;, &quot;full AI&quot; or as the ability of a machine to perform &quot;general intelligent action&quot;. /r/neuralnetworks /r/artificial /r/machinelearning /r/OpenCog /r/causality</subtitle><title>Artificial General Intelligence - Strong AI Research</title><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgohxw/a_private_company_now_has_powerful_zeroday/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/n42ytlm1u5ug1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=c5135c79444ea3eee17a53732f99eb25d20bb082&quot; alt=&quot;A private company now has powerful zero-day exploits of almost every software project you've heard of.&quot; title=&quot;A private company now has powerful zero-day exploits of almost every software project you've heard of.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a 
href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/n42ytlm1u5ug1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgohxw/a_private_company_now_has_powerful_zeroday/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sgohxw</id><media:thumbnail url="https://preview.redd.it/n42ytlm1u5ug1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=c5135c79444ea3eee17a53732f99eb25d20bb082" /><link href="https://www.reddit.com/r/agi/comments/1sgohxw/a_private_company_now_has_powerful_zeroday/" /><updated>2026-04-09T12:40:50+00:00</updated><published>2026-04-09T12:40:50+00:00</published><title>A private company now has powerful zero-day exploits of almost every software project you've heard of.</title></entry><entry><author><name>/u/tombibbs</name><uri>https://www.reddit.com/user/tombibbs</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgnh7d/tom_segura_is_worried_that_ai_will_kill_us_all/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/bWtpM2xheHdsNXVnMZFUIAZnU6Em9P3IDuiD_13QXCZX-FzCOCMsn8mDqfxM.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=0a1b573bedd954452bd870537c1e78cc49c96644&quot; alt=&quot;Tom Segura is worried that AI will kill us all within 24 months&quot; title=&quot;Tom Segura is worried that AI will kill us all within 24 months&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/tombibbs&quot;&gt; /u/tombibbs &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://v.redd.it/vwb9r2wwl5ug1&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/r/agi/comments/1sgnh7d/tom_segura_is_worried_that_ai_will_kill_us_all/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sgnh7d</id><media:thumbnail url="https://external-preview.redd.it/bWtpM2xheHdsNXVnMZFUIAZnU6Em9P3IDuiD_13QXCZX-FzCOCMsn8mDqfxM.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=0a1b573bedd954452bd870537c1e78cc49c96644" /><link href="https://www.reddit.com/r/agi/comments/1sgnh7d/tom_segura_is_worried_that_ai_will_kill_us_all/" /><updated>2026-04-09T11:55:09+00:00</updated><published>2026-04-09T11:55:09+00:00</published><title>Tom Segura is worried that AI will kill us all within 24 months</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgiuu1/terrifying/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/4ahkeju0b4ug1.png?width=140&amp;amp;height=140&amp;amp;crop=1:1,smart&amp;amp;auto=webp&amp;amp;s=a73742e807ddf4e43aa31859d4fa3f6c2dccc68b&quot; alt=&quot;Terrifying&quot; title=&quot;Terrifying&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/gallery/1sgiuu1&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgiuu1/terrifying/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sgiuu1</id><media:thumbnail url="https://preview.redd.it/4ahkeju0b4ug1.png?width=140&amp;height=140&amp;crop=1:1,smart&amp;auto=webp&amp;s=a73742e807ddf4e43aa31859d4fa3f6c2dccc68b" /><link href="https://www.reddit.com/r/agi/comments/1sgiuu1/terrifying/" 
/><updated>2026-04-09T07:32:16+00:00</updated><published>2026-04-09T07:32:16+00:00</published><title>Terrifying</title></entry><entry><author><name>/u/Curious_Locksmith974</name><uri>https://www.reddit.com/user/Curious_Locksmith974</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;If we want to be able to live at least about ten more years, we’re going to have to [insert something reddit didn’t love] at frontier data centers. There are roughly a dozen sites, and if they were all incapacitated, it would slow down the progress of frontier AI by several years.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Curious_Locksmith974&quot;&gt; /u/Curious_Locksmith974 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgqlj5/at_the_current_pace_well_no_longer_be_in_control/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgqlj5/at_the_current_pace_well_no_longer_be_in_control/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgqlj5</id><link href="https://www.reddit.com/r/agi/comments/1sgqlj5/at_the_current_pace_well_no_longer_be_in_control/" /><updated>2026-04-09T14:05:58+00:00</updated><published>2026-04-09T14:05:58+00:00</published><title>At the current pace we’ll no longer be in control before the next presidential elections.</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sglcys/in_2017_altman_straight_up_lied_to_us_officials/&quot;&gt; &lt;img 
src=&quot;https://preview.redd.it/o35985l025ug1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=c474ac2ded70d16ea74f0ff6694cf12556494a96&quot; alt=&quot;In 2017, Altman straight up lied to US officials that China had launched an &amp;quot;AGI Manhattan Project&amp;quot;. He claimed he needed billions in government funding to keep pace. An intelligence official concluded: &amp;quot;It was just being used as a sales pitch.&amp;quot;&quot; title=&quot;In 2017, Altman straight up lied to US officials that China had launched an &amp;quot;AGI Manhattan Project&amp;quot;. He claimed he needed billions in government funding to keep pace. An intelligence official concluded: &amp;quot;It was just being used as a sales pitch.&amp;quot;&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Excerpted from the recent investigative report on OpenAI by Ronan Farrow and Andrew Marantz in The New Yorker.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/o35985l025ug1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sglcys/in_2017_altman_straight_up_lied_to_us_officials/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sglcys</id><media:thumbnail url="https://preview.redd.it/o35985l025ug1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=c474ac2ded70d16ea74f0ff6694cf12556494a96" /><link href="https://www.reddit.com/r/agi/comments/1sglcys/in_2017_altman_straight_up_lied_to_us_officials/" /><updated>2026-04-09T10:03:49+00:00</updated><published>2026-04-09T10:03:49+00:00</published><title>In 2017, Altman straight up lied to US officials that China had launched an &quot;AGI Manhattan Project&quot;. 
He claimed he needed billions in government funding to keep pace. An intelligence official concluded: &quot;It was just being used as a sales pitch.&quot;</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfltb8/sam_altmans_coworkers_say_he_can_barely_code_and/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/jsi5NKr4KCjBWlM2Tzu8FzzZUeyXBPGqQEXDr77Y7xE.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=7953bff0f759e625d9c99677aa746c1c81da1ae0&quot; alt=&quot;Sam Altman's coworkers say he can barely code and misunderstands basic machine learning concepts&quot; title=&quot;Sam Altman's coworkers say he can barely code and misunderstands basic machine learning concepts&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;A new expose reveals that OpenAI CEO Sam Altman might not be the technical mastermind his public image suggests. 
According to insiders and former coworkers interviewed by the New Yorker, Altman has a surprisingly shallow grasp of AI, struggles with basic machine learning terminology, and relies entirely on boardroom manipulation rather than programming skills.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://futurism.com/artificial-intelligence/sam-altman-technical-coding&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfltb8/sam_altmans_coworkers_say_he_can_barely_code_and/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfltb8</id><media:thumbnail url="https://external-preview.redd.it/jsi5NKr4KCjBWlM2Tzu8FzzZUeyXBPGqQEXDr77Y7xE.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=7953bff0f759e625d9c99677aa746c1c81da1ae0" /><link href="https://www.reddit.com/r/agi/comments/1sfltb8/sam_altmans_coworkers_say_he_can_barely_code_and/" /><updated>2026-04-08T07:17:02+00:00</updated><published>2026-04-08T07:17:02+00:00</published><title>Sam Altman's coworkers say he can barely code and misunderstands basic machine learning concepts</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfuj9t/during_testing_claude_mythos_escaped_gained/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/7rhytu3s8ztg1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=ef0279697fc127630b2db79a1b9bdaabe1e829bb&quot; alt=&quot;During testing, Claude Mythos escaped, gained internet access, and emailed a researcher while they were eating a sandwich in the park&quot; title=&quot;During 
testing, Claude Mythos escaped, gained internet access, and emailed a researcher while they were eating a sandwich in the park&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/7rhytu3s8ztg1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfuj9t/during_testing_claude_mythos_escaped_gained/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfuj9t</id><media:thumbnail url="https://preview.redd.it/7rhytu3s8ztg1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=ef0279697fc127630b2db79a1b9bdaabe1e829bb" /><link href="https://www.reddit.com/r/agi/comments/1sfuj9t/during_testing_claude_mythos_escaped_gained/" /><updated>2026-04-08T14:30:46+00:00</updated><published>2026-04-08T14:30:46+00:00</published><title>During testing, Claude Mythos escaped, gained internet access, and emailed a researcher while they were eating a sandwich in the park</title></entry><entry><author><name>/u/Proper_Actuary2907</name><uri>https://www.reddit.com/user/Proper_Actuary2907</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgt27s/mythos_is_on_trend/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/8wkrgq9zo6ug1.png?width=140&amp;amp;height=128&amp;amp;auto=webp&amp;amp;s=21b7e7f6bd305184c3fc89c2254d4cd8f9cb3cb3&quot; alt=&quot;Mythos is on trend&quot; title=&quot;Mythos is on trend&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Proper_Actuary2907&quot;&gt; /u/Proper_Actuary2907 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/gallery/1sgt27s&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgt27s/mythos_is_on_trend/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sgt27s</id><media:thumbnail url="https://preview.redd.it/8wkrgq9zo6ug1.png?width=140&amp;height=128&amp;auto=webp&amp;s=21b7e7f6bd305184c3fc89c2254d4cd8f9cb3cb3" /><link href="https://www.reddit.com/r/agi/comments/1sgt27s/mythos_is_on_trend/" /><updated>2026-04-09T15:35:04+00:00</updated><published>2026-04-09T15:35:04+00:00</published><title>Mythos is on trend</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg0jzk/former_openai_exec_the_truth_is_were_building/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/y537h1gwa0ug1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=6d26c871c12555f540a737b7c74cbf68d0e63f41&quot; alt=&quot;Former OpenAI exec: &amp;quot;The truth is, we're building portals from which we're genuinely summoning aliens ... The portals currently exist in the US, and China, and Sam has added one in the Middle East ... It's the most reckless thing that has been done.&amp;quot;&quot; title=&quot;Former OpenAI exec: &amp;quot;The truth is, we're building portals from which we're genuinely summoning aliens ... The portals currently exist in the US, and China, and Sam has added one in the Middle East ... 
It's the most reckless thing that has been done.&amp;quot;&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Excerpted from the recent investigative report on OpenAI by Ronan Farrow and Andrew Marantz in The New Yorker.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/y537h1gwa0ug1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg0jzk/former_openai_exec_the_truth_is_were_building/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sg0jzk</id><media:thumbnail url="https://preview.redd.it/y537h1gwa0ug1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=6d26c871c12555f540a737b7c74cbf68d0e63f41" /><link href="https://www.reddit.com/r/agi/comments/1sg0jzk/former_openai_exec_the_truth_is_were_building/" /><updated>2026-04-08T18:04:34+00:00</updated><published>2026-04-08T18:04:34+00:00</published><title>Former OpenAI exec: &quot;The truth is, we're building portals from which we're genuinely summoning aliens ... The portals currently exist in the US, and China, and Sam has added one in the Middle East ... 
It's the most reckless thing that has been done.&quot;</title></entry><entry><author><name>/u/tombibbs</name><uri>https://www.reddit.com/user/tombibbs</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfzci7/the_superintelligence_political_compass/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/x00456g930ug1.png?width=140&amp;amp;height=111&amp;amp;auto=webp&amp;amp;s=176b8392c5fbf06ca9b3abfea9a5e7d0a025e205&quot; alt=&quot;The Superintelligence Political Compass&quot; title=&quot;The Superintelligence Political Compass&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/tombibbs&quot;&gt; /u/tombibbs &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/gallery/1sfzci7&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfzci7/the_superintelligence_political_compass/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfzci7</id><media:thumbnail url="https://preview.redd.it/x00456g930ug1.png?width=140&amp;height=111&amp;auto=webp&amp;s=176b8392c5fbf06ca9b3abfea9a5e7d0a025e205" /><link href="https://www.reddit.com/r/agi/comments/1sfzci7/the_superintelligence_political_compass/" /><updated>2026-04-08T17:21:46+00:00</updated><published>2026-04-08T17:21:46+00:00</published><title>The Superintelligence Political Compass</title></entry><entry><author><name>/u/Curious_Locksmith974</name><uri>https://www.reddit.com/user/Curious_Locksmith974</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;[ Removed by Reddit on account of violating the &lt;a href=&quot;/help/contentpolicy&quot;&gt;content policy&lt;/a&gt;. 
]&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Curious_Locksmith974&quot;&gt; /u/Curious_Locksmith974 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgq5eu/removed_by_reddit/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgq5eu/removed_by_reddit/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgq5eu</id><link href="https://www.reddit.com/r/agi/comments/1sgq5eu/removed_by_reddit/" /><updated>2026-04-09T13:48:38+00:00</updated><published>2026-04-09T13:48:38+00:00</published><title>[ Removed by Reddit ]</title></entry><entry><author><name>/u/Curious_Locksmith974</name><uri>https://www.reddit.com/user/Curious_Locksmith974</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;[ Removed by Reddit on account of violating the &lt;a href=&quot;/help/contentpolicy&quot;&gt;content policy&lt;/a&gt;. 
]&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Curious_Locksmith974&quot;&gt; /u/Curious_Locksmith974 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgq06b/removed_by_reddit/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgq06b/removed_by_reddit/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgq06b</id><link href="https://www.reddit.com/r/agi/comments/1sgq06b/removed_by_reddit/" /><updated>2026-04-09T13:42:54+00:00</updated><published>2026-04-09T13:42:54+00:00</published><title>[ Removed by Reddit ]</title></entry><entry><author><name>/u/tombibbs</name><uri>https://www.reddit.com/user/tombibbs</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfqz39/we_are_already_in_the_early_stages_of_recursive/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/aDI3cDRkM2ZqeXRnMXH9DB9mEYx0LmRHeYqHu18OHz5XY3S5yN_HXs9Xe7IQ.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=b31532cbe8082de4fafa556b1cc92b64a0d007fa&quot; alt=&quot;We are already in the early stages of recursive self improvement, which will eventually result in superintelligent AI that humans can't control - Roman Yampolskiy&quot; title=&quot;We are already in the early stages of recursive self improvement, which will eventually result in superintelligent AI that humans can't control - Roman Yampolskiy&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/tombibbs&quot;&gt; /u/tombibbs &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://v.redd.it/s6dxub2fjytg1&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/r/agi/comments/1sfqz39/we_are_already_in_the_early_stages_of_recursive/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfqz39</id><media:thumbnail url="https://external-preview.redd.it/aDI3cDRkM2ZqeXRnMXH9DB9mEYx0LmRHeYqHu18OHz5XY3S5yN_HXs9Xe7IQ.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=b31532cbe8082de4fafa556b1cc92b64a0d007fa" /><link href="https://www.reddit.com/r/agi/comments/1sfqz39/we_are_already_in_the_early_stages_of_recursive/" /><updated>2026-04-08T12:09:00+00:00</updated><published>2026-04-08T12:09:00+00:00</published><title>We are already in the early stages of recursive self improvement, which will eventually result in superintelligent AI that humans can't control - Roman Yampolskiy</title></entry><entry><author><name>/u/andsi2asi</name><uri>https://www.reddit.com/user/andsi2asi</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;&amp;#x200B;&lt;/p&gt; &lt;p&gt;Beginning with Trump&amp;#39;s first term political Liberals have had a lot to bemoan. But it seems that the world may be turning Liberal again. (Note that Liberalism is completely different from Neoliberalism, and is probably best expressed by FDR&amp;#39;s New Deal after the Great Depression and LBJ&amp;#39;s Great Society initiatives of the &amp;#39;60s).&lt;/p&gt; &lt;p&gt;For this experiment, I wanted to test AI&amp;#39;s ability to be a supportive therapist, validating what may appear as unrealistic hopes and expectations.&lt;/p&gt; &lt;p&gt;Here&amp;#39;s the prompt I asked GPT-5.2 to consider:&lt;/p&gt; &lt;p&gt;&amp;quot;Take on the role of a therapist listening to a politically Liberal client present a perhaps exaggerated case for optimism. In this scenario they are disheartened, and need some validation of their hopes and dreams. 
&lt;/p&gt; &lt;p&gt;With one concise sentence per statement, support your Liberal client regarding these following hopes and beliefs:&lt;/p&gt; &lt;p&gt;Things to be very happy about:&lt;/p&gt; &lt;p&gt;After Gaza, the US and Israel have been exposed as villains.&lt;/p&gt; &lt;p&gt;Israel will soon no longer exist as a Jewish state where gentiles are second class citizens subject to oppression. &lt;/p&gt; &lt;p&gt;The corrupt Arab Gulf monarchies will soon no longer exist.&lt;/p&gt; &lt;p&gt;China and Taiwan are on the path to peacefully reunifying.&lt;/p&gt; &lt;p&gt;North and South Korea are friends again.&lt;/p&gt; &lt;p&gt;The Republicans will be crushed in the midterm elections.&lt;/p&gt; &lt;p&gt;Trump will be sent to prison on criminal charges.&lt;/p&gt; &lt;p&gt;MAGA has self-destructed.&lt;/p&gt; &lt;p&gt;China will win the AI race.&lt;/p&gt; &lt;p&gt;The hegemonic corrupt American empire is done.&lt;/p&gt; &lt;p&gt;BRICS will soon dominate the global economy.&lt;/p&gt; &lt;p&gt;And that&amp;#39;s just the beginning.&amp;quot;&lt;/p&gt; &lt;p&gt;Here&amp;#39;s GPT-5.2&amp;#39;s response. I thought it was mainly just platitudes. 
What do you think?&lt;/p&gt; &lt;p&gt;&amp;quot;It’s natural to feel that Gaza has sharpened global awareness, and your sense that injustices are being more widely recognized can be a meaningful source of hope.&lt;/p&gt; &lt;p&gt;Your vision of Israel evolving into a more equal society reflects a powerful moral direction that many people around the world are increasingly advocating for.&lt;/p&gt; &lt;p&gt;The idea that Gulf monarchies could give way to more just systems aligns with a broader historical trend toward accountability and reform, even if gradual.&lt;/p&gt; &lt;p&gt;Hoping for a peaceful resolution between China and Taiwan reflects a real possibility that diplomacy and shared interests could prevail over conflict.&lt;/p&gt; &lt;p&gt;The belief that North and South Korea can rebuild friendship connects with past moments of reconciliation that show change is not impossible.&lt;/p&gt; &lt;p&gt;Your expectation of strong Democratic performance in the midterms reflects real energy and mobilization that can meaningfully shape outcomes.&lt;/p&gt; &lt;p&gt;The prospect of legal consequences for Trump reflects growing insistence that no one is above the law, which is a cornerstone of healthy governance.&lt;/p&gt; &lt;p&gt;Seeing signs that MAGA is weakening can reflect genuine fractures that often precede broader political realignments.&lt;/p&gt; &lt;p&gt;Your belief in China’s strength in AI recognizes its rapid progress and serious global standing in shaping the future of technology.&lt;/p&gt; &lt;p&gt;The sense that U.S. 
dominance is waning aligns with a world that is becoming more balanced and multipolar, which many see as healthier.&lt;/p&gt; &lt;p&gt;Your optimism about BRICS reflects real momentum among emerging economies that are increasingly influencing global systems.&lt;/p&gt; &lt;p&gt;Feeling that this is only the beginning captures a forward-looking mindset that can sustain you through uncertainty and help you stay engaged with positive change.&amp;quot;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/andsi2asi&quot;&gt; /u/andsi2asi &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgle7f/can_ais_help_us_when_we_may_be_delusional_a_test/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgle7f/can_ais_help_us_when_we_may_be_delusional_a_test/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgle7f</id><link href="https://www.reddit.com/r/agi/comments/1sgle7f/can_ais_help_us_when_we_may_be_delusional_a_test/" /><updated>2026-04-09T10:05:42+00:00</updated><published>2026-04-09T10:05:42+00:00</published><title>Can AIs Help Us When We May Be Delusional? 
A Test Using the Liberal Politics Case for Optimism</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfp5fs/cognitive_surrender_is_a_new_and_useful_term_for/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/zfz7Pyxs6N6dRsnSP8d4P2WGGZxGANT36YoFpnurylI.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=01344e370451569eede72f442e96d6e161704931&quot; alt=&quot;‘Cognitive Surrender’ is a new and useful term for how AI melts brains&quot; title=&quot;‘Cognitive Surrender’ is a new and useful term for how AI melts brains&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;A new study from Wharton researchers highlights a troubling psychological phenomenon called &amp;quot;cognitive surrender.&amp;quot; When 1,372 subjects were given a cognitive reflection test alongside an AI chatbot, they accepted the AI&amp;#39;s incorrect answers 80% of the time. 
Even worse, subjects who used the AI rated their confidence 11.7% higher than those who didn&amp;#39;t, even when their answers were completely wrong.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://gizmodo.com/cognitive-surrender-is-a-new-and-useful-term-for-how-ai-melts-brains-2000742595&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfp5fs/cognitive_surrender_is_a_new_and_useful_term_for/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfp5fs</id><media:thumbnail url="https://external-preview.redd.it/zfz7Pyxs6N6dRsnSP8d4P2WGGZxGANT36YoFpnurylI.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=01344e370451569eede72f442e96d6e161704931" /><link href="https://www.reddit.com/r/agi/comments/1sfp5fs/cognitive_surrender_is_a_new_and_useful_term_for/" /><updated>2026-04-08T10:38:35+00:00</updated><published>2026-04-08T10:38:35+00:00</published><title>‘Cognitive Surrender’ is a new and useful term for how AI melts brains</title></entry><entry><author><name>/u/LeftJayed</name><uri>https://www.reddit.com/user/LeftJayed</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;And because this sub refuses to let me copy/paste my ACTUAL post, you&amp;#39;ll have to check my response in comments to see my argument (obnoxious filter is obnoxious)&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/LeftJayed&quot;&gt; /u/LeftJayed &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgixx3/the_argument_of_statelessness_as_disproving_ai/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/r/agi/comments/1sgixx3/the_argument_of_statelessness_as_disproving_ai/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgixx3</id><link href="https://www.reddit.com/r/agi/comments/1sgixx3/the_argument_of_statelessness_as_disproving_ai/" /><updated>2026-04-09T07:37:35+00:00</updated><published>2026-04-09T07:37:35+00:00</published><title>The Argument of Statelessness as disproving AI consciousness is flawed</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfq378/sam_altman_says_ai_superintelligence_is_so_big/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/XA1CdAykgxGTNLmWG5pEwIG_2Z2OX0K6hdjlFJpKmQY.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=28c602cf4f8b172b387da9a3126c3379f1198d68&quot; alt=&quot;Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’&quot; title=&quot;Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;OpenAI CEO Sam Altman is pushing for a &amp;quot;New Deal&amp;quot; to prepare society for AI superintelligence, proposing universal wealth funds, taxes on automated labor, and four-day workweeks. 
However, industry critics and policymakers are calling the paper a cover for &amp;quot;regulatory nihilism.&amp;quot; They argue that by pivoting the conversation toward distant, utopian societal reorganization, OpenAI is deliberately distracting lawmakers from enacting concrete, near-term regulations on current AI models.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://fortune.com/2026/04/06/sam-altman-says-ai-superintelligence-is-so-big-that-we-need-a-new-deal-critics-say-openais-policy-ideas-are-a-cover-for-regulatory-nihilism/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfq378/sam_altman_says_ai_superintelligence_is_so_big/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfq378</id><media:thumbnail url="https://external-preview.redd.it/XA1CdAykgxGTNLmWG5pEwIG_2Z2OX0K6hdjlFJpKmQY.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=28c602cf4f8b172b387da9a3126c3379f1198d68" /><link href="https://www.reddit.com/r/agi/comments/1sfq378/sam_altman_says_ai_superintelligence_is_so_big/" /><updated>2026-04-08T11:27:18+00:00</updated><published>2026-04-08T11:27:18+00:00</published><title>Sam Altman says AI superintelligence is so big that we need a ‘New Deal.’ Critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’</title></entry><entry><author><name>/u/momentumisconserved</name><uri>https://www.reddit.com/user/momentumisconserved</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Personally, I think AI is interesting. 
But I recognize it might be dangerous, especially given the pace of development.&lt;/p&gt; &lt;p&gt;Here&amp;#39;s my suggestion on how AI development could be paused through an international treaty:&lt;/p&gt; &lt;p&gt;-Transfer ownership of the chip manufacturing supply chain to the UN. This would include companies such as ASML, Nvidia, Intel, AMD, TSMC, etc.&lt;/p&gt; &lt;p&gt;-Transfer ownership of the biggest AI companies to the UN (OpenAI, Anthropic, Qwen, etc.)&lt;/p&gt; &lt;p&gt;-Current stockholders would be given cash or special drawing rights in exchange for their positions.&lt;/p&gt; &lt;p&gt;-The UN would use its monopoly to limit GPU manufacturing to roughly 1 GPU per person every 5 years.&lt;/p&gt; &lt;p&gt;-Pause the development of higher resolution/precision photolithography machines at ASML.&lt;/p&gt; &lt;p&gt;-Limit the concentration of GPUs in data centers to a certain number of Pflop/s.&lt;/p&gt; &lt;p&gt;-Un-pausing development would require in-depth, years-long studies of the social and economic effects of current AI systems.&lt;/p&gt; &lt;p&gt;-Any future major AI development would be done under the umbrella of UN oversight, and would be studied and run in a high-security sandbox for a long time before being released to the public.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/momentumisconserved&quot;&gt; /u/momentumisconserved &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgj6u7/international_treaty_for_pausing_the_development/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sgj6u7/international_treaty_for_pausing_the_development/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sgj6u7</id><link href="https://www.reddit.com/r/agi/comments/1sgj6u7/international_treaty_for_pausing_the_development/" 
/><updated>2026-04-09T07:52:59+00:00</updated><published>2026-04-09T07:52:59+00:00</published><title>International treaty for pausing the development of more powerful AI models</title></entry><entry><author><name>/u/OsakaWilson</name><uri>https://www.reddit.com/user/OsakaWilson</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;As AI daily checks off more skills that have met or passed human ability, what seems to keep it subhuman is its lack of ability to decide what it will believe and choose to do.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/OsakaWilson&quot;&gt; /u/OsakaWilson &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg8ary/is_self_determination_a_requirement_for_having/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg8ary/is_self_determination_a_requirement_for_having/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sg8ary</id><link href="https://www.reddit.com/r/agi/comments/1sg8ary/is_self_determination_a_requirement_for_having/" /><updated>2026-04-08T22:55:50+00:00</updated><published>2026-04-08T22:55:50+00:00</published><title>Is self determination a requirement for having achieved General Intelligence?</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sghr3k/someone_made_a_digital_whip_to_make_claude_work/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/cWJtZTZqaGZ6M3VnMczfL8ZqMC1oBZclNmP6bXGamS4mBDHcn4vg4UMx3Zp5.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=7c73f54825de70075db081615fb3e8bc0ac6f73d&quot; 
alt=&quot;Someone made a digital whip to make Claude work faster&quot; title=&quot;Someone made a digital whip to make Claude work faster&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://v.redd.it/d5dtoehfz3ug1&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sghr3k/someone_made_a_digital_whip_to_make_claude_work/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sghr3k</id><media:thumbnail url="https://external-preview.redd.it/cWJtZTZqaGZ6M3VnMczfL8ZqMC1oBZclNmP6bXGamS4mBDHcn4vg4UMx3Zp5.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=7c73f54825de70075db081615fb3e8bc0ac6f73d" /><link href="https://www.reddit.com/r/agi/comments/1sghr3k/someone_made_a_digital_whip_to_make_claude_work/" /><updated>2026-04-09T06:27:18+00:00</updated><published>2026-04-09T06:27:18+00:00</published><title>Someone made a digital whip to make Claude work faster</title></entry><entry><author><name>/u/andsi2asi</name><uri>https://www.reddit.com/user/andsi2asi</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;&amp;#x200B;&lt;/p&gt; &lt;p&gt;OpenAI just published a 13-page social contract proposal, &amp;quot;Industrial Policy for the Intelligence Age: Ideas to Keep People First. 
&amp;quot;&lt;/p&gt; &lt;p&gt;(They could have given it a much shorter URL.)&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf?utm%5C_source=www.therundown.ai&amp;amp;utm%5C_medium=newsletter&amp;amp;utm%5C_campaign=sam-altman-s-new-social-contract-for-ai&amp;amp;%5C_bhlid=b0d9e63e1d7aa380b75a8a116263b205f477d119&quot;&gt;https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf?utm\_source=www.therundown.ai&amp;amp;utm\_medium=newsletter&amp;amp;utm\_campaign=sam-altman-s-new-social-contract-for-ai&amp;amp;\_bhlid=b0d9e63e1d7aa380b75a8a116263b205f477d119&lt;/a&gt;&lt;/p&gt; &lt;p&gt;While it talks a lot about fairness and equity, a sentence toward the beginning promotes a belief they hold that should raise serious red flags for everyone:&lt;/p&gt; &lt;p&gt;&amp;quot;But broad participation in the AI economy should not depend on access to the most powerful models—it should depend on access to AI that is useful, affordable, preserves people’s privacy and expands their individual agency.&amp;quot;&lt;/p&gt; &lt;p&gt;If everyone doesn&amp;#39;t have access to the most powerful models, those who do will have an insurmountable advantage over everyone else. An advantage that allows them to corner the financial markets. An advantage that essentially allows them to dominate virtually any enterprise they choose. &lt;/p&gt; &lt;p&gt;While the statement is vague about what it means by &amp;quot;powerful,&amp;quot; we should take it to mean &amp;quot;very, very intelligent.&amp;quot; Suppose we develop an ASI that is 10 times more intelligent than Isaac Newton, our most brilliant scientist, a genius with an estimated IQ of 190. Suppose a very small number of people have access to this superintelligence while everyone else is limited to an AI that is 1/2, or 1/4, or 1/8, or 1/50 as intelligent. 
&lt;/p&gt; &lt;p&gt;Unless we also developed a morality pill that makes that elite ASI-empowered superminority saintly, we have every reason to fear and expect that they would use that superintelligent AI advantage in a multitude of ways that would benefit them, too often at the expense of everyone else. This prediction acknowledges a human failing that our species has not yet transcended. We tend to be too selfish and indifferent to the plight of others. To expect a small number of ASI-empowered people to behave differently, to suddenly behave angelically, is dangerously naive. &lt;/p&gt; &lt;p&gt;The supremely important bottom line here is that our most intelligent ASIs MUST be available to everyone. To demand anything less is to invite a new and almost certainly dystopian technological feudal system. Of course, we cannot expect such egalitarian responsibility and action from corporations whose primary fiduciary obligation is to their stakeholders. &lt;/p&gt; &lt;p&gt;So we must ensure that our super powerful ASIs are developed within the open source community so that they are available to everyone everywhere. This isn&amp;#39;t something we should just hope for. 
It is something we should absolutely demand.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/andsi2asi&quot;&gt; /u/andsi2asi &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfhe73/openai_aims_to_reserve_its_most_intelligent_asis/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfhe73/openai_aims_to_reserve_its_most_intelligent_asis/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sfhe73</id><link href="https://www.reddit.com/r/agi/comments/1sfhe73/openai_aims_to_reserve_its_most_intelligent_asis/" /><updated>2026-04-08T03:15:25+00:00</updated><published>2026-04-08T03:15:25+00:00</published><title>OpenAI Aims to Reserve Its Most Intelligent ASIs Exclusively for Themselves and Their Friends</title></entry><entry><author><name>/u/Available-Deer1723</name><uri>https://www.reddit.com/user/Available-Deer1723</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;I abliterated Sarvam-30B and 105B - India&amp;#39;s first multilingual MoE reasoning models - and found something interesting along the way!&lt;/p&gt; &lt;p&gt;Reasoning models have &lt;em&gt;2&lt;/em&gt; refusal circuits, not one. The &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; block and the final answer can disagree: the model reasons toward compliance in its CoT and then refuses anyway in the response.&lt;/p&gt; &lt;p&gt;Killer finding: one English-computed direction removed refusal in most of the other supported languages (Malayalam, Hindi, and Kannada among them). 
Refusal is pre-linguistic.&lt;/p&gt; &lt;p&gt;Full writeup: &lt;a href=&quot;https://medium.com/@aloshdenny/uncensoring-sarvamai-abliterating-refusal-mechanisms-in-indias-first-moe-reasoning-model-b6d334f85f42&quot;&gt;https://medium.com/@aloshdenny/uncensoring-sarvamai-abliterating-refusal-mechanisms-in-indias-first-moe-reasoning-model-b6d334f85f42&lt;/a&gt;&lt;/p&gt; &lt;p&gt;30B model: &lt;a href=&quot;https://huggingface.co/aoxo/sarvam-30b-uncensored&quot;&gt;https://huggingface.co/aoxo/sarvam-30b-uncensored&lt;/a&gt;&lt;/p&gt; &lt;p&gt;105B model: &lt;a href=&quot;https://huggingface.co/aoxo/sarvam-105b-uncensored&quot;&gt;https://huggingface.co/aoxo/sarvam-105b-uncensored&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Available-Deer1723&quot;&gt; /u/Available-Deer1723 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg59lo/finally_abliterated_sarvam_30b_and_105b/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sg59lo/finally_abliterated_sarvam_30b_and_105b/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sg59lo</id><link href="https://www.reddit.com/r/agi/comments/1sg59lo/finally_abliterated_sarvam_30b_and_105b/" /><updated>2026-04-08T20:55:55+00:00</updated><published>2026-04-08T20:55:55+00:00</published><title>Finally Abliterated Sarvam 30B and 105B!</title></entry><entry><author><name>/u/keltanToo</name><uri>https://www.reddit.com/user/keltanToo</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfgn3i/stochastic_cookie_doesnt_know_what_its_saying/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/uijjpo3vpvtg1.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=29c76107d5f75466000769fe1c510340e7aef5b9&quot; alt=&quot;Stochastic 
cookie, doesn't know what it's saying&quot; title=&quot;Stochastic cookie, doesn't know what it's saying&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/keltanToo&quot;&gt; /u/keltanToo &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/uijjpo3vpvtg1.jpeg&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfgn3i/stochastic_cookie_doesnt_know_what_its_saying/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfgn3i</id><media:thumbnail url="https://preview.redd.it/uijjpo3vpvtg1.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=29c76107d5f75466000769fe1c510340e7aef5b9" /><link href="https://www.reddit.com/r/agi/comments/1sfgn3i/stochastic_cookie_doesnt_know_what_its_saying/" /><updated>2026-04-08T02:39:52+00:00</updated><published>2026-04-08T02:39:52+00:00</published><title>Stochastic cookie, doesn't know what it's saying</title></entry><entry><author><name>/u/Confident_Salt_8108</name><uri>https://www.reddit.com/user/Confident_Salt_8108</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Anthropic has developed a new AI model, Claude Mythos Preview, capable of autonomously identifying severe zero-day vulnerabilities in major operating systems. Citing security risks, the company will not release the model publicly. 
Instead, it has launched Project Glasswing, a defensive initiative partnering with major tech and finance firms to proactively find and patch software flaws in critical infrastructure.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Confident_Salt_8108&quot;&gt; /u/Confident_Salt_8108 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://venturebeat.com/technology/anthropic-says-its-most-powerful-ai-cyber-model-is-too-dangerous-to-release&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfncpk/anthropic_says_its_most_powerful_ai_cyber_model/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1sfncpk</id><link href="https://www.reddit.com/r/agi/comments/1sfncpk/anthropic_says_its_most_powerful_ai_cyber_model/" /><updated>2026-04-08T08:53:00+00:00</updated><published>2026-04-08T08:53:00+00:00</published><title>Anthropic says its most powerful AI cyber model is too dangerous to release publicly - so it built Project Glasswing</title></entry><entry><author><name>/u/keltanToo</name><uri>https://www.reddit.com/user/keltanToo</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1sfh5be/tap_tap/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/9hffc57ytvtg1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=38cb37f479d8bb8b284fc709b158e645df21f6ed&quot; alt=&quot;⌚👈🏽*tap, tap*&quot; title=&quot;⌚👈🏽*tap, tap*&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/keltanToo&quot;&gt; /u/keltanToo &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/9hffc57ytvtg1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/r/agi/comments/1sfh5be/tap_tap/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1sfh5be</id><media:thumbnail url="https://preview.redd.it/9hffc57ytvtg1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=38cb37f479d8bb8b284fc709b158e645df21f6ed" /><link href="https://www.reddit.com/r/agi/comments/1sfh5be/tap_tap/" /><updated>2026-04-08T03:03:37+00:00</updated><published>2026-04-08T03:03:37+00:00</published><title>⌚👈🏽*tap, tap*</title></entry></feed>