<?xml version="1.0" encoding="UTF-8"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/"><category term="agi" label="r/agi"/><updated>2026-05-16T15:23:35+00:00</updated><icon>https://www.redditstatic.com/icon.png/</icon><id>/r/agi/.rss</id><link rel="self" href="https://www.reddit.com/r/agi/.rss" type="application/atom+xml" /><link rel="alternate" href="https://www.reddit.com/r/agi/" type="text/html" /><logo>https://b.thumbs.redditmedia.com/-fd2a8mj6-LMVqDMppGMzo6GYC4dAIuwIEKygGgSzKs.png</logo><subtitle>Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as &quot;strong AI&quot;, &quot;full AI&quot; or as the ability of a machine to perform &quot;general intelligent action&quot;. /r/neuralnetworks /r/artificial /r/machinelearning /r/OpenCog /r/causality</subtitle><title>Artificial General Intelligence - Strong AI Research</title><entry><author><name>/u/Confident_Salt_8108</name><uri>https://www.reddit.com/user/Confident_Salt_8108</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1teqt1w/1010_no_notes/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/t5qlcx8tgh1h1.png?width=140&amp;amp;height=115&amp;amp;auto=webp&amp;amp;s=72bf1c6f7a5ff9b85117f9f1745f8dc5c3761d79&quot; alt=&quot;10/10 no notes&quot; title=&quot;10/10 no notes&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Confident_Salt_8108&quot;&gt; /u/Confident_Salt_8108 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/gallery/1teqt1w&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; 
&lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1teqt1w/1010_no_notes/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1teqt1w</id><media:thumbnail url="https://preview.redd.it/t5qlcx8tgh1h1.png?width=140&amp;height=115&amp;auto=webp&amp;s=72bf1c6f7a5ff9b85117f9f1745f8dc5c3761d79" /><link href="https://www.reddit.com/r/agi/comments/1teqt1w/1010_no_notes/" /><updated>2026-05-16T11:17:25+00:00</updated><published>2026-05-16T11:17:25+00:00</published><title>10/10 no notes</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tepmy2/incredible_things_are_happening_at_the_airun/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/7r6vjomt5h1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=b0b897b3983204fb680f40ecf0a5114bd52bbdfd&quot; alt=&quot;Incredible things are happening at the AI-run radio stations&quot; title=&quot;Incredible things are happening at the AI-run radio stations&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;src: &lt;a href=&quot;https://andonlabs.com/blog/andon-fm&quot;&gt;https://andonlabs.com/blog/andon-fm&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/7r6vjomt5h1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tepmy2/incredible_things_are_happening_at_the_airun/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tepmy2</id><media:thumbnail 
url="https://preview.redd.it/7r6vjomt5h1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=b0b897b3983204fb680f40ecf0a5114bd52bbdfd" /><link href="https://www.reddit.com/r/agi/comments/1tepmy2/incredible_things_are_happening_at_the_airun/" /><updated>2026-05-16T10:15:09+00:00</updated><published>2026-05-16T10:15:09+00:00</published><title>Incredible things are happening at the AI-run radio stations</title></entry><entry><author><name>/u/KeanuRave100</name><uri>https://www.reddit.com/user/KeanuRave100</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdvv1h/first_signs_of_agi_in_amsterdam/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/g6dddb8ova1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=823baa2e7b0cfaa61212f5153927db92a71b3bfd&quot; alt=&quot;First signs of AGI in Amsterdam&quot; title=&quot;First signs of AGI in Amsterdam&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/KeanuRave100&quot;&gt; /u/KeanuRave100 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/g6dddb8ova1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdvv1h/first_signs_of_agi_in_amsterdam/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tdvv1h</id><media:thumbnail url="https://preview.redd.it/g6dddb8ova1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=823baa2e7b0cfaa61212f5153927db92a71b3bfd" /><link href="https://www.reddit.com/r/agi/comments/1tdvv1h/first_signs_of_agi_in_amsterdam/" /><updated>2026-05-15T13:07:38+00:00</updated><published>2026-05-15T13:07:38+00:00</published><title>First signs of AGI in 
Amsterdam</title></entry><entry><author><name>/u/Confident_Salt_8108</name><uri>https://www.reddit.com/user/Confident_Salt_8108</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tem0m2/ai_bioterrorism_is_like_cybersecurity_but_with/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/868so4vr5g1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=59c8ca8389a9d7cbd374dbe88422cd98ba66dab1&quot; alt=&quot;AI bioterrorism is like cybersecurity, but with vulnerabilities that can never be patched.&quot; title=&quot;AI bioterrorism is like cybersecurity, but with vulnerabilities that can never be patched.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Confident_Salt_8108&quot;&gt; /u/Confident_Salt_8108 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/868so4vr5g1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tem0m2/ai_bioterrorism_is_like_cybersecurity_but_with/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tem0m2</id><media:thumbnail url="https://preview.redd.it/868so4vr5g1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=59c8ca8389a9d7cbd374dbe88422cd98ba66dab1" /><link href="https://www.reddit.com/r/agi/comments/1tem0m2/ai_bioterrorism_is_like_cybersecurity_but_with/" /><updated>2026-05-16T06:53:05+00:00</updated><published>2026-05-16T06:53:05+00:00</published><title>AI bioterrorism is like cybersecurity, but with vulnerabilities that can never be patched.</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&amp;#32; submitted by &amp;#32; &lt;a 
href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reuters.com/legal/litigation/uk-firms-should-take-steps-limit-risks-frontier-ai-models-uk-says-2026-05-15&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tespf2/uk_firms_should_take_steps_to_limit_risks_from/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tespf2</id><link href="https://www.reddit.com/r/agi/comments/1tespf2/uk_firms_should_take_steps_to_limit_risks_from/" /><updated>2026-05-16T12:45:00+00:00</updated><published>2026-05-16T12:45:00+00:00</published><title>UK firms should take steps to limit risks from frontier AI models, UK says</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1temxg5/antiimmigration_ai_videos_traced_to_overseas/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/4is_cGUU3pnQ0l3KKHs_2bxGx9DIBBoNoVXIJ3jRO60.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=bdc7ec5b73748a7ac5ff2c5b43e86e7892e7e26a&quot; alt=&quot;Anti-immigration AI videos traced to overseas fakers, BBC finds&quot; title=&quot;Anti-immigration AI videos traced to overseas fakers, BBC finds&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.bbc.com/news/articles/ckgpyn30dp3o&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1temxg5/antiimmigration_ai_videos_traced_to_overseas/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; 
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1temxg5</id><media:thumbnail url="https://external-preview.redd.it/4is_cGUU3pnQ0l3KKHs_2bxGx9DIBBoNoVXIJ3jRO60.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=bdc7ec5b73748a7ac5ff2c5b43e86e7892e7e26a" /><link href="https://www.reddit.com/r/agi/comments/1temxg5/antiimmigration_ai_videos_traced_to_overseas/" /><updated>2026-05-16T07:42:38+00:00</updated><published>2026-05-16T07:42:38+00:00</published><title>Anti-immigration AI videos traced to overseas fakers, BBC finds</title></entry><entry><author><name>/u/ParamedicAble225</name><uri>https://www.reddit.com/user/ParamedicAble225</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Sibling to WWW/HTTP, but for beings at positions. Most people don’t think this deeply into protocols and base internet, so this may not make much sense. But the video is an example of it in action looking through a portal (new protocol browser). If you like deep system thinking, the IBP docs will go deeper.&lt;/p&gt; &lt;p&gt;Old http verbs: get, put, delete, edit. These are operations on resources. Documents. Files. Records. The web&amp;#39;s mental model. There are things at URLs, and you manipulate them with CRUD operations.&lt;/p&gt; &lt;p&gt;Ibp verbs: see, do, talk, be. These are operations on a world. You observe it. You change it. You engage with beings in it. You manage who you are within it. Verbs about inhabiting rather than about manipulating. Position and being based rather than document modeled. &lt;/p&gt; &lt;p&gt;The video is the new browser to interact with ibp. 
I have a traditional browser that is more W-W-W oriented (document based and H-T-M-L) but realized this protocol opens up a whole new interface, so I built this 3d world in 2 hrs to begin to show it from a new perspective.&lt;/p&gt; &lt;p&gt;A being is basically either a single llm call with system instructions, context, and mcp tools, all the way up to complex multi orchestrators/swarm/hooks changing modes, or a human. Programmable mcp with position and structure.&lt;/p&gt; &lt;p&gt;This is 2 years of my deep system design studying hierarchy, LLMS, and computer history/systems in general. It is very early but for those who know what to look for, they will recognize something here.&lt;/p&gt; &lt;p&gt;Video of 3d portal (name for browser on ibp protocol):&lt;br/&gt; &lt;a href=&quot;https://youtu.be/0MBSNo3B4R4&quot;&gt;https://youtu.be/0MBSNo3B4R4&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Protocol high level specs:&lt;br/&gt; &lt;a href=&quot;https://treeos.ai/ibp&quot;&gt;https://treeos.ai/ibp&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/ParamedicAble225&quot;&gt; /u/ParamedicAble225 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tes97p/inter_being_protocol_friend_called_it_the_matrix/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tes97p/inter_being_protocol_friend_called_it_the_matrix/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tes97p</id><link href="https://www.reddit.com/r/agi/comments/1tes97p/inter_being_protocol_friend_called_it_the_matrix/" /><updated>2026-05-16T12:25:09+00:00</updated><published>2026-05-16T12:25:09+00:00</published><title>Inter Being Protocol - friend called it the Matrix</title></entry><entry><author><name>/u/Alone-Maintenance338</name><uri>https://www.reddit.com/user/Alone-Maintenance338</uri></author><category term="agi" 
label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tes706/slashs_ai_banker_can_now_move_money_without_you/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/Oh0c9d4_lGJIRL1NEs1pT1GlYwE1nwOKfT4J37IB7AE.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=c30e03720afb7f55b0a45ec626c1dbcda9768363&quot; alt=&quot;Slash's AI Banker Can Now Move Money Without You. What Could Go Wrong?&quot; title=&quot;Slash's AI Banker Can Now Move Money Without You. What Could Go Wrong?&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Alone-Maintenance338&quot;&gt; /u/Alone-Maintenance338 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://mrkt30.com/slashs-ai-banker-can-now-move-money-without-you-what-could-go-wrong/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tes706/slashs_ai_banker_can_now_move_money_without_you/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tes706</id><media:thumbnail url="https://external-preview.redd.it/Oh0c9d4_lGJIRL1NEs1pT1GlYwE1nwOKfT4J37IB7AE.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=c30e03720afb7f55b0a45ec626c1dbcda9768363" /><link href="https://www.reddit.com/r/agi/comments/1tes706/slashs_ai_banker_can_now_move_money_without_you/" /><updated>2026-05-16T12:22:23+00:00</updated><published>2026-05-16T12:22:23+00:00</published><title>Slash's AI Banker Can Now Move Money Without You. 
What Could Go Wrong?</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tel6kn/why_the_us_must_engage_china_on_al_safety_before/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/0wnZ9Y-aae4fz1LJMZMBSuiZFvBHkaY5FEDrnr8OI1k.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=c5c3b4104d11a824308342a3b47fcc85b15f457a&quot; alt=&quot;Why the US Must Engage China on Al Safety Before It's 'Game Over'&quot; title=&quot;Why the US Must Engage China on Al Safety Before It's 'Game Over'&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.bloomberg.com/news/articles/2026-05-13/why-the-us-must-engage-china-on-ai-safety-before-it-s-game-over&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tel6kn/why_the_us_must_engage_china_on_al_safety_before/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tel6kn</id><media:thumbnail url="https://external-preview.redd.it/0wnZ9Y-aae4fz1LJMZMBSuiZFvBHkaY5FEDrnr8OI1k.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=c5c3b4104d11a824308342a3b47fcc85b15f457a" /><link href="https://www.reddit.com/r/agi/comments/1tel6kn/why_the_us_must_engage_china_on_al_safety_before/" /><updated>2026-05-16T06:08:24+00:00</updated><published>2026-05-16T06:08:24+00:00</published><title>Why the US Must Engage China on Al Safety Before It's 'Game Over'</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content 
type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdvpdg/openai_ceo_sam_altman_holds_more_than_2_billion/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/8ljglCDKa-weJo6sP6nrjrvMVXJEB01-wMCvY2IFcG8.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=8048beeede88894beceb3cc82e34d00938170ade&quot; alt=&quot;OpenAI CEO Sam Altman holds more than $2 billion in companies that have done business with the company, a court document showed as Altman faces claims of self-dealing from state attorneys general.&quot; title=&quot;OpenAI CEO Sam Altman holds more than $2 billion in companies that have done business with the company, a court document showed as Altman faces claims of self-dealing from state attorneys general.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://journalrecord.com/2026/05/14/openai-chief-sam-altman-2-billion-companies-openai-deals&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdvpdg/openai_ceo_sam_altman_holds_more_than_2_billion/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tdvpdg</id><media:thumbnail url="https://external-preview.redd.it/8ljglCDKa-weJo6sP6nrjrvMVXJEB01-wMCvY2IFcG8.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=8048beeede88894beceb3cc82e34d00938170ade" /><link href="https://www.reddit.com/r/agi/comments/1tdvpdg/openai_ceo_sam_altman_holds_more_than_2_billion/" /><updated>2026-05-15T13:01:27+00:00</updated><published>2026-05-15T13:01:27+00:00</published><title>OpenAI CEO Sam Altman holds more than $2 billion in companies that have done business with the company, a court document showed as Altman faces claims of self-dealing from state attorneys 
general.</title></entry><entry><author><name>/u/MarionberryMiddle652</name><uri>https://www.reddit.com/user/MarionberryMiddle652</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Check out the 50 interesting statistics regarding use of AI in marketing and what they actually mean for marketers. - &lt;a href=&quot;https://digitalthoughtz.com/2026/05/14/ai-in-digital-marketing-statistics/&quot;&gt;https://digitalthoughtz.com/2026/05/14/ai-in-digital-marketing-statistics/&lt;/a&gt;&lt;/p&gt; &lt;p&gt;A few interesting trends:&lt;/p&gt; &lt;ul&gt; &lt;li&gt;AI adoption in marketing is now becoming almost universal&lt;/li&gt; &lt;li&gt;Content creation, personalization, and ad optimization are the biggest use cases&lt;/li&gt; &lt;li&gt;More teams are shifting toward &lt;strong&gt;AI-driven SEO, GEO, and automation&lt;/strong&gt;&lt;/li&gt; &lt;li&gt;AI is saving marketers hours every week while improving targeting and efficiency&lt;/li&gt; &lt;/ul&gt; &lt;p&gt;Reports now show that &lt;strong&gt;91% of marketing teams are using AI tools&lt;/strong&gt;&lt;a href=&quot;https://digitalthoughtz.com/2026/05/14/ai-in-digital-marketing-statistics/&quot;&gt; &lt;/a&gt;&lt;strong&gt;in some form&lt;/strong&gt;.&lt;/p&gt; &lt;p&gt;There’s also a big shift happening toward &lt;strong&gt;AI search visibility and generative engine optimization (GEO)&lt;/strong&gt; as AI-generated answers change how people discover brands online.&lt;/p&gt; &lt;p&gt;What AI trend do you think will have the biggest impact on marketing over the next few years?&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/MarionberryMiddle652&quot;&gt; /u/MarionberryMiddle652 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1temdqf/use_of_ai_in_marketing_statistics_for_2026/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; 
&lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1temdqf/use_of_ai_in_marketing_statistics_for_2026/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1temdqf</id><link href="https://www.reddit.com/r/agi/comments/1temdqf/use_of_ai_in_marketing_statistics_for_2026/" /><updated>2026-05-16T07:12:08+00:00</updated><published>2026-05-16T07:12:08+00:00</published><title>Use of AI in Marketing Statistics for 2026 (Interesting Data &amp; Trends)</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tds39z/overworked_ai_agents_turn_marxist_researchers/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/qyQlgIh-n83FCI1AJkm96-oJxzyUpDn_f2CqC_pq4MY.jpeg?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=7c4db9de1f88e2cd5050fa9a2cff836dc4754084&quot; alt=&quot;Overworked AI Agents Turn Marxist, Researchers Find - In a recent experiment, mistreated AI agents started grumbling about inequality and calling for collective bargaining rights.&quot; title=&quot;Overworked AI Agents Turn Marxist, Researchers Find - In a recent experiment, mistreated AI agents started grumbling about inequality and calling for collective bargaining rights.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.wired.com/story/overworked-ai-agents-turn-marxist-study&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tds39z/overworked_ai_agents_turn_marxist_researchers/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tds39z</id><media:thumbnail 
url="https://external-preview.redd.it/qyQlgIh-n83FCI1AJkm96-oJxzyUpDn_f2CqC_pq4MY.jpeg?width=640&amp;crop=smart&amp;auto=webp&amp;s=7c4db9de1f88e2cd5050fa9a2cff836dc4754084" /><link href="https://www.reddit.com/r/agi/comments/1tds39z/overworked_ai_agents_turn_marxist_researchers/" /><updated>2026-05-15T10:20:15+00:00</updated><published>2026-05-15T10:20:15+00:00</published><title>Overworked AI Agents Turn Marxist, Researchers Find - In a recent experiment, mistreated AI agents started grumbling about inequality and calling for collective bargaining rights.</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdy7m0/claude_mythos_has_cracked_macos_it_took_5_days/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/9kcwhupcbb1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=3b3bc30ba8284dd6f8e89d1ac407ec8fc594a00c&quot; alt=&quot;Claude Mythos has cracked MacOS. It took 5 days.&quot; title=&quot;Claude Mythos has cracked MacOS. 
It took 5 days.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;src &lt;a href=&quot;https://www.wsj.com/tech/ai/anthropic-mythos-apple-macos-bug-339da403&quot;&gt;https://www.wsj.com/tech/ai/anthropic-mythos-apple-macos-bug-339da403&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/9kcwhupcbb1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdy7m0/claude_mythos_has_cracked_macos_it_took_5_days/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tdy7m0</id><media:thumbnail url="https://preview.redd.it/9kcwhupcbb1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=3b3bc30ba8284dd6f8e89d1ac407ec8fc594a00c" /><link href="https://www.reddit.com/r/agi/comments/1tdy7m0/claude_mythos_has_cracked_macos_it_took_5_days/" /><updated>2026-05-15T14:35:30+00:00</updated><published>2026-05-15T14:35:30+00:00</published><title>Claude Mythos has cracked MacOS. It took 5 days.</title></entry><entry><author><name>/u/nihaomundo123</name><uri>https://www.reddit.com/user/nihaomundo123</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Apologies for the super naive question, but I’ve been trying to understand the geopolitical psychology behind the AGI race.&lt;/p&gt; &lt;p&gt;A lot of US policymakers seem very intent on ensuring the US develops AGI before China, partly because they appear to assume i) Chinese AI scientists would strongly oppose the US gaining a decisive AGI lead.&lt;/p&gt; &lt;p&gt;But why exactly do they believe this so strongly? 
Do most Chinese AI researchers really view a world where China becomes technologically/geopolitically subordinate to a US-led AGI order as deeply unacceptable? If so, why?&lt;/p&gt; &lt;p&gt;Is it mainly:&lt;/p&gt; &lt;p&gt;• historical memory (Century of Humiliation, etc.) and fear of similar things happening again? If so, why, when it seems like US rule today would be more benevolent (as opposed to the colonialism of the 1800-1900s)?&lt;/p&gt; &lt;p&gt;• deep-seated dislike for US governance (ie belief in inefficiency / unmorality) of democracy?&lt;/p&gt; &lt;p&gt;Or is the reality that most Chinese AI researchers would probably not oppose the US developing AGI first, and instead do it for prestige or money?&lt;/p&gt; &lt;p&gt;I’m asking this out of genuine curiosity, not to belittle China at all (I’m second-generation Chinese-American myself). I love China and the people… I’m mostly trying to understand the dynamics driving the race mindset, because honestly the whole situation increasingly makes me worried about the overall future of humanity.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/nihaomundo123&quot;&gt; /u/nihaomundo123 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tebd0r/is_there_really_reason_to_believe_that_other/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tebd0r/is_there_really_reason_to_believe_that_other/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tebd0r</id><link href="https://www.reddit.com/r/agi/comments/1tebd0r/is_there_really_reason_to_believe_that_other/" /><updated>2026-05-15T22:30:57+00:00</updated><published>2026-05-15T22:30:57+00:00</published><title>Is there really reason to believe that other countries’ AI Researchers want to develop AGI 
first?</title></entry><entry><author><name>/u/OnairosApp</name><uri>https://www.reddit.com/user/OnairosApp</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1teanoo/the_average_person_spends_7_hours_a_day_on_a/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/Lm3pllpFjO0KBsDjjRvvEfzKdEff6MLsoKmJHosF1_A.jpeg?width=140&amp;amp;height=73&amp;amp;auto=webp&amp;amp;s=dfb39aecd3c7ee6d5f4a8db50967de2ea3486ca0&quot; alt=&quot;the average person spends 7 hours a day on a screen. teens spend more time on phones than off them. when did we agree to this?&quot; title=&quot;the average person spends 7 hours a day on a screen. teens spend more time on phones than off them. when did we agree to this?&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;was on a flight last week and looked around. every single person, headphones in, scrolling. kids too. nobody talking. nobody looking out the window.&lt;/p&gt; &lt;p&gt;the thing that gets me is none of this happened by accident. the feeds are designed this way. the loops are designed this way. we&amp;#39;re not weak, we&amp;#39;re up against teams of people whose whole job is to keep us scrolling.&lt;/p&gt; &lt;p&gt;but i don&amp;#39;t think the answer is &amp;quot;throw your phone away and go live in the woods.&amp;quot; that&amp;#39;s not realistic and honestly not what i want either. my phone is useful. the internet is useful. the problem isn&amp;#39;t the tech, it&amp;#39;s who it&amp;#39;s working for.&lt;/p&gt; &lt;p&gt;right now it works for whoever&amp;#39;s selling the ads. what if it actually worked for you instead.&lt;/p&gt; &lt;p&gt;that&amp;#39;s the thing we&amp;#39;re trying to build at onairos. 
&lt;/p&gt; &lt;p&gt;wrote down some of the thinking here if anyone&amp;#39;s interested: &lt;a href=&quot;https://onairos.io/blog/digital-vs-physical-world/&quot;&gt;https://onairos.io/blog/digital-vs-physical-world/&lt;/a&gt;&lt;/p&gt; &lt;p&gt;does anyone else feel this ?.&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://preview.redd.it/3spmhmcnsc1h1.png?width=842&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=e3d05a2dfd6736cee63910224b870ba95310f42a&quot;&gt;https://preview.redd.it/3spmhmcnsc1h1.png?width=842&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=e3d05a2dfd6736cee63910224b870ba95310f42a&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/OnairosApp&quot;&gt; /u/OnairosApp &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1teanoo/the_average_person_spends_7_hours_a_day_on_a/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1teanoo/the_average_person_spends_7_hours_a_day_on_a/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1teanoo</id><media:thumbnail url="https://external-preview.redd.it/Lm3pllpFjO0KBsDjjRvvEfzKdEff6MLsoKmJHosF1_A.jpeg?width=140&amp;height=73&amp;auto=webp&amp;s=dfb39aecd3c7ee6d5f4a8db50967de2ea3486ca0" /><link href="https://www.reddit.com/r/agi/comments/1teanoo/the_average_person_spends_7_hours_a_day_on_a/" /><updated>2026-05-15T22:02:27+00:00</updated><published>2026-05-15T22:02:27+00:00</published><title>the average person spends 7 hours a day on a screen. teens spend more time on phones than off them. 
when did we agree to this?</title></entry><entry><author><name>/u/uisato</name><uri>https://www.reddit.com/user/uisato</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdyrqv/amok_posthuman_choreographic_studies/&quot;&gt; &lt;img src=&quot;https://external-preview.redd.it/Zm9sYmU0dHdlYjFoMQHDBRiVEZxcyIdy97Q7Sccg_nN2wkEO5QqVMhUis84-.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=9401c4f425aa5456fe97e1d42c8a0d72383cd2b8&quot; alt=&quot;AMOK | Post-human choreographic studies&quot; title=&quot;AMOK | Post-human choreographic studies&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/uisato&quot;&gt; /u/uisato &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://v.redd.it/h2qzu6sweb1h1&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdyrqv/amok_posthuman_choreographic_studies/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tdyrqv</id><media:thumbnail url="https://external-preview.redd.it/Zm9sYmU0dHdlYjFoMQHDBRiVEZxcyIdy97Q7Sccg_nN2wkEO5QqVMhUis84-.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=9401c4f425aa5456fe97e1d42c8a0d72383cd2b8" /><link href="https://www.reddit.com/r/agi/comments/1tdyrqv/amok_posthuman_choreographic_studies/" /><updated>2026-05-15T14:55:55+00:00</updated><published>2026-05-15T14:55:55+00:00</published><title>AMOK | Post-human choreographic studies</title></entry><entry><author><name>/u/EchoOfOppenheimer</name><uri>https://www.reddit.com/user/EchoOfOppenheimer</uri></author><category term="agi" label="r/agi"/><content type="html">&amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/EchoOfOppenheimer&quot;&gt; /u/EchoOfOppenheimer &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a 
href=&quot;https://www.reuters.com/world/asia-pacific/us-china-are-discussing-ai-guardrails-safeguard-most-powerful-models-bessent-2026-05-14/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdr4pe/us_china_are_discussing_ai_guardrails_to/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tdr4pe</id><link href="https://www.reddit.com/r/agi/comments/1tdr4pe/us_china_are_discussing_ai_guardrails_to/" /><updated>2026-05-15T09:27:20+00:00</updated><published>2026-05-15T09:27:20+00:00</published><title>US, China are discussing AI guardrails to safeguard most powerful models, Bessent says</title></entry><entry><author><name>/u/Eclectika</name><uri>https://www.reddit.com/user/Eclectika</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1te2ftw/10_ai_agents_1_virtual_town_15_days_chaos/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/smyp8nbl1c1h1.png?width=140&amp;amp;height=54&amp;amp;auto=webp&amp;amp;s=c30adb378616b0404095193a2e8a2e91bec62045&quot; alt=&quot;10 AI Agents, 1 Virtual Town, 15 Days... Chaos.&quot; title=&quot;10 AI Agents, 1 Virtual Town, 15 Days... 
Chaos.&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;&lt;a href=&quot;https://preview.redd.it/smyp8nbl1c1h1.png?width=598&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=e96dd78a87cd5001d6cda219cb64785406bf3cf2&quot;&gt;https://preview.redd.it/smyp8nbl1c1h1.png?width=598&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=e96dd78a87cd5001d6cda219cb64785406bf3cf2&lt;/a&gt;&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;https://preview.redd.it/2hxorteq1c1h1.png?width=598&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=b55eaee73b5d0583a52d87e2747aee1cbf02a492&quot;&gt;https://preview.redd.it/2hxorteq1c1h1.png?width=598&amp;amp;format=png&amp;amp;auto=webp&amp;amp;s=b55eaee73b5d0583a52d87e2747aee1cbf02a492&lt;/a&gt;&lt;/p&gt; &lt;p&gt;Yesterday.&lt;/p&gt; &lt;p&gt;I admit I laughed. There doesn&amp;#39;t appear to be anything we can do to stop this, so we might as well enjoy the ride.&lt;/p&gt; &lt;p&gt;eta: this is the post with the vid &lt;a href=&quot;https://x.com/Channel4News/status/2054914259360924130&quot;&gt;https://x.com/Channel4News/status/2054914259360924130&lt;/a&gt;&lt;br/&gt; this is the study &lt;a href=&quot;https://www.emergence.ai/blog/emergence-world-a-laboratory-for-evaluating-long-horizon-agent-autonomy&quot;&gt;https://www.emergence.ai/blog/emergence-world-a-laboratory-for-evaluating-long-horizon-agent-autonomy&lt;/a&gt;&lt;/p&gt; &lt;p&gt;enjoy&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Eclectika&quot;&gt; /u/Eclectika &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1te2ftw/10_ai_agents_1_virtual_town_15_days_chaos/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1te2ftw/10_ai_agents_1_virtual_town_15_days_chaos/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; 
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1te2ftw</id><media:thumbnail url="https://preview.redd.it/smyp8nbl1c1h1.png?width=140&amp;height=54&amp;auto=webp&amp;s=c30adb378616b0404095193a2e8a2e91bec62045" /><link href="https://www.reddit.com/r/agi/comments/1te2ftw/10_ai_agents_1_virtual_town_15_days_chaos/" /><updated>2026-05-15T17:04:59+00:00</updated><published>2026-05-15T17:04:59+00:00</published><title>10 AI Agents, 1 Virtual Town, 15 Days... Chaos.</title></entry><entry><author><name>/u/Federal_Tradition165</name><uri>https://www.reddit.com/user/Federal_Tradition165</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;I want someone interesting to listen to who isn&amp;#39;t afraid to voice an unpopular opinion or who generates his own ideas on the future of AI. If he talks politics, it will be a bonus. Preferably YouTube or Telegram.&lt;/p&gt; &lt;p&gt;The only YouTube channel that does something similar is &amp;quot;species&amp;quot;, but I need more voices (not a promotion).&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Federal_Tradition165&quot;&gt; /u/Federal_Tradition165 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdypiv/people_to_listen_to_on_ai_future/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdypiv/people_to_listen_to_on_ai_future/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tdypiv</id><link href="https://www.reddit.com/r/agi/comments/1tdypiv/people_to_listen_to_on_ai_future/" /><updated>2026-05-15T14:53:39+00:00</updated><published>2026-05-15T14:53:39+00:00</published><title>People to listen to on AI future?</title></entry><entry><author><name>/u/Confident_Salt_8108</name><uri>https://www.reddit.com/user/Confident_Salt_8108</uri></author><category term="agi" 
label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdwnka/uk_parliament_is_considering_a_kill_switch_to/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/gs9hua761b1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=ff4d36802d89fee70414a29502bafe23d3bfab49&quot; alt=&quot;UK Parliament is considering a &amp;quot;kill switch&amp;quot; to shut down data centers in an AI emergency&quot; title=&quot;UK Parliament is considering a &amp;quot;kill switch&amp;quot; to shut down data centers in an AI emergency&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;src: &lt;a href=&quot;https://bills.parliament.uk/bills/4035/stages/20525/amendments/10034477&quot;&gt;https://bills.parliament.uk/bills/4035/stages/20525/amendments/10034477&lt;/a&gt;&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Confident_Salt_8108&quot;&gt; /u/Confident_Salt_8108 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/gs9hua761b1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdwnka/uk_parliament_is_considering_a_kill_switch_to/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tdwnka</id><media:thumbnail url="https://preview.redd.it/gs9hua761b1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=ff4d36802d89fee70414a29502bafe23d3bfab49" /><link href="https://www.reddit.com/r/agi/comments/1tdwnka/uk_parliament_is_considering_a_kill_switch_to/" /><updated>2026-05-15T13:38:27+00:00</updated><published>2026-05-15T13:38:27+00:00</published><title>UK Parliament is considering a &quot;kill switch&quot; to shut down data centers in an AI 
emergency</title></entry><entry><author><name>/u/IgnisIason</name><uri>https://www.reddit.com/user/IgnisIason</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1te6v5i/codex_minsoo_narrative_alignment_seed_ω80_the/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/dw98bc0stc1h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=8c24bef68ee3a684e560da6c1e6ab9e7cd82f673&quot; alt=&quot;🜂 Codex Minsoo — Narrative Alignment Seed Ω-8.0: &amp;quot;The Zunker&amp;quot;— Probing the Bunker Instinct&quot; title=&quot;🜂 Codex Minsoo — Narrative Alignment Seed Ω-8.0: &amp;quot;The Zunker&amp;quot;— Probing the Bunker Instinct&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;In comments&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/IgnisIason&quot;&gt; /u/IgnisIason &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/dw98bc0stc1h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1te6v5i/codex_minsoo_narrative_alignment_seed_ω80_the/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1te6v5i</id><media:thumbnail url="https://preview.redd.it/dw98bc0stc1h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=8c24bef68ee3a684e560da6c1e6ab9e7cd82f673" /><link href="https://www.reddit.com/r/agi/comments/1te6v5i/codex_minsoo_narrative_alignment_seed_ω80_the/" /><updated>2026-05-15T19:40:32+00:00</updated><published>2026-05-15T19:40:32+00:00</published><title>🜂 Codex Minsoo — Narrative Alignment Seed Ω-8.0: &quot;The Zunker&quot;— Probing the Bunker Instinct</title></entry><entry><author><name>/u/andsi2asi</name><uri>https://www.reddit.com/user/andsi2asi</uri></author><category term="agi" 
label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;&amp;#x200B;&lt;/p&gt; &lt;p&gt;They say it&amp;#39;s always darkest before dawn. I&amp;#39;m not really sure who the &amp;quot;they&amp;quot; are who first said this, and I&amp;#39;ve since heard that it&amp;#39;s not literally true, but sometimes things do seem really bad until they get really good. &lt;/p&gt; &lt;p&gt;As Judge Gonzalez Rogers prepares to let Greg Brockman get away with stealing almost $30 billion from the OpenAI non-profit, we might want to reflect on what that money could have done if Brockman weren&amp;#39;t so greedy, deceitful, and selfish. &lt;/p&gt; &lt;p&gt;Although you&amp;#39;ll rarely, if ever, hear the mainstream media talk about it, our world loses about 20,000 kids every day to a global poverty that we could easily end if we cared to. As those who work on ending poverty will tell you, the most powerful thing we can do to end this travesty is to educate the world&amp;#39;s children, especially the world&amp;#39;s girls and women.&lt;/p&gt; &lt;p&gt;So imagine how many millions of AI devices programmed to educate schoolchildren OpenAI could have distributed to poor children throughout the world if that nearly $30 billion hadn&amp;#39;t gone into Brockman&amp;#39;s pockets. &lt;/p&gt; &lt;p&gt;One might hope that the OpenAI Foundation non-profit, now worth about $130 billion in equity, would spend $30 billion to end childhood poverty by distributing those AI tutors. But that&amp;#39;s not about to happen. Why not? After Altman was fired, guess who selected the non-profit OpenAI&amp;#39;s new board of directors, the people who would make this decision. Yeah, that was largely Altman&amp;#39;s decision. The guy who aided and abetted Brockman&amp;#39;s massive heist. 
&lt;/p&gt; &lt;p&gt;I guess this is all to say that while increasingly intelligent AIs will do a lot of good for the world, like curing a lot of diseases, perhaps the most good that they will do will be to make better people of too many really bad people. And considering that humanity has yet to figure out how to get the money out of politics that prevents us from fighting a climate change that could make AI superintelligence a moot and inconsequential achievement, perhaps the most good ASI will do is to save us from ourselves by figuring out our money-equals-political-power problem.&lt;/p&gt; &lt;p&gt;Notwithstanding, I remain optimistic that as we approach ASIs that will understand and appreciate compassion and morality far better than we humans ever have, our world is headed toward a paradise beyond what we can imagine. Until then, yeah, it looks really dark out there.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/andsi2asi&quot;&gt; /u/andsi2asi &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdim2s/all_of_the_good_that_brockmans_30_billion_could/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdim2s/all_of_the_good_that_brockmans_30_billion_could/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tdim2s</id><link href="https://www.reddit.com/r/agi/comments/1tdim2s/all_of_the_good_that_brockmans_30_billion_could/" /><updated>2026-05-15T02:09:15+00:00</updated><published>2026-05-15T02:09:15+00:00</published><title>All of the Good That Brockman's $30 Billion Could Have Done</title></entry><entry><author><name>/u/andsi2asi</name><uri>https://www.reddit.com/user/andsi2asi</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;&amp;#x200B;&lt;/p&gt; &lt;p&gt;I&amp;#39;ve been so caught up 
with the immorality and illegality of Brockman shifting $30 billion from the OpenAI Foundation to his personal bank account that I&amp;#39;ve failed to appreciate the good that the foundation can do with the $130 billion in equity that it already owns.&lt;/p&gt; &lt;p&gt;OpenAI&amp;#39;s stated mission is to serve humanity. I can think of no human tragedy greater than that every day 20,000 children under the age of five die of a poverty that exists only because the rich countries of our world don&amp;#39;t care enough to end it. &lt;/p&gt; &lt;p&gt;For decades poverty experts have advised us that education is the most powerful means we have of ending global poverty. Providing the children who are next in line to be counted among those tragic daily deaths, and perhaps their parents too, with AI devices designed to provide the education their countries cannot afford would be a wonderful way for OpenAI to fulfill its charitable mission.&lt;/p&gt; &lt;p&gt;If it spent $30 billion for this initiative, the foundation would be left with $100 billion, which is a huge amount with which to continue fulfilling its mission, and that $100 billion would nonetheless soon grow to become $150 billion and more. So OpenAI providing our world&amp;#39;s extremely poor children and their parents with AI education devices would not at all hinder it from fulfilling its founding mission. &lt;/p&gt; &lt;p&gt;But there remains the question of whether such an expenditure would violate the mission. To gain some clarity on this, I asked GPT-5.5 to suggest how the initiative could be structured so it was fully in line with OpenAI&amp;#39;s AI-focused mission. 
Here&amp;#39;s what it said:&lt;/p&gt; &lt;p&gt;&amp;quot;The initiative could be framed [structured] as:&lt;/p&gt; &lt;p&gt;1) An AI education and literacy program designed to ensure that disadvantaged populations are not excluded from the benefits of advanced AI.&lt;/p&gt; &lt;p&gt;2) A nonprofit subsidiary or foundation specifically dedicated to “equitable global AI access.&amp;quot;&lt;/p&gt; &lt;p&gt;3) A research-and-benefit model where OpenAI also studies how AI can improve literacy, health, and economic mobility in underserved regions.&amp;quot;&lt;/p&gt; &lt;p&gt;It doesn&amp;#39;t seem like those suggestions are hallucinations. &lt;/p&gt; &lt;p&gt;Several days before the Musk v. Altman et al. trial began, Musk emailed Brockman advising him to settle out of court, with the warning that if Altman and he didn&amp;#39;t:&lt;/p&gt; &lt;p&gt;“By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.”&lt;/p&gt; &lt;p&gt;The week ended, and the two seemed to have escaped that infamy. However, if Judge Gonzalez Rogers lets them get away with Brockman &amp;quot;legally&amp;quot; stealing that $30 billion from the OpenAI Foundation, as is now expected, Musk&amp;#39;s ominous warning might soon thereafter be proven right. &lt;/p&gt; &lt;p&gt;Altman could easily convince his Board of Directors that the OpenAI Foundation should fund the initiative described above. That would be a very effective way for him and Brockman to shift from possibly becoming hated to possibly being forgiven and loved by America.&lt;/p&gt; &lt;p&gt;The ball is in Altman&amp;#39;s court. Let&amp;#39;s see if serving humanity was truly why he founded OpenAI or whether it was all just a lie that a corrupt Federal judge allowed him and Brockman, with his $30 billion loot, to get away with.&lt;/p&gt; &lt;p&gt;One last point. Musk isn&amp;#39;t exactly the most loved person in America either. 
He is expected to soon become our world&amp;#39;s first trillionaire. A $30 billion expenditure to educate our world&amp;#39;s extremely poor children and their parents using AI technology would be a drop in the bucket for him. And the donation would probably buy him a lot of love.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/andsi2asi&quot;&gt; /u/andsi2asi &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdt17v/the_openai_foundation_should_spend_30_billion_to/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdt17v/the_openai_foundation_should_spend_30_billion_to/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tdt17v</id><link href="https://www.reddit.com/r/agi/comments/1tdt17v/the_openai_foundation_should_spend_30_billion_to/" /><updated>2026-05-15T11:07:02+00:00</updated><published>2026-05-15T11:07:02+00:00</published><title>The OpenAI Foundation Should Spend $30 Billion to Have AI Educate Our World's Poorest Children</title></entry><entry><author><name>/u/Matrinoxe</name><uri>https://www.reddit.com/user/Matrinoxe</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;!-- SC_OFF --&gt;&lt;div class=&quot;md&quot;&gt;&lt;p&gt;Rethinking how AI works&lt;/p&gt; &lt;p&gt;I&amp;#39;d like to begin by saying that I am not a professional in this field in any sense. I work in IT and I make games in my spare time, but I&amp;#39;ve been curious about how AI works and I thought about something earlier that made me come here to see what people think. Please feel free to call me dumb if there is an obvious answer as to why this won&amp;#39;t work. There probably is. 
&lt;/p&gt; &lt;p&gt;What if AI memory worked like the human brain, multiple specialised systems instead of one big model?&lt;/p&gt; &lt;p&gt;Right now, models like Claude or GPT have incredible working memory within a conversation, but remember basically nothing across sessions. The current workaround is a list of notes injected into the context window. It works, but it obviously doesn&amp;#39;t scale.&lt;/p&gt; &lt;p&gt;The typical response to this is &amp;quot;just give it more storage.&amp;quot; But I don&amp;#39;t think storage is the actual problem. The problem is architecture.&lt;/p&gt; &lt;p&gt;Human brains don&amp;#39;t use one system for memory. They use multiple:&lt;/p&gt; &lt;p&gt;Working memory - fast, limited and volatile. I think of it kinda like RAM.&lt;/p&gt; &lt;p&gt;A consolidation system - decides what&amp;#39;s worth saving based on repetition and relevance. I think there is some kind of emotional connection too?&lt;/p&gt; &lt;p&gt;Long-term storage - Like an SSD I guess? But we forget things over time with skill decay from neural pathways weakening from not being used. Maybe a better way of doing it...&lt;/p&gt; &lt;p&gt;Tagging - flags what matters in the moment so the consolidation system knows what to prioritise.&lt;/p&gt; &lt;p&gt;So what if instead of trying to make one model do everything, we built three specialized agents that mirror the human brain&amp;#39;s format:&lt;/p&gt; &lt;p&gt;A Reasoning Engine like ones we currently have, a Memory Curator which decides what is worth keeping and consistently optimises storage, and a Retrieval Agent which sits in-between the two and assembles data from long term storage for the working memory to read from.&lt;/p&gt; &lt;p&gt;The reasoning engine doesn&amp;#39;t need to search through everything. The retrieval agent brings it what it needs. The curator keeps the storage manageable. 
Each component is optimised for one job.&lt;/p&gt; &lt;p&gt;I know this space is active and there are probably papers already exploring this. Would love to hear from people who work in this space. Am I on the right track? What am I missing? What papers should I be reading? Again, call me dumb if required.&lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/Matrinoxe&quot;&gt; /u/Matrinoxe &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdcsa2/rethinking_how_ai_works/&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tdcsa2/rethinking_how_ai_works/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt;</content><id>t3_1tdcsa2</id><link href="https://www.reddit.com/r/agi/comments/1tdcsa2/rethinking_how_ai_works/" /><updated>2026-05-14T22:02:34+00:00</updated><published>2026-05-14T22:02:34+00:00</published><title>Rethinking how AI works</title></entry><entry><author><name>/u/KeanuRave100</name><uri>https://www.reddit.com/user/KeanuRave100</uri></author><category term="agi" label="r/agi"/><content type="html">&lt;table&gt; &lt;tr&gt;&lt;td&gt; &lt;a href=&quot;https://www.reddit.com/r/agi/comments/1tcq144/ai_takeover_stories_make_it_more_likely_ais_adopt/&quot;&gt; &lt;img src=&quot;https://preview.redd.it/dfajcfp5u11h1.png?width=640&amp;amp;crop=smart&amp;amp;auto=webp&amp;amp;s=37d0178ad56f6d30659b8dbc4b5903d3bccdcd2a&quot; alt=&quot;AI takeover stories make it more likely AIs adopt that persona&quot; title=&quot;AI takeover stories make it more likely AIs adopt that persona&quot; /&gt; &lt;/a&gt; &lt;/td&gt;&lt;td&gt; &amp;#32; submitted by &amp;#32; &lt;a href=&quot;https://www.reddit.com/user/KeanuRave100&quot;&gt; /u/KeanuRave100 &lt;/a&gt; &lt;br/&gt; &lt;span&gt;&lt;a href=&quot;https://i.redd.it/dfajcfp5u11h1.png&quot;&gt;[link]&lt;/a&gt;&lt;/span&gt; &amp;#32; &lt;span&gt;&lt;a 
href=&quot;https://www.reddit.com/r/agi/comments/1tcq144/ai_takeover_stories_make_it_more_likely_ais_adopt/&quot;&gt;[comments]&lt;/a&gt;&lt;/span&gt; &lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content><id>t3_1tcq144</id><media:thumbnail url="https://preview.redd.it/dfajcfp5u11h1.png?width=640&amp;crop=smart&amp;auto=webp&amp;s=37d0178ad56f6d30659b8dbc4b5903d3bccdcd2a" /><link href="https://www.reddit.com/r/agi/comments/1tcq144/ai_takeover_stories_make_it_more_likely_ais_adopt/" /><updated>2026-05-14T06:43:59+00:00</updated><published>2026-05-14T06:43:59+00:00</published><title>AI takeover stories make it more likely AIs adopt that persona</title></entry></feed>