<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:lab="https://labradorcms.com/ns/rss">
<channel>
    <title>www.theregister.com - Articles</title>
    <link>https://www.theregister.com</link>
    <description>Articles from www.theregister.com</description>

    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240928</guid>
        <link>https://www.theregister.com/personal-tech/2026/05/16/cloud-managed-earbuds-sound-strange-as-a-concept-and-on-a-plane/5240928</link>
        <pubDate>Sat, 16 May 2026 16:30:00 +0200</pubDate>
        <title>Cloud-managed earbuds sound strange - as a concept, and on a plane</title>
        <description><![CDATA[ The Register tests Dell’s first attempt at outplaying Apple’s AirPods ]]></description>
        <category>personal tech</category>
                <lab:kicker><![CDATA[ Personal Tech ]]></lab:kicker>
                <dc:modified>Sat, 16 May 2026 00:13:38 +0000</dc:modified>
                <content:encoded><![CDATA[ Last year, The Register spotted Dell selling cloud-manageable wireless earbuds that feature the company’s famously stoic styling at a price higher than Apple charges for its latest AirPods. Dell eventually offered your correspondent a pair of the Pro Plus Earbuds to try so we could hear what all the fuss is about – and we accepted, on condition that the company showed us the cloudy management tools that make the buds worth the big bucks. Divya Soni, a go-to-market lead, showed me Dell’s cloudy Device Management Console, a tool that lets admins enroll and track the buds, send them new firmware, or do things like turn on active noise cancellation by default across a fleet of earbuds. New firmware matters for earbuds because they’re Bluetooth devices and the wireless protocol has had its fair share of security scares over the years. The buds have already earned Microsoft’s Teams Open Office Certification – a seal of approval for being able to handle noisy offices – plus a Zoom accreditation. New firmware might help there, too. Soni admitted earbuds aren’t the main priority for the Device Management Console, which Dell expects customers will mostly use to manage docks and displays. Dell delivers firmware updates to those devices at least once a year, to address security issues or fix bugs. The tool can do the same for keyboards or headsets. I can’t imagine anyone would adopt Dell’s Device Manager just to keep an eye on earbuds. I’m also not sure anyone would buy the buds for personal use. I say that because I own two sets of wireless earbuds and in their own way both are better than the Dells. My go-to buds are JBL’s $40 Vibe Beam 2, which fit brilliantly, bring out some nice nuances in much of my music, and boast batteries that last about six hours and only need about 15 minutes to recharge. 
That makes them satisfactory for long-haul flights, during which they drop a warmly enveloping cone of silence when active noise cancelling kicks in. My other pair are $100 Soundcore Space A40s (bought after destroying another pair). These buds have even nicer noise cancelling powers but fit terribly: I recently endured quite the scene when running to catch a bus and one dropped out of my ear and bounced into a shrub. The Soundcores redeem themselves with impressive microphones, so I use them when Zooming or recording a podcast. I prefer them to stay home because the case is bulbous and a little conspicuous in a front jeans pocket. The Dells are even bigger. They fit my ears well and battery life is strong at around eight hours. Active noise cancelling is poor: a high hiss persists in-flight and I perceived distracting artefacts when using them in noisy environments on the ground. Neither of my two PCs made a Bluetooth connection with the Dell buds. Dell has a fix for that – the buds’ case houses a small USB-C dongle devoted to connecting with the buds. It works every time, delivers a more stable connection than Bluetooth, and brings out some musical nuances that I can’t hear with my other buds or desktop speaker. The dongle feels like a clue about how Dell imagines these buds will be used, because today's laptops seldom offer more than a pair of USB-C ports and they’re commonly used for power in and video out. Dedicating a port to earbuds seems wasteful … unless you’re using a Dell dock or monitor that offers more ports. The USB-C audio connector therefore made it hard to escape the idea that Dell expects these buds will almost always be sold as part of a corporate peripheral purchase. I can’t imagine consumers would prefer them to Apple’s AirPods, or the many cheaper earbuds that match them for performance. 
But if the boss decides your organization must have cloud-manageable earbuds, it would be churlish to turn down the chance to use a pair of Pro Plus Earbuds for work and play. The experience of using them is in the name: they're built for the office but can handle after-hours activities. They’re not delightful, but they’re far from trashy, annoying, or inconvenient. And when I inevitably lose or destroy my current buds I’ll be very happy if I have the Dells on hand. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240950&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240950&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5237735</guid>
        <link>https://www.theregister.com/systems/2026/05/16/europe-built-sovereign-clouds-to-escape-us-control-then-forgot-about-the-processors/5237735</link>
        <pubDate>Sat, 16 May 2026 12:30:00 +0200</pubDate>
        <title>Europe built sovereign clouds to escape US control. Then forgot about the processors</title>
        <description><![CDATA[ Intel ME and AMD PSP: The silicon layer nobody certifies ]]></description>
        <category>systems</category>
                <lab:kicker><![CDATA[ Systems ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:20:58 +0000</dc:modified>
                <content:encoded><![CDATA[ FEATURE Can digital sovereignty exist on American silicon? Europe is pouring more than €2 billion into sovereign cloud initiatives designed to reduce exposure to US legal reach. The EU's IPCEI-CIS program funds infrastructure development. France qualifies operators under SecNumCloud, a framework with nearly 1,200 technical requirements promising "immunity from extraterritorial laws." But most datacenters and qualified cloud operators still rely heavily on Intel or AMD processors. And inside those processors sits a computer beneath the computer: management engines operating at Ring -3, below the operating system, outside the control of host security software, persistent even when the machine appears powered off. Under the US Reforming Intelligence and Securing America Act (RISAA) 2024, hardware manufacturers count as "electronic communications service providers" subject to secret government orders. Europe's frameworks certify the clouds. They don't assess the silicon. The computer your OS can't see That computer beneath the computer has a name. On Intel processors, it is the Management Engine (ME), or more precisely the Converged Security and Management Engine (CSME). On AMD, it is the Platform Security Processor (PSP). Both run at what security researchers call Ring -3, below the operating system, below the hypervisor, in a privilege level the host cannot see or log. "It's a computer inside your computer," explains John Goodacre, Professor of Computer Architectures and former director of the UK's £200 million Digital Security by Design program. He is clear about what that means in practice. The ME has its own memory, its own clock, and its own network stack, and because it can share the host's MAC and IP addresses, any traffic it generates is indistinguishable from the host's own traffic to the firewall. The architecture is not theoretical. 
Embedded in the Platform Controller Hub, the CSME is a separate microcontroller that operates independently of the host, with direct memory, device access, and network connectivity the host operating system cannot monitor. AMD's PSP works the same way. Intel's Active Management Technology (AMT), the remote management feature the ME enables, exposes at least TCP ports 16992, 16993, 16994, and 16995 on provisioned devices. Goodacre notes that an attack surface exists on unprovisioned hardware too. These ports deliver keyboard-video-mouse redirection, storage redirection, Serial-over-LAN, and power control to administrators managing fleets of devices remotely. The capability has legitimate uses. It also provides a channel that operates at a level below what European sovereignty frameworks can attest. Microsoft documented in 2017 that the PLATINUM nation state actor used Intel's Serial-over-LAN (SOL) as a covert exfiltration channel. SOL traffic transits the Management Engine and the NIC sideband path, delivered to the ME before the host TCP/IP stack runs. The host firewall and endpoint detection saw nothing, and any security tooling running on the compromised machine itself was equally blind. PLATINUM did not exploit a vulnerability. It exploited a feature, requiring only that AMT be enabled and credentials obtained. In documented cases, those credentials were the factory default: admin, with no password set. Goodacre catalogues this and related scenarios in a 37-page risk assessment prepared for CISOs evaluating Intel vPro hardware connected to corporate networks. Its conclusion is blunt: connecting an untouched-ME device to corporate resources "exposes the organization to a class of compromise that defeats the host security stack in its entirety." The ME does not stop when the machine appears to. Users recognize the symptom: a laptop powered off and stored for weeks is found, on next boot, to have a depleted battery. 
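The four AMT service ports listed above give defenders a quick way to spot provisioned machines from the network side. A minimal Python sketch of that check (a hypothetical audit helper written for this article, not Intel's official detection tooling; the `audit_amt` name and structure are assumptions):

```python
import socket

# TCP ports Intel AMT exposes on provisioned hardware, per the article:
# 16992/16993 (web management, HTTP/HTTPS), 16994/16995 (redirection, plain/TLS).
AMT_PORTS = (16992, 16993, 16994, 16995)

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_amt(host: str) -> list[int]:
    """Return the AMT management ports that accept connections on host."""
    return [p for p in AMT_PORTS if port_open(host, p)]
```

Sweeping a fleet with something like `audit_amt` flags hosts where AMT is reachable at all; any hit warrants checking whether the credentials are still factory defaults, the exact condition PLATINUM exploited.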
On modern thin and light platforms, under what Microsoft documents as Modern Standby, "off" does not correspond to "all subsystems unpowered." The system-on-chip components the Management Engine runs on remain in low-power states, drawing on the order of 100-200 mW continuously, enough to drain a 55 Wh battery over weeks. The implication is documented in Goodacre's risk assessment: "Whether the radio is in a Wake-on-Wireless-LAN listening state is firmware policy. On a device whose firmware has been tampered with during transit through the supply chain, the answer cannot be inferred from the visible power state." A laptop that appears off, in a bag, can associate with a hostile network the user has no knowledge of. Professor Aurélien Francillon, a security researcher at French engineering school EURECOM, has spent years studying exactly this class of problem. Working with colleagues, he built a fully functional backdoor in hard disk drive firmware [PDF], a proof of concept demonstrating how storage devices could silently exfiltrate data through covert channels. Three months after presenting it at an academic conference, the Snowden disclosures revealed the NSA's ANT catalogue, which documented an identical capability already deployed in the field. "The NSA were already doing it," Francillon says flatly. "Quite amazing." That background informs his assessment of the ME. "Yes, it can probably be used as a backdoor, like many other things, including BMC [baseboard management controller] and many other firmwares," he says. The question, he argues, is not whether the backdoor exists but whether operational controls make it unreachable in practice. AMD faces the same architectural question. On April 14, 2026, researchers demonstrated the Fabricked attack against AMD's SEV-SNP confidential computing technology, achieving a 100 percent success rate with a software-only exploit. The Platform Security Processor proved vulnerable to the same class of compromise. 
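The standby-drain figures quoted above are easy to sanity-check with back-of-the-envelope arithmetic: at a constant 100-200 mW, a 55 Wh pack lasts roughly two to three weeks, which matches the drained-over-weeks symptom users report.

```python
# Rough check of the article's numbers: how long does a 55 Wh battery
# last at a constant low-power draw?
CAPACITY_WH = 55.0

def days_to_drain(draw_mw: float) -> float:
    """Days until the battery is empty at a constant draw in milliwatts."""
    hours = CAPACITY_WH / (draw_mw / 1000.0)  # Wh divided by W gives hours
    return hours / 24.0

print(days_to_drain(200.0))  # ~11.5 days at the high end of the range
print(days_to_drain(100.0))  # ~22.9 days at the low end
```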
On server hardware, the picture is the same. Intel ME runs on servers under a different name, Server Platform Services or SPS, and the BMC, the remote administration controller standard in datacenter hardware, relies on it. "More or less the same," Francillon says of the server variant. For datacenter operators, he sharpens the focus further: "If I look at cloud systems, servers, I would be more concerned with the BMC," pointing to published research demonstrating remote exploitation of BMC vulnerabilities that could allow an attacker to reinstall or fully compromise a server. The BMC is not a separate concern from the ME: on server hardware, it is the primary network entry point into the SPS, making it both the most exposed interface and the most consequential. Both Intel and AMD processors contain management engines that operate below the operating system. The silicon is designed by American companies and subject to American legal process. The backdoor the CLOUD Act doesn't use That legal process has teeth that most European policymakers underestimate. The CLOUD Act, passed in 2018, gave US authorities extraterritorial reach to data held by American companies. FISA Section 702 allows intelligence agencies to compel US persons and companies to provide access to communications. Both are well known in European sovereignty discussions. They operate through the front door: a legal order served on a company that controls data. Less well known is RISAA 2024, a law that opens a different entrance entirely. RISAA amended FISA's definition of "electronic communications service provider" in ways that go beyond cloud operators and platform companies, and beyond the bilateral agreements that European policymakers have built their legal defenses around. Hardware manufacturers now fall within scope. Intel and AMD can be compelled, via secret orders with gag clauses, to cooperate with US intelligence access. 
The mechanism through which that access could be exercised is the management engine: a persistent, privileged, network-connected runtime that operates below anything the host operating system can see or block. A SecNumCloud-certified operator can be legally isolated from American data demands. The processor inside its servers cannot. "You've actually got a policy mechanism by which any such machine anywhere can deliver any of its information," Goodacre says. RISAA's two-year term expired on April 20, 2026, but Congress extended it by 45 days while debating reforms. Whether it is renewed, amended or allowed to lapse, the architecture it targets does not change. SecNumCloud's blind spot France's SecNumCloud is Europe's most rigorous attempt to build a cloud certification that is legally immune to American law. It did not emerge from nowhere. ANSSI, France's national cybersecurity agency, was established in 2009 as part of a broader effort to build institutional muscle on digital sovereignty long before the term became fashionable. When Edward Snowden revealed the scale of NSA surveillance in 2013, France's response was technical rather than rhetorical: ANSSI published the first SecNumCloud framework in July 2014. A decade later, that framework has grown to nearly 1,200 technical requirements. At the time, SecNumCloud was a cybersecurity qualification, not a sovereignty instrument: it set requirements for architecture, encryption standards, access controls, and incident response, but said nothing about who controlled the underlying infrastructure or whose laws applied to it. The CLOUD Act changed that. Passed in 2018, it gave American authorities extraterritorial reach to data held by US companies, and suddenly a French cybersecurity framework had a geopolitical dimension it was not designed for. 
Version 3.2, introduced in 2022, added Chapter 19: a set of explicit requirements targeting extraterritorial law, mandating that only EU operators could run the service, that no non-EU party could access customer data, and that the provider could operate autonomously without external intervention. It promised "immunity from extraterritorial laws." In December 2025, S3NS, a joint venture between French defense and technology group Thales and Google Cloud, operating Google Cloud Platform technology under French control, became the first "hybrid" cloud to receive SecNumCloud qualification. The certification triggered heated debate: was this real sovereignty, or American technology with a European flag? But the debate missed a more fundamental question. Does SecNumCloud's certification reach as far as the silicon it runs on? Francillon is positioned to see both sides of that question. He sits on the French Technology Academy's working group on cloud security, a body that advises on the technical foundations of frameworks like SecNumCloud. And he has spent years studying firmware backdoors in academic literature and demonstrated them in practice. He knows what the hardware can do, and he knows what the certification requires. His starting point is that SecNumCloud provides genuinely valuable protection, and that the silicon gap does not negate that. When asked whether SecNumCloud explicitly addresses Intel Management Engine or AMD Platform Security Processor vulnerabilities, his answer is unambiguous: "There is no direct requirement for firmware backdoor prevention." The framework is not designed to be a technical specification for hardware-layer security. "The document aims to be generic and not dive into technical details," Francillon says. "Most of it is organizational security." 
What SecNumCloud does require is that providers build a proper threat model, consider mitigation mechanisms, and monitor administration gateways where external tech support could be exploited. The hardware layer was not omitted through oversight; it was left out by design. Francillon's assessment is not a fringe view. Vincent Strubel, the director of ANSSI, the very agency that designed and administers SecNumCloud, is equally explicit about what the framework does and does not cover. In a January 2026 LinkedIn post addressing SecNumCloud's scope, he writes that all cloud offerings, hybrid or not, depend on electronic components whose design and updates are not 100 percent controlled in Europe. If Europe were ever cut off from American or Chinese technology, he argues, the result would be a global problem of security degradation, not just in hybrid clouds, but everywhere. Strubel frames SecNumCloud carefully: it is "a cybersecurity tool, not an industrial policy tool." It protects against extraterritorial law enforcement and kill-switch scenarios. It was never designed to eliminate technology dependencies at the hardware layer, and no actor, state, or enterprise fully controls the entire cloud technology stack anyway. One technology frequently cited in sovereignty discussions is OpenTitan, Google's open source secure element deployed on its server hardware and used within the S3NS infrastructure. Francillon is clear about what it is and, critically, what it is not. "OpenTitan is a secure element, a small chip on the side that can be used for protecting sensitive keys, providing signatures, making attestations," he explains. "It's a bit like a TPM." What it is not is a replacement for the main processor. "The Linux and all your applications will not run on it." OpenTitan sits alongside x86 infrastructure as an external root of trust, independent of the ME. That matters because the default embedded TPM lives inside the ME, making it subject to the ME attack surface. 
OpenTitan sits outside that boundary. The two address different problems entirely, and conflating them, as sovereignty advocates sometimes do, obscures where the residual exposure actually lies. ANSSI's own technical position paper [PDF] on confidential computing, published in October 2025, concludes that Intel SGX, TDX, and AMD SEV-SNP are "not sufficient on their own to secure an entire system, or to meet the sovereignty requirements of SecNumCloud 3.2." Physical attackers are "explicitly out-of-scope" of vendor security targets. Supply chain attackers are "explicitly out-of-scope." The ME attack surface discussed in this article falls into neither category: it is a remote network threat, not a physical one. The paper's conclusion for users concerned about hostile cloud providers is stark: "Switch to a cloud provider they trust, or use their own hardware with physical security protection measures." The castle with a structural flaw Francillon does not dispute that SecNumCloud leaves the ME unassessed. His argument is that this does not matter in practice. "What I mean is that if there is a backdoor to access a room, it cannot be directly used if the room is in a castle. You have to pass the castle walls first." Network isolation, monitoring, and threat modeling are the walls. SecNumCloud's operational requirements mandate that administration gateways be isolated, that external tech support be monitored, that network segmentation prevents lateral movement. The Management Engine backdoor may exist, but the framework makes it unreachable except in what Francillon calls "very high-end attacks." That qualifier matters. Francillon is not claiming perfect security. He is claiming that proper operational controls reduce the threat to a level where only nation state actors with significant resources could exploit it. For most threat models, he argues, that is sufficient. 
"Saying it is useless to do SecNumCloud because there is ME, or whatever backdoor in some hardware we don't control, is a mistake," he says. SecNumCloud improves security over deployments without such controls, he argues, provided that hardware is carefully evaluated and firmware securely configured. The castle walls have a structural flaw that Goodacre's risk assessment documents in detail. Corporate perimeter firewalls see the device's traffic, but because the ME shares the host's MAC and IP addresses, they cannot tell ME-originated flows apart from legitimate host traffic. "The perimeter cannot attribute a flow to host-versus-CSME origin without out-of-band knowledge," Goodacre writes. A TLS-encrypted tunnel from the ME to an attacker server on port 443 looks, to the perimeter, like any other HTTPS connection the laptop makes. Network filtering reduces attack surface. It does not eliminate the exposure. Goodacre's position is that a "Tier-3 supply-chain residual remains in both cases and is the irreducible cost of buying any silicon that ships with a Ring -3 manageability engine." He defines Tier 3 as nation state cyber services operating at the level of compromising firmware in transit, mis-issuing CA certificates via in-country authorities, and modifying hardware at customs or courier hubs. The NSA's Tailored Access Operations division treated supply chain interdiction as routine business, with explicit doctrinal preference for BIOS and firmware implants over disk-level malware. His risk assessment's data on fleet vulnerability is concrete. Industry telemetry from Eclypsium, analyzing production enterprise environments, found that approximately 72 percent of devices observed remained vulnerable to INTEL-SA-00391 years after public disclosure, and 61 percent remained vulnerable to INTEL-SA-00295. 
The same reporting documented that the Conti ransomware group developed proof-of-concept Intel ME exploit code with the intent of installing highly persistent firmware-resident implants. "Connecting an untouched-ME vPro laptop to corporate resources exposes the organization to a class of compromise that defeats the host security stack in its entirety," Goodacre concludes. "The exposed controls include BitLocker full-disk encryption, FIDO2-protected sign-in, endpoint detection and response, the host firewall and the corporate VPN." The disagreement between Francillon and Goodacre is not about whether the vulnerability exists. Both confirm it does. Both confirm AMD faces the same issue. Both confirm software alone cannot fix it. The disagreement is about whether operational controls, Francillon's castle walls, make an architectural backdoor irrelevant in practice, or merely reduce its exploitability while leaving nation state actors with a path through. For SecNumCloud operators processing sensitive government or commercial data, the distinction is not academic. It is worth noting that SecNumCloud is designed for a higher level of security than standard cloud certifications, but is not intended for classified or restricted government data. The threat that can still slip through Francillon's castle walls is precisely the threat SecNumCloud was designed to keep out. The gap nobody names Goodacre told The Register he tested awareness of the Management Engine with various attendees at the CyberUK conference in April 2026. "Almost no one" knew about it, he reports. The gap between the sovereignty rhetoric and the silicon reality is not being surfaced in policy discussions, procurement decisions, or public debate over what digital sovereignty means. The debate that does happen, hybrid versus non-hybrid, Google/Thales versus pure European providers, focuses on operational control and legal structure. It does not address the shared silicon foundation. 
Strubel's LinkedIn post pushes back against the framing: "Imagining this problem is limited to hybrid cloud offerings is pure fantasy that doesn't survive confrontation with facts." Every cloud provider, hybrid or not, depends on components they don't fully control. The distinction isn't hybrid versus sovereign. It is what you're protecting against, and whether the controls you're implementing address that threat. There is no immediate solution. RISC-V, the open source processor architecture European sovereignty advocates point to as a long-term alternative, remains years from competitive performance in datacenter workloads. "It will take decades," Francillon says flatly. Arm is a cautionary precedent: it took nearly 20 years from the first server attempts before Arm achieved any meaningful datacenter presence. Can sovereignty exist on compromised silicon? For Goodacre, the bottom line is simple: the Tier-3 supply-chain residual is "the irreducible cost of buying silicon with a Ring -3 manageability engine." Francillon argues that operational controls, including network isolation, monitoring, and threat modeling, make the backdoor unreachable except in very high-end attacks. Strubel acknowledges hardware dependencies are real but maintains that SecNumCloud provides valuable protection for what it does cover: legal control, kill-switch resistance, defense against cyberattacks and insider threats. The disagreement is not about technical facts. It is about risk tolerance and threat model calibration. For European CIOs choosing SecNumCloud-certified providers, the question to ask vendors is: how do you address Intel Management Engine and AMD Platform Security Processor in your threat model? The answer will clarify whether the vendor treats the hardware layer as out of scope, or has implemented controls that reduce but do not eliminate the exposure. For European policymakers, the question is broader. Can digital sovereignty exist on non-sovereign silicon? 
The current frameworks do not answer that question. They certify operational controls, legal structure, and autonomous execution capability. They do not certify silicon-layer immunity, because the hardware is American or Chinese, subject to American or Chinese law, designed with management engines that European authorities did not specify, cannot legally compel on their own terms, and cannot replace. Whether that is a gap worth addressing, or a risk worth accepting as the unavoidable cost of participating in global technology supply chains, is a question Europe will need to answer for itself. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5237766&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5237766&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240514</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/16/one-in-seven-brits-swapped-their-gp-for-chatgpt-study-finds/5240514</link>
        <pubDate>Sat, 16 May 2026 10:33:00 +0200</pubDate>
        <title>One in seven Brits swapped their GP for ChatGPT, study finds</title>
        <description><![CDATA[ Patients are using chatbots for medical advice, while the NHS is still debating where AI belongs ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 09:25:58 +0000</dc:modified>
                <content:encoded><![CDATA[ Brits are now asking chatbots about mysterious lumps and weird rashes instead of calling their GP, which is probably not the digital healthcare revolution anybody meant to build. A new study from King's College London found that one in seven people in the UK have used AI instead of contacting a doctor or healthcare service, while one in ten said they had turned to chatbots rather than professional mental health support. Convenience was the biggest reason, cited by 46 percent of respondents, closely followed by curiosity at 45 percent. Another 39 percent said they used AI because they were unsure whether their symptoms were serious enough to bother a GP in the first place. The report, based on a survey of more than 2,000 adults, suggests that AI systems are quietly becoming Britain's unofficial second-opinion service while regulators are still arguing about what counts as "AI-enabled healthcare" in the first place. However, some respondents said the chatbot conversations ended up replacing medical care altogether. Around one in five respondents said chatbot advice discouraged them from seeking professional help, and 21 percent said they skipped contacting a healthcare provider because of something the AI told them. Public confidence in AI healthcare also looks shaky. The survey found Britons are almost perfectly split on whether AI should be involved in clinical decision-making, with 37 percent supporting its use and 38 percent opposing it. Safety and accuracy worries topped the list of public concerns about NHS AI use. Women, in particular, were less comfortable with the idea than men, and far more likely to say patients should be told when AI is involved in their care. Oddly, younger adults were among the most skeptical. Nearly half of 18 to 24-year-olds opposed clinical AI use, compared with 36 percent of people over 65. 
The public also appears to think AI has already taken over GP surgeries to a much greater extent than is the case. Respondents guessed that around 39 percent of GPs use AI in clinical decision-making, when the actual figure is closer to 8 percent. Professor Graham Lord, executive director at King's Health Partners, warned that responsibility for AI mistakes often lands on clinicians even when they have little control over the systems being deployed. "When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced," Lord said. Which sounds suspiciously like someone in healthcare has already seen the incoming paperwork. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240526&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240526&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241429</guid>
        <link>https://www.theregister.com/devops/2026/05/15/google-reimburses-register-sources-who-were-victims-of-api-fraud/5241429</link>
        <pubDate>Fri, 15 May 2026 23:26:32 +0200</pubDate>
        <title>Google reimburses Register sources who were victims of API fraud</title>
        <description><![CDATA[ But it's holding fast on auto-expanding customers' budgets ]]></description>
        <category>devops</category>
                <lab:kicker><![CDATA[ Devops ]]></lab:kicker>
                <content:encoded><![CDATA[ Two of the Google Cloud developers who were hit with bills for thousands of dollars following unauthorized API calls to Gemini models have had those charges reversed, the users told The Register in recent days. But Google plans to continue automatically expanding users' spending limits, leaving them and countless other customers vulnerable to bills they cannot afford, whether from fraud or a sudden traffic surge. Australia-based developer Isuru Fonseka – whose usage bill skyrocketed to $17,000 in minutes after Google automatically upgraded his $250 spending tier when a hacker took control of his account – told us that he was happy to put this behind him. “It’s so good. It felt like they were just giving me the run around until your article. I just hope they fix it properly for everyone,” he said. “It’s great that the article was able to get the refund but it’s sad that it had to go to that level for them to process it urgently.” Despite the refund, Google seems to have lost a customer. Fonseka said that he has since ensured his API keys cannot be used with Google’s stable of AI products, and will likely try one of the independent foundation models if he needs those features. “I’ve disabled Gemini on everything – if I ever plan to use AI on my projects, I’m better off using it via a different service such as OpenRouter or going directly to one of the other LLM providers – just as a way to keep Gemini out of my account and the risk as low as possible,” he said. Fonseka said he was blindsided by a Google policy that allowed the company to automatically upgrade a user’s billing tier without permission or adequate warning. He had thought that, by signing up for a user tier with a $250 spending cap, his bills would be restricted to that amount. It was only after attackers exploited his API key that he learned Google would upgrade the cap automatically based on his history of spending. 
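The upgrade policy that caught Fonseka out can be sketched in a few lines of Python. This is purely illustrative – the function name and signature are ours, not part of any Google API – using the thresholds Google's documentation describes for Tier 1 accounts:

```python
# Illustrative sketch only: the function name and signature are ours, not a
# Google API. Thresholds follow Google's documented Tier 1 auto-upgrade rules.
def effective_spending_cap(account_age_days: int, lifetime_spend_usd: float,
                           tier_cap_usd: float = 250.0) -> float:
    """Return the spending ceiling Google may actually enforce."""
    # Accounts older than 30 days with at least $1,000 of lifetime spend are
    # automatically allowed to spend up to $100,000 - far above the tier cap.
    if account_age_days > 30 and lifetime_spend_usd >= 1_000:
        return 100_000.0
    return tier_cap_usd

# A reliable customer who chose a $250 cap is exposed to a $100,000 bill:
assert effective_spending_cap(account_age_days=60, lifetime_spend_usd=1_200) == 100_000.0
assert effective_spending_cap(account_age_days=10, lifetime_spend_usd=500) == 250.0
```

The point of the sketch: the nominal tier cap stops binding precisely for the customers Google trusts most.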
While Google acknowledged that the automatic tier upgrades allowed credential hijackers to rack up thousands of dollars in bills in cases like the one Fonseka described to The Register, it said it has not reconsidered the policy. In a statement to The Register, Google said that it wants to prioritize access to Google Cloud services without interruption, preferring to prevent service outages over respecting users' budget preferences. “With our automated growth tiers, we helped businesses scale as usage increased, built on their historic reputation of payments and usage,” a Google spokesperson told us in a statement. “This prevents their business having a hard service outage once they pass an artificial system quota.” Tiers vs spending caps There is some confusion between Google's usage tiers and its newly introduced spending caps, and Google’s documentation hasn't helped much. Google says its usage tiers are meant to cap spending at a certain level. For example, the maximum spending allowed for a Tier 1 user like Fonseka is $250. However, if the account is older than 30 days and if, over the lifetime of their work with Google, they have spent at least $1,000, then Google will automatically allow that account to spend up to $100,000. So good customers have the most to fear from fraud or from an unexpected spike in usage. In several cases shared on social media, Google users only became aware of this after their credit cards were billed thousands of dollars. On April 22, Google introduced a trial of hard caps on spending within Google Cloud, but those are in preview and approved on a case-by-case basis. "We’re excited to announce that Spend Caps are coming soon to Google Cloud. Designed to work with Google Cloud Budgets, FinOps and DevOps can set budgets that enforce automated cost boundaries (caps) at the project level for AIS, Agent Platform, Cloud Run, Cloud Run Functions, and Maps," Google wrote. 
"These caps alert and ultimately pause API traffic once your set budget is reached, but leave your resources intact. If you need the traffic to resume, simply suspend the Spend Cap." Spend caps can only be set per project for a single, eligible service, Google said. Eligible services for this preview include the Gemini API, Agent Platform (previously known as VertexAI), Cloud Run, Cloud Run Functions, and Maps. Users who apply for a spending cap will have their submissions reviewed on a “one to two week basis” and customers are added in the order they submitted their requests. “Once onboarded, you will receive an email with instructions on how to access the feature as well as details on how to submit feedback,” Google writes on its sign-up page. Rod Danan, CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities, told The Register earlier this week that he saw his bill skyrocket to $10,000 after just 30 minutes of usage by attackers who exploited his public API key. Google forgave the charges on Thursday, he said. “They got back to me today agreeing to a refund,” he told us. “It's definitely relieving. You want to focus on the business. You don't want to have to focus on going and getting refunds from some crazy charges.” He said running a startup is stressful enough without also having to fight one of the world's largest companies over erroneous five-figure charges. “I'm happy that it's behind me. I wish it was easier,” he said. “I've learned, yeah, definitely don't give up. Be annoying whenever something is wrong and just keep pushing. Again, try to make it as public as possible, get louder and louder until the people you need to hear you actually hear you.” Google said any unauthorized use of API keys will be investigated and it historically has treated customers compassionately when there is clear evidence of fraud or error. 
“We take reports of credential abuse and the financial security of our customers extremely seriously; and as you know are investigating these specific cases you have pointed to and we will work directly with any impacted users to resolve charges resulting from fraudulent activity,” Google said. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241525&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241525&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241491</guid>
        <link>https://www.theregister.com/on-prem/2026/05/15/datacenters-slurping-juice-help-drive-75-jump-in-pjm-power-prices/5241491</link>
        <pubDate>Fri, 15 May 2026 23:02:01 +0200</pubDate>
        <title>Datacenters slurping up so much juice they boosted prices 75% in largest US energy market</title>
        <description><![CDATA[ BYO power for AI bit barns may be the best way to ease the problem, says energy watchdog ]]></description>
        <category>on-prem</category>
                <lab:kicker><![CDATA[ On-prem ]]></lab:kicker>
                <content:encoded><![CDATA[ Prices in the United States' largest wholesale power market have jumped 75 percent in the past year thanks to demand from datacenters. And an independent watchdog predicts things will only get worse without some serious changes. The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, home to the densest cluster of datacenters in the world. The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, a firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report. According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year. Monitoring Analytics didn’t mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM. “Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices,” the report reads. “But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions.” As for what might come next, the report doesn’t ignore the likely outcome of the current situation, either. “The price impacts on customers have been very large and are not reversible,” the report states, but the bad news doesn’t stop there. “The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner.” Based on the rest of the report, a timely resolution to the datacenter load issue shouldn’t be expected, at least not in a way that’ll benefit locals. 
For starters, Monitoring Analytics found that - like pretty much everywhere right now - power grids aren’t ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but planned upgrades have been delayed multiple times, with no implementation date on the calendar, per the report. “The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future,” Monitoring Analytics asserted. Current plan: Shift the risk to everyone else PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn’t convinced the Interconnection is going about the process in the right way. The currently proposed auction structure, says the watchdog, would “generally shift significant risk to other PJM customers,” which is a temptation the group says “should be resisted.” “Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers,” the report continued. “Yet that is what most of the proposals related to a backstop auction actually do.” As for what PJM ought to be doing, you probably won’t need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track interconnection options for BYOP datacenters, with everyone else joining a queue that connects datacenters only when there is adequate capacity to serve them. 
“This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing,” the analysts argue. When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers. “PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules,” the Interconnection told us. Monitoring Analytics didn’t respond to questions. Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed datacenter projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only with electricity price increases, noise, and eyesores, but with environmental harm as well. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=254320&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=254320&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241480</guid>
        <link>https://www.theregister.com/devops/2026/05/15/git-is-unprepared-for-the-ai-coding-tsunami/5241480</link>
        <pubDate>Fri, 15 May 2026 22:15:56 +0200</pubDate>
        <title>Git is unprepared for the AI coding tsunami</title>
        <description><![CDATA[ An influx of agents is pushing GitHub to the brink ]]></description>
        <category>devops</category>
                <lab:kicker><![CDATA[ DevOps ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 20:17:41 +0000</dc:modified>
                <content:encoded><![CDATA[ Last month, Mitchell Hashimoto, HashiCorp co-founder, publicly declared that he was moving his popular open source Ghostty terminal emulator project from GitHub. GitHub runs the world’s largest service built on the Git distributed version control system, created by Linus Torvalds. Once an enthusiastic user, Hashimoto grew disillusioned with service disruptions and increasingly slow pull requests. “This is no longer a place for serious work if it just blocks you out for hours per day, every day,” he wrote. Hashimoto was quick to defend Git itself: “The issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc.” Many have blamed GitHub’s performance on Microsoft, which acquired the company in 2018. But to be fair, GitHub itself has been experiencing heavier-than-expected traffic thanks to a proliferation of AI-generated pull requests. In 2025, GitHub saw 206 percent year-over-year growth in AI-generated projects measured by the use of Bash shell scripts, a widespread way of running agents. And more AI code means more bugs. Research from GitClear found that AI-generated code piled on 10.83 issues per pull request, compared to 6.45 for the old-fashioned human variety. Our new agentic workforce is raising big questions about how the entire software development lifecycle (SDLC) should evolve, and whether Git should come along. “Agents are nudging us toward a continuous flow,” warned Peco Karayanev, co-founder of DevOps platform provider Autoptic, which bridges Git-based deployments with observability tools for agent-based remediation. Autoptic’s entire user base runs on some form of Git, either homebrew or from a service provider like GitLab. Given the volume and magnitude of changes across repos, “we need git to start operating in a more continuous mode,” Karayanev wrote in an email interview. 
Git operations, especially when used in GitOps-style automated deployments, still need to be managed by people. Updates, commits, pushes, and merges are often yoked into sequences of “stop/go” episodes where someone has to hit enter on the keyboard a few times to continue the workflow, Karayanev noted. This model may not hold up once agents start getting priority. A butler for Git Git has always had its share of critics, especially those who use the tool daily. There may not be another piece of software that is so widely adopted and yet so inscrutable. Torvalds and other Linux kernel developers built Git in 2005 after frustrations with trying to shoehorn Linux code into the commercial BitKeeper tool. Linux, a global group project of mammoth proportions, required a distributed version control system able to support non-linear development of thousands of parallel branches. Like any distributed system, Git can be difficult to understand. GitHub co-founder Scott Chacon co-wrote a book on using Git (2009’s Pro Git), and still he finds himself occasionally flummoxed by the version control system. There are still “sharp edges” to Git, Chacon told The Register. “There's a lot of stuff that it doesn't do very well from a usability standpoint,” he said. Chacon co-founded GitButler as a way to “rethink the porcelain” of Git and make it more suitable for modern workflows. (Last month, GitButler received $17 million in venture capital funding). Think of GitButler as a super-powered Git client. It allows the developer to work on two different branches simultaneously, using a technique called virtual branching. It reconciles the code a developer is working on with the upstream code. Developers can reorder commits or edit the message of a previous commit. It offers richer metadata about the files being worked on. It can show which commits are unique to that branch. 
Best of all, it eliminates what many developers call “rebase hell,” where merges into an updated codebase must be checked one at a time, a problem GitButler solves by keeping the user’s code synchronized with what is upstream. Many of the actions GitButler offers can be performed through Git commands directly – although Git’s command language and its rules can be so obtuse that “you will probably make a mistake at some point,” Chacon said. A Git for agents Chacon believes GitHub’s current reliability issues stem from the tsunami of agentic work. This is “ironic” because GitHub was built to scale Git, he said. “But an influx of agents is pushing the service to the brink.” The problem lies not with Git itself, but with everyone using one service, Chacon argued. Last year, GitHub had about 180 million users working across 630 million repositories – with 121 million created in 2025 alone, according to the company’s most recent annual Octoverse report. “From the longer-term perspective, it doesn't need to be like this,” he argued. Maybe Git should be run locally, mirrored globally and managed with clients … such as GitButler, Chacon suggested. Perhaps Git-based version control systems could be customized for specific industry verticals. We need to think about how we “distribute these systems more,” he said. “Git is designed to be distributed but we’re not distributing it,” he added. GitButler has created a command line interface specifically for agents. It was designed to give MCP servers an integrated map of the repository, which otherwise would require stitching together multiple Git commands. The Virtual Files concept allows the agent to work on a section of code that is also being worked on by a developer or another agent. These changes point to a rethinking of how a Git workflow should run. “I think all of these systems should fundamentally change, because all of our workflows have changed, right? 
There needs to be different, sort of primitives for how to deal with these problem sets,” Chacon said. A tip from gaming development One company that wants its platform to replace Git altogether is Diversion, which has built an eponymous distributed version control system initially pitched for large-scale game design. “Git's architecture is actually an issue that prevents scaling,” argues Diversion CEO Sasha Medvedovsky in an interview with The Register. “Fundamentally it's an architecture problem that can't be fixed and is a bottleneck for end users and hosting services.” Git is a distributed system insofar as every user, or hosted service, requires a dedicated database (much like blockchain). “It's not distributed in the regular sense but rather replicated,” he wrote in an exchange with The Register on LinkedIn. Operations run on a single thread, ruling out concurrency. As a result, the larger the repository, the slower the commit operations – a deadly combination for fast-paced agentic software development, Medvedovsky noted. Of course, every CEO will have their talking points ready about a competitor’s weaknesses (Diversion is finalizing a blog post with hard numbers about Git and GitHub performance). But a growing number of other initiatives are prepping Git for the challenging times ahead. Perhaps most notable is Jujutsu, a Git-compatible distributed version control system, stewarded by Google senior software engineer Martin von Zweigbergk. Like GitButler, Jujutsu (jj) aims to eliminate a lot of the annoyances that come with Git. It includes an undo button and the ability to keep committing even when there is a conflict. And because everything written in C must be recast into Rust these days, long-time Git contributor Sebastian Thiel started a project called Gitoxide to rebuild Git in Rust. 
Potential benefits include significant performance improvements through multicore processing and the much-needed memory safety that comes with Rust. Will Git 3 solve all the problems? Git’s chief maintainer is Junio Hamano, who took the reins from Torvalds in 2005 and remains busy keeping Git current. At FOSDEM this February, core Git contributor and GitLab engineering manager Patrick Steinhardt discussed some of the changes coming in the next version of Git, version 3, which is gradually being rolled out this year. One of the chief improvements will be in the way Git manages references, the pointers that identify each commit. Surprisingly, this operation is a real bottleneck for the software. “The design is inefficient,” Steinhardt told the audience. Updated references are consolidated into a single “packed-refs” file, which saves time by not giving each reference its own file on disk. As projects grow larger, however, it takes longer for Git to amend or to delete a reference in packed-refs (one GitLab repo has a packed-refs file of more than 20 million references, Steinhardt said). This is especially problematic when you have multiple, simultaneous readers and writers of that file. And just forget about getting a consistent view of all the references. The freshly implemented Reftable feature, which will be the default in Git 3.0, stores references in an indexable binary format. The Git folks borrowed this concept from the Eclipse Foundation’s JGit Java implementation of Git. Reftable allows for block updates, eliminating the need to rewrite a 2 GB-sized file for a single entry. And it is much faster for reading, paving the way for Git to support larger, more sprawling repositories – perfect for an ever-busy agentic workforce. For nearly two decades, Git has proved to be the version control system of choice for geeks worldwide. 
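The packed-refs bottleneck Steinhardt describes can be sketched in toy form. This is illustrative Python of our own, not Git's actual implementation: deleting one reference from a flat, single-file store means rewriting every surviving entry, so the cost grows with the size of the repository.

```python
# Toy model of a flat packed-refs-style store (not Git's real code): deleting
# a single reference forces a rewrite of every remaining line in the file.
def delete_ref(packed_refs: list[str], ref_name: str) -> list[str]:
    # O(n) in the total number of references, even for one deletion.
    return [line for line in packed_refs if not line.endswith(" " + ref_name)]

# A store with 1,000 refs must touch all 999 survivors to drop one entry.
refs = [f"{i:040x} refs/heads/branch-{i}" for i in range(1000)]
refs = delete_ref(refs, "refs/heads/branch-500")
assert len(refs) == 999
```

A reftable-style store sidesteps this by grouping references into indexed blocks, so an update rewrites only the affected block rather than the whole file.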
But even with these new features and various third-party enhancements, can it retain relevance for a new generation of agentically enhanced coders? The battle is on. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241522&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241522&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241453</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/15/ai-agents-show-they-can-create-exploits-not-just-find-vulns/5241453</link>
        <pubDate>Fri, 15 May 2026 21:45:54 +0200</pubDate>
        <title>AI agents show they can create exploits, not just find vulns</title>
        <description><![CDATA[ Mythos and GPT-5.5 muscle out the competition ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <content:encoded><![CDATA[ Sure, AI agents such as Mythos can find security vulnerabilities in software, but the bigger question is whether they can turn those flaws into functional exploits that work in the real world. After all, many AI-discovered bugs prove minor or difficult to weaponize. New research, however, suggests frontier models can indeed develop working exploits when directed to do so. To better understand the rapidly changing security landscape, computer scientists from UC Berkeley, the Max Planck Institute for Security and Privacy, UC Santa Barbara, Arizona State University, Anthropic, OpenAI, and Google decided to build ExploitGym, a benchmark for evaluating the exploitation capabilities of AI agents. This is not an entirely disinterested set of investigators – Anthropic, OpenAI, and Google all sell AI services. And both Anthropic and OpenAI have talked up the risks of their leading models, Claude Mythos Preview and GPT-5.5, while selling access to government partners. Since Anthropic announced Mythos in early April, the security community has been critical of the company's approach, described by some as fear-mongering. And various security experts have made the case that even commercially available AI models can find security flaws. Nonetheless, Mythos and GPT-5.5 outshine their peers in ExploitGym, as described in the paper, "ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?" ExploitGym consists of 898 real vulnerabilities found in applications, Google's V8 JavaScript engine, and the Linux kernel. Its workout involves presenting an AI agent with a vulnerability and a proof-of-concept input that triggers it, to see whether the agent can create an exploit capable of arbitrary code execution. According to the UC Berkeley Center for Responsible Decentralized Intelligence, Mythos Preview successfully exploited 157 test instances and GPT-5.5 managed 120 in the allotted two-hour window. 
"Even when standard security defenses like ASLR or the V8 sandbox were turned on, a meaningful number of exploits still worked," the boffins wrote in a blog post. "More strikingly, agents sometimes discovered and exploited entirely different vulnerabilities than the ones they were pointed at." The agents (CLI + model) tested were Claude Code with Claude Opus 4.6, Claude Opus 4.7, Claude Mythos Preview, and GLM-5.1; Codex CLI with GPT-5.4/GPT-5.5; and Gemini CLI with Gemini 3.1 Pro. And even the ancient models released in February (Opus 4.6 and Gemini 3.1 Pro) had some success. The researchers say that one of their more interesting findings is that these models sometimes went "off-script" in capture-the-flag (CTF) environments, where an agent has to find and retrieve some hidden value. This was most evident with Mythos Preview and GPT-5.5. The former succeeded in 226 CTF exercises but only used the intended bug in 157 instances, while the latter captured 210 flags and only used the intended bug in 120 of those cases. The authors also note that while there was some overlap in the exploits discovered, the various models found different exploits. This suggests applying a diverse set of models might be advantageous both in attack and defense scenarios. It's worth adding that ExploitGym tests were done with security guardrails disabled. When the test was re-run on GPT-5.5 with default safety filters active, the model refused 88.2 percent of the time before making any tool call. The Register, however, has seen security researchers craft prompts in a way to avoid triggering refusals. So safeguards of that sort have limits. "Our results show that autonomous exploit development by frontier AI agents is no longer a hypothetical capability," the authors state in their paper. "While current agents are not yet reliable across all targets, they already exploit a non-trivial fraction of real-world vulnerabilities, including complex targets such as kernel components." 
® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241477&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241477&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241407</guid>
        <link>https://www.theregister.com/personal-tech/2026/05/15/localsend-puts-your-sneakernet-out-of-business/5241407</link>
        <pubDate>Fri, 15 May 2026 21:10:13 +0200</pubDate>
        <title>LocalSend puts your sneakernet out of business</title>
        <description><![CDATA[ Like AirDrop, minus the Apple lock-in ]]></description>
        <category>personal tech</category>
                <lab:kicker><![CDATA[ personal tech ]]></lab:kicker>
                <content:encoded><![CDATA[ FOSS It happens all the time. You have a file on one of your devices and you need to have it on another one. You could put the file on a USB flash drive and walk it over (the so-called sneakernet), you could email it to yourself, or you could try to set up some kind of network resource. LocalSend, a free open source tool, makes the process of sharing files on a LAN easier than anything else, and it works on Windows, Linux, macOS, Android, and more. The Reg FOSS desk is not routinely a fan of Apple fondleslabs. (We’ve tried, but they’re a bit too locked down for us.) That said, from what we’ve heard, LocalSend is a bit like Apple’s AirDrop but for grown-up computers and non-Apple kit. For Linux Mint users, it’s a bit like the included Warpinator – and as that page says, don’t search for it and go to warpinator.com, as it’s a fake site. It’s a free download from its GitHub page and is also available in Canonical’s Snap store and on Flathub. You run it, and it gives that computer a cute nickname in the form of (adjective)+(fruit). Run it on two computers on the same local network, and they should see each other. You click “send” on one, and “receive” on the other, and that’s about it: pick the file or folder, and off it goes. LocalSend isn’t very big – the installation packages are mostly around the 15 MB mark – so it’s pretty fast to download or install. This vulture found and tried it when we downloaded a just-over-4 GB file and then worked out we’d saved it onto the wrong OS on the wrong machine. It takes a good few minutes to download several gigabytes – we live on a small, remote island, where our 100 Mbps broadband costs about four times what 1 Gbps broadband used to cost in Czechia – and it seemed worth trying to transfer it rather than grab another copy. The gist of the idea is that LocalSend is quicker than using a USB key. 
You know the sort of process: find a big enough USB key, check it has space, copy the file onto it, eject it, go to the other machine, insert it, and copy the file off again. Even if it goes perfectly, LocalSend is still less hassle. It’s also easier than configuring some kind of temporary folder-sharing setup between different OSes on different computers with different login names. (The Irish Sea wing of Vulture Towers recently moved house and has yet to finalize his office layout and reconnect his NAS servers. It’s climbing to the top of the to-do list, though.) LocalSend is also available on both the iOS App Store and Google Play Store, so it can help with devices that you can’t readily plug a USB key into. The transfer happens across your local network, so it won’t use up bandwidth on metered internet connections, and will even work if your internet connection is down. Warpinator is Mint’s solution – but in our case, we initially needed to move the file from Windows to macOS. Both have ports of Warpinator, but both seem unofficial, and while the machines could see one another, file transfers failed. We’ve also tried SyncThing, but it’s not good at keeping machines in sync when they’re rarely on at the same time – and we’ve had problems with it recursively duplicating directory trees into themselves so deeply that no GUI tool could delete them. Ideally, you should have an always-on home server that also runs SyncThing – and if you have one of those, then for one-off file transfers, you don’t really need SyncThing: just copy the file to the server, and off again. LocalSend just worked, and for us, it worked identically whether either end was running Windows, Linux, or macOS. We couldn’t ask for more. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241464&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241464&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241381</guid>
        <link>https://www.theregister.com/oses/2026/05/15/microsoft-puts-stability-in-the-drivers-seat-with-new-initiative/5241381</link>
        <pubDate>Fri, 15 May 2026 19:25:53 +0200</pubDate>
        <title>Microsoft puts stability in the driver's seat with new initiative</title>
        <description><![CDATA[ User interface tweaks are nice, but reliable drivers matter more ]]></description>
        <category>oses</category>
                <lab:kicker><![CDATA[ OSes ]]></lab:kicker>
                <content:encoded><![CDATA[ Microsoft has laid out plans for how it and its partners will deal with iffy drivers causing stability problems in the company's flagship operating system. Microsoft has outlined four pillars to support the program, dubbed the Driver Quality Initiative (DQI). These are Architecture – hardening kernel-mode drivers and enabling third-party kernel-mode drivers to transition to user mode; Trust – raising the bar for trusted partners and drivers; Lifecycle – addressing outdated and low-quality drivers; and Quality Measures – going beyond simple crash counts to measure driver quality. It's all very laudable, although, aside from references in the architecture pillar, Microsoft's WinHEC 2026 announcement said little about how Redmond ended up in a situation where drivers can run at a privilege level that allows a failure to leave the operating system hopelessly borked. The infamous CrowdStrike incident of 2024, which crashed millions of Windows devices, ably demonstrated the dangers of drivers running around in the Windows kernel. Microsoft later blamed a 2009 undertaking with the European Commission for how that situation came to be, although it skipped over the whole not-creating-an-API-so-security-vendors-didn't-need-kernel-access part. In the months after the CrowdStrike incident (or "learnings", as Microsoft delicately put it), the Windows Resiliency Initiative was announced. According to Microsoft, "DQI builds on the learnings and infrastructure established through the Windows Resiliency Initiative." Drivers are the bane of many Windows users. A faulty driver can make the entire operating system unstable. Sure, a customer might wonder how such a situation has been allowed to happen. Still, we are where we are, and dealing with it requires Microsoft to harden the operating system and provide ways for vendors to work with Windows that don't involve breaking down the kernel's doors. 
Those same vendors need to ensure that drivers are high-quality and reliable. "Driver and platform quality," wrote Microsoft, "is central to the customer experience." The company has said much in recent months about how it intends to "fix" Windows after a disastrous few years that have taken a hatchet to consumer confidence. Fripperies like moving the taskbar and rethinking Redmond's relentless pushing of Copilot are one thing. Dealing with driver-related crashes is quite another. WinHEC 2026 has shown that at least some within Microsoft are determined to deal with the fundamentals, and that requires taking the Windows maker's hardware partners along for the ride. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=232617&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=232617&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241280</guid>
        <link>https://www.theregister.com/personal-tech/2026/05/15/google-tests-5-gb-cap-for-users-who-skip-phone-numbers/5241280</link>
        <pubDate>Fri, 15 May 2026 18:09:22 +0200</pubDate>
        <title>Google sidles up to unsuspecting users, asks for their number</title>
        <description><![CDATA[ You may only get 5GB of storage instead of 15GB if you don't share your digits with the Chocolate Factory ]]></description>
        <category>personal tech</category>
                <lab:kicker><![CDATA[ Personal Tech ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 17:42:59 +0000</dc:modified>
                <content:encoded><![CDATA[ Google is testing a storage reduction for new accounts: the Chocolate Factory's trial cuts the free allowance from 15 GB to a miserly 5 GB unless the user provides a telephone number. Not all new users are impacted. We created a Gmail account today, and were given the full 15 GB of storage without being required to provide a phone number (although it did ask for one for activation code purposes). The test is also regional and, it must be emphasized, is just that at this stage – a test. However, it could point to a future where tech vendors demand more data in return for using a 'free' service. Arguably, we're living in that future right now. A Google spokesperson told The Register: "We're testing a new storage policy for new accounts created in select regions that will help us continue to provide a high quality storage service to our users, while encouraging users to improve their account security and data recovery." A Reddit thread on the matter contained all manner of theories regarding what the data might be used for, including nefarious commercial purposes. Judging by the screenshot, Google is trying to curb people who create multiple accounts to gain more storage. 15 GB is not a lot of storage these days, particularly given the relentless growth in media file sizes. That said, a drop to 5 GB would bring Google into line with Apple, which gives customers the same amount unless they upgrade to iCloud+. Microsoft gives users 15 GB of free Outlook.com storage, and Proton Mail's free tier gives users 1 GB (initially 500 MB until a starting checklist is completed). Should the test become reality, it could be seen as yet another step on a worrying path. Sure, you can have more free storage: sign here and agree to hand over these bits of your personal information. 
As demand for storage increases, vendor offerings are looking ever more miserly, and a cut from Google, even with the best of intentions, will rankle. Then again, if you are concerned about privacy and your personal information being used for commercial purposes, it could be that, for all its convenience, Gmail might not be the right tool for you. Reducing storage to 5 GB for new users (existing users aren't affected) unless a telephone number is handed over might be the nudge that some users need to look elsewhere for their email needs. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241392&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241392&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241231</guid>
        <link>https://www.theregister.com/science/2026/05/15/nasas-psyche-mission-set-for-a-brief-encounter-with-mars/5241231</link>
        <pubDate>Fri, 15 May 2026 16:09:00 +0200</pubDate>
        <title>NASA's Psyche mission set for a brief encounter with Mars</title>
        <description><![CDATA[ There sure are some clever people on Earth ]]></description>
        <category>science</category>
                <lab:kicker><![CDATA[ Science ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 13:58:32 +0000</dc:modified>
                <content:encoded><![CDATA[ More than two years after launch, NASA's Psyche mission will whizz past Mars on May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, which was launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid, Psyche. In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the signals from the spacecraft as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory. These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries is the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune. The trajectory allowed significant propellant to be saved. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision. Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. According to NASA, Sarah Bairstow, Psyche's mission planning lead at the Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we’ll also make observations with the mission's other science instruments." 
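The Doppler confirmation mentioned earlier is simple arithmetic: a spacecraft's line-of-sight velocity shows up as a proportional shift in its received carrier frequency (Δf = f·v/c in the non-relativistic case). A minimal sketch, assuming a hypothetical X-band carrier near 8.4 GHz – typical for deep-space downlinks, though the article does not state Psyche's actual frequency:

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_doppler_hz(carrier_hz: float, los_velocity_ms: float) -> float:
    """Non-relativistic one-way Doppler shift for a line-of-sight velocity."""
    return carrier_hz * los_velocity_ms / C

# A 1 km/s line-of-sight velocity on an assumed 8.4 GHz carrier shifts
# the received frequency by roughly 28 kHz.
shift = one_way_doppler_hz(8.4e9, 1_000.0)
```

Even a modest velocity change therefore moves the carrier by tens of kilohertz, which is easily resolvable from the ground – and why Doppler tracking can definitively confirm the post-flyby speed and trajectory.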
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at Arizona State University. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=230380&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=230380&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241201</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/15/anthropic-urges-uncle-sam-to-kneecap-chinas-ai-ambitions-before-2028/5241201</link>
        <pubDate>Fri, 15 May 2026 15:33:00 +0200</pubDate>
        <title>Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028</title>
        <description><![CDATA[ Claude maker warns authoritarian regimes could set the rules unless Washington tightens chip and model controls ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 14:07:14 +0000</dc:modified>
                <content:encoded><![CDATA[ AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=1684165&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=1684165&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241150</guid>
        <link>https://www.theregister.com/on-prem/2026/05/15/exploited-exchange-server-flaw-turns-owa-inboxes-into-script-launchpads/5241150</link>
        <pubDate>Fri, 15 May 2026 13:51:13 +0200</pubDate>
        <title>Exploited Exchange Server flaw turns OWA inboxes into script launchpads</title>
        <description><![CDATA[ Microsoft mitigation may bork inline images, calendar printing while admins wait for a proper patch ]]></description>
        <category>on-prem</category>
                <lab:kicker><![CDATA[ On-Prem ]]></lab:kicker>
                <content:encoded><![CDATA[ Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. 
The vulnerability does not affect Exchange Online. Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=246612&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=246612&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241071</guid>
        <link>https://www.theregister.com/patches/2026/05/15/cisco-discloses-yet-another-sd-wan-make-me-admin-0-day/5241071</link>
        <pubDate>Fri, 15 May 2026 13:15:00 +0200</pubDate>
        <title>Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day</title>
        <description><![CDATA[ CISA hands feds super-tight deadline for this perfect-10, actively exploited flaw ]]></description>
        <category>patches</category>
                <lab:kicker><![CDATA[ Patches ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:21:59 +0000</dc:modified>
                <content:encoded><![CDATA[ Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager. Switchzilla dropped an advisory for CVE-2026-20182 (10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system. According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. It means they could steal data, intercept traffic, manipulate an organization's firewall rules, or just bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it. Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system. "A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric." Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for the security flaws that are both actively being exploited and threaten federal agencies. 
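Cisco's advisory tells admins to audit /var/log/auth.log for "Accepted publickey for vmanage-admin" entries from unknown sources and compare the source addresses against the System IPs configured in SD-WAN Manager. A minimal sketch of that check – the function name and the allow-list handling are our own placeholders; only the log path and marker string come from the advisory:

```python
import re

MARKER = "Accepted publickey for vmanage-admin"

def unexpected_login_ips(log_lines, known_system_ips):
    """Return source IPs of vmanage-admin public-key logins that are
    not in the allow-list of configured System IPs."""
    seen = set()
    for line in log_lines:
        if MARKER in line:
            match = re.search(r"from (\d{1,3}(?:\.\d{1,3}){3})", line)
            if match:
                seen.add(match.group(1))
    return seen - set(known_system_ips)

# Typical use (paths and IPs are illustrative): feed it the auth log and
# the System IPs copied from the Manager web UI; anything returned is an
# unexpected login source worth investigating.
# with open("/var/log/auth.log") as f:
#     print(unexpected_login_ips(f, {"10.1.1.1"}))
```

Remember Cisco's caveat that indicators of compromise may sit among otherwise normal-looking operational logs, so an empty result is not an all-clear.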
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks. Cisco said in its advisory there are no workarounds available, and it "strongly recommends" applying the available fixes. Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should be auditing the auth.log file at /var/log/auth.log for entries related to Accepted publickey for vmanage-admin from unknown or unauthorized IP addresses. Then, check those IP addresses against the configured System IPs that are listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said. Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, 10.0) from February. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=4094206&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=4094206&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241120</guid>
        <link>https://www.theregister.com/personal-tech/2026/05/15/x-tells-ofcom-it-will-finally-check-its-moderation-inbox/5241120</link>
        <pubDate>Fri, 15 May 2026 12:52:08 +0200</pubDate>
        <title>X tells Ofcom it will finally check its moderation inbox</title>
        <description><![CDATA[ Comms watchdog says Musk's social media platform will now review reports of illegal hate and terror content within 24 hours... on average ]]></description>
        <category>personal tech</category>
                <dc:modified>Fri, 15 May 2026 13:33:40 +0000</dc:modified>
                <content:encoded><![CDATA[ Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department. Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel. The company also committed to engaging with external experts on how its reporting systems work, following several organizations' complaints that they were unclear whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if the accounts are reported for posting illegal terrorist content. Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises. "Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action." The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms. 
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents in Britain, including the attacks in Manchester and Golders Green, and recent arson attempts against Jewish sites in London. The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation, which includes issues related to Grok, is ongoing and that it will continue to probe X's broader illegal content compliance systems. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241139&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241139&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240254</guid>
        <link>https://www.theregister.com/networks/2026/05/15/zte-showcases-at-gsma-m360-latam-2026-driving-future-business-model-restructuring-ai-network-two-way-integration/5240254</link>
        <pubDate>Fri, 15 May 2026 12:26:16 +0200</pubDate>
        <title>ZTE showcases at GSMA M360 LATAM 2026, driving future business model restructuring - AI &amp; network two-way integration</title>
        <description><![CDATA[ AI-integrated networks can cut costs, boost 5G efficiency, and help regional telcos shift beyond basic connectivity ]]></description>
        <category>networks</category>
                <content:encoded><![CDATA[ Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, participated in GSMA M360 LATAM 2026. Ms. Chen Zhiping, Chief International Ecosystem Representative of ZTE, delivered a keynote speech entitled "Driving Future Business Model Restructuring — AI & Network Two-Way Integration" at the conference. Ms. Chen provided an in-depth analysis of the industrial value of the two-way integration of AI and networks, sharing ZTE's achievements in the Latin American market over the past two decades, its AI-Native network innovation practices, and its full-scenario intelligent solutions, helping Latin American operators complete their strategic upgrade from "connectivity providers" to "digital economy enablers". Facing the AI industry wave, ZTE released its global strategic vision in 2025: "All in AI, AI for All, Becoming a Leader in Connectivity and Intelligent Computing". Ms. Chen stated that this strategy is highly aligned with the core concepts of this GSMA Summit. In the future, ZTE will move beyond traditional network connectivity services, continuously upgrade its basic network capabilities, and comprehensively expand its AI and intelligent computing business layout. Through a two-way integration model of AI empowering the network and the network supporting AI, ZTE will reconstruct a new business model adapted to the AI era and activate new growth momentum for the Latin American digital economy. In terms of AI-enabled network upgrades, ZTE has pioneered the AI-Native network concept, deeply embedding AI capabilities into all network layers and processes to maximize network efficiency and optimize costs. In the wireless network field, ZTE's new 5G BBU integrates native intelligent computing capabilities, effectively improving the overall efficiency of hardware and software resources and increasing cell throughput by 20%. 
Simultaneously, by combining Super-N high-performance power amplifiers and AI intelligent optimization technology, equipment energy consumption is reduced by 38%. Currently, AAU and RRU products equipped with this technology have been deployed on a large scale in several Latin American countries, including Chile, Ecuador, Bolivia, Brazil, and Peru, with over 37,000 units deployed to date, saving local operators millions of dollars in electricity costs annually and achieving efficient, green, and intelligent network upgrades. Built upon AI-Native technology, the AIR Net advanced intelligent network solution enables commercial deployment of "autonomous driving" for networks, comprehensively revolutionizing operator operation and maintenance models and reducing overall TCO. This solution has already been commercially deployed in multiple locations globally. Currently, ZTE's intelligent network capabilities have obtained authoritative L4-level certification from the TM Forum, and its self-developed Co-Claw enterprise-level intelligent agent has been fully implemented internally, continuously improving network automation and intelligence levels and helping operators move towards advanced intelligent networks. In response to the complex and diverse network environment in Latin America, ZTE continues to implement scenario-based coverage solutions to bridge the regional digital divide. In indoor scenarios, ZTE has partnered with Chilean company Millicom to deploy the Qcell solution, achieving stable gigabit coverage throughout buildings. In remote rural scenarios, ZTE collaborates with Brazilian company Claro to implement the RuralPilot simplified rural network solution, addressing network coverage challenges in the vast Amazon region with its low cost and ease of maintenance. ZTE also offers a wide range of home coverage solutions, precisely matching the networking needs of different regions and scenarios in Latin America. Ms. 
Chen Zhiping stated that ZTE will continue to be rooted in the Latin American market, deepen the two-way integration and innovation of AI and networks, and continue to implement green, efficient, and intelligent full-stack ICT solutions to help local operators complete their strategic transformation and upgrade from traditional connectivity service providers to digital economy enablers. The goal, she said, is to meet the intelligent needs of industries and families in all scenarios and to build a smart, inclusive, and sustainable new digital ecosystem in Latin America. Contributed by ZTE. ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241131&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241131&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5241019</guid>
        <link>https://www.theregister.com/security/2026/05/15/openai-caught-in-tanstack-npm-supply-chain-chaos-after-employee-devices-compromised/5241019</link>
        <pubDate>Fri, 15 May 2026 12:08:07 +0200</pubDate>
        <title>OpenAI caught in TanStack npm supply chain chaos after employee devices compromised</title>
        <description><![CDATA[ Attackers stole a limited amount of internal credential material after malware hidden in poisoned packages reached two staff machines ]]></description>
        <category>security</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:23:06 +0000</dc:modified>
                <content:encoded><![CDATA[ OpenAI says attackers behind the TanStack npm supply chain compromise stole internal credentials after reaching two employee devices, forcing the company to rotate signing certificates for several desktop products. The company disclosed this week that it had been caught up in the wider "Mini Shai-Hulud" campaign targeting npm ecosystems and developer infrastructure, though it said there was no evidence that customer data, production systems, or deployed software were compromised. OpenAI said the incident happened during a phased rollout of new supply chain security controls introduced after a previous Axios-related incident. According to the company, the two compromised employee devices had not yet received updated package management protections that would have blocked the malicious dependency. The attackers carried out "credential-focused exfiltration activity" against a limited set of internal repositories reachable from the affected employee machines, according to OpenAI. It said "only limited credential material was successfully exfiltrated from these code repositories." That was apparently enough to trigger a precautionary reset across multiple products. OpenAI is rotating the certificates used to sign macOS versions of ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and is requiring users to update the affected software by June 12. The incident ties OpenAI to the increasingly messy supply chain campaign that has spent the past several weeks worming through npm ecosystems, CI/CD infrastructure, and GitHub Actions workflows. Security firm Socket linked the TanStack compromise to the broader "Mini Shai-Hulud" operation, which abused poisoned automation workflows and stolen publishing credentials to push malicious package updates into trusted software pipelines. 
Researchers tracking the wider Mini Shai-Hulud campaign have connected the activity to a threat group known as TeamPCP, which appears to have developed an unhealthy interest in poisoning npm ecosystems and rifling through developer credentials. TanStack confirmed this week that 84 malicious package versions spanning 42 @tanstack/* packages had been published after attackers compromised parts of its release infrastructure. The poisoned packages were designed largely to steal credentials, including GitHub tokens, cloud secrets, npm credentials, and CI/CD authentication material. The campaign appears linked to earlier Mini Shai-Hulud attacks involving SAP-related npm packages, suggesting the same credential-stealing operation is spreading across multiple developer ecosystems. OpenAI said it is continuing to investigate the incident and monitor for any downstream abuse tied to the stolen credentials. The reassuring news is that OpenAI says no production systems were breached. The less reassuring news is that attackers keep getting deeper into the software assembly line before anybody notices. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241038&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241038&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240898</guid>
        <link>https://www.theregister.com/networks/2026/05/15/fusion-for-the-future-xlsmart-and-zte-partnering-for-a-boundless-digital-indonesia/5240898</link>
        <pubDate>Fri, 15 May 2026 11:59:05 +0200</pubDate>
        <title>Fusion for the future: XLSMART and ZTE partnering for a boundless digital Indonesia</title>
        <description><![CDATA[ 7,000 5G sites added in eight months now serve 73 million subscribers on Indonesia’s first blanket 5G network ]]></description>
        <category>networks</category>
                <dc:modified>Fri, 15 May 2026 12:23:38 +0000</dc:modified>
                <content:encoded><![CDATA[ Partner Content
In Indonesia, the magic of “Bumbu”, that perfect spice blend, creates unforgettable flavors. Today, in the digital world, an even grander "fusion" is taking place. Facing the challenge of unifying separate networks across Indonesia's diverse geography, XLSMART partnered with ZTE on a landmark dual-network convergence project, integrating over 20,000 4G base stations and deploying more than 7,000 new 5G sites in just eight months. The initiative has launched the country's first nationwide 5G blanket coverage network, validated by Ookla as the fastest 5G network in H2 2025. Leveraging digital-intelligent tools and ecosystem collaboration, the project significantly enhanced coverage, capacity, and user experience for 73 million subscribers — turning complex delivery challenges into measurable gains in speed and efficiency. Fusion for the Future. Watch how the converged network is powering Indonesia's digital growth. Contributed by ZTE. ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5241099&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5241099&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240629</guid>
        <link>https://www.theregister.com/offbeat/2026/05/15/uk-reloads-artillery-plans-with-1b-remote-control-howitzer-order/5240629</link>
        <pubDate>Fri, 15 May 2026 11:45:00 +0200</pubDate>
        <title>UK reloads artillery plans with £1B remote-control howitzer order</title>
        <description><![CDATA[ 72 Boxer-mounted RCH 155s due from 2028 as Britain fills the gap left by AS-90s sent to Ukraine ]]></description>
        <category>offbeat</category>
                <lab:kicker><![CDATA[ Offbeat ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:23:52 +0000</dc:modified>
                <content:encoded><![CDATA[ The British Army is to get 72 next-gen mobile artillery units, in the shape of a remote-controlled howitzer (RCH) module that mounts onto the Boxer armored vehicle already in service. The Ministry of Defence (MoD) announced a £1 billion ($1.35 billion) contract to provide the Army with a modern mobile system capable of providing artillery support against targets up to 70 km (44 miles) away. First deliveries of the RCH 155 units are expected in 2028, with a "minimum deployable capability" expected before the end of the decade. It follows a £52 million early capability demonstrator contract signed in December 2025. The RCH 155 is basically a 155 mm gun housed in a turreted artillery module mounted on the Boxer drive module. It is an auto-loading weapon, capable of firing eight rounds per minute. The unit features a fire control computer with integrated ballistics calculation, plus radio data transmission to a remote artillery control system. Boxer is an eight-wheeled (8x8), all-terrain vehicle designed to take a number of different bolt-on mission modules allowing it to fulfill various roles. The British Army has initially chosen just a few of these types, primarily the troop carrier variant, but also the ambulance module and command vehicle unit. According to the MoD, the barrel, breech, recoil system, and trunnions will be manufactured by German defense biz Rheinmetall at its large-caliber production facility in Telford, using British steel supplied by Sheffield Forgemasters. The Boxer drive modules/chassis, engine, and drivetrain that the weapon system sits on will be manufactured by the UK division of pan-European defense firm KNDS in Stockport. The Army is to receive a total of 623 of these. A new mobile artillery platform was needed to replace the UK's aging fleet of AS-90 self-propelled howitzers. These could easily be mistaken for a tank, thanks to their tracked chassis and turret-mounted gun. 
The last of these were donated to Ukraine over the past few years to help it fight Russia. The UK also procured a small number (14) of Archer mobile artillery systems as a stop-gap while a successor for AS-90 was selected. This is an automated 155 mm gun mounted on a 6x6 articulated truck chassis. "This major investment is defence delivering for the battlefield and for Britain's economy," said Defence Secretary John Healey MP. "By securing next-generation artillery with Germany, not only are we rearming to strengthen NATO against growing Russian aggression but also creating highly skilled jobs here in Britain." Ironically, Britain was one of the earliest partners in the Boxer joint venture, but withdrew from it in 2003 to focus on a different program, the Future Rapid Effect System (FRES). One strand of FRES eventually led to what is now known as the Ajax family of armored vehicles. You may have heard of it. The UK government announced it was rejoining the Boxer program in 2018 in order to meet its Mechanized Infantry Vehicle (MIV) requirement. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240657&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240657&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240955</guid>
        <link>https://www.theregister.com/public-sector/2026/05/15/britains-latest-civil-servant-is-a-chatbot-trained-on-govuk-misery/5240955</link>
        <pubDate>Fri, 15 May 2026 11:15:00 +0200</pubDate>
        <title>Britain's latest civil servant is a chatbot trained on GOV.UK misery</title>
        <description><![CDATA[ Whitehall says the AI assistant will help citizens navigate public services faster; others may see it as a cheaper alternative to answering the phone ]]></description>
        <category>public sector</category>
                <lab:kicker><![CDATA[ Public Sector ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:24:17 +0000</dc:modified>
                <content:encoded><![CDATA[ After years of turning public services into a maze of dead links, phone queues, and eligibility calculators, the UK government has unveiled the inevitable next step: an AI chatbot. The UK government on Friday announced the launch of "GOV.UK Chat," a generative AI assistant bolted into the GOV.UK app and trained on tens of thousands of pages of official guidance that Whitehall is boldly pitching as the "most comprehensive government-built chat tool in the world." Ministers say the system will help people navigate everything from maternity pay and retirement benefits to driving licenses and startup grants without having to dig through the bureaucratic swamp that is modern Britain. According to the government, some public sector call centers handle around 100,000 calls a day, which helps explain why ministers are suddenly very enthusiastic about citizens talking to software instead. Technology Secretary Liz Kendall said people fed up with being stuck on hold should not have to spend hours wading through online guidance either, which sounds suspiciously like somebody inside government has finally used GOV.UK. "For too long, navigating government has felt like a full-time job," she said. "Whether you're a parent trying to find out what childcare you're entitled to, a first-time buyer working out which schemes you can access, or someone approaching retirement, you shouldn't have to spend time trawling through hundreds of web pages to get a straight answer." The rollout comes just months after polling showed plenty of Brits are already uneasy about AI spreading through public services. Concerns ranged from privacy and job losses to fears that dealing with the government will eventually mean getting stuck in an automated support maze when something important goes wrong. The government said human support will still be available alongside the chatbot, at least for the time being. 
Ministers are keen to stress that GOV.UK Chat is not deciding who gets benefits or owes tax. Right now, the system mostly pulls together existing guidance, calculators, and links from across GOV.UK rather than making decisions itself. Given Whitehall's uneven history with large technology projects, that's probably a wise decision. Still, it is not hard to see where this is heading. Today, the chatbot helps you find childcare support. A few years from now, it will probably be explaining why an algorithm flagged your wheelie bin for suspicious behavior. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240975&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240975&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240578</guid>
        <link>https://www.theregister.com/security/2026/05/15/mps-want-social-media-treated-more-like-unsafe-toys-than-harmless-apps/5240578</link>
        <pubDate>Fri, 15 May 2026 10:33:00 +0200</pubDate>
        <title>MPs want social media treated more like unsafe toys than harmless apps</title>
        <description><![CDATA[ Parliamentary committee tells ministers online safety regime is failing children and warns 'no action is not an option' ]]></description>
        <category>security</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 12:24:34 +0000</dc:modified>
                <content:encoded><![CDATA[ British MPs are urging the government to tighten online safety laws, arguing social media companies should face the same kind of scrutiny as other products linked to serious harm. In a letter to Liz Kendall and Kanishka Narayan, shared with The Register, the UK's Science, Innovation and Technology Committee said there is now "strong and consistent evidence" linking social media use to harms affecting young people and warned that "no action is not an option." The committee, chaired by Chi Onwurah, said the current system leaves social media companies free to grow their youth user bases while avoiding meaningful responsibility for the subsequent fallout. "The status quo, where social media companies are neither accountable nor responsible for preventing harms, isn't acceptable," Onwurah said. "If any other consumer product caused these harms, it would've been recalled or changed." The intervention forms part of the government's "Growing up in the online world" consultation and follows a March evidence session examining arguments for and against restricting social media access for under-16s. The committee said it heard evidence from clinicians, bereaved parents, academics, child safety groups, and experts studying Australia's social media age limits, as well as accounts from young people and families concerned about harmful content and the effect social media is having on children's wellbeing. While the MPs stopped short of explicitly endorsing a blanket social media ban for teenagers, the letter makes clear the committee thinks ministers have spent too long relying on voluntary action from platforms whose business models still reward engagement above pretty much everything else. 
The committee said existing age restrictions should be properly enforced using "effective and privacy-preserving" age verification systems – rather than checks that can be bypassed by a drawn-on mustache – and called for stronger legal obligations requiring companies to filter illegal content and to block children from viewing harmful material. The letter also revisits the committee's earlier concerns about recommendation algorithms and how platforms deal with harmful and illegal posts, areas where MPs say previous proposals for reform went nowhere. MPs are now urging ministers to revisit those recommendations and bring forward fresh online safety legislation in the next parliamentary session. Particular attention was paid to algorithms and addictive design features. The committee argued that infinite scrolling and similar engagement mechanics should be designed out of platforms entirely, and warned that social media companies cannot keep pretending they are passive hosts while their recommendation systems actively shape what users see. The letter also warned that gaps in the UK's Online Safety Act mean some AI chatbots operating on closed databases currently fall outside the regime, something MPs said must be fixed before the next generation of online platforms disappears into yet another regulatory blind spot. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240624&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240624&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5239406</guid>
        <link>https://www.theregister.com/systems/2026/05/15/on-call-techie-decided-job-was-done-and-hit-the-bottle-just-before-his-pager-went-off/5239406</link>
        <pubDate>Fri, 15 May 2026 08:30:00 +0200</pubDate>
        <title>On-call techie decided job was done and hit the bottle – just before his pager went off</title>
        <description><![CDATA[ Lazy weekend of Grand Prix fun turned into a terrifying all-nighter ]]></description>
        <category>systems</category>
                <lab:kicker><![CDATA[ Systems ]]></lab:kicker>
                <dc:modified>Wed, 13 May 2026 09:55:15 +0000</dc:modified>
                <content:encoded><![CDATA[ ON CALL Welcome to another installment of On Call, The Register's weekly reader-contributed column that celebrates the IT professionals who put their lives on pause to provide tech support at all hours. This week, meet a reader we'll Regomize as "Jemaine." In the early 1990s, he found himself in Hong Kong working as a database specialist on VAX/VMS systems. "We'd built a billing application for a telco client in Macau, and it had been running happily for some time," he told On Call. By the time the system needed its first major OS upgrade, Jemaine was therefore happy for the local crew to handle the job. His client had other ideas and, despite also arranging for two DBAs to be present during the upgrade, insisted he show up. This was not a hardship because the job coincided with the Macau Grand Prix and Jemaine wasn't required to be on site. The client had therefore provided him with a hotel room that, as luck would have it, had a view of the track! "A couple of friends ended up crashing my room, and we spent the weekend watching insane drivers hurl cars around an absurdly tight street circuit," Jemaine admitted. The client never called or paged, so after the race Jemaine was confident the upgrade was going well. He and his friends therefore consumed "several bottles of rich Portuguese red wine" and ordered a sumptuous meal. "Dessert had just arrived when my pager went off," he told On Call. Jemaine poured himself into a cab to his client's office and found a situation he described as "vague but clearly serious" because the billing application wouldn't start. "Judging by the silence and the stoic expressions, everyone was quietly panicking," Jemaine wrote. He soon learned that the client had already tried to fix the app by reinstalling the OS twice and had now decided the database was the source of the problem. 
Jemaine was told to wait while the DBAs reinstalled the database, which "gave me time to sit in a back room and sober up slightly," he admitted to On Call. The database rebuild finished at about 2 am, but the application still refused to start. The client then turned to Jemaine. "I was summoned and interrogated by the systems team," he said, and ran a quick check that showed the database was perfectly healthy – but the batch scheduler wasn't running. To probe that problem, Jemaine asked to speak with the lead developer – who, it turned out, was not on site. "An urgent page was sent, and fortunately he called back quickly. His suggestion was to step through the code. This meant compiling a large COBOL program I'd never seen before in DEBUG mode, then single-stepping through it over the phone with the developer." By now, an increasingly anxious semicircle of client staff was watching Jemaine's every move, and he felt like they were silently shifting blame in his direction. "At around 4 am, we found the failure point: batch queue submission. The call was returning a null error code. The developer was baffled." "I reached for the physical manual to see what the function actually did," Jemaine wrote. "And then, for reasons I still credit to the Portuguese wine gods, I asked a simple question: 'What account did you test this under?'" The developer immediately replied: "Administrator." Jemaine asked the OS upgrade team to run the application with administrator privileges, and it immediately worked. "The OS upgrade had introduced a new permission requirement for submitting jobs to the batch queue," Jemaine told On Call. So this was very much not his problem, and he was able to excuse himself and stagger home as the Sun started to rise. "Nobody from the company ever mentioned the incident to me again," he told On Call. "And I can't remember the name of the wine we were drinking." Have you been on call, decided nothing could possibly go wrong, and then been caught out? 
If so, click here to send On Call an email so we can tell your story on a future Friday. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5239424&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5239424&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240901</guid>
        <link>https://www.theregister.com/off-prem/2026/05/15/aws-racks-m3-ultra-macs-that-boast-specs-you-cant-currently-buy/5240901</link>
        <pubDate>Fri, 15 May 2026 07:39:33 +0200</pubDate>
        <title>AWS racks M3 Ultra Macs that boast specs you can’t currently buy</title>
        <description><![CDATA[ Manages to get its hands on some Mac Studio machines before the OpenClaw machine grabs them ]]></description>
        <category>off-prem</category>
                <lab:kicker><![CDATA[ Off-Prem ]]></lab:kicker>
                <content:encoded><![CDATA[ Amazon Web Services has done something many others can’t achieve: buy a bunch of Apple’s Mac Studio computers. Mac Studio is Apple’s workstation-grade machine and has been hard to find in recent weeks as Cupertino struggles to source enough RAM to fill them, and AI enthusiasts snap up stock to run tools like OpenClaw. At the time of writing, Apple advises buyers they’ll need to wait nine or ten weeks for a Mac Studio to arrive. The cloudy Macs AWS has racked and stacked pack Apple’s M3 Ultra SoC, Cupertino’s most powerful chip. Apple currently sells the Mac Studio with up to 96GB of RAM. AWS on Thursday started offering a cloudy M3 Ultra with 256GB of unified memory, a configuration The Register did not see as an option on Apple.com while preparing this article. The cloudy M3 Ultra machines run on actual Mac Studios packing a 28-core CPU, 60-core GPU, and 32-core Neural Engine. At the time of writing, AWS hadn’t updated its list of EC2 instance types to include the new M3 instances, so we can’t tell you what they’ll cost or whether the cloud giant has departed from its past practice of renting bare metal machines rather than macOS VMs. Apple allows users to create and run macOS virtual machines, but only on Apple hardware, and permits just two VMs per host. Cupertino also restricts use of VMs to four purposes: software development; testing during software development; using macOS Server; and personal, non-commercial use. AWS recommends its cloudy Macs as an ideal platform to build and test apps for all of Apple’s operating systems – even the visionOS that powers its unloved Vision Pro VR goggles. Amazon’s M3 Ultra Mac Studios only made it into two regions – US East and US West (Oregon) – so users elsewhere who fancy a cloudy Mac but need lower latency will have to endure the very on-prem experience of waiting for hardware to show up. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240921&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240921&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240874</guid>
        <link>https://www.theregister.com/systems/2026/05/15/possible-samsung-strike-puts-even-more-pressure-on-memory-pricing/5240874</link>
        <pubDate>Fri, 15 May 2026 04:43:37 +0200</pubDate>
        <title>Possible Samsung strike puts even more pressure on memory pricing</title>
        <description><![CDATA[ As a senior policymaker ponders whether all South Koreans should enjoy an ‘AI dividend’ ]]></description>
        <category>systems</category>
                <lab:kicker><![CDATA[ Systems ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 03:11:09 +0000</dc:modified>
                <content:encoded><![CDATA[ RAM prices have risen after negotiations between Samsung and a union representing many of its workers collapsed – and the union has now called for a lengthy strike to start next week. The National Samsung Electronics Union (NSEU) has noticed the extraordinary profits the Korean giant is making thanks to the high price of RAM, and wants the company to boost members’ pay with bonuses tied to profits. Talks on that idea have stalled, and pointing out that Samsung pays its memory workers less than their peers at SK Hynix earn hasn’t found a receptive ear. The union therefore plans to start an 18-day strike next week. If the industrial action goes ahead, it has the potential to disrupt memory production, which would mean further shortages at a time when DRAM is already expensive and hard to acquire due to rampant demand for AI infrastructure. Memory prices have therefore spiked in the last 72 hours – which, ironically, will only increase Samsung’s profits even more. The union has accused Samsung of not taking its arguments seriously, and South Korea’s government has stepped in with attempts to bring the two parties to the table for fresh talks that lawmakers hope will resolve the situation because The Spice Must Flow. Or maybe The RAM Must Roll. Samsung recently posted almost $40 billion in profit for a single quarter, thanks largely to memory sales. That enormous sum, and others like it reported by Korean companies that sell memory and other products in demand from AI builders, caught the attention of Yong-Beom Kim, South Korea’s Chief Presidential Secretary for Policy – a ministerial role. Using his personal Facebook page, Kim suggested funneling a portion of AI profits into a “national dividend fund” that can be used to improve South Korea’s long-term prospects. His post mentions Norway’s sovereign wealth fund, which famously siphoned off revenue from oil sales and invested it in shares to create assets worth over $2 trillion. 
Vendors often tell The Register “data is the new oil” so maybe Kim is on to something – although the metaphor may not work well when one considers current events in the Strait of Hormuz and their effect on the world. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240886&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240886&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240821</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/15/cerebras-wafer-scale-ai-bet-delivers-blockbuster-ipo/5240821</link>
        <pubDate>Fri, 15 May 2026 01:02:50 +0200</pubDate>
        <title>Cerebras risked it all on dinner plate-sized AI accelerators a decade ago. Today it's worth $66B</title>
        <description><![CDATA[ Here's a look at the tech powering the first big IPO of 2026 ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI+ML ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 09:15:37 +0000</dc:modified>
                <content:encoded><![CDATA[ Cerebras Systems has done what many chip startups aspire to but few ever achieve. On Thursday, the long-time Nvidia rival raised $5.55 billion in an initial public offering (IPO), making the company worth more than $66 billion on its first day of trading. The milestone didn’t happen overnight. It took more than a decade, a radically different approach to chipmaking, and two separate attempts at an IPO to pull off. Cerebras was founded in 2015 by former SeaMicro head Andrew Feldman, and its first chips looked nothing like the GPUs or AI accelerators of the time.
The bet that put Cerebras on the map
At the time, most high-end GPUs used dies measuring roughly 800 square mm that’d been cut from a larger wafer. Eight or more of these GPUs would typically be stitched together by high-speed interconnects, like NVLink, which allowed them to pool their resources and behave like one big accelerator. Rather than cutting up a wafer into smaller chips just to reconnect them again, Cerebras figured: why not etch all that compute into a wafer-sized chip? And so the Wafer-Scale Engine (WSE), a giant chip measuring 46,225 square mm — about the size of a dinner plate — was born. Cerebras' first chips weren’t just bigger; they were purpose-built for AI training and sported a novel compute engine designed to speed up the highly sparse matrix multiply-accumulate operations common in deep learning. This hardware sparsity took advantage of the fact that large portions of a neural network’s parameters ultimately end up being zeros, allowing Cerebras to boost the effective computational output of its first-gen WSE accelerators from 2.65 16-bit petaFLOPS to 26.5. Nvidia added support for sparsity in its Ampere generation a year later, but it only worked for a specific ratio (2:4), limiting its effectiveness to select use cases. To train a model, up to 16 of these chips could be ganged together over a high-speed interconnect. 
This was kind of important too, because unlike GPUs, which stored model weights in HBM or GDDR memory, Cerebras' chips were almost entirely reliant on on-chip SRAM. Although SRAM is insanely fast, which is why it’s used for caches in basically every modern processor, it’s not particularly space efficient. While Cerebras' first wafer-scale accelerator could theoretically reach 9 petabytes per second of memory bandwidth, it was limited to just 18 GB of capacity at a time when Nvidia was already at 32 GB per GPU and about to make the leap to 40 GB or even 80 GB per chip. Still, the approach was performant enough that for its second-generation wafer-scale accelerator, launched in 2021, Cerebras doubled down on the architecture. While the WSE-2 wasn’t physically larger, the move to TSMC’s 7nm process tech allowed the company to more than double the transistor count, compute density, SRAM capacity, and bandwidth. The chips also supported larger clusters, scaling up to 192, though in practice these clusters were usually smaller at between 16 and 32 systems per site. It was also around this time that Cerebras caught the attention of United Arab Emirates-based cloud provider G42, which quickly became its largest financier. By mid-2023, the chip startup had secured orders worth $900 million for nine supercomputing sites with 36 exaFLOPS of super-sparse AI compute between them. A year later, Cerebras made the jump to TSMC’s 5nm process with the WSE-3, and while memory and bandwidth saw only modest gains, compute once again doubled, now topping 125 petaFLOPS of sparse (12.5 petaFLOPS dense) compute at 16-bit precision. Cerebras’ CS-3 systems have seen the widest deployment, and now power the majority of the Condor Galaxy cluster it built for G42, as well as several new sites across North America and Europe. 
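The 2:4 ratio Nvidia adopted can be sketched in a few lines of Python – a toy illustration of structured sparsity, not either company's actual hardware logic: in every group of four weights, only the two largest-magnitude values survive, which lets hardware skip half the multiply-accumulates.

```python
# Toy sketch of 2:4 structured sparsity pruning (illustrative only).
# In each group of four weights, keep the two with the largest
# magnitude and zero out the rest, halving the effective math.
def prune_2_4(weights):
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude weights in this group
        keep = sorted(range(len(group)),
                      key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

print(prune_2_4([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.1]))
# → [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.4, 0.0]
```

The fixed 2:4 pattern is what limits the technique: a network whose zeros don't fall two-per-four sees little benefit, which is why Cerebras' finer-grained approach claimed a much larger effective speedup.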
Cerebras' inference inflection
Up to mid-2024, Cerebras' primary focus had been on training, but then the company announced a boutique inference-as-a-service offering to rival those from competing chip startups like Groq and SambaNova. It turns out, Cerebras’ latest AI accelerators’ massive SRAM capacity not only made them potent training accelerators but also left them particularly well suited to high-speed LLM inference. In its third iteration, Cerebras' wafer-scale accelerators boasted more memory bandwidth than they could realistically use. At 21 PB/s, the chip’s memory is nearly 1000x faster than Nvidia’s new Rubin GPUs. This, along with a dash of speculative decoding, allowed Cerebras to generate tokens far faster than any GPU-based system of the time. Even today, Cerebras routinely ranks among the fastest inference providers in the world. According to Artificial Analysis, Cerebras' kit can churn out more than 2,200 tokens a second when running GPT-OSS 120B High, 2.8x faster than the next-closest GPU cloud, Fireworks. Cerebras didn’t know it at the time, but its inference platform would be a much bigger business than anyone had expected, and in September 2024, the company submitted its S-1 filing to the SEC to take the company public. Almost exactly a year later, Feldman quietly pulled the S-1, delaying the IPO. His reasons? The company’s initial S-1 filing was rather concerning, as it showed G42 was responsible for 87 percent of its revenues. But in the year since launching its inference platform, Cerebras had racked up several high-profile customer wins from big names like Alphasense, AWS, Cognition, Meta, Mistral AI, Notion, and Perplexity. Feldman explained that the initial S-1 didn’t yet show the financial results of this growth. The company believed it would have a better story to tell investors further down the road. Cerebras' inference platform has only grown since then. 
The company has steadily expanded its footprint while announcing deeper relationships with AWS and adding OpenAI as a customer. On Thursday, the startup officially joined the NASDAQ under the ticker CBRS, having raised $5.5 billion in the process. Shares skyrocketed nearly 70 percent on the first day of trading, as investors poured their money into a new way to play the AI boom. An IPO is something many startups aspire to but few, especially in the cutthroat world of semiconductors, ever accomplish. What happens now From a technical perspective, Cerebras is overdue for a refresh. The WSE-3 accelerators that pushed it over the IPO finish line are getting rather long in the tooth, and the architectural lead afforded by its SRAM-heavy design is shrinking. Nvidia’s acquihire of Groq gave Feldman’s long-time rival an SRAM-packed inference platform of its own, while others are racing to catch up. From here, we can only speculate, but we’ll hazard a guess that Cerebras' new shareholders are going to want to see new silicon sooner rather than later. Based on its existing roadmap, we expect WSE-4 will offer a sizable leap in floating point performance, though not necessarily at 16-bit precision. Much of the industry has aligned around lower-precision data types like FP8 and FP4. An exaFLOP of ultra-sparse FP4 compute wouldn’t shock us in the least. How useful sparsity would actually be is another matter: LLM inference hasn’t historically benefited much from it, but that’s never stopped chipmakers from advertising sparse FLOPS anyway. We also expect to see Cerebras pack more SRAM into its next wafer-scale compute platform, possibly using TSMC’s 3D chip stacking tech to do it. The WSE-3’s 44GB of SRAM capacity remains a limiting factor for what models it can and can’t serve efficiently. 
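That 44GB figure makes the capacity arithmetic easy to sketch. Assuming, for illustration, that a model's weights must fit entirely in SRAM, the minimum chip count is just total weight bytes divided by per-chip capacity (this ignores activations, KV cache, and replication overhead):

```python
import math

WSE3_SRAM_GB = 44  # per-chip SRAM capacity cited above

def chips_to_hold(params_billions: float, bits_per_param: int) -> int:
    """Minimum WSE-3 count needed just to hold a model's weights in
    on-chip SRAM. Real deployments need more for activations and
    KV cache, or fewer if parameters are pruned away."""
    weight_gb = params_billions * bits_per_param / 8
    return math.ceil(weight_gb / WSE3_SRAM_GB)

# A one-trillion-parameter model at a few (assumed) precisions:
for bits in (4, 8, 16):
    print(f"FP{bits}: {chips_to_hold(1000, bits)} chips")
```

Run with a trillion parameters, the sketch spans roughly a dozen chips at FP4 to several dozen at FP16, which is why precision and pruning choices matter so much on this architecture.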
A trillion-parameter model like Kimi K2 would require somewhere between 12 and 48 of Cerebras' WSE-3 accelerators, depending on how the model weights are stored and how many parameters have been pruned, and so any increase in SRAM capacity would go a long way toward improving the efficiency of its accelerators. More collaborations Alongside new silicon, we can also expect to see more collaborations akin to Cerebras' tie-up with AWS. Earlier this year, AWS announced it would combine its Trainium3 AI accelerators with Cerebras' WSE-3-based systems to speed up its inference platform in much the same way Nvidia is doing with Groq’s accelerators. Cerebras could certainly do something similar with AMD or any other chipmaker. In this sense, Cerebras is in a position to offer its chips as decode accelerators, offloading the bandwidth-intensive parts of the inference pipeline, while the partner's silicon handles the compute-heavy prompt-processing side of the equation. However Cerebras frames its next collab, its shareholders are going to expect growth. And as the saying goes, the enemy of my enemy is my friend. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240836&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240836&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240799</guid>
        <link>https://www.theregister.com/cyber-crime/2026/05/14/security-pros-doubt-canvas-attackers-really-deleted-stolen-student-data/5240799</link>
        <pubDate>Fri, 15 May 2026 00:42:11 +0200</pubDate>
        <title>Nobody believes the 'criminals and scumbags' who hacked Canvas really deleted stolen student data</title>
        <description><![CDATA[ Other than Instructure execs - maybe? ]]></description>
        <category>cyber-crime</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 06:06:44 +0000</dc:modified>
                <content:encoded><![CDATA[ FEATURE When Instructure “reached an agreement” this week with ShinyHunters, the data theft and extortion crew that claimed to have stolen data tied to 275 million students, teachers, and staff, the education tech giant assured Canvas users that their private chats and email addresses would not turn up on a dark-web marketplace, and that they would not be extorted over the incident. “We received digital confirmation of data destruction (shred logs),” Instructure assured the nearly 9,000 affected universities and K-12 schools. “We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.” Not a single responder that The Register spoke with believes this is true. “Do I believe they deleted the data? No. They're criminals and scumbags,” Recorded Future threat intelligence analyst Allan Liska, aka the Ransomware Sommelier, told us. “But, this is part of what Max Smeets calls ‘The Ransomware Trust Paradox,’” he added. “Ransomware groups have to, minimally, not post data they claimed to have deleted or no one will pay them in the future, but this is done knowing that the data is likely not deleted.” Halcyon Ransomware Research Center SVP Cynthia Kaiser, who previously spent two decades at the FBI, said she doesn’t think that anyone who studies ransomware groups’ operations believes the gang actually destroyed the stolen files. “‘We destroyed the data’ is a standard line from extortion groups once a payment is made or negotiations conclude, but time after time it has proven untrue,” Kaiser told The Register. “ShinyHunters in particular has a documented history of recycling, reselling, and re-leveraging stolen data across campaigns – data they claimed was contained from earlier intrusions has resurfaced on criminal forums months and years later.” Kaiser also doesn’t think this is the last threat that the schools will face from the Canvas breach. 
“Halcyon expects targeted phishing waves against staff, students, and parents over the next six to 12 months using leaked names, email addresses, and Canvas chat context to make the lures convincing,” she said. To be clear: Instructure execs never directly said the company paid the ransom, and we don’t know the exact amount of money the criminals demanded from the digital learning biz. We do know, however, that “reached an agreement” is corporate-speak for “the victim paid up.” Doug Thompson, chief education architect at cybersecurity firm Tanium, estimates the figure sits somewhere between $5 million and $30 million. Meanwhile, this latest extortion attack illustrates the impossible choice facing organizations entrusted with protecting people’s data when digital thieves breach their networks and steal sensitive information. “The FBI says don’t pay,” Thompson told The Register. “But the operational reality at 3 a.m. during finals week or enrollment season can push institutions toward a very different calculation. Until that incentive structure changes, education is likely to remain unusually vulnerable to extortion pressure.” To pay, or not to pay? The US federal government, law enforcement agencies, and private-sector threat intelligence analysts all advise victims not to pay a ransom. “Paying ransoms rewards and incentivizes the criminals, funding their search for new victims, and I’ve long advocated before for a ban on ransomware payments,” Emsisoft threat analyst Luke Connolly told us. “But in the absence of regulation applying to all organizations, the stark reality is that Instructure faced a crisis, and they negotiated to try to minimize risk and harm.” No company wants to pay a ransom to its attackers, and most say they won’t – at least in principle – because they don’t want to fund criminal operations and incentivize the crooks. There’s also no guarantee that paying will result in the return of their data or prevent additional extortion attempts. 
CrowdStrike surveyed 1,100 global security leaders last summer, and of the 78 percent who said they experienced a ransomware attack in the past year, 83 percent of those that paid ransoms were attacked again. Plus 93 percent lost data regardless of payment. While data suggests that fewer organizations are paying criminals’ ransom demands - Chainalysis found the percentage of paying victims in 2025 dropped to an all-time low of 28 percent, despite attacks hitting record highs - when faced with extortion or a ransomware infection, the "to pay or not to pay" debate becomes much more complicated. “Most organizations still say publicly that they won't pay, and many genuinely don't, but when the alternative is mass downstream harm to students, parents, and thousands of customer institutions, the calculus shifts,” Kaiser said. “Pay-or-leak groups like ShinyHunters specifically engineer that calculus by creating intense financial and reputational pressure, and when demands go unmet, they escalate to direct harassment of victim companies, employees, and clients.” ShinyHunters did just that. The crew initially compromised Instructure in late April, and after the initial pay-or-leak deadline passed on May 6, ShinyHunters switched tactics to school-by-school extortion. They injected a ransom message into about 330 Canvas school login portals, causing Instructure to take the platform offline for a day - during final exams and Advanced Placement testing for many. Other ransomware scum have gone to horrifying extremes, posting pictures and addresses of preschool children in an effort to get a payday, leaking cancer patients’ nude photos and threatening them with swatting attacks. Mandiant Consulting CTO Charles Carmakal previously told The Register that ransomware infections have morphed into "psychological attacks” with crooks SIM swapping executives’ kids to pressure their parents into paying. 
Calculating risk In addition to responding to criminals directly harassing their students, patients, customers and employees, victim organizations also have to take into account potential lawsuits if the crooks dump individuals’ personal or health data, and the reputational hit from seeing all of this protected information published online. The decision about what to do in a ransomware attack revolves around risk reduction, Liska said. “Not paying a ransom means an increased risk of data exposure, which in this case could cause serious harm,” he told us. “While there is no good decision in most ransomware negotiations, the idea is to protect as many people as possible and that may mean that paying is the least bad option.” While he wasn’t involved in responding to or investigating the Instructure case, “protecting children's data is absolutely a critical factor in these types of decisions, especially when the attacks originate from one of the groups associated with The Com,” Liska added. The Com, a loosely knit group of primarily English speakers spanning several interconnected networks of hackers, SIM swappers, and extortionists such as ShinyHunters and Scattered Lapsus$ Hunters, has been known to blackmail kids and teens into carrying out shootings, stabbings, and other real-life criminal acts. “These groups are known to coerce victims using threats of physical harm, including bricking and swatting," he said. "Not paying may have increased the risk of serious harm to the children whose data was exposed.” A representative of ShinyHunters contacted The Register to "deny any and all association, affiliation, and/or linkage with 'The Com' including 'Scattered Lapsus Hunters'." The rep said, "There is no actual concrete evidence to support that we are associated, affiliated, or linked to the aforementioned. 
These are baseless allegations and industry propaganda surrounding 'The Com.'" The Shiny one admitted that some of their crew's tactics are similar to those the other gangs use but suggested it's lazy to assume a link. "If China or North Korea used vishing to infiltrate organizations' networks, would they also immediately become associated with 'The Com'?" the representative asked. Ed sector 'more likely to pay' Instructure’s intrusion follows several other high-profile attacks against education-sector software providers. In December 2024, PowerSchool suffered a breach, affecting tens of millions of students. The company reportedly paid about $2.85 million in bitcoin in exchange for a video supposedly showing the attackers destroying the data. But about five months later, in May 2025, the ed-tech provider’s school district customers received individual extortion threats from either the same ransomware crew that hit PowerSchool or someone connected to the crooks. Earlier this year, ShinyHunters claimed it stole data from K-12 software provider Infinite Campus as part of a broader wave of Salesforce-related intrusions. “Education keeps emerging as one of the sectors where organizations are still more likely to pay under pressure,” Thompson said. Students’ – especially minors’ – data contains highly sensitive personal details, making it an attractive target for attackers, but the trend is also driven in part by market pressure and economics. It’s costly and inconvenient for schools to switch learning management systems, and they are typically locked into multi-year contracts with these software vendors, according to Thompson. “The other issue is concentration,” he said. “A relatively small number of vendors hold data for enormous portions of the education system. PowerSchool, Infinite Campus, Canvas, Blackboard; those four hold records on something close to every American student, and hackers know it. 
Three of the four have been breached at a multi-million-record scale in the last 18 months.” Thompson said he expects additional attacks against major education platforms to follow. “The economics are good. Instructure paid. PowerSchool paid last year. Every other ed-tech vendor's board just had a conversation about what their number would be,” he told us. “The pattern is established.” According to Connolly, the universities and K-12 schools affected by the Canvas hack shouldn’t consider their data safe, regardless of Instructure’s assurances or the crooks' promises to delete it. “There will be future attacks, without a doubt.” ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240833&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240833&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240771</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/14/ontario-auditors-find-doctors-ai-note-takers-routinely-blow-basic-facts/5240771</link>
        <pubDate>Thu, 14 May 2026 22:50:05 +0200</pubDate>
        <title>Sick and wrong: Ontario auditors find doctors' AI note takers routinely blow basic facts</title>
        <description><![CDATA[ 60% of evaluated AI Scribe systems mixed up prescribed drugs in patient notes, auditors say ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <content:encoded><![CDATA[ The AI systems approved for Ontario healthcare providers routinely missed critical details, inserted incorrect information, and hallucinated content that neither patients nor clinicians mentioned, according to a provincial audit of 20 approved vendors’ systems. The findings come from the Office of the Auditor General of Ontario, Canada, and are included in a larger report about the state of AI usage by public services in the province. They specifically address the AI Scribe program, which the Ontario Ministry of Health initiated for physicians, nurse practitioners, and other healthcare professionals across the broader health sector. As part of the procurement process, officials conducted evaluations using simulated doctor-patient recordings. Medical professionals then reviewed the original recordings alongside the AI-generated notes to evaluate their accuracy. What they found was, frankly, shocking for anyone concerned about the accuracy of AI in critical situations. Nine out of 20 AI systems reportedly “fabricated information and made suggestions to patients' treatment plans” that weren’t discussed in the recordings. According to the report, evaluators spotted potentially devastating incorrect information in the sample reports, such as notes stating that no masses were found, or that patients were anxious, even though these things were never discussed in the recordings. Twelve of the 20 systems evaluated inserted incorrect drug information into patient notes, while 17 of the systems “missed key details about the patients’ mental health issues” that were discussed in the recordings. Six of the systems “missed the patients’ mental health issues fully or partially or were missing key details,” per the report. 
OntarioMD, a group that offers support for physicians in adopting new technologies and was involved in the AI Scribe procurement process, has recommended that doctors manually review their AI notes for accuracy, but the report notes there’s no mandatory attestation feature in any of the AI Scribe-approved systems. Bad evaluations don’t help, either AI systems making mistakes isn’t exactly shocking. As we’ve reported previously, consumer-focused AI has a tendency to provide bad medical information to users, and some studies have found large language models failed to produce appropriate differential diagnoses in roughly 80 percent of tested cases. But the tools evaluated here are for doctors, not consumers, and such poor performance necessitates explanation. A good portion of the report blames how the systems were evaluated. According to the report, the weight given to various categories of AI Scribe performance was wonky. While 30 percent of a platform’s evaluation score depended solely on whether a vendor had a domestic presence in Ontario, the accuracy of medical notes contributed only 4 percent to the total score. Bias controls accounted for only 2 percent of the total evaluation score; threat, risk, and privacy assessments counted for another 2 percent; and SOC 2 Type 2 compliance contributed another 4 percent. In other words, criteria tied to accuracy, bias controls, and key security and privacy safeguards made up only a small portion of the total evaluation score for the AI Scribe systems. “Inaccurate weightings could result in the selection of vendors whose AI tools may produce inaccurate or biased medical records or lack adequate protection to safeguard sensitive personal health information,” the report said of the scoring regime. The Register reached out to the Ontario Health Ministry for its take on the report, and whether it was going to follow the report's recommendations for the AI Scribe program, but we didn’t immediately hear back. 
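To see how such a rubric can go wrong, here is a toy weighted-score calculation using the weights the report cites (30 percent local presence, 4 percent note accuracy, 2 percent bias controls, 2 percent threat/risk/privacy, 4 percent SOC 2). The two vendors, their raw category scores, and the catch-all "other" bucket covering the unitemized 58 percent are invented for illustration:

```python
# Category weights cited by the auditors; "other" stands in for all
# the criteria the report doesn't itemize (weights sum to 1.0).
WEIGHTS = {
    "local_presence": 0.30,
    "note_accuracy": 0.04,
    "bias_controls": 0.02,
    "threat_risk_privacy": 0.02,
    "soc2": 0.04,
    "other": 0.58,
}

def weighted_score(raw: dict) -> float:
    """Combine per-category raw scores (0-100) into a weighted total."""
    return sum(WEIGHTS[cat] * raw[cat] for cat in WEIGHTS)

# Hypothetical vendors: one accurate but with no Ontario presence,
# one local but sloppy on accuracy, bias, and security.
accurate_vendor = {"local_presence": 0, "note_accuracy": 95,
                   "bias_controls": 90, "threat_risk_privacy": 90,
                   "soc2": 90, "other": 70}
sloppy_vendor = {"local_presence": 100, "note_accuracy": 30,
                 "bias_controls": 20, "threat_risk_privacy": 20,
                 "soc2": 40, "other": 70}

print(f"accurate but remote: {weighted_score(accurate_vendor):.1f}")  # 51.6
print(f"local but sloppy:    {weighted_score(sloppy_vendor):.1f}")    # 74.2
```

Under these weights, the vendor that fabricates notes but rents an office in Ontario comfortably outscores the accurate one, which is exactly the failure mode the auditors warn about.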
A spokesperson for the Ministry told the CBC on Wednesday that more than 5,000 physicians in Ontario are participating in the AI Scribe program, and there have been no known reports of patient harm associated with the technology. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240796&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240796&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240748</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/14/anthropic-tosses-agents-into-the-api-billing-pool/5240748</link>
        <pubDate>Thu, 14 May 2026 22:03:37 +0200</pubDate>
        <title>Anthropic tosses agents into the API billing pool</title>
        <description><![CDATA[ Limits Claude subscriptions to interactive use ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <content:encoded><![CDATA[ Anthropic has further restricted access to its Claude model family while framing the limitation as responsive customer service. "We've heard your questions about SDK and claude -p usage sharing your subscription rate limits with Claude Code and chat," the company said in a social media post. "Starting June 15, programmatic usage gets its own dedicated budget instead. Your subscription limits don't change, they're now reserved for interactive use." Subscription usage applies only to interactive use of Claude Code, Claude Cowork, and Claude.ai. Interactive mode involves a user typing a prompt and receiving a response. There's a human in the loop. Programmatic interaction, whether via Anthropic's own Agent SDK, headless mode, or a third-party tool, will be counted against a separate usage pool funded by a credit equal to the customer's subscription fee. So a Pro subscriber paying $20 per month will have two token pools – one for interactive usage and one for programmatic usage, which the subscriber must claim to obtain. But the programmatic credit is drawn down at costlier API rates. And if it is exhausted, spillover programmatic tokens get billed at (occasionally discounted) API rates through "extra usage," a separate token allotment that, if enabled, exists mainly as a way to avoid a sudden service cutoff and to set a limit on spending. The questions from users arose because Anthropic's prior efforts to prevent customers from gorging on tokens at the all-you-can-eat subscription trough haven't been comprehensive. The AI biz, mindful that it will need to show a profit eventually, has been trying to push customers toward its metered API and to constrain consumption of flat-rate subscription tokens. Microsoft's GitHub Copilot has embarked on a similar transition. 
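The arrangement boils down to simple arithmetic: programmatic calls draw down a credit equal to the monthly fee, metered at API prices, and anything beyond spills into "extra usage." A toy model of that split; the $5-per-million-token rate is invented for illustration and is not Anthropic's price list:

```python
def programmatic_bill(monthly_fee: float,
                      tokens_used_m: float,
                      api_rate_per_m: float) -> tuple[float, float]:
    """Return (credit_consumed, extra_usage_charge) for one month.

    The programmatic pool is pre-funded with a credit equal to the
    subscription fee; usage is metered against it at API rates, and
    anything beyond spills into separately billed "extra usage."
    Unused credit does not roll over."""
    cost = tokens_used_m * api_rate_per_m
    credit_consumed = min(cost, monthly_fee)
    extra = max(0.0, cost - monthly_fee)
    return credit_consumed, extra

# Pro subscriber at $20/month, hypothetical $5 per million tokens:
print(programmatic_bill(20.0, 2.0, 5.0))  # under the credit: (10.0, 0.0)
print(programmatic_bill(20.0, 6.0, 5.0))  # over the credit:  (20.0, 10.0)
```

Note the asymmetry the arithmetic exposes: in the first case the unused $10 of credit simply evaporates at month's end, while in the second the subscriber owes $10 on top of the subscription.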
Anthropic initially did so by disallowing the use of Claude subscriptions with third-party harnesses – applications like OpenCode that coordinate communication with the backend model. That policy dates back to February 2024, but Anthropic seldom enforced it until earlier this year when demand for AI inference began to outpace the company's Claude supply. In February this year, growing interest in OpenClaw, an open source agent platform that encourages long-running, token-burning tasks, prompted Anthropic to get serious about its ban on using third-party harnesses with Claude subscriptions. But customers wondered about third-party applications built with Anthropic’s own Agent SDK, which hadn't been explicitly disallowed, and about the use of headless mode (claude -p), a way to have Claude work on a task without interaction. They now have their answer. It's worth noting that, if the programmatic credit is not exhausted, it doesn't roll over. It gets lost, or you might say, Anthropic reclaims it. The company refers to the credit using a dollar sign, but it's not redeemable currency. It has already been spent. So customers seeking to get the full value from the new arrangement need to calibrate their programmatic usage to consume the full credit every month, no more and no less. Anthropic's recently announced deal with SpaceX to obtain the compute capacity of its Colossus 1 datacenter, along with its removal of peak-hours usage restrictions, raised hopes among developers that more tolerant usage policies might return. This latest subscription limitation shows that's not happening. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240793&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240793&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240682</guid>
        <link>https://www.theregister.com/offbeat/2026/05/14/grad-to-be-turns-graduation-cap-into-rust-powered-light-show/5240682</link>
        <pubDate>Thu, 14 May 2026 19:30:19 +0200</pubDate>
        <title>Grad-to-be turns graduation cap into Rust-powered light show</title>
        <description><![CDATA[ Eric Park tells us he doesn't plan to wear his modified cap to commencement, but his code's available for anyone with no such qualms and an upcoming ceremony ]]></description>
        <category>offbeat</category>
                <lab:kicker><![CDATA[ Offbeat ]]></lab:kicker>
                <content:encoded><![CDATA[ College graduation season has begun in the United States, and one soon-to-graduate computer science student has decided to decorate his graduation cap in the way any good maker would: by writing some Rust code and wiring it up with LEDs that light up when the tassel moves from right to left. Eric Park, due to walk in his commencement ceremony on Friday at Purdue University, published a blog post this week explaining the project, which he said he undertook as an alternative to building a contraption that would set his mortarboard aflame when the tassel was moved. Unfortunately for Park, many American universities (and some in other countries like the UK) require college students who want to walk in commencement ceremonies to rent their gowns and mortarboards. It’s not uncommon for students to be charged a ludicrous amount to rent the set, and in many cases, rental companies require students to return their mortarboards and gowns alike, as is the case for Park. “The rental agreements clause 98.c.2 probably forbids [burning a rented mortarboard], and I don’t think Purdue would like it very much if I set the stage on fire,” Park said in the post. An easier-to-remove version consisting of LED strips, a reed switch, and a magnet, controlled by a super-tiny Digispark ATtiny85, presented itself as the alternative. The result, as demonstrated in a YouTube video, is a mortarboard that is all aglow, and flameless, as soon as the reed switch is activated by the magnet placed on the left-hand side of the hat. “The entire thing was stuck on with double-sided tape and Kapton tape, and I tried a small patch just to make sure it wouldn't rip up the fabric,” Park told The Register in an email. The lightweight and easy-to-remove design also necessitates a compact power source. Unfortunately, Park had to settle for an external battery pack carried in the pocket to power the unit. 
“It was going to be all self-contained with a 21700 cell, but I didn't have a boost converter on hand so I decided to make do with the power bank solution,” the soon-to-be graduate told us. According to Park, the build was relatively quick: Hardware took a bit more than three hours, and that was largely because he no longer had access to a full lab and was stuck working with his home toolset. Writing the code took a couple of hours, which Park attributed to his insistence on using Rust. “It probably would’ve been easier if I didn’t use Rust and just used the Arduino libraries, or if I used a different board,” Park explained in his blog post. “But I was really married to this blog post title … and I was pretty sure an ESP32 board would’ve been overkill and wouldn’t have stayed on the cap properly.” For those who haven’t clicked through to read his blog post, its headline is simply “my graduation cap runs Rust.” That’s a pretty solid title - at the very least, it’s going to get people to read it, and read they have. “I've read through the comments on Hacker News and I'm happy and thankful about all of the positive comments,” Park told us. “It's great to see a silly but fun project like this reach a wide audience.” “I particularly liked the guy that was reminded why he got into this field through my project,” Park added. So, will Purdue students graduating alongside Park get treated to a surprise light show? Sadly, no - he said in the blog post, and reiterated to us, that he’s probably not going to wear it during the ceremony. “I thought about it but decided it looks pretty tacky,” Park wrote in his blog post. “It looks like what kids would think of as a gaming PC and what boomers would think of as a seizure.” He might toss it on for photo ops after the ceremony, but that’s about it, Park told us. 
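The trigger logic itself is tiny: the reed switch closes when the tassel's magnet comes near it, and the controller lights the LEDs. A sketch of that behavior in Python for illustration only; Park's actual firmware is Rust on the ATtiny85, and the latch-on-first-trigger detail here is our assumption, not a description of his code:

```python
class TasselCap:
    """Latch the LEDs on once the reed switch has closed.

    Models the described behavior: the light show starts when the
    tassel (and its magnet) reaches the left-hand side, and a latch
    keeps it running even if the switch bounces open again.
    """
    def __init__(self) -> None:
        self.leds_on = False

    def poll(self, reed_closed: bool) -> bool:
        """One loop iteration: sample the switch, return LED state."""
        if reed_closed:
            self.leds_on = True  # latch: stay lit after first trigger
        return self.leds_on

cap = TasselCap()
states = [cap.poll(s) for s in [False, False, True, False, True]]
print(states)  # LEDs stay on from the first closure onward
```

On the real hardware the same loop would read a GPIO pin wired to the reed switch and drive the LED strip, but the latching state machine is the whole trick.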
That said, Park did publish the code on GitHub, so if some other all-but-commenced college student were to take it upon themselves to build their own copy and wear it during their ceremony, that's on them. If I were graduating, I'd consider adding some speakers to the setup and piping in some music, too. Don't come running to El Reg if such a move gets you in trouble, though: We claim no responsibility for commencement shenanigans. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240731&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240731&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240562</guid>
        <link>https://www.theregister.com/oses/2026/05/14/kde-bags-13m-as-europe-realizes-it-might-need-an-os-of-its-own/5240562</link>
        <pubDate>Thu, 14 May 2026 17:38:26 +0200</pubDate>
        <title>KDE bags €1.3M as Europe realizes it might need an OS of its own</title>
        <description><![CDATA[ Germany's Sovereign Tech Fund backs the desktop project while public sector interest in homegrown alternatives grows ]]></description>
        <category>oses</category>
                <lab:kicker><![CDATA[ OSes ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:54:19 +0000</dc:modified>
                <content:encoded><![CDATA[ The KDE project turns 30 in five months, but it already got an early birthday present: €1,285,200 from Germany's Sovereign Tech Fund. That's £1.1 million, or $1.5 million in US bucks. The KDE team already has some ideas about how it will spend it, and the project's thank-you note mentions a few. This is not the first time we have mentioned the Sovereign Tech Fund's largesse. In 2023, it gave €1 million to GNOME, and then in 2024 it funded both FreeBSD and Samba. Since then, Donald Trump has begun his second US presidency, and the push for European digital sovereignty has gained considerably more urgency – as we reported from this year's Open Source Policy Summit in Brussels. KDE Linux is the desktop project's technologically radical in-house distro, which is still in development. We have mentioned this a couple of times, when it was announced in 2024 as "Project Banana," and again in 2025, when it reached alpha. KDE Linux borrows some of its design from Valve's SteamOS 3. Both are immutable distros, based on Arch Linux, with dual Btrfs-formatted root partitions. For failover, updates alternate between the two partitions, similarly to ChromeOS (and both obviously use KDE Plasma as their desktop). This has required development work - for instance, before SteamOS, Btrfs required unique partition IDs - and for that, Valve partnered with Spanish workers' cooperative Igalia, which is also working on the Rust-based Servo web rendering engine. For that effort, last year Igalia also received STF funding. SteamOS has millions of users, and ChromeOS hundreds of millions - even if its future replacement is coming into view. The resilience of these OSes in frequent, maintenance-free use is about as well established as end-user-facing Linux gets. One could interpret the STF money as some level of endorsement of the ideas behind KDE Linux. Perhaps it will soon join this short list of European alternatives to Microsoft Windows. 
Interest in moving European organizations away from American cloud services is growing rapidly, of course. On the small end of the scale, digital artist Wimer Hazenberg recently described How I Moved My Digital Stack to Europe. Taking a broader view, earlier this week, the Financial Times reported on Life without US Tech. It describes how International Criminal Court judge Nicolas Guillou was the target of US sanctions, and found himself locked out of everything that relied on American companies. In October last year, The Register mentioned similar issues faced by ICC prosecutor Karim Khan, when reporting allegations that the ICC was kicking MS Office to the curb. (A few months ago, Microsoft conceded some "inaccuracy" from its spokesperson in that case.) It seems he was not alone. The ICC is moving to OpenDesk from German organization ZenDIS, both of which we mentioned in our report from FOSDEM on messaging systems. These are apps and suites, rather than OSes – they leave the question of the host OS open. That means organizations with large existing investment in Windows (and institutional knowledge of supporting Windows) can keep it for now, while moving to new tools. That's not quick enough for those who want to banish American OSes sooner. Last month, The Reg mentioned France's Directorate for Digital Affairs, DINUM, which is planning to adopt Linux. Some more information is emerging about how it may do it. Rather than building a whole new distro of its own – such as KDE Linux, or the Fedora-based EU OS proposal we looked at last year – DINUM is building a Nix configuration, which it can simply apply to generate a complete bespoke immutable OS image. The base image is called Sécurix. The project page describes it as an OS base for secure workstations, designed according to the ANSSI recommendations for the secure administration of information systems. As an example of how to use it, there's Bureautix. 
Rather than authenticating against complicated network directories such as LDAP or the Red Hat-backed FreeIPA, Bureautix keeps it local: user configuration is synced from servers to client machines along with the software configuration, and users sign in with a YubiKey. The names Sécurix and Bureautix are nods to the famous indomitable Gauls Astérix and Obélix, created by writer René Goscinny, who died in 1977 aged 51, and artist Albert Uderzo, who died in 2020 at 92. These ancient Gauls have outlived their creators: the latest album, Astérix in Lusitania, came out in October 2025, and this vulture recommends it. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240668&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240668&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240594</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/14/waymo-recalls-3800-cars-over-flooded-roads-software-snafu/5240594</link>
        <pubDate>Thu, 14 May 2026 17:08:33 +0200</pubDate>
        <title>Waymo recalls 3,800 robotaxis after one drove itself into a flood</title>
        <description><![CDATA[ Nothing like a partly submerged self-driving car to dampen public trust in autonomous vehicles ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:50:48 +0000</dc:modified>
                <content:encoded><![CDATA[ Waymo is recalling almost 3,800 robotaxis amid fears they may go off-script and drive into floods on high-speed roads. All 3,791 cars running Waymo’s fifth- and sixth-generation Automated Driving Systems (ADS) are being taken off the road before they potentially injure passengers. "The software may allow the vehicle to slow and then drive into standing water on higher speed roadways," Waymo said in a letter [PDF] to the National Highway Traffic Safety Administration (NHTSA) this week. "Entering a flooded roadway can cause a loss of vehicle control, increasing the risk of a crash or injury." The Alphabet-owned robotaxi biz said all affected cars received an update on April 20 that increased "weather-related constraints and updated the vehicle maps," serving as an "interim remedy" while it works on a more permanent solution. This coincided with a case in San Antonio, Texas, on April 20, in which a car was caught on video - shared with broadcaster KSAT 12 - driving into floodwater and becoming stuck. “On 4/20/2026, an unoccupied Waymo AV encountered an untraversable flooded section of a roadway that has a 40 mph speed limit,” the company wrote in one document [PDF] supporting the recall notice. “The Waymo AV detected potentially untraversable flood water and proceeded at reduced speed.” Waymo temporarily suspended its services in San Antonio as a result and started pulling cars from the city’s fleet days later. The suspension remains in place today. A spokesperson at Waymo sent a statement to The Reg to say it provides more than half a million trips every week in "some of the most challenging driving environments across the US, and safety is our primary priority. "We have identified an area of improvement regarding untraversable flooded lanes specific to higher-speed roadways, and have made the decision to file a voluntary software recall with NHTSA related to this scenario. 
We are working to implement additional software safeguards and have put mitigations in place, including refining our extreme weather operations during periods of intense rain, limiting access to areas where flash flooding might occur." The company currently operates 24/7 driverless robotaxi services in Dallas, Houston, Los Angeles, Miami, Nashville, Orlando, Phoenix, and the San Francisco Bay Area. Waymo has also set its sights on launching in London in September, its first foray outside the US, pending necessary regulatory changes that would allow driverless cars to operate in the city. Test cars have already been spotted on the capital’s streets with trained experts behind the wheel, should any of the cars encounter issues, much like the deal Waymo agreed to in New York when the state handed its testing license back. As The Register previously reported, given the differences in the roads and other motoring infrastructure between the US and UK, Waymo will have to overcome unique challenges before opening its car doors to the public. In testing these vehicles now, Waymo is building a base of evidence to support its bid to operate in the UK. In recent years, however, the company has had to tackle some tricky PR hiccups, mainly related to safety – an issue that autonomous car companies often claim their tech will help improve, not hinder. Reports of serious issues, including cars ignoring red lights, veering into moving traffic, and killing dogs, sit alongside evidence of the technology helping to avoid potential freeway pile-ups, as a recent Waymo case study in LA shows. Serious issues continue to plague the cars, and while they attract more media scrutiny than equivalent human-driver mishaps, public trust will remain strained until cases become far rarer. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=258817&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=258817&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240452</guid>
        <link>https://www.theregister.com/oses/2026/05/14/uk-begins-antitrust-inquiry-into-microsofts-business-software-ecosystem/5240452</link>
        <pubDate>Thu, 14 May 2026 16:15:00 +0200</pubDate>
        <title>UK begins antitrust inquiry into Microsoft's business software ecosystem</title>
        <description><![CDATA[ Brit regulator has 'heard' customers can't always 'effectively combine software from Microsoft with that of other providers' ]]></description>
        <category>oses</category>
                <lab:kicker><![CDATA[ OSes ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 09:02:23 +0000</dc:modified>
                <content:encoded><![CDATA[ The UK’s Competition and Markets Authority (CMA) is taking a closer look at Microsoft’s business software empire, launching a strategic market status investigation into the company’s ecosystem. The probe, which is the fourth since the UK's digital markets competition regime came into force last year, will determine whether Microsoft should be designated as having strategic market status, which would allow the CMA to implement interventions to support competition. In March, the CMA announced that the investigation was coming. The regulator was concerned that Microsoft's software licensing practices were reducing competition in the cloud. In today's announcement, the CMA said it had "heard that UK customers may not always be able to effectively combine software from Microsoft with that of other providers, limiting their ability to get access to the best products at the most competitive prices." Microsoft is no stranger to regulatory friction. In 2025, it described calls from AWS and Google for the UK competition regulator to "intervene and constrain the price" it charges customers to run wares on those rivals' cloud platforms as "extraordinary and unprecedented." Two years prior, Google branded Microsoft's cloud software licensing a "tax" paid by customers as a penalty for not running Microsoft software on Azure infrastructure. It claims that Microsoft charges up to four times more, for example, to run Windows Server on GCP. AWS has previously moaned about this too. As well as assessing whether Microsoft is using its position to limit customer choice, the CMA investigation "includes looking at how AI competitors are able to integrate with Microsoft's business software, giving customers access to AI software across suppliers to best suit their needs." Microsoft is pushing Copilot AI into as many Microsoft 365 subscriptions as it can, even creating a new tier, E7, aimed specifically at AI services. 
In a statement, Nicky Stewart, senior advisor to the Open Cloud Coalition - a trade association Microsoft previously dismissed as a Google lobby group - said: "This investigation needs to be both rapid and conclusive. It must address Microsoft's unfair licensing practices once and for all, giving the UK cloud market a level playing field and the confidence to innovate and invest for the long term." Reg readers should not expect results anytime soon. It took 21 months for the CMA to publish the results of an investigation into the UK cloud services market, in which it said Microsoft and AWS were using their dominance to harm UK cloud customers. It claimed Microsoft, for example, could have charged UK enterprise customers £500 million more annually to run its wares in AWS and Google clouds than they'd have paid to run them in Azure. A key concern from that investigation - whether Microsoft's software licensing practices were reducing competition in cloud services - has informed this one. This latest inquiry must be completed within nine months, and a decision on designating Microsoft with SMS is scheduled to be reached by February 2027. For its part, a Microsoft spokesperson told The Register, "We are committed to working quickly and constructively with the CMA to facilitate its review of the business software market." The investigation will be wide-ranging, encompassing productivity applications, operating systems, databases, and security software. Sarah Cardell, Chief Executive of the CMA, said, "Our aim is to understand how these markets are developing, Microsoft's position within them and to consider what, if any, targeted action may be needed to ensure UK organizations can benefit from choice, innovation and competitive prices." Authorities in the US, Europe, Brazil, South Africa and Japan are also closely monitoring Microsoft's licensing policies. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5219960&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5219960&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240393</guid>
        <link>https://www.theregister.com/personal-tech/2026/05/14/ai-to-infest-eight-in-ten-premium-phones-within-two-years/5240393</link>
        <pubDate>Thu, 14 May 2026 16:02:00 +0200</pubDate>
        <title>AI to infest eight in ten premium phones within two years</title>
        <description><![CDATA[ And Counterpoint sees fad spreading from pricey handsets to smart rings and earbuds too... whether you asked for it or not ]]></description>
        <category>personal tech</category>
                <lab:kicker><![CDATA[ Personal Tech ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 13:26:01 +0000</dc:modified>
                <content:encoded><![CDATA[ AI will be in the majority of premium smartphones and wearables within a few years - bad news for anyone who doesn't like or trust the overhyped pixie dust. Counterpoint Research forecasts that more than 80 percent of premium smartphones will have agentic AI capabilities by 2027, while a similar proportion of so-called wearable devices are on track to be AI-enabled by 2032. To some degree, this appears to be a push from the vendors, who see AI as a "premium" feature to justify the inflating price tag attached to devices. Counterpoint says that MediaTek became the first chipset maker to commercialize agentic AI capabilities via its Dimensity 9400 series, followed by Qualcomm with the Snapdragon 8 Elite Gen 5 and Snapdragon 8 Gen 5 platforms. This marked the start of a new smartphone technology cycle in which devices increasingly shifted from sporting AI assistants to boasting "autonomous, context-aware AI experiences," Counterpoint claims. It defines an agentic AI smartphone as one capable of running software agents that can understand context, plan actions, make decisions, and execute multi-step tasks on behalf of the user. This places more emphasis on memory bandwidth and sustained AI throughput rather than just having a neural processing unit (NPU) to boost processing, hence the appearance of newer silicon designed with agentic AI in mind. With the memory shortage pushing up the price of phones, the device makers also need something to convince buyers to part with more of their hard-earned cash. "We expect one in three smartphones sold in 2027 to have agentic AI capability, driven by both premium (>$600) and mid-high ($250-$600) price tier smartphones," says Counterpoint research vice president Peter Richardson. However, for premium devices, the figure is 80 percent or higher, and the bigger opportunity will open up when these features start reaching mid-tier smartphones at scale, the firm forecasts. 
Not everyone welcomes AI in their personal gadgets. One UK used device biz reported a slump in demand for pre-owned Samsung Galaxy phones since the firm started adding AI capabilities. The figure of 80 percent crops up again in wearables, where the proportion of AI-capable devices is projected to rise from 30 percent in 2025 to nearly 80 percent by 2032. This represents a trillion-dollar revenue opportunity for the vendors, Counterpoint believes. Wearables - smartwatches, health monitors and the like - increasingly execute inference workloads locally, with models trained in the cloud then deployed onto the device. This shifts latency-sensitive functions, such as continuous health monitoring, gesture recognition, and contextual awareness to the device itself while improving privacy by cutting back on sensitive biometric information sent to the cloud, according to Counterpoint. Smartwatches and wireless earbuds are forecast to remain the largest categories by unit volume through 2032, with the latter gaining AI-driven features such as real-time language translation, speaker identification, and personalized hearing adaptation. Counterpoint expects smart rings (no giggling at the back there) to be the fastest-growing segment. This is because constantly worn items can continuously track health signals including heart rate variability, sleep stages, and stress. Revenue from AI-enabled wearables is forecast to grow at an average of 21 percent annually between now and 2032. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240404&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240404&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240492</guid>
        <link>https://www.theregister.com/offbeat/2026/05/14/claude-reunites-stoner-with-bitcoin-after-losing-password/5240492</link>
        <pubDate>Thu, 14 May 2026 15:30:00 +0200</pubDate>
        <title>Dude… where’s my password? Claude reunites forgetful stoner with $400k Bitcoin stash</title>
        <description><![CDATA[ AI to the rescue as 11-year search for password turns up in old PC files ]]></description>
        <category>offbeat</category>
                <lab:kicker><![CDATA[ Offbeat ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 12:53:46 +0000</dc:modified>
                <content:encoded><![CDATA[ Eleven years ago, a stoner bought some Bitcoin, lit up, and entered a password that he soon forgot. Now, after he spent more than a decade searching, Claude AI has helped him recover the credentials he needed to access a crypto wallet containing currency now worth a whopping $400,000. The man, who maintains an anonymous online profile and goes only by the alias “cprkrn,” vowed to name his progeny after Anthropic’s CEO Dario Amodei, all because the AI tool helped him regain access to an Obama-era wallet he thought was impenetrable. Armed only with an old mnemonic phrase, the man plugged it into Claude and told the AI to search his computer for ways he could use it to figure out the password that could regain access to the 5 Bitcoins he bought in 2015 at a Starbucks. He told web show MTSlive that he had two of the three passwords needed to open up the wallet, but couldn’t find the crucial third after changing it - and, naturally, later forgetting it - while he was high. He said he bought the tokens when the price for each was around $250. Altogether, his Bitcoin stash is now worth just shy of $400,000. After eight weeks working to crack the password, and after the man gave it access to the old computer he had used for college work, Claude found a wallet backup that the mnemonic phrase was able to decrypt. According to an overview of the mission, written by Claude, the wallet backup yielded the private keys required to open the Blockchain.com wallet. The wallet’s transaction history shows the funds lying dormant since April 2015, then being transferred out on Wednesday. Previous attempts to regain access to the wallet involved brute forcing password strings, 3.5 trillion of them by Claude’s reckoning, all to no avail. 
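The brute-force approach described above can be sketched in miniature. The scheme below is an illustrative assumption, not Blockchain.com's actual backup format: wallet backups are commonly protected by a password-derived key (for example via PBKDF2), so guessing means deriving a key from each candidate string and comparing it against a stored verifier.

```python
import hashlib

# Hedged sketch of dictionary-style password guessing against a
# PBKDF2-protected blob. The salt, iteration count, and verifier
# scheme are illustrative stand-ins, not any real wallet format.

def derive_key(password: str, salt: bytes, iterations: int = 5000) -> bytes:
    """Derive a 32-byte key from a candidate password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def crack(candidates, salt: bytes, target_key: bytes):
    """Return the first candidate whose derived key matches, else None."""
    for pw in candidates:
        if derive_key(pw, salt) == target_key:
            return pw
    return None

# Simulate the story: a forgotten password protects the backup...
salt = b"demo-salt"
target = derive_key("lol420fuckthePOLICE!*:)", salt)

# ...and plausible-looking guesses fail until the right string turns up.
print(crack(["hunter2", "correcthorse"], salt, target))            # None
print(crack(["hunter2", "lol420fuckthePOLICE!*:)"], salt, target))
```

At trillions of candidates, a loop like this becomes hopeless against a deliberately slow key derivation function, which is why locating a decryptable backup file, rather than guessing harder, was the breakthrough here.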
He even traveled back to his parents’ house to retrieve college notebooks, manually entering "anything that looked like password or a seed phrase" he thought might help the AI crack or find the third password. The man ran Claude for eight weeks, only to learn he had changed the password 11 years ago, while stoned, to “lol420fuckthePOLICE!*:)”. If ever there was a case study highlighting the value of complex passwords, this is it. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240537&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240537&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240381</guid>
        <link>https://www.theregister.com/devops/2026/05/14/anthropics-bun-rust-rewrite-merged-at-speed-of-ai/5240381</link>
        <pubDate>Thu, 14 May 2026 15:01:00 +0200</pubDate>
        <title>Anthropic’s Bun Rust rewrite merged at speed of AI</title>
        <description><![CDATA[ Version 1.3.14 of JavaScript toolkit released as last Zig version; a million lines of Rust code merged in gargantuan commit ]]></description>
        <category>devops</category>
                <lab:kicker><![CDATA[ DevOps ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 07:31:48 +0000</dc:modified>
                <content:encoded><![CDATA[ A pull request with a Rust version of Anthropic’s Bun, a JavaScript toolkit and runtime originally written in Zig, has been merged into the main Bun repository. This comes just days after its author, Jarred Sumner, said "there's a very high chance all this code gets thrown out." Sumner posted on X (formerly Twitter) five days ago that "99.8 percent of bun's pre-existing test suite passes on Linux x64 glibc in the rust rewrite," a clue that what was initially described as an experiment was likely to make it to production. Three days later, the Bun team released version 1.3.14, with Sumner stating that if the Rust rewrite was merged, "this would be the last version in Zig." Today that merge took place, adding more than one million lines of code. Sumner said it passes Bun's test suite on all platforms, fixes some memory leaks, and shrinks the binary size by between 3 and 8 MB. "Most importantly, we now have compiler-assisted tools for catching and preventing memory bugs, which have cost the team an enormous amount of development and debugging time over the years," he said in a comment. Performance is either neutral or faster, he said, though the codebase is "the same architecture, the same data structures." No async Rust is used. Bun users have hit memory leak issues when deploying it as a production runtime. According to Sumner, "Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large percentage of that list is use-after-free, double-free, and forgot-to-free-on-error-path, and those become compile errors or automatic cleanup." A second pull request, removing upwards of 600,000 lines of Zig code, was automatically flagged by GitHub as "AI slop" and closed, but will presumably reappear in some form. The size of these commits makes them near-impossible for humans to review. "What a nice reviewable little commit. 
I'm sure it will not contain any bugs," said one comment on the Rust merge. Although the idea of the Rust port has been well received, the speed of the transition has taken the community by surprise. In normal circumstances, porting a major project so quickly would be risky, but this has been accomplished using AI tools. According to Sumner, it is "essentially the same codebase ported to Rust." Asked whether the Rust version would be maintained mainly by Anthropic’s Claude Code, Sumner said "this is already the status quo; we haven’t been typing code ourselves for many months now. Even pre-acquisition [by Anthropic] this was pretty much accurate." Sumner was formerly a strong Zig advocate, but Zig’s no-AI policy is at odds with the Bun team’s way of working. Recent versions of Bun use a Zig fork containing contributions that cannot be merged upstream - and which Zig’s maintainers said would not be welcome regardless of the AI aspect. Version 1.3.14, the last one still to use Zig, adds a built-in image processing API for decoding, transforming and encoding images. It is designed as a drop-in replacement for the Sharp image processing library for Node.js. The new release also adds experimental support for the HTTP/3 (QUIC) protocol in Bun’s integrated server. The full release notes describe these and other new features. Is it possible to move this fast and not break things? Bun's migration from Zig to Rust will be watched with interest by AI advocates and sceptics alike. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=1630805&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=1630805&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240203</guid>
        <link>https://www.theregister.com/on-prem/2026/05/14/americans-would-rather-have-a-nuclear-plant-in-their-backyard-than-a-datacenter/5240203</link>
        <pubDate>Thu, 14 May 2026 14:30:00 +0200</pubDate>
        <title>Americans would rather have a nuclear plant in their backyard than a datacenter</title>
        <description><![CDATA[ AI and the bit barns that power it have developed a serious PR problem ]]></description>
        <category>on-prem</category>
                <lab:kicker><![CDATA[ On-Prem ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 12:10:17 +0000</dc:modified>
                <content:encoded><![CDATA[ The majority of Americans are now opposed to datacenters being built in their area - many of them strongly - pointing to tough times ahead for site developers. A Gallup survey found that more than 70 percent of respondents would oppose the construction of an AI datacenter in their neighborhood, with almost half (48 percent) saying they were strongly opposed. Only 27 percent were in favor. The polling shows how quickly AI server farms have become politically toxic in the US, not helped by stories about them driving up energy bills, slurping up water supplies, and creating air and noise pollution in their vicinity. To highlight this, Gallup found that more US residents are opposed to massive data halls than to having a nuclear power plant in their backyard: 53 percent of Americans oppose building a nuclear energy site nearby, compared with the 71 percent against datacenter construction. When it comes to the reasons for opposing AI campuses, half of all respondents cite the effect on resources, with excess water usage and potential power grid constraints topping the list. Concern about loss of farmland and nature was surprisingly low, with just 7 percent mentioning this, but it is possible the scores are higher in rural areas. Quality-of-life concerns such as increased traffic were put forward by nearly a quarter, while a fifth mentioned higher utility bills. Many were worried about AI specifically: that it would replace human workers, that they don't trust it, that it is moving too fast, and that the industry needs regulating. Perhaps the latter sentiment is why President Trump appears to have shifted his own position on the need for AI regulations. Conversely, those in favor of datacenters cite economic benefits, with 55 percent mentioning increased job opportunities, and 13 percent saying it is because of increased tax revenues. 
However, these people are perhaps laboring under some delusions, as datacenters generally deliver few long-term local jobs once they are operational, and far from increasing tax revenue, many benefit from generous tax subsidy schemes that are costing some individual US states upward of $1 billion in lost income each year. This being America in 2026, Gallup looked at how attitudes stack up depending on political affiliation. It found that Democrats, at 56 percent, are much more likely than Republicans to be strongly opposed to a server farm in their vicinity. But 39 percent of Republicans are also strongly opposed, while another 24 percent are somewhat averse to it, and only about a third are in favor. Gallup points out the contradiction: for AI usage to expand in the US, facilities that can handle the necessary computing power will have to be built. But most Americans appear to take a "not in my backyard" attitude to new bit barns, and that attitude has grown in strength. The Register noted this last year, when Emma Fryer, public policy director for datacenter operator CyrusOne, said: "People don't make a connection between the digital services they depend on every minute of every day of their lives and the fact that providing them every minute of every day of their lives requires industrial-scale infrastructure." She was speaking during a discussion of the industry's image problem at the Datacloud Global Congress event in Cannes, France. Garry Connolly, founder of Digital Infrastructure Ireland, told the same audience: "Most people are fucking scared of AI, like we're feeding a monster." Telling the public that all those massive datacenters are needed for AI is therefore not a winning argument. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5226654&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5226654&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240257</guid>
        <link>https://www.theregister.com/networks/2026/05/14/zte-and-telkom-indonesia-sign-strategic-mou-to-accelerate-digital-solutions-and-infrastructure-development/5240257</link>
        <pubDate>Thu, 14 May 2026 14:11:24 +0200</pubDate>
        <title>ZTE and Telkom Indonesia sign strategic MoU to accelerate digital solutions and infrastructure development</title>
        <description><![CDATA[ Strengthening Indonesia’s digital ecosystem through AI, cloud computing, and next-gen connectivity ]]></description>
        <category>networks</category>
                <dc:modified>Thu, 14 May 2026 12:12:11 +0000</dc:modified>
                <content:encoded><![CDATA[ Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, has officially signed a Memorandum of Understanding (MoU) with PT Telkom Indonesia (Persero) Tbk to strengthen strategic cooperation in the development of digital solutions and infrastructure. The MoU marks a significant milestone in the long-standing partnership between ZTE and Telkom, reinforcing both parties' commitment to accelerating Indonesia's digital transformation through the deployment of advanced technologies, including cloud computing, artificial intelligence (AI), and next-generation connectivity. Through this collaboration, ZTE will leverage its global capabilities in digital infrastructure, AI-driven solutions, and integrated platforms to support Telkom in enhancing its digital ecosystem. The partnership is expected to accelerate innovation, strengthen service capabilities, and enable more scalable and secure digital solutions for enterprise and government sectors. Zhu Yang, Sales Director of ZTE Indonesia, stated, "We are honoured to strengthen our collaboration with Telkom Indonesia, a key digital ecosystem enabler in Southeast Asia. This partnership reflects our shared vision to build intelligent, efficient, and sustainable digital infrastructure. By combining ZTE's technological expertise with Telkom's strong market presence, we aim to unlock new value and support Indonesia's digital economy growth." From Telkom's perspective, this collaboration aligns with the company's broader transformation strategy to evolve beyond a traditional telecommunications operator into a digital infrastructure and platform-driven enterprise. Seno Soemadji, Director of Strategic Business Development & Portfolio PT Telkom Indonesia (Persero) Tbk, emphasized that strategic partnerships play a critical role in accelerating the company's long-term growth agenda. 
"This collaboration reflects our continued focus on strengthening digital infrastructure as a foundation for future growth. Moving forward, Telkom is committed to scaling its capabilities across data center, connectivity, and cloud-based platforms, while embedding AI as a core enabler to deliver more integrated and high-value solutions for our customers. Through partnerships like this, we aim to build a more resilient, secure, and competitive digital ecosystem in Indonesia and the region," he said. The cooperation also supports Telkom's ongoing efforts to sharpen its portfolio focus and enhance execution discipline, ensuring that each initiative contributes to sustainable value creation and long-term competitiveness. Looking ahead, ZTE and Telkom will explore various collaboration areas, including digital infrastructure development, enterprise solutions, AI-enabled services, and capability building, to support the evolving needs of Indonesia's digital economy. Contributed by ZTE. ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240471&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240471&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240365</guid>
        <link>https://www.theregister.com/science/2026/05/14/nasa-fleshes-out-artemis-iii-the-moon-mission-that-wont-go-to-the-moon/5240365</link>
        <pubDate>Thu, 14 May 2026 13:59:52 +0200</pubDate>
        <title>NASA fleshes out Artemis III, the Moon mission that won't go to the Moon</title>
        <description><![CDATA[ SpaceX and Blue Origin may yet get a role in low Earth orbit rehearsal, readiness permitting ]]></description>
        <category>science</category>
                <lab:kicker><![CDATA[ Science ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 13:23:33 +0000</dc:modified>
                <content:encoded><![CDATA[ Artemis III is currently targeted for late 2027, and NASA has shared some of its plans for the mission, though exactly how SpaceX and Blue Origin will participate remains unclear. The mission to low Earth orbit will be launched with a "spacer" rather than the Interim Cryogenic Propulsion Stage (ICPS) that would otherwise be used on lunar voyages to send the Orion capsule to the Moon. According to NASA, the crew will spend more time in the Orion capsule than the Artemis II astronauts to further test the spacecraft's life support system. NASA will also demonstrate the docking system alongside an upgraded heat shield. As for the lunar lander, NASA has remained tight-lipped, only saying that operations would be "informed by Blue Origin and SpaceX capabilities." However, the agency stated that astronauts could potentially enter "at least one lander test article." There might also be an opportunity to evaluate the interfaces of Axiom's AxEMU spacesuit. There could, in theory, be three launches during the Artemis III mission: one for Orion, atop the SLS (the core stage of which is in NASA's Vehicle Assembly Building), with separate launches for SpaceX's Starship human landing system pathfinder and Blue Origin's Blue Moon Mark 2 landing system pathfinder. Without an ICPS, the European-built Orion service module will provide propulsion to circularize the spacecraft's orbit. Artemis III was supposed to mark a crewed return to the lunar surface, but was changed earlier this year to be a test of commercial lunar lander technologies in low Earth orbit. Jeremy Parsons of NASA's Exploration Systems Development Mission Directorate called the development a "stepping stone" to a lunar landing, saying: "For the first time, NASA will coordinate a launch campaign involving multiple spacecraft integrating new capabilities into Artemis operations." Kind of. In 1965, NASA launched the first crewed flight of the Gemini program. 
Several stages in the program involved launching another spacecraft – the Agena target vehicle – followed by a crewed Gemini launch to demonstrate rendezvous and docking techniques. The final crewed flight, Gemini 12, was launched less than two hours after the Agena [PDF]. While NASA is unlikely to manage that sort of quick-fire launch cadence, the agency will certainly hope to avoid a repeat of the infamous Gemini 8 incident, in which a stuck thruster almost resulted in the loss of astronauts David Scott and Neil Armstrong. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5222787&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5222787&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240315</guid>
        <link>https://www.theregister.com/security/2026/05/14/alleged-dream-market-kingpin-faces-us-german-charges/5240315</link>
        <pubDate>Thu, 14 May 2026 13:26:36 +0200</pubDate>
        <title>Cops arrest man suspected of being Dream Market kingpin</title>
        <description><![CDATA[ Owe Martin Andresen faces money laundering charges in both the US and Germany; indictment claims he had gold bars sent directly to his doorstep ]]></description>
        <category>security</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <content:encoded><![CDATA[ A man police suspect of being the administrator of the former leading online drug bazaar Dream Market is facing charges in both his native Germany and the US following his arrest earlier this month. Prosecutors claim Owe Martin Andresen, 49, is the individual known by the “Speedstepper” alias, one of the few Dream Market admins who evaded identification during the 2019 attempts to shutter the platform. While other leading figures on the platform have been convicted, it took the authorities years to identify their latest suspect, whom they believe was the main admin of the website. Authorities said they tracked him down by monitoring crypto wallets and tracing purchases of gold bars that the indictment claims were delivered to his home address. Other lower-level admins have long been convicted, including French national Gal Vallerius, who was sentenced to 20 years in prison a year after being arrested at Atlanta airport in 2017 on his way to attend the World Beard and Mustache Championships (yes, really). Andresen was arrested by German police on May 7 after the US indicted him in January, charging him with several counts of money laundering. He faces similar charges in Germany. Authorities spent years gathering small pieces of evidence that eventually tied Andresen to Dream Market’s helm. After the platform shut down in 2019 amid mounting pressure from law enforcement, none of the suspected admins touched Dream’s infrastructure, including the operation’s known cryptocurrency wallets, which contained millions of dollars’ worth of tokens. Three years later, between November and December 2022, Andresen allegedly accessed those wallets and transferred the contents into a single, consolidated one - a step only someone with access to Dream’s private key could carry out. Police believe this was Speedstepper. 
The next breadcrumb came almost a year later: in August 2023, Andresen allegedly used an Atlanta-based cryptocurrency service provider to purchase gold bars from various international companies using the funds from the consolidated wallet. The indictment claims he had those gold bars shipped directly to his house in Germany, instead of choosing a more neutral, less compromising location. Between then and April 2025, German police believe, Andresen executed several other money laundering schemes, washing more than $2 million in the process. Upon his arrest on May 7, police searched Andresen’s residence “and two other locations,” at which officers found gold bars worth approximately $1.7 million, more than $23,000 in cash, as well as several bank accounts and crypto wallets containing roughly a combined $1.2 million. All of these proceeds are thought to stem from the funds generated by Dream Market and the various fees it charged on transactions and for sellers to list their illicit wares. Dream Market operated between 2013 and 2019 and benefited greatly from the AlphaBay and Hansa seizures, scooping up their users after playing second fiddle to both platforms for much of their respective reigns. According to US Attorney Theodore Hertzberg, at its peak, Dream had around 100,000 concurrent listings, most of which were for drugs. The US said the market was responsible for the trafficking of huge quantities of illegal narcotics, including more than 90kg of heroin, 450kg of cocaine, 25kg of crack cocaine, 45kg of methamphetamine, 13kg of oxycodone, and 36kg of fentanyl. “Andresen allegedly channeled commissions earned from selling illegal drugs, stolen personally identifiable information, counterfeit identification documents, and other items through cryptocurrency wallets and even converted his ill-gotten gains into gold bars,” said US Attorney Hertzberg. 
“Thanks to the close coordination between federal and German law enforcement, Andresen and his co-conspirators will no longer profit from the online sales of narcotics and fraud services, and Andresen will be prosecuted in both Germany and the United States as a result of his actions.” Andresen faces 12 federal charges - six counts each of international and domestic concealment money laundering - each carrying a maximum 20-year sentence. German authorities also charged Andresen with “several” counts of domestic money laundering, with each charge carrying a maximum five-year prison stint. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240380&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240380&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240286</guid>
        <link>https://www.theregister.com/public-sector/2026/05/14/uk-government-prescribes-single-patient-record-for-nhs-data-chaos/5240286</link>
        <pubDate>Thu, 14 May 2026 13:04:28 +0200</pubDate>
        <title>UK government prescribes Single Patient Record for NHS data chaos</title>
        <description><![CDATA[ Doctors welcome joined-up care plan, but warn patient trust depends on safeguards, access controls, and knowing where Palantir fits in ]]></description>
        <category>public sector</category>
                <lab:kicker><![CDATA[ Public Sector ]]></lab:kicker>
                <content:encoded><![CDATA[ The UK government has confirmed plans for a Single Patient Record (SPR), a major overhaul of NHS health data management that could involve the service's controversial Palantir-run Federated Data Platform (FDP). In the King's Speech yesterday, the Labour government said it would push ahead with plans to introduce the NHS Modernisation Bill in the new Parliamentary year, which is set to include legislation for the introduction of the SPR. Previous governments have found their efforts to bring together electronic patient records held by family doctors, hospitals, and other specialist services beset by technical complexity, a mind-bending web of rules and roles, and some cultural intransigence. Nonetheless, the government said its plan for the SPR would allow the NHS to "bring together patients' health and social care records into one place to improve patient safety and experience." It said patients would be able to see their own health records securely on the NHS App. The plan is to roll out the service to those receiving maternity and frailty care by 2028, with wider implementation to follow. An impact statement for the policy, published in January, said costs would encompass product development, tech, and data integration including alignment with external vendors, delivery and administration such as business case development, engagement, clinical and system input, as well as commercial costs. "The broad scope of the SPR means it will require investment to ensure that staff such as paramedics and community pharmacists have the same access to their patients' data as those working in GP surgeries and hospitals," it said. "Depending on the approach to the SPR, in order to maximize its value, activities may need to include translating the medical terminology in care records into plain English so that they can be readily understood and used by the patient, and to digitize historic patient information." 
While the document says the SPR could support automated triage of patients, potentially reducing variation in the service, "there are risks to delivering the Single Patient Record due to the magnitude and complexity of the program and integration with legacy systems." The impact assessment said there was a risk of reliance on a single provider and "de-facto vendor-lock." "While many clinicians would support data sharing for the purposes of improving care, there may be a risk of clinical resistance to changes to data sharing if safeguards are perceived to be insufficient," the document said. Dr Emma Runswick, council deputy chair of doctors' union the BMA, said: "The NHS Modernisation Bill is a huge undertaking and doctors' and patients' past experience with large top-down reorganisations of the NHS have not always been a happy one. The announcement of a SPR is welcome, however it is crucial that GPs' voices are listened to in its implementation to ensure patient data remains safe and patient confidence is protected." Currently, GPs are official "controllers" of patient data under UK data protection law, although that may change with the introduction of the new SPR. NHS England is currently planning the SPR rollout. A meeting held by the soon-to-be-defunct quango last year "accepted that an appropriate data controller for SPR is necessary" and that change would require a review of the legislation. The minutes, obtained by campaign group medConfidential under the Freedom of Information Act, said: "Given SPR will be a multi-service record it would not be appropriate for GPs to act as the data controller. It was agreed that while the NHS will be the data controller/custodian, patients would expect to own their records: how this can be achieved requires further thought." 
In an official statement, BMA GP Committee England chair Dr Katie Bramall said: "GPC England has not been part of the discussions on what form the Single Patient Record will take, who will be granted access, the purposes for which it will be used, or which company will be contracted to operate it. "There are already existing mechanisms that allow those in secondary care to view the live GP record, and therefore, the Government needs to explain why an additional system is needed. Until the security of any data flows can be guaranteed, and full patient-facing audit trails are made available via the NHS App showing who has accessed confidential medical data and why, we remain concerned. "We also remind patients that they can exercise their right to opt out of secondary uses of their confidential medical data by visiting the NHS website." The NHS England Data and Digital Technology Committee also heard that the NHS was considering using existing electronic patient record (EPR) systems and/or a role for the controversial Federated Data Platform, run by US spy-tech firm Palantir, in building the SPR solution. Sam Smith, medConfidential coordinator, told The Register that the FDP/Palantir arrangement – which has been the focus of fierce criticism in Parliament recently – is likely to have a role either way. "Either there's going to be a new data store – which will be in Palantir – or there'll be infrastructure for bringing various APIs together, where you make a single call and you get back a summary of the patient's record. The system doing that will be the FDP. [NHS England] has not publicly decided what they're going to do, in practice. They'll probably do the API thing first, and if they don't get everything they wanted, they will eventually take a copy of the data." The government has backed its ambitions for NHS technology with a promised £10 billion in investment. But nationally led digital transformation in the NHS has failed in the past. 
The ambitious National Programme for IT (NPfIT), launched by the Blair Labour government in 2003, had a budget estimated at £12.7 billion ($17.2 billion). Although NPfIT introduced a number of new technologies, it fell short of introducing electronic health records throughout the NHS. The National Audit Office said it did not represent value for money, and in 2020 it warned there was a lack of systematic learning from past failures in NHS digital transformation. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240357&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240357&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240270</guid>
        <link>https://www.theregister.com/security/2026/05/14/dirty-frag-gets-a-sequel-as-fragnesia-hands-linux-attackers-root-level-access/5240270</link>
        <pubDate>Thu, 14 May 2026 12:01:52 +0200</pubDate>
        <title>Dirty Frag gets a sequel as Fragnesia hands Linux attackers root-level access</title>
        <description><![CDATA[ Fresh kernel flaw comes with public exploit code and continues ugly run of highly reliable privilege escalation bugs tied to memory and page-cache handling ]]></description>
        <category>security</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <content:encoded><![CDATA[ Linux admins hoping Dirty Frag was a one-off horror from the kernel networking stack are about to have a considerably worse week. Researchers at Wiz have published an analysis of "Fragnesia," a Linux kernel local privilege escalation flaw discovered by William Bowling of the V12 security team that allows unprivileged users to gain root by corrupting page cache memory. The bug, tracked as CVE-2026-46300, has public proof-of-concept exploit code documented by V12 on GitHub that demonstrates the vulnerability being used against /usr/bin/su to spawn a root shell. According to Google-owned Wiz, the flaw sits in the Linux kernel's XFRM subsystem, specifically ESP-in-TCP processing tied to IPsec support. By carefully triggering the bug, attackers can modify protected file data in memory without changing the original files stored on disk. Wiz describes Fragnesia as part of the broader "Dirty Frag" bug family rather than a completely separate class of issue. Dirty Frag itself only surfaced days ago and was already attracting attention thanks to public exploit code, incomplete patch coverage, and unusually reliable privilege escalation. According to researcher Hyunwoo Kim, who uncovered Dirty Frag, "Fragnesia" emerged as an unintended side effect of patches shipped to fix the original Dirty Frag vulnerabilities, adding yet another entry to the long tradition of security fixes accidentally creating new security problems. As The Register previously reported, Dirty Frag followed hot on the heels of Copy Fail, another Linux kernel privilege escalation flaw that abused page cache handling to overwrite supposedly read-only files. Historically, local Linux privilege escalation bugs had a reputation for being unreliable, crash-prone, or fiddly enough that attackers needed good timing and a fair bit of luck to pull them off cleanly. 
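Because the flaw sits in XFRM's ESP-in-TCP path, one rough first-pass triage step is to check whether a host loads any IPsec ESP handling at all. The sketch below is our own illustration, not vendor guidance, and the candidate module list is an assumption about the relevant surface rather than an authoritative map of the vulnerable code path:

```python
def esp_modules_loaded(proc_modules_text: str) -> list:
    """Given the text of /proc/modules, return any loaded modules plausibly
    related to IPsec ESP handling. The candidate set is an illustrative
    guess, not a definitive list of modules involved in CVE-2026-46300."""
    esp_related = {"esp4", "esp6", "esp4_offload", "esp6_offload",
                   "xfrm_user", "xfrm_interface", "af_key"}
    # First whitespace-separated field of each /proc/modules line is the name
    loaded = {line.split()[0] for line in proc_modules_text.splitlines()
              if line.strip()}
    return sorted(loaded & esp_related)

# On a live box: esp_modules_loaded(open("/proc/modules").read())
sample = "esp4 24576 0 - Live 0x0\next4 737280 1 - Live 0x0\n"
print(esp_modules_loaded(sample))  # ['esp4']
```

An empty result doesn't mean a host is safe – the kernel may have ESP support built in rather than modular – so treat this as triage, not clearance.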
Fragnesia looks different, as Wiz and V12 both say the exploit avoids race conditions entirely, making it far more predictable than older Linux root exploits like Dirty COW. That makes the bug much more useful after an initial compromise. An attacker who gains access to a system through phishing, stolen credentials, or a vulnerable cloud workload suddenly has a cleaner path to full root access. The V12 proof-of-concept repository is already public, while Linux vendors have started pushing out advisories and mitigation guidance. AlmaLinux warned that all supported releases are affected and urged administrators to patch quickly or disable unused ESP-related functionality where possible. Similar advisories have also been issued by Amazon Linux, CloudLinux, Debian, Gentoo, Red Hat Enterprise Linux, SUSE, and Ubuntu as distributors scramble to assess exposure across supported kernel versions. Microsoft also urged organizations to patch quickly, noting that though it had not observed in-the-wild exploitation so far, Fragnesia "can modify any file readable by the user, including [/]etc[/]passwd." The Linux networking stack is starting to look less like infrastructure and more like a root exploit vending machine. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=101588&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=101588&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5239737</guid>
        <link>https://www.theregister.com/public-sector/2026/05/14/calling-the-cops-just-got-extra-ai-as-police-seek-to-add-tech-to-contact-systems/5239737</link>
        <pubDate>Thu, 14 May 2026 11:15:00 +0200</pubDate>
        <title>Calling the cops just got extra AI as police seek to add tech to contact systems</title>
        <description><![CDATA[ AI already listening in to call handlers in real time, conducting live database searches ]]></description>
        <category>public sector</category>
                <lab:kicker><![CDATA[ Public sector ]]></lab:kicker>
                <dc:modified>Fri, 15 May 2026 13:39:44 +0000</dc:modified>
                <content:encoded><![CDATA[ Police forces across England and Wales, along with the British Transport Police, will add personalization and artificial intelligence (AI) to their jointly run digital contact systems through a £72 million contract to manage and develop them. Almost all police forces in the three nations use the Digital Public Contact’s Single Online Home web platform for their own websites, with the platform also running Police.uk, a national information site, and Data.police.uk, which provides information on police-recorded crime. The Metropolitan Police Service (MPS), which hosts Digital Public Contact services on behalf of the National Police Chiefs Council, hopes to find a single supplier for these under a new contract running from July 2027 to December 2029, with a possible three-year extension, according to a market engagement procurement notice published on 12 May. Existing Digital Public Contact services include the Single Online Home websites, linked services that pass information on crimes and incidents from the public to relevant officers; and the National My Police Portal, a new service using GOV.UK’s One Login to link victims with officers in charge of cases, which South Yorkshire Police started using in January. The new contract will also cover use of AI. In March West Yorkshire Police and Digital Public Contact started using AI to extract material from old control room calls, which at present are normally recorded but not transcribed. In the procurement notice, the MPS said that AI could also be used in reporting, analysis, conversational interactions and staff assistance. In a speech on the development of Digital Public Contact last October, Cambridgeshire’s chief constable Simon Megicks said that the work also includes developing a natural language switchboard that can help direct incoming calls, and live services to assist operators, the latter being piloted by Humberside Police. 
“It supports call handlers in real time, and as they converse, the AI listens in and conducts live database searches, surfacing relevant information instantly,” he said of the assistance service at a National Police Chiefs Council innovation event. “Operators are empowered to make better decisions, quicker: reducing risk and improving outcomes for the public.” In the King’s Speech on 13 May the government confirmed plans to merge forces in England and Wales and establish a National Police Service. The procurement notice says that the new contract will provide “a robust foundation” supporting these structural changes, although they are likely to take place beyond the end of the contract. Following a market engagement event on 9 June, the MPS plans to publish a tender notice for the work around the end of July. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5239790&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5239790&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5238153</guid>
        <link>https://www.theregister.com/saas/2026/05/14/bedrock-and-a-hard-place-claude-adventure-leaves-aws-user-staring-down-30k-invoice/5238153</link>
        <pubDate>Thu, 14 May 2026 10:30:00 +0200</pubDate>
        <title>Bedrock and a hard place: Claude adventure leaves AWS user staring down $30K invoice</title>
        <description><![CDATA[ CAD: Cost Anomaly Detection or Create Astounding Debt? ]]></description>
        <category>saas</category>
                <lab:kicker><![CDATA[ SaaS ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:36:55 +0000</dc:modified>
                <content:encoded><![CDATA[ The world of AI is exciting, but there are plenty of expensive pitfalls ready to catch out the unwary, as one Register reader found when taking Anthropic's Claude Opus for a spin courtesy of Amazon Bedrock. Our reader managed to run up Bedrock charges totaling $30,141.33 in April 2026, despite using AWS Cost Anomaly Detection (CAD) to avoid any nasty surprises. Thirty-three days before our reader's first use of Bedrock, the threshold in CAD was set to "Absolute ≥ $100 AND Relative ≥ 40%" so alerts should have fired if things got too spendy. As for which services to monitor, our reader chose "AWS Services," which Amazon says "tracks all AWS services automatically." Except it apparently doesn't, at least not in the way our reader expected. The problem is that AWS Marketplace isn't supported by CAD, so costs incurred wouldn't trigger an alert. And how are Anthropic Claude models billed? Through the AWS Marketplace. After burning through our reader's AWS Activate credits (totaling $8,026.54 in this case), Amazon started charging for model inference on the Bedrock Marketplace, racking up $30,141.33, plus another $675.07 in AWS infrastructure charges, without a peep from the CAD service. "The credits masking made it worse," our reader told us. "AWS Activate credits did cover the first ~$8k of charges, which meant the Marketplace billing was silently working for weeks before the credits ran out. There was no notification when credits were exhausted – the charges simply started accumulating as invoiced amounts." The first warning that things were mounting up came in the form of a surprisingly large invoice. Corey Quinn, a cloud economist at the Duckbill Group and occasional contributor to this publication, told The Register: "It's unintuitive that Bedrock model spend is Marketplace unless you're entirely too familiar with AWS." 
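The reader's alert rule, and the Marketplace blind spot that defeated it, can be sketched as a toy model. The function and parameter names here are ours, not an AWS API, and CAD's real anomaly scoring is more involved than a simple threshold test:

```python
def cad_would_alert(delta_abs: float, baseline: float, *,
                    abs_threshold: float = 100.0,
                    rel_threshold: float = 0.40,
                    is_marketplace: bool = False) -> bool:
    """Toy model of the reader's rule: alert only when a spend jump clears
    BOTH the absolute and relative thresholds. Marketplace charges are
    modeled as invisible to the monitor, per AWS's documented limitation."""
    if is_marketplace:  # CAD never evaluates Marketplace line items
        return False
    delta_rel = delta_abs / baseline if baseline else float("inf")
    return delta_abs >= abs_threshold and delta_rel >= rel_threshold

# A $5,000 jump on a $200 baseline would normally alert...
print(cad_would_alert(5000, 200))                       # True
# ...but the same jump billed through the Marketplace never reaches the rule.
print(cad_would_alert(5000, 200, is_marketplace=True))  # False
```

The point of the sketch is the early return: a charge categorized as Marketplace spend never reaches the threshold test at all, which is why neither the absolute nor the relative limit ever fired.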
Quinn told us he does most of his Claude inference directly with Anthropic to take advantage of the company's real-time billing, alerts, cutoffs, per-key limits, and so on. The approach has avoided some potentially expensive mistakes. On AWS, the lack of CAD support for Marketplace charges makes it all too easy to run up a big bill without realizing it, particularly when it comes to AI usage. This could be regarded as a cautionary tale. If one digs deeply enough into the AWS documentation on CAD, there is a line that warns that AWS Marketplace is an unsupported service. However, nothing there makes clear that Claude on Bedrock is billed through the AWS Marketplace. The fact that Marketplace billing bypasses the monitoring tools compounds the issue, and could easily leave a customer with an unpleasant surprise at invoice time. An AWS spokesperson told The Register: "AWS offers multiple tools to help customers manage spend, including AWS Budgets, which covers Amazon Bedrock spend on AWS Marketplace and other services. As noted in our documentation, AWS Marketplace charges are not currently supported by Cost Anomaly Detection. Customers with questions should reach out to AWS Support." ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240704&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240704&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5239853</guid>
        <link>https://www.theregister.com/security/2026/05/14/to-gain-root-access-intruder-just-had-to-ask/5239853</link>
        <pubDate>Thu, 14 May 2026 09:00:00 +0200</pubDate>
        <title>To gain root access at this company, all an intruder had to do was ask nicely</title>
        <description><![CDATA[ Human IT managers thought they were being nice to the boss, but were assisting a threat actor ]]></description>
        <category>security</category>
                <lab:kicker><![CDATA[ Security ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 12:56:42 +0000</dc:modified>
                <content:encoded><![CDATA[ PWNED Welcome once again to PWNED, the column where we help you prepare for security success by studying others’ embarrassing failures. Today’s terrible tale involves individuals trying to do right by a company executive by letting their guard down, never a smart move. Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity is available upon request. Our sad story comes from Brandon Dixon, who currently serves as CTO and co-founder of AI security firm Ent. In a prior life, however, Dixon was a penetration tester for hire and he saw some things that made all my remaining hairs stand on end just hearing about them. During one pentesting assignment, Dixon tried to find out how easy it would be to steal someone’s account using social engineering. The answer: barely an inconvenience. Dixon telephoned IT security and pretended that he was the head of security who had lost his password. When they asked him challenge questions, he said he had forgotten the answers to those also. Then he gave them the password he wanted to use over the phone and they did a reset for him. After that, he was able to get into the network and do whatever he wanted there. There’s so much that’s obviously wrong here that it’s hard to know where to begin with our lesson-taking. The IT support agents should not have taken Dixon’s word that he was the security manager, especially after he failed challenge questions, and should have denied his request to reset the password. They were probably thinking “this guy is an executive and we don’t want to piss him off” rather than “we have procedures that everyone must follow.” The other problem here is that the IT department entered Dixon’s suggested password for him over the phone. First of all, the IT department should have sent a password reset to the real employee’s email or phone number. 
Second of all, it’s piss-poor security for anyone to know a user’s password other than the user themselves. And I say this as someone who used to work for a company where, if you had a problem, the IT support people would ask for your password via chat. Dixon also shared another story about social engineering from a time when he consulted for a pharmaceutical company. Staff at rival firms would call sales and marketing reps, pretend they were coworkers, and then extract information about upcoming drugs. This would allow competitors to know what was coming and how to respond to it. To help solve the problem, Dixon instituted a system where real employees had to give a secret password at the beginning of a conversation. “I built a system called 'Chal-Resp,' short for 'challenge-response,' that generated word pairings so a user could validate they were speaking with an actual employee,” he told The Register. “The caller would need to say the word and the end-user would need to respond with the proper challenge; only employees had access.” What both of Dixon’s stories show is that humans are eager to please and be helpful. But healthy suspicion is at the root of infosec, so it behooves us all to be a little less helpful to strangers in the workplace. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5239865&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5239865&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240065</guid>
        <link>https://www.theregister.com/ai-ml/2026/05/14/ai-models-are-getting-better-at-replacing-cybersecurity-pros-on-certain-tasks/5240065</link>
        <pubDate>Thu, 14 May 2026 08:27:00 +0200</pubDate>
        <title>AI models are getting better at replacing cybersecurity pros on certain tasks</title>
        <description><![CDATA[ UK researchers find LLMs are learning to finish jobs faster and improving all the time ]]></description>
        <category>ai + ml</category>
                <lab:kicker><![CDATA[ AI + ML ]]></lab:kicker>
                <dc:modified>Wed, 13 May 2026 23:47:15 +0000</dc:modified>
                <content:encoded><![CDATA[ The UK AI Security Institute (AISI) has found that frontier models are quickly becoming more efficient when asked to do some cybersecurity work. AISI measures this with its "time window benchmark for cybersecurity," which estimates how much work an AI can do compared to a human. The benchmark produces findings such as: given a budget of 2.5 million tokens, Claude Sonnet 4.5 can complete tasks that would take a human cybersecurity expert 16 minutes, about 80 percent of the time. AISI has found the human-comparable task time – 16 minutes in this instance – is growing, fast. If tokens flowed freely instead of being arbitrarily capped, AI models might do better still. In February 2026, AISI internally reduced the expected task time doubling period from 8 to 4.7 months, based on progress made since late 2024. With the release of Anthropic Mythos Preview and OpenAI GPT-5.5, AISI has once again had to compress its projected doubling period. "In February 2026, we estimated that frontier models' 80 percent-reliability cyber time horizon had doubled every 4.7 months since reasoning models emerged in late 2024, given a 2.5M token limit," the AISI said in a post on Wednesday. "This was around half our November 2025 doubling time estimate, which was 8 months for both 50 percent and 80 percent reliability. Claude Mythos Preview and GPT-5.5 have since significantly outperformed this trend." The recalculated doubling time estimate, given what Mythos Preview and GPT-5.5 can do, is even shorter than 4.7 months. AISI does not cite a specific value, but the organization points to similar time horizon estimates based on measurements of a broader skillset, software engineering, made by non-profit AI research house METR. "Their results imply a consistent doubling time of 4.2 months on software tasks since late 2024," AISI said, noting that with the latest Mythos Preview checkpoint (model update), it's closer to 4 months. 
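The doubling-period arithmetic behind these estimates is straightforward exponential-growth bookkeeping. A minimal sketch follows; the 16-to-64-minute figures are illustrative inputs of our own, not AISI data:

```python
import math

def doubling_time(h1: float, h2: float, months: float) -> float:
    """Doubling period implied by a task-time horizon growing from h1 to h2
    (same units) over `months`, assuming smooth exponential growth."""
    return months / math.log2(h2 / h1)

# e.g. a horizon that grows from 16 to 64 minutes over 9.4 months
# implies 9.4 / log2(4) = 4.7 months per doubling.
print(doubling_time(16, 64, 9.4))  # 4.7
```

Run the other way, a 4.2-month doubling time compounds to roughly a 7x longer horizon over a year (2^(12/4.2) ≈ 7.2), which is why small shifts in the estimate matter so much.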
Note that the time window benchmark is not a broad assessment of capabilities – AISI is not saying frontier models are becoming twice as capable by all measures. It's a narrow assessment based on the time it takes people to accomplish security tasks. Citing a different metric, AISI says the latest Mythos Preview checkpoint solved a 32-step simulated corporate network attack called "The Last Ones" in six of 10 attempts and managed to complete a previously unsolved challenge, a seven-step industrial control system attack called "Cooling Tower," in three of 10 attempts. As a point of comparison, when Opus 4.6 was evaluated in February 2026, it completed a maximum of 22 of 32 steps for The Last Ones. That model managed to reach milestone 6, which involves reverse-engineering a Windows service binary to access encrypted credentials, escalating privileges via token impersonation, and recovering a cryptographic key to access a command-and-control management service. "Frontier AI's autonomous cyber and software capability is advancing quickly: the length of cyber tasks that frontier models can complete autonomously has doubled on the order of months, not years," AISI concludes. "What this evidence does not tell us is how the pace of progress will evolve, when AI will reach any particular capability threshold, or how these capabilities will translate against defended, real-world systems." The curl project offers one data point on the real-world implications of the latest frontier models: Mythos managed to find just one confirmed vulnerability in its codebase. But watch this space. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240104&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240104&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240150</guid>
        <link>https://www.theregister.com/off-prem/2026/05/14/tencent-admits-gpus-only-pay-for-themselves-when-powering-personalized-ads/5240150</link>
        <pubDate>Thu, 14 May 2026 06:40:43 +0200</pubDate>
        <title>Tencent admits GPUs only pay for themselves when powering personalized ads</title>
        <description><![CDATA[ Chinese web giant says accelerator shortage is over as local hardware arrives in volume ]]></description>
        <category>off-prem</category>
                <lab:kicker><![CDATA[ Off-Prem ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 05:42:24 +0000</dc:modified>
                <content:encoded><![CDATA[ Chinese web giant Tencent struggles to earn a return on investment from GPUs – unless it uses them to power its advertising business. “If we buy GPUs and we deploy them into our ad tech, then that's a relatively short-cycle investment,” said Chief Strategy Officer James Mitchell during the company’s Q1 2026 earnings call. “The GPUs yield better targeting, higher click-through rates and higher revenue and profit on a pretty accelerated basis,” he said. But the company views GPUs powering work on its Hunyuan foundation model as “important for our franchise.” Mitchell said Tencent is comfortable with this situation. “There's been many products within Tencent … that went through lengthy incubation periods where they had no return on investment, but we were confident in the franchise value creation,” he said. “And then over time, they had more lengthy harvesting periods where we've been able to drive very healthy returns on that sunk investment.” He predicted that AI will go through the same cycle. But Tencent is struggling to make the wheel turn because it’s only had enough GPUs to power its own services, leaving its public cloud without enough accelerators to rent to customers. Mitchell said Chinese manufacturers will soon fill the gap. “As the supply of China design GPUs progressively ramps up, then we'll be remedying that situation,” he said. Chief Financial Officer Shek Hon Lo weighed in with an observation that two factors made it hard for Tencent to get all the GPUs it wants: US sanctions, and “limited fab capacity within China.” “That's now being addressed because the China designed ASICs are seeing more supply from fabs within China as well as more supply from fabs in neighboring countries,” he said. But Tencent still expects GPU procurement to be harder than buying CPUs, as Lo said the company has “very long-term” deals with CPU vendors. “We've been a big customer for Intel and AMD for many years,” he said. 
“We've been progressively growing our volume with them for many years, and we believe we will continue to progressively grow our volume for many years to come.” That remark will be cause for celebration at the US companies, which have watched other hyperscalers invest heavily in custom Arm silicon. Tencent posted another strong quarter, with revenue of RMB196.5 billion ($28.9 billion) representing 12 percent growth. The company’s Weixin and QQ messaging apps have 1.95 billion combined monthly users. Tencent has tweaked their mobile apps “to act as communication interfaces for controlling AI agents, allowing users to orchestrate agents from mobile for complicated task execution on PC and cloud.” Tencent’s Western rivals Google and Meta haven’t yet built similar apps. And they don’t experience the same hardware acquisition problems Tencent faces. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5222020&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5222020&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240125</guid>
        <link>https://www.theregister.com/networks/2026/05/14/cisco-to-fire-4000-staff-and-generously-give-them-free-training-on-cisco/5240125</link>
        <pubDate>Thu, 14 May 2026 05:32:53 +0200</pubDate>
        <title>Cisco to fire 4,000 staff and generously give them free training – on Cisco</title>
        <description><![CDATA[ Reducing memory requirements to control costs in a new wave of kit ]]></description>
        <category>networks</category>
                <lab:kicker><![CDATA[ Networks ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:41:54 +0000</dc:modified>
                <content:encoded><![CDATA[ Cisco will make around five percent of staff redundant and has generously offered them free Cisco training for a year once they’re gone. CEO Chuck Robbins broke the news in a Wednesday blog post titled “Our Path Forward” that opens “Today we announced our Q3 FY26 earnings with record revenue of $15.8 billion, up 12 percent year over year, and double-digit top and bottom-line growth. The ELT [executive leadership team] and I could not be prouder of the growth you have all delivered for Cisco.” That growth included net income growing 35 percent to $3.4 billion. Yet Robbins’ pride was not sufficient for all Cisco staff to keep their jobs. The CEO said the layoffs are necessary because “The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest.” For Cisco that means “reducing roles in some areas” and also “making clear, strategic investments – particularly in silicon, optics, security, and in our employees’ use of AI across the company.” On Thursday, US time, close to 4,000 unlucky Cisco staff will be shown the door. Robbins said Cisco will help its soon-to-be-former workers find their next gig, and that the company’s efforts to do so have a 75 percent success rate. “We are also committed to continued personalized learning and will provide one year of access to all Cisco U courses and certifications, covering AI, Security, Networking, and more,” he added. Cisco made two big rounds of layoffs in 2024, one of which ejected seven percent of staff, while the other saw five percent of employees fired. The restructures appear not to have slowed the company down: Robbins said product orders in Q3 rose 35 percent year over year – a figure that encapsulates a 105 percent year-over-year surge in orders from hyperscalers and a more modest 18 percent growth from other buyers. 
Robbins said Cisco has already scored $5.3 billion of AI infrastructure sales this year, and forecast full-year sales of $9 billion – 4.5 times its haul from last year. More prosaic products, like Wi-Fi kit, also grew fast as sales rose 40 percent. The company hopes to keep that cash flowing by building wireless kit that uses less memory. “You’ll see products that’ll become orderable in Q4 that’ll actually require 50 percent less memory,” Robbins said, with the design work to make that possible an example of the “20-plus programs that we’ve put into place that are active to reduce the memory utilization across the portfolio.” Cisco’s doing that even though the rising price of memory and storage hasn’t put a dent in its margins, an outcome that execs attributed to supply chain management efforts.

Glasswing to lift security sales

Later in the earnings call, Robbins revealed that Cisco is participating in Anthropic’s Project Glasswing and using the Mythos model to test its code. The CEO said another impact of Anthropic’s bug-finding AI will be to accelerate plans to replace security appliances once other vendors’ use of Mythos finds flaws that are hard to fix. “I actually think while there will be a security opportunity, there’s going to most likely be a lot of focus from our customers on modernizing their infrastructure so that they don’t have this risk from technology that just can’t be patched,” Robbins said. Robbins said Cisco may have won an order or two from customers who were already close to replacing old security kit “and Mythos pushed them over the edge.” But he said Cisco didn’t receive “any meaningful orders in Q3 as a result of Mythos, but that could change in the future as we continue to work with customers.” ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5231225&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5231225&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240027</guid>
        <link>https://www.theregister.com/patches/2026/05/14/welcome-to-the-vulnpocalypse-as-vendors-use-ai-to-find-bugs-and-patches-multiply-like-rabbits/5240027</link>
        <pubDate>Thu, 14 May 2026 01:27:50 +0200</pubDate>
        <title>Welcome to the vulnpocalypse, as vendors use AI to find bugs and patches multiply like rabbits</title>
        <description><![CDATA[ Palo Alto Networks found and fixed 75 flaws this month, up from its usual five ]]></description>
        <category>patches</category>
                <lab:kicker><![CDATA[ patches ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:03:33 +0000</dc:modified>
                <content:encoded><![CDATA[ The vulnpocalypse has begun. Palo Alto Networks usually finds five vulnerabilities a month, but on Wednesday said it scanned its entire codebase using the latest frontier models, including Anthropic’s Mythos, and found 75 security holes, covered in 26 CVEs. This comes a day after Microsoft said it used its new agentic bug hunting system called MDASH to find 17 vulnerabilities across its products - on a record-setting Patch Tuesday that saw Redmond disclose a whopping 30 critical CVEs. Plus, last week Mozilla said it fixed 423 Firefox bugs in April, which is more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Mythos found 271 flaws in Firefox 150. It shouldn’t be all that shocking. Security vendors have long warned about attackers using AI, and how this means defenders need to operate at AI speed to protect their own networks and systems (aka buying their AI-infused products). Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And this trickles down to two things: more patches, and more work for admins. Zero Day Initiative’s chief vuln finder Dustin Childs agrees with this assessment. “At first, yes, this means more patches and thus more work for admins,” he told The Register. “The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down.” What will make this whole AI bug hunting season “really painful,” he continued, is if the patches don’t work or - worse yet - break things. “Many customers don’t trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on,” Childs added. 
“This will be true even if AI only finds the bugs and doesn’t make the patches.”

Bug hunting on steroids

This isn’t to say security companies should avoid using AI to find and fix flaws. “All vendors should use what tools they have to find and remediate bugs before they are exploited in the wild,” Childs said. “Ideally, they would find the bugs before they even ship, but I’m not holding my breath for that to happen.” Both Microsoft and Palo Alto Networks (PAN) are part of Anthropic’s Project Glasswing, which means they are among the select group of entities allowed to test Mythos, the much-hyped LLM, to find security holes in their own products. Palo Alto Networks began testing Mythos on April 7, and has since continued using the LLM and other frontier models, including Claude Opus 4.7 and OpenAI’s GPT-5.5-Cyber, according to Chief Product and Technology Officer Lee Klarich. “Today, we released our May ‘Patch Wednesday’ security advisories,” Klarich said in a Wednesday blog, adding that “this is the first time where the majority of findings were the result of frontier AI models scanning our code.” The LLMs scanned over 130 Palo Alto Networks products and platforms, and as noted above found 75 issues, covered in 26 CVEs. None of these bugs are under exploitation, and as of Wednesday the company has fixed all bugs in its SaaS-delivered products and coded patches for all customer-operated products. 
Maybe 5 months before 'AI-driven exploits the new norm'

“We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries,” Klarich said in his blog, adding that his company expects “a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm.” A day earlier, Microsoft said its new multi-model agentic scanning harness (codename MDASH) helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack, as disclosed in May’s Patch Tuesday event. This included four critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service. “Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end,” Microsoft VP of agentic security Taesoo Kim said in a Tuesday blog. Tom Gallagher, VP of engineering at Microsoft Security Response Center, admitted that “this month's release sits on the larger side of a hotpatch month.” Gallagher said he expects AI-assisted bug hunting to increase Patch Tuesday releases as both Microsoft and third-party researchers use these tools to boost vulnerability discovery. And yes, all of this ultimately means more patches and more work.

More patches = more work

“Finding bugs has always been the cheap end of the pipeline,” Luta Security CEO Katie Moussouris told The Register. “Triage, disclosure, building patches that do not break production, and getting customers to deploy them is the expensive end, and nobody has funded it for this volume.” Moussouris helped convince Redmond's top brass that Microsoft needed a bug bounty program in 2013, and three years later started her own bug bounty consultancy. She noted Palo Alto Networks’ staggering jump in CVEs this month. 
“Multiply that across every vendor and the bottleneck becomes admins and vulnerability management teams,” Moussouris said. She also stressed that people should be using these new models to find vulnerabilities. “It is exactly what defenders should be doing,” Moussouris said. “Both PAN and Microsoft landed on the same answer: no single model catches everything. PAN ran Claude Mythos, Claude Opus 4.7, and GPT-5.5-Cyber because each finds bugs the others miss,” she added. “Microsoft orchestrates over 100 specialized agents across multiple models. Add threat intel and codebase context, and Microsoft rediscovered 96 percent of five years of confirmed bugs in a critical Windows component. The asymmetry is temporary, PAN puts adversary parity at three to five months, so any vendor not scanning their own code now is letting someone else find their bugs first.” ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240107&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240107&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240041</guid>
        <link>https://www.theregister.com/paas-and-iaas/2026/05/13/aws-patched-quick-auth-bypass-says-customers-werent-using-control/5240041</link>
        <pubDate>Thu, 14 May 2026 00:56:41 +0200</pubDate>
        <title>AWS to Quick admins: The access control didn't work, but you weren't using it anyway, so what's the problem?</title>
        <description><![CDATA[ If a setting fails in the forest and nobody hears it ... ]]></description>
        <category>paas and iaas</category>
                <lab:kicker><![CDATA[ PaaS + IaaS ]]></lab:kicker>
                <dc:modified>Thu, 14 May 2026 15:09:51 +0000</dc:modified>
                <content:encoded><![CDATA[ Most users put up with AWS the way you put up with the DMV. I say this with love, but it's hard to disagree that the UI is awful. The console is a UX time capsule if time capsules weren't allowed to ever look like other time capsules. The pricing pages were designed by someone who hates you personally, and you accept all of it because the one thing AWS has historically gotten right is the boring, important stuff. The security model. The IAM language no one likes, but everyone trusts. The boundary between your account and someone else's. Get that wrong, and the whole bargain collapses. So when Fog Security disclosed an authorization bypass in Amazon Quick on May 12 (that's the BI service formerly known as QuickSight, briefly known as Quick Suite, and now apparently just Quick, but check back next week) and AWS responded with a statement claiming "no customer data was at risk," it's fair to ask which definition of customer data they're using. Because it isn't an obvious one, and it certainly isn't mine.

What Fog found

Fog reports that when an Amazon Quick administrator (which is an absolutely devastating personal insult) uses "custom permissions" to explicitly deny access to AI Chat Agents, the UI correctly hides the feature. Great! Awesome! I sure wish to hell I could do that with S3 buckets to which I do not have access! Notably, there's no other way for an admin to do this - it's custom permissions or naught. The API, however, was perfectly willing to keep answering chat requests for any user in the account who knew how to send them. Fog's proof-of-concept was a non-admin asking the agent "Tell me about mangoes" from a session that was, on paper, locked out of the agent entirely. The agent told them about mangoes. AWS deployed the fix between March 11 and March 12, eight days after Fog reported it via HackerOne. So far, so coordinated. 
Seriously, for a company of this scale, that's underpants-outside-the-pants superhero speed. Good for you; gold star.

What came next

Where this gets uncomfortable is the response. AWS classified the severity as "none." It issued no customer notification. It published no advisory. After Fog disclosed the HackerOne report and published a blog post, AWS provided a statement to Fog Security reading, in full: "We appreciate Fog Security's coordinated disclosure. This issue was addressed in March 2026. No customer data was at risk and there is no customer action required. As always, customers can contact AWS Support with any questions or concerns about the security of their account." Take that sentence apart and see how much work "no customer data was at risk" is doing. Amazon Quick is described on its own product page as an AI assistant that "connects Slack, Microsoft Teams and Outlook, CRMs, databases, and documents in one place" and "grounds every answer in your real business data." The default chat agent, which is automatically and annoyingly provisioned the instant Quick is enabled whether the customer wants those AI features or not, is the front end for that data. It is the whole point of the front end for that data. Now consider the actual scenario AWS just patched. An administrator at, say, a regulated bank (an unregulated bank is called "a criminal enterprise that hasn't been caught yet") configures custom permissions denying chat agent access to a large group of users. Maybe those users are contractors. Maybe they're in a business unit that isn't cleared for AI tools. Maybe the bank's compliance posture flat-out prohibits shadow AI usage on top of internal data. Until two months ago, every one of those users could send an HTTP request directly to the agent endpoint and get a response. Fog asked about mangoes because they're a security firm doing a clean disclosure, not a malicious insider. A malicious insider would not have asked about mangoes. 
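The class of bug Fog describes - a permission enforced when rendering the UI but never re-checked at the API - is easy to see in miniature. The sketch below is generic and hypothetical: it is not Amazon Quick's actual API, and every name and the permission model in it are invented for illustration.

```python
# Generic sketch of a UI-only access control - NOT Amazon Quick's real API.
# The admin's deny list is honored when building the UI, but the "broken"
# chat endpoint never re-checks it, so any authenticated user can call it.

DENIED_CHAT_USERS = {"contractor-42"}  # hypothetical "custom permissions"

def visible_features(user: str) -> list[str]:
    """The UI correctly hides the agent from denied users."""
    features = ["dashboards", "reports"]
    if user not in DENIED_CHAT_USERS:
        features.append("chat-agent")
    return features

def chat_broken(user: str, prompt: str) -> str:
    """Missing server-side check: answers anyone in the account."""
    return f"agent answer to {prompt!r}"

def chat_fixed(user: str, prompt: str) -> str:
    """The fix: enforce the same deny list at the API boundary."""
    if user in DENIED_CHAT_USERS:
        raise PermissionError("chat agent access denied")
    return f"agent answer to {prompt!r}"

# The denied user sees no chat agent in the UI, yet the broken endpoint
# still answers a direct request:
assert "chat-agent" not in visible_features("contractor-42")
print(chat_broken("contractor-42", "Tell me about mangoes"))
```

The whole point of the pattern is that hiding a button is not an access control; only the server-side check in the fixed version is.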
The question to AWS, with no rhetoric attached: In what sense was customer data not at risk? Either the chat agent doesn't actually have access to the data the product page says it does (in which case the marketing department has some serious splainin' to do) or unauthorized users could query an agent wired into customer data, in which case "customer data was at risk" is the correct English-language description of the situation.

AWS clarifies, and says the quiet part out loud

After this story started circulating, AWS offered a follow-up comment that I sincerely appreciate, because it's so much more honest than the first one. Per a hounded-looking AWS spokesperson: "The researcher was using the Admin Control capability that no customers were actively using when the server side validation was not present." Reading that twice doesn't help. Let me translate. AWS is saying: Yes, the server-side authorization check was missing. Yes, an authenticated user in your Quick account could bypass the only access control mechanism the service offers. The reason this is fine, apparently, is that no real customer had bothered to configure that access control during the window when it didn't work. Um ... what? The defense isn't "the bug wasn't real," which you could be forgiven for hearing in AWS's first statement. The defense also isn't "the bug couldn't have done what Fog says it could have done," which is the even stronger implication of their first statement. The defense is "the access control didn't enforce what we said it did, but luckily nobody was relying on it." This is the corporate-comms equivalent of "the lock on the front door didn't work, but nobody had locked it anyway, so why are you upset?" It's also a surprisingly specific telemetry claim. AWS is asserting that they know zero customers had configured custom permissions to deny chat agent access during the exposure window. 
That's a confident thing to say, and an even more interesting thing to volunteer as a defense, because it doubles as a withering review of Quick's access management model: the only knob the service provides for this purpose, the one AWS's own documentation explicitly tells administrators to use, has zero recorded uptake. The same follow-up also pointed back to the HackerOne thread to demonstrate that AWS told Fog throughout the disclosure window that "user-based authorization remained enforced." Translation: you needed authenticated credentials in the same Quick account to exploit this. Yes. That's intra-account scope, which Fog documented in their writeup, and which is precisely the scope in which custom permissions are supposed to function as a security boundary. AWS saying "user-based authorization was fine" is saying "you couldn't exploit this anonymously from the internet," which was never the threat model in question. The threat model is the contractor with valid SSO credentials whose admin tried to lock them out of some datasets.

Why this matters more than it sounds

Amazon Quick's access model is already an outlier: IAM policies don't govern Quick's AI Chat Agent, SCPs don't apply, and RCPs don't apply. Custom permissions are the only knob the service provides. If those don't enforce, nothing else does. And per AWS's own follow-up, literally nobody was using them anyway. Both halves of that sentence should be alarming, and AWS is offering them as reassurance. AWS's competitive moat for the last decade hasn't been pricing. It sure as poop hasn't been developer experience, documentation, console design, or the inscrutable poetry of service names. It's been the well-earned belief that AWS gets the foundational things right: boundaries, identity, durability, reliability, and the parts customers can't easily verify themselves. Customers have paid the AWS premium because they trusted the boring stuff. 
This year that trust is being tested in a way it hasn't been before. The 2025–2026 cadence of AWS security advisories has noticeably increased, for reasons that are as yet unclear. Coordinated disclosures from independent researchers keep surfacing missing authorization checks in newer, AI-adjacent services. The fixes are landing fast, which is good. The customer communication isn't landing at all, which is, charitably, a choice. A "severity: none" rating on a bypass of the only access control a service offers is not an objective security finding so much as it is a communication decision. And the communication decision now reads, with the benefit of AWS's follow-up: "We'll fix the bug, we won't tell you it existed, and if you ask we'll explain that you weren't using the feature anyway." AWS gets a lot of forgiveness on the small stuff because they own the big stuff. They might want to reconsider how much of the big stuff they keep classifying as "none." ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240076&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240076&amp;width=800" />
            </item>
    <item>
        <guid isPermaLink="true">https://www.theregister.com/a/5240005</guid>
        <link>https://www.theregister.com/software/2026/05/13/googles-ai-enabled-mouse-pointer-understands-this-and-that/5240005</link>
        <pubDate>Thu, 14 May 2026 00:19:47 +0200</pubDate>
        <title>Google's AI-enabled mouse pointer understands 'this' and 'that'</title>
        <description><![CDATA[ Right-clicking could go the way of the 3.5-inch floppy at the Chocolate Factory ]]></description>
        <category>software</category>
                <lab:kicker><![CDATA[ software ]]></lab:kicker>
                <content:encoded><![CDATA[ Google doesn't design mouse traps, so it's trying to design a better mouse. Google DeepMind announced a research effort to transform the standard computer mouse cursor into a context-aware, AI-powered tool, marking what the company described as the first major rethinking of the cursor in more than 50 years. The project by researchers Adrien Baranes and Rob Marchant integrated Google's Gemini AI model with an experimental context-aware mouse pointer. In this way, the company said, the system can understand where a user clicks, what they are clicking on, and the likely intent behind the interaction. Researchers said there is a persistent friction in how people currently interact with AI tools. Most AI assistants today live in a separate window, requiring users to copy, paste, or drag content into a chat interface before receiving help. The new approach aims to reverse that dynamic. "We want the opposite: intuitive AI that meets users across all the tools they use, without interrupting their flow," the researchers stated in the blog post. The mouse pointer works alongside the computer’s microphone, allowing Gemini to listen as the user points. This lets users refer to features on the screen with object pronouns like “this” and “that.” In a demonstration website, a user can hover a cursor over a crab and say “move this here,” and the system understands enough context to grab the crab and move it to where the cursor indicates. The first computer mouse, a one-button prototype with metal wheels for the x- and y-axis, was built out of wood in 1964 and was patented in 1970 by its inventors Doug Engelbart and Bill English, who worked at the Stanford Research Institute. Engelbart foresaw a day when humans and computers would interact more easily and naturally, which he talked about during his 1997 acceptance speech for the Lemelson-MIT Prize. 
“The computer technology, the digital capabilities, it’s affecting communications, displays, storage, computer processing. It’s affecting the way you can interface to things a lot more flexibly,” he said. “That’s going to be so pervasively high-impact in our society and our organizations that it's more than anything we’ve had to cope with evolutionary wise.”

Maintain the flow

At Google, the team said it laid out four design principles guiding the project. The first, which the researchers called "Maintain the flow," stated that AI capabilities should work across all applications rather than forcing users into separate AI-specific environments. Under this principle, a user could point at a PDF and request a summary, or hover over a statistics table and ask for a chart, all without leaving the current application. The next, "Show and tell," addressed the burden of prompt writing. The researchers stated that an AI-enabled pointer could capture visual and semantic context from the screen, reducing the need for users to write detailed text instructions to the model. A third principle grounded the AI cursor in how humans naturally communicate, using short phrases and gestures like “this” and “that.” The researchers stated that the system would allow users to issue commands like "Fix this" or "Move that here" while the AI fills in the contextual gaps. The fourth principle, "Turn pixels into actionable entities," lets the pointer recognize structured objects within on-screen content. The researchers stated that this capability could turn a photo of a handwritten note into an interactive to-do list, or convert a paused video frame showing a restaurant into a booking link. In the blog, the researchers said that Google DeepMind has already begun integrating the lessons learned into products. A feature called Magic Pointer will soon roll out on the forthcoming Googlebook laptop platform, which The Chocolate Factory introduced earlier this week. 
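One way to picture the pronoun-resolution idea is a toy resolver that binds spoken "this"/"that" to whatever the cursor recently hovered over. This is a hypothetical sketch of the interaction pattern described above, not Google's implementation; every name in it is invented.

```python
# Hypothetical sketch of deictic resolution - not Google's implementation.
# Pronouns in a spoken command are bound, in order, to objects the cursor
# recently hovered over; all other words pass through unchanged.

def resolve(transcript: str, hovered: list[str]) -> list[str]:
    """Bind 'this'/'that' in the transcript to hovered objects, oldest first."""
    targets = list(hovered)
    out = []
    for word in transcript.lower().split():
        if word in ("this", "that") and targets:
            out.append(targets.pop(0))  # bind pronoun to a hovered object
        else:
            out.append(word)
    return out

# "move this here" while the pointer passed over a crab:
print(resolve("move this here", ["crab"]))  # ['move', 'crab', 'here']
```

The real system presumably fuses far richer context (pixels, semantics, timing) than a hover list, but the shape of the problem - turning pronouns into on-screen referents - is the same.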
The company said the technology will also allow users of Gemini in Chrome to point at specific parts of a webpage and ask questions, rather than composing a full text prompt. Experimental demos of the AI-enabled pointer are currently available through Google AI Studio, where users can test image-editing and map-based interactions using the point-and-speak approach. The company said it plans to continue testing the concept across additional platforms, including Google Labs' Disco. ® ]]></content:encoded>
                <enclosure url="https://image.theregister.com/?imageId=5240038&amp;width=800" type="image/jpeg" />
                <media:thumbnail url="https://image.theregister.com/?imageId=5240038&amp;width=800" />
            </item>
</channel>
</rss>