<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IPv6.net</title>
	<atom:link href="https://ipv6.net/feed/" rel="self" type="application/rss+xml" />
	<link>https://ipv6.net/</link>
	<description>The IPv6 and IoT Resources</description>
	<lastBuildDate>Fri, 08 May 2026 11:37:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Microsoft releases cumulative update for Windows 11</title>
		<link>https://ipv6.net/news/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Fri, 08 May 2026 11:37:08 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2910115</guid>

					<description><![CDATA[<p>Microsoft has released the monthly cumulative update for Windows 11, with build numbers 26200.8246 and 26100.8246 for versions 25H2 and 24H2 respectively. The update (KB5083769) focuses on Secure Boot certificates, BitLocker recovery behavior, and the security of Remote Desktop connections. It also includes fixes for VSS issues and blocks vulnerable kernel drivers. The update is available via Windows Update and [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/">Microsoft releases cumulative update for Windows 11</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<p><strong>Microsoft has released the monthly cumulative update for Windows 11, with build numbers 26200.8246 and 26100.8246 for versions 25H2 and 24H2 respectively. The update (KB5083769) focuses on Secure Boot certificates, BitLocker recovery behavior, and the security of Remote Desktop connections. It also includes fixes for VSS issues and blocks vulnerable kernel drivers.</strong></p>
<p>The update is available through Windows Update and the Microsoft Update Catalog. For most users it is downloaded and installed automatically.</p>
<h3>Secure Boot and BitLocker</h3>
<p>A notable part of this release is the change to how Secure Boot certificates are managed. Microsoft is adding a new status view to the Windows Security app so users can see at a glance whether their certificates are up to date. The update also fixes a bug that caused systems to land in BitLocker Recovery mode unintentionally after a Secure Boot update, forcing users to enter their recovery key after a routine update.</p>
<p>Microsoft points out one known remaining issue: systems with a specific, non-recommended BitLocker group policy configuration may still be prompted for a recovery key.</p>
<h3>Remote Desktop and phishing</h3>
<p>The update also changes how Remote Desktop (.rdp) files behave. Windows now shows all connection settings before a connection is established. By default all settings are disabled, and the user gets a one-time security warning when opening a new RDP file. Microsoft says this is meant to counter abuse through manipulated RDP files. On some systems these warnings cannot yet be displayed correctly, according to the release notes.</p>
<h3>VSS, drivers, and network stability</h3>
<p>A bug from the March update that made the ‘Reset this PC’ feature fail due to VSS problems has been fixed in this release. Microsoft has also expanded its list of blocked kernel drivers. Users of older backup software may consequently see error messages, including VSS timeouts. Microsoft advises updating that software to versions whose drivers meet the current requirements.</p>
<p>The reliability of SMB compression over QUIC has been improved, which Microsoft says results in fewer timeouts during data transfers.</p>
<h3>AI components updated</h3>
<p>Several AI components in Windows 11, including the image search and semantic analysis features, have been updated to version 1.2603.377.0.</p>
<p>Microsoft notes that the accompanying Servicing Stack Update (KB5088467) is required for a stable installation process and cannot be uninstalled afterwards.</p>
<p>The post <a href="https://mspbusiness.com/security-en-privacy/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/">Microsoft releases cumulative update for Windows 11</a> first appeared on <a href="https://mspbusiness.com/">MSP Business</a>.</p>
</div>
<p>Read more here: <a href="https://mspbusiness.com/security-en-privacy/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/">https://mspbusiness.com/security-en-privacy/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/</a></p>
<p>The post <a href="https://ipv6.net/news/microsoft-brengt-cumulatieve-update-uit-voor-windows-11/">Microsoft releases cumulative update for Windows 11</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How FermiLabs builds championship-level robots with Arduino</title>
		<link>https://ipv6.net/news/how-fermilabs-builds-championship-level-robots-with-arduino/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Fri, 08 May 2026 11:37:04 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2910118</guid>

					<description><![CDATA[<p>After-school workshops run by curious, driven students are where some of the most exciting engineering happens in the Arduino community! One of the most compelling examples of this is FermiLabs, the innovation hub at secondary school IIS “E. Fermi – R. Guttuso” in Giarre, Sicily, offering students afternoon lab sessions in robotics, automation, and experimental [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/how-fermilabs-builds-championship-level-robots-with-arduino/">How FermiLabs builds championship-level robots with Arduino</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<figure class="wp-block-image size-large">
<div class="image-post"><img fetchpriority="high" decoding="async" width="1024" height="558" src="https://blog.arduino.cc/wp-content/uploads/2026/05/image-4-1-1024x558.png" alt="" class="wp-image-42045" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/image-4-1-1024x558.png 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-4-1-300x164.png 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-4-1-768x419.png 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-4-1.png 1201w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>After-school workshops run by curious, driven students are where some of the most exciting engineering happens in the Arduino community! One of the most compelling examples of this is <a href="http://fermilabs.it/">FermiLabs</a>, the innovation hub at secondary school IIS “E. Fermi – R. Guttuso” in Giarre, Sicily, offering students afternoon lab sessions in robotics, automation, and experimental physics. The results speak for themselves: FermiLabs teams have earned multiple podium positions at <a href="https://www.robocupjunior.eu/">RoboCupJunior Europe</a>, one of the most demanding student robotics competitions in the world.</p>
<p>RoboCupJunior Rescue, in particular, challenges teams to <strong>design, build, and program fully autonomous robots capable of navigating disaster scenarios</strong> – from following lines across obstacle-laden terrain to exploring multi-level mazes and assisting simulated victims. For the 2026 season, two FermiLabs teams are pushing the limits of what student-built robots can do, with Arduino at the core of both machines.</p>
<h2 class="wp-block-heading">Team Tachyons: solving the maze with Arduino GIGA R1 WiFi</h2>
<p>The RoboCupJunior Rescue Maze requires a robot to autonomously explore a complex, multi-level labyrinth, identify victims, and deploy rescue kits with precision. <strong>The 2026 rulebook raised the bar significantly with the introduction of “cognitive targets”</strong> – five concentric colored circles that robots must decode in real-time to classify victim types. This shift from simple colored squares to dense visual patterns demands a substantial leap in processing power and sensor integration.</p>
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<div class="wp-block-embed__wrapper">
<iframe title="Breaking Ground with Arduino: Team Tachyons at RoboCup Junior" width="500" height="281" src="https://www.youtube.com/embed/mcZd5kCd6Ic?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
</figure>
<p>Team Tachyons – who showcased their work during Arduino Days 2026 and are led by YouTuber and TEDx speaker <a href="https://www.youtube.com/watch?v=J3CSU6Z9Upk">Etto Fins</a> – met that challenge by centering their robot on the <a href="https://store.arduino.cc/collections/giga/products/giga-r1-wifi">Arduino GIGA R1 WiFi</a>, leveraging the board’s ability to handle complex, multi-threaded tasks with the reliability and low latency that competitive robotics demands.</p>
<p>The robot’s intelligence lives in a custom-designed Arduino shield that acts as its central nervous system. Four dedicated stepper motor drivers deliver sub-millimeter positioning accuracy, while a six-axis IMU (Inertial Measurement Unit), fused with data from six ToF (Time-of-Flight) distance sensors, feeds a PID control loop that keeps the robot precisely centered within each tile – even on ramps and uneven terrain. On top of all this, the software builds a live 3D matrix to map the labyrinth in real-time, allowing the robot to backtrack and optimize its path autonomously.</p>
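<p>To make the control loop described above concrete, here is a minimal textbook PID sketch in Python. It is purely illustrative: the class name, gains, and time step are assumptions for explanation, not the Tachyons' actual firmware, which runs as Arduino code on the GIGA R1 WiFi.</p>

```python
class PID:
    """Textbook PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the error over time (I term) and estimate its slope (D term).
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

<p>In the robot's case, <code>error</code> would be the deviation from the tile center, fused from the IMU and the six ToF sensors, and the output would drive a steering correction on the stepper motors.</p>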
<p>The mechanical design is equally thoughtful. Custom silicone wheels, molded in-house with an airless structure, maximize traction while minimizing weight and absorbing shocks. The rescue kit deployment mechanism uses a compliant mechanism and twin springs to fire rescue cubelets at high velocity – and the kits themselves are engineered with the lowest possible coefficient of restitution, so they drop dead in place when they reach a victim rather than bouncing away.</p>
<p>After a successful showing at the regional selections in Catania, Team Tachyons placed second in the Italian Nationals with a new and improved model based on UNO Q 4GB boards… winning the chance to fly to Incheon, South Korea, to compete alongside 3,000 of the best robotics students in the world.</p>
<h2 class="wp-block-heading">Team Yellow Radiators: vision-first line following with Arduino UNO Q</h2>
<p>The Rescue Line challenge tasks a fully autonomous robot with following a black line across a modular arena of tiles, overcoming obstacles, debris, and varying terrain – ultimately locating and rescuing simulated victims before navigating to an extraction zone. <strong>Speed, reliability, and real-time visual processing are everything.</strong></p>
<p>Team Yellow Radiators chose to abandon traditional line-following sensors entirely in favor of a vision-first architecture built around <a href="https://www.arduino.cc/product-uno-q">Arduino UNO Q</a>. Rather than running high-level logic and low-level motor control on separate boards, the UNO Q let them unify both on a single platform. </p>
<p>A Python layer running OpenCV processes real-time camera data to identify the line and read intersection markers, while the Arduino side simultaneously handles the high-frequency motor control loop and sensor integration. A custom communication bridge between the Python vision layer and the Arduino-language hardware layer makes this seamless two-brain operation possible.</p>
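<p>As a rough illustration of the vision side, the sketch below derives a normalized steering error from a single grayscale image row by taking the centroid of dark "line" pixels. The function name, threshold, and one-row simplification are assumptions for explanation; the team's actual OpenCV pipeline is considerably richer (masks, intersection markers, full frames).</p>

```python
def line_error(row, threshold=60):
    """Steering error from one grayscale image row.

    Returns the centroid of dark (line) pixels normalized to [-1, 1]
    around the image center, or None if no line pixels are found.
    """
    dark = [x for x, v in enumerate(row) if v < threshold]
    if not dark:
        return None  # line lost: the caller should trigger a search behavior
    center = (len(row) - 1) / 2
    centroid = sum(dark) / len(dark)
    return (centroid - center) / center
```

<p>An error of 0 means the line is centered; the sign and magnitude would feed the motor control loop on the Arduino side as a steering correction.</p>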
<p>For the competition, the team built a custom web control panel that transforms how the robot is calibrated on-site. Via a local Wi-Fi network, team members can view live camera buffers, toggle between different image masks to debug line detection in real-time, and adjust color calibration or sensor thresholds wirelessly using on-screen sliders – no code re-upload required. The dashboard even allows direct remote function calls to the Arduino core, so specific subsystems like the rescue kit grabber can be tested manually. In the variable lighting conditions of a competition arena, this kind of live debugging capability is a genuine competitive advantage.</p>
<p>On the AI side, the team deployed a custom-trained YOLO object detection model using the NCNN runtime, optimized for the UNO Q Arm-based Qualcomm Technologies’ SoC. Their next milestone: enabling GPU passthrough to leverage Vulkan acceleration on the onboard Qualcomm Adreno GPU, further reducing inference latency. Development has been eased significantly by the full Debian OS running on the board, letting the team work directly from VS Code via Remote Development – a proper professional workflow on a compact edge device.</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img decoding="async" width="1024" height="870" src="https://blog.arduino.cc/wp-content/uploads/2026/05/image-3-1-1024x870.png" alt="" class="wp-image-42047" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/image-3-1-1024x870.png 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-3-1-300x255.png 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-3-1-768x653.png 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-3-1.png 1358w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<h2 class="wp-block-heading">From Sicily to the world championship</h2>
<p>Both projects illustrate something FermiLabs has made a habit of demonstrating: that with the right tools, a secondary school team can engineer solutions that rival professional-grade systems. Arduino’s role in both robots isn’t incidental – it’s <strong>the platform that makes rapid iteration, hardware control, and connectivity available to students who want to build things that actually work under pressure</strong>. </p>
<p>After multiple successes at the national level in Catania in April, FermiLabs is now gearing up to take two teams to the RoboCupJunior European Championships in Vienna, and two more to the RoboCup Federation Junior World Championships in South Korea. Follow <a href="http://fermilabs.it/">fermilabs.it on LinkedIn</a> to see their progress, or check out their <a href="https://www.isfermiguttuso.edu.it/call-for-partner-robocup-2026/">call for partners</a> to find out how you can support them.</p>
<p><em>Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Arduino, GIGA R1, and UNO are trademarks or registered trademarks of Arduino S.r.l.</em></p>
<p>The post <a href="https://blog.arduino.cc/2026/05/08/how-fermilabs-builds-championship-level-robots-with-arduino/">How FermiLabs builds championship-level robots with Arduino</a> appeared first on <a href="https://blog.arduino.cc/">Arduino Blog</a>.</p>
</div>
<p>Read more here: <a href="https://blog.arduino.cc/2026/05/08/how-fermilabs-builds-championship-level-robots-with-arduino/">https://blog.arduino.cc/2026/05/08/how-fermilabs-builds-championship-level-robots-with-arduino/</a></p>
<p>The post <a href="https://ipv6.net/news/how-fermilabs-builds-championship-level-robots-with-arduino/">How FermiLabs builds championship-level robots with Arduino</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>12 model-level deep cuts to slash AI training costs</title>
		<link>https://ipv6.net/news/12-model-level-deep-cuts-to-slash-ai-training-costs/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Fri, 08 May 2026 09:37:10 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2910102</guid>

					<description><![CDATA[<p>Optimizing artificial intelligence pipelines requires moving beyond surface-level hardware adjustments to fundamentally alter how models process data. While engineers often implement basic toggle-away efficiencies inside the training loop, achieving permanent cost reductions requires architectural changes directly inside the neural network. As I have previously argued, the science is solved, but the engineering is broken; true [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/12-model-level-deep-cuts-to-slash-ai-training-costs/">12 model-level deep cuts to slash AI training costs</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<div id="remove_no_follow">
<div class="grid grid--cols-10@md grid--cols-8@lg article-column">
<div class="col-12 col-10@md col-6@lg col-start-3@lg">
<div class="article-column__content">
<section class="wp-block-bigbite-multi-title">
<div class="container"></div>
</section>
<p>Optimizing artificial intelligence pipelines requires moving beyond surface-level hardware adjustments to fundamentally alter how models process data. While engineers often implement<a href="https://www.infoworld.com/article/4147702/the-toggle-away-efficiencies-cutting-ai-costs-inside-the-training-loop.html"> basic toggle-away efficiencies inside the training loop</a>, achieving permanent cost reductions requires architectural changes directly inside the neural network. As I have previously argued,<a href="https://aijourn.com/the-science-is-solved-the-engineering-is-broken-the-rise-of-the-ai-engineer/"> the science is solved, but the engineering is broken</a>; true FinOps maturity demands deep, model-level interventions. The following 12 architectural cuts will drastically lower the unit economics of your AI pipeline.</p>
<h2 class="wp-block-heading" id="redesigning-the-training-foundation">Redesigning the training foundation</h2>
<h3 class="wp-block-heading" id="1-fine-tune-dont-train-from-scratch">1. Fine-tune, don’t train from scratch</h3>
<p>Training a foundation model from scratch is computationally prohibitive and rarely necessary for standard enterprise applications. Instead of burning millions of dollars on raw compute, engineering teams should download highly capable, publicly available open-weight models. This baseline transfer learning approach is the mandatory first step when building internal corporate chatbots or domain-specific classifiers. Leveraging existing neural architectures instantly bypasses the massive energy and financial costs associated with initial pre-training phases.</p>
<h3 class="wp-block-heading" id="2-parameter-efficient-fine-tuning-lora">2. Parameter-efficient fine-tuning (LoRA)</h3>
<p>Even standard fine-tuning of a massive language model requires immense VRAM to store optimizer states and gradients. To solve this hardware bottleneck, engineers must implement parameter-efficient fine-tuning (PEFT) techniques like low-rank adaptation (LoRA). By freezing 99 percent of the pre-trained weights and injecting incredibly small trainable adapter layers, LoRA drastically reduces memory overhead. This mathematical shortcut is ideal for deploying highly customized generative AI features, allowing teams to fine-tune billions of parameters on a single consumer-grade GPU.</p>
<pre class="wp-block-code"><code>from peft import LoraConfig, get_peft_model

config = LoraConfig(r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"])
efficient_model = get_peft_model(base_model, config)</code></pre>
<h3 class="wp-block-heading" id="3-warm-start-embeddings-layers">3. Warm-start embeddings/layers</h3>
<p>When you must train specific network components from scratch, importing pre-trained embeddings ensures that only the remaining layers require heavy computational lifting. This warm-start approach slashes early-epoch compute because the model does not have to relearn basic, universal data representations. It should be used immediately in specialized domains, similar to<a href="https://www.entrepreneur.com/science-technology/how-this-startup-is-using-ai-to-bridge-the-health-literacy/501145"> how healthcare startups leverage AI to bridge the health literacy gap</a> using pre-existing medical vocabularies.</p>
<pre class="wp-block-code"><code># PyTorch warm-start example: copy pre-trained embeddings, then freeze them
model.embedding_layer.weight.data.copy_(pretrained_medical_embeddings)
model.embedding_layer.weight.requires_grad = False</code></pre>
<h2 class="wp-block-heading" id="memory-optimization-and-execution-speed">Memory optimization and execution speed</h2>
<h3 class="wp-block-heading" id="4-gradient-checkpointing">4. Gradient checkpointing</h3>
<p>Memory constraints are the primary reason engineers are forced to rent expensive, high-VRAM cloud instances. Introduced by Chen et al., gradient checkpointing saves memory by recomputing certain forward activations during backpropagation rather than storing them all. Engineers should deploy this technique when facing persistent out-of-memory errors, as it allows networks that are 10 times larger to fit on the same GPU at the cost of approximately 20 percent extra compute time.</p>
<pre class="wp-block-code"><code># Enable gradient checkpointing in Hugging Face / PyTorch
model.gradient_checkpointing_enable()</code></pre>
<h3 class="wp-block-heading" id="5-compiler-and-kernel-fusion">5. Compiler and kernel fusion</h3>
<p>Modern deep learning frameworks frequently suffer from memory bandwidth bottlenecks as data is constantly read and written across the hardware. Using graph-level compilers like XLA or PyTorch 2.0 fuses multiple operations into a single GPU kernel. This architectural optimization yields massive throughput improvements and faster execution speeds without requiring manual code changes. Engineers should enable compiler fusion by default on all production training runs to maximize hardware utilization.</p>
<pre class="wp-block-code"><code>import torch

# PyTorch 2.0 compiler fusion
optimized_model = torch.compile(model)</code></pre>
<h3 class="wp-block-heading" id="6-pruning-and-quantization">6. Pruning and quantization</h3>
<p>Deploying a massive, fully precise 16-bit neural network into production often requires renting top-tier cloud instances that destroy an application’s profit margins. Applying algorithmic pruning removes mathematically redundant weights, while quantization compresses the remaining parameters from 16-bit floating points down to 8-bit or 4-bit integers. For instance, if a retail enterprise deploys a customer service chatbot, quantizing the model allows it to run on significantly cheaper, lower-memory GPUs without any noticeable drop in conversational quality. This physical reduction is critical for financially scaling high-traffic applications, directly lowering<a href="https://www.cio.com/article/4132293/the-carbon-cost-of-an-api-call.html"> the carbon cost of an API call</a> when serving thousands of concurrent users.</p>
<pre class="wp-block-code"><code>import torch
import torch.nn.utils.prune as prune

# 1. Prune 20% of the lowest-magnitude weights in a layer
prune.l1_unstructured(model.fc, name="weight", amount=0.2)

# 2. Dynamic quantization (compress float32 to int8)
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)</code></pre>
<h2 class="wp-block-heading" id="smarter-learning-dynamics">Smarter learning dynamics</h2>
<h3 class="wp-block-heading" id="7-curriculum-learning">7. Curriculum learning</h3>
<p>Feeding highly complex, noisy datasets into an untrained neural network forces the optimizer to thrash wildly, wasting expensive compute cycles trying to map chaotic gradients. Curriculum learning solves this by structuring the data pipeline to introduce clean, easily classifiable examples first before gradually scaling up to high-fidelity anomalies. For example, when training an autonomous driving vision model, engineers should initially feed it clear daytime highway images before spending compute on complex, snowy nighttime city intersections. This phased approach allows the network to map core mathematical features cheaply, reaching convergence much faster and with significantly less hardware burn.</p>
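<p>The phased schedule described above can be sketched in a few lines of Python. The difficulty function and the cumulative phase mixing are illustrative choices, not a prescription; in practice "difficulty" might come from sequence length, label noise estimates, or a teacher model's loss.</p>

```python
def build_curriculum(samples, difficulty, num_phases=3):
    """Sort samples easy-to-hard and split them into phases of growing difficulty."""
    ordered = sorted(samples, key=difficulty)
    phase_size = -(-len(ordered) // num_phases)  # ceiling division
    return [ordered[i:i + phase_size] for i in range(0, len(ordered), phase_size)]

def train_with_curriculum(samples, difficulty, train_step):
    """Each phase adds harder examples while keeping earlier (easy) ones in the mix."""
    seen = []
    for phase in build_curriculum(samples, difficulty):
        seen.extend(phase)
        for sample in seen:
            train_step(sample)
```

<p>Keeping earlier phases in the mix (rather than discarding them) avoids catastrophic forgetting of the easy cases while the optimizer adapts to the harder ones.</p>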
<h3 class="wp-block-heading" id="8-knowledge-distillation">8. Knowledge distillation</h3>
<p>Deploying a massive 70-billion parameter model for simple, repetitive tasks is a severe misallocation of enterprise compute resources. Knowledge distillation resolves this by training a highly efficient, lightweight “student” model to strictly mimic the predictive reasoning of the massive “teacher” model. Imagine an e-commerce company needing to run real-time product recommendations directly on a user’s smartphone, where battery and memory are strictly limited. Distillation allows that tiny mobile model to perform with the accuracy of a massive cloud-based architecture, permanently cutting inference costs and avoiding<a href="https://www.vktr.com/ai-technology/the-ai-accuracy-trap/"> the AI accuracy trap</a>.</p>
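<p>At its core, distillation adds a loss term that pulls the student's temperature-softened output distribution toward the teacher's. A dependency-free sketch of that loss, following Hinton et al.'s formulation (the logits and temperature here are illustrative):</p>

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so its gradient magnitude matches the hard-label cross-entropy term."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return T * T * sum(p * math.log(p / q)
                       for p, q in zip(p_teacher, p_student))
```

<p>In training, this term is typically blended with the ordinary cross-entropy on ground-truth labels, so the student learns both the hard targets and the teacher's softer inter-class relationships.</p>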
<h3 class="wp-block-heading" id="9-bayesian-optimization-and-hyperband">9. Bayesian optimization and hyperband</h3>
<p>Standard grid search algorithms waste massive amounts of cloud budget by blindly testing and completing network configurations that are doomed from the start. Smarter hyperparameter search methods, like Bayesian optimization and Hyperband, act as a ruthless financial governor by mathematically predicting and pruning bad trials during the very first epochs. For instance, if a bank is tuning a fraud detection model, Hyperband will instantly kill configurations that show poor early accuracy, redirecting all compute power only to the most promising setups. To further bound these costs, teams can integrate my <a href="https://github.com/Jayachander123/RES-Cost-Aware-Retraining-Framework">RES-Cost-Aware-Retraining-Framework</a>, which is based on recent <a href="https://ieeexplore.ieee.org/document/11429693">peer-reviewed IEEE research</a>.</p>
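<p>The core mechanism behind Hyperband is successive halving: evaluate every configuration on a small budget, discard the worse half, and grow the budget for the survivors. A minimal dependency-free sketch (the scoring callback and the budget schedule are illustrative assumptions):</p>

```python
def successive_halving(configs, partial_eval, budget=1, eta=2):
    """Repeatedly keep the top 1/eta of configs, multiplying the per-trial
    budget (e.g. epochs) by eta each round, until one survivor remains."""
    survivors = list(configs)
    while len(survivors) > 1:
        scored = [(partial_eval(c, budget), c) for c in survivors]
        scored.sort(reverse=True)  # higher score = more promising
        survivors = [c for _, c in scored[:max(1, len(scored) // eta)]]
        budget *= eta
    return survivors[0]
```

<p>Because doomed configurations are killed after the cheapest rounds, most of the compute budget ends up concentrated on the handful of promising setups, which is exactly the behavior described for the fraud-detection example above.</p>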
<h2 class="wp-block-heading" id="infrastructure-and-data-efficiency">Infrastructure and data efficiency</h2>
<h3 class="wp-block-heading" id="10-model-vs-data-parallel-right-sizing">10. Model vs. data-parallel right-sizing</h3>
<p>Improper cluster configuration creates massive network bottlenecks. If you split a moderately sized model across too many GPUs (model parallelism), the processors will spend more time waiting for data to travel across the network cables than actually doing math. Conversely, replicating the entire model across nodes (data parallelism) is highly efficient for processing massive datasets, provided the batch sizes are tuned correctly. A real-world FinOps team must dynamically right-size these parallel strategies based on the specific architecture, ensuring GPUs are never left idling while the network catches up.</p>
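<p>A back-of-envelope model helps make the right-sizing tradeoff concrete: under data parallelism, per-step compute shrinks with the number of GPUs while all-reduce communication stays roughly proportional to model size. The estimator below is a deliberately crude sketch; the hardware numbers and the ring all-reduce cost model are illustrative assumptions, not measured figures.</p>

```python
def data_parallel_step_time(params_gb, flops_per_step, n_gpus,
                            gpu_tflops=300, link_gbps=50):
    """Rough per-step wall time (seconds) for a data-parallel cluster."""
    # Compute shrinks linearly as the batch is sharded across workers.
    compute_s = flops_per_step / (n_gpus * gpu_tflops * 1e12)
    # A ring all-reduce moves roughly 2x the model's parameters per step;
    # convert GB to gigabits and divide by the slowest link's bandwidth.
    comm_s = 2 * params_gb * 8 / link_gbps
    return compute_s + comm_s
```

<p>Once the constant communication term dominates the shrinking compute term, adding more data-parallel workers stops paying off, and model parallelism (or faster interconnects) becomes the relevant lever.</p>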
<h3 class="wp-block-heading" id="11-asynchronous-evaluation">11. Asynchronous evaluation</h3>
<p>Standard training pipelines constantly pause the primary, expensive GPU cluster just to run routine validation checks on the model’s progress. Stopping a massive hardware cluster for twenty minutes every epoch to calculate accuracy metrics is a catastrophic waste of hourly rental fees. By implementing asynchronous evaluation, engineers can offload these validation checks to a separate, much cheaper CPU or low-tier GPU instance. Keeping the primary high-cost GPUs 100 percent busy is a mandatory architectural separation that helps mitigate<a href="https://www.cio.com/article/4113246/beyond-the-cloud-bill-the-hidden-operational-costs-of-ai-governance.html"> the hidden operational costs of AI governance</a>.</p>
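<p>The separation can be sketched with a background worker that consumes checkpoint snapshots from a queue while the training loop keeps going. In production the worker would run on a separate, cheaper instance; <code>validate</code> here is a stand-in for a real metric pass, and the snapshot lists stand in for checkpoints.</p>

```python
import threading
import queue

eval_jobs = queue.Queue()
results = {}

def validate(snapshot):
    """Stand-in for an expensive validation pass over held-out data."""
    return sum(snapshot) / len(snapshot)

def eval_worker():
    """Runs off the critical path; the training GPUs never wait on it."""
    while True:
        epoch, snapshot = eval_jobs.get()
        if snapshot is None:  # sentinel: shut down
            break
        results[epoch] = validate(snapshot)
        eval_jobs.task_done()

worker = threading.Thread(target=eval_worker, daemon=True)
worker.start()

for epoch in range(3):
    weights = [epoch, epoch + 1.0]        # stand-in for a model checkpoint
    eval_jobs.put((epoch, list(weights))) # hand off a copy, keep training
eval_jobs.join()
eval_jobs.put((None, None))
```

<p>The key design point is that the training loop only pays the cost of copying a snapshot into the queue; the accuracy computation happens concurrently on hardware you are not renting by the GPU-hour.</p>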
<h3 class="wp-block-heading" id="12-intelligent-data-sampling-and-selection">12. Intelligent data sampling and selection</h3>
<p>Blindly processing massive datasets forces the optimizer to waste expensive compute cycles on highly redundant, low-quality information. If a visual model has already seen ten thousand identical photos of a standard stop sign, processing the ten-thousand-and-first photo provides zero mathematical value. Using algorithmic sampling to curate an information-rich subset achieves the exact same model performance at a fraction of the hardware cost.</p>
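<p>One simple form of this curation is greedy diversity selection: keep a sample only if it is sufficiently far from everything already kept. The sketch below uses a scalar feature and absolute distance purely for illustration; a real pipeline would use learned embeddings and a vector distance metric.</p>

```python
def select_informative_subset(samples, featurize, min_distance=1.0):
    """Greedy dedup: keep a sample only if its feature is at least
    min_distance away from every sample already kept."""
    kept = []
    for s in samples:
        f = featurize(s)
        if all(abs(f - featurize(k)) >= min_distance for k in kept):
            kept.append(s)
    return kept
```

<p>Near-duplicates (the ten-thousandth stop-sign photo) are dropped before they ever touch a GPU, so the training set shrinks while its information content, and hence model quality, stays essentially intact.</p>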
<h2 class="wp-block-heading" id="conclusion">Conclusion</h2>
<p>Implementing these 12 model-level deep cuts transitions your AI strategy from a brute-force hardware approach to an elegant, software-defined discipline. By combining efficient training loop configurations with the architectural redesigns outlined here, engineering teams can stop throwing expensive GPUs at poorly optimized networks. However, even the most optimized training code will fail if the surrounding enterprise infrastructure is fragile. True operational maturity requires scaling these localized efficiencies across robust deployment architectures, which you can begin building today using the implementation scripts in my <a href="https://github.com/Jayachander123/green-ai-optimization-toolkit">open-source git repository</a>.</p>
<p><strong>This article is published as part of the Foundry Expert Contributor Network.</strong><br /><strong><a href="https://www.infoworld.com/expert-contributor-network/">Want to join?</a></strong></p>
</div>
</div>
</div>
</div>
</div>
<p>Read more here: <a href="https://www.infoworld.com/article/4168496/12-model-level-deep-cuts-to-slash-ai-training-costs.html">https://www.infoworld.com/article/4168496/12-model-level-deep-cuts-to-slash-ai-training-costs.html</a></p>
<p>The post <a href="https://ipv6.net/news/12-model-level-deep-cuts-to-slash-ai-training-costs/">12 model-level deep cuts to slash AI training costs</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Real talk: building with Arduino UNO Q</title>
		<link>https://ipv6.net/news/real-talk-building-with-arduino-uno-q/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Fri, 08 May 2026 08:07:05 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2910094</guid>

					<description><![CDATA[<p>We’re bringing the maker community behind the scenes with a new live format: Built with Arduino, a candid conversation between our own Andrea Richetta, Senior Product Manager at Arduino (for Qualcomm Europe), and Rafik from Kamitronix, the creator behind a smart mirror project built entirely on the Arduino® UNO Q board. No polished demos, no [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/real-talk-building-with-arduino-uno-q/">Real talk: building with Arduino UNO Q</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<figure class="wp-block-image size-large">
<div class="image-post"><img fetchpriority="high" decoding="async" width="1024" height="559" src="https://blog.arduino.cc/wp-content/uploads/2026/05/Arduino.cc-Blogpost-Cover1100x600-1024x559.jpg" alt="" class="wp-image-42081" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/Arduino.cc-Blogpost-Cover1100x600-1024x559.jpg 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/Arduino.cc-Blogpost-Cover1100x600-300x164.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/Arduino.cc-Blogpost-Cover1100x600-768x419.jpg 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/Arduino.cc-Blogpost-Cover1100x600.jpg 1100w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>We’re bringing the maker community behind the scenes with a new live format: <em>Built with Arduino</em>, a candid conversation between our own Andrea Richetta, Senior Product Manager at Arduino (for Qualcomm Europe), and Rafik from <a href="https://www.instagram.com/kamitronix/">Kamitronix</a>, the creator behind a <a href="https://www.instagram.com/reel/DSaVUVSjChc/?utm_source=ig_web_copy_link">smart mirror project</a> built entirely on the Arduino<sup>®</sup> UNO<sup><img decoding="async" src="https://s.w.org/images/core/emoji/17.0.2/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;"></sup> Q board.</p>
<p>No polished demos, no scripted walkthrough. Just an honest, back-and-forth discussion about what it’s actually like to prototype with the UNO Q ecosystem: <a href="https://docs.arduino.cc/software/app-lab/bricks/use-bricks/">Bricks</a>, <a href="https://store.arduino.cc/collections/modulino?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=EU-Pmax&amp;gad_source=1&amp;gad_campaignid=22591755262&amp;gbraid=0AAAAACbEa85-MakvdQDoHiUBKOMljO3ph&amp;gclid=Cj0KCQjw8PDPBhCeARIsAOJwmWWEo_sS5MkOAfrRzyaQc1fQNzMgUbogI6B4EAXVvVkTAmfpA1LbIK8aAisCEALw_wcB">Modulino</a>, <a href="https://docs.arduino.cc/software/app-lab/">App Lab</a> and all.</p>
<p>Over 40 minutes, we’ll dig into the real architectural choices every UNO Q developer faces: what belongs on the Linux side, what belongs in the real-time MCU, and how the latest updates from Arduino<sup>®</sup> App Lab reduce the friction in between. The final 20 minutes will be open for audience questions.</p>
<p>Three live quizzes will keep the session interactive. Come ready to participate: <strong>May 13th · 4:00 PM CET</strong>.</p>
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<div class="wp-block-embed__wrapper">
<iframe title="Built with Arduino -  A live chat with Andrea and Kamitronix" width="500" height="281" src="https://www.youtube.com/embed/zU2P9yFzQq0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
</figure>
<p class="has-text-align-center has-small-font-size"><em>Arduino and UNO and the Arduino logo are trademarks or registered trademarks of Arduino S.r.l.</em></p>
<p>The post <a href="https://blog.arduino.cc/2026/05/08/real-talk-building-with-arduino-uno-q/">Real talk: building with Arduino UNO Q</a> appeared first on <a href="https://blog.arduino.cc/">Arduino Blog</a>.</p>
</div>
<p>Read more here: <a href="https://blog.arduino.cc/2026/05/08/real-talk-building-with-arduino-uno-q/">https://blog.arduino.cc/2026/05/08/real-talk-building-with-arduino-uno-q/</a></p>
<p>The post <a href="https://ipv6.net/news/real-talk-building-with-arduino-uno-q/">Real talk: building with Arduino UNO Q</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>One board, two brains? Three ways a dual architecture board makes building simpler</title>
		<link>https://ipv6.net/news/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Thu, 07 May 2026 16:37:08 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2910037</guid>

					<description><![CDATA[<p>Most embedded projects don’t stay simple for long. You start with a microcontroller (MCU), reading sensors and controlling outputs. Then you add connectivity, maybe a user interface, maybe even AI. At that point, a single MCU starts to feel limiting. So you introduce a Linux-based system. Now you have flexibility – but also a new layer [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/">One board, two brains? Three ways a dual architecture board makes building simpler</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<figure class="wp-block-image size-large">
<div class="image-post"><img fetchpriority="high" decoding="async" width="1024" height="683" src="https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-1024x683.jpeg" alt="" class="wp-image-42072" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-1024x683.jpeg 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-300x200.jpeg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-768x512.jpeg 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-1536x1024.jpeg 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/DSC6445-2048x1365.jpeg 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>Most embedded projects don’t stay simple for long. You start with a microcontroller (MCU), reading sensors and controlling outputs. Then you add connectivity, maybe a user interface, maybe even AI. At that point, a single MCU starts to feel limiting. So you introduce a Linux-based system. Now you have flexibility – but also a new layer of complexity: two processors, two toolchains, and a growing amount of glue code just to keep everything in sync.</p>
<p><strong>You want the flexibility of Linux. You need the precision of real-time control</strong>. The <a href="https://www.arduino.cc/product-uno-q">Arduino<sup>®</sup> UNO<sup><img decoding="async" src="https://s.w.org/images/core/emoji/17.0.2/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;"></sup> Q</a> board is designed to bring these two worlds together and make this friction a thing of the past.</p>
<h2 class="wp-block-heading">A dual-brain architecture gives you the best of two worlds</h2>
<p>UNO Q combines two distinct processing environments on a single board.</p>
<p>A Linux-capable microprocessor (MPU) handles high-level workloads such as networking, AI inference, and application logic. Alongside it, a microcontroller manages real-time I/O, deterministic timing, and direct hardware interaction. This separation is intentional.</p>
<p>The MPU runs tasks that benefit from an operating system: multitasking, connectivity stacks, and model execution. The MCU handles tasks where timing and reliability are critical: reading sensors, generating signals, and controlling actuators.</p>
<figure class="wp-block-image size-full">
<div class="image-post"><img decoding="async" width="908" height="559" src="https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_1.jpg" alt="" class="wp-image-42051" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_1.jpg 908w, https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_1-300x185.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_1-768x473.jpg 768w" sizes="(max-width: 908px) 100vw, 908px"></div>
</figure>
<p>Instead of forcing one processor to do everything, each side does what it’s best at – and the magic happens when the two “talk” to each other through the UNO Q bridge mechanism. </p>
<p>In practice, this means your Python code can interact directly with hardware-level events handled by the microcontroller (such as a button press, change in temperature, movement, etc.), and your MCU can react to high-level decisions made on the Linux side (e.g. updating a web interface, logging data, or triggering an AI-driven response). Without complex setup, <strong>you’re working within a single, coordinated architecture.</strong></p>
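<p>The event-driven split described above can be pictured with a small toy loop. To be clear, this is a simulation of the pattern, not the actual App Lab bridge API; the queue stands in for the UNO Q bridge, and all event and command names are made up for illustration.</p>

```python
import queue

# Stand-in for the UNO Q bridge: on real hardware the MCU sketch publishes
# events and the Linux-side Python application receives them. Here a queue
# simulates that channel; event/command names are illustrative only.
bridge = queue.Queue()

def on_mcu_event(event):
    """High-level decision logic running on the Linux/MPU side."""
    if event == "button_pressed":
        return "toggle_led"   # command the MCU would execute in real time
    if event == "temp_high":
        return "fan_on"
    return "ignore"

# MCU side (simulated): a hardware-level event crosses the bridge...
bridge.put("button_pressed")

# ...and the MPU side consumes it and sends a decision back
command = on_mcu_event(bridge.get())
print(command)  # prints: toggle_led
```

<p>The point of the pattern: the MCU never blocks on application logic, and the Python side never touches timing-critical I/O directly.</p>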
<h2 class="wp-block-heading"><strong>Arduino<sup>®</sup> App Lab offers a unified application model</strong></h2>
<p>The dual-brain architecture enables a different coding experience – so the real shift is not just in the hardware, but in how you develop for it.</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img decoding="async" width="1024" height="576" src="https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-1024x576.png" alt="" class="wp-image-42070" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-1024x576.png 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-300x169.png 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-768x432.png 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-1536x864.png 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-6-2048x1152.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>With Arduino App Lab, the MPU and MCU are exposed as parts of a single application. </p>
<p>Arduino App Lab provides developers with a unified, single-console environment, eliminating the need to switch between separate terminals or tools to monitor the two distinct environments. Within this single interface, developers can watch the logging output from both the primary <em>application</em> processor and the <em>real-time</em> microcontroller in parallel, giving a complete, time-correlated view of the entire system’s execution flow.</p>
<p>From a developer perspective, this <strong>removes the need to manually manage communication or synchronization between two separate systems.</strong></p>
<p>The best part? If you want to see how Arduino App Lab works behind the scenes, the GitHub repo contains all the source code: no secrets here! <a href="https://github.com/arduino/arduino-app-lab">If you’re curious, just check it out here</a>.</p>
<h2 class="wp-block-heading">Arduino App Lab AI workflows bridge data insight and real-world action</h2>
<p>Edge AI often becomes complex at the integration stage. Running a model is one thing, but connecting it to real-world signals, managing timing, and triggering actions reliably is where things usually break down.</p>
<p>This is exactly where the dual-brain architecture of the UNO Q changes the game. By combining an MPU running Linux with an MCU handling real-time control, you can naturally split AI workflows: the MPU takes care of model execution and orchestration, while the MCU reigns over deterministic land. </p>
<p>It’s not just about running AI; it’s about making it fit and work reliably inside a real system.</p>
<figure class="wp-block-image size-full">
<div class="image-post"><img loading="lazy" decoding="async" width="919" height="308" src="https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_2-1.jpg" alt="" class="wp-image-42052" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_2-1.jpg 919w, https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_2-1-300x101.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/arduino_flow_chart_2-1-768x257.jpg 768w" sizes="auto, (max-width: 919px) 100vw, 919px"></div>
</figure>
<p>Arduino App Lab acts as the bridge between these two worlds, enabling seamless data exchange and coordinated execution across the MPU and MCU. <a href="https://blog.arduino.cc/2026/03/04/train-and-deploy-your-own-ai-models-in-arduino-app-lab-now-fully-integrated-with-edge-impulse">With the integration of Edge Impulse</a>, the path from model training to deployment becomes much more direct. You can move from data collection to inference without reworking your entire stack.</p>
<p>Now you can build and deploy custom models in a unified flow: start from Arduino App Lab’s “Train New Model” option, move to Edge Impulse for training and validation, and deploy back to Arduino App Lab – ready to run across the dual-brain system, from insight to action.</p>
<div class="wp-block-image">
<figure class="aligncenter size-full is-resized">
<div class="image-post"><img loading="lazy" decoding="async" width="260" height="128" src="https://blog.arduino.cc/wp-content/uploads/2026/05/unnamed-4-1.png" alt="" class="wp-image-42074" style="aspect-ratio:2.0314979855939446;width:504px;height:auto"></div>
</figure>
</div>
<p>You can even switch between different models with a simple click of the mouse!</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img loading="lazy" decoding="async" width="1024" height="574" src="https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1-1024x574.png" alt="" class="wp-image-42079" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1-1024x574.png 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1-300x168.png 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1-768x431.png 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1-1536x861.png 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-7-1.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>If you want to explore the full workflow step by step, you can dive deeper into the <a href="https://docs.arduino.cc/software/app-lab/integrations/ai-models/">dedicated article on training and deploying AI models in App Lab</a>, as well as the overview of the expanding UNO Q ecosystem.</p>
<h2 class="wp-block-heading">From architecture to applications</h2>
<p>This dual-brain approach is not just theoretical – you can already see it in action across different types of projects.</p>
<p>Projects range from <a href="https://projecthub.arduino.cc/AndreaRichetta/how-to-install-node-red-on-uno-q-using-docker-0d9c78">installing widely available tools like Node-RED</a> to vision-based inspection systems, where image processing runs on the Linux side while the microcontroller handles precise triggering and control. This lets you process complex visual data without sacrificing timing accuracy. You can even process images and short videos with text prompts to generate descriptions or answers, like in this project where <a href="https://projecthub.arduino.cc/marc-edgeimpulse/running-local-llms-and-vlms-on-the-arduino-uno-q-with-yzma-74e288">local LLMs and VLMs run on UNO Q</a>.</p>
<p>In energy monitoring and smart sensing applications – like <a href="https://projecthub.arduino.cc/jumaanji_2004/afa2026-physicalai-accident-response-system-0f4bdf">this accident response system that leverages physical AI</a> – the MCU continuously samples real-world signals, while the MPU aggregates data, runs analytics, and exposes results through services or dashboards.</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img loading="lazy" decoding="async" width="1024" height="768" src="https://blog.arduino.cc/wp-content/uploads/2026/05/image-1-1024x768.jpeg" alt="" class="wp-image-42078" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/image-1-1024x768.jpeg 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-1-300x225.jpeg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-1-385x289.jpeg 385w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-1-768x576.jpeg 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/image-1.jpeg 1280w" sizes="auto, (max-width: 1024px) 100vw, 1024px"></div>
</figure>
<h2 class="wp-block-heading">Three reasons, one simpler way to build</h2>
<p>When you put it all together, <a href="https://www.arduino.cc/product-uno-q">UNO Q</a> makes building complex systems simpler for three clear reasons.</p>
<p>First, a single, coordinated setup makes your builds more straightforward and efficient. You have two different brains, each one doing what it’s best at.</p>
<p>Second, the unified application model with Arduino App Lab turns two processors into one coherent development experience. You write, monitor, and debug everything from a single environment – no more switching between terminals, no different hardware for different tasks, no more glue code just to keep the two sides talking.</p>
<p>Third, AI workflows actually fit the system. With Edge Impulse, Qualcomm<sup>®</sup> AI Hub, and Hugging Face all able to integrate into the flow, you can go from data collection to a deployed model without rebuilding your stack along the way. The microprocessor runs the inference, the microcontroller handles the signals, and Arduino App Lab keeps them all together using code and Bricks – so edge AI stops being an integration headache and starts being just another part of your application.</p>
<p>Flexibility of Linux, precision of real-time control, and a development ecosystem that is able to handle every side of your next project without you having to jump around between platforms: it’s all in a single board, designed to make building simpler from day one.</p>
<p><em>Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Arduino and UNO are trademarks or registered trademarks of Arduino S.r.l.</em></p>
<p>The post <a href="https://blog.arduino.cc/2026/05/07/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/">One board, two brains? Three ways a dual architecture board makes building simpler</a> appeared first on <a href="https://blog.arduino.cc/">Arduino Blog</a>.</p>
</div>
<p>Read more here: <a href="https://blog.arduino.cc/2026/05/07/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/">https://blog.arduino.cc/2026/05/07/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/</a></p>
<p>The post <a href="https://ipv6.net/news/one-board-two-brains-three-ways-a-dual-architecture-board-makes-building-simpler/">One board, two brains? Three ways a dual architecture board makes building simpler</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>This toy box does something incredible with AI-generated video</title>
		<link>https://ipv6.net/news/this-toy-box-does-something-incredible-with-ai-generated-video/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Thu, 07 May 2026 11:37:05 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2909988</guid>

					<description><![CDATA[<p>AI video generation may be impressive on a technical level, but typing out a prompt doesn’t exactly feel like creative work. Hun Han wondered how he could make that more of a collaborative experience and that led him to develop something pretty incredible: the Hush toy box. Hush is a small, enclosed lightbox for photography. [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/this-toy-box-does-something-incredible-with-ai-generated-video/">This toy box does something incredible with AI-generated video</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<figure class="wp-block-image size-large">
<div class="image-post"><img fetchpriority="high" decoding="async" width="1024" height="683" src="https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg-1024x683.avif" alt="" class="wp-image-42056" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg-1024x683.avif 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg-300x200.avif 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg-768x512.avif 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg-1536x1024.avif 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/bHUph2S1E3B3oYLfobSeILvj72Y.jpg.avif 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>AI video generation may be impressive on a technical level, but typing out a prompt doesn’t exactly feel like creative work. Hun Han wondered how he could make that more of a collaborative experience and that led him to develop something pretty incredible: <a href="https://hunhan.xyz/hush">the Hush toy box</a>.</p>
<p><a href="https://www.creativeapplications.net/member/hush-bringing-inanimate-objects-to-life/">Hush is a small, enclosed lightbox</a> for photography. Users pose inanimate objects — action figures, clay models, plants, and whatever else they can think of — inside the box, then close the lid. After that, the magic happens: Hush snaps a photo of the scene inside the box and feeds it as a prompt to a video generation AI.</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img decoding="async" width="1024" height="576" src="https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy-1024x576.jpg" alt="" class="wp-image-42059" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy-1024x576.jpg 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy-300x169.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy-768x432.jpg 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy-1536x864.jpg 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/6FFC9oZUXWcucRPBZl1UnYI.jpg-copy.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>The result is often fantastic, as AI models are now at a point where they do a very good job of generating and rendering realistic video. And in this case, that realistic video incorporates the real-world items in the box. Imagine your LEGO minifigs battling a clay dragon that you sculpted. That is exactly the kind of video Hush can produce, and you get to be part of the creative process by deciding what to put in the box and how to pose those things within the scene. You also get control over day versus night and the simulated weather in the scene.</p>
<p>Kling v2.5 Turbo does the heavy lifting of video generation, and a PC connects to it via the Replicate API. The physical controls, including the weather selection dial and the Hall sensor that detects lid closure, connect to the PC through an Arduino. That board also controls the LED strips that illuminate Hush’s interior. The PC snaps a photo of the scene with OpenCV and a webcam. Finally, the rendered video results display on a repurposed iPhone 6, which is visible through a peephole on the top of the box.</p>
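<p>Piecing together the components above, the PC-side glue might look like this sketch. The serial message format, field names, and the Replicate step are assumptions made for illustration; only the OpenCV webcam capture mirrors what the article describes.</p>

```python
def parse_lid_event(line):
    """Parse a status line the Arduino might send when the Hall sensor
    detects the lid closing, e.g. 'LID_CLOSED weather=rain time=night'.
    (This message format is hypothetical, not taken from the project.)"""
    parts = line.strip().split()
    if not parts or parts[0] != "LID_CLOSED":
        return None
    return dict(kv.split("=", 1) for kv in parts[1:])

def snap_photo(path="scene.jpg", camera_index=0):
    """Grab one frame from the webcam looking into the lightbox."""
    import cv2  # deferred import: the parsing logic above runs without OpenCV
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("webcam capture failed")
    cv2.imwrite(path, frame)
    return path

settings = parse_lid_event("LID_CLOSED weather=rain time=night")
print(settings)  # prints: {'weather': 'rain', 'time': 'night'}

# From here the photo would be handed to Kling v2.5 Turbo via Replicate;
# the model identifier is omitted because it is project-specific:
# import replicate
# replicate.run("<kling-model>", input={"start_image": open(snap_photo(), "rb"),
#     "prompt": f"toybox scene, {settings['weather']}, {settings['time']}"})
```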
<figure class="wp-block-image size-large">
<div class="image-post"><img decoding="async" width="1024" height="683" src="https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1-1024x683.avif" alt="" class="wp-image-42060" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1-1024x683.avif 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1-300x200.avif 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1-768x512.avif 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1-1536x1024.avif 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/VO55P8Dg8AGvJCx9T8CqpOWCJls.png-1.avif 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<p>When it comes to whimsy and entertainment, this might just be the best use of AI that we’ve come across. </p>
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Hush" width="500" height="281" src="https://www.youtube.com/embed/S7ws2NcmpiA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
</figure>
<p>The post <a href="https://blog.arduino.cc/2026/05/07/this-toy-box-does-something-incredible-with-ai-generated-video/">This toy box does something incredible with AI-generated video</a> appeared first on <a href="https://blog.arduino.cc/">Arduino Blog</a>.</p>
</div>
<p>Read more here: <a href="https://blog.arduino.cc/2026/05/07/this-toy-box-does-something-incredible-with-ai-generated-video/">https://blog.arduino.cc/2026/05/07/this-toy-box-does-something-incredible-with-ai-generated-video/</a></p>
<p>The post <a href="https://ipv6.net/news/this-toy-box-does-something-incredible-with-ai-generated-video/">This toy box does something incredible with AI-generated video</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SCION: The overlay network for bankers</title>
		<link>https://ipv6.net/news/scion-the-overlay-network-for-bankers/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Thu, 07 May 2026 05:07:06 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2909949</guid>

					<description><![CDATA[<p>SCION has both supporters and critics, but it faces a major challenge: Replacing BGP while competing in an environment where network decisions are driven more by carrier costs than by strategic priorities. Read more here: https://blog.apnic.net/2026/05/07/scion-the-overlay-network-for-bankers/</p>
<p>The post <a href="https://ipv6.net/news/scion-the-overlay-network-for-bankers/">SCION: The overlay network for bankers</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>SCION has both supporters and critics, but it faces a major challenge: Replacing BGP while competing in an environment where network decisions are driven more by carrier costs than by strategic priorities.</div>
<p>Read more here: <a href="https://blog.apnic.net/2026/05/07/scion-the-overlay-network-for-bankers/">https://blog.apnic.net/2026/05/07/scion-the-overlay-network-for-bankers/</a></p>
<p>The post <a href="https://ipv6.net/news/scion-the-overlay-network-for-bankers/">SCION: The overlay network for bankers</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>HAUI 3Gang Touch Display is a 7-inch wall-mount Home Assistant dashboard with MQTT support</title>
		<link>https://ipv6.net/news/haui-3gang-touch-display-is-a-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Thu, 07 May 2026 03:37:04 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2909947</guid>

					<description><![CDATA[<p>The HAUI 3Gang Touch Display is a wall-mount smart home control dashboard designed specifically to run Home Assistant, OpenHAB, Domoticz, or any other web-based home automation dashboard. Built around a Raspberry Pi 3B+ since it’s one of the cheapest options following Pi 4/5 price hikes, the HAUI (Home Assistant User Interface) replaces standard wall switches [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/haui-3gang-touch-display-is-a-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/">HAUI 3Gang Touch Display is a 7-inch wall-mount Home Assistant dashboard with MQTT support</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<div><img width="720" height="480" src="https://www.cnx-software.com/wp-content/uploads/2026/05/HAUI-3Gang-7-inch-Display-Wall-Mount-Dashboard-WiFi-720x480.jpg" class="attachment-medium size-medium wp-post-image" alt="HAUI 3Gang 7-inch Display Wall Mount Dashboard WiFi" style="margin-bottom: 10px;" decoding="async" fetchpriority="high" srcset="https://www.cnx-software.com/wp-content/uploads/2026/05/HAUI-3Gang-7-inch-Display-Wall-Mount-Dashboard-WiFi-720x480.jpg 720w, https://www.cnx-software.com/wp-content/uploads/2026/05/HAUI-3Gang-7-inch-Display-Wall-Mount-Dashboard-WiFi-300x200.jpg 300w, https://www.cnx-software.com/wp-content/uploads/2026/05/HAUI-3Gang-7-inch-Display-Wall-Mount-Dashboard-WiFi-768x512.jpg 768w, https://www.cnx-software.com/wp-content/uploads/2026/05/HAUI-3Gang-7-inch-Display-Wall-Mount-Dashboard-WiFi.jpg 1200w" sizes="100vw"></div>
<p>The HAUI 3Gang Touch Display is a wall-mount smart home control dashboard designed specifically to run Home Assistant, OpenHAB, Domoticz, or any other web-based home automation dashboard. Built around a Raspberry Pi 3B+, since it’s one of the cheapest options following the Pi 4/5 price hikes, the HAUI (Home Assistant User Interface) replaces standard wall switches and fits into a standard single-gang electrical box, while its front panel spans roughly a three-gang footprint, hence the “3Gang” name. It runs a customized version of the FullPageOS Linux distribution with a Chromium browser in kiosk mode and includes built-in MQTT integration along with SSH access for advanced users.</p>
<p>HAUI Touch Display specifications:</p>
<ul>
<li>Main Controller – Raspberry Pi 3B+</li>
<li>Display – 7-inch capacitive touchscreen</li>
<li>Connectivity – Wi-Fi on Raspberry Pi (configured via initial setup wizard)</li>
<li>Power – Input Voltage – 120 VAC ±10%; Maximum Current – 1 A; Internal USB charger module (Line/Neutral wiring, no ground [&#8230;]</li>
</ul>
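<p>Given the built-in MQTT integration mentioned above, a control script on the Pi could take roughly the shape below. The topic names, payload format, and broker host are invented for the example; the commented-out wiring uses the real paho-mqtt client API, but treat the whole thing as a sketch rather than HAUI’s actual implementation.</p>

```python
import json

def handle_message(topic, payload):
    """Map an incoming MQTT message to a display action.
    Topics and payloads here are hypothetical, not HAUI's documented ones."""
    if topic == "haui/display/power":
        return {"action": "backlight", "on": payload.strip().lower() == "on"}
    if topic == "haui/display/url":
        return {"action": "navigate", "url": payload.strip()}
    return {"action": "ignore"}

# Hooking this up to a broker with paho-mqtt would look like:
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.on_message = lambda c, u, msg: print(
#     json.dumps(handle_message(msg.topic, msg.payload.decode())))
# client.connect("broker.local")        # hypothetical broker host
# client.subscribe("haui/display/#")
# client.loop_forever()

print(json.dumps(handle_message("haui/display/power", "ON")))
```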
<p>The post <a href="https://www.cnx-software.com/2026/05/07/haui-3gang-touch-display-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/">HAUI 3Gang Touch Display is a 7-inch wall-mount Home Assistant dashboard with MQTT support</a> appeared first on <a href="https://www.cnx-software.com/">CNX Software &#8211; Embedded Systems News</a>.</p>
</div>
<p>Read more here: <a href="https://www.cnx-software.com/2026/05/07/haui-3gang-touch-display-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/">https://www.cnx-software.com/2026/05/07/haui-3gang-touch-display-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/</a></p>
<p>The post <a href="https://ipv6.net/news/haui-3gang-touch-display-is-a-7-inch-wall-mount-home-assistant-dashboard-with-mqtt-support/">HAUI 3Gang Touch Display is a 7-inch wall-mount Home Assistant dashboard with MQTT support</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When DNSSEC goes wrong: how we responded to the .de TLD outage</title>
		<link>https://ipv6.net/news/when-dnssec-goes-wrong-how-we-responded-to-the-de-tld-outage/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Wed, 06 May 2026 20:37:04 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2909908</guid>

					<description><![CDATA[<p>On May 5, 2026, at roughly 19:30 UTC, DENIC, the registry operator for the .de country-code top-level domain (TLD), started publishing incorrect DNSSEC signatures for the .de zone. Any validating DNS resolver receiving these signatures was required by the DNSSEC specification to reject them and return SERVFAIL to clients, including 1.1.1.1, the public DNS resolver [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/when-dnssec-goes-wrong-how-we-responded-to-the-de-tld-outage/">When DNSSEC goes wrong: how we responded to the .de TLD outage</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<p>On May 5, 2026, at roughly 19:30 UTC, DENIC, the registry operator for the <code>.de</code> country-code top-level domain (TLD), started publishing incorrect DNSSEC signatures for the <code>.de</code> zone. Any validating DNS resolver receiving these signatures was required by the DNSSEC specification to reject them and return SERVFAIL to clients, including <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1</u></a>, the public DNS resolver operated by Cloudflare. </p>
<p>The country-code top-level domain for Germany, <code>.de</code>, is one of the largest on the Internet. On <a href="https://radar.cloudflare.com/tlds?dateRange=7d"><u>Cloudflare Radar</u></a>, it consistently ranks among the most broadly queried TLDs globally. An outage at this level of the DNS hierarchy has the potential to make millions of domains unreachable.</p>
<p>In this post, we’ll walk through what we saw, the impact of these events, and how we applied temporary mitigations while DENIC resolved the issue.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hF64h72z4oKRg28w0mDJm/7f535cf687750f9ea730c27fa5e729e3/BLOG-3309_2.png"><br />
          </figure>
<div>
<h2>How DNSSEC works</h2>
    </div>
<p><a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/"><u>DNSSEC</u></a> (Domain Name System Security Extensions) adds cryptographic authentication to DNS. When a zone is signed with DNSSEC, each set of records is accompanied by a digital signature known as an RRSIG record that lets a resolver verify the records haven’t been tampered with. Unlike encrypted DNS protocols, such as DNS over TLS (DoT) and DNS over HTTPS (DoH), DNSSEC is about integrity, not privacy. The records are visible, but their authenticity can be proven.</p>
<p>What makes DNSSEC unique is that the signatures travel together with the records they protect. This means integrity can be verified regardless of how many caches or hops a response has passed through. A cached record is just as verifiable as a fresh one.</p>
<p>DNSSEC is built on a chain of trust. Starting at the root zone, whose trust anchor is hard-coded into the resolvers, each zone delegates trust to child zones via Delegation Signer (DS) records. A DS record in the parent zone contains a cryptographic hash of a public key in the child zone. When a resolver validates <code>example.de</code> it verifies the chain: root trusts <code>.de</code>, <code>.de</code> trusts <code>example.de</code>. A break anywhere in that chain causes validation to fail for everything below it, which is why a misconfiguration at a TLD like <code>.de</code> affects every domain under it.</p>
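<p>The DS-to-DNSKEY link can be sketched in a few lines. Per RFC 4034, the DS digest is a hash over the child zone&#8217;s owner name in DNS wire format concatenated with the DNSKEY RDATA; the key material below is fabricated for illustration:</p>

```python
# Sketch of how a validator checks a parent's DS record against a child's
# DNSKEY, per RFC 4034: the DS digest (type 2 = SHA-256) is a hash over the
# owner name in wire format plus the DNSKEY RDATA. Key bytes are made up.
import hashlib
import struct

def wire_name(name: str) -> bytes:
    """Encode a domain name in DNS wire format (length-prefixed labels)."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode()
    return out + b"\x00"

def dnskey_rdata(flags: int, protocol: int, algorithm: int, pubkey: bytes) -> bytes:
    # DNSKEY RDATA layout: flags (2 bytes), protocol (1), algorithm (1), key.
    return struct.pack("!HBB", flags, protocol, algorithm) + pubkey

def ds_digest(owner: str, rdata: bytes) -> str:
    return hashlib.sha256(wire_name(owner) + rdata).hexdigest()

# A fabricated KSK for "de." -- flags value 257 marks a Key Signing Key.
rdata = dnskey_rdata(257, 3, 13, b"\x01\x02\x03\x04")
digest = ds_digest("de.", rdata)
print(digest)
```

<p>The parent zone&#8217;s DS record carries this digest; a resolver trusts the child&#8217;s DNSKEY only when the hashes match, which is how each link in the chain of trust is anchored.</p>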
<p>Zones typically use two types of keys: a Zone Signing Key (ZSK), used to sign the zone’s records, and a Key Signing Key (KSK), used to sign the ZSK itself. The KSK’s public key is what the parent zone’s DS record points to, anchoring the chain of trust. Rotating a ZSK is relatively straightforward: generate a new key, re-sign the zone’s records, and wait for caches to expire. Rotating a KSK is more involved, because the parent’s DS record must also be updated, often requiring coordination with a registrar or registry.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EDg7LKirRAVrzXCYprNIv/f14a9e3a24595d898cc9a650e9101fdd/image13.png"><br />
          </figure>
<p>During a key rotation, there is a critical window where the old key is being phased out and the new one phased in. If the signatures published in the zone are made with a key that resolvers cannot verify against the zone’s published DNSKEY records, whether because the signing step failed, the timing was wrong, or the new key wasn’t fully distributed yet, resolvers have no choice but to reject the responses and return SERVFAIL.</p>
<div>
<h2>What we saw</h2>
    </div>
<p>On May 5, 2026, at roughly 19:30 UTC, DENIC, the operator for the <code>.de</code> TLD, started publishing incorrect DNSSEC signatures for the <code>.de</code> zone. Any validating resolver receiving these records was required by the DNSSEC specification to reject them and return SERVFAIL. 1.1.1.1 was no exception.</p>
<p>The graph below shows the response codes 1.1.1.1 returned for <code>.de</code> queries during the incident.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78zFXArtjyc8vcUup4zr9L/4207aa01b3caad16392266b4c32037e7/BLOG-3309_4.png"><br />
          </figure>
<p>After the immediate spike at 19:30 UTC, the SERVFAIL rate climbed steadily over the following three hours: as each domain&#8217;s cached records expired and resolvers went back to DENIC for fresh copies, they received broken signatures and started failing.</p>
<p>Also visible is a large increase in query volume. This is typical during DNS incidents, as clients retry failed queries, often three or more times, inflating the raw numbers. The SERVFAIL rate looks more alarming than the actual user impact, as many of those queries represent the same user retrying the same domain.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KpMo46Phe5HtxmP34FYMK/46a1281a625760f58d592cbde91943f8/BLOG-3309_5.png"><br />
          </figure>
<p>What might be surprising is that the NOERROR rate stayed relatively stable throughout the incident. That&#8217;s “serve stale” at work, which we&#8217;ll cover in the next section.</p>
<div>
<h2>Serve stale</h2>
    </div>
<p>Recursive resolvers cache the records they receive from authoritative nameservers for the duration of each record&#8217;s TTL (Time-to-Live). While a record is cached, the resolver serves it directly without going back to the authoritative nameserver. When the TTL expires, the resolver fetches a fresh copy and re-caches it.</p>
<p>During the outage, queries that required fresh resolution returned SERVFAIL, because the DNSSEC signatures were broken and the resolver correctly rejected them. But many <code>.de</code> records were still sitting in cache from before the incident began. Rather than immediately discarding those and returning SERVFAIL to users, 1.1.1.1 continued serving them past their TTL. This is called “serving stale.”</p>
<p>1.1.1.1 implements <a href="https://datatracker.ietf.org/doc/html/rfc8767"><u>RFC 8767</u></a>, which formalizes this behavior. When upstream resolution fails, a resolver may continue serving expired cached records rather than returning an error. This significantly cushions the impact of an upstream outage, buying time for operators to respond.</p>
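<p>The behavior can be modeled as a cache that falls back to an expired entry when upstream resolution fails. The sketch below is a toy illustration of the RFC 8767 idea, not 1.1.1.1&#8217;s implementation:</p>

```python
# Toy model of RFC 8767 "serve stale": when the upstream lookup fails,
# fall back to an expired cache entry instead of returning an error.
import time

class StaleCache:
    def __init__(self):
        self._store = {}  # name -> (answer, expiry_timestamp)

    def put(self, name, answer, ttl):
        self._store[name] = (answer, time.time() + ttl)

    def resolve(self, name, upstream):
        entry = self._store.get(name)
        if entry and entry[1] > time.time():
            return entry[0]          # fresh cache hit
        try:
            answer = upstream(name)  # normal recursive resolution
        except Exception:
            if entry:
                return entry[0]      # stale, but better than SERVFAIL
            raise                    # nothing cached: surface the failure
        self.put(name, answer, ttl=300)
        return answer

cache = StaleCache()
cache.put("example.de", "192.0.2.1", ttl=-1)  # already expired

def broken_upstream(name):
    # Stand-in for the incident: every fresh lookup fails validation.
    raise RuntimeError("SERVFAIL: bogus DNSSEC signature")

print(cache.resolve("example.de", broken_upstream))  # -> 192.0.2.1
```

<p>This is why the NOERROR rate held up during the incident: answers cached before 19:30 UTC kept being served even after their TTLs expired.</p>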
<p>The result is visible in the graph below, which shows response codes for <code>.de</code> queries during the incident with stale-served responses excluded. Without them, the NOERROR rate drops steadily from 19:30 onward; the difference represents queries that received good answers only because their records were still in cache.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YUtnXiFixcdswxtGik46r/78082f4b4439130cf23ff1473448781a/BLOG-3309_6.png"><br />
          </figure>
<div>
<h2>Our mitigation</h2>
    </div>
<p>While the issue was largely out of our control, and serve stale was doing its job, there was still real impact for many users. Fortunately, there were actions we could take to improve the situation.</p>
<div>
<h3>Negative Trust Anchors</h3>
    </div>
<p><a href="https://datatracker.ietf.org/doc/html/rfc7646"><u>RFC 7646</u></a> defines the concept of a Negative Trust Anchor (NTA). In normal DNSSEC operation, a validating resolver maintains a set of trust anchors: public keys at the root of the chain of trust. In practice this is the root zone&#8217;s key, from which every signed child zone chains via DS records. When the cryptographic signatures linking the chain together are broken, responses are rejected and result in SERVFAIL. An NTA is an explicit exception: it tells the resolver to treat a specific zone as if it were unsigned, bypassing validation for names under that zone.</p>
<figure>
          <img decoding="async" src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZPkgIvIf1R9rlScLS7ofh/3483daabde429f99e5ca3bc2f6b5709f/BLOG-3309_7.png"><br />
          </figure>
<p>NTAs exist precisely for these types of incidents. When a TLD operator publishes broken signatures, every DNSSEC-validating resolver is forced to return SERVFAIL for every domain under that TLD. Not because of anything wrong with those domains themselves, but because their parent zone is misconfigured. Continuing to return SERVFAIL in that situation provides no security value: the failure is already known, public, and being fixed. RFC 7646 explicitly names TLD misconfiguration as the primary use case for NTAs.</p>
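<p>Conceptually, an NTA is just an exception list consulted before the validator declares an answer bogus. Off-the-shelf resolvers expose this directly (for example, BIND&#8217;s <code>rndc nta</code> command or Unbound&#8217;s <code>domain-insecure</code> option); the toy sketch below illustrates the check itself, not any real resolver&#8217;s code:</p>

```python
# Sketch of a Negative Trust Anchor check: before failing validation, the
# resolver tests whether the queried name falls under any NTA zone and, if
# so, downgrades the answer to "insecure" instead of rejecting it.

ntas = {"de."}  # zones temporarily exempted from DNSSEC validation

def under_zone(name: str, zone: str) -> bool:
    name = name.lower().rstrip(".") + "."
    zone = zone.lower()
    return name == zone or name.endswith("." + zone)

def validate(name: str, signature_ok: bool) -> str:
    if signature_ok:
        return "SECURE"
    if any(under_zone(name, z) for z in ntas):
        return "INSECURE"   # NTA hit: answer passes, unvalidated
    return "SERVFAIL"       # bogus and no exception applies

print(validate("example.de", signature_ok=False))   # -> INSECURE
print(validate("example.com", signature_ok=False))  # -> SERVFAIL
```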
<div>
<h3>What we actually deployed</h3>
    </div>
<p>For 1.1.1.1 we have our own resolver, referred to as <a href="https://blog.cloudflare.com/big-pineapple-intro/"><u>Big Pineapple</u></a>, which also powers 1.1.1.1 for Families, Gateway DNS, DNS Firewall, and more. At this time, we have not implemented a native NTA mechanism. Instead, we used an existing override rule mechanism to mark <code>.de</code> as an insecure zone, which causes all <code>.de</code> queries to be resolved as if they did not have DNSSEC enabled. This is functionally equivalent to an NTA, though it is not formally defined in any RFC.</p>
<p>The decision to bypass DNSSEC is a deliberate tradeoff. Without DNSSEC validation, <code>.de</code> domains become vulnerable to <a href="https://www.cloudflare.com/en-gb/learning/dns/dnssec/how-dnssec-works/"><u>genuine attacks</u></a> for the duration of the incident. We judged this acceptable because the signing failure was widespread, publicly confirmed, and affected every validating resolver on the Internet equally. As it was put in our internal incident room: “<i>There is no user of 1.1.1.1 resolving a .de name right now who would prefer a SERVFAIL over an unvalidated response</i>.”</p>
<p>We rolled out our mitigation at 22:17 UTC, which marked the end of impact for 1.1.1.1. We communicated this with fellow DNS operators in the <a href="https://www.dns-oarc.net/oarc/services/chat"><u>DNS-OARC Mattermost</u></a>.</p>
<div>
<h3>Origin resolution mitigations</h3>
    </div>
<p>While all Internet users can access our 1.1.1.1 resolver, we have a particular responsibility to customers using our CDN platform services. Those with <code>.de</code> origin names were also affected by this outage.</p>
<p>Cloudflare operates a separate internal resolver for origin resolution, distinct from our publicly available 1.1.1.1 service. To mitigate impact we applied a similar NTA for <code>.de</code> on the internal resolver service, restoring origin connectivity for affected customers.</p>
<div>
<h3>Extended DNS Errors</h3>
    </div>
<p>Before our mitigation, queries that couldn&#8217;t be served from cache received a SERVFAIL response from 1.1.1.1. Each SERVFAIL included an Extended DNS Error (EDE) code, defined in <a href="https://datatracker.ietf.org/doc/html/rfc8914"><u>RFC 8914</u></a>, which gives clients more detail about what went wrong. </p>
<p>Some resolvers returned EDE 6 (DNSSEC Bogus) with a descriptive message pointing directly at the broken signature. This is the correct behavior:</p>
<pre><code>EDE: 6 (DNSSEC Bogus): RRSIG with malformed signature found for example.de/nsec3 (keytag=33834)
</code></pre>
<p>1.1.1.1, on the other hand, returned EDE 22 (No Reachable Authority), which on the surface suggests a connectivity problem with the upstream nameservers rather than a DNSSEC validation failure.  </p>
<p>The cause is a bug in how we propagate DNSSEC EDE codes up from our trust chain verifier. When the verifier detects a bogus signature it creates the DNSSEC Bogus EDE code, but this is never inserted into the response. Instead, the outer layer of the resolver sees a problem with recursive resolution with no error code and falls back to reporting “No Reachable Authority.” This obscures the underlying DNSSEC cause.</p>
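<p>For illustration, an EDE travels as EDNS0 option 15: a 2-byte info-code followed by optional UTF-8 extra text (RFC 8914). The sketch below encodes and decodes one; it is a simplified illustration, not our resolver&#8217;s code:</p>

```python
# Sketch of an Extended DNS Error (RFC 8914) EDNS0 option: option-code 15,
# option-length, then a 2-byte INFO-CODE followed by optional EXTRA-TEXT.
import struct

EDE_OPTION_CODE = 15
DNSSEC_BOGUS = 6
NO_REACHABLE_AUTHORITY = 22

def encode_ede(info_code: int, extra_text: str = "") -> bytes:
    payload = struct.pack("!H", info_code) + extra_text.encode("utf-8")
    return struct.pack("!HH", EDE_OPTION_CODE, len(payload)) + payload

def decode_ede(option: bytes):
    code, length = struct.unpack("!HH", option[:4])
    assert code == EDE_OPTION_CODE
    info = struct.unpack("!H", option[4:6])[0]
    return info, option[6:4 + length].decode("utf-8")

opt = encode_ede(DNSSEC_BOGUS, "RRSIG with malformed signature for example.de")
print(decode_ede(opt))  # -> (6, 'RRSIG with malformed signature for example.de')
```

<p>The bug described above amounts to the info-code being lost on an internal handoff, so the outer layer later synthesizes the less specific code 22 instead of the code 6 the validator produced.</p>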
<p>We&#8217;re aware that this isn&#8217;t helpful for 1.1.1.1 users and will be fixing our responses to surface the DNSSEC errors.</p>
<div>
<h2>Is this a failure of DNSSEC as a technology?</h2>
    </div>
<p>DNS is a critical part of the request chain for most Internet communication. It would be easy to conclude that this outage and the mitigations applied mean DNSSEC has failed as a technology. However, any technology that is misconfigured risks breaking for the users who rely on it. Leaving critical fiber cables exposed on the seabed for sharks to chew on does not invalidate the important role underwater cables play in today&#8217;s Internet communications; it only highlights that we&#8217;ve sometimes failed to adequately protect them. The same applies here. DNSSEC serves a critical role in ensuring that we can rely on DNS answers without tampering by malicious actors.</p>
<div>
<h2>#HugOps</h2>
    </div>
<p>No one likes to have serious incidents. These things, unfortunately, happen to everyone who operates critical infrastructure at scale. When they do, the DNS community tends to show up for each other.</p>
<p>Incidents like this also highlight why relationships between operators matter. DNS is a decentralized system, no single organization controls all of it, and keeping it running reliably depends on mutual trust and open lines of communication between registries, resolver operators, and the broader community. Forums like DNS-OARC provide exactly this: shared mailing lists and chat rooms where operators can coordinate quickly across organizational boundaries when something goes wrong.</p>
<p>DENIC has published <a href="https://blog.denic.de/en/technical-issue-with-de-domains-resolved/"><u>a short blog post about the incident</u></a> where they state: “The outage is linked to a routine, scheduled key rollover. During this process, non-validatable signatures were generated and distributed. As a precautionary measure, future rollovers have been suspended until the exact technical causes have been identified.”</p>
<p> We&#8217;re sure we’ll hear more when their own analysis is ready. </p>
<div>
<h2>Takeaways from this incident</h2>
    </div>
<p>This incident highlights a structural reality of the DNS hierarchy: when a registry at the TLD level fails, every domain under that TLD is affected simultaneously, regardless of where it&#8217;s hosted or which resolver is used. This isn&#8217;t unique to DNSSEC; the same is true if a TLD’s nameservers become unreachable. The hierarchy that makes the global DNS work is also what makes failures at the top propagate downward.</p>
<p>There is no simple fix for this. What the industry can do is respond quickly and consistently when it happens. In this incident, resolver operators across the Internet independently applied Negative Trust Anchors within an hour, restoring resolution while DENIC worked to fix the zone. Operational practices, industry communication channels like DNS-OARC, and features like serve stale all reduce the impact, even if they can’t eliminate the underlying dependency.</p>
<p>We also came away with some points to improve for ourselves. We will be improving our EDE responses to better surface DNSSEC validation failures.</p>
<p>We look forward to DENIC’s post-incident report and appreciate the transparency they showed throughout.</p>
<p>If you want to learn more about how DNSSEC works, visit our page <a href="https://www.cloudflare.com/en-gb/learning/dns/dnssec/how-dnssec-works/"><u>How does DNSSEC work?</u></a> And you can always follow real-time DNS trends and TLD data on <a href="https://radar.cloudflare.com/tlds/de?dateStart=2026-05-05&amp;dateEnd=2026-05-06"><u>Cloudflare Radar</u></a>.</p>
</div>
<p>Read more here: <a href="https://blog.cloudflare.com/de-tld-outage-dnssec/">https://blog.cloudflare.com/de-tld-outage-dnssec/</a></p>
<p>The post <a href="https://ipv6.net/news/when-dnssec-goes-wrong-how-we-responded-to-the-de-tld-outage/">When DNSSEC goes wrong: how we responded to the .de TLD outage</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Students build a lactose intolerance breath tester with Arduino® Nano™ board</title>
		<link>https://ipv6.net/news/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/</link>
		
		<dc:creator><![CDATA[news-aggregator]]></dc:creator>
		<pubDate>Wed, 06 May 2026 11:07:06 +0000</pubDate>
				<category><![CDATA[IPv6 and IoT News]]></category>
		<category><![CDATA[#iot]]></category>
		<category><![CDATA[#ipv6]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[m2m]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[tech]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://ipv6.net/?p=2909832</guid>

					<description><![CDATA[<p>What if your students could build a working biomedical prototype from scratch – one that explains human digestion, gas diffusion, sensor calibration, and programming all at once? That’s exactly what happened at ITTS “E. Divini” in San Severino Marche, Italy, where Professor Lorenzo Morresi and his colleagues Professors Battistini and Capri led a group of [&#8230;]</p>
<p>The post <a href="https://ipv6.net/news/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/">Students build a lactose intolerance breath tester with Arduino® Nano™ board</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<figure class="wp-block-image size-full">
<div class="image-post"><img fetchpriority="high" decoding="async" width="900" height="693" src="https://blog.arduino.cc/wp-content/uploads/2026/05/Mask.jpg" alt="" class="wp-image-42034" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/Mask.jpg 900w, https://blog.arduino.cc/wp-content/uploads/2026/05/Mask-300x231.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/Mask-768x591.jpg 768w" sizes="(max-width: 900px) 100vw, 900px"></div>
</figure>
<p>What if your students could build a working biomedical prototype from scratch – one that explains human digestion, gas diffusion, sensor calibration, and programming all at once? That’s exactly what happened at <a href="https://divini.edu.it/">ITTS “E. Divini”</a> in San Severino Marche, Italy, where Professor Lorenzo Morresi and his colleagues Professors Battistini and Capri led a group of fifth-year chemistry and materials students – Noemi Aloi, Corrado Avellino, Michele Bagoi, Alessandro Fiorani, Priya Kaur, and Matteo Zagaglia – to prototype a hydrogen breath test system using an <a href="https://store.arduino.cc/products/arduino-nano">Arduino Nano</a> board. The project was featured in Italy’s Focus Scuola magazine and is a great example of what’s possible when curiosity meets the right tools.</p>
<h2 class="wp-block-heading">The science behind the idea</h2>
<p>Lactose intolerance isn’t a disease – it’s a condition caused by a deficiency of lactase, the enzyme that breaks down lactose into glucose and galactose in the small intestine. When lactase is absent or insufficient, undigested lactose reaches the colon, where gut bacteria ferment it and produce gases, including hydrogen (H₂). That hydrogen passes into the bloodstream and is eventually exhaled through the lungs.</p>
<p>This is the principle behind the hydrogen breath test, a diagnostic method used in clinical settings: measuring the concentration of hydrogen in exhaled breath after ingesting lactose can help to detect malabsorption. The project team saw this as a perfect intersection of biochemistry, physics, and electronics – and decided to build it.</p>
<h2 class="wp-block-heading">The hardware: simple, accessible, effective</h2>
<p>The prototype uses three main components. A simple Nano board serves as the brain of the system, programmed in the Arduino language (based on C++) to handle sensor input and data output. A MiCS-5524 gas sensor – sensitive to reducing gases including hydrogen – handles detection across a range of 1 to 1,000 ppm. And to make the device practical to use, the sensor was integrated into a standard aerosol mask, so exhaled breath hits the sensitive element directly. The choice of components was deliberate: accessible, affordable, and <strong>replicable by any school with a basic electronics lab</strong>.</p>
<figure class="wp-block-image size-large">
<div class="image-post"><img decoding="async" width="1024" height="835" src="https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-1024x835.jpg" alt="" class="wp-image-42037" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-1024x835.jpg 1024w, https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-300x245.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-768x626.jpg 768w, https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-1536x1252.jpg 1536w, https://blog.arduino.cc/wp-content/uploads/2026/05/Sensor-2-2048x1670.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px"></div>
</figure>
<h2 class="wp-block-heading">Calibration, protocol, and the scientific method</h2>
<p>The team didn’t stop at assembly. Without access to certified gas cylinders for calibration, students worked from the manufacturer’s logarithmic curves to translate raw electrical signals into hydrogen concentrations expressed in parts per million – a real exercise in dealing with the kind of uncertainty and approximation that comes with actual scientific work.</p>
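<p>That translation can be sketched as a log-log fit. The slope and intercept below are placeholders rather than the MiCS-5524 datasheet values, but the shape of the computation is the same:</p>

```python
# Sketch of converting a MOX gas sensor's resistance ratio (Rs/R0) into a
# hydrogen concentration via a log-log response curve:
#     ppm = 10 ** (a * log10(Rs/R0) + b)
# The coefficients are hypothetical, not taken from the MiCS-5524 datasheet.
import math

A = -1.8  # hypothetical log-log slope
B = 0.9   # hypothetical log-log intercept

def ratio_to_ppm(rs_over_r0: float) -> float:
    return 10 ** (A * math.log10(rs_over_r0) + B)

# For a reducing-gas sensor, resistance drops as concentration rises, so a
# smaller Rs/R0 ratio must map to a larger ppm value.
print(ratio_to_ppm(1.0) < ratio_to_ppm(0.5))  # -> True
```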
<p>Test subjects followed a rigorous protocol: 12 hours of fasting, a baseline measurement, a low-residue diet the day prior, ingestion of milk, and breath measurements every 15 minutes for two hours. Data was then processed in Microsoft Excel to visualize the hydrogen curve over time – and the resulting graphs clearly resembled the hydrogen peaks characteristic of a breath test.</p>
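<p>The analysis step can be sketched as follows. A rise of roughly 20 ppm over the fasting baseline is a commonly used positivity threshold for hydrogen breath tests; the readings below are invented for illustration:</p>

```python
# Sketch of the post-ingestion analysis: breath readings every 15 minutes
# for two hours, compared against the fasting baseline. The ~20 ppm rise
# threshold is a commonly cited clinical rule of thumb; data is made up.

baseline_ppm = 4
readings_ppm = [5, 7, 12, 19, 28, 33, 30, 26]  # every 15 min over 2 h

peak_rise = max(readings_ppm) - baseline_ppm
suggests_malabsorption = peak_rise >= 20

print(peak_rise, suggests_malabsorption)  # -> 29 True
```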
<figure class="wp-block-image size-full">
<div class="image-post"><img decoding="async" width="960" height="720" src="https://blog.arduino.cc/wp-content/uploads/2026/05/Risultati.jpg" alt="" class="wp-image-42036" srcset="https://blog.arduino.cc/wp-content/uploads/2026/05/Risultati.jpg 960w, https://blog.arduino.cc/wp-content/uploads/2026/05/Risultati-300x225.jpg 300w, https://blog.arduino.cc/wp-content/uploads/2026/05/Risultati-385x289.jpg 385w, https://blog.arduino.cc/wp-content/uploads/2026/05/Risultati-768x576.jpg 768w" sizes="(max-width: 960px) 100vw, 960px"></div>
</figure>
<h2 class="wp-block-heading">A powerful teaching tool, not a medical device</h2>
<p>The team is clear about what the prototype is and isn’t. As Professor Morresi puts it, “We demonstrated the feasibility of our idea and its reproducibility by others. This is not a medical device – but it is a powerful teaching tool that brings together coding, physics, and health in a single lab activity.”</p>
<p>The project covers an impressive spread of curriculum topics in one hands-on experience: the physics of gas diffusion through the circulatory system, the biochemistry of enzyme function and fermentation, analog signal processing with a microcontroller, and the analysis of measurement uncertainty. Future iterations of the project aim to add methane (CH₄) detection, which would make the results even more diagnostically meaningful.</p>
<h2 class="wp-block-heading">Open and replicable – by design</h2>
<p>One of the most generous aspects of this project is that Professor Morresi has made everything available to other schools: lab worksheets, Arduino code, sensor calibration data, and test protocols. The goal is straightforward – to show that in a technical high school, with good guidance and affordable components, students can turn ideas into working technology, and subjects like physics and chemistry stop being abstract concepts and start being tools for understanding the world.</p>
<p>If you’re a teacher looking to bring a genuinely interdisciplinary project into your classroom – one that connects biochemistry, physics, electronics, and data analysis in a way students can actually build – this one is worth exploring! All project materials are available on <a href="https://morresi.wordpress.com/didattica/formazione-on-line/itis-e-divini-a-s-2025-2026-classi-virtuali/breathtest/">Professor Morresi’s dedicated project page</a> (in Italian).</p>
<p><em>Arduino and Nano are trademarks or registered trademarks of Arduino S.r.l.</em></p>
<p>The post <a href="https://blog.arduino.cc/2026/05/06/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/">Students build a lactose intolerance breath tester with Arduino® Nano™ board</a> appeared first on <a href="https://blog.arduino.cc/">Arduino Blog</a>.</p>
</div>
<p>Read more here: <a href="https://blog.arduino.cc/2026/05/06/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/">https://blog.arduino.cc/2026/05/06/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/</a></p>
<p>The post <a href="https://ipv6.net/news/students-build-a-lactose-intolerance-breath-tester-with-arduino-nano-board/">Students build a lactose intolerance breath tester with Arduino® Nano™ board</a> appeared first on <a href="https://ipv6.net">IPv6.net</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
