<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>ARMdevices.net</title>
	<atom:link href="http://armdevices.net/feed/" rel="self" type="application/rss+xml" />
	<link>https://armdevices.net</link>
	<description>New video posted every 8 hours, forever</description>
	<lastBuildDate>Fri, 17 Apr 2026 04:01:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.2.2</generator>

<image>
	<url>https://armdevices.net/wp-content/uploads/2021/02/cropped-armdevices-logo-512x512-1-1-32x32.png</url>
	<title>ARMdevices.net</title>
	<link>https://armdevices.net</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href=""/><site xmlns="com-wordpress:feed-additions:1">214008376</site>	<item>
		<title>Battlelf Foldable Keyboards, Touchpads at Global Sources 2026</title>
		<link>https://armdevices.net/2026/04/17/battlelf-foldable-keyboards-touchpads-at-global-sources-2026/</link>
					<comments>https://armdevices.net/2026/04/17/battlelf-foldable-keyboards-touchpads-at-global-sources-2026/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 04:01:57 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139111</guid>

					<description><![CDATA[Battlelf showcased an extensive lineup of input devices and accessories at the Global Sources China Sourcing Fair Hong Kong in April 2026. The exhibition featured their flagship foldable keyboards with touchpads, including larger-format keyboard and touchpad variants with integrated locking mechanisms designed for ergonomic leg-based working setups. Key product highlights included combination units merging mouse pads [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Battlelf showcased an extensive lineup of input devices and accessories at the Global Sources China Sourcing Fair Hong Kong in April 2026. The exhibition featured their flagship foldable keyboards with touchpads, including larger-format keyboard and touchpad variants with integrated locking mechanisms designed for ergonomic leg-based working setups. Key product highlights included combination units merging mouse pads with numeric keypads, offering expandable touchpad configurations and modular customization options for professional users. Battlelf has developed folding lock mechanisms for over 16 years; their current factory formerly operated under DZH Industrial before transitioning to OEM and ODM production models for international markets. The booth displayed various keyboard solutions, including silent switches and clicky mechanical variants, with a premium gaming keyboard line featuring reinforced construction for durability and professional use. Additional demonstrations included compact portable form factors and innovative folding designs with integrated mirrors, representing their broad accessory portfolio. The factory operates with approximately 600 workers serving a global customer base across multiple product categories.</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>source <a href="https://www.youtube.com/watch?v=jtphu_UFac0">https://www.youtube.com/watch?v=jtphu_UFac0</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/17/battlelf-foldable-keyboards-touchpads-at-global-sources-2026/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139111</post-id>	</item>
		<item>
		<title>XY Audio Car DSP Head Unit: 8-Channel Universal Car Audio System</title>
		<link>https://armdevices.net/2026/04/16/xy-audio-car-dsp-head-unit-8-channel-universal-car-audio-system/</link>
					<comments>https://armdevices.net/2026/04/16/xy-audio-car-dsp-head-unit-8-channel-universal-car-audio-system/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 18:11:36 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139109</guid>

					<description><![CDATA[XY Audio is a car audio manufacturer showcasing their DSP-based head unit and car audio product lineup. The system is built around a Snapdragon octa-core processor and features a real DSP implementation supporting 6-channel and 8-channel configurations for universal car compatibility. Audio playback supports Sony DSD format with a 24-bit audio decoder. The DSP application [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>XY Audio is a car audio manufacturer showcasing their DSP-based head unit and car audio product lineup. The system is built around a Snapdragon octa-core processor and features a real DSP implementation supporting 6-channel and 8-channel configurations for universal car compatibility. Audio playback supports Sony DSD format with a 24-bit audio decoder. The DSP application enables 3-way and 4-way active crossover adjustments directly from the device, without requiring a computer. Connectivity options include high-level (speaker-level) inputs for direct connection to factory head units and low-level RCA outputs for integration with external amplifiers or car DSP units. XY Audio offers touchscreen head units in 9-inch, 10-inch, 12.3-inch, and 13.1-inch sizes, including vertical screen options styled for BMW and Mercedes applications. The displays feature anti-reflective coating and run a customizable Android interface where users can install applications and resize widgets. The booth also featured car audio speakers, subwoofers for trunk installation, and amplifiers designed for front and rear door placement. XY Audio positions their products as upgrade solutions for used and older vehicles, enabling customers to modernize car audio systems without replacing the entire vehicle. The company works with distributors and resellers who handle vehicle-specific mounting frame design and installation. Filmed at the Global Sources China Sourcing Fair Hong Kong 2026.</p>
<p>source <a href="https://www.youtube.com/watch?v=9OthRvZiCzE">https://www.youtube.com/watch?v=9OthRvZiCzE</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/16/xy-audio-car-dsp-head-unit-8-channel-universal-car-audio-system/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139109</post-id>	</item>
		<item>
		<title>DJI Pocket 4 unboxing, the best camera in the world</title>
		<link>https://armdevices.net/2026/04/16/dji-pocket-4-unboxing-the-best-camera-in-the-world/</link>
					<comments>https://armdevices.net/2026/04/16/dji-pocket-4-unboxing-the-best-camera-in-the-world/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 13:31:40 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139107</guid>

					<description><![CDATA[At the Pocket 4 launch event in Guangzhou, China. source https://www.youtube.com/watch?v=OkuVajhbGgo]]></description>
										<content:encoded><![CDATA[<p>At the Pocket 4 launch event in Guangzhou, China.</p>
<p>source <a href="https://www.youtube.com/watch?v=OkuVajhbGgo">https://www.youtube.com/watch?v=OkuVajhbGgo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/16/dji-pocket-4-unboxing-the-best-camera-in-the-world/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139107</post-id>	</item>
		<item>
		<title>DJI Pocket 4 Launch event Guangzhou</title>
		<link>https://armdevices.net/2026/04/16/dji-pocket-4-launch-event-guangzhou/</link>
					<comments>https://armdevices.net/2026/04/16/dji-pocket-4-launch-event-guangzhou/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 12:31:44 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139105</guid>

					<description><![CDATA[source https://www.youtube.com/watch?v=N73zMKB4JF8]]></description>
										<content:encoded><![CDATA[<p>source <a href="https://www.youtube.com/watch?v=N73zMKB4JF8">https://www.youtube.com/watch?v=N73zMKB4JF8</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/16/dji-pocket-4-launch-event-guangzhou/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139105</post-id>	</item>
		<item>
		<title>DJI Pocket 4 launch in Guangzhou China</title>
		<link>https://armdevices.net/2026/04/16/dji-pocket-4-launch-in-guangzhou-china/</link>
					<comments>https://armdevices.net/2026/04/16/dji-pocket-4-launch-in-guangzhou-china/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 12:31:37 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139103</guid>

					<description><![CDATA[source https://www.youtube.com/watch?v=l2O56GbTkRU]]></description>
										<content:encoded><![CDATA[<p>source <a href="https://www.youtube.com/watch?v=l2O56GbTkRU">https://www.youtube.com/watch?v=l2O56GbTkRU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/16/dji-pocket-4-launch-in-guangzhou-china/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139103</post-id>	</item>
		<item>
		<title>Colorii M.2 SSD Enclosures, Card Readers, and iPhone Smart Grip at China Sourcing Fair</title>
		<link>https://armdevices.net/2026/04/14/colorii-m-2-ssd-enclosures-card-readers-and-iphone-smart-grip-at-china-sourcing-fair/</link>
					<comments>https://armdevices.net/2026/04/14/colorii-m-2-ssd-enclosures-card-readers-and-iphone-smart-grip-at-china-sourcing-fair/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 21:36:23 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139100</guid>

					<description><![CDATA[Colorii is a technology company specializing in high-performance storage enclosures, mobile accessories, and card reader solutions. At the Global Sources China Sourcing Fair Hong Kong 2026, the company showcased its product lineup targeting photographers, videographers, and content creators. Core products include multi-format card readers supporting microSD, SD, CF-Express Type A, B, and C cards with [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Colorii is a technology company specializing in high-performance storage enclosures, mobile accessories, and card reader solutions. At the Global Sources China Sourcing Fair Hong Kong 2026, the company showcased its product lineup targeting photographers, videographers, and content creators. Core products include multi-format card readers supporting microSD, SD, CF-Express Type A, B, and C cards with integrated SIM pin storage and up to 10 gigabits per second transfer speeds. The card reader doubles as a protective storage case with a transparent design for easy card visibility. Colorii also demonstrated an M.2 SSD enclosure featuring an E-ink smart display that shows SSD names, health status, temperature, cycle counts, and write data even when disconnected from a computer. A built-in supercapacitor provides 8 to 10 seconds of emergency power to safely complete data transfers and protect SSDs from accidental disconnection damage. The dual-bay M.2 SSD enclosure supports offline cloning, RAID 0 and RAID 1, and JBOD modes in a fanless design available in 10 gigabit and 20 gigabit versions. Colorii highlighted its expertise in fanless enclosures across its range, including a 40 gigabit model praised for sustained transfer speeds during extended 6-hour aging tests and an 80 gigabit enclosure achieving approximately 6000 to 7000 megabytes per second sustained read and write performance. The smart grip for iPhone integrates an M.2 SSD slot supporting up to 8 terabytes of storage and 7000 milliampere-hour battery capacity, enabling ProRes video recording directly to external storage. Magnetic smartphone holders with 4-inch and 5-inch touchscreen displays for Type-C DisplayPort alternate mode mirroring were also featured.</p>
<p>source <a href="https://www.youtube.com/watch?v=VkuWpGaRu9E">https://www.youtube.com/watch?v=VkuWpGaRu9E</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/14/colorii-m-2-ssd-enclosures-card-readers-and-iphone-smart-grip-at-china-sourcing-fair/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139100</post-id>	</item>
		<item>
		<title>Advantech Edge AI Presentation at Embedded World 2026: CPU, GPU, NPU, TOPS, Benchmarking, SDK</title>
		<link>https://armdevices.net/2026/04/09/advantech-edge-ai-presentation-at-embedded-world-2026-cpu-gpu-npu-tops-benchmarking-sdk/</link>
					<comments>https://armdevices.net/2026/04/09/advantech-edge-ai-presentation-at-embedded-world-2026-cpu-gpu-npu-tops-benchmarking-sdk/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 10:41:35 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139098</guid>

					<description><![CDATA[Umar Ahmad, AI@Edge Evangelist at Advantech, presents a practical overview of how edge AI projects move from concept to production, and why hardware choice is only one part of the equation. The talk focuses on how CPUs, GPUs, and NPUs fit different workloads, and why deployment, software readiness, thermal design, and lifecycle support often decide [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Umar Ahmad, AI@Edge Evangelist at Advantech, presents a practical overview of how edge AI projects move from concept to production, and why hardware choice is only one part of the equation. The talk focuses on how CPUs, GPUs, and NPUs fit different workloads, and why deployment, software readiness, thermal design, and lifecycle support often decide whether an AI product succeeds in the field. https://www.advantech.com/</p>
<p>A central theme is the full AI workflow: data collection, transfer learning, model optimization, format conversion, application development, edge deployment, monitoring, and retraining. Advantech positions itself as a partner across that chain, with board support packages, drivers, benchmarking tools, SDK support, and engineering services designed to shorten the path from trained model to production-ready embedded system.</p>
<p>The compute discussion is grounded in real trade-offs. CPUs are described as a strong fit for general-purpose processing, rule-based AI, and lighter sensor or time-series workloads. GPUs remain the preferred option for deep learning, vision, and higher-throughput edge inference, while NPUs target lower-power AI acceleration for industrial automation and embedded vision. The point is not that one architecture wins, but that each one matches a different deployment profile.</p>
<p>One of the more useful parts of the presentation is the warning against treating TOPS as the only metric that matters. Umar Ahmad explains that TOPS mostly reflects raw INT8 compute and can be misleading without context. In real edge AI design, latency, throughput, power efficiency, thermal behavior, memory bandwidth, framework support, operating system compatibility, and development environment maturity are often more relevant than a headline performance number.</p>
<p>Recorded at Embedded World 2026 in Nuremberg, this Advantech presentation also touches on practical platform selection across Intel, NVIDIA, Qualcomm, Rockchip, NXP, and accelerator vendors such as Hailo, along with Ubuntu Pro support on NXP i.MX8 through Canonical. The result is a useful summary of how edge AI moves beyond demos into maintainable, scalable products built for 24/7 operation in industrial and vision-centric environments.</p>
<p>source <a href="https://www.youtube.com/watch?v=DArqsOHP3Bw">https://www.youtube.com/watch?v=DArqsOHP3Bw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/04/09/advantech-edge-ai-presentation-at-embedded-world-2026-cpu-gpu-npu-tops-benchmarking-sdk/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139098</post-id>	</item>
		<item>
		<title>Advantech and NXP i.MX 95 Edge AI, OSM Modules, Ubuntu Pro, Long Lifecycle</title>
		<link>https://armdevices.net/2026/03/27/advantech-and-nxp-i-mx-95-edge-ai-osm-modules-ubuntu-pro-long-lifecycle/</link>
					<comments>https://armdevices.net/2026/03/27/advantech-and-nxp-i-mx-95-edge-ai-osm-modules-ubuntu-pro-long-lifecycle/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 14:06:12 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139095</guid>

					<description><![CDATA[This conversation looks at how Advantech and NXP are turning a new generation of Arm-based edge hardware into deployable industrial platforms rather than just another SoC launch. The focus is the i.MX 95 family, positioned here as the step up in compute, graphics, and edge AI capability that lets Advantech build board-level and system-level products [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>This conversation looks at how Advantech and NXP are turning a new generation of Arm-based edge hardware into deployable industrial platforms rather than just another SoC launch. The focus is the i.MX 95 family, positioned here as the step up in compute, graphics, and edge AI capability that lets Advantech build board-level and system-level products for HMI, vision, automation, and embedded inference. The message is less about raw peak numbers than about a usable stack: silicon, modules, carrier design, software enablement, and long product availability. https://www.advantech.com/</p>
<p>A key theme is scalability across form factors. Advantech describes a range built around NXP processors, from compact OSM modules to SMARC and ready-to-integrate boards, so customers can move from a tiny embedded node to a more feature-rich edge computer without changing ecosystems. That matters in industrial design, where display support, graphics, AI acceleration, power budget, thermal envelope, and I/O density all need to be balanced against enclosure size and certification path.</p>
<p>The i.MX 95 part stands out here as the higher-performance option, combining stronger multimedia and graphics capability with on-chip AI processing for edge workloads. Alongside it, the i.MX 93 appears as the smaller and lower-power route, especially relevant for compact SOM designs and cost- or energy-sensitive devices. The discussion around OSM Size-S and related module formats underlines a practical market shift: customers want standardized, highly integrated compute blocks that shorten design cycles while still leaving room for custom carrier boards and application-specific I/O.</p>
<p>Software and compliance are just as central as hardware in this interview. Advantech highlights Ubuntu Pro support on NXP-based platforms, with the appeal of a long maintenance window and a cleaner path toward European Cyber Resilience Act requirements coming into force in 2027. In other words, the value proposition is not only edge AI or graphics, but lifecycle management, security updates, BSP stability, and a more predictable route from prototype to deployed device in regulated industrial environments.</p>
<p>What makes the partnership credible is the industrial framing: longevity, open ecosystem thinking, and time-to-market. NXP’s long availability commitments and Advantech’s module and off-the-shelf product strategy give OEMs flexibility to choose between a finished platform and a scalable embedded design-in path. Filmed at Embedded World 2026 in Nuremberg, the interview captures a familiar reality of the embedded market: the winning products are usually the ones where silicon roadmaps, open software, module standards, and long-term maintenance line up into one coherent platform.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=dcznGypWJVE">https://www.youtube.com/watch?v=dcznGypWJVE</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/27/advantech-and-nxp-i-mx-95-edge-ai-osm-modules-ubuntu-pro-long-lifecycle/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139095</post-id>	</item>
		<item>
		<title>Würth Elektronik Powered GRINN Edge AI SBC, GenioBoard, GenioSOM-700, sensors, LTE Wi-Fi Bluetooth</title>
		<link>https://armdevices.net/2026/03/25/wurth-elektronik-powered-grinn-edge-ai-sbc-genioboard-geniosom-700-sensors-lte-wi-fi-bluetooth/</link>
					<comments>https://armdevices.net/2026/03/25/wurth-elektronik-powered-grinn-edge-ai-sbc-genioboard-geniosom-700-sensors-lte-wi-fi-bluetooth/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 14:02:26 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139092</guid>

					<description><![CDATA[Würth Elektronik presents edge AI here as a practical hardware stack rather than just a component catalog. The core story is a compact open-hardware single-board computer built for local inference, where computer-vision and sensor workloads run directly on the device instead of depending on cloud processing. That makes the platform relevant for access control, object [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Würth Elektronik presents edge AI here as a practical hardware stack rather than just a component catalog. The core story is a compact open-hardware single-board computer built for local inference, where computer-vision and sensor workloads run directly on the device instead of depending on cloud processing. That makes the platform relevant for access control, object detection, robotics, condition monitoring, and industrial IoT systems where latency, bandwidth, privacy, or offline operation matter. https://www.we-online.com/en/components/products/GRINN-GENIOBOARD-EDGE-AI-SBC</p>
<p>At the center of the demo is the Grinn GenioBoard Edge AI SBC, based on a MediaTek platform with an integrated NPU rated at 4 TOPS for machine-learning workloads. In practice, the board is positioned as an out-of-box Linux platform with HDMI and DisplayPort output, USB-C power, Ethernet, dual MIPI-CSI camera inputs, onboard storage, and M.2 expansion for SSDs or AI accelerators. That combination makes it easier to move from prototype to custom carrier or production design with fewer architectural changes at the edge.</p>
<p>The live use cases make the positioning clear. One demo runs face recognition locally for entry control, while the broader pitch extends to museum protection, home automation, and robotics vision. The same idea also applies beyond cameras: pressure, acceleration, temperature, and humidity data can be processed on-premise for event detection, anomaly classification, or predictive-maintenance style inference, which is often more efficient than streaming raw telemetry to remote servers all day long.</p>
<p>What makes the presentation more interesting is that Würth Elektronik is not just showing a compute board, but wrapping it with the company’s own design support around EMC, power architecture, thermal behavior, RF integration, antenna matching, and compliance. Expansion options for LTE, Wi-Fi, and Bluetooth Low Energy turn the board into a bridge between local inference and connected IoT deployment, so a customer can choose when data stays on-device and when selected results move to the cloud. Filmed at Embedded World 2026 in Nuremberg, the demo feels like a reference design for companies that need a starting point for edge AI without building every subsystem from zero.</p>
<p>The bigger message is that edge AI development is shifting from isolated silicon demos to more complete system recipes. Würth Elektronik and Grinn are using the GenioBoard, GenioSOM-700, wireless modules, antennas, and sensor add-ons to show how computer vision, sensor fusion, wireless connectivity, and hardware design services can be combined into a deployable platform. For developers working on machine vision, smart devices, industrial automation, or embedded Linux products, this is less about a flashy demo and more about shortening the path from proof of concept to a certifiable product at scale.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=OFlSQrnsG58">https://www.youtube.com/watch?v=OFlSQrnsG58</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/25/wurth-elektronik-powered-grinn-edge-ai-sbc-genioboard-geniosom-700-sensors-lte-wi-fi-bluetooth/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139092</post-id>	</item>
		<item>
		<title>Advantech Medical AI with Jetson Thor, Holoscan, AIMB-294 and USB-C medical display</title>
		<link>https://armdevices.net/2026/03/25/advantech-medical-ai-with-jetson-thor-holoscan-aimb-294-and-usb-c-medical-display/</link>
					<comments>https://armdevices.net/2026/03/25/advantech-medical-ai-with-jetson-thor-holoscan-aimb-294-and-usb-c-medical-display/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 13:56:23 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139090</guid>

					<description><![CDATA[Advantech is showing how medical edge AI is moving from generic compute boxes to compact, purpose-built platforms around NVIDIA Jetson Thor. The core of this demo is the AIMB-294 mini-ITX board, designed for healthcare imaging, diagnostics, and smart device integration, with enough compute for real-time video analysis and sensor-driven inference in a much smaller footprint [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech is showing how medical edge AI is moving from generic compute boxes to compact, purpose-built platforms around NVIDIA Jetson Thor. The core of this demo is the AIMB-294 mini-ITX board, designed for healthcare imaging, diagnostics, and smart device integration, with enough compute for real-time video analysis and sensor-driven inference in a much smaller footprint than older GPU systems. https://www.advantech.com/en-us/products/ec92f1c7-a7bd-4d47-bf13-dd6d159778d0/aimb-294/mod_26dae8fd-dcdc-40aa-ae62-5d84523841a6</p>
<p>What makes the platform interesting is not just raw TOPS, but the stack around it. Advantech positions this hardware with NVIDIA Holoscan support, which matters for low-latency streaming pipelines in medical imaging, endoscopy, surgical visualization, and other environments where camera input, AI inference, and display output need to stay tightly synchronized. In the demo, that shows up as live object detection and tracking, but the wider point is deterministic edge processing for medical workflows rather than cloud-dependent AI.</p>
<p>The hardware story is also practical. Advantech is pairing high compute density with medical-ready system design, including a certified box platform, compact thermal design, and modern I/O. One detail highlighted here is USB-C display integration, where power, touch, USB, and audio can run over a single cable. For medical device manufacturers, that can simplify arm-mounted systems, reduce cable bulk, lower weight, and make cleaning and service access easier in hospital and clinical settings.</p>
<p>A small transcript glitch seems to call the Intel platform “Panel Lake”, but the broader message is clear: Advantech is combining NVIDIA Jetson Thor for high-end AI workloads with Intel-based medical display and device integration options, giving OEMs multiple paths depending on compute, certification, and interface needs. Filmed at Embedded World 2026 in Nuremberg, this is less about a flashy demo than about how compact edge hardware, Holoscan-ready software, and medical integration features are starting to converge into deployable healthcare AI equipment.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=cZO5HI7wH8o">https://www.youtube.com/watch?v=cZO5HI7wH8o</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/25/advantech-medical-ai-with-jetson-thor-holoscan-aimb-294-and-usb-c-medical-display/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139090</post-id>	</item>
		<item>
		<title>Advantech Edge AI and Omniverse, AIR-420, AMR Digital Twin Demo</title>
		<link>https://armdevices.net/2026/03/25/advantech-edge-ai-and-omniverse-air-420-amr-digital-twin-demo/</link>
					<comments>https://armdevices.net/2026/03/25/advantech-edge-ai-and-omniverse-air-420-amr-digital-twin-demo/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 09:21:41 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139088</guid>

					<description><![CDATA[Advantech’s Embedded World interview focuses on how NVIDIA Omniverse is being used as a practical robotics and warehouse digital twin workflow rather than a concept demo. The key idea is that a robot can map a storage environment with cameras and sensors, generate a 3D model of the space, and keep syncing live operational data [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech’s Embedded World interview focuses on how NVIDIA Omniverse is being used as a practical robotics and warehouse digital twin workflow rather than a concept demo. The key idea is that a robot can map a storage environment with cameras and sensors, generate a 3D model of the space, and keep syncing live operational data back to the simulation layer. That makes the virtual model useful for path planning, coordination, and validation in environments where multiple AMRs or mobile robots may need to share space and avoid conflicts. https://www.advantech.com/</p>
<p>What stands out here is the feedback loop between physical and virtual systems. Instead of manually building every 3D scene from scratch, the robot contributes spatial data that feeds the digital twin, while real-world telemetry continues to update the model. In industrial terms, this is where simulation starts to matter: route optimization, obstacle avoidance, testing of robot behavior before deployment, and more reliable orchestration of fleets in logistics or smart factory settings. OpenUSD-based collaboration and Omniverse Enterprise also fit naturally into this kind of workflow, especially when different teams need to work on the same operational model.</p>
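<p>That feedback loop can be reduced to a toy sketch: live robot telemetry keeps updating one shared virtual model, and the model is then queried for fleet-level decisions such as conflict detection. The class and conflict rule below are purely illustrative assumptions; real Omniverse and OpenUSD workflows exchange full 3D scene data, not grid coordinates.</p>

```python
class WarehouseTwin:
    """Toy digital-twin model: live robot poses synced into a shared map."""

    def __init__(self):
        self.poses = {}  # robot id -> (x, y) position in metres

    def sync(self, robot_id, x, y):
        # Telemetry from a physical robot updates the virtual model.
        self.poses[robot_id] = (x, y)

    def conflicts(self, min_sep=1.0):
        # Flag AMR pairs closer than a minimum separation distance.
        ids = sorted(self.poses)
        out = []
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                (ax, ay), (bx, by) = self.poses[a], self.poses[b]
                if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < min_sep:
                    out.append((a, b))
        return out

twin = WarehouseTwin()
twin.sync("amr-1", 2.0, 3.0)
twin.sync("amr-2", 2.4, 3.2)    # within 1 m of amr-1
twin.sync("amr-3", 10.0, 10.0)  # well clear of the others
print(twin.conflicts())          # -> [('amr-1', 'amr-2')]
```

<p>The useful property is exactly the one described in the interview: planning and validation happen against the continuously updated model, not against stale hand-built scenes.</p>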
<p>On the hardware side, the demo is tied to Advantech edge AI infrastructure rather than a fixed single compute stack. In the interview they point to the AIR-420, an Edge AI HPC platform based on AMD Ryzen Embedded and EPYC Embedded options, designed for GPU-heavy workloads and scalable AI deployment. That matches the broader trend around industrial edge servers that can handle sensor fusion, real-time visualization, AI inference, and digital twin workloads close to the machine layer instead of sending everything to a distant data center.</p>
<p>The wider story is physical AI at the edge: robots perceiving space, generating useful world models, and acting on live data with enough compute nearby to keep latency under control. That is why this Embedded World 2026 demo in Nuremberg is interesting beyond the booth itself. It connects warehouse automation, AMR fleet behavior, 3D scene reconstruction, edge GPU computing, and industrial digital twin software into one readable example of where robotics infrastructure is heading.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=-vke_HMi-pk">https://www.youtube.com/watch?v=-vke_HMi-pk</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/25/advantech-edge-ai-and-omniverse-air-420-amr-digital-twin-demo/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139088</post-id>	</item>
		<item>
		<title>Advantech CRA Compliance, IEC 62443 Pre-Certified Hardware and ONEKEY SBOM</title>
		<link>https://armdevices.net/2026/03/24/advantech-cra-compliance-iec-62443-pre-certified-hardware-and-onekey-sbom/</link>
					<comments>https://armdevices.net/2026/03/24/advantech-cra-compliance-iec-62443-pre-certified-hardware-and-onekey-sbom/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 18:26:40 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139086</guid>

					<description><![CDATA[Advantech frames the Cyber Resilience Act as a system-level engineering problem rather than a paperwork exercise. In this interview, the focus is on building compliance from the hardware upward: trusted platform hardware, supported operating systems, and an embedded stack that is already aligned with industrial cybersecurity requirements before a customer adds its own software layer. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech frames the Cyber Resilience Act as a system-level engineering problem rather than a paperwork exercise. In this interview, the focus is on building compliance from the hardware upward: trusted platform hardware, supported operating systems, and an embedded stack that is already aligned with industrial cybersecurity requirements before a customer adds its own software layer. The message is less about a single box to tick and more about shortening the path from embedded computer selection to deployable, regulation-aware products. https://www.advantech.com/</p>
<p>The core idea is pre-certification. Joe describes how selected Advantech platforms are already being validated against IEC 62443-4-2 with third-party involvement, then used as a baseline for a broader internal “pre-certified” process across more hardware lines. That matters because the CRA pushes device makers toward traceability, vulnerability handling, and repeatable evidence, especially for connected industrial and edge systems. Advantech’s current material around CRA and IEC 62443 makes the same point: start with hardened platforms, then reduce downstream certification work for OEM and system-integration teams.</p>
<p>A big technical piece here is software composition and vulnerability visibility. The video points to ONEKEY as the tool Advantech is using to address SBOM generation, CVE monitoring, and the ongoing software side of compliance. That is important because CRA readiness is not only about secure boot or TPM-backed roots of trust; it is also about knowing what is inside the firmware and software supply chain, then monitoring exposure over time. Advantech’s ONEKEY material specifically highlights automated binary analysis, one-click SBOM generation, continuous monitoring, and CI/CD integration, which fits well with the interview’s emphasis on repeatable, scalable workflows rather than one-off audits.</p>
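<p>To make the SBOM side concrete, here is a minimal sketch of what a CycloneDX-shaped inventory document looks like for two libraries found in a firmware image. This is generic illustration only, not ONEKEY output; the <code>make_sbom</code> helper and the <code>pkg:generic</code> package URLs are assumptions, and real tooling adds metadata, hashes, licenses and dependency graphs on top of this skeleton.</p>

```python
import json

def make_sbom(components):
    """Build a minimal CycloneDX-style SBOM as a JSON-serializable dict."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                # Package URL lets CVE scanners match the component.
                "purl": f"pkg:generic/{name}@{version}",
            }
            for name, version in components
        ],
    }

# Example firmware inventory: two libraries extracted from an image.
sbom = make_sbom([("openssl", "3.0.13"), ("busybox", "1.36.1")])
print(json.dumps(sbom, indent=2))
```

<p>The point of the exercise is that once every shipped image has such an inventory, CVE monitoring becomes a lookup problem against known-vulnerable component versions instead of a manual audit.</p>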
<p>What makes this discussion relevant is that it connects EU regulation, IEC 62443 certification practice, and day-to-day embedded product development in one flow. The promise to customers is practical: save months in certification work, lower validation cost, and reduce risk by starting from pre-qualified industrial hardware, then adding gap analysis for the final application stack. Filmed at Embedded World 2026 in Nuremberg, it shows how industrial computing vendors are moving cybersecurity compliance closer to the board, BIOS, firmware, SBOM, and lifecycle-monitoring level, where CRA preparation becomes an ongoing product-management task rather than a last-minute legal check.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=fdaF7laR4rg">https://www.youtube.com/watch?v=fdaF7laR4rg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/24/advantech-cra-compliance-iec-62443-pre-certified-hardware-and-onekey-sbom/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139086</post-id>	</item>
		<item>
		<title>Vantron Edge AI Hardware and SaaS Device Management for Linux Android Fleets</title>
		<link>https://armdevices.net/2026/03/23/vantron-edge-ai-hardware-and-saas-device-management-for-linux-android-fleets/</link>
					<comments>https://armdevices.net/2026/03/23/vantron-edge-ai-hardware-and-saas-device-management-for-linux-android-fleets/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 11:16:49 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139084</guid>

					<description><![CDATA[Vantron presents a broad embedded portfolio built around customization rather than a single fixed platform. The interview highlights board-level designs, rugged tablets, compact wireless modules, medical and AI-capable systems, and multiple compute form factors aimed at OEM and system-integration work. A key theme is processor flexibility, with support across Intel, NVIDIA, Qualcomm, TI, MediaTek, NXP [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Vantron presents a broad embedded portfolio built around customization rather than a single fixed platform. The interview highlights board-level designs, rugged tablets, compact wireless modules, medical and AI-capable systems, and multiple compute form factors aimed at OEM and system-integration work. A key theme is processor flexibility, with support across Intel, NVIDIA, Qualcomm, TI, MediaTek, NXP and ST, which makes the lineup relevant for industrial HMI, edge computing, embedded vision, IoT gateways and application-specific device design. https://vantrontech.us/</p>
<p>What makes this interesting is the mix of hardware breadth and deployment pragmatism. Vantron is positioning itself as processor-agnostic and form-factor agnostic, covering everything from Q7-style modules and Raspberry Pi-based designs to ruggedized field hardware and higher-performance edge AI platforms that can accommodate GPU acceleration. That matters for teams balancing BOM targets, thermals, power draw, Linux or Android support, and long-term product availability across different embedded roadmaps.</p>
<p>The software discussion shifts the story from hardware selection to fleet operations. Vantron’s management platform is described as a SaaS layer for Linux-based and Android devices, handling firmware and application updates, agent deployment, remote troubleshooting, monitoring and recovery in the field. The real value is at scale: device lockdown, remote viewer capability, debugging workflows similar to ADB tracing, and centralized configuration become much more important when a deployment grows from dozens of units to tens of thousands across retail, healthcare, industrial or other distributed edge environments.</p>
<p>Taken together, this is less about a single demo than about an edge-to-cloud deployment model where hardware choice, remote lifecycle management and service economics are tightly linked. The per-device monthly model reflects how embedded vendors are moving toward recurring platform management rather than one-time hardware shipment alone. Filmed at Embedded World 2026 in Nuremberg, the conversation gives a useful snapshot of where embedded IoT is heading: modular hardware, mixed silicon ecosystems, remote operations, secure update pipelines and practical fleet management for real devices already out in the field.</p>
<p>source <a href="https://www.youtube.com/watch?v=xzaNgGa7Ejs">https://www.youtube.com/watch?v=xzaNgGa7Ejs</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/23/vantron-edge-ai-hardware-and-saas-device-management-for-linux-android-fleets/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139084</post-id>	</item>
		<item>
		<title>Advantech Humanoid Robotics Demo with AFE-A702, Jetson Thor, GMSL and Depth Perception</title>
		<link>https://armdevices.net/2026/03/23/advantech-humanoid-robotics-demo-with-afe-a702-jetson-thor-gmsl-and-depth-perception/</link>
					<comments>https://armdevices.net/2026/03/23/advantech-humanoid-robotics-demo-with-afe-a702-jetson-thor-gmsl-and-depth-perception/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 22 Mar 2026 23:56:07 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139082</guid>

					<description><![CDATA[Advantech uses this demo to frame humanoid robotics as a perception and compute problem: the robot needs to fuse multiple camera feeds, depth data, and scene understanding fast enough to act in real time. At the center is the AFE-A702, a Jetson Thor based robotic control system built for high-bandwidth sensor input, AI inference, and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech uses this demo to frame humanoid robotics as a perception and compute problem: the robot needs to fuse multiple camera feeds, depth data, and scene understanding fast enough to act in real time. At the center is the AFE-A702, a Jetson Thor based robotic control system built for high-bandwidth sensor input, AI inference, and robot I/O, so the conversation is less about a single robot and more about the full edge-AI pipeline needed to make one practical. https://www.advantech.com/emt/products/8d5aadd0-1ef5-4704-a9a1-504718fb3b41/afe-a702/mod_13487539-d213-4c8f-a027-4be489e0fe1a</p>
<p>The live view makes that idea concrete. Four GMSL cameras and depth sensing are used to build a machine view of the scene, while segmentation separates people, objects, and other obstacles into semantic classes. That is the core requirement for humanoids, AMRs, and service robots: not just video capture, but low-latency perception, sensor fusion, and AI models that can support navigation, obstacle avoidance, workspace awareness, and interaction in dynamic environments at the edge.</p>
<p>What makes the platform interesting is the surrounding ecosystem rather than the compute figure alone. Advantech is positioning the AFE-A702 inside a broader robotics stack with integrated camera drivers, support for LiDAR, IMU and other sensors, JetPack 7, ROS 2 oriented tooling through Advantech Robotic Suite, and links to Isaac ROS, simulation, and deployment workflows. Filmed at Embedded World 2026 in Nuremberg, the demo fits the current shift toward production-ready robotics platforms that reduce custom integration work and shorten the path from prototype to field.</p>
<p>The technical message is clear: humanoid and mobile robotics now depend on scalable multi-sensor vision more than on isolated controller boards. Advantech’s recent updates around validated GMSL camera integration and Jetson Thor class robotics controllers suggest it wants to cover the whole route from thermal and mechanical design-in to perception software and AI inference. In that context, this demo is really about how a robot can turn synchronized camera and depth streams into a usable understanding of people, obstacles, and free space fast enough to deploy.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=IHD9JFthSiY">https://www.youtube.com/watch?v=IHD9JFthSiY</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/23/advantech-humanoid-robotics-demo-with-afe-a702-jetson-thor-gmsl-and-depth-perception/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139082</post-id>	</item>
		<item>
		<title>Advantech Robotics Demo: RK3588 AMR controller, Intel Core Ultra, GMSL vision</title>
		<link>https://armdevices.net/2026/03/21/advantech-robotics-demo-rk3588-amr-controller-intel-core-ultra-gmsl-vision/</link>
					<comments>https://armdevices.net/2026/03/21/advantech-robotics-demo-rk3588-amr-controller-intel-core-ultra-gmsl-vision/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 21 Mar 2026 15:41:45 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139080</guid>

					<description><![CDATA[Advantech is showing how an AMR compute stack comes together around multi-camera perception, edge AI and integration support rather than just raw processor specs. The demo centers on the AFRS-761, a Rockchip RK3588-based controller connected to four GMSL cameras, building a 360-degree view for person detection, obstacle awareness and autonomous navigation in warehouse or factory [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech is showing how an AMR compute stack comes together around multi-camera perception, edge AI and integration support rather than just raw processor specs. The demo centers on the AFRS-761, a Rockchip RK3588-based controller connected to four GMSL cameras, building a 360-degree view for person detection, obstacle awareness and autonomous navigation in warehouse or factory robots. The point is clear: perception is the front end of robot intelligence, and the computer has to ingest, synchronize and process several camera streams in real time. https://www.advantech.com/</p>
<p>What makes this interesting is the emphasis on practical sensor integration. In the demo, GMSL is presented as a preferred interface for current robotics deployments because it simplifies multi-camera wiring across mobile platforms, while Advantech also supports other camera options including MIPI-CSI. That matters for AMRs, forklifts and mobile service robots where reliability, cabling distance, ruggedness and low-latency video all affect how safely a machine can move through a busy environment.</p>
<p>The broader message is that robotics perception is no longer a single-board story. Advantech positions the compute module as the robot brain, the cameras as the eyes, and the motion stack as the actuators behind wheels, arms or other mechanisms. Alongside the Arm platform, the company also highlights an Intel Core Ultra based AMR controller, showing that robotics developers increasingly want a choice of CPU and AI architectures depending on power budget, software stack and workload mix, from object detection to depth processing and scene understanding.</p>
<p>Software is a big part of the pitch as well. Advantech’s robotics approach combines hardware with integration work, driver support, ROS 2 oriented tooling and partner software for fleet management, navigation and deployment. In this setup, Node Robotics provides the higher-level AMR software visible in the demo, while Advantech focuses on making sensor and compute combinations easier to bring into real projects. That is often the hard part in robotics: not proving a concept once, but making perception pipelines stable enough for deployment.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, this interview gives a useful snapshot of where industrial robotics is heading: closer coupling between cameras and edge compute, more multi-sensor perception at the vehicle level, and more modular ecosystems for AMRs, AGVs and warehouse automation. The small robot on the booth is the simple visual example, but the real topic is scalable perception architecture for robots that need to see people, avoid obstacles and keep moving reliably in dynamic spaces.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=V0vkzAmaiSA">https://www.youtube.com/watch?v=V0vkzAmaiSA</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/21/advantech-robotics-demo-rk3588-amr-controller-intel-core-ultra-gmsl-vision/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139080</post-id>	</item>
		<item>
		<title>Würth Elektronik at Embedded World 2026 Power Modules, Wireless Power, AEC-Q200 Inductors and more</title>
		<link>https://armdevices.net/2026/03/21/wurth-elektronik-at-embedded-world-2026-power-modules-wireless-power-aec-q200-inductors-and-more/</link>
					<comments>https://armdevices.net/2026/03/21/wurth-elektronik-at-embedded-world-2026-power-modules-wireless-power-aec-q200-inductors-and-more/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 21 Mar 2026 11:21:25 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139078</guid>

					<description><![CDATA[Würth Elektronik gives a broad view of how a passive-component supplier moves up the stack into practical power design. The interview centers on compact DC/DC power modules that integrate the inductor, capacitors and key support circuitry, so engineers can build a regulated supply with minimal external parts and much less layout effort. That makes the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Würth Elektronik gives a broad view of how a passive-component supplier moves up the stack into practical power design. The interview centers on compact DC/DC power modules that integrate the inductor, capacitors and key support circuitry, so engineers can build a regulated supply with minimal external parts and much less layout effort. That makes the story less about single components and more about power architecture, EMI behavior, thermal paths and time-to-design in embedded hardware. https://www.we-online.com/</p>
<p>The most interesting angle is how the portfolio connects discrete magnetics, capacitors, quartz and oscillators with module-level building blocks such as the MagI3C family. In real designs, that means one vendor can cover timing, filtering, isolation, galvanic separation and point-of-load conversion across industrial and compute boards. The video also touches wireless power, where Würth Elektronik’s coil and transformer know-how feeds transmitter and receiver designs similar to split-transformer architectures used in inductive charging, from consumer devices up to higher-power transfer.</p>
<p>Automotive qualification is another key theme. The company highlights parts built for stricter reliability targets, including AEC-Q200 qualified components for harsher electrical and thermal environments. That matters in body electronics, infotainment, motor control and power conversion, where low loss, stable magnetic behavior and controlled EMC can matter as much as raw current rating. The discussion around molded inductors is especially relevant here, because shielded constructions help reduce stray magnetic fields and support cleaner high-efficiency converter layouts.</p>
<p>Seen in the context of Embedded World 2026 in Nuremberg, the demo is really about breadth: passive components, power modules, optoelectronics, LED control, wireless power and application examples with partners such as STMicroelectronics and Analog Devices. The closing focus on efficiency and thermal management is the right one, because embedded systems now span everything from nanoamp energy-harvesting nodes to high-current rails for GaN-based power stages, and both ends of that range depend on better magnetics, lower losses and tighter integration today.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=wBCMA45Vd3I">https://www.youtube.com/watch?v=wBCMA45Vd3I</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/21/wurth-elektronik-at-embedded-world-2026-power-modules-wireless-power-aec-q200-inductors-and-more/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139078</post-id>	</item>
		<item>
		<title>Advantech Multi-OS on Arm: Ubuntu Pro on NXP i.MX 8M, Qualcomm YOLOv8 Edge AI</title>
		<link>https://armdevices.net/2026/03/20/advantech-multi-os-on-arm-ubuntu-pro-on-nxp-i-mx-8m-qualcomm-yolov8-edge-ai/</link>
					<comments>https://armdevices.net/2026/03/20/advantech-multi-os-on-arm-ubuntu-pro-on-nxp-i-mx-8m-qualcomm-yolov8-edge-ai/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 19:11:34 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139076</guid>

					<description><![CDATA[Advantech says Ubuntu Pro for Devices brings 10 years of LTS support, expanded security maintenance, and management tooling, while its Qualcomm QCS6490-based AOM-2721 platform supports Yocto, Windows, and Ubuntu across edge AI scenarios. Advantech is showing how software support can be just as important as silicon in modern Arm-based edge systems. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Advantech says Ubuntu Pro for Devices brings 10 years of LTS support, expanded security maintenance, and management tooling, while its Qualcomm QCS6490-based AOM-2721 platform supports Yocto, Windows, and Ubuntu across edge AI scenarios.</p>
<p>Advantech is showing how software support can be just as important as silicon in modern Arm-based edge systems. One part of the demo focuses on Ubuntu Pro running on NXP i.MX 8M, aimed at developers who want a more complete Linux environment for industrial IoT, gateways, robotics and embedded AI without spending time rebuilding drivers, kernel support and interface validation from scratch. The point is not just that Ubuntu boots on Arm, but that the platform is prepared for deployment with long-term maintenance, security updates and a usable BSP from day one. https://www.advantech.com/</p>
<p>The discussion also highlights why this matters for real products. On embedded platforms, OS readiness, driver coverage, graphics support, wireless connectivity and patch management often decide how quickly a team can move from evaluation to shipping hardware. Here the value proposition is a development-ready stack around NXP and Canonical, where Ubuntu Pro adds 10-year lifecycle support, expanded CVE maintenance and large-scale device management options that fit industrial environments better than a minimal custom Linux image.</p>
<p>The second demo moves to Qualcomm and a more explicitly AI-focused workflow. Advantech shows a small OSM-based edge platform running live object detection with YOLOv8, using the SoC’s heterogeneous compute resources rather than pushing everything onto the CPU. That is the real multi-OS story in this video: Yocto, Ubuntu and Windows support on Arm platforms where CPU, GPU and dedicated AI acceleration can be balanced depending on latency, power budget, camera pipeline and application needs.</p>
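<p>A detector like YOLOv8 does not only need accelerated inference; its raw output still goes through post-processing such as non-maximum suppression before boxes reach the application. The sketch below shows that standard step in plain Python as a generic illustration; it is not Advantech’s or Ultralytics’ code, and the box and score values are made up.</p>

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union overlap.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thresh=0.5):
    """Greedy non-maximum suppression over (box, score) detections."""
    dets = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        # Keep a detection only if it does not overlap a stronger one.
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

dets = [
    ((10, 10, 50, 50), 0.9),      # strong detection
    ((12, 12, 52, 52), 0.6),      # overlapping duplicate, suppressed
    ((100, 100, 140, 140), 0.8),  # separate object, kept
]
print(nms(dets))
```

<p>On an edge platform this loop runs per frame, which is why keeping inference and post-processing latency predictable matters as much as peak throughput.</p>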
<p>What makes the conversation interesting is the practical emphasis on optimization. The interview keeps coming back to a familiar edge AI issue: strong hardware alone does not guarantee good results if the software stack is not tuned to the accelerator, memory bandwidth and available drivers. Advantech positions itself as the layer between silicon vendors and product teams, helping customers understand whether a workload belongs on CPU, GPU or NPU, and what software dependencies come with that decision.</p>
<p>This makes the video less about one benchmark and more about reducing engineering friction in embedded AI. The combination of long-term OS support on NXP, multi-OS enablement on Qualcomm, containerized AI workflows and board-level software integration reflects where many Arm deployments are heading now. Filmed at Embedded World 2026 in Nuremberg, it captures a shift from raw edge AI hardware announcements toward the harder question of how to make these platforms maintainable, secure and actually usable in production.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=FWlD2-CJulU">https://www.youtube.com/watch?v=FWlD2-CJulU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/20/advantech-multi-os-on-arm-ubuntu-pro-on-nxp-i-mx-8m-qualcomm-yolov8-edge-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139076</post-id>	</item>
		<item>
		<title>Octavo Systems SiP with AM62, STM32MP2, ROS 2, Edge AI, ADAS and Guitar Audio DSP</title>
		<link>https://armdevices.net/2026/03/20/octavo-systems-sip-with-am62-stm32mp2-ros-2-edge-ai-adas-and-guitar-audio-dsp/</link>
					<comments>https://armdevices.net/2026/03/20/octavo-systems-sip-with-am62-stm32mp2-ros-2-edge-ai-adas-and-guitar-audio-dsp/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 12:56:42 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139074</guid>

					<description><![CDATA[Octavo Systems focuses on System-in-Package design: taking a microprocessor, DDR memory, power management and key passives, then collapsing them into a molded BGA that removes much of the hardest board-level integration. In this interview, that idea is shown not as an abstract packaging story but as a practical way to reduce layout risk, shrink PCB [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Octavo Systems focuses on System-in-Package design: taking a microprocessor, DDR memory, power management and key passives, then collapsing them into a molded BGA that removes much of the hardest board-level integration. In this interview, that idea is shown not as an abstract packaging story but as a practical way to reduce layout risk, shrink PCB area, and accelerate bring-up for embedded Linux, edge AI, industrial control and audio products. https://octavosystems.com/</p>
<p>The most memorable demo is a Chaos Audio multi-effects guitar pedal, where the SiP handles the real-time audio DSP while a phone or tablet acts mainly as the control surface over Bluetooth. The point is not just miniaturization; it is deterministic local processing with effectively no audible latency, which is exactly what musicians need when switching tones, stacking effects, or building a digital pedalboard that still feels immediate under the fingers.</p>
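<p>The kind of per-sample processing such a pedal performs can be sketched with the simplest possible effect, a feedback delay line. This is an illustrative toy under assumed parameter names, not Chaos Audio’s DSP engine: each input sample is mixed with a delayed copy of the signal, and part of the output is fed back into the delay buffer to create decaying echoes.</p>

```python
def delay_effect(samples, delay, feedback=0.5, mix=0.5):
    """Minimal digital delay: dry signal plus a decaying delayed copy.

    `samples` is a list of floats; `delay` is the echo spacing in samples.
    """
    buf = [0.0] * delay  # circular delay buffer
    out = []
    pos = 0
    for x in samples:
        wet = buf[pos]
        out.append(x + mix * wet)
        # Feed the delayed signal back into the line so echoes decay.
        buf[pos] = x + feedback * wet
        pos = (pos + 1) % delay
    return out

# A single impulse produces echoes every 3 samples, halving each time.
echoes = delay_effect([1.0] + [0.0] * 9, delay=3)
print([round(v, 3) for v in echoes])
# -> [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

<p>The latency point from the interview maps directly onto this loop: the effect is computed sample by sample on the device, so the Bluetooth link to the phone only carries control changes, never the audio path itself.</p>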
<p>From there the discussion moves to Octavo’s newer processor-module direction, including the TI AM62-based OSD62PM and the STM32MP2-based OSD32MP2-PM reference platform. The pitch is very specific: processor plus DDR4 in a package roughly the size of the DRAM footprint, with major savings in area and routing complexity compared with a discrete MPU-and-memory design. Camera interfaces, DSI, LVDS, PCIe, Ethernet and built-in AI capability make these parts relevant for HMI, vision, smart gateways, robotics and compact edge compute gear.</p>
<p>What makes the booth tour useful is the range of deployed examples. Octavo shows SiP designs inside ROS 2 robotics modules, a retail people counter, a programmable smart torque drill for manufacturing, a compact SOM, an AMD Zynq UltraScale+ MPSoC platform for ADAS-style video inference, and an industrial automation controller with RS485, CAN and cloud connectivity. That broad spread makes the technology easier to understand: SiP is not a single market play, but a packaging and productization strategy that fits many embedded workloads.</p>
<p>A recurring theme is that SiP is not only about size. Octavo argues that pre-validating the processor-to-memory subsystem removes non-differentiating engineering work, reduces design spins, and in some cases can even compete on BOM cost when compared with sourcing the processor and DRAM separately. Filmed at Embedded World 2026 in Nuremberg, this is a grounded look at how integration, thermals, Linux-class processing and edge AI are being pushed into much smaller hardware footprints without turning every product into a custom high-risk board design.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=C9RZC3o2RHE">https://www.youtube.com/watch?v=C9RZC3o2RHE</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/20/octavo-systems-sip-with-am62-stm32mp2-ros-2-edge-ai-adas-and-guitar-audio-dsp/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139074</post-id>	</item>
		<item>
		<title>HS Devices Atronx microQ7 COM, Octavo SiP, Linux web terminal and modular I/O</title>
		<link>https://armdevices.net/2026/03/20/hs-devices-atronx-microq7-com-octavo-sip-linux-web-terminal-and-modular-i-o/</link>
					<comments>https://armdevices.net/2026/03/20/hs-devices-atronx-microq7-com-octavo-sip-linux-web-terminal-and-modular-i-o/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 10:16:38 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139072</guid>

					<description><![CDATA[HS Devices presents Atronx as a compact embedded development platform built around a microQ7 computer-on-module and an Octavo System-in-Package approach, aimed at teams that need a ready hardware base for custom products rather than a finished end device. The pitch is clear: shorten hardware bring-up, keep software portable, and give engineers a practical Linux-based platform [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>HS Devices presents Atronx as a compact embedded development platform built around a microQ7 computer-on-module and an Octavo System-in-Package approach, aimed at teams that need a ready hardware base for custom products rather than a finished end device. The pitch is clear: shorten hardware bring-up, keep software portable, and give engineers a practical Linux-based platform they can adapt for industrial control, monitoring, and edge-connected systems. https://www.hsdevices.com/</p>
<p>What stands out in the demo is the browser-based terminal and device management layer. Instead of treating the module as a black box, HS Devices shows telemetry, module details, CPU load, memory usage, temperature, and direct command-line access in one web interface. That makes the board feel less like a static eval kit and more like a remotely manageable embedded node, which is useful for prototyping field devices, service access, scripted deployment, and debugging across distributed installations.</p>
<p>The hardware story is about modularity and reuse. Atronx is positioned so developers can keep the same carrier or development board and swap compute modules depending on the target: more multithreaded Linux performance, lower power operation, or stronger real-time behavior. That kind of separation between carrier design and compute module is valuable in embedded product design because it reduces redesign cycles, preserves I/O investment, and lets teams move faster when requirements change late in development.</p>
<p>There is also an interesting small-company angle here. HS Devices is a startup from Niš, Serbia, focused on PCB design, circuit design, and embedded hardware engineering, and this product reflects that mindset: practical board-level integration, standardized software foundations, and attention to communication interfaces. Filmed at Embedded World 2026 in Nuremberg, the interview shows a company trying to turn board design expertise into a flexible embedded platform that engineers can actually build on, not just evaluate.</p>
<p>source <a href="https://www.youtube.com/watch?v=SIz9Ln5vz68">https://www.youtube.com/watch?v=SIz9Ln5vz68</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/20/hs-devices-atronx-microq7-com-octavo-sip-linux-web-terminal-and-modular-i-o/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139072</post-id>	</item>
		<item>
		<title>RISC-V CEO Interview on ISO Standardization, AI Matrix Extensions, Automotive and Edge Compute</title>
		<link>https://armdevices.net/2026/03/20/risc-v-ceo-interview-on-iso-standardization-ai-matrix-extensions-automotive-and-edge-compute/</link>
					<comments>https://armdevices.net/2026/03/20/risc-v-ceo-interview-on-iso-standardization-ai-matrix-extensions-automotive-and-edge-compute/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 02:16:37 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139070</guid>

					<description><![CDATA[RISC-V CEO Andrea Gallo presents RISC-V International here not as a single chip vendor story, but as the governance layer behind an open instruction set architecture that lets semiconductor firms, IP providers, tool vendors and device makers build to the same ISA while keeping their own implementation details proprietary. The interview explains why that distinction matters: RISC-V [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>RISC-V CEO Andrea Gallo presents RISC-V International here not as a single chip vendor story, but as the governance layer behind an open instruction set architecture that lets semiconductor firms, IP providers, tool vendors and device makers build to the same ISA while keeping their own implementation details proprietary. The interview explains why that distinction matters: RISC-V is an open standard, not an open-source core, so the shared asset is the specification itself and the portability it enables across software stacks, supply chains and long product cycles. https://riscv.org/</p>
<p>A central theme is standardization. Andrea Gallo frames the 2025 milestone of RISC-V International becoming an ISO/IEC JTC 1 PAS submitter as more than a badge: it is a path toward formal international recognition for the ISA, which can matter in procurement, compliance, functional safety and regulated industrial design. That is especially relevant as RISC-V moves further from microcontrollers into application processors, automotive platforms, security architectures and compute infrastructure where interoperability and long-term governance carry real weight.</p>
<p>The conversation also gets into how technical consensus is built without turning the ISA into bloat. New extensions are expected to solve real multi-company problems, not one-off requests, and that discipline is what keeps the architecture coherent while still expanding into vectors, matrix processing and AI-friendly data handling. The software point is critical: vendors may differentiate in silicon, microarchitecture and performance, but developers still want one PyTorch, one TensorFlow backend, one toolchain target and a stable compliance model rather than fragmented ports.</p>
<p>Another useful insight is how RISC-V is organizing itself around both horizontal technologies and industry verticals. Alongside the core technical groups, the ecosystem is pulling in requirements from automotive, safety, data center, space, intelligent edge and robotics so that recommendations can map real workloads to the right ISA profiles, extensions and software expectations. That makes the story less about ideology and more about practical system design: where the standard should stop, where vendors should compete, and how to keep portability from compiler to firmware to OS and AI runtime.</p>
<p>What comes through most clearly is that RISC-V is no longer just a university-origin ISA associated with embedded experimentation. It is becoming a neutral coordination point for global compute development, backed by formal process, public technical review and a growing base of engineers, researchers and students who are treating the architecture as production infrastructure. Filmed at Embedded World 2026 in Nuremberg, this interview captures that transition well: from open ISA theory to the harder work of profiles, extensions, safety, matrix acceleration, ecosystem alignment and real deployment at scale.</p>
<p>source <a href="https://www.youtube.com/watch?v=4IoVgheSB2o">https://www.youtube.com/watch?v=4IoVgheSB2o</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/20/risc-v-ceo-interview-on-iso-standardization-ai-matrix-extensions-automotive-and-edge-compute/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139070</post-id>	</item>
		<item>
		<title>Espressif ESP32-P4 Edge AI Robot Arm, ESP32-H4 LE Audio, ESP32-E22 Wi-Fi 6E</title>
		<link>https://armdevices.net/2026/03/20/espressif-esp32-p4-edge-ai-robot-arm-esp32-h4-le-audio-esp32-e22-wi-fi-6e/</link>
					<comments>https://armdevices.net/2026/03/20/espressif-esp32-p4-edge-ai-robot-arm-esp32-h4-le-audio-esp32-e22-wi-fi-6e/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 00:01:50 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139068</guid>

					<description><![CDATA[Espressif’s booth video is really about how far the ESP32 family has moved beyond basic IoT nodes into embedded vision, motion control, touch UI, wireless audio, and higher-bandwidth connectivity. The main demo centers on an ESP32-P4 robotic arm using on-device computer vision to detect colored blocks and trigger pick-and-place motion, which is a good fit [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Espressif’s booth video is really about how far the ESP32 family has moved beyond basic IoT nodes into embedded vision, motion control, touch UI, wireless audio, and higher-bandwidth connectivity. The main demo centers on an ESP32-P4 robotic arm using on-device computer vision to detect colored blocks and trigger pick-and-place motion, which is a good fit for the P4’s dual-core RISC-V architecture, AI instruction extensions, MIPI camera/display support, hardware pixel processing, and H.264-capable multimedia pipeline. https://www.espressif.com/</p>
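<p>The colored-block detection itself is not shown in code, but the general technique (threshold in a color space, then take the centroid of the mask to aim the arm) can be sketched in a few lines of NumPy. The thresholds and synthetic frame below are illustrative only, not Espressif's actual pipeline:</p>

```python
import numpy as np

def find_red_block(img, r_min=150, g_max=100, b_max=100):
    """Centroid (row, col) of pixels that pass a crude 'red' threshold.

    img is an H x W x 3 uint8 RGB array; thresholds are illustrative.
    Returns None when no pixel qualifies.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.mean()), int(cols.mean())

# Tiny synthetic frame: black background, red 2x2 block at rows 4-5, cols 6-7.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:6, 6:8] = (200, 30, 30)
print(find_red_block(frame))  # → (4, 6)
```

<p>A real pipeline on the P4 would add lens correction, a more robust color space than raw RGB and debouncing across frames, but the detect-then-act loop driving the arm has this same shape.</p>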
<p>What makes the robotic arm section interesting is that it combines local inference with networked control instead of treating edge AI and cloud AI as opposites. In the demo, OpenCV-style vision runs directly on the chip for offline detection, while wireless connectivity is used for function-call style interaction and remote control. That fits Espressif’s broader direction for the P4 platform: richer HMI, camera-based edge computing, and low-cost embedded systems that can still expose modern interfaces and automation logic. The handheld controller also points to ESP-NOW as a practical low-latency device-to-device control layer for responsive robotics and peripherals.</p>
<p>The middle part of the video broadens that story with touch and audio demos rather than staying narrowly focused on robotics. The piano example shows how Espressif is positioning capacitive touch as a stable UI input method for compact devices, while the small talking character demo shifts attention to voice interaction, directional audio capture, and sensor-driven movement. That combination matters because Espressif is increasingly covering the full edge stack: sensing, local processing, audio I/O, display control, and wireless backhaul, all in platforms that stay closer to MCU economics than full application-processor designs.</p>
<p>Another useful part of the booth tour is the segmentation across chips. The ESP32-H4 appears in the BLE audio and touch-control demos, which lines up with its role as a low-power dual-core RISC-V SoC for Bluetooth 5.4 LE, IEEE 802.15.4, LE Audio, PAwR, direction finding, and battery-powered devices with an integrated DC-DC converter. The sensor shuttle concept then shows Espressif’s modular approach to quick prototyping, where IMU, magnetic, environmental, display, lighting, microphone, speaker, and battery functions can be mixed around a compact controller rather than rebuilt for each proof of concept.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the last stretch of the video gives a glimpse of where Espressif is expanding next: not just low-power 2.4 GHz IoT, but also stronger wireless transport. The ESP32-C5, which reached mass production in 2025, brings dual-band Wi-Fi 6 plus Bluetooth LE and 802.15.4, while the newer ESP32-E22 adds tri-band Wi-Fi 6E as a connectivity co-processor across 2.4, 5, and 6 GHz. Put together, the booth is less about a single hero demo and more about Espressif building a ladder from simple sensors to edge AI vision, LE Audio, robotics, and higher-throughput connected devices.</p>
<p>source <a href="https://www.youtube.com/watch?v=21dSHwdn7pQ">https://www.youtube.com/watch?v=21dSHwdn7pQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/20/espressif-esp32-p4-edge-ai-robot-arm-esp32-h4-le-audio-esp32-e22-wi-fi-6e/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139068</post-id>	</item>
		<item>
		<title>Espressif Booth Tour at Embedded World 2026 ESP32-P4 HMI, ESP32-C6 Low Power, ESP32-E22 Wi-Fi 6E</title>
		<link>https://armdevices.net/2026/03/19/espressif-booth-tour-at-embedded-world-2026-esp32-p4-hmi-esp32-c6-low-power-esp32-e22-wi-fi-6e/</link>
					<comments>https://armdevices.net/2026/03/19/espressif-booth-tour-at-embedded-world-2026-esp32-p4-hmi-esp32-c6-low-power-esp32-e22-wi-fi-6e/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 22:11:17 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139066</guid>

					<description><![CDATA[Espressif uses this demo to show how far its MCU roadmap has moved beyond classic sensor nodes and simple connectivity. The centerpiece is the ESP32-P4, a dual-core RISC-V MCU aimed at richer HMI, multimedia and lightweight edge vision, paired here with a Riverdi 12.1-inch 1280×800 high-brightness industrial touch display. What stands out is not raw [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Espressif uses this demo to show how far its MCU roadmap has moved beyond classic sensor nodes and simple connectivity. The centerpiece is the ESP32-P4, a dual-core RISC-V MCU aimed at richer HMI, multimedia and lightweight edge vision, paired here with a Riverdi 12.1-inch 1280×800 high-brightness industrial touch display. What stands out is not raw headline performance alone, but the fact that this class of GUI can run in an MCU environment with ESP-IDF and LVGL rather than requiring a heavier application processor. https://www.espressif.com/en/products/socs/esp32-p4</p>
<p>The discussion makes clear that Espressif is positioning the P4 as a serious display and interface device: MIPI support, camera input, vector instructions, pixel-processing acceleration, and a software stack that stays accessible to embedded developers. That creates an interesting middle ground between traditional microcontrollers and Linux-class SoCs. For product teams building control panels, industrial terminals, smart appliances, medical interfaces or compact vision-enabled devices, that balance of cost, power envelope and graphics capability is likely the real point of interest.</p>
<p>Another theme is software portability and ecosystem depth. The demo moves between ESP-IDF, LVGL, Embedded Wizard and Slint, while also touching on Rust support and open-source inference examples. Espressif’s approach remains closely tied to accessible tooling, broad community adoption and low barrier to entry, which is one reason the ESP32 family continues to show up in both commercial products and fast prototyping. The partner angle with Riverdi also matters, because industrial display vendors can turn a reference platform into something closer to a deployable subassembly.</p>
<p>Power management is the other major thread. The ESP32-C6 demo highlights Espressif’s split between high-power and low-power cores, showing how software design affects current draw far more than many teams initially expect. That is especially relevant for battery devices, wireless panels and always-on IoT endpoints. Filmed at Embedded World 2026 in Nuremberg, the booth tour also gives a useful snapshot of how Espressif now spans makers, industrial users and HMI developers rather than sitting in only one of those camps.</p>
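<p>The claim that software design dominates current draw can be made concrete with duty-cycle arithmetic: average current is the weighted mix of sleep and active currents, so wake interval and on-time matter far more than the sleep floor. The figures below are illustrative placeholders, not measured ESP32-C6 numbers:</p>

```python
def avg_current_ma(sleep_ma, active_ma, active_ms, period_s):
    """Average current for firmware that wakes every `period_s` seconds,
    stays active for `active_ms` milliseconds, and sleeps otherwise."""
    duty = (active_ms / 1000.0) / period_s
    return active_ma * duty + sleep_ma * (1.0 - duty)

def battery_life_days(capacity_mah, avg_ma):
    return capacity_mah / avg_ma / 24.0

# Placeholder figures: 20 uA deep sleep, 80 mA radio-active bursts of 200 ms.
chatty = avg_current_ma(0.02, 80.0, active_ms=200, period_s=1)   # wakes every second
frugal = avg_current_ma(0.02, 80.0, active_ms=200, period_s=60)  # wakes every minute
print(round(chatty, 3), round(frugal, 3))  # → 16.016 0.287
print(round(battery_life_days(1000, chatty), 1),   # days on a 1000 mAh cell
      round(battery_life_days(1000, frugal), 1))
```

<p>Same silicon, same sleep current, roughly a 50x difference in battery life purely from how often the firmware wakes the radio, which is the behavior the C6 demo is built to visualize.</p>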
<p>The wider portfolio shown at the booth reinforces that trajectory. Alongside P4-based HMI and camera demos, Espressif points to C-series RISC-V parts tailored for different wireless and memory requirements, plus the newly announced ESP32-E22 as a tri-band Wi-Fi 6E connectivity co-processor covering 2.4, 5 and 6 GHz. Put together, the story here is about modular architecture: compute where you need it, radio where you need it, and a path from compact MCU designs to more display-heavy and connected embedded products without abandoning the familiar ESP development model.</p>
<p>source <a href="https://www.youtube.com/watch?v=uVr0JxrOTfU">https://www.youtube.com/watch?v=uVr0JxrOTfU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/espressif-booth-tour-at-embedded-world-2026-esp32-p4-hmi-esp32-c6-low-power-esp32-e22-wi-fi-6e/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139066</post-id>	</item>
		<item>
		<title>N-iX Embedded Engineering, IoT Prototyping, Nordic Low-Power Devices &#038; Robotics</title>
		<link>https://armdevices.net/2026/03/19/n-ix-embedded-engineering-iot-prototyping-nordic-low-power-devices-robotics/</link>
					<comments>https://armdevices.net/2026/03/19/n-ix-embedded-engineering-iot-prototyping-nordic-low-power-devices-robotics/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 20:11:09 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139064</guid>

					<description><![CDATA[This interview frames N-iX as a broad engineering partner rather than a narrow outsourcing vendor. The key point is its one-stop model: embedded software, hardware design, mechanical engineering, connectivity, cloud, and data work can be combined into a single product-development flow, which matters when companies need faster prototyping, tighter hardware-software integration, and fewer handoffs across [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>This interview frames N-iX as a broad engineering partner rather than a narrow outsourcing vendor. The key point is its one-stop model: embedded software, hardware design, mechanical engineering, connectivity, cloud, and data work can be combined into a single product-development flow, which matters when companies need faster prototyping, tighter hardware-software integration, and fewer handoffs across suppliers. https://www.n-ix.com/</p>
<p>The embedded team describes the practical side of that model well. Instead of focusing only on firmware, they talk about building real devices end to end, including enclosures, electronics, and product-level design decisions. That makes this less about coding capacity and more about full-cycle embedded engineering, where board design, RTOS or Linux software, wireless connectivity, mechanical constraints, validation, and manufacturability all need to line up.</p>
<p>A useful detail in the conversation is how N-iX uses platforms such as Raspberry Pi and Arduino. These are presented mainly as prototyping tools, but also as fast paths for proof-of-concept work where teams need to validate sensing, control logic, motion, and obstacle avoidance before moving to a more production-oriented architecture. The robotic arm demo fits that pattern: rapid iteration around edge control, object handling, and system behavior, with the prototype acting as a bridge between concept and deployable product.</p>
<p>The mention of Nordic Semiconductor also points to a more specific technical direction: low-power connected devices. That usually means Bluetooth Low Energy, battery-optimized wearables, asset trackers, sensor nodes, and other designs where power budgeting, radio performance, firmware efficiency, and long maintenance cycles matter as much as raw compute. Seen that way, the video is really about how an engineering services company positions itself across the full embedded stack, from early prototype hardware to connected edge and IoT product development. The interview was filmed at Embedded World 2026 in Nuremberg.</p>
<p>source <a href="https://www.youtube.com/watch?v=aZFiVUrnBLw">https://www.youtube.com/watch?v=aZFiVUrnBLw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/n-ix-embedded-engineering-iot-prototyping-nordic-low-power-devices-robotics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139064</post-id>	</item>
		<item>
		<title>CTRL+N Railway RTLS Wearables AI Multimeter Embedded Safety</title>
		<link>https://armdevices.net/2026/03/19/ctrln-railway-rtls-wearables-ai-multimeter-embedded-safety/</link>
					<comments>https://armdevices.net/2026/03/19/ctrln-railway-rtls-wearables-ai-multimeter-embedded-safety/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 18:11:24 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139062</guid>

					<description><![CDATA[CTRL+N presents itself here as a Serbian engineering startup building both hardware and software around embedded systems, with a clear focus on IoT, RTLS, wearables and AI-enabled digital platforms for industrial use. In this interview, the company frames its value around practical field devices rather than generic demos, showing how sensing, positioning and human-machine interaction [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>CTRL+N presents itself here as a Serbian engineering startup building both hardware and software around embedded systems, with a clear focus on IoT, RTLS, wearables and AI-enabled digital platforms for industrial use. In this interview, the company frames its value around practical field devices rather than generic demos, showing how sensing, positioning and human-machine interaction can be combined into compact products for real deployments. https://ctrln.tech/</p>
<p>The strongest use case in the video is railway safety. CTRL+N shows a digital signalling and worker-safety concept built on embedded electronics, wireless connectivity and precise location awareness, so field personnel can be tracked relative to infrastructure and hazards. That points to a broader architecture built around RTLS, low-power radios such as Bluetooth Low Energy, edge sensing and alert logic, where worker position, status and alarm conditions can be fed into a supervision layer rather than handled as isolated devices.</p>
<p>The wearable element is especially relevant because it turns the system into something operational at track level. A wrist-worn or body-worn node that can vibrate, flash alarms and report location or basic vital-state data is a practical embedded design problem: power budget, ruggedization, wireless reliability, latency and usability all matter more than consumer-style features. In that sense, the video is less about a gadget and more about occupational safety infrastructure built from embedded hardware, firmware and connected software.</p>
<p>Another interesting detail is the AI-assisted multimeter concept. Instead of treating a measurement tool as a passive instrument, CTRL+N describes a compact tester with a chatbot-style interface that helps technicians investigate rail-track faults and interpret readings locally. That suggests a direction where field diagnostics blend measurement electronics, embedded UI, contextual guidance and AI support, giving junior engineers faster troubleshooting workflows while reducing dependence on constant access to senior staff. The interview was filmed at Embedded World 2026 in Nuremberg, where that mix of rail tech, wearables, RTLS and AI-backed maintenance made CTRL+N stand out as a systems-oriented engineering company rather than a single-product vendor.</p>
<p>source <a href="https://www.youtube.com/watch?v=16uPvs8tFE8">https://www.youtube.com/watch?v=16uPvs8tFE8</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/ctrln-railway-rtls-wearables-ai-multimeter-embedded-safety/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139062</post-id>	</item>
		<item>
		<title>Altium Octopart Discover system design search, BOM sourcing, reference designs, CAD workflow</title>
		<link>https://armdevices.net/2026/03/19/altium-octopart-discover-system-design-search-bom-sourcing-reference-designs-cad-workflow/</link>
					<comments>https://armdevices.net/2026/03/19/altium-octopart-discover-system-design-search-bom-sourcing-reference-designs-cad-workflow/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 15:41:20 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139060</guid>

					<description><![CDATA[Altium positions Octopart Discover as a step beyond classic component lookup, turning Octopart from a parts search engine into a system-level discovery workflow. The core idea in this interview is persistent design intent: engineers can start with requirements, narrow options by context such as power, performance, lifecycle status or sourcing constraints, and carry those decisions [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Altium positions Octopart Discover as a step beyond classic component lookup, turning Octopart from a parts search engine into a system-level discovery workflow. The core idea in this interview is persistent design intent: engineers can start with requirements, narrow options by context such as power, performance, lifecycle status or sourcing constraints, and carry those decisions through architecture, PCB design and procurement instead of losing that reasoning between tools. https://octopart.com/octopart-discover</p>
<p>What stands out is the shift from part-centric filtering to electronics system design. Rather than only searching for a specific IC or passive, the platform is shown handling reference designs, functional blocks, simulation assets, CAD data, lifecycle flags, alternates and distributor availability in one flow. That makes the tool relevant not just for component engineers but also for embedded software teams, hardware architects, sourcing specialists and manufacturing teams trying to converge earlier on a viable BOM.</p>
<p>The demo also suggests a more interactive reference design workflow. Users can inspect schematics and PCB context, view board layers and 3D geometry, drill into component properties, compare alternates, and preserve technical questions asked to field application engineers around a given design choice. That is important because many embedded projects fail less on raw part search than on handoff friction: why a device was chosen, whether it remains recommended for new design, and what constraints shaped the original decision.</p>
<p>On the Octopart side, the scale still matters. The demo references a component database in the tens of millions, live pricing and stock visibility, distributor and manufacturer normalization, and BOM-level purchasing flows that can move from architecture to preferred sourcing channels with fewer spreadsheet exports. For engineers dealing with second-source strategy, compliance, availability windows, regional supply conditions or cost-down work, that combination of technical metadata and sourcing context is where the platform becomes more than search.</p>
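<p>The alternate-and-lifecycle part of that workflow can be pictured as a simple filter over candidate parts. The records and field names below are invented for illustration and are not Octopart's actual data model or API:</p>

```python
# Hypothetical part records for illustration; not Octopart's data model.
candidates = [
    {"mpn": "REG-A1", "lifecycle": "Active", "stock": 12000, "price": 0.42},
    {"mpn": "REG-B2", "lifecycle": "NRND",   "stock": 50000, "price": 0.31},
    {"mpn": "REG-C3", "lifecycle": "Active", "stock": 0,     "price": 0.28},
    {"mpn": "REG-D4", "lifecycle": "Active", "stock": 8000,  "price": 0.39},
]

def pick_alternate(parts, min_stock=1000):
    """Keep parts that are safe for new designs and actually available,
    then return the cheapest; None when nothing qualifies."""
    viable = [p for p in parts
              if p["lifecycle"] == "Active" and p["stock"] >= min_stock]
    return min(viable, key=lambda p: p["price"]) if viable else None

print(pick_alternate(candidates)["mpn"])  # → REG-D4: cheaper parts lose on
                                          # lifecycle status or availability
```

<p>A real flow would also screen on compliance flags, second-source policy and regional stock, but the shape of the decision stays the same: gate on lifecycle and availability first, then optimize price.</p>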
<p>Filmed at Embedded World 2026 in Nuremberg, this conversation is really about how EDA, supply chain data and early system architecture are starting to merge. Octopart Discover is presented not as a closed CAD feature but as an open, cross-ecosystem layer that can connect reference designs, component intelligence, distributor data and downstream implementation tools. If Altium executes on that open workflow, it could make early-stage embedded design more traceable, more procurement-aware and much quicker to take from concept to production.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=SL3y-r2sSuM">https://www.youtube.com/watch?v=SL3y-r2sSuM</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/altium-octopart-discover-system-design-search-bom-sourcing-reference-designs-cad-workflow/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139060</post-id>	</item>
		<item>
		<title>SiliconAuto XMotiv M3, ZF Interface Chip, ADAS Pre-Processing, ADB Lighting and Digital Twin</title>
		<link>https://armdevices.net/2026/03/19/siliconauto-xmotiv-m3-zf-interface-chip-adas-pre-processing-adb-lighting-and-digital-twin/</link>
					<comments>https://armdevices.net/2026/03/19/siliconauto-xmotiv-m3-zf-interface-chip-adas-pre-processing-adb-lighting-and-digital-twin/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 14:11:20 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139058</guid>

					<description><![CDATA[SiliconAuto is positioning itself as a new automotive semiconductor player focused on the control layer that sits between high-level compute and real-time vehicle behavior. In this interview, the company frames its first XMotiv M3 microcontroller as part of a broader move toward automotive HPC, MCU and high-speed interconnect architectures built for low latency, functional safety [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>SiliconAuto is positioning itself as a new automotive semiconductor player focused on the control layer that sits between high-level compute and real-time vehicle behavior. In this interview, the company frames its first XMotiv M3 microcontroller as part of a broader move toward automotive HPC, MCU and high-speed interconnect architectures built for low latency, functional safety and deterministic motion control rather than consumer-style compute alone. https://www.siliconautotech.com/</p>
<p>The technical story is really about partitioning. Instead of forcing a central SoC to absorb every sensor, control and housekeeping task, SiliconAuto argues for distributing work across a safety-oriented MCU and companion devices that handle timing-critical jobs closer to the edge. That matters in ADAS and automated driving, where sensor fusion, bounded latency, power limits and fail-operational behavior all shape the system architecture more than raw TOPS alone.</p>
<p>The XMotiv M3 itself is described as a TSMC 28 nm automotive MCU built around an Arm Cortex-M33 at 160 MHz, with TrustZone, HSM, random-number generation, CAN FD, SPI, I2C, UART and a large GPIO budget. In the demo, it drives an adaptive driving beam reference design with matrix LED control, regional dimming, steering-linked light shaping and welcome-animation features. The interesting angle is not just the headlamp demo, but the attempt to move premium lighting control, reference code and faster integration paths into more mainstream vehicle programs too.</p>
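<p>The steering-linked, glare-free regional dimming described in the headlamp demo can be sketched in a few lines. This is a hypothetical model of the control logic only: the zone count, field of view and duty values below are invented for illustration, not taken from the XMotiv M3 reference design.</p>

```python
# Hypothetical sketch of steering-linked, glare-free matrix-LED dimming.
# Zone count, field of view and duty values are illustrative only.

NUM_ZONES = 16   # matrix columns across the beam
FOV_DEG = 40.0   # horizontal field covered by the matrix

def zone_of(angle_deg: float) -> int:
    """Map a horizontal angle in [-FOV/2, +FOV/2] to a zone index."""
    frac = (angle_deg + FOV_DEG / 2) / FOV_DEG
    return max(0, min(NUM_ZONES - 1, int(frac * NUM_ZONES)))

def beam_pattern(steering_deg: float, oncoming_angles: list[float]) -> list[int]:
    """Per-zone PWM duty (0-100): full beam by default, zones around detected
    oncoming vehicles dimmed, outside-of-turn zones reduced while steering."""
    duties = [100] * NUM_ZONES
    for angle in oncoming_angles:          # carve a dark tunnel per vehicle
        z = zone_of(angle)
        for dz in (-1, 0, 1):
            if 0 <= z + dz < NUM_ZONES:
                duties[z + dz] = 10
    if steering_deg > 5:                   # turning right: dim leftmost zones
        for i in range(NUM_ZONES // 4):
            duties[i] = min(duties[i], 60)
    elif steering_deg < -5:                # turning left: dim rightmost zones
        for i in range(3 * NUM_ZONES // 4, NUM_ZONES):
            duties[i] = min(duties[i], 60)
    return duties
```

<p>In a real design each duty value would drive a matrix-LED channel over CAN FD or SPI; the point here is only that the shaping logic is simple enough to live comfortably on a Cortex-M33-class MCU.</p>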
<p>A second thread in the video is SiliconAuto’s work with ZF on an award-winning I/O architecture shown at Embedded World 2026 in Nuremberg. Here the MCU acts as a safety and system-management companion for a chip handling camera and sensor pre-processing, image signal processing, radar-related data paths and AI-assisted detection. The broader implication is a chiplet-friendly automotive compute stack where OEMs can mix performance SoC, AI accelerator and I/O domains with more flexibility, while reducing CPU overhead, DDR traffic and sensor-interface bottlenecks.</p>
<p>The digital-twin demos push that idea further by showing software, AI inference benchmarking and even robotic-arm control before final silicon is available. That early virtual-platform workflow is increasingly relevant for automotive, robotics and drone development, where validation time, toolchain maturity and faster concept-to-production cycles can be just as important as the silicon itself. Overall, the video shows SiliconAuto less as a single-chip launch and more as an attempt to define a modular automotive compute model around safety MCU, sensor pre-processing, ADB lighting, UCIe-era integration and real-time motion.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=sxBNICzdrFo">https://www.youtube.com/watch?v=sxBNICzdrFo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/siliconauto-xmotiv-m3-zf-interface-chip-adas-pre-processing-adb-lighting-and-digital-twin/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139058</post-id>	</item>
		<item>
		<title>MediaTek Booth Tour at Embedded World 2026: Genio Pro 5100, Genio 360, 420, 520, 720, Edge AI, OSM</title>
		<link>https://armdevices.net/2026/03/19/mediatek-booth-tour-at-embedded-world-2026-genio-pro-5100-genio-360-420-520-720-edge-ai-osm/</link>
					<comments>https://armdevices.net/2026/03/19/mediatek-booth-tour-at-embedded-world-2026-genio-pro-5100-genio-360-420-520-720-edge-ai-osm/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 11:31:25 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139056</guid>

					<description><![CDATA[MediaTek’s Embedded World 2026 booth tour centers on a broader edge AI compute stack for industrial and embedded systems, with the Genio family now spanning entry, mid-range, and higher-performance tiers. The key message is platform continuity: shared software direction, scalable AI acceleration, and pin-compatible options that let OEMs move between performance classes without redesigning everything [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>MediaTek’s Embedded World 2026 booth tour centers on a broader edge AI compute stack for industrial and embedded systems, with the Genio family now spanning entry, mid-range, and higher-performance tiers. The key message is platform continuity: shared software direction, scalable AI acceleration, and pin-compatible options that let OEMs move between performance classes without redesigning everything from scratch. In practice, that matters for robotics, HMI, smart retail, machine vision, industrial IoT, and connected equipment that need on-device inference rather than cloud dependence. https://www.mediatek.com/products/iot/genio-iot</p>
<p>The newly discussed Genio 360 is positioned as a major refresh of the lower end of the lineup, replacing a much older class of parts with a hexa-core 6nm design and up to 6 TOPS for edge inference. That is a meaningful jump for cost-sensitive devices that still need practical AI workloads such as object detection, pose estimation, gesture recognition, vision-based monitoring, and lightweight generative AI at the edge. Above that, the Genio 420 extends the range, while the previously introduced Genio 720 and 520 bring 10 TOPS on 6nm silicon with octa-core CPU configurations and support for LPDDR4 or LPDDR5 memory.</p>
<p>At the top of this discussion is the new Genio Pro tier, presented here as a 50 TOPS class platform aimed at heavier edge AI and robotics workloads. That shifts MediaTek’s embedded portfolio closer to use cases involving multimodal perception, larger vision models, more demanding transformer inference, autonomous mobile systems, and local LLM deployment in the 7B class and beyond, depending on model optimization, quantization, memory footprint, and thermal design. The emphasis is not only raw TOPS, but a combination of CPU headroom, multimedia capability, memory bandwidth, and developer readiness through early kits and partner designs.</p>
<p>The demo ecosystem in the booth shows how that silicon strategy turns into deployable products. MediaTek highlights partner SOMs and OSM modules, including compact designs from companies such as Mitwell, plus embedded boards built around parts like the Genio 1200 for display-heavy systems. One of the more concrete examples here is a 6DoF tracking setup for forklifts and mobile equipment, illustrating how edge AI, sensors, and embedded compute can be packaged into aftermarket or OEM industrial systems. Filmed at Embedded World 2026 in Nuremberg, the video gives a useful snapshot of where MediaTek is heading: from entry-level embedded AI up to high-throughput edge compute for robotics, vision, and industrial automation.</p>
<p>source <a href="https://www.youtube.com/watch?v=gtCAVdaedqI">https://www.youtube.com/watch?v=gtCAVdaedqI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/19/mediatek-booth-tour-at-embedded-world-2026-genio-pro-5100-genio-360-420-520-720-edge-ai-osm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139056</post-id>	</item>
		<item>
		<title>Premio modular rugged edge AI computers, Jetson Orin, EDGEBoost, railway and vision systems</title>
		<link>https://armdevices.net/2026/03/17/premio-modular-rugged-edge-ai-computers-jetson-orin-edgeboost-railway-and-vision-systems/</link>
					<comments>https://armdevices.net/2026/03/17/premio-modular-rugged-edge-ai-computers-jetson-orin-edgeboost-railway-and-vision-systems/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 19:11:42 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139053</guid>

					<description><![CDATA[Premio’s latest platform story is really about rugged edge compute becoming more modular, more serviceable, and more AI-specific at the same time. The interview focuses on fanless industrial computers, panel PCs, and display systems designed for harsh deployments where vibration tolerance, thermal design, and lifecycle flexibility matter as much as raw performance. A central theme [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Premio’s latest platform story is really about rugged edge compute becoming more modular, more serviceable, and more AI-specific at the same time. The interview focuses on fanless industrial computers, panel PCs, and display systems designed for harsh deployments where vibration tolerance, thermal design, and lifecycle flexibility matter as much as raw performance. A central theme is Premio’s EDGEBoost architecture, which lets users configure I/O, storage, networking, and acceleration around a standardized core rather than forcing a fixed box into every deployment. https://premioinc.com/</p>
<p>That modular approach shows up in several places: M12 connectivity, dual 10GbE, PoE, out-of-band remote management, lockable storage, safe-eject logging features, and expansion paths for NVMe and GPU resources. The pitch is not just customization for its own sake, but faster deployment in industrial environments where requirements vary between vehicle systems, rail, machine vision, data logging, and field-installed automation. Premio also ties this to IEC 62443-4-1 processes, which matters for customers now treating cybersecurity and maintainability as part of the hardware spec rather than an afterthought.</p>
<p>The strongest technical segment is around rugged AI computers based on NVIDIA Jetson, especially Jetson AGX Orin and Orin-class systems for robotics, surveillance, ADAS, and anomaly detection. The transcript highlights GMSL camera support for low-latency long-cable video links in trucks and rail, plus IP66 designs for condensation-prone deployments. That combination of sealed enclosure design, fanless thermal engineering, and transport-focused compliance such as EN50155 and E-Mark is what makes these systems relevant beyond the demo table and into real railway and in-vehicle edge AI rollouts.</p>
<p>Another useful angle is Premio’s view of the “physical AI” compute ladder. At the low end, x86 platforms with integrated NPUs handle compact fanless inference. Moving up, M.2 AI accelerator cards add higher channel density for vision workloads without the power and size penalty of multiple discrete GPUs. Then Jetson Orin and larger GPU-based x86 systems take over for vision-language models, multimodal inference, and on-prem industrial AI where bandwidth, privacy, and latency make cloud-first architectures less practical. Filmed at Embedded World 2026 in Nuremberg, the interview reflects a market that is clearly shifting from simple object detection toward local VLM, SLM, and multimodal edge deployments.</p>
<p>The smart terminal and OEM/ODM sections complete the picture. Premio is not only selling rugged boxes, but also modular display systems where damaged front-end panels can be replaced without scrapping the compute backend, which is a practical design choice for glove-heavy industrial use. Combined with board-level customization, waterproof housings, and tailored I/O, the company is positioning itself as a hardware partner for system integrators building industrial 5.0, smart city, inspection, surveillance, and robotics platforms where reliability, thermal validation, and configurability all have to coexist.</p>
<p>source <a href="https://www.youtube.com/watch?v=rEmbvGBsz4I">https://www.youtube.com/watch?v=rEmbvGBsz4I</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/17/premio-modular-rugged-edge-ai-computers-jetson-orin-edgeboost-railway-and-vision-systems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139053</post-id>	</item>
		<item>
		<title>Innatera Pulsar neuromorphic MCU, SNN edge AI, radar presence sensing and audio classification</title>
		<link>https://armdevices.net/2026/03/17/innatera-pulsar-neuromorphic-mcu-snn-edge-ai-radar-presence-sensing-and-audio-classification/</link>
					<comments>https://armdevices.net/2026/03/17/innatera-pulsar-neuromorphic-mcu-snn-edge-ai-radar-presence-sensing-and-audio-classification/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 15:03:50 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139051</guid>

					<description><![CDATA[Innatera is positioning neuromorphic computing as a practical way to run always-on sensor AI without the usual power penalty. In this interview, the company explains how its Pulsar chip combines spiking neural networks, a RISC-V microcontroller, and a CNN accelerator in a single sensor-edge device, so pattern recognition can happen continuously where data is created [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Innatera is positioning neuromorphic computing as a practical way to run always-on sensor AI without the usual power penalty. In this interview, the company explains how its Pulsar chip combines spiking neural networks, a RISC-V microcontroller, and a CNN accelerator in a single sensor-edge device, so pattern recognition can happen continuously where data is created rather than being pushed to a larger processor or the cloud. That makes the discussion less about raw TOPS marketing and more about system-level efficiency, latency, and battery life. https://innatera.com/pulsar</p>
<p>The key idea is that Pulsar uses silicon neurons and synapses across digital and analog spiking fabric to process sensory events in a brain-inspired way. Instead of treating AI as a separate block bolted onto a conventional embedded design, Innatera presents neuromorphic inference as part of the whole SoC architecture. The result is a platform aimed at sub-millisecond reaction time, low data movement, and ultra-low-power operation for audio, radar, vibration, and other continuous sensor streams at the edge.</p>
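<p>The silicon neurons mentioned above follow the same basic dynamics as the textbook leaky integrate-and-fire (LIF) model that most spiking fabrics build on. The sketch below is that generic model with illustrative parameters, not Pulsar&#8217;s actual mixed-signal neuron circuit.</p>

```python
# Generic leaky integrate-and-fire (LIF) neuron, the basic element behind
# spiking neural networks. Parameters are illustrative; real neuromorphic
# neurons are analog/mixed-signal circuits, not this discrete loop.

def lif_run(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron over a list of input currents.
    Returns the list of time steps at which it spiked."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and integrates input
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset            # reset after firing
    return spikes
```

<p>The efficiency argument follows directly: with no input the neuron does nothing but leak, so energy is spent only when events actually arrive, which is why always-on audio, radar and vibration sensing fit this model so well.</p>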
<p>What makes the video interesting is that the story quickly moves from architecture to concrete product categories. The live demos include real-time audio classification, audio scene recognition for adaptive headphones, radar-based human presence detection, and predictive maintenance based on vibration sensing. These are all workloads where conventional embedded AI often struggles with the tradeoff between accuracy and always-on operation. Innatera’s claim is that spiking neural networks can keep sensing active full time while staying inside the power budget of compact battery-powered devices.</p>
<p>There is also a strong ambient intelligence theme running through the interview. A notable example is the radar-based human presence detector developed with Socionext, targeting extremely low-power detection for devices such as smart doorbells. Another is the intelligent smoke detector described here, which adds classification and occupancy awareness rather than acting as a simple threshold alarm. Filmed at Embedded World 2026 in Nuremberg, the demo set gives a useful snapshot of where neuromorphic edge AI is heading: not as a research novelty, but as embedded silicon for smart home, industrial IoT, wearables, and safety systems alike.</p>
<p>The company background matters too. Innatera spun out of Delft University of Technology in 2018 after years of research into brain-inspired and energy-efficient computing, and the interview frames Pulsar as the point where that research becomes production silicon. That matters because the value proposition is not generic AI acceleration, but embedded pattern recognition that can stay on continuously in the field. For engineers building sensor-rich products, this is really a discussion about edge inference architecture, mixed-signal design, SNN deployment, and how to reduce power, latency, and bandwidth all at the same time.</p>
<p>source <a href="https://www.youtube.com/watch?v=jAM-sgLlmrg">https://www.youtube.com/watch?v=jAM-sgLlmrg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/17/innatera-pulsar-neuromorphic-mcu-snn-edge-ai-radar-presence-sensing-and-audio-classification/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139051</post-id>	</item>
		<item>
		<title>Bosch Rexroth ctrlX OS on AMD: Secure Industrial Control, Soft PLC, Node-RED, Edge AI</title>
		<link>https://armdevices.net/2026/03/17/bosch-rexroth-ctrlx-os-on-amd-secure-industrial-control-soft-plc-node-red-edge-ai/</link>
					<comments>https://armdevices.net/2026/03/17/bosch-rexroth-ctrlx-os-on-amd-secure-industrial-control-soft-plc-node-red-edge-ai/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 11:11:41 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139049</guid>

					<description><![CDATA[Bosch Rexroth is positioning ctrlX OS as a hardware-independent industrial Linux platform for software-defined automation, where the same application stack can move across controllers, IPCs, edge systems and virtual environments. In this interview, the focus is on secure industrial control, app-based deployment, and a common runtime that lets developers build once and roll out across [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Bosch Rexroth is positioning ctrlX OS as a hardware-independent industrial Linux platform for software-defined automation, where the same application stack can move across controllers, IPCs, edge systems and virtual environments. In this interview, the focus is on secure industrial control, app-based deployment, and a common runtime that lets developers build once and roll out across multiple device classes with far less integration work. https://www.ctrlx-os.com/</p>
<p>The demo shows how ctrlX OS can host different control approaches on the same data layer, from a soft PLC to Node-RED, while exposing machine states and digital I/O through a unified interface. That matters because industrial edge systems increasingly mix classic control logic, visualization, protocol handling, and data services on one platform rather than splitting them across isolated boxes.</p>
<p>A key theme here is the broader hardware reach created by Bosch Rexroth&#8217;s work with AMD. The transcript points to support for CPU, GPU and MPU resources, which fits the current push toward x86 embedded processors and adaptive SoC platforms for edge compute. For developers building compute-hungry workloads, that opens the door to more demanding HMI, analytics and edge AI pipelines without changing the operating-system layer or rewriting the deployment model.</p>
<p>Security and lifecycle management are just as central as performance. ctrlX OS is presented here as CRA-ready and aligned with IEC 62443-4-2 Security Level 2 expectations, while also giving access to the practical features engineers actually need in the field: backup and restore, reset, license management, app installation, and centralized access to every exposed data point. The result is less about a single controller and more about a secure, manageable OT software platform.</p>
<p>What makes the story interesting is the developer angle. Bosch Rexroth is clearly pushing an API-driven model where the same functions available in the web UI can also be automated through REST APIs, virtual controllers, SDK tooling, and reusable apps. Filmed at Embedded World 2026 in Nuremberg, this interview captures a broader transition in industrial automation: PLC logic, low-code tools, edge AI acceleration, and secure app deployment are starting to converge into one programmable software stack.</p>
<p>source <a href="https://www.youtube.com/watch?v=zIA8jK-tkFE">https://www.youtube.com/watch?v=zIA8jK-tkFE</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/17/bosch-rexroth-ctrlx-os-on-amd-secure-industrial-control-soft-plc-node-red-edge-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139049</post-id>	</item>
		<item>
		<title>Tianma display roadmap: glass-free 3D, Mini-LED, transparent Micro-LED and HUD</title>
		<link>https://armdevices.net/2026/03/17/tianma-display-roadmap-glass-free-3d-mini-led-transparent-micro-led-and-hud/</link>
					<comments>https://armdevices.net/2026/03/17/tianma-display-roadmap-glass-free-3d-mini-led-transparent-micro-led-and-hud/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 07:04:21 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139047</guid>

					<description><![CDATA[Tianma’s display portfolio here is less about a single panel and more about how the company is packaging complete HMI platforms for industrial, medical, transport and automotive use. The interview moves from a 23.8-inch 4K2K industrial display to integrated systems where Tianma supplies not just the LCD or OLED, but also electronics, compute boards and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Tianma’s display portfolio here is less about a single panel and more about how the company is packaging complete HMI platforms for industrial, medical, transport and automotive use. The interview moves from a 23.8-inch 4K2K industrial display to integrated systems where Tianma supplies not just the LCD or OLED, but also electronics, compute boards and enclosure design. That matters for OEMs building camera monitors, control terminals or specialized vision devices, because the value shifts from raw panel supply to full module integration, long-life support and design-in flexibility. https://www.tianma.eu/</p>
<p>A big theme in the booth tour is optical engineering for difficult environments. Tianma shows glass-free 3D with eye tracking, allowing a split between 2D UI and 3D visualization, which fits medical imaging and other workflows where depth cues matter but operators still need conventional data overlays. Mini-LED backlighting with local dimming is another clear focus, improving black levels and contrast for medical and inflight display use, while reflective display technology targets outdoor readability with far lower power draw than a conventional transmissive panel.</p>
<p>The industrial side is paired with application-specific hardware concepts, including a rugged professional tablet-style monitor for camera and vision systems. What stands out is the combination of Tianma&#8217;s core display technologies with embedded electronics, suggesting a path from display component to near-finished device. The transcript also points to Rockchip-based electronics in the demo hardware, which reinforces the idea that Tianma is not just talking about panel specs, but about complete embedded display subsystems tuned for field use, sunlight readability and power efficiency.</p>
<p>On the automotive side, the most interesting pieces are transparent Micro-LED, long-shape Micro-LED formats and a Micro-LED source for head-up display architecture. That lines up with Tianma’s broader recent push into automotive Micro-LED and HUD concepts, including very high brightness projection-oriented displays and transparent surfaces that can turn glass areas into information layers. In that context, the booth demo feels like an extension of a wider strategy around smart cockpit display architecture, where LTPS LCD, AMOLED and Micro-LED each serve different HMI roles rather than competing as one universal technology.</p>
<p>Later in the video, filmed at Embedded World 2026 in Nuremberg, the broader message becomes clear: Tianma is positioning itself as a global display engineering partner with in-house coverage across TFT-LCD, LTPS, AMOLED, Mini-LED and Micro-LED, backed by manufacturing scale in Asia and regional support for European customers. The result is a story about display roadmaps, integration capability and application fit, from smartphones to digital signage to transportation and automotive cockpits, rather than a simple product launch.</p>
<p>source <a href="https://www.youtube.com/watch?v=r51NNAA56PY">https://www.youtube.com/watch?v=r51NNAA56PY</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/17/tianma-display-roadmap-glass-free-3d-mini-led-transparent-micro-led-and-hud/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139047</post-id>	</item>
		<item>
		<title>Axelera Metis 214 TOPS and Europa Edge AI 629 TOPS: 8K Vision, RISC-V, Robotics, SLM, PCIe/M.2</title>
		<link>https://armdevices.net/2026/03/17/axelera-metis-214-tops-and-europa-edge-ai-629-tops-8k-vision-risc-v-robotics-slm-pcie-m-2/</link>
					<comments>https://armdevices.net/2026/03/17/axelera-metis-214-tops-and-europa-edge-ai-629-tops-8k-vision-risc-v-robotics-slm-pcie-m-2/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 23:36:34 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139044</guid>

					<description><![CDATA[Axelera positions itself as a European edge AI alternative focused on inference rather than training, and this interview makes that distinction clear. The main story is performance per watt: the company’s Metis platform is presented as delivering 214 TOPS at around 6W typical power, in compact M.2 and PCIe form factors that let developers add [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Axelera positions itself as a European edge AI alternative focused on inference rather than training, and this interview makes that distinction clear. The main story is performance per watt: the company’s Metis platform is presented as delivering 214 TOPS at around 6W typical power, in compact M.2 and PCIe form factors that let developers add AI acceleration to existing x86 or Arm systems without redesigning the whole box. https://axelera.ai/</p>
<p>What stands out in the demo lineup is how practical the workloads are. Instead of benchmark theatre, the booth focuses on edge deployments such as native 8K video analytics, retail loss prevention, container inspection for rust and damage, and autonomous robotics. The point is not just raw throughput, but being able to process high resolution video streams and multiple models at the edge where thermal limits, latency, bandwidth, and total system cost matter more than in cloud-first AI.</p>
<p>The technical angle is also stronger than a typical trade-show pitch. Axelera describes Metis as combining digital in-memory computing for matrix-vector multiplication with a RISC-V based orchestration layer across four AI cores, which allows parallel or cascaded model execution. That architecture fits the current edge AI mix well: computer vision pipelines, multi-model workloads, and lighter generative AI tasks such as speech interfaces and small language models, rather than full-scale training or oversized server-class LLM deployments.</p>
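<p>The core operation such in-memory fabrics accelerate is an integer matrix-vector multiply-accumulate. The sketch below emulates that idea in plain Python with simple symmetric INT8 quantization; the scales and values are chosen for the example and are not Metis-specific.</p>

```python
# Emulation of the INT8 matrix-vector multiply that digital in-memory
# compute fabrics accelerate. Symmetric per-tensor quantization with
# illustrative scales; not tied to any specific accelerator.

def quantize(values, scale):
    """Map floats to INT8 with simple symmetric quantization."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def mvm_int8(weights_fp, x_fp, w_scale=0.01, x_scale=0.01):
    """Quantize weights and input, multiply-accumulate in integers
    (the part done inside the memory array), then dequantize."""
    w_q = [quantize(row, w_scale) for row in weights_fp]
    x_q = quantize(x_fp, x_scale)
    acc = [sum(wi * xi for wi, xi in zip(row, x_q)) for row in w_q]  # int MACs
    return [a * w_scale * x_scale for a in acc]  # back to float
```

<p>The design point is that only the cheap integer MAC loop has to run inside the array; quantization and dequantization happen once at the edges, which is why FP32-trained models are converted before deployment.</p>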
<p>The roadmap matters just as much as the current chip. In the interview, Axelera points to Europa as the next step for premium edge systems, robotics, VLM-style contextual understanding, and larger language models beyond the current memory envelope. That lines up with the company’s broader push this year around Metis and Europa, its Voyager SDK toolchain, and ecosystem work that makes model conversion and deployment easier for developers moving from FP32 training environments to efficient edge inference.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, this conversation shows why Axelera is getting attention in European semiconductor and edge AI circles: not because it claims to replace GPU training infrastructure, but because it targets the part of the stack where many industrial systems actually live. Low-power inference, compact accelerators, RISC-V control, DDR5-backed memory bandwidth, and deployable computer vision pipelines are the core themes here, with Europe’s supply-chain and sovereignty angle sitting in the background rather than dominating the pitch.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=iJrwV9zM53A">https://www.youtube.com/watch?v=iJrwV9zM53A</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/17/axelera-metis-214-tops-and-europa-edge-ai-629-tops-8k-vision-risc-v-robotics-slm-pcie-m-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139044</post-id>	</item>
		<item>
		<title>ProvenRun ProvenCore EAL7, Automotive Ethernet Protocol Break, Formal OS, ProvenHSM, STM32H5, PQC</title>
		<link>https://armdevices.net/2026/03/16/provenrun-provencore-eal7-automotive-ethernet-protocol-break-formal-os-provenhsm-stm32h5-pqc/</link>
					<comments>https://armdevices.net/2026/03/16/provenrun-provencore-eal7-automotive-ethernet-protocol-break-formal-os-provenhsm-stm32h5-pqc/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 15:16:20 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139042</guid>

					<description><![CDATA[ProvenRun is making a case for embedded security that starts below the application layer, with a mathematically verified trusted base rather than another add-on middleware stack. In this interview, the company explains how ProvenCore, its formally proven secure OS and TEE, is used to build high-assurance systems for automotive, avionics, defense, microcontrollers and cloud security, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>ProvenRun is making a case for embedded security that starts below the application layer, with a mathematically verified trusted base rather than another add-on middleware stack. In this interview, the company explains how ProvenCore, its formally proven secure OS and TEE, is used to build high-assurance systems for automotive, avionics, defense, microcontrollers and cloud security, with the goal of reducing attack surface, simplifying certification and keeping long lifecycle products maintainable. https://provenrun.com/</p>
<p>A big part of the discussion is the shift to software-defined vehicles and zonal automotive Ethernet. ProvenRun’s protocol-break approach fully deconstructs and reconstructs traffic between exposed domains and safety-critical zones, rather than relying only on segmentation. That matters for in-vehicle infotainment, connectivity modems and ADAS paths, where 1GbE and faster links now carry far more critical traffic than older in-car networks ever did.</p>
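<p>The protocol-break idea can be illustrated generically: the gateway parses each frame into typed fields, validates them, and re-serializes a brand-new frame, so raw attacker-controlled bytes never cross into the protected zone. The frame layout and bounds below are invented for the example and are not ProvenRun&#8217;s format.</p>

```python
# Illustrative "protocol break": parse, validate, and rebuild each frame
# instead of forwarding it. Frame layout (id:u16, speed:u16, big-endian)
# and the validation bound are invented for this example.

import struct

MAX_SPEED = 300  # example validation bound for the speed field

def protocol_break(frame: bytes) -> bytes:
    """Deconstruct a frame into typed fields, validate, and reconstruct
    it from scratch for the safety-critical domain."""
    msg_id, speed = struct.unpack(">HH", frame[:4])  # parse, don't forward
    if speed > MAX_SPEED:
        raise ValueError("rejected: field out of range")
    # rebuild a fresh frame; trailing attacker-controlled bytes are dropped
    return struct.pack(">HH", msg_id, speed)
```

<p>The contrast with plain segmentation is that nothing from the original byte stream survives the boundary: padding, oversized payloads and malformed encodings are discarded by construction rather than filtered by rule.</p>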
<p>The technical differentiator is formal methods. ProvenRun says ProvenCore remains the only operating system certified at Common Criteria EAL7, and that foundation is then reused for trusted applications such as secure storage, cryptography, PKCS#11, VPN, network stacks, secure firmware update and protocol filtering. The company also highlights compatibility with standard embedded security ecosystems including GlobalPlatform, PSA-style APIs, Android trusted applications and post-quantum cryptography work with CryptoNext.</p>
<p>The interview also touches the microcontroller side, where ProvenCore-M is positioned as a secure RTOS and TEE for Arm v8-M class devices, including ST deployments around STM32 security architectures. That gives developers a pre-certified route to TrustZone-based isolation, secure services and easier product evaluation without having to design every security primitive from scratch. Filmed at Embedded World 2026 in Nuremberg, the demo shows how that same security-by-design philosophy is now being stretched from MCU roots into automotive gateways and trusted edge compute.</p>
<p>On the cloud side, ProvenRun is pushing ProvenHSM and ProvenBox as remotely manageable hardware-backed trust anchors for key management, crypto services and customizable secure applications. The interesting angle is not just HSM throughput, but compositional certification, cloud-native administration, FPGA-assisted crypto acceleration and a roadmap that includes PQC readiness. Overall, this is a useful look at how embedded cybersecurity is moving toward verifiable isolation, certifiable trusted execution and longer-term lifecycle assurance across both edge and data center scale.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=Cmz3ENmAYPs">https://www.youtube.com/watch?v=Cmz3ENmAYPs</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/provenrun-provencore-eal7-automotive-ethernet-protocol-break-formal-os-provenhsm-stm32h5-pqc/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139042</post-id>	</item>
		<item>
		<title>eSOL eMCOS POSIX RTOS, ROS Middleware, Multicore ARM Cortex and RISC-V Embedded Full Stack</title>
		<link>https://armdevices.net/2026/03/16/esol-emcos-posix-rtos-ros-middleware-multicore-arm-cortex-and-risc-v-embedded-full-stack/</link>
					<comments>https://armdevices.net/2026/03/16/esol-emcos-posix-rtos-ros-middleware-multicore-arm-cortex-and-risc-v-embedded-full-stack/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 13:46:06 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139040</guid>

					<description><![CDATA[eSOL positions itself as a full-stack embedded software partner rather than a vendor selling only one RTOS layer. The core message in this interview is integration: a production-ready platform that combines the eMCOS real-time operating system, a POSIX-compliant profile, middleware for networking and robotics-oriented workflows, plus engineering services that extend from bring-up to certification. That [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>eSOL positions itself as a full-stack embedded software partner rather than a vendor selling only one RTOS layer. The core message in this interview is integration: a production-ready platform that combines the eMCOS real-time operating system, a POSIX-compliant profile, middleware for networking and robotics-oriented workflows, plus engineering services that extend from bring-up to certification. That matters for teams trying to reduce supplier fragmentation and keep one accountable path from hardware integration to deployed code. https://www.esol.com/</p>
<p>A key theme is the gap between prototype software and certifiable production systems. The demo points to ROS and model-based toolchains as part of the ecosystem, but the argument from eSOL is that open robotics frameworks alone are not always enough once determinism, safety, and real-time behavior become mandatory. In that context, eMCOS POSIX is presented as a way to preserve familiar POSIX development models while moving toward tighter scheduling control, certification targets, and system-level integration for embedded products.</p>
<p>What makes the platform interesting technically is scalability across compute classes. In the demo, the same runtime approach spans ARM Cortex-M, ARM Cortex-R, ARM Cortex-A and also RISC-V, reflecting eSOL’s long-standing focus on multi-core and many-core embedded architectures. That gives the interview a broader angle than a simple RTOS pitch: it is really about one software foundation that can move from small microcontrollers to larger heterogeneous SoCs without forcing a complete tooling reset or a redesign of the application stack at every step.</p>
<p>Recent eSOL direction adds useful context to what is shown here. The company has been expanding its Full Stack Engineering model in Europe, and its eMCOS POSIX profile gained ISO 26262 ASIL D compliance in 2025, which reinforces the interview’s emphasis on automotive-grade real-time software. eSOL has also been showing eMCOS in software-defined vehicle workflows, including virtual-platform work around Renesas R-Car, so the message here fits a wider industry push toward software-first development, safety partitioning, and faster validation at scale.</p>
<p>Overall, this is less about Linux replacement rhetoric and more about where a deterministic POSIX RTOS fits when embedded teams need predictable latency, certification support, multicore scaling, and one engineering interface across the stack. The interview was filmed at Embedded World 2026 in Nuremberg, and it frames eSOL as a company targeting automotive, robotics, industrial and medical designs where middleware compatibility, long-term support, and integration ownership are often worth as much as raw kernel features in practice.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=iEaaI6PVweQ">https://www.youtube.com/watch?v=iEaaI6PVweQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/esol-emcos-posix-rtos-ros-middleware-multicore-arm-cortex-and-risc-v-embedded-full-stack/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139040</post-id>	</item>
		<item>
		<title>ATGBICS at Embedded World 2026: Compatible Transceivers, Legacy Optics, 800G QSFP, DAC and AOC</title>
		<link>https://armdevices.net/2026/03/16/atgbics-at-embedded-world-2026-compatible-transceivers-legacy-optics-800g-qsfp-dac-and-aoc/</link>
					<comments>https://armdevices.net/2026/03/16/atgbics-at-embedded-world-2026-compatible-transceivers-legacy-optics-800g-qsfp-dac-and-aoc/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 13:01:53 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139038</guid>

					<description><![CDATA[ATGBICS is positioning itself as a practical supplier for industrial network connectivity rather than just another optics reseller. The main story here is compatibility at scale: transceivers for more than 300 vendor ecosystems, support for legacy and current modules, and a business model built around keeping networks running when original OEM parts have gone end-of-life. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>ATGBICS is positioning itself as a practical supplier for industrial network connectivity rather than just another optics reseller. The main story here is compatibility at scale: transceivers for more than 300 vendor ecosystems, support for legacy and current modules, and a business model built around keeping networks running when original OEM parts have gone end-of-life. That matters in embedded and industrial systems where redesigning around a discontinued optical part can be far more expensive than the module itself. https://atgbics.com/</p>
<p>A big part of the discussion is obsolescence management. ATGBICS describes a process where the bill of materials is locked, prototype samples can be validated against a customer’s hardware, and repeat orders can be built with the same chipset, laser, and configuration that was previously qualified. For industrial Ethernet, long-lived automation platforms, transport systems, and ruggedized infrastructure, that kind of traceability can be more important than chasing the newest data rate.</p>
<p>The interview also makes clear that this is not only about old through-hole optics from the 1990s. The portfolio shown moves from 1&#215;9 and 2&#215;5 legacy transceivers to 1G and 10G workhorse SFP-class modules, then all the way up to high-bandwidth QSFP and direct attach cable options used in data center and AI networking. The interesting angle is that the same company is covering both ends of the market: replacement parts for installed industrial gear and compatible modules for newer high-density switching environments.</p>
<p>What gives the video some depth is the manufacturing and customization side. ATGBICS talks about working with factory partners in Taiwan and China, offering certificates of conformity, custom firmware, private labeling, and barcode-level branding for OEMs building their own switch, router, or PoE product lines. Filmed at Embedded World 2026 in Nuremberg, the interview shows how optical connectivity is increasingly tied to supply-chain resilience, second-source qualification, and lifecycle planning, not just raw bandwidth.</p>
<p>The result is a useful look at a part of embedded infrastructure that usually stays in the background. Instead of focusing on headline silicon, this conversation is about pluggable optics, DACs, AOCs, OEM-compatible coding, industrial temperature requirements, and the economics of keeping deployed systems alive for years longer than the original vendor may support. That makes the video relevant for engineers, sourcing teams, EMS partners, and network equipment makers dealing with both legacy maintenance and forward migration.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=h5iWToDbxh4">https://www.youtube.com/watch?v=h5iWToDbxh4</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/atgbics-at-embedded-world-2026-compatible-transceivers-legacy-optics-800g-qsfp-dac-and-aoc/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139038</post-id>	</item>
		<item>
		<title>Weebit Nano ReRAM for Edge AI, Embedded NVM, Near-Memory Compute and SoC Integration</title>
		<link>https://armdevices.net/2026/03/16/weebit-nano-reram-for-edge-ai-embedded-nvm-near-memory-compute-and-soc-integration/</link>
					<comments>https://armdevices.net/2026/03/16/weebit-nano-reram-for-edge-ai-embedded-nvm-near-memory-compute-and-soc-integration/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 11:36:08 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139036</guid>

					<description><![CDATA[Weebit Nano is positioning ReRAM as an embedded non-volatile memory alternative to flash for SoCs that need faster writes, lower power, better endurance, and easier scaling below 28 nm. In this interview, CEO Coby Hanoch explains why the company focuses on embedded NVM rather than bulk storage: the target is firmware, security keys, calibration data, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Weebit Nano is positioning ReRAM as an embedded non-volatile memory alternative to flash for SoCs that need faster writes, lower power, better endurance, and easier scaling below 28 nm. In this interview, CEO Coby Hanoch explains why the company focuses on embedded NVM rather than bulk storage: the target is firmware, security keys, calibration data, AI coefficients, and instant-on system behavior integrated directly on the same die as compute and control logic. https://www.weebit-nano.com/</p>
<p>The key technical point is that Weebit’s ReRAM is a back-end-of-line technology, built between metal layers rather than in the silicon substrate. That matters for mixed-signal and analog-heavy designs, because it avoids many of the layout and process compromises associated with embedded flash. Hanoch describes the cell in simple terms: voltage moves ions to form or break a conductive path, switching between low and high resistance states that represent stored data.</p>
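<p>The set and reset behavior Hanoch describes can be sketched as a toy state model. This is purely illustrative, not Weebit's cell physics or any real device interface, and the resistance numbers are invented:</p>

```python
# Toy model of a ReRAM bit cell: a SET pulse forms a conductive
# filament (low-resistance state, LRS), a RESET pulse ruptures it
# (high-resistance state, HRS). A read compares resistance against
# a threshold. All values are illustrative, not device parameters.

LRS_OHMS = 10_000        # low-resistance state  -> logical 1
HRS_OHMS = 1_000_000     # high-resistance state -> logical 0
READ_THRESHOLD = 100_000

class ReRAMCell:
    def __init__(self):
        self.resistance = HRS_OHMS  # model cells as starting erased (HRS)

    def set_pulse(self):
        """Voltage moves ions to form a conductive filament (LRS)."""
        self.resistance = LRS_OHMS

    def reset_pulse(self):
        """Opposite polarity breaks the filament (HRS)."""
        self.resistance = HRS_OHMS

    def read(self) -> int:
        # Non-destructive read: low resistance reads back as 1.
        return 1 if self.resistance < READ_THRESHOLD else 0

cell = ReRAMCell()
cell.set_pulse()
bit_after_set = cell.read()    # -> 1
cell.reset_pulse()
bit_after_reset = cell.read()  # -> 0
```

The point of the sketch is the non-volatility: the "data" is a physical resistance, so nothing needs refreshing when power is removed.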
<p>For edge AI, the pitch is especially clear. If model coefficients can live in embedded non-volatile memory on the AI chip, designers can avoid a separate external flash device, reduce board cost, shorten boot time, cut power draw, and remove a security exposure created when weights are copied at startup. That fits near-memory compute, and it also points toward in-memory compute, where analog-style ReRAM arrays may eventually support more efficient AI inference for gesture recognition, sensor workloads, and always-on edge devices.</p>
<p>The interview also shows why this matters beyond AI. Embedded ReRAM is relevant for power management ICs, MCUs, IoT nodes, automotive electronics, and aerospace-oriented designs that need retention without power, robust endurance, and tolerance for harsh conditions. Weebit highlights qualification work for automotive temperature ranges, radiation immunity as a useful characteristic, and the benefit of integrating memory without disturbing the optimal analog portion of a chip.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the discussion captures a memory company moving from R&#038;D into commercialization. Weebit already talks about customers such as onsemi and Texas Instruments, growing capacity targets in the embedded range, and a roadmap that connects embedded NVM with future AI architectures. The result is not “more storage” in the consumer sense, but a more integrated memory block for edge silicon where power, cost, area, boot latency, and security all matter at once.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=dn82VxEX4aI">https://www.youtube.com/watch?v=dn82VxEX4aI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/weebit-nano-reram-for-edge-ai-embedded-nvm-near-memory-compute-and-soc-integration/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139036</post-id>	</item>
		<item>
		<title>Tektronix IsoVu TIVP, TICP and 7 Series DPO for SiC, GaN and power integrity</title>
		<link>https://armdevices.net/2026/03/16/tektronix-isovu-tivp-ticp-and-7-series-dpo-for-sic-gan-and-power-integrity/</link>
					<comments>https://armdevices.net/2026/03/16/tektronix-isovu-tivp-ticp-and-7-series-dpo-for-sic-gan-and-power-integrity/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 07:36:16 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139034</guid>

					<description><![CDATA[Tektronix focuses here on one of the harder measurement problems in modern power electronics: capturing fast, high-voltage switching behavior without corrupting the waveform through probe loading, ground noise, or isolation limits. The interview centers on the second-generation IsoVu isolated voltage probe, where optical power delivery over glass fiber lets the probe head stay electrically isolated [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Tektronix focuses here on one of the harder measurement problems in modern power electronics: capturing fast, high-voltage switching behavior without corrupting the waveform through probe loading, ground noise, or isolation limits. The interview centers on the second-generation IsoVu isolated voltage probe, where optical power delivery over glass fiber lets the probe head stay electrically isolated while still measuring very small and very fast events. That matters for SiC and GaN power stages, where dv/dt, common-mode noise, and switching transients quickly expose the limits of conventional probing. https://www.tektronix.com/</p>
<p>A key point in the demo is flexibility at the probe tip. The discussion mentions interchangeable tips spanning low-voltage work up to kilovolt-class measurements, which fits the broader need to move between gate-drive, shunt, switch-node, and bus measurements without rebuilding the whole setup. Tektronix also highlights its isolated current probing, including an RF link architecture with no direct physical connection inside the probe path, aimed at very high common-mode rejection. In practice, this is the kind of tooling engineers need for double-pulse test setups, power integrity analysis, wide-bandgap converter design, and validation of fast-switching inverter stages.</p>
<p>What makes the video interesting is that it is less about headline specs and more about measurement credibility. The screen demo compares a reference voltage with current captured through the isolated current probe, showing how Tektronix is positioning these probes as part of a complete power integrity workflow rather than as standalone accessories. That fits a broader shift in lab instrumentation, where probe architecture, tip ecosystem, connection standards, and noise rejection are becoming just as important as oscilloscope bandwidth. The clip was filmed at Embedded World 2026 in Nuremberg, where this kind of test and measurement detail is especially relevant for embedded power, automotive, industrial control, and energy conversion teams.</p>
<p>The booth tour also briefly points to Tektronix’s wider high-speed instrumentation stack, including the 7 Series DPO at up to 25 GHz and 125 GS/s, plus the DPO70000SX platform, which Tektronix lists up to 70 GHz and 200 GS/s for very high-speed serial, PCIe, memory, and signal-integrity work. So the story here is really two layers of debug: precision isolated probing for power devices such as SiC and GaN MOSFETs, and high-bandwidth scope platforms for the digital and interconnect side of the same system.</p>
<p>source <a href="https://www.youtube.com/watch?v=kev976LKlLg">https://www.youtube.com/watch?v=kev976LKlLg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/tektronix-isovu-tivp-ticp-and-7-series-dpo-for-sic-gan-and-power-integrity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139034</post-id>	</item>
		<item>
		<title>RED Semiconductor VISC edge AI matrix math IP, RISC-V coprocessor for vision, crypto</title>
		<link>https://armdevices.net/2026/03/16/red-semiconductor-visc-edge-ai-matrix-math-ip-risc-v-coprocessor-for-vision-crypto/</link>
					<comments>https://armdevices.net/2026/03/16/red-semiconductor-visc-edge-ai-matrix-math-ip-risc-v-coprocessor-for-vision-crypto/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 01:01:57 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139032</guid>

					<description><![CDATA[RED Semiconductor describes an edge AI approach built around matrix math rather than a conventional CPU-first design. The pitch here is a licensable processor IP block that combines a small RISC-V front end with a dedicated math engine, aiming to reduce data movement, power draw, and latency for workloads that need fast local inference rather [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>RED Semiconductor describes an edge AI approach built around matrix math rather than a conventional CPU-first design. The pitch here is a licensable processor IP block that combines a small RISC-V front end with a dedicated math engine, aiming to reduce data movement, power draw, and latency for workloads that need fast local inference rather than cloud-scale throughput. That makes the discussion relevant for embedded vision, cryptography, sensor processing, and tightly bounded real-time edge AI work https://redsemiconductor.com/</p>
<p>The architecture, called VISC, is presented as a coprocessor rather than a full standalone compute platform. In practical terms, RED is targeting the part of an SoC where matrix multiply, matrix-vector operations, and other repetitive mathematical kernels dominate execution time. The company’s message is that GPUs bring graphics-era overhead, while a conventional NPU may still be too large or too fixed for some deeply embedded deployments, so VISC is meant to sit closer to the math-heavy bottleneck at lower silicon cost.</p>
<p>A key part of the story is software compatibility. RED uses RISC-V as the entry point into toolchains and developer workflows, but the engine itself is not tied only to RISC-V systems and can be integrated alongside Arm or other heterogeneous processor mixes. The company also stresses firmware-level customization, so an OEM can tune the accelerator for a specific vision model, cryptographic routine, or algorithmic pipeline instead of treating AI acceleration as a generic black-box block in the stack.</p>
<p>What stands out in the interview is the emphasis on edge-specific constraints: low power, low memory traffic, fast startup, and deterministic response. RED talks less about large language models and more about vision inference, medical imaging style search, secure compute, and sensor-driven applications where milliseconds, energy budget, and local autonomy matter more than raw datacenter-class scale. That focus fits the broader Embedded World conversation around RISC-V, edge inference, and domain-specific acceleration in Nuremberg during 2026.</p>
<p>The company positions the IP as tileable, licensable, and suitable for inclusion in a broader SoC that may already contain CPUs, vector processors, or other accelerators. RED has also been framing VISC publicly around edge AI, cryptography, and secure processing, with recent company updates pointing to an expanding RISC-V and edge AI roadmap. This video gives a useful look at how RED wants to differentiate: not by replacing every processor in a design, but by offloading the dense mathematical core that defines many embedded AI workloads.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=xYVgQoCru_4">https://www.youtube.com/watch?v=xYVgQoCru_4</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/16/red-semiconductor-visc-edge-ai-matrix-math-ip-risc-v-coprocessor-for-vision-crypto/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139032</post-id>	</item>
		<item>
		<title>Canonical Robotics: Ubuntu Core, ROS, real-time control and fleet observability</title>
		<link>https://armdevices.net/2026/03/15/canonical-robotics-ubuntu-core-ros-real-time-control-and-fleet-observability/</link>
					<comments>https://armdevices.net/2026/03/15/canonical-robotics-ubuntu-core-ros-real-time-control-and-fleet-observability/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 20:11:45 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139030</guid>

					<description><![CDATA[Canonical is positioning Ubuntu as infrastructure for robotics rather than just a general Linux distro. In this demo, the focus is on a real-time stack where Ubuntu’s real-time kernel drives a vision-guided pick-and-place flow: AI detects shapes on a moving conveyor, a 3D scene mirrors the process, and the arm adapts with a safety slowdown [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Canonical is positioning Ubuntu as infrastructure for robotics rather than just a general Linux distro. In this demo, the focus is on a real-time stack where Ubuntu’s real-time kernel drives a vision-guided pick-and-place flow: AI detects shapes on a moving conveyor, a 3D scene mirrors the process, and the arm adapts with a safety slowdown when a hand enters the zone. It is a useful example of how deterministic control, perception, and simulation can be tied together in one deployment without turning the OS itself into a separate engineering project. https://canonical.com/</p>
<p>A second thread is the Bosch Rexroth integration around ctrlX AUTOMATION, which builds on Ubuntu Core. That matters because Ubuntu Core brings an immutable design, transactional over-the-air updates, rollback, and snap-based packaging with strict confinement. For industrial robotics and machine control, that combination is increasingly relevant: vendors want modular application delivery, cleaner lifecycle management, and a clearer path to compliance and long-term maintenance instead of carrying a custom Linux platform on their own.</p>
<p>The most forward-looking part of the interview is Canonical’s push toward fleet observability and deployable AI components. The planned open-source platform connects device fleets to dashboards and telemetry pipelines using Grafana, Loki, Prometheus, Juju, and charms, which fits the reality of robotics deployments where logs, metrics, and remote supervision matter as much as the robot demo itself. Canonical also points to inference snaps, making it easier to package and run models such as Gemma 3 or NeMoTron on local compute for edge AI and physical AI workflows.</p>
<p>What comes through clearly is that Canonical wants to reduce the hidden platform burden in robotics: patching, OTA infrastructure, application distribution, security hardening, ROS integration, and operations across a fleet. That is especially relevant as robotics companies move from prototype to product and face stricter requirements around uptime, software supply chain control, and regulations such as the Cyber Resilience Act. The pitch is not that Ubuntu builds the robot for you, but that it removes a large amount of undifferentiated platform work so teams can focus on the actual use case and ROI.</p>
<p>The discussion also touches on where the sector is heading. Humanoids are acknowledged as promising but still short of the broad, versatile efficiency often implied by the hype, while simpler mobile manipulation systems appear closer to practical value today. Filmed at Embedded World 2026 in Nuremberg, this interview is really about the software foundation under modern robotics: real-time Linux, ROS, immutable edge systems, secure app delivery, observability, and local AI inference coming together as a production stack rather than a lab demo.</p>
<p>source <a href="https://www.youtube.com/watch?v=aeVh5Z3tQcQ">https://www.youtube.com/watch?v=aeVh5Z3tQcQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/canonical-robotics-ubuntu-core-ros-real-time-control-and-fleet-observability/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139030</post-id>	</item>
		<item>
		<title>Golioth is acquired by Canonical: Secure Bluetooth OTA, LakeDB and Indirect IoT Device Management</title>
		<link>https://armdevices.net/2026/03/15/golioth-is-acquired-by-canonical-secure-bluetooth-ota-lakedb-and-indirect-iot-device-management/</link>
					<comments>https://armdevices.net/2026/03/15/golioth-is-acquired-by-canonical-secure-bluetooth-ota-lakedb-and-indirect-iot-device-management/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 18:01:48 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139028</guid>

					<description><![CDATA[Golioth’s latest demo shows how a non-IP Bluetooth endpoint can be managed through a Bluetooth-to-cellular gateway while staying end-to-end encrypted all the way to the cloud. The gateway forwards traffic, but it cannot inspect payloads or own the security domain, which is a strong fit for industrial sensing, remote peripherals, and indirectly connected devices that [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Golioth’s latest demo shows how a non-IP Bluetooth endpoint can be managed through a Bluetooth-to-cellular gateway while staying end-to-end encrypted all the way to the cloud. The gateway forwards traffic, but it cannot inspect payloads or own the security domain, which is a strong fit for industrial sensing, remote peripherals, and indirectly connected devices that still need fleet management, telemetry, and OTA workflows. The broader platform positions this around one control plane for connectivity, data routing, settings, and device lifecycle management. https://golioth.io/</p>
<p>What stands out in the interview is the combination of certificate-based onboarding, cloud-managed settings, streamed sensor data, and firmware rollout to Bluetooth devices that may roam across multiple gateways. In the demo, an accelerometer event is sent upstream, settings are pulled back down from the cloud, and the same path can be used for over-the-air updates. That maps well to real deployments where the endpoint is resource-constrained, intermittently connected, or dependent on another node for backhaul.</p>
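<p>The end-to-end property described above can be sketched with a deliberately simplified toy. This is NOT real cryptography and not Golioth's actual protocol (a real deployment would use an AEAD such as AES-GCM or ChaCha20-Poly1305); it only illustrates the trust boundary, where the endpoint seals a payload with a key shared only with the cloud, so the gateway can forward the blob but can neither read nor silently alter it:</p>

```python
import hashlib
import hmac
import os

# Toy key shared only by endpoint and cloud -- the gateway never
# holds it. Illustrative only; not a secure cipher construction.
SHARED_KEY = os.urandom(32)

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Counter-mode keystream derived from SHA-256 (toy only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, payload: bytes) -> bytes:
    """Encrypt-then-MAC: nonce || ciphertext || tag."""
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the tag in constant time, then decrypt."""
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("payload was tampered with in transit")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# Endpoint seals an accelerometer event; the gateway just forwards
# opaque bytes; only the cloud side can open them.
blob = seal(SHARED_KEY, b'{"accel_event": "tap"}')
forwarded = blob  # gateway: pass-through, no visibility into the payload
recovered = open_sealed(SHARED_KEY, forwarded)
```

Any modification by the forwarding node fails the tag check, which is the property that lets the gateway stay outside the security domain.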
<p>The Canonical angle makes the story more important than a single booth demo. Golioth announced on March 3, 2026 that it is now part of Canonical, which helps explain the focus on secure infrastructure, developer tooling, on-prem deployments, and data-sovereignty requirements alongside the managed cloud path. Filmed at Embedded World 2026 in Nuremberg, the discussion gives a practical look at how this stack could sit beside Ubuntu, open-source edge software, and enterprise IoT operations rather than acting as a narrow point product.</p>
<p>There is also a useful architectural point here: Golioth is not limited to Bluetooth. The interview frames Bluetooth as the first implementation of an indirectly connected device model, with the same management pattern extending to CAN, serial, Linux-class hardware, MCU targets, and potentially mesh-capable transports such as OpenThread. That makes the value less about a single radio and more about abstracting the transport layer while keeping a consistent API surface for updates, settings, observability, and device orchestration.</p>
<p>For teams building connected products, this is really a video about secure fleet operations at scale: using CI/CD to publish firmware, targeting subsets of deployed devices through management APIs, validating rollout status, and relying on mechanisms such as MCUboot for image integrity and rollback safety. The result is a clearer picture of how Bluetooth and other non-IP devices can be brought into a modern cloud workflow without giving up security boundaries or developer ergonomics.</p>
<p>source <a href="https://www.youtube.com/watch?v=JNguONmVpco">https://www.youtube.com/watch?v=JNguONmVpco</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/golioth-is-acquired-by-canonical-secure-bluetooth-ota-lakedb-and-indirect-iot-device-management/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139028</post-id>	</item>
		<item>
		<title>Mobilint ARIES and REGULUS edge AI, MLA400 LLM inference and multi-camera vision</title>
		<link>https://armdevices.net/2026/03/15/mobilint-aries-and-regulus-edge-ai-mla400-llm-inference-and-multi-camera-vision/</link>
					<comments>https://armdevices.net/2026/03/15/mobilint-aries-and-regulus-edge-ai-mla400-llm-inference-and-multi-camera-vision/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 10:31:54 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139026</guid>

					<description><![CDATA[Mobilint frames its edge AI story around efficiency rather than headline TOPS alone. In this booth conversation, the focus is on local inference, cost per watt, and practical deployment formats: USB devices, standalone edge boxes, low-profile PCIe cards, MXM modules, and SoC-class hardware for embedded designs. That fits Mobilint’s broader product stack around the ARIES [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Mobilint frames its edge AI story around efficiency rather than headline TOPS alone. In this booth conversation, the focus is on local inference, cost per watt, and practical deployment formats: USB devices, standalone edge boxes, low-profile PCIe cards, MXM modules, and SoC-class hardware for embedded designs. That fits Mobilint’s broader product stack around the ARIES NPU family, the REGULUS low-power SoC line, and the SDK qb software flow for model conversion and deployment. https://www.mobilint.com/</p>
<p>The demo is really about what edge AI looks like when it is treated as an appliance instead of a cloud extension. Mobilint shows multi-stream computer vision running fully offline, with real-time inference on several video feeds and no dependency on a datacenter link. That makes the pitch relevant for AI security, industrial monitoring, smart city analytics, and other latency-sensitive workloads where privacy, bandwidth, and predictable operating cost matter at the edge.</p>
<p>A big part of the discussion is about scaling from vision to LLM workloads. The speaker describes an MLA400-class configuration built from four accelerators, aimed at running multiple small language models concurrently and pushing into the roughly 35 to 36 billion parameter range with quantization. That lines up with Mobilint’s current direction: the MLA100 card is positioned around 80 TOPS with 16 GB LPDDR4X and 25 W TDP, while the upcoming MLA400 is presented as a quad-ARIES architecture for higher-throughput workstation and on-prem inference. In that context, the video is less about raw benchmark theater and more about usable local AI for mixed vision and language workloads.</p>
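<p>The parameter-count claim can be sanity-checked with back-of-envelope memory math. The sketch below is ours, not Mobilint's: it assumes weights dominate the footprint and adds a rough 20% overhead for KV cache and activations.</p>

```python
# Back-of-envelope LLM memory footprint vs. aggregate accelerator memory.
# Assumptions (ours): weights dominate, ~20% overhead for KV cache/activations.
def model_footprint_gb(params_billion, bits_per_weight, overhead=1.2):
    # bytes per weight = bits / 8; result in GB for params given in billions
    return params_billion * bits_per_weight / 8 * overhead

quad_card_gb = 4 * 16  # four accelerators at 16 GB each -> 64 GB aggregate

fp16_gb = model_footprint_gb(36, 16)  # ~86.4 GB: does not fit at FP16
int8_gb = model_footprint_gb(36, 8)   # ~43.2 GB: fits once quantized to Int8
```

<p>Under these assumptions a 36B-parameter model only fits the 64 GB aggregate after Int8 quantization, which is why the quantization tooling, not peak TOPS, carries the claim.</p>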
<p>What makes the booth interesting is the software angle behind the hardware. Mobilint keeps coming back to quantization, compiler tooling, runtime integration, and model adaptation, because edge NPUs live or die by how well they map real models rather than synthetic demos. Its SDK qb is built around framework support for PyTorch, TensorFlow, TFLite and ONNX, with optimization and Int8-oriented deployment aimed at preserving model accuracy while fitting tighter memory and power budgets. That is the practical layer that turns AI silicon into deployable embedded compute.</p>
<p>There is also a broader roadmap underneath the interview. Mobilint has recently been talking about both the ARIES and REGULUS NPU families, with REGULUS targeting compact on-device AI at about 10 TOPS under 3 W and support for 4K video pipelines, while products such as MLX-A1 package the accelerator into a more complete edge box. Seen from Embedded World 2026 in Nuremberg, the message is clear: Mobilint wants to compete where offline inference, multi-camera analytics, quantized LLMs, and power-aware embedded deployment matter more than a brute-force datacenter silicon roadmap.</p>
<p>source <a href="https://www.youtube.com/watch?v=ylvPT1Mlv_g">https://www.youtube.com/watch?v=ylvPT1Mlv_g</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/mobilint-aries-and-regulus-edge-ai-mla400-llm-inference-and-multi-camera-vision/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139026</post-id>	</item>
		<item>
		<title>Toradex Leno, OSM, Verdin i.MX95 and Aquila AM69 edge AI modules</title>
		<link>https://armdevices.net/2026/03/15/toradex-leno-osm-verdin-i-mx95-and-aquila-am69-edge-ai-modules/</link>
					<comments>https://armdevices.net/2026/03/15/toradex-leno-osm-verdin-i-mx95-and-aquila-am69-edge-ai-modules/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 10:31:47 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139024</guid>

					<description><![CDATA[Toradex is positioning its 2026 lineup around a wider spread of system-on-modules, from very small Lenos and OSM designs up to higher-performance Aquila and Verdin families. The key message here is scalability: compact modules for cost-sensitive, high-volume products, and larger pin-compatible platforms for projects that need more I/O, compute, graphics, networking or edge AI. That [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Toradex is positioning its 2026 lineup around a wider spread of system-on-modules, from very small Lenos and OSM designs up to higher-performance Aquila and Verdin families. The key message here is scalability: compact modules for cost-sensitive, high-volume products, and larger pin-compatible platforms for projects that need more I/O, compute, graphics, networking or edge AI. That makes the portfolio relevant for gateways, HMIs, robotics and machine-vision devices, while Toradex keeps leaning on software, documentation and long product life as part of the pitch. https://www.toradex.com/</p>
<p>A big part of the story is the move toward smaller solderable form factors. The 30&#215;30 mm Leno and OSM modules shown here are aimed at designs where pick-and-place assembly, vibration resistance and BOM control matter as much as raw performance. In practice, that means customers can start with a compact module for volume production, while still staying close to the Toradex ecosystem instead of rebuilding everything around a custom board too early.</p>
<p>Further up the stack, Toradex is expanding around NXP’s i.MX 95 and TI’s AM69/TDA4 class of processors. That opens the door to more demanding embedded Linux workloads such as multi-camera vision, industrial control, visual inspection, people counting, robotics and autonomous mobile platforms. In that part of the range, the attraction is not just CPU performance but also integrated NPU, ISP, TSN-capable Ethernet, CAN FD, display pipelines and the kind of mixed real-time plus application processing that industrial OEMs increasingly want at the edge.</p>
<p>The demo also points to how Toradex wants customers to move from module to full platform. Carrier boards such as Clover for Aquila target dense vision and robotics use cases, while industrial gateway products extend the company further into ready-to-deploy edge infrastructure rather than only selling compute modules. That is where the value proposition becomes more complete: SOM, carrier board, BSP, Linux distribution, OTA updates, container workflow and cloud fleet management all tied together in one development path.</p>
<p>What makes the pitch credible is that it is less about a single chip and more about a migration strategy across form factors and price points. The video was filmed at Embedded World 2026 in Nuremberg, and the theme throughout is clear: tiny modules that can still expose Ethernet, display and CAN, midrange platforms built around i.MX 95, and higher-end edge AI with Aquila AM69, all anchored by Torizon OS and Toradex support. The result is a portfolio aimed at companies that need to prototype quickly, then scale without having to rework their software foundations.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=gvqJLv8yPLM">https://www.youtube.com/watch?v=gvqJLv8yPLM</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/toradex-leno-osm-verdin-i-mx95-and-aquila-am69-edge-ai-modules/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139024</post-id>	</item>
		<item>
		<title>Blumind AMPL Analog AI at 60 Microwatts for Always-On Audio, Edge Wearables and Vision</title>
		<link>https://armdevices.net/2026/03/15/blumind-ampl-analog-ai-at-60-microwatts-for-always-on-audio-edge-wearables-and-vision/</link>
					<comments>https://armdevices.net/2026/03/15/blumind-ampl-analog-ai-at-60-microwatts-for-always-on-audio-edge-wearables-and-vision/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 08:36:30 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139022</guid>

					<description><![CDATA[Blumind is positioning analog AI as a far-edge compute architecture rather than another digital accelerator story. In this interview, the company outlines how its AMPL platform and BM110 direction target always-on audio inference with extremely low system power, low latency and a direct analog signal path that avoids the usual ADC, DAC and high-speed clock [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Blumind is positioning analog AI as a far-edge compute architecture rather than another digital accelerator story. In this interview, the company outlines how its AMPL platform and BM110 direction target always-on audio inference with extremely low system power, low latency and a direct analog signal path that avoids the usual ADC, DAC and high-speed clock overhead of conventional embedded AI. That makes the pitch especially relevant for wearables, smart glasses, earbuds, remotes and other battery-limited devices where keyword spotting has to stay active all day without burning through the cell. https://blumind.ai/</p>
<p>The key technical claim here is not raw TOPS but energy per inference. Blumind describes a total always-on audio solution around 50 to 60 microwatts, with the chip itself at roughly 20 microamps at 1.8 volts and an analog microphone adding about 20 microamps at 1 volt. In practical terms, that shifts edge AI from “can it run” to “can it remain on continuously” for wake-word detection and other audio-triggered interfaces, which is where always-listening products live or die.</p>
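<p>Those figures add up as a simple per-rail power budget. The sketch below is our arithmetic check, using the rail values quoted in the interview; the function name is ours.</p>

```python
# Always-on power budget from the figures quoted in the interview.
# P = I * V per rail; rail values come from the video, naming is ours.
def rail_power_w(current_a, voltage_v):
    return current_a * voltage_v

chip_w = rail_power_w(20e-6, 1.8)   # NPU chip: ~20 uA at 1.8 V -> 36 uW
mic_w = rail_power_w(20e-6, 1.0)    # analog mic: ~20 uA at 1 V -> 20 uW
total_uw = (chip_w + mic_w) * 1e6   # ~56 uW, inside the quoted 50-60 uW band
```

<p>At 56 µW total, a small coin cell in the 200 mAh class could in principle sustain the always-on path for months, which is the continuous-listening argument the company is making.</p>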
<p>What makes the approach interesting is that the neural network is implemented as dedicated analog hardware rather than as software running on an MCU, CPU, RISC-V or Arm core. The company frames this as a fall-through analog compute network optimized for robustness across process, voltage and temperature variation, while keeping latency low and silicon efficiency high. For embedded engineers, that means a very different design trade-off from standard DSP-plus-microcontroller voice pipelines, especially when standby budget is more important than programmability.</p>
<p>The roadmap goes beyond keyword spotting. Blumind says the same analog architecture can scale from RNN-style audio and time-series workloads toward CNN-based vision tasks and eventually smaller attention or transformer-class models running locally on edge devices. That lines up with the company’s broader messaging around all-analog neural processing in standard CMOS and its push to make the technology available not only as its own ASSP silicon but also as licensable IP for future SoCs and microcontrollers. Filmed at Embedded World 2026 in Nuremberg, this is really a look at how analog inference could carve out a specific role inside next-generation edge AI stacks.</p>
<p>source <a href="https://www.youtube.com/watch?v=JWvze2MhVsc">https://www.youtube.com/watch?v=JWvze2MhVsc</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/blumind-ampl-analog-ai-at-60-microwatts-for-always-on-audio-edge-wearables-and-vision/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139022</post-id>	</item>
		<item>
		<title>Edge AI Foundation Global Edge AI Community, San Diego 2026, 60+ Partners</title>
		<link>https://armdevices.net/2026/03/15/edge-ai-foundation-global-edge-ai-community-san-diego-2026-60-partners/</link>
					<comments>https://armdevices.net/2026/03/15/edge-ai-foundation-global-edge-ai-community-san-diego-2026-60-partners/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 06:21:16 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139020</guid>

					<description><![CDATA[Edge AI Foundation is presented here less as a single company than as a coordination layer for the wider edge AI ecosystem: silicon vendors, module makers, toolchains, embedded OEMs, startups, researchers, and system builders working around on-device inference, AIoT, computer vision, sensor fusion, and low-latency AI deployment. The interview frames the foundation as a place [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Edge AI Foundation is presented here less as a single company than as a coordination layer for the wider edge AI ecosystem: silicon vendors, module makers, toolchains, embedded OEMs, startups, researchers, and system builders working around on-device inference, AIoT, computer vision, sensor fusion, and low-latency AI deployment. The interview frames the foundation as a place where competitors still collaborate, which is a useful way to understand today’s market: edge AI is moving too fast for isolated roadmaps, so shared events, workshops, and cross-vendor discussion have become part of the engineering stack. https://www.edgeaifoundation.org/</p>
<p>What stands out is the mix of audiences and technologies. This is not only for executives or keynote speakers, but also for engineers, program managers, researchers, and developers dealing with real deployment issues such as model optimization, embedded Linux, MCU and MPU design choices, heterogeneous compute, NPU roadmaps, power efficiency, industrial vision, and the tradeoff between cloud AI and local inference. The point is not just to talk about AI in general, but to connect practical embedded workflows with current edge AI architectures.</p>
<p>The discussion also highlights how the foundation’s calendar reflects the speed of the sector. The upcoming San Diego event is described as a three-day meeting point with partner exhibition tables, workshops, keynote sessions, and a research track, which fits the broader shift toward tighter interaction between commercial edge AI platforms and academia. That matters because edge AI is now shaped as much by deployment constraints like thermals, bandwidth, privacy, deterministic response, and cost per watt as by raw model capability.</p>
<p>Another useful detail is the partner network itself. The transcript references a community spanning large established players and newer entrants, and that is increasingly where edge AI momentum is coming from: partnerships between silicon companies, board vendors, software ecosystems, and vertical solution providers. Filmed at Embedded World 2026 in Nuremberg, the interview captures that industry mood well, with the foundation positioning itself as a neutral meeting ground for the people building the next generation of embedded AI systems.</p>
<p>source <a href="https://www.youtube.com/watch?v=R7_x6TAypg0">https://www.youtube.com/watch?v=R7_x6TAypg0</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/edge-ai-foundation-global-edge-ai-community-san-diego-2026-60-partners/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139020</post-id>	</item>
		<item>
		<title>Geniatech Edge AI and ePaper at Embedded World 2026: i.MX95, RK3588, Kinara, Hailo</title>
		<link>https://armdevices.net/2026/03/15/geniatech-edge-ai-and-epaper-at-embedded-world-2026-i-mx95-rk3588-kinara-hailo/</link>
					<comments>https://armdevices.net/2026/03/15/geniatech-edge-ai-and-epaper-at-embedded-world-2026-i-mx95-rk3588-kinara-hailo/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 01:06:19 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139018</guid>

					<description><![CDATA[Geniatech presents a broad ARM-based embedded portfolio built around edge AI hardware, BSP-level software work, and customization services rather than a single demo board. The video focuses on how the company combines SoMs, SBCs, gateways, AI boxes and ePaper platforms with kernel, SDK and API support, so customers can move from evaluation to deployment without [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Geniatech presents a broad ARM-based embedded portfolio built around edge AI hardware, BSP-level software work, and customization services rather than a single demo board. The video focuses on how the company combines SoMs, SBCs, gateways, AI boxes and ePaper platforms with kernel, SDK and API support, so customers can move from evaluation to deployment without rebuilding the whole stack. The central theme is local inference on compact ARM systems, where Geniatech positions quantized and compressed LLM and VLM workloads as practical on-device workloads instead of cloud-only tasks. https://www.geniatech.com/</p>
<p>A key part of that story is heterogeneous edge AI acceleration. In the booth tour, Geniatech shows NXP and Rockchip based platforms paired with M.2 AI modules and explains the split between computer-vision accelerators and LLM-oriented parts. That maps well to the company’s current platform direction: i.MX95 systems with optional M.2 expansion, RK3588 designs, and accelerator options such as Kinara for transformer-style workloads or Hailo for CNN-heavy vision pipelines. The interesting angle here is not just raw TOPS, but memory footprint, quantization, driver porting, and how much of the model can realistically stay on the device.</p>
<p>The demo of a local multimodal assistant makes that concrete. A camera-equipped edge box estimates who is in front of it, feeds selected prompts into a locally deployed model, and returns results every few seconds without a cloud round trip. That matters for privacy, latency, and deterministic deployment in retail, kiosks, transport, and industrial settings. Geniatech’s role in this stack is mostly the infrastructure layer: stable ARM hardware, Linux BSP work, accelerator integration, conversion toolchains, NPU APIs, and support for customers training or adapting their own models.</p>
<p>The second half of the video shifts to ePaper, and this is where Geniatech looks unusually vertically integrated. Instead of treating ePaper as just a panel sourcing business, the company talks about its own TCON and software optimization, faster refresh behavior, and end-to-end system design for signage. The bus-stop example, multi-panel drive capability, indoor-light energy harvesting concepts, and wide-temperature operation point to transport and outdoor display use cases where low power draw matters as much as color or refresh performance.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the booth tour shows Geniatech as a company trying to connect two markets that are starting to overlap: edge AI compute and ultra-low-power visual interfaces. On one side, there is ARM edge hardware with i.MX95, RK3588, AI modules, local LLM support and carrier-board customization. On the other, there are Spectra 6 style color ePaper and alternative reflective display approaches for signage, pricing, and information systems. Put together, it is a practical embedded roadmap for devices that need local intelligence, low power, industrial design flexibility, and long lifecycle support.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=xwuZf8M2k_E">https://www.youtube.com/watch?v=xwuZf8M2k_E</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/15/geniatech-edge-ai-and-epaper-at-embedded-world-2026-i-mx95-rk3588-kinara-hailo/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139018</post-id>	</item>
		<item>
		<title>Microchip Booth Tour at Embedded World 2026: Edge AI, 10BASE-T1S, RISC-V, ADAS, Security</title>
		<link>https://armdevices.net/2026/03/14/microchip-booth-tour-at-embedded-world-2026-edge-ai-10base-t1s-risc-v-adas-security/</link>
					<comments>https://armdevices.net/2026/03/14/microchip-booth-tour-at-embedded-world-2026-edge-ai-10base-t1s-risc-v-adas-security/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 19:56:20 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139016</guid>

					<description><![CDATA[Microchip’s booth tour is less about a single flagship chip and more about how the company is stitching together the embedded stack: edge AI, industrial networking, automotive camera links, HMI, security and power electronics. The demos show Microchip positioning itself as a broad platform vendor, not just a microcontroller supplier, with current emphasis on AIoT, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Microchip’s booth tour is less about a single flagship chip and more about how the company is stitching together the embedded stack: edge AI, industrial networking, automotive camera links, HMI, security and power electronics. The demos show Microchip positioning itself as a broad platform vendor, not just a microcontroller supplier, with current emphasis on AIoT, 10BASE-T1S, TSN, Zephyr, Linux, secure MCUs and MPUs, and reference designs that shorten evaluation cycles for OEMs. https://www.microchip.com/en-us/about/events-info/embedded-world</p>
<p>The access-control and cockpit demos reflect two themes that now run through a lot of embedded design: local inference and human-machine interaction. Facial recognition with liveness detection, round-display touch interfaces, and color-sorting machine vision are shown here not as isolated gimmicks but as edge workloads that need low latency, deterministic control and a practical HMI layer. That also fits with Microchip’s current demo lineup around graphics, touch, camera systems and AI at the edge.</p>
<p>A stronger technical thread in the video is networking. The shop-floor setup points to Single Pair Ethernet, especially 10BASE-T1S, as a path away from older fieldbus designs toward IP-based industrial systems with simpler wiring, real-time behavior and easier IT/OT integration. Microchip is explicitly framing this around industrial Ethernet migration, TSN-capable architectures, open-source software stacks and modular evaluation hardware built around boards that can be quickly reconfigured for demos or first customer trials.</p>
<p>Security is treated here as infrastructure rather than a feature checkbox. The tour touches secure boot, secure firmware update, key provisioning, post-quantum cryptography and Cyber Resilience Act readiness, including Microchip’s security portfolio and its work with Kudelski IoT keySTREAM for device provisioning and update workflows. In practice, that makes the video relevant to anyone designing industrial or edge products that now need lifecycle security, not just network connectivity and compute.</p>
<p>The automotive and high-performance pieces round out the picture: ASA-ML serializer/deserializer links for ADAS camera paths into Qualcomm Ride platforms, FPGA-based sensor fusion around AI accelerators, MICROSAR IO with Vector for compact ECUs, and a RISC-V story spanning PolarFire SoC FPGA and the newer PIC64 family. Taken together, the booth shows Microchip pushing toward distributed intelligence where control, networking, security and inference sit closer to the machine, a message delivered from the company’s stand at Embedded World 2026 in Nuremberg.</p>
<p>source <a href="https://www.youtube.com/watch?v=2bXmkl934mI">https://www.youtube.com/watch?v=2bXmkl934mI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/microchip-booth-tour-at-embedded-world-2026-edge-ai-10base-t1s-risc-v-adas-security/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139016</post-id>	</item>
		<item>
		<title>JetBrains Embedded Development with CLion, AI Agents, ESP32, ST, Zephyr, Local AI</title>
		<link>https://armdevices.net/2026/03/14/jetbrains-embedded-development-with-clion-ai-agents-esp32-st-zephyr-local-ai/</link>
					<comments>https://armdevices.net/2026/03/14/jetbrains-embedded-development-with-clion-ai-agents-esp32-st-zephyr-local-ai/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 18:11:47 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139014</guid>

					<description><![CDATA[JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>JetBrains is framing embedded development less as a board-specific workflow and more as a unified software engineering problem. In this conversation, the focus is CLion as the company’s embedded IDE for C, C++ and Rust, aimed at reducing the fragmentation that comes from switching between vendor SDKs, toolchains, debuggers and separate utilities. The key idea is a consistent developer experience across targets such as Espressif and STMicroelectronics, with support for frameworks like Zephyr and modern build flows around CMake, so firmware work can happen inside one environment instead of being spread across multiple disconnected tools. https://www.jetbrains.com/clion/embedded/</p>
<p>A big part of that story is AI, but in a practical embedded context rather than as a generic chatbot layer. JetBrains shows agent support directly inside the IDE, including Junie, external agents, MCP connectivity and bring-your-own-key workflows, with the emphasis on tool grounding and agent orchestration rather than just the raw model. That matters for firmware teams because the useful part is not only code generation, but being able to trigger project-aware actions such as rebuilds, refreshes, navigation and other IDE-native operations in a controlled way.</p>
<p>The interview also points to a broader shift in embedded engineering: local and on-premises AI is becoming relevant for teams that cannot send code or design data to public cloud services. JetBrains is clearly leaning into that requirement, showing local AI running on NVIDIA hardware and discussing private deployment models for LLM-backed development. For regulated sectors and larger product teams, that makes the IDE part of a secure internal toolchain rather than a thin client to an external service.</p>
<p>What makes the booth discussion interesting is that it connects classic embedded pain points with current software trends. CLion is presented as a bridge between microcontroller and SoC projects, vendor ecosystems, RTOS-oriented work and newer AI-assisted flows, while keeping the core promise around productivity, code intelligence and debugging. Filmed at Embedded World 2026 in Nuremberg, the video captures how JetBrains is positioning embedded work alongside mainstream software development instead of treating it as a separate niche.</p>
<p>The result is a view of embedded development where the IDE becomes the integration layer for toolchains, frameworks, AI agents and secure deployment options. Rather than chasing a single board demo, JetBrains is making the case that teams at automotive and industrial OEMs need a stable, extensible workspace that can handle Zephyr, ESP-IDF, STM32-class projects, CMake-based builds, Rust support and agentic coding in the same place. That makes this less about one feature and more about how firmware teams may want to structure their workflow over the next few years.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=cYpC1drBfqg">https://www.youtube.com/watch?v=cYpC1drBfqg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/jetbrains-embedded-development-with-clion-ai-agents-esp32-st-zephyr-local-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139014</post-id>	</item>
		<item>
		<title>TeleCANesis at Embedded World 2026: Hub for CAN, Modbus, I2C, Cloud, HMI and AI Data Routing</title>
		<link>https://armdevices.net/2026/03/14/telecanesis-at-embedded-world-2026-hub-for-can-modbus-i2c-cloud-hmi-and-ai-data-routing/</link>
					<comments>https://armdevices.net/2026/03/14/telecanesis-at-embedded-world-2026-hub-for-can-modbus-i2c-cloud-hmi-and-ai-data-routing/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 14:36:24 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139012</guid>

					<description><![CDATA[TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>TeleCANesis is tackling a familiar embedded problem: too many devices, buses and software stacks speaking incompatible dialects. The platform is positioned as thin middleware plus tooling for protocol mapping, message routing and automated code generation, so teams can connect CAN, Modbus, I2C, SPI, RS485, Ethernet and higher-level interfaces without rewriting glue code every time a signal layout changes. In practice, the value is less about “moving data” in the abstract and more about preserving engineering time for product logic, analytics and HMI work. https://telecanesis.com/</p>
<p>What stands out in this demo is the workflow refinement inside the web-based Hub. Codecs are becoming system-wide rather than tied to a single capsule, which makes reuse much cleaner across a blueprint. The new imports flow also looks more practical for DBC-driven design: engineers can ingest a file once, label it, selectively pull only the required messages into each capsule, and later re-import changed definitions instead of rebuilding the whole route map. That is a meaningful shift for teams dealing with evolving vehicle, battery or industrial bus definitions over time.</p>
<p>The use case described here is a good fit for battery systems, domain controllers and other heterogeneous embedded environments where one internal data model has to feed cloud services, databases, HMIs and mobile apps in different formats. Rather than expose every raw signal upstream, TeleCANesis lets developers normalize data internally and publish only the subset that matters to customers or backend services. Filmed at Embedded World 2026 in Nuremberg, the demo also hints at where the product is moving next, with broader plug-in support, updated ingestion in the coming 1.1 release, and recent additions such as CANopen and serial connector plug-ins.</p>
<p>There is also a practical deployment story behind it. The runtime is presented as largely platform-agnostic, with only a thin OS and compiler abstraction layer needing adaptation, which makes ports to new ARM or MCU targets much faster than a typical middleware stack. The company points to support around QNX, Raspberry Pi 4 and 5, Yocto Scarthgap, and integration paths toward HMI frameworks such as Qt, Slint, GL Studio and Unity. That combination makes the tool relevant not only for automotive-style gateways but also for industrial control, robotics and connected equipment.</p>
<p>The AI angle is still early, but the direction makes sense: use AI to inspect an existing project, identify protocols and messages, and pre-build the TeleCANesis blueprint so engineers start from a working draft instead of a blank canvas. For teams building software-defined machines, cloud-connected controllers or AI-assisted products, that could make TeleCANesis a useful bridge between fieldbus data, application logic and agent workflows. The core idea is straightforward: stop hand-coding translation layers every time the system grows, and treat connectivity as a configurable part of the architecture instead of a recurring rewrite.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=MvX0zdWJ0fY">https://www.youtube.com/watch?v=MvX0zdWJ0fY</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/telecanesis-at-embedded-world-2026-hub-for-can-modbus-i2c-cloud-hmi-and-ai-data-routing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139012</post-id>	</item>
		<item>
		<title>Makat AI Electronics Procurement, BoM Analysis, Real-Time Pricing, Component Sourcing</title>
		<link>https://armdevices.net/2026/03/14/makat-ai-electronics-procurement-bom-analysis-real-time-pricing-component-sourcing/</link>
					<comments>https://armdevices.net/2026/03/14/makat-ai-electronics-procurement-bom-analysis-real-time-pricing-component-sourcing/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 13:16:46 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139010</guid>

					<description><![CDATA[Makat is pitching a more data-driven version of open-market component buying: instead of opaque broker calls and manual quote chasing, the platform is built around real-time pricing, availability checks, supplier scoring, and transaction workflows that let a buyer move from BoM analysis to PO placement inside one digital flow. The company frames this as AI-powered [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Makat is pitching a more data-driven version of open-market component buying: instead of opaque broker calls and manual quote chasing, the platform is built around real-time pricing, availability checks, supplier scoring, and transaction workflows that let a buyer move from BoM analysis to PO placement inside one digital flow. The company frames this as AI-powered independent distribution for OEMs and CMs, with emphasis on shortage management, cost reduction, excess inventory handling, and transparent markup rather than black-box brokering. https://www.makat.ai/</p>
<p>What stands out in this interview is the attempt to turn tactical procurement into something more strategic. The demo revolves around board-level electronics sourcing, where Makat says it can highlight risk, identify alternate distributors, benchmark pricing across multiple supply channels, and show where a customer may be overpaying or exposed to supply disruption. That matters in electronics manufacturing, where line stoppages, allocation pressure, NCNR exposure, and fragmented broker networks still make spot buys expensive and slow to execute.</p>
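<p>To make the benchmarking idea concrete, here is a toy quote-ranking sketch. The scoring weights, field names and numbers are all invented for illustration and say nothing about Makat's actual model; the point is only that price, lead time and supplier quality can be folded into one comparable score, with overpayment flagged against the cheapest quoted price.</p>

```python
def score_quote(quote, weights=(0.5, 0.3, 0.2)):
    """Illustrative score: lower price and lead time, higher supplier
    rating -> better (lower) score. Weights are arbitrary."""
    w_price, w_lead, w_rating = weights
    return (w_price * quote["unit_price"]
            + w_lead * quote["lead_time_weeks"]
            - w_rating * quote["supplier_rating"])

quotes = [
    {"distributor": "A", "unit_price": 1.42, "lead_time_weeks": 26, "supplier_rating": 4.1},
    {"distributor": "B", "unit_price": 1.95, "lead_time_weeks": 4,  "supplier_rating": 4.7},
    {"distributor": "C", "unit_price": 1.38, "lead_time_weeks": 30, "supplier_rating": 3.2},
]

ranked = sorted(quotes, key=score_quote)
# During a shortage, lead time dominates: the pricier fast quote wins.
best = ranked[0]["distributor"]

# Flag possible overpayment relative to the cheapest quoted unit price:
floor = min(q["unit_price"] for q in quotes)
overpay = {q["distributor"]: round(q["unit_price"] - floor, 2) for q in quotes}
```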
<p>The AI angle here is not presented as a generic chatbot layer, but as a sourcing and procurement engine: benchmarking supplier quotes, ranking vendors, analyzing stock positions, and automating parts of supplier communication and decision support. In practice, that places the platform somewhere between electronics distribution, supply-chain intelligence, and procurement workflow automation. The interesting claim is not only visibility, but transactability: Makat says it acts as vendor of record, taking ownership of sourcing, logistics, and delivery rather than only recommending where to buy.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the conversation shows how much the electronics supply chain is shifting toward digital procurement infrastructure. Makat’s message is that the future of component sourcing is less about informal broker relationships and more about comparison analytics, supplier data, workflow automation, and accountable execution. For manufacturers dealing with shortages, alternates, price volatility, and multi-distributor sourcing, that is a relevant change in how component purchasing gets done today.</p>
<p>All my Embedded World videos are in this playlist: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjgUpdNMBkGzEWU6YVxR8Ga</p>
<p>source <a href="https://www.youtube.com/watch?v=SncbMKIVCtA">https://www.youtube.com/watch?v=SncbMKIVCtA</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/makat-ai-electronics-procurement-bom-analysis-real-time-pricing-component-sourcing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139010</post-id>	</item>
		<item>
		<title>Edge Impulse Intelligent Factory at Embedded World 2026: Edge AI, YOLO-Pro, Digital Twin, Local LLM</title>
		<link>https://armdevices.net/2026/03/14/edge-impulse-intelligent-factory-at-embedded-world-2026-edge-ai-yolo-pro-digital-twin-local-llm/</link>
					<comments>https://armdevices.net/2026/03/14/edge-impulse-intelligent-factory-at-embedded-world-2026-edge-ai-yolo-pro-digital-twin-local-llm/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 12:06:34 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139008</guid>

					<description><![CDATA[Edge Impulse frames this demo around a practical factory problem: too many data streams, too little time to turn them into action. The setup combines multi-line visual inspection, model inference, and operator-facing summaries into one edge pipeline, with object detection separating good parts from faulty ones and feeding decisions such as rework, scrap, or continued [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Edge Impulse frames this demo around a practical factory problem: too many data streams, too little time to turn them into action. The setup combines multi-line visual inspection, model inference, and operator-facing summaries into one edge pipeline, with object detection separating good parts from faulty ones and feeding decisions such as rework, scrap, or continued flow. The point is not AI as a cloud dashboard, but AI as a control layer sitting close to the machine. https://www.edgeimpulse.com/</p>
<p>What stands out is the way several workloads run side by side: four simulated production lines, defect detection, a digital-twin view of the floor, and a local language model interface for querying what is happening in real time. That makes the demo less about a single neural network and more about orchestration across computer vision, telemetry, and human-machine interaction, where latency and determinism matter more than headline model size.</p>
<p>The industrial case is clear. In manufacturing, stoppages are expensive, and even a small delay in inspection or triage can ripple through yield, throughput, and maintenance planning. Running inference on the edge helps keep response times predictable, keeps proprietary production data on premises, and avoids depending on a round trip to the cloud for every decision. That is especially relevant for defect detection, anomaly screening, and line monitoring where reliability has to be built into the stack.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the demo also shows how edge AI is moving beyond isolated vision nodes toward richer factory software. Edge Impulse positions its YOLO-Pro workflow around embedded industrial vision, while the local LLM layer points to a new operator model where staff can query live plant data in plain language instead of navigating separate dashboards. The result is a compact view of where industrial edge systems are headed: vision, digital twin, and natural-language analytics running together on site.</p>
<p>source <a href="https://www.youtube.com/watch?v=Aun0kQt-hH8">https://www.youtube.com/watch?v=Aun0kQt-hH8</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/edge-impulse-intelligent-factory-at-embedded-world-2026-edge-ai-yolo-pro-digital-twin-local-llm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139008</post-id>	</item>
		<item>
		<title>Grinn Edge AI SOMs with GenioSOM-360, AstraSOM-261x and ReneSOM-V2H at Embedded World</title>
		<link>https://armdevices.net/2026/03/14/grinn-edge-ai-soms-with-geniosom-360-astrasom-261x-and-renesom-v2h-at-embedded-world/</link>
					<comments>https://armdevices.net/2026/03/14/grinn-edge-ai-soms-with-geniosom-360-astrasom-261x-and-renesom-v2h-at-embedded-world/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 09:56:15 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139006</guid>

					<description><![CDATA[Grinn presents itself here less as a single-board vendor and more as a rapid productization partner for embedded AI. The core idea is consistent across the booth: take a complex SoC, turn it into a compact system-on-module, add the carrier design and software stack around it, and let customers focus on the actual device instead [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Grinn presents itself here less as a single-board vendor and more as a rapid productization partner for embedded AI. The core idea is consistent across the booth: take a complex SoC, turn it into a compact system-on-module, add the carrier design and software stack around it, and let customers focus on the actual device instead of rebuilding the low-level platform from zero. That comes through in the PCB inspection robot, the camera modules, and the industrial carrier boards shown in the demo. https://grinn-global.com/</p>
<p>The strongest thread in the video is practical edge vision. One demo uses robot vision and onboard AI to monitor PCB production, while another shows real-time hand-gesture tracking aimed at robotics and human-machine interaction. Rather than presenting AI as a cloud service, Grinn is framing it as local inference on embedded Linux hardware, where latency, power budget, camera input, and I/O integration matter as much as raw TOPS.</p>
<p>The hardware story is also broader than one chipset family. The booth includes a MediaTek-based GenioSOM platform, a Synaptics SL2610-based module shown in camera and industrial formats, and a newly announced GenioSOM-360 positioned as an extremely small module for edge AI designs. That makes the video relevant for developers looking at SOM-based designs for industrial vision, smart cameras, robotics, compact HMI devices, and other products where Ethernet, HDMI, MIPI camera interfaces, and software portability all have to come together on a tight schedule.</p>
<p>Another useful angle is how Grinn uses partner booths to validate its role in the ecosystem. The company’s modules and demos are spread across Synaptics, MediaTek, Würth Elektronik, RS and other stands, which says something important: Grinn is not only shipping modules, but also helping silicon vendors and distributors show real deployable use cases. Filmed at Embedded World 2026 in Nuremberg, the interview captures that middle layer of the embedded market where reference design, carrier integration, BSP work, and fast customization often decide whether an AI concept becomes a shipping product.</p>
<p>Overall, this is a good snapshot of where embedded AI is heading in 2026: smaller SOMs, stronger local vision processing, a faster path from evaluation kit to product, and more emphasis on software support alongside hardware. The interesting part is not just the silicon names, but the integration model behind them. Grinn is showing how MediaTek-, Synaptics- and Renesas-class processors can be turned into compact, application-ready platforms for machine vision, gesture recognition, industrial inspection and robotics at the edge today.</p>
<p>source <a href="https://www.youtube.com/watch?v=SRkLbeRIfzo">https://www.youtube.com/watch?v=SRkLbeRIfzo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/14/grinn-edge-ai-soms-with-geniosom-360-astrasom-261x-and-renesom-v2h-at-embedded-world/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139006</post-id>	</item>
		<item>
		<title>RECOM Low-Voltage High-Current Power Modules from 25A for AI, FPGA, DDR to 150A Multiphase Rails</title>
		<link>https://armdevices.net/2026/03/13/recom-low-voltage-high-current-power-modules-from-25a-for-ai-fpga-ddr-to-150a-multiphase-rails/</link>
					<comments>https://armdevices.net/2026/03/13/recom-low-voltage-high-current-power-modules-from-25a-for-ai-fpga-ddr-to-150a-multiphase-rails/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 18:06:22 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139004</guid>

					<description><![CDATA[RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>RECOM is expanding its board-level power portfolio with compact point-of-load modules aimed at the hardest rail in modern digital design: very low voltage at very high current. The discussion centers on new 15A and 25A modules for power-tree design, covering rails for processor cores, DDR and dense digital logic, with output targets down to 0.35V and 0.5V depending on the part. That fills a gap between intermediate bus conversion and the final high-current core rail, where size, efficiency and layout matter most. https://recom-power.com/</p>
<p>The key theme here is what happens when SoCs, FPGAs and AI accelerators keep adding compute density while core voltages keep dropping. Lower voltage helps switching speed, but it pushes current sharply upward, so the power stage has to deliver tens or even hundreds of amps in a very small footprint. RECOM positions these modules as scalable building blocks: 25A per unit, 50A with two devices, and up to 150A through multiphase paralleling, aimed at robotics, machine vision, automotive compute and other embedded platforms with fast load steps.</p>
<p>A major technical point in the interview is transient response. Modern processors can jump from sleep to full activity extremely fast, so the regulator has to react before the rail drifts out of tolerance. RECOM’s adaptive constant-on-time control is presented as a way to respond faster than a conventional clock-cycle-limited loop, while also allowing lower output capacitance. That matters because less capacitance can reduce board area, BOM cost and stored energy on the rail, all while keeping the supply stable during aggressive current swings.</p>
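<p>The link between response time and output capacitance follows from simple charge arithmetic: while the loop reacts, the output capacitors alone supply the load step, so the droop is roughly ΔV ≈ ΔI·t/C, and the minimum capacitance scales with response time. The numbers below are hypothetical, not RECOM figures; they only show why a faster loop permits a smaller capacitor bank.</p>

```python
def min_output_capacitance(delta_i, response_time_s, allowed_droop_v):
    """Back-of-envelope: capacitors must hold up the rail during a load
    step until the loop reacts, so C >= delta_i * t_response / droop."""
    return delta_i * response_time_s / allowed_droop_v

# Hypothetical numbers: a 20 A load step, 30 mV allowed droop.
slow = min_output_capacitance(20, 2e-6, 0.03)    # 2 us loop response
fast = min_output_capacitance(20, 0.5e-6, 0.03)  # 0.5 us loop response
print(f"{slow*1e6:.0f} uF vs {fast*1e6:.0f} uF")  # prints "1333 uF vs 333 uF"
```

Cutting response time by 4x cuts the required capacitance by the same factor, which is where the board-area and BOM savings come from.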
<p>Another important layer is programmability. With PMBus telemetry and control, the module is not just a fixed converter but part of the system architecture. Output voltage can be trimmed very accurately, operating behavior can be tuned for different modes, and voltage margining can match the needs of individual processors characterized at the factory. In practice, that means the rail can be optimized for performance, efficiency and reliability instead of treating power as a static afterthought. The video was filmed at Embedded World 2026 in Nuremberg, where this kind of low-voltage, high-current power delivery is becoming central to embedded AI and high-density compute.</p>
<p>The broader context also matters. RECOM highlights a portfolio that runs from tiny isolated converters to high-power systems, and its latest public messaging around embedded world 2026 also points to discrete power IC and transformer options alongside PoL modules. That makes this launch interesting not just as one new regulator, but as part of a wider push toward configurable, modular power design. For engineers working on next-generation FPGA, SoC and edge AI hardware, the real takeaway is simple: power delivery is now an active design domain, with telemetry, programmability, interleaving, EMI behavior and transient control all shaping what the processor can actually do.</p>
<p>source <a href="https://www.youtube.com/watch?v=L91dBTq3rK8">https://www.youtube.com/watch?v=L91dBTq3rK8</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/recom-low-voltage-high-current-power-modules-from-25a-for-ai-fpga-ddr-to-150a-multiphase-rails/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139004</post-id>	</item>
		<item>
		<title>RECOM 65W GaN AC/DC, 1200W Fanless PMBus PSU, 2U DIN Rail Power</title>
		<link>https://armdevices.net/2026/03/13/recom-65w-gan-ac-dc-1200w-fanless-pmbus-psu-2u-din-rail-power/</link>
					<comments>https://armdevices.net/2026/03/13/recom-65w-gan-ac-dc-1200w-fanless-pmbus-psu-2u-din-rail-power/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 16:16:40 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139002</guid>

					<description><![CDATA[RECOM is showing how far compact AC/DC design has moved when mechanical compatibility stays fixed but output power climbs sharply. The headline part here is the new 65W PCB-mount AC/DC family, presented in the same footprint and pinout as an earlier 30W generation, so designers can scale power without rerouting the board or redesigning the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>RECOM is showing how far compact AC/DC design has moved when mechanical compatibility stays fixed but output power climbs sharply. The headline part here is the new 65W PCB-mount AC/DC family, presented in the same footprint and pinout as an earlier 30W generation, so designers can scale power without rerouting the board or redesigning the front end. The move to GaN switching is central: faster switching, higher efficiency, smaller magnetics and better power density all show up directly in the module size, transformer reduction and lower material use. https://recom-power.com/</p>
<p>What makes that interesting is not only density, but migration path. A pin-compatible upgrade from lower power to 65W is useful for products that start with one load profile and later need more headroom, whether that is for industrial control, embedded compute, test equipment or medical electronics. The open-frame variant shown in the interview pushes the same platform into chassis-mount use, with integrated surge handling and common-mode filtering aimed at installations where grounding, EMI and earth-loop behavior matter more than in a floating-output board design.</p>
<p>The bigger power story is the fanless 1200W class. RECOM’s RACM1200-V platform is built around baseplate cooling, up to 1000W continuous fanless output with 1200W boost, PMBus visibility, and digital control for monitoring, fault handling and application-specific behavior. That makes it relevant for medical, industrial and automation systems where acoustics, reliability and service life often matter more than adding a fan. The interview also touches on firmware tuning, power limiting and protection strategy, which is increasingly where power supplies become part of the system architecture rather than just a power brick.</p>
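<p>As a concrete taste of what PMBus visibility means on the host side: many PMBus telemetry registers report values in the spec's LINEAR11 format, an 11-bit signed mantissa with a 5-bit signed exponent packed into one 16-bit word. The decoder below follows the published format and is independent of any RECOM part.</p>

```python
def decode_linear11(word):
    """Decode a PMBus LINEAR11 value: bits 15:11 are a 5-bit two's-
    complement exponent N, bits 10:0 an 11-bit two's-complement
    mantissa Y; the value is Y * 2**N."""
    exp = word >> 11
    if exp > 0x0F:          # sign-extend the 5-bit exponent
        exp -= 0x20
    mant = word & 0x7FF
    if mant > 0x3FF:        # sign-extend the 11-bit mantissa
        mant -= 0x800
    return mant * 2.0 ** exp

# Example: exponent -2, mantissa 48 -> 48 * 2**-2 = 12.0 (e.g. volts)
print(decode_linear11(0xF030))  # prints 12.0
```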
<p>Another practical angle is cabinet density. RECOM’s newer ultra-slim DIN-rail family uses a 2U step-shape format for 30W, 60W and 90W versions, keeping the same width while pushing higher output into flat distribution panels and home or building automation cabinets. The 90W version is especially notable because RECOM positions it against wider conventional alternatives, with high efficiency, push-in terminals, audible-noise suppression and tighter panel utilization. Filmed at Embedded World 2026 in Nuremberg, the discussion ties together GaN, thermal design, EMC filtering, PMBus telemetry and mechanical standardization in a way that feels very relevant to current embedded power design.</p>
<p>Overall, this is less about one isolated launch and more about RECOM’s broader direction: higher power density where GaN makes sense, digital control at higher wattage, and space-efficient AC/DC form factors for embedded and automation installs. The useful takeaway is that smaller magnetics, slimmer DIN-rail geometry, conduction-cooled kilowatt supplies and drop-in board upgrades are all converging toward the same goal: more power in less volume, with fewer compromises in certification, thermal behavior and integration effort.</p>
<p>source <a href="https://www.youtube.com/watch?v=-hISqLa3kmg">https://www.youtube.com/watch?v=-hISqLa3kmg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/recom-65w-gan-ac-dc-1200w-fanless-pmbus-psu-2u-din-rail-power/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139002</post-id>	</item>
		<item>
		<title>Thistle Technologies Edge AI Security, Secure Boot, OTA Updates, Model Signing</title>
		<link>https://armdevices.net/2026/03/13/thistle-technologies-edge-ai-security-secure-boot-ota-updates-model-signing/</link>
					<comments>https://armdevices.net/2026/03/13/thistle-technologies-edge-ai-security-secure-boot-ota-updates-model-signing/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 14:11:27 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=139000</guid>

					<description><![CDATA[Thistle Technologies is tackling a familiar embedded problem: the industry knows what strong security should look like, but secure boot, signed firmware, encrypted updates, hardware root of trust integration, and key handling still take too much board-specific work for most teams. This interview explains how Thistle is trying to compress that effort from months into [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Thistle Technologies is tackling a familiar embedded problem: the industry knows what strong security should look like, but secure boot, signed firmware, encrypted updates, hardware root of trust integration, and key handling still take too much board-specific work for most teams. This interview explains how Thistle is trying to compress that effort from months into hours by giving device makers one platform for secure boot enablement, OTA orchestration, firmware signing, release control, and now protected Edge AI model deployment. https://thistle.tech/product</p>
<p>A key point here is that AI models on embedded devices now need the same trust chain as firmware. Thistle’s approach is to sign, encrypt, version, and verify models back to hardware so the device can confirm it is running the intended model rather than an injected or tampered payload. That matters for Edge AI pipelines where models change frequently, but provenance, integrity, and anti-extraction controls have to stay intact across deployment and update cycles. Embedded Computing Design’s 2026 Best in Show coverage frames this as hardware-anchored trust, model signing, provenance tracking, and protected delivery for Edge AI systems.</p>
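<p>The sign-version-verify chain for a model can be sketched as follows. This is a deliberately simplified stand-in using a symmetric HMAC from the Python standard library; real deployments like the one described use asymmetric signatures anchored to a hardware root of trust, and nothing below reflects Thistle's actual implementation. It only shows the shape of the check: manifest authenticity first, then model integrity against the manifest.</p>

```python
import hashlib, hmac, json, os

def sign_model(model_bytes, version, key):
    """Build a versioned manifest over the model hash and tag it.
    (HMAC here is a stand-in for an asymmetric, hardware-backed signature.)"""
    manifest = {"version": version,
                "sha256": hashlib.sha256(model_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_model(model_bytes, manifest, tag, key):
    """Reject tampered manifests, then reject tampered model payloads."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # manifest tampered or wrong key
    return hashlib.sha256(model_bytes).hexdigest() == manifest["sha256"]

key = os.urandom(32)
model = b"\x00weights..."
manifest, tag = sign_model(model, "1.4.2", key)
ok = verify_model(model, manifest, tag, key)                  # True
injected = verify_model(model + b"!", manifest, tag, key)     # False
```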
<p>The demos make that concrete across very different hardware classes: small MCU-scale targets, Linux systems, Qualcomm platforms, MediaTek designs, and boards using Infineon OPTIGA Trust M. What stands out is the unified control plane: one backend for secure OTA, encrypted firmware bundles, model rollout, and version management across heterogeneous fleets. Thistle’s own product material also highlights CI/CD-oriented release tooling and Cloud KMS-backed signing flows, which fits well with what is shown in the interview about practical key management instead of passing secrets around on laptops or USB sticks.</p>
<p>Another layer in the discussion is regulation. The video was filmed at Embedded World 2026 in Nuremberg, where security and lifecycle maintenance were major themes, and Thistle explicitly connects its stack to Europe’s Cyber Resilience Act. That alignment makes sense: CRA preparation is pushing manufacturers toward secure-by-design architectures, authenticated updates, vulnerability handling, and long-term maintenance for connected products. In that context, the value here is not a vague “security platform” pitch but a workflow that ties silicon security features, software release discipline, and field update reliability into one operational path.</p>
<p>The most interesting part of the conversation is also the most realistic one: nobody claims 100% security. Instead, the argument is that embedded systems controlling physical processes, infrastructure, robotics, and safety-relevant equipment can no longer accept weak boot chains, ad hoc signing, or unsecured model refresh. For teams shipping connected products with Edge AI, this is really about reducing attack surface while keeping deployment practical: secure boot, encrypted OTA, hardware-backed key custody, model verification, and fleet-wide update management brought into a single repeatable flow.</p>
<p>source <a href="https://www.youtube.com/watch?v=dbkKcFbHaOw">https://www.youtube.com/watch?v=dbkKcFbHaOw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/thistle-technologies-edge-ai-security-secure-boot-ota-updates-model-signing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">139000</post-id>	</item>
		<item>
		<title>RECOM discrete DC/DC solutions, isolated power ICs and SMD transformers explained</title>
		<link>https://armdevices.net/2026/03/13/recom-discrete-dc-dc-solutions-isolated-power-ics-and-smd-transformers-explained/</link>
					<comments>https://armdevices.net/2026/03/13/recom-discrete-dc-dc-solutions-isolated-power-ics-and-smd-transformers-explained/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 11:46:47 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138998</guid>

					<description><![CDATA[RECOM is broadening its power portfolio beyond classic modules and into discrete isolated DC/DC building blocks, giving design teams a more flexible path from concept to production. The key idea in this interview is not just component availability, but a structured design flow built around matched power ICs, SMD transformers, and ready-made discrete reference solutions. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>RECOM is broadening its power portfolio beyond classic modules and into discrete isolated DC/DC building blocks, giving design teams a more flexible path from concept to production. The key idea in this interview is not just component availability, but a structured design flow built around matched power ICs, SMD transformers, and ready-made discrete reference solutions. Instead of forcing engineers to choose between a fully integrated module and a fully custom analog design from scratch, RECOM is positioning itself in the middle with pre-matched combinations that remove much of the uncertainty from isolated power design. https://recom-power.com/</p>
<p>What makes the concept interesting is the “your design, your choice” approach. An engineer can start with only the IC, select an IC plus a validated matching transformer, or order a complete discrete low-power isolated DC/DC implementation prepared by RECOM. That matters because transformer-driver matching is often where discrete converter design becomes slow and risky, especially when magnetics, topology, isolation constraints, and board-level integration all have to line up at once.</p>
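<p>One way to see why the transformer choice defines the converter: in an unregulated push-pull stage, a common topology for low-power isolated DC/DC, the output rail is set almost entirely by the turns ratio. The idealized formula and numbers below are illustrative, not taken from a specific RECOM part.</p>

```python
def pushpull_vout(vin, n_sec, n_pri, v_diode=0.5):
    """Idealized output of an unregulated push-pull isolated stage:
    Vout ~= Vin * (Ns/Np) - rectifier drop. Ignores losses and load
    regulation, so treat it as a first-pass estimate only."""
    return vin * (n_sec / n_pri) - v_diode

# e.g. a 5 V input with a 1:1.2 turns ratio -> roughly a 5.5 V isolated rail
vout = pushpull_vout(5.0, 1.2, 1.0)
```

With the turns ratio baked into the magnetics, picking the wrong transformer means the rail is simply wrong, which is why pre-matched IC-transformer pairs remove so much risk.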
<p>The technical focus is clearly on low-power isolated DC/DC conversion, where the interplay between the controller IC and the transformer largely defines whether the design behaves properly. RECOM highlights very small ICs, compact SMD transformers, and board-level discrete solutions that can be tested directly in an application. This gives developers a way to evaluate isolated converter behavior, tune system requirements, and decide whether a modular converter, a semi-custom discrete stage, or individual discrete parts is the better fit for cost, layout, and product differentiation.</p>
<p>The main value proposition here is speed. RECOM says it can deliver a ready discrete solution within 20 days, which shifts the conversation from pure component sourcing to design acceleration and faster time to market. For embedded developers working on industrial, communications, automation, or edge electronics, that can be more important than squeezing out a marginal efficiency gain, because the real bottleneck is often engineering time, validation effort, and getting hardware into the field quickly. The video was filmed at Embedded World 2026 in Nuremberg, where this launch was presented as a bridge between RECOM’s established module business and a new discrete power strategy.</p>
<p>Overall, the story is about giving engineers more control without pushing all the risk back onto them. RECOM is using the know-how it built through years of DC/DC module design and exposing part of that expertise through matched IC-transformer pairs and pre-built discrete solutions. That turns isolated power from a slow, magnetics-heavy design exercise into something closer to a configurable platform, which is a notable shift for teams that need isolation, compact SMD implementation, and faster prototyping without abandoning the option of deeper customization later on.</p>
<p>source <a href="https://www.youtube.com/watch?v=f6SsrygbdEk">https://www.youtube.com/watch?v=f6SsrygbdEk</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/recom-discrete-dc-dc-solutions-isolated-power-ics-and-smd-transformers-explained/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138998</post-id>	</item>
		<item>
		<title>Renesas RH850/U2B at Embedded World 2026, Motor Control, FFT, Zonal Controller</title>
		<link>https://armdevices.net/2026/03/13/renesas-rh850-u2b-at-embedded-world-2026-motor-control-fft-zonal-controller/</link>
					<comments>https://armdevices.net/2026/03/13/renesas-rh850-u2b-at-embedded-world-2026-motor-control-fft-zonal-controller/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 02:46:21 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138996</guid>

					<description><![CDATA[Renesas is showing a very practical side of the RH850/U2B here: how an automotive MCU can tackle a noisy BLDC motor with visible torque ripple, vibration, and cogging, then smooth it out with a dedicated compensation algorithm. Instead of framing motor control as an abstract benchmark, this demo makes the effect easy to hear, feel, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Renesas is showing a very practical side of the RH850/U2B here: how an automotive MCU can tackle a noisy BLDC motor with visible torque ripple, vibration, and cogging, then smooth it out with a dedicated compensation algorithm. Instead of framing motor control as an abstract benchmark, this demo makes the effect easy to hear, feel, and measure through the FFT view and the before/after response of the system. https://www.renesas.com/en/products/rh850-u2b</p>
<p>The key technical point is hardware offload. In this setup, the compensation workload runs on the RH850/U2B embedded hardware accelerator rather than relying only on the main CPU cores, which cuts the control cycle time from roughly 15.4 microseconds to about 5 microseconds. That kind of latency reduction matters in inverter and motor-control loops because it improves response, reduces ripple, and helps push precision further at low speed where cogging effects are easy to notice.</p>
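<p>As a rough illustration of what such a compensation algorithm computes each control cycle (this is a toy Python sketch, not the Renesas implementation; the harmonic profile values are invented), a feedforward correction derived from the motor's identified ripple harmonics can be subtracted from the torque-producing current command:</p>

```python
import math

def compensated_iq(theta_e, iq_ref, harmonics):
    """Add a feedforward correction to the q-axis current command.

    theta_e   -- electrical rotor angle in radians
    iq_ref    -- nominal torque-producing current command
    harmonics -- list of (order, amplitude, phase) tuples identified
                 for this motor's ripple profile (hypothetical values)
    """
    correction = sum(a * math.cos(n * theta_e + p) for n, a, p in harmonics)
    return iq_ref - correction

# Toy profile: a dominant 6th electrical harmonic, common in BLDC cogging
profile = [(6, 0.05, 0.0)]
print(compensated_iq(0.0, 1.0, profile))  # → 0.95
```

<p>The point of the hardware accelerator is that this per-cycle evaluation (and the rest of the current loop) runs off the CPU cores, which is where the reported drop from roughly 15.4 to about 5 microseconds comes from.</p>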
<p>What makes the demo more relevant than a simple motor-control board is where Renesas positions the device. RH850/U2B is part of its cross-domain automotive MCU family, aimed at zonal controllers and unified ECU designs where motor control, safety, security, and real-time processing increasingly need to coexist on one device. The discussion around ASIL certification, EVITA Full capability, multi-core processing, and lockstep support places this clearly in the context of modern vehicle E/E architecture rather than a standalone industrial drive demo.</p>
<p>Filmed at Embedded World 2026 in Nuremberg, the demo is a good example of how Renesas is linking motor-control quality to broader automotive compute trends: hardware acceleration, deterministic timing, functional safety, cybersecurity, and domain integration. The result shown here is simple but meaningful: lower acoustic noise, lower vibration, faster execution, and a more efficient control path for EV, HEV, actuator, and zonal automotive applications.</p>
<p>source <a href="https://www.youtube.com/watch?v=7-LnA57KlGo">https://www.youtube.com/watch?v=7-LnA57KlGo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/renesas-rh850-u2b-at-embedded-world-2026-motor-control-fft-zonal-controller/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138996</post-id>	</item>
		<item>
		<title>Yocto Project at Embedded World 2026: LTS, SBOM, BitBake, RISC-V, Embedded Linux</title>
		<link>https://armdevices.net/2026/03/13/yocto-project-at-embedded-world-2026-lts-sbom-bitbake-risc-v-embedded-linux/</link>
					<comments>https://armdevices.net/2026/03/13/yocto-project-at-embedded-world-2026-lts-sbom-bitbake-risc-v-embedded-linux/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 23:01:35 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138994</guid>

					<description><![CDATA[This conversation frames Yocto less as a single distro and more as the infrastructure layer many embedded Linux teams eventually need once products move beyond quick demos. The interview highlights why developers keep coming back to it: reproducible builds, minimal images, board bring-up, source mirroring, A/B update workflows, and a build system that only pulls [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>This conversation frames Yocto less as a single distro and more as the infrastructure layer many embedded Linux teams eventually need once products move beyond quick demos. The interview highlights why developers keep coming back to it: reproducible builds, minimal images, board bring-up, source mirroring, A/B update workflows, and a build system that only pulls in what the target actually needs. That matters for performance, maintenance, and attack surface, especially when long-lived devices are deployed in volume. https://www.yoctoproject.org/</p>
<p>A big theme here is maintainability over time. The speakers point to the next Yocto LTS cycle, with four years of support, as a practical answer for product teams facing long qualification windows and regulatory pressure. Security is presented in a very concrete way: SBOM generation, vulnerability scanning, CVE tracking, and the ability to rebuild images quickly when fixes land. That makes Yocto relevant not just for BSP work and image creation, but for Cyber Resilience Act readiness and ongoing fleet maintenance in the field.</p>
<p>What also comes through is how much of Yocto’s value sits in BitBake and the surrounding workflow rather than in any single package set. The discussion around bitbake-setup, shared sstate cache, layer configuration, and reusable board support shows why experienced engineers see it as a build framework rather than just another embedded Linux option. First builds may take time, but incremental rebuilds, cache reuse across projects, and structured metadata make the system much more scalable once teams are juggling multiple products, branches, and hardware targets at once.</p>
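<p>The SBOM, CVE-tracking, and shared-cache features discussed above map onto standard Yocto configuration. A minimal <code>conf/local.conf</code> fragment might look like this (the paths are placeholders):</p>

```shell
# conf/local.conf fragment (illustrative; paths are placeholders)

# Generate SPDX SBOMs for every image build
INHERIT += "create-spdx"

# Scan recipes against the NVD CVE database during builds
INHERIT += "cve-check"

# Share the sstate cache and downloads across projects and branches
SSTATE_DIR = "/srv/yocto/sstate-cache"
DL_DIR     = "/srv/yocto/downloads"

# Prefer a local source mirror so builds stay reproducible offline
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "file:///srv/yocto/source-mirror"
```

<p>Pointing several checkouts at one <code>SSTATE_DIR</code> is what turns the slow first build into fast incremental rebuilds across products and branches.</p>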
<p>The interview also gives a useful view of Yocto’s hardware reach. ARM is treated as routine, cross-compilation is normal, and RISC-V now feels more strategic than experimental, with community layers, board support, and stronger testing infrastructure getting more attention. There is also an interesting hint that Yocto thinking may spread beyond classic embedded targets, especially through meta-virtualization, container image construction, multi-architecture builds, and ultra-small deployable runtimes where provenance and SBOM detail matter a lot.</p>
<p>Just as important, this is a story about community process. The speakers are candid about what works well and what still needs refinement, from mailing-list driven contribution flow to newer GitHub-style expectations, and from volunteer patch flow to paid maintainers, release management, and LTS coordination funded by members. Filmed at Embedded World 2026 in Nuremberg, the video ends up showing Yocto as a mature, open, vendor-neutral build ecosystem for embedded Linux, where security, reproducibility, board enablement, and long-term support are all tied together in one stack.</p>
<p>source <a href="https://www.youtube.com/watch?v=YPjoayYbosQ">https://www.youtube.com/watch?v=YPjoayYbosQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/13/yocto-project-at-embedded-world-2026-lts-sbom-bitbake-risc-v-embedded-linux/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138994</post-id>	</item>
		<item>
		<title>Renesas RZ/V2H and RZ/V2N Robotics Demo, Gesture AI, Voice Control, ROS 2</title>
		<link>https://armdevices.net/2026/03/12/renesas-rz-v2h-and-rz-v2n-robotics-demo-gesture-ai-voice-control-ros-2/</link>
					<comments>https://armdevices.net/2026/03/12/renesas-rz-v2h-and-rz-v2n-robotics-demo-gesture-ai-voice-control-ros-2/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 21:56:10 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138992</guid>

					<description><![CDATA[Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup combines an off-the-shelf dexterous hand with an RZ/V2H board, where a camera tracks human hand gestures, runs local inference, and maps the result to motors and axes so the robot hand mirrors the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Renesas uses this demo to show how edge AI is moving from simple vision classification into closed-loop robot control. The first setup combines an off-the-shelf dexterous hand with an RZ/V2H board, where a camera tracks human hand gestures, runs local inference, and maps the result to motors and axes so the robot hand mirrors the operator in real time. It is a practical example of embedded vision, gesture recognition, motor control, and low-latency human-machine interaction coming together on one platform. https://www.renesas.com/en</p>
<p>What makes the RZ/V2H part interesting here is not just raw AI throughput, but the system balance behind it. Renesas positions it for robotics and vision AI with multicore processing, DRP-AI acceleration, image-processing capability, and support for multiple camera streams, which fits workloads such as hand tracking, perception fusion, and coordinated motion. In this context the demo is less about a robotic hand alone and more about how sensor input, inference, and actuator control can be collapsed into a compact edge robotics design.</p>
<p>The second demo shifts toward collaborative robotics and tool assistance. Here, a robotic arm based on the RZ/V2N platform accepts both voice commands and hand gestures, running in a ROS 2 architecture to identify a requested tool, move to the right position, and present it to the operator. That makes the story broader than vision AI: it becomes a multimodal interface problem involving speech, gesture, robot middleware, task flow, and safe human-robot collaboration on the edge.</p>
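<p>The fusion step in such a pipeline can be surprisingly simple. As a toy sketch (not the Renesas/ROS 2 code; the intent and gesture labels are invented), voice can name the tool class while a pointing gesture disambiguates between duplicates:</p>

```python
def select_tool(voice_intent, gesture, inventory):
    """Toy fusion rule: voice names the tool, gesture disambiguates.

    voice_intent -- e.g. "screwdriver" parsed from speech (hypothetical)
    gesture      -- e.g. "point_left" / "point_right" from the vision model
    inventory    -- list of (tool, position) pairs known to the arm
    """
    candidates = [pos for (tool, pos) in inventory if tool == voice_intent]
    if not candidates:
        return None                      # nothing matches the spoken request
    if len(candidates) == 1 or gesture is None:
        return candidates[0]             # unambiguous, no gesture needed
    # Two of the same tool: let the pointing gesture pick a side
    return min(candidates) if gesture == "point_left" else max(candidates)

inv = [("screwdriver", 1), ("screwdriver", 4), ("wrench", 2)]
print(select_tool("screwdriver", "point_right", inv))  # → 4
```

<p>In a real ROS 2 system each input would arrive as a topic from a separate node, which is exactly why the middleware framing matters more than any single model.</p>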
<p>MXT’s role adds another useful layer, because this is not only a silicon story but also an ecosystem story. As a Renesas preferred partner, MXT has worked with Renesas across modules, evaluation kits, and custom boards, and the board shown here is described as a Raspberry Pi form factor design that can work with existing expansion hardware. That matters for faster prototyping, easier integration, and lower friction when developers want to move from proof of concept to a more product-like robotics platform.</p>
<p>Seen from Embedded World 2026 in Nuremberg, these demos reflect where industrial and service robotics are heading: more cameras, more AI models, more joints, more natural interfaces, and tighter integration between Linux, ROS 2, vision pipelines, and motor control. The most useful takeaway is not hype around humanoids, but the way Renesas is stacking practical building blocks for gesture-controlled manipulators, voice-driven cobots, and embedded robot perception where latency, power, and system cost still matter.</p>
<p>source <a href="https://www.youtube.com/watch?v=-9ba3hnz_ek">https://www.youtube.com/watch?v=-9ba3hnz_ek</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/renesas-rz-v2h-and-rz-v2n-robotics-demo-gesture-ai-voice-control-ros-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138992</post-id>	</item>
		<item>
		<title>Renesas Robotics Sensor Tech at Embedded World 2026, Edge AI, Force Sensing, Predictive Maintenance</title>
		<link>https://armdevices.net/2026/03/12/renesas-robotics-sensor-tech-at-embedded-world-2026-edge-ai-force-sensing-predictive-maintenance/</link>
					<comments>https://armdevices.net/2026/03/12/renesas-robotics-sensor-tech-at-embedded-world-2026-edge-ai-force-sensing-predictive-maintenance/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 18:11:05 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138990</guid>

					<description><![CDATA[Renesas frames this demo around sensing as a core building block for edge AI, robotics, mobility, and industrial automation. The focus is not on one isolated component but on how force sensing, position sensing, impedance sensing, and low-footprint embedded intelligence can be combined into compact actuator and HMI designs that are precise, robust, and realistic [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Renesas frames this demo around sensing as a core building block for edge AI, robotics, mobility, and industrial automation. The focus is not on one isolated component but on how force sensing, position sensing, impedance sensing, and low-footprint embedded intelligence can be combined into compact actuator and HMI designs that are precise, robust, and realistic to scale in production. https://www.renesas.com/IPS</p>
<p>The robotic hand is a good example of that direction. Instead of simple fingertip touch, the demo shows full-finger force measurement, so grip strength and the force curve over time can be tracked as the grasp develops. That matters for dexterous manipulation, safe human-robot interaction, and more natural motion control, where the system must regulate pressure finely enough to hold fragile objects without instability or slip.</p>
<p>A second theme is robotic joint feedback. Renesas positions inductive, magnet-free sensing as a practical fit for humanoid and industrial robot joints because it can deliver absolute position information, high resolution, immunity to stray magnetic fields, and better robustness against moisture, vibration, dust, and electromagnetic disturbance. That lines up with the company’s newer inductive position sensor push, including parts such as the RAA2P3226 for robotic joints, where compact integration, low latency, and tight angular accuracy are critical for servo control and coordinated motion.</p>
<p>The mobility demo extends that sensing approach into the human-machine interface. The scooter handle detects whether both hands are present using impedance sensing rather than conventional capacitive touch, which improves operation with gloves and in humid or wet conditions. Renesas is also emphasizing more complete reference algorithms around these sensors, so OEMs can tune sensitivity and recognition behavior in software without starting from scratch, which is often what product teams need when time-to-design is tight.</p>
<p>The final part of the video is about edge intelligence in a more literal sense: sensor data processed locally on a modest 32-bit microcontroller to infer things that are not directly measured, such as leakage, friction, or load change for predictive maintenance. That is a useful distinction in industrial sensing because it keeps latency, memory demand, power budget, and system cost under control while still enabling condition monitoring. Filmed at Embedded World 2026 in Nuremberg, the demo shows Renesas pushing sensors beyond raw measurement toward embedded perception for robotics, micromobility, and Industry 4.0.</p>
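<p>The kind of inference described here does not require a neural network. A minimal condition-monitoring sketch (illustrative thresholds, not from the Renesas demo) tracks a slow baseline and flags drift, the way friction or leakage shows up as a gradual shift in load current:</p>

```python
class DriftDetector:
    """Track a slow baseline with an exponential moving average and flag
    samples that move away from it. Cheap enough for a small 32-bit MCU:
    one multiply, one add, one compare per sample."""

    def __init__(self, alpha=0.05, threshold=0.2):
        self.alpha = alpha            # baseline adaptation rate
        self.threshold = threshold    # allowed deviation before flagging
        self.baseline = None

    def update(self, sample):
        if self.baseline is None:
            self.baseline = sample
            return False
        drift = abs(sample - self.baseline) > self.threshold
        if not drift:
            # Only fold healthy samples into the baseline
            self.baseline += self.alpha * (sample - self.baseline)
        return drift

d = DriftDetector()
readings = [1.0, 1.01, 0.99, 1.02, 1.5]   # last sample: sudden load change
print([d.update(x) for x in readings])    # → [False, False, False, False, True]
```

<p>Keeping the logic this small is what preserves the latency, memory, and power budgets the paragraph above mentions.</p>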
<p>source <a href="https://www.youtube.com/watch?v=qjhmr43MScA">https://www.youtube.com/watch?v=qjhmr43MScA</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/renesas-robotics-sensor-tech-at-embedded-world-2026-edge-ai-force-sensing-predictive-maintenance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138990</post-id>	</item>
		<item>
		<title>Lantronix Open-M 720G/520G drone AI compute, thermal imaging and Pixhawk integration</title>
		<link>https://armdevices.net/2026/03/12/lantronix-open-m-720g-520g-drone-ai-compute-thermal-imaging-and-pixhawk-integration/</link>
					<comments>https://armdevices.net/2026/03/12/lantronix-open-m-720g-520g-drone-ai-compute-thermal-imaging-and-pixhawk-integration/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 15:06:21 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138988</guid>

					<description><![CDATA[Lantronix is showing how a compact edge-AI compute module can turn a drone platform into something closer to an OEM-ready reference design than a simple demo. The focus here is the new Open-M 720G and 520G system-on-modules based on MediaTek Genio 720 and 520, aimed at getting UAV developers from evaluation to flight tests quickly [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Lantronix is showing how a compact edge-AI compute module can turn a drone platform into something closer to an OEM-ready reference design than a simple demo. The focus here is the new Open-M 720G and 520G system-on-modules based on MediaTek Genio 720 and 520, aimed at getting UAV developers from evaluation to flight tests quickly with onboard vision, control and sensor integration in one low-power stack. https://www.lantronix.com/products/open-m-720g-520g-som-system-on-module/</p>
<p>What makes this interesting is not just the module itself, but the system architecture around it. In the demo, Lantronix ties the SOM into a FLIR thermal camera path and a Pixhawk flight controller, creating a practical platform for inspection, surveillance and infrastructure monitoring. That matters because drone makers often need a starting point that already solves camera I/O, flight-control interfacing and edge inference, so they can spend more time on mission logic, autonomy and payload design.</p>
<p>Technically, the Genio 720 and 520 class stands out for delivering up to 10 TOPS of AI performance in a very constrained power envelope. Lantronix positions the platform at roughly 4 to 10 watts for typical usage, which is a meaningful number in UAV design where propulsion already dominates the energy budget. The point is not raw benchmark leadership, but usable on-device AI without the thermal and battery penalties that come with moving to 20, 30 or 40 watt compute tiers. For drones, that tradeoff can decide whether a mission lasts close to an hour or drops toward the 20 to 30 minute range.</p>
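<p>The flight-time tradeoff is easy to make concrete with back-of-envelope numbers (the battery and hover-draw figures below are illustrative assumptions, not Lantronix specifications):</p>

```python
def flight_minutes(battery_wh, propulsion_w, compute_w):
    """Rough endurance estimate: usable battery energy divided by
    total electrical draw, ignoring conversion losses and reserve."""
    return 60 * battery_wh / (propulsion_w + compute_w)

battery = 100.0      # Wh, a mid-size industrial quadcopter pack (assumed)
propulsion = 90.0    # W hover draw (assumed)

print(round(flight_minutes(battery, propulsion, 10.0), 1))  # 10 W edge-AI tier → 60.0 min
print(round(flight_minutes(battery, propulsion, 40.0), 1))  # 40 W compute tier → 46.2 min
```

<p>With smaller packs the same compute delta eats an even larger fraction of the mission, which is why the 4 to 10 watt envelope is the headline number here rather than peak TOPS.</p>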
<p>The 720G and 520G differ mainly in imaging capability rather than in core AI class, with the 720G supporting more camera processing through a dual-ISP style configuration while the 520G fits simpler single-ISP designs. That makes the pair relevant for manufacturers building regional alternatives to DJI-style platforms, especially where thermal imaging, multi-camera sensing, operator-assisted autonomy and fleet workflows matter more than consumer drone features. Filmed at Embedded World 2026 in Nuremberg, this interview is really about edge compute efficiency, modular drone design and how low-power AI silicon is becoming a practical foundation for industrial UAVs.</p>

<p>source <a href="https://www.youtube.com/watch?v=BBdLp7FBkd4">https://www.youtube.com/watch?v=BBdLp7FBkd4</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/lantronix-open-m-720g-520g-drone-ai-compute-thermal-imaging-and-pixhawk-integration/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138988</post-id>	</item>
		<item>
		<title>Innocomm MediaTek Genio 360P Multi-Camera Edge AI, DMS and Gesture Recognition</title>
		<link>https://armdevices.net/2026/03/12/innocomm-mediatek-genio-360p-multi-camera-edge-ai-dms-and-gesture-recognition/</link>
					<comments>https://armdevices.net/2026/03/12/innocomm-mediatek-genio-360p-multi-camera-edge-ai-dms-and-gesture-recognition/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 13:21:39 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138986</guid>

					<description><![CDATA[Innocomm presents a practical edge AI vision platform built around the MediaTek Genio 360P and Genio 360, showing how a system integrator and module maker can turn a reference SoC into a deployable multi-camera product. The demo is less about a single benchmark and more about system balance: camera input, AI pipeline scheduling, thermal behavior, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Innocomm presents a practical edge AI vision platform built around the MediaTek Genio 360P and Genio 360, showing how a system integrator and module maker can turn a reference SoC into a deployable multi-camera product. The demo is less about a single benchmark and more about system balance: camera input, AI pipeline scheduling, thermal behavior, and a usable module strategy for OEM and embedded designs. https://www.innocomm.com/</p>
<p>What stands out in this setup is concurrent inference across four camera streams with six computer-vision workloads running on one device. The applications mentioned in the demo cover driver monitoring, face detection and face matching, pose estimation, fall detection for elderly-care scenarios, gesture recognition, object detection, and missing-item or left-behind-belonging detection. That makes the platform relevant for smart mobility, public-space analytics, safety systems, and AIoT endpoints where several perception tasks need to run in parallel rather than one at a time.</p>
<p>The technical story is also about resource management. On screen, the demo exposes frame rate, compute loading, and temperature while models are enabled or disabled, showing how performance can be redistributed dynamically across workloads. That matters in real deployments, because edge AI products live or die by sustained throughput, memory bandwidth, and thermal envelope, not just peak TOPS figures. Around the Genio 360 family, MediaTek is positioning a 6nm edge AI platform with a hexa-core CPU architecture and integrated NPU capability, while Innocomm extends that into modules and standard products that also span MediaTek Genio 720 and 520 options for broader design scaling.</p>
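<p>The redistribution behavior shown on screen can be sketched as a trivial budget scheduler (a toy model with an invented budget figure, not Innocomm's actual pipeline logic):</p>

```python
def reallocate_fps(total_budget, enabled_models):
    """Split a fixed inference budget (frames per second the NPU can
    sustain) evenly across whichever models are currently enabled,
    mirroring how throughput shifts when workloads are toggled."""
    if not enabled_models:
        return {}
    share = total_budget / len(enabled_models)
    return {m: share for m in enabled_models}

print(reallocate_fps(60, ["dms", "pose", "gesture"]))  # 20.0 fps each
print(reallocate_fps(60, ["dms", "pose"]))             # 30.0 fps each
```

<p>Real pipelines weight models by cost and priority rather than splitting evenly, but the sustained-budget framing is the same: disabling one workload frees headroom for the rest.</p>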
<p>Rather than presenting AI as a vague feature, this video shows a fairly concrete embedded vision stack: multi-camera input, real-time inference, modular hardware, and deployable use cases with clear commercial logic. Filmed at Embedded World 2026 in Nuremberg, it gives a good look at how MediaTek ecosystem partners such as Innocomm are packaging edge perception into evaluation kits and modules that can move from demo to product with relatively little architectural change.</p>
<p>source <a href="https://www.youtube.com/watch?v=Zt8BUChd38E">https://www.youtube.com/watch?v=Zt8BUChd38E</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/innocomm-mediatek-genio-360p-multi-camera-edge-ai-dms-and-gesture-recognition/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138986</post-id>	</item>
		<item>
		<title>Linaro CoreCollective at Embedded World 2026, ONEBoot, AMI Meridian, Yocto, Arm firmware lifecycle</title>
		<link>https://armdevices.net/2026/03/12/linaro-corecollective-at-embedded-world-2026-oneboot-ami-meridian-yocto-arm-firmware-lifecycle/</link>
					<comments>https://armdevices.net/2026/03/12/linaro-corecollective-at-embedded-world-2026-oneboot-ami-meridian-yocto-arm-firmware-lifecycle/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 11:11:26 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138984</guid>

					<description><![CDATA[Linaro’s demo focuses on something that usually stays invisible until it breaks: firmware lifecycle management on Arm devices. The discussion here is about making BIOS and boot firmware less of a one-time “flash and forget” step and more of a maintained software layer, with repeatable build, test, verification, SBOM tracking, vulnerability management, and long-term updates [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Linaro’s demo focuses on something that usually stays invisible until it breaks: firmware lifecycle management on Arm devices. The discussion here is about making BIOS and boot firmware less of a one-time “flash and forget” step and more of a maintained software layer, with repeatable build, test, verification, SBOM tracking, vulnerability management, and long-term updates for devices running either Linux or Windows on Arm. https://www.linaro.org/</p>
<p>A key point is the split between ACPI-based firmware for Windows on Arm and Device Tree based firmware for Linux, and how Linaro and AMI are trying to manage both from one workflow. The demo combines AMI Meridian, Aptio V UEFI, and Linaro ONEBoot on the same ADLINK OSM-IMX93 platform, showing how a single board can boot Windows 11 IoT or Yocto Linux while keeping the firmware path standardized, security-aware, and easier to maintain over time.</p>
<p>That matters because firmware sits below the operating system and carries higher privilege than user space or even the kernel. If the firmware layer is weak, OS hardening only goes so far. The interview makes that practical: CVE monitoring, SBOM generation, software supply chain visibility, and CRA-oriented compliance are no longer just enterprise server topics, but increasingly part of embedded and IoT product maintenance. This video was filmed at Embedded World 2026 in Nuremberg, where that regulatory angle is clearly shaping how vendors present embedded platforms.</p>
<p>The other thread in the video is Linaro’s broader services model around Arm software enablement. Beyond firmware, the booth also covers Yocto build analysis, license and IP compliance, upstream kernel support, virtualization with virtio, and practical pathways for keeping deployed products supportable in the field. The newly launched CoreCollective also comes up as a free-to-join industry forum backed by Arm, intended to gather OEMs, ODMs, silicon vendors, and software stakeholders around shared engineering problems rather than isolated one-off fixes.</p>
<p>The final section on training is also worth noting because it connects theory to real hardware. Linaro is rebuilding its training offering around firmware, TF-A, U-Boot, Linux kernel, and Yocto, with remote lab access through its automation appliance, serial console, remote power control, OTG boot, and camera-monitored boards. That makes the pitch broader than a firmware demo alone: standardized boot flows, upstream-first engineering, CRA readiness, and hands-on enablement for teams building Arm products that need to stay secure and maintainable after shipment.</p>
<p>Linaro Unified Firmware Lifecycle, ONEBoot, AMI Meridian, Windows and Linux on Arm<br />
Linaro ONEBoot and , SBOM, CVE and CRA compliance</p>
<p>source <a href="https://www.youtube.com/watch?v=aRIs9YZfkH0">https://www.youtube.com/watch?v=aRIs9YZfkH0</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/linaro-corecollective-at-embedded-world-2026-oneboot-ami-meridian-yocto-arm-firmware-lifecycle/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138984</post-id>	</item>
		<item>
		<title>Forlinx Edge AI on i.MX 95 and Ara240, RK3588 Multi-Camera Vision, Modular SoMs</title>
		<link>https://armdevices.net/2026/03/12/forlinx-edge-ai-on-i-mx-95-and-ara240-rk3588-multi-camera-vision-modular-soms/</link>
					<comments>https://armdevices.net/2026/03/12/forlinx-edge-ai-on-i-mx-95-and-ara240-rk3588-multi-camera-vision-modular-soms/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 09:11:26 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138982</guid>

					<description><![CDATA[Forlinx presents itself here as more than a module vendor. The interview is really about how an embedded hardware company is moving up the stack into edge AI integration, combining SoM design, carrier boards, manufacturing, software enablement, model conversion, and deployment support. The main message is that Forlinx wants to shorten the path from silicon [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Forlinx presents itself here as more than a module vendor. The interview is really about how an embedded hardware company is moving up the stack into edge AI integration, combining SoM design, carrier boards, manufacturing, software enablement, model conversion, and deployment support. The main message is that Forlinx wants to shorten the path from silicon vendor roadmap to a production-ready embedded AI platform, whether the target is industrial vision, smart gateways, robotics, or local multimodal inference. https://www.forlinx.net/</p>
<p>The headline demo pairs an NXP i.MX 95 platform with the Ara240 M.2 AI accelerator, creating a hybrid edge AI system that mixes the i.MX 95’s local vision, graphics, security and low-power processing with an external 40 eTOPS accelerator for larger models. In the discussion, that translates into local image understanding and natural-language analysis without relying on cloud inference, including a 7B-class LLM workflow and token generation around 20 tokens per second. That combination is interesting because it shows a practical split between on-chip NPU inference and a higher-throughput PCIe add-in path for generative AI at the edge.</p>
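<p>The quoted token rate translates directly into user-visible latency; a one-line estimate makes the tradeoff concrete (reply lengths are illustrative):</p>

```python
def response_seconds(n_tokens, tokens_per_second=20.0):
    """At the ~20 tokens/s quoted for the 7B-class model, how long a
    reply of n_tokens takes to stream out on-device."""
    return n_tokens / tokens_per_second

print(response_seconds(100))  # → 5.0 s for a 100-token answer
print(response_seconds(500))  # → 25.0 s for a long analysis
```

<p>That is why the split between the on-chip NPU for fast perception and the PCIe accelerator for slower generative workloads makes architectural sense.</p>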
<p>A second thread in the video is platform scaling. Forlinx talks about using the i.MX 95’s own NPU for front-end recognition and then handing richer tasks to the accelerator, while also pointing to multi-card configurations for larger parameter counts. That makes the story less about one benchmark and more about architecture: modular edge AI, where compute can be right-sized from compact fanless designs up to multi-accelerator systems, depending on camera count, model size, latency target, and power budget.</p>
<p>The Rockchip side of the booth broadens that picture. RK3588 appears as a mature edge vision platform handling multi-camera workloads, PoE-connected inference pipelines, stitching, and video-centric AI optimization across encode, decode, and NPU execution. There is also a smaller RV1126B face-tracking demo showing how low-power Cortex-A53 class systems with an integrated NPU can still deliver responsive, fanless vision tasks. What stands out is not just chip support, but the engineering work behind BSP tuning, driver maturity, model adaptation, and layer-level optimization for real deployments.</p>
<p>Later in the video, the discussion shifts to pin-compatible module design, ODM work, early access to new SoCs, Linux support, and close collaboration with NXP, Rockchip, TI and Allwinner. That makes this less of a product showcase and more of a view into how embedded AI is being industrialized: standardised compute building blocks, faster bring-up, tighter software-hardware co-design, and a clearer route from demo to mass production. The video was filmed at Embedded World 2026 in Nuremberg, where Forlinx framed edge AI as a system integration problem as much as a silicon one.</p>
<p>source <a href="https://www.youtube.com/watch?v=W6M4m0LBciw">https://www.youtube.com/watch?v=W6M4m0LBciw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/forlinx-edge-ai-on-i-mx-95-and-ara240-rk3588-multi-camera-vision-modular-soms/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138982</post-id>	</item>
		<item>
		<title>Renesas 365 Launched at Embedded World 2026: MCU selection, BSP scaffolding, fleet management</title>
		<link>https://armdevices.net/2026/03/12/renesas-365-launched-at-embedded-world-2026-mcu-selection-bsp-scaffolding-fleet-management/</link>
					<comments>https://armdevices.net/2026/03/12/renesas-365-launched-at-embedded-world-2026-mcu-selection-bsp-scaffolding-fleet-management/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 06:46:38 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138980</guid>

					<description><![CDATA[Renesas 365 is presented here as a cloud-native engineering platform that tries to connect system architecture, embedded software, PCB design, and operational lifecycle management inside one continuous workflow. The core idea is not just collaboration in a browser, but persistent digital context: design intent, interface requirements, device choices, and implementation details stay linked instead of [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Renesas 365 is presented here as a cloud-native engineering platform that tries to connect system architecture, embedded software, PCB design, and operational lifecycle management inside one continuous workflow. The core idea is not just collaboration in a browser, but persistent digital context: design intent, interface requirements, device choices, and implementation details stay linked instead of being scattered across diagrams, spreadsheets, datasheets, and isolated toolchains. That makes the discussion less about a single MCU and more about how a smart connected product is specified, built, updated, and maintained across its full life cycle. https://www.renesas.com/renesas365</p>
<p>The balancing-robot demo makes that concept concrete. Renesas shows how a product can begin as a system-level model, where interfaces between controller, sensors, connectivity, and peripherals become machine-readable constraints rather than static drawing objects. In the demo, Electronic System Design captures those constraints and feeds them into RA Explorer, which evaluates the RA MCU family at scale, including peripheral allocation, channel mapping, and pin multiplexing. Instead of manually checking hundreds of parts and reconciling conflicts one by one, the platform narrows the candidate list in seconds and regenerates a valid configuration when requirements change, such as adding CAN.</p>
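<p>To make the constraint-narrowing idea concrete, here is a minimal sketch of requirement-driven part filtering. The part records and peripheral field names are invented for illustration (not real Renesas catalog data), and the actual RA Explorer also resolves pin multiplexing and channel mapping, which this toy omits:</p>

```python
# Toy sketch of requirement-driven MCU filtering. Part data and field
# names are invented for illustration, not real Renesas catalog entries.

PARTS = [
    {"name": "RA-A", "can": 0, "uart": 4, "adc_ch": 16, "pins": 64},
    {"name": "RA-B", "can": 1, "uart": 6, "adc_ch": 24, "pins": 100},
    {"name": "RA-C", "can": 2, "uart": 8, "adc_ch": 32, "pins": 144},
]

def candidates(parts, **required):
    """Keep parts that meet or exceed every required peripheral count."""
    return [p["name"] for p in parts
            if all(p.get(k, 0) >= v for k, v in required.items())]

# Initial requirements: 4 UARTs and 16 ADC channels match all parts.
print(candidates(PARTS, uart=4, adc_ch=16))
# A late requirement change adds CAN and the candidate list re-narrows.
print(candidates(PARTS, uart=4, adc_ch=16, can=1))
```

<p>The point is that a requirement change such as "add CAN" becomes a one-line edit that regenerates the candidate list, rather than a manual re-check of hundreds of parts.</p>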
<p>What stands out technically is the handoff from system model to software scaffolding. Once the device configuration is resolved, Renesas 365 can generate the basis of a board support package and assemble the low-level driver stack around the selected peripherals, including connectivity layers such as Wi-Fi. That is the real productivity claim here: not only component discovery, but carrying configuration intent downstream into embedded implementation. For MCU teams dealing with pinmux limits, package variants, and software-stack assembly, that removes a large amount of repetitive engineering work and shifts attention toward architecture, trade-off analysis, and application behavior at the edge.</p>
<p>The wider roadmap matters just as much as the live demo. Renesas has been positioning Renesas 365, powered by Altium, as a full electronics-system platform spanning silicon, software, and the discover, develop, and lifecycle stages, with broader lifecycle services around digital traceability, secure OTA/OTAA infrastructure, and fleet-oriented management. In the interview, that future direction also extends toward behavioral modeling, power and memory budgeting, AI-assisted code generation, debugging, and API-level access for external tools and autonomous agents. Filmed at Embedded World 2026 in Nuremberg, the conversation frames the launch as part of a larger shift from isolated EDA and firmware workflows toward a more platform-based electronics-development stack.</p>
<p>Another important point is openness. Renesas is clearly strongest when modeling its own silicon, but the demo also shows third-party components in the design flow, and the company describes a roadmap where partners can publish hardware, software, and subsystem models into the environment. That makes Renesas 365 less about locking engineers into a single vendor bill of materials and more about giving mixed-vendor embedded teams a shared design surface with traceable context. For anyone building software-defined industrial or IoT products, the interesting question is not whether this replaces every existing tool on day one, but how far it can reduce manual integration friction between architecture, firmware, board design, update infrastructure, and fleet operation at scale.</p>
<p>source <a href="https://www.youtube.com/watch?v=62XKBA4x7ts">https://www.youtube.com/watch?v=62XKBA4x7ts</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/12/renesas-365-launched-at-embedded-world-2026-mcu-selection-bsp-scaffolding-fleet-management/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138980</post-id>	</item>
		<item>
		<title>Looking Glass musubi holographic photo frame converts photos &#038; videos to HLD holograms Kickstarter</title>
		<link>https://armdevices.net/2026/03/11/looking-glass-musubi-holographic-photo-frame-converts-photos-videos-to-hld-holograms-kickstarter/</link>
					<comments>https://armdevices.net/2026/03/11/looking-glass-musubi-holographic-photo-frame-converts-photos-videos-to-hld-holograms-kickstarter/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 13:36:40 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138978</guid>

					<description><![CDATA[musubi is a new holographic photo and video frame developed by Looking Glass that converts ordinary photos and short video clips into holograms with visible depth. The device is designed as a simple consumer product that works with media people already have, including photos from phones or older scanned pictures. Conversion happens locally through a [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>musubi is a new holographic photo and video frame developed by Looking Glass that converts ordinary photos and short video clips into holograms with visible depth. The device is designed as a simple consumer product that works with media people already have, including photos from phones or older scanned pictures. Conversion happens locally through a desktop application that reconstructs depth using machine learning and loads the hologram directly onto the frame. The device does not require a cloud connection or subscription and stores the media locally on the device. https://look.glass/musubi</p>
<p>The idea behind musubi is to make holographic displays practical for everyday use at home. Many people store thousands of photos and videos that are rarely revisited once they disappear into phone galleries or cloud folders. By transforming those flat images into holographic scenes with depth, the frame attempts to recreate moments with more spatial presence than traditional digital photo frames. Weddings, family memories, pets, and travel clips can be converted into short holographic scenes that play directly on the display.</p>
<p>The workflow is intentionally simple. Users connect the frame to a Mac or PC using USB-C, select photos or short video clips up to thirty seconds long, and run the conversion tool in the Looking Glass desktop software. The application generates a 3D scene from the original media and loads it into the device storage. Each frame can hold around one thousand holograms and includes a built-in speaker for video playback, allowing clips to run with sound.</p>
<p>The hardware includes a 7-inch Hololuminescent Display with roughly two inches of perceived depth. The frame has an internal rechargeable battery rated for about three hours of operation or can run continuously when powered through USB-C. All playback works offline once the media has been converted and loaded. The device includes simple controls for power, volume, and switching between stored holograms.</p>
<p>For creators and developers there are additional tools available beyond the standard workflow, including support for Gaussian splat imports as well as plugins for Unity, Unreal Engine, and Blender. Motion graphics templates for Adobe Premiere Pro and After Effects can also generate compatible holographic content. This demonstration was filmed at Embedded World 2026 in Nuremberg where Looking Glass presented musubi as a smaller consumer counterpart to its larger holographic displays used in developer and enterprise environments.</p>
<p>source <a href="https://www.youtube.com/watch?v=3_ZKcVEi5Yk">https://www.youtube.com/watch?v=3_ZKcVEi5Yk</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/11/looking-glass-musubi-holographic-photo-frame-converts-photos-videos-to-hld-holograms-kickstarter/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138978</post-id>	</item>
		<item>
		<title>Siemens industrial AI hub Booth Tour at SPS 2025 digital twin, copilots and agentic robots</title>
		<link>https://armdevices.net/2026/03/02/siemens-industrial-ai-hub-booth-tour-at-sps-2025-digital-twin-copilots-and-agentic-robots/</link>
					<comments>https://armdevices.net/2026/03/02/siemens-industrial-ai-hub-booth-tour-at-sps-2025-digital-twin-copilots-and-agentic-robots/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 12:41:11 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138975</guid>

					<description><![CDATA[Siemens uses this booth tour to show how its industrial AI strategy connects automation hardware, engineering software and domain-specific copilots into one digital enterprise stack. From the central Industrial AI Hub, Tsvetelina Nikolova explains how manufacturers can merge real-world production assets with a comprehensive digital twin, then run “what-if” scenarios across the entire lifecycle to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Siemens uses this booth tour to show how its industrial AI strategy connects automation hardware, engineering software and domain-specific copilots into one digital enterprise stack. From the central Industrial AI Hub, Tsvetelina Nikolova explains how manufacturers can merge real-world production assets with a comprehensive digital twin, then run “what-if” scenarios across the entire lifecycle to optimize design, throughput and energy use. The focus is on leveraging Siemens Xcelerator, Industrial Operations X and Industrial Edge to turn heterogeneous shop-floor data into a consistent, AI-ready data fabric that spans OT and IT. https://www.siemens.com/global/en/products/automation/topic-areas/industrial-ai.html</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>On the design and engineering side, the tour highlights generative AI embedded directly into NX and the new family of Industrial Copilots. Here, engineers can ask natural-language questions about CAD models, get design variants for components like TV wall mounts, or have NX CAM Copilot propose optimized toolpaths for complex parts. The Engineering Copilot TIA, tightly integrated with TIA Portal, lets automation engineers describe intents instead of writing or searching through PLC code, automating configuration tasks and documentation across projects. This reduces repetitive work, accelerates commissioning and makes it easier for new engineers to contribute quickly to established control architectures.</p>
<p>In operations, the video zooms in on Insights Hub, Siemens’ industrial IoT platform that aggregates sensor, PLC and MES data and exposes it through dashboards and a built-in copilot. Operators can use conversational queries to check stock levels in the MES, configure machines for short product runs with multiple variants, and orchestrate workflows textually rather than through custom scripts. The same data backbone feeds asset intelligence and predictive maintenance, illustrated by a BlueScope steel case where a digital twin “fingerprint” of critical assets is compared continuously with live data to detect deviations and trigger proactive interventions, avoiding roughly 2,000 hours of unplanned downtime. Together, these examples show how industrial AI copilots move from nice-to-have dashboards to closed-loop decision support that protects throughput and uptime.</p>
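<p>The "fingerprint" comparison described above can be sketched very simply: summarize healthy operation as a baseline mean and spread, then flag live readings that drift outside that band. The readings and the 3-sigma threshold below are invented for illustration, not data from the BlueScope deployment:</p>

```python
# Minimal sketch of baseline-vs-live deviation detection. Readings and
# thresholds are illustrative, not from the BlueScope deployment.
from statistics import mean, stdev

def fingerprint(baseline):
    """Summarize healthy operation as (mean, standard deviation)."""
    return mean(baseline), stdev(baseline)

def deviations(live, fp, k=3.0):
    """Indices of readings more than k sigma away from the baseline mean."""
    mu, sigma = fp
    return [i for i, x in enumerate(live) if abs(x - mu) > k * sigma]

healthy = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2]   # e.g. bearing temperature, degC
live = [70.0, 70.1, 73.5, 70.2, 74.0]

fp = fingerprint(healthy)
print(deviations(live, fp))   # -> [2, 4]: the two out-of-band readings
```

<p>Production systems layer trend analysis and physics-based models on top, but the core loop is this comparison of a learned healthy signature against live telemetry.</p>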
<p>The second half of the tour steps into Siemens’ “future” zone, where agentic AI and autonomous production concepts are on display. A robot cell is configured as an example of how autonomous agents could handle configuration, scheduling and execution of tasks, while an orchestrator agent coordinates specialized agents for planning, quality, logistics and energy optimization. Rather than replacing humans, Siemens positions these agentic systems as collaborators that take over low-level reconfiguration work so engineers and operators can focus on high-value problem solving, governance and safety. This aligns with Siemens’ broader push toward industrial foundation models and AI agents that can reason over engineering data, shop-floor events and business constraints across the wider industry.</p>
<p>Filmed in the Siemens hall at SPS 2025 in Nuremberg, the video also touches on how the company extends this experience beyond the physical stand through live talks and a persistent virtual booth. Nikolova stresses that AI-driven factories are still built around human decision-makers, with copilots and agents acting as transparent, explainable tools rather than opaque black boxes. For younger engineers, that means fewer hours on translation, documentation and repetitive configuration, and more time on creative tasks like new machine concepts or process improvements. The result is a glimpse of how industrial AI, digital twins and autonomous agents may reshape factory work over the coming years, while keeping human expertise firmly at the center.</p>
<p>source <a href="https://www.youtube.com/watch?v=FpkAXAdHaEI">https://www.youtube.com/watch?v=FpkAXAdHaEI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/03/02/siemens-industrial-ai-hub-booth-tour-at-sps-2025-digital-twin-copilots-and-agentic-robots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138975</post-id>	</item>
		<item>
		<title>nVent Google Project Deschutes 5.0 CDU, CX121 liquid cooling for AI data center racks Nvidia GB300</title>
		<link>https://armdevices.net/2026/02/25/nvent-google-project-deschutes-5-0-cdu-cx121-liquid-cooling-for-ai-data-center-racks-nvidia-gb300/</link>
					<comments>https://armdevices.net/2026/02/25/nvent-google-project-deschutes-5-0-cdu-cx121-liquid-cooling-for-ai-data-center-racks-nvidia-gb300/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 17:21:46 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138973</guid>

					<description><![CDATA[nVent is positioning its liquid cooling portfolio as core infrastructure for AI and high-performance data centers, starting with the new Project Deschutes 5.0 coolant distribution unit based on Google’s open OCP specification. The unit is a 2 MW, 500 gpm, high-pressure liquid-to-liquid CDU with N+1 sealless pumps, low-harmonic VFD drives and 3°C approach temperature, engineered [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>nVent is positioning its liquid cooling portfolio as core infrastructure for AI and high-performance data centers, starting with the new Project Deschutes 5.0 coolant distribution unit based on Google’s open OCP specification. The unit is a 2 MW, 500 gpm, high-pressure liquid-to-liquid CDU with N+1 sealless pumps, low-harmonic VFD drives and 3°C approach temperature, engineered to support Google’s seventh-generation TPU “Ironwood” and other high-density chips at scale while staying within tight thermal envelopes and electrical constraints. https://www.nvent.com/en-us/data-solutions/liquid-cooling</p>
<p>In the video, Matt Archbald walks through how this Deschutes CDU is tuned to Google’s operating point: single-phase water-based secondary loops, up to 65–80 psi design pressure, and about 60 kW of electrical input to move roughly 2 MW of heat away from the racks. Ultra-low harmonic drives allow the CDU to share the same power rails as the IT load, avoiding extra electrical infrastructure. N+1 filtration with individually isolatable filters means maintenance can be done live, without shutting down the CDU or impacting TPU clusters and other liquid-cooled nodes. Variable-speed pumps make it possible to support lower-pressure environments and non-Google chips simply by shifting along the pump PQ curve.</p>
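<p>The quoted figures hang together, as a quick back-of-the-envelope check shows. Water properties below are standard; the derived loop delta-T and overhead percentage are my arithmetic, not nVent-published values:</p>

```python
# Sanity check on the Deschutes 5.0 CDU figures: 2 MW of heat at 500 gpm
# with about 60 kW of electrical input. Derived values are my arithmetic.

CP_WATER = 4186.0          # J/(kg*K), specific heat of water
GPM_TO_KG_S = 3.785 / 60   # 1 US gpm of water is about 0.063 kg/s

heat_w = 2_000_000.0       # 2 MW of heat moved
flow_kg_s = 500 * GPM_TO_KG_S

# Q = m_dot * cp * dT  =>  temperature rise across the secondary loop
delta_t = heat_w / (flow_kg_s * CP_WATER)   # about 15 K

# Electrical overhead: pump power relative to heat moved
overhead = 60_000.0 / heat_w                # 3%

print(f"loop delta-T ~ {delta_t:.1f} K, pump overhead ~ {overhead:.0%}")
```

<p>Roughly 3% electrical overhead to move 2 MW of heat is what makes it practical for the CDU to share power rails with the IT load instead of needing a dedicated electrical buildout.</p>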
<p>nVent also shows the CX121, a “for the masses” row-based CDU platform in the 1.5–1.75 MW range with three pumps and true N+1 redundancy at a 4°C approach. The CX121’s power architecture is configurable as single, dual, three-feed or four-feed, enabling architectures such as “4 feeds makes 3” for Nvidia and AMD racks and reducing the need for extra CDUs as pure failover. Liquid quality monitoring, leak detection and telemetry are built in, while pump modules bundle pump, filter and drive into a hot-swappable 750 lb cartridge that can be changed by a single technician in under half an hour, keeping service windows short in large AI data halls.</p>
<p>Around the booth, the discussion zooms out to full-stack infrastructure. nVent demonstrates smart power distribution units with polling and streaming telemetry, enabling power analytics, threshold-based smart alerting and automated load shedding when temperatures or currents approach critical limits. On the thermal side, the portfolio spans in-rack CDUs for Nvidia MGX, GB200 and GB300 configurations, OCP OV3 and enterprise racks, rear-door heat exchangers for capturing residual air-side heat, and a liquid-to-air sidecar that uses the data hall’s airstream when facility water is not available. Overhead, the modular Technology Cooling System (TCS) network uses pre-defined manifold lengths, flexible interconnects and seismic bracing so coolant distribution to each rack remains resilient and easy to commission.</p>
<p>Filmed at Supercomputing 2025 (SC25) in St. Louis, the conversation also touches on lifecycle management and environmental impact. nVent emphasizes closed-loop secondary circuits that avoid evaporative losses, glycol-based formulations tuned to approach the thermal performance of water, and a partnership with Valvoline for global fluid supply, monitoring and end-of-life recycling. By standardizing around open specifications like Google’s Project Deschutes 5.0 and combining CDUs, manifolds, sidecars and rear doors with services and telemetry, nVent presents a coherent path to megawatt-class racks, 500 W+ chips and hybrid air/liquid deployments without requiring a complete data center rebuild.</p>
<p>Publishing 50+ videos from Supercomputing 2025 (SC25, St. Louis), and from other recent events, about 4 per day at 5AM, 11AM, 5PM and 11PM CET/EST.<br />
Join https://www.youtube.com/charbax/join for early access to all my queued videos.</p>
<p>Watch my full SC25 playlist:<br />
https://www.youtube.com/playlist?list=PL7xXqJFxvYvihnaq98TO55Cbe2VMD9mk8</p>
<p>source <a href="https://www.youtube.com/watch?v=l9l5m4y8zYg">https://www.youtube.com/watch?v=l9l5m4y8zYg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/25/nvent-google-project-deschutes-5-0-cdu-cx121-liquid-cooling-for-ai-data-center-racks-nvidia-gb300/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138973</post-id>	</item>
		<item>
		<title>Layer Canvas square QLED: 10,000 zones + Dragonwing QCS8550 GPU, live generative art</title>
		<link>https://armdevices.net/2026/02/21/layer-canvas-square-qled-10000-zones-dragonwing-qcs8550-gpu-live-generative-art/</link>
					<comments>https://armdevices.net/2026/02/21/layer-canvas-square-qled-10000-zones-dragonwing-qcs8550-gpu-live-generative-art/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 21 Feb 2026 21:32:00 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138971</guid>

					<description><![CDATA[Layer presents Canvas as a combined hardware display and curated content platform aimed at museum-grade digital art: a square-format QD/QLED panel with a dense miniLED backlight and 10,000 individually controlled local-dimming zones, tuned for high contrast, low blooming, and a matte, low-reflection front surface so the image holds up from wider viewing angles. The core [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Layer presents Canvas as a combined hardware display and curated content platform aimed at museum-grade digital art: a square-format QD/QLED panel with a dense miniLED backlight and 10,000 individually controlled local-dimming zones, tuned for high contrast, low blooming, and a matte, low-reflection front surface so the image holds up from wider viewing angles. The core idea is to treat the screen like a true “canvas” for generative and code-based work rather than a TV that sometimes shows art. https://layer.com/canvas</p>
<p>A big part of the demo is that many pieces aren’t pre-rendered video loops: they run live, with every pixel computed on the GPU in real time, which lets artists expose parameters (amplitude, exponent, speed, palette logic, randomness seeds) for near-infinite variations without repeating the same frame sequence. Viewers can trigger variants wirelessly from a phone or tablet, and some works can ingest ambient data so the art responds to context rather than staying fixed.</p>
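<p>The "live, parameterized" model can be sketched as a pure function of time, position, exposed parameters, and a random seed, so two seeds yield two distinct variants of the same piece. The function and parameter names here are mine for illustration, and real works evaluate something like this per-pixel on the GPU:</p>

```python
# Sketch of seed-parameterized generative art: the piece is a pure function
# of (time, position, parameters, seed). Names and math are illustrative.
import math
import random

def pixel(t, x, amplitude, exponent, speed, seed):
    rng = random.Random(seed)                  # deterministic per-seed state
    phase = rng.uniform(0, 2 * math.pi)        # seed-derived variation
    v = amplitude * math.sin(speed * t + x + phase)
    return math.copysign(abs(v) ** exponent, v)

# Same frame parameters, two seeds: two distinct variants of one piece.
frame_a = [pixel(0.0, x / 10, 1.0, 2.0, 1.5, seed=1) for x in range(5)]
frame_b = [pixel(0.0, x / 10, 1.0, 2.0, 1.5, seed=2) for x in range(5)]
print(frame_a != frame_b)   # True: each seed yields its own variant
```

<p>Because the output is fully determined by the inputs, a viewer triggering a new variant from a phone only needs to send a new seed or parameter set, not a new video file.</p>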
<p>On the display side, the 10,000-zone approach behaves like “selective darkness”: backlight zones behind black areas shut down while only the lit regions stay energized, pushing perceived black levels closer to emissive displays while avoiding some long-term burn-in concerns that matter for always-on installation. The unit also integrates light sensors to auto-match room brightness for day/night usage, and that sensor data can be made available to the artwork logic for adaptive behavior in situ.</p>
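<p>The "selective darkness" behavior can be illustrated with a toy zone computation: downsample the frame's luminance to a zone grid and drive each backlight zone at the peak luminance inside it, so zones behind black content turn fully off. The grid below is far smaller than the panel's 10,000 zones:</p>

```python
# Toy local-dimming sketch: each backlight zone is driven at the peak
# luminance found inside it, so all-black zones turn fully off.

def zone_levels(frame, zones_x, zones_y):
    h, w = len(frame), len(frame[0])
    zh, zw = h // zones_y, w // zones_x   # pixels per zone
    return [[max(frame[y][x]
                 for y in range(zy * zh, (zy + 1) * zh)
                 for x in range(zx * zw, (zx + 1) * zw))
             for zx in range(zones_x)]
            for zy in range(zones_y)]

# 4x4 frame with bright content only in the top-left quadrant.
frame = [
    [0.9, 0.8, 0.0, 0.0],
    [0.7, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(zone_levels(frame, 2, 2))   # -> [[0.9, 0.0], [0.0, 0.0]]
```

<p>Real zone controllers also smooth levels over time and manage halos around bright objects, but the shut-off-behind-black principle is the same.</p>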
<p>The system uses a Qualcomm Dragonwing compute platform (highlighted at the Qualcomm booth during CES), chosen for a strong GPU pipeline while staying quiet enough for living spaces and gallery installs. Layer positions the form factor itself as part of the art argument: a square aspect ratio that breaks away from 16:9 “TV framing,” plus minimalist industrial design (CNC aluminum back, clean installation options including wire-hanging so it can float in a space for 360-degree viewing).</p>
<p>The catalog focus is moving, generative digital art, with an AI-driven curation mode that rotates pieces through the day and learns viewing preferences without leaning on invasive camera tracking; early experiments included mmWave-style presence sensing, but the approach discussed here favors privacy-safe signals (like Bluetooth presence and crowd-level heuristics) instead. The conversation also ties back to a long history in online digital-art communities and the push to make digital art displays feel native in galleries and homes, a thread that carries into the broader display discussion at ISE 2026 Barcelona, where generative content meets high-end panel engineering in a single wall unit.</p>
<p>I&#8217;m publishing 50+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=J3uXdtjk54s">https://www.youtube.com/watch?v=J3uXdtjk54s</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/21/layer-canvas-square-qled-10000-zones-dragonwing-qcs8550-gpu-live-generative-art/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138971</post-id>	</item>
		<item>
		<title>Bang &#038; Olufsen Beo Grace: 12mm titanium driver, IP67, ANC, Dolby Atmos, NearTap</title>
		<link>https://armdevices.net/2026/02/19/bang-olufsen-beo-grace-12mm-titanium-driver-ip67-anc-dolby-atmos-neartap/</link>
					<comments>https://armdevices.net/2026/02/19/bang-olufsen-beo-grace-12mm-titanium-driver-ip67-anc-dolby-atmos-neartap/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 19 Feb 2026 11:01:49 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138969</guid>

					<description><![CDATA[Bang &#038; Olufsen’s Beo Grace is positioned as a true-wireless earbud where industrial design is treated as part of the acoustic platform: pearl-blasted aluminium housing, jewellery-like fit, and a compact case that’s meant to travel without looking like generic plastic. In this video, the focus is on how the physical build (materials, tolerances, seal) supports [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Bang &#038; Olufsen’s Beo Grace is positioned as a true-wireless earbud where industrial design is treated as part of the acoustic platform: pearl-blasted aluminium housing, jewellery-like fit, and a compact case that’s meant to travel without looking like generic plastic. In this video, the focus is on how the physical build (materials, tolerances, seal) supports both comfort and consistent sound delivery. https://www.bang-olufsen.com/en/int/earphones/beograce</p>
<p>On the audio side, Beo Grace uses a 12 mm titanium dynamic driver and tuning that targets a clean, high-resolution presentation while keeping low-end controlled rather than boosted. The demo leans heavily on active noise cancellation performance, describing a “full isolation” effect, plus a transparency mode that’s meant to stay natural instead of sounding like a boosted microphone feed. Dolby Atmos spatial audio is also part of the story, aiming to widen the image and create a more externalised stage.</p>
<p>Interaction design matters here: instead of only relying on tiny buttons, Beo Grace uses touch controls and proximity-based gestures (often referenced as NearTap-style control) for volume and playback, so you can make adjustments without breaking the seal. The earbuds are rated IP67, which is unusually high for premium in-ears, and the aluminium charging case keeps the same material language as the earbuds rather than switching to coated plastic. The segment was filmed at ISE 2026 in Barcelona, so it’s presented in a show-floor context rather than a quiet studio.</p>
<p>A useful technical footnote is that ultra-premium earbuds still live within power and size limits: reports around Beo Grace point to shorter playback time with ANC enabled than mainstream rivals, while focusing on longevity via battery-management design (including partnerships around battery health and cycle life). If you’re comparing it to other B&#038;O in-ears, think of Beo Grace less as a feature checklist and more as a materials + acoustics + ANC package aimed at a very specific listening workflow.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=NzbDddd3BcU">https://www.youtube.com/watch?v=NzbDddd3BcU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/19/bang-olufsen-beo-grace-12mm-titanium-driver-ip67-anc-dolby-atmos-neartap/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138969</post-id>	</item>
		<item>
		<title>Bang &#038; Olufsen ISE 2026 Booth Tour: Landscape speaker, BeoLiving Intelligence, Atelier, Beolab 90</title>
		<link>https://armdevices.net/2026/02/14/bang-olufsen-ise-2026-booth-tour-landscape-speaker-beoliving-intelligence-atelier-beolab-90/</link>
					<comments>https://armdevices.net/2026/02/14/bang-olufsen-ise-2026-booth-tour-landscape-speaker-beoliving-intelligence-atelier-beolab-90/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 14 Feb 2026 10:21:25 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138966</guid>

					<description><![CDATA[Bang &#038; Olufsen’s booth walkthrough is a compact tour of how the brand thinks about architectural audio: treat speakers, TVs, and control as one connected system, then let integrators extend it into more rooms and more zones without changing the user experience. The focus is on multiroom orchestration, consistent latency/level behavior between zones, and making [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Bang &#038; Olufsen’s booth walkthrough is a compact tour of how the brand thinks about architectural audio: treat speakers, TVs, and control as one connected system, then let integrators extend it into more rooms and more zones without changing the user experience. The focus is on multiroom orchestration, consistent latency/level behavior between zones, and making the “control layer” feel as deliberate as the industrial design. https://www.bang-olufsen.com/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
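<p>As a rough plausibility check of the HDMI 2.2 figures quoted above, the raw active-video bandwidth for 4K@240 at 4:4:4 chroma and 12-bit colour can be worked out directly; blanking intervals and FRL encoding overhead are ignored here, so the real link-rate requirement sits somewhat higher, inside the 96 Gbps budget:</p>

```python
# Raw active-video bandwidth for 4K@240, 4:4:4 chroma, 12-bit colour.
# Blanking and FRL encoding overhead are not counted, so the actual
# link-rate requirement is somewhat higher than this figure.
width, height, fps = 3840, 2160, 240
bits_per_pixel = 12 * 3          # 12 bits per channel, three channels at 4:4:4
raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{raw_gbps:.1f} Gbps")    # ~71.7 Gbps, within the 96 Gbps FRL budget
```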
<p>A headline prototype on display is the forthcoming landscape speaker concept, positioned as Bang &#038; Olufsen’s first fully in-house outdoor architectural speaker. The idea is to carry the same platform approach outdoors (gardens, terraces, hospitality courtyards) so an exterior zone behaves like any other room in the ecosystem, rather than a separate, bolt-on audio island. The discussion naturally lands on weather-facing materials, mounting options, and how integrators would specify outdoor coverage patterns alongside indoor listening areas.</p>
<p>A second thread is Bang &#038; Olufsen Atelier, shown as a specification tool for bespoke finishes and one-off styling choices—useful when you’re matching wood, anodised aluminium tones, textiles, or architectural palettes. In practice, it’s about giving architects and installers a way to keep acoustics and aesthetics aligned, especially when a system includes statement pieces like the Beolab 90 in special editions such as Phantom and Mirage, where surface treatment and visual depth are part of the product story.</p>
<p>On the integration side, the booth also highlights BeoLiving Intelligence as the automation bridge between B&#038;O products and wider smart-home ecosystems. The demos lean into real-world programming concepts—scenes, triggers, multiroom grouping logic, and feedback loops—plus newer AI-assisted programming ideas aimed at speeding up configuration and scaling to larger multiroom or commercial deployments. It’s essentially the “glue” layer that makes B&#038;O AV behave predictably inside a broader control stack.</p>
<p>This video was filmed at ISE 2026 in Barcelona, and it’s a useful snapshot of what B&#038;O is prioritising for integrators right now: outdoor architectural expansion via the landscape speaker concept, deep customisation via Atelier, high-end reference hardware like Beolab 90 editions, and system-level control via BeoLiving Intelligence and remotes such as BeoRemote Halo and BeoRemote One. The result is less about a single product and more about how a whole installation is specified, tuned, and controlled as one coherent audio experience.</p>
<p>I&#8217;m publishing about 50+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=w8jWp_N3Mxw">https://www.youtube.com/watch?v=w8jWp_N3Mxw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/14/bang-olufsen-ise-2026-booth-tour-landscape-speaker-beoliving-intelligence-atelier-beolab-90/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138966</post-id>	</item>
		<item>
		<title>Google ChromeOS enterprise update: Cameyo PWA for Windows apps, Gemini on Chromebook Plus, DLP</title>
		<link>https://armdevices.net/2026/02/13/google-chromeos-enterprise-update-cameyo-pwa-for-windows-apps-gemini-on-chromebook-plus-dlp/</link>
					<comments>https://armdevices.net/2026/02/13/google-chromeos-enterprise-update-cameyo-pwa-for-windows-apps-gemini-on-chromebook-plus-dlp/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 20:51:47 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138964</guid>

					<description><![CDATA[ChromeOS is being positioned as an enterprise-ready endpoint where AI features and security policy move together, rather than being bolted on later. In this chat, Craig Francis explains how Google is trying to tell a more complete “Gemini + security” story: Chromebook Plus devices can use on-device acceleration (Intel, MediaTek, Qualcomm class platforms) for responsive [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>ChromeOS is being positioned as an enterprise-ready endpoint where AI features and security policy move together, rather than being bolted on later. In this chat, Craig Francis explains how Google is trying to tell a more complete “Gemini + security” story: Chromebook Plus devices can use on-device acceleration (Intel, MediaTek, Qualcomm class platforms) for responsive AI tasks, while heavier requests can still run in the cloud when needed. https://chromeenterprise.google/products/chrome-enterprise-premium/</p>
<p>A big theme is removing adoption friction for organizations that still depend on Windows-era software. The discussion highlights Cameyo by Google as a way to package legacy Windows apps on a Windows cloud server and publish them as a Progressive Web App that launches from the app icon like a native program. The point is that users don’t deal with extra logins or visible virtualization layers; they just open the app, and the session is streamed from the server behind the scenes.</p>
<p>The other blocker is “Microsoft-first” workflows, and the messaging here is that ChromeOS can be a practical front end even when teams stay on Microsoft 365. The idea is single sign-on with Microsoft credentials, web-first Office access, and admin-managed policies that keep identity and data consistent while avoiding the unmanaged-browser problem that shows up when employees mix corporate work with random sites and third-party AI tools. This interview was filmed at ISE 2026 in Barcelona, where enterprise AV and workplace IT themes overlap more than ever.</p>
<p>Chrome Enterprise Premium is framed as the control plane for that browser reality: security visibility, phishing and malware defenses, and data loss prevention rules that can reduce risky copy/paste or uploads of sensitive content into unsanctioned services, including AI tools. Put together, the pitch is less about replacing everything with web apps overnight, and more about making ChromeOS a manageable, policy-driven client that can run web workloads, virtualized legacy apps, and selective on-device AI without breaking enterprise governance.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=XRsgjD92Wto">https://www.youtube.com/watch?v=XRsgjD92Wto</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/13/google-chromeos-enterprise-update-cameyo-pwa-for-windows-apps-gemini-on-chromebook-plus-dlp/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138964</post-id>	</item>
		<item>
		<title>Vistech MiP microLED 0.6/0.9, HDMI one-cable, Discovery Max IMAX, XR 1.5, transparent 3.91</title>
		<link>https://armdevices.net/2026/02/13/vistech-mip-microled-0-6-0-9-hdmi-one-cable-discovery-max-imax-xr-1-5-transparent-3-91/</link>
					<comments>https://armdevices.net/2026/02/13/vistech-mip-microled-0-6-0-9-hdmi-one-cable-discovery-max-imax-xr-1-5-transparent-3-91/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 11:26:18 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138962</guid>

					<description><![CDATA[Vistech shows how commercial LED is moving toward LCD-style “big flat panel” workflows by packaging microLED as MiP (micro LED in package) alongside classic SMD and COB options, then scaling it into fine-pitch, high-brightness tiles for meeting rooms and signage. The demo starts with MiP 0.6 on a 50-inch 1080p format and steps up to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Vistech shows how commercial LED is moving toward LCD-style “big flat panel” workflows by packaging microLED as MiP (micro LED in package) alongside classic SMD and COB options, then scaling it into fine-pitch, high-brightness tiles for meeting rooms and signage. The demo starts with MiP 0.6 on a 50-inch 1080p format and steps up to MiP 0.9 as a modular building block aimed at replacing large-format LCD in enterprise environments. https://www.vistechdisplay.com/</p>
<p>A key idea is pixel-pitch as the enabler for close-viewing use cases: MiP 0.6/0.9 targets short viewing distances with less than 1,000-nit output, while keeping a familiar 16:9 feel through pre-defined panel sizes and multi-panel splicing. The 4K wall shown is built as a 3×3 array of panels (roughly a square meter class) and is presented as a “giant display” alternative where bezels, burn-in concerns, and limited mounting flexibility can be pain points on LCD.</p>
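<p>The splicing maths above is easy to sanity-check. Assuming, hypothetically, that each MiP 0.9 panel is 1280&#215;720 pixels (the video doesn’t state the per-panel resolution, but that figure makes a 3&#215;3 array come out to exactly 3840&#215;2160 and lands each panel in the square-metre class described):</p>

```python
# Back-of-the-envelope check, assuming a hypothetical 1280x720 px panel
# at 0.9 mm pitch so that a 3x3 array yields a 3840x2160 (4K) wall.
pitch_mm = 0.9
panel_px = (1280, 720)
grid = (3, 3)

panel_mm = (panel_px[0] * pitch_mm, panel_px[1] * pitch_mm)   # 1152 x 648 mm
panel_m2 = panel_mm[0] * panel_mm[1] / 1e6                    # ~0.75 m^2 per panel
wall_px = (panel_px[0] * grid[0], panel_px[1] * grid[1])      # 3840 x 2160
wall_m = (panel_mm[0] * grid[0] / 1000, panel_mm[1] * grid[1] / 1000)
print(wall_px, f"{wall_m[0]:.2f} x {wall_m[1]:.2f} m, {panel_m2:.2f} m^2/panel")
```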
<p>The booth walkthrough (filmed at ISE 2026 in Barcelona) also pivots into production and immersive use cases: an indoor 1.5-pitch microLED wall positioned for XR volumes, including curved configurations where mechanical alignment, scan/refresh behavior, and camera interaction become part of the spec. Vistech frames this as the same modular ecosystem serving conference rooms, classrooms, and content stages, just tuned by pitch, cabinet mechanics, and calibration targets.</p>
<p>On the specialty side, there’s a home-cinema angle with 0.8/1.2 pitch options and a 3D viewing mode using active glasses, plus a “decor-friendly” LED concept where the powered-off surface is styled like a wooden finish rather than a black slab. These details are less about raw luminance and more about integration: living spaces, design studios, or hospitality where the off-state and physical texture matter almost as much as the on-state image.</p>
<p>The lineup ends with retail and outdoor: a transparent LED screen around 3.91 mm pitch and roughly 3,000-nit class brightness for storefront glass, plus MiP-based outdoor cabinets designed for staging and public display with integrated PSU/receiving hardware for faster service. Overall it’s a coherent “from fine-pitch boardroom to see-through retail to XR wall” story, driven by packaging choices (SMD/COB/MiP), cabinet engineering, and control stack integration.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=qR0cvZ9NZVI">https://www.youtube.com/watch?v=qR0cvZ9NZVI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/13/vistech-mip-microled-0-6-0-9-hdmi-one-cable-discovery-max-imax-xr-1-5-transparent-3-91/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138962</post-id>	</item>
		<item>
		<title>Quectel Edge Compute roadmap: QCS8550 Wi-Fi 7 modules and Dragonwing Q-8750 77 TOPS</title>
		<link>https://armdevices.net/2026/02/12/quectel-edge-compute-roadmap-qcs8550-wi-fi-7-modules-and-dragonwing-q-8750-77-tops/</link>
					<comments>https://armdevices.net/2026/02/12/quectel-edge-compute-roadmap-qcs8550-wi-fi-7-modules-and-dragonwing-q-8750-77-tops/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 21:46:52 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138960</guid>

					<description><![CDATA[Quectel shows a “smart single-board computer” concept that keeps the Raspberry Pi mechanical footprint and 40-pin GPIO header, but replaces the usual discrete SoC + SD-card approach with a shielded smart module aimed at commercial IoT. You prototype with Pi-style I/O, then ship with fixed SKUs, known memory/storage, and cellular options that look more like [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Quectel shows a “smart single-board computer” concept that keeps the Raspberry Pi mechanical footprint and 40-pin GPIO header, but replaces the usual discrete SoC + SD-card approach with a shielded smart module aimed at commercial IoT. You prototype with Pi-style I/O, then ship with fixed SKUs, known memory/storage, and cellular options that look more like an embedded product than a hobby board. https://www.quectel.com/product/lte-sc200u-smart-module-series/</p>
<p>On the LTE side, the SC200U smart module integrates Qualcomm’s SM6115 (Kryo CPU with Arm Cortex-A73/A53 cores) and an LTE Cat 4 modem in one module, with DDR and eMMC already inside the shield. That eliminates microSD as a reliability bottleneck and collapses power management, RF, and high-speed layout into a pre-qualified block, while still letting you run Linux (kernel 5.15 class) or Android on a familiar carrier board.</p>
<p>The next rung is Quectel’s SG560D family built around Qualcomm QCM6490/QCS6490-class silicon, where the carrier stays similar but the compute and connectivity step up to 5G Sub-6 (on cellular variants) and a stronger multimedia pipeline. In the demo, on-device AI acceleration is framed around roughly 12 TOPS for vision tasks like face landmarking, and the module layout even leaves a thermal opening so a heatsink or fan can couple directly to the hot spot for sustained load.</p>
<p>A key theme is modularity: the same smart module can be “plopped” into bigger boards for richer I/O, or into smaller designs when BOM, certification, and footprint start to dominate. The interviewer also gets a glimpse of why this matters—each module bundles DDR, storage, PMICs, RF front end, transceiver paths, and lots of passives, easily north of 200 parts you don’t want to re-engineer for every revision. The booth examples span an edge compute box, a retro-gaming handheld with active cooling, and a cashierless checkout vision demo.</p>
<p>Quectel also hints at what comes after: higher-tier smart modules around newer Qualcomm IoT platforms, including QCS8550-based options that pair Wi-Fi 7 and Bluetooth 5.3 with around 48 TOPS for heavier vision and multimedia, and a newer Dragonwing Q-8750 class that Qualcomm rates at about 77 TOPS for larger on-device AI workloads. Filmed at CES Las Vegas 2026, it’s a useful snapshot of how “Pi-like” developer ergonomics are converging with production-grade module integration for edge compute at scale.</p>
<p>I&#8217;m publishing about 100+ videos from CES 2026, I upload about 4 videos per day at 5AM/11AM/5PM/11PM CET/EST.  Check out all my CES 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjaMwKMgLb6ja_yZuano19e</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 ), watch all my DJI Pocket 3 videos here https://www.youtube.com/playlist?list=PL7xXqJFxvYvhDlWIAxm_pR9dp7ArSkhKK</p>
<p>Click the &#8220;Super Thanks&#8221; button below the video to send a highlighted comment! Brands I film are welcome to support my work in this way <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=p2U2ypFE4eA">https://www.youtube.com/watch?v=p2U2ypFE4eA</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/12/quectel-edge-compute-roadmap-qcs8550-wi-fi-7-modules-and-dragonwing-q-8750-77-tops/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138960</post-id>	</item>
		<item>
		<title>Barco at ISE 2026, QDX RGB + Encore 3 + EC210: 4K60 multi-output processing, UDM/F80 projector stack</title>
		<link>https://armdevices.net/2026/02/12/barco-at-ise-2026-qdx-rgb-encore-3-ec210-4k60-multi-output-processing-udm-f80-projector-stack/</link>
					<comments>https://armdevices.net/2026/02/12/barco-at-ise-2026-qdx-rgb-encore-3-ec210-4k60-multi-output-processing-udm-f80-projector-stack/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 15:06:23 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138958</guid>

					<description><![CDATA[Barco walks through a high-end projection and live-event processing stack that spans 1DLP and 3-chip DLP, with a clear focus on rental fleets, repeatable optics, and predictable color workflows. The headline is the QDX 45K-class platform in native 4K, shown in RGB laser for wide-gamut work (Rec.2020 coverage is the key talking point) while keeping [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Barco walks through a high-end projection and live-event processing stack that spans 1DLP and 3-chip DLP, with a clear focus on rental fleets, repeatable optics, and predictable color workflows. The headline is the QDX 45K-class platform in native 4K, shown in RGB laser for wide-gamut work (Rec.2020 coverage is the key talking point) while keeping lens compatibility aligned with existing Barco glass to protect lens inventory ROI. https://www.barco.com/en/product/encore3</p>
<p>On the projection side, the conversation ranges from compact 6,000-lumen class units up to the QDX family, plus the i600 1DLP lineup (8K/10K/15K lumen classes) and specialty optics aimed at museums and tight installs. Barco calls out an “elbow” ultra-short-throw concept whose folded mirror path typically costs 20–30% of the light output, and contrasts that with straight-through UST optics (notably a 0.5 throw) to reduce mirror complexity and keep image quality clean.</p>
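<p>The throw-ratio and light-loss numbers above translate into simple arithmetic: throw ratio is throw distance divided by image width, so a 0.5 UST lens paints an image twice as wide as the lens-to-screen distance. The 30,000-lumen figure used below is a hypothetical engine for illustration:</p>

```python
# Throw ratio = throw distance / image width, so a 0.5 UST lens
# produces an image twice as wide as the lens-to-screen distance.
def image_width(distance_m: float, throw_ratio: float) -> float:
    return distance_m / throw_ratio

print(image_width(1.0, 0.5))   # a 2 m wide image from just 1 m away

# The quoted 20-30% light penalty of mirror-folded ("elbow") UST optics,
# applied to a hypothetical 30,000-lumen class engine:
for loss in (0.20, 0.30):
    print(f"{loss:.0%} loss -> {30_000 * (1 - loss):,.0f} lumens on screen")
```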
<p>For simulation and high-motion content, the F80 is positioned around native 4K with high frame rate behavior, while the “baby 3-chip” UDM sits below QDX in size and weight but still targets serious output (up to the 30K-lumen class). Across the range, Barco emphasizes vertical integration around the light engine—sourcing laser diodes directly and designing the projector around that engine—plus cloud connectivity for integrators who want monitoring and service visibility.</p>
<p>The other half of the video shifts to Barco’s Event Master ecosystem: Encore 3 as the flagship image processor with 8× 4K60 outputs in one chassis, scalable by linking units for larger canvases and multi-screen compositions. The EC30 and EC210 event controllers are shown as tactile show-control surfaces for rapid preset recall and T-bar operation, with firmware updates aligning controller support and training workflows. This was filmed at ISE 2026 in Barcelona, so the demos lean heavily into live events and integration use-cases rather than cinema-only storytelling.</p>
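<p>The processing headroom described above can be framed as a pixel budget: eight 4K outputs per chassis, multiplied up as units are linked. The assumption that linking scales the canvas linearly is a simplification for illustration, not a statement of Encore 3’s actual linking limits:</p>

```python
# Pixel budget for the 8x 4K60 output configuration described above,
# assuming (simplistically) that linked chassis scale the canvas linearly.
out_px = 3840 * 2160                 # one 4K output
per_chassis = 8 * out_px             # ~66.4 megapixels per chassis
for chassis in (1, 2, 4):
    print(f"{chassis} chassis: {chassis * per_chassis / 1e6:.1f} MP @ 60 Hz")
```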
<p>Finally, SwiftAgent appears as a software-based switching layer for hybrid production: multiple 4K60 camera inputs, several 4K program outputs, and IP streaming formats like RTSP, SRT, and NDI in the mix, with audio-follow and automated camera behavior discussed as iterative roadmap features. The underlying theme is repeatable 4K60 signal paths—capture, process, scale, and route—built for operators who need deterministic latency and clean 12-bit 4:4:4 processing across large pixel canvases.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=cd5LBhqfS1c">https://www.youtube.com/watch?v=cd5LBhqfS1c</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/12/barco-at-ise-2026-qdx-rgb-encore-3-ec210-4k60-multi-output-processing-udm-f80-projector-stack/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138958</post-id>	</item>
		<item>
		<title>Napster Station 2.0: Transparent microLED AI Concierge Kiosk, VoiceField mic array, Azure AI</title>
		<link>https://armdevices.net/2026/02/12/napster-station-2-0-transparent-microled-ai-concierge-kiosk-voicefield-mic-array-azure-ai/</link>
					<comments>https://armdevices.net/2026/02/12/napster-station-2-0-transparent-microled-ai-concierge-kiosk-voicefield-mic-array-azure-ai/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 23:41:07 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138956</guid>

					<description><![CDATA[Napster Station 2.0 is a bundled “embodied AI concierge” kiosk concept that tries to make voice + video agents feel like a practical front desk for retail, hospitality, and public venues, not a demo that only works in a quiet lab. The idea is consistent brand behavior across channels: the same agent logic can live [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Napster Station 2.0 is a bundled “embodied AI concierge” kiosk concept that tries to make voice + video agents feel like a practical front desk for retail, hospitality, and public venues, not a demo that only works in a quiet lab. The idea is consistent brand behavior across channels: the same agent logic can live on a website, in-app, and in a physical station, with deployment framed as consumption-based “digital labor” rather than a big bespoke integration. https://www.napster.ai/</p>
<p>What makes this build technically interesting is the hardware stack being treated as part of the AI product: a transparent microLED touch display (AUO) paired with a high-end embedded compute module, a 48MP-class camera, and a beamforming microphone array tuned for near-field capture. In the booth conversation you can hear the engineering focus on noisy environments: voice isolation, face/pose tracking, and lip/mouth movement cues to improve diarization and reduce pickup from bystanders, plus tighter tuning of gain staging, echo cancellation, and latency.</p>
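<p>The beamforming idea referenced above can be sketched in a few lines. This is a minimal delay-and-sum illustration under assumed, known per-mic delays, not Napster’s actual audio pipeline: each microphone hears the same source with a different arrival delay, and compensating those delays before averaging keeps the steered source coherent while incoherent noise from other directions averages down.</p>

```python
import math

# Minimal delay-and-sum beamformer sketch (illustrative only).
fs = 16_000                                   # sample rate, Hz
n = 512
src = [math.sin(2 * math.pi * 440 * t / fs) for t in range(n + 8)]

delays = [0, 2, 4]                            # per-mic delay in samples (assumed known)
max_d = max(delays)
# mic i observes the source shifted by its own arrival delay
mics = [[src[t + max_d - d] for t in range(n)] for d in delays]

# align each channel by advancing it by its delay, then average
beam = [
    sum(mic[t + d] for mic, d in zip(mics, delays)) / len(mics)
    for t in range(n - max_d)
]
# after alignment every channel agrees exactly with the (shifted) source
assert all(abs(beam[t] - src[t + max_d]) < 1e-12 for t in range(n - max_d))
```

In a real array the delays are estimated rather than known, fractional-sample shifts replace the integer shifts used here, and echo cancellation and gain staging sit around the beamformer, as the booth conversation notes.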
<p>On the display side, the station moves from transparent OLED to transparent microLED in a sub-millimeter pitch range (described as about 0.66 mm) to push higher luminance and better see-through characteristics for an “object-behind-the-screen” effect. It’s the kind of panel where optical bonding, cover glass, and PCAP multi-touch matter as much as pixel tech, because reflections, parallax, and touch accuracy define whether it feels like a usable interface or a showroom trick. The transparency also changes interaction design: you can keep eye contact through the screen while still using it as a UI canvas.</p>
<p>The software story is equally “stacked”: Napster positions it as multi-cloud and partner-friendly, with Microsoft Azure AI Foundry mentioned for real-time voice/video agent behavior and the ability to run on other clouds and models (including Gemini, per the discussion). In this demo, the agent “Kai” isn’t just Q&amp;A; it can branch into multimodal content generation (e.g., creating a short, shareable song with lyrics + audio), then hand off via QR code for retrieval and sharing, which hints at a broader workflow engine behind the avatar.</p>
<p>This video was filmed at ISE 2026 in Barcelona, where the station is shown inside the faytech booth context as a prototype moving toward lighthouse customers. The most convincing part is not the avatar animation, but the systems thinking: sensor fusion (camera + mic), low-latency streaming (websocket-style), voice UX for public spaces, and a display architecture that lets the kiosk live in the middle of a room without visually blocking it.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=Fq8c-fZbpaI">https://www.youtube.com/watch?v=Fq8c-fZbpaI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/12/napster-station-2-0-transparent-microled-ai-concierge-kiosk-voicefield-mic-array-azure-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138956</post-id>	</item>
		<item>
		<title>Eyefactive 55&#8243; 4K 1000nit touch table with faytech + tangible object recognition chip + CMS AppSuite</title>
		<link>https://armdevices.net/2026/02/11/eyefactive-55-4k-1000nit-touch-table-with-faytech-tangible-object-recognition-chip-cms-appsuite/</link>
					<comments>https://armdevices.net/2026/02/11/eyefactive-55-4k-1000nit-touch-table-with-faytech-tangible-object-recognition-chip-cms-appsuite/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 19:11:17 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138954</guid>

					<description><![CDATA[Eyefactive shows a multitouch table concept that blends large-format PCAP interaction with tangible object recognition, so physical items become UI controls on a 4K surface: drop a marker chip on the glass, and the software tracks its ID, position, and rotation to reveal contextual layers, switch views, or scrub through media without extra sensors. https://www.eyefactive.com/ [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Eyefactive shows a multitouch table concept that blends large-format PCAP interaction with tangible object recognition, so physical items become UI controls on a 4K surface: drop a marker chip on the glass, and the software tracks its ID, position, and rotation to reveal contextual layers, switch views, or scrub through media without extra sensors. https://www.eyefactive.com/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>What’s technically interesting is that the “black chip” is a passive marker, not NFC: there’s no battery, and the touch controller plus recognition layer interprets the marker pattern directly on the capacitive sensor, enabling continuous XY tracking plus orientation, which typical NFC taps don’t provide. That makes it usable for tabletop “tokens” in retail, museums, showrooms, and wayfinding, where multiple objects can act like tangible filters, selectors, or menu keys in a shared scene.</p>
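<p>As an illustrative sketch (my own, not Eyefactive&#8217;s actual recognition layer): if each token presses an asymmetric triangle of conductive feet onto the PCAP sensor, its ID, position, and rotation can all be recovered from the raw touch points:</p>

```python
# Hypothetical marker-tracking sketch: recover a tangible token's pose
# from the contact points its conductive feet produce on a PCAP sensor.
# Assumes three feet whose pairwise distances form a unique, asymmetric
# triangle (this is an illustration, not Eyefactive's algorithm).
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def marker_pose(points):
    """points: three (x, y) contacts belonging to one marker.
    Returns (centroid, rotation_deg); rotation is the bearing from the
    centroid to the most isolated foot, which breaks the symmetry."""
    cx = sum(p[0] for p in points) / 3
    cy = sum(p[1] for p in points) / 3
    apex = max(points, key=lambda p: sum(dist(p, q) for q in points))
    angle = math.degrees(math.atan2(apex[1] - cy, apex[0] - cx))
    return (cx, cy), angle % 360

def marker_id(points):
    """Crude ID: sorted pairwise distances rounded to whole units, so
    each distinct triangle geometry maps to a distinct token ID."""
    return tuple(sorted(round(dist(a, b)) for a, b in combinations(points, 2)))

pose = marker_pose([(100, 100), (140, 100), (120, 160)])
print(pose)  # centroid (120, 120); apex straight "up" -> 90 degrees
```

Continuous XY tracking then just means re-running this on every touch frame, which is exactly what a one-shot NFC tap cannot give you.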
<p>On the software side, Eyefactive positions this as an app platform with a CMS workflow: you can build experiences from text, images, video, PDFs, websites, and even 360° or 3D assets using template-style apps (the demo uses an interactive map concept called Hotspots), so non-developers can assemble navigation and storytelling without writing code. For developers, there’s still an API path to integrate object recognition into custom stacks like Unreal Engine, so the tangible inputs can drive real-time 3D content or bespoke kiosk logic in a controlled runtime.</p>
<p>The hardware shown is a high-brightness tabletop display co-developed with faytech: around a 55-inch UHD panel, built for exhibition lighting and heavy public use, with multi-user ergonomics and a robust glass/stand structure you can lean on without worry. Filmed at ISE 2026 in Barcelona, it’s a neat illustration of how “tangible UI” can move beyond gimmick into an operational interface for self-service kiosks, tourism rooms, and public venues.</p>
<p>The bigger idea is that multitouch becomes the baseline, and multi-user plus tangible tracking becomes the differentiation: people can collaborate from different sides, tokens can represent products, guests, languages, or points of interest, and staff can still use the same surface for guided demos when needed. If you’ve been thinking about replacing static signage with an interaction layer that scales from kiosks to wall displays and tables, this is a practical pattern to watch today.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=-lr1bgo5Gfo">https://www.youtube.com/watch?v=-lr1bgo5Gfo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/11/eyefactive-55-4k-1000nit-touch-table-with-faytech-tangible-object-recognition-chip-cms-appsuite/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138954</post-id>	</item>
		<item>
		<title>BOE Qualcomm video bar, HDMI I/O, triple-lens 48MP, 4K60 ISP/NPU, smart gallery, speaker tracking</title>
		<link>https://armdevices.net/2026/02/11/boe-qualcomm-video-bar-hdmi-i-o-triple-lens-48mp-4k60-isp-npu-smart-gallery-speaker-tracking/</link>
					<comments>https://armdevices.net/2026/02/11/boe-qualcomm-video-bar-hdmi-i-o-triple-lens-48mp-4k60-isp-npu-smart-gallery-speaker-tracking/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 16:11:43 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138952</guid>

					<description><![CDATA[BOE showcases a Qualcomm-powered video bar aimed at medium meeting spaces, blending a triple-lens AI camera block with integrated audio DSP so one device can handle framing, capture, and speaker playback in a single USB/HDMI-friendly package. The pitch is “broadcast-style” conferencing hardware—multi-camera optics plus on-device inference—without turning the room into a complicated install. https://www.boe.com/ &#8212; [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>BOE showcases a Qualcomm-powered video bar aimed at medium meeting spaces, blending a triple-lens AI camera block with integrated audio DSP so one device can handle framing, capture, and speaker playback in a single USB/HDMI-friendly package. The pitch is “broadcast-style” conferencing hardware—multi-camera optics plus on-device inference—without turning the room into a complicated install. https://www.boe.com/</p>
<p>On the imaging side, the unit uses three lenses (wide + zoom views) and an ISP/NPU pipeline designed for intelligent crop, auto-framing, and “smart gallery” layouts that generate individual tiles for each person while keeping a room overview strip visible. The demo mentions up to three 48MP sensors and 4K60 capture capability, even though most mainstream meeting apps still cap uplink at 1080p today, which frames the hardware as future-ready for higher-fidelity workflows in a hybrid setup.</p>
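<p>A minimal sketch of the auto-framing idea behind the “smart gallery”: turning per-person detection boxes into aspect-correct crop tiles on the wide frame. The box values and headroom factor are hypothetical, not BOE&#8217;s pipeline:</p>

```python
# Hypothetical "smart gallery" cropping: given a person's bounding box
# on the wide 4K room frame, grow it for headroom, pad it to 16:9, and
# clamp it to the sensor. The full frame stays available separately as
# the room-overview strip. Box format: (x, y, w, h) in pixels.

FRAME_W, FRAME_H = 3840, 2160  # 4K capture, per the video

def tile_crop(box, aspect=16 / 9, headroom=1.6):
    x, y, w, h = box
    # Grow the detection box, then pad the short side to the target aspect.
    cw, ch = w * headroom, h * headroom
    if cw / ch < aspect:
        cw = ch * aspect
    else:
        ch = cw / aspect
    cx, cy = x + w / 2, y + h / 2
    # Clamp so the crop stays inside the frame (assumes crop <= frame).
    left = min(max(cx - cw / 2, 0), FRAME_W - cw)
    top = min(max(cy - ch / 2, 0), FRAME_H - ch)
    return round(left), round(top), round(cw), round(ch)

print(tile_crop((200, 600, 300, 500)))  # (0, 450, 1422, 800)
```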
<p>Audio is treated as a first-class signal chain: echo cancellation, noise reduction, and room-tuned playback via built-in speakers, aiming for intelligibility when the space gets noisy. This is the typical AEC/NS/AGC stack you’d expect in a conferencing appliance, but the interesting angle is how much can be moved onto the SoC’s AI engine for adaptive processing tied to visual context (who is speaking, where they sit) and for features like presenter/speaker tracking.</p>
<p>Connectivity includes dual HDMI 2.0 outputs plus HDMI input, with the option to add an extra external camera and support multi-screen layouts; the conversation also explores the idea of multiple bars collaborating across a larger room, which would require tighter device synchronization and a multi-node AV architecture. Filmed at ISE 2026 in Barcelona, the discussion leans into what happens when room devices become edge compute nodes: people counting, behavior analytics, and local policy-driven processing rather than sending everything to cloud.</p>
<p>A practical near-term idea is using on-device super-resolution: receive a 1080p conference stream, then upscale locally to a sharper 4K presentation for the in-room display—separating “transport resolution” from “display resolution” with an AI enhancement stage. They also touch on offline translation (ASR + MT on the NPU) and meeting summaries, noting that larger language-model summarization still tends to live in cloud today, but the direction is clear as edge TOPS budgets keep rising.</p>
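<p>The transport-vs-display split is easy to see in code. A toy sketch, with a plain nearest-neighbour upscale standing in for the NPU super-resolution model:</p>

```python
# Toy illustration of separating "transport resolution" from "display
# resolution": a 1080p frame arrives over the call, and the room device
# upscales it locally for the 4K panel. A real product would run a
# super-resolution model on the NPU; nearest-neighbour repetition here
# just shows where that enhancement stage sits in the pipeline.
import numpy as np

def upscale_2x(frame: np.ndarray) -> np.ndarray:
    """frame: (H, W, 3) uint8. Returns a (2H, 2W, 3) frame."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

transport = np.zeros((1080, 1920, 3), dtype=np.uint8)  # decoded 1080p stream
display = upscale_2x(transport)                        # fed to the 4K panel
print(display.shape)  # (2160, 3840, 3)
```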
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=ctwr-e1OCjI">https://www.youtube.com/watch?v=ctwr-e1OCjI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/11/boe-qualcomm-video-bar-hdmi-i-o-triple-lens-48mp-4k60-isp-npu-smart-gallery-speaker-tracking/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138952</post-id>	</item>
		<item>
		<title>BOE S2 Series ultra-thin DLE LCD signage: 700–1200 nit, local dimming, 32–86, 4K</title>
		<link>https://armdevices.net/2026/02/11/boe-s2-series-ultra-thin-dle-lcd-signage-700-1200-nit-local-dimming-32-86-4k/</link>
					<comments>https://armdevices.net/2026/02/11/boe-s2-series-ultra-thin-dle-lcd-signage-700-1200-nit-local-dimming-32-86-4k/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 12:06:44 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138950</guid>

					<description><![CDATA[BOE walks through its S2-series LCD digital signage concept with a focus on industrial design: an ultra-thin chassis built around a DLE-style modular architecture, aiming for a clean “flush-to-wall” look while keeping serviceability practical for rollouts. The lineup spans typical fleet sizes (32&#8243;) up to large-format installs (86&#8243;), with 55&#8243; and 65&#8243; positioned as the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>BOE walks through its S2-series LCD digital signage concept with a focus on industrial design: an ultra-thin chassis built around a DLE-style modular architecture, aiming for a clean “flush-to-wall” look while keeping serviceability practical for rollouts. The lineup spans typical fleet sizes (32&#8243;) up to large-format installs (86&#8243;), with 55&#8243; and 65&#8243; positioned as the volume sweet spot for retail, corporate, and public-space deployments. https://www.boe.com/en/Enterprise/DigitalSignageDisplay</p>
<p>On image performance, the key spec band discussed is high-brightness operation in the 700–1200 nit range, paired with strong contrast for mixed lighting and window-facing scenarios. The more interesting engineering lever is local dimming: by segmenting the backlight into zones, you get higher perceived contrast and better power proportionality than a full-backlight “always-on” approach, which matters for 24/7 networks trying to cut energy per candela.</p>
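<p>A toy model of the local-dimming lever described above (the zone count and the max-based drive rule are illustrative assumptions, not BOE&#8217;s design):</p>

```python
# Toy local-dimming model: split the backlight into zones and drive
# each zone only as hard as its brightest content requires, instead of
# running the full backlight at peak. Zone grid and drive rule are
# illustrative assumptions.
import numpy as np

def zone_drive(luma: np.ndarray, zones=(8, 16)) -> np.ndarray:
    """luma: (H, W) frame luminance in 0..1. Returns per-zone drive levels."""
    zh, zw = zones
    h, w = luma.shape
    drive = luma[: h - h % zh, : w - w % zw]        # trim to a whole grid
    drive = drive.reshape(zh, h // zh, zw, w // zw)
    return drive.max(axis=(1, 3))                   # each zone tracks its brightest pixel

frame = np.zeros((1080, 1920))
frame[100:200, 100:200] = 1.0     # one bright highlight on black
levels = zone_drive(frame)
print(levels.mean())              # most zones stay dark -> power saving
```

The same per-zone values are what give the "power proportionality" mentioned above: average drive tracks average picture level rather than peak.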
<p>BOE also positions the family as orientation-flexible (landscape or portrait), which sounds simple but affects thermals, panel uniformity targets, mounting patterns, and firmware tuning for brightness limits. There’s a clear product ladder: an entry 4K tier at ~350 nit-class brightness and a step-up tier at ~500 nit, before you move into the higher-brightness, local-dimming variants where contrast and peak luminance become the main differentiators.</p>
<p>Filmed at ISE 2026 in Barcelona, the takeaway is how “boring” LCD signage becomes a systems problem at scale: power budgets, long-duty reliability, remote content workflows, and the choice between integrated Android media players versus no-OS displays fed by external players or industrial PCs over HDMI. That flexibility is what lets the same panel platform land in everything from menu boards to city-wide screen fleets with centralized CMS control.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=ir-C8qhDEEI">https://www.youtube.com/watch?v=ir-C8qhDEEI</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/11/boe-s2-series-ultra-thin-dle-lcd-signage-700-1200-nit-local-dimming-32-86-4k/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138950</post-id>	</item>
		<item>
		<title>Megapixel Ventana Deep Matte at ISE 2026 + HELIOS + AMD compute: 1000 nit HDR microLED tile demo</title>
		<link>https://armdevices.net/2026/02/11/megapixel-ventana-deep-matte-at-ise-2026-helios-amd-compute-1000-nit-hdr-microled-tile-demo/</link>
					<comments>https://armdevices.net/2026/02/11/megapixel-ventana-deep-matte-at-ise-2026-helios-amd-compute-1000-nit-hdr-microled-tile-demo/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 09:16:46 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138948</guid>

					<description><![CDATA[Megapixel shows a Ventana Deep Matte microLED tile that targets a “gallery wall” look: low-glare matte behavior, but tuned to keep saturation and highlight punch, including peak white around 1,000 nit while holding very low black level for high contrast HDR content. The demo leans on artwork and skin-tone gradients to show how the finish [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Megapixel shows a Ventana Deep Matte microLED tile that targets a “gallery wall” look: low-glare matte behavior, but tuned to keep saturation and highlight punch, including peak white around 1,000 nit while holding very low black level for high contrast HDR content. The demo leans on artwork and skin-tone gradients to show how the finish suppresses specular reflections without turning the image into a flat, hazy panel. https://megapixelvr.com/</p>
<p>Next to it, they contrast a glossy “Liquid Black” style tile, where the surface read is closer to an inky mirror-like black, trading higher reflectivity for a deeper perceived black in controlled light. Both finishes sit in the same Ventana modular tile concept, so the “matte vs gloss” choice becomes a system-level design decision depending on ambient light, viewing distance, and whether you want a framed-canvas vibe or a polished display aesthetic.</p>
<p>All of it is driven by Megapixel HELIOS processing, and the AMD booth context highlights a silicon partnership angle: the processor platform integrates AMD compute, and the discussion frames the pipeline as high-performance video ingest + real-time processing + LED drive mapping. In Megapixel terms, that typically includes calibration, tone mapping, grayscale handling, and tight genlock-style consistency so the wall behaves like a single coherent raster even as it scales.</p>
<p>The physical build is also part of the pitch: a magnetic puck mounting approach lets you align a wall grid and then “pop” tiles in and out for serviceability, which matters when you’re building portrait-format canvases like the 2880 × 3600 demo here. This segment was filmed at ISE 2026 in Barcelona, where the emphasis is less on raw pixel count and more on surface optics, install workflow, and processor-led image integrity.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=a-h8Qb-TfIw">https://www.youtube.com/watch?v=a-h8Qb-TfIw</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/11/megapixel-ventana-deep-matte-at-ise-2026-helios-amd-compute-1000-nit-hdr-microled-tile-demo/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138948</post-id>	</item>
		<item>
		<title>faytech Booth Tour at ISE 2026 Looking Glass HLD, transparent AUO microLED kiosk, transflective LCD</title>
		<link>https://armdevices.net/2026/02/11/faytech-booth-tour-at-ise-2026-looking-glass-hld-transparent-auo-microled-kiosk-transflective-lcd/</link>
					<comments>https://armdevices.net/2026/02/11/faytech-booth-tour-at-ise-2026-looking-glass-hld-transparent-auo-microled-kiosk-transflective-lcd/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Wed, 11 Feb 2026 04:11:26 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138946</guid>

					<description><![CDATA[faytech uses ISE as a fast tour of how its touch hardware portfolio scales from standard signage players to purpose-built kiosks, with most of the real work happening in PCAP tuning, optical bonding, high-brightness stacks, and enclosure engineering for 24/7 duty cycles. The talk keeps coming back to “build speed”: partners bring an application (retail, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>faytech uses ISE as a fast tour of how its touch hardware portfolio scales from standard signage players to purpose-built kiosks, with most of the real work happening in PCAP tuning, optical bonding, high-brightness stacks, and enclosure engineering for 24/7 duty cycles. The talk keeps coming back to “build speed”: partners bring an application (retail, wayfinding, menu boards), faytech turns it into an integrated touch display + compute + mechanics package, and then pushes toward volume once a demo starts pulling leads. https://faytech.com/</p>
<p>A standout demo is a large interactive touch table running very precise capacitive sensing, where the value isn’t just the panel but the full interaction loop: touch latency, palm rejection, UI triggers, and reliable gesture detection in public spaces. This kind of hardware is boring until it’s not—once you add real-time content selection, kiosk-grade mounting, and a predictable BOM for rollout, it becomes the kind of “quiet infrastructure” that restaurants and venues can actually deploy.</p>
<p>Later in the walkthrough (shot at ISE 2026 in Barcelona), the conversation pivots to glasses-free 3D for digital signage: Looking Glass Hololuminescent Displays (HLD) shown in 16-inch FHD and 27-inch 4K UHD form factors, with an 86-inch concept framing the “entrance display” use case. The point is autostereoscopic multi-view without headsets or eye tracking, packaged as a normal video-driven display pipeline, so you can treat content like signage media but render it as a fixed 3D volume on the edge.</p>
<p>On the kiosk side, you get a nice contrast between transparent OLED and transparent micro-LED approaches. The transparent OLED kiosk format is familiar (LG transparent OLED class hardware), while the AUO 30-inch transparent micro-LED kiosk is framed as a first public showing: optically bonded 10-point PCAP, around 600 cd/m² brightness, over 60% transparency, and high contrast, built into a complete kiosk enclosure in a very short iteration cycle. That’s less about pixel count and more about proving a manufacturable integration path from sample panel to deployable kit.</p>
<p>The tour also drops into “invisible engineering”: a transflective, sunlight-readable LCD concept that is mostly passive, drawing power mainly on image changes, with a small rear solar cell enabling periodic updates outdoors (think minutes, not video). And for rugged environments, they mention EMI/EMC shielding layers bonded into the glass stack plus extreme surge robustness (up to ~30 kV) while keeping touch stability, which is the kind of detail that matters when the display is a subsystem inside a larger vehicle or mission platform.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=9cOyZR6A9ew">https://www.youtube.com/watch?v=9cOyZR6A9ew</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/11/faytech-booth-tour-at-ise-2026-looking-glass-hld-transparent-auo-microled-kiosk-transflective-lcd/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138946</post-id>	</item>
		<item>
		<title>Geniatech edge AI signage: 7B LLM on-device, ePaper ODM driver stack, partial update power saving</title>
		<link>https://armdevices.net/2026/02/10/geniatech-edge-ai-signage-7b-llm-on-device-epaper-odm-driver-stack-partial-update-power-saving/</link>
					<comments>https://armdevices.net/2026/02/10/geniatech-edge-ai-signage-7b-llm-on-device-epaper-odm-driver-stack-partial-update-power-saving/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 22:41:19 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138944</guid>

					<description><![CDATA[Geniatech walks through three product tracks that sit at the intersection of digital signage playback, edge inference, and ultra-low-power ePaper. The “classic” signage player focus is straightforward integration: HDMI input/output models for looping content, and higher-density units built for multi-display layouts where one box can feed several screens while still fitting into standard CMS workflows. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Geniatech walks through three product tracks that sit at the intersection of digital signage playback, edge inference, and ultra-low-power ePaper. The “classic” signage player focus is straightforward integration: HDMI input/output models for looping content, and higher-density units built for multi-display layouts where one box can feed several screens while still fitting into standard CMS workflows. https://www.geniatech.com/</p>
<p>The more interesting twist is where signage hardware becomes an on-prem compute node. They describe an edge AI configuration around a 40 TOPS NPU-class accelerator, aimed at running small vision models locally and even deploying a side-loaded LLM up to roughly 7B parameters, depending on memory and runtime constraints. In practice, that points to applications like audience measurement, dwell-time and behavior analytics, and camera-driven context that can adapt creative, scheduling, or content rules without round-tripping raw video to the cloud.</p>
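<p>As a back-of-the-envelope check on that "up to roughly 7B parameters" claim, here is a quick sketch (our own arithmetic, not a Geniatech spec) of whether a quantized model fits in an edge box's memory:</p>

```python
def model_memory_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM footprint of a quantized LLM: weight storage plus ~20%
    headroom for KV cache and runtime buffers (both figures are assumptions)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization is ~3.5 GB of weights, so roughly
# 4.2 GB with headroom: plausible on an 8 GB signage player, tight on 4 GB.
```

<p>This is exactly why "depending on memory and runtime constraints" is the honest phrasing: 8-bit doubles the footprint, and the KV cache grows with context length.</p>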
<p>On the ePaper side, Geniatech positions itself as an ODM layer that removes the “hard part” of E Ink driving for existing signage ecosystems. The pitch is plug-and-play compatibility: Linux and Android-based controller stacks, custom TCON/driver know-how, and interface boards that let a conventional signage player keep its CMS unchanged while ePaper gets proper waveform control, ghosting mitigation, and partial update support where only a price/date region refreshes.</p>
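<p>The partial-update idea, refreshing only a price or date region while the rest of the panel stays untouched, can be sketched as simple dirty-region bookkeeping. This is an illustrative model of the concept, not Geniatech's driver stack:</p>

```python
def dirty_regions(old: list[list[int]], new: list[list[int]], tile: int = 8):
    """Compare two grayscale framebuffers tile-by-tile and return the
    bounding boxes (x, y, w, h) of tiles whose content actually changed.
    A real ePaper TCON would then drive a partial-refresh waveform on
    just those windows instead of flashing the whole panel."""
    regions = []
    for y in range(0, len(old), tile):
        for x in range(0, len(old[0]), tile):
            if any(old[y + dy][x:x + tile] != new[y + dy][x:x + tile]
                   for dy in range(min(tile, len(old) - y))):
                regions.append((x, y, tile, tile))
    return regions
```

<p>Change a single digit in a price field and only its tile comes back, which is why partial update is both faster and far cheaper on power than a full refresh.</p>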
<p>The demo leans into large-format color ePaper (including a 28.5-inch class panel) doing live, localized partial refresh that feels closer to print than to LCD motion, while staying power-frugal enough for battery and even solar-backed deployments. This was filmed at ISE 2026 in Barcelona, and it frames Geniatech’s strategy as “one backend, multiple front panels”: HDMI video walls when you need motion, and Spectra-class ePaper when you want sunlight readability, near-zero idle power, and selective refresh at the edge.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=2Eko8L8XOss">https://www.youtube.com/watch?v=2Eko8L8XOss</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/geniatech-edge-ai-signage-7b-llm-on-device-epaper-odm-driver-stack-partial-update-power-saving/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138944</post-id>	</item>
		<item>
		<title>HiteVision interactive displays: EDLA Android 16 roadmap, QLED local dimming, NFC fingerprint</title>
		<link>https://armdevices.net/2026/02/10/hitevision-interactive-displays-edla-android-16-roadmap-qled-local-dimming-nfc-fingerprint/</link>
					<comments>https://armdevices.net/2026/02/10/hitevision-interactive-displays-edla-android-16-roadmap-qled-local-dimming-nfc-fingerprint/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 14:11:24 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138942</guid>

					<description><![CDATA[HiteVision positions its portfolio around interactive whiteboards and interactive flat-panel displays (IFPD) for classrooms, pairing touch UX with device management features that matter to IT teams: NFC-based sign-in, fingerprint authentication, and panel variants described as QLED with local dimming for higher contrast in bright rooms. The goal is to make the board feel like the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>HiteVision positions its portfolio around interactive whiteboards and interactive flat-panel displays (IFPD) for classrooms, pairing touch UX with device management features that matter to IT teams: NFC-based sign-in, fingerprint authentication, and panel variants described as QLED with local dimming for higher contrast in bright rooms. The goal is to make the board feel like the primary “computer” in a class, not just a big monitor, and to keep onboarding simple for teachers. https://www.hitevision.com.tw/</p>
<p>A key thread is Google EDLA (Enterprise Device Licensing Agreement): a certification path that lets schools run Google services on-board in a compliant way, instead of relying on screen-mirroring from a laptop. That matters for Google Play access, Workspace workflows, and predictable app deployment, especially when devices are shared across periods and users.</p>
<p>On the hardware side they point to large-format touch displays, including a 110-inch class panel with a stated 120 Hz refresh rate for smoother pen tracking and motion, plus an emphasis on high brightness. They also show an 80% automated production line, underlining repeatability in assembly, calibration, and QA for high-volume education rollouts. This was filmed at ISE 2026 in Barcelona.</p>
<p>For “big-room” pedagogy and auditoriums, the booth highlights a 163-inch 4K LED display designed to behave like their interactive boards, so the UI, touch habits, and control model remain consistent across LCD IFPD and direct-view LED. The pitch is reducing the learning curve for teachers and the integration burden for IT, while scaling the same collaboration surface into larger spaces you can actually deploy.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=CQDLgo3NBqo">https://www.youtube.com/watch?v=CQDLgo3NBqo</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/hitevision-interactive-displays-edla-android-16-roadmap-qled-local-dimming-nfc-fingerprint/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138942</post-id>	</item>
		<item>
		<title>Muxwave Series F + P32 Transparent LED poster, top power feed, IP65 outdoor media facade</title>
		<link>https://armdevices.net/2026/02/10/muxwave-series-f-p32-transparent-led-poster-top-power-feed-ip65-outdoor-media-facade/</link>
					<comments>https://armdevices.net/2026/02/10/muxwave-series-f-p32-transparent-led-poster-top-power-feed-ip65-outdoor-media-facade/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 11:16:41 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138940</guid>

					<description><![CDATA[Muxwave walks through its transparent LED portfolio, centered on an ultra-light “hanging” format (Series F) built for large suspended installs. The demo wall is about 4 m high by 7.5 m wide, with top-fed power and signal routing so cabling stays clean while the feed runs down through the structure. The core idea is keeping [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Muxwave walks through its transparent LED portfolio, centered on an ultra-light “hanging” format (Series F) built for large suspended installs. The demo wall is about 4 m high by 7.5 m wide, with top-fed power and signal routing so cabling stays clean while the feed runs down through the structure. The core idea is keeping mass and depth low enough to enable creative shapes and big spans while still behaving like a real LED display rather than a projection surface. https://www.muxwave.com/</p>
<p>A key engineering theme is architectural integration: high transparency for shopfronts and atriums, plus enough luminance and refresh stability for camera-friendly content. Muxwave frames this as “full screen” coverage where the LED is distributed across a mesh/film-like module, making the visual layer feel embedded into the space. For outdoor deployments they highlight IP65 weather protection and an on-board LED packaging approach (described in the booth talk as “blue on board”) aimed at robustness in public environments and façades.</p>
<p>They also show a floor-standing poster concept (P32), essentially a 1 m by 2 m transparent LED unit that can be tiled side-by-side into longer ribbons, like a 5 m by 1 m run. The modules are attached to a glass-fronted format, and the logistics are clearly designed for rental and rapid rollout: each unit ships as one set per dedicated transport case. Content ingest is handled by an LED system controller, with upload and playback managed from a phone or laptop for quick campaign turnover.</p>
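<p>The rental math for the P32 poster format is straightforward; a tiny sketch of it (the 1 m unit width and one-case-per-unit logistics come from the video, the rounding is ours):</p>

```python
import math

def units_for_ribbon(span_m: float, unit_w_m: float = 1.0) -> int:
    """How many 1 m-wide poster units (and therefore transport cases,
    at one set per case) tile a target ribbon span, rounding up."""
    return math.ceil(span_m / unit_w_m)

# The 5 m by 1 m run from the demo: five side-by-side units, five cases.
```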
<p>Beyond the booth hardware, the conversation points to real deployments: retail windows, building façades, and sculptural installs such as circular or “ball” shapes used for wayfinding or landmark signage. One named reference is a large installation at the south entrance area of Fira Barcelona, which helps anchor the product as something that can survive high foot traffic and real public lighting. This interview was filmed at ISE 2026 in Barcelona, so the focus is on integrator-ready formats rather than lab prototypes.</p>
<p>The most interesting takeaway is how transparent LED is converging into a practical media layer: low kg/m² loading, thin profiles, modular splicing, and simplified top-power architecture that reduces install complexity. Add IP-rated outdoor variants and controller-based content workflows, and you get a system that can scale from a small 20×10 cm advertising demo up to multi-meter architectural spans, while slowly pushing total project cost down year over year.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=7kbf-f26X_U">https://www.youtube.com/watch?v=7kbf-f26X_U</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/muxwave-series-f-p32-transparent-led-poster-top-power-feed-ip65-outdoor-media-facade/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138940</post-id>	</item>
		<item>
		<title>NovaStar Infinity + COEX 5G: PWM+PAM driver IC, 2.9M px per link, Nova Cloud</title>
		<link>https://armdevices.net/2026/02/10/novastar-infinity-coex-5g-pwmpam-driver-ic-2-9m-px-per-link-nova-cloud/</link>
					<comments>https://armdevices.net/2026/02/10/novastar-infinity-coex-5g-pwmpam-driver-ic-2-9m-px-per-link-nova-cloud/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 08:11:41 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138938</guid>

					<description><![CDATA[NovaStar walks through its Infinity concept as an end-to-end LED control chain, spanning video processing, sending/receiving, and the LED driver IC layer. The key idea is hybrid PWM + PAM (amplitude) drive, tuned with processing and algorithms to improve low-gray performance, brightness control, black level stability, and refresh behavior without pushing panels into visible flicker [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>NovaStar walks through its Infinity concept as an end-to-end LED control chain, spanning video processing, sending/receiving, and the LED driver IC layer. The key idea is hybrid PWM + PAM (amplitude) drive, tuned with processing and algorithms to improve low-gray performance, brightness control, black level stability, and refresh behavior without pushing panels into visible flicker regimes. https://www.novastar.tech/</p>
<p>A nice detail is how Infinity is shown as something that can travel across the supply chain: the demo ties the control stack to partner display hardware like a BOE 1.25 mm COB module, and the discussion hints at how driver IC choices and calibration coefficients shape what people call “image quality” on fine-pitch LED. The takeaway is that a lot of the perceived sharpness and uniformity comes from the interaction between bit-depth mapping, grayscale linearity, and how the driver allocates current at very low luminance.</p>
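<p>The hybrid PWM + PAM idea can be sketched as splitting each gray level into a coarse current-amplitude step plus a fine duty-cycle remainder. This is an illustrative decomposition, not NovaStar's actual driver algorithm; real ICs map the two axes through calibrated, brightness-linear tables:</p>

```python
def hybrid_drive(gray: int, pam_levels: int = 4, pwm_steps: int = 64):
    """Split a 0..255 gray level into (amplitude_level, pwm_duty).
    Low grays stay on the lowest current amplitude with fine PWM
    resolution, which is exactly where low-gray linearity and black
    stability are won or lost on fine-pitch LED."""
    assert 0 <= gray <= 255
    per_level = 256 // pam_levels          # gray range covered per amplitude step
    level = min(gray // per_level, pam_levels - 1)
    remainder = gray - level * per_level
    duty = round(remainder / per_level * pwm_steps)
    return level, duty
```

<p>The design point: near black, the driver never has to chop a high current into vanishingly short pulses, which is what causes visible flicker and grayscale steps in PWM-only schemes.</p>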
<p>The booth tour then shifts to “LED intelligent playback control” and monitoring, essentially pushing fixed-install LED toward a managed appliance model: TV-style UI workflows, centralized status visibility for processors, and cloud management via Nova Cloud. On the content side, the media-server story is framed around redundancy (one primary plus two backup paths) and scaling across resolutions, which matters for unattended signage and large canvases where a single failure becomes obvious.</p>
<p>Another segment, filmed at ISE 2026 in Barcelona, focuses on thermal compensation: compensating temperature-driven drift to reduce long-run color cast and keep color temperature stable at a more monitor-like level. In practice this kind of control loop is about sustaining chroma consistency over hours of operation, especially when cabinet thermals vary across a wall due to airflow, power density, or mounting geometry.</p>
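<p>A minimal sketch of that compensation loop, assuming linear per-channel drift with temperature (the coefficients below are illustrative; real panels use measured calibration tables, not constants):</p>

```python
def compensate_gains(base_gains, temp_c, ref_temp_c=25.0, coeffs=(-0.002, -0.001, -0.003)):
    """Scale per-channel (R, G, B) gains to counter temperature-driven
    brightness drift. A channel that loses output as it heats (negative
    coefficient) gets its gain boosted, keeping the white point stable."""
    dt = temp_c - ref_temp_c
    return tuple(g / (1.0 + c * dt) for g, c in zip(base_gains, coeffs))

# At 45 C a channel drifting -0.2%/K has lost ~4% output,
# so its gain rises by ~4.2% to hold the same color temperature.
```

<p>Because blue LEDs typically drift fastest, uncompensated walls drift warm over hours of operation, which is the color cast the segment describes.</p>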
<p>Finally, NovaStar highlights its COEX 5G distribution approach versus “1G” Ethernet workflows: roughly 2.9–2.95 million pixels over a single link, which can cut processor count and dramatically reduce cable runs. The side-by-side cabling examples make the point clearly: fewer physical links and fewer failure points, while still fitting into an ecosystem where multiple vendors align around the same next-gen transport and configuration tooling.</p>
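<p>The quoted 2.9–2.95M pixels per link is consistent with simple bandwidth arithmetic at 60 Hz / 24-bit; the 85% protocol efficiency below is our assumption, not a NovaStar figure:</p>

```python
def pixels_per_link(link_gbps: float, refresh_hz: int = 60, bits_per_px: int = 24,
                    efficiency: float = 0.85) -> int:
    """Rough pixel capacity of one distribution link: usable bandwidth
    divided by the per-pixel payload delivered every frame."""
    usable = link_gbps * 1e9 * efficiency
    return int(usable / (refresh_hz * bits_per_px))

# A 5G link carries ~2.95M pixels under these assumptions, matching the
# quoted range; a 1G link manages only ~590k, hence the extra processors
# and cabling in the "1G" side-by-side comparison.
```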
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=2uSZWAnFOzY">https://www.youtube.com/watch?v=2uSZWAnFOzY</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/novastar-infinity-coex-5g-pwmpam-driver-ic-2-9m-px-per-link-nova-cloud/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138938</post-id>	</item>
		<item>
		<title>Vivalyte Phantom Mesh + Dynamic Lightbox + DMX Neon Flex, NovaStar/Colorlight video mapping</title>
		<link>https://armdevices.net/2026/02/10/vivalyte-phantom-mesh-dynamic-lightbox-dmx-neon-flex-novastar-colorlight-video-mapping/</link>
					<comments>https://armdevices.net/2026/02/10/vivalyte-phantom-mesh-dynamic-lightbox-dmx-neon-flex-novastar-colorlight-video-mapping/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 05:11:32 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138936</guid>

					<description><![CDATA[Vivalyte walks through a toolkit that sits between LED display hardware and architectural lighting, where “video-driven light” and “pixel-controlled line light” start to blur. The Phantom Mesh concept is a see-through LED mesh aimed at glass, façades, and staging, with the demo focusing on two transparency/definition tradeoffs: around 6.2 mm pitch for a denser image, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Vivalyte walks through a toolkit that sits between LED display hardware and architectural lighting, where “video-driven light” and “pixel-controlled line light” start to blur. The Phantom Mesh concept is a see-through LED mesh aimed at glass, façades, and staging, with the demo focusing on two transparency/definition tradeoffs: around 6.2 mm pitch for a denser image, and around 10 mm pitch for higher optical transparency. It’s designed to scale into large-format surfaces via small mechanical connection pieces and modular sections, so you can build long runs without turning the build into a heavy video-wall project. https://vivalyte.com/solution/phantom-mesh/</p>
<p>A practical detail is how the mesh is driven: it behaves like a conventional video endpoint, so standard LED control ecosystems like NovaStar or Colorlight can feed it, and then your choice of media server or mapping stack (Pixera, MadMapper, etc.) handles content workflows. In the booth, they describe variants with different power-box placement (top vs top+bottom), plus an exhibition-oriented mesh prototype intended to mount into common booth frame systems like Aluvision and beMatrix, targeting faster rigging and cleaner alignment on show builds.</p>
<p>The other thread is Vivalyte&#8217;s &#8220;Dynamic Lightbox&#8221; idea: combining printed fabric (or other translucent layers) with a low-resolution LED backplane (they mention P20) to add motion, highlights, and day/night effects without the &#8220;raw LED screen&#8221; look. Instead of replacing print, the LED becomes a controllable light engine behind the graphic, so you keep sharp printed detail while selectively animating regions, gradients, and glow. This segment was filmed at ISE 2026 in Barcelona, and it fits the broader trend of hybrid media surfaces that need to read well at close range and still scale to big areas.</p>
<p>For ceilings and interiors, the stretch-ceiling demo shows a layered approach: an acoustic layer, then backlight bars, with tunable-white control from about 1800 K to 6500 K over DMX. The &#8220;big pixel&#8221; approach they mention (large controllable zones) is a reminder that not every surface needs high spatial resolution; for circadian-style ambience, smooth CCT transitions and uniform diffusion matter more than tight pitch.</p>
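<p>A minimal sketch of how a controller might map a target color temperature onto the two 8-bit DMX channels of a warm/cool fixture. Linear blending in CCT is a common simplification; real fixtures typically blend in mireds or use calibrated tables:</p>

```python
def cct_to_dmx(cct_k: float, warm_k: float = 1800.0, cool_k: float = 6500.0):
    """Return (warm_level, cool_level) as 0..255 DMX values for a
    two-channel tunable-white fixture spanning 1800 K to 6500 K."""
    t = (cct_k - warm_k) / (cool_k - warm_k)   # 0.0 at 1800 K, 1.0 at 6500 K
    t = min(max(t, 0.0), 1.0)                  # clamp out-of-range requests
    return round((1 - t) * 255), round(t * 255)
```

<p>A circadian schedule then just ramps the requested CCT slowly over the day, and the smoothness of the transition is limited by DMX resolution and diffusion rather than pixel pitch.</p>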
<p>Finally, the Neon Flex lineup is positioned as architectural &#8220;line media&#8221; rather than faux neon: silicone extrusion with internal LEDs, DMX pixel control, and even a fully 3D-bendable variant with about 25 mm pixel pitch for richer chases and effects. They also point to a two-part construction in the 40F series (cover + strip) to hide starts/ends and enable seamless continuous lines, which is exactly the kind of detail that decides whether an install reads as a clean architectural element in the real world, not a segmented strip.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=yJgxvubpVFU">https://www.youtube.com/watch?v=yJgxvubpVFU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/vivalyte-phantom-mesh-dynamic-lightbox-dmx-neon-flex-novastar-colorlight-video-mapping/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138936</post-id>	</item>
		<item>
		<title>OnSign Digital Signage CMS: Nexmosphere Sensors, Quividi Analytics, Amazon Signage Stick, Mosaic AI</title>
		<link>https://armdevices.net/2026/02/10/onsign-digital-signage-cms-nexmosphere-sensors-quividi-analytics-amazon-signage-stick-mosaic-ai/</link>
					<comments>https://armdevices.net/2026/02/10/onsign-digital-signage-cms-nexmosphere-sensors-quividi-analytics-amazon-signage-stick-mosaic-ai/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 02:11:44 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138934</guid>

					<description><![CDATA[OnSign positions its platform as a cloud-based digital signage CMS for managing distributed screen networks: upload media, build layouts, schedule playlists, and publish to one or many players with tag- and rule-based targeting. A practical detail is the player-side caching model: content is synchronized and stored locally so playback can continue through temporary connectivity loss, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>OnSign positions its platform as a cloud-based digital signage CMS for managing distributed screen networks: upload media, build layouts, schedule playlists, and publish to one or many players with tag- and rule-based targeting. A practical detail is the player-side caching model: content is synchronized and stored locally so playback can continue through temporary connectivity loss, while admins keep control from a web console. https://www.onsign.com/</p>
<p>A big part of the story here is how signage turns into an interactive endpoint when you connect retail and venue hardware. The demo shows Nexmosphere-style triggers such as presence detection, RFID (with antenna), and magnetic or “product-lift” sensing, where removing an item from a shelf can cue a specific creative on the nearest display. This bridges CMS scheduling with real-world events, useful for planograms, end-caps, and guided shopping flows in store.</p>
<p>Captured at ISE 2026 in Barcelona, the walkthrough also highlights how OnSign fits into modern DOOH and retail media patterns: screens in-store for programmatic ad slots, menu boards, corporate comms, public transport info, and in-vehicle or rooftop displays for taxi or transit media. The publishing workflow shown is classic enterprise signage—divide the canvas into zones, assign destinations, and restrict playback by daypart, weekday, location, or metadata tags so the same creative behaves differently by context.</p>
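<p>Those publish restrictions boil down to a predicate evaluated per screen and per moment. A hypothetical sketch of the rule shapes shown (daypart, weekday, tags); the field names are ours, not OnSign&#8217;s schema:</p>

```python
from datetime import datetime

def should_play(item: dict, now: datetime, screen_tags: set[str]) -> bool:
    """Return True if a creative's rules allow playback on this screen now:
    the screen must carry every required tag, the weekday must match,
    and the hour must fall inside the daypart window."""
    if not set(item.get("tags", [])).issubset(screen_tags):
        return False
    if item.get("weekdays") and now.strftime("%a") not in item["weekdays"]:
        return False
    start, end = item.get("daypart", (0, 24))
    return start <= now.hour < end

# A lunch menu board: weekday lunches only, on screens tagged "lobby".
menu = {"tags": ["lobby"], "weekdays": ["Mon", "Tue", "Wed", "Thu", "Fri"],
        "daypart": (11, 14)}
```

<p>The same creative then behaves differently by context, which is the whole point of tag- and rule-based targeting over hand-scheduling each screen.</p>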
<p>Audience measurement appears as a separate but connected layer: a camera can detect approximate viewer counts, “looking vs not looking,” and coarse demographics like gender and age range, then feed that into proof-of-play and campaign reporting. OnSign mentions integration with Quividi-type analytics so advertisers can correlate ad playback with observed attention, and optionally trigger content based on demographic ranges rather than only time-of-day logic.</p>
<p>The AI angle is less “content magic” and more operational reliability: OnSign’s Mosaic concept takes periodic screenshots (shown as every ~12 minutes) and uses automated checks to flag black screens, popups, or frozen playback. Add in AI-assisted grouping, alert rules like CPU thresholds, and QR-driven interactivity (scan-to-landing-page plus on-screen feedback), and you get a CMS that’s pushing toward closed-loop monitoring and smarter orchestration across fleets of screens.</p>
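<p>The screenshot checks are easy to picture in miniature: a black screen is a frame where nearly every pixel is near-black, and frozen playback is two consecutive captures that are identical. The thresholds below are illustrative, since the real Mosaic heuristics are not public:</p>

```python
def looks_black(pixels: list[int], threshold: int = 8, fraction: float = 0.99) -> bool:
    """Flag a screenshot as a probable black screen when almost all of
    its grayscale pixel values fall below a near-black threshold."""
    dark = sum(1 for p in pixels if p < threshold)
    return dark / len(pixels) >= fraction

def looks_frozen(prev: list[int], curr: list[int]) -> bool:
    """Two pixel-identical consecutive screenshots suggest frozen playback
    (long static creatives would need duration-aware logic on top)."""
    return prev == curr
```

<p>Run at the stated ~12-minute cadence across a fleet, even checks this crude turn silent failures into alerts instead of week-old dark screens.</p>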
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=Xsbu6Chq_q4">https://www.youtube.com/watch?v=Xsbu6Chq_q4</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/onsign-digital-signage-cms-nexmosphere-sensors-quividi-analytics-amazon-signage-stick-mosaic-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138934</post-id>	</item>
		<item>
		<title>Pixelhue PixPro A &#124; AV-over-IP + LED controller in one, 16&#215;6 splicing, PD3 4K</title>
		<link>https://armdevices.net/2026/02/10/pixelhue-pixpro-a-av-over-ip-led-controller-in-one-16x6-splicing-pd3-4k/</link>
					<comments>https://armdevices.net/2026/02/10/pixelhue-pixpro-a-av-over-ip-led-controller-in-one-16x6-splicing-pd3-4k/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 23:16:38 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138932</guid>

					<description><![CDATA[Pixelhue walks through PixPro A, positioned as an all-in-one AV-over-IP platform that merges a video decoder/encoder pipeline with an LED controller in the same box, so an LED wall can be driven directly without stacking separate processors. The pitch is a single integrated control layer for distributed video, splicing, and LED output, aimed at command/control [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Pixelhue walks through PixPro A, positioned as an all-in-one AV-over-IP platform that merges a video decoder/encoder pipeline with an LED controller in the same box, so an LED wall can be driven directly without stacking separate processors. The pitch is a single integrated control layer for distributed video, splicing, and LED output, aimed at command/control and large canvas display workflows. https://www.pixelhue.com/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>A core theme is doing high image quality over standard 1GbE instead of requiring 10GbE, while still targeting very low end-to-end delay for operator use. In the demo they switch into a KVM mode and call out sub-2 ms latency, which is the key spec when you want “mouse-feels-local” interaction on remote sources. This is paired with a “visualized digital center” style management interface for monitoring, routing, and layout control in a control-room context.</p>
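As a back-of-envelope illustration of that 1GbE constraint (my own arithmetic, not Pixelhue&#8217;s published codec figures): uncompressed 4K60 far exceeds a gigabit link, so real-time compression is mandatory, while the per-packet wire time itself is only microseconds.

```python
# Why 4K over 1GbE implies compression, and why sub-2 ms end-to-end
# latency is dominated by the codec rather than the wire.

def raw_bitrate_bps(width, height, fps, bits_per_pixel):
    """Uncompressed video bitrate in bits per second."""
    return width * height * fps * bits_per_pixel

# 4K60 with 8-bit 4:2:0 chroma subsampling averages 12 bits per pixel.
raw = raw_bitrate_bps(3840, 2160, 60, 12)   # ~5.97 Gbps
link = 1_000_000_000                        # 1GbE ceiling

print(f"raw 4K60 4:2:0: {raw / 1e9:.2f} Gbps")
print(f"compression needed for 1GbE: {raw / link:.1f}x")

# Serialization delay for one 1500-byte Ethernet frame on 1GbE:
print(f"per-frame wire time: {1500 * 8 / link * 1e6:.0f} us")
```

The ~6x figure is the floor for 4:2:0; full-chroma 4:4:4 roughly doubles it, so nearly all of a sub-2 ms budget is spent encoding and decoding, not on the network.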
<p>The PD3 transceiver family is shown as the building block: compact encode/decode endpoints with HDMI I/O, a quick device-ID/check button, and the ability to repurpose a unit as encoder or decoder depending on configuration. There are variants labeled PD3 2K and PD3 4K, with models that include multiple HDMI inputs/outputs so you can do input backup and fast switching for redundancy. The video was filmed at ISE 2026 in Barcelona, which fits the AV-control and LED ecosystem angle.</p>
<p>On the canvas side, PixPro A demonstrates multi-window splicing up to 16×6, effectively treating many sources as one composited surface for an LED wall or a large multi-display array. That points to typical use cases like situational awareness, SOC/NOC visualization, and surveillance viewing, where you need flexible layout presets, fast recall, and consistent timing across tiles. The “13 million pixels” comment reinforces that they’re targeting large total pixel budgets and wide canvases.</p>
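For a sense of scale, a uniform 16&#215;6 window grid over a canvas in that pixel range works out as follows (the 5120&#215;2560 canvas below is an assumed example chosen to match the &#8220;13 million pixels&#8221; remark, not a published Pixelhue spec):

```python
# Carving a large composited canvas into a uniform 16x6 splicing grid.
# Canvas dimensions are an illustrative assumption, not a product spec.

def grid(canvas_w, canvas_h, cols, rows):
    """Return (tile_w, tile_h, total_pixels) for a uniform window grid."""
    return canvas_w // cols, canvas_h // rows, canvas_w * canvas_h

tw, th, total = grid(5120, 2560, 16, 6)
print(f"tile: {tw}x{th}, canvas: {total / 1e6:.1f} MP")
```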
<p>The competitive framing is against established enterprise AV-over-IP vendors, but with a focus on lowering network and deployment cost by staying on 1GbE while keeping 4K capability and interactive latency low. Technically, it sits at the intersection of IP video distribution, matrix switching, LED processing, and KVM-over-IP, with emphasis on integrated control and failover-friendly HDMI routing. It’s a practical look at how AV-over-IP is being packaged for LED walls and control-room operation.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>source <a href="https://www.youtube.com/watch?v=I_XKyNlwVwQ">https://www.youtube.com/watch?v=I_XKyNlwVwQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/10/pixelhue-pixpro-a-av-over-ip-led-controller-in-one-16x6-splicing-pd3-4k/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138932</post-id>	</item>
		<item>
		<title>Joan ePaper workplace displays: room/desk booking, 13-inch touch, color visitor badges</title>
		<link>https://armdevices.net/2026/02/09/joan-epaper-workplace-displays-room-desk-booking-13-inch-touch-color-visitor-badges/</link>
					<comments>https://armdevices.net/2026/02/09/joan-epaper-workplace-displays-room-desk-booking-13-inch-touch-color-visitor-badges/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 20:11:07 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138930</guid>

					<description><![CDATA[Joan is positioning workplace management as a tight loop between calendar data, physical space, and low-power displays: rooms, desks, visitors, and internal comms all driven by the same scheduling layer. The demo shows how ePaper becomes “always-on” signage without needing power or network cabling at the mount point, while still staying current via cloud sync [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Joan is positioning workplace management as a tight loop between calendar data, physical space, and low-power displays: rooms, desks, visitors, and internal comms all driven by the same scheduling layer. The demo shows how ePaper becomes “always-on” signage without needing power or network cabling at the mount point, while still staying current via cloud sync and device-to-cloud updates. https://getjoan.com/</p>
<p>A standout hardware piece is the large-format, battery-powered e-ink wall display: it mounts like a picture frame, can be lifted off the bracket, then locks back in, running roughly a year per charge before topping up over USB-C. The same idea scales down to the classic meeting-room door plates, where battery life depends on size and update frequency, with touch for ad-hoc booking and front lighting on some models for better readability.</p>
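The year-per-charge claim is plausible from first principles, since ePaper draws meaningful power only while refreshing. A rough estimate with assumed figures (battery capacity, sleep current, and refresh cost below are my illustrative numbers, not Joan&#8217;s measurements):

```python
# Sketch of ePaper sign battery life: sleep current plus per-refresh
# charge. All input figures are assumptions for illustration.

def days_per_charge(battery_mah, sleep_ua, refresh_mas, refreshes_per_day):
    """Battery life in days from sleep drain plus refresh energy."""
    daily_mah = sleep_ua / 1000 * 24 + refresh_mas / 3600 * refreshes_per_day
    return battery_mah / daily_mah

# Assumed: 3000 mAh cell, 25 uA sleep, 100 mA for 3 s per refresh, 100/day
d = days_per_charge(3000, 25, 100 * 3, 100)
print(f"~{d:.0f} days per charge")
```

With these inputs the refresh budget, not sleep drain, dominates, which matches the point in the video that battery life depends on size and update frequency.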
<p>On the room side, the range spans compact door schedulers through a bigger, secure-mount 13-inch touchscreen option that can act as a room overview and on-the-spot booking terminal, including PoE mounting where permanent power is preferred. The “what you see is what you need” UI is built around real-time status, meeting metadata, and quick actions, while keeping the display stack efficient enough for ePaper refresh behavior and long sleep cycles.</p>
<p>For desks, the platform leans into hybrid office realities: interactive maps for desk discovery and booking, visibility of who sits where, and add-ons like desk indicators that show occupancy status and can even integrate charging. Visitor management extends that same workflow into reception with reusable full-color ePaper visitor tags that update wirelessly (Bluetooth Low Energy), so check-in becomes “select badge → push identity card layout,” without burning through disposable print stock.</p>
<p>Late in the interview (filmed at ISE 2026 in Barcelona), Joan also frames AI as reducing “click work”: instead of building a narrow chatbot feature, they describe an MCP-style integration approach so a user can request a coordinated office day—booking a whole team’s desks, a room slot, and supporting logistics—through an assistant interface. The core idea is that Joan sits as the bridge between digital intent (Teams/Google Calendar) and physical execution (signage, wayfinding, check-in), so the office behaves like a programmable system rather than a set of disconnected tools.</p>
<p>source <a href="https://www.youtube.com/watch?v=z3--b1hrvhg">https://www.youtube.com/watch?v=z3--b1hrvhg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/joan-epaper-workplace-displays-room-desk-booking-13-inch-touch-color-visitor-badges/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138930</post-id>	</item>
		<item>
		<title>CVTE interactive flat panel + 15MP AI camera + PCAP 100 touch + AI pen classroom workflow</title>
		<link>https://armdevices.net/2026/02/09/cvte-interactive-flat-panel-15mp-ai-camera-pcap-100-touch-ai-pen-classroom-workflow/</link>
					<comments>https://armdevices.net/2026/02/09/cvte-interactive-flat-panel-15mp-ai-camera-pcap-100-touch-ai-pen-classroom-workflow/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 17:06:56 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138928</guid>

					<description><![CDATA[CVTE frames its “Dream Future” theme around a young engineering culture and a very practical goal: make interactive display hardware easier for schools and meeting rooms to deploy, maintain, and actually use day to day. In this walkthrough, the focus stays on interactive flat panels as a classroom hub, combining touch, compute, and camera so [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>CVTE frames its “Dream Future” theme around a young engineering culture and a very practical goal: make interactive display hardware easier for schools and meeting rooms to deploy, maintain, and actually use day to day. In this walkthrough, the focus stays on interactive flat panels as a classroom hub, combining touch, compute, and camera so teachers can run lessons, annotate content, and support hybrid learning without stacking extra boxes and cables. https://www.cvte.com/</p>
<p>On the panel side, the discussion highlights mainstream Rockchip-based compute for responsive UI and low-latency ink, plus a built-in 15MP camera aimed at distance learning and lecture capture. The camera stack is positioned around everyday pro-AV features like zoom, autoframing, speaker tracking, and “smart gallery” style multi-view, with optional external USB cameras to solve real room geometry problems (for example, putting a lens at the front of the class while the main screen sits behind the teacher) for better sightlines and framing.</p>
<p>Touch and optics are treated as separate “feel” and “look” choices: a PCAP capacitive model is described as a “big iPad” aesthetic for finger-first interaction, while IR touch remains a cost-effective path for large-format classrooms. The PCAP unit is quoted at 50 touch points today with a roadmap toward 100 touch points, and the kid-focused display leans on optical bonding (reduced parallax, better contrast in bright rooms) plus low blue-light certification to support longer sessions with less eye strain at close range.</p>
<p>A second product direction targets family education and kindergarten use, with details that matter in real homes: a plug-in camera module with a physical privacy lid, safety-conscious chassis angles, and content that mixes learning with motion-based activities. The demo includes a camera-driven “jumping game” concept to turn movement into an input modality, which is a nice reminder that computer vision can be part of engagement design, not only a conferencing feature. This demo was filmed at ISE 2026 in Barcelona.</p>
<p>Finally, CVTE positions itself as a large-scale ODM/OEM engine behind many overseas education-display brands, while also showing adjacent ecosystem pieces like adjustable stands, ergonomic student furniture, and mobility-focused classroom layouts. The AI angle stays grounded in teacher workflow too: an “AI pen” concept is described as a remote interaction tool so the teacher can move around the room while still controlling the panel, which ties computer vision, UI control, and classroom management into one coherent usage model.</p>
<p>source <a href="https://www.youtube.com/watch?v=I2XYqcIcNDE">https://www.youtube.com/watch?v=I2XYqcIcNDE</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/cvte-interactive-flat-panel-15mp-ai-camera-pcap-100-touch-ai-pen-classroom-workflow/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138928</post-id>	</item>
		<item>
		<title>AIPC Local LLM Box, ARFirst PC + AI box ROCm Ryzen 7 780M, Ryzen AI Max+ 395 126 TOPS, 128GB, 10GbE</title>
		<link>https://armdevices.net/2026/02/09/aipc-local-llm-box-arfirst-pc-ai-box-rocm-ryzen-7-780m-ryzen-ai-max-395-126-tops-128gb-10gbe/</link>
					<comments>https://armdevices.net/2026/02/09/aipc-local-llm-box-arfirst-pc-ai-box-rocm-ryzen-7-780m-ryzen-ai-max-395-126-tops-128gb-10gbe/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 14:16:43 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138926</guid>

					<description><![CDATA[AIPC shows two takes on AR-first compute: a portable, AR-ready AI PC and a compact local-inference box aimed at running large language models without cloud dependency. The smaller system is positioned as a self-contained workstation with USB-C DisplayPort output for direct headset or display connection, plus enough GPU compute to handle everyday office workloads and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>AIPC shows two takes on AR-first compute: a portable, AR-ready AI PC and a compact local-inference box aimed at running large language models without cloud dependency. The smaller system is positioned as a self-contained workstation with USB-C DisplayPort output for direct headset or display connection, plus enough GPU compute to handle everyday office workloads and light gaming in a single device.</p>
<p>On the handheld/keyboard form factor, the demo unit is specced around AMD Ryzen 7 with Radeon 780M graphics, 32 GB RAM, up to 2 TB storage, and Windows 11 Pro. The pitch is practicality: built-in pointing control, active cooling airflow channels, and an estimated ~8 hours of monitor-style use, with different keyboard layouts possible beyond the US version shown.</p>
<p>The bigger story is the “AIPC” local AI box built around AMD Ryzen AI Max+ 395, quoted at 126 TOPS, paired with Radeon 8060-class integrated graphics and configured up to 96–128 GB memory with 2 TB SSD. The point of the high memory ceiling is straightforward: bigger parameter models, larger context, more KV-cache headroom, and fewer compromises when you try to run heavier Qwen/Llama-class checkpoints locally rather than streaming tokens from a hosted API.</p>
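To see why the memory ceiling matters, here is a rough sizing sketch (model shape and quantization below are illustrative assumptions, not AIPC&#8217;s specs): a 4-bit 70B-class model plus a 32k-token KV cache fits comfortably inside 96&#8211;128 GB, with headroom left for the OS and runtime.

```python
# Rough memory sizing for local LLM inference on a large unified-memory
# box. Model shape and quantization are illustrative assumptions.

def weights_gb(params_b, bytes_per_param):
    """Model weight footprint in GB (e.g. 0.5 bytes/param for 4-bit)."""
    return params_b * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(layers, kv_heads, head_dim, context, bytes_per_elem=2):
    """KV cache for one sequence: K and V tensors per layer, fp16."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Hypothetical 70B-class model, 4-bit quantized:
w = weights_gb(70, 0.5)
# Assumed GQA config: 80 layers, 8 KV heads, 128-dim heads, 32k context
kv = kv_cache_gb(80, 8, 128, 32_768)
print(f"weights: {w:.0f} GB, KV cache @32k: {kv:.1f} GB")
```

The same arithmetic shows why smaller memory ceilings force either heavier quantization or shorter context when running Qwen/Llama-class checkpoints locally.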
<p>The workflow shown is very “local model ops”: a Windows environment with a model marketplace/manager (Nova Studio) to download, start/stop, and swap models, then run offline prompts (including quick multilingual queries) with no internet access. They also demo voice recording and TTS voice cloning to produce speech in another language using the recorded sample, framing the box as a 24/7 agent machine for coding, research, and multimodal generation with predictable cost and privacy characteristics.</p>
<p>Filmed at ISE 2026 in Barcelona, the interview leans into platform tradeoffs: comparisons to Mac Studio and NVIDIA “AI boxes,” plus the AMD ROCm vs CUDA ecosystem discussion, and the practical I/O checklist (HDMI, 10GbE Ethernet, high-speed external storage). The core claim is that this class of high-TOPS APU + large unified memory makes “biggish” local LLM work feel less like a lab setup and more like a normal desktop routine, at a fixed hardware price.</p>
<p>source <a href="https://www.youtube.com/watch?v=J6KHnxFN3ZM">https://www.youtube.com/watch?v=J6KHnxFN3ZM</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/aipc-local-llm-box-arfirst-pc-ai-box-rocm-ryzen-7-780m-ryzen-ai-max-395-126-tops-128gb-10gbe/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138926</post-id>	</item>
		<item>
		<title>Looking Glass 86 HLD + faytech: 4K hololuminescent signage, IR touch, AI avatars</title>
		<link>https://armdevices.net/2026/02/09/looking-glass-86-hld-faytech-4k-hololuminescent-signage-ir-touch-ai-avatars/</link>
					<comments>https://armdevices.net/2026/02/09/looking-glass-86-hld-faytech-4k-hololuminescent-signage-ir-touch-ai-avatars/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 11:16:42 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138924</guid>

					<description><![CDATA[Looking Glass and faytech are showing a new way to deploy “holographic” digital signage without changing your content pipeline: the 86-inch Hololuminescent Display (HLD). The idea is to move the 3D effect into the optical stack, so the playback device and CMS still see a standard 4K screen, while viewers see a person or product [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Looking Glass and faytech are showing a new way to deploy “holographic” digital signage without changing your content pipeline: the 86-inch Hololuminescent Display (HLD). The idea is to move the 3D effect into the optical stack, so the playback device and CMS still see a standard 4K screen, while viewers see a person or product rendered on a fixed spatial stage with convincing depth. https://lookingglassfactory.com/86-hld</p>
<p>In the demo, a simple iPhone video of Looking Glass CEO Shawn is played back as an MP4 through a normal signage workflow, yet it reads as a life-size presence inside the display volume. Because the system behaves like a regular monitor over HDMI or DisplayPort, you can feed it from Windows, macOS, BrightSign-style players, or a workstation GPU, and keep using familiar tools for scheduling, color grading, and campaign variants such as tinting and mood shifts for the same scene.</p>
<p>The conversation frames HLD as a “magic problem” product: it’s meant for retail, lobbies, endcaps, digital-out-of-home, and museum-style storytelling where attention and dwell time matter, not for precision depth measurements. It’s also designed for group viewing without glasses or per-viewer tracking, and the production units are targeted around 400–500 nits for brighter storefront conditions. This video was filmed at ISE 2026 in Barcelona, where the emphasis is clearly on deployability at scale today.</p>
<p>They also switch to an interactive mode that combines the holographic stage with an IR touch overlay, running a Unity application on a compact PC (shown on an Intel NUC). Touch input lets users browse product variants and manipulate “in-box” lighting and styling across the whole volume, which is a useful mental model for POS configurators, virtual shelves, and guided product education where the UI stays 2D-simple while the presentation feels spatial.</p>
<p>The faytech partnership is positioned as the practical integration layer: custom bezels, kiosk enclosures, and fit-and-finish for AV rollouts, plus accessory options as needed. Looking Glass says production units will integrate a 4K camera, microphone, speakers, and touch, which opens the door to telepresence-style “beaming,” guided museum narration, or AI-avatar front-of-house experiences (their earlier Uncle Rabbit demo gets a mention), while keeping the core promise: treat it like a 4K display, get a 3D effect.</p>
<p>source <a href="https://www.youtube.com/watch?v=-RQQXGGrsDU">https://www.youtube.com/watch?v=-RQQXGGrsDU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/looking-glass-86-hld-faytech-4k-hololuminescent-signage-ir-touch-ai-avatars/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138924</post-id>	</item>
		<item>
		<title>Newline Interactive DV Premier+ COB 216in 4K, STV+ 115in 24/7 signage</title>
		<link>https://armdevices.net/2026/02/09/newline-interactive-dv-premier-cob-216in-4k-stv-115in-24-7-signage/</link>
					<comments>https://armdevices.net/2026/02/09/newline-interactive-dv-premier-cob-216in-4k-stv-115in-24-7-signage/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 02:31:18 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138922</guid>

					<description><![CDATA[Newline Interactive is positioning its range around two big AV building blocks: large-format direct-view LED for “main wall” impact, and commercial LCD/IFP for everything else in the room. The highlight is a 216-inch 4K COB DV wall (shown here at 1.25 mm pitch) aimed at meeting and briefing spaces that want a seamless 16:9 canvas [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Newline Interactive is positioning its range around two big AV building blocks: large-format direct-view LED for “main wall” impact, and commercial LCD/IFP for everything else in the room. The highlight is a 216-inch 4K COB DV wall (shown here at 1.25 mm pitch) aimed at meeting and briefing spaces that want a seamless 16:9 canvas with finer LED packaging, tighter pixel geometry, and cleaner black uniformity than older SMD builds. https://newline-interactive.com/eu/</p>
<p>On the signage side, the STV+ family is framed as a 24/7 digital signage display line that scales from 43-inch up to a flagship 115-inch 4K panel. The emphasis is on continuous runtime, IP control for fleet operations, ultra-thin bezel styling, and onboard Android for lightweight playback, plus compatibility with Newline’s Signage Pro platform for typical landscape/portrait deployment in corporate, retail, and public-facing venues.</p>
<p>For collaboration rooms, the demo pivots to “all-in-one” interactive screens with integrated UC hardware: camera, microphone array, and quick access workflows like NFC card login. ARVA Pro is positioned as the infrared touch option, while Vega Pro shifts to PCAP for a more glass-like feel, paired with Google EDLA certification so Teams/Google Workspace deployments can stay inside a managed Android app ecosystem. Vega Pro also leans into meeting capture with an optional 4K AI camera, an 8-mic array, and built-in speakers for a single-device install.</p>
<p>There&#8217;s also a practical deployment angle: a non-Android “C series” path for IT teams that prefer a pure display plus external compute, with an OPS slot-in PC or a Chromebox doing the heavy lifting. Newline also shows a smaller, battery-powered 27-inch mobile display concept (Google EDLA, around a 4-hour battery) to make casting and touch interaction portable for education and ad-hoc collaboration. This walkthrough was filmed at ISE 2026 in Barcelona, so the product mix is clearly tuned for European integrator and channel demand.</p>
<p>Taken together, the story is less about one hero SKU and more about covering the full signal chain in modern spaces: COB direct-view LED (pixel pitch, 4K, 16:9), large-format LCD for signage (4K UHD, IP control, 24/7), and interactive flat panels for UC (EDLA, Android, OPS, PCAP vs IR touch, NFC login, casting). If you’re designing standardized meeting rooms or digital signage fleets, it’s a coherent set of endpoints that can share management patterns while matching the “right display tech” to the viewing distance and use case.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=HZ472LjeNeg">https://www.youtube.com/watch?v=HZ472LjeNeg</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/newline-interactive-dv-premier-cob-216in-4k-stv-115in-24-7-signage/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138922</post-id>	</item>
		<item>
		<title>EXCO Vision ISE 2026 Best of Show Zeus, XR LED volume + narrow-pitch rental, concave/convex stage</title>
		<link>https://armdevices.net/2026/02/09/exco-vision-ise-2026-best-of-show-zeus-xr-led-volume-narrow-pitch-rental-concave-convex-stage/</link>
					<comments>https://armdevices.net/2026/02/09/exco-vision-ise-2026-best-of-show-zeus-xr-led-volume-narrow-pitch-rental-concave-convex-stage/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 00:06:35 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138920</guid>

					<description><![CDATA[EXCO Vision shows how an LED vendor can cover both in-camera VFX and fast-turn rental builds without treating them as separate engineering problems: the same “batch + pitch” discipline is used to keep a ceiling, main wall, and floor aligned for white point, gamma, and color matching across an LED volume. That consistency matters when [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>EXCO Vision shows how an LED vendor can cover both in-camera VFX and fast-turn rental builds without treating them as separate engineering problems: the same “batch + pitch” discipline is used to keep a ceiling, main wall, and floor aligned for white point, gamma, and color matching across an LED volume. That consistency matters when you’re doing virtual production with real lenses, real shutters, and real reflections. https://www.excovision.net/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
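<p>For a sense of scale on those HDMI 2.2 figures, the uncompressed payload of an 8K@60 4:4:4 stream can be estimated with simple arithmetic. This is a rough sketch that counts active pixel data only, ignoring blanking intervals and link-layer encoding overhead, so real link usage is somewhat higher:</p>

```python
def raw_gbps(h, v, fps, bits_per_component, components=3):
    # Active-pixel payload only; ignores blanking, audio, and link
    # encoding overhead, so the actual link rate needed is higher.
    return h * v * fps * bits_per_component * components / 1e9

print(round(raw_gbps(7680, 4320, 60, 10), 1))  # ~59.7 Gbps at 10-bit
print(round(raw_gbps(7680, 4320, 60, 12), 1))  # ~71.7 Gbps at 12-bit
```

<p>Even at 12-bit 4:4:4, the raw 8K@60 payload sits comfortably inside the 96 Gbps Fixed Rate Link budget, which is why those uncompressed full chroma formats become feasible.</p>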
<p>In the interview, the flagship topic is an LED backdrop custom designed for virtual production, where the goal is to replace a grey/green stage with an emissive background that the camera can capture directly. EXCO Vision describes indoor brightness around 3,000 nit and refresh up to 30,000 Hz for the shown configuration, aiming to reduce flicker/banding at challenging shutter angles while keeping contrast high via a black-surface LED design tuned for camera capture.</p>
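<p>To see why a 30,000 Hz refresh matters on camera, a quick back-of-the-envelope calculation shows how many LED refresh cycles fit inside one exposure. The 30,000 Hz figure comes from the interview; the camera settings below are generic film examples (24 fps, 180° shutter), not numbers from the video:</p>

```python
def cycles_per_exposure(refresh_hz, fps, shutter_deg):
    # Shutter-angle exposure time: fraction of the frame interval
    # during which the sensor is actually gathering light.
    exposure_s = (shutter_deg / 360.0) / fps
    return refresh_hz * exposure_s

print(cycles_per_exposure(30_000, 24, 180))  # 625.0 cycles per frame
print(cycles_per_exposure(30_000, 24, 90))   # 312.5 cycles per frame
```

<p>Hundreds of complete refresh cycles per exposure is what averages out PWM flicker, even when productions tighten the shutter angle.</p>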
<p>From the product naming used by EXCO Vision, Zeus and Apollo sit in the XR/virtual shooting family, with marketing that leans on high refresh and high grayscale for smoother motion rendering and better moiré control at narrow pixel pitches. The Zeus line is also notable because it was listed as a Best of Show winner at ISE 2026 by TVBEurope, which signals it was evaluated in a broadcast-facing context rather than pure signage. The video was filmed at ISE 2026 in Barcelona, where LED volumes, tracking, and real-time rendering are now standard talking points across broadcast, cinema, and live event work.</p>
<p>Alongside VP, the booth walk highlights a rental-oriented setup described as a narrow-pitch rental platform that can be deployed as ceiling, wall, and floor in the same pitch and batch. The practical angle is modularity: straight builds plus concave/convex sections, including tunnel-style layouts, so you can create immersive spaces without sacrificing calibration continuity across the full surface.</p>
<p>On company context, the transcript states EXCO Vision was founded in 2017, manufactures in Shenzhen, and is backed by shareholders that include Youku as a major investor tied to film/TV production activity in China. Their stated plan is broader reach beyond China and the US, targeting more European rollouts during 2026 as XR stages and rental hybrids become more common in regional production pipelines.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=y3AQMm4NeOc">https://www.youtube.com/watch?v=y3AQMm4NeOc</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/09/exco-vision-ise-2026-best-of-show-zeus-xr-led-volume-narrow-pitch-rental-concave-convex-stage/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138920</post-id>	</item>
		<item>
		<title>Logitech Rally AI Camera Pro + Rally Board + Sight: RightSight 2 framing, Meet Android, Teams Rooms</title>
		<link>https://armdevices.net/2026/02/08/logitech-rally-ai-camera-pro-rally-board-sight-rightsight-2-framing-meet-android-teams-rooms/</link>
					<comments>https://armdevices.net/2026/02/08/logitech-rally-ai-camera-pro-rally-board-sight-rightsight-2-framing-meet-android-teams-rooms/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 08 Feb 2026 17:41:20 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138918</guid>

					<description><![CDATA[Logitech’s booth focuses on making meeting-room video behave more like a directed shoot: on-device AI combines image analysis with audio cues to keep the right faces in frame, without the operator “driving” PTZ all day. The new Rally AI Camera Pro and Rally AI Camera sit above the familiar Rally lineup, targeting bigger spaces where [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Logitech’s booth focuses on making meeting-room video behave more like a directed shoot: on-device AI combines image analysis with audio cues to keep the right faces in frame, without the operator “driving” PTZ all day. The new Rally AI Camera Pro and Rally AI Camera sit above the familiar Rally lineup, targeting bigger spaces where speaker handoffs, whiteboards, and side conversations can break basic auto-framing. https://www.logitech.com/en-us/products/video-conferencing.html</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>Rally AI Camera Pro is aimed at complex rooms with a dual-camera concept: a wide context view plus a PTZ view with hybrid zoom, so the system can keep a stable “room read” while still punching in on a presenter. Logitech ties this to RightSight 2 modes (grid, group, speaker) and to multi-camera deployments that conferencing platforms can manage for consistent composition as people move.</p>
<p>The tour also revisits Rally Bar as a core appliance for mid-size rooms, where dual lenses, tuned optics, and DSP are used to get the most out of 1080p cloud calls by feeding them cleaner, sharper source video. The underlying point is that better sensors and more processing headroom still matter even when the transport codec is the bottleneck, because improved exposure, color, and noise behavior survive compression more cleanly.</p>
<p>On the interactive side, Rally Board is shown in two flavors: an Android-based appliance that runs Google Meet natively, and a Windows-based Microsoft Teams Rooms variant for organizations that can’t put Android endpoints on their network. Both keep touch workflows for whiteboarding and content sharing, and Logitech hints at depth/IR sensing to build a 3D room map for more reliable tracking plus privacy-aware background masking. This segment was filmed at ISE 2026 in Barcelona, giving a useful snapshot of where Logitech’s room lineup stands today.</p>
<p>Finally, Logitech Sight tackles the long-table problem by adding a tabletop viewpoint that’s “almost 360” (about 315° to avoid shooting the front display), with built-in microphones and direction-of-arrival cues to help remote participants see and hear whoever is speaking at the far end. In a dual-Sight layout, coverage can stretch along longer tables and produce multiple participant tiles (up to eight) so discussion feels less like a single wide shot and more like a set of close conversational angles.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=VAAYpAS0DWU">https://www.youtube.com/watch?v=VAAYpAS0DWU</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/08/logitech-rally-ai-camera-pro-rally-board-sight-rightsight-2-framing-meet-android-teams-rooms/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138918</post-id>	</item>
		<item>
		<title>BOE at ISE 2026: Curved COB, Chip-on-Glass, MPD 0.6mm LED, Magic LED eye comfort, Spectra 6 ePaper</title>
		<link>https://armdevices.net/2026/02/08/boe-at-ise-2026-curved-cob-chip-on-glass-mpd-0-6mm-led-magic-led-eye-comfort-spectra-6-epaper/</link>
					<comments>https://armdevices.net/2026/02/08/boe-at-ise-2026-curved-cob-chip-on-glass-mpd-0-6mm-led-magic-led-eye-comfort-spectra-6-epaper/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sun, 08 Feb 2026 09:56:15 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138916</guid>

					<description><![CDATA[BOE’s booth tour focuses on how packaging and drive architecture are reshaping fine-pitch direct-view LED: full flip-chip COB with black film encapsulation, common-cathode driving, and increasingly glass-based approaches that aim for better thermal behavior and more uniform luminance. The walkthrough starts with a curved COB concept that relies on an ultra-thin module to make bending [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>BOE’s booth tour focuses on how packaging and drive architecture are reshaping fine-pitch direct-view LED: full flip-chip COB with black film encapsulation, common-cathode driving, and increasingly glass-based approaches that aim for better thermal behavior and more uniform luminance. The walkthrough starts with a curved COB concept that relies on an ultra-thin module to make bending feasible, with a stated curvature limit around a 600 mm radius for concave or convex installs. https://www.boe.com/en/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>After that, the comparison ladder makes the tradeoffs concrete: a “standard” COB around 1.25 mm pitch is framed as a cost-led choice (about 600 nits and 5,000:1 contrast, using Nova control), while a 0.9 mm “ultra” step pushes toward roughly 2,000 nits and 20,000:1 with tighter seam management and a higher-end control path that can drive more cabinets per controller to reduce cabling and power distribution complexity. This segment was filmed at ISE 2026 in Barcelona, and it’s useful as a quick reference for why pitch is not the only quality lever in retail and corporate AV.</p>
<p>The “highest end” theme is then tied to chip-on-glass (COG): placing LEDs directly on glass is presented as the route for going below what COB typically targets, while keeping a slim module and consistent optical behavior. In parallel, BOE shows Micro Pixel Device (MPD) development at around 0.6 mm pitch with a quoted cabinet depth near 2.4 cm, positioned as a prototype path to thinner, tighter LED walls where mechanical depth and heat paths often set the real limits.</p>
<p>A standout demo is “Magic LED,” described as using pixel-level distortion to reduce real electrical brightness (example given around 800 nits) while preserving a perceived high-brightness look closer to a higher mode (example referenced around 2,000 nits), aimed at eye comfort and lower energy draw. There’s also a glossy surface film approach that makes an LED wall read more like LCD/OLED from a distance by masking division lines, plus a quick look at modular serviceability (magnetic tiles, visible power/drive components) and how SMD, COB, and COG raise the production hurdle in different ways.</p>
<p>Beyond LED, BOE pivots into wider display categories: an interactive LCD with a new polarizer/film for improved off-axis viewing, and local dimming with 288 zones to manage contrast and power by content. The ePaper area highlights Spectra 6 full-color with a noted operating range of roughly -25 to 60°C and battery operation, alongside the familiar multi-second refresh behavior. The tour ends with semi-outdoor LED (example referenced at 4,000 nits and ~1.9 mm pitch) plus a sensor cluster concept for fleet management—temperature, ambient light, water/rain logging, door and impact/vandal alerts—before closing on factory sustainability and recycled-content efforts as part of BOE’s manufacturing story.</p>
<p>I&#8217;m publishing about 60+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=E9RKZ3V3k2I">https://www.youtube.com/watch?v=E9RKZ3V3k2I</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/08/boe-at-ise-2026-curved-cob-chip-on-glass-mpd-0-6mm-led-magic-led-eye-comfort-spectra-6-epaper/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138916</post-id>	</item>
		<item>
		<title>Leyard Planar at ISE 2026: Komodo 0.7mm LED + 8K COB MicroLED wall + IP65 floor + megapixel HELIOS</title>
		<link>https://armdevices.net/2026/02/07/leyard-planar-at-ise-2026-komodo-0-7mm-led-8k-cob-microled-wall-ip65-floor-megapixel-helios/</link>
					<comments>https://armdevices.net/2026/02/07/leyard-planar-at-ise-2026-komodo-0-7mm-led-8k-cob-microled-wall-ip65-floor-megapixel-helios/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 07 Feb 2026 19:01:38 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138914</guid>

					<description><![CDATA[Leyard Planar walks through a booth built around end-to-end LED video wall engineering, where mechanics, mounting tolerances and pixel-level calibration matter as much as the diodes. The theme is “own the stack”: structural frames, cabinet geometry, surface protection and processing are treated as one system rather than separate parts. https://www.planar.com/products/led-video-walls/planar-komodo-series/ &#8212; HDMI® Technology is the [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Leyard Planar walks through a booth built around end-to-end LED video wall engineering, where mechanics, mounting tolerances and pixel-level calibration matter as much as the diodes. The theme is “own the stack”: structural frames, cabinet geometry, surface protection and processing are treated as one system rather than separate parts. https://www.planar.com/products/led-video-walls/planar-komodo-series/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>On the creative side, you see LED used as architectural material: a 1.3 mm pixel-pitch floor rated IP65 against liquid ingress, with anti-slip and optional pressure-based touch interaction for wayfinding or branded experiences. The True Curve concept pushes that idea further with a 1.8 mm module that follows a physical form via a laser-cut back plate, locating pins and magnetic positioning, so designers can build repeatable curves without fighting seam alignment at every edge.</p>
<p>For large canvas installs, the conversation shifts to fine-pitch COB MicroLED where durability and thermal behavior become practical deployment issues. A 16:9 8K wall is described as four 4K viewports stitched into one surface, using a hard epoxy coating to handle knocks in public areas while keeping radiant heat low. The wall shown is around 0.9 mm pitch in a 12×12 cabinet layout with XYZ structural adjustment to minimize visible joins, filmed at ISE 2026 in Barcelona.</p>
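<p>Pitch and resolution fix the physical canvas size, so the quoted numbers can be sanity-checked with simple arithmetic. This sketch assumes the wall is exactly 8K (7680×4320) at exactly 0.9 mm pitch; the actual cabinet dimensions were not stated in the video:</p>

```python
def wall_size_m(h_px, v_px, pitch_mm):
    # Physical canvas size implied by resolution and pixel pitch.
    return h_px * pitch_mm / 1000, v_px * pitch_mm / 1000

w, h = wall_size_m(7680, 4320, 0.9)
print(round(w, 3), "x", round(h, 3), "m")  # 6.912 x 3.888 m
```

<p>Under those assumptions the wall comes out around 6.9 × 3.9 m, which in a 12×12 layout would imply cabinets near 576 × 324 mm each.</p>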
<p>Installation flexibility shows up again in cabinets offered in multiple heights (from 250×250 up to 250×1000 mm), letting integrators tile around doors, elevators and control interfaces instead of forcing rectangular voids. A slim, portrait-format faceted curve wall is quoted at roughly 35 mm depth, up to 2,000 nits brightness, and uses off-board power to reduce cabinet weight and on-surface heat while enabling redundancy, a useful pattern for a mission-critical control room.</p>
<p>The flagship focus is the Planar Komodo fine-pitch lineup, highlighted here at true 0.7 mm with ~22-inch diagonal cabinets for dense pixel count in smaller footprints, aimed at premium home cinema, simulation and gaming. Key specs include COB MicroLED, DCI-P3 wide-gamut color, 120 Hz source support, high refresh, plus genlock and pixel-lock. The tour also contrasts standard SMD with MicroLED-in-Package (MIP) for sharper uniformity, and ends on 2.6 mm transparent LED at about 70% translucency for glass and window installs, keeping the “build the whole system” idea intact.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=JyAqysPqf7E">https://www.youtube.com/watch?v=JyAqysPqf7E</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/07/leyard-planar-at-ise-2026-komodo-0-7mm-led-8k-cob-microled-wall-ip65-floor-megapixel-helios/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138914</post-id>	</item>
		<item>
		<title>QOMO wireless 4K doc cams, 100x optical zoom, SimpleBoard modular OPS workflow</title>
		<link>https://armdevices.net/2026/02/07/qomo-wireless-4k-doc-cams-100x-optical-zoom-simpleboard-modular-ops-workflow/</link>
					<comments>https://armdevices.net/2026/02/07/qomo-wireless-4k-doc-cams-100x-optical-zoom-simpleboard-modular-ops-workflow/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 07 Feb 2026 14:06:48 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138912</guid>

					<description><![CDATA[QOMO’s collaboration stack in this video is built around making capture, annotation, and sharing feel “plug-and-present” instead of “IT project”. The demo mixes wireless document cameras (8MP/4K-class imaging) with higher-end optical models that prioritize autofocus stability, sensor detail, and real-time viewing for classrooms, boardrooms, and public-sector briefing spaces. https://qomo.com/products/interactive-displays/bundleboard-i/ &#8212; HDMI® Technology is the foundation [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>QOMO’s collaboration stack in this video is built around making capture, annotation, and sharing feel “plug-and-present” instead of “IT project”. The demo mixes wireless document cameras (8MP/4K-class imaging) with higher-end optical models that prioritize autofocus stability, sensor detail, and real-time viewing for classrooms, boardrooms, and public-sector briefing spaces. https://qomo.com/products/interactive-displays/bundleboard-i/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
<p>A big theme is optics as a UX feature: the optical document camera shown can hold focus on fine textures and objects with deep zoom (called out at up to 100x), while gooseneck positioning and one-touch autofocus make it practical for live demos rather than only static shots. The preview-screen form factor is also aimed at presenter ergonomics, so you can verify framing without turning to the projection surface.</p>
<p>On the display side, SimpleBoard is positioned as an OS-less interactive panel that stays hardware-neutral: you add the compute you want (OPS/PC module, Windows box, etc.) and standardize touch + display without forcing a fixed Android image. In this segment filmed at ISE 2026 in Barcelona, they pair it with a Windows module and conferencing/collaboration workflow (Zoom, Microsoft Teams, Google Meet) plus a 4K camera for more consistent room video.</p>
<p>Interactivity is covered two ways: audience response keypads (IR-based) that integrate tightly with PowerPoint for live voting and lesson “gamification”, and a separate wireless presentation link for meeting rooms where you can’t (or shouldn’t) join the corporate LAN. The QShare/Q-series transceiver concept is simple: HDMI or USB-C in, direct wireless out to the display, with USB used for power, so guest presenters can share content without network onboarding.</p>
<p>BundleBoard i then shows the “all-in-one” route: Android 14 with Google ecosystem support and Play Store access, plus whiteboarding and fast UI response for multi-app workflows. The mobile 32-inch interactive screen rounds it out as a battery-backed, wheeled endpoint with a built-in camera and a physical privacy shutoff, aimed at ad-hoc huddles, training corners, and flexible classrooms.</p>
<p>I&#8217;m publishing about 75+ videos from ISE 2026, check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC using the dual wireless DJI Mic 2 microphones with the DJI lapel microphone https://amzn.to/3XIj3l8 )</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=M0g6rxhKRT4">https://www.youtube.com/watch?v=M0g6rxhKRT4</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/07/qomo-wireless-4k-doc-cams-100x-optical-zoom-simpleboard-modular-ops-workflow/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138912</post-id>	</item>
		<item>
		<title>Absen at ISE 2026: XL COB P1.5/P1.8 + CL COB contrast + JDH transparent + iCon foldable AIO LED</title>
		<link>https://armdevices.net/2026/02/07/absen-at-ise-2026-xl-cob-p1-5-p1-8-cl-cob-contrast-jdh-transparent-icon-foldable-aio-led/</link>
					<comments>https://armdevices.net/2026/02/07/absen-at-ise-2026-xl-cob-p1-5-p1-8-cl-cob-contrast-jdh-transparent-icon-foldable-aio-led/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 07 Feb 2026 11:16:25 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138910</guid>

					<description><![CDATA[Absen walks through a set of LED display building blocks that map nicely to real retail and venue constraints: viewing distance, budget per square metre, impact resistance, and how much “black level” you can hold under mixed ambient light. The XL COB focus is interesting precisely because it is not chasing ultra-fine pitch; P1.5 and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Absen walks through a set of LED display building blocks that map nicely to real retail and venue constraints: viewing distance, budget per square metre, impact resistance, and how much “black level” you can hold under mixed ambient light. The XL COB focus is interesting precisely because it is not chasing ultra-fine pitch; P1.5 and P1.8 COB aims at affordable near-field sharpness while keeping the practical advantages of a resin-coated surface for durability, cleaning, and handling in public space. https://www.absen.com/</p>
<p>&#8212;<br />
HDMI® Technology is the foundation for the worldwide ecosystem of HDMI-connected devices; integrated with displays, set-top boxes, laptops, audio video receivers and other product types. Because of this global usage, manufacturers, resellers, integrators and consumers must be assured that their HDMI® products work seamlessly together and deliver the best possible performance by sourcing products from licensed HDMI Adopters or authorized resellers. For HDMI Cables, consumers can look for the official HDMI® Cable Certification Labels on packaging. Innovation continues with the latest HDMI 2.2 Specification that supports higher 96Gbps bandwidth and next-gen HDMI Fixed Rate Link technology to provide optimal audio and video for a wide range of device applications. Higher resolutions and refresh rates are supported, including up to 12K@120 and 16K@60. Additionally, more high-quality options are supported, including uncompressed full chroma formats such as 8K@60/4:4:4 and 4K@240/4:4:4 at 10-bit and 12-bit color.<br />
&#8212;</p>
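<p>As a sanity check on the headline formats above, the raw active-pixel data rate can be computed directly. This is a back-of-the-envelope sketch that ignores blanking intervals and Fixed Rate Link overhead, so real link budgets run somewhat higher:</p>

```python
def raw_video_gbps(width, height, fps, bits_per_component, components=3):
    """Uncompressed active-pixel data rate in Gbps, ignoring blanking and
    FRL overhead (actual link usage runs somewhat higher)."""
    return width * height * fps * components * bits_per_component / 1e9

# The two full-chroma examples quoted in the spec highlights:
eightk_60 = raw_video_gbps(7680, 4320, 60, 10)   # 8K@60 4:4:4, 10-bit
fourk_240 = raw_video_gbps(3840, 2160, 240, 12)  # 4K@240 4:4:4, 12-bit
```

<p>Both work out to roughly 60-72 Gbps of active pixel data, which is why those formats need the 96 Gbps tier: the remaining headroom absorbs blanking, audio, and link overhead.</p>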
<p>A second thread is physical deployment: the double-sided KDS P2.6 uses flip-chip SMD in a slim (claimed 38 mm) profile, which matters for hanging weight, ceiling loads, and clean sightlines in transport hubs and mall atriums. Double-sided LED also changes content strategy: independent brightness per side lets you tune for glare, daylight, or interior mood without running two separate structures.</p>
<p>COB gets pushed beyond “just protection” with textured COB, where the encapsulation layers add a material-like finish (wood or marble effect) that reads more like interior architecture than a typical emissive wall. The premium COB highlight is the fine-pitch CL-class concept (0.9 / 1.2 pitch mentioned) with very high contrast (30,000:1 stated), 1,200 nit typical brightness and 1,600 nit peak, plus the “cool to the touch” angle that usually points toward flip-chip plus common-cathode style power architecture and tighter thermal control for long duty cycles.</p>
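<p>To see why the stated 30,000:1 mostly holds in controlled light, a simple additive ambient-reflection model (illustrative only, using the figures quoted above) shows how reflected light compresses the contrast a viewer actually perceives:</p>

```python
def effective_contrast(white_nits, native_contrast, reflected_ambient_nits):
    """Contrast the viewer sees once ambient light reflecting off the panel
    surface lifts the black level (simple additive model, not a spec)."""
    black_nits = white_nits / native_contrast  # native black level
    return ((white_nits + reflected_ambient_nits)
            / (black_nits + reflected_ambient_nits))

dark_room = effective_contrast(1200, 30000, 0.0)  # native figure holds
retail = effective_contrast(1200, 30000, 1.0)     # ~1 nit reflected ambient
```

<p>Even about 1 nit of reflected ambient light pulls 30,000:1 down to roughly 1,150:1, which is why surface reflectance and the COB encapsulation matter as much as the native contrast number.</p>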
<p>Filmed at ISE 2026 in Barcelona, the transparent JDH series demo shows why transparent LED keeps showing up in storefront and façade briefs: you can run bright outward-facing content (about 2,000 nits mentioned) while preserving daylight, visibility, and a less “blocked off” feeling from the inside. It also becomes a passive shading layer, reducing harsh sun while still allowing the space to read as open rather than walled in.</p>
<p>The all-in-one iCon/ICON approach ties it together for integrators who need fast deployment: standard 110/136/163-inch classes, integrated control and audio, and a foldable chassis idea that is less about spectacle and more about logistics (fit in an elevator, move between floors, redeploy for events). The key takeaway is that the LED “canvas” is only half the system: resolution targets, processing load, and content workflow (including AI-assisted generation for higher pixel-count canvases) are what decide whether the installation communicates clearly or just glows.</p>
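<p>The resolution-target point is easy to make concrete: for a 16:9 canvas, the diagonal and pixel pitch fix the native pixel count. A rough sketch follows; the size/pitch pairing is assumed for illustration, not a published Absen spec:</p>

```python
import math

def led_canvas_pixels(diagonal_in, pitch_mm, aspect=(16, 9)):
    """Approximate native pixel grid for an LED canvas with the given
    diagonal (inches) and pixel pitch (mm)."""
    aw, ah = aspect
    diag_mm = diagonal_in * 25.4
    scale = diag_mm / math.hypot(aw, ah)  # mm per aspect unit
    return int(aw * scale // pitch_mm), int(ah * scale // pitch_mm)

w, h = led_canvas_pixels(163, 1.5)  # 163-inch class at an assumed P1.5
```

<p>A 163-inch class at P1.5 works out to roughly 2400 x 1350 pixels, i.e. between full HD and 4K, which is exactly the kind of number that drives the processing-load and content-workflow point.</p>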
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=eUM3RaPm4Hc">https://www.youtube.com/watch?v=eUM3RaPm4Hc</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/07/absen-at-ise-2026-xl-cob-p1-5-p1-8-cl-cob-contrast-jdh-transparent-icon-foldable-aio-led/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138910</post-id>	</item>
		<item>
		<title>Cisco AI workspaces at ISE 2026: PoE AV, Room Navigator sensors, Meraki MT15 + Splunk dashboards</title>
		<link>https://armdevices.net/2026/02/07/cisco-ai-workspaces-at-ise-2026-poe-av-room-navigator-sensors-meraki-mt15-splunk-dashboards/</link>
					<comments>https://armdevices.net/2026/02/07/cisco-ai-workspaces-at-ise-2026-poe-av-room-navigator-sensors-meraki-mt15-splunk-dashboards/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 07 Feb 2026 08:11:27 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138908</guid>

					<description><![CDATA[Cisco frames the booth tour around “AI-powered workspaces” as a stack that starts with collaboration endpoints but quickly expands into building telemetry and operational analytics, so IT and facilities can make data-driven decisions on space usage, comfort, and energy. The core idea is to reduce friction for hybrid work while keeping deployments platform-flexible (Webex native, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Cisco frames the booth tour around “AI-powered workspaces” as a stack that starts with collaboration endpoints but quickly expands into building telemetry and operational analytics, so IT and facilities can make data-driven decisions on space usage, comfort, and energy. The core idea is to reduce friction for hybrid work while keeping deployments platform-flexible (Webex native, or re-registered for Microsoft Teams, while still being able to join Zoom and Google Meet). https://www.webex.com/us/en/devices/desk-series/cisco-desk-pro.html</p>
<p>On the collaboration side, the Desk Pro Gen 2 is positioned as more than an executive desktop: with a wide-angle 4K camera and touch workflow for whiteboarding and control, it can drop into a small huddle room as an all-in-one endpoint, with tighter framing and digital pan/tilt/crop for cleaner participant views in a compact room.</p>
<p>For larger spaces, the tour highlights “distance zero” meeting design using side cameras to give a cross-table perspective when discussion flows laterally, so remote participants see natural eye-lines instead of a single front-wall shot. Audio is treated as a sensor array too, with the ceiling beamforming mic concept (64 mic elements forming 8 adaptive beams, with echo handling per beam) to keep pickup stable even when the active speaker is several meters away in a busy space.</p>
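<p>The beamforming described above can be sketched in miniature: a delay-and-sum beam on a linear mic array passes sound from the steered direction and attenuates the rest. The geometry, sample rate, and array size here are hypothetical and far smaller than the 64-element ceiling array in the tour:</p>

```python
import math

C_SOUND = 343.0  # speed of sound, m/s
FS = 48000       # sample rate, Hz

def arrival_shift(mic_index, spacing_m, angle_deg):
    """Integer-sample plane-wave delay at one mic of a linear array
    (0 degrees = broadside, far-field assumption)."""
    return round(mic_index * spacing_m
                 * math.sin(math.radians(angle_deg)) / C_SOUND * FS)

def delay_and_sum(signals, spacing_m, steer_deg):
    """Advance each mic signal by its steering delay, then average: sound
    from steer_deg adds coherently, other directions partially cancel."""
    shifts = [arrival_shift(i, spacing_m, steer_deg)
              for i in range(len(signals))]
    n = len(signals[0])
    lo, hi = max(0, -min(shifts)), n - max(0, max(shifts))
    return [sum(sig[t + s] for sig, s in zip(signals, shifts)) / len(signals)
            for t in range(lo, hi)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# A 2 kHz tone arriving from +30 degrees at an 8-mic array, 4 cm spacing.
n_mics, spacing, n = 8, 0.04, 4800
tone = [math.sin(2 * math.pi * 2000.0 * t / FS) for t in range(n)]
mics = [[tone[t - arrival_shift(i, spacing, 30.0)]
         if t >= arrival_shift(i, spacing, 30.0) else 0.0
         for t in range(n)]
        for i in range(n_mics)]

on_beam = rms(delay_and_sum(mics, spacing, 30.0))    # steered at the talker
off_beam = rms(delay_and_sum(mics, spacing, -30.0))  # steered away
```

<p>Steering at the talker recovers nearly the full tone level, while steering 60 degrees away drops the output by an order of magnitude; a multi-beam system runs several such beams in parallel and picks or mixes them adaptively.</p>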
<p>The workplace layer is where Cisco connects collaboration hardware to smart-building signals: Room Navigator-style touch panels surface temperature, humidity and air-quality context where people actually sit, while the video system can add people-count and presence data. That gets blended with network indicators like Wi-Fi association trends for occupancy, then extended with purpose-built sensors like the Meraki MT15 for CO2 and air-quality metrics that affect focus and comfort in real workdays.</p>
<p>Splunk is presented as the unifying data platform for security, observability, and facilities dashboards, including edge reduction via Splunk Edge Hub so raw sensor chatter becomes normalized, usable time-series data before it hits the cloud. The tour also nods to Power over Ethernet as a physical infrastructure strategy (up to 90 W on a single cable) for lighting, desk power, and automation triggers, making reconfiguration easier and shifting more of the workplace into software-defined control.</p>
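<p>The edge-reduction idea can be sketched as a bucketed rollup: collapse high-rate sensor samples into one averaged point per interval before forwarding upstream. This is a generic sketch of the technique, not Splunk Edge Hub's actual pipeline:</p>

```python
from collections import defaultdict

def bucket_average(readings, bucket_s=60):
    """Collapse raw (timestamp_s, value) sensor chatter into one averaged
    point per bucket_s interval, keyed by the bucket's start time."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[int(ts // bucket_s) * bucket_s].append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

# e.g. noisy CO2 samples every few seconds -> one point per minute
co2 = [(0, 420.0), (15, 430.0), (45, 425.0), (70, 600.0), (95, 610.0)]
rollup = bucket_average(co2, bucket_s=60)
```

<p>Five raw samples become two clean per-minute points, which is the shape a cloud time-series index actually wants to ingest.</p>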
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=zCF_bb7LpVQ">https://www.youtube.com/watch?v=zCF_bb7LpVQ</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/07/cisco-ai-workspaces-at-ise-2026-poe-av-room-navigator-sensors-meraki-mt15-splunk-dashboards/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138908</post-id>	</item>
		<item>
		<title>TSL Hummingbird NDI routing control, tally mapping, presets, Linux VM broadcast panels</title>
		<link>https://armdevices.net/2026/02/07/tsl-hummingbird-ndi-routing-control-tally-mapping-presets-linux-vm-broadcast-panels/</link>
					<comments>https://armdevices.net/2026/02/07/tsl-hummingbird-ndi-routing-control-tally-mapping-presets-linux-vm-broadcast-panels/#respond</comments>
		
		<dc:creator><![CDATA[Charbax]]></dc:creator>
		<pubDate>Sat, 07 Feb 2026 05:02:17 +0000</pubDate>
				<category><![CDATA[Exclusive videos]]></category>
		<guid isPermaLink="false">https://armdevices.net/?p=138906</guid>

					<description><![CDATA[TSL Products’ Hummingbird control family is shown here as a way to treat NDI streams like “real” broadcast sources: discover endpoints, build routes, and drive operator workflows from the same control surface used for SDI routers and SMPTE ST 2110 systems. The key shift is adding NDI routing control into a broadcast-grade control layer, so [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>TSL Products’ Hummingbird control family is shown here as a way to treat NDI streams like “real” broadcast sources: discover endpoints, build routes, and drive operator workflows from the same control surface used for SDI routers and SMPTE ST 2110 systems. The key shift is adding NDI routing control into a broadcast-grade control layer, so tallies, salvos/presets, and structured operational logic can sit on top of an IP media fabric without forcing operators into a totally new interface. https://tslproducts.com/ndi</p>
<p>A practical takeaway is how NDI’s value proposition changes once control is no longer the weak link. NDI can move high-quality video and audio over standard 1 GbE networks, which makes it attractive for cost and deployment density, but many broadcast teams historically ignored it because it didn’t “feel” like a routed, automated plant. By binding NDI into the same routing and monitoring paradigm as baseband, Hummingbird makes it easier to scale from a few sources to a large device estate without losing operational discipline.</p>
<p>The video also highlights the human-factors side of control: hard-button panels still matter in fast, mission-critical production because operators can work by touch while watching program or multiview. In this setup, those panels become NDI-aware, so button presses can trigger NDI routes, tally mapping, and preset management with the kind of immediacy expected from decades of SDI. NDI endpoints still handle encode/decode in their own hardware or software (CPU, FPGA, and similar), while Hummingbird focuses on orchestration, routing intent, and state.</p>
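<p>The control-layer ideas here — routes, salvos/presets, and tally derived from route state — can be sketched as a tiny state machine. All names below are illustrative; this is not TSL's actual API:</p>

```python
class RouterControl:
    """Toy broadcast-style control layer: sources, routes, salvos, tally."""

    def __init__(self):
        self.sources = {}  # source name -> NDI stream URL
        self.routes = {}   # destination name -> source name
        self.salvos = {}   # salvo name -> snapshot of the route table

    def discover(self, name, url):
        self.sources[name] = url

    def take(self, destination, source):
        if source not in self.sources:
            raise KeyError(f"unknown source: {source}")
        self.routes[destination] = source

    def store_salvo(self, name):
        self.salvos[name] = dict(self.routes)

    def fire_salvo(self, name):
        for dest, src in self.salvos[name].items():
            self.take(dest, src)

    def tally(self, program="PGM", preview="PVW"):
        """Per-source tally state derived purely from current routes."""
        return {src: "program" if self.routes.get(program) == src
                     else "preview" if self.routes.get(preview) == src
                     else "off"
                for src in self.sources}

ctl = RouterControl()
ctl.discover("CAM1", "ndi://host1/cam1")
ctl.discover("CAM2", "ndi://host2/cam2")
ctl.take("PGM", "CAM1")
ctl.take("PVW", "CAM2")
ctl.store_salvo("opening")  # snapshot the current routes as a preset
ctl.take("PGM", "CAM2")     # a hard-button "take" changes program
```

<p>The key property is that tally is never set by hand: it falls out of the route table, so a button press, a fired salvo, or an automation trigger all keep red/green state correct by construction.</p>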
<p>Under the hood, this is enabled by newer NDI SDK capabilities for discovery/monitoring/control and receiver-oriented management, which is why the demo is positioned as a “now it’s possible” moment rather than a brand-new transport. Filmed during ISE 2026 in Barcelona, it lands in the wider AV-over-IP trend at the show, but from a broadcast angle: predictable routing, tally correctness, and operator speed are treated as first-class requirements rather than optional extras.</p>
<p>There’s also a deployment story: Hummingbird’s control logic is largely software-based, and the move toward a portable Linux platform opens up on-prem VM installs (for near-instant response) as well as data-center and cloud footprints where it makes sense. The point is less about running switching “in the cloud” and more about making control portable across Hyper-V/VMware-style environments, appliances, and hybrid sites, while keeping latency for control actions in the millisecond range when the media plane is local.</p>
<p>I&#8217;m publishing 75+ videos from ISE 2026; check out all my ISE 2026 videos in my playlist here: https://www.youtube.com/playlist?list=PL7xXqJFxvYvjUiepj5jbL6aIt6QB9jeCk</p>
<p>This video was filmed using the DJI Pocket 3 ($669 at https://amzn.to/4aMpKIC) with the dual wireless DJI Mic 2 microphones and the DJI lapel microphone (https://amzn.to/3XIj3l8).</p>
<p>&#8220;Super Thanks&#8221; are welcome <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f601.png" alt="😁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Check out my video with Daylight Computer about their revolutionary Sunlight Readable Transflective LCD Display for Healthy Learning: https://www.youtube.com/watch?v=U98RuxkFDYY</p>
<p>source <a href="https://www.youtube.com/watch?v=KqIlhJp9b0s">https://www.youtube.com/watch?v=KqIlhJp9b0s</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://armdevices.net/2026/02/07/tsl-hummingbird-ndi-routing-control-tally-mapping-presets-linux-vm-broadcast-panels/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">138906</post-id>	</item>
	</channel>
</rss>
