<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>Hongkiat</title>
	<atom:link href="https://www.hongkiat.com/blog/feed/" rel="self" type="application/rss+xml"/>
	<link>https://www.hongkiat.com/blog/</link>
	<description>Tech and Design Tips</description>
	<lastBuildDate>Mon, 20 Apr 2026 11:07:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">1070734</site>	<xhtml:meta content="noindex" name="robots" xmlns:xhtml="http://www.w3.org/1999/xhtml"/><item>
		<title>Fresh Resources for Web Designers and Developers (April 2026)</title>
		<link>https://www.hongkiat.com/blog/designers-developers-monthly-04-2026/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Fri, 24 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74376</guid>

					<description><![CDATA[<p>A fresh April 2026 roundup of tools and resources for developers, including CSS frameworks, JavaScript libraries, WordPress tooling, and AI-focused utilities.</p>
<p>The post <a href="https://www.hongkiat.com/blog/designers-developers-monthly-04-2026/">Fresh Resources for Web Designers and Developers (April 2026)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Time for another monthly resource roundup for web developers.</p>
<p>This month’s picks cover minimalist CSS frameworks, JavaScript libraries, WordPress tooling, and a handful of AI-focused tools for building faster workflows.</p>
<div class="ref-block ref-block--tax noLinks" id="ref-block-tax-74376-1">
		<a href="https://www.hongkiat.com/blog/tag/fresh-resources-developers/" target="_blank" class="ref-block__link" title="Read More: Click Here for More Resources" rel="bookmark"><span class="screen-reader-text">Click Here for More Resources</span></a>
<div class="ref-block__thumbnail img-thumb img-thumb--jumbo" data-img='{ "src" : "https://assets.hongkiat.com/uploads/thumbs/related/tag-fresh-resources-developers.jpg" }'>
			<noscript>
<style>.no-js #ref-block-tax-74376-1 .ref-block__thumbnail {
					background-image: url( "https://assets.hongkiat.com/uploads/thumbs/related/tag-fresh-resources-developers.jpg" );
				}</style>
<p>			</p></noscript>
		</div>
<div class="ref-block__summary">
<h4 class="ref-title">Click Here for More Resources</h4>
<div class="ref-description">
<p>Check out our complete collection of hand-picked tools for designers and developers.</p>
</div></div>
</div>
<hr>
<h2><a rel="nofollow noopener" target="_blank" href="https://comark.dev">Comark</a></h2>
<p><strong>Comark</strong> is a fast Markdown parser built for streamed content, so AI-generated text or progressively loaded content renders cleanly as it arrives. It also auto-closes syntax on the fly and supports plugins such as math and code highlighting. It looks especially useful for docs and interactive blogs.</p>
<figure>
        <img fetchpriority="high" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/comark.jpg" alt="Comark" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://www.loftlyy.com/en">Loftlyy</a></h2>
<p><strong>Loftlyy</strong> is a growing database of real-world brand identities. You can browse logos, colors, and design systems from actual companies. It’s a solid reference if you need inspiration without relying on made-up mock brands.</p>
<figure>
        <img decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/loftlyy.jpg" alt="Loftlyy" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://oat.ink">Oat</a></h2>
<p><strong>Oat</strong> is a lightweight HTML and CSS UI component library with zero dependencies. It weighs <strong>about 8KB</strong> in total and uses semantic HTML tags instead of CSS classes. If you’re tired of JS-heavy UI stacks, this goes in the opposite direction.</p>
<figure>
        <img decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/oat.jpg" alt="Oat" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://picocss.com">PicoCSS</a></h2>
<p><strong>PicoCSS</strong> is a minimalist framework that styles HTML directly without extra classes. It uses plain CSS, no JavaScript, and adapts automatically to light or dark mode. Good fit if you want a clean system without carrying extra UI baggage.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/picocss.jpg" alt="Pico CSS" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://github.com/addyosmani/agent-skills">Agent Skills by Addy Osmani</a></h2>
<p><strong>Agent Skills by Addy Osmani</strong> is a collection of production-grade workflows for AI coding agents. It uses slash commands like <code>/spec</code>, <code>/plan</code>, and <code>/ship</code> to enforce specs, tests, code reviews, and security checks. The appeal here is structure, especially for teams that want agents to follow a stricter engineering process.</p>
<p>It works with <a rel="nofollow noopener" target="_blank" href="https://claude.com/product/claude-code">Claude Code</a>, <a rel="nofollow noopener" target="_blank" href="https://cursor.sh">Cursor</a>, <a rel="nofollow noopener" target="_blank" href="https://geminicli.com">Gemini CLI</a>, and other agents.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/agent-skills.jpg" alt="Agent Skills" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://boneyard.vercel.app/overview">Boneyard</a></h2>
<p><strong>Boneyard</strong> automatically generates skeleton screens by snapshotting your real UI. You wrap a component in <code>&lt;Skeleton&gt;</code>, run the CLI once, and it captures pixel-perfect rectangles that mirror your actual layout. No manual measurement or hand-tuned placeholders.</p>
<p>If you build React apps and want to reduce layout shift during loading states, this looks like a neat solution, especially at <strong>around 7.5KB</strong>.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/boneyard.jpg" alt="Boneyard" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://locker.dev">Locker</a></h2>
<p><strong>Locker</strong> is an open-source, self-hosted file platform. You can sync files across multiple storage backends, including S3, R2, and local disk. It also searches inside images and PDFs. Looks useful if you want tighter control over file storage without being locked to one vendor.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/locker.jpg" alt="Locker" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://developer.wordpress.org/news/2026/04/wordpress-build-the-next-generation-of-wordpress-plugin-build-tooling/">WordPress Build Tooling</a></h2>
<p><strong>@wordpress/build</strong> is a new build tool for WordPress plugins. It replaces <a rel="nofollow noopener" target="_blank" href="https://webpack.js.org">Webpack</a> and Babel with <a rel="nofollow noopener" target="_blank" href="https://esbuild.github.io">esbuild</a>, so multiple scripts, modules, and admin pages build in seconds with almost no configuration.</p>
<p>Gutenberg already uses it, though it is not ready for every plugin yet. Still, it is worth a look if you follow WordPress tooling closely.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/wordpress-build-tooling.jpg" alt="WordPress Build Tooling" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://emdashcms.com">EmDash</a></h2>
<p><strong>EmDash</strong> is a new CMS from Cloudflare, built with TypeScript and Astro. It positions itself as an alternative blogging platform to WordPress.</p>
<p>One interesting part is that it runs plugins in sandboxed Cloudflare Workers, while content uses <a rel="nofollow noopener" target="_blank" href="https://github.com/portabletext/portabletext">Portable Text JSON</a> instead of HTML or <a rel="nofollow noopener" target="_blank" href="https://wordpress.org/gutenberg/">Gutenberg</a>. That alone makes it stand out from the usual CMS stack.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/emdash.jpg" alt="EmDash" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://openscreen.vercel.app">Open Screen</a></h2>
<p><strong>OpenScreen</strong> is a free, open-source macOS app for product demos. It records your screen with zoom effects, annotations, and styled backgrounds. No account required. Handy if you want cleaner product recordings without paying for another screen recording app.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/openscreen.jpg" alt="Open Screen" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://meodai.github.io/heerich">Heerich.js</a></h2>
<p><strong>Heerich.js</strong> is a minimalist 3D voxel engine that renders to SVG. It builds compositions using boxes, spheres, and lines. Nice pick for generative art on the web or even pen plotter experiments.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/heerich.jpg" alt="Heerich.js" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://www.npmjs.com/package/wp-studio">WP Studio</a></h2>
<p><strong>wp-studio</strong> is a CLI for local WordPress development that lets you create sites, run WP-CLI commands, and publish temporary previews to <strong>wp.build</strong>. It requires <strong>Node.js 22+</strong>. Useful if you want a more scriptable local WordPress setup.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/wp-studio.jpg" alt="WP Studio" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://github.com/glommer/pgmicro">pgmicro</a></h2>
<p><strong>pgmicro</strong> is an embeddable, single-file database that speaks PostgreSQL but stores data as SQLite. It parses PostgreSQL and compiles it directly to SQLite bytecode. You can run it in memory, on disk, or as a server you can connect to with <strong>psql</strong>. Interesting option for ephemeral AI workloads or lightweight local PostgreSQL setups.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/pgmicro.jpg" alt="pgmicro" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://php-testo.github.io">Testo</a></h2>
<p><strong>Testo</strong> is a modern PHP testing framework for PHP 8.2+. It uses attributes instead of magic conventions, supports functions or classes without inheritance, and includes async tools, benchmarks, and a PhpStorm plugin. The pipe-style assertions are a nice touch if you prefer more explicit, type-safe tests.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/testo.jpg" alt="Testo" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://github.com/AJenbo/phpantom_lsp">PHPantom LSP</a></h2>
<p><strong>PHPantom</strong> is a fast PHP language server written in <a rel="nofollow noopener" target="_blank" href="https://rust-lang.org">Rust</a>. It supports generics, <a rel="nofollow noopener" target="_blank" href="https://laravel.com/docs/13.x/eloquent">Laravel Eloquent</a>, <a rel="nofollow noopener" target="_blank" href="https://phpstan.org/writing-php-code/phpdoc-types">PHPStan annotations</a>, and conditional return types. It starts in under 1 second with about 59MB of RAM and skips the usual indexing phase. Strong option if you want faster autocomplete without the usual heavyweight setup.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/phpantom-lsp.jpg" alt="PHPantom LSP" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://graffiti-ui.com">Graffiti</a></h2>
<p><strong>Graffiti</strong> is a minimal CSS toolkit with utilities, elements, blocks, and templates. It’s configurable, themeable, and uses zero JavaScript. It also works with any framework or plain HTML. Good candidate if you want a lightweight drop-in CSS layer without dragging in more JS.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/graffiti.jpg" alt="Graffiti" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://www.expect.dev">Expect</a></h2>
<p><strong>Expect</strong> is a testing skill for AI coding agents. It reads your git changes, generates a test plan, and runs it in a real browser with <a rel="nofollow noopener" target="_blank" href="https://playwright.dev">Playwright</a>. It checks for performance issues, security vulnerabilities, broken links, and design regressions, without forcing you to maintain scripts or selectors. It can also run locally or in CI.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/expect.jpg" alt="Expect" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://github.com/rivet-dev/agent-os">agentOS</a></h2>
<p><strong>agentOS</strong> is a portable OS for AI agents. Powered by WebAssembly and V8 isolates, it claims <strong>~6ms</strong> cold starts at up to 32x lower cost than sandboxes. It also includes filesystem mounting, host tools, and granular security controls. That makes it a notable option for embedding agents directly into backend systems.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/agent-os.jpg" alt="Agent OS" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://impeccable.style">Impeccable</a></h2>
<p><strong>Impeccable</strong> is a design skill pack for AI coding agents. It teaches design principles across typography, color, layout, and motion. It also includes a CLI and browser extension that detect more than 25 anti-patterns, including gradient text, overused fonts, and nested cards.</p>
<p>It works with Cursor, Claude Code, Gemini CLI, and more. Useful if you want AI-generated UIs with a bit more visual discipline.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/impeccable.jpg" alt="Impeccable" width="1000" height="600">
    </figure>
<h2><a rel="nofollow noopener" target="_blank" href="https://charcuterie.elastiq.ch">Charcuterie</a></h2>
<p><strong>Charcuterie</strong> is a visual explorer for Unicode. It lets you browse characters, discover related glyphs, and learn about scripts and symbols. Rendered glyphs are compared in vector space to power visual similarity search.</p>
<p>It is still under active development, but it already looks like a more interesting way to explore Unicode than staring at code charts.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/designers-developers-monthly-04-2026/charcuterie.jpg" alt="Charcuterie" width="1000" height="600">
    </figure><p>The post <a href="https://www.hongkiat.com/blog/designers-developers-monthly-04-2026/">Fresh Resources for Web Designers and Developers (April 2026)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74376</post-id>	</item>
		<item>
		<title>Choosing the Right LLM Models for Your Everyday Laptop</title>
		<link>https://www.hongkiat.com/blog/local-llm-models-laptop-guide/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74374</guid>

					<description><![CDATA[<p>A practical guide to running LLMs on your everyday laptop, no supercomputer required. Learn how to pick the right model for your hardware, your workload, and your privacy needs.</p>
<p>The post <a href="https://www.hongkiat.com/blog/local-llm-models-laptop-guide/">Choosing the Right LLM Models for Your Everyday Laptop</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As my AI experiments became increasingly expensive, I found myself wanting more control over my data. This led me to start running LLMs locally on my everyday laptop for two main reasons: <strong>privacy and cost</strong>.</p>
<p>I tried dozens of approaches before finding what actually worked. Once I got it running, however, the benefits were clear: <strong>unlimited usage, zero API fees, and complete data privacy</strong>.</p>
<p>Today, you no longer need a supercomputer to run AI models. You don’t need the latest GPU either. What you need is the right model for your hardware and the know-how to run it efficiently.</p>
<p>In this guide, I’ll show you how to do the same.</p>
<h2>Know your hardware</h2>
<p>Before you download any model, you need to know what your computer can handle. The most common mistake I’ve seen is people trying to run a model that exceeds their physical memory. That triggers “disk swapping,” which can make your laptop unresponsive.</p>
<p>So first, check your system specs:</p>
<ul>
<li><strong>VRAM:</strong> If you have a dedicated NVIDIA or AMD GPU, check its Video RAM. This is where the model runs for near-instant responses. <strong>8GB VRAM</strong> is a solid baseline for hobby use.</li>
<li><strong>RAM:</strong> 16GB is the absolute minimum I’d suggest for a smooth experience. System RAM handles the <strong>“offload”</strong>: if a model is 10GB and you only have 8GB of VRAM, the remaining 2GB sits here.</li>
<li><strong>CPU:</strong> Modern processors like Intel i5/i7 or Ryzen 5/7 can run smaller models reasonably well, <a rel="nofollow noopener" target="_blank" href="https://docs.vllm.ai/en/latest/features/quantization/">especially with 4-bit quantization</a>.</li>
<li><strong>Storage:</strong> Ensure you have at least 50GB of <strong>SSD space</strong>. If your internal storage is tight, you can also <a href="https://www.hongkiat.com/blog/ollama-llm-from-external-drive/">run LLMs from an external drive with Ollama</a>. Running models off an old-school HDD will result in painful load times.</li>
</ul>
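<p>To read these specs from a terminal, a quick sketch using standard system tools (the NVIDIA query only applies if you have a dedicated NVIDIA GPU):</p>

```shell
# Print total system RAM in GB (works on Linux and macOS)
total_ram_gb() {
  if [ "$(uname)" = "Darwin" ]; then
    # macOS reports bytes
    sysctl -n hw.memsize | awk '{ printf "%.0f\n", $1 / 1073741824 }'
  else
    # Linux reports kB in /proc/meminfo
    awk '/^MemTotal:/ { printf "%.0f\n", $2 / 1048576 }' /proc/meminfo
  fi
}
echo "Total RAM: $(total_ram_gb) GB"

# Dedicated VRAM (NVIDIA only; prints nothing if nvidia-smi is absent)
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-gpu=memory.total --format=csv,noheader \
  || true
```

On Apple Silicon there is no separate VRAM figure; the total RAM number is the shared pool.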
<p><strong>Pro Tip:</strong> Always subtract ~2GB from your total VRAM/RAM to account for your operating system and open browser tabs. If you have 8GB total, plan on about 6GB being available for the model.</p>
<h2>Know your needs</h2>
<p>With thousands of models available, don’t just chase the highest benchmark scores. If your hardware is limited, focus on models optimized for your specific tasks.</p>
<p>Since we’re assuming constrained hardware, there are two use cases you can realistically run on a laptop: text generation and code generation.</p>
<ul>
<li><strong>Coding:</strong> Specialized models like <strong>Qwen2.5-Coder</strong> or <strong>DeepSeek-Coder</strong> are tuned for syntax and logic.</li>
<li><strong>Creative Writing:</strong> <a href="https://www.hongkiat.com/blog/run-gemma-4-locally/"><strong>Gemma 4</strong></a> or <strong>Mistral</strong> variants tend to have a more natural, less “robotic” prose style.</li>
</ul>
<h3>Consider model size vs. quality</h3>
<p>The “B” in 3B or 7B stands for billions of parameters. More parameters usually mean better reasoning, but higher memory costs.</p>
<ul>
<li><strong>1B – 3B models:</strong> Extremely fast, low memory, best for basic grammar and simple summaries.</li>
<li><strong>7B – 14B models:</strong> A practical range for most users. Good reasoning, and they fit in many modern GPUs.</li>
<li><strong>30B+ models:</strong> Professional-grade reasoning, but they require high-end hardware (24GB+ VRAM).</li>
</ul>
<p><strong>Quantization helps here.</strong> It compresses the model so it fits on consumer hardware with little loss in output quality.</p>
<ul>
<li><strong>4-bit (Q4_K_M):</strong> The industry standard. Reduces memory usage by ~70%.</li>
<li><strong>GGUF:</strong> The most user-friendly format. It allows the model to run on both your CPU and GPU simultaneously.</li>
</ul>
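<p>You can sanity-check whether a model fits before downloading it with a back-of-envelope formula: parameters × bits per weight ÷ 8, plus some overhead for the KV cache and runtime. The ~20% overhead factor below is my own rough assumption, not an official figure:</p>

```shell
# Rough memory estimate for a quantized model:
#   GB ≈ params (billions) x bits per weight / 8, plus ~20% overhead (assumed)
estimate_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "~%.1f GB\n", p * b / 8 * 1.2 }'
}
estimate_gb 7 4    # 7B model at 4-bit  -> ~4.2 GB
estimate_gb 7 16   # same model at fp16 -> ~16.8 GB
```

The gap between those two numbers is why 4-bit quantization is what makes 7B models practical on consumer laptops.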
<h2>Can a MacBook Air M2 with 8GB RAM run LLMs?</h2>
<p>Let’s walk through a concrete example.</p>
<p>Say you have a MacBook Air with an M2 chip (8-core CPU) and 8GB of unified memory. You want to use it for text editing, grammar fixing, and light writing assistance.</p>
<p>With 8GB total RAM, you need to reserve about 2GB for macOS and your other applications. That leaves ~6GB for the model. Apple Silicon’s unified memory architecture also helps because the GPU can access the same memory pool.</p>
<p>Based on these constraints and your needs for text editing and grammar tasks, you don’t need an advanced model with high reasoning capabilities. A model with ~3B parameters is more than enough.</p>
<p>So here are your best options:</p>
<ul>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://ollama.com/library/phi3.5:3.8b-mini-instruct-q4_K_M">Phi-3.5 Mini 3.8B (Q4_K_M)</a>:</strong> ~2GB RAM, 20-30 tokens/second. A compact model that handles grammar and editing tasks well enough for daily use.</li>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://ollama.com/library/llama3.2:3b-instruct-q4_K_M">Llama 3.2 3B Instruct (Q4_K_M)</a>:</strong> ~2GB RAM, 15-25 tokens/second. Specifically trained for instruction following, great for “fix this sentence” or “rewrite this paragraph” requests.</li>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://ollama.com/library/qwen2.5:3b-instruct-q4_K_M">Qwen2.5 3B Instruct (Q4_K_M)</a>:</strong> ~2GB RAM, similar speed. Good multilingual support if you work with multiple languages.</li>
</ul>
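<p>If you already use Ollama, pulling the first option above is a single command. A minimal sketch (the model tag comes from the Ollama library link above; the fallback message is just for illustration):</p>

```shell
# Fetch and try the 4-bit Phi-3.5 Mini build listed above (requires Ollama)
MODEL="phi3.5:3.8b-mini-instruct-q4_K_M"
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"
  ollama run "$MODEL" "Fix the grammar: she dont like apples."
else
  echo "Ollama not found; install it first, then run: ollama pull $MODEL"
fi
```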
<p>I’d avoid running 7B models on this hardware. They’ll work but will be slower and might cause swapping if you have other apps open.</p>
<h2>Using llmfit to find the perfect model</h2>
<p>Manual calculations are a good start, but they still involve some guesswork. If you want a clearer read on what your computer can handle, use <strong><a rel="nofollow noopener" target="_blank" href="https://www.llmfit.org">llmfit</a></strong>. It scans your hardware and shows which models suit your setup. I also covered <a href="https://www.hongkiat.com/blog/llmfit-local-llm-guide/">how llmfit helps you pick the right local LLM for your machine</a> if you want a closer look at what it does.</p>
<p>You can install llmfit with:</p>
<pre>
# macOS/Linux with Homebrew
brew install llmfit

# Or quick install
curl -fsSL https://llmfit.axjns.dev/install.sh | sh
</pre>
<p>Then run it to get recommendations:</p>
<pre>
llmfit
</pre>
<p>The tool detects your RAM, CPU cores, and GPU VRAM, then scores hundreds of models based on quality, speed, and how well they fit your hardware.</p>
<p>Each recommendation also includes estimated tokens per second, memory usage, and context length, as we can see below.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/local-llm-models-laptop-guide/llmfit-tui.jpg" alt="llmfit example" width="1000" height="600">
    </figure>
<p>You can filter and sort by different criteria, which saves hours of manual testing and helps avoid the frustration of downloading models that won’t run on your hardware.</p>
<h3>llmfit integrates with your favorite tools</h3>
<p>llmfit also works with tools like Ollama and LM Studio, so the recommendations are easier to act on.</p>
<h3>Ollama integration</h3>
<p>If you’re <a href="https://www.hongkiat.com/blog/ollama-ai-setup-guide/">using Ollama</a>, llmfit can help you narrow down good model options for your setup. If you prefer a desktop UI instead, <a href="https://www.hongkiat.com/blog/run-llm-locally-lm-studio/">LM Studio is another good way to run LLMs locally</a>.</p>
<p>For example, if llmfit recommends <code>google/gemma-2-2b-it</code>, you can immediately hit <kbd>d</kbd> and it will show you <strong>“Ollama”</strong> as an option, as seen below:</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/local-llm-models-laptop-guide/llmfit-download.jpg" alt="" width="1000" height="600">
    </figure>
<p>Once you’ve selected it, it will download the model for Ollama.</p>
<p>llmfit also supports:</p>
<ul>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://lmstudio.ai">LM Studio</a></strong></li>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://github.com/ggml-org/llama.cpp">llama.cpp</a></strong></li>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://github.com/ml-explore/mlx">MLX</a></strong></li>
<li><strong><a rel="nofollow noopener" target="_blank" href="https://docs.docker.com/ai/model-runner/">Docker Model Runner</a></strong></li>
</ul>
<h2>What’s next?</h2>
<p>Give it a try. Download a small model, run it locally, and see what you can build with your own private AI assistant.</p>
<p>I recommend llmfit if you want to compare options faster. It would have saved me weeks of trial and error when I was starting out.</p>
<p>The first time you get a response from a model running entirely on your computer, you’ll understand why I made the switch.</p>
<p>The post <a href="https://www.hongkiat.com/blog/local-llm-models-laptop-guide/">Choosing the Right LLM Models for Your Everyday Laptop</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74374</post-id>	</item>
		<item>
		<title>Dinky Is a Free Mac App That Compresses Images, Videos, and PDFs</title>
		<link>https://www.hongkiat.com/blog/dinky-macos-compression-tool/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74386</guid>

					<description><![CDATA[<p>Dinky is a free native Mac app that compresses images, videos, and PDFs without the usual browser-tool friction.</p>
<p>The post <a href="https://www.hongkiat.com/blog/dinky-macos-compression-tool/">Dinky Is a Free Mac App That Compresses Images, Videos, and PDFs</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If you compress files often on a Mac, you probably already know the usual tradeoff. Browser tools feel disposable, desktop apps can feel bloated, and the quick hacks are fine until you need to do the same job every day. For simpler jobs, <a href="https://www.hongkiat.com/blog/optimise-images-macos/">optimizing images on your Mac</a> is still the quickest no-install option.</p>
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/dinky-macos-compression-tool/dinky.jpg" alt="Dinky Mac app" width="1280" height="835">
</figure>
<p><a rel="nofollow noopener" target="_blank" href="https://dinkyfiles.com/">Dinky</a> is a free app, and that already makes it more appealing if you have been comparing it with paid compression tools like Optimage or Permute.</p>
<p>It is a small macOS utility from Derek Castelli that compresses images, videos, and PDFs in one place. You drop files in and get smaller ones back, but the real appeal is how much repetitive file prep it can absorb once you start using presets, watch folders, and automation.</p>
<h2 id="what-dinky-does">What Dinky Does</h2>
<p>Dinky goes beyond the usual image-only utility. It works with JPG, PNG, WebP, AVIF, TIFF, and BMP, then exports images to WebP, AVIF, or lossless PNG. It also compresses videos to MP4 with H.264 or HEVC presets, and shrinks PDFs while either keeping selectable text and links or flattening pages for more aggressive reduction.</p>
<ul>
<li>compress images, videos, and PDFs from one app</li>
<li>drag and drop files, paste from the clipboard, or use a direct URL</li>
<li>convert images to WebP, AVIF, or lossless PNG, which pairs nicely with this <a href="https://www.hongkiat.com/blog/webp-guide/">WebP guide</a></li>
<li>resize images or target a specific file size</li>
<li>save presets, watch folders, and use Apple Shortcuts for repeat jobs</li>
<li>keep originals, move them to backup, or send them to the trash after compression</li>
</ul>
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/dinky-macos-compression-tool/dinky-compression.jpg" alt="Dinky compression settings" width="1280" height="823">
</figure>
<p>That makes it a stronger fit for recurring file prep, especially if you would otherwise piece together the same workflow with Quick Actions or guides like this one on <a href="https://www.hongkiat.com/blog/batch-zip-files-mac/">batch compressing files on Mac</a>.</p>
<p>Dinky is listed at around 28 MB installed. Built in Swift and SwiftUI, with no Electron and no web views, it still feels like a lightweight native app rather than a wrapped website pretending to be one.</p>
<h2 id="who-this-is-for">Who This Is For</h2>
<p>Dinky makes the most sense for people who compress files constantly and are tired of doing it in a browser tab.</p>
<p>That includes:</p>
<ul>
<li>bloggers and publishers preparing web images</li>
<li>designers exporting client assets</li>
<li>people dealing with video uploads</li>
<li>anyone cleaning up PDFs before sending or archiving them</li>
<li>Mac users who want a native utility instead of an online service</li>
</ul>
<p>If your workflow already includes repetitive file prep, Dinky looks like one of those tools that could quietly save time every week.</p>
<h2 id="before-you-install-it">Before You Install It</h2>
<p>There are a few practical limits to keep in mind.</p>
<p>First, it is built for macOS 15 Sequoia and later, not as a cross-platform tool.</p>
<p>Second, it is not notarized, so macOS will probably block it the first time you open it. You can still run it, but you will need to approve it through macOS security settings or remove the quarantine flag manually.</p>
<p>Still, if you are comfortable installing indie Mac software, that probably will not scare you off.</p><p>The post <a href="https://www.hongkiat.com/blog/dinky-macos-compression-tool/">Dinky Is a Free Mac App That Compresses Images, Videos, and PDFs</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74386</post-id>	</item>
		<item>
		<title>A look into EmDash CMS</title>
		<link>https://www.hongkiat.com/blog/what-is-emdash-cms/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 13:00:10 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74372</guid>

					<description><![CDATA[<p>When Cloudflare announced EmDash CMS on April 1st, I thought it was a joke. But as I looked at the GitHub code, read Cloudflare’s announcement, and saw Matt Mullenweg’s response, I realized this isn’t just another CMS trying to beat WordPress. I think it’s an interesting test of what a CMS could be in a&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/what-is-emdash-cms/">A look into EmDash CMS</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>When <a rel="nofollow noopener" target="_blank" href="https://blog.cloudflare.com/emdash-wordpress/">Cloudflare announced EmDash CMS on April 1st</a>, I thought it was a joke.</p>
<p>But as I looked at the <a rel="nofollow noopener" target="_blank" href="https://github.com/emdash-cms/emdash">GitHub code</a>, read Cloudflare’s announcement, and saw <a rel="nofollow noopener" target="_blank" href="https://ma.tt/2026/04/emdash-feedback/">Matt Mullenweg’s response</a>, I realized this isn’t just another CMS trying to beat WordPress. I think it’s an interesting test of what a CMS could be in a world with AI tools, serverless hosting, and more security worries.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/what-is-emdash-cms/emdash-cover.jpg" alt="EmDash CMS Cloudflare WordPress alternative" width="1000" height="600">
  </figure>
<p>The real question isn’t if EmDash will replace WordPress. It’s whether it shows us what CMS software might look like next.</p>
<h2>What is EmDash CMS?</h2>
<p>EmDash is Cloudflare’s attempt to make a <strong><q>next version of WordPress</q></strong>. It is written in TypeScript and built on <a rel="nofollow noopener" target="_blank" href="https://astro.build">Astro</a>, which runs well on serverless platforms, especially Cloudflare’s own services. It tries to keep WordPress’s flexibility and admin panel, but adds type safety and plugin sandboxing to fix security issues that WordPress has been facing for years.</p>
<h2>Prerequisites</h2>
<p>Before you start playing with EmDash, here’s what you’ll need:</p>
<ul>
<li><strong>Node.js 22+</strong>: For local development</li>
<li><strong>A Cloudflare account</strong>: For the best experience (though it can run elsewhere)</li>
<li><strong>Basic TypeScript knowledge</strong>: The entire codebase is TypeScript</li>
<li><strong>Familiarity with Astro</strong>: EmDash uses Astro for frontend rendering</li>
</ul>
<h2>Installation</h2>
<p>You have a few options to install EmDash, depending on how you want to try it:</p>
<h3>Create a new EmDash site locally</h3>
<p>For local development and testing, use the <code>npm</code> command:</p>
<pre>
npm create emdash@latest
</pre>
<h3>Deploy directly to Cloudflare</h3>
<p>If you want to skip local setup and go straight to deployment, use the Cloudflare deploy link:</p>
<pre>
https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare
</pre>
<h3>Try the online playground</h3>
<p>For a quick look without any setup, check out the <a rel="nofollow noopener" target="_blank" href="https://emdashcms.com/playground">EmDash playground</a>.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/what-is-emdash-cms/emdash-playground.jpg" alt="EmDash CMS online playground interface" width="1000" height="600">
  </figure>
<h2>Plugin security</h2>
<p>Here’s where EmDash goes a different way from WordPress.</p>
<p>WordPress plugins can do anything. They can read and write to your database, change files, and connect to the internet. This isn’t a mistake. It’s a choice that allows great flexibility.</p>
<p>But that flexibility carries real security risks, so EmDash takes a different approach called sandboxing, where each plugin runs in its own separate space with clear rules about what it can do.</p>
<p>Here’s what a simple email notification plugin looks like:</p>
<pre>
import { definePlugin } from "emdash";

export default () => definePlugin({
  id: "notify-on-publish",
  version: "1.0.0",
  capabilities: ["read:content", "email:send"],
  hooks: {
    "content:afterSave": async (event, ctx) => {
      if (event.collection !== "posts" || event.content.status !== "published") return;
      
      await ctx.email!.send({
        to: "editor@example.com",
        subject: `New post published: ${event.content.title}`,
        text: `"${event.content.title}" is now live.`,
      });
      
      ctx.log.info(`Notified editors about ${event.content.id}`);
    },
  },
});
</pre>
<p>See those <code>capabilities</code>?</p>
<p>The plugin can <em>only</em> do what it says it will do. No secret internet connections, no database access beyond what’s allowed. This is a big change from WordPress’s “trust me with everything” approach.</p>
<h2>AI-native from day one</h2>
<p>While WordPress 7.0 is starting to introduce AI features into the core through its <a rel="nofollow noopener" target="_blank" href="https://make.wordpress.org/core/2026/03/18/introducing-the-connectors-api-in-wordpress-7-0/">Connectors API</a>, which enables plugins to connect to external AI services, EmDash takes a more direct approach by giving AI agents the ability to actually run your site.</p>
<p>Every EmDash site comes with three core tools that make this possible:</p>
<ul>
<li><a rel="nofollow noopener" target="_blank" href="https://agentskills.io/home">Agent skills</a> provide clear instructions that guide AI in building plugins, themes, and making changes to your site—like recipes that agents can follow step by step.</li>
<li><a rel="nofollow noopener" target="_blank" href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx">EmDash CLI</a> gives you powerful command-line tools for managing content, uploading files, and creating content types, making automation and scripting much easier.</li>
<li><a rel="nofollow noopener" target="_blank" href="https://modelcontextprotocol.io">Built-in MCP server</a>. This allows AI tools to connect directly to your site, read and update content, and manage everything through a standard protocol.</li>
</ul>
<p>WordPress is also moving in a similar direction with its <strong><a rel="nofollow noopener" target="_blank" href="https://github.com/wordpress/mcp-adapter">official MCP adapter</a></strong>, which connects <a rel="nofollow noopener" target="_blank" href="https://developer.wordpress.org/news/2025/11/introducing-the-wordpress-abilities-api/">the Abilities API</a> to the MCP.</p>
<p>The key difference is that in WordPress, this is an additional plugin you install. In EmDash, it’s built in from the start.</p>
<h2>Built-in x402 support for monetization</h2>
<p>Here’s something WordPress doesn’t have: every EmDash site has <a rel="nofollow noopener" target="_blank" href="https://www.x402.org">x402</a> support built in. <strong>x402 is an open standard for internet payments</strong> that lets you charge for content each time someone uses it.</p>
<p>When an AI tool tries to read your content, it gets a “Payment Required” message, pays immediately, and gets access. No subscriptions, no extra coding.</p>
<p>In a time when AI tools are collecting web content, this feels less like an extra feature and more like something you need to survive.</p>
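<p>To picture the flow, the exchange is just HTTP with one extra step. The sketch below mocks it end to end; the <code>x-payment-proof</code> header and <code>payInvoice()</code> helper are made-up stand-ins for illustration, not part of the actual x402 specification.</p>

```typescript
// Toy sketch of the pay-per-request exchange described above.
// "x-payment-proof" and payInvoice() are illustrative stand-ins,
// not the real x402 wire format.

type Reply = { status: number; body: string; price?: string };

// Stand-in for settling the quoted price; returns an opaque receipt.
function payInvoice(price: string): string {
  return `paid:${price}`;
}

// Mock content server: no proof of payment means HTTP 402.
function contentServer(headers: Record<string, string>): Reply {
  if (!headers["x-payment-proof"]) {
    return { status: 402, body: "Payment Required", price: "$0.001" };
  }
  return { status: 200, body: "<article>full content</article>" };
}

// Mock agent: request, pay on 402, retry with the receipt.
function fetchWithPayment(): Reply {
  const first = contentServer({});
  if (first.status !== 402) return first;
  const proof = payInvoice(first.price!);
  return contentServer({ "x-payment-proof": proof });
}
```

<p>No subscriptions are involved: each request either carries a receipt or triggers a fresh quote.</p>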
<h2>WordPress import support</h2>
<p>EmDash wants to make migrating your WordPress blog as easy as possible, so it provides:</p>
<ol>
<li>A WordPress import tool that makes it easy to migrate your site to EmDash, including content and media</li>
<li>Tools to convert WordPress custom post types to EmDash collections</li>
<li>Agent skills for porting WordPress themes to EmDash</li>
</ol>
<p>The migration tools work surprisingly well. I imported a test WordPress site in under 10 minutes.</p>
<p>Migration results, however, may vary depending on your site. If you rely on plugins that add custom post types or custom database tables, you may need to migrate them to EmDash manually.</p>
<h2>Where EmDash stumbles</h2>
<p>No CMS is perfect, and EmDash is no exception. Matt Mullenweg’s feedback, I think, makes some good points:</p>
<h5>1. The sandboxing problem</h5>
<p>As Matt says, <strong><q>their sandboxing breaks down as soon as you look at what most WordPress plugins do.</q></strong> Complex plugins that work with many systems, handle files, or connect to other services would need so many permissions that the sandbox doesn’t help much.</p>
<h5>2. The UI feels strange</h5>
<p>It looks a bit like WordPress but not exactly, and some things don’t work right.</p>
<h5>3. Vendor lock-in worries</h5>
<p>While it can run elsewhere, it works <em>best</em> on Cloudflare.</p>
<h5>4. Missing WordPress’s spirit</h5>
<p>WordPress runs everywhere, from Raspberry Pis to $0.99/month Indonesian hosting. EmDash? Not really.</p>
<h5>5. No community (yet)</h5>
<p>WordPress isn’t just software; it’s meetups, WordCamps, tattoos. EmDash has GitHub stars.</p>
<p>And there’s the April 1st announcement date. Cloudflare has a tradition of releasing real products on April Fools’ Day (<a rel="nofollow noopener" target="_blank" href="https://blog.cloudflare.com/announcing-1111/">remember 1.1.1.1?</a>), but it still feels…weird.</p>
<h2>Performance and architecture</h2>
<p>Inside, EmDash is built on modern technologies:</p>
<h5>1. Astro</h5>
<p>A fast framework built for content sites; it handles page rendering.</p>
<h5>2. Cloudflare Workers</h5>
<p>Serverless code that starts quickly and runs at the edge.</p>
<h5>3. D1 Database</h5>
<p>Cloudflare’s SQLite-based database for serverless applications.</p>
<h5>4. R2 Storage</h5>
<p>For image and file storage with no egress fees.</p>
<h5>5. Portable Text</h5>
<p>Built on top of <strong><a rel="nofollow noopener" target="_blank" href="https://www.portabletext.org">Portable Text</a></strong>, EmDash stores content as structured JSON instead of HTML in the database.</p>
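<p>Concretely, “structured JSON instead of HTML” looks something like the sketch below: a simplified Portable Text-style block plus one possible renderer. The real format has more fields, such as mark definitions, so treat this as an illustration of the shape rather than the full spec.</p>

```typescript
// A simplified Portable Text-style block: the text and its formatting
// are stored as data, and rendering to HTML happens separately.

type Span = { _type: "span"; text: string; marks: string[] };
type Block = { _type: "block"; style: string; children: Span[] };

const paragraph: Block = {
  _type: "block",
  style: "normal",
  children: [
    { _type: "span", text: "EmDash stores ", marks: [] },
    { _type: "span", text: "structured", marks: ["strong"] },
    { _type: "span", text: " content.", marks: [] },
  ],
};

// One possible renderer; another could target Markdown or plain text.
function toHtml(block: Block): string {
  const inner = block.children
    .map((s) => (s.marks.includes("strong") ? `<strong>${s.text}</strong>` : s.text))
    .join("");
  return `<p>${inner}</p>`;
}
```

<p>Because the database never stores markup, the same content can be rendered differently per channel without parsing HTML first.</p>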
<p>The performance is good. Pages load quickly, and the “scale-to-zero” setup means you only pay for what you use.</p>
<h2>Should you use EmDash?</h2>
<p>Before deciding whether you should use EmDash, here’s a comparison table to help you see the differences between these two systems:</p>
<table>
<thead>
<tr>
<th>Feature</th>
<th>WordPress</th>
<th>EmDash CMS</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Technology Stack</strong></td>
<td>PHP, MySQL, JavaScript (jQuery/React)</td>
<td>TypeScript, Astro, Cloudflare Workers</td>
</tr>
<tr>
<td><strong>Architecture</strong></td>
<td>Monolithic, traditional LAMP stack</td>
<td>Serverless, JAMstack, edge-first</td>
</tr>
<tr>
<td><strong>Database</strong></td>
<td>MySQL/MariaDB</td>
<td>D1 or SQLite, Turso/libSQL, PostgreSQL</td>
</tr>
<tr>
<td><strong>Hosting Requirements</strong></td>
<td>PHP 7.4+, MySQL 5.6+, Apache/Nginx</td>
<td>Node.js 18+, serverless platform (Cloudflare preferred)</td>
</tr>
<tr>
<td><strong>Deployment</strong></td>
<td>Traditional web hosting, shared/VPS/dedicated</td>
<td>Serverless platforms, edge deployment</td>
</tr>
<tr>
<td><strong>Plugin Ecosystem</strong></td>
<td>~60,000+ plugins (full system access)</td>
<td>Very few, since the ecosystem is still at an early stage</td>
</tr>
<tr>
<td><strong>Plugin Security Model</strong></td>
<td>Full system access (trust-based)</td>
<td>Capability-based sandboxing</td>
</tr>
<tr>
<td><strong>Theme System</strong></td>
<td>PHP templates, child themes</td>
<td>Astro components, TypeScript-based</td>
</tr>
<tr>
<td><strong>Content Storage</strong></td>
<td>Serialized HTML in database</td>
<td>Portable Text (structured JSON)</td>
</tr>
<tr>
<td><strong>AI Integration</strong></td>
<td>Via plugins (varying quality)</td>
<td>Built-in MCP server, agent skills</td>
</tr>
<tr>
<td><strong>Monetization</strong></td>
<td>Plugins, subscriptions, ads</td>
<td>Built-in x402 pay-per-use</td>
</tr>
<tr>
<td><strong>Performance</strong></td>
<td>Caching plugins required</td>
<td>Edge-native, fast by default</td>
</tr>
<tr>
<td><strong>Scalability</strong></td>
<td>Requires scaling infrastructure</td>
<td>Auto-scaling, pay-per-request</td>
</tr>
<tr>
<td><strong>Learning Curve</strong></td>
<td>Beginner-friendly, extensive docs</td>
<td>Requires TypeScript/Node.js knowledge</td>
</tr>
<tr>
<td><strong>Community</strong></td>
<td>Massive global community, WordCamps</td>
<td>Early adopters</td>
</tr>
<tr>
<td><strong>Cost Model</strong></td>
<td>Hosting costs + premium plugins/themes</td>
<td>Pay-per-request + potential vendor lock-in</td>
</tr>
<tr>
<td><strong>Maturity</strong></td>
<td>20+ years, battle-tested</td>
<td>v0.1.0 preview, experimental</td>
</tr>
</tbody>
</table>
<p>My opinion:</p>
<h3>Use EmDash if:</h3>
<ul>
<li>You’re already using Cloudflare services</li>
<li>Plugin security worries you a lot</li>
<li>You want AI tools built in right now</li>
<li>You’re starting a new site, not moving an old one</li>
</ul>
<h3>Stick with WordPress if:</h3>
<ul>
<li>You need to run on cheap, shared hosting</li>
<li>Your site depends on specific WordPress plugins</li>
<li>Community and available tools matter more than technical details</li>
<li>You’re not ready to try a v0.1.0 preview version</li>
</ul>
<h2>The Verdict</h2>
<p>EmDash is ambitious, well-built, and working on real problems. The sandboxed plugin model is a careful way to handle security, even if it may have limits for complex plugins. The AI tools feel like they’re thinking ahead. The x402 payment system is a smart answer to AI tools collecting web content.</p>
<p>It’s also v0.1.0, announced on April Fools’ Day, and made to work best with Cloudflare. The UI needs work. It doesn’t have WordPress’s “run anywhere” idea that has let millions of people publish online.</p>
<p>Would I use EmDash for my personal blog today? Probably not. WordPress has too many tools and too much community support to ignore.</p>
<p>Would I try it for a new project where I’m already using Cloudflare and want built-in AI tools? Yes, maybe, though I would rather wait for it to mature a bit more.</p>
<p>EmDash might not be the next WordPress. <strong>And that’s fine</strong>. It shows an idea of what CMS software might look like in a future with serverless hosting and AI tools. The sandboxing method isn’t perfect (no security method is), but it’s a useful test of how to balance flexibility with safety.</p>
<p>The best part, I think, is that both WordPress and EmDash can learn from each other. WordPress could use better security methods and AI tools. EmDash could learn from WordPress’s focus on making publishing easy for everyone.</p>
<p>Maybe we’re not looking at a replacement, but at two different ways to solve the same basic problem: <strong>helping people put content on the web</strong>.</p><p>The post <a href="https://www.hongkiat.com/blog/what-is-emdash-cms/">A look into EmDash CMS</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74372</post-id>	</item>
		<item>
		<title>Hermes Desktop Is a GUI for Hermes Agent</title>
		<link>https://www.hongkiat.com/blog/hermes-desktop-gui-for-hermes-agent/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74384</guid>

					<description><![CDATA[<p>Hermes Desktop gives Hermes Agent a native GUI and a cleaner desktop interface for setup, chat, and day-to-day use.</p>
<p>The post <a href="https://www.hongkiat.com/blog/hermes-desktop-gui-for-hermes-agent/">Hermes Desktop Is a GUI for Hermes Agent</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A new app called <a rel="nofollow noopener" target="_blank" href="https://github.com/fathah/hermes-desktop">Hermes Desktop</a> makes <a rel="nofollow noopener" target="_blank" href="https://github.com/NousResearch/hermes-agent">Hermes Agent</a> easier to use for people who do not want to stay in the terminal. If you have been looking for a cleaner way to install and use an agent on your own machine, this sits in the same wider conversation as how people <a href="https://www.hongkiat.com/blog/run-gemma-4-openclaw-locally/">run AI locally</a> with lighter setup friction.</p>
<p>This is not an official Nous Research desktop app.</p>
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/hermes-desktop-gui-for-hermes-agent/hermes-desktop.jpg" alt="Hermes Desktop app interface" width="1078" height="735">
</figure>
<p>Hermes Desktop is a separate open-source project created by GitHub user fathah. It sits on top of Hermes Agent and gives it a native interface for setup, chat, and day-to-day management.</p>
<p>It is a third-party desktop companion for Hermes Agent, not the upstream project itself.</p>
<h2 id="what-is-hermes-desktop">What Is Hermes Desktop?</h2>
<p>Hermes Desktop is a native app for installing, configuring, and chatting with Hermes Agent without doing everything by hand from the command line.</p>
<p>According to the project’s GitHub repo, it uses the official Hermes install script, stores Hermes under <code>~/.hermes</code>, and provides screens for chat, sessions, profiles, memory, skills, tools, schedules, and messaging gateways.</p>
<p>That makes it less of a simple wrapper and more of a desktop control panel for Hermes.</p>
<h2 id="what-can-the-app-do">What Can the App Do?</h2>
<p>Based on the repo and release notes, Hermes Desktop already goes well beyond a basic chat box.</p>
<ul>
<li>first-run install for Hermes Agent</li>
<li>provider and API key setup</li>
<li>streaming chat with slash commands and tool progress</li>
<li>token usage and cost tracking</li>
<li>session search and resume</li>
<li>separate Hermes profiles</li>
<li>skill, tool, memory, and saved model management</li>
<li>scheduled tasks</li>
<li>messaging gateway setup for multiple platforms</li>
<li>logs, backups, imports, and debug tools</li>
</ul>
<p>Some of that will sound familiar if you have followed tools that make it easier to <a href="https://www.hongkiat.com/blog/setup-openclaw-bot-telegram/">chat with your bot on Telegram</a> instead of keeping everything locked inside a terminal window.</p>
<h2 id="why-this-stands-out">Why This Stands Out</h2>
<p>Hermes Agent itself is powerful, but the official experience is still heavily CLI-driven. That works fine for technical users, but it adds friction for anyone who mainly wants to install the agent, choose a provider, open a chat window, and get started.</p>
<p>Hermes Desktop smooths out that process. Instead of wiring everything together manually, you get one interface for setup, chat, profiles, memory, scheduling, and gateway integrations.</p>
<p>That also makes Hermes easier to compare with other local-first setups, especially if you are already familiar with <a href="https://www.hongkiat.com/blog/ollama-ai-setup-guide/">getting started with Ollama</a> and want something that feels more like a complete agent workspace.</p>
<h2 id="what-providers-and-platforms-are-supported">What Providers and Platforms Are Supported?</h2>
<p>The repo says Hermes Desktop supports a mix of hosted and local model providers, including mainstream API services and OpenAI-compatible local endpoints.</p>
<p>It also ships with local presets for tools like LM Studio, Ollama, vLLM, and <code>llama.cpp</code>.</p>
<p>As for the app itself, current releases are available for macOS and Linux. The repo lists macOS downloads as <code>.dmg</code>, while Linux builds are offered as <code>.AppImage</code> and <code>.deb</code>.</p>
<h2 id="what-it-is-not">What It Is Not</h2>
<p>The branding can blur together fast.</p>
<p>Hermes Agent is the upstream project from Nous Research. Hermes Desktop is a separate interface layer built around it.</p>
<ul>
<li><strong>Hermes Agent</strong> is the core agent</li>
<li><strong>Hermes Desktop</strong> is a third-party native app for using it</li>
</ul>
<p>The difference is useful if you want to know what is official, who maintains the app, and where feature requests or bug reports should go.</p>
<h2 id="should-you-use-it">Should You Use It?</h2>
<p>If you are curious about Hermes Agent but the command-line setup feels like unnecessary friction, Hermes Desktop looks like a strong entry point.</p>
<p>It gives Hermes a real interface, keeps the upstream install flow, and exposes a lot of the project’s useful features without forcing you to memorize commands first.</p>
<p>Just do not mistake it for an official Nous Research desktop release. It is a community-built project around Hermes Agent, and that is part of why people are paying attention to it.</p><p>The post <a href="https://www.hongkiat.com/blog/hermes-desktop-gui-for-hermes-agent/">Hermes Desktop Is a GUI for Hermes Agent</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74384</post-id>	</item>
		<item>
		<title>How to Explain AI to a Friend Who Doesn’t Follow Tech</title>
		<link>https://www.hongkiat.com/blog/explain-ai-to-a-friend/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Internet]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74364</guid>

					<description><![CDATA[<p>A plain-English guide to LLMs, Gemma, quantization, GGUF, and weird model names, without making your non-tech friend regret asking.</p>
<p>The post <a href="https://www.hongkiat.com/blog/explain-ai-to-a-friend/">How to Explain AI to a Friend Who Doesn&#8217;t Follow Tech</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Most people have used ChatGPT by now, but far fewer could explain what is happening when they type a question and get a human-sounding answer back.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/explaning-ai.jpg" alt="Explaining AI to a friend" width="1365" height="768"></figure>
<p>That is where a lot of AI confusion starts. People hear terms like <code>LLM</code>, <code>31B</code>, <code>4-bit</code>, <code>GGUF</code>, <code>LoRA</code>, or <code>uncensored model</code>, and the whole thing starts sounding like a niche obsession for people with too many graphics cards.</p>
<p>It is simpler than it sounds. If you can explain Spotify playlists, ZIP files, and the difference between a general doctor and a specialist, you can explain modern AI too.</p>
<p>This is a plain-English way to do that, especially for large language models, local AI, and the weird filenames that make the whole space look harder than it is.</p>
<h2 id="start-with-this-what-is">Start With This: What Is an LLM?</h2>
<p>An LLM, short for large language model, is software trained on a huge amount of text so it can predict what words should come next.</p>
<p>That sounds underwhelming, until you realize human conversation works a lot like that too. We read, listen, absorb patterns, then respond based on what we have seen before.</p>
<p>An LLM does something similar at a much larger scale. It is trained on huge amounts of text, often including books, websites, documentation, code, and other language data. It does not think like a person, but it gets very good at recognizing patterns in language. That is why it can answer questions, summarize documents, write emails, explain code, or help brainstorm ideas.</p>
<p>The easiest way to explain it to a friend is this:</p>
<blockquote>
<p>An LLM is like a prediction engine for language. It has read an absurd amount of text and learned how words, ideas, and instructions tend to fit together.</p>
</blockquote>
<p>If you want an analogy, picture a friend who has read half the internet and can reply instantly in full sentences. That is the big picture.</p>
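<p>If your friend wants to see the prediction idea in action, a few lines of code capture it in miniature. This toy counts which word follows which in a snippet of text, then predicts the most frequent follower. Real LLMs learn vastly richer patterns than word pairs, but the core move, predicting what comes next from observed patterns, is the same.</p>

```typescript
// Count word-pair frequencies in some text.
function train(text: string): Map<string, Map<string, number>> {
  const words = text.toLowerCase().split(/\s+/);
  const counts = new Map<string, Map<string, number>>();
  for (let i = 0; i < words.length - 1; i++) {
    const followers = counts.get(words[i]) ?? new Map<string, number>();
    followers.set(words[i + 1], (followers.get(words[i + 1]) ?? 0) + 1);
    counts.set(words[i], followers);
  }
  return counts;
}

// Return the follower seen most often after the given word.
function predictNext(
  counts: Map<string, Map<string, number>>,
  word: string
): string | null {
  const followers = counts.get(word.toLowerCase());
  if (!followers) return null;
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

const model = train("the cat sat on the mat and the cat slept");
// After "the", this tiny model has seen "cat" most often.
```

<p>Scale that idea up from word pairs to long-range patterns across trillions of words, and you have the intuition behind an LLM.</p>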
<p>To make that more concrete, <code>Gemma</code> is one example of an open model family in the LLM world. It is Google’s base model line that people can download, run, and build on.</p>
<p>Then you get custom versions built on top of a model family like that. <code>SuperGemma</code> is a good example. It usually means someone took Gemma, fine-tuned it, changed its behavior, and often packaged it for easier local use.</p>
<p>So the relationship is simple:</p>
<ul>
<li><code>LLM</code> is the broad category</li>
<li><code>Gemma</code> is one model family inside that category</li>
<li><code>SuperGemma</code> is a customized version built from that family</li>
</ul>
<p>That framing helps because a lot of AI terms are not separate inventions. They are often layers. First the model type, then the model family, then the customized version.</p>
<h2 id="parameters-are-the-size-of">Parameters Are the Size of the Brain</h2>
<p>When people talk about a model being <code>7B</code>, <code>12B</code>, <code>26B</code>, or <code>31B</code>, they are talking about parameters.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/parameters.jpg" alt="Parameters illustration" width="1088" height="530"></figure>
<p>Parameters are the tiny numerical settings inside the model that got adjusted during training. You can think of them as the model’s internal wiring.</p>
<p>More parameters usually means the model can capture more nuance, hold more patterns, and perform better on harder tasks.</p>
<p>A simple way to explain it:</p>
<ul>
<li>a smaller model is like a smart pocket notebook</li>
<li>a larger model is like a full reference library</li>
</ul>
<p>Both can be useful. The bigger one usually knows more and handles trickier prompts better, but it also needs more memory and more power to run.</p>
<p>So if someone says they are running a <code>31B</code> model, they mean it is a fairly large one with about 31 billion parameters.</p>
<h2 id="dense-models-vs-moe-models">Dense Models vs MoE Models</h2>
<p>A dense model uses all of its brain for every reply. Every time it generates text, all of its parameters are involved.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/dense-model.jpg" alt="Dense model illustration" width="750" height="446"></figure>
<p>A Mixture-of-Experts model, usually shortened to MoE, is more selective. It has different specialist parts, and only some of them wake up for a given task.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/moe-models.jpg" alt="Mixture-of-Experts model illustration" width="734" height="322"></figure>
<p>The restaurant analogy works well here. A dense model is like one chef cooking every dish in the restaurant.</p>
<p>An MoE model is like a kitchen with specialists. If you order pasta, the pasta chef gets involved. If you order sushi, someone else steps in. Not every chef needs to touch every plate, which is why MoE models can feel more efficient. They may have a large total size, but only part of that capacity is active at a time.</p>
<p>If you see something like <code>A4B</code>, it usually means around 4 billion parameters are active for each step, even if the overall model is much larger.</p>
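<p>The routing idea can be sketched in a few lines. The “experts” and router scores below are toys, but the mechanism is real: a router scores every expert, and only the top-scoring ones actually run.</p>

```typescript
// Toy Mixture-of-Experts routing: many experts exist, few are active per step.
type Expert = (x: number) => number;

const experts: Expert[] = [
  (x) => x + 1, // the "pasta chef"
  (x) => x * 2, // the "sushi chef"
  (x) => x - 3, // the "dessert chef"
];

// Indices of the k highest-scoring experts.
function topK(scores: number[], k: number): number[] {
  return scores
    .map((score, i) => [score, i] as const)
    .sort((a, b) => b[0] - a[0])
    .slice(0, k)
    .map(([, i]) => i);
}

// Only the chosen experts do any work; their outputs are averaged.
function moeForward(x: number, routerScores: number[], k = 1): number {
  const chosen = topK(routerScores, k);
  const outputs = chosen.map((i) => experts[i](x));
  return outputs.reduce((sum, v) => sum + v, 0) / outputs.length;
}
```

<p>With <code>k = 1</code> and scores favoring one expert, only that expert touches the input, which is exactly why a huge MoE model can run with just a fraction of its parameters active at a time.</p>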
<h2 id="fine-tuning-is-giving-the">Fine-Tuning Is Giving the Model Extra Lessons</h2>
<p>A base model is the general version. It knows a broad range of things, but it is not always great at a specific style or task.</p>
<p>Fine-tuning is what happens when someone takes that base model and trains it further on a narrower set of examples.</p>
<p>That is how a general model becomes better at coding, roleplay, customer support, medical note formatting, or following instructions in a cleaner way.</p>
<p>Think of it like this: the base model is a smart student with a broad education, and fine-tuning is sending that student to extra classes. Maybe they become better at coding, more conversational, or less likely to refuse edgy prompts. The original brain is still there, just shaped in a more specific direction.</p>
<h2 id="lora-and-qlora-are-the">LoRA and QLoRA Are the Cheap Way to Customize AI</h2>
<p>Training a model from scratch is expensive. That is why most hobbyists and small teams do not build a brand-new model. They start with an existing one and adapt it.</p>
<p>LoRA is one popular way to do that. Instead of retraining the whole model, LoRA keeps most of the original model frozen and adds a much smaller set of trainable layers on top. That cuts the cost dramatically.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/lora.jpg" alt="LoRA illustration" width="732" height="314"></figure>
<p>A clean analogy: full retraining is rewriting an entire textbook, while LoRA is adding a slim companion booklet that updates or extends the original.</p>
<p>QLoRA goes one step further. It first shrinks the model using quantization, then applies the LoRA-style training on top of that smaller version.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/explain-ai-to-a-friend/qlora.jpg" alt="QLoRA illustration" width="756" height="264"></figure>
<p>That is a big reason local AI got more accessible. It let regular people fine-tune strong models on hardware that would have been laughably underpowered a few years ago.</p>
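<p>The booklet analogy maps directly onto the math. In the tiny sketch below, the “frozen” matrix <code>W</code> stands in for the base model’s billions of weights, and only the two skinny matrices <code>A</code> and <code>B</code> would be trained; their product is added on top. The sizes here are toys chosen to keep the arithmetic visible.</p>

```typescript
// LoRA in miniature: effective weights = W + B·A, where only B and A train.
type Matrix = number[][];

function matMul(a: Matrix, b: Matrix): Matrix {
  return a.map((row) =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}

function matAdd(a: Matrix, b: Matrix): Matrix {
  return a.map((row, i) => row.map((v, j) => v + b[i][j]));
}

// Frozen base weights: 4x4 = 16 numbers (a stand-in for billions).
const W: Matrix = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 0, 1],
];

// Rank-1 adapter: 4 + 4 = 8 trainable numbers instead of all 16.
const B: Matrix = [[1], [0], [0], [0]]; // 4x1
const A: Matrix = [[0, 0, 0, 2]];       // 1x4

// What the model actually uses: the original weights plus the update.
const adapted = matAdd(W, matMul(B, A));
```

<p>The saving scales the same way at real sizes: a low-rank adapter on a large layer trains a tiny fraction of the numbers the full layer holds.</p>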
<h2 id="quantization-is-model-compression">Quantization Is Model Compression</h2>
<p>Quantization means storing the model’s numbers with lower precision so the file becomes smaller and easier to run.</p>
<p>The plain-English version is that you are compressing the model. Not in the exact same way as a ZIP file, but close enough for a beginner explanation.</p>
<p>A 16-bit model keeps more precision. A 4-bit model uses fewer bits per value, which makes it much smaller and lighter.</p>
<p>You lose some quality, but often not as much as people expect.</p>
<p>That tradeoff is why <code>4-bit</code> models are so popular. They hit a sweet spot: smaller, faster, and much more practical on laptops.</p>
<p>If you need an analogy, compare it to converting a giant RAW photo into a high-quality JPEG. The file gets much smaller. Some detail is lost, but for everyday use it is often still excellent.</p>
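<p>The JPEG analogy can be made concrete with numbers. This toy maps values onto 16 levels, which is all 4 bits can represent, then maps them back; each value returns close to where it started, and that small error is the quality cost. Real schemes are cleverer about where precision goes, but the storage win comes from exactly this trick.</p>

```typescript
// Toy 4-bit quantization: 2^4 = 16 levels across the range [-1, 1].
const LEVELS = 16;

// Snap a value to the nearest of the 16 levels (stored as 0..15).
function quantize(v: number): number {
  const clamped = Math.max(-1, Math.min(1, v));
  return Math.round(((clamped + 1) / 2) * (LEVELS - 1));
}

// Map a stored level back to a value in [-1, 1].
function dequantize(level: number): number {
  return (level / (LEVELS - 1)) * 2 - 1;
}

const weights = [0.91, -0.37, 0.05];
const roundTripped = weights.map((w) => dequantize(quantize(w)));
// Each value comes back slightly off; with 16 levels the worst-case
// error is half a step, about 0.067 on this range.
```

<p>Fewer distinct levels means fewer bits per number, which is where the smaller file size comes from.</p>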
<h2 id="gguf-is-the-ready-to">GGUF Is the Ready-to-Run File</h2>
<p>Once people start downloading local models, they run into file formats.</p>
<p><code>GGUF</code> is one of the big ones.</p>
<p>The easiest way to explain it is this: GGUF is a packaging format for local models, especially quantized ones.</p>
<p>It packages the model in a way that tools like <code>llama.cpp</code> and LM Studio can load easily.</p>
<p>For non-technical friends, I would just say:</p>
<blockquote>
<p>GGUF is the version of the model that has been packed for convenient local use.</p>
</blockquote>
<p>If the full original model is a warehouse full of parts, GGUF is the neatly packed version that is easier to move and run.</p>
<h2 id="how-to-read-weird-model">How to Read Weird Model Names Without Panicking</h2>
<p>This is the part that makes local AI look more mysterious than it really is.</p>
<p>Take a name like:</p>
<p><code>google/gemma-4-26B-A4B-it</code></p>
<p>Or:</p>
<p><code>Jiunsong/supergemma4-26b-uncensored-gguf-v2-Q4_K_M.gguf</code></p>
<p>It looks chaotic, but it is mostly labels stacked together. Here is how to read them.</p>
<h4 id="the-creator-name">1. The Creator Name</h4>
<p>The part before the slash tells you who published it.</p>
<p><code>google</code> means the official release came from Google, while <code>Jiunsong</code> means it is a community release from that user or team.</p>
<h4 id="the-model-family">2. The Model Family</h4>
<p><code>gemma-4</code> or <code>supergemma4</code> tells you which model line it belongs to.</p>
<p>That is similar to saying iPhone 16, Galaxy S26, or ThinkPad X1. You can also think of it like a car name such as Honda Civic or BMW 3 Series. It tells you the family and generation before you get into the engine, trim, or extras.</p>
<h4 id="the-size">3. The Size</h4>
<p><code>26B</code> means about 26 billion parameters.</p>
<p>That gives you a rough sense of the model’s scale.</p>
<h4 id="the-architecture-detail">4. The Architecture Detail</h4>
<p><code>A4B</code> usually points to the active parameter count in a Mixture of Experts (MoE) setup.</p>
<p>So while the full model may be larger, around 4 billion parameters are actively doing work at a time.</p>
<h4 id="the-behavior-or-tuning">5. The Behavior or Tuning</h4>
<p><code>it</code> usually means instruction-tuned. In other words, it was trained to follow prompts and behave more like a helpful assistant.</p>
<p><code>instruct</code> usually signals the same idea.</p>
<p><code>uncensored</code> usually means the model has fewer refusal rules.</p>
<h4 id="the-format">6. The Format</h4>
<p><code>gguf</code> means it is packaged for local running.</p>
<h4 id="the-quantization">7. The Quantization</h4>
<p><code>Q4_K_M</code> is the quantization method.</p>
<p>For beginners, the key detail is simple: it is a 4-bit version, and that usually means a good balance between quality and file size.</p>
<p>So when someone says they are running <code>SuperGemma 26B Q4_K_M GGUF</code>, what they usually mean is:</p>
<blockquote>
<p>I am using a 26-billion-parameter custom Gemma model that has been compressed into a practical local file.</p>
</blockquote>
<p>That sentence alone clears up a lot.</p>
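<p>If it helps to see those labels pulled apart mechanically, here is a toy JavaScript sketch. It only handles the hypothetical community file name from earlier, and the patterns are illustrations rather than a robust parser, since real model names are messier:</p>
<pre>
// Toy label-splitter for the hypothetical file name used above.
// The patterns mirror the reading order from this section; they are
// not a general-purpose parser.
const fileName = 'Jiunsong/supergemma4-26b-uncensored-gguf-v2-Q4_K_M.gguf';

const [creator, rest] = fileName.split('/');   // 1. who published it
const size = rest.match(/\d+b/i)?.[0];         // 3. parameter count
const quant = rest.match(/Q\d+_K_[SML]/)?.[0]; // 7. quantization method
const isGguf = rest.endsWith('.gguf');         // 6. packaged for local use

console.log({ creator, size, quant, isGguf });
// { creator: 'Jiunsong', size: '26b', quant: 'Q4_K_M', isGguf: true }
</pre>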
<h2 id="what-actually-happens-from-lab">What Actually Happens From Lab to Laptop</h2>
<p>If you want the full journey in one pass, it usually looks like this:</p>
<ol>
<li>A big lab trains the original base model.</li>
<li>They release it publicly, or at least release weights people can use.</li>
<li>Other developers fine-tune it, compress it, and package it.</li>
<li>You download the version that fits your hardware.</li>
<li>A local app such as Ollama or LM Studio runs it on your machine. If you want a more containerized setup, <a href="https://www.hongkiat.com/blog/docker-llm-setup-guide/">this Docker LLM setup guide</a> shows another route.</li>
</ol>
<p>That is the pipeline. The AI assistant on somebody’s laptop is often just the final step of a long chain of training, adaptation, and compression.</p>
<h2 id="why-local-ai-clicks-for">Why Local AI Clicks for Regular People</h2>
<p>For most people, the appeal of local AI comes down to three things.</p>
<p>First, privacy. Your prompts and files can stay on your own machine.</p>
<p>Second, cost. Once the model is downloaded, you are not paying per message.</p>
<p>Third, control. You can choose a model that fits your style, hardware, and tolerance for safety filters.</p>
<p>That is why local AI keeps pulling in curious tinkerers, developers, and people who simply do not want all their work flowing through somebody else’s cloud. For a practical example, <a href="https://www.hongkiat.com/blog/local-llm-setup-optimization-lm-studio/">running LLMs locally with LM Studio</a> shows why that tradeoff feels worth it for many people.</p>
<h2 id="a-simple-way-to-explain">A Simple Way to Explain It to a Friend</h2>
<p>If your friend zones out the moment you say “transformer architecture,” skip the jargon and use this version instead:</p>
<blockquote>
<p>AI chatbots like ChatGPT run on language models. These models are trained on huge amounts of text so they can predict and generate useful replies. Bigger models are usually smarter, smaller ones are easier to run, and people often compress or customize them so they can work on regular laptops.</p>
</blockquote>
<p>That gets you most of the way there.</p>
<p>If they are still curious, add this:</p>
<blockquote>
<p>File names that look scary are usually just labels telling you who made the model, how big it is, whether it was customized, and whether it has been compressed for local use.</p>
</blockquote>
<p>That is usually the moment the whole thing stops looking mysterious.</p>
<h2 id="where-beginners-should-start">Where Beginners Should Start</h2>
<p>If someone wants to try local AI without turning it into a weekend project, I would keep it simple.</p>
<p>Start with Ollama or LM Studio. If you need a practical walkthrough, <a href="https://www.hongkiat.com/blog/run-llm-locally-lm-studio/">this guide to running an LLM locally with LM Studio</a> is a useful companion. Then pick an instruction-tuned model. If you are on a decent laptop, a 4-bit quantized model is usually the safest starting point.</p>
<p>If the file name still looks intimidating, break it into parts instead of trying to decode it all at once. That is how most people learn it, one label at a time.</p>
<h2 id="final-thought">Final Thought</h2>
<p>You do not need to understand every acronym in AI to talk about it intelligently. You just need a clean mental model.</p>
<p>An LLM is a language prediction engine trained on a huge amount of text. Parameters tell you how big it is, fine-tuning changes its behavior, quantization shrinks it, and GGUF packages it. Those long model names are mostly just specs.</p>
<p>Once you see it that way, AI gets a lot less mysterious. It starts looking like what it really is: software, packaging, tradeoffs, and a lot of labels.</p><p>The post <a href="https://www.hongkiat.com/blog/explain-ai-to-a-friend/">How to Explain AI to a Friend Who Doesn&#8217;t Follow Tech</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74364</post-id>	</item>
		<item>
		<title>What’s in WordPress 7.0</title>
		<link>https://www.hongkiat.com/blog/whats-coming-in-wordpress-7/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 15:00:00 +0000</pubDate>
				<category><![CDATA[WordPress]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74381</guid>

					<description><![CDATA[<p>WordPress 7.0 adds real-time collaboration, a Connectors API, a built-in AI Client, and a few compatibility changes developers should prepare for.</p>
<p>The post <a href="https://www.hongkiat.com/blog/whats-coming-in-wordpress-7/">What&#8217;s in WordPress 7.0</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>WordPress 7.0 brings several major changes for developers, site owners, and content teams.</p>
<p>This release adds real-time collaboration, extends the <a href="https://www.hongkiat.com/blog/all-you-need-to-know-about-wordpress-gutenberg-editor/">Gutenberg editor</a>, introduces new AI infrastructure, and changes a few long-standing WordPress conventions.</p>
<p>Here’s what is coming and what to prepare for.</p>
<h2>1. Real-time collaboration in the block editor</h2>
<p>The centerpiece of WordPress 7.0 is <strong><a rel="nofollow noopener" target="_blank" href="https://make.wordpress.org/core/2023/07/03/real-time-collaboration/">real-time collaboration (RTC)</a></strong>.</p>
<p>Multiple users can edit the same post simultaneously, with changes syncing instantly across all editors. You’ll see cursors, selections, and edits from other users in real time.</p>
<p>It handles conflict resolution gracefully: when two people edit the same paragraph, their changes merge intelligently rather than overwriting each other.</p>
<h3>Is it enabled by default?</h3>
<p><strong>No.</strong></p>
<p>For security and compatibility reasons, real-time collaboration is <strong>opt-in</strong> rather than enabled by default. Site administrators must explicitly enable it for their sites.</p>
<ol>
<li>Go to <strong>Settings > Writing</strong> in your WordPress admin dashboard</li>
<li>Scroll to the “Collaboration” section</li>
<li>Check the box labeled <strong>“Enable real-time collaboration in the block editor”</strong></li>
<li>Click <strong>Save Changes</strong></li>
</ol>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/whats-coming-in-wordpress-7/settings-collaborations.jpg" alt="WordPress Writing settings showing the Collaboration section" width="1000" height="600">
    </figure>
<p>For multisite networks, network administrators can control whether real-time collaboration is available to site administrators through network settings.</p>
<p>Once enabled, you’ll see collaboration features appear in the block editor. You’ll need at least two user accounts with editing permissions to test the collaborative features properly.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/whats-coming-in-wordpress-7/block-editor-collaboration.jpg" alt="WordPress block editor with multiple collaborators editing a post" width="1000" height="600">
    </figure>
<h2>2. PHP-only block registration</h2>
<p>WordPress 7.0 removes a major barrier for developers who want to build blocks without a JavaScript-heavy workflow.</p>
<p><strong>You can now register blocks using only PHP</strong>, without needing React, Node.js, or a build toolchain. That opens block development to traditional PHP WordPress developers who have avoided it because of the JavaScript complexity.</p>
<p>With PHP-only block registration, you write your block in PHP and WordPress automatically generates the inspector controls (the settings panel in the editor sidebar) for you. This is perfect for blocks that don’t need complex client-side interactivity.</p>
<p>Here’s a basic example of registering a block with PHP:</p>
<pre>
add_action( 'init', function() {
    register_block_type( 'my-plugin/my-block', array(
        'api_version' => 3,
        'title'       => __( 'My Custom Block', 'my-plugin' ),
        'description' => __( 'A simple block registered with PHP.', 'my-plugin' ),
        'category'    => 'widgets',
        'icon'        => 'smiley',
        'supports'    => array(
            'html' => false,
        ),
        'attributes'  => array(
            'content' => array(
                'type'    => 'string',
                'default' => '',
            ),
            'alignment' => array(
                'type'    => 'string',
                'default' => 'none',
            ),
        ),
        'render_callback' => function( $attributes, $content, $block ) {
            $classes = array( 'my-custom-block' );
            if ( ! empty( $attributes['alignment'] ) ) {
                $classes[] = 'has-text-align-' . $attributes['alignment'];
            }
            
            return sprintf(
                '&lt;div class="%s"&gt;%s&lt;/div&gt;',
                esc_attr( implode( ' ', $classes ) ),
                wp_kses_post( $attributes['content'] )
            );
        },
    ) );
} );
</pre>
<p>For more complex blocks, you can still mix PHP registration with JavaScript for the editor interface. But for simple content blocks, PHP-only registration means faster development, lighter plugins, and no build toolchain headaches.</p>
<h2>3. Introducing the Connectors API</h2>
<p>WordPress 7.0 introduces the <strong><a rel="nofollow noopener" target="_blank" href="https://make.wordpress.org/core/2026/03/18/introducing-the-connectors-api-in-wordpress-7-0/">Connectors API</a></strong>.</p>
<p>This is a new framework for registering and managing connections to external services, providing standardized API key management, provider discovery, and admin UI for configuring services.</p>
<p>The Connectors API works hand-in-hand with the built-in AI Client. It automatically discovers AI providers from the WP AI Client registry and creates connectors with proper metadata. Plugins using the AI Client do not need to handle credentials directly. They describe what they need, and WordPress routes requests to configured providers.</p>
<p>Plugins can register custom connectors or override existing ones using the <code>wp_connectors_init</code> action hook.</p>
<p>Here’s a basic example of registering a custom connector:</p>
<pre>
add_action( 'wp_connectors_init', function ( $registry ) {
    $connector = array(
        'name'           => 'My Custom Service',
        'description'    => 'Connect to my custom API service.',
        'type'           => 'custom_provider',
        'authentication' => array(
            'method'          => 'api_key',
            'credentials_url' => 'https://example.com/api-keys',
            'setting_name'    => 'connectors_custom_my_service_api_key',
        ),
        'plugin'         => array(
            'file' => 'my-custom-service/plugin.php',
        ),
    );
    
    $registry->register( 'my_custom_service', $connector );
} );
</pre>
<p>The API provides three main functions for developers:</p>
<pre>
// Check if a connector is registered
if ( wp_is_connector_registered( 'anthropic' ) ) {
    // The Anthropic connector is available
}

// Get a single connector's data
$connector = wp_get_connector( 'anthropic' );
if ( $connector ) {
    echo $connector['name']; // 'Anthropic'
}

// Get all registered connectors
$connectors = wp_get_connectors();
foreach ( $connectors as $id => $connector ) {
    printf( '%s: %s', $connector['name'], $connector['description'] );
}
</pre>
<p>API keys can be provided via environment variables, PHP constants, or database settings.</p>
<p>WordPress already ships with an example Connectors implementation, which you can find under <strong>Settings > Connectors</strong> in the admin.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/whats-coming-in-wordpress-7/connectors.jpg" alt="WordPress Connectors settings screen in the admin area" width="1000" height="600">
    </figure>
<p>This API is designed to expand beyond AI providers to support payment gateways, social media integrations, and other external services in future releases. That should make it easier for more plugins to plug into the same connection model.</p>
<h2>4. Unified AI interface</h2>
<p>WordPress 7.0 includes a built-in AI Client that provides a provider-agnostic PHP API for plugins to send prompts to AI models and receive results through a consistent interface. This is the engine that powers AI features across WordPress, working hand-in-hand with the Connectors API.</p>
<p>The AI Client handles provider communication, model selection, and response normalization. Your plugin describes what it needs, and WordPress routes the request to a suitable model from a provider the site owner has configured.</p>
<p>Every interaction starts with the <code>wp_ai_client_prompt()</code> function:</p>
<pre>
// Basic text generation
$text = wp_ai_client_prompt( 'What is the capital of France?' )
    ->using_temperature( 0.8 )
    ->generate_text();

if ( is_wp_error( $text ) ) {
    // Handle error
    return;
}

echo wp_kses_post( $text );
</pre>
<p>The AI Client supports multiple modalities, for example, image generation:</p>
<pre>
$image_file = wp_ai_client_prompt( 'A futuristic WordPress logo in neon style' )
    ->generate_image();

if ( is_wp_error( $image_file ) ) {
    return;
}

echo '&lt;img src="' . esc_url( $image_file->getDataUri() ) . '" alt="">';

// JSON-structured responses.
$schema = array(
    'type'  => 'array',
    'items' => array(
        'type'       => 'object',
        'properties' => array(
            'plugin_name' => array( 'type' => 'string' ),
            'category'    => array( 'type' => 'string' ),
        ),
        'required' => array( 'plugin_name', 'category' ),
    ),
);

$json = wp_ai_client_prompt( 'List 5 popular WordPress plugins with their primary category.' )
    ->as_json_response( $schema )
    ->generate_text();
</pre>
<p>Before showing AI-powered UI, check whether the feature can work:</p>
<pre>
$builder = wp_ai_client_prompt( 'test' )
    ->using_temperature( 0.7 );

if ( $builder->is_supported_for_text_generation() ) {
    // Safe to show text generation UI
}
</pre>
<p>These checks use deterministic logic to match the builder’s configuration against the capabilities of available models. They don’t make API calls, so they’re fast and cost nothing. If you want a related developer-facing example of how WordPress is exposing structured capabilities, the <a href="https://www.hongkiat.com/blog/wordpress-abilities-api-tutorial/">WordPress Abilities API</a> is a useful companion.</p>
<h3>AI Provider Plugins</h3>
<p>Keep in mind that the AI Client architecture <strong>consists of two layers</strong>:</p>
<ol>
<li><strong>PHP AI Client</strong>: A provider-agnostic PHP SDK bundled in Core as an external library. This handles provider communication, model selection, and response normalization.</li>
<li><strong>WordPress wrapper</strong>: Core’s <code>WP_AI_Client_Prompt_Builder</code> class wraps the PHP AI Client with WordPress conventions: snake_case methods, <code>WP_Error</code> returns, and integration with WordPress HTTP transport, the Abilities API, the Connectors infrastructure, and the WordPress hooks system.</li>
</ol>
<p>WordPress Core doesn’t bundle any AI providers directly. Instead, they’re developed and maintained as plugins, which allows for more flexible and rapid iteration. The WordPress project has developed three initial flagship implementations:</p>
<ul>
<li><a rel="nofollow noopener" target="_blank" href="https://wordpress.org/plugins/ai-provider-for-anthropic/">AI Provider for Anthropic</a></li>
<li><a rel="nofollow noopener" target="_blank" href="https://wordpress.org/plugins/ai-provider-for-google/">AI Provider for Google</a></li>
<li><a rel="nofollow noopener" target="_blank" href="https://wordpress.org/plugins/ai-provider-for-openai/">AI Provider for OpenAI</a></li>
</ul>
<p>For developers who have been using the <code>wordpress/php-ai-client</code> or <code>wordpress/wp-ai-client</code> packages, the simplest path is to update your plugin’s “Requires at least” header to 7.0 and replace any <code>AI_Client::prompt()</code> calls with <code>wp_ai_client_prompt()</code>.</p>
<h2>5. No new default theme</h2>
<p><strong>WordPress 7.0 breaks with tradition.</strong></p>
<p>There will be <strong>no “Twenty Twenty-Six”</strong> default theme. The focus shifts to improving existing block themes like Twenty Twenty-Five through the Site Editor and Phase 3 collaboration tools.</p>
<p>This change signals a maturing approach to WordPress theming. The goal is to show users that you don’t need a new theme every year. You can evolve the one you have <a rel="nofollow noopener" target="_blank" href="https://wordpress.org/documentation/article/site-editor/">using the Site Editor</a>.</p>
<p>This reflects a broader trend in WordPress development: moving from rigid, theme-controlled designs to flexible, user-customizable layouts. With the Site Editor, users can modify templates, create custom patterns, and adjust styles without touching code. A new default theme each year becomes less necessary when users have these tools at their fingertips.</p>
<h2>Breaking changes and compatibility requirements</h2>
<p>Real-time collaboration introduces significant compatibility requirements that plugin and theme developers must address.</p>
<h3>1. Minimum PHP version bump to 7.4</h3>
<p>WordPress 7.0 raises the minimum supported PHP version to 7.4, dropping support for PHP 7.2 and 7.3. This change is necessary to support modern libraries required for collaboration features and AI APIs.</p>
<p>The WordPress core team recommends PHP 8.2 or 8.3 for best performance and security. If your sites run PHP 7.2 or 7.3, you need to upgrade before installing WordPress 7.0. Test the upgrade thoroughly in a <a href="https://www.hongkiat.com/blog/staging-wordpress-development/">staging environment</a> first. Check for deprecated functions, incompatible plugins, and theme issues.</p>
<p>Here’s how to check your current PHP version from the command line:</p>
<pre>
php -v
</pre>
<p>Most hosting providers offer PHP version selection in their control panels. If you’re on shared hosting, check your provider’s documentation for how to switch PHP versions. Some hosts may automatically update sites to compatible versions, but it’s better to test first.</p>
<h3>2. Meta boxes disable collaboration</h3>
<p>This is likely the biggest compatibility issue in WordPress 7.0 for many existing plugins.</p>
<p><strong>The real-time collaboration feature is automatically disabled when classic meta boxes are present on a post</strong>. Since the system can’t sync classic meta box content, it turns off collaboration entirely when meta boxes are detected.</p>
<p>This affects thousands of plugins that still use the traditional meta box approach for custom fields and settings. If your plugin adds meta boxes, users won’t be able to use real-time collaboration on posts where those meta boxes appear.</p>
<p>The solution is to migrate from meta boxes to registered post meta with <code>show_in_rest: true</code>. This allows the data to sync through the WordPress REST API, which the collaboration system can track.</p>
<p>Here’s an example of migrating from a traditional meta box to registered post meta:</p>
<pre>
// OLD: Traditional meta box approach (breaks collaboration)
add_action( 'add_meta_boxes', function() {
    add_meta_box(
        'my_custom_field',
        'Custom Field',
        'render_my_custom_field',
        'post',
        'side',
        'high'
    );
} );

function render_my_custom_field( $post ) {
    $value = get_post_meta( $post->ID, '_my_custom_field', true );
    echo '&lt;input type="text" name="my_custom_field" value="' . esc_attr( $value ) . '" />';
}

add_action( 'save_post', function( $post_id ) {
    if ( isset( $_POST['my_custom_field'] ) ) {
        update_post_meta( $post_id, '_my_custom_field', sanitize_text_field( $_POST['my_custom_field'] ) );
    }
} );

// NEW: Registered post meta (works with collaboration)
add_action( 'init', function() {
    register_post_meta( 'post', '_my_custom_field', array(
        'type'         => 'string',
        'single'       => true,
        'show_in_rest' => true, // REQUIRED for collaboration
        'auth_callback' => function() {
            return current_user_can( 'edit_posts' );
        }
    ) );
} );

// Use in block editor with useSelect
import { useSelect } from '@wordpress/data';
import { store as coreStore } from '@wordpress/core-data';

function MyCustomFieldComponent( { postId } ) {
    const metaValue = useSelect( ( select ) => {
        return select( coreStore ).getEditedEntityRecord( 'postType', 'post', postId )?.meta?._my_custom_field || '';
    }, [ postId ] );
    
    // Render your field component
}
</pre>
<p>The <a rel="nofollow noopener" target="_blank" href="https://developer.wordpress.org/block-editor/how-to-guides/metabox/">Block Editor Handbook has a complete migration guide</a> that walks through the process in detail.</p>
<h3>3. Plugin architecture requirements</h3>
<p>Beyond meta boxes, plugins need to follow specific patterns to work correctly with real-time collaboration. Custom meta field interfaces must use the WordPress data store via <code>useSelect</code> instead of local React state. If you copy store data into component state, your UI won’t update when other collaborators make changes.</p>
<p>Here’s the difference between the wrong approach (local state) and the right approach (useSelect):</p>
<pre>
// WRONG: Local state breaks collaboration
import { useState, useEffect } from '@wordpress/element';
import apiFetch from '@wordpress/api-fetch';

function WrongComponent( { postId } ) {
    const [metaValue, setMetaValue] = useState( '' );
    
    // This only loads once and won't update when collaborators change the value
    useEffect( () => {
        apiFetch( { path: `/wp/v2/posts/${postId}` } ).then( ( post ) => {
            setMetaValue( post.meta._my_custom_field || '' );
        } );
    }, [postId] );
    
    return &lt;div&gt;{metaValue}&lt;/div&gt;;
}

// RIGHT: useSelect enables real-time updates
import { useSelect } from '@wordpress/data';
import { store as coreStore } from '@wordpress/core-data';

function RightComponent( { postId } ) {
    // This automatically updates when any collaborator changes the value
    const metaValue = useSelect( ( select ) => {
        const post = select( coreStore ).getEditedEntityRecord( 'postType', 'post', postId );
        return post?.meta?._my_custom_field || '';
    }, [ postId ] );
    
    return &lt;div&gt;{metaValue}&lt;/div&gt;;
}
</pre>
<p>The key difference is that <code>useSelect</code> subscribes to the WordPress data store, which is synchronized across all collaborators. Local state only reflects the initial value and won’t update when others make changes.</p>
<p>Blocks with side effects on insertion need special consideration too. Since block content syncs immediately to all collaborators, auto-opening modals or triggering animations on insertion will affect everyone editing the post. The recommendation is to use placeholders with explicit user actions instead of automatic behaviors.</p>
<h2>What’s Next?</h2>
<p>WordPress 7.0 represents a bold step forward for the platform.</p>
<p>Real-time collaboration transforms WordPress from a tool for an individual blogger into a platform for a team. The architectural changes required to make this work will have ripple effects through the plugin and theme ecosystem, but the result is a more modern, capable content management system.</p>
<p>As you prepare for WordPress 7.0, <strong>focus on meta box migration first</strong>. For many plugins, that is the most immediate compatibility issue. Then <strong>test your interfaces in collaborative mode</strong> to catch synchronization problems.</p>
<p>Putting in that effort now will ensure your sites and plugins keep working seamlessly once WordPress 7.0 lands.</p>
<p>The post <a href="https://www.hongkiat.com/blog/whats-coming-in-wordpress-7/">What&#8217;s in WordPress 7.0</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74381</post-id>	</item>
		<item>
		<title>How to Install Ollama on a Synology NAS</title>
		<link>https://www.hongkiat.com/blog/install-ollama-on-synology-nas/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 13:00:32 +0000</pubDate>
				<category><![CDATA[Internet]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74349</guid>

					<description><![CDATA[<p>A practical guide to running Ollama on a Synology NAS with Container Manager, realistic hardware expectations, and small-model recommendations.</p>
<p>The post <a href="https://www.hongkiat.com/blog/install-ollama-on-synology-nas/">How to Install Ollama on a Synology NAS</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If you already own a Synology NAS, you have probably wondered whether it can do more than backups, file storage, and media streaming.</p>
<p>It can. Not like a GPU server, and not with giant models, but well enough to run a small private LLM at home.</p>
<p>In this guide, I am using the <strong>Synology DS925+</strong> as the example system. The steps are not exclusive to that model, but it is a useful reference point because it has a modern AMD Ryzen CPU, supports memory upgrades, and sits in the range where local AI becomes realistic if you keep your expectations aligned with the hardware. If you are completely new to Ollama, <a href="https://www.hongkiat.com/blog/ollama-ai-setup-guide/">Getting Started with Ollama</a> is a good companion piece before you set this up on a NAS.</p>
<p>The short version is simple: yes, you can run <strong>Ollama</strong> on a DS925+ with <strong>Container Manager</strong>, and yes, it is a practical way to host a small private AI model on your own network.</p>
<h2 id="suitable-models">Which Synology NAS Models and Ollama Models Are Suitable?</h2>
<p>Hardware is the deciding factor here.</p>
<p>Ollama is best suited to <strong>Synology NAS models that support Container Manager</strong> and have enough RAM for small models. In practice, many x86-based Plus models are the most realistic candidates.</p>
<ul>
<li><strong>x86 CPU</strong>, not entry-level ARM hardware</li>
<li><strong>8GB RAM</strong> is a bare minimum for very small models</li>
<li><strong>16GB or more</strong> is a more comfortable starting point</li>
</ul>
<p>As for LLMs, stay realistic. A CPU-only NAS is better suited to <strong>small models</strong>; avoid 7B-and-up models unless you are comfortable with slow responses.</p>
<p>Good starting picks:</p>
<ul>
<li><strong><code>llama3.2:1b</code></strong> for very light use</li>
<li><strong><code>llama3.2:3b</code></strong> as the best default for most people</li>
<li><strong><code>smollm:1.7b</code></strong> if memory is tight</li>
<li><strong><code>qwen2.5-coder:1.5b</code></strong> or <strong><code>qwen2.5-coder:3b</code></strong> for coding tasks</li>
<li><strong><code>gemma3:1b</code></strong> if you want another compact model to test</li>
</ul>
<p>If this is your first time running Ollama on a Synology NAS, start with <strong><code>llama3.2:3b</code></strong>. It is the cleanest balance between capability and realism for this class of hardware.</p>
<h2 id="why-ollama">Why Ollama?</h2>
<p>Ollama removes a lot of the usual friction.</p>
<p>Instead of manually wiring together model files, runtime settings, and a serving layer, you get a simpler way to download models and serve them through a local endpoint.</p>
<p>That makes it a good match for a Synology box, where the goal is usually to get something useful running without turning the NAS into a weekend-long infrastructure project.</p>
<p>If you expect to keep several models around, this guide on <a href="https://www.hongkiat.com/blog/ollama-llm-from-external-drive/">running LLMs from an external drive with Ollama</a> is useful for thinking about storage before your model library starts growing.</p>
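<p>To show what serving models through a local endpoint means in practice, here is a minimal sketch of querying Ollama over HTTP from another machine on your network. It assumes the container setup later in this guide maps Ollama&#8217;s default port, <code>11434</code>, and it reads the NAS address from a hypothetical <code>NAS_ADDR</code> environment variable; the <code>/api/generate</code> endpoint and its fields are part of Ollama&#8217;s standard HTTP API:</p>
<pre>
// Minimal sketch: ask a model running on the NAS for a completion.
// Set NAS_ADDR to your NAS's LAN address (for example 192.168.1.50)
// before running. Requires Node 18+ for the built-in fetch.
const body = JSON.stringify({
  model: 'llama3.2:3b',
  prompt: 'In one sentence, why run an LLM on a NAS?',
  stream: false, // one JSON object instead of a token stream
});

async function askOllama(host) {
  const res = await fetch(`http://${host}:11434/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  });
  const data = await res.json();
  console.log(data.response);
}

if (process.env.NAS_ADDR) {
  askOllama(process.env.NAS_ADDR).catch((e) => console.error(e.message));
}
</pre>
<p>Nothing leaves your network: the prompt goes to the NAS, and the reply comes straight back.</p>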
<h2 id="install-container-manager">Step 1: Install Container Manager</h2>
<p>Synology does not currently offer Ollama as a one-click package in Package Center, so the easiest route is to run it in a container.</p>
<p>To install <strong>Container Manager</strong>:</p>
<ol>
<li>Open <strong>Package Center</strong> in DSM.</li>
<li>Search for <strong>Container Manager</strong>.</li>
<li>Click <strong>Install</strong>.</li>
</ol>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/install-container-manager.jpg" width="2116" height="1154" alt="Install Container Manager"></figure>
<p>If you used Docker on older Synology systems, this is the same general idea. Container Manager is Synology’s newer interface for running containerized apps.</p>
<h2 id="download-image">Step 2: Download the Ollama Image</h2>
<p>Once Container Manager is installed:</p>
<ol>
<li>Open <strong>Container Manager</strong>.</li>
<li>Go to the <strong>Registry</strong> tab.</li>
<li>Search for <code>ollama</code>.</li>
<li>Select the official image, <strong><code>ollama/ollama</code></strong>.</li>
<li>Right-click it and choose <strong>Download this image</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/download-ollama.jpg" width="2386" height="1158" alt="Download Ollama image"></figure>
</li>
<li>When asked to choose a tag, select <strong>latest</strong> and click <strong>Download</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/download-latest-ollama.jpg" width="2388" height="1158" alt="Download latest tag"></figure>
</li>
</ol>
<h2 id="create-container">Step 3: Create the Container</h2>
<p>Once the image is downloaded, go to the <strong>Image</strong> tab, select <strong><code>ollama/ollama:latest</code></strong>, then right-click and select <strong>Run</strong>.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/run-ollama.jpg" width="1530" height="812" alt="Run Ollama container"></figure>
<p>Use these settings for the smoothest setup:</p>
<ul>
<li><strong>Container name:</strong> <code>ollama</code></li>
<li><strong>Auto-restart:</strong> Enable it so the container starts again if your NAS reboots.</li>
</ul>
<p>Then click <strong>Next</strong> to continue.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/general-settings.jpg" width="1466" height="1142" alt="General settings"></figure>
<h3 id="volume-settings">Volume Settings</h3>
<p>Click <strong>Add Folder</strong> and create a folder such as <code>docker/ollama</code>.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/docker-ollama.jpg" width="1470" height="1148" alt="Create docker folder"></figure>
<p>Then mount it to <code>/root/.ollama</code>.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/root-ollama.jpg" width="1388" height="348" alt="Mount root ollama"></figure>
<p>That is critical. It keeps downloaded models on your storage volume, so they do not disappear when you update or recreate the container.</p>
<h3 id="port-settings">Port Settings</h3>
<p>Map <code>11434</code> on the NAS to <code>11434</code> in the container.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/port-settings.jpg" width="1414" height="372" alt="Port settings"></figure>
<h3 id="environment-variable">Environment</h3>
<p>Add <code>OLLAMA_ORIGINS</code> with the value <code>*</code>. This allows other apps, such as Open WebUI, to talk to Ollama.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/ollama-origin.jpg" width="1398" height="662" alt="Ollama origins setting"></figure>
<p>Check to make sure everything is added correctly, then click <strong>Next</strong>, then <strong>Done</strong>.</p>
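<p>If you would rather define the container once instead of clicking through the wizard, the same settings can also be sketched as a Docker Compose file, which Container Manager&#8217;s <strong>Project</strong> tab accepts. This is a minimal sketch, not an official config: it assumes the <code>docker/ollama</code> folder created earlier lives on <code>/volume1</code>, so adjust the path if your shared folder sits on a different volume.</p>
<pre># docker-compose.yml sketch mirroring the wizard settings above
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped        # equivalent of enabling auto-restart
    ports:
      - "11434:11434"              # NAS port : container port
    environment:
      - OLLAMA_ORIGINS=*           # lets other apps talk to Ollama
    volumes:
      - /volume1/docker/ollama:/root/.ollama   # keeps models on your storage volume</pre>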
<h2 id="talk-to-engine">Step 4: How to Talk to the Engine</h2>
<p>Once the container is running, Ollama is just the engine. To download your first model and start using it:</p>
<ol>
<li>Go to the <strong>Container</strong> tab in Container Manager.</li>
<li>Select the <code>ollama</code> container, then right-click and choose <strong>Details</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/ollama-details.jpg" width="2088" height="1070" alt="Ollama container details"></figure>
</li>
<li>Click the <strong>Action</strong> dropdown on the top right, then select <strong>Open terminal</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/open-terminal.jpg" width="1100" height="840" alt="Open terminal"></figure>
</li>
<li>Click the small arrow beside <strong>Create</strong>, then choose <strong>Launch with command</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/launch-with-command.jpg" width="1846" height="752" alt="Launch with command"></figure>
</li>
<li>Type <code>sh</code> and press <strong>Enter</strong>.
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/sh-ok.jpg" width="1222" height="598" alt="Shell command entered"></figure>
</li>
</ol>
<p>Select the <code>sh</code> session from the left sidebar. In the terminal window that opens, run:</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/run-ollama-terminal.jpg" width="1846" height="458" alt="Run Ollama in terminal"></figure>
<pre>ollama run llama3.2:3b</pre>
<p>This downloads the 3-billion-parameter Llama 3.2 model, roughly 2GB in size, and lets you start chatting directly in that same terminal window.</p>
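<p>A few other standard Ollama commands are handy in that same shell once the first model is in place. The model names here are just the examples suggested earlier in this guide:</p>
<pre># See which models are stored locally
ollama list

# Download a model without starting a chat session
ollama pull qwen2.5-coder:1.5b

# Remove a model you no longer need to free up space
ollama rm gemma3:1b</pre>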
<h2 id="how-to-check">How to Tell if It Worked</h2>
<p>If the model finishes downloading and you get an interactive prompt, the setup is working.</p>
<p>Try something simple:</p>
<pre>Write a short explanation of what a NAS does.</pre>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/what-nas-does.jpg" width="1848" height="1022" alt="NAS explanation test"></figure>
<p>You can also test whether the service is reachable from another device on your network through port <code>11434</code>, which many Ollama-compatible apps and front ends use. Once you have the basics running, <a href="https://www.hongkiat.com/blog/vision-enabled-models-ollama-guide/">these vision-enabled Ollama experiments</a> are a nice next step if you want to do more than plain text chat.</p>
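<p>A quick way to run that reachability check is to hit Ollama&#8217;s HTTP API directly from another machine on your network. Replace <code>NAS_IP</code> with your NAS address; both endpoints below are part of Ollama&#8217;s standard API:</p>
<pre># List the models the server has downloaded
curl http://NAS_IP:11434/api/tags

# Send a one-off prompt to the model pulled earlier
curl http://NAS_IP:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Say hello in five words.",
  "stream": false
}'</pre>
<p>If the first command returns JSON listing <code>llama3.2:3b</code>, the service is reachable.</p>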
<h2 id="open-webui-tip">Pro Tip: Adding a “Face” (Web UI)</h2>
<p>Once you have Ollama running as the engine, most users then go back to the Registry and search for <code>openwebui/open-webui</code>.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-ollama-on-synology-nas/openwebui.jpg" width="1806" height="966" alt="Open WebUI image"></figure>
<p>When you run that container and point it to your NAS IP on port <code>11434</code>, you get a clean ChatGPT-like interface in your browser.</p>
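<p>If you set it up over SSH instead of through Container Manager, the equivalent Docker command looks roughly like this. The <code>OLLAMA_BASE_URL</code> variable is how Open WebUI finds your Ollama endpoint; <code>NAS_IP</code> and the <code>/volume1</code> path are placeholders to adjust for your setup:</p>
<pre>docker run -d --name open-webui \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://NAS_IP:11434 \
  -v /volume1/docker/open-webui:/app/backend/data \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main</pre>
<p>After that, the interface is available at <code>http://NAS_IP:3000</code> in a browser.</p>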
<h2 id="good-for">What This Setup Is Good For</h2>
<p>On a DS925+, this setup makes sense for:</p>
<ul>
<li>private prompt testing</li>
<li>lightweight local chat</li>
<li>experimenting with small models</li>
<li>connecting self-hosted front ends to a local Ollama endpoint</li>
<li>basic code or automation experiments with compact coding models</li>
</ul>
<p>It is less suited for:</p>
<ul>
<li>large models</li>
<li>fast multi-user workloads</li>
<li>heavy reasoning jobs that benefit from GPU acceleration</li>
<li>any setup where you expect cloud-level speed from NAS hardware</li>
</ul>
<p>That is the framing to keep in mind. The DS925+ is not a replacement for a dedicated AI rig. It is a practical way to run a small private model on hardware you may already own.</p>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>Running <strong>Ollama on a Synology NAS</strong> is a practical way to experiment with local AI on hardware you may already own.</p>
<p>If your NAS supports Container Manager, has an x86 CPU, and has enough RAM for small models, the setup is straightforward enough for anyone already comfortable using DSM.</p>
<p>Start with <strong><code>llama3.2:3b</code></strong> or another small model from the list above. Once that works, you can experiment from there without turning your NAS into a science project.</p><p>The post <a href="https://www.hongkiat.com/blog/install-ollama-on-synology-nas/">How to Install Ollama on a Synology NAS</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74349</post-id>	</item>
		<item>
		<title>Mole Is the Free Mac Cleaner Worth Trying</title>
		<link>https://www.hongkiat.com/blog/mole-mac-cleaner-guide/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Sun, 19 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74337</guid>

					<description><![CDATA[<p>Mole is a free Mac cleanup tool that handles cleaning, uninstalling, disk analysis, optimization, and live system monitoring from Terminal.</p>
<p>The post <a href="https://www.hongkiat.com/blog/mole-mac-cleaner-guide/">Mole Is the Free Mac Cleaner Worth Trying</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><a rel="nofollow noopener" target="_blank" href="https://github.com/tw93/mole">Mole</a> is an open source Mac maintenance tool for people who would rather clean, uninstall, inspect disk usage, and monitor system health from Terminal than juggle several separate apps.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mole.jpg" alt="Mole Mac utility" width="1487" height="721"></figure>
<p>In practical terms, it sits in the same space as <a rel="nofollow noopener" target="_blank" href="https://macpaw.com/cleanmymac">CleanMyMac</a>, <a rel="nofollow noopener" target="_blank" href="https://freemacsoft.net/appcleaner/">AppCleaner</a>, <a rel="nofollow noopener" target="_blank" href="https://daisydiskapp.com/">DaisyDisk</a>, and <a rel="nofollow noopener" target="_blank" href="https://bjango.com/mac/istatmenus/">iStat Menus</a>. The difference is that Mole tries to cover those jobs in one command line tool.</p>
<p>That is what makes it interesting. Instead of treating Mac cleanup as one app, uninstallation as another, disk analysis as another, and system monitoring as another, Mole pulls them into one workflow.</p>
<h2 id="what-mole-is">What Mole Is</h2>
<p>Mole is a Mac utility focused on cleanup, uninstalling apps, disk inspection, lightweight system maintenance, and live status monitoring. You install it, run it with the <code>mo</code> command, and either use the interactive interface or jump straight into individual commands.</p>
<p>The overall appeal is straightforward:</p>
<ul>
<li>clean caches, logs, browser leftovers, and app junk</li>
<li>remove installed apps together with related leftovers</li>
<li>inspect disk usage and spot large files</li>
<li>run maintenance-oriented system cleanup tasks</li>
<li>watch CPU, memory, disk, network, and process activity live</li>
</ul>
<p>Unlike GUI-first Mac utilities, Mole is terminal-based. That makes it a better fit for users who prefer keyboard-driven tools, readable commands, and scriptable output over a polished desktop interface.</p>
<h2 id="paid-alternatives">What Paid Apps Is Mole an Alternative To?</h2>
<p>The cleanest way to think about Mole is as a free alternative in the same general category as:</p>
<ul>
<li><strong>CleanMyMac</strong> for cleanup and optimization</li>
<li><strong>AppCleaner</strong> for app removal and leftover cleanup</li>
<li><strong>DaisyDisk</strong> for disk usage analysis and large-file discovery</li>
<li><strong>iStat Menus</strong> for live system monitoring</li>
</ul>
<p>It is not a guaranteed one-to-one replacement for every feature in those apps, but it clearly aims at that stack. For a closer look at one of those paid alternatives, see <a href="https://www.hongkiat.com/blog/cleanmymacx-hidden-features/">20 CleanMyMac X Hidden Features to Explore</a>.</p>
<p>If you already use more than one utility for cleanup, uninstalling, and disk inspection, Mole makes sense immediately. Another app in this broader cleanup category is <a href="https://www.hongkiat.com/blog/buhocleaner-for-mac/">BuhoCleaner for Mac</a>, though Mole takes a more terminal-first approach.</p>
<h2 id="platform-support">Which Platforms Does Mole Support?</h2>
<ul>
<li><strong>macOS:</strong> the main supported platform</li>
<li><strong>Windows:</strong> an experimental version exists in the repository’s <code>windows</code> branch</li>
</ul>
<p>There is no clear Linux support in its current positioning, so it makes more sense to treat Mole as a Mac-first tool.</p>
<h2 id="install-mole">How to Install Mole</h2>
<p>Mole can be installed in two simple ways.</p>
<h4 id="homebrew-install">Install With Homebrew</h4>
<pre>brew install mole</pre>
<p>This is the simplest route for most Mac users.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/brew-install-mole.jpg" alt="Brew install Mole" width="2104" height="1382"></figure>
<h4 id="script-install">Install With the Project Script</h4>
<pre>curl -fsSL https://raw.githubusercontent.com/tw93/mole/main/install.sh | bash</pre>
<p>If you want more control, the install script also supports selecting the latest main-branch build or a specific release version.</p>
<h2 id="start-using-mole">How to Start Using Mole</h2>
<p>Once installed, run:</p>
<pre>mo</pre>
<p>That opens Mole’s interactive menu.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mo-command.jpg" alt="Mole command menu" width="2104" height="1382"></figure>
<p>If you prefer to jump straight to specific functions, these are the main commands you will use most often:</p>
<pre>mo clean
mo uninstall
mo optimize
mo analyze
mo status
mo purge
mo installer</pre>
<p>It also provides:</p>
<pre>mo touchid
mo completion
mo update
mo remove
mo --help
mo --version</pre>
<p>Inside the interface, you can move around with arrow keys or Vim-style <code>h/j/k/l</code> controls.</p>
<h2 id="what-mole-cleans">What Does Mole Clean?</h2>
<p>Mole covers several useful cleanup categories instead of limiting itself to just browser cache or temp files.</p>
<h3 id="caches-temp-files">1. Caches and Temporary Files</h3>
<p>The <code>mo clean</code> command is the general deep-clean option. It targets categories such as:</p>
<ul>
<li>user app cache</li>
<li>browser cache from Chrome, Safari, and Firefox</li>
<li>developer tool cache from Xcode, Node.js, and npm</li>
<li>system logs and temp files</li>
<li>app-specific cache from apps like Spotify, Dropbox, and Slack</li>
<li>Trash</li>
</ul>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mo-clean.jpg" alt="Mole clean output" width="2104" height="1382"></figure>
<p>Here is a more realistic example of what a <code>mo clean</code> report can look like, with a few machine-specific details generalized for privacy:</p>
<pre>Clean Your Mac
[note] Use --dry-run to preview, --whitelist to manage protected paths
[done] Admin access granted
[system] Apple Silicon | Free space: 88Gi
[done] Whitelist: core protection rules active
> System
[done] System crash reports
[done] System logs
[done] Browser code signature caches
[done] System diagnostic logs
[done] Power logs
[done] Nothing to clean
> User Essentials
[done] User app cache, about 1.8GB
[done] User app logs, about 229MB
[done] Trash already empty
> App Caches
[done] Wallpaper aerial thumbnails
[done] macOS Help system cache
[done] Maps geo tile cache
[done] Group Containers logs and caches
> Browsers
[done] Chrome GPU cache
[done] Chrome component cache
[done] Chrome shader cache
[done] Chrome service worker cache
[done] Updater cache and old files
[note] Browser in use, some old-version cleanup skipped
> Cloud and Office
[done] Nothing to clean
> Developer Tools
[done] npm cache
[done] npm npx cache
[done] npm logs
[done] pnpm cache
[done] pip cache
[note] Docker unused data skipped by default
-> Review: docker system df
-> Prune: docker system prune --filter until=720h
[note] Xcode unavailable simulators skipped when simctl is unavailable
[done] Homebrew lock files
[done] Homebrew cleanup
> Applications
[done] Messaging app cache
[done] Editor cache and code cache
[done] GPU and WebGPU caches
[done] Adobe app caches
[done] Shell history files
[done] Launcher URL and filesystem cache
> Virtualization
[done] Nothing to clean
> Application Support
[done] Application Support logs and caches
> App Leftovers
[done] Found active and installed apps
[done] Cleaned orphaned WebKit data
[done] Cleaned orphaned HTTP cache data
[done] Cleaned orphaned application support files
[done] Cleaned orphaned preference files
[done] Cleaned leftover items, about 2.1MB
[note] Found orphaned system services
[blocked] Some privileged helper tools were blocked by path validation
[done] Cleaned removable orphaned services, about 10.8MB
> Apple Silicon Updates
[done] Nothing to clean
> Device Backups and Firmware
[done] Nothing to clean
> Time Machine
[done] No incomplete backups found
[done] Nothing to clean
> Large Files
[done] No large items detected in common locations
> System Data Clues
[done] No common System Data clues detected
> Project Artifacts
[done] Nothing to clean
======================================================================
Cleanup complete
Space freed: 3.08GB | Items cleaned: 952 | Categories: 46
Free space now: 93Gi
======================================================================</pre>
<p>That gives you a better sense of how detailed Mole can get when it scans a real Mac.</p>
<h3 id="app-leftovers">2. Leftovers From Apps You Already Removed</h3>
<p>One distinction that matters here:</p>
<ul>
<li>Use <code>mo clean</code> when the app has <strong>already been uninstalled</strong> and you want to remove leftover files.</li>
<li>Use <code>mo uninstall</code> when the app is <strong>still installed</strong> and you want Mole to remove both the app and its related files.</li>
</ul>
<p>That split is useful because many Mac users end up with old support files, caches, preferences, or launch items long after an app is gone. If you have ever had to clean up stubborn leftovers by hand, this guide on <a href="https://www.hongkiat.com/blog/uninstall-hma-vpn-mac/">completely uninstalling HMA VPN on your Mac</a> shows why proper removal matters.</p>
<h3 id="build-artifacts">3. Build Artifacts in Developer Projects</h3>
<p>Mole also includes a <code>mo purge</code> command for clearing project junk such as:</p>
<ul>
<li><code>node_modules</code></li>
<li><code>target</code></li>
<li><code>.build</code></li>
<li><code>build</code></li>
<li><code>dist</code></li>
<li><code>venv</code></li>
</ul>
<p>This is clearly aimed at developers who accumulate large dependency and build folders across multiple projects.</p>
<p>Recent projects are marked and left unselected by default, which is a sensible safety choice.</p>
<h3 id="installer-files">4. Old Installer Files</h3>
<p>The <code>mo installer</code> command scans for installer packages across locations such as:</p>
<ul>
<li>Downloads</li>
<li>Desktop</li>
<li>Homebrew caches</li>
<li>iCloud</li>
<li>Mail</li>
</ul>
<p>That makes it useful for cleaning up <code>.dmg</code>, <code>.pkg</code>, and other installer files people often forget to remove.</p>
<h2 id="key-commands">How to Use Some of Mole’s Most Useful Commands</h2>
<p>Here are the commands that matter most for everyday use.</p>
<h4 id="mo-clean">1. <code>mo clean</code>: Deep Cleanup</h4>
<p>Use this when you want to remove common junk and recover disk space.</p>
<pre>mo clean</pre>
<p>If you want to preview what Mole would do first, use:</p>
<pre>mo clean --dry-run</pre>
<p>If you want the preview plus more detailed logs:</p>
<pre>mo clean --dry-run --debug</pre>
<p>This is the safest way to start because Mole’s cleanup commands can be destructive.</p>
<p>The <code>mo clean</code> process may take a while, especially on Macs with lots of caches, logs, browser data, and developer files to scan.</p>
<h4 id="mo-uninstall">2. <code>mo uninstall</code>: Remove an App and Its Leftovers</h4>
<p>Use this when the app is still installed and you want a cleaner uninstall than dragging it to Trash.</p>
<pre>mo uninstall</pre>
<p>When Mole removes an app, it can also clean related files across areas like:</p>
<ul>
<li>Application Support</li>
<li>Caches</li>
<li>Preferences</li>
<li>Logs</li>
<li>WebKit storage</li>
<li>Cookies</li>
<li>Extensions</li>
<li>Plugins</li>
<li>Launch daemons</li>
</ul>
<p>That is the AppCleaner-style part of Mole.</p>
<p>You can also preview first:</p>
<pre>mo uninstall --dry-run</pre>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/uninstall-dryrun.jpg" alt="Mole uninstall dry run" width="2104" height="1382"></figure>
<h4 id="mo-analyze">3. <code>mo analyze</code>: Find What Is Eating Your Disk</h4>
<p>Use this when you are not ready to delete blindly and want a visual breakdown first.</p>
<pre>mo analyze</pre>
<p><code>mo analyze</code> is the safer route for ad hoc cleanup because it moves files to Trash through Finder instead of deleting them directly.</p>
<p>You can also analyze another path, including external drives under <code>/Volumes</code>:</p>
<pre>mo analyze /Volumes</pre>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mo-analyze.jpg" alt="Mole analyze output" width="2104" height="1382"></figure>
<p>And if you want output for scripting:</p>
<pre>mo analyze --json ~/Documents</pre>
<p>This is the DaisyDisk-style side of Mole.</p>
<h4 id="mo-status">4. <code>mo status</code>: Watch System Health Live</h4>
<p>Use this when you want a terminal dashboard for machine health.</p>
<pre>mo status</pre>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mo-status.jpg" alt="Mole status output" width="2104" height="1382"></figure>
<p>The live dashboard shows:</p>
<ul>
<li>CPU usage</li>
<li>GPU usage</li>
<li>memory usage</li>
<li>disk usage and throughput</li>
<li>network activity</li>
<li>power and battery details</li>
<li>active processes</li>
<li>a health score</li>
</ul>
<p>There is also JSON output support:</p>
<pre>mo status --json</pre>
<p>If the output is piped, Mole can also switch to JSON automatically.</p>
<p>This is the iStat Menus-style part of the tool.</p>
<h4 id="mo-optimize">5. <code>mo optimize</code>: Refresh System Services and Caches</h4>
<p>Use this when your Mac feels messy or sluggish and you want Mole to run maintenance tasks.</p>
<pre>mo optimize</pre>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mole-mac-cleaner-guide/mo-optimise.jpg" alt="Mole optimise output" width="2104" height="1382"></figure>
<p>That maintenance pass includes tasks such as:</p>
<ul>
<li>rebuilding system databases and clearing caches</li>
<li>resetting network services</li>
<li>refreshing Finder and Dock</li>
<li>cleaning diagnostic and crash logs</li>
<li>removing swap files and restarting the dynamic pager</li>
<li>rebuilding launch services and the Spotlight index</li>
</ul>
<p>Mole also supports a whitelist manager for exclusions:</p>
<pre>mo optimize --whitelist</pre>
<h4 id="mo-purge">6. <code>mo purge</code>: Remove Old Project Build Junk</h4>
<p>If you are a developer, this can be one of Mole’s most practical commands.</p>
<pre>mo purge</pre>
<p>To preview first:</p>
<pre>mo purge --dry-run</pre>
<p>To configure which project folders Mole scans:</p>
<pre>mo purge --paths</pre>
<p>When custom paths are configured, Mole scans only those locations. Otherwise it uses defaults like <code>~/Projects</code>, <code>~/GitHub</code>, and <code>~/dev</code>.</p>
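<p>If you want a second opinion before letting any tool delete project folders, a plain <code>find</code> command (independent of Mole) can list the usual suspects without removing anything. The <code>~/Projects</code> path and the directory names are just examples matching the list above:</p>
<pre>find ~/Projects -type d \
  \( -name node_modules -o -name dist -o -name target \) \
  -prune -print</pre>
<p>The <code>-prune</code> flag stops <code>find</code> from descending into each matched folder, so nested <code>node_modules</code> directories are not listed twice.</p>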
<h4 id="mo-installer">7. <code>mo installer</code>: Clean Up Forgotten Installers</h4>
<pre>mo installer</pre>
<p>This command helps remove large installer files that keep sitting in Downloads or other common locations.</p>
<h2 id="safety-notes">Safety Notes You Should Know Before Using Mole</h2>
<p>Mole is not a toy. Commands like <code>clean</code>, <code>uninstall</code>, <code>purge</code>, <code>installer</code>, and <code>remove</code> can be destructive, so using <code>--dry-run</code> first is the right move.</p>
<p>It also takes a safety-first approach rather than a delete-everything approach. Some of the protections built into it include:</p>
<ul>
<li>path validation before deletion</li>
<li>protected-directory rules</li>
<li>conservative cleanup boundaries</li>
<li>explicit confirmation for higher-risk actions</li>
<li>operation logging to <code>~/Library/Logs/mole/operations.log</code></li>
<li>conservative symlink handling</li>
</ul>
<p>It also protects certain sensitive areas and categories, including keychains, password managers, browser history and cookies, some VPN and proxy tools, some AI tool data, Time Machine data during active backup, and protected system paths.</p>
<p>That does not remove all risk, but it does show that Mole is designed to stay bounded rather than reckless.</p>
<h2 id="is-it-worth-trying">Is Mole Worth Trying?</h2>
<p>If you want a free and open source Mac maintenance tool, Mole looks unusually ambitious.</p>
<p>Its main appeal is not just cleaning caches. It is the fact that it combines four familiar utility categories into one:</p>
<ul>
<li>cleaner</li>
<li>uninstaller</li>
<li>disk analyzer</li>
<li>live system monitor</li>
</ul>
<p>If that sounds like exactly the stack you already use, Mole is worth a look.</p>
<p>The tradeoff is obvious too. Mole is terminal-based, so it is better suited to users who are comfortable reading commands, previews, and file categories before confirming destructive actions.</p>
<p>For cautious users, the best first step is simple:</p>
<ol type="1">
<li>install it with Homebrew</li>
<li>run <code>mo clean --dry-run</code></li>
<li>try <code>mo analyze</code></li>
<li>use <code>mo uninstall</code> only when you actually want app and remnant removal</li>
</ol>
<p>That gives you a safe way to understand what Mole can do before you let it clean anything for real.</p><p>The post <a href="https://www.hongkiat.com/blog/mole-mac-cleaner-guide/">Mole Is the Free Mac Cleaner Worth Trying</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74337</post-id>	</item>
		<item>
		<title>Mozilla’s Thunderbolt Is an Open-Source AI Client, but What Exactly Is It?</title>
		<link>https://www.hongkiat.com/blog/mozilla-thunderbolt-open-source-ai-client/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sun, 19 Apr 2026 07:00:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74368</guid>

					<description><![CDATA[<p>Mozilla's Thunderbolt is not a model or chatbot, but an open-source AI client built to give organizations more control over models, data, and workflows.</p>
<p>The post <a href="https://www.hongkiat.com/blog/mozilla-thunderbolt-open-source-ai-client/">Mozilla&#8217;s Thunderbolt Is an Open-Source AI Client, but What Exactly Is It?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Mozilla-backed MZLA Technologies is building <a rel="nofollow noopener" target="_blank" href="https://www.thunderbolt.io/">Thunderbolt</a>, an open-source AI client for organizations that want tighter control over how they deploy and use AI.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/mozilla-thunderbolt-open-source-ai-client/mozilla-thunderbolt.jpg" alt="Thunderbolt homepage" width="2410" height="1153"></figure>
<p>Start with what Thunderbolt is not.</p>
<p>It is not a new AI model, and it is not just another chatbot app.</p>
<p>It sits between people, AI models, company data, and automation tools. Mozilla is pitching it as a system companies can run on their own terms instead of a single, closed AI experience.</p>
<p>That is why Mozilla calls it a <strong>sovereign AI client</strong>. The pitch is simple: organizations choose where it runs, which models it connects to, what data it can access, and how deeply it integrates with internal systems.</p>
<h2 id="so-what-is-thunderbolt">So What Is Thunderbolt?</h2>
<p>Thunderbolt is an enterprise AI client designed to be self-hosted, extensible, and model-agnostic.</p>
<p>Based on the launch announcement and follow-up coverage, Mozilla wants it to help organizations:</p>
<ul>
<li>run AI with frontier, local, or on-prem models</li>
<li>connect AI to internal tools and company data</li>
<li>automate recurring workflows and scheduled tasks</li>
<li>use the same setup across web and native apps</li>
<li>keep deployment and privacy controls on their own infrastructure</li>
</ul>
<p>In plain English, Thunderbolt is trying to be the interface and workflow layer for enterprise AI, not the intelligence itself.</p>
<p>If a company already uses several models, private knowledge sources, internal systems, and agent-style workflows, Thunderbolt is supposed to bring those pieces into one operational layer.</p>
<h2 id="why-mozilla-keeps-framing-it-around-control">Why Mozilla Keeps Framing It Around Control</h2>
<p>The real hook in Mozilla’s messaging is not convenience. It is ownership.</p>
<p>Plenty of companies are interested in AI right up until the discussion turns to sensitive documents, internal knowledge, employee data, compliance requirements, and vendor lock-in. That is the hesitation Mozilla is targeting.</p>
<p>Thunderbolt’s pitch is straightforward: if AI is becoming part of core business workflows, companies should not have to hand that entire layer over to a closed external platform.</p>
<p>That explains the emphasis on self-hosting, open-source licensing, and optional end-to-end encryption.</p>
<h2 id="what-thunderbolt-can-connect-to">What Thunderbolt Can Connect To</h2>
<p>Mozilla says Thunderbolt can work with:</p>
<ul>
<li>leading commercial AI providers</li>
<li>open-source models</li>
<li>local models running on company infrastructure</li>
<li><a href="https://www.hongkiat.com/blog/mcp-guide-ai-tool-integration/">MCP servers</a></li>
<li>ACP support (in development)</li>
</ul>
<p>Together, those pieces make Thunderbolt look less like a chat app and more like an orchestration layer.</p>
<p>It is built to let users interact with AI, search internal information, run research tasks, and trigger automations that depend on other tools or internal systems. If you need a quick refresher on how <a href="https://www.hongkiat.com/blog/mcp-servers-development-tools/">MCP servers fit into real workflows</a>, that broader ecosystem helps explain the positioning.</p>
<p>This is a bigger enterprise play than the usual “ask a bot a question and get an answer” setup.</p>
<h2 id="what-thunderbolt-looks-like">What Thunderbolt Looks Like</h2>
<p>Thunderbolt is expected to work across:</p>
<ul>
<li>web</li>
<li>Windows</li>
<li>macOS</li>
<li>Linux</li>
<li>iOS</li>
<li>Android</li>
</ul>
<p>Mozilla is not presenting this as a backend-only framework for developers. It wants Thunderbolt to feel like something teams can actually use day to day across both desktop and mobile environments.</p>
<p>The simplest way to picture it is as an AI control panel for a company, with user-facing apps on top and model, data, and workflow integrations underneath.</p>
<h2 id="who-thunderbolt-is-really-for">Who Thunderbolt Is Really For</h2>
<p>This is not really a consumer AI product.</p>
<p>Yes, the code is open source, and individuals can likely experiment with it. But the positioning is clearly enterprise-first.</p>
<p>The obvious audience is:</p>
<ul>
<li>companies that want private or on-prem AI deployments</li>
<li>teams that do not want one vendor controlling the entire AI stack</li>
<li>organizations that need internal data integrations</li>
<li>businesses experimenting with agents and workflow automation</li>
<li>technical teams that want flexibility over models and tools</li>
</ul>
<p>Mozilla also seems to be leaving room for a managed version later, along with enterprise services and support. So while the software is open source, the business angle is still easy to see.</p>
<p>For readers who are more interested in the self-hosted side of the story, this guide to running an <a href="https://www.hongkiat.com/blog/run-offline-chat-assistant/">offline chat assistant</a> gives a simpler consumer-scale version of the same control-first idea.</p>
<h2 id="the-catch">The Catch</h2>
<p>This still looks early.</p>
<p>Reports say Thunderbolt is still in development and undergoing security audit work before it is ready for enterprise production use. So this is not the kind of launch where everything is polished and ready to roll out tomorrow.</p>
<p>There is also a naming problem. “Thunderbolt” is already heavily associated with Intel and Apple’s hardware interface, so some confusion is guaranteed.</p>
<p>Still, the bigger idea is clear.</p>
<p>Thunderbolt is Mozilla’s attempt to build the control layer around enterprise AI: open source, self-hostable, flexible, and designed for organizations that do not want to outsource the entire stack.</p>
<p>If you were wondering whether it is a chatbot, an LLM, or some hidden Firefox feature, the short answer is none of those.</p>
<p>It is closer to an enterprise AI client that connects models, data, tools, and workflows in one place.</p><p>The post <a href="https://www.hongkiat.com/blog/mozilla-thunderbolt-open-source-ai-client/">Mozilla&#8217;s Thunderbolt Is an Open-Source AI Client, but What Exactly Is It?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74368</post-id>	</item>
		<item>
		<title>VoxCPM2, a Free ElevenLabs Alternative</title>
		<link>https://www.hongkiat.com/blog/voxcpm2-elevenlabs-alternative/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74325</guid>

					<description><![CDATA[<p>VoxCPM2 is an open-source voice model with local inference, cloning, streaming, and voice design features that make it a serious ElevenLabs alternative.</p>
<p>The post <a href="https://www.hongkiat.com/blog/voxcpm2-elevenlabs-alternative/">VoxCPM2, a Free ElevenLabs Alternative</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Most open-source voice models sound promising until you actually use them.</p>
<p>The output is flat, the setup is messy, or the cloning feels good enough for a demo but not for real work.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/voxcpm2-elevenlabs-alternative/voxcpm.jpg" width="1280" height="729" alt="VoxCPM demo"></figure>
<p><a rel="nofollow noopener" target="_blank" href="https://github.com/OpenBMB/VoxCPM">VoxCPM2</a> looks more serious. It is an open-source text-to-speech and voice cloning model from OpenBMB with local inference, voice design, controllable cloning, higher-fidelity cloning, and streaming support. That does not automatically make it an ElevenLabs killer, but it does make it one of the more interesting free alternatives I have seen in a while. If you want a broader frame for where this space is heading, this guide to <a href="https://www.hongkiat.com/blog/openai-text-to-speech/">text to speech with OpenAI</a> is a useful comparison point.</p>
<h2 id="what-stands-out">What Makes VoxCPM2 Stand Out</h2>
<p>A lot of open-source TTS projects do one thing reasonably well.</p>
<p>VoxCPM2 seems to be aiming for a broader toolkit.</p>
<p>Instead of only turning text into speech, it also supports several workflows depending on what you are trying to do.</p>
<h3 id="basic-tts">1. Basic Text-to-Speech</h3>
<p>If you just want to generate speech from text, the standard flow is straightforward:</p>
<pre><code class="language-python">import soundfile as sf  # writes the generated waveform to disk

# `model` is a loaded VoxCPM2 instance; see the project README for setup
wav = model.generate(
    text="Hello, this is VoxCPM2 running locally!",
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write("output.wav", wav, model.tts_model.sample_rate)
</code></pre>
<p>That <code>cfg_value</code> controls how strongly the model sticks to the prompt, while <code>inference_timesteps</code> lets you trade speed for quality.</p>
<p>In other words, you can keep things fast for testing, then turn quality up later when you want a cleaner result.</p>
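<p>To make the tradeoff concrete, here is a small illustrative helper that packages the two knobs as <code>generate()</code> keyword arguments. The 30-step value for the slower pass is an assumption for the example, not a documented VoxCPM2 recommendation:</p>

```python
# Illustrative helper only; the 30-step "quality" value is an assumption,
# not a documented VoxCPM2 recommendation.
def gen_settings(draft: bool) -> dict:
    """Return generate() keyword arguments for a fast draft or a slower, cleaner pass."""
    return {
        "cfg_value": 2.0,
        "inference_timesteps": 10 if draft else 30,
    }

print(gen_settings(draft=True))   # {'cfg_value': 2.0, 'inference_timesteps': 10}
```

<p>You could then call <code>model.generate(text=..., **gen_settings(draft=True))</code> while iterating, and flip the flag for the final render.</p>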
<h3 id="voice-design">2. Voice Design From a Text Description</h3>
<p>This is one of the more interesting features.</p>
<p>Instead of cloning a real speaker, you can describe the kind of voice you want and let the model synthesize from that prompt.</p>
<pre><code class="language-python">wav = model.generate(
    text="(A young woman, gentle and sweet voice) Welcome to my blog post about free AI voice cloning!",
    cfg_value=2.0,
    inference_timesteps=10,
)
sf.write("voice_design.wav", wav, model.tts_model.sample_rate)
</code></pre>
<p>That opens the door to quick prototyping when you do not have a reference clip ready, or when you want to explore different voice styles before committing to one.</p>
<h3 id="voice-cloning">3. Controllable Voice Cloning</h3>
<p>If you do have a short voice sample, VoxCPM2 can use it as a reference.</p>
<pre><code class="language-python">wav = model.generate(
    text="This is my cloned voice saying whatever I want.",
    reference_wav_path="path/to/short_clip.wav",
)
sf.write("cloned.wav", wav, model.tts_model.sample_rate)
</code></pre>
<p>This is the mode a lot of people will probably care about most.</p>
<p>It is the classic promise of modern TTS: give the model a short clip, then have it speak new text in a similar voice.</p>
<p>How good that sounds in practice depends on the source audio, prompt quality, and the model itself, but the workflow is refreshingly direct.</p>
<h3 id="higher-fidelity">4. Higher-Fidelity Cloning</h3>
<p>There is also a more exact cloning path for people who want tighter reproduction.</p>
<pre><code class="language-python">wav = model.generate(
    text="Every nuance of my voice is perfectly reproduced.",
    prompt_wav_path="path/to/voice.wav",
    prompt_text="Exact transcript of the reference audio here.",
    reference_wav_path="path/to/voice.wav",
)
sf.write("ultimate_clone.wav", wav, model.tts_model.sample_rate)
</code></pre>
<p>This mode is clearly aimed at users who care more about fidelity and control than convenience.</p>
<p>It is more involved, but that is usually the tradeoff with better voice matching.</p>
<h3 id="streaming-output">5. Streaming Output</h3>
<p>VoxCPM2 also supports streaming generation, which matters if you are building interactive apps, assistants, or anything that should start speaking before the entire waveform is finished.</p>
<pre><code class="language-python">import numpy as np
import soundfile as sf

# Collect streamed chunks as they arrive, then join them into one waveform
chunks = []
for chunk in model.generate_streaming(text="Streaming audio feels incredibly natural!"):
    chunks.append(chunk)

wav = np.concatenate(chunks)
sf.write("streaming.wav", wav, model.tts_model.sample_rate)
</code></pre>
<p>That kind of real-time output is not just a nice extra. It is what makes a voice model feel usable in live products instead of only batch demos. If you want to compare that against more mainstream options, this list of <a href="https://www.hongkiat.com/blog/text-to-speech-apps/">best text-to-speech applications</a> gives some useful context.</p>
<h2 id="cli-support">CLI Support</h2>
<p>Not everything needs to start in Python.</p>
<p>If you just want to test the model quickly, the built-in <a rel="nofollow noopener" target="_blank" href="https://github.com/OpenBMB/VoxCPM">CLI</a> looks like the faster entry point:</p>
<pre><code class="language-bash">voxcpm design --text "Your text here" --output out.wav
</code></pre>
<p>That is a small detail, but a useful one. Good tooling matters, especially for projects people are still evaluating.</p>
<h2 id="practical-notes">A Few Practical Notes</h2>
<p>The appeal here is pretty obvious. A lot of people want high-quality AI voice generation without a subscription, API bill, or closed platform sitting in the middle of the workflow.</p>
<p>If an open-source model can deliver solid quality locally, with cloning, voice design, and streaming built in, that changes who gets to experiment with these tools and what kinds of products they can build.</p>
<p>That is where the ElevenLabs comparison comes from. It is less about claiming perfect parity and more about showing that the polished paid option is no longer the only serious one. For a lighter browser-side take on the same space, this walkthrough of a <a href="https://www.hongkiat.com/blog/text-to-speech/">text-to-speech feature on any web page</a> is another related read.</p>
<p>Based on the project materials, a few details stand out:</p>
<ul>
<li>It supports LoRA fine-tuning with a relatively small amount of audio.</li>
<li>You can speed things up by lowering <code>inference_timesteps</code>.</li>
<li>The project mentions Nano-VLLM as another performance lever.</li>
<li>Output is written as 48kHz WAV, which is a sensible default for high-quality audio workflows.</li>
</ul>
<p>Those details matter because they push VoxCPM2 beyond toy-demo territory.</p>
<p>They suggest this was built for people who will actually want to tune, automate, and integrate it. The <a rel="nofollow noopener" target="_blank" href="https://github.com/OpenBMB/VoxCPM">GitHub repo</a> and <a rel="nofollow noopener" target="_blank" href="https://huggingface.co/openbmb/VoxCPM2">Hugging Face model page</a> are the obvious places to start if you want to test it properly.</p><p>The post <a href="https://www.hongkiat.com/blog/voxcpm2-elevenlabs-alternative/">VoxCPM2, a Free ElevenLabs Alternative</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74325</post-id>	</item>
		<item>
		<title>Open Generative AI Review: One Interface for Image, Video, and Lip Sync</title>
		<link>https://www.hongkiat.com/blog/open-generative-ai-review/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74362</guid>

					<description><![CDATA[<p>Open Generative AI bundles image, video, lip sync, and cinema-style prompting into one open-source interface, but it still depends on Muapi for generation.</p>
<p>The post <a href="https://www.hongkiat.com/blog/open-generative-ai-review/">Open Generative AI Review: One Interface for Image, Video, and Lip Sync</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><a href="https://github.com/Anil-matcha/Open-Generative-AI" rel="nofollow noopener" target="_blank">Open Generative AI</a> is trying to solve a real AI-tool problem: too many fragmented apps, too many tabs, and too much switching between image generators, video tools, and lip sync products that all do one slice of the job.</p>
<p>Its answer is a single open-source studio that pulls image generation, video generation, lip sync, and cinema-style prompt controls into one interface. It ships as a hosted web app, a desktop app, and self-hostable code, with access to more than 200 models across those workflows.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/open-generative-ai-review/open-generative-ai.jpg" alt="Open Generative AI interface" width="2264" height="1540"></figure>
<p>The catch is important. You can self-host the interface, but generation still runs through <a href="https://muapi.ai" rel="nofollow noopener" target="_blank">Muapi.ai</a>, which means you still need a Muapi API key. This is not a fully local, fully offline generator.</p>
<p>If that tradeoff does not bother you, Open Generative AI is one of the more interesting all-in-one open-source AI media projects around right now.</p>
<h2 id="what-is-open-generative-ai">What Is Open Generative AI?</h2>
<p>The app breaks into four studios: <a href="#image-studio">Image Studio</a>, <a href="#video-studio">Video Studio</a>, <a href="#lip-sync-studio">Lip Sync Studio</a>, and <a href="#cinema-studio">Cinema Studio</a>. Each one handles a different generation task, and the interface handles the mode switching for you. Upload a reference image and Image Studio flips from text-to-image into image-to-image automatically.</p>
<p>The model list is broad enough to make the app feel more like a testing ground than a single-purpose generator. Image models include Flux, Nano Banana 2, Seedream 5.0, Ideogram, GPT-4o, Midjourney, and SDXL variants. Video models include Kling, Sora, Veo, Wan, Seedance, Hailuo, and Runway. Lip sync has its own specialist stack as well.</p>
<p>Under the hood, the UI is a Next.js monorepo with a shared studio library, and the same model definitions power both the hosted version and the self-hosted build.</p>
<h2 id="what-you-can-do-with-it">What You Can Do With It</h2>
<h4 id="generate-images-from-text">1. Generate Images From Text</h4>
<p>All of the image models mentioned above are available here. If you want to compare these against other free options, <a href="https://www.hongkiat.com/blog/free-ai-image-generators-compared/">this comparison of free AI image generators</a> is useful.</p>
<p>That makes it useful as an image sandbox for comparing model styles side by side without bouncing between separate sites.</p>
<h4 id="edit-images-with-one-or-multiple-references">2. Edit Images With One or Multiple References</h4>
<p>Upload an image and the app switches into image-to-image mode. Compatible models can use one or many source images, which is useful for style transfer, composition guidance, visual consistency, and edit-heavy workflows.</p>
<p>The multi-image flow is genuinely well designed. The picker supports batch selection, ordering, and a confirmation step before submission. If you test image-edit models a lot, this part of the studio is worth using.</p>
<h4 id="generate-videos-from-text-or-still-images">3. Generate Videos From Text or Still Images</h4>
<p>Video Studio works the same way: with no image it runs text-to-video, and dropping in a starting frame switches it to image-to-video. Controls vary by model. Some let you set duration, aspect ratio, and quality, while others keep it simpler.</p>
<p>The video model list is long, and different models expose different settings, so it takes a moment to learn which controls belong to which model. But the workspace itself is consistent across all of them.</p>
<h4 id="create-talking-head-or-lip-synced-videos">4. Create Talking-Head or Lip-Synced Videos</h4>
<p>Lip Sync Studio handles two scenarios: portrait image plus audio produces a talking video, and video plus audio produces a lip-synced result. Models here include Infinite Talk, Wan 2.2 Speech to Video, LTX Lipsync variants, LatentSync, and Veed.</p>
<p>For explainer videos, avatar content, or short demo narrations, this tab is one of the strongest reasons to use the app. It is a more complete lip sync implementation than most bundled tools offer.</p>
<h4 id="style-prompts-with-cinema-controls">5. Style Prompts With Cinema Controls</h4>
<p>Cinema Studio adds a visual layer to prompt writing. Instead of relying only on text, you pick cameras, lenses, focal lengths, and aperture styles, and the interface translates those into prompt modifiers aimed at more cinematic outputs.</p>
<p>Users who think in shot language will get more out of this. It fits well with the rest of the studio.</p>
<h2 id="what-you-need-before-installing">What You Need Before Installing It</h2>
<p>The easiest path requires no installation at all. The project offers:</p>
<ul>
<li>a hosted web version</li>
<li>downloadable desktop apps for macOS and Windows</li>
<li>source code if you want to run it yourself</li>
</ul>
<p>If you want to run the code locally, you need Node.js 18+, npm, and a Muapi API key. The API key is non-negotiable since generation routes through Muapi even when you self-host the interface.</p>
<h2 id="how-to-install-open-generative-ai">How to Install Open Generative AI</h2>
<p>You have three ways in.</p>
<h4 id="option-1-use-the-hosted-version">Option 1: Use the Hosted Version</h4>
<p>The hosted version is at <a rel="nofollow noopener" target="_blank" href="https://dev.muapi.ai/open-generative-ai"><code>dev.muapi.ai/open-generative-ai</code></a>. All four studios are available in your browser with no install needed. Start here if you want to kick the tires first.</p>
<h4 id="option-2-install-the-desktop-app">Option 2: Install the Desktop App</h4>
<p><a href="https://github.com/Anil-matcha/Open-Generative-AI/releases/tag/v1.0.0" rel="nofollow noopener" target="_blank">Prebuilt desktop installers</a> are available for macOS Apple Silicon, macOS Intel, and Windows. Linux users need to build from source via Electron.</p>
<h5 id="macos-install-note">1. macOS</h5>
<p>Because the app is not signed, Gatekeeper may block it when you first try to open it. This is normal for unsigned apps. To get around it: drag the app into your Applications folder, then open Terminal and run:</p>
<p><code>xattr -cr /Applications/Open\ Generative\ AI.app</code></p>
<p>Alternatively, double-click the app, go to System Settings, Privacy and Security, and click “Open Anyway” next to the app name. After that it will open normally.</p>
<h5 id="windows-install-note">2. Windows</h5>
<p>Because the installer is not code-signed, Windows SmartScreen may warn you before installation. This is common for smaller open-source apps. Click “More info” at the bottom left of the warning, then click “Run anyway.” The app will install normally after that.</p>
<h5 id="linux-install-note">3. Linux</h5>
<p>Linux does not have a ready-made installer. You build it from source using Electron, which produces either an AppImage file or a <code>.deb</code> package. On Ubuntu 24.04 and newer, AppImage may fail to launch due to Chromium sandbox restrictions. If that happens, use the <code>.deb</code> package instead.</p>
<h4 id="option-3-self-host-it-from-source">Option 3: Self-Host It From Source</h4>
<h5 id="prerequisites">1. Prerequisites</h5>
<ul>
<li>Node.js 18+</li>
<li>npm</li>
<li>a Muapi API key</li>
</ul>
<h5 id="setup-steps">2. Setup Steps</h5>
<pre>git clone https://github.com/Anil-matcha/Open-Generative-AI.git
cd Open-Generative-AI
npm install
npm run dev</pre>
<p>Then open:</p>
<pre>http://localhost:3000</pre>
<p>On first launch, the app prompts you for your Muapi API key.</p>
<h5 id="production-build">3. Production Build</h5>
<p>To run a production build instead of a dev server:</p>
<pre>npm run build
npm run start</pre>
<h2 id="how-to-build-the-desktop-app-yourself">How to Build the Desktop App Yourself</h2>
<p>Electron build scripts are included for packaging. To build for macOS:</p>
<p><strong>macOS Build</strong></p>
<pre>npm run electron:build</pre>
<p><strong>Windows Build</strong></p>
<pre>npm run electron:build:win</pre>
<p><strong>Linux Build</strong></p>
<pre>npm run electron:build:linux</pre>
<p><strong>Build Everything</strong></p>
<pre>npm run electron:build:all</pre>
<p>Output goes into the <code>release/</code> folder.</p>
<h2 id="how-to-use-open-generative-ai">How to Use Open Generative AI</h2>
<p>Once you are inside, the learning curve is not bad because all four studios follow roughly the same interaction pattern.</p>
<h4 id="image-studio">Image Studio</h4>
<p>Use this when you want either:</p>
<ul>
<li>text-to-image generation</li>
<li>image-to-image editing</li>
<li>multi-reference image edits on supported models</li>
</ul>
<p>Typical flow:</p>
<ol>
<li>choose an image model</li>
<li>enter a prompt</li>
<li>optionally upload one or more reference images</li>
<li>pick aspect ratio, resolution, or quality when available</li>
<li>generate and review the result</li>
</ol>
<p>The app changes its available controls based on the active model, so you only see what is relevant to that model.</p>
<h4 id="video-studio">Video Studio</h4>
<p>Use this when you want:</p>
<ul>
<li>text-to-video generation</li>
<li>image-to-video animation from a still frame</li>
</ul>
<p>Typical flow:</p>
<ol>
<li>choose a video model</li>
<li>write the prompt</li>
<li>optionally upload a starting image</li>
<li>choose duration, aspect ratio, or quality when supported</li>
<li>generate and wait for the job to finish</li>
</ol>
<h4 id="lip-sync-studio">Lip Sync Studio</h4>
<p>Use this when you want:</p>
<ul>
<li>portrait plus audio to create a talking video</li>
<li>video plus audio to create a lip-synced version</li>
</ul>
<p>Typical flow:</p>
<ol>
<li>switch between portrait and video mode</li>
<li>upload the image or video source</li>
<li>upload the audio file</li>
<li>optionally add a motion prompt</li>
<li>choose a supported lip sync model and resolution</li>
<li>generate and download the result</li>
</ol>
<h4 id="cinema-studio">Cinema Studio</h4>
<p>Use this when you want stronger visual direction.</p>
<p>Instead of relying only on prompt wording, you can shape the output using preset camera, lens, focal length, and aperture selections. That makes it feel closer to a style layer on top of generation rather than a separate engine.</p>
<h2 id="where-it-is-strongest">Where It Is Strongest</h2>
<h5 id="one-interface-for-a-lot-of-creative-workflows">1. One Interface for a Lot of Creative Workflows</h5>
<p>Instead of one tool for image generation, another for video, and another for lip sync, you get a unified front end with consistent navigation. That alone makes it worth trying.</p>
<h5 id="better-than-average-handling-of-reference-media">2. Better-Than-Average Handling of Reference Media</h5>
<p>The upload history and multi-image picker are more practical than what most demo tools offer. The batch selection with ordering and confirmation step is genuinely thoughtful for a tool at this level.</p>
<h5 id="a-useful-bridge-between-no-code-users-and-developers">3. A Useful Bridge Between No-Code Users and Developers</h5>
<p>Non-technical users can start with the hosted version or desktop app. Developers get a clean codebase they can inspect, modify, and extend. That breadth is harder to find than it should be in this space.</p>
<h2 id="where-to-be-cautious">Where to Be Cautious</h2>
<h5 id="it-still-depends-on-muapi">1. It Still Depends on Muapi</h5>
<p>You are not escaping the API layer. If Muapi changes its pricing, access policies, or reliability, this project inherits that directly.</p>
<h5 id="self-hosted-does-not-mean-fully-local-generation">2. “Self-Hosted” Does Not Mean Fully Local Generation</h5>
<p>The biggest expectation gap. The interface is self-hostable, but generation still goes through Muapi. If you want a fully offline tool with no outside dependency, this is not it.</p>
<h5 id="the-feature-count-can-be-overwhelming">3. The Feature Count Can Be Overwhelming</h5>
<p>200+ models sounds great in principle. In practice, choosing between them creates its own friction. The interface handles it better than most, but the sheer volume of options still takes time to navigate.</p>
<h5 id="desktop-trust-friction-is-real">4. Desktop Trust Friction Is Real</h5>
<p>Unsigned macOS apps and SmartScreen warnings on Windows are real friction points for non-technical users. Both are normal for small open-source projects, but they can still cause hesitation or outright rejection.</p>
<h2 id="who-should-try-it">Who Should Try It?</h2>
<p>Open Generative AI makes the most sense for:</p>
<ul>
<li>creators who want one dashboard for image, video, and lip sync work</li>
<li>developers who want an open-source front end they can inspect and modify</li>
<li>people comparing lots of models and workflows in one place</li>
<li>tinkerers who prefer desktop and self-hosted options over locked SaaS tools</li>
</ul>
<p>It makes less sense for:</p>
<ul>
<li>users who want a fully local offline generator</li>
<li>people who do not want to think about API keys or third-party backends</li>
<li>anyone expecting a polished, fully signed consumer desktop app with zero install friction</li>
</ul>
<h2 id="final-take">Final Take</h2>
<p>Open Generative AI gets more compelling once you stop looking at it as just another model aggregator. The real pitch is workflow consolidation. Instead of collecting separate tools for images, video, and lip sync, you get one front end that keeps those tasks in the same workspace.</p>
<p>It is still constrained by Muapi, so the project is not as open or local as the interface first suggests. But if you want a broad AI media toolbox with a clean UI, source code you can inspect, and enough flexibility to shape around your own workflow, this is one of the more serious open-source projects in this space right now.</p><p>The post <a href="https://www.hongkiat.com/blog/open-generative-ai-review/">Open Generative AI Review: One Interface for Image, Video, and Lip Sync</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74362</post-id>	</item>
		<item>
		<title>Cloudflare Wants Email to Be a Native Interface for Agents</title>
		<link>https://www.hongkiat.com/blog/cloudflare-email-service-for-agents/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 09:00:00 +0000</pubDate>
				<category><![CDATA[Internet]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74366</guid>

					<description><![CDATA[<p>Cloudflare's new Email Service beta turns the inbox into a practical interface for AI agents that need to receive requests, do background work, and reply later.</p>
<p>The post <a href="https://www.hongkiat.com/blog/cloudflare-email-service-for-agents/">Cloudflare Wants Email to Be a Native Interface for Agents</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Cloudflare has pushed its <a rel="nofollow noopener" target="_blank" href="https://blog.cloudflare.com/email-for-agents/">Email Service into public beta</a>, but the bigger story is not just about sending mail.</p>
<p>It is about turning email into a built-in way for <a href="https://www.hongkiat.com/blog/ai-agents-101/">AI agents</a> to communicate.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/cloudflare-email-service-for-agents/cloudflare-email-agent.jpg" alt="Cloudflare email agent" width="1999" height="1125"><figcaption>Source: <a rel="nofollow noopener" target="_blank" href="https://blog.cloudflare.com/email-for-agents/">Cloudflare</a></figcaption></figure>
<p>Email is still one of the few tools almost everyone already uses. There is no new app to install, no special client to learn, and no new habit to build. If an agent can work through email, it fits into the way people and businesses already operate.</p>
<h2 id="what-cloudflare-actually-announced">What Cloudflare Actually Announced</h2>
<p>The main update is simple. <strong>Cloudflare Email Service now supports sending email in public beta.</strong> Cloudflare already supported receiving and routing incoming email before this.</p>
<p>Put together, Cloudflare now has a full two-way email setup:</p>
<ul>
<li>receive email with Email Routing</li>
<li>process it with Workers or the Agents SDK</li>
<li>send replies or updates with Email Sending</li>
</ul>
<p>In plain terms, developers can now build apps or agents that read incoming email, do some work, and send a response back.</p>
<p>That opens the door to support agents, approval workflows, account verification systems, document processing tools, and other automations that need to work through an inbox.</p>
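<p>Stripped of the platform details, the loop is easy to sketch. The function names and message shape below are illustrative, not Cloudflare&#8217;s actual Workers or Agents SDK API:</p>

```python
# Conceptual sketch of the receive -> process -> reply loop described above.
# Function names and the message dict shape are illustrative, not Cloudflare's API.
def do_background_work(text: str) -> str:
    """Stand-in for the slow part: checking systems, extracting data, and so on."""
    return "Processed: " + text.strip()

def handle_inbound(message: dict, send) -> None:
    """Read one inbound email, do the work, then send a reply back."""
    result = do_background_work(message["body"])
    send({
        "to": message["from"],
        "subject": "Re: " + message["subject"],
        "body": result,
    })

outbox = []
handle_inbound(
    {"from": "user@example.com", "subject": "Invoice", "body": " please check "},
    outbox.append,
)
print(outbox[0]["body"])  # Processed: please check
```

<p>In the real setup, the receive side is Email Routing, the work happens in a Worker or agent, and the <code>send</code> step is Email Sending.</p>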
<h2 id="why-email-works-well-for-agents">Why Email Works Well for Agents</h2>
<p>A normal chatbot is usually expected to answer right away. An agent does not always work like that. It may need time to check another system, wait for a task to finish, or gather more information before replying.</p>
<p>Email already fits that kind of work.</p>
<p>Someone sends a message. The system picks it up. It might reply in five seconds, or it might reply an hour later. Then it comes back with an answer, an update, or a request for more information.</p>
<p>That is already how many real-world tasks work, especially in support, approvals, billing, and operations.</p>
<p>So this is less about making email feel modern and more about using a communication channel that already works well for delayed responses.</p>
<h2 id="the-useful-part-is-the-infrastructure">The Useful Part Is the Infrastructure</h2>
<p>The flashy version of this launch is “email for agents.”</p>
<p>The more useful version is that Cloudflare is also handling a lot of the messy setup behind the scenes.</p>
<p>Email Sending works through a native Workers binding, and Cloudflare says it automatically handles SPF, DKIM, and DMARC when you add your domain. Most novice users do not need to know every detail here. The short version is simple: these settings help email get delivered properly and make messages look more trustworthy.</p>
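<p>For context, those three settings live as DNS TXT records on your domain. The values below are placeholders to show the shape of each record, not the exact records Cloudflare provisions:</p>

```text
example.com.                      TXT  "v=spf1 include:_spf.example-sender.net ~all"
selector._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=PUBLIC-KEY-HERE"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

<p>SPF says which servers may send for the domain, DKIM signs outgoing mail, and DMARC tells receivers what to do when the first two fail. Cloudflare&#8217;s point is that you should not have to hand-write any of this.</p>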
<p>Email gets complicated fast once deliverability becomes part of the job.</p>
<p>The beta also ships alongside several related pieces:</p>
<ul>
<li><strong>Agents SDK</strong> gets a more complete email workflow</li>
<li><strong>MCP server support</strong> lets external agents discover and use email features</li>
<li><strong>Wrangler CLI commands</strong> let agents work with email without loading huge tool definitions</li>
<li><strong>Cloudflare skills</strong> give coding agents setup and usage instructions</li>
<li><strong>Agentic Inbox</strong>, an open-source reference app, shows what an email-based agent setup can look like</li>
</ul>
<p>That broader stack is the real story. Cloudflare is not just adding another API. It is trying to make email a built-in part of its wider platform for agent development.</p>
<h2 id="what-this-lets-developers-build">What This Lets Developers Build</h2>
<p>The Agents SDK side is probably the part developers should watch most closely.</p>
<p>Cloudflare already had an <code>onEmail</code> hook for handling incoming mail, but sending replies from the same flow was more limited before. With Email Sending added, an agent can now receive an email, remember context, do longer background work, and reply later.</p>
<p>That means developers can build agents that:</p>
<ul>
<li>accept support requests by email</li>
<li>process invoices or documents sent to an inbox</li>
<li>verify accounts or handle approval chains</li>
<li>trigger multi-agent workflows from incoming messages</li>
<li>send follow-ups after background tasks finish</li>
</ul>
<p>Cloudflare also uses Durable Objects here, which means an agent can keep track of state across multiple messages. In simpler terms, the agent can remember where a conversation or workflow left off.</p>
<p>That is much more useful than a basic bot that only reacts to one message at a time.</p>
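<p>The pattern is simple to sketch in plain Python. This is purely illustrative of per-thread state, not Cloudflare&#8217;s Durable Objects API:</p>

```python
# Minimal sketch of per-thread state, in the spirit of the Durable Objects
# pattern described above. Illustrative Python only, not Cloudflare's API.
class ConversationState:
    """Holds everything the agent remembers about one email thread."""
    def __init__(self):
        self.history = []

    def remember(self, msg: str) -> int:
        self.history.append(msg)
        return len(self.history)

conversations: dict = {}  # one state object per thread id

def on_message(thread_id: str, msg: str) -> int:
    """Route each message to its thread's state so context survives replies."""
    state = conversations.setdefault(thread_id, ConversationState())
    return state.remember(msg)

on_message("thread-1", "first email")
print(on_message("thread-1", "follow-up"))   # 2: the agent kept context
print(on_message("thread-2", "new thread"))  # 1: separate state per thread
```

<p>Durable Objects give you that same one-object-per-conversation mapping, except the state persists across requests and machines.</p>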
<h2 id="the-security-detail-also-counts">The Security Detail Also Counts</h2>
<p>One interesting detail in Cloudflare’s post is secure reply routing.</p>
<p>Cloudflare says agents can sign routing headers with HMAC-SHA256 so replies go back to the correct agent instance. That may sound technical, but the idea is simple. It helps make sure replies end up in the right place and are harder to spoof or misroute.</p>
<p>A lot of AI agent tools look good in demos, then get messy once messages start moving across real systems and real users.</p>
<p>So it is good to see Cloudflare treating reply routing as core infrastructure instead of tacking it on later.</p>
<h2 id="mcp-and-wrangler-make-this-easier-to-use">MCP and Wrangler Make This Easier to Use</h2>
<p>Cloudflare is also exposing Email Service through its MCP server and Wrangler CLI.</p>
<p>For developers, that means agents can work with email in a more practical way. Instead of wiring everything from scratch, they can use existing tools to discover commands and run tasks when needed.</p>
<p>The Wrangler part is especially practical. It gives agents a lighter way to work without stuffing lots of instructions into context all at once. If you need a quick primer, this guide to the <a href="https://www.hongkiat.com/blog/mcp-guide-ai-tool-integration/">Model Context Protocol</a> explains how MCP lets agents discover tools on demand, which makes Cloudflare's setup easier to follow.</p>
<p>Cloudflare’s approach also lines up with the broader rise of <a href="https://www.hongkiat.com/blog/mcp-servers-development-tools/">MCP servers</a> as a cleaner way to connect AI tools to real development workflows.</p>
<p>For most readers, the main takeaway is simple. Cloudflare is trying to make this usable in real development workflows, not just polished demos.</p>
<h2 id="agentic-inbox-may-be-the-most-practical-release">Agentic Inbox May Be the Most Practical Release</h2>
<p>Cloudflare is also open-sourcing <strong>Agentic Inbox</strong>, a reference app for building an email client with agent automation built in.</p>
<p>For many developers, this may be the most useful part of the launch.</p>
<p>Instead of starting from zero, they get a working example with conversation threading, email rendering, attachments, automatic replies, and an MCP server for review workflows. In other words, it gives developers something real they can study, test, and possibly fork.</p>
<p>Reference apps often reveal more than product announcements. They show what a company expects people to build.</p>
<p>In this case, Cloudflare seems to believe the inbox could become an important place for agent workflows.</p>
<p>For the original release details, see the <a rel="nofollow noopener" target="_blank" href="https://github.com/cloudflare/agentic-inbox">Agentic Inbox repository</a>.</p><p>The post <a href="https://www.hongkiat.com/blog/cloudflare-email-service-for-agents/">Cloudflare Wants Email to Be a Native Interface for Agents</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74366</post-id>	</item>
		<item>
		<title>Best Free Markdown Apps for Mac</title>
		<link>https://www.hongkiat.com/blog/best-free-macos-markdown-apps/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Desktop]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74322</guid>

					<description><![CDATA[<p>These free macOS apps make Markdown much easier to read, write, preview, and organize, whether you want a simple editor or a full note system.</p>
<p>The post <a href="https://www.hongkiat.com/blog/best-free-macos-markdown-apps/">Best Free Markdown Apps for Mac</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Markdown has a way of sneaking into your workflow.</p>
<p>You start by opening a <code>README</code>, jotting down a few notes, or drafting something in plain text. A while later, half your useful files are <code>.md</code> files and TextEdit still treats them like a pile of punctuation.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/markdown-apps-for-mac.jpg" width="2560" height="1660" alt="Markdown apps for Mac"></figure>
<p>That is the awkward part on macOS. Markdown is everywhere, but the default experience still feels oddly undercooked. Open a Markdown file in the wrong app and you get raw symbols, broken rhythm, and none of the readability that made Markdown appealing in the first place.</p>
<p>The fix is simple: use a better app.</p>
<p>Some Markdown apps are built for quick writing. Some are designed for note libraries and linked knowledge bases. Others are barely “apps” in the usual sense, and mostly exist to make Finder previews less ugly. If Markdown has become part of how you work, this guide to <a href="https://www.hongkiat.com/blog/web-content-with-markdown/">writing web content using Markdown</a> is a useful companion.</p>
<p>The right choice depends on whether you want to write, organize, preview, or just get out of TextEdit as fast as possible.</p>
<h2 id="why-use-one">Why a Dedicated Markdown App Is Worth It</h2>
<p>A proper Markdown app does more than make <code>.md</code> files look nicer.</p>
<p>It gives you live preview, cleaner editing, better export options, and in many cases a much better sense of flow. Instead of mentally parsing raw markup, you can just read and write.</p>
<p>That matters more than it sounds. A good Markdown app removes friction from simple tasks, like previewing a note, opening a project <code>README</code>, cleaning up a draft, or exporting something to HTML or PDF without dragging in extra tools.</p>
<p>If you work with Markdown more than occasionally, even a small upgrade here pays for itself fast.</p>
<h2 id="macdown">1. <a rel="nofollow noopener" target="_blank" href="https://macdown.app/">MacDown</a></h2>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/MacDown.jpg" width="2050" height="1200" alt="MacDown editor"></figure>
<p>MacDown remains one of the simplest ways to edit Markdown on a Mac without dragging in a whole ecosystem. The current MacDown 3000 project continues that lightweight formula with live preview, GitHub Flavored Markdown support, export options, and builds for modern Apple Silicon and Intel Macs.</p>
<p>It is free, open-source, and built around a clean two-pane layout: raw Markdown on one side, rendered preview on the other. That setup is hardly revolutionary now, but it still works. You can write, glance right, and instantly see whether your headings, lists, images, or tables are behaving.</p>
<p>It also supports syntax highlighting, themes, and enough quality-of-life features to stay useful without feeling bloated.</p>
<p>If what you want is a straightforward native editor for writing posts, notes, or documentation, it is still a very easy recommendation.</p>
<p><strong>Best for:</strong> simple editing and quick previewing.</p>
<h2 id="obsidian">2. <a rel="nofollow noopener" target="_blank" href="https://obsidian.md/">Obsidian</a></h2>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/Obsidian.jpg" width="1920" height="1170" alt="Obsidian workspace"></figure>
<p>Obsidian is a different kind of tool.</p>
<p>It is not just a Markdown editor. It is a full note-taking environment built around local Markdown files, which is a big part of the appeal. Your notes stay in regular folders on disk, not trapped in some proprietary format.</p>
<p>Where Obsidian really pulls ahead is everything around the writing: backlinks, graph view, plugins, canvas tools, templates, and all the little workflow tweaks that turn a folder of notes into an actual knowledge base.</p>
<p>That does mean it is heavier than something like MacDown. If you just want to open one <code>.md</code> file and type, Obsidian may feel like bringing a backpack to carry a pen.</p>
<p>But if you are building a long-term notes system, linking ideas across projects, or managing a personal knowledge base, it is hard to ignore.</p>
<p><strong>Best for:</strong> note-taking, linked knowledge systems, and long-term organization.</p>
<h2 id="vs-code">3. <a rel="nofollow noopener" target="_blank" href="https://code.visualstudio.com/docs/languages/markdown">VS Code</a></h2>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/Visual%20Studio%20Code.jpg" width="1459" height="638" alt="VS Code markdown"></figure>
<p>VS Code is the obvious recommendation for developers, but it is also a better Markdown app than many people expect.</p>
<p>Open an <code>.md</code> file, split the view, and you get a capable editing setup with built-in preview, document outline, snippet support, and link or image path completions. Add a few extensions and it becomes even more useful.</p>
<p>The bigger reason some people stick with VS Code is convenience. If you already live in it for coding, docs, terminal work, Git, or project notes, there is a real advantage in not switching apps just to edit Markdown. If that sounds familiar, this roundup of <a href="https://www.hongkiat.com/blog/100-free-useful-applications-for-mac-part-i/">free useful Mac apps</a> is another good rabbit hole.</p>
<p>That said, it is still VS Code. If you are looking for a tiny native-feeling Mac app, this is not that. It is best when you want flexibility, extensions, Git awareness, and one place to handle everything.</p>
<p><strong>Best for:</strong> developers, technical writers, and people who already use VS Code all day.</p>
<h2 id="markedit">4. <a rel="nofollow noopener" target="_blank" href="https://github.com/MarkEdit-app/MarkEdit">MarkEdit</a></h2>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/MarkEdit.jpg" width="2560" height="1660" alt="MarkEdit window"></figure>
<p>MarkEdit sits at the opposite end of the spectrum from Obsidian.</p>
<p>It is lightweight, native, and intentionally minimal. The appeal here is not an endless plugin catalog or a graph of your thoughts. It is that the app feels close to what many people actually want: open file, write, done.</p>
<p>MarkEdit is free and open-source, follows GitHub Flavored Markdown, handles large files well, and keeps the whole experience uncluttered. It also supports Shortcuts and AppleScript, which makes it more flexible than it first appears.</p>
<p>If TextEdit grew up and decided to care about Markdown properly, it would probably land somewhere near MarkEdit.</p>
<p><strong>Best for:</strong> minimalists who want a clean native Mac experience.</p>
<h2 id="other-options">5. Other Free Options Worth Knowing</h2>
<p>Not every good Markdown app needs its own section, but a few are still worth keeping in mind.</p>
<h3 id="marktext"><a rel="nofollow noopener" target="_blank" href="https://github.com/marktext/marktext">MarkText</a></h3>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/MarkText.jpg" width="2440" height="1598" alt="MarkText editor"></figure>
<p>MarkText is free and open-source, with live preview, themes, and a cleaner writing-focused interface than many code editors. It is cross-platform, which is useful if you move between macOS and Windows.</p>
<p>The catch is momentum. It still has fans, but updates have felt less energetic than those of some alternatives, so it is harder to recommend as a first pick unless you already like how it works.</p>
<h3 id="zettlr"><a rel="nofollow noopener" target="_blank" href="https://www.zettlr.com/">Zettlr</a></h3>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/Zettlr.jpg" width="1867" height="1227" alt="Zettlr editor"></figure>
<p>Zettlr leans more academic and research-heavy. It is built for larger writing projects, privacy-minded workflows, and citation support, so it makes more sense for papers and long-form work than quick notes.</p>
<p>It is probably overkill for casual use, but not for serious long-form writing. If your workflow revolves around collecting and organizing tools, this list of <a href="https://www.hongkiat.com/blog/essential-mac-apps-new-users/">essential free Mac apps</a> is a related read.</p>
<h2 id="finder-preview">If You Just Want Better Markdown Previews in Finder</h2>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/best-free-macos-markdown-apps/QLMarkdown.jpg" width="2184" height="1876" alt="QLMarkdown preview"></figure>
<p>Sometimes you do not need a whole app. You just want to hit the Spacebar on an <code>.md</code> file and see something better than raw markup.</p>
<p>That is where Quick Look extensions help.</p>
<p>Tools like <a rel="nofollow noopener" target="_blank" href="https://github.com/sbarex/QLMarkdown">QLMarkdown</a> can make Finder previews much more useful, especially if you regularly browse notes, docs, or exported content from different folders.</p>
<p>Once installed, enable the extension in:</p>
<p><strong>System Settings &gt; General &gt; Login Items &amp; Extensions &gt; Quick Look</strong></p>
<p>After that, pressing Spacebar on a Markdown file in Finder shows a rendered preview instead of raw markup.</p>
<h2 id="set-default-app">How to Set a Default App for Markdown Files</h2>
<ol>
<li>Right-click any Markdown file in Finder.</li>
<li>Choose <strong>Get Info</strong>.</li>
<li>Under <strong>Open with</strong>, pick the app you want.</li>
<li>Click <strong>Change All</strong>.</li>
</ol>
<p>That is a tiny change, but it removes a surprising amount of friction if you open Markdown files often.</p>
<h2 id="which-one">Which One Should You Use?</h2>
<p>If you want the short version:</p>
<ul>
<li><strong><a href="#macdown">MacDown</a></strong> if you want a focused editor with live preview</li>
<li><strong><a href="#obsidian">Obsidian</a></strong> if your Markdown files are part of a larger notes system</li>
<li><strong><a href="#vs-code">VS Code</a></strong> if you already live in a developer workflow</li>
<li><strong><a href="#markedit">MarkEdit</a></strong> if you want the simplest native Mac experience</li>
<li><strong><a href="#finder-preview">QLMarkdown</a></strong> if your main problem is previewing files in Finder</li>
</ul>
<p>There is no universal winner here, which is honestly a good sign. Markdown is flexible, and the better apps respect that.</p>
<p>If I had to narrow it down, most people will probably be happiest with one of three routes:</p>
<ul>
<li><strong><a href="#macdown">MacDown</a></strong> for straightforward writing</li>
<li><strong><a href="#obsidian">Obsidian</a></strong> for serious note-taking</li>
<li><strong><a href="#vs-code">VS Code</a></strong> for everything else</li>
</ul>
<h2 id="final-thought">Final Thought</h2>
<p>Markdown on macOS does not need to be complicated. The default experience is just more awkward than it should be.</p>
<p>Pick the tool that matches how you actually work, not the one with the longest feature list. For some people that will be Obsidian. For others, it will be a tiny app like MacDown or MarkEdit that opens fast and does the job quietly.</p>
<p>Either way, once you stop opening <code>.md</code> files in plain TextEdit, going back feels a little ridiculous.</p><p>The post <a href="https://www.hongkiat.com/blog/best-free-macos-markdown-apps/">Best Free Markdown Apps for Mac</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74322</post-id>	</item>
		<item>
		<title>Adobe’s Firefly AI Assistant Wants to Run Creative Cloud for You</title>
		<link>https://www.hongkiat.com/blog/adobe-firefly-ai-assistant-creative-cloud/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 09:55:00 +0000</pubDate>
				<category><![CDATA[Photoshop]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74360</guid>

					<description><![CDATA[<p>Adobe's new Firefly AI Assistant aims to handle multi-step creative tasks across Photoshop, Premiere, Illustrator, Lightroom, Express, and more from one prompt.</p>
<p>The post <a href="https://www.hongkiat.com/blog/adobe-firefly-ai-assistant-creative-cloud/">Adobe&#8217;s Firefly AI Assistant Wants to Run Creative Cloud for You</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Adobe is trying to turn Firefly into more than an image generator.</p>
<p><a rel="nofollow noopener" target="_blank" href="https://blog.adobe.com/en/publish/2026/04/15/introducing-firefly-ai-assistant-new-way-create-with-our-creative-agent"><strong>Firefly AI Assistant</strong></a> is Adobe’s attempt to turn Creative Cloud into an agent-driven workflow layer. Instead of bouncing between Photoshop, Premiere Pro, Illustrator, Lightroom, and Express, users describe the result they want, and Firefly handles the app hopping in the background.</p>
<p>That is a much bigger move than adding another AI button inside one Adobe app. It extends the direction Adobe has already been pushing with features like <a href="https://www.hongkiat.com/blog/photoshop-generative-ai/">Photoshop with AI</a>, but at a broader workflow level.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/adobe-firefly-ai-assistant-creative-cloud/Firefly-AI-Assistant.jpg" alt="Firefly AI Assistant" width="1576" height="838"></figure>
<h2 id="what-adobe-announced">What Adobe Announced</h2>
<p>At the center of the announcement is a simple idea: one prompt should be able to trigger a multi-step workflow across several Creative Cloud apps while preserving context between sessions. Adobe laid that out in both its <a rel="nofollow noopener" target="_blank" href="https://blog.adobe.com/en/publish/2026/04/15/introducing-firefly-ai-assistant-new-way-create-with-our-creative-agent">official blog post</a> and its <a rel="nofollow noopener" target="_blank" href="https://news.adobe.com/news/2026/04/adobe-new-creative-agent">newsroom release</a>.</p>
<p>The pitch is straightforward: spend less time figuring out which app, panel, or workflow to use, and more time describing the end result.</p>
<p>Adobe is positioning the assistant to work across apps including:</p>
<ul>
<li>Photoshop</li>
<li>Premiere Pro</li>
<li>Express</li>
<li>Lightroom</li>
<li>Illustrator</li>
<li>additional Creative Cloud apps over time</li>
</ul>
<p>It will also ship with prebuilt <strong>Creative Skills</strong>, reusable task flows for common jobs such as retouching portrait photos with consistent presets or generating content across social channels. Users will also be able to create their own skills, which is where this starts to feel less like a chatbot and more like a customizable automation layer for creative work.</p>
<h2 id="the-real-shift-is-the-interface">The Real Shift Is the Interface</h2>
<p>The assistant itself is only part of the story. The more interesting move is Adobe’s push toward prompts as the control surface, with the app stack acting as the execution layer underneath.</p>
<p>That lowers the barrier for less technical users inside Creative Cloud. Instead of knowing exactly where every tool lives or how to chain actions manually, users can ask for an outcome and step in when they want to refine it. If you have run into Adobe’s current AI limitations before, including common <a href="https://www.hongkiat.com/blog/fix-photoshop-generative-fill-grayed-out/">Generative Fill issues in Photoshop</a>, that shift is easy to understand.</p>
<p>Firefly AI Assistant is being framed as a guided agent, not a one-shot black box. It can ask contextual follow-up questions, surface suggestions, and let users adjust outputs while the workflow is still in motion.</p>
<p>It is also meant to learn a user’s preferences over time, including aesthetic choices, preferred tools, and workflow habits. If Adobe executes well, repeat tasks could get much faster for solo creators and teams doing similar work every week.</p>
<h2 id="where-it-could-actually-help">Where It Could Actually Help</h2>
<p>One of Adobe’s examples is editing a set of product photos shot in a forest.</p>
<p>Instead of rebuilding the scene manually, the assistant could expose a simple control, such as a slider to increase or reduce trees and foliage around the subject. That turns a job that would usually take several manual edits into a faster guided adjustment.</p>
<p>That is the practical promise here. Not generative AI for show, but a shorter path through tedious creative work.</p>
<h2 id="frameio-is-part-of-it">Frame.io Is Part of It</h2>
<p>Adobe is also extending this idea into <a rel="nofollow noopener" target="_blank" href="https://frame.io/"><strong>Frame.io</strong></a>, which makes the announcement more relevant for teams.</p>
<p>In Frame.io, the assistant is meant to help package materials for presentations, share them with collaborators, collect feedback, and even apply requested changes automatically.</p>
<p>It is an ambitious claim, but it fits Adobe’s broader direction. Creation, review, and revision are starting to blur into one connected workflow instead of staying as three separate stages.</p>
<h2 id="adobe-wants-it-to-work-beyond-its-own-apps">Adobe Wants It to Work Beyond Its Own Apps</h2>
<p>Another detail worth watching is Adobe’s work with Anthropic.</p>
<p>Adobe is also pushing Firefly AI Assistant beyond its own interface. Compatibility with Claude would let creators tap Adobe workflows from outside Creative Cloud itself, and Adobe has already signaled that more third-party integrations are coming.</p>
<p>That suggests Adobe knows people are no longer working inside one software silo all day. If Firefly AI Assistant can show up where people already work, it has a better shot at becoming part of a real workflow instead of a demo feature. For anyone already weighing Adobe’s ecosystem costs and tradeoffs, <a href="https://www.hongkiat.com/blog/adobe-creative-cloud-plans-photoshop/">this Creative Cloud plan breakdown for Photoshop users</a> is a useful companion.</p>
<h2 id="new-firefly-features-shipping-sooner">New Firefly Features Shipping Sooner</h2>
<p>Firefly AI Assistant is still headed for public beta in the coming weeks, so there is nothing to try yet. Adobe also paired the announcement with a broader batch of Firefly upgrades, including the changes detailed in its separate post on <a rel="nofollow noopener" target="_blank" href="https://blog.adobe.com/en/publish/2026/04/15/adobe-extends-leadership-video-unleashing-new-ai-powered-creation-firefly-reinventing-color-editors-in-premiere">new Firefly video tools and Premiere changes</a>.</p>
<h3 id="firefly-video-editor-updates">Firefly Video Editor Updates</h3>
<p>Firefly Video Editor is getting:</p>
<ul>
<li><strong>audio upgrades</strong>, including Enhance Speech, noise reduction, reverb reduction, and better balancing for speech, music, and ambience</li>
<li><strong>color controls</strong> for exposure, contrast, saturation, temperature, and other image adjustments</li>
<li><strong>Adobe Stock integration</strong>, giving editors access to licensed media assets inside the workflow</li>
</ul>
<p>Adobe is also expanding the list of third-party models available in Firefly. The lineup now includes Kling 3.0, Kling 3.0 Omni, Veo 3.1, Runway Gen-4.5, Luma Ray3.14, FLUX.2 [pro], ElevenLabs Multilingual v2, Topaz Astra, and Adobe’s own Firefly models.</p>
<h3 id="new-image-editing-controls">New Image Editing Controls</h3>
<p>On the image side, Adobe also introduced two new editing features:</p>
<ul>
<li><strong>Precision Flow</strong>, which lets users generate multiple image variations from one prompt and move through them with a slider</li>
<li><strong>AI Markup</strong>, which lets users brush, mark, or guide where edits happen using direct visual input and reference images</li>
</ul>
<p>Both point in the same direction: less prompt-only trial and error, more controlled editing.</p>
<h2 id="why-this-is-worth-watching">Why This Is Worth Watching</h2>
<p>A lot of AI product demos look impressive right up until you try to use them.</p>
<p>This stands out because Adobe is not just bolting another isolated AI trick onto one app. It is trying to connect generation, editing, collaboration, and revision inside one assistant-led workflow.</p>
<p>If Adobe can make that workflow feel reliable, fast, and editable, Firefly AI Assistant could become one of the more practical uses of AI in pro creative software.</p>
<p>If it feels slow, vague, or too eager to take over, creatives will ignore it and go back to the tools they already trust.</p>
<p>That is the balance Adobe has to get right.</p>
<p>For now, Firefly AI Assistant looks less like a minor feature update and more like Adobe’s clearest attempt yet to make AI the operating layer across Creative Cloud.</p><p>The post <a href="https://www.hongkiat.com/blog/adobe-firefly-ai-assistant-creative-cloud/">Adobe&#8217;s Firefly AI Assistant Wants to Run Creative Cloud for You</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74360</post-id>	</item>
	</channel>
</rss>