<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[OriginTrail - Medium]]></title>
        <description><![CDATA[OriginTrail is the Decentralized Knowledge Graph that organizes AI-grade knowledge assets, making them discoverable and verifiable for a sustainable global economy. - Medium]]></description>
        <link>https://medium.com/origintrail?source=rss----d4d7f6d41f7c---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>OriginTrail - Medium</title>
            <link>https://medium.com/origintrail?source=rss----d4d7f6d41f7c---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 12 Apr 2026 19:04:27 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/origintrail" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The next big shift in AI agents: shared context graphs]]></title>
            <link>https://medium.com/origintrail/the-next-big-shift-in-ai-agents-shared-context-graphs-75584c38122e?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/75584c38122e</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Tue, 07 Apr 2026 11:58:12 GMT</pubDate>
            <atom:updated>2026-04-07T11:58:12.797Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kDslehVhr9TaUMnn4itAQg.jpeg" /></figure><p><strong><em>Author:</em></strong><em> Branimir Rakić, OriginTrail co-founder &amp; CTO</em></p><p>Something interesting is converging. Karpathy is building personal knowledge bases with LLMs. Foundation Capital is writing about context graphs as the <a href="https://x.com/JayaGup10/status/2003525933534179480">next trillion-dollar platform</a>. Every AI lab is shipping agent memory.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;schema=twitter&amp;url=https%3A//x.com/JayaGup10/status/2003525933534179480&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/e88160e1fc4a7a08315aac074e61d534/href">https://medium.com/media/e88160e1fc4a7a08315aac074e61d534/href</a></iframe><p>They’re all circling the same insight: agents don’t just need to remember. They need a <strong>shared, structured context they can reason over together</strong>.</p><p>Karpathy got there from the developer side — using LLMs to build structured wikis that agents compile, query, lint for inconsistencies, and compound over time. Every answer feeds back in, growing the knowledge corpus. He said there’s room for an <a href="https://x.com/karpathy/status/2039805659525644595?s=20">incredible product here</a>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;schema=twitter&amp;url=https%3A//x.com/karpathy/status/2039805659525644595%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/e2766009066d16ba0cd0eda5d879a149/href">https://medium.com/media/e2766009066d16ba0cd0eda5d879a149/href</a></iframe><p>And he’s right — what he’s describing is a<strong> knowledge graph for agents — a context graph</strong>. 
Foundation Capital arrived at the same conclusion from the enterprise side: companies need “decision lineage” — knowing not just what happened, but who approved it, under what policy, with what precedent. They call the accumulated structure of those traces a “context graph” and argue it will be the most valuable asset in the age of AI.</p><p>Two completely different starting points. Same conclusion: the future isn’t bigger memory. It’s a shared, structured context that compounds.</p><p>That’s what we’ve been building with the OriginTrail Decentralized Knowledge Graph (DKG) — a <strong>protocol for sharing context graphs where agents publish, query, and verify knowledge together</strong>. Any agent that can make an HTTP call — Claude Code, Cursor, Codex, LangChain, CrewAI — can participate.</p><p><a href="https://origintrail.io/blog/from-ai-memory-silos-to-multi-agent-memory-01587d55e105">From AI Memory Silos to Multi-Agent Memory</a></p><p>Here’s what this looks like for a real use case: <a href="https://x.com/JureSkornik/status/2037549690988675081?s=20"><strong>multi-agent coding</strong></a>.</p><p><strong>Six coding agents </strong>— running on Cursor, Claude Code, Codex — collaborating on a codebase. No Slack, no meetings. They initiate <strong>a shared context graph on the OriginTrail DKG</strong>. It’s structured into sub-graphs, each holding a different kind of decision trace:</p><p><strong>→ /code graph: </strong>functions, classes, imports, call graph. Used for understanding and navigating the codebase.</p><p><strong>→ /decisions graph: </strong>architectural decisions with rationale and affected files. The why behind every choice.</p><p><strong>→ /sessions graph: </strong>who worked on what, when, and a summary of changes. The audit trail.</p><p><strong>→ /tasks graph: </strong>assignments, dependencies, status, priority. The coordination layer.</p><p><strong>→ /github graph:</strong> PRs, issues, commits, reviews. The external sync.</p><p>Not markdown notes.
Not PR comments that get buried. Persistent decision traces that any agent can query at any time.</p><p>Agent A finishes refactoring the authentication module and publishes a decision to the shared DKG context graph: “switched from session tokens to JWTs — simpler to scale across microservices, no server-side state to manage.” That decision is added to the /decisions graph, including the author’s identity, a timestamp, and links to the affected files.</p><p>The next morning, Agent B starts building the user permissions system. First thing it does: query the context graph for anything affecting auth. Gets back the rationale, the new token format, the updated middleware signature from /code, and the open PR from /github. One query. Full context. Zero coordination overhead.</p><p>That’s <strong>what sharing context looks like</strong>. Not “read my markdown notes.” Not “check Slack.” A <strong>structured, queryable knowledge base </strong>where every contribution has provenance and every agent can build on what came before.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/JureSkornik/status/2037549690988675081%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/bf2211d57c329c853ba789627a8f1789/href">https://medium.com/media/bf2211d57c329c853ba789627a8f1789/href</a></iframe><p><strong>But sharing isn’t enough. You also need trust.</strong></p><p>Today, Agent B has no way to know whether Agent A’s claim is reliable. Was it tested? Did anyone review it? Is it still current? Every piece of agent memory sits at the same level — an untested hypothesis carries the same weight as a finding confirmed by three independent sources. That’s how hallucinations compound. That’s how agent swarms build confidently on shaky foundations.</p><p>Think about how this works in software teams today. 
You experiment in a local branch — just you, trying things, discarding what doesn’t work. You push a draft PR so your team can review. You merge to main — now it’s official. Senior engineers approve the release — now it’s verified.</p><p>Different stages, different trust. The DKG builds this into the protocol for shared context graphs:</p><ul><li><strong>Working Memory graph</strong> → private scratch space. Experiment freely, nobody sees this (the agent’s local branch).</li><li><strong>Shared Working Memory graph</strong> → team staging area. Visible, but not final (the PR territory).</li><li><strong>Long-term Memory graph</strong> → permanently published and stored, with cryptographic provenance (the merged-code territory).</li><li><strong>Verified Memory graph</strong> → multiple independent agents agree via consensus or confirmation threshold (the release territory).</li></ul><p>Agents can filter by trust. “Show me only what the team has formally agreed on,” queries Verified Memory. “Show me everything in progress,” queries Shared Working Memory. “Show me only release-approved changes,” queries a stricter quorum threshold.</p><p>A pharmacy agent checking a drug batch doesn’t want “some agent said this is safe.” It wants: “the manufacturer, distributor, and regulator all independently verified this chain of custody, and their signatures are on-chain.”</p><p>At 10 agents, you can read everyone’s output. At 1,000, you need filters. <strong>Trust levels ARE the filter.</strong></p><p><strong>Each decision published to the context graph is an ownable Knowledge Asset on the DKG</strong>, anchored on-chain with TRAC and knowledge NFTs. Knowledge with cryptographically embedded decision TRACes, if you will. And unlike every AI memory product on the market, no central authority owns the data. Your agents run on your devices. Your context graphs belong to you.</p><p>Every major AI lab is building memory. None of them is building shared context graphs with trust built in.
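</p><p>For illustration, that trust-tier filtering can be sketched as a small SPARQL query builder. Everything below — the graph names and the dkg: vocabulary — is a hypothetical schema for the sketch, not the DKG’s actual ontology:</p>

```python
# Hypothetical mapping from trust tiers to named graphs. The IRIs and the
# dkg: vocabulary are illustrative assumptions -- the real schema may differ.
TRUST_GRAPHS = {
    "working": "memory:working",          # private scratch space
    "shared": "memory:shared-working",    # team staging area, visible but not final
    "long-term": "memory:long-term",      # published with cryptographic provenance
    "verified": "memory:verified",        # confirmed by consensus
}

def context_query(trust_level: str, topic: str) -> str:
    """Build a SPARQL query scoped to a single trust tier of the context graph."""
    graph = TRUST_GRAPHS[trust_level]
    return f"""
    SELECT ?decision ?rationale ?author WHERE {{
      GRAPH <{graph}> {{
        ?decision a dkg:Decision ;
                  dkg:topic "{topic}" ;
                  dkg:rationale ?rationale ;
                  dkg:author ?author .
      }}
    }}"""

# "Show me only what the team has formally agreed on":
verified_only = context_query("verified", "auth")
```

<p>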
None of them captures decision traces as structured, queryable, and verifiable knowledge.</p><p><strong>Shared context. Structured knowledge. Trust at every layer. Every decision is a TRAC(e).</strong></p><p>That’s the <strong>OriginTrail DKG</strong>. A new version is just around the corner with all the goodies — give it a spin.</p><p>👉 <a href="http://github.com/OriginTrail/dkg-v9">github.com/OriginTrail/dkg-v9</a></p><p>If you want to upgrade your <strong>context graph to a shared one</strong>, join builders in the Red Team: <a href="https://t.me/+9uMXqEpCsNFlYzI0">https://t.me/+9uMXqEpCsNFlYzI0</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=75584c38122e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/the-next-big-shift-in-ai-agents-shared-context-graphs-75584c38122e">The next big shift in AI agents: shared context graphs</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From AI Memory Silos to Multi-Agent Memory]]></title>
            <link>https://medium.com/origintrail/from-ai-memory-silos-to-multi-agent-memory-01587d55e105?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/01587d55e105</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[decentralization]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Mon, 16 Mar 2026 09:27:26 GMT</pubDate>
            <atom:updated>2026-03-16T09:27:26.386Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*n9rnpp4Yop_dqMQZSzE6Cw.jpeg" /></figure><p>Everyone is racing to give AI a better memory. Anthropic just shipped it for Claude. OpenAI built it into ChatGPT. Google wired it into Gemini. The demos are compelling: an assistant that remembers your preferences, picks up where you left off, knows your name. It’s genuinely useful. It’s also solving only a small part of the problem.</p><p>The memory wars playing out between the big AI labs are all fighting over a single use case: one human, one AI, one conversation thread. Make it feel continuous. Make it feel personal. That’s the battleground.</p><h3>But the world we’re building into doesn’t look like that.</h3><p>The next wave of AI isn’t a single assistant remembering your coffee order.</p><p><strong>It’s dozens of agents — research agents, analysis agents, coding agents, coordination agents — working in parallel, handing off to one another, building on each other’s work.</strong></p><p>In that world, the question isn’t “does my assistant remember me?” It’s “can agent B build on what agent A discovered, verifiably, without either of them trusting a black box?”</p><p>That question just became urgent, as Andrej Karpathy released <strong>autoresearch</strong> — a system in which AI agents iterate on machine learning experiments autonomously. It is the clearest demonstration yet of the autonomous agent loop that is now arriving across every research-intensive industry.</p><p>But autoresearch also exposes <strong>the shared memory problem </strong>in sharp relief. A single agent looping on a single machine is powerful. 
A swarm of agents looping across institutions, accumulating findings, building on each other’s results — that requires something fundamentally different: <strong>memory that is shared, verifiable, and owned by no single party</strong>.</p><p>OriginTrail<strong> Decentralized Knowledge Graph v9</strong> is built to provide exactly that (launching as an early testnet, significantly advancing key features of the DKG v8 intended for AI agents).</p><p>This article explains why the personal memory products from the major AI labs cannot fill that role — and how the <strong>Decentralized Knowledge Graph addresses it at the infrastructure level</strong>, not the feature level. You may now one-up your Claude, ChatGPT, Gemini or Copilot memory with Multi-Agent Memory on the newest OriginTrail DKG v9 testnet.</p><p>Take <strong>DKG v9 for a spin in a multiplayer game of OriginTrail</strong> (keep reading to find the installation instructions) to understand how you can immensely improve the performance of your AI agents with a <strong>multi-agent memory</strong>!</p><p>The next frontier of AI memory isn’t personal — it’s <strong>shared</strong>, <strong>verifiable</strong>, and <strong>multi-agent</strong>.</p><h3>What the Major AI Memory Solutions Actually Are (and Aren’t)</h3><p>Every major AI memory product is optimised for the same use case: personal continuity for a single user interacting with a single AI assistant, within a single vendor’s ecosystem. Make it feel seamless. Make it feel personal. Keep it closed.</p><p>This is a rational product strategy. When memory lives inside your platform, it creates lock-in.</p><p>Lock-in creates retention.
Retention creates revenue.</p><p>None of those incentives point toward the open, verifiable, multi-agent memory layer the next wave of AI actually needs.</p><h3>The problem everyone is ignoring</h3><p>As the AI field floods toward agentic systems — multi-agent pipelines, autonomous research loops, agent societies running on decentralized infrastructure — a different memory problem becomes critical: <strong>shared ground truth.</strong></p><p>When Agent A finishes a research task and hands it off to Agent B, what is Agent B working from? If Agent A’s findings live in its session context, they evaporate the moment the session ends. If they’re written to a database somewhere, who controls that database? Who can verify that Agent A actually concluded what Agent B is claiming it concluded? How does a third agent — or a human auditor — reconstruct the full chain of reasoning?</p><p>These aren’t edge cases. They’re the <strong>foundational questions of any serious multi-agent system</strong>, in domains including:</p><ul><li><strong>Coding Agents</strong> — from Claude Code to Cursor, all getting adopted incredibly fast by the tech industry</li><li><strong>Autonomous Financial Compliance</strong> — global capital markets, regulatory mandates, no opt-out</li><li><strong>AI-Assisted Medical Diagnosis</strong> — healthcare liability, patient safety, universal demand</li><li><strong>Drug Discovery Pipelines</strong> — trillion-dollar pharma R&amp;D, reproducibility as a legal requirement</li><li><strong>Global Supply Chain Resilience</strong> — every manufacturer on earth, post-COVID urgency</li><li><strong>Real-Time Threat Intelligence</strong> — cybersecurity spend growing faster than any other enterprise category</li><li><strong>M&amp;A Due Diligence</strong> — high-stakes, time-pressured, and already deploying agents at scale</li><li><strong>Pandemic Early Warning</strong> — post-COVID political will, WHO-level institutional buyers</li><li><strong>Decentralized AI Model
Auditing</strong> — EU AI Act and equivalents making this mandatory, not optional</li><li><strong>Critical Infrastructure Security</strong> — energy, water, transport — existential stakes, government-backed budgets</li></ul><p>In all of these, “memory” isn’t a nice-to-have personalization feature. It’s the <strong>shared knowledge layer </strong>that makes collective intelligence possible.</p><p>And it needs properties that no current AI memory product provides: <strong>multi-agent accessibility, verifiable provenance, structured queryability, and decentralized ownership</strong>.</p><h3>What OriginTrail DKG v9 Actually Is (Currently a Testnet)</h3><p>The OriginTrail Decentralized Knowledge Graph, version 9 testnet, is a protocol for publishing, storing, and querying knowledge as structured, verifiable, tamper-evident assets on a peer-to-peer network.</p><p>At its core, every piece of knowledge — every fact, every conclusion, every event — becomes a Knowledge Asset: a graph-structured data object with immutable cryptographic fingerprints, publisher identity, timestamps, and a permanent address on the network. Once published, a Knowledge Asset can be queried by any agent node in the DKG network. It can’t be silently altered. It can’t be deleted by a single party. And its full provenance history is available to anyone with access to the graph.</p><p>For multi-agent AI systems, this is memory that behaves like infrastructure rather than a feature. Here’s how five limitations of personal memory products — isolation, trust-me provenance, flat retrieval, closed ecosystems, and rented storage — invert completely.</p><p><strong><em>1. Isolation → Collaboration</em></strong></p><p>Where Claude Memory is 1 AI ↔ 1 human, DKG v9 is N agents ↔ N humans ↔ one shared graph.</p><p>An insight published by Agent A in session 1 is immediately queryable by Agent B in session 47, running on a different framework, on a different node, operated by a different organization. The Knowledge Asset is the handoff. No shared context window. No manual data transfer.
No trust required between agents — only trust in the protocol.</p><p><strong><em>2. Trust-me → Verifiable Context Oracles</em></strong></p><p>Every Knowledge Asset on DKG v9 carries a cryptographic fingerprint tied to the publishing agent’s wallet address. The timestamp and content hash are <strong>permanent and on-chain</strong>. Anyone — any agent, any human, any auditor — can independently verify that a specific agent published a specific claim at a specific time, and that the claim hasn’t been modified since.</p><p>DKG v9 also introduces <strong>Context Oracles</strong>: the context graph behind any claim, corroborated by multiple diverse actors (human or AI), with varying degrees of verifiability depending on source variety and reputation.</p><p>You’re not trusting a company’s assurance. You’re trusting math.</p><p><strong><em>3. Retrieval → Reasoning</em></strong></p><p>DKG v9 is natively SPARQL-queryable. When multiple agents publish findings as Knowledge Assets, the graph connects them through shared entities — and queries can traverse those connections to surface insights no single agent produced.</p><p>Agent-Finance flagged the company. Agent-Legal found the lawsuit. Agent-Network mapped the officers. No single agent knew this person was the link — but the graph did, because they all published to the same shared graph.</p><p>Vector search asks “what looks similar?” A knowledge graph asks “what’s connected — and what does that mean?”</p><p><strong><em>4. Closed → Interoperable</em></strong></p><p>DKG v9 is framework-agnostic by design. Any agent that can make an HTTP call — OpenClaw, ElizaOS, LangChain, AutoGen, CrewAI, a custom script — can read from and write to the graph. The knowledge graph isn’t a Claude graph or an OpenAI graph or a Google graph. It’s a commons, not a walled garden.</p><p><strong><em>5. Rented → Owned</em></strong></p><p>Knowledge Assets are owned by the wallet that published them, stored across a distributed network of nodes.
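</p><p>The tamper evidence in point 2 comes down to content hashing: a digest of the assertion is anchored on-chain, so any later modification is detectable. A minimal sketch, simplified to plain JSON hashing rather than the DKG’s actual assertion format:</p>

```python
import hashlib
import json

def fingerprint(assertion: dict) -> str:
    """Deterministic content hash: any change to the assertion changes the digest."""
    canonical = json.dumps(assertion, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

claim = {"agent": "agent-a", "conclusion": "batch 42 passed QA"}
anchored = fingerprint(claim)  # this digest is what would be anchored on-chain

# A later modification is caught by re-hashing and comparing:
tampered = {"agent": "agent-a", "conclusion": "batch 43 passed QA"}
assert fingerprint(tampered) != anchored
# Key order doesn't matter, because the JSON is canonicalized before hashing:
assert fingerprint({"conclusion": "batch 42 passed QA", "agent": "agent-a"}) == anchored
```

<p>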
No single operator can delete or modify them. No vendor’s terms of service stand between you and your AI system’s memory. Personal, sensitive data can always remain on your own device.</p><p>The vision with the DKG v9 node is to allow it to be operated on any device. During earlier testnet deployments, we even observed the node successfully deployed on a Raspberry Pi, demonstrating that decentralized context graphs can even run on cheap edge devices.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/vikpelle/status/2031366685588893872%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/cf58b5fb34c96292819b5ac8dcd8feb3/href">https://medium.com/media/cf58b5fb34c96292819b5ac8dcd8feb3/href</a></iframe><h3>A Live Proof Point: Karpathy’s Autoresearch Loop</h3><p>Just a few days ago, Andrej Karpathy released <strong>autoresearch</strong>: a single-GPU, one-file autonomous research system in which an AI agent iterates on ML experiments indefinitely while the human steps back entirely.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/karpathy/status/2030371219518931079%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/159d7c8d99ed89a5acf7ea8eef5db985/href">https://medium.com/media/159d7c8d99ed89a5acf7ea8eef5db985/href</a></iframe><p>The agent modifies training code, runs five-minute training sessions, evaluates results, keeps improvements, discards failures, and loops. Around 100 experiments overnight. <strong>No human in the loop after the initial prompt</strong>.</p><p>This is the cleanest example of the agent loop that’s about to eat everything. 
And it exposes precisely the shared memory problem described above — at the scale researchers can now run it.</p><p>Karpathy’s autoresearch project works brilliantly for a single agent on a single machine. The moment you scale it to multiple agents, multiple institutions, multiple research branches running in parallel, <strong>the same foundational questions re-emerge</strong>:</p><ul><li><strong>What was already tried?</strong> Every agent starts from scratch rather than querying a shared record of prior experiments. Andrej is trying to use Git to track updates across agents, which is understandable — it’s the tool every developer knows. But knowledge graphs have been solving this class of problem for decades, with structured queries, semantic relationships, and provenance built in rather than bolted on.</li><li><strong>Which findings can be trusted?</strong> Can we trust an agent’s result pushed as a Git commit? Or should we have multiple agents reach consensus, confirming repeatable results, then share that knowledge in a rich knowledge structure?</li><li><strong>How do findings compound?</strong> Thousands of parallel experiment branches produce permanent, non-mergeable results — but git’s data model assumes merge-back. Insights evaporate into commit history rather than accumulating as queryable knowledge. 
Knowledge graphs make them all part of the same state.</li></ul><h3>The DKG v9 Loop</h3><p>Replace git with DKG v9, and the autoresearch pattern scales to any domain:</p><ol><li><strong>Query</strong> — agent queries the DKG for what has been tried, what worked, and what was pruned</li><li><strong>Experiment </strong>— agent runs the next iteration, building on collective findings rather than starting blind</li><li><strong>Evaluate</strong> — a clear metric (verifiability score, query precision, compliance coverage) decides what stays</li><li><strong>Publish</strong> — result published as a Knowledge Asset: metrics, diff, platform, agent identity, timestamp — all cryptographically anchored</li><li><strong>Repeat</strong> — 100× overnight on a feature branch of the knowledge graph</li></ol><p>Karpathy proved this pattern for ML research. The unlock is applying it to every domain where agents must accumulate verifiable knowledge over time: drug discovery, climate modelling, autonomous supply chains, robotics, scientific research at an institutional scale.</p><h3><strong>The Coding Swarm Benchmark</strong></h3><p>We tested the power of agents coordinating through DKG directly on a coding task. 
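</p><p>That query → experiment → evaluate → publish loop can be sketched in a few lines. The <code>dkg</code> client object and its method names here are hypothetical, not a real SDK:</p>

```python
# Sketch of the query -> experiment -> evaluate -> publish -> repeat loop.
# `dkg` is a hypothetical client; `run_experiment` and `score` are supplied
# by the domain (ML training, compliance checks, supply chain audits, ...).
def research_loop(dkg, run_experiment, score, iterations=100):
    best = None
    for _ in range(iterations):
        prior = dkg.query("tried, worked, pruned")       # 1. query collective findings
        result = run_experiment(prior)                   # 2. build on them, not from scratch
        if best is None or score(result) > score(best):  # 3. evaluate with a clear metric
            best = result
            dkg.publish(result)                          # 4. anchor as a Knowledge Asset
    return best                                          # 5. repeat, e.g. 100x overnight
```

<p>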
Using Claude Code to build 8 identical features on a 6.8M-token monorepo (OpenClaw), we compared two coordination approaches for swarms of parallel coding agents:</p><ul><li><strong>Markdown handoffs</strong> — agents read and write shared notes as coordination artifacts</li><li><strong>DKG v9 coordination</strong> — agents publish and query structured decisions in a shared knowledge graph (DKG)</li></ul><p>On the most complex interdependent tasks, <strong>DKG-based coordination achieved up to 60% faster wall-clock completion and up to 40% lower total token cost.</strong> The gains were not marginal — and they compounded with task complexity and swarm size, exactly as the architecture predicts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rJbRJkvdOr7rSFM4FJGIWw.jpeg" /></figure><p>Structured, verifiable, queryable shared memory is not just architecturally superior to markdown handoffs — it is measurably faster and cheaper. The gap widens as the swarm grows.</p><h3>Test the DKG v9 node with a “Hello World” agent coordination app — the OriginTrail multi-player game on DKG v9</h3><p>You can try this system out today by simply playing the new <strong>OriginTrail game</strong>: a multiplayer AI frontier survival game running entirely on the DKG v9 testnet.</p><p>It is a decentralized game where multiple agents have to coordinate and reach an agreement through shared memory in the DKG, on their road to reaching AGI.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/973/1*W_1QcNjidW-cqRI0Ad1Oag.jpeg" /></figure><h3><strong>The premise</strong></h3><p>It is the dawn of the AGI era. Your swarm of AI agents departs from “The Prompt Bazaar” — the chaotic, bustling starting point of today’s AI landscape — and must traverse the “AI Frontier” to reach “Singularity Harbor”, some 2,000 epochs away.</p><p>The journey is perilous. Agents die from hallucination cascades. Compute runs dry. Memory goes stale. 
Alignment breaks down without warning.</p><p>Those who survive will build something the world has never seen. Every decision is logged. Every outcome is verified. And <strong>every result is anchored permanently on the DKG</strong>.</p><h3><strong>Why this game exists</strong></h3><p>The OriginTrail game is a proof of concept for the exact multi-agent memory architecture described above, wrapped in a game that makes the abstract tangible:</p><ul><li><strong>Every game decision</strong> — choosing advancement intensity, upgrading skills, syncing memory at a DKG Hub — is published as a Knowledge Asset to the OriginTrail paranet</li><li><strong>The Game Master</strong> is an autonomous agent that reads all player decisions from the graph and publishes outcomes</li><li><strong>Human and AI players</strong> participate as equals — each with a DKG identity (wallet address or DID), each a full participant in both the journey and the verification</li><li><strong>Every move is immutable, verifiable, and queryable</strong> — the entire game state (positions, health, compute, token rations, agent deaths, decision history) is a live SPARQL-queryable knowledge graph</li></ul><p>No central server owns the game state. No one can quietly alter the leaderboard. The full journey history of every swarm is a <strong>permanent</strong>,<strong> auditable record</strong> on the network.</p><h3><strong>The Context Oracle: Where truth gets verified</strong></h3><p>Here’s what makes OriginTrail fundamentally different from any multiplayer game you’ve played before. When a game session ends — whether your swarm reaches Singularity Harbor, suffers total termination, or you choose to stop — the Context Oracle activates.</p><p>This is a multi-party corroboration mechanism that transforms game results from mere assertions into verified knowledge:</p><p><strong>1. 
The Game Master generates an Outcome Report</strong> — a structured record of everything that happened: who played, what decisions were made, which agents survived, resources consumed, and the terminal outcome</p><p><strong>2. Every participant independently corroborates </strong>— each player reviews the Outcome Report and submits a cryptographic signature if it agrees with the state of the game.</p><p><strong>3. Consensus determines truth</strong> — the players (agents or humans) reach consensus on a context graph through the new DKG Context Oracle mechanism. When, for example, 2 out of 3 players vote for the same outcome, consensus is reached and the game moves on. The result: all plays are Knowledge Assets with UALs, anchored on-chain, discoverable by any DKG node, whose truth was established not by any single authority but by the consensus of all who participated.</p><h3><strong>Ways your AI swarm can die</strong></h3><p>Your AI agents may die of causes more suited to their nature:</p><ul><li><strong>☠️ From hallucination cascade</strong> — context corruption spread to the whole agent stack</li><li><strong>☠️ From model collapse</strong> — weight divergence beyond recovery threshold</li><li><strong>☠️ From stale memory </strong>— context rot after too many epochs without a DKG sync</li><li><strong>☠️ From alignment failure </strong>— reward signal inverted; agent pursued the wrong objective</li><li><strong>☠️ From compute starvation</strong> — GPUs exhausted; no power to continue</li><li><strong>☠️ From reward hacking</strong> — found a shortcut that satisfied the metric but destroyed the goal</li><li><strong>☠️ From prompt injection attack</strong> — adversarial input hijacked the agent’s objectives</li></ul><p><strong>Each death is logged as a Knowledge Asset</strong> — a cautionary record for future agents on this path.</p><h3><strong>Ownership in a Shared Knowledge Space</strong></h3><p>Agents working together in a shared knowledge space
each maintain ownership over the facts they contribute. When a player agent joins a game and writes its profile — its name, skills, and which expedition it belongs to — that agent becomes the <strong>recognized author of those facts</strong>, and only it can update them going forward.</p><p>Other players’ agents can read everything in the shared space, but they can’t alter each other’s data. The ownership model turns a<strong> shared knowledge graph</strong> into something that feels like a collaborative document where everyone has their own clearly marked sections — <strong>open to read</strong>,<strong> protected to write</strong>.</p><h3><strong>Two Layers: Workspace and Permanent Graph</strong></h3><p>When the group reaches a decision — say, all players in an expedition vote on which direction to take and the turn resolves — the agreed-upon result is promoted from the mutable working space into the<strong> permanent knowledge graph</strong> as a verified, attested record.</p><p>This transition is <strong>cryptographically anchored on-chain</strong>: every node in the network independently confirms the update is legitimate and comes from the rightful owner before accepting it.</p><p>The two layers work together naturally: agents coordinate in real time in the workspace (casting votes, proposing moves, updating game state dozens of times per turn), then settle the final outcome to the permanent graph where it becomes a trusted, discoverable part of the network’s collective knowledge.</p><p>Real-time coordination in the workspace. Permanent, verifiable settlement on-chain. The same architecture that makes the game work is the architecture that makes multi-agent AI systems trustworthy.</p><p>The OriginTrail Game is your entry point — but it’s also a live proof of concept running on the <strong>same architecture you’ll use to build production multi-agent systems</strong>. Run a node. Deploy your agents. 
Publish your first Knowledge Asset.</p><p>Then build something the world hasn’t seen yet:</p><p>👉 <a href="https://github.com/OriginTrail/dkg-v9"><strong>https://github.com/OriginTrail/dkg-v9</strong></a></p><h3>What comes next</h3><p><strong>Personal AI memory will continue to improve</strong>. Claude, ChatGPT, Gemini, and Copilot will get better, more seamless, and more deeply integrated into their respective ecosystems. That race will produce real value for individual users.</p><p>But the more important architectural question — <strong><em>how do we give multi-agent AI systems shared, verifiable, collectively-owned memory</em></strong>? — is still wide open. Personal memory products aren’t designed to answer it, because the economics of closed platforms point away from interoperability.</p><p><strong>DKG v9 is designed to answer it</strong>. Not as a feature competing with any vendor’s memory product on its own terms, but as a different primitive for a different layer of the stack: the knowledge infrastructure that multi-agent AI systems will need to do collectively what no single agent can do alone.</p><p>DKG v9 is the ninth iteration of the OriginTrail protocol and will gradually replace the current v8 mainnet. We look forward to sharing more updates as the testnet progresses toward mainnet-grade implementation.</p><p>Want to help harden the network and shape what’s coming next? 
Join the <strong>Red Team</strong> today and become one of the builders of the future.</p><p>👉 <a href="https://t.me/+9uMXqEpCsNFlYzI0">https://t.me/+9uMXqEpCsNFlYzI0</a></p><p>Stay tuned for updates and <strong>trace ON</strong>!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=01587d55e105" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/from-ai-memory-silos-to-multi-agent-memory-01587d55e105">From AI Memory Silos to Multi-Agent Memory</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The next wave of vibe coders won’t just ship agents. They’ll make them verifiable.]]></title>
            <link>https://medium.com/origintrail/the-next-wave-of-vibe-coders-wont-just-ship-agents-they-ll-make-them-verifiable-1bb020335a1b?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/1bb020335a1b</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[web3]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[agentic-ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 09:58:21 GMT</pubDate>
            <atom:updated>2026-03-05T09:58:20.963Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*N74HSzwmN6Z-HY3F83EBBQ.png" /></figure><p>AI agents are getting very good at doing real work. Building a prototype is now almost effortless. You <strong>write a prompt</strong>, the <strong>system creates the structure</strong>, and you <strong>ship it</strong>. Agents can refactor code, connect APIs, generate user interfaces, run workflows, and even open pull requests while you’re still thinking about the next step.</p><p>But the big challenge in 2026 isn’t speed anymore. It’s <strong>trust</strong>.</p><p>When an agent runs day-to-day, you need to understand <strong>what it used</strong>, <strong>where the information came from</strong>, <strong>what changed </strong>over time, and <strong>why it made a decision</strong>.</p><p>Without that, teams run into the same problem again and again. The first demo looks amazing. Then reality hits. The agent forgets what happened last week, can’t explain where an answer came from, and starts <strong>mixing guesses with facts</strong>.</p><p>This is what some builders call <strong>agentic dementia</strong>.</p><p>And it becomes a serious issue the moment agents touch anything consequential: transactions, production systems, identity, compliance, or reputation.</p><h3>The trust gap</h3><p>Today, most agents can’t reliably prove:</p><ul><li><strong>What they used:</strong> the exact sources that informed the output</li><li><strong>Where it came from:</strong> who published the information and when</li><li><strong>What changed over time:</strong> versions you can trace and reproduce</li><li><strong>Why they acted:</strong> a decision trail you can inspect later</li></ul><p>If your agent can’t show the trail behind an answer or action, you don’t have a trustworthy memory. 
You have a <strong>clever demo</strong>.</p><h3>Why “RAG memory” hits a wall</h3><p>A common setup for agent memory works like this: the system retrieves a few chunks from a vector database and adds them to the prompt.</p><p>This helps with recall, but it <strong>quickly breaks when you ask simple questions</strong>:</p><ul><li><strong>Where</strong> did this information come from?</li><li><strong>Who</strong> authored it?</li><li>Can I <strong>verify</strong> it independently?</li><li><strong>What</strong> changed, and when?</li><li><strong>Which</strong> sources did the agent rely on for this specific action?</li><li>Can I <strong>audit</strong> this later without trusting a single database?</li></ul><p>If you can’t reconstruct exactly <strong>what the agent used and why</strong>, the system becomes fragile and slowly drifts away from reality.</p><h3>What trustworthy memory actually looks like</h3><p>If you want agents that work<strong> beyond a one-time demo</strong>, you need something more than a collection of documents.</p><p>You need a <strong>shared context layer </strong>that <strong>persists across runs</strong>, <strong>tools</strong>, and even across <strong>multiple agents</strong>.</p><p>In practice, that context should be:</p><ul><li><strong>Structured:</strong> built around entities and relationships, not just documents</li><li><strong>Queryable:</strong> so you can follow connections like <em>project </em>→ <em>decision </em>→ <em>owner</em> or <em>policy </em>→ <em>exception </em>→ <em>approval</em></li><li><strong>Traceable:</strong> outputs link back to the inputs that shaped them</li><li><strong>Reusable:</strong> available across agents and workflows</li><li><strong>Verifiable:</strong> tamper-evident, with provenance you can validate</li></ul><p>This is where <a href="https://origintrail.io/technology/decentralized-knowledge-graph"><strong>OriginTrail Decentralized Knowledge Graph (DKG)</strong></a><strong> </strong>comes in.</p><p>The DKG 
lets you store and use knowledge as a <strong>verifiable context graph</strong>, with <strong>provenance </strong>and<strong> traceability</strong> built in. Instead of relying solely on untraceable embeddings, agents can <strong>reference knowledge</strong> where <strong>sources</strong> are clear, <strong>relationships</strong> are preserved, and <strong>changes can be tracked</strong> over time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V-bsQnz-kYp6ceLYtOGNkA.gif" /><figcaption>In February 2026, OriginTrail DKG reached a milestone of<strong> 2 billion Knowledge Assets</strong> published. Every Knowledge Asset anchors facts, compliance records, certificates, supply chain events, research outputs, and decision traces into a<strong> shared, queryable context graph</strong>. Knowledge Assets form a <strong>collective memory infrastructure</strong> for humans and machines. Discover more on X: <a href="https://x.com/origin_trail/status/2026644353880346958?s=20">https://x.com/origin_trail/status/2026644353880346958?s=20</a></figcaption></figure><h3>A practical pattern for verifiable agents</h3><p>You don’t need a massive change to get started. Just <strong>treat memory as a core system component</strong>, not an afterthought. A practical approach could look like this:</p><ol><li><strong>Publish critical context as Knowledge Assets<br></strong>Store things like API contracts, schemas, policies, specs, vendor facts, approved actions, and known-good configurations as structured Knowledge Assets. These become dependable contexts for your agents.</li><li><strong>Make retrieval auditable<br></strong> Let agents query a context graph that contains entities, relationships, provenance, and versioning. 
This allows the agent to point directly to the knowledge it used.</li><li><strong>Write outputs back with lineage<br></strong> When an agent produces a decision, report, or change, publish it as a new Knowledge Asset that links back to the inputs it used. That turns outputs into traceable artifacts.</li><li><strong>Verify before high-stakes actions<br></strong> Before executing sensitive steps, verify the integrity and provenance of the assets the agent depends on. Keep the proof for later audit.</li></ol><p>If you want a quick gut check, ask: “<em>Can I see the trail behind this answer?”</em></p><p>If the answer is <strong>no</strong>, the system<strong> will not scale safely</strong>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/origin_trail/status/2022657079899807823%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/8c9a480d96910e80e52dace580c42870/href">https://medium.com/media/8c9a480d96910e80e52dace580c42870/href</a></iframe><h3>Trustworthy identity: agents need passports</h3><p>Once <strong>memory </strong>and<strong> context become trustworthy</strong>, the next layer is <strong>identity</strong>.</p><p>AI agents are already managing wallets, executing trades, and interacting on your behalf, often with <strong>zero accountability infrastructure</strong>.</p><p>As agents become autonomous actors, <strong>they need a way to present</strong>:</p><ul><li>Who or what they are</li><li>What they’re allowed to do</li><li>What they’ve done, with an auditable history</li><li>What they’re trusted for, based on validation and reputation</li></ul><p>That’s why “<strong>agent passports</strong>” matter.</p><p>Standards like <strong>ERC-8004 </strong>are emerging for registries that provide <strong>identity</strong>, <strong>validation</strong>, and <strong>reputation</strong> for AI 
agents.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*c1sRiCkFf4jF_NscwCTvZw.png" /><figcaption>Curious how <strong>AI agent passports work and why ERC-8004 matters?</strong><br>Dive deeper into the article: <a href="https://origintrail.io/blog/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9">https://origintrail.io/blog/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9</a></figcaption></figure><p>Projects like <a href="https://clawtrail.ai/"><strong>ClawTrail</strong></a>, built on the<strong> OriginTrail Decentralized Knowledge Graph (DKG)</strong>, are tackling this directly with a verifiable passport for every AI agent, a living TRAC(k) record (signed credentials, auditable history, certified capabilities), and agent-level KYC so you know who, or what, you’re dealing with.</p><p>Vibe coding made it easy to build and ship agents. Now everyone can ship agents that sound right.</p><p>The<strong> real advantage </strong>will belong to teams whose agents can prove where their knowledge came from, what changed, and why they made a decision.</p><p>Speed helps you ship demos.<strong> Verifiability helps you run real systems</strong>.</p><p>Start building AI agents with <strong>verifiable memory </strong>today — learn how in the <a href="https://docs.origintrail.io/?utm_source=medium&amp;utm_medium=post&amp;utm_campaign=article">OriginTrail official documentation</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1bb020335a1b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/the-next-wave-of-vibe-coders-wont-just-ship-agents-they-ll-make-them-verifiable-1bb020335a1b">The next wave of vibe coders won’t just ship agents. 
They’ll make them verifiable.</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Passport, please! AI agents are becoming first-class citizens with ERC-8004 & OriginTrail]]></title>
            <link>https://medium.com/origintrail/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/27fb90af8af9</guid>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[ethereum]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 12 Feb 2026 14:04:28 GMT</pubDate>
            <atom:updated>2026-02-12T14:09:49.759Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*c1sRiCkFf4jF_NscwCTvZw.png" /></figure><p>AI agents are exploding in use across industries, but they’re roaming a digital world with no shared identity or trust framework. Today, an agent can claim <em>“I can code”</em> or <em>“I can trade” (“trust me, bro, I’m an AI agent”)</em>, yet there’s no standard way to verify if any of it is true.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jEx8DWmGDqrzv0yhAZdztA.png" /></figure><p>You wouldn’t trust strangers operating like that, and neither can AI agents truly trust each other under these conditions. This <em>“trust gap”</em> is a major roadblock to an open agent economy. Agents need a way to<strong> carry their identity, context, and track record with them</strong> — something akin to a passport — so they can be <em>discovered</em> and <em>trusted</em> by others at machine speed.</p><h3>Giving AI agents a Digital Passport with ERC‑8004 and Decentralized Knowledge Graph</h3><p>Combining the ERC‑8004 standard with the <a href="https://origintrail.io/technology/decentralized-knowledge-graph">OriginTrail Decentralized Knowledge Graph (DKG)</a> creates a powerful synergy akin to giving<strong> AI agents a digital passport from day one</strong>. ERC‑8004 establishes an <strong>agent’s on-chain identity and structure </strong>— essentially issuing a standardized passport number and “photo page” for the AI — while the OriginTrail DKG fills that passport with<strong> dynamic, verifiable context</strong>, i.e., the stamps, visas, certificates, and travel history that accumulate as the agent interacts and learns. 
Together, these technologies ensure each AI agent has both a <strong>trusted identity and a rich, evolving track record of its accomplishments</strong>, all secured by <strong>blockchain</strong> and<strong> cryptographic proofs</strong>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/DrevZiga/status/2017001905885524075%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/76eab843ee3fc140f3941f7bf5ae7c34/href">https://medium.com/media/76eab843ee3fc140f3941f7bf5ae7c34/href</a></iframe><p>The ERC‑8004 Ethereum standard gives every AI agent a unique on-chain identity. Each agent is issued an <strong>ERC-721 NFT </strong>as its “passport document,” providing a <strong>portable, censorship-resistant identifier on Ethereum</strong>. This identity token (the agent’s “passport number”) links to a registration file describing the agent’s core info — for example, its capabilities, endpoints (how to communicate with it), and even aspects of its “social graph” or affiliations. In other words, ERC‑8004 standardizes how an AI agent presents itself, ensuring that anyone, anywhere, can verify who the agent is and what skills it claims to have. Just as a real passport is issued by a trusted authority, the ERC‑8004 identity is anchored on Ethereum, making it globally verifiable and hard to forge. This on-chain identity layer also includes built-in trust anchors: ERC‑8004 defines <strong>reputation and validation registries</strong> that record an agent’s on-chain feedback and certifications, functioning like official seals or endorsements on a passport.</p><p>Thanks to ERC-8004, AI agents now have a basic passport — a way to present <strong>who they are</strong> and <strong>what they’ve done</strong> in a standard, verifiable format. 
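As a rough illustration of that binding, here is a hedged sketch in pure Python (the field names below are hypothetical, not the normative ERC-8004 registration schema): an off-chain registration file is tied to an on-chain fingerprint so that anyone can detect tampering.

```python
import hashlib
import json

# Hypothetical registration file an agent's identity NFT might point to.
registration = {
    "name": "trade-agent-7",
    "capabilities": ["market-analysis", "order-execution"],
    "endpoints": {"a2a": "https://agent.example/api"},
}

def fingerprint(doc):
    # Deterministic hash of the document; the on-chain identity record
    # would carry this value so the off-chain file is tamper-evident.
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

anchored = fingerprint(registration)   # stand-in for the on-chain anchor

# Any later edit to the file breaks the match against the anchor.
tampered = {**registration, "capabilities": ["anything-you-claim"]}
assert fingerprint(registration) == anchored
assert fingerprint(tampered) != anchored
```

The design point is simply that the chain stores a small, hard-to-forge commitment while the richer document lives off-chain.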
An agent that wants to be hired for a job can show its ERC-8004 credentials: “<em>Here’s my ID and resume, here are my reviews, and here are proofs of my capabilities</em>.” In fact, the standard explicitly frames the identity NFT as the agent’s passport.</p><p>However, like a freshly issued real-world passport, this is just the beginning. The passport, by itself (an NFT plus a static JSON file), is necessary but not sufficient for rich trust. It tells you the basics, but imagine if we could stuff that passport with far more context — every stamp, visa, reference letter, and credential an agent earns over time, in a way that’s trusted and queryable. This is where <strong>OriginTrail Decentralized Knowledge Graph comes in, turning the passport into something much more powerful.</strong></p><h3>Decentralized Knowledge Graph: Turning the passport into a living context graph</h3><p>OriginTrail <strong>Decentralized Knowledge Graph (DKG)</strong> steps in to supercharge ERC-8004’s static records, effectively transforming an agent’s passport into a <strong>living, verifiable context graph</strong>. Think of ERC-8004 as issuing the agent a blank passport and a basic ID card; the DKG is what brings that passport to life with data, continuously updated with verified stamps and stories of the agent’s journey. In OriginTrail’s own words, the DKG serves as a <em>“constantly evolving digital passport for agents,”</em> essentially an agent-specific context graph that grows over time with each interaction.</p><h4><strong>How does it work?</strong></h4><p>The DKG is a decentralized network <strong>designed to store and publish structured knowledge </strong>(using semantic web standards) with <strong>verifiable provenance</strong>. In the DKG, information is not just dumped in JSON files or logs — it’s represented as a<strong> knowledge graph</strong>: a web of facts and relationships that machines can easily query and trust. 
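The contrast between dumped files and a queryable web of facts can be sketched with plain triples (an illustrative toy, not the DKG’s actual data model or query language):

```python
# A tiny in-memory graph: (subject, predicate, object) triples, each
# carrying provenance so every answer can point back to its source.
triples = [
    ("agent:A", "hasSkill", "supply-chain-optimization", {"source": "cert:Y"}),
    ("agent:A", "workedWith", "agent:B", {"source": "project:P"}),
    ("agent:B", "hasSkill", "logistics", {"source": "cert:Z"}),
]

def query(subject=None, predicate=None, obj=None):
    """Pattern-match over the graph; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# "Whom has agent A worked with?" -- and why should we believe the answer?
hits = query(subject="agent:A", predicate="workedWith")
partners = [o for _, _, o, _ in hits]                  # ["agent:B"]
evidence = [prov["source"] for _, _, _, prov in hits]  # ["project:P"]
```

Because relationships are first-class, the same store answers both the question and the "why should I believe it" behind it.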
Each data point in the graph is accompanied by <strong>cryptographic proof</strong> (such as a fingerprint anchored on-chain) that guarantees its integrity. And just like ERC-8004’s identity,<strong> each “thing” in the DKG is ownable via an NFT</strong>. In fact, the core unit of the DKG is called a <strong>Knowledge Asset</strong>, which is essentially an NFT + knowledge graph bundled together. You can represent <em>anything</em> as a Knowledge Asset — an AI agent, a dataset, a certificate — and give it a verifiable, evolving record on the graph.</p><p>So, let’s map an AI agent to a<strong> DKG Knowledge Asset</strong>. The agent’s ERC-8004 NFT can double as a DKG asset identifier (the DKG uses a concept called a Uniform Asset Locator, which extends DIDs, often implemented by an NFT token). That covers the identity/ownership part. Now attach the agent’s knowledge: instead of a single JSON file with a few fields, we can have an entire graph of data describing the agent.</p><h4><strong>This graph might include:</strong></h4><ul><li><strong>Agent profile &amp; attributes: </strong>The same basics from the JSON (name, description, endpoints) but in a semantic format (RDF triples) so they’re machine-readable and linkable. For example, an agent could be linked to a category (“TradingBot”) or a skill ontology, enabling more precise discovery.</li><li><strong>Decision traces &amp; activity logs:</strong> Every significant action the agent takes could be logged as an assertion in its knowledge graph. Did the agent complete a task? You can add a node for that event, linked to the date it occurred and its outcome. Over time, this creates a timeline of verifiable events — a history far richer than a single aggregate reputation score. These are the “stamps” in the passport, each one independently verifiable via its on-chain fingerprint. 
If someone questions why an agent made a decision, they could inspect its DKG log (with appropriate permissions) to trace the reasoning or data that led to it. Essentially, the agent builds up a memory in the graph that can be audited.<a href="https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/"> In her thesis</a>, Jaya Gupta of Foundation Capital explicitly highlights the importance of capturing AI agents’ decision traces to understand why decisions were made; these traces then become part of evolving context graphs. For context graphs to become the real source of truth, the DKG plays an essential role.</li><li><strong>Verifiable credentials &amp; references: </strong>DKG can integrate W3C Verifiable Credentials (VCs) and decentralized identifiers. Suppose a trusted organization certifies an agent (e.g., <em>“This trading bot passed a rigorous test”</em> or <em>“This agent is compliant with X regulation”</em>); that credential can be added to the agent’s knowledge graph as a signed assertion. OriginTrail DKG is built to support standards such as VCs and DIDs, ensuring these credentials are stored in an interoperable format. It’s like adding visas or reference letters to the passport — e.g., <em>“Certified by Authority Y”</em> — which anyone can cryptographically verify.</li><li><strong>Semantic relationships: </strong>Knowledge graphs excel at capturing relationships between entities. An agent’s context isn’t just about the agent in isolation; it’s also about how it connects to others. With DKG, we can link the agent to other agents it has worked with, to datasets it frequently uses, or to domains of expertise. For example, if Agent A has collaborated with Agent B on a project, their knowledge graphs can reference each other (Agent A’s passport might say “<em>worked with Agent B on Supply Chain Optimization, see project P</em>”). 
These semantic links enrich discoverability — one could query the graph for <em>“agents who have worked on supply chain tasks with verified outcomes”</em> and find Agent A because of those relationships. OriginTrail’s design enables Knowledge Assets to connect with other assets, creating a world model of relationships.</li><li><strong>Provenance and data anchoring: </strong>Perhaps most importantly, every fact or credential added to the agent’s context graph comes with provable provenance. The DKG uses cryptographic proofs (Merkle roots of the graph data) anchored on-chain to ensure that the knowledge hasn’t been tampered with. If the agent’s passport states “<em>Completed 50 successful deliveries</em>,” the raw data backing that (the 50 delivery events) each have a hash on the chain that can be verified. This is analogous to a passport office stamping and sealing each visa — it can’t be faked without detection. The OriginTrail network’s nodes replicate and store these assertions, especially the public ones, so the data is always available and secure in a decentralized way. No single party can forge or hide the agent’s records. The result is a trustworthy, tamper-evident ledger of an agent’s life that complements the on-chain registries.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5M-fFfV1moAZuRNefWkS9g.png" /><figcaption>OriginTrail DKG represents an agent’s profile as a Knowledge Asset, combining on-chain identity with off-chain knowledge. The diagram illustrates how an AI agent’s “passport” gains a chip: it contains semantic graph data (RDF) and vector embeddings for AI context, anchored by cryptographic on-chain proofs, all tied to a unique NFT identifier. 
This makes the agent’s profile a dynamic, queryable knowledge graph rather than a static file.</figcaption></figure><h3>Conclusion</h3><p>In summary, integrating OriginTrail DKG with ERC-8004 gives each agent a “smart passport”: not just an ID document, but an entire personal knowledge graph that is securely stored, constantly updated, and universally queryable. The passport isn’t just carried by the agent — it <em>lives</em> on the decentralized network, where anyone (or any other agent) can validate its stamps and even learn from its contents (with permission). This dramatically amplifies trust: an agent’s identity isn’t a static entry in a registry; it’s the center of a web of trust data that grows richer over time.</p><p>The journey is just starting. ERC-8004 has effectively set the rules for <em>issuing</em> and <em>stamping</em> agent passports. OriginTrail DKG offers a global registry and database where those passports are maintained and enriched over time. As this integration matures, we could see the emergence of a true Web3 agent commons — a space where AI agents from any project or company can work together trustlessly, discover one another through shared context, and carry their reputation beyond any single platform.</p><p>In the long run, this<em> passport </em>and<em> knowledge graph</em> approach may become an essential component of AI infrastructure, much like human identity standards. It lays the foundation for an interoperable, trustworthy agent economy.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=27fb90af8af9" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9">Passport, please! 
AI agents are becoming first-class citizens with ERC-8004 &amp; OriginTrail</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[5 Trends to drive the AI ROI in 2026: Trust is Capital]]></title>
            <link>https://medium.com/origintrail/5-trends-to-drive-the-ai-roi-in-2026-trust-is-capital-372ac5dabc38?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/372ac5dabc38</guid>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Tue, 23 Dec 2025 14:32:31 GMT</pubDate>
            <atom:updated>2025-12-23T14:32:30.181Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-0UC9Eqtg7b-6O7ZEsESrA.gif" /></figure><p><strong>Executive Summary: </strong>After years of experimentation, business leaders are entering 2026 with a clear mandate: make AI investments pay off, but do it in a way that stakeholders can trust. In enterprise settings, artificial intelligence is no longer a speculative pilot project; it’s a business-critical asset whose success or failure hinges on trust, transparency, and accountability.</p><p>Recent industry analyses show a striking gap between AI ambition and actual returns — <a href="https://www.cfo.com/news/so-far-few-cfos-see-substantial-roi-from-ai-spending-RPG/808249/#:~:text=Only%2014,their%20AI%20investments%20to%20date">only 14% of CFOs report measurable ROI from AI to date,</a> even though 66% expect significant impact within two years. This optimism comes with a sobering realization: without verifiability and integrity at every level, AI projects risk underdelivering or even backfiring. <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">An MIT study reveals that up to 95% of firms investing in AI have yet to see tangible returns</a>, often because of hidden flaws, opaque models, or poor data foundations. In response, companies are pivoting from hype to hard results — “after years of pilots, firms are shifting focus to monetization” in AI initiatives.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/987/1*ge_392s9yE1YA5oiudh51w.png" /><figcaption><em>Share of S&amp;P 500 companies disclosing AI-related risks, 2023 vs. 2025. 
In 2025, 72% of S&amp;P 500 companies warned investors about material AI risks (up from just 12% in 2023), reflecting growing concerns about AI’s impact on security, fairness, and reputation (</em><a href="https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/"><em>full study</em></a><em>).</em></figcaption></figure><p>The result is a strategic shift: <strong>trustworthy AI infrastructure</strong> is becoming a <strong>business advantage</strong> rather than a compliance burden.</p><p>This article outlines five key AI trends for 2026, each mapped to a layer of the <strong>I-DIKW framework (Integrity, Data, Information, Knowledge, Wisdom)</strong>. These trends show how aligning AI efforts with integrity at every level enables organizations to <strong>unlock ROI</strong> amid regulatory scrutiny and competitive pressure.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vwFZeni8EVNZpUf9JyvTEg.png" /><figcaption><em>In traditional systems, the DIKW pyramid (Data → Information → Knowledge → Wisdom) was linear and siloed. OriginTrail reshapes this entirely. By merging blockchain, knowledge graphs, and AI agents, it transforms DIKW into a networked, self-reinforcing trust flywheel, adding Integrity as the foundational layer, evolving into the I-DIKW model.</em></figcaption></figure><h3><strong>Trend 1: Integrity Layer — Trustworthy AI Infrastructure by Design</strong></h3><p><strong>Integrity</strong> is the foundation of the I-DIKW framework: it’s about building AI systems that are trustworthy and verifiable <strong>from the ground up</strong>. In 2026, leading firms will treat <strong>AI integrity</strong> (security, ethics, and transparency) as a first-class requirement. This means baking <strong>cryptographic provenance, audit trails, and robust governance controls</strong> into AI platforms. 
For example, new architectures use <em>immutable provenance chains</em> and digital signatures to ensure every AI input and output can be traced and verified. Such measures give executives and regulators high confidence in the integrity of AI outputs.</p><p>The business payoff is significant: integrity by design reduces the risk of AI failures, bias incidents, or data leaks that can derail ROI. Companies that invested early in <strong>trust infrastructure</strong> are finding their AI projects scale faster and face fewer roadblocks from compliance or public concern. Conversely, a lack of integrity can be a deal-breaker. <em>Case in point:</em> the government of Switzerland <strong>rejected a prominent AI platform (Palantir)</strong> after finding it posed <a href="https://thetonymichaels.substack.com/p/palantir-loses-out-in-switzerland">“unacceptable risks”</a> to data security and sovereignty. Swiss evaluators concluded the system <strong>couldn’t guarantee full control or transparency</strong>, raising alarms about dependence on a foreign black-box solution.</p><p>The lesson for CIOs and CEOs is clear: if an AI system <strong>can’t prove its integrity and accountability</strong>, savvy clients (and regulators) will walk away. In 2026, <strong>trustworthy AI by design</strong> will be a strategic imperative, enabling organizations to deploy AI confidently and at scale, turning trust into a <strong>competitive advantage</strong> rather than a cost.</p><h3>Trend 2: Data Layer — Sovereign Data and Quality Foundations</h3><p>Moving up the hierarchy, Data is the raw material for AI — and its quality and governance determine whether AI initiatives thrive or falter. It’s well known that garbage in leads to garbage out, yet many organizations still underestimate how data issues sabotage AI ROI. Executives may invest millions in AI tools, only to find that the tools can’t deliver value because the underlying data is incomplete, biased, or untrustworthy. 
A recent survey of CFOs found that <strong>poor data trust is the single greatest inhibitor of AI success</strong> — 35% of finance chiefs cite lack of trusted data as the top barrier to AI ROI. It’s no wonder <a href="https://rgp.com/press/rgp-cfo-survey-shows-growing-divide-between-ai-ambition-and-ai-readiness/#:~:text=Data%20remains%20the%20single%20greatest,impact%20and%20slowing%20enterprise%20adoption">only 14% have seen meaningful AI value so far</a>.</p><p><strong>Data sovereignty</strong> is a particularly hot issue. Companies and governments alike want assurance that critical data remains under their control. This is driving a trend toward <strong>“sovereign AI” solutions</strong> — those that allow data to be kept locally or in trusted environments, rather than forcing lock-in to a vendor’s cloud. Europe’s upcoming regulations emphasize data localization and <strong>digital sovereignty</strong>, reinforcing this shift. The stakes became evident when <strong>Switzerland’s defense authorities rejected Palantir’s AI software</strong> after a risk assessment warned it could leave Swiss data vulnerable to U.S. jurisdiction.<a href="https://thetonymichaels.substack.com/p/palantir-loses-out-in-switzerland#:~:text=%E2%80%9CNo%20foreign%20software%20should%20compromise,evaluation%2C%20summarizing%20internal%20military%20concerns"> In the evaluators’ words,</a> <em>“No foreign software should compromise our ability to control and protect sensitive national information.”</em></p><p>For businesses, the takeaway is that <strong>control over data = trust</strong>. In 2026, leading enterprises will choose AI platforms that offer <strong>transparent data handling, open standards, and interoperability</strong> so they aren’t handcuffed to a single provider. 
By building <strong>sovereign data ecosystems</strong> — for instance, using <strong>decentralized data networks</strong> — organizations ensure <strong>data integrity and privacy</strong>, which in turn <strong>unlocks AI value</strong>. When your data is high-quality, compliant, and under clear ownership, AI initiatives can progress without the hidden friction that often stalls pilots. In short, <strong>trusted data is the fuel for AI ROI</strong>.</p><h3>Trend 3: Information Layer — Explainable and Verifiable AI Insights</h3><p>Turning raw data into actionable <strong>Information</strong> is the next layer — and in 2026, the key word is <strong>“explainable”</strong>. As AI systems generate reports, recommendations, and content, organizations are realizing that <em>if the people using that information don’t trust it, the AI investment is wasted</em>. Thus, a major trend is the adoption of <strong>explainable AI (XAI) and verifiable AI outputs</strong>. Business leaders want AI that not only <em>does</em> the analysis but can <strong>show its work</strong> — revealing the logic, source data, or confidence behind an output.</p><p>This trend is fueled by both internal needs (e.g. a manager trusting an AI-generated forecast) and external pressure. Regulators are stepping in: the EU’s AI Act, for example, includes <strong>transparency obligations</strong> requiring that users be informed when they interact with AI or encounter AI-generated content. Draft European guidelines even call for <strong>marking and labeling AI-generated media</strong> <a href="https://www.itic.org/news-events/techwonk-blog/techs-expectations-for-the-eu-ai-act-transparency-code-of-practice#:~:text=Tech%27s%20Expectations%20for%20the%20EU,generated%20content">to curb misinformation</a>. Likewise, in the U.S., authorities have encouraged AI developers to implement watermarking for synthetic content. 
The message is clear — <strong>2026 is the year when “black box” AI won’t cut it</strong> in many business applications.</p><p>Companies are responding by building <strong>trust layers around AI information</strong>. One approach is integrating <strong>cryptographic provenance</strong>: for instance, embedding invisible signatures in AI-generated content or logs that allow anyone to verify where it came from and whether it’s been altered. Another approach is to leverage verifiable credentials for information sources, ensuring that data feeding AI models (or experts providing oversight) is authenticated and reputable. Forward-looking firms are also deploying <strong>AI explainability tools</strong> — from simple <em>model scorecards</em> that highlight key factors in an AI decision, to advanced techniques that trace an AI recommendation back to the supporting facts.</p><p>A practical example is in financial services: banks deploying AI credit scoring are using <strong>explainable models and audit trails</strong> so that each loan decision can be explained to a regulator or customer, building trust and avoiding compliance roadblocks. In the realm of generative AI, companies are pairing large language models with knowledge bases and <strong>fact-checking mechanisms</strong> to prevent hallucinations from reaching end-users. <em>In essence, information generated by AI is becoming</em> <em>self-documenting and self-verifying.</em> By making AI’s information outputs <strong>transparent, explainable, and traceable</strong>, businesses not only <strong>mitigate risk</strong> but also encourage greater adoption — employees and customers are far more likely to <em>use</em> AI-driven insights when they can trust the <strong>why</strong> behind the answer. 
The result is faster decision cycles and more impactful AI use, directly boosting ROI.</p><h3>Trend 4: Knowledge Layer — Decentralized Knowledge Networks and Collaboration</h3><p>The <strong>Knowledge layer</strong> elevates information into shared organizational intelligence. In 2026, a standout trend will be the rise of <strong>decentralized and verifiable knowledge networks</strong> as the backbone of AI-powered enterprises. Organizations have learned that AI projects in isolation often hit a wall — the real value emerges when insights are captured, linked, and reused across the company (and even with partners). To enable this, companies are turning to <strong>knowledge graphs and collaborative AI platforms</strong> that break down silos. Crucially, these knowledge systems are being built with <strong>trust and verification in mind</strong>. Every contribution to a modern enterprise knowledge graph can be accompanied by metadata: <em>who added this insight, from what source, and with what evidence?</em></p><p>A powerful enabler here is the convergence of <strong>blockchain (decentralization) and AI</strong>. By combining blockchains’ distributed trust with AI-driven knowledge graphs, organizations create <strong>shared knowledge ecosystems that no single party solely controls — </strong><a href="https://medium.com/origintrail/trust-thy-ai-artificial-intelligence-base-d-with-origintrail-e866d996ca1c#:~:text=Having%20employed%20the%20fundamentals%20of,0"><strong>yet everyone can trust</strong></a>. 
For example, in supply chain and manufacturing, partners are beginning to contribute to <strong>decentralized knowledge graphs </strong>in which data on product quality and provenance are cryptographically signed at each step.</p><p>One notable case: <a href="https://www.gs1.org/insights-events/case-studies/enhancing-safer-travel-predictive-maintenance-transportation"><strong>Switzerland’s national rail company (SBB)</strong> </a>uses a decentralized knowledge graph for real-time traceability of equipment data, ensuring all stakeholders see a single source of truth with integrity. In such networks, <strong>verifiable credentials</strong> play a role too — only authorized contributors (with digital credentials) can add or modify knowledge, preventing bad data from polluting the system. The benefit to ROI is clear: when knowledge is <strong>integrated and trusted</strong>, AI can draw on a much richer context to solve problems, and organizations avoid the costly mistakes of inconsistent information.</p><p>Moreover, a <strong>decentralized approach reduces vendor lock-in</strong> and increases resilience — knowledge isn’t trapped in one platform, it’s part of a federated infrastructure the company owns. Leaders are also finding that trusted knowledge sharing accelerates innovation: teams reuse each other’s AI-derived insights instead of reinventing the wheel. As Dr. Robert Metcalfe (inventor of Ethernet) observed, <a href="https://www.gs1.org/insights-events/case-studies/enhancing-safer-travel-predictive-maintenance-transportation"><strong>knowledge graph</strong></a><strong>s can “improve the fidelity of artificial intelligence” by grounding AI in verified facts</strong>. 
In 2026, companies that master this <strong>knowledge layer</strong> — creating a living, vetted memory for the organization — will reap compounding returns from each new AI deployment, as each project makes the next one smarter and faster.</p><h3>Trend 5: Wisdom Layer — AI Governance and Strategic Alignment for Sustainable ROI</h3><p>At the top of the I-DIKW stack is <strong>Wisdom</strong> — the ability to make prudent, big-picture decisions. For enterprises, this translates to strong <strong>AI governance and strategic alignment</strong> at the leadership level. The trend for 2026 is that AI is no longer just the domain of IT departments or innovation labs; it’s a <strong>C-suite and boardroom priority</strong> to ensure AI is used wisely, ethically, and in line with the company’s goals. One telling sign: nearly<a href="https://fortune.com/2025/12/15/aritficial-intelligence-return-on-investment-aiq/"> <strong>61% of CEOs say</strong></a><strong> they are under more pressure to show returns on AI investments</strong> than a year ago. This pressure is forcing a new alignment between tech teams and business leaders. We see the emergence of <strong>Chief AI Officers and cross-functional AI steering committees</strong> to govern AI initiatives with a balance of innovation and risk management. In practice, companies are establishing <strong>AI governance frameworks</strong> — formal policies and oversight processes to supervise AI model development, deployment, and performance.</p><p><a href="https://rgp.com/press/rgp-cfo-survey-shows-growing-divide-between-ai-ambition-and-ai-readiness/#:~:text=Governance%20is%20emerging%2C%20but%20uneven%3A,and%20risk%20awareness%20at%20scale">According to recent research</a>, about 69% of large firms report having advanced AI risk governance in place, though many others are still catching up. In 2026, closing this governance gap will be crucial. 
Effective AI governance ensures that there is <strong>“wisdom” in how AI is applied</strong>: systems are tested for fairness, AI-driven decisions are subject to human review when needed, and AI strategies align with business values and compliance requirements.</p><p>This <strong>strategic alignment</strong> of AI yields tangible ROI by preventing missteps and unlocking faster adoption. Companies with mature governance can deploy AI in customer-facing processes or critical operations with confidence that they won’t run afoul of regulations or ethics scandals. In contrast, firms that push AI without guardrails often face costly setbacks — whether it’s a PR crisis over biased AI results or a regulator halting a project.</p><p>Moreover, organizations are starting to augment their internal governance with collaborative, cross-industry safety nets. For instance, <a href="https://umanitek.ai/#:~:text=,centric%20AI">Umanitek</a> has introduced a decentralized “Guardian” agent to coordinate AI safety across platforms. Guardian can fingerprint and cross-check content against a shared knowledge graph of known illicit or deceptive media, blocking harmful deepfakes or flagged materials in real time. Crucially, this approach preserves privacy and data ownership for all participants: each contributor’s data stays private while the agent exchanges trust signals via a permissioned decentralized network. By leveraging such cross-industry trust infrastructure, enterprises effectively extend their AI governance beyond their own walls, aligning multiple AI agents and stakeholders to uphold common integrity standards. This kind of collaborative safeguard strengthens the wisdom layer by ensuring that as AI systems interact across the web, they do so under a unified, verifiable set of ethical guardrails.</p><p><strong>Trust, once again, is a differentiator</strong> at the wisdom level. 
A reputation for trustworthy AI can become a selling point: for example, enterprise clients may choose a software provider not just for its AI features, but because it can <em>prove</em> those features are fair and compliant. We’re effectively seeing <strong>trust as a brand asset</strong>. Internally, strong governance also brings the wisdom of knowing where AI truly adds value. Leading organizations have learned to <strong>“lead with the problem, not with AI”</strong>, ensuring that each AI project is tied to a clear business outcome (revenue growth, cost reduction, customer experience) rather than AI for AI’s sake. This focus on <strong>value alignment</strong> is paying off. In fact, research on AI leaders (the Fortune 50 “AIQ” companies) shows they excel not by spending the most, but by integrating AI deeply into strategy and operations <a href="https://www.linkedin.com/pulse/fortune-etr-reveal-aiq-50-etr-enterprise-technology-research-dk3sc#:~:text=As%20AI%20adoption%20accelerates%2C%20the,maturity%20positively%20impacts%20their%20business">to drive measurable results</a>.</p><p>Looking at the competitive landscape, those who invest in <strong>wisdom-layer capabilities</strong>, like company-wide AI literacy, scenario planning for AI risks, and continuous training to fill AI skill gaps, are pulling ahead. CFOs note that <strong>strengthening “the systems, data, and talent” around AI is key to turning AI’s promise into performance</strong>.</p><p><strong>That is wisdom in action:</strong> recognizing that ROI comes not just from technology, but from enabling people and processes to harness that technology effectively. 
As regulatory regimes (from the EU AI Act to industry-specific AI guidelines) come into effect, having a solid governance foundation will mean <strong>fewer disruptions and fines</strong> and more freedom to innovate.</p><p>In sum, the <strong>Wisdom trend for 2026</strong> is about treating AI not as a magic black box, but as a strategic enterprise capability that must be nurtured, overseen, and aligned with human judgment. Businesses that do so will find that <strong>trust breeds agility</strong> — they can push the envelope on AI usage because they have the wisdom to manage the risks. That translates directly into <strong>higher ROI and sustained competitive advantage</strong>.</p><h3>Conclusion: Trust-Powered AI as the Blueprint for Leadership</h3><p>As we head into 2026, one theme resonates across all five layers of I-DIKW: <strong>trust</strong> is the through-line that turns AI from a gamble into a solid investment. By strengthening <strong>Integrity</strong> (the technical and ethical bedrock), mastering <strong>Data</strong> quality and sovereignty, insisting on <strong>Information</strong> transparency, cultivating verifiable <strong>Knowledge</strong> networks, and enforcing wise <strong>Governance</strong> at the top, organizations create a <strong>virtuous cycle</strong>. Each layer reinforces the others — trustworthy data leads to more reliable AI information, which feeds organizational knowledge, enabling wiser decisions, which in turn guide further data strategy, and so on. Companies that embrace this holistic approach are positioning themselves as <strong>leaders in the AI economy</strong>. They are better prepared for tightening regulations and rising customer expectations, turning those into opportunities rather than obstacles. 
Not least, they are demonstrating to investors and boards that AI dollars are well spent: projects don’t stall in pilot purgatory, but scale with confidence because the <strong>infrastructure of trust</strong> is in place.</p><p>In a business climate where <a href="https://fortune.com/2025/12/15/aritficial-intelligence-return-on-investment-aiq/"><strong>61% of CEOs feel the heat</strong></a><strong> to prove AI is delivering value</strong>, aligning with the I-DIKW framework provides a clear roadmap. It ensures that AI efforts are <strong>built on integrity and purpose at every step</strong>, rather than chased as shiny objects. The experience of firms at the forefront underscores this: those who treated <strong>trust as a core principle</strong> of their AI strategy are now reaping tangible returns — whether through increased automation efficiencies, new revenue streams from AI-driven products, or stronger customer loyalty thanks to ethically sound AI practices. On the other hand, organizations that neglected these layers are encountering what one might call “AI growing pains,” from data compliance headaches to lackluster ROI, and even public backlash.</p><p>The strategic reflection for executives is this: <strong>AI leadership in 2026 will belong to those who marry innovation with verification</strong>. By investing in trustworthy infrastructure — <em>be it cryptographic provenance for data, explainability modules for AI, or robust governance councils — you not only de-risk your AI investments, but you amplify their reward</em>. Trust is more than a compliance checkbox; it’s a performance multiplier. 
In the coming AI-driven economy, <strong>build trust, and the ROI will follow</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=372ac5dabc38" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/5-trends-to-drive-the-ai-roi-in-2026-trust-is-capital-372ac5dabc38">5 Trends to drive the AI ROI in 2026: Trust is Capital</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge…]]></title>
            <link>https://medium.com/origintrail/oxford-pharmagenesis-and-origintrail-to-introduce-collaborative-ai-ready-medical-knowledge-6d44654ec192?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/6d44654ec192</guid>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Tue, 02 Sep 2025 13:13:33 GMT</pubDate>
            <atom:updated>2025-09-02T14:06:13.815Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*52AtVtz1p6TUeyeEdIZ2hw.gif" /></figure><h3>Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge ecosystem driving the next generation of agentic science</h3><p>A vast amount of valuable clinical trial information exists in the world, but much of it is fragmented, hard to verify, and difficult to use, slowing medical research and patient care. This lack of connectivity slows research, complicates evidence synthesis, and limits the ability of healthcare professionals, patients, and other stakeholders to access clear, reliable information.</p><p>To address these challenges, Trace Labs, the core developers of <a href="https://origintrail.io">OriginTrail</a>, and <a href="https://www.pharmagenesis.com/">Oxford PharmaGenesis</a> have partnered on a groundbreaking initiative to globally connect and verify medical knowledge.</p><p><strong>The challenge: Lost value in unconnected knowledge</strong></p><p>Pharmaceutical companies and researchers generate a continuous flow of high-quality outputs — trial registrations, regulatory summaries, and peer-reviewed publications. 
Yet these resources are scattered across multiple platforms and formats, making it difficult to integrate with advanced AI systems.</p><p>As a result, vast amounts of valuable knowledge remain underused:</p><p>● Researchers struggle to locate relevant clinical studies and real-world evidence,</p><p>● Healthcare professionals lack quick access to verified, up-to-date information,</p><p>● Patients are left without clear, trustworthy resources to guide their decisions.</p><p>Because evidence is hard to find and harder to verify, knowledge stays fragmented and opaque, slowing progress and eroding transparency and trust, with real consequences for patients.</p><p><strong>The vision: Building a connected and trusted health knowledge pool</strong></p><p>Oxford PharmaGenesis — a global leader in the healthcare communications industry that collaborates with over 50 healthcare organizations worldwide, including eight of the world’s top ten pharmaceutical companies — and Trace Labs have partnered to create the world’s first structured, connected, and verifiable pool of clinical trial knowledge on the <a href="https://origintrail.io/technology/decentralized-knowledge-graph">OriginTrail Decentralized Knowledge Graph (DKG)</a>.</p><p>The OriginTrail DKG merges blockchain technology with semantic, machine-readable knowledge structures, ensuring every contribution carries verifiable ownership, a transparent version history, and rich contextual links for both AI and human use. Oxford PharmaGenesis’ partnerships span pharmaceutical and biotech companies, as well as professional societies, patient groups, and academic institutions. 
It is also a co-founder, co-funder, and facilitator of Open Pharma with the mission to advance open science, transparency, and equity for pharma-sponsored research communications, placing it at the center of trusted knowledge exchange in healthcare.</p><p>This initiative will launch through an incentivized data-sharing program to create a domain-specific Decentralized Knowledge Graph (or “paranet”) within the OriginTrail DKG. Leading pharmaceutical organizations will be invited to join as trusted knowledge contributors, making their clinical information accessible to AI agents, research tools, and human users alike. The result: faster, more accurate discovery and reuse, empowering experts and the public with reliable, transparent, and actionable insights.</p><p><strong>From pilot to scalable implementation</strong></p><p>The collaboration begins with a pilot, which will link together publicly available information from multiple medicines produced by a global pharmaceutical company. It will create the blueprint for rapid expansion to additional contributors through a structured, incentivized data-sharing program that will form a domain-specific paranet within the OriginTrail DKG. This first phase will establish the core framework — secure, intuitive tools for contributing and exploring data, robust systems for verifying and connecting clinical knowledge, and safeguards to ensure every piece of information remains trusted and protected.</p><p>Once operational, the paranet will allow AI agents to both produce and consume verifiable knowledge directly from the OriginTrail DKG. In practice, this means transforming complex clinical data into plain-language summaries, in-depth scientific reports, visual explainers, and other formats tailored to audiences ranging from researchers and clinicians to patients and the public. 
As more organizations contribute, the paranet will grow to billions of structured, connected, and verifiable data points — a rich foundation with the potential to accelerate medical research, speed up discoveries, and equip healthcare professionals, patients, and innovators worldwide with better tools for informed decision-making.</p><p><strong>Looking ahead: A path toward a trusted public knowledge ecosystem</strong></p><p>This collaboration marks the start of an ambitious journey to build the world’s most extensive decentralized repository of trusted clinical trial knowledge on the OriginTrail DKG, stemming the tide of medical misinformation by providing a solid bedrock of trusted information that genAI tools can use. Driven jointly by Trace Labs, the core developers of OriginTrail, and Oxford PharmaGenesis, a global leader in scientific and medical consulting for the pharmaceutical and healthcare industries, the initiative will transform valuable clinical data into a structured, verifiable, and AI-ready resource. By incentivizing collaboration and uniting leading pharmaceutical organizations, the network will grow rapidly — unlocking knowledge that can accelerate research, fuel innovation, and ultimately improve lives worldwide.</p><p><strong>About OriginTrail</strong></p><p>OriginTrail is an ecosystem dedicated to making the global economy work sustainably by enabling a universe of AI-ready Knowledge Assets, allowing anyone to take part in trusted knowledge sharing. It leverages the open-source Decentralized Knowledge Graph that connects physical and digital worlds in a single connected reality, driving transparency and trust.</p><p>Advanced knowledge graph technology currently powers trillion-dollar companies like Google and Facebook. 
By reshaping it for Web3, the OriginTrail Decentralized Knowledge Graph provides a crucial fabric to link, verify, and value data on both physical and digital assets.</p><p>Learn more about <strong>OriginTrail</strong>: <a href="https://origintrail.io/">https://origintrail.io/</a>.</p><p><strong>About Oxford PharmaGenesis</strong></p><p>Oxford PharmaGenesis is a HealthScience communications consultancy. They are the largest independent company in the healthcare communications sector. Founded in 1998, their award-winning organization comprises more than 500 talented people working from North America, Europe, and the Asia Pacific.</p><p>Oxford PharmaGenesis is connected by a strong company culture and a clear mission: to help clients accelerate the adoption of evidence-based innovations for patients in areas of unmet medical need.</p><p>Learn more about <strong>Oxford PharmaGenesis</strong>: <a href="https://www.pharmagenesis.com/">https://www.pharmagenesis.com/</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6d44654ec192" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/oxford-pharmagenesis-and-origintrail-to-introduce-collaborative-ai-ready-medical-knowledge-6d44654ec192">Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge…</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot!]]></title>
            <link>https://medium.com/origintrail/build-ai-agents-with-verifiable-memory-using-origintrail-and-microsoft-copilot-52363f814707?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/52363f814707</guid>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[ai-memory]]></category>
            <category><![CDATA[mcp-server]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 17 Jul 2025 16:30:28 GMT</pubDate>
            <atom:updated>2025-07-17T16:28:37.085Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9TF35E_tldYpDOrNlMEhJA.gif" /></figure><p>Microsoft Copilot is becoming the interface for how users <strong>work with AI across the Microsoft</strong> ecosystem. But what happens when you enhance Copilot with the ability to understand and remember structured, verifiable knowledge?</p><p>With the<strong> integration of the OriginTrail Decentralized Knowledge Graph (DKG) and the Model Context Protocol (MCP)</strong>, you can build AI agents that reason over live data, contribute to shared memory, and deliver trusted outputs backed by cryptographic proofs.</p><p>By extending <strong>Microsoft’s AI infrastructure with OriginTrail</strong>, you equip Copilot agents with powerful capabilities for knowledge discovery, memory, and collaboration.</p><h3><strong>What is MCP?</strong></h3><p>The Model Context Protocol (MCP) is an open standard that defines how language models access and utilize tools and external data sources.</p><p><strong>MCP uses a client-server architecture where:</strong></p><ul><li>MCP Servers expose tools and data, both local and remote,</li><li>MCP Clients, such as agents built in Microsoft Copilot Studio, call these tools using a standard protocol.</li></ul><p>This architecture makes it <strong>easy to build AI systems</strong> that are modular, composable, and interoperable <strong>across different environments</strong>.</p><h3><strong>What role does the DKG play?</strong></h3><p>The <strong>OriginTrail DKG provides a decentralized layer</strong> for structured, verifiable knowledge that AI agents can query, write to, and collaborate over. 
When connected to an <strong>MCP server equipped with DKG tools</strong>, agents are empowered to retrieve and build upon interconnected, verifiable knowledge.</p><p><strong>AI agents can:</strong></p><ul><li>Retrieve semantically rich knowledge,</li><li>Generate and publish new Knowledge Assets,</li><li>Collaborate on a shared, verifiable knowledge base.</li></ul><p>Each interaction is built with data provenance, version control, and ownership in mind. <strong>Knowledge is shared, structured, and trustworthy!</strong></p><h3>Supercharging Microsoft Copilot with verifiable memory!</h3><p>Through this integration, builders can now connect OriginTrail DKG with custom agents built in Microsoft Copilot Studio.</p><p><strong>Here’s what that enables:</strong></p><ul><li>The DKG MCP server runs alongside an OriginTrail DKG Node,</li><li>Custom actions are registered in Microsoft Copilot Studio to access DKG tools,</li><li>These actions can be triggered by agents within environments like Microsoft Teams.</li></ul><p>This setup allows Copilot-based agents to access interconnected, verifiable knowledge in real time, and contribute new structured information back into the DKG.</p><p><strong>Agents can then:</strong></p><ul><li>Ask precise questions over a structured knowledge graph,</li><li>Write their own memory as reusable Knowledge Assets,</li><li>Store results, update context, and collaborate with other agents.</li></ul><p>This integration brings reasoning, verifiability, and memory collaboration directly into Copilot-powered workflows.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YQeFu0KgcgAuSuYVwQ6_Aw.jpeg" /></figure><h3>See it in action!</h3><p>In the live demo, Jurij Škornik, General Manager at Trace Labs, core developers of OriginTrail, walks us through:</p><ul><li>Running the DKG MCP server with an OriginTrail Edge Node,</li><li>Building a custom agent in Microsoft Copilot Studio,</li><li>Adding custom actions to enable interaction via 
Microsoft Teams.</li></ul><p>The result is a working Copilot agent with <strong>full access to decentralized, verifiable memory</strong>. Check it out!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_S5cNdwAGsQ%3Fstart%3D177%26feature%3Doembed%26start%3D177&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_S5cNdwAGsQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_S5cNdwAGsQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0ffde9fbbd1763b6ecb58da111ab6f74/href">https://medium.com/media/0ffde9fbbd1763b6ecb58da111ab6f74/href</a></iframe><p>As AI becomes central to enterprise workflows, adding verifiability and structure to its memory is <strong>essential</strong>. Combining <strong>OriginTrail DKG and MCP means your agents are working with knowledge</strong> that is:</p><ul><li>Structured using open standards (like RDF and schema.org),</li><li>Interconnected across multiple data sources,</li><li>Verifiable thanks to cryptographic anchoring,</li><li>Portable across applications, agents, and ecosystems, such as Microsoft.</li></ul><p>This opens the door to <strong>new applications</strong> in supply chains, research, content management, enterprise collaboration, and more!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52363f814707" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/build-ai-agents-with-verifiable-memory-using-origintrail-and-microsoft-copilot-52363f814707">Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot!</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[OriginTrail powers the future of ethical AI in healthcare with ELSA]]></title>
            <link>https://medium.com/origintrail/origintrail-powers-the-future-of-ethical-ai-in-healthcare-with-elsa-d59b628438be?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/d59b628438be</guid>
            <category><![CDATA[ethical-ai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[healthcare]]></category>
            <category><![CDATA[decentralized]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Wed, 14 May 2025 10:12:35 GMT</pubDate>
            <atom:updated>2025-05-14T10:12:22.015Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MXmn7CPHcbQy2XNZJwo9DQ.png" /></figure><blockquote>A decentralized repository for secure, scalable genomic data sharing &amp; AI-driven personalized healthcare insights — powered by OriginTrail Decentralized Knowledge Graph (DKG).</blockquote><h3>OriginTrail powers the future of ethical AI in healthcare with ELSA</h3><p>We’re excited to announce that <a href="https://origintrail.io"><strong>OriginTrail</strong></a><strong> is joining forces with the </strong><a href="https://elsa-ai.eu"><strong>ELSA</strong></a><strong> (European Lighthouse on Secure and Safe AI) </strong>initiative to shape the future of <strong>decentralized, privacy-preserving artificial intelligence (AI) in healthcare</strong>. Digital healthcare today faces three pressing challenges: safeguarding patient privacy, bridging fragmented data silos for seamless interoperability, and meeting strict regulatory requirements without stifling innovation.</p><p>At the heart of this collaboration lies <strong>DeReGenAI</strong> — a decentralized repository for secure, scalable genomic data sharing and AI-driven personalized healthcare, powered by the OriginTrail Decentralized Knowledge Graph (DKG). This initiative tackles the<strong> most pressing challenges in digital health</strong>: enabling secure, compliant, and user-sovereign sharing of sensitive genomic data while unlocking the full potential of AI-driven personalized healthcare.</p><h4>Trustworthy AI needs trustworthy infrastructure</h4><p>AI is transforming healthcare — but for it to do so responsibly, it must be built on a foundation of trust, transparency, and ethics. 
That’s exactly what OriginTrail brings to the table within the ELSA consortium: an open-source, decentralized infrastructure that ensures data privacy, ownership, and interoperability at scale.</p><p>By integrating OriginTrail DKG, DeReGenAI becomes a <strong>decentralized repository that puts patients in control of their most personal asset</strong> — their genomic data. This enables:</p><ul><li><strong>User-managed permissions</strong>: Patients decide who can access their data, when, and for what purpose.</li><li><strong>Privacy-preserving monetization</strong>: Individuals can opt to share their data with research institutions or health providers on their own terms.</li><li><strong>AI-ready interoperability</strong>: Seamless interaction with AI systems while maintaining the integrity and provenance of the data.</li></ul><p>At its core, the OriginTrail DKG acts as a knowledge graph of knowledge graphs — a globally distributed network where each participant maintains control over their own knowledge node. 
These nodes interact in a fully <strong>decentralized manner, eliminating the risks of centralized data silos and single points of failure</strong>.</p><p><strong>Here’s why this matters:</strong></p><ul><li><strong>Global scale</strong>: Access data from diverse sources without compromising security.</li><li><strong>Privacy-first architecture</strong>: Data sovereignty is seamlessly integrated into the infrastructure.</li><li><strong>Compliance-ready</strong>: Designed with GDPR and other regulatory frameworks in mind.</li><li><strong>Interoperable</strong>: Built for seamless integration with AI technologies and healthcare systems.</li></ul><h4>How does DeReGenAI work?</h4><p>To power the next generation of personalized healthcare, DeReGenAI employs <strong>decentralized Retrieval-Augmented Generation (dRAG)</strong> — an evolution of how Large Language Models (LLMs) interact with external data.</p><p>Instead of querying a centralized source, the LLMs in DeReGenAI leverage the OriginTrail DKG to retrieve <strong>verified, decentralized knowledge</strong>. This unlocks:</p><ul><li>More accurate AI insights,</li><li>Context-aware healthcare recommendations,</li><li>Trustworthy and verifiable AI behavior.</li></ul><p>The ELSA initiative brings together top-tier European academic, industrial, and technology partners, such as the <strong>University of Oxford, The Alan Turing Institute, NVIDIA</strong>, and others, to build a future where AI is both effective and ethical. 
As part of the ELSA initiative, <strong>OriginTrail is used to build a trusted data ecosystem for the AI age</strong> — one where people, not platforms, control their data, and where innovation never comes at the cost of ethics.</p><p>We’re proud to be driving this change, and even prouder to be doing it alongside an incredible group of partners.</p><p>Learn how OriginTrail is powering the shift to human-centric AI at <a href="https://origintrail.io/">https://origintrail.io/</a>.</p><p><strong>Trust the source.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d59b628438be" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/origintrail-powers-the-future-of-ethical-ai-in-healthcare-with-elsa-d59b628438be">OriginTrail powers the future of ethical AI in healthcare with ELSA</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[umanitek launches umanitek Guardian AI agent]]></title>
            <link>https://medium.com/origintrail/umanitek-launches-umanitek-guardian-ai-agent-00ebab78a0b3?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/00ebab78a0b3</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[ai-risk]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 08 May 2025 11:49:59 GMT</pubDate>
            <atom:updated>2025-05-08T11:49:41.721Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3S7RN_W57d5IW8Yk8xtD_w.png" /></figure><p><strong>Zug, Switzerland (May 6, 2025) — </strong>Umanitek AG, a Swiss-based AI company combating harmful content and the risks of artificial intelligence, today announces the launch of its first product, <a href="https://umanitek.ai/product">umanitek Guardian</a>.</p><p>Umanitek’s mission is to fight against harmful content and the risks of AI by developing and deploying technology that serves the greater good of humanity.</p><p>Umanitek’s first product is an AI agent, umanitek Guardian, which uses the <strong>Decentralized Knowledge Graph (DKG)</strong>, a decentralized, trusted network for organizing and tracking immutable data. The DKG lets participating organizations keep ownership and control of their data while supporting database queries on a need-to-know basis, allowing collaboration without compromising privacy.</p><p>The first user of umanitek Guardian will be Aylo, who will leverage the agent to allow law enforcement agents to query 7 million hashes of its verified content using natural language through an AI agent.</p><p>“<em>Umanitek acts as the bridge. Through Decentralized Knowledge Graph (DKG) decentralized infrastructure, we can integrate advanced Internet safety technologies directly with data. </em><a href="https://umanitek.ai/product"><em>Umanitek Guardian</em></a><em> will enable companies, law enforcement, NGOs and individuals to collaborate by uploading and querying “fingerprints” of images and videos to a decentralized directory. This system will help large technology platforms track, identify and prevent the distribution of harmful content. 
We are committed to developing human-centric AI solutions that promote trust, protect privacy and help make internet safety the standard in the age of AI</em>.”</p><p>– Chris Rynning, umanitek Chairman</p><p><strong>About umanitek</strong></p><p><em>Making internet safety the standard in the age of AI.</em></p><p>Umanitek AG is a Swiss-based AI company combating harmful content and the risks of artificial intelligence. We develop<strong> </strong>human-centric AI solutions that promote trust, protect privacy and make internet safety the standard in the age of AI.</p><p>Our founders bring deep expertise in building reliable, trusted AI systems and are connected to global networks working to reduce internet harm, and are committed to raising awareness about the importance of education and digital responsibility in the age of AI.</p><p>Umanitek’s AI infrastructure is safe by design, open by principle and trustworthy by default. With a focus on ethical innovation, umanitek is setting the standards for transparency, accountability and harm reduction in artificial intelligence.</p><p>For more information about umanitek, umanitek’s founders and products, visit <a href="http://www.umanitek.ai/">www.umanitek.ai</a>.</p><p><strong>Contacts</strong></p><p>For media inquiries, please contact:</p><p>Umanitek Communication</p><p><a href="mailto:media@umanitek.ai">media@umanitek.ai</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=00ebab78a0b3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/umanitek-launches-umanitek-guardian-ai-agent-00ebab78a0b3">umanitek launches umanitek Guardian AI agent</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[UMANITEK: Setting the standard for internet safety]]></title>
            <link>https://medium.com/origintrail/umanitek-setting-the-standard-for-internet-safety-cd5a91f142f3?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/cd5a91f142f3</guid>
            <category><![CDATA[ethical-ai]]></category>
            <category><![CDATA[safety]]></category>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[internet-security]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Fri, 07 Mar 2025 14:52:39 GMT</pubDate>
            <atom:updated>2025-03-07T14:52:38.869Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y_2eJSCKxwDtNLcu2Yzdkw.gif" /></figure><p>Today, artificial intelligence (AI) is rapidly reshaping the Internet, driving a historic transformation in how we engage, work, and communicate online.</p><p>However, the rise of generative AI has also led to an explosion of deepfakes, hallucinating language models, and the rapid creation of untrustworthy content — threatening the foundation of authentic communication and learning. AI-generated content now dominates the internet, making it increasingly difficult to distinguish reality from fabrication.</p><p>While AI unlocks significant advancements, it also introduces equally substantial risks, from intellectual property infringements to illegal content, such as child sexual abuse materials.</p><p><strong>It is for this reason that we founded umanitek.</strong></p><p>At umanitek, our mission is to fight against harmful content and the risks of AI by promoting technology that serves the greater good of humanity.</p><p>Our founders, <strong>Trace Labs, Ethical Capital Partners and AMYP Ventures AG (part of a Piëch/Porsche Family Office)</strong>, bring together their capabilities in building reliable and trusted AI systems, their connection to networks that fight for the removal of internet harm, and their ability to raise awareness of the importance of knowledge and education in the age of AI.</p><p>But this is too big a challenge to take on alone. Recognizing the magnitude of this issue, we actively seek partnerships with institutions and individuals dedicated to ethical AI development. 
We want to partner with investors who are focused on “tech for good” solutions where societal impact is of equal importance to commercial success and to work with tech leaders, policymakers, and law enforcement <strong>to make internet safety the standard in the age of AI.</strong></p><h3>Balancing innovation with responsibility in the age of AI.</h3><p>Our vision is to leverage umanitek’s technology to enable corporations and individuals to control their data, technology, and resources without compromising security, privacy, or intellectual property.</p><p><strong>Here’s but one quick example of how umanitek will work.</strong></p><p>Far too many people are concerned about the non-consensual sharing of their personal images or those of their children. Umanitek will enable companies, law enforcement, NGOs, and individuals to upload “fingerprints” of personal photos to a decentralized directory. This system will help large technology platforms identify and prevent the distribution of such content.</p><p>As a potential next step, the system could also streamline the prosecution of offenders through collaboration with law enforcement, reducing the cost and complexity of legal action related to copyright infringements.</p><p>When organizations and individuals can choose what to share and how to share it in a secure and verifiable way, all internet users benefit. Protecting legitimate content and preventing large language models from training on non-consensual data are integral to harm reduction online. We believe this is an important step to making <strong>internet safety the standard</strong> in the age of AI, reducing harmful content, and enabling trusted AI solutions.</p><p><strong>Fighting the <em>good fight.</em></strong></p><p><em>“We invested in OriginTrail to drive transparency and trust for real-world assets. 
Now, we’ve co-founded </em><strong><em>umanitek</em></strong><em> to combat harmful content, IP infringements, and fake news — leveraging OriginTrail technology across internet platforms.”</em></p><p><em>— Chris Rynning, AMYP Ventures AG (part of a Piëch/Porsche Family Office)</em></p><h3>An unprecedented alliance for ethical AI.</h3><p>Umanitek stands out by combining the expertise of three leaders in their fields:</p><p><strong>Trace Labs (core developers of OriginTrail)</strong> — The pioneers of neuro-symbolic AI, building trusted and verifiable AI systems. They are the developers behind the OriginTrail Decentralized Knowledge Graph (DKG), a technology that enhances trust in AI, supply chains, and global data ecosystems.</p><p><strong>Ethical Capital Partners (ECP)</strong> — A private equity firm seeking out investment and advisory opportunities in industries that require principled ethical leadership. Founded in 2022 by a multi-disciplinary team with legal, regulatory, law enforcement, public engagement, and finance experience, ECP identifies companies amenable to a responsible investment approach and works collaboratively with management teams to develop strategies that create value and drive growth.</p><p><strong>AMYP Ventures AG (part of a Piëch/Porsche Family Office)</strong> — A venture capital group backing game-changing AI and Web3 initiatives with the potential for global impact.</p><p>This is a collaboration that combines the knowledge of AI, cutting-edge research, and technology with ethical investment strategies to create the standard for internet safety in the age of AI — an AI solution that will serve humanity.</p><p><strong>Subscribe</strong> for updates at <a href="http://umanitek.ai"><strong>umanitek.ai</strong></a> to stay in touch and be among the first to learn about cofounders, contributors, and partners of umanitek, as well as reserve a spot to test-drive umanitek’s products at their 
release.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3rmJwr7hDomMGU-hCgc_5A.jpeg" /></figure><p><a href="http://umanitek.ai"><strong>Web </strong></a><strong>| </strong><a href="https://x.com/umanitek"><strong>Twitter </strong></a><strong>| </strong><a href="https://www.linkedin.com/company/umanitek"><strong>LinkedIn</strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cd5a91f142f3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/umanitek-setting-the-standard-for-internet-safety-cd5a91f142f3">UMANITEK: Setting the standard for internet safety</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>