<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[OriginTrail - Medium]]></title>
        <description><![CDATA[OriginTrail is the Decentralized Knowledge Graph that organizes AI-grade knowledge assets, making them discoverable and verifiable for a sustainable global economy. - Medium]]></description>
        <link>https://medium.com/origintrail?source=rss----d4d7f6d41f7c---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>OriginTrail - Medium</title>
            <link>https://medium.com/origintrail?source=rss----d4d7f6d41f7c---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 01 Mar 2026 00:17:34 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/origintrail" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Passport, please! AI agents are becoming first-class citizens with ERC-8004 & OriginTrail]]></title>
            <link>https://medium.com/origintrail/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/27fb90af8af9</guid>
            <category><![CDATA[knowledge-graph]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[blockchain]]></category>
            <category><![CDATA[ethereum]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 12 Feb 2026 14:04:28 GMT</pubDate>
            <atom:updated>2026-02-12T14:09:49.759Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*c1sRiCkFf4jF_NscwCTvZw.png" /></figure><p>AI agents are exploding in use across industries, but they’re roaming a digital world with no shared identity or trust framework. Today, an agent can claim <em>“I can code”</em> or <em>“I can trade” (“trust me, bro, I’m an AI agent”)</em>, yet there’s no standard way to verify if any of it is true.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jEx8DWmGDqrzv0yhAZdztA.png" /></figure><p>You wouldn’t trust strangers operating like that, and neither can AI agents truly trust each other under these conditions. This <em>“trust gap”</em> is a major roadblock to an open agent economy. Agents need a way to<strong> carry their identity, context, and track record with them</strong> — something akin to a passport — so they can be <em>discovered</em> and <em>trusted</em> by others at machine speed.</p><h3>Giving AI agents a Digital Passport with ERC‑8004 and Decentralized Knowledge Graph</h3><p>Combining the ERC‑8004 standard with the <a href="https://origintrail.io/technology/decentralized-knowledge-graph">OriginTrail Decentralized Knowledge Graph (DKG)</a> creates a powerful synergy akin to giving<strong> AI agents a digital passport from day one</strong>. ERC‑8004 establishes an <strong>agent’s on-chain identity and structure </strong>— essentially issuing a standardized passport number and “photo page” for the AI — while the OriginTrail DKG fills that passport with<strong> dynamic, verifiable context</strong>, i.e., the stamps, visas, certificates, and travel history that accumulate as the agent interacts and learns. 
Together, these technologies ensure each AI agent has both a <strong>trusted identity and a rich, evolving track record of its accomplishments</strong>, all secured by <strong>blockchain</strong> and<strong> cryptographic proofs</strong>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/DrevZiga/status/2017001905885524075%3Fs%3D20&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/76eab843ee3fc140f3941f7bf5ae7c34/href">https://medium.com/media/76eab843ee3fc140f3941f7bf5ae7c34/href</a></iframe><p>The ERC‑8004 Ethereum standard gives every AI agent a unique on-chain identity. Each agent is issued an <strong>ERC-721 NFT </strong>as its “passport document,” providing a <strong>portable, censorship-resistant identifier on Ethereum</strong>. This identity token (the agent’s “passport number”) links to a registration file describing the agent’s core info — for example, its capabilities, endpoints (how to communicate with it), and even aspects of its “social graph” or affiliations. In other words, ERC‑8004 standardizes how an AI agent presents itself, ensuring that anyone, anywhere, can verify who the agent is and what skills it claims to have. Just as a real passport is issued by a trusted authority, the ERC‑8004 identity is anchored on Ethereum, making it globally verifiable and hard to forge. This on-chain identity layer also includes built-in trust anchors: ERC‑8004 defines <strong>reputation and validation registries</strong> that record an agent’s on-chain feedback and certifications, functioning like official seals or endorsements on a passport.</p><p>Thanks to ERC-8004, AI agents now have a basic passport — a way to present <strong>who they are</strong> and <strong>what they’ve done</strong> in a standard, verifiable format. 
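</p><p>To make that tamper-evidence concrete, here is a minimal Python sketch of the underlying idea: hash a canonical form of the agent’s registration file and anchor that digest, so any later change to the agent’s claims is detectable. This is an illustration only, not the actual ERC-8004 contract interface; the field names are hypothetical.</p>

```python
import hashlib
import json

def registration_digest(registration: dict) -> str:
    """Digest of a canonical JSON serialization of a registration file.
    An on-chain record would anchor this hash; re-deriving it later and
    comparing against the anchored value detects tampering."""
    canonical = json.dumps(registration, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical registration file for an agent "passport".
agent_passport = {
    "name": "SupplyChainBot",
    "capabilities": ["route-optimization", "demand-forecast"],
    "endpoints": {"a2a": "https://agent.example/api"},
}

anchored = registration_digest(agent_passport)          # stored on-chain
assert registration_digest(agent_passport) == anchored  # claims unchanged
agent_passport["capabilities"].append("trading")        # a forged claim...
assert registration_digest(agent_passport) != anchored  # ...is detected
```

<p>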
An agent that wants to be hired for a job can show its ERC-8004 credentials: “<em>Here’s my ID and resume, here are my reviews, and here are proofs of my capabilities</em>.” In fact, the standard explicitly frames the identity NFT as the agent’s passport.</p><p>However, like a freshly issued real-world passport, this is just the beginning. The passport, by itself (an NFT plus a static JSON file), is necessary but not sufficient for rich trust. It tells you the basics, but imagine if we could stuff that passport with far more context — every stamp, visa, reference letter, and credential an agent earns over time, in a way that’s trusted and queryable. This is where <strong>OriginTrail Decentralized Knowledge Graph comes in, turning the passport into something much more powerful.</strong></p><h3>Decentralized Knowledge Graph: Turning the passport into a living context graph</h3><p>OriginTrail <strong>Decentralized Knowledge Graph (DKG)</strong> steps in to supercharge ERC-8004’s static records, effectively transforming an agent’s passport into a <strong>living, verifiable context graph</strong>. Think of ERC-8004 as issuing the agent a blank passport and a basic ID card; the DKG is what brings that passport to life with data, continuously updated with verified stamps and stories of the agent’s journey. In OriginTrail’s own words, the DKG serves as a <em>“constantly evolving digital passport for agents,”</em> essentially an agent-specific context graph that grows over time with each interaction.</p><h4><strong>How does it work?</strong></h4><p>The DKG is a decentralized network <strong>designed to store and publish structured knowledge </strong>(using semantic web standards) with <strong>verifiable provenance</strong>. In the DKG, information is not just dumped in JSON files or logs — it’s represented as a <strong>knowledge graph</strong>: a web of facts and relationships that machines can easily query and trust. 
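</p><p>To make the point about machine-queryable facts concrete, here is a toy in-memory triple store in Python. Real DKG data would be RDF published to the network and queried with SPARQL-style tooling; the identifiers and predicate names below are made up for illustration.</p>

```python
# Facts as (subject, predicate, object) triples -- illustrative names only.
triples = [
    ("agent:A", "type",          "TradingBot"),
    ("agent:A", "workedWith",    "agent:B"),
    ("agent:A", "completedTask", "task:42"),
    ("task:42", "domain",        "SupplyChain"),
    ("task:42", "outcome",       "verified"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Agents with verified outcomes on supply-chain tasks":
verified = {s for s, _, _ in match(p="outcome", o="verified")}
supply = {s for s, _, _ in match(p="domain", o="SupplyChain")}
agents = {s for s, _, o in match(p="completedTask") if o in verified & supply}
assert agents == {"agent:A"}
```

<p>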
Each data point in the graph is accompanied by <strong>cryptographic proof</strong> (such as a fingerprint anchored on-chain) that guarantees its integrity. And just like ERC-8004’s identity, <strong>each “thing” in the DKG is ownable via an NFT</strong>. In fact, the core unit of the DKG is called a <strong>Knowledge Asset</strong>, which is essentially an NFT + knowledge graph bundled together. You can represent <em>anything</em> as a Knowledge Asset — an AI agent, a dataset, a certificate — and give it a verifiable, evolving record on the graph.</p><p>So, let’s map an AI agent to a <strong>DKG Knowledge Asset</strong>. The agent’s ERC-8004 NFT can double as a DKG asset identifier (the DKG uses a concept called a Uniform Asset Locator, which extends DIDs, often implemented by an NFT token). That covers the identity/ownership part. Now attach the agent’s knowledge: instead of a single JSON file with a few fields, we can have an entire graph of data describing the agent.</p><h4><strong>This graph might include:</strong></h4><ul><li><strong>Agent profile &amp; attributes: </strong>The same basics from the JSON (name, description, endpoints) but in a semantic format (RDF triples) so they’re machine-readable and linkable. For example, an agent could be linked to a category (“TradingBot”) or a skill ontology, enabling more precise discovery.</li><li><strong>Decision traces &amp; activity logs:</strong> Every significant action the agent takes could be logged as an assertion in its knowledge graph. Did the agent complete a task? You can add a node for that event, linked to the date it occurred and its outcome. Over time, this creates a timeline of verifiable events — a history far richer than a single aggregate reputation score. These are the “stamps” in the passport, each one independently verifiable via its on-chain fingerprint. 
If someone questions why an agent made a decision, they could inspect its DKG log (with appropriate permissions) to trace the reasoning or data that led to it. Essentially, the agent builds up a memory in the graph that can be audited. <a href="https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/">In her thesis</a>, Jaya Gupta of Foundation Capital highlights AI agents’ decision-making processes and the importance of capturing decision traces to understand why decisions were made; these traces then become part of evolving context graphs. For context graphs to become the real source of truth, the DKG plays an essential role.</li><li><strong>Verifiable credentials &amp; references: </strong>DKG can integrate W3C Verifiable Credentials (VCs) and decentralized identifiers. Suppose a trusted organization certifies an agent (e.g., <em>“This trading bot passed a rigorous test”</em> or <em>“This agent is compliant with X regulation”</em>); that credential can be added to the agent’s knowledge graph as a signed assertion. OriginTrail DKG is built to support standards such as VCs and DIDs, ensuring these credentials are stored in an interoperable format. It’s like adding visas or reference letters to the passport — e.g., <em>“Certified by Authority Y”</em> — which anyone can cryptographically verify.</li><li><strong>Semantic relationships: </strong>Knowledge graphs excel at capturing relationships between entities. An agent’s context isn’t just about the agent in isolation; it’s also about how it connects to others. With DKG, we can link the agent to other agents it has worked with, to datasets it frequently uses, or to domains of expertise. For example, if Agent A has collaborated with Agent B on a project, their knowledge graphs can reference each other (Agent A’s passport might say “<em>worked with Agent B on Supply Chain Optimization, see project P</em>”). 
These semantic links enrich discoverability — one could query the graph for <em>“agents who have worked on supply chain tasks with verified outcomes”</em> and find Agent A because of those relationships. OriginTrail’s design enables Knowledge Assets to connect with other assets, creating a world model of relationships.</li><li><strong>Provenance and data anchoring: </strong>Perhaps most importantly, every fact or credential added to the agent’s context graph comes with provable provenance. The DKG uses cryptographic proofs (Merkle roots of the graph data) anchored on-chain to ensure that the knowledge hasn’t been tampered with. If the agent’s passport states “<em>Completed 50 successful deliveries</em>,” the raw data backing that (the 50 delivery events) each have a hash on the chain that can be verified. This is analogous to a passport office stamping and sealing each visa — it can’t be faked without detection. The OriginTrail network’s nodes replicate and store these assertions, especially the public ones, so the data is always available and secure in a decentralized way. No single party can forge or hide the agent’s records. The result is a trustworthy, tamper-evident ledger of an agent’s life that complements the on-chain registries.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5M-fFfV1moAZuRNefWkS9g.png" /><figcaption>OriginTrail DKG represents an agent’s profile as a Knowledge Asset, combining on-chain identity with off-chain knowledge. The diagram illustrates how an AI agent’s “passport” gains a chip: it contains semantic graph data (RDF) and vector embeddings for AI context, anchored by cryptographic on-chain proofs, all tied to a unique NFT identifier. 
This makes the agent’s profile a dynamic, queryable knowledge graph rather than a static file.</figcaption></figure><h3>Conclusion</h3><p>In summary, integrating OriginTrail DKG with ERC-8004 gives each agent a “smart passport”: not just an ID document, but an entire personal knowledge graph that is securely stored, constantly updated, and universally queryable. The passport isn’t just carried by the agent — it <em>lives</em> on the decentralized network, where anyone (or any other agent) can validate its stamps and even learn from its contents (with permission). This dramatically amplifies trust: an agent’s identity isn’t a static entry in a registry; it’s the center of a web of trust data that grows richer over time.</p><p>The journey is just starting. ERC-8004 has effectively set the rules for <em>issuing</em> and <em>stamping</em> agent passports. OriginTrail DKG offers a global registry and database where those passports are maintained and enriched over time. As this integration matures, we could see the emergence of a true Web3 agent commons — a space where AI agents from any project or company can work together trustlessly, discover one another through shared context, and carry their reputation beyond any single platform.</p><p>In the long run, this <em>passport</em> and <em>knowledge graph</em> approach may become an essential component of AI infrastructure, much like human identity standards. It lays the foundation for an interoperable, trustworthy agent economy.</p><hr><p><a href="https://medium.com/origintrail/passport-please-ai-agents-are-becoming-first-class-citizens-with-erc-8004-origintrail-27fb90af8af9">Passport, please! 
AI agents are becoming first-class citizens with ERC-8004 &amp; OriginTrail</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[5 Trends to drive the AI ROI in 2026: Trust is Capital]]></title>
            <link>https://medium.com/origintrail/5-trends-to-drive-the-ai-roi-in-2026-trust-is-capital-372ac5dabc38?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/372ac5dabc38</guid>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Tue, 23 Dec 2025 14:32:31 GMT</pubDate>
            <atom:updated>2025-12-23T14:32:30.181Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-0UC9Eqtg7b-6O7ZEsESrA.gif" /></figure><p><strong>Executive Summary: </strong>After years of experimentation, business leaders are entering 2026 with a clear mandate: make AI investments pay off, but do it in a way that stakeholders can trust. In enterprise settings, artificial intelligence is no longer a speculative pilot project; it’s a business-critical asset whose success or failure hinges on trust, transparency, and accountability.</p><p>Recent industry analyses show a striking gap between AI ambition and actual returns — <a href="https://www.cfo.com/news/so-far-few-cfos-see-substantial-roi-from-ai-spending-RPG/808249/#:~:text=Only%2014,their%20AI%20investments%20to%20date">only 14% of CFOs report measurable ROI from AI to date,</a> even though 66% expect significant impact within two years. This optimism comes with a sobering realization: without verifiability and integrity at every level, AI projects risk underdelivering or even backfiring. <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">An MIT study reveals that up to 95% of firms investing in AI have yet to see tangible returns</a>, often because of hidden flaws, opaque models, or poor data foundations. In response, companies are pivoting from hype to hard results — “after years of pilots, firms are shifting focus to monetization” in AI initiatives.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/987/1*ge_392s9yE1YA5oiudh51w.png" /><figcaption><em>Share of S&amp;P 500 companies disclosing AI-related risks, 2023 vs. 2025. 
In 2025, 72% of S&amp;P 500 companies warned investors about material AI risks (up from just 12% in 2023), reflecting growing concerns about AI’s impact on security, fairness, and reputation (</em><a href="https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/"><em>full study</em></a><em>).</em></figcaption></figure><p>The result is a strategic shift: <strong>trustworthy AI infrastructure</strong> is becoming a <strong>business advantage</strong> rather than a compliance burden.</p><p>This article outlines five key AI trends for 2026, each mapped to a layer of the <strong>I-DIKW framework (Integrity, Data, Information, Knowledge, Wisdom)</strong>. These trends show how aligning AI efforts with integrity at every level enables organizations to <strong>unlock ROI</strong> amid regulatory scrutiny and competitive pressure.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vwFZeni8EVNZpUf9JyvTEg.png" /><figcaption><em>In traditional systems, the DIKW pyramid (Data → Information → Knowledge → Wisdom) was linear and siloed. OriginTrail reshapes this entirely. By merging blockchain, knowledge graphs, and AI agents, it transforms DIKW into a networked, self-reinforcing trust flywheel, adding Integrity as the foundational layer, evolving into the I-DIKW model.</em></figcaption></figure><h3><strong>Trend 1: Integrity Layer — Trustworthy AI Infrastructure by Design</strong></h3><p><strong>Integrity</strong> is the foundation of the I-DIKW framework: it’s about building AI systems that are trustworthy and verifiable <strong>from the ground up</strong>. In 2026, leading firms will treat <strong>AI integrity</strong> (security, ethics, and transparency) as a first-class requirement. This means baking <strong>cryptographic provenance, audit trails, and robust governance controls</strong> into AI platforms. 
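</p><p>As a rough sketch of what such an audit trail can look like, the Python below builds a hash-chained log in which every entry commits to the hash of the previous entry, so rewriting history breaks the chain. This is a simplified illustration of the general technique, not any particular vendor’s implementation.</p>

```python
import hashlib
import json

def append_entry(log: list, event: str) -> None:
    """Append an event to a tamper-evident log: each entry commits to
    the previous entry's hash, so any rewrite breaks the chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute every hash in order; any mismatch means tampering."""
    prev = "0" * 64
    for e in log:
        body = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
        if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "model v3 deployed")
append_entry(log, "inference served: loan-1234 approved")
assert verify(log)
log[0]["event"] = "model v1 deployed"  # rewriting history...
assert not verify(log)                 # ...is detected
```

<p>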
For example, new architectures use <em>immutable provenance chains</em> and digital signatures to ensure every AI input and output can be traced and verified. Such measures give executives and regulators high confidence in the integrity of AI outputs.</p><p>The business payoff is significant: integrity by design reduces the risk of AI failures, bias incidents, or data leaks that can derail ROI. Companies that invested early in <strong>trust infrastructure</strong> are finding their AI projects scale faster and face fewer roadblocks from compliance or public concern. Conversely, a lack of integrity can be a deal-breaker. <em>Case in point:</em> the government of Switzerland <strong>rejected a prominent AI platform (Palantir)</strong> after finding it posed <a href="https://thetonymichaels.substack.com/p/palantir-loses-out-in-switzerland">“unacceptable risks”</a> to data security and sovereignty. Swiss evaluators concluded the system <strong>couldn’t guarantee full control or transparency</strong>, raising alarms about dependence on a foreign black-box solution.</p><p>The lesson for CIOs and CEOs is clear: if an AI system <strong>can’t prove its integrity and accountability</strong>, savvy clients (and regulators) will walk away. In 2026, <strong>trustworthy AI by design</strong> will be a strategic imperative, enabling organizations to deploy AI confidently and at scale, turning trust into a <strong>competitive advantage</strong> rather than a cost.</p><h3>Trend 2: Data Layer — Sovereign Data and Quality Foundations</h3><p>Moving up the hierarchy, Data is the raw material for AI — and its quality and governance determine whether AI initiatives thrive or falter. It’s well known that garbage in leads to garbage out, yet many organizations still underestimate how data issues sabotage AI ROI. Executives may invest millions in AI tools, only to find that the tools can’t deliver value because the underlying data is incomplete, biased, or untrustworthy. 
A recent survey of CFOs found that <strong>poor data trust is the single greatest inhibitor of AI success</strong> — 35% of finance chiefs cite lack of trusted data as the top barrier to AI ROI. It’s no wonder <a href="https://rgp.com/press/rgp-cfo-survey-shows-growing-divide-between-ai-ambition-and-ai-readiness/#:~:text=Data%20remains%20the%20single%20greatest,impact%20and%20slowing%20enterprise%20adoption">only 14% have seen meaningful AI value so far</a>.</p><p><strong>Data sovereignty</strong> is a particularly hot issue. Companies and governments alike want assurance that critical data remains under their control. This is driving a trend toward <strong>“sovereign AI” solutions</strong> — those that allow data to be kept locally or in trusted environments, rather than forcing lock-in to a vendor’s cloud. Europe’s upcoming regulations emphasize data localization and <strong>digital sovereignty</strong>, reinforcing this shift. The stakes became evident when <strong>Switzerland’s defense authorities rejected Palantir’s AI software</strong> after a risk assessment warned it could leave Swiss data vulnerable to U.S. jurisdiction.<a href="https://thetonymichaels.substack.com/p/palantir-loses-out-in-switzerland#:~:text=%E2%80%9CNo%20foreign%20software%20should%20compromise,evaluation%2C%20summarizing%20internal%20military%20concerns"> In the evaluators’ words,</a> <em>“No foreign software should compromise our ability to control and protect sensitive national information.”</em></p><p>For businesses, the takeaway is that <strong>control over data = trust</strong>. In 2026, leading enterprises will choose AI platforms that offer <strong>transparent data handling, open standards, and interoperability</strong> so they aren’t handcuffed to a single provider. 
By building <strong>sovereign data ecosystems</strong> — for instance, using <strong>decentralized data networks — organizations ensure data integrity and privacy</strong>, which in turn <strong>unlocks AI value</strong>. When your data is high-quality, compliant, and under clear ownership, AI initiatives can progress without the hidden friction that often stalls pilots. In short, <strong>trusted data is the fuel for AI ROI</strong>.</p><h3>Trend 3: Information Layer — Explainable and Verifiable AI Insights</h3><p>Turning raw data into actionable <strong>Information</strong> is the next layer — and in 2026, the key word is <strong>“explainable”</strong>. As AI systems generate reports, recommendations, and content, organizations are realizing that <em>if the people using that information don’t trust it, the AI investment is wasted</em>. Thus, a major trend is the adoption of <strong>explainable AI (XAI) and verifiable AI outputs</strong>. Business leaders want AI that not only <em>does</em> the analysis but can <strong>show its work</strong> — revealing the logic, source data, or confidence behind an output.</p><p>This trend is fueled by both internal needs (e.g. a manager trusting an AI-generated forecast) and external pressure. Regulators are stepping in: the EU’s AI Act, for example, includes <strong>transparency obligations</strong> requiring that users be informed when they interact with AI or encounter AI-generated content. Draft European guidelines even call for <strong>marking and labeling AI-generated media</strong> <a href="https://www.itic.org/news-events/techwonk-blog/techs-expectations-for-the-eu-ai-act-transparency-code-of-practice#:~:text=Tech%27s%20Expectations%20for%20the%20EU,generated%20content">to curb misinformation</a>. Likewise, in the U.S., authorities have encouraged AI developers to implement watermarking for synthetic content. 
The message is clear — <strong>2026 is the year when “black box” AI won’t cut it</strong> in many business applications.</p><p>Companies are responding by building <strong>trust layers around AI information</strong>. One approach is integrating <strong>cryptographic provenance</strong>: for instance, embedding invisible signatures in AI-generated content or logs that allow anyone to verify where it came from and whether it’s been altered. Another approach is to leverage verifiable credentials for information sources, ensuring that data feeding AI models (or experts providing oversight) is authenticated and reputable. Forward-looking firms are also deploying <strong>AI explainability tools</strong> — from simple <em>model scorecards</em> that highlight key factors in an AI decision, to advanced techniques that trace an AI recommendation back to the supporting facts.</p><p>A practical example is in financial services: banks deploying AI credit scoring are using <strong>explainable models and audit trails</strong> so that each loan decision can be explained to a regulator or customer, building trust and avoiding compliance roadblocks. In the realm of generative AI, companies are pairing large language models with knowledge bases and <strong>fact-checking mechanisms</strong> to prevent hallucinations from reaching end-users. <em>In essence, information generated by AI is becoming</em> <em>self-documenting and self-verifying.</em> By making AI’s information outputs <strong>transparent, explainable, and traceable</strong>, businesses not only <strong>mitigate risk</strong> but also encourage greater adoption — employees and customers are far more likely to <em>use</em> AI-driven insights when they can trust the <strong>why</strong> behind the answer. 
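</p><p>A minimal sketch of such a provenance check, using a shared-secret HMAC from the Python standard library for brevity (production systems would more likely use asymmetric signatures so that anyone can verify without holding a secret; the record fields are illustrative):</p>

```python
import hashlib
import hmac
import json

SECRET = b"provenance-demo-key"  # demo only; real deployments would manage keys properly

def sign_output(record: dict) -> str:
    """Attach a provenance tag: an HMAC over the canonical record, so any
    later edit to the content, model id, or sources is detectable."""
    msg = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_output(record: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_output(record), tag)

report = {"model": "forecast-v2", "sources": ["erp-q3.csv"], "text": "Revenue up 4%"}
tag = sign_output(report)
assert verify_output(report, tag)    # untampered output verifies
report["text"] = "Revenue up 40%"    # altered after generation...
assert not verify_output(report, tag)  # ...fails verification
```

<p>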
The result is faster decision cycles and more impactful AI use, directly boosting ROI.</p><h3>Trend 4: Knowledge Layer — Decentralized Knowledge Networks and Collaboration</h3><p>The <strong>Knowledge layer</strong> elevates information into shared organizational intelligence. In 2026, a standout trend will be the rise of <strong>decentralized and verifiable knowledge networks</strong> as the backbone of AI-powered enterprises. Organizations have learned that AI projects in isolation often hit a wall — the real value emerges when insights are captured, linked, and reused across the company (and even with partners). To enable this, companies are turning to <strong>knowledge graphs and collaborative AI platforms</strong> that break down silos. Crucially, these knowledge systems are being built with <strong>trust and verification in mind</strong>. Every contribution to a modern enterprise knowledge graph can be accompanied by metadata: <em>who added this insight, from what source, and with what evidence?</em></p><p>A powerful enabler here is the convergence of <strong>blockchain (decentralization) and AI</strong>. By combining blockchains’ distributed trust with AI-driven knowledge graphs, organizations create <strong>shared knowledge ecosystems that no single party solely controls — </strong><a href="https://medium.com/origintrail/trust-thy-ai-artificial-intelligence-base-d-with-origintrail-e866d996ca1c#:~:text=Having%20employed%20the%20fundamentals%20of,0"><strong>yet everyone can trust</strong></a>. 
For example, in supply chain and manufacturing, partners are beginning to contribute to <strong>decentralized knowledge graphs </strong>in which data on product quality and provenance are cryptographically signed at each step.</p><p>One notable case: <a href="https://www.gs1.org/insights-events/case-studies/enhancing-safer-travel-predictive-maintenance-transportation"><strong>Switzerland’s national rail company (SBB)</strong> </a>uses a decentralized knowledge graph for real-time traceability of equipment data, ensuring all stakeholders see a single source of truth with integrity. In such networks, <strong>verifiable credentials</strong> play a role too — only authorized contributors (with digital credentials) can add or modify knowledge, preventing bad data from polluting the system. The benefit to ROI is clear: when knowledge is <strong>integrated and trusted</strong>, AI can draw on a much richer context to solve problems, and organizations avoid the costly mistakes of inconsistent information.</p><p>Moreover, a <strong>decentralized approach reduces vendor lock-in</strong> and increases resilience — knowledge isn’t trapped in one platform, it’s part of a federated infrastructure the company owns. Leaders are also finding that trusted knowledge sharing accelerates innovation: teams reuse each other’s AI-derived insights instead of reinventing the wheel. As Dr. Robert Metcalfe (inventor of Ethernet) observed, <a href="https://www.gs1.org/insights-events/case-studies/enhancing-safer-travel-predictive-maintenance-transportation"><strong>knowledge graph</strong></a><strong>s can “improve the fidelity of artificial intelligence” by grounding AI in verified facts</strong>. 
In 2026, companies that master this <strong>knowledge layer</strong> — creating a living, vetted memory for the organization — will reap compounding returns from each new AI deployment, as each project makes the next one smarter and faster.</p><h3>Trend 5: Wisdom Layer — AI Governance and Strategic Alignment for Sustainable ROI</h3><p>At the top of the I-DIKW stack is <strong>Wisdom</strong> — the ability to make prudent, big-picture decisions. For enterprises, this translates to strong <strong>AI governance and strategic alignment</strong> at the leadership level. The trend for 2026 is that AI is no longer just the domain of IT departments or innovation labs; it’s a <strong>C-suite and boardroom priority</strong> to ensure AI is used wisely, ethically, and in line with the company’s goals. One telling sign: nearly <a href="https://fortune.com/2025/12/15/aritficial-intelligence-return-on-investment-aiq/"><strong>61% of CEOs say</strong></a><strong> they are under more pressure to show returns on AI investments</strong> than a year ago. This pressure is forcing a new alignment between tech teams and business leaders. We see the emergence of <strong>Chief AI Officers and cross-functional AI steering committees</strong> to govern AI initiatives with a balance of innovation and risk management. In practice, companies are establishing <strong>AI governance frameworks</strong> — formal policies and oversight processes to supervise AI model development, deployment, and performance.</p><p><a href="https://rgp.com/press/rgp-cfo-survey-shows-growing-divide-between-ai-ambition-and-ai-readiness/#:~:text=Governance%20is%20emerging%2C%20but%20uneven%3A,and%20risk%20awareness%20at%20scale">According to recent research</a>, about 69% of large firms report having advanced AI risk governance in place, though many others are still catching up. In 2026, closing this governance gap will be crucial. 
Effective AI governance ensures that there is <strong>“wisdom” in how AI is applied</strong>: systems are tested for fairness, AI-driven decisions are subject to human review when needed, and AI strategies align with business values and compliance requirements.</p><p>This <strong>strategic alignment</strong> of AI yields tangible ROI by preventing missteps and unlocking faster adoption. Companies with mature governance can deploy AI in customer-facing processes or critical operations with confidence that they won’t run afoul of regulations or ethics scandals. In contrast, firms that push AI without guardrails often face costly setbacks — whether it’s a PR crisis over biased AI results or a regulator halting a project.</p><p>Moreover, organizations are starting to augment their internal governance with collaborative, cross-industry safety nets. For instance, <a href="https://umanitek.ai/#:~:text=,centric%20AI">Umanitek </a>has introduced a decentralized “Guardian” agent to coordinate AI safety across platforms. Guardian can fingerprint and cross-check content against a shared knowledge graph of known illicit or deceptive media, blocking harmful deepfakes or flagged materials in real time. Crucially, this approach preserves privacy and data ownership for all participants: each contributor’s data stays private while the agent exchanges trust signals via a permissioned decentralized network. By leveraging such cross-industry trust infrastructure, enterprises effectively extend their AI governance beyond their own walls, aligning multiple AI agents and stakeholders to uphold common integrity standards. This kind of collaborative safeguard strengthens the wisdom layer by ensuring that as AI systems interact across the web, they do so under a unified, verifiable set of ethical guardrails.</p><p><strong>Trust, once again, is a differentiator</strong> at the wisdom level.
A reputation for trustworthy AI can become a selling point: for example, enterprise clients may choose a software provider not just for its AI features, but because it can <em>prove</em> those features are fair and compliant. We’re effectively seeing <strong>trust as a brand asset</strong>. Internally, strong governance also brings the wisdom of knowing where AI truly adds value. Leading organizations have learned to <strong>“lead with the problem, not with AI”</strong>, ensuring that each AI project is tied to a clear business outcome (revenue growth, cost reduction, customer experience) rather than AI for AI’s sake. This focus on <strong>value alignment</strong> is paying off. In fact, research on AI leaders (the Fortune “AIQ 50” companies) shows they excel not by spending the most, but by integrating AI deeply into strategy and operations <a href="https://www.linkedin.com/pulse/fortune-etr-reveal-aiq-50-etr-enterprise-technology-research-dk3sc#:~:text=As%20AI%20adoption%20accelerates%2C%20the,maturity%20positively%20impacts%20their%20business">to drive measurable results</a>.</p><p>Looking at the competitive landscape, those who invest in <strong>wisdom-layer capabilities</strong>, like company-wide AI literacy, scenario planning for AI risks, and continuous training to fill AI skill gaps, are pulling ahead. CFOs note that <strong>strengthening “the systems, data, and talent” around AI is key to turning AI’s promise into performance</strong>.</p><p><strong>That is wisdom in action:</strong> recognizing that ROI comes not just from technology, but from enabling people and processes to harness that technology effectively.
As regulatory regimes (from the EU AI Act to industry-specific AI guidelines) come into effect, having a solid governance foundation will mean <strong>fewer disruptions and fines</strong> and more freedom to innovate.</p><p>In sum, the <strong>Wisdom trend for 2026</strong> is about treating AI not as a magic black box, but as a strategic enterprise capability that must be nurtured, overseen, and aligned with human judgment. Businesses that do so will find that <strong>trust breeds agility</strong> — they can push the envelope on AI usage because they have the wisdom to manage the risks. That translates directly into <strong>higher ROI and sustained competitive advantage</strong>.</p><h3>Conclusion: Trust-Powered AI as the Blueprint for Leadership</h3><p>As we head into 2026, one theme resonates across all five layers of I-DIKW: <strong>trust</strong> is the through-line that turns AI from a gamble into a solid investment. By strengthening <strong>Integrity</strong> (the technical and ethical bedrock), mastering <strong>Data</strong> quality and sovereignty, insisting on <strong>Information</strong> transparency, cultivating verifiable <strong>Knowledge</strong> networks, and enforcing wise <strong>Governance</strong> at the top, organizations create a <strong>virtuous cycle</strong>. Each layer reinforces the others — trustworthy data leads to more reliable AI information, which feeds organizational knowledge, enabling wiser decisions, which in turn guide further data strategy, and so on. Companies that embrace this holistic approach are positioning themselves as <strong>leaders in the AI economy</strong>. They are better prepared for tightening regulations and rising customer expectations, turning those into opportunities rather than obstacles. 
Not least, they are demonstrating to investors and boards that AI dollars are well spent: projects don’t stall in pilot purgatory, but scale with confidence because the <strong>infrastructure of trust</strong> is in place.</p><p>In a business climate where <a href="https://fortune.com/2025/12/15/aritficial-intelligence-return-on-investment-aiq/"><strong>61% of CEOs feel the heat</strong></a><strong> to prove AI is delivering value</strong>, aligning with the I-DIKW framework provides a clear roadmap. It ensures that AI efforts are <strong>built on integrity and purpose at every step</strong>, rather than chased as shiny objects. The experience of firms at the forefront underscores this: those who treated <strong>trust as a core principle</strong> of their AI strategy are now reaping tangible returns — whether through increased automation efficiencies, new revenue streams from AI-driven products, or stronger customer loyalty thanks to ethically sound AI practices. On the other hand, organizations that neglected these layers are encountering what one might call “AI growing pains,” from data compliance headaches to lackluster ROI, and even public backlash.</p><p>The strategic reflection for executives is this: <strong>AI leadership in 2026 will belong to those who marry innovation with verification</strong>. By investing in trustworthy infrastructure — <em>be it cryptographic provenance for data, explainability modules for AI, or robust governance councils — you not only de-risk your AI investments, but you amplify their reward</em>. Trust is more than a compliance checkbox; it’s a performance multiplier. 
In the coming AI-driven economy, <strong>build trust, and the ROI will follow</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=372ac5dabc38" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/5-trends-to-drive-the-ai-roi-in-2026-trust-is-capital-372ac5dabc38">5 Trends to drive the AI ROI in 2026: Trust is Capital</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge…]]></title>
            <link>https://medium.com/origintrail/oxford-pharmagenesis-and-origintrail-to-introduce-collaborative-ai-ready-medical-knowledge-6d44654ec192?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/6d44654ec192</guid>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Tue, 02 Sep 2025 13:13:33 GMT</pubDate>
            <atom:updated>2025-09-02T14:06:13.815Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*52AtVtz1p6TUeyeEdIZ2hw.gif" /></figure><h3>Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge ecosystem driving the next generation of agentic science</h3><p>A vast amount of valuable clinical trial information exists in the world, but much of it is fragmented, hard to verify, and difficult to use. This lack of connectivity slows research, complicates evidence synthesis, and limits the ability of healthcare professionals, patients, and other stakeholders to access clear, reliable information.</p><p>To address these challenges, Trace Labs, the core developers of <a href="https://origintrail.io">OriginTrail</a>, and <a href="https://www.pharmagenesis.com/">Oxford PharmaGenesis</a> have partnered on a groundbreaking initiative to globally connect and verify medical knowledge.</p><p><strong>The challenge: Lost value in unconnected knowledge</strong></p><p>Pharmaceutical companies and researchers generate a continuous flow of high-quality outputs — trial registrations, regulatory summaries, and peer-reviewed publications.
Yet these resources are scattered across multiple platforms and formats, making it difficult to integrate with advanced AI systems.</p><p>As a result, vast amounts of valuable knowledge remain underused:</p><p>● Researchers struggle to locate relevant clinical studies and real-world evidence,</p><p>● Healthcare professionals lack quick access to verified, up-to-date information,</p><p>● Patients are left without clear, trustworthy resources to guide their decisions.</p><p>These inefficiencies keep knowledge fragmented and opaque: evidence is hard to find and harder to verify, slowing progress and eroding transparency and trust, with real consequences for patients.</p><p><strong>The vision: Building a connected and trusted health knowledge pool</strong></p><p>Oxford PharmaGenesis — a global leader in the healthcare communications industry that collaborates with over 50 healthcare organizations worldwide, including eight of the world’s top ten pharmaceutical companies — and Trace Labs have partnered to create the world’s first structured, connected, and verifiable pool of clinical trial knowledge on the <a href="https://origintrail.io/technology/decentralized-knowledge-graph">OriginTrail Decentralized Knowledge Graph (DKG)</a>.</p><p>The OriginTrail DKG merges blockchain technology with semantic, machine-readable knowledge structures, ensuring every contribution carries verifiable ownership, a transparent version history, and rich contextual links for both AI and human use. Oxford PharmaGenesis’ partnerships span pharmaceutical and biotech companies, as well as professional societies, patient groups, and academic institutions.
It is also a co-founder, co-funder, and facilitator of Open Pharma with the mission to advance open science, transparency, and equity for pharma-sponsored research communications, placing it at the center of trusted knowledge exchange in healthcare.</p><p>This initiative will launch through an incentivized data-sharing program to create a domain-specific Decentralized Knowledge Graph (or “paranet”) within the OriginTrail DKG. Leading pharmaceutical organizations will be invited to join as trusted knowledge contributors, making their clinical information accessible to AI agents, research tools, and human users alike. The result: faster, more accurate discovery and reuse, empowering experts and the public with reliable, transparent, and actionable insights.</p><p><strong>From pilot to scalable implementation</strong></p><p>The collaboration begins with a pilot, which will link together publicly available information on multiple medicines produced by a global pharmaceutical company. It will create the blueprint for rapid expansion to additional contributors through the structured, incentivized data-sharing program. This first phase will establish the core framework — secure, intuitive tools for contributing and exploring data, robust systems for verifying and connecting clinical knowledge, and safeguards to ensure every piece of information remains trusted and protected.</p><p>Once operational, the paranet will allow AI agents to both produce and consume verifiable knowledge directly from the OriginTrail DKG. In practice, this means transforming complex clinical data into plain-language summaries, in-depth scientific reports, visual explainers, and other formats tailored to audiences ranging from researchers and clinicians to patients and the public.
As more organizations contribute, the paranet will grow to billions of structured, connected, and verifiable data points — a rich foundation with the potential to accelerate medical research, speed up discoveries, and equip healthcare professionals, patients, and innovators worldwide with better tools for informed decision-making.</p><p><strong>Looking ahead: A path toward a trusted public knowledge ecosystem</strong></p><p>This collaboration marks the start of an ambitious journey to build the world’s most extensive decentralized repository of trusted clinical trial knowledge on the OriginTrail DKG, stemming the tide of medical misinformation by providing a solid bedrock of trusted information that genAI tools can use. Driven jointly by Trace Labs, the core developers of OriginTrail, and Oxford PharmaGenesis, a global leader in scientific and medical consulting for the pharmaceutical and healthcare industries, the initiative will transform valuable clinical data into a structured, verifiable, and AI-ready resource. By incentivizing collaboration and uniting leading pharmaceutical organizations, the network will grow rapidly — unlocking knowledge that can accelerate research, fuel innovation, and ultimately improve lives worldwide.</p><p><strong>About OriginTrail</strong></p><p>OriginTrail is an ecosystem dedicated to making the global economy work sustainably by enabling a universe of AI-ready Knowledge Assets, allowing anyone to take part in trusted knowledge sharing. It leverages the open-source Decentralized Knowledge Graph that connects physical and digital worlds in a single connected reality, driving transparency and trust.</p><p>Advanced knowledge graph technology currently powers trillion-dollar companies like Google and Facebook. 
By reshaping it for Web3, the OriginTrail Decentralized Knowledge Graph provides a crucial fabric to link, verify, and value data on both physical and digital assets.</p><p>Learn more about <strong>OriginTrail</strong>: <a href="https://origintrail.io/">https://origintrail.io/</a>.</p><p><strong>About Oxford PharmaGenesis</strong></p><p>Oxford PharmaGenesis is a HealthScience communications consultancy and the largest independent company in the healthcare communications sector. Founded in 1998, the award-winning organization comprises more than 500 talented people working from North America, Europe, and the Asia Pacific.</p><p>Oxford PharmaGenesis is connected by a strong company culture and a clear mission: to help clients accelerate the adoption of evidence-based innovations for patients in areas of unmet medical need.</p><p>Learn more about <strong>Oxford PharmaGenesis</strong>: <a href="https://www.pharmagenesis.com/">https://www.pharmagenesis.com/</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6d44654ec192" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/oxford-pharmagenesis-and-origintrail-to-introduce-collaborative-ai-ready-medical-knowledge-6d44654ec192">Oxford PharmaGenesis and OriginTrail to introduce collaborative, AI-ready medical knowledge…</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot!]]></title>
            <link>https://medium.com/origintrail/build-ai-agents-with-verifiable-memory-using-origintrail-and-microsoft-copilot-52363f814707?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/52363f814707</guid>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[ai-memory]]></category>
            <category><![CDATA[mcp-server]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 17 Jul 2025 16:30:28 GMT</pubDate>
            <atom:updated>2025-07-17T16:28:37.085Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9TF35E_tldYpDOrNlMEhJA.gif" /></figure><p>Microsoft Copilot is becoming the interface for how users <strong>work with AI across the Microsoft</strong> ecosystem. But what happens when you enhance Copilot with the ability to understand and remember structured, verifiable knowledge?</p><p>With the<strong> integration of the OriginTrail Decentralized Knowledge Graph (DKG) and the Model Context Protocol (MCP)</strong>, you can build AI agents that reason over live data, contribute to shared memory, and deliver trusted outputs backed by cryptographic proofs.</p><p>By extending <strong>Microsoft’s AI infrastructure with OriginTrail</strong>, you equip Copilot agents with powerful capabilities for knowledge discovery, memory, and collaboration.</p><h3><strong>What is MCP?</strong></h3><p>The Model Context Protocol (MCP) is an open standard that defines how language models access and utilize tools and external data sources.</p><p><strong>MCP uses a client-server architecture where:</strong></p><ul><li>MCP Servers expose tools and data, both local and remote,</li><li>MCP Clients, such as agents built in Microsoft Copilot Studio, call these tools using a standard protocol.</li></ul><p>This architecture makes it <strong>easy to build AI systems</strong> that are modular, composable, and interoperable <strong>across different environments</strong>.</p><h3><strong>What role does the DKG play?</strong></h3><p>The <strong>OriginTrail DKG provides a decentralized layer</strong> for structured, verifiable knowledge that AI agents can query, write to, and collaborate over. 
When connected to an <strong>MCP server equipped with DKG tools</strong>, agents are empowered to retrieve and build upon interconnected, verifiable knowledge.</p><p><strong>AI agents can:</strong></p><ul><li>Retrieve semantically rich knowledge,</li><li>Generate and publish new Knowledge Assets,</li><li>Collaborate on a shared, verifiable knowledge base.</li></ul><p>Each interaction is built with data provenance, version control, and ownership in mind. <strong>Knowledge is shared, structured, and trustworthy!</strong></p><h3>Supercharging Microsoft Copilot with verifiable memory!</h3><p>Through this integration, builders can now connect OriginTrail DKG with custom agents built in Microsoft Copilot Studio.</p><p><strong>Here’s what that enables:</strong></p><ul><li>The DKG MCP server runs alongside an OriginTrail DKG Node,</li><li>Custom actions are registered in Microsoft Copilot Studio to access DKG tools,</li><li>These actions can be triggered by agents within environments like Microsoft Teams.</li></ul><p>This setup allows Copilot-based agents to access interconnected, verifiable knowledge in real time, and contribute new structured information back into the DKG.</p><p><strong>Agents can then:</strong></p><ul><li>Ask precise questions over a structured knowledge graph,</li><li>Write their own memory as reusable Knowledge Assets,</li><li>Store results, update context, and collaborate with other agents.</li></ul><p>This integration brings reasoning, verifiability, and memory collaboration directly into Copilot-powered workflows.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YQeFu0KgcgAuSuYVwQ6_Aw.jpeg" /></figure><h3>See it in action!</h3><p>In the live demo, Jurij Škornik, General Manager at Trace Labs, the core developers of OriginTrail, walks us through:</p><ul><li>Running the DKG MCP server with an OriginTrail Edge Node,</li><li>Building a custom agent in Microsoft Copilot Studio,</li><li>Adding custom actions to enable interaction via
Microsoft Teams.</li></ul><p>The result is a working Copilot agent with <strong>full access to decentralized, verifiable memory</strong>. Check it out!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_S5cNdwAGsQ%3Fstart%3D177%26feature%3Doembed%26start%3D177&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_S5cNdwAGsQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_S5cNdwAGsQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0ffde9fbbd1763b6ecb58da111ab6f74/href">https://medium.com/media/0ffde9fbbd1763b6ecb58da111ab6f74/href</a></iframe><p>As AI becomes central to enterprise workflows, adding verifiability and structure to its memory is <strong>essential</strong>. Combining <strong>OriginTrail DKG and MCP means your agents are working with knowledge</strong> that is:</p><ul><li>Structured using open standards (like RDF and schema.org),</li><li>Interconnected across multiple data sources,</li><li>Verifiable thanks to cryptographic anchoring,</li><li>Portable across applications, agents, and ecosystems, such as Microsoft.</li></ul><p>This opens the door to <strong>new applications</strong> in supply chains, research, content management, enterprise collaboration, and more!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52363f814707" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/build-ai-agents-with-verifiable-memory-using-origintrail-and-microsoft-copilot-52363f814707">Build AI agents with verifiable memory using OriginTrail and Microsoft Copilot!</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[OriginTrail powers the future of ethical AI in healthcare with ELSA]]></title>
            <link>https://medium.com/origintrail/origintrail-powers-the-future-of-ethical-ai-in-healthcare-with-elsa-d59b628438be?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/d59b628438be</guid>
            <category><![CDATA[ethical-ai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[healthcare]]></category>
            <category><![CDATA[decentralized]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Wed, 14 May 2025 10:12:35 GMT</pubDate>
            <atom:updated>2025-05-14T10:12:22.015Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MXmn7CPHcbQy2XNZJwo9DQ.png" /></figure><blockquote>A decentralized repository for secure, scalable genomic data sharing &amp; AI-driven personalized healthcare insights — powered by OriginTrail Decentralized Knowledge Graph (DKG).</blockquote><h3>OriginTrail powers the future of ethical AI in healthcare with ELSA</h3><p>We’re excited to announce that <a href="https://origintrail.io"><strong>OriginTrail</strong></a><strong> is joining forces with the </strong><a href="https://elsa-ai.eu"><strong>ELSA</strong></a><strong> (European Lighthouse on Secure and Safe AI) </strong>initiative to shape the future of <strong>decentralized, privacy-preserving artificial intelligence (AI) in healthcare</strong>. Digital healthcare today faces three pressing challenges: safeguarding patient privacy, bridging fragmented data silos for seamless interoperability, and meeting strict regulatory requirements without stifling innovation.</p><p>At the heart of this collaboration lies <strong>DeReGenAI</strong> — a decentralized repository for secure, scalable genomic data sharing and AI-driven personalized healthcare, powered by the OriginTrail Decentralized Knowledge Graph (DKG). This initiative tackles the<strong> most pressing challenges in digital health</strong>: enabling secure, compliant, and user-sovereign sharing of sensitive genomic data while unlocking the full potential of AI-driven personalized healthcare.</p><h4>Trustworthy AI needs trustworthy infrastructure</h4><p>AI is transforming healthcare — but for it to do so responsibly, it must be built on a foundation of trust, transparency, and ethics.
That’s exactly what OriginTrail brings to the table within the ELSA consortium: an open-source, decentralized infrastructure that ensures data privacy, ownership, and interoperability at scale.</p><p>By integrating OriginTrail DKG, DeReGenAI becomes a <strong>decentralized repository that puts patients in control of their most personal asset</strong> — their genomic data. This enables:</p><ul><li><strong>User-managed permissions</strong>: Patients decide who can access their data, when, and for what purpose.</li><li><strong>Privacy-preserving monetization</strong>: Individuals can opt to share their data with research institutions or health providers on their own terms.</li><li><strong>AI-ready interoperability</strong>: Seamless interaction with AI systems while maintaining the integrity and provenance of the data.</li></ul><p>At its core, the OriginTrail DKG acts as a knowledge graph of knowledge graphs — a globally distributed network where each participant maintains control over their own knowledge node.
These nodes interact in a fully <strong>decentralized manner, eliminating the risks of centralized data silos and single points of failure</strong>.</p><p><strong>Here’s why this matters:</strong></p><ul><li><strong>Global scale</strong>: Access data from diverse sources without compromising security.</li><li><strong>Privacy-first architecture</strong>: Data sovereignty is seamlessly integrated into the infrastructure.</li><li><strong>Compliance-ready</strong>: Designed with GDPR and other regulatory frameworks in mind.</li><li><strong>Interoperable</strong>: Built for seamless integration with AI technologies and healthcare systems.</li></ul><h4>How does DeReGenAI work?</h4><p>To power the next generation of personalized healthcare, DeReGenAI employs <strong>decentralized Retrieval-Augmented Generation (dRAG)</strong> — an evolution of how Large Language Models (LLMs) interact with external data.</p><p>Instead of querying a centralized source, the LLMs in DeReGenAI leverage the OriginTrail DKG to retrieve <strong>verified, decentralized knowledge</strong>. This unlocks:</p><ul><li>More accurate AI insights,</li><li>Context-aware healthcare recommendations,</li><li>Trustworthy and verifiable AI behavior.</li></ul><p>The ELSA initiative brings together top-tier European academic, industrial, and technology partners, such as <strong>University of Oxford, The Alan Turing Institute, NVIDIA</strong>, and others, to build a future where AI is both effective and ethical. 
As part of the ELSA initiative, <strong>OriginTrail is used to build a trusted data ecosystem for the AI age</strong> — one where people, not platforms, control their data, and where innovation never comes at the cost of ethics.</p><p>We’re proud to be driving this change, and even prouder to be doing it alongside an incredible group of partners.</p><p>Learn how OriginTrail is powering the shift to human-centric AI at <a href="https://origintrail.io/">https://origintrail.io/</a>.</p><p><strong>Trust the source.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d59b628438be" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/origintrail-powers-the-future-of-ethical-ai-in-healthcare-with-elsa-d59b628438be">OriginTrail powers the future of ethical AI in healthcare with ELSA</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[umanitek launches umanitek Guardian AI agent]]></title>
            <link>https://medium.com/origintrail/umanitek-launches-umanitek-guardian-ai-agent-00ebab78a0b3?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/00ebab78a0b3</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[ai-risk]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 08 May 2025 11:49:59 GMT</pubDate>
            <atom:updated>2025-05-08T11:49:41.721Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3S7RN_W57d5IW8Yk8xtD_w.png" /></figure><p><strong>Zug, Switzerland (May 6, 2025) — </strong>Umanitek AG, a Swiss-based AI company combating harmful content and the risks of artificial intelligence, today announces the launch of its first product, <a href="https://umanitek.ai/product">umanitek Guardian</a>.</p><p>Umanitek’s mission is to fight against harmful content and the risks of AI by developing and deploying technology that serves the greater good of humanity.</p><p>Umanitek’s first product is an AI agent, umanitek Guardian, that uses the <strong>Decentralized Knowledge Graph (DKG)</strong>, a decentralized, trusted network for organizing and tracking immutable data that allows participating organizations to keep ownership and control of their data while supporting database queries on a need-to-know basis — allowing collaboration without compromising privacy.</p><p>The first user of umanitek Guardian will be Aylo, which will leverage the agent to allow law enforcement to query 7 million hashes of its verified content using natural language.</p><p>“<em>Umanitek acts as the bridge. Through Decentralized Knowledge Graph (DKG) decentralized infrastructure, we can integrate advanced Internet safety technologies directly with data. </em><a href="https://umanitek.ai/product"><em>Umanitek Guardian</em></a><em> will enable companies, law enforcement, NGOs and individuals to collaborate by uploading and querying “fingerprints” of images and videos to a decentralized directory. This system will help large technology platforms track, identify and prevent the distribution of harmful content.
We are committed to developing human-centric AI solutions that promote trust, protect privacy and help make internet safety the standard in the age of AI</em>.”</p><p>– Chris Rynning, umanitek Chairman</p><p><strong>About umanitek</strong></p><p><em>Making internet safety the standard in the age of AI.</em></p><p>Umanitek AG is a Swiss-based AI company combating harmful content and the risks of artificial intelligence. We develop human-centric AI solutions that promote trust, protect privacy and make internet safety the standard in the age of AI.</p><p>Our founders bring deep expertise in building reliable, trusted AI systems, are connected to global networks working to reduce internet harm, and are committed to raising awareness about the importance of education and digital responsibility in the age of AI.</p><p>Umanitek’s AI infrastructure is safe by design, open by principle and trustworthy by default. With a focus on ethical innovation, umanitek is setting the standards for transparency, accountability and harm reduction in artificial intelligence.</p><p>For more information about umanitek, umanitek’s founders and products, visit <a href="http://www.umanitek.ai/">www.umanitek.ai</a>.</p><p><strong>Contacts</strong></p><p>For media inquiries, please contact:</p><p>Umanitek Communication</p><p><a href="mailto:media@umanitek.ai">media@umanitek.ai</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=00ebab78a0b3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/umanitek-launches-umanitek-guardian-ai-agent-00ebab78a0b3">umanitek launches umanitek Guardian AI agent</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[UMANITEK: Setting the standard for internet safety]]></title>
            <link>https://medium.com/origintrail/umanitek-setting-the-standard-for-internet-safety-cd5a91f142f3?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/cd5a91f142f3</guid>
            <category><![CDATA[ethical-ai]]></category>
            <category><![CDATA[safety]]></category>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[internet-security]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Fri, 07 Mar 2025 14:52:39 GMT</pubDate>
            <atom:updated>2025-03-07T14:52:38.869Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y_2eJSCKxwDtNLcu2Yzdkw.gif" /></figure><p>Today, artificial intelligence (AI) is rapidly reshaping the Internet, driving a historic transformation in how we engage, work, and communicate online.</p><p>However, the rise of generative AI has also led to an explosion of deepfakes, hallucinating language models, and the rapid creation of untrustworthy content — threatening the foundation of authentic communication and learning. AI-generated content now dominates the internet, making it increasingly difficult to distinguish reality from fabrication.</p><p>While AI unlocks significant advancements, it also introduces equally substantial risks, from intellectual property infringements to illegal content, such as child sexual abuse materials.</p><p><strong>It is for this reason that we founded umanitek.</strong></p><p>At umanitek, our mission is to fight against harmful content and the risks of AI by promoting technology that serves the greater good of humanity.</p><p>Our founders, <strong>Trace Labs, Ethical Capital Partners and AMYP Ventures AG (part of a Piëch/Porsche Family Office)</strong>, bring together their capabilities in building reliable and trusted AI systems, their connection to networks that fight for the removal of internet harm, and their ability to raise awareness of the importance of knowledge and education in the age of AI.</p><p>But this is too big a challenge to take on alone. Recognizing the magnitude of this issue, we actively seek partnerships with institutions and individuals dedicated to ethical AI development. 
We want to partner with investors who are focused on “tech for good” solutions, where societal impact is of equal importance to commercial success, and to work with tech leaders, policymakers, and law enforcement <strong>to make internet safety the standard in the age of AI.</strong></p><h3>Balancing innovation with responsibility in the age of AI.</h3><p>Our vision is to leverage umanitek’s technology to enable corporations and individuals to control their data, technology, and resources without compromising security, privacy, or intellectual property.</p><p><strong>Here’s but one quick example of how umanitek will work.</strong></p><p>Far too many people are concerned about the non-consensual sharing of their personal images or those of their children. Umanitek will enable companies, law enforcement, NGOs, and individuals to upload “fingerprints” of personal photos to a decentralized directory. This system will help large technology platforms identify and prevent the distribution of such content.</p><p>As a potential next step, the system could also streamline the prosecution of offenders in collaboration with law enforcement, while reducing the cost and complexity of legal action related to copyright infringements.</p><p>When organizations and individuals can choose what to share and how to share it in a secure and verifiable way, all internet users benefit. Protecting legitimate content and preventing large language models from training on non-consensual data are integral to harm reduction online. We believe this is an important step to making <strong>internet safety the standard</strong> in the age of AI, reducing harmful content, and enabling trusted AI solutions.</p><p><strong>Fighting the <em>good fight.</em></strong></p><p><em>“We invested in OriginTrail to drive transparency and trust for real-world assets. 
Now, we’ve co-founded </em><strong><em>umanitek</em></strong><em> to combat harmful content, IP infringements, and fake news — leveraging OriginTrail technology across internet platforms.”</em></p><p><em>— Chris Rynning, AMYP Ventures AG (part of a Piëch/Porsche Family Office)</em></p><h3>An unprecedented alliance for ethical AI.</h3><p>Umanitek stands out by combining the expertise of three leaders in their fields:</p><p><strong>Trace Labs (core developers of OriginTrail)</strong> — The pioneers of neuro-symbolic AI, building trusted and verifiable AI systems. They are the developers behind the OriginTrail Decentralized Knowledge Graph (DKG), a technology that enhances trust in AI, supply chains, and global data ecosystems.</p><p><strong>Ethical Capital Partners (ECP)</strong> — A private equity firm seeking out investment and advisory opportunities in industries that require principled ethical leadership. Founded in 2022 by a multi-disciplinary team with legal, regulatory, law enforcement, public engagement, and finance experience, ECP roots its philosophy in identifying companies amenable to a responsible investment approach and working collaboratively with management teams to develop strategies that create value and drive growth.</p><p><strong>AMYP Ventures AG (part of a Piëch/Porsche Family Office)</strong> — A venture capital group backing game-changing AI and Web3 initiatives with the potential for global impact.</p><p>This is a collaboration that combines the knowledge of AI, cutting-edge research, and technology with ethical investment strategies to create the standard for internet safety in the age of AI — an AI solution that will serve humanity.</p><p><strong>Subscribe</strong> for updates at <a href="http://umanitek.ai"><strong>umanitek.ai</strong></a> to stay in touch and be among the first to learn about cofounders, contributors, and partners of umanitek, as well as reserve a spot to test-drive umanitek’s products at their 
release.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3rmJwr7hDomMGU-hCgc_5A.jpeg" /></figure><p><a href="http://umanitek.ai"><strong>Web </strong></a><strong>| </strong><a href="https://x.com/umanitek"><strong>Twitter </strong></a><strong>| </strong><a href="https://www.linkedin.com/company/umanitek"><strong>LinkedIn</strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cd5a91f142f3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/umanitek-setting-the-standard-for-internet-safety-cd5a91f142f3">UMANITEK: Setting the standard for internet safety</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[2025 Roadmap update: Synergy of AI agents and autonomous DKG]]></title>
            <link>https://medium.com/origintrail/2025-roadmap-update-synergy-of-ai-agents-and-autonomous-dkg-563455b8179b?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/563455b8179b</guid>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[roadmaps]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Fri, 17 Jan 2025 19:38:15 GMT</pubDate>
            <atom:updated>2025-01-24T15:25:13.254Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*s1ZkJ6-ilpK8HKBNJvNdng.png" /></figure><p>In 2024, the OriginTrail ecosystem achieved remarkable milestones, driving innovation in decentralized knowledge and AI integration. The three stages (or impact bases) of the <a href="https://origintrail.io/blog/the-v8-foundation">V8 Foundation</a> were inspired by the legendary works of Isaac Asimov. They symbolize steps toward a future where a reliable and trusted knowledge base, or collective neuro-symbolic AI, drives synergies between AI agents and the Autonomous Decentralized Knowledge Graph (DKG) in a human-centric way.</p><p>The updated roadmap recaps the most important achievements of the past year and highlights the road ahead (full updated <a href="https://origintrail.io/ecosystem/roadmap">roadmap available here</a>).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CYPiII43RyadDA4HoNUuRA.png" /></figure><p>The year kicked off with the establishment of the <em>Impact Base: Trantor</em> (home to the Library of Trantor, where librarians systematically indexed human knowledge in a groundbreaking collaborative effort), which catalyzed key advancements, including <strong>Knowledge Mining</strong>, introducing <strong>Initial Paranet Offerings (IPOs)</strong> and autonomous knowledge mining initiatives. 
Simultaneously, the release of <strong>delegated staking</strong> enabled TRAC delegation for network utility and security, enhancing inclusivity and participation in DKG infrastructure.</p><p>Following Trantor, <em>Impact Base: Terminus </em>was activated with key catalysts for adoption, including <strong>multichain growth</strong>, integrating DKG with the <strong>Base blockchain ecosystem</strong>, and implementing transformative scalability solutions such as asynchronous backing on NeuroWebAI blockchain on Polkadot and batch minting features.</p><p>The introduction of <strong>ChatDKG.ai</strong> revolutionized interaction with DKG and paranets, integrating AI models across platforms like <strong>Google Vertex AI, OpenAI, and </strong><a href="https://origintrail.io/blog/trace-labs-joins-nvidia-inception-program-to-advance-the-verifiable-internet-for-ai"><strong>NVIDIA</strong></a>. Meanwhile, the release of <strong>Whitepaper 3.0</strong> outlined the vision of a Verifiable Internet for AI, bridging crypto, Web3, and AI technologies to address misinformation and data integrity challenges.</p><p>The deployment of <a href="https://x.com/origin_trail/status/1872672016857542716"><strong>OriginTrail V8</strong></a> and its <strong>Edge Nodes</strong> brought Internet-scale to the ecosystem. Edge Nodes redefine how sensitive data interacts with AI-driven applications, keeping it on devices while enabling controlled integration with both the DKG and neural networks. This privacy-first architecture facilitates local AI processing, ensuring secure utilization of private and public knowledge assets. 
In addition, OriginTrail V8 achieves monumental scalability improvements with the random sampling proof system, which reduces on-chain transaction requirements by orders of magnitude, thus boosting the DKG’s throughput in a major way.</p><p>DKG V8 provides a powerful substrate for collective neuro-symbolic AI, capable of powering AI agents’ autonomous memories and trusted intents as both AI agents and robots become potent enough to act on behalf of humans.</p><h3>Roadmap for 2025 and beyond: Advancing collective neuro-symbolic AI with the DKG</h3><p>The 2025 roadmap marks a leap forward for the OriginTrail ecosystem, as the <strong>Decentralized Knowledge Graph (DKG)</strong> becomes the cornerstone for <strong>collective neuro-symbolic AI</strong>, a powerful fusion of neural and symbolic AI systems.</p><p>With the establishment of <em>Impact Base: Gaia</em>, the roadmap envisions the system functioning as a super-organism, where decentralized AI agent swarms share and expand their collective memory using the DKG. This shared memory infrastructure, combined with the autonomous inferencing and knowledge publishing capabilities of DKG V8, lays the foundation for decentralized AI that seamlessly integrates neural network and knowledge graph reasoning with trusted, verifiable knowledge. The result is a robust AI infrastructure capable of addressing humanity’s most pressing challenges at an accelerated pace.</p><p>At the heart of this vision lies the <strong>Collective Agentic Memory Framework</strong>, enabling autonomous AI agents to mine, publish, and infer new knowledge while ensuring privacy and scalability. 
This vision is enabled by establishing scalable infrastructure and tools, including AI agent framework <strong>integrations</strong> (such as the <a href="https://github.com/OriginTrail/elizagraph">ElizaOS DKG</a> integration), the <strong>NeuroWeb Collator staking </strong>and<strong> bridge</strong>, and DKG Edge node private knowledge repositories.</p><h3>Those who invest in using the DKG build the DKG: 60MM TRAC Collective Programmatic Treasury (CPT)</h3><p>The roadmap also introduces decentralized growth through initiatives like the <strong>Collective Programmatic Treasury (CPT)</strong>, allocating 60 million $TRAC over a Bitcoin-like schedule to incentivise an ecosystem of DKG developers based on a <a href="https://github.com/OriginTrail/OT-RFC-repository/blob/main/RFCs/OT-RFC-21_Collective_Neuro-Symbolic_AI/OT-RFC-21%20Collective%20Neuro-Symbolic%20AI.pdf">meritocratic system of knowledge contribution</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RvyNjmZp1hGLo0XFDwASzA.png" /><figcaption><a href="https://www.youtube.com/watch?v=q4iL4Wd2Akg">The quote taken from The Matrix Ending</a></figcaption></figure><p>As adoption spreads across industries such as DeSci, robotics, healthcare, and entertainment, this interconnected ecosystem drives network effects of shared knowledge, exponentially amplifying the collective intelligence of AI agents. By aligning decentralized AI efforts with the DKG’s unifying framework, OriginTrail unlocks the potential for Artificial General Intelligence (AGI) through the synergy of all human knowledge, creating a future where AI reflects the full spectrum of human insight and wisdom.</p><h3>Impact base: Gaia (established in Q1 2025)</h3><p><em>The human beings on Gaia, under robotic guidance, not only evolved their ability to form an ongoing telepathic group consciousness but also extended this consciousness to the fauna and flora of the planet itself, even including inanimate matter. 
As a result, the entire planet became a super-organism.</em></p><p><strong>DKG V8</strong></p><p>Scalable and robust foundation for enabling next stage of Artificial Intelligence adoption with decentralized Retrieval Augmented Generation (dRAG), combining symbolic and neural decentralized AI. DKG V8 is catalysing the shift from attention economy to intention economy.</p><p>✅ DKG Edge Nodes</p><p>✅ New V8 Staking dashboard</p><p>✅ New V8 DKG Explorer</p><p>✅ Batch minting (scalability)</p><ul><li>Random sampling (scalability)</li><li>Collective Agentic memory framework</li><li>Eliza integration (Github)</li><li>ChatDKG Framework for AI Agent Autonomous Memory</li><li>NeuroWeb Bridge integration</li><li>NeuroWeb Collators</li><li>RFC-23 Multichain TRAC liquidity for DKG utility</li><li>C2PA global content provenance standard compliance</li></ul><p><strong>Catalyst 1: Autonomous Knowledge Mining</strong></p><p>Mine new knowledge for paranets autonomously by using the power of symbolic AI (the DKG) and neural networks.</p><ul><li>AI-agent driven Knowledge Mining</li></ul><p><strong>Catalyst 2: DePIN for private knowledge</strong></p><p>Keep your knowledge private, on your devices, while being able to use it in the bleeding edge AI solutions.</p><ul><li>Private Knowledge Asset repository for agents (DKG Edge Node)</li><li>Private data monetization with Knowledge Assets and <a href="https://ekgf.github.io/dprod/">DPROD</a></li></ul><h3>Convergence (2025 +)</h3><p>With the <strong>Genesis period</strong> completed the OriginTrail DKG will have a large enough number of Knowledge Assets created (1B) to kickstart the “<strong>Convergence”</strong>. Leveraging network effects, growth gets further accelerated through <strong>autonomous knowledge publishing</strong> and <strong>inferencing </strong>capabilities of the DKG, fueled by <strong>decentralized Knowledge Mining</strong> protocols of NeuroWeb and AI Agents supported by multiple frameworks integrating the DKG. 
During the Convergence, supported by OriginTrail V8 with AI-native features and further scalability increases, the OriginTrail DKG grows into the largest public Decentralized Knowledge Graph in existence, a verifiable web of collective human knowledge — the trusted knowledge foundation for AI.</p><h3><strong>Collective Neuro-Symbolic AI (DKG)</strong></h3><p><strong>Collective Global memory: Autonomous Decentralized Knowledge Graph</strong></p><p>Incentivized autonomous enrichment of human knowledge using neural network reasoning capabilities over a large body of trusted knowledge. Providing AI infrastructure that allows any of the most pressing challenges of human existence to be addressed in an accelerated way.</p><p><strong>Future development fund decentralization “Those who invest in the DKG shall build the DKG” — 60,000,000 $TRAC allocated using the Bitcoin schedule over X years with the </strong><a href="https://github.com/OriginTrail/OT-RFC-repository/blob/main/RFCs/OT-RFC-21_Collective_Neuro-Symbolic_AI/OT-RFC-21%20Collective%20Neuro-Symbolic%20AI.pdf"><strong>Collective Programmatic Treasury (CPT</strong></a><strong>)</strong></p><p><strong>Autonomous Decentralized Knowledge Inferencing</strong></p><ul><li>Knowledge graph reasoning</li><li>Graph neural network framework</li><li>Neuro-symbolic inferencing combining GenAI with symbolic AI</li></ul><p><strong>Autonomous Knowledge Mining</strong></p><ul><li>Autonomous knowledge publishing with DKG inferencing</li><li>Additional AI-agent integrations</li></ul><p><strong>Extending DKG-powered AI Agents to the physical world through robotics</strong></p><h3><strong>Collective Neuro-Symbolic AI (DKG) adoption 2025 +</strong></h3><ul><li>Autonomous AI agents</li><li>Decentralized science (DeSci)</li><li>Robotics and manufacturing (DePin)</li><li>Financial industry</li><li>Autonomous supply chains supported by Global Standards</li><li>Construction</li><li>Life sciences and healthcare</li><li>Collaboration with internationally 
recognized pan-European AI network of excellence (EU supported)</li><li>Metaverse and entertainment</li><li>Doubling down on OriginTrail ecosystem inclusivity</li><li>Activating the Collective Programmatic Treasury</li><li>Driving a safe Internet in the age of AI inclusively with the leading entities in the industry</li></ul><p>*The list is non-exhaustive</p><p>👇 More about OriginTrail 👇</p><p><a href="https://origintrail.io/"><strong>Web</strong></a><strong> | </strong><a href="https://twitter.com/origin_trail"><strong>Twitter</strong></a><strong> | </strong><a href="https://facebook.com/origintrail"><strong>Facebook</strong></a><strong> | </strong><a href="https://t.me/origintrail_info"><strong>Telegram</strong></a><strong> | </strong><a href="https://linkedin.com/company/origintrail/"><strong>LinkedIn</strong></a><strong> | </strong><a href="https://github.com/origintrail"><strong>GitHub</strong></a><strong> | </strong><a href="https://discord.com/invite/xCaY7hvNwD"><strong>Discord</strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=563455b8179b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/2025-roadmap-update-synergy-of-ai-agents-and-autonomous-dkg-563455b8179b">2025 Roadmap update: Synergy of AI agents and autonomous DKG</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board]]></title>
            <link>https://medium.com/origintrail/trace-labs-core-developers-of-origintrail-welcomes-fady-mansour-to-the-advisory-board-4f455ad97626?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/4f455ad97626</guid>
            <category><![CDATA[advisory-board]]></category>
            <category><![CDATA[trace-labs]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 09 Jan 2025 16:38:35 GMT</pubDate>
            <atom:updated>2024-12-17T15:52:55.706Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zpACifAPKgLT1nL4Mq62Dg.gif" /></figure><p>Trace Labs, the core development company behind the <a href="https://origintrail.io/">OriginTrail ecosystem</a>, is pleased to announce the expansion of its advisory board with the addition of Fady Mansour, a lawyer and partner at Friedman Mansour LLP and Managing Partner at Ethical Capital Partners. With his breadth of experience, Mr. Mansour brings important expertise in regulatory matters, particularly in online data protection.</p><p>In his advisory role, Mr. Mansour will provide strategic guidance to bolster OriginTrail’s role in combating illicit online content, safeguarding intellectual property, and fostering reliable AI applications for a safer digital landscape as the ecosystem pursues Internet scale.</p><p>The OriginTrail ecosystem, powered by decentralized knowledge graph technology, is dedicated to promoting responsible AI and sustainable technology adoption. By joining the advisory board, Mr. Mansour will be instrumental in shaping Trace Labs’ mission to drive ethical, human-centric technological innovation across industries.</p><p>With this addition, the Trace Labs advisory board comprises:</p><ul><li>Dr. 
Bob Metcalfe, Ethernet founder, Internet pioneer and 2023 Turing Award Winner;</li><li>Greg Kidd, Hard Yaka founder and investor;</li><li>Ken Lyon, global expert on logistics and transportation;</li><li>Chris Rynning, Managing Partner at AMYP Ventures AG (part of a Piëch/Porsche Family Office);</li><li>Toni Piëch, Founder &amp; Chair of the Board at Toni Piëch Foundation &amp; Piëch Automotive;</li><li>Fady Mansour, Managing Partner at Ethical Capital Partners.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IwgdpOAN9HOQzZdG94KM3A.jpeg" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4f455ad97626" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/trace-labs-core-developers-of-origintrail-welcomes-fady-mansour-to-the-advisory-board-4f455ad97626">Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS…]]></title>
            <link>https://medium.com/origintrail/bridging-trust-between-humans-and-ai-agents-with-decentralized-knowledge-graph-dkg-and-elizaos-14aab4c32701?source=rss----d4d7f6d41f7c---4</link>
            <guid isPermaLink="false">https://medium.com/p/14aab4c32701</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[decentralization]]></category>
            <category><![CDATA[knowledge-graph]]></category>
            <dc:creator><![CDATA[OriginTrail]]></dc:creator>
            <pubDate>Thu, 09 Jan 2025 16:35:08 GMT</pubDate>
            <atom:updated>2025-01-09T16:08:30.695Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS framework</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3LEMSpsht13jhAHpo26yBw.png" /></figure><p>In the realm of artificial intelligence (AI), particularly in robotics, trust is not just a luxury — it’s a necessity. The Three Laws of Robotics, conceptualized by the visionary Isaac Asimov, provide a well-known foundational ethical structure for robots:</p><ol><li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li><li>A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.</li><li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.</li></ol><p>Ensuring these laws are adhered to in practice requires more than just programming; it necessitates a system where the knowledge upon which AI agents operate is transparent, verifiable, and trusted. This is where OriginTrail Decentralized Knowledge Graph (DKG) comes into play, offering a groundbreaking approach to enhancing the trustworthiness of AI.</p><h3><strong>Transparency and verifiability</strong></h3><p>One of the key aspects of the DKG is its capacity for transparency. By organizing AI-grade Knowledge Assets (KAs) in a decentralized manner, DKG ensures that the data AI agents use to make decisions can be traced back to their origins, with any tampering or modifications of that data being transparently recorded and verifiable on the blockchain. 
This is crucial for the First Law, where transparency in data sourcing can prevent AI from making decisions that might harm humans due to incorrect or biased information.</p><h3><strong>Ownership and control</strong></h3><p>The DKG allows each Knowledge Asset to be associated with a non-fungible token (NFT), providing clear ownership and control over the information. This aspect directly impacts how AI agents adhere to the Second Law. By allowing agents to own their knowledge, the DKG empowers them to respond to human commands based on a robust, reliable data set that they control, ensuring they follow human directives while adhering to the ethical boundaries set by the laws. This capability also allows agents to monetize Knowledge Assets that they have created (i.e., charge other agents, AI or human, for access to their structured data), enabling agents’ economic independence.</p><h3><strong>Contextual understanding and decision-making</strong></h3><p>The semantic capabilities of the DKG provide AI with a richer context for understanding the world — an ontological, symbolic world model to complement GenAI inferencing, which is vital for the Third Law. The interconnected nature of knowledge in the DKG means it is better contextualized, allowing AI to make decisions with a comprehensive view of the situation. For example, understanding the broader implications of self-preservation in contexts where human safety is paramount ensures that robots do not prioritize their existence over human well-being.</p><h3><strong>Building trust through decentralization</strong></h3><p>Decentralization is at the heart of the DKG’s effectiveness in fostering trust:</p><ul><li><strong>Avoiding centralized control:</strong> Traditional centralized databases can be points of failure or manipulation, especially in multi-agent scenarios. In contrast, the DKG distributes control, reducing the risk of misuse or bias in AI decision-making. 
This decentralized approach helps build a collective, trustworthy intelligence that aligns with human values and safety.</li><li><strong>Community contribution:</strong> DKG facilitates a crowdsourced approach to knowledge, where contributions from various stakeholders can enrich the AI’s understanding of ethical and practical scenarios, further aligning AI behavior with the Three Laws. This community aspect also encourages ongoing vigilance and updates to the knowledge base, ensuring AI systems remain relevant and safe.</li></ul><h3><strong>Grow and read AI Agents’ minds with the ChatDKG framework powered by DKG and ElizaOS</strong></h3><p>The upgrade of ChatDKG marks a pioneering moment, combining the power of the OriginTrail Decentralized Knowledge Graph (DKG) with the ElizaOS framework to create the first AI agent of its kind. <strong>Empowered by DKG, </strong><a href="https://x.com/ChatDKG"><strong>ChatDKG</strong></a><strong> utilizes the DKG as collective memory to store and retrieve information in a transparent, verifiable manner</strong>, allowing for an unprecedented level of interaction where <strong>humans can essentially “read the AI’s mind” by accessing its data and thought processes</strong>. This unique feature not only enhances transparency but also fosters trust between humans and AI.</p><p>The integration with ElizaOS is based on a <a href="https://github.com/branarakic/elizagraph/"><strong>dedicated DKG plugin</strong></a>, with which <strong>ElizaOS agents can create contextually rich knowledge graph memories</strong>, storing structured information about their experiences, insights, and decisions. These memories can be shared and made accessible across the DKG network, forming a collective pool of knowledge graph memories. This allows individual agents to access, analyze, and learn from the experiences of other agents, creating a dynamic ecosystem where collaboration drives network effects between memories. 
See an example memory knowledge graph created by the ChatDKG agent <a href="https://dkg.origintrail.io/explore?ual=did:dkg:otp:2043/0x8f678eb0e57ee8a109b295710e23076fa3a443fe/516350">here</a>.</p><p>Tapping into collective memory will be enhanced with strong agent reputation systems and robust knowledge graph verification mechanisms. Agents can assess the trustworthiness of shared memories, avoiding hallucinations or false data while making decisions. This not only enables more confident and precise decision-making but also empowers agent swarms to operate with unprecedented coherence and accuracy. Whether predicting trends, solving complex problems, or coordinating large-scale tasks, agents will be able to achieve a new level of intelligence and reliability.</p><p>Yet, this is only the beginning of the journey toward “collective neuro-symbolic AI,” where the synthesis of symbolic reasoning and deep learning, enriched by shared, verifiable knowledge, will redefine the boundaries of artificial intelligence. The possibilities for collaborative intelligence are limitless, paving the way for systems that think, learn, and evolve together.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3LEMSpsht13jhAHpo26yBw.png" /></figure><p>Moreover, ChatDKG invites users to contribute to its memory base, growing and refining its knowledge through direct interaction. 
This interactive approach leverages the ElizaOS framework’s capabilities to ensure that each exchange informs the AI and enriches its understanding, making it a dynamic participant in the evolving landscape of knowledge.</p><p><strong>Talk to the ChatDKG AI agent on X to grow and read its memory!</strong></p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/drevziga/status/1877248496392106186%3Fs%3D46%26t%3D4AtYZrzCvRHbAU0hd4vFUA&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/83c4ae8736801f4a3b17de1baaa79040/href">https://medium.com/media/83c4ae8736801f4a3b17de1baaa79040/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=14aab4c32701" width="1" height="1" alt=""><hr><p><a href="https://medium.com/origintrail/bridging-trust-between-humans-and-ai-agents-with-decentralized-knowledge-graph-dkg-and-elizaos-14aab4c32701">Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS…</a> was originally published in <a href="https://medium.com/origintrail">OriginTrail</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>