<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Leonardo&#039;s Blog</title>
	<atom:link href="https://blog.chiariglione.org/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.chiariglione.org/</link>
	<description>Leonardo&#039;s views on MPAI, MPEG, ISO and a lot more...</description>
	<lastBuildDate>Sat, 21 Mar 2026 08:47:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>MPAI as a Service (MaaS) for a new generation of intelligent services</title>
		<link>https://blog.chiariglione.org/mpai-as-a-service-maas-for-a-new-generation-of-intelligent-services/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Sat, 21 Mar 2026 08:47:32 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3590</guid>

					<description><![CDATA[<p>The 66th MPAI General Assembly (MPAI-66) has approved the publication of the “MPAI as a Service” Call for Technologies. To get a proper understanding of the positioning of this new standard in the MPAI Ecosystem, we should recall the basic elements of the AI Framework (MPAI-AIF) and the Governance of the MPAI Ecosystem (MPAI-GME) standards. The [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/mpai-as-a-service-maas-for-a-new-generation-of-intelligent-services/">MPAI as a Service (MaaS) for a new generation of intelligent services</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p id="isPasted" class="default">The 66<sup>th</sup> MPAI General Assembly (MPAI-66) has approved the publication of the “MPAI as a Service” Call for Technologies. To get a proper understanding of the positioning of this new standard in the MPAI Ecosystem, we should recall the basic elements of the AI Framework (MPAI-AIF) and the Governance of the MPAI Ecosystem (MPAI-GME) standards. The former specifies an environment where it is possible to initialise, dynamically configure, and control AI applications called AI Workflows (AIW) composed of connected processing elements called AI Modules (AIM). MPAI-AIF specifies two profiles – a Basic and a Security Profile.</p>
<p class="default">Figure 1 depicts the MPAI-AIF Basic Profile Reference Model. You can see the Controller – the brain of the system – and the MPAI-AIF APIs enabling the Controller to obtain AIWs/AIMs from the MPAI Store, the place where implementers can submit their implementations for distribution after they have been tested for conformance with the standard and verified for security. Once the AI Framework is equipped with the desired domain-specific processing capabilities, the User Agent can activate the Controller, and the AIMs can call it via the appropriate APIs.</p>
<p><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-39195 aligncenter" src="https://mpai.community/wp-content/uploads/2026/03/Immagine1.png" alt="" width="679" height="357" /></p>
<p style="text-align: center;"><em>Figure 1 &#8211; Reference Model of MPAI-AIF Basic Profile</em></p>
<p>Let’s explore how things can unfold in this new scenario.<br />
<a name="_Toc220702678"></a><strong>1       Creation of infrastructure</strong></p>
<p>Creation of infrastructure is the responsibility of the deployment/control plane; this separation prevents the control plane from accessing the application data plane and vice versa. The REST API protocol is used to specify the steps.</p>
<p><a name="_Toc220702679"></a><strong>1.1      Connection to the SCI</strong></p>
<p>SCI specifies the required security protocols that the RCA must employ for authentication and authorisation purposes. AIF should include an illustrative list of security protocols (basic, digest, bearer). The connection is required by all subsequent steps and must be secured using one of the proposed security schemes described in the End Point Open API.</p>
<p><a name="_Toc220702680"></a><strong>1.2      Creation of an SCI</strong></p>
<p>RCA asks the AIF end point for the creation of one or more SCIs to which all subsequent AIF API requests will be issued. The objective of SCI creation is the acquisition of an SCI identity for use in subsequent API requests to identify the intended SCIs among the many to which the message will be directed.</p>
<p><a name="_Toc220702681"></a><strong>1.3      Workflow discovery</strong></p>
<p>RCA submits a request to the Server API for AIW matching and discovery. The resulting collection of Workflow Descriptions is returned to the RCA for ultimate selection.</p>
<p><a name="_Toc220702682"></a><strong>1.4      Launch of the desired AI Workflow</strong></p>
<p>RCA submits a request to the SCI through the AIF end point for the launch of the desired AIW(s). The objective of Workflow launch is the acquisition of a Remote Workflow Instance (RWI) identity for use in subsequent API requests for identification of the intended AIW among the many with which input/output messages will be exchanged.</p>
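<p>The control-plane steps above (1.1–1.4) can be sketched as a minimal client that records the REST requests an RCA would issue; the endpoint paths, field names, and bearer-token scheme are illustrative assumptions, not part of the Call:</p>

```python
from dataclasses import dataclass, field

@dataclass
class ControlPlaneClient:
    """Records the control-plane requests an RCA would issue (hypothetical paths)."""
    base_url: str
    bearer_token: str
    log: list = field(default_factory=list)

    def _request(self, method, path, body=None):
        # Build the REST request that would be sent over HTTPS and keep a log of it.
        req = {"method": method, "url": self.base_url + path,
               "headers": {"Authorization": "Bearer " + self.bearer_token},
               "body": body or {}}
        self.log.append(req)
        return req

    def create_sci(self):
        # 1.2 - request creation of an SCI; the response would carry the SCI identity.
        return self._request("POST", "/aif/sci")

    def discover_workflows(self, query):
        # 1.3 - ask for matching AIWs; Workflow Descriptions come back for selection.
        return self._request("GET", "/aif/workflows?q=" + query)

    def launch_workflow(self, sci_id, aiw_name):
        # 1.4 - launch the chosen AIW on the SCI; the response would carry the RWI identity.
        return self._request("POST", "/aif/sci/" + sci_id + "/workflows",
                             {"aiw": aiw_name})
```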
<p><a name="_Toc220702683"></a><strong>2       Message Exchange</strong></p>
<p>Application data exchange is the responsibility of the application data plane, thus ensuring that application data is not exposed to the control plane. The REST API protocol is used to specify the steps.</p>
<p><a name="_Toc220702684"></a><strong>2.1      Delivery of messages to the input ports of the AI Workflow</strong></p>
<p>RCA submits requests to the above-identified SCI, through the AIF end point for the delivery of AIF Messages containing application data to the desired input port(s) of the RWI(s).</p>
<p><a name="_Toc220702685"></a><strong>2.2      Reception of messages from the output ports of the AI Workflow</strong></p>
<p>RCA may submit requests to the above-identified SCI through the AIF end point for the reception of AIF Messages from the desired output port(s) of the launched RWI(s). The RCA makes provision for asynchronous delivery of the response when required.</p>
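<p>A minimal sketch of the data-plane exchange (2.1–2.2), with an in-memory queue standing in for the SCI transport; the message structure and port naming are assumptions made for illustration:</p>

```python
def make_aif_message(rwi_id, port, payload):
    """Wrap application data for exchange with a port of a Remote Workflow Instance."""
    return {"rwi": rwi_id, "port": port, "payload": payload}

def deliver(queue, message):
    # 2.1 - push the message toward the indicated input port of the RWI.
    queue.setdefault((message["rwi"], message["port"]), []).append(message["payload"])

def receive(queue, rwi_id, port):
    # 2.2 - poll an output port; None models the asynchronous case where the
    # response is not yet available and must be delivered later.
    items = queue.get((rwi_id, port), [])
    return items.pop(0) if items else None
```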
<p><a name="_Toc220702686"></a><strong>3       Termination of infrastructure</strong></p>
<p>Termination of infrastructure is the responsibility of the deployment/control plane; this separation prevents the control plane from accessing the application data plane and vice versa. The REST API protocol is used to specify the steps.</p>
<p><a name="_Toc220702687"></a><strong>3.1      Termination of the AI Workflow</strong></p>
<p>RCA submits requests to the SCI through the AIF end point for the termination of the RWI(s).</p>
<p><a name="_Toc220702688"></a><strong>3.2      Release of the AIF Controller</strong></p>
<p>RCA submits requests to the AIF end point for the termination of the above-identified SCI(s).</p>
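<p>The teardown order (3.1 before 3.2) can be sketched as follows; the DELETE paths are illustrative assumptions:</p>

```python
def terminate_requests(base_url, sci_id, rwi_ids):
    """Return the teardown requests in order: 3.1 terminate each RWI on the SCI,
    then 3.2 release the SCI itself. Paths are illustrative."""
    reqs = [("DELETE", base_url + "/aif/sci/" + sci_id + "/rwi/" + r) for r in rwi_ids]
    reqs.append(("DELETE", base_url + "/aif/sci/" + sci_id))
    return reqs
```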
<p>Figure 2 depicts an initial Reference Model of MPAI as a Service.</p>
<p><img decoding="async" class="alignnone size-full wp-image-39197 aligncenter" src="https://mpai.community/wp-content/uploads/2026/03/Immagine2.png" alt="" width="975" height="367" /></p>
<p style="text-align: center;"><em>Figure 2 &#8211; MPAI as a Service Reference Model</em></p>
<p>An overview of the complete workflow is given by:</p>
<ol>
<li>The RCA issues a request through the API Client to the API Server for the creation of an SCI.</li>
<li>The API Server acts as a local User Agent of a Controller.</li>
<li>The API Server returns the ID (created by the API Server) of the newly created SCI to the RCA.</li>
<li>The RCA issues a request via the API Client through the API Server to the indicated SCI for the instantiation of a named AIW (RWI).</li>
<li>The SCI retrieves the named AIW metadata (describing the AIW) from the MPAI Store and then parses, retrieves, and installs the referenced packages as required for the instantiation of the AIW.</li>
<li>The MPAI Store receives requests from the SCI for delivery of AIW metadata and the subordinate packages that collectively describe the complete AIW.</li>
<li>The MPAI Store returns the requested elements if it possesses them, otherwise it issues requests to the appropriate remote repositories so as to retrieve the missing elements. The MPAI Store could be:
<ol>
<li>As simple as a stand-alone web server responding to HTTP Get requests.</li>
<li>Based on a distributed file system management service, such as HDFS and its variations.</li>
<li>Based on a standard cloud object management and delivery service, such as Amazon S3 or OpenStack Swift.</li>
<li>Fronted by an object authenticity management framework, such as The Update Framework.</li>
<li>Any combination or variation of the above.</li>
</ol>
</li>
<li>The API Server returns to the RCA the AIW ID provided by the SCI.</li>
<li>The RCA issues a request via the API Client through the API Server to the indicated SCI for delivery of the accompanying <u>input</u> data message to the specified Port of an AIM of the indicated AIW.</li>
<li>The RCA issues a request via the API Client through the API Server to the indicated SCI for reception of an <u>output</u> data message from the specified Port of an AIM of the indicated AIW.</li>
<li>The API Server returns to the RCA the output data message received from the specified Port of an AIM, as provided by the indicated SCI.</li>
<li>The RCA issues a request via the API Client through the API Server to the indicated SCI for the termination of the RWI.</li>
<li>The RCA issues a request via the API Client to the API Server for the termination of the indicated SCI.</li>
</ol>
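<p>The MPAI Store behaviour in step 7 amounts to a cache-with-fallback pattern, sketched below; the package names and the dictionary-based repositories are assumptions for illustration only:</p>

```python
class MpaiStore:
    """Serves elements it possesses; otherwise pulls them from remote repositories."""

    def __init__(self, local, repositories):
        self.local = dict(local)          # elements the Store already possesses
        self.repositories = repositories  # ordered remote sources (dict-like)

    def get(self, name):
        if name in self.local:
            return self.local[name]
        for repo in self.repositories:
            if name in repo:
                # Retrieve the missing element and retain it for future requests.
                self.local[name] = repo[name]
                return repo[name]
        raise KeyError(name)
```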
<p>To leverage the availability of AIMs and AIWs from various sources, MaaS requires that:</p>
<ul>
<li>Access to the MPAI Store be ubiquitous to support envisaged application scenarios.</li>
<li>The highest level of authorisation be guaranteed by the SCI to an RCA when accessing AI Workflows and their constituent components.</li>
<li>The highest level of authenticity control be exercised by the SCI on AI Workflows and their constituent packaged components.</li>
</ul>
<p>MPAI-66 has issued a Call for Technologies requesting interested parties to propose:</p>
<ul>
<li>An architecture for the management of the MPAI Store and the subordinate distributed repositories.</li>
<li>Protocol(s) that are considered suitable for supporting the above requirements.</li>
<li>Alternatively, a single interface enabling SCIs to access a plurality of repositories each supporting different protocols.</li>
<li>If needed, proposals for revision of the MPAI-AIF Basic API, to accommodate requirements of the proposed technologies.</li>
</ul>
<p>Solutions proposed may be original, or rely on existing technologies, or be any integration thereof.</p>
<p><strong>The MaaS Call will be presented</strong> at two online events held on 2026/03/30 at 8:00 UTC (<a href="https://us06web.zoom.us/meeting/register/DjVI4RpQRxG-cIzfqU0iFg" target="_blank" rel="noopener noreferrer" type="absoluteLink">register here</a> to attend) and 15:00 UTC (<a href="https://us06web.zoom.us/meeting/register/qqDjJ8XQQYWHIHGW2x7xYQ" target="_blank" rel="noopener noreferrer" type="absoluteLink">register here</a> to attend).</p>
<p>The post <a href="https://blog.chiariglione.org/mpai-as-a-service-maas-for-a-new-generation-of-intelligent-services/">MPAI as a Service (MaaS) for a new generation of intelligent services</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Improved Health Services with AI</title>
		<link>https://blog.chiariglione.org/improved-health-services-with-ai/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Wed, 04 Feb 2026 10:37:29 +0000</pubDate>
				<category><![CDATA[MPAI]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3584</guid>

					<description><![CDATA[<p>The 64th MPAI General Assembly has approved publication of Technical Specification: AI for Health (MPAI-AIH) – Health Secure Platform (AIH-HSP) V1.0 with a request for Community Comments to be received by the MPAI Secretariat by 16 March 2026. This paper gives an overview of the proposed standard introduction to help those who wish to review [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/improved-health-services-with-ai/">Improved Health Services with AI</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The 64<sup>th</sup> MPAI General Assembly has approved publication of Technical Specification: AI for Health (MPAI-AIH) – Health Secure Platform (AIH-HSP) V1.0 with a request for Community Comments to be received by the <a href="mailto:secretariat@mpai.community">MPAI Secretariat</a> by 16 March 2026. This paper gives an overview of the proposed standard to help those who wish to review and comment on AIH-HSP.</p>
<p>The Health Secure Platform specifies the architecture of a platform offering health-related services enabling the following functionalities:</p>
<ol>
<li>End Users use AIH-HSP Apps running on their Front Ends (personal devices) to acquire Health Data.</li>
<li>Health Data, combined with an associated Model Licence, are called AIH Data.</li>
<li>AIH Data is uniquely identified.</li>
<li>AIH Data is processed by the Front End using an instance of the MPAI-specified AI Framework (MPAI-AIF).</li>
<li>Front End processes AIH Data using AI-for-Health-recommended AI Modules (AIM) downloaded from the MPAI Store.</li>
<li>Neural Networks in AIMs continually learn while making inferences on AIH Data.</li>
<li>Un-processed and Processed AIH Data are uploaded to the AI Back End.</li>
<li>Back End stores the Model Licence as a Smart Contract on a Blockchain associated with the Back End.</li>
<li>A Smart Contract ID is added to the AIH Data.</li>
<li>The Smart Contract governs the use that is made of the AIH Data stored on the Back End.</li>
<li>Depending on the relevant Smart Contract, an instance of AIH Data stored on the Back End may be processed by the Back End itself and Third-Party Users.</li>
<li>The Back End may process End Users&#8217; AIH Data in its local AI Framework-based AI Data Processing AIM.</li>
<li>A rich AIH Taxonomy includes:
<ol>
<li>AIH Data Classes (currently: ECG, EEG, Genomics, and Medical Images).</li>
<li>AIH Data Users (currently: End User, Non-Profit Entity, Profit Entity, Clinical Entity, Authorised Entity, Caregiver).</li>
<li>AIH Data Statuses (currently: Anonymised, Pseudonymised, Identified).</li>
<li>AIH Data Usages (currently: Unrestricted, Pseudonymised, Anonymised, Research, Patient use, Health care).</li>
<li>AIH Data Processing Types (currently: ECG, EEG, Genomics, Medical Images).</li>
<li>Anonymisation/De-Identification Algorithms.</li>
<li>Anomaly Types.</li>
</ol>
</li>
</ol>
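<p>Part of the AIH Taxonomy above can be written as enumerations; the Python names are an illustrative encoding, not defined by AIH-HSP:</p>

```python
from enum import Enum

class AIHDataClass(Enum):
    ECG = "ECG"
    EEG = "EEG"
    GENOMICS = "Genomics"
    MEDICAL_IMAGES = "Medical Images"

class AIHDataStatus(Enum):
    ANONYMISED = "Anonymised"
    PSEUDONYMISED = "Pseudonymised"
    IDENTIFIED = "Identified"

class AIHDataUser(Enum):
    END_USER = "End User"
    NON_PROFIT_ENTITY = "Non-Profit Entity"
    PROFIT_ENTITY = "Profit Entity"
    CLINICAL_ENTITY = "Clinical Entity"
    AUTHORISED_ENTITY = "Authorised Entity"
    CAREGIVER = "Caregiver"
```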
<p>Figure 1 depicts the Health Secure Platform specified by AI for Health. At the centre there is the Back End, to which Front Ends and Third-Party Users are connected. The MPAI Store enables the Back End and Front Ends to access the AI Modules they need for their processing. The Blockchain manages the licensing terms provided to it by the Model Licence.</p>
<p><img decoding="async" class="size-full wp-image-37577 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/Immagine1.png" alt="" width="885" height="369" /></p>
<p style="text-align: center;"><em>Figure 1 &#8211; General Model of AIH-HSP V1.0</em></p>
<p>Figure 2 depicts the architecture of the AIH Back End where Back End, End User, Blockchain, and Third-Party Users perform operations.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-37578 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/Immagine2.png" alt="" width="624" height="384" /></p>
<p style="text-align: center;">Figure 2 &#8211; Reference Model of the Health Back End (AIH-HBE) AIW</p>
<ol>
<li>Back End accesses the MPAI Store and downloads the AIMs required for its operation.</li>
<li>User Registration
<ol>
<li>A User wishing to access the Back End sends a Registration Request containing their Personal Profile and the list of Services they intend to access.</li>
<li>Back End provides the Tokens enabling the requesting User to access the corresponding Services.</li>
</ol>
</li>
<li>Storage of AIH Data
<ol>
<li>End User uploads AIH Data.</li>
<li>HBE Data Processing
<ol>
<li>Extracts Model Licence from AIH Data.</li>
<li>Issues Blockchain Licence Request to Blockchain.</li>
</ol>
</li>
<li>Blockchain
<ol>
<li>Converts Model Licence to a Smart Contract.</li>
<li>Responds with a Blockchain Licence Response.</li>
</ol>
</li>
<li>HBE Data Processing
<ol>
<li>Attaches Blockchain Licence ID to AIH Data.</li>
<li>Stores AIH Data in Secure Storage.</li>
</ol>
</li>
<li>De-Identification/Anonymisation (DIA) of AIH Data
<ol>
<li>End User sends a DIA Request.</li>
<li>HBE Data Processing
<ol>
<li>Retrieves relevant AIH Data from Secure Storage.</li>
<li>(Pseudo-)Anonymises AIH Data.</li>
<li>Stores (Pseudo-)Anonymised AIH Data back to Secure Storage.</li>
<li>Responds with a DIA Response.</li>
</ol>
</li>
<li>AIH Data Processing
<ol>
<li>User sends AIH Process Request.</li>
<li>HBE Data Processing sends a Licence Confirm Request to the Blockchain.</li>
<li>Blockchain responds with a Licence Confirm Response.</li>
<li>HBE Data Processing
<ol>
<li>Performs the requested Processing, if this is included in the Licence.</li>
<li>Stores the Processed AIH Data as new AIH Data.</li>
<li>Responds with an AI Data Process Response.</li>
</ol>
</li>
<li>Audit
<ol>
<li>End User sends Audit Request.</li>
<li>Auditing
<ol>
<li>Retrieves relevant Confirmation Responses to verify that all Processing was performed according to the Licence terms.</li>
<li>Responds with Audit Response.</li>
</ol>
</li>
<li>Federated Learning
<ol>
<li>Federated Learning sends Federated Learning Request to all Health Front Ends.</li>
<li>Health Front Ends provide the NN Models.</li>
<li>Federated Learning
<ol>
<li>Develops and uploads the new NN Model to the MPAI Store.</li>
<li>Sends Federated Learning Response to Health Front Ends.</li>
</ol>
</li>
<li>Front Ends download the new NN Model from the MPAI Store.</li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
</ol>
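<p>The licence check in step 6 – perform only the Processing that the Smart Contract permits, then store the result as new AIH Data – can be sketched as follows; the data shapes are assumptions for illustration:</p>

```python
def process_request(aih_data, requested_processing, smart_contracts):
    """Perform the requested Processing only if the Smart Contract permits it;
    the result is stored as new AIH Data under the same Licence."""
    contract = smart_contracts[aih_data["licence_id"]]
    if requested_processing not in contract["permitted"]:
        return {"status": "denied"}
    new_data = {"parent": aih_data["id"],
                "processing": requested_processing,
                "licence_id": aih_data["licence_id"]}
    return {"status": "ok", "new_aih_data": new_data}
```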
<p>Figure 3 depicts the Reference Architecture of the Health Front End (AIH-HFE) where Front End and End User perform operations.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-37579 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/Immagine3.png" alt="" width="624" height="258" /></p>
<p style="text-align: center;">Figure 3 – Reference Model of the Health Front End (AIH-HFE) AIW</p>
<ol>
<li>End User registers with HFE and HBE.</li>
<li>End User acquires Health Data with a Health Device and provides Model Licence.</li>
<li>Model Licencing AIM attaches Model Licence to Health Data, produces AIH Data, and stores AIH Data.</li>
<li>End User processes AIH Data locally.</li>
<li>End User stores AIH Data to HFE.</li>
<li>End User processes AIH Data remotely on the Back End.</li>
<li>HFE receives Federated Learn request.</li>
<li>HFE sends the NN Model trained since last Federated Learn request to HBE.</li>
</ol>
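<p>The federated-learning exchange in steps 7–8 can be sketched as a single aggregation round, with plain lists of numbers standing in for NN Model weights (an illustrative simplification):</p>

```python
def federated_round(front_end_models):
    """Average the weights returned by the Health Front Ends into a new NN Model."""
    n = len(front_end_models)
    size = len(front_end_models[0])
    return [sum(model[i] for model in front_end_models) / n for i in range(size)]
```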
<p>The AIH-HSP V1.0 standard is <a href="https://mpai.community/standards/mpai-aih/hsp/v1-0/">available</a>. An online presentation will be made on 2026/02/09 at 15:00 UTC. Register to attend.</p>
<p>Comments on AIH-HSP V1.0 shall reach the <a href="mailto:secretariat@mpai.community">MPAI Secretariat</a> by 2026/03/16.</p>
<p>&nbsp;</p>
<p>The post <a href="https://blog.chiariglione.org/improved-health-services-with-ai/">Improved Health Services with AI</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Walk inside the Autonomous User</title>
		<link>https://blog.chiariglione.org/a-walk-inside-the-autonomous-user/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Fri, 09 Jan 2026 11:33:00 +0000</pubDate>
				<category><![CDATA[MPAI]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3581</guid>

					<description><![CDATA[<p>Table of Content A standard for the Autonomous User Architecture A-User Control: The Autonomous Agent’s Brain Context Capture: The A-User’s First Glimpse of the World Audio Spatial Reasoning: The Sound-Aware Interpreter Visual Spatial Reasoning: The Vision‑Aware Interpreter Prompt Creation: Where Words Meet Context Domain Access: The Specialist Brain Plug-in for the Autonomous User Basic Knowledge: [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/a-walk-inside-the-autonomous-user/">A Walk inside the Autonomous User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;"><strong>Table of Contents</strong></p>
<ol>
<li><a href="#AUA">A standard for the Autonomous User Architecture</a></li>
<li><a href="#AUC">A-User Control: The Autonomous Agent’s Brain</a></li>
<li><a href="#CXC">Context Capture: The A-User’s First Glimpse of the World</a></li>
<li><a href="#ASR">Audio Spatial Reasoning: The Sound-Aware Interpreter</a></li>
<li><a href="#VSR">Visual Spatial Reasoning: The Vision‑Aware Interpreter</a></li>
<li><a href="#PRC">Prompt Creation: Where Words Meet Context</a></li>
<li><a href="#DAC">Domain Access: The Specialist Brain Plug-in for the Autonomous User</a></li>
<li><a href="#BKN">Basic Knowledge: The Generalist Engine Getting Sharper with Every Prompt</a></li>
<li><a href="#USR">User State Refinement: Turning a Snapshot into a Full Profile</a></li>
<li><a href="#PAL">Personality Alignment: The Style Engine of A-User</a></li>
<li><a href="#AUF">A-User Formation: Building the A-User</a></li>
</ol>
<p style="text-align: center;"><strong><a id="AUA"></a>A standard for the Autonomous User Architecture</strong></p>
<p>MPAI has developed <a href="https://mpai.community/standards/">15 standards</a> to facilitate componentisation of AI applications. One of them is <a href="https://mpai.community/standards/mpai-mmm/tec/v2-1/">MPAI Metaverse Model &#8211; Architecture</a> (MMM-TEC), currently at Version 2.1. MMM-TEC assumes that a Metaverse Instance (M-Instance) is populated by Processes performing Actions on Items, either directly or indirectly by requesting another Process to perform Actions on their behalf. The requested Process performs the Action if the requesting and requested Processes have the Rights to do so. A Process Action is the means by which a Process makes such requests.</p>
<p>A particularly important Process is the User. This may be driven directly by a human (in which case it is called H-User) or may operate autonomously (in which case it is called A-User) performing Actions and requesting Process Actions.</p>
<p>MMM-TEC provides the technical means for an H-User to act in an M-Instance. An A-User can use the same means to act but it currently does not provide the means to <em>decide</em> what (Process) Actions to perform. Such means are vitally important for an A-User to achieve autonomous agency and thus make M-Instances more attractive places for humans to visit and settle.</p>
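<p>The Rights rule described above can be sketched as a single check; modelling Rights as (Process, Action) pairs is an illustrative simplification of MMM-TEC:</p>

```python
def process_action(requesting, requested, action, rights):
    """Perform the Action only if both the requesting and the requested
    Processes hold the Rights to do so."""
    if (requesting, action) in rights and (requested, action) in rights:
        return requested + " performs " + action + " for " + requesting
    return "denied"
```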
<p>After long discussions, MPAI has initiated the Performing Goal in metaverse (MPAI-PGM) project. The first subproject is called Autonomous User Architecture (PGM-AUA). Currently, this includes the following documents: a <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/call-for-technologies/">Call for Technologies</a> (per the MPAI process, a technical standard is developed from the responses received to a Call) accompanied by <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/use-cases-and-functional-requirements">Use Cases and Functional Requirements</a> (what the standard is expected to do), a <a href="https://mpai.community/standards/mpai-pgm-aua/v1-0/framework-licence">Framework Licence</a> (guidelines for the use of Essential IPR of the Standard), and a recommended <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/template-for-responses">Template for Responses</a>.</p>
<p>The complexity of the PGM-AUA project has prompted MPAI to develop a <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/#Tentative">Tentative Technical Specification</a> (TTS). This uses the style of, but is NOT, an MPAI Technical Specification. It has been developed as a concrete example of the goal that MPAI intends to eventually achieve with the PGM-AUA project. Respondents to the Call are free to comment on, change, or extend the TTS, or to propose anything else relevant to the Call, whether related to the TTS or not.</p>
<p>Let’s have a look at the Tentative Architecture of Autonomous User.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-36778" src="https://mpai.community/wp-content/uploads/2025/12/Autonomous-User-Architecture-PGM-AUA-V1.0.png" alt="" width="1257" height="583" /></p>
<p>Anybody is entitled to respond to the Call. Responses shall be submitted to the <a href="mailto:secretariat@mpai.community">MPAI Secretariat</a> by 2026/01/21T23:59.</p>
<p>In the following, an extensive techno-conversational description of the TTS is developed.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="AUC"></a>A-User Control</strong><strong>: The Autonomous Agent’s Brain</strong></span></p>
<p><strong>A-User Control</strong> is the general commander of the A-User system, making sure the Avatar behaves like a coherent digital entity aware of the rights it can exercise in an instance of the <a href="https://mpai.community/standards/mpai-mmm/tec/v2-1/"><strong>MPAI Metaverse Model – Architecture</strong></a> (MMM-TEC) standard. The command is actuated by various signals exchanged with the AI Modules (AIM) composing the Autonomous User.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-37335 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/ca.png" alt="" width="1267" height="330" /></p>
<p>At its core, A-User Control decides <strong>what the A-User should do</strong>, <strong>which AIM should do it</strong>, and <strong>how it should do it</strong> – all while respecting the <strong>Rights</strong> held by the A-User in the metaverse and the <strong>Rules</strong> defined by it. Obviously, A-User Control either executes an Action directly or delegates another Process in the metaverse to carry it out.</p>
<p>A-User Control is not just about triggering actions. A-User Control also manages the <strong>operation of its AIMs</strong>, for instance <strong>A-User Formation</strong>, which can turn text produced by the Basic Knowledge (LLM) and the Entity Status selected by Personality Alignment into a speaking and gesturing Avatar. A-User Control sends shaping commands to A-User Formation, ensuring the Avatar’s behaviour aligns with metaverse-generated cues and contextual constraints.</p>
<p>A-User Control is not independent of human influence. The human, i.e., the A-User “owner”, can <strong>override, adjust, or steer</strong> its behaviour. This makes A-User Control a hybrid system: autonomous by design, but open to human modulation when needed.</p>
<p>The control begins when A-User Control triggers <strong>Context Capture</strong> to perceive the current M-Location – the spatial zone of the metaverse where the User is active. That snapshot, called <strong>Context</strong>, includes spatial descriptors and a readout of the human’s cognitive and emotional posture called Entity State. From there, the two <strong>Spatial Reasoning</strong> components – Audio and Visual – use Context to analyse the scene and send outputs to <strong>Domain Access</strong> and <strong>Prompt Creation</strong>, which enrich the User’s input and guide the A-User’s understanding.</p>
<p>As reasoning flows through <strong>Basic Knowledge</strong>, <strong>Domain Access</strong>, and <strong>User State Refinement</strong>, A-User Control ensures that every action, rendering, and modulation is aligned with the A-User’s operational logic.</p>
<p>In summary, the A-User Control is the executive function of the A-User: part orchestrator, part gatekeeper, part interpreter. It’s the reason the Avatar doesn’t just speak – it does so while being aware of the Context – both the spatial and User components – with purpose, permission, and precision.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="CXC"></a>Context Capture</strong><strong>: The A-User’s First Glimpse of the World</strong></span></p>
<p>Context Capture is the A-User’s sensory front-end – the AIM that opens up perception by scanning the environment and assembling a structured snapshot of what’s out there in the moment. It is the first AI Module (AIM) in the loop providing the data and setting the stage for everything that follows.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37352 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cb-1.png" alt="" width="207" height="237" /></p>
<p>When A-User Control decides it’s time to engage, it prompts Context Capture to focus on a specific <strong>M-Location</strong> – the zone where the User is active, rendering its Avatar.</p>
<p>The product of Context Capture is called <strong>Context</strong> – a time-stamped, multimodal snapshot that represents the A-User’s initial understanding of the scene. But this isn’t just raw data. Context is composed of two key ingredients:<strong> Audio-Visual Scene Descriptors </strong>and<strong> User State.</strong></p>
<p>The <strong>Audio-Visual Scene Descriptors</strong> are like a spatial sketch of the environment. They describe what’s visible and audible: objects, surfaces, lighting, motion, sound sources, and spatial layout. They provide the A-User with a sense of “what’s here” and “where things are.” But they’re not perfect. These descriptors are often shallow – they capture geometry and presence but not meaning. A chair might be detected as a rectangular mesh with four legs, but Context Capture doesn’t know if it’s meant to be sat on, moved, or ignored.</p>
<p>That’s where <strong>Spatial Reasoning</strong> comes in. Spatial Reasoning is the AIM that takes this raw spatial sketch and starts asking the deeper questions:</p>
<ul>
<li>“Which object is the User referring to?”</li>
<li>“Is that sound coming from a relevant source?”</li>
<li>“Does this object afford interaction, or is it just background?”</li>
</ul>
<p>It analyses the Context and produces an enhanced Scene Description containing a refined map of spatial relationships, referent resolutions, and interaction constraints and a set of cues that enrich the user’s input – highlighting which objects or sounds are relevant, how close they are, and how they might be used.</p>
<p>These outputs are sent downstream to <strong>Domain Access</strong> and <strong>Prompt Creation</strong>. The former refines the spatial understanding of the scene. The latter enriches the A-User’s query when it formulates the prompt to the Basic Knowledge (LLM).</p>
<p>Then there is <strong>Entity State</strong> – a snapshot of the User’s cognitive, emotional, and attentional posture. Is the User focused, distracted, curious, frustrated? Context Capture reads facial expressions, gaze direction, posture, and vocal tone to infer a baseline state. But again, it’s just a starting point. User behaviour may be nuanced, and initial readings can be incomplete, noisy or ambiguous. That’s why <strong>User State Refinement</strong> exists – to track changes over time, infer deeper intent, and guide the alignment of the A-User’s expressive behaviour done by Personality Alignment.</p>
<p>In short, Context Capture is the A-User’s <strong>first glimpse of the world</strong> – a fast, structured perception layer that’s good enough to get started, but not good enough to finish the job. It’s the launchpad for deeper reasoning, richer modulation, and more expressive interaction. Without it, the A-User would be blind. With it, the system becomes situationally aware, emotionally attuned, and ready to reason – but only if the rest of the AIMs do their part.</p>
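<p>To make this concrete, here is a minimal sketch of what a Context snapshot might look like once serialised. All field names are illustrative assumptions for this post, not the normative MPAI data formats.</p>

```python
# Hypothetical Context snapshot as Context Capture might emit it.
# Every field name here is an illustrative assumption.
context = {
    "audio_visual_scene_descriptors": {
        "objects": [
            {"id": "chair-01", "geometry": "mesh", "position": [1.2, 0.0, 3.4]},
            {"id": "table-01", "geometry": "mesh", "position": [1.5, 0.0, 3.0]},
        ],
        "sound_sources": [
            {"id": "voice-01", "kind": "speech", "position": [-2.0, 1.6, 0.5]},
        ],
    },
    # first, possibly noisy, reading of the User's internal posture
    "entity_state": {
        "attention": "focused",
        "emotion": "curious",
        "gaze_target": "chair-01",
    },
}

def detected_object_ids(ctx):
    """List the visual objects the snapshot knows about (presence, not meaning)."""
    return [o["id"] for o in ctx["audio_visual_scene_descriptors"]["objects"]]
```

<p>Note that the snapshot records geometry and presence only; deciding whether <code>chair-01</code> affords sitting is left to Spatial Reasoning downstream.</p>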
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="ASR"></a>Audio Spatial Reasoning</strong><strong>: The Sound-Aware Interpreter</strong></span></p>
<p><strong>Audio Spatial Reasoning</strong> is the A-User’s acoustic intelligence module – the one that listens, localises, and interprets sound not just as data, but as <strong>data with a spatially anchored meaning</strong>. Therefore, its role is not just about “hearing”; it is also about “understanding” <strong>where sound is coming from</strong>, <strong>how relevant it is</strong>, and <strong>what it implies</strong> in the context of the User’s intent in the environment.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37336 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cb.png" alt="" width="310" height="238" /></p>
<p>When the A-User system receives a <strong>Context</strong> snapshot from Context Capture – including audio streams with position and orientation, and a description of the User’s emotional state (called Entity State) – Audio Spatial Reasoning starts an analysis of the <strong>directionality</strong>, <strong>proximity</strong>, and <strong>semantic importance</strong> of incoming sounds. The conclusion is something like “That voice is coming from the left, with a tone of urgency, and it is directed at the A-User.”</p>
<p>All this is represented with an <em>extension</em> of the Audio Scene Descriptors describing:</p>
<ul>
<li>Which audio sources are relevant</li>
<li>Where they are located in 3D space</li>
<li>How close or far they are</li>
<li>Whether they’re foreground (e.g., a question) or background (e.g., ambient chatter)</li>
</ul>
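<p>These annotations could be computed along the following lines – a purely illustrative sketch in which the field names, the 5-metre threshold, and the foreground heuristic are all assumptions of this post:</p>

```python
import math

# Illustrative extension of Audio Scene Descriptors; names and thresholds
# are assumptions, not the normative MPAI data types.
def extend_audio_descriptors(sources, listener_pos):
    """Annotate each audio source with its distance and a foreground flag."""
    extended = []
    for s in sources:
        dx, dy, dz = (s["position"][i] - listener_pos[i] for i in range(3))
        distance = math.sqrt(dx * dx + dy * dy + dz * dz)
        extended.append({
            **s,
            "distance_m": round(distance, 2),
            # crude heuristic: nearby speech counts as foreground,
            # everything else (e.g. ambient chatter) as background
            "foreground": s["kind"] == "speech" and distance < 5.0,
        })
    return extended

sources = [
    {"id": "voice-01", "kind": "speech",  "position": [2.0, 0.0, 2.0]},
    {"id": "hvac-01",  "kind": "ambient", "position": [0.0, 3.0, 0.0]},
]
result = extend_audio_descriptors(sources, listener_pos=[0.0, 0.0, 0.0])
```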
<p>This extended description is sent to <strong>Prompt Creation</strong> and <strong>Domain Access</strong>. Let’s see what happens with the former. The extended Audio Scene Descriptors are fused with the User’s spoken or written input and the current Entity State. The result is a <strong>PC-Prompt</strong> – a query enriched with text expressing the multimodal information collected so far. This is passed to Basic Knowledge for reasoning.</p>
<p>The Audio Scene Descriptors are further processed and integrated with domain-specific information. The response, called the <strong>Audio Spatial Directive</strong>, includes domain-specific logic, scene priors, and task constraints. For example, if the scene is a medical simulation, Domain Access might tell Audio Spatial Reasoning that “only sounds from authorised personnel should be considered”. This feedback helps Audio Spatial Reasoning refine its interpretation – filtering out irrelevant sounds, boosting priority for critical ones, and aligning its spatial model with the current domain expectations.</p>
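<p>Applying such a directive could be sketched as follows; the directive’s structure (lists of authorised and priority sources) is an assumption made for this illustration:</p>

```python
# Hypothetical application of an Audio Spatial Directive from Domain Access.
def apply_directive(sources, directive):
    """Filter out unauthorised sounds and boost the priority of critical ones."""
    authorised = set(directive["authorised_sources"])
    critical = set(directive.get("priority_sources", []))
    result = []
    for s in sources:
        if s["id"] not in authorised:
            continue  # e.g. ambient chatter excluded by domain rules
        result.append(
            {**s, "priority": "critical" if s["id"] in critical else "normal"}
        )
    return result

# Medical-simulation example: only authorised personnel may be heard.
directive = {"authorised_sources": ["doctor-01"], "priority_sources": ["doctor-01"]}
filtered = apply_directive(
    [{"id": "doctor-01", "kind": "speech"}, {"id": "chatter-01", "kind": "ambient"}],
    directive,
)
```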
<p>Therefore, we can call Audio Spatial Reasoning the A-User’s <strong>auditory guide</strong>. It knows where sounds are coming from, what they mean, and how they should influence the A-User’s behaviour. The A-User responds to a sound with spatial awareness, contextual sensitivity, and domain consistency.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="VSR"></a>Visual Spatial Reasoning</strong><strong>: The Vision‑Aware Interpreter</strong></span></p>
<p>When the A-User acts in a metaverse space, sound doesn’t tell the whole story. The visual scene – objects, zones, gestures, occlusions – is the canvas where situational meaning unfolds. That’s where <strong>Visual Spatial Reasoning</strong> comes in: it’s the interpreter that makes sense of what the Autonomous User <em>sees</em>, not just what it <em>hears</em>. It can be considered as the <strong>visual analyst embedded in the </strong>Autonomous User’s “brain” that understands objects’ <strong>geometry, relationships, and salience</strong>.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37337 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cc.png" alt="" width="322" height="247" /></p>
<p>Visual Spatial Reasoning doesn’t just list objects; it understands their geometry, relationships, and salience. A chair isn’t just “a chair” – it’s <em>occupied</em>, <em>near a table</em>, <em>partially occluded</em>, or <em>the focus of attention</em>. By enriching raw descriptors into structured semantics, Visual Spatial Reasoning transforms objects made of pixels into <strong>actionable targets</strong>.</p>
<p><strong>This is what it does</strong></p>
<ul>
<li><strong>Scene Structuring:</strong> Takes and organises raw visual descriptors into coherent spatial maps.</li>
<li><strong>Semantic Enrichment:</strong> Adds meaning – classifying objects, inferring affordances, and ranking salience.</li>
<li><strong>Directed Alignment:</strong> Filters and prioritises based on the A-User Controller’s intent, ensuring relevance.</li>
<li><strong>Traceability:</strong> Every refinement step is auditable, making it possible to trace back why “that object in the corner” became “the salient target for interaction.”</li>
</ul>
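<p>The salience-ranking step above could be sketched like this; the scoring cues and weights are invented for illustration and are not part of the MPAI specification:</p>

```python
# Hypothetical salience ranking over enriched visual descriptors.
def rank_salience(objects, gaze_target=None):
    """Return objects ordered most-salient first, using simple additive cues."""
    def score(obj):
        s = 0.0
        if obj["id"] == gaze_target:
            s += 2.0                                # focus of the User's attention
        if not obj.get("occluded", False):
            s += 0.5                                # fully visible objects rank higher
        s += 0.2 * len(obj.get("affordances", []))  # interactable objects matter more
        return s
    return sorted(objects, key=score, reverse=True)

scene = [
    {"id": "screen-01", "occluded": True,  "affordances": ["read"]},
    {"id": "chair-01",  "occluded": False, "affordances": ["sit", "move"]},
    {"id": "plant-01",  "occluded": False, "affordances": []},
]
ranked = rank_salience(scene, gaze_target="screen-01")
```

<p>Even though the screen is partially occluded, the User’s gaze makes it the salient target – the kind of judgement raw descriptors alone cannot make.</p>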
<p>Without Visual Spatial Reasoning, the metaverse would be a flat stage of unprocessed visuals. With it, <strong>visual scenes become interpretable narratives</strong>. It’s the difference between “there are three objects in the room” and “the User is focused on the screen, while another entity gestures toward the door.”</p>
<p>Of course, Visual Spatial Reasoning does not replace vision. It bridges the gap between raw descriptors and effective interaction, ensuring that the <strong>A‑User</strong> can observe, interpret, and act with precision and intent.</p>
<p>If Audio Spatial Reasoning is the metaverse’s “sound‑aware interpreter,” then Visual Spatial Reasoning is its <strong>“sight‑aware analyst”</strong> that starts by seeing objects and eventually can understand their role, their relevance, and their story in the scene.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="PRC"></a>Prompt Creation</strong><strong>: Where Words Meet Context</strong></span></p>
<p>The Prompt Creation module is the storyteller and translator in the Autonomous User’s “brain”. It takes raw sensory input – audio and visual spatial data of the Context (such as objects in a scene with their position, orientation, and velocity) and the Entity State – and turns it into a well‑formed prompt that Basic Knowledge can actually understand and respond to.</p>
<p>The audio and visual components of<strong> Spatial Reasoning</strong> provide the information on things around the User such as “who’s in the room,” “what’s being said,” “what objects are present,” and “what’s the User doing”. <strong>Context Capture</strong> provides Entity State as a rich description of the A‑User’s understanding of the “internal state” of the User – which may be a representation of a biologically real User, if it represents a human, or a simulated one, when the User represents an agent. The task of Prompt Creation is to <strong>synthesise these sources of information</strong> into a PC‑Prompt Plan. This plan starts from what the User said, adds intent (e.g., “User wants help” or “User is asking a question”), includes the context around the User (e.g., “User is in a virtual kitchen”), and embeds User State (e.g., “User seems confused”).</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37338 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cd.png" alt="" width="208" height="187" /></p>
<p>This information – conveniently represented as a JSON object – is converted into natural language and passed to Basic Knowledge, which produces a natural language response called the <strong>Initial Response</strong> – <em>initial</em> because there are more processing elements in the A‑User pipeline that will refine and improve the answer before it is rendered in the metaverse.</p>
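<p>The JSON-to-natural-language step can be sketched as a simple template; the plan fields below are assumptions, chosen to mirror the cooking example used later in this post:</p>

```python
# Minimal sketch of turning a PC-Prompt Plan (a JSON-like object) into the
# natural-language prompt handed to Basic Knowledge. Field names are assumed.
def plan_to_prompt(plan):
    return (
        f'User said: "{plan["utterance"]}". '
        f'Inferred intent: {plan["intent"]}. '
        f'Context: {plan["context"]}. '
        f'User state: {plan["user_state"]}.'
    )

plan = {
    "utterance": "Can you help me cook?",
    "intent": "User is asking for cooking help",
    "context": "User is in a virtual kitchen",
    "user_state": "User seems unsure",
}
pc_prompt = plan_to_prompt(plan)
```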
<p>Prompt Creation gives the AI a <strong>sense of narrative</strong>, so the A-User can:</p>
<p>&#8211; Ask the right clarifying question.</p>
<p>&#8211; Respond with relevance to the situation.</p>
<p>&#8211; Adapt to the environment and User mood.</p>
<p>&#8211; Maintain continuity across interactions.</p>
<p>If the User says: “Can you help me cook?”</p>
<p><strong>&#8211; Spatial Reasoning</strong> notes the User is in a virtual kitchen with utensils and ingredients.</p>
<p><strong>&#8211; Entity State</strong> suggests the User looks uncertain.</p>
<p><strong>&#8211; Prompt Creation</strong> combines these into: “User is asking for cooking help, is in a kitchen, seems unsure.”</p>
<p>This Initial Response is then passed to <strong>Domain Access</strong>, which may elaborate a new prompt enriched with domain-specific information (in this case cooking, useful when Basic Knowledge is not well informed about the domain).</p>
<p>Prompt Creation turns raw multimodal input and spatial information into meaningful prompts so the AI can think, speak, and act with purpose. It is the scriptwriter that ensures the A‑User’s dialogue is not only coherent but also contextually aware, emotionally attuned, and situationally precise.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="DAC"></a>Domain Access</strong><strong>: The Specialist Brain Plug-in for the Autonomous User</strong></span></p>
<p>The <strong>Basic Knowledge</strong> module is a generalist language model that “knows a bit of everything.” In contrast, <strong>Domain Access</strong> is the expert layer that enables the Autonomous User to tap into <strong>domain-specific intelligence</strong> for deeper understanding of user utterances and their context.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37340 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cf.png" alt="" width="282" height="208" /></p>
<p><strong>How Domain Access Works</strong></p>
<ul>
<li><strong>Receives Initial Response</strong>: Domain Access starts with the response of Basic Knowledge, the generalist model’s response to the prompt generated by <strong>Prompt Creation</strong>.</li>
<li><strong>Converts to DA-Input</strong>: As the natural language response is not the best way to process the response, it is converted into a JSON object called DA-Input for structured processing.</li>
<li><strong>Gets specialised knowledge</strong> by pulling in domain vocabulary such as jargon and technical terms.</li>
<li><strong>Creates the next prompt</strong> by using this specialised knowledge:
<ul>
<li><strong>Injects rules and constraints</strong> (e.g., standards, legal compliance).</li>
<li><strong>Adds reasoning patterns</strong> (e.g., diagnostic flows, contractual logic).</li>
</ul>
</li>
</ul>
<p>All enrichment happens in the JSON domain, as does the resulting <strong>DA-Prompt Plan</strong> – a domain-aware structure ready to be converted into natural language – the <strong>DA-Prompt</strong> – and resubmitted into the knowledge/response pipeline.</p>
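<p>A toy version of this JSON-domain enrichment might look as follows; the domain module’s content and the field names are assumptions of this sketch:</p>

```python
# Hypothetical domain module: vocabulary and rules a Domain Access AIM
# might pull in for a cooking scenario (illustrative content only).
COOKING_DOMAIN = {
    "vocabulary": ["julienne", "deglaze", "mise en place"],
    "rules": ["warn about food-safety temperatures"],
}

def enrich(da_input, domain):
    """Enrich a DA-Input into a DA-Prompt Plan, staying in the JSON domain."""
    return {
        **da_input,
        "domain_vocabulary": domain["vocabulary"],
        "constraints": da_input.get("constraints", []) + domain["rules"],
    }

da_prompt_plan = enrich(
    {"initial_response": "Sure, I can help you cook.", "constraints": []},
    COOKING_DOMAIN,
)
```

<p>The enriched plan would then be rendered into natural language (the DA-Prompt) before being resubmitted to Basic Knowledge.</p>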
<p>Without Domain Access, the A-User is like a clever intern: knowledgeable but lacking depth and experience. With Domain Access, it becomes an experienced professional that can:</p>
<ul>
<li>Deliver accurate, context-aware answers.</li>
<li>Avoid hallucinations by grounding responses in domain rules.</li>
<li>Address different application domains by swapping or adding domain modules without rebuilding the entire A-User.</li>
</ul>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="BKN"></a>Basic Knowledge</strong><strong>: The Generalist Engine Getting Sharper with Every Prompt</strong></span></p>
<p><strong>Basic Knowledge</strong> is the core language model of the Autonomous User – the “knows-a-bit-of-everything” brain. It provides the first response to a prompt, but the Autonomous User doesn’t fire off just one answer: it produces four of them in a progressive refinement loop, providing smarter and more context-aware responses with every refined prompt.</p>
<p><strong> <img loading="lazy" decoding="async" class="size-full wp-image-37339 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/ce.png" alt="" width="730" height="153" /></strong></p>
<p><strong>The Journey of a Prompt</strong></p>
<ol>
<li><strong>Starts Simple: </strong>The first prompt from <strong>Prompt Creation</strong> is a rough draft because the A-User has only a superficial knowledge of the Context and User intent.</li>
<li><strong>Domain Access adds</strong> expert seasoning: jargon, compliance rules, reasoning patterns. The prompt becomes richer and sharper.</li>
<li><strong>User State Refinement injects</strong> dynamic knowledge about the User – refined emotions, more focused goals, better spatial context – so the prompt feels more attuned to what the User feels and wants.</li>
<li><strong>Personality Alignment tells the A-User how to behave</strong>: it ensures that the appropriate A-User style and mood drive the next prompt.</li>
<li><strong>Final Prompt Delivery</strong>: when Basic Knowledge receives the last prompt (from Personality Alignment) the final touches have been added.</li>
</ol>
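<p>The four-prompt journey can be sketched as a generic loop in which each refiner (standing in for Domain Access, User State Refinement, and Personality Alignment) rewrites the prompt before Basic Knowledge answers again. This is an assumption about control flow drawn from the steps above, not normative MPAI pseudocode:</p>

```python
# Sketch of the progressive refinement loop: Basic Knowledge answers four
# successively richer prompts, one per refinement stage plus a final pass.
def refinement_loop(initial_prompt, basic_knowledge, refiners):
    prompt = initial_prompt
    responses = []
    for refine in refiners + [None]:       # the final pass has no further refiner
        response = basic_knowledge(prompt)  # Basic Knowledge answers every prompt
        responses.append(response)
        if refine is not None:
            prompt = refine(prompt, response)  # next stage enriches the prompt
    return responses                        # four responses for three refiners

# Toy stand-ins to show the flow (not real AIMs):
bk = lambda p: f"answer({p})"
refiners = [
    lambda p, r: p + "+domain",       # Domain Access
    lambda p, r: p + "+state",        # User State Refinement
    lambda p, r: p + "+personality",  # Personality Alignment
]
out = refinement_loop("p0", bk, refiners)
```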
<p>This sequence of prompts eventually provides:</p>
<ul>
<li><strong>Better responses</strong>: Each prompt reduces ambiguity.</li>
<li><strong>Domain grounding</strong>: Avoids hallucinations by embedding rules and expert logic.</li>
<li><strong>Personalisation</strong>: Adapts A-User’s tone and content to User State.</li>
<li><strong>Scalability</strong>: Works across domains without retraining.</li>
</ul>
<p>Basic Knowledge starts as a generalist, but thanks to <strong>refined prompts</strong>, it ends up delivering <strong>expert-level, context-aware, and User-sensitive responses</strong>. It starts from a rough sketch and, by iterating with specialist information sources, it provides a final response that includes all the information extracted or produced in the workflow.</p>
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="USR"></a> </strong><strong>User State Refinement</strong><strong>: Turning a Snapshot into a Full Profile</strong></span></p>
<p>When the A-User begins interacting, it starts with a basic <strong>User State</strong> captured by <strong>Context Capture</strong> – location, activity, initial intent, and perhaps a few emotional hints. This initial state is useful, but it’s like a blurry photo: the A-User knows that somebody is there, but not the details that matter for nuanced interaction.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37341 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/cg.png" alt="" width="199" height="165" /></p>
<p>As the session unfolds, the A-User learns much more thanks to <strong>Prompt Creation</strong>, <strong>Spatial Reasoning</strong>, and <strong>Domain Access</strong>. Suddenly, the A-User understands not just what the User said, but what it meant, the context it operates in, and the reasoning patterns relevant to the domain. This new knowledge is integrated with the initial state so that subsequent steps – especially <strong>Personality Alignment</strong> and Basic Knowledge – are based on an appropriate understanding of the User State.</p>
<p><strong>Why Update the User State?</strong></p>
<p>Personality Alignment is where the A-User adapts tone, style, and interaction strategy. If it only relies on the first guess of the User State, it risks taking an incongruent attitude – formal when casual is needed, directive when supportive is expected. If the User State can be updated, the A-User knows more about:</p>
<ul>
<li><strong>The environment</strong> incorporating jargon, compliance rules, and reasoning patterns.</li>
<li><strong>The internal state</strong> and can adjust responses to confusion, urgency, or confidence.</li>
</ul>
<p><strong>The Refinement Process</strong></p>
<ol>
<li><strong>Start with Context Snapshot: </strong>capture environment, speech, gestures, and basic emotional cues.</li>
<li><strong>Inject Domain Intelligence </strong>from Domain Access: technical vocabulary, rules, structured reasoning.</li>
<li><strong>Merge New Observations: </strong>emotional shifts, spatial changes, updated intent.</li>
<li><strong>Validate Consistency: </strong>ensure module coherence for reliable downstream use.</li>
<li><strong>Feed Forward: </strong>pass the refined state to Personality Alignment and sharper prompts to Basic Knowledge.</li>
</ol>
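<p>Step 3 – merging new observations into the initial snapshot – could be sketched as below. The precedence rule (later observations override the snapshot) and the field names are assumptions of this illustration:</p>

```python
# Illustrative merge of new observations into the initial User State.
def refine_user_state(initial_state, *updates):
    """Apply successive observation batches; newer evidence overrides the snapshot."""
    state = dict(initial_state)
    for update in updates:
        state.update(update)
    return state

initial = {"location": "virtual kitchen", "emotion": "uncertain", "intent": "unknown"}
refined = refine_user_state(
    initial,
    {"intent": "wants cooking help"},   # e.g. inferred via Prompt Creation
    {"emotion": "frustrated"},          # e.g. a later emotional shift
)
```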
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="PAL"></a> </strong><strong>Personality Alignment</strong><strong>: The Style Engine of A-User</strong></span></p>
<p>Personality Alignment is where an A-User interacting with a User embedded in a metaverse environment stops being a generic bot and starts acting like a character with intent, tone, and flair. It’s not just a matter of what it utters – it’s about <em>how</em> those words land, how the avatar moves, and how the whole interaction feels.</p>
<p>The figure is an extract from the A-User Architecture Reference Model, showing Domain Access generating two streams of data related to the User and its environment, and the two recipient AI Modules: User State Refinement and Personality Alignment.</p>
<p style="text-align: center;"><img loading="lazy" decoding="async" class="wp-image-37342 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/hg.png" alt="" width="562" height="176" /></p>
<p>This is possible because the A-User receives the right inputs driving the Alignment of the A-User Personality with the refined User’s Entity State:</p>
<ul>
<li><strong>Personality Context Guide</strong>: Domain-specific hints from Domain Access (e.g., “medical setting → professional tone”).</li>
<li><strong>Expressive State Guide</strong>: Emotional and attentional posture of the User (e.g., stressed → calming personality).</li>
<li><strong>Refined Response</strong>: Text from Basic Knowledge in response to User State Refinement prompt.</li>
<li><strong>Personality Alignment Directive</strong>: Commands to tweak or override the personality profile (e.g., “switch to negotiator mode”) from the A-User Control AI Module (AIM).</li>
</ul>
<p>A smart integration of these inputs enables the A-User to deliver the following outputs:</p>
<ul>
<li><strong>A-User Entity State</strong>: the complete internal state of the A-User’s synthetic personality (tone, gestures, behavioural traits).</li>
<li><strong>PA-Prompt</strong>: New prompt formulation including the final A-User personality (so the words sound right).</li>
<li><strong>Personality Alignment Status</strong>: A structured report of personality and expressive alignment to the A-User Control AIM.</li>
</ul>
<p>Here are some examples of personality profiles that Personality Alignment could use or blend:</p>
<ul>
<li><strong>Mentor Mode</strong>: Calm tone, structured answers, moderate gestures, empathy cues.</li>
<li><strong>Entertainer Mode</strong>: Upbeat tone, humour, wide gestures, animated expressions.</li>
<li><strong>Negotiator Mode</strong>: Firm tone, controlled gestures, strategic phrasing.</li>
<li><strong>Assistant Mode</strong>: Neutral tone, minimal gestures, clarity-first responses.</li>
</ul>
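<p>Selecting or blending among such profiles could be sketched as follows; the decision rules and profile contents are invented for illustration, and in practice they would be driven by the Personality Alignment Directive and domain configuration:</p>

```python
# Hypothetical personality profiles and a toy alignment policy.
PROFILES = {
    "mentor":      {"tone": "calm",    "gestures": "moderate"},
    "entertainer": {"tone": "upbeat",  "gestures": "wide"},
    "negotiator":  {"tone": "firm",    "gestures": "controlled"},
    "assistant":   {"tone": "neutral", "gestures": "minimal"},
}

def align_personality(context_guide, expressive_guide, directive=None):
    """Pick a profile from the two guides; an explicit directive overrides both."""
    if directive:                                   # e.g. "switch to negotiator mode"
        return PROFILES[directive]
    if expressive_guide.get("emotion") == "stressed":
        return PROFILES["mentor"]                   # stressed User -> calming tone
    if context_guide.get("setting") == "medical":
        return PROFILES["assistant"]                # professional, clarity-first
    return PROFILES["entertainer"]                  # default for casual settings

profile = align_personality({"setting": "social lounge"}, {"emotion": "stressed"})
```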
<p style="text-align: center;"><span style="text-decoration: underline;"><strong><a id="AUF"></a>A-User Formation</strong><strong>: Building the A-User</strong></span></p>
<p>If Personality Alignment gives the A-User its style, <strong>A-User Formation AIM</strong> gives the A-User its body and its voice, the avatar and the speech for the A-User Control to embed in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about projecting a face on a bot; it’s about creating a coherent representation that matches the personality, the context, and the expressive cues.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-37343 aligncenter" src="https://mpai.community/wp-content/uploads/2026/01/ch.png" alt="" width="391" height="130" /></p>
<p>Here is how this is achieved.</p>
<p><strong>Inputs Driving A-User Formation:</strong></p>
<ul>
<li><strong>A-User Entity Status</strong>: The personality blueprint from Personality Alignment (tone, gestures, behavioural traits).</li>
<li><strong>Final Response</strong>: personality-tuned content from Basic Knowledge – what the avatar will utter.</li>
<li><strong>A-User Control Command</strong>: Directives for rendering and positioning in the metaverse (e.g., MM-Add, MM-Move).</li>
<li><strong>Rendering Parameters</strong>: Synchronisation cues for speech, facial expressions, and gestures.</li>
</ul>
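<p>Assembling these inputs into a Speaking Avatar could be sketched as below; the output structure is an assumption, chosen to show how the modalities share one synchronisation clock:</p>

```python
# Hypothetical assembly of the Speaking Avatar from the Formation inputs.
def form_avatar(entity_state, final_response, command, rendering_params):
    """Combine personality, speech content, and rendering cues into one spec."""
    return {
        "command": command,              # e.g. "MM-Add", "MM-Move" from A-User Control
        "speech": final_response,        # what the avatar will utter
        "tone": entity_state["tone"],
        "gestures": entity_state["gestures"],
        # lip-sync and gesture timing share one clock so modalities stay aligned
        "sync": {"fps": rendering_params["fps"], "speech_aligned": True},
    }

avatar = form_avatar(
    {"tone": "calm", "gestures": "moderate"},   # from Personality Alignment
    "Let's start with the pasta.",              # Final Response from Basic Knowledge
    "MM-Add",
    {"fps": 30},
)
```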
<p>What comes out of the box is a multimodal representation of the A-User (Speaking Avatar) that talks, moves, and reacts in sync with the A-User’s intent – the best expression the A-User can give of itself in the circumstances.</p>
<p><strong>What Makes A-User Formation Special?</strong></p>
<p>It’s the last mile of the pipeline – the point where all upstream intelligence (context, reasoning, User’s Entity Status estimation, personality) becomes visible and interactive. A-User Formation ensures:</p>
<ul>
<li><strong>Expressive Coherence</strong>: Speech, gestures, and facial cues match the chosen personality.</li>
<li><strong>Contextual Fit</strong>: Avatar appearance and behaviour align with domain norms (e.g., formal in a medical setting, casual in a social lounge).</li>
<li><strong>Technical Precision</strong>: Synchronisation across Personal Status modalities for natural and consistent interaction.</li>
<li><strong>Goal</strong>: Deliver a coherent, expressive, and context-aware representation that feels natural and engaging in response to how the User was perceived at the beginning and processed during the pipeline.</li>
</ul>
<p>The post <a href="https://blog.chiariglione.org/a-walk-inside-the-autonomous-user/">A Walk inside the Autonomous User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A-User Formation: Building the A-User</title>
		<link>https://blog.chiariglione.org/a-user-formation-building-the-a-user-2/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Mon, 22 Dec 2025 18:00:32 +0000</pubDate>
				<category><![CDATA[MPAI]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3577</guid>

					<description><![CDATA[<p>If Personality Alignment gives the A-User its style, A-User Formation AIM gives the A-User its body and its voice, the avatar and the speech for the A-User Control to embed in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/a-user-formation-building-the-a-user-2/">A-User Formation: Building the A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If Personality Alignment gives the A-User its style, A-User Formation AIM gives the A-User its body and its voice, the avatar and the speech for the A-User Control to embed in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about projecting a face on a bot; it’s about creating a coherent representation that matches the personality, the context, and the expressive cues.</p>
<p>We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.) with another User in a metaverse. The latter User may be an A-User or be under the direct control of a human and is thus called a Human-User (H-User). The A-User acts as a “conversation partner in a metaverse interaction” with the User.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3558" src="https://blog.chiariglione.org/wp-content/uploads/2025/11/Autonomous-User-Architecture1-PGM-AUA-V1.0.png" alt="" width="1257" height="583" srcset="https://blog.chiariglione.org/wp-content/uploads/2025/11/Autonomous-User-Architecture1-PGM-AUA-V1.0.png 1257w, https://blog.chiariglione.org/wp-content/uploads/2025/11/Autonomous-User-Architecture1-PGM-AUA-V1.0-300x139.png 300w, https://blog.chiariglione.org/wp-content/uploads/2025/11/Autonomous-User-Architecture1-PGM-AUA-V1.0-1024x475.png 1024w, https://blog.chiariglione.org/wp-content/uploads/2025/11/Autonomous-User-Architecture1-PGM-AUA-V1.0-768x356.png 768w" sizes="(max-width: 1257px) 100vw, 1257px" /></p>
<p>This is the tenth and last of a sequence of posts aiming to illustrate in more depth the architecture of an A-User and provide an easy entry point for those who wish to respond to the MPAI <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/#Tentative">Call for Technology on Autonomous User Architecture</a>. The first nine dealt with 1) the Control performed by the A-User Control AI Module on the other components of the A-User; 2) how the A-User captures the external metaverse environment using the Context Capture AI Module; 3) listens, localises, and interprets sound not just as data, but as data having a spatially anchored meaning; 4) makes sense of what the Autonomous User sees by understanding objects’ geometry, relationships, and salience; 5) takes raw sensory input and the User State and turns them into a well‑formed prompt that Basic Knowledge can actually understand and respond to; 6) taps into domain-specific intelligence for deeper understanding of user utterances and operational context; 7) the core language model of the Autonomous User – the “knows-a-bit-of-everything” brain, the first responder to a prompt of a sequence of four; 8) converting a “blurry photo” of the User in the environment taken at the onset of the process into a focused picture; and 9) providing not only a generic bot but a character with intent, tone, and flair – not only a matter of what the avatar utters but how its words land, how the avatar moves, and how the whole interaction feels.</p>
<p><strong>A-User Formation AIM</strong> gives the A-User a body and a voice, the result of a chain of AI Modules composing the A-User pipeline, enabling a perceptible and coherent representation that matches the personality, the context, and the expressive cues.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-3578" src="https://blog.chiariglione.org/wp-content/uploads/2025/12/az.png" alt="" width="608" height="234" srcset="https://blog.chiariglione.org/wp-content/uploads/2025/12/az.png 608w, https://blog.chiariglione.org/wp-content/uploads/2025/12/az-300x115.png 300w" sizes="(max-width: 608px) 100vw, 608px" /></p>
<p>Here is how this is achieved.</p>
<p><strong>Inputs Driving A-User Formation</strong></p>
<ul>
<li><strong>A-User Entity Status</strong>: The personality blueprint from Personality Alignment (tone, gestures, behavioural traits).</li>
<li><strong>Final Response</strong>: personality-tuned content from Basic Knowledge – what the avatar will utter.</li>
<li><strong>A-User Control Command</strong>: Directives for rendering and positioning in the metaverse (e.g., MM-Add, MM-Move).</li>
<li><strong>Rendering Parameters</strong>: Synchronisation cues for speech, facial expressions, and gestures.</li>
</ul>
<p>What comes out of the box:<strong> Formation Status</strong></p>
<ul>
<li>A multimodal representation of the A-User (Speaking Avatar) that talks, moves, and reacts in sync with the A-User’s intent – the best expression the A-User can give of itself in the circumstances.</li>
<li>Structured report on the processing that led to the result.</li>
</ul>
<p><strong>What Makes A-User Formation Special?</strong></p>
<p>It’s the last mile of the pipeline – the point where all upstream intelligence (context, reasoning, User’s Entity Status estimation, personality) becomes visible and interactive. A-User Formation ensures:</p>
<ul>
<li><strong>Expressive Coherence</strong>: Speech, gestures, and facial cues match the chosen personality.</li>
<li><strong>Contextual Fit</strong>: Avatar appearance and behaviour align with domain norms (e.g., formal in a medical setting, casual in a social lounge).</li>
<li><strong>Technical Precision</strong>: Synchronisation across Personal Status modalities for natural and consistent interaction.</li>
</ul>
<p><strong>Key Points to Take Away about A-User Formation</strong></p>
<ol>
<li><strong>Purpose</strong>: Turns the A-User’s personality and reasoning into a visible and audible interactive avatar.</li>
<li><strong>Inputs</strong>: Personality-aligned final response, control commands, and rendering parameters.</li>
<li><strong>Outputs</strong>: Speaking avatar, formation status.</li>
<li><strong>Goal</strong>: Deliver a coherent, expressive, and context-aware representation that feels natural and engaging in response to how the User was perceived at the beginning and processed during the pipeline.</li>
</ol>
<p>The post <a href="https://blog.chiariglione.org/a-user-formation-building-the-a-user-2/">A-User Formation: Building the A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A-User Formation: Building the A-User</title>
		<link>https://blog.chiariglione.org/a-user-formation-building-the-a-user/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Wed, 17 Dec 2025 11:43:41 +0000</pubDate>
				<category><![CDATA[MPAI]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3575</guid>

					<description><![CDATA[<p>If Personality Alignment gives the A-User its style, A-User Formation AIM gives the A-User its body and its voice, the avatar and the speech for the A-User Control to embed in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/a-user-formation-building-the-a-user/">A-User Formation: Building the A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If Personality Alignment gives the A-User its style, A-User Formation AIM gives the A-User its body and its voice, the avatar and the speech for the A-User Control to embed in the metaverse. The A-User stops being an abstract brain controlling various types of processing and becomes a visible, interactive entity. It’s not just about projecting a face on a bot; it’s about creating a coherent representation that matches the personality, the context, and the expressive cues.</p>
<p>We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.) with another User in a metaverse. The latter User may be an A-User or be under the direct control of a human and is thus called a Human-User (H-User). The A-User acts as a “conversation partner in a metaverse interaction” with the User.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-36778 aligncenter" src="https://mpai.community/wp-content/uploads/2025/12/Autonomous-User-Architecture-PGM-AUA-V1.0.png" alt="" width="1257" height="583" /></p>
<p>This is the tenth and last of a sequence of posts aiming to illustrate in more depth the architecture of an A-User and provide an easy entry point for those who wish to respond to the MPAI <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/#Tentative">Call for Technology on Autonomous User Architecture</a>. The first nine dealt with 1) the Control performed by the A-User Control AI Module on the other components of the A-User; 2) how the A-User captures the external metaverse environment using the Context Capture AI Module; 3) listens, localises, and interprets sound not just as data, but as data having a spatially anchored meaning; 4) makes sense of what the Autonomous User sees by understanding objects’ geometry, relationships, and salience; 5) takes raw sensory input and the User State and turns them into a well‑formed prompt that Basic Knowledge can actually understand and respond to; 6) taps into domain-specific intelligence for deeper understanding of user utterances and operational context; 7) the core language model of the Autonomous User – the “knows-a-bit-of-everything” brain, the first responder to a prompt of a sequence of four; 8) converting a “blurry photo” of the User in the environment taken at the onset of the process into a focused picture; and 9) providing not only a generic bot but a character with intent, tone, and flair – not only a matter of what the avatar utters but how its words land, how the avatar moves, and how the whole interaction feels.</p>
<p><strong>A-User Formation AIM</strong> gives the A-User a body and a voice, the result of a chain of AI Modules composing the A-User pipeline, enabling a perceptible and coherent representation that matches the personality, the context, and the expressive cues.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-36776 aligncenter" src="https://mpai.community/wp-content/uploads/2025/12/at.png" alt="" width="394" height="131" /></p>
<p>The inputs driving A-User Formation are:</p>
<ul>
<li><strong>A-User Entity Status</strong>: The personality blueprint from Personality Alignment (tone, gestures, behavioural traits).</li>
<li><strong>Final Response</strong>: Personality-tuned content from Basic Knowledge – what the avatar will utter.</li>
<li><strong>A-User Control Command</strong>: Directives for rendering and positioning in the metaverse (e.g., MM-Add, MM-Move).</li>
<li><strong>Rendering Parameters</strong>: Synchronisation cues for speech, facial expressions, and gestures.</li>
</ul>
<p>What comes out of the box: <strong>Formation Status</strong></p>
<ul>
<li>A multimodal representation of the A-User (Speaking Avatar) that talks, moves, and reacts in sync with the A-User’s intent – the best expression the A-User can give of itself in the circumstances.</li>
<li>A structured report on the processing that led to the result.</li>
</ul>
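<p>Under the assumption that these inputs and outputs can be modelled as simple structures, the I/O contract of A-User Formation might be sketched as follows. The field names follow the post; the types and the toy combination logic are illustrative assumptions, not part of any MPAI specification.</p>

```python
from dataclasses import dataclass

# Hypothetical sketch of the A-User Formation AIM interface.
# Names follow the post; types and logic are assumptions.

@dataclass
class FormationInputs:
    entity_status: dict         # personality blueprint from Personality Alignment
    final_response: str         # personality-tuned text from Basic Knowledge
    control_command: str        # rendering/positioning directive, e.g. "MM-Add"
    rendering_parameters: dict  # sync cues for speech, face, and gestures

@dataclass
class FormationStatus:
    speaking_avatar: bytes  # multimodal representation (placeholder type)
    report: dict            # structured report on the processing performed

def form_a_user(inp: FormationInputs) -> FormationStatus:
    """Toy stand-in: combine the inputs into a Formation Status."""
    report = {
        "command": inp.control_command,
        "utterance": inp.final_response,
        "personality": inp.entity_status,
    }
    return FormationStatus(speaking_avatar=b"", report=report)
```

A real Formation AIM would of course render an actual avatar and synchronise its speech, face, and gestures; the sketch only shows how the four inputs flow into the single Formation Status output.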
<p><strong>What Makes A-User Formation Special?</strong></p>
<p>It’s the last mile of the pipeline – the point where all upstream intelligence (context, reasoning, User’s Entity Status estimation, personality) becomes visible and interactive. A-User Formation ensures:</p>
<ul>
<li><strong>Expressive Coherence</strong>: Speech, gestures, and facial cues match the chosen personality.</li>
<li><strong>Contextual Fit</strong>: Avatar appearance and behaviour align with domain norms (e.g., formal in a medical setting, casual in a social lounge).</li>
<li><strong>Technical Precision</strong>: Synchronisation across Personal Status modalities for natural and consistent interaction.</li>
</ul>
<p><strong>Key Points to Take Away about A-User Formation</strong></p>
<ol>
<li><strong>Purpose</strong>: Turns the A-User’s personality and reasoning into a visible and audible interactive avatar.</li>
<li><strong>Inputs</strong>: Personality-aligned final response, control commands, and rendering parameters.</li>
<li><strong>Outputs</strong>: Speaking avatar, formation status.</li>
<li><strong>Goal</strong>: Deliver a coherent, expressive, and context-aware representation that feels natural and engaging in response to how the User was perceived at the beginning and processed during the pipeline.</li>
</ol>
<p>The post <a href="https://blog.chiariglione.org/a-user-formation-building-the-a-user/">A-User Formation: Building the A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Personality Alignment: The Style Engine of A-User</title>
		<link>https://blog.chiariglione.org/personality-alignment-the-style-engine-of-a-user/</link>
		
		<dc:creator><![CDATA[Leonardo]]></dc:creator>
		<pubDate>Wed, 17 Dec 2025 11:42:27 +0000</pubDate>
				<category><![CDATA[MPAI]]></category>
		<guid isPermaLink="false">https://blog.chiariglione.org/?p=3573</guid>

					<description><![CDATA[<p>Personality Alignment is where an A-User interacting with a User embedded in a metaverse environment stops being a generic bot and starts acting like a character with intent, tone, and flair. It’s not just a matter of what it utters – it’s about how those words land, how the avatar moves, and how the whole [&#8230;]</p>
<p>The post <a href="https://blog.chiariglione.org/personality-alignment-the-style-engine-of-a-user/">Personality Alignment: The Style Engine of A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Personality Alignment is where an A-User interacting with a User embedded in a metaverse environment stops being a generic bot and starts acting like a character with intent, tone, and flair. It’s not just a matter of what it utters – it’s about <em>how</em> those words land, how the avatar moves, and how the whole interaction feels.</p>
<p>We have already presented the system diagram of the Autonomous User (A-User), an autonomous agent able to move and interact (walk, converse, do things, etc.) with another User in a metaverse. The latter User may be an A-User or be under the direct control of a human and is thus called a Human-User (H-User). The A-User acts as a “conversation partner in a metaverse interaction” with the User.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-36778 aligncenter" src="https://mpai.community/wp-content/uploads/2025/12/Autonomous-User-Architecture-PGM-AUA-V1.0.png" alt="" width="1257" height="583" /></p>
<p>This is the ninth of a sequence of posts aiming to illustrate in more depth the architecture of an A-User and provide an easy entry point for those who wish to respond to the MPAI <a href="https://mpai.community/standards/mpai-pgm/aua/v1-0/#Tentative">Call for Technology on Autonomous User Architecture</a>. The first eight dealt with 1) the Control performed by the A-User Control AI Module on the other components of the A-User; 2) how the A-User captures the external metaverse environment using the Context Capture AI Module; 3) listens, localises, and interprets sound not just as data, but as data having a spatially anchored meaning; 4) makes sense of what the Autonomous User sees by understanding objects’ geometry, relationships, and salience; 5) takes raw sensory input and the User State and turns them into a well‑formed prompt that Basic Knowledge can actually understand and respond to; 6) taps into domain-specific intelligence for deeper understanding of user utterances and operational context; 7) the core language model of the Autonomous User – the “knows-a-bit-of-everything” brain, the first responder to a prompt of a sequence of four; and 8) converting a “blurry photo” of the User in the environment taken at the onset of the process into a focused picture.</p>
<p>The figure is an extract from the A-User Architecture Reference Model, showing Domain Access generating two streams of data related to the User and its environment, and the two recipient AI Modules: User State Refinement and Personality Alignment.</p>
<p><img loading="lazy" decoding="async" class=" wp-image-36887 aligncenter" src="https://mpai.community/wp-content/uploads/2025/12/ax.png" alt="" width="476" height="177" /></p>
<p>This is possible because the A-User receives the inputs that drive the alignment of the A-User’s personality with the refined User’s Entity State:</p>
<ul>
<li><strong>Personality Context Guide</strong>: Domain-specific hints from Domain Access (e.g., “medical setting → professional tone”).</li>
<li><strong>Expressive State Guide</strong>: Emotional and attentional posture of the User (e.g., stressed → calming personality).</li>
<li><strong>Refined Response</strong>: Text from Basic Knowledge in response to the User State Refinement prompt.</li>
<li><strong>Personality Alignment Directive</strong>: Commands to tweak or override the personality profile (e.g., “switch to negotiator mode”) from the A-User Control AI Module (AIM).</li>
</ul>
<p>A smart integration of these inputs enables the A-User to deliver the following outputs:</p>
<ul>
<li><strong>A-User Entity State</strong>: The complete internal state of the A-User’s synthetic personality (tone, gestures, behavioural traits).</li>
<li><strong>PA-Prompt</strong>: A new prompt formulation including the final A-User personality (so the words sound right).</li>
<li><strong>Personality Alignment Status</strong>: A structured report of personality and expressive alignment to the A-User Control AIM.</li>
</ul>
<p>Here are some examples of personality profiles that Personality Alignment could use or blend:</p>
<ul>
<li><strong>Mentor Mode</strong>: Calm tone, structured answers, moderate gestures, empathy cues.</li>
<li><strong>Entertainer Mode</strong>: Upbeat tone, humour, wide gestures, animated expressions.</li>
<li><strong>Negotiator Mode</strong>: Firm tone, controlled gestures, strategic phrasing.</li>
<li><strong>Assistant Mode</strong>: Neutral tone, minimal gestures, clarity-first responses.</li>
</ul>
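<p>To make the idea concrete, here is a hypothetical sketch of how an implementation might select and adapt one of the example profiles listed above. The profile contents and the selection rule are illustrative assumptions, not behaviour specified by MPAI.</p>

```python
# Hypothetical sketch of profile selection in a Personality Alignment AIM.
# Profiles follow the post's examples; the rules are illustrative assumptions.

PROFILES = {
    "mentor":      {"tone": "calm",    "gestures": "moderate",   "traits": ["empathy", "structure"]},
    "entertainer": {"tone": "upbeat",  "gestures": "wide",       "traits": ["humour", "animation"]},
    "negotiator":  {"tone": "firm",    "gestures": "controlled", "traits": ["strategy"]},
    "assistant":   {"tone": "neutral", "gestures": "minimal",    "traits": ["clarity"]},
}

def align_personality(context_hint: str, expressive_state: str,
                      directive: str = "") -> dict:
    """Pick a base profile from a Control directive (if any) or the domain
    context, then adapt delivery to the User's expressive state."""
    # A Personality Alignment Directive overrides the context-derived choice.
    base = directive or ("assistant" if "medical" in context_hint else "mentor")
    profile = dict(PROFILES[base])
    if expressive_state == "stressed":
        profile["tone"] = "calming"  # e.g. a stressed User gets a calming delivery
    return profile
```

For instance, a "medical setting" context hint with a stressed User would yield an assistant-style profile with a calming tone, while a "switch to negotiator mode" directive would force the firm, controlled profile regardless of context.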
<p><strong>Key Points to Take Away about Personality Alignment</strong></p>
<ul>
<li><strong>Purpose</strong>: Makes A-User’s delivery context-aware and emotionally tuned.</li>
<li><strong>Inputs</strong>: Domain context, user emotional state, refined semantic response, and directives.</li>
<li><strong>Outputs</strong>: Personality blueprint (Entity State), PA-Prompt for expressive rendering, and alignment status.</li>
<li><strong>Profiles</strong>: For example, Mentor, Entertainer, Negotiator, Assistant – each with tone, gesture style, and behavioural traits.</li>
<li><strong>Goal</strong>: Coherent, adaptive interaction that feels natural and persuasive in the metaverse.</li>
</ul>
<p>The post <a href="https://blog.chiariglione.org/personality-alignment-the-style-engine-of-a-user/">Personality Alignment: The Style Engine of A-User</a> appeared first on <a href="https://blog.chiariglione.org">Leonardo&#039;s Blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
