<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>Open Source Integrated AI and Semantic Tech</title>
	<atom:link href="https://integratedsemantics.org/feed/" rel="self" type="application/rss+xml"/>
	<link>https://integratedsemantics.org</link>
	<description>Open Source, AI, LLM, KG, Alfresco, Content, ECM, Graph DBs, Search, BI, Dashboards, Visualization, BPM, UIs, Angular, React, Vue.js, Python, Java, NLP, RDF, SPARQL, OWL</description>
	<lastBuildDate>Wed, 29 Oct 2025 00:14:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Flexible GraphRAG or RAG is flexing to the max: 8 Graph databases, 10 Vector databases, 3 search engines working (can docker compose all including dashboards), 13 data sources</title>
		<link>https://integratedsemantics.org/2025/10/28/flexible-graphrag-or-rag-is-flexing-to-the-max-8-graph-databases-10-vector-databases-3-search-engines-working-can-docker-compose-all-including-dashboards-13-data-sources/</link>
					<comments>https://integratedsemantics.org/2025/10/28/flexible-graphrag-or-rag-is-flexing-to-the-max-8-graph-databases-10-vector-databases-3-search-engines-working-can-docker-compose-all-including-dashboards-13-data-sources/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 29 Oct 2025 00:14:00 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Alfresco]]></category>
		<category><![CDATA[Amazon Neptune]]></category>
		<category><![CDATA[ArcadeDB]]></category>
		<category><![CDATA[FalkorDB]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[Knowledge Graphs]]></category>
		<category><![CDATA[Kuzu]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[MCP]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[RAG]]></category>
		<category><![CDATA[Vector Databases]]></category>
		<category><![CDATA[FalkorDB Database]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[KG]]></category>
		<category><![CDATA[Kuzu Database]]></category>
		<category><![CDATA[LadybugDB]]></category>
		<category><![CDATA[Memgraph]]></category>
		<category><![CDATA[NebulaGraph]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=423</guid>

					<description><![CDATA[Flexible GraphRAG on GitHub X.com&#160;Steve Reiner @stevereiner&#160; LinkedIn&#160;Steve Reiner LinkedIn Posts Flexible GraphRAG or Flexible RAG, an Apache 2.0 open source Python platform, is now flexing to the max using LlamaIndex, in terms of supporting more databases and data sources: supports 8 graph databases, 10 vector databases, 3 search engines, and 13 data sources. Also &#8230; <a href="https://integratedsemantics.org/2025/10/28/flexible-graphrag-or-rag-is-flexing-to-the-max-8-graph-databases-10-vector-databases-3-search-engines-working-can-docker-compose-all-including-dashboards-13-data-sources/" class="more-link">Continue reading<span class="screen-reader-text"> "Flexible GraphRAG or RAG is flexing to the max: 8 Graph databases, 10 Vector databases, 3 search engines working (can docker compose all including dashboards), 13 data sources"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>X.com</strong>&nbsp;<a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner @stevereiner</a>&nbsp; <strong>LinkedIn</strong>&nbsp;<a href="https://www.linkedin.com/in/steve-reiner-abbb5320/recent-activity/all/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn Posts</a></p>



<p><strong><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG</a></strong> or <strong><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible RAG</a></strong>, an <strong>Apache 2.0 open source Python</strong> platform, is now flexing to the max using <strong><a href="https://www.llamaindex.ai/" target="_blank" rel="noreferrer noopener">LlamaIndex</a></strong>, in terms of supporting more databases and data sources: it supports <strong>8 graph databases, 10 vector databases, 3 search engines, and 13 data sources</strong>. It also supports <strong>knowledge graph auto-building, schemas</strong>, LlamaIndex <strong>LLMs</strong>, <strong><a href="https://github.com/docling-project/docling" target="_blank" rel="noreferrer noopener">Docling</a></strong> doc processing (<strong><a href="https://www.llamaindex.ai/llamaparse" target="_blank" rel="noreferrer noopener">LlamaParse</a></strong> coming soon), <strong>GraphRAG</strong> mode, <strong>RAG only</strong> mode, <strong>hybrid search</strong>, and <strong>AI query / chat</strong>. It has <strong>React, Vue, and Angular frontends</strong> and a <strong>FastAPI backend</strong>. <strong>The React, Vue, and Angular frontends and the backend now work on Windows, Mac, and Linux (standalone or in Docker).</strong> There is also a <strong>FastMCP MCP server</strong>. <strong>A convenient docker compose can include any of the databases</strong> (vector, graph, search, <a href="https://www.hyland.com/en/solutions/products/alfresco-platform" target="_blank" rel="noreferrer noopener">Alfresco</a>) <strong>and dashboards / consoles</strong>.</p>



<figure data-wp-context="{&quot;imageId&quot;:&quot;69dc4b0b7bb18&quot;}" data-wp-interactive="core/image" data-wp-key="69dc4b0b7bb18" class="wp-block-image size-large wp-lightbox-container"><img fetchpriority="high" decoding="async" width="1024" height="551" src="https://integratedsemantics.org/wp-content/uploads/2025/10/image-2-1024x551.png" alt="" class="wp-image-430" srcset="https://integratedsemantics.org/wp-content/uploads/2025/10/image-2-1024x551.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-2-300x162.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-2-768x413.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-2.png 1200w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></figure>



<p>(Flexible GraphRAG AI chat shown with Hyland product web page(s), used with the web pages data source to auto-generate a Neo4j graph.)</p>



<p><strong>Convenient docker compose:</strong> you can choose to include any of the supported <strong>10 vector and 8 graph databases, Elasticsearch, OpenSearch</strong>, <strong>and <a href="https://www.hyland.com/en/solutions/products/alfresco-platform" target="_blank" rel="noreferrer noopener">Hyland Alfresco Community</a></strong>. Enable one in docker-compose.yaml by simply removing the # comment in front of its include. <strong>Dashboards / consoles for these databases</strong> are also included in the docker compose choices where available (either in the yaml file for the database or, for some, in a separate yaml file include).</p>
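<p>As an illustration, enabling a database this way might look like the following sketch (the include file names here are hypothetical, not taken from the repository):</p>

```yaml
# docker-compose.yaml (illustrative sketch): uncomment an include to add that stack.
include:
  - neo4j.yaml          # enabled: Neo4j graph database plus its browser console
  # - qdrant.yaml       # remove the "#" to enable Qdrant
  # - alfresco.yaml     # remove the "#" to enable Hyland Alfresco Community
```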



<p>You can <strong>run the databases in docker while the backend and frontends (React, Angular, Vue) run standalone</strong> in separate terminal windows. Alternatively, <strong>you can include the backend and frontends in the docker compose</strong> by adding the app-stack.yaml and proxy.yaml includes. There is now no config duplication between standalone backend+frontends and full docker mode: previously all config had to be repeated in app-stack.yaml; now an env_file: include reuses the standalone backend .env and overrides it with an include of docker.env (for configs that need host.docker.internal).</p>
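<p>The env_file layering described above works because later files override earlier ones. Here is a minimal stdlib-only Python sketch of that precedence; the parsing is simplified, and only the .env / docker.env file names and the host.docker.internal override come from the post (the variable names are illustrative):</p>

```python
# Sketch of docker-compose-style env_file layering: later files override earlier ones.

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, ignoring blanks and # comments (simplified)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def layer_env(*file_texts: str) -> dict:
    """Merge env files in order, like a compose env_file list: later wins."""
    merged = {}
    for text in file_texts:
        merged.update(parse_env(text))
    return merged

# .env holds the standalone config; docker.env overrides only what docker needs.
dotenv = "LLM_PROVIDER=ollama\nOLLAMA_HOST=http://localhost:11434\n"
docker_env = "OLLAMA_HOST=http://host.docker.internal:11434\n"
merged = layer_env(dotenv, docker_env)
```

The base file stays the single source of truth; the override file only carries the handful of values that differ inside containers.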



<p><strong>All 8 graph databases working</strong>: <a href="https://neo4j.com/" target="_blank" rel="noreferrer noopener">Neo4j</a>, <a href="https://arcadedb.com/" target="_blank" rel="noreferrer noopener">ArcadeDB</a>, <a href="https://www.falkordb.com/" target="_blank" rel="noreferrer noopener">FalkorDB</a>, the now-archived <a href="https://github.com/kuzudb/kuzu" target="_blank" rel="noreferrer noopener">Kuzu</a> (support for its <a href="https://ladybugdb.com/" target="_blank" rel="noreferrer noopener">LadybugDB</a> fork is todo), <a href="https://www.nebula-graph.io/" target="_blank" rel="noreferrer noopener">NebulaGraph</a>, <a href="https://memgraph.com/" target="_blank" rel="noreferrer noopener">Memgraph</a>, <a href="https://aws.amazon.com/neptune/" target="_blank" rel="noreferrer noopener">Amazon Neptune</a>, and <a href="https://aws.amazon.com/neptune/" target="_blank" rel="noreferrer noopener">Amazon Neptune Analytics</a>.</p>



<p><strong>All 10 vector databases working</strong>: <a href="https://qdrant.tech/" target="_blank" rel="noreferrer noopener">Qdrant</a>, <a href="https://www.elastic.co/elasticsearch" target="_blank" rel="noreferrer noopener">Elasticsearch</a> vector, <a href="https://opensearch.org/" target="_blank" rel="noreferrer noopener">OpenSearch</a> vector, <a href="https://neo4j.com/" target="_blank" rel="noreferrer noopener">Neo4j</a> vector, <a href="https://milvus.io/" target="_blank" rel="noreferrer noopener">Milvus</a>, <a href="https://weaviate.io/" target="_blank" rel="noreferrer noopener">Weaviate</a>, <a href="https://www.trychroma.com/" target="_blank" rel="noreferrer noopener">Chroma</a> (both HTTP and embedded), <a href="https://www.pinecone.io/" target="_blank" rel="noreferrer noopener">Pinecone</a>, <a href="https://www.postgresql.org/" target="_blank" rel="noreferrer noopener">PostgreSQL</a> + <a href="https://github.com/pgvector/pgvector" target="_blank" rel="noreferrer noopener">pgvector</a>, and <a href="https://lancedb.com/" target="_blank" rel="noreferrer noopener">LanceDB</a>.</p>



<p><strong>All 3 Search engines working</strong>: <a href="https://www.elastic.co/elasticsearch" target="_blank" rel="noreferrer noopener">Elasticsearch</a>, <a href="https://opensearch.org/" target="_blank" rel="noreferrer noopener">OpenSearch</a>, LlamaIndex built-in BM25</p>



<p><strong>New data sources, using LlamaIndex readers</strong>: (1) working, without document processing: Web Pages, Wikipedia, YouTube; (2) working, with document processing: S3; (3) still to test, with document processing: Google Drive, Microsoft OneDrive, Azure Blob, GCS, Box, SharePoint.</p>



<p><strong>Support for <a href="https://github.com/docling-project/docling" target="_blank" rel="noreferrer noopener">Docling</a> document processing</strong> is currently available. The ability to configure <strong><a href="https://www.llamaindex.ai/llamaparse" target="_blank" rel="noreferrer noopener">LlamaParse</a></strong> instead is <strong>coming soon</strong>.</p>



<p><strong>Original data sources with document processing</strong> that don&#8217;t use LlamaIndex readers: filesystem, Alfresco, CMIS. <strong><a href="https://www.hyland.com/en/solutions/products/alfresco-platform" target="_blank" rel="noreferrer noopener">Hyland Alfresco Community</a></strong> can be included in the docker compose by removing the &#8220;#&#8221; comment from the beginning of its include.</p>



<p><strong>LLMs</strong>: Flexible GraphRAG uses LlamaIndex LLMs (LlamaIndex supports a great many). It currently has config for: (1) tested and working: OpenAI, Ollama; (2) untested: Anthropic Claude, Google Gemini, Azure OpenAI.</p>
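<p>A minimal sketch of what provider selection from config can look like; the LLM_PROVIDER key and the status table below are illustrative assumptions, not the project&#8217;s actual config keys:</p>

```python
# Hypothetical sketch: map a configured provider name to its support status.
# LLM_PROVIDER and the table entries are assumptions for illustration only.

SUPPORTED = {
    "openai": "tested",
    "ollama": "tested",
    "anthropic": "untested",
    "gemini": "untested",
    "azure_openai": "untested",
}

def pick_provider(env: dict) -> str:
    """Return the configured provider, defaulting to openai; raise if unknown."""
    provider = env.get("LLM_PROVIDER", "openai").lower()
    if provider not in SUPPORTED:
        raise ValueError(f"unsupported LLM provider: {provider}")
    return provider

choice = pick_provider({"LLM_PROVIDER": "ollama"})
```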



<p><strong>Previous Flexible GraphRAG posts:</strong></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG Initial Version Blog Post</a></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/" target="_blank" rel="noreferrer noopener">New Tabbed UI for Flexible GraphRAG (and Flexible RAG)</a></p>



<p>See <a href="https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Performance improvements, FalkorDB graph database support added</a></p>



<p>See <a href="https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Supports ArcadeDB Graph Database with new LlamaIndex Integration</a></p>



<p>See <a href="https://integratedsemantics.org/2025/10/28/flexible-graphrag-amazon-neptune-neptune-analytics-and-graph-explorer-support-added/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Amazon Neptune, Neptune Analytics, and Graph Explorer support added</a></p>



]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/10/28/flexible-graphrag-or-rag-is-flexing-to-the-max-8-graph-databases-10-vector-databases-3-search-engines-working-can-docker-compose-all-including-dashboards-13-data-sources/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Flexible GraphRAG: Amazon Neptune, Neptune Analytics, and Graph Explorer support added</title>
		<link>https://integratedsemantics.org/2025/10/28/flexible-graphrag-amazon-neptune-neptune-analytics-and-graph-explorer-support-added/</link>
					<comments>https://integratedsemantics.org/2025/10/28/flexible-graphrag-amazon-neptune-neptune-analytics-and-graph-explorer-support-added/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 28 Oct 2025 21:32:06 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Amazon Neptune]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[Knowledge Graphs]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[KG]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=441</guid>

					<description><![CDATA[Flexible GraphRAG on GitHub X.com&#160;Steve Reiner @stevereiner&#160; LinkedIn&#160;Steve Reiner LinkedIn Posts Amazon Neptune, and Amazon Neptune Analytics support is working and checked into the Flexible GraphRAG GitHub. Graph Explorer is supported and working with these graph databases and is also checked in. It runs in Docker and can be used to query and visualize &#8230; <a href="https://integratedsemantics.org/2025/10/28/flexible-graphrag-amazon-neptune-neptune-analytics-and-graph-explorer-support-added/" class="more-link">Continue reading<span class="screen-reader-text"> "Flexible GraphRAG: Amazon Neptune, Neptune Analytics, and Graph Explorer support added"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>X.com</strong>&nbsp;<a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner @stevereiner</a>&nbsp; <strong>LinkedIn</strong>&nbsp;<a href="https://www.linkedin.com/in/steve-reiner-abbb5320/recent-activity/all/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn Posts</a></p>



<p><strong><a href="https://aws.amazon.com/neptune/" target="_blank" rel="noreferrer noopener">Amazon Neptune</a> and </strong><a href="https://aws.amazon.com/neptune/" target="_blank" rel="noreferrer noopener">Amazon Neptune Analytics</a> support is working and checked into the Flexible GraphRAG GitHub.<br><br><a href="https://t.co/XF0HyqKCuV" target="_blank" rel="noreferrer noopener">Graph Explorer</a> is supported and working with these graph databases and is also checked in. It runs in Docker and can be used to query and visualize with both Amazon Neptune and Amazon Neptune Analytics.<br></p>



<figure class="wp-block-image size-large is-resized"><a href="https://github.com/aws/graph-explorer" target="_blank" rel=" noreferrer noopener"><img decoding="async" width="1024" height="503" src="https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-1024x503.png" alt="" class="wp-image-442" style="width:619px;height:auto" srcset="https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-1024x503.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-300x147.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-768x377.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-1536x755.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-3-1200x589.png 1200w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-3.png 2048w" sizes="(max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>






<p><strong>Gremlin and openCypher can be used with Amazon Neptune in Graph Explorer</strong>, while <strong>openCypher is the primary language for Neptune Analytics</strong>. <strong>SPARQL</strong> is available for graph queries in the general Neptune database; SPARQL support in Graph Explorer is officially on the AWS development roadmap, but no firm timeline has been announced as of October 2025.</p>



<p><strong>Note that for Neptune Analytics</strong>, Flexible GraphRAG had to add a wrapper class to filter out vector queries from its LlamaIndex integration that were causing errors in Neptune Analytics. This wasn&#8217;t an issue with regular Neptune.</p>
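<p>The wrapper idea above can be sketched as a store that intercepts queries before they reach the backend and drops the vector ones it cannot handle. Everything below is a hypothetical illustration: the class names, the structured_query method, and the "vector.similarity" marker are not the project&#8217;s actual code.</p>

```python
# Illustrative sketch of filtering out vector queries before a graph backend
# (standing in for Neptune Analytics) sees them. All names are hypothetical.

class VectorQueryFilteringStore:
    def __init__(self, inner_store):
        self.inner = inner_store

    def structured_query(self, query: str, params=None):
        # Drop vector-similarity queries the backend would reject.
        if "vector.similarity" in query:
            return []  # behave as if there are no vector results
        return self.inner.structured_query(query, params)

class FakeStore:
    """Stand-in backend that just echoes what it was asked."""
    def structured_query(self, query, params=None):
        return [{"ok": True, "query": query}]

store = VectorQueryFilteringStore(FakeStore())
vector_result = store.structured_query("CALL vector.similarity ...")
graph_result = store.structured_query("MATCH (n) RETURN n LIMIT 1")
```

The wrapper leaves ordinary graph queries untouched, which is why regular Neptune (which accepts both kinds) needs no such filter.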



<p><strong><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG</a></strong> or <a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible RAG</a>, an <strong>Apache 2.0 open source Python</strong> platform, supports <strong>8 graph databases, 10 vector databases, 3 search engines, and 13 data sources</strong>. It supports <strong>knowledge graph auto-building, schemas</strong>, LlamaIndex <strong>LLMs</strong>, <strong>Docling</strong> doc processing (<strong>LlamaParse</strong> coming soon), <strong>GraphRAG</strong> mode, <strong>RAG only</strong> mode, <strong>hybrid search</strong>, and <strong>AI query / chat</strong>. It has <strong>React, Vue, and Angular frontends</strong> and a <strong>FastAPI backend</strong>. <strong>The React, Vue, and Angular frontends and the backend now work on Windows, Mac, and Linux (standalone or in Docker).</strong> It has a convenient <strong>docker compose</strong> that can include <strong>any of the databases (vector, graph, search, Alfresco) and dashboards / consoles</strong>. There is also a Flexible GraphRAG <strong>MCP server</strong>.</p>



<p><strong>Previous Flexible GraphRAG posts:</strong></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG Initial Version Blog Post</a></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/" target="_blank" rel="noreferrer noopener">New Tabbed UI for Flexible GraphRAG (and Flexible RAG)</a></p>



<p>See <a href="https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Performance improvements, FalkorDB graph database support added</a></p>



<p>See <a href="https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Supports ArcadeDB Graph Database with new LlamaIndex Integration</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/10/28/flexible-graphrag-amazon-neptune-neptune-analytics-and-graph-explorer-support-added/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Flexible GraphRAG: Supports ArcadeDB Graph Database with new LlamaIndex Integration</title>
		<link>https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/</link>
					<comments>https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 18 Oct 2025 17:09:21 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ArcadeDB]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[Knowledge Graphs]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[KG]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=403</guid>

					<description><![CDATA[Flexible GraphRAG on GitHub X.com Steve Reiner @stevereiner  LinkedIn Steve Reiner LinkedIn Posts Flexible GraphRAG added support for the ArcadeDB graph database using this new integration: ArcadeDB LlamaIndex Integration and arcadedb-python available: arcadedb-llama-index Github arcadedb-python Github ArcadeDB (Apache 2.0) is a next generation Multi-Model Database for Graphs, Documents, Key/Value and Time-Series. Supports SQL, Cypher, Gremlin and MongoDB &#8230; <a href="https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/" class="more-link">Continue reading<span class="screen-reader-text"> "Flexible GraphRAG: Supports ArcadeDB Graph Database with new LlamaIndex Integration"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>X.com</strong> <a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner @stevereiner</a>  <strong>LinkedIn</strong> <a href="https://www.linkedin.com/in/steve-reiner-abbb5320/recent-activity/all/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn Posts</a></p>



<p><strong>Flexible GraphRAG added support for the ArcadeDB graph database using this new integration:</strong></p>



<p><strong>ArcadeDB LlamaIndex Integration and arcadedb-python</strong> available:</p>



<p><a href="https://github.com/stevereiner/arcadedb-llama-index" target="_blank" rel="noreferrer noopener">arcadedb-llama-index Github</a></p>



<p><a href="https://github.com/stevereiner/arcadedb-python" target="_blank" rel="noreferrer noopener">arcadedb-python Github</a></p>



<p><strong>ArcadeDB</strong> (Apache 2.0) is a next-generation Multi-Model Database for Graphs, Documents, Key/Value and Time-Series. It supports SQL, Cypher, Gremlin and MongoDB queries.</p>



<figure class="wp-block-image size-full is-resized"><a href="https://arcadedb.com" target="_blank" rel=" noreferrer noopener"><img decoding="async" width="367" height="100" src="https://integratedsemantics.org/wp-content/uploads/2025/10/image-1.png" alt="" class="wp-image-416" style="width:432px;height:auto" srcset="https://integratedsemantics.org/wp-content/uploads/2025/10/image-1.png 367w, https://integratedsemantics.org/wp-content/uploads/2025/10/image-1-300x82.png 300w" sizes="(max-width: 367px) 85vw, 367px" /></a></figure>



<p><a href="https://arcadedb.com" target="_blank" rel="noreferrer noopener">arcadedb.com</a></p>



<p><a href="https://github.com/ArcadeData/arcadedb" target="_blank" rel="noreferrer noopener">ArcadeDB Github</a></p>



<p><strong>Flexible GraphRAG</strong> is an open source Python platform supporting Docling document processing, knowledge graph auto-building, schemas, 13 data sources, 10 vector databases, 7 graph databases, the Elasticsearch and OpenSearch search engines, RAG, GraphRAG, hybrid search, and AI query / chat. It has React, Vue, and Angular frontends and a FastAPI backend. It also has a FastMCP MCP server.</p>



<p><strong>Previous Flexible GraphRAG posts:</strong></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG Initial Version Blog Post</a></p>



<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/" target="_blank" rel="noreferrer noopener">New Tabbed UI for Flexible GraphRAG (and Flexible RAG)</a></p>



<p>See <a href="https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG: Performance improvements, FalkorDB graph database support added</a></p>



]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/10/18/flexible-graphrag-supports-arcadedb-graph-database-with-new-llamaindex-integration/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Flexible-GraphRAG: Performance improvements, FalkorDB graph database support added</title>
		<link>https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/</link>
					<comments>https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 09 Sep 2025 22:27:10 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[FalkorDB]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[Kuzu]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Docling]]></category>
		<category><![CDATA[FalkorDB Database]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[Kuzu Database]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=390</guid>

					<description><![CDATA[See&#160;Flexible GraphRAG Initial Version Blog Post See New Tabbed UI for Flexible GraphRAG (and Flexible RAG) Flexible GraphRAG on GitHub X.com&#160;Steve Reiner @stevereiner&#160;LinkedIn&#160;Steve Reiner LinkedIn]]></description>
										<content:encoded><![CDATA[
<p>See&nbsp;<a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG Initial Version Blog Post</a></p>



<p>See <a href="https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/" target="_blank" rel="noreferrer noopener">New Tabbed UI for Flexible GraphRAG (and Flexible RAG)</a></p>



<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>X.com</strong>&nbsp;<a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner @stevereiner</a>&nbsp;<strong>LinkedIn</strong>&nbsp;<a href="https://www.linkedin.com/in/steve-reiner-abbb5320/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn</a></p>



<ol class="wp-block-list">
<li><strong>Improved the performance of flexible-graphrag</strong>
<ul class="wp-block-list">
<li>Added parallel Docling document conversion, which improved pipeline timing</li>



<li>No longer running KeywordExtractor/SummaryExtractor, which also improved pipeline timing</li>



<li>Ollama parallel processing (needs OLLAMA_NUM_PARALLEL=4)</li>



<li>Async PropertyGraphIndex with use_async=True</li>



<li>Increased kg_batch_size from 10 to 20 chunks</li>



<li>Added logging for performance timing</li>
</ul>
</li>



<li><strong>Added performance testing results</strong> to readme.md (6 docs with <strong>openai</strong> and each graph database: <strong>neo4j</strong>, <strong>kuzu</strong>, <strong>falkordb</strong>)</li>



<li>Added docs/performance.md: has <strong>performance testing</strong> results for each graph database with 2, 4, and 6 docs with <strong>openai</strong> and 2 and 4 docs with <strong>ollama</strong></li>



<li><strong>Added support for the FalkorDB</strong> graph database <a href="https://www.falkordb.com/" target="_blank" rel="noreferrer noopener">https://www.falkordb.com/</a> and <a href="https://github.com/FalkorDB/falkordb" target="_blank" rel="noreferrer noopener">https://github.com/FalkorDB/falkordb</a>. The abstractions of LlamaIndex, LlamaIndex support for FalkorDB, and the configurability of flexible-graphrag made this a relatively straightforward process.</li>



<li>Added LlamaIndex DynamicLLMPathExtractor support (currently works with openai, not ollama)</li>



<li>Added config of kg extractor type (simple, schema, or dynamic) to set which LlamaIndex extractor to use (SimpleLLMPathExtractor, <em>SchemaLLMPathExtractor</em>, or DynamicLLMPathExtractor)</li>



<li>Added config of MAX_TRIPLETS_PER_CHUNK and MAX_PATHS_PER_CHUNK</li>



<li>Added readme.md info on system environment setup of ollama for performance and parallelism (OLLAMA_CONTEXT_LENGTH, OLLAMA_NUM_PARALLEL, etc.)</li>



<li>Added new default schema with 35+ relationship combinations, more relations, and entity types: PERSON, ORGANIZATION, TECHNOLOGY, PROJECT, LOCATION</li>



<li>Fixed file upload dialog performance in all 3 frontends: <strong>React</strong>, <strong>Angular</strong>, and <strong>Vue</strong> (chosen files now display quickly after the dialog&#8217;s OK)</li>
</ol>
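<p>The kg_batch_size change above simply means processing chunks in larger batches, so fewer extraction round trips are needed. A stdlib-only Python sketch of the idea (the function name and structure are illustrative, not the project&#8217;s code):</p>

```python
# Illustrative sketch of batching chunks for KG extraction; kg_batch_size
# mirrors the config value mentioned above (raised from 10 to 20).

def batch_chunks(chunks: list, kg_batch_size: int = 20) -> list:
    """Split chunks into batches; fewer, larger batches mean fewer LLM round trips."""
    return [chunks[i:i + kg_batch_size] for i in range(0, len(chunks), kg_batch_size)]

chunks = [f"chunk-{i}" for i in range(45)]
batches = batch_chunks(chunks, kg_batch_size=20)  # 45 chunks -> batches of 20, 20, 5
```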



]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/09/09/flexible-graphrag-performance-improvements-falkordb-graph-database-support-added/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>New Tabbed UI for Flexible GraphRAG (and Flexible RAG)</title>
		<link>https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/</link>
					<comments>https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 24 Aug 2025 08:26:29 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Alfresco]]></category>
		<category><![CDATA[CMIS]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[MCP]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[RAG]]></category>
		<category><![CDATA[Angular]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[Hyland]]></category>
		<category><![CDATA[Kuzu Database]]></category>
		<category><![CDATA[MCP Server]]></category>
		<category><![CDATA[OpenSearch]]></category>
		<category><![CDATA[Qdrant]]></category>
		<category><![CDATA[React]]></category>
		<category><![CDATA[Search]]></category>
		<category><![CDATA[Vector Databases]]></category>
		<category><![CDATA[Vue]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=361</guid>

					<description><![CDATA[See Flexible GraphRAG Initial Version Blog Post Flexible GraphRAG on GitHub X.com&#160;Steve Reiner @stevereiner&#160;LinkedIn&#160;Steve Reiner LinkedIn The Angular, React, and Vue frontend clients now have different stages organized into different tabs so they have room. They all can be switched between a dark and light theme using the slider at the top right corner. New &#8230; <a href="https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/" class="more-link">Continue reading<span class="screen-reader-text"> "New Tabbed UI for Flexible GraphRAG (and Flexible RAG)"</span></a>]]></description>
										<content:encoded><![CDATA[
<p>See <a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" target="_blank" rel="noreferrer noopener">Flexible GraphRAG Initial Version Blog Post</a></p>



<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>X.com</strong>&nbsp;<a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner @stevereiner</a>&nbsp;<strong>LinkedIn</strong>&nbsp;<a href="https://www.linkedin.com/in/steve-reiner-abbb5320/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn</a></p>



<p>The <strong>Angular</strong>, <strong>React</strong>, and <strong>Vue</strong> frontend clients now have the different stages <strong>organized into separate tabs</strong> so each has room. They can all be switched between a <strong>dark and light theme</strong> using the slider at the top right corner. New functionality beyond the old UI includes a <strong>file upload dialog</strong>, <strong>drag/drop upload</strong>, a table with <strong>file processing progress bars</strong>, and a <strong>new Chat UI</strong>. Note that the GitHub readme.md page has collapse / expand sections for screenshots in both dark and light themes for React, and shows only the light theme for Angular and Vue.</p>



<h2 class="wp-block-heading"><strong>Sources Tab</strong></h2>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light.png"><img loading="lazy" decoding="async" width="1024" height="548" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-1024x548.png" alt="" class="wp-image-377" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-1024x548.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-300x161.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-768x411.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-1536x822.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-2048x1096.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-sources-light-1200x642.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>Allows you to choose files to upload from the <strong>file system</strong>, or specify a file or folder path in <strong>Alfresco</strong> or <strong>CMIS</strong> repositories. For filesystem files you can now use a <strong>file upload dialog</strong> or <strong>drag/drop files onto the drop area</strong> in the Sources tab view.</p>



<p>For Alfresco and CMIS there is no file picker UI currently (only a field for a folder or file path). Note that the file path is a basic CMIS-style path like /Shared/GraphRAG/cmispress.txt. You also specify a username, password, and base URL, prefilled like <a href="http://localhost:8080/alfresco">http://localhost:8080/alfresco</a> for Alfresco and <a href="http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom">http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom</a> for CMIS.</p>



<p>You then click on &#8220;<strong>Configure Processing</strong>&#8220;</p>



<h2 class="wp-block-heading"><strong>Processing Tab</strong></h2>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing.png"><img loading="lazy" decoding="async" width="1024" height="548" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-1024x548.png" alt="" class="wp-image-380" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-1024x548.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-300x161.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-768x411.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-1536x822.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-2048x1096.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-processing-1200x642.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>Here you can modify which files get processed by selecting / unselecting file checkboxes, remove a file from the processing list by using the x on its row, or use the Remove Selected button.<br>Then click on <strong>Start Processing</strong> to process the selected files.<br>There is an overall progress bar, plus per-file progress bars. Note that currently all files are processed as one batch in the backend, so the file progress bars will show the same status.<br>You can cancel processing by using the Cancel button.</p>



<h2 class="wp-block-heading"><strong>Search Tab</strong></h2>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light.png"><img loading="lazy" decoding="async" width="1024" height="548" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-1024x548.png" alt="" class="wp-image-373" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-1024x548.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-300x161.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-768x411.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-1536x822.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-2048x1096.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-hybrid-search-light-1200x642.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query.png"><img loading="lazy" decoding="async" width="1024" height="548" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-1024x548.png" alt="" class="wp-image-381" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-1024x548.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-300x161.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-768x411.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-1536x822.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-2048x1096.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-search-qa-query-1200x642.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p></p>



<p>Here you can do a <strong>Hybrid Search</strong> (Fulltext+Vector RAG+GraphRAG) or (Fulltext+Vector RAG) depending on configuration. This gives you a traditional results list. For now, ignore the scores and extra results; just check the order of the results.</p>



<p>With the <strong>Q&amp;A Query</strong>, you ask a question in a conversational style. This is an <strong>AI query</strong> using the configured <strong>LLM</strong> and the information submitted in the Processing tab (held in full-text, vector, and graph &#8220;memory&#8221;).</p>



<h2 class="wp-block-heading"><strong>Chat Tab</strong></h2>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1.png"><img loading="lazy" decoding="async" width="1024" height="547" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-1024x547.png" alt="" class="wp-image-385" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-1024x547.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-300x160.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-768x410.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-1536x821.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-2048x1094.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-light-1-1200x641.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1.png"><img loading="lazy" decoding="async" width="1024" height="547" src="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-1024x547.png" alt="" class="wp-image-386" srcset="https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-1024x547.png 1024w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-300x160.png 300w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-768x410.png 768w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-1536x821.png 1536w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-2048x1094.png 2048w, https://integratedsemantics.org/wp-content/uploads/2025/08/react-chat-using-1-1200x641.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>This is a traditional chat-style UI allowing you to enter <strong>multiple conversational Q&amp;A queries</strong> (<strong>AI queries</strong> like the one-at-a-time query in the Search tab). You hit <strong>Enter or click the arrow button</strong> to submit a query. You can also use <strong>Shift+Enter</strong> to add an extra new line to your question. The chat view area displays a history of questions and answers. You can clear it with the <strong>Clear History</strong> button.</p>



<h2 class="wp-block-heading"><strong>Flexible RAG</strong></h2>



<p>I used Flexible RAG in the title to indicate that Flexible GraphRAG can be <strong>configured to just be a RAG system</strong>. This would still have the flexibility that the LlamaIndex abstractions provide to plug in different search engines/databases, vector databases, and LLMs. <strong>You still get Angular, React, and Vue frontends, MCP server support, a FastAPI backend, and Docker support</strong>. You could configure just a search engine, or just a graph database for automatic knowledge graph building using the configurable schema support.</p>



<p>For <strong>RAG configuration</strong>:<br>Flexible GraphRAG can be set up to do <strong>RAG only</strong> without the GraphRAG (see env-sample.txt and set up your environment in .env, etc.):</p>



<ul class="wp-block-list">
<li>Have SEARCH_DB and SEARCH_DB_CONFIG set for elasticsearch, opensearch, or bm25 </li>



<li>Have VECTOR_DB and VECTOR_DB_CONFIG setup for neo4j, qdrant, elasticsearch, or opensearch </li>



<li>Have GRAPH_DB set to none and ENABLE_KNOWLEDGE_GRAPH=false. </li>
</ul>
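Putting the three bullets together, a RAG-only .env might look like the sketch below. The variable names come from the bullets above, but the *_CONFIG values are illustrative assumptions; see env-sample.txt for the real formats:

```shell
SEARCH_DB=elasticsearch
SEARCH_DB_CONFIG={"url": "http://localhost:9200"}      # illustrative; see env-sample.txt
VECTOR_DB=qdrant
VECTOR_DB_CONFIG={"host": "localhost", "port": 6333}   # illustrative; see env-sample.txt
GRAPH_DB=none
ENABLE_KNOWLEDGE_GRAPH=false
```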



<p></p>



<h2 class="wp-block-heading"><strong>Server Monitoring and Management UI</strong></h2>



<p>Basically you can use the Docker setup and get a docker compose that runs all of the following at the same time (or a subset, by commenting out a compose include) without having to bring these up individually: <strong>Alfresco</strong> docker compose (which has Share and ACA), <strong>Neo4j</strong> docker (which has a console URL), <strong>Kuzu</strong> API server (not used; Kuzu is used embedded), <strong>Kuzu explorer</strong>, <strong>Qdrant</strong> (which has a dashboard), <strong>Elasticsearch</strong>, the Elasticsearch <strong>Kibana</strong> dashboard, and <strong>OpenSearch</strong> (which has an OpenSearch Dashboards URL).</p>
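The modular setup relies on Docker Compose's top-level include feature (Compose v2.20+). A sketch of the idea; the file names here are illustrative assumptions, so refer to the project's actual compose files:

```yaml
# docker-compose.yaml (sketch): comment out an include to skip that service
include:
  - compose/neo4j-compose.yaml
  - compose/qdrant-compose.yaml
  - compose/elasticsearch-compose.yaml
  # - compose/opensearch-compose.yaml   # commented out: running externally
  # - compose/alfresco-compose.yaml
```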



<p>So you can set up a browser window with tabs for all these dashboards, Alfresco Share / ACA, and the Neo4j console. <strong>This is your monitoring and management UI</strong>.</p>



<p>You can use the Neo4j console, Elasticsearch Kibana, Qdrant dashboard, and OpenSearch Dashboards to delete full-text indexes (Elasticsearch, OpenSearch), delete vector indexes (Qdrant, Neo4j, Elasticsearch, OpenSearch), and delete nodes and relationships (Neo4j and Kuzu consoles).</p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/08/24/new-tabbed-ui-for-flexible-graphrag-and-flexible-rag/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Flexible GraphRAG initial version</title>
		<link>https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/</link>
					<comments>https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 16 Aug 2025 02:36:35 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Alfresco]]></category>
		<category><![CDATA[CMIS]]></category>
		<category><![CDATA[Flexible-GraphRAG]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[MCP]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[RAG]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[FastAPI]]></category>
		<category><![CDATA[FastMCP]]></category>
		<category><![CDATA[Flexible GraphRAG]]></category>
		<category><![CDATA[Graph Databases]]></category>
		<category><![CDATA[Hyland]]></category>
		<category><![CDATA[Kuzu Database]]></category>
		<category><![CDATA[MCP Server]]></category>
		<category><![CDATA[OpenSearch]]></category>
		<category><![CDATA[Qdrant]]></category>
		<category><![CDATA[Search]]></category>
		<category><![CDATA[Vector Databases]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=348</guid>

					<description><![CDATA[Flexible GraphRAG on GitHub Flexible GraphRAG is an open source python platform supporting document processing, Knowledge Graph auto-building, Schema support, RAG and GraphRAG setup, hybrid search (fulltext, vector, graph), and AI Q&#38;A query capabilities. X.com Steve Reiner @stevereiner LinkedIn Steve Reiner LinkedIn Has a MCP Server, Fast API Backend, Docker support, Angular, React, and Vue UI &#8230; <a href="https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/" class="more-link">Continue reading<span class="screen-reader-text"> "Flexible GraphRAG initial version"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><a href="https://github.com/stevereiner/flexible-graphrag" target="_blank" rel="noreferrer noopener">Flexible GraphRAG on GitHub</a></p>



<p><strong>Flexible GraphRAG</strong> is an open source Python platform supporting document processing, Knowledge Graph auto-building, Schema support, RAG and GraphRAG setup, hybrid search (fulltext, vector, graph), and AI Q&amp;A query capabilities.</p>



<p><strong>X.com</strong>   <a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">Steve Reiner  @stevereiner</a>      <strong>LinkedIn</strong>  <a href="https://www.linkedin.com/in/steve-reiner-abbb5320/" target="_blank" rel="noreferrer noopener">Steve Reiner LinkedIn</a></p>



<p></p>



<p>Has an <strong>MCP server, FastAPI backend, Docker support, and Angular, React, and Vue UI clients</strong></p>



<p>Built with <strong>LlamaIndex</strong>, which provides abstractions that allow multiple vector databases, search engines, graph databases, and LLMs to be supported.</p>



<p><strong>Supports currently:</strong></p>



<p> <strong>Graph Databases: Neo4j, Kuzu</strong></p>



<p> <strong>Vector Databases: Neo4j, Qdrant, Elasticsearch, OpenSearch </strong></p>



<p><strong>Search Databases/Engines: Elasticsearch, OpenSearch, LlamaIndex built-in BM25</strong></p>



<p><strong>LLMs: OpenAI, Ollama</strong></p>



<p><strong>Data Sources: File System, Hyland Alfresco, CMIS</strong></p>



<p>A configurable hybrid search system that optionally combines vector similarity search, full-text search, and knowledge graph GraphRAG on documents processed (with Docling) from multiple data sources (<strong>filesystem, Alfresco, CMIS,</strong> etc.). It has both a <strong>FastAPI backend with REST endpoints and a Model Context Protocol (MCP) server</strong> for MCP clients like Claude Desktop, etc. It also has simple <strong>Angular, React, and Vue UI clients</strong> (which use the REST APIs of the FastAPI backend) for interacting with the system.</p>



<ul class="wp-block-list">
<li><strong>Hybrid Search</strong>: Combines vector embeddings, BM25 full-text search, and graph traversal for comprehensive document retrieval</li>



<li><strong>Knowledge Graph GraphRAG</strong>: Extracts entities and relationships from documents to create graphs in graph databases for graph-based reasoning</li>



<li><strong>Configurable Architecture</strong>: LlamaIndex provides abstractions for vector databases, graph databases, search engines, and LLM providers</li>



<li><strong>Multi-Source Ingestion</strong>: Processes documents from filesystems, CMIS repositories, and Alfresco systems</li>



<li><strong>FastAPI Server with REST API</strong>: REST endpoints for document ingesting, hybrid search, and AI Q&amp;A query</li>



<li><strong>MCP Server</strong>: MCP server that provides MCP clients like Claude Desktop with tools for document and text ingesting, hybrid search, and AI Q&amp;A query.</li>



<li><strong>UI Clients</strong>: Angular, React, and Vue UI clients support choosing the data source (filesystem, Alfresco, CMIS, etc.), ingesting documents, performing hybrid searches and AI Q&amp;A Queries.</li>



<li><strong>Deployment Flexibility</strong>: Supports both standalone and Docker deployment modes. Docker infrastructure provides modular database selection via docker-compose includes &#8211; vector, graph, and search databases can be included or excluded with a single comment. Choose between hybrid deployment (databases in Docker, backend and UIs standalone) or full containerization.</li>
</ul>
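To make &#8220;hybrid&#8221; concrete: one common way to merge rankings from full-text, vector, and graph retrievers is reciprocal rank fusion. This is a generic sketch of the technique, not necessarily the fusion method Flexible GraphRAG or LlamaIndex uses internally:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked result lists into one fused ranking.

    rankings: list of lists of document ids, best first.
    k: damping constant; 60 is the conventional RRF value.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + rank + 1) to the doc's score,
            # so a doc near the top of several lists rises in the fused order.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse full-text, vector, and graph result lists
fulltext = ["doc2", "doc1", "doc3"]
vector = ["doc1", "doc2", "doc4"]
graph = ["doc1", "doc3"]
fused = reciprocal_rank_fusion([fulltext, vector, graph])
# doc1 ranks first: it appears high in all three lists
```

A fused list like this is what a results view can display as one combined ranking, regardless of which retriever produced each hit.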



<p><strong>Check-ins 8/5/25 thru 8/9/25 provided:</strong><br>1. Added LlamaIndex support, configurability, KG Building, GraphRAG, Hybrid Search, AI Q&amp;A Query, Angular, React, and Vue UIs.   Based on  <a href="https://github.com/stevereiner/cmis-graphrag-ui" target="_blank" rel="noreferrer noopener">CMIS GraphRAG UI</a>  and <a href="https://github.com/stevereiner/cmis-graphrag" target="_blank" rel="noreferrer noopener">CMIS GraphRAG</a> which didn&#8217;t use LlamaIndex (used neo4j-graphrag python package) <br>2. Also added a FastMCP based MCP Server that uses the FastAPI server.</p>



<p><strong>Check-in today  8/15/25  provided:</strong></p>



<p><strong>Added: Multiple Databases Support, Docker, Schemas, and Ollama support</strong></p>



<ol class="wp-block-list">
<li>Leveraging <strong>LlamaIndex</strong> abstractions, added support for more search, vector, and graph databases (beyond the previous Neo4j and built-in BM25). Now supports:<br><strong>Neo4j graph database, or Neo4j graph and vectors</strong> (also Neo4j browser / console)<br><strong>Elasticsearch search, or search and separate vector</strong> (also Kibana dashboard)<br><strong>OpenSearch search, or search+vector hybrid search</strong> (also OpenSearch Dashboards)<br><strong>Qdrant vector database</strong> (also its dashboard)<br><strong>Kuzu graph database</strong> support (also Kuzu explorer)<br>LlamaIndex built-in local BM25 full text search<br>(Note: LlamaIndex supports additional vector and graph databases which we could support)</li>



<li><strong>Added composable Docker support</strong><br>a. As a way to run search, graph, and vector databases, plus dashboards and Alfresco<br>(comment out includes for what you have externally or don&#8217;t use)<br>b. Databases together with the Flexible GraphRAG backend and the Angular, React, and Vue UIs</li>



<li><strong>Added Schema support</strong> for Neo4j (optional) and Kuzu (required). Supports default and custom<br>schemas that you configure in your environment (.env file, etc.)</li>



<li>Added <strong>Ollama</strong> support in addition to <strong>OpenAI</strong>. Tested with Ollama gpt-oss:20b, llama3.1, and llama3.2.<br>(Note: LlamaIndex supports additional LLMs which we could support)</li>
</ol>
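As an illustration of the schema support in item 3, a custom schema constrains extraction to known entity and relation types. The shape below is an assumption for illustration only (see env-sample.txt for the actual configuration format); the entity types are the ones the project's default schema uses, while the relation names are made-up examples:

```json
{
  "entities": ["PERSON", "ORGANIZATION", "TECHNOLOGY", "PROJECT", "LOCATION"],
  "relations": ["WORKS_AT", "USES", "LOCATED_IN", "PART_OF"],
  "validation_schema": {
    "PERSON": ["WORKS_AT", "LOCATED_IN"],
    "ORGANIZATION": ["USES", "LOCATED_IN"],
    "PROJECT": ["USES", "PART_OF"]
  }
}
```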



<p></p>



<p></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/08/15/flexible-graphrag-initial-version/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Python-Alfresco-MCP-Server 1.1.0 released</title>
		<link>https://integratedsemantics.org/2025/07/30/python-alfresco-mcp-server-1-1-0-released/</link>
					<comments>https://integratedsemantics.org/2025/07/30/python-alfresco-mcp-server-1-1-0-released/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 30 Jul 2025 07:44:30 +0000</pubDate>
				<category><![CDATA[Alfresco]]></category>
		<category><![CDATA[MCP]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[MCP Server]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=344</guid>

					<description><![CDATA[Video: Python-Alfresco-MCP-Server with Claude Desktop and MCP Inspectorhttps://x.com/stevereiner/status/1950418564562706655 Model Context Protocol Server (MCP) for Alfresco Content Services (Community and Enterprise) This uses FastMCP 2.0 and Python-Alfresco-API A full featured MCP server for Alfresco in search and content management areas. Features complete documentation, tests, examples,and config samples for various MCP clients (Claude Desktop, MCP Inspector, references &#8230; <a href="https://integratedsemantics.org/2025/07/30/python-alfresco-mcp-server-1-1-0-released/" class="more-link">Continue reading<span class="screen-reader-text"> "Python-Alfresco-MCP-Server 1.1.0 released"</span></a>]]></description>
										<content:encoded><![CDATA[
<p><strong>Video:  Python-Alfresco-MCP-Server with Claude Desktop and MCP Inspector</strong><br><a href="https://x.com/stevereiner/status/1950418564562706655" target="_blank" rel="noreferrer noopener">https://x.com/stevereiner/status/1950418564562706655</a></p>



<p>Model Context Protocol (MCP) server for Alfresco Content Services (Community and Enterprise)</p>



<p> This uses FastMCP 2.0 and Python-Alfresco-API</p>



<p>A full-featured MCP server for Alfresco covering search and content management. Features complete documentation, tests, examples,<br>and config samples for various MCP clients (Claude Desktop, MCP Inspector, and references for configuring others).<br></p>



<p><strong>Python-Alfresco-MCP-Server on Github</strong><br><a href="https://github.com/stevereiner/python-alfresco-mcp-server" target="_blank" rel="noreferrer noopener">https://github.com/stevereiner/python-alfresco-mcp-server</a></p>



<p><strong>Tools:</strong><br>Basic search, advanced search, metadata search, and cmis query,<br>upload, download, check-in, checkout, cancel checkout,<br>create folder, folder browse, delete node,<br>get/set properties, repository info.</p>



<p>(With python-alfresco-api having full coverage of the 7 Alfresco REST APIs,<br>you could customize which tools you want from the 191 in core, 29 in workflow,<br>3 in authentication, 1 in search, 1 in discovery, 18 in model, and 1 search SQL for Solr)</p>



<p><strong>Resources</strong>: repository info repeated</p>



<p><strong>Prompts</strong>: search and analyze</p>



<p><strong>Latest on Github 7/29/25</strong></p>



<ul class="wp-block-list">
<li>readme.md focuses on install with uv and uvx</li>



<li>docs\install_with_pip_pipx.md covers install with pip and pipx</li>



<li>sample configs for Claude Desktop (stdio) with uv, uvx, pipx for Windows and Mac</li>



<li>sample configs for mcp-inspector with uv, uvx, pipx for both http and stdio</li>
</ul>
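For example, a minimal Claude Desktop entry using uvx might look like this. It follows the standard claude_desktop_config.json shape; the server name is arbitrary, and the repo's sample configs are authoritative (Alfresco connection settings would be added per its docs):

```json
{
  "mcpServers": {
    "alfresco": {
      "command": "uvx",
      "args": ["python-alfresco-mcp-server"]
    }
  }
}
```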



<p><strong>Python-Alfresco-MCP-Server  </strong>v<strong>1.1.0     7/25/25</strong></p>



<ul class="wp-block-list">
<li>Refactored code into a single file per tool (organized in tools/search/,<br>tools/core/, resources/, prompts/, utils/)</li>



<li>Changes for python-alfresco-api 1.1.1</li>



<li>Much better testing (143/143 tests passing)</li>



<li>Added uv support (latest readme and config samples also have uvx)</li>



<li>First version on PyPI.org</li>
</ul>



<p><strong>Python-Alfresco-MCP-Server  v1.0   6/24/25</strong><br>Changed to use FastMCP vs original code</p>



<p></p>



<p><strong>Python-Alfresco-MCP-Server on PyPI</strong><br><a href="https://pypi.org/project/python-alfresco-mcp-server/" target="_blank" rel="noreferrer noopener">https://pypi.org/project/python-alfresco-mcp-server/</a><br>(On PyPI, so you don&#8217;t need the source; you still need Python and, optionally, the fast uv installer)</p>



<p><strong>These can be used to test the install or run it once</strong><br># Tests that installation worked<br><code>uv tool run python-alfresco-mcp-server --help</code><br><code>uvx python-alfresco-mcp-server --help</code>&nbsp;&nbsp;# alias for uv tool run</p>



<p><strong>This install may not be needed</strong><br><code>uv tool install python-alfresco-mcp-server</code></p>



<p><strong>Python-Alfresco-API on Github</strong><br><a href="https://github.com/stevereiner/python-alfresco-api" target="_blank" rel="noreferrer noopener">https://github.com/stevereiner/python-alfresco-api</a></p>



<p><strong>Python-Alfresco-API on PyPI</strong><br><a href="https://pypi.org/project/python-alfresco-api/" target="_blank" rel="noreferrer noopener">https://pypi.org/project/python-alfresco-api/</a></p>



<p><strong>X.com</strong><br><a href="https://x.com/stevereiner" target="_blank" rel="noreferrer noopener">https://x.com/stevereiner</a></p>



<p><strong>LinkedIn</strong><br><a href="https://www.linkedin.com/in/steve-reiner-abbb5320/" target="_blank" rel="noreferrer noopener">https://www.linkedin.com/in/steve-reiner-abbb5320/</a></p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/07/30/python-alfresco-mcp-server-1-1-0-released/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Python-Alfresco-API  Updated</title>
		<link>https://integratedsemantics.org/2025/07/21/python-alfresco-api-updated/</link>
					<comments>https://integratedsemantics.org/2025/07/21/python-alfresco-api-updated/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 21 Jul 2025 22:53:06 +0000</pubDate>
				<category><![CDATA[Alfresco]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=340</guid>

					<description><![CDATA[ This is a complete Python client package for developing python code and apps for Alfresco. It supports using all 7 Alfresco REST APIs: Core, Search, Authentication, Discovery, Model, Workflow, Search SQL (Solr admin). It has Event support (activemq or Event Gateway). The project has extensive documentation, examples, and tests. See Python-Alfresco-MCP-Server . This is a &#8230; <a href="https://integratedsemantics.org/2025/07/21/python-alfresco-api-updated/" class="more-link">Continue reading<span class="screen-reader-text"> "Python-Alfresco-API  Updated"</span></a>]]></description>
										<content:encoded><![CDATA[
<p></p>



<p>This is a complete Python client package for developing Python code and apps for Alfresco. It supports using all 7 Alfresco REST APIs: Core, Search, Authentication, Discovery, Model, Workflow, and Search SQL (Solr admin). It has Event support (ActiveMQ or the Event Gateway). The project has extensive documentation, examples, and tests.</p>



<p>See <a href="https://github.com/stevereiner/python-alfresco-mcp-server" target="_blank" rel="noreferrer noopener">Python-Alfresco-MCP-Server</a> .  This is a Model Context Protocol (MCP) Server that uses Python Alfresco API</p>



<p><a href="https://github.com/stevereiner/python-alfresco-api" target="_blank" rel="noreferrer noopener">https://github.com/stevereiner/python-alfresco-api</a></p>



<p><a href="https://pypi.org/project/python-alfresco-api" target="_blank" rel="noreferrer noopener">https://pypi.org/project/python-alfresco-api</a></p>



<p>You need Python  3.10+ installed.</p>



<p>This can be used to install:</p>



<p><code>pip install python-alfresco-api</code>  </p>



<p>The released v1.1.1 version goes well beyond the previous 1.0.x version.</p>



<p>It has a generated, well-organized hierarchical structure for the higher-level clients (1.0.x only had 7 wrapper files). It&#8217;s generated from the low-level &#8220;raw clients&#8221; produced by openapi-python-client.</p>



<p>Pydantic v2 models are now used in the high-level clients. Hopefully in v1.2 the low-level clients will use them too; this can be done by configuring the openapi-python-client generator with templates. Some things need to be worked out, so no guarantees. This would simplify things and avoid model conversions.</p>



<p>Utilities were added for upload, download, versioning, searching, etc. Using the utilities reduces the amount of code you need for these operations.</p>
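<p>For a sense of what such utilities wrap, this sketch builds the Alfresco Core REST API upload endpoint URL by hand (the helper function name is mine, not from the package; a POST of multipart &#8220;filedata&#8221; to this URL, with auth, creates the file):</p>

```python
def children_url(base_url: str, node_id: str = "-my-") -> str:
    """Build the Alfresco Core API URL for creating (uploading) children
    of a node. "-my-" is the alias for the current user's home folder."""
    return (f"{base_url}/alfresco/api/-default-/public/alfresco"
            f"/versions/1/nodes/{node_id}/children")

print(children_url("http://localhost:8080"))
# http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-my-/children
```

An upload utility handles this plus authentication, the multipart body, and version/overwrite options in one call.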



<p>A well organized hierarchical structure of linked markdown docs for the high level client APIs and models is also generated.</p>



<p>The documentation now has diagrams for the overall architecture, model levels, and client types.</p>



<p>The readme now covers how to install an Alfresco Community docker from GitHub, in case you don&#8217;t already have an Enterprise or Community version of Alfresco Content Services. Also see <a href="https://www.hyland.com/en/solutions/products/alfresco-platform" target="_blank" rel="noreferrer noopener">Hyland Alfresco</a>.</p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2025/07/21/python-alfresco-api-updated/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Creating Knowledge Graphs automatically for GraphRAG: Part 2: with LLMs</title>
		<link>https://integratedsemantics.org/2024/11/01/creating-knowledge-graphs-automatically-for-graphrag-part-2-with-llm/</link>
					<comments>https://integratedsemantics.org/2024/11/01/creating-knowledge-graphs-automatically-for-graphrag-part-2-with-llm/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 01 Nov 2024 04:17:12 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Knowledge Graphs]]></category>
		<category><![CDATA[LLM Graph Builder]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[KG]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=310</guid>

					<description><![CDATA[And the winner is using LLMs to create knowledge graphs over using NLP. Can LLMs do a better job? The Neo4j LLM Graph Builder in particular, has shown they can. What about the cost of using OpenAI along with the loss of privacy of data by submitting? The answer is free and local LLM models &#8230; <a href="https://integratedsemantics.org/2024/11/01/creating-knowledge-graphs-automatically-for-graphrag-part-2-with-llm/" class="more-link">Continue reading<span class="screen-reader-text"> "Creating Knowledge Graphs automatically for GraphRAG: Part 2: with LLMs"</span></a>]]></description>
										<content:encoded><![CDATA[
<p>And the winner is: using LLMs to create knowledge graphs, over using NLP. Can LLMs do a better job? The <a href="https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/" target="_blank" rel="noreferrer noopener">Neo4j LLM Graph Builder</a> in particular has shown they can. What about the cost of using OpenAI, along with the loss of privacy from submitting your data? The answer: free and local LLM models (Llama3 versions are available thru <a href="https://ollama.com/" target="_blank" rel="noreferrer noopener">ollama</a>) work with Graph Builder too. I tested with OpenAI GPT-4o, <a href="https://ollama.com/library/llama3" target="_blank" rel="noreferrer noopener">llama3</a>, <a href="https://ollama.com/library/llama3.1" target="_blank" rel="noreferrer noopener">llama3.1</a>, and <a href="https://ollama.com/library/llama3.2" target="_blank" rel="noreferrer noopener">llama3.2</a>. I noticed <a href="https://ollama.com/library/gemma2" target="_blank" rel="noreferrer noopener">gemma2</a> is also available thru ollama. With these local LLMs, you will need a high end Nvidia card for best results.</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder.jpg"><img loading="lazy" decoding="async" width="1024" height="487" src="https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-1024x487.jpg" alt="" class="wp-image-317" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-1024x487.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-300x143.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-768x365.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-1536x730.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-2048x974.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/Neo4j-LLM-Graph-Builder-1200x570.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p></p>



<p><a href="https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/" target="_blank" rel="noreferrer noopener">Neo4j Labs LLM Knowledge Graph Builder</a> main info site</p>



<p>Short <a href="https://www.youtube.com/watch?v=LlNy5VmV290" target="_blank" rel="noreferrer noopener">Youtube demo</a> video</p>



<p>The <a href="https://llm-graph-builder.neo4jlabs.com/" target="_blank" rel="noreferrer noopener">Online LLM Graph Builder</a> can be used. You need to provide it with your Neo4j Aura connection info (you can create an account for a free AuraDB). It only has the Diffbot, OpenAI, and Gemini LLM models available.</p>



<p>Graph Builder can upload from local files, AWS S3, web pages, Wikipedia, and Youtube.  Google GCS can be a source if configured.</p>



<p>First choose the LLM model to use. Then upload one or more files and choose generate graph. You can view the graphs with the basic viewer (which allows hiding chunk nodes and community nodes, so you can see just the entities and relationships). The Bloom viewer, which is more complicated, is also available.</p>



<p>You can also chat with the data using GraphRAG and your chosen LLM. Answers have an icon below them that, when clicked, provides info on which graph doc sources, entities, and chunks were used to answer.</p>



<p><a href="https://github.com/neo4j-labs/llm-graph-builder" target="_blank" rel="noreferrer noopener">LLM Graph Builder</a>  Github project (Apache  2.0 open source)</p>



<p>The online version doesn&#8217;t have the llama3 models, so you need to clone the github project and build locally. To use the Meta Llama3 models, you need to configure it: use example.env to create a .env file, then add an optional OpenAI key and the LLM model configuration, and indicate your initial Neo4j database info (Neo4j connection info can also be provided in the UI). Then do docker compose up. I have a fork of the main branch in <a href="https://github.com/stevereiner/llm-graph-builder" target="_blank" rel="noreferrer noopener">my LLM Graph Builder</a> that adds: configuration for llama3, llama3.1, llama3.2, and openai gpt-4 choices; some Neo4j connection config examples; a switch to port 8090 to not conflict with Alfresco on 8080; an additional debug log so you can check on the model config; and a sample files folder with space-station.txt.</p>
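<p>For reference, the relevant .env entries look roughly like this (illustrative values from my setup; check example.env in the repo for the exact variable names your version expects):</p>

```shell
# Neo4j connection (can also be entered in the UI)
NEO4J_URI="neo4j://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"

# Optional, only needed for the OpenAI model choices
OPENAI_API_KEY="your-openai-key"

# Local model served by ollama: "<model name>,<ollama base url>"
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434"
```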



<p>Speaking of Alfresco, I could extend my <a href="https://github.com/stevereiner/alfresco-genai-semantic" target="_blank" rel="noreferrer noopener">Alfresco GenAI Semantic</a> project to call the separable backend of Graph Builder to generate a knowledge graph from new or updated Alfresco documents that have a new custom aspect. Note that the backend currently may only support the app&#8217;s own kinds of sources. Also, in terms of UI integration, Alfresco&#8217;s ADF components and the ACA client use Angular, while Neo4j Graph Builder&#8217;s front end uses React (as do some of Neo4j&#8217;s other software projects).</p>



<p>space-station.txt with OpenAI GPT-4o:</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o.jpg"><img loading="lazy" decoding="async" width="1024" height="663" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-1024x663.jpg" alt="" class="wp-image-322" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-1024x663.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-300x194.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-768x498.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-1536x995.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-2048x1327.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-1200x777.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>space-station.txt with Meta Llama3:</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-scaled.jpg"><img loading="lazy" decoding="async" width="1024" height="641" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-1024x641.jpg" alt="" class="wp-image-325" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-1024x641.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-300x188.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-768x481.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-1536x962.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-2048x1283.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3-1200x752.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>space-station.txt with Meta Llama3.1:</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1.jpg"><img loading="lazy" decoding="async" width="1024" height="663" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-1024x663.jpg" alt="" class="wp-image-323" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-1024x663.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-300x194.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-768x498.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-1536x995.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-2048x1327.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.1-1200x777.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>space-station.txt with smaller Meta Llama3.2:</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-scaled.jpg"><img loading="lazy" decoding="async" width="1024" height="641" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-1024x641.jpg" alt="" class="wp-image-324" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-1024x641.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-300x188.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-768x481.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-1536x962.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-2048x1283.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-meta-llama3.2-1200x752.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>OpenAI GPT-4o with Albert Einstein Wikipedia page (340 nodes, 230 relationships):</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-scaled.jpg"><img loading="lazy" decoding="async" width="1024" height="653" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-1024x653.jpg" alt="" class="wp-image-326" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-1024x653.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-300x191.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-768x490.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-1536x979.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-2048x1306.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-gpt-4o-einstein-wikpedia-1200x765.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>Meta Llama3 with the Albert Einstein Wikipedia page (150 nodes, 150 relationships). Not shown: Llama3.1 (161 nodes, 85 relationships) and Llama3.2 (125 nodes, 76 relationships).</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-scaled.jpg"><img loading="lazy" decoding="async" width="1024" height="641" src="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-1024x641.jpg" alt="" class="wp-image-329" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-1024x641.jpg 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-300x188.jpg 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-768x481.jpg 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-1536x962.jpg 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-2048x1283.jpg 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/graph-builder-openai-llama3-einstein-wikpedia-1200x752.jpg 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2024/11/01/creating-knowledge-graphs-automatically-for-graphrag-part-2-with-llm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Creating Knowledge Graphs automatically for GraphRAG: Part 1: with NLP</title>
		<link>https://integratedsemantics.org/2024/10/28/creating-knowledge-graphs-automatically-for-graphrag-part-1-with-nlp/</link>
					<comments>https://integratedsemantics.org/2024/10/28/creating-knowledge-graphs-automatically-for-graphrag-part-1-with-nlp/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 29 Oct 2024 02:50:46 +0000</pubDate>
				<category><![CDATA[Knowledge Graphs]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[GraphRAG]]></category>
		<category><![CDATA[KG]]></category>
		<category><![CDATA[LlamaIndex]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[Neo4j]]></category>
		<category><![CDATA[Relik]]></category>
		<guid isPermaLink="false">https://integratedsemantics.org/?p=287</guid>

					<description><![CDATA[(next post Part 2: with LLM) I first investigated how NLP could be used for both entity recognition and relation extraction for creating a knowledge graphs of content. Tomaz Bratanic&#8217;s Neo4j blog article  used Relik for NLP along with LlamaIndex for creating a graph in Neo4j, and setting up an embedding model for use with LLM &#8230; <a href="https://integratedsemantics.org/2024/10/28/creating-knowledge-graphs-automatically-for-graphrag-part-1-with-nlp/" class="more-link">Continue reading<span class="screen-reader-text"> "Creating Knowledge Graphs automatically for GraphRAG: Part 1: with NLP"</span></a>]]></description>
										<content:encoded><![CDATA[
<p>(next post Part 2: with LLM)</p>



<p>I first investigated how NLP could be used for both entity recognition and relation extraction to create a knowledge graph of content. Tomaz Bratanic&#8217;s Neo4j <a href="https://neo4j.com/developer-blog/entity-linking-relationship-extraction-relik-llamaindex/" target="_blank" rel="noreferrer noopener">blog article</a> used <a href="https://github.com/SapienzaNLP/relik" target="_blank" rel="noreferrer noopener">Relik</a> for NLP along with <a href="https://www.llamaindex.ai/" target="_blank" rel="noreferrer noopener">LlamaIndex</a> for creating a graph in <a href="https://neo4j.com/" target="_blank" rel="noreferrer noopener">Neo4j</a>, and for setting up an embedding model for use with LLM queries.</p>



<p>In my <a href="https://github.com/stevereiner/llama-relik" target="_blank" rel="noreferrer noopener">llama_relik</a> github project, I took the notebook from the blog article and changed it to use fastcoref instead of coreferee. Fastcoref was mentioned in the comments on the <a href="https://medium.com/neo4j/entity-linking-and-relationship-extraction-with-relik-in-llamaindex-ca18892c169f" target="_blank" rel="noreferrer noopener">medium article</a> version of the Neo4j blog article; it&#8217;s supposed to work better. There is also a python file in the project that can be used instead of the notebook.</p>
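<p>Why coreference resolution matters here: pronouns in later sentences get replaced with the entity they refer to, so the relation extractor sees explicit entity mentions. A minimal pure-Python sketch of that substitution step (fastcoref reports clusters of character spans; the resolve function below is my own illustration, not the fastcoref API):</p>

```python
def resolve_coref(text: str, clusters: list[list[tuple[int, int]]]) -> str:
    """Replace every non-first mention in each cluster with the first
    (representative) mention, using (start, end) character spans."""
    replacements = []
    for cluster in clusters:
        head_start, head_end = cluster[0]
        head = text[head_start:head_end]
        for start, end in cluster[1:]:
            replacements.append((start, end, head))
    # Apply right-to-left so earlier offsets stay valid
    for start, end, head in sorted(replacements, reverse=True):
        text = text[:start] + head + text[end:]
    return text

text = "The International Space Station orbits Earth. It hosts astronauts."
clusters = [[(0, 31), (46, 48)]]  # "The International Space Station" <- "It"
print(resolve_coref(text, clusters))
```

After this step, a relation extractor can produce (International Space Station, HOSTS, astronauts) instead of a useless triple anchored on &#8220;It&#8221;.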



<p>I submitted some fixes for Relik on Windows, but it performs best on Linux in general, where it was more able to use the GPU &#8220;cuda&#8221; mode instead of &#8220;cpu&#8221;.</p>



<p>Similar work has been done using Rebel for NLP by <a href="https://towardsdatascience.com/extract-knowledge-from-text-end-to-end-information-extraction-pipeline-with-spacy-and-neo4j-502b2b1e0754" target="_blank" rel="noreferrer noopener">Neo4j / Tomaz Bratanic</a>, <a href="https://medium.com/@sauravjoshi23/building-knowledge-graphs-rebel-llamaindex-and-rebel-llamaindex-8769cf800115" target="_blank" rel="noreferrer noopener">Saurav Joshi</a>, and  <a href="https://medium.com/@kamaljp/building-knowledge-graphs-with-rebel-step-by-step-guide-for-extracting-entities-enriching-info-ec29f2566de" data-type="link" data-id="https://medium.com/@kamaljp/building-knowledge-graphs-with-rebel-step-by-step-guide-for-extracting-entities-enriching-info-ec29f2566de" target="_blank" rel="noreferrer noopener">Qrious Kamal</a></p>



<p>Note that Relik has closed information extraction (CIE) models that do both entity linking (EL) and relation extraction (RE). It also has models focused on either EL or RE.</p>



<p>Below is a screenshot from Neo4j of a knowledge graph created with the python file from the <a href="https://github.com/stevereiner/llama-relik" target="_blank" rel="noreferrer noopener">llama_relik</a> project, using the &#8220;relik-cie-small&#8221; model with the spacy space station sample text (ignore the chunk node and its MENTIONS relations). Notice how it has separate entities for &#8220;ISS&#8221; and &#8220;International Space Station&#8221;.</p>



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/image.png"><img loading="lazy" decoding="async" width="1024" height="547" src="https://integratedsemantics.org/wp-content/uploads/2024/10/image-1024x547.png" alt="" class="wp-image-291" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/image-1024x547.png 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-300x160.png 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-768x410.png 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-1536x821.png 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2048x1094.png 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-1200x641.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>



<p>The &#8220;relik-cie-large&#8221; model finds more relations, as shown in the screenshot below. It also has separate entities for &#8220;ISS&#8221; and &#8220;International Space Station&#8221; (and throws in a second &#8220;International Space Station&#8221;).</p>
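<p>Duplicates like &#8220;ISS&#8221; vs &#8220;International Space Station&#8221; usually call for an entity resolution pass after extraction. A minimal sketch of alias-based merging over extracted triples (the alias map here is hand-written; in practice you might derive it with string similarity or embeddings):</p>

```python
def merge_entities(triples, aliases):
    """Rewrite (subject, relation, object) triples so aliased entity
    names map to one canonical name, dropping duplicates the merge
    produces."""
    canon = lambda name: aliases.get(name, name)
    merged = {(canon(s), rel, canon(o)) for s, rel, o in triples}
    return sorted(merged)

triples = [
    ("ISS", "ORBITS", "Earth"),
    ("International Space Station", "ORBITS", "Earth"),
    ("International Space Station", "HOSTS", "Astronauts"),
]
aliases = {"ISS": "International Space Station"}
print(merge_entities(triples, aliases))
```

The two ORBITS triples collapse into one, leaving a single &#8220;International Space Station&#8221; node.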



<figure class="wp-block-image size-large"><a href="https://integratedsemantics.org/wp-content/uploads/2024/10/image-2.png"><img loading="lazy" decoding="async" width="1024" height="547" src="https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-1024x547.png" alt="" class="wp-image-302" srcset="https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-1024x547.png 1024w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-300x160.png 300w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-768x410.png 768w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-1536x821.png 1536w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-2048x1094.png 2048w, https://integratedsemantics.org/wp-content/uploads/2024/10/image-2-1200x641.png 1200w" sizes="auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px" /></a></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://integratedsemantics.org/2024/10/28/creating-knowledge-graphs-automatically-for-graphrag-part-1-with-nlp/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>