<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://daud1.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://daud1.github.io/" rel="alternate" type="text/html" /><updated>2026-05-03T01:00:54+00:00</updated><id>https://daud1.github.io/feed.xml</id><title type="html">daudi’s</title><subtitle>work/personal-ish blog</subtitle><author><name>David Mwebaza</name></author><entry><title type="html">A comparison between ‘traditional’ and hybrid (graph) RAG applications</title><link href="https://daud1.github.io/2026/04/15/trad-vs-hybrid.html" rel="alternate" type="text/html" title="A comparison between ‘traditional’ and hybrid (graph) RAG applications" /><published>2026-04-15T00:00:00+00:00</published><updated>2026-04-15T00:00:00+00:00</updated><id>https://daud1.github.io/2026/04/15/trad-vs-hybrid</id><content type="html" xml:base="https://daud1.github.io/2026/04/15/trad-vs-hybrid.html"><![CDATA[<p>I’ve recently come into some unscheduled downtime and have been trying to keep busy.
One of the things I’ve been exploring is LLMs and generative applications. So, I took a short course targeted at developers and while exploring the material, I stumbled upon a few tutorials showing a way to enhance RAG output using knowledge graphs.
I’m writing this post, and maybe a few others to follow, to document my process and notes as I try to replicate it.</p>

<p>Components and Structure</p>

<p>RAG in this context stands for Retrieval Augmented Generation.</p>

<p>It is a technique for enhancing the output of a large language model by providing it, along with your query/prompt, with relevant context from outside its training data.</p>

<p>Typically you provide an LLM with a snippet of text (query/prompt) that it completes (answers) based on its training data. In a RAG application, you also provide specific material (text, audio, etc.) that the LLM references in addition to its training data when generating the response.</p>
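<p>As a minimal sketch of that augmentation step (assuming nothing about any particular LLM client or library): retrieval happens elsewhere, and this function only stitches the retrieved chunks and the user's question into a single prompt. The question and chunk text are made-up examples.</p>

```python
def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Combine retrieved context chunks and the user query into one prompt."""
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Illustrative inputs; in a real app the chunks come from the vector store.
prompt = build_rag_prompt(
    "What does the warranty cover?",
    ["Section 4: The warranty covers parts and labour for 12 months."],
)
```

<p>The resulting string is what actually gets sent to the LLM in place of the bare question.</p>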

<p>Ingestion</p>

<p>For a typical RAG application, the augmenting document is uploaded by a user and read using one of an ever-increasing library of document loaders. You then split it into chunks and compute a numerical/vector representation of each chunk, called a vector embedding, which you store in a vector store. This numerical representation allows you to search the data by similarity in an efficient way.</p>
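<p>The chunking step can be as simple as a sliding character window. A minimal sketch (real loaders usually split on sentences or tokens instead, and the sizes here are arbitrary):</p>

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows that overlap, so a
    sentence cut at one chunk boundary still appears whole in a neighbour."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

# A 500-character stand-in document with recognisable positions.
document = "".join(str(i % 10) for i in range(500))
chunks = chunk_text(document)
```

<p>Each chunk would then be passed to an embedding model and the resulting vectors written to the store.</p>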

<p>For a graph-RAG application, however, you convert the document into a knowledge graph and generate vector embeddings from this graph. Alternatively, you can generate the vector embeddings the usual way, as described above.</p>
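<p>One common shape for that knowledge graph, assuming the LLM is prompted to extract (subject, relation, object) triples from the text: the triples below are hand-written stand-ins for real LLM output, and the entity/relation names are made up for illustration.</p>

```python
from collections import defaultdict

# Stand-ins for triples an LLM would extract from the uploaded document.
triples = [
    ("Acme Corp", "ACQUIRED", "Widget Inc"),
    ("Widget Inc", "LOCATED_IN", "Berlin"),
]

# Adjacency-list view of the graph: node -> outgoing (relation, node) edges.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))
```

<p>In practice these nodes and relationships would be written into a graph database such as Neo4j rather than held in memory.</p>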

<p>Querying</p>

<p>Next is finding the most relevant chunks to the query provided and supplying those to the LLM for generation.</p>

<p>Most datastores come packaged with appropriate functionality to query the underlying data efficiently. You can invoke these on the appropriate stores (Chroma, Neo4j) and get the relevant data to pass to the LLM for generation.</p>
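<p>Under the hood, the vector-store query is a similarity search: embed the question and rank stored chunks by how close their embeddings are. A toy sketch with hand-made 2-d "embeddings" (real ones have hundreds of dimensions, and the store would do this ranking for you):</p>

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 2):
    """Return the k (chunk, embedding) pairs most similar to the query."""
    return sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)[:k]

# Illustrative store: (chunk text, embedding) pairs.
store = [
    ("about cats", [1.0, 0.0]),
    ("about dogs", [0.0, 1.0]),
    ("about pets", [0.7, 0.7]),
]
```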

<p>Worth noting is the entity-extraction step: an LLM chain pulls named entities from the user query, and these entities are then used to query the graph database for relevant pieces of information.</p>
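<p>A sketch of that entity-driven lookup, using an in-memory stand-in for the graph (entity names are illustrative; the extracted-entity list is hand-written here in place of real LLM-chain output). With Neo4j, the equivalent step would be a parameterized Cypher query along the lines of <code>MATCH (e)-[r]-&gt;(o) WHERE e.name IN $entities RETURN e, r, o</code>.</p>

```python
# In-memory stand-in for the knowledge graph: node -> (relation, node) edges.
graph = {
    "Acme Corp": [("ACQUIRED", "Widget Inc")],
    "Widget Inc": [("LOCATED_IN", "Berlin")],
}

def facts_for(entities: list[str]) -> list[str]:
    """Flatten the edges matching the extracted entities into plain-text
    facts that can be appended to the LLM prompt as context."""
    return [
        f"{e} {rel} {obj}"
        for e in entities
        for rel, obj in graph.get(e, [])
    ]
```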

<p>I then set these up to display the retrieved context and the generation results side by side for both the vector-store-only and hybrid (vector and graph) modes.</p>

<p>One of the bottlenecks I ran into almost immediately was the time/resources required to create the knowledge graph from the uploaded document. Even when using paid tiers on the most popular inference-provider platforms, a significant amount of time/tokens/credits is required to generate the graph for a single 30-50 page PDF.</p>

<p>I think further research and refinement in my configuration could cut this down significantly.</p>

<p>Initial testing and Results</p>

<p>Firstly, it’s important to highlight that I think it’s still too early to tell anything conclusive.
To test this, I set up a couple of LLM chains, one vector-only (Chroma) and one hybrid (Neo4j knowledge graph), to show their output side by side.
Surprisingly, after testing both with the same questions about the same 40-50 page PDF file, I didn’t notice an astounding difference between the output generated by the two. From the articles I read, the hybrid setup should perform better at surfacing implied information.</p>

<p>The hybrid setup is more detailed/verbose in its output and has drawn out more nuance in one or two instances, but overall the generated text is largely similar in accuracy/nuance.</p>

<p>Improvements</p>

<p>As mentioned earlier, performance improvements in knowledge-graph generation would go a long way towards making this more viable as a feature in a full-fledged product.
Additionally, more exhaustive testing using one of the more recently developed frameworks for tracking/comparing LLM output would be a more objective way of comparing the performance of the two setups.</p>]]></content><author><name>David Mwebaza</name></author><summary type="html"><![CDATA[I’ve recently come into some unscheduled downtime and have been trying to keep busy. One of the things I’ve been exploring is LLMs and generative applications. So, I took a short course targeted at developers and while exploring the material, I stumbled upon a few tutorials showing a way to enhance RAG output using knowledge graphs. I’m writing this post, and maybe a few others to follow, to document my process and notes as I try to replicate it.]]></summary></entry><entry><title type="html">Why this blog</title><link href="https://daud1.github.io/2026/04/14/why-this-blog.html" rel="alternate" type="text/html" title="Why this blog" /><published>2026-04-14T00:00:00+00:00</published><updated>2026-04-14T00:00:00+00:00</updated><id>https://daud1.github.io/2026/04/14/why-this-blog</id><content type="html" xml:base="https://daud1.github.io/2026/04/14/why-this-blog.html"><![CDATA[<p>I’m writing this down for myself and anyone that stumbles upon it and I believe this is the most salient reason: to keep a record.
I’ve been meaning to resume writing since I stopped almost twenty years ago.
In the intervening time, I have come alive to the pernicious fog that comes from keeping every thought/plan/day in your mind. It’s not just the things you intended to do that you forget, but your motivations, resolve, perspective etc. end up subtly veering off course as well. After a while of this, it starts to feel like you’re losing to time all these parts that made you. I think writing provides an oasis of your own thoughts to refresh you every time you lose sight.</p>

<p>Writing familiarises you with your experiences, helps you turn them over and reshape them in your mind as you try to fit them into language.
It opens you up to new perspectives on things you’re sure you know and gives you the much needed contempt/freedom/familiarity to turn knowledge into wisdom.
In this way, I think writing will help with my learning: by providing a way to reflect on my experiences and commit the lessons/patterns I draw from them to heart/mind.</p>

<p>I am also starting this blog because I think it’s a pretty fulfilling way of doing two things I’ve always enjoyed: reading and creating things.</p>

<p>I think I’ll keep adding to this post as the reasons become clearer to me.</p>]]></content><author><name>David Mwebaza</name></author><summary type="html"><![CDATA[I’m writing this down for myself and anyone that stumbles upon it and I believe this is the most salient reason: to keep a record. I’ve been meaning to resume writing since I stopped almost twenty years ago. In the intervening time, I have come alive to the pernicious fog that comes from keeping every thought/plan/day in your mind. It’s not just the things you intended to do that you forget, but your motivations, resolve, perspective etc. end up subtly veering off course as well. After a while of this, it starts to feel like you’re losing to time all these parts that made you. I think writing provides an oasis of your own thoughts to refresh you every time you lose sight.]]></summary></entry></feed>