AI & ML New Capability

Interfaces LLMs with Wikidata-scale graphs for multi-hop reasoning, without retraining either the model or the query executor.

April 1, 2026

Original Paper

UltRAG: a Universal Simple Scalable Recipe for Knowledge Graph RAG

Dobrik Georgiev, Kheeran Naidu, Alberto Cattaneo, Federico Monti, Carlo Luschi, Daniel Justus

arXiv · 2603.28773


The Takeaway

UltRAG enables retrieval-augmented generation over massive knowledge graphs (1.6B relations) using off-the-shelf components. This drastically lowers the barrier for practitioners to ground LLMs in structured, large-scale factual data without expensive fine-tuning.

From the abstract

Large language models (LLMs) frequently generate confident yet factually incorrect content when used for language generation (a phenomenon often known as hallucination). Retrieval augmented generation (RAG) tries to reduce factual errors by identifying information in a knowledge corpus and putting it in the context window of the model. While this approach is well-established for document-structured data, it is non-trivial to adapt it for Knowledge Graphs (KGs), especially for queries that require multi-hop reasoning.
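To make the retrieval step concrete, here is a minimal sketch of KG-style RAG: starting from seed entities in a question, it gathers triples reachable within a few hops and serializes them into the model's context window. The triple data, function names, and prompt format are illustrative assumptions, not the paper's actual recipe.

```python
# Illustrative knowledge-graph RAG sketch (not the paper's actual pipeline):
# retrieve triples reachable from seed entities, then put them in the prompt.

TRIPLES = [
    ("Ada Lovelace", "born_in", "London"),
    ("London", "capital_of", "United Kingdom"),
    ("Ada Lovelace", "field", "mathematics"),
]

def retrieve(entities, triples, hops=2):
    """Collect triples reachable from the seed entities within `hops` hops."""
    frontier, found = set(entities), []
    for _ in range(hops):
        next_frontier = set()
        for s, p, o in triples:
            if s in frontier and (s, p, o) not in found:
                found.append((s, p, o))
                next_frontier.add(o)  # follow the edge to enable multi-hop
        frontier = next_frontier
    return found

def build_prompt(question, facts):
    """Serialize retrieved triples into the model's context window."""
    lines = [f"{s} {p} {o}." for s, p, o in facts]
    return "Facts:\n" + "\n".join(lines) + f"\n\nQuestion: {question}"

facts = retrieve(["Ada Lovelace"], TRIPLES)
prompt = build_prompt("Which country was Ada Lovelace born in?", facts)
```

With two hops, the sketch picks up the `capital_of` triple needed to answer the question, which is the kind of multi-hop retrieval that makes KG adaptation non-trivial at scale.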