Graphiti - Building real-time knowledge graphs for AI agents

Graphiti is a new framework for building dynamic knowledge graphs for AI agents.
This article covers:
1. The core functions and features of the Graphiti framework
2. The comparative advantages of Graphiti over traditional RAG methods
3. Application cases of Graphiti in Zep AI agents
What is Graphiti
Graphiti is a framework for building and querying time-aware knowledge graphs, designed for AI agents running in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and accurate historical queries without recomputing the entire graph, making it well suited to building interactive, context-aware AI applications.
Use Graphiti to:
• Integrate and maintain dynamic user interactions and business data.
• Support state-based reasoning and task automation for agents.
• Query complex, evolving data with semantic search, keyword search, graph traversal, and more.
A knowledge graph is a network of interrelated facts, such as "Kendra likes Adidas shoes." Each fact is a "triplet" consisting of two entities (or nodes, such as "Kendra" and "Adidas shoes") and a relationship between them (or edges, such as "likes"). Knowledge graphs have been widely studied and applied in the field of information retrieval.
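To make the triplet idea concrete, here is a minimal plain-Python sketch of a fact as two nodes plus an edge. This is only an illustration of the concept, not Graphiti's internal representation:

from dataclasses import dataclass

@dataclass
class Triplet:
    subject: str    # an entity / node, e.g. "Kendra"
    predicate: str  # the relationship / edge, e.g. "likes"
    object: str     # the other entity / node, e.g. "Adidas shoes"

fact = Triplet("Kendra", "likes", "Adidas shoes")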
Graphiti is unique in that it can autonomously build a knowledge graph while handling relationship changes and maintaining historical context.
Graphiti and Zep Memory
Graphiti powers Zep's AI agent memory core.
With Graphiti, we have demonstrated that Zep is the most advanced solution in the field of agent memory.
Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory [2]
We are very excited to open source Graphiti because we believe its potential goes far beyond AI memory applications.
Why choose Graphiti
Traditional RAG approaches usually rely on batch processing and static data summaries, and are therefore inefficient when dealing with frequently changing data.
Graphiti addresses this challenge with the following advantages:
• Real-time incremental updates: new data episodes are integrated immediately, with no batch recomputation.
• Bi-temporal data model: explicitly tracks both when an event occurred and when it was ingested, enabling precise point-in-time queries.
• Efficient hybrid retrieval: combines semantic embeddings, keyword search (BM25), and graph traversal for low-latency queries that do not depend on large language model (LLM) summarization.
• Custom entity definitions: developers can flexibly create ontologies and custom entities with simple, intuitive Pydantic models (see the sketch after this list).
• Scalability: parallel processing efficiently manages large datasets, making Graphiti suitable for enterprise-scale scenarios.
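As a sketch of the custom-entity point above: entity types are declared as ordinary Pydantic models. The Customer class and its field are hypothetical illustrations, and the add_episode call is shown only in outline; consult the Graphiti documentation for the exact parameters.

from pydantic import BaseModel, Field

class Customer(BaseModel):
    """A customer of the business (hypothetical example entity)."""
    preferred_brand: str | None = Field(
        default=None, description="Brand the customer favors"
    )

# Custom entity types are passed to Graphiti when adding episodes, e.g.:
#   await graphiti.add_episode(..., entity_types={"Customer": Customer})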
Comparison between Graphiti and GraphRAG
Graphiti is specifically designed for dynamic, frequently updated datasets, and is particularly suitable for applications that require real-time interaction and precise historical queries.
Installation Guide
Requirements:
• Python 3.10 or higher
• Neo4j 5.26 or higher (serves as the embeddings storage backend)
• OpenAI API key (for LLM inference and embedding generation)
Important Tips
Graphiti works best with LLM services that support structured outputs, such as OpenAI and Gemini. Using other services may result in incorrect output schemas or data import failures, especially when using small models.
Optional:
• An API key from Google Gemini, Anthropic, or Groq (for access to other LLM providers)
Tips
The easiest way to install Neo4j is to use Neo4j Desktop. It provides a user-friendly interface for managing Neo4j instances and databases.
pip install graphiti-core
or
poetry add graphiti-core
You can also extend functionality by installing optional LLM provider support:
# Install with Anthropic support
pip install graphiti-core[anthropic]
# Install with Groq support
pip install graphiti-core[groq]
# Install with Google Gemini support
pip install graphiti-core[google-genai]
# Install with multiple providers at once
pip install graphiti-core[anthropic,groq,google-genai]
Quick Start
Important Tips
Graphiti uses OpenAI for LLM inference and embedding generation, so make sure the OPENAI_API_KEY environment variable is set. Anthropic and Groq LLM inference are also supported, and other LLM providers can be used through OpenAI-compatible APIs.
For a complete example, see the Quickstart Example [3] in the examples directory. It demonstrates the following:
• Connecting to a Neo4j database
• Initializing Graphiti indexes and constraints
• Adding episodes (both text and structured JSON) to the graph
• Searching for relationships (edges) using hybrid retrieval
• Re-ranking search results by graph distance
• Searching for nodes using predefined search recipes
The example includes detailed comments for each feature and a complete README covering environment setup and next steps. A condensed sketch of the same flow follows.
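The sketch below compresses those steps into one script. It assumes a local Neo4j instance with the credentials shown and OPENAI_API_KEY set; the episode name, text, and query are made-up placeholders, and add_episode parameters can differ between Graphiti versions, so treat this as an outline rather than the canonical example.

import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    # Connect to a local Neo4j instance (adjust the credentials to your setup)
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        # One-time setup: create Graphiti's indexes and constraints
        await graphiti.build_indices_and_constraints()

        # Add a text episode; Graphiti extracts entities and relationships from it
        await graphiti.add_episode(
            name="preferences-1",
            episode_body="Kendra loves Adidas shoes.",
            source=EpisodeType.text,
            source_description="user message",
            reference_time=datetime.now(timezone.utc),
        )

        # Hybrid search over edges (facts): embeddings + BM25 + graph traversal
        results = await graphiti.search("What shoes does Kendra like?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()

asyncio.run(main())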
MCP Server
The mcp_server directory contains Graphiti's Model Context Protocol (MCP) server implementation. Through the MCP protocol, AI assistants can interact with Graphiti's knowledge graph capabilities.
The main functions of the MCP server include:
• Episode management (add, retrieve, delete)
• Entity management and relationship handling
• Semantic and hybrid search
• Group management for organizing related data
• Graph maintenance operations
The MCP server can be deployed alongside Neo4j via Docker, which makes it straightforward to integrate Graphiti into your AI assistant workflow.
For detailed installation instructions and usage examples, see the README in the mcp_server directory.
REST Services
The server directory contains an API service for interacting with Graphiti, built on FastAPI.
For more information, see the README in the server directory.
Optional environment variables
In addition to the Neo4j and OpenAI-compatible credentials, Graphiti supports several optional environment variables. If you are using one of the supported models, such as Anthropic or Voyage models, you must set the corresponding environment variables.
• USE_PARALLEL_RUNTIME: an optional boolean variable. Set it to true to enable Neo4j's parallel runtime feature for certain search queries. Note that this feature is not available in Neo4j Community Edition or on smaller AuraDB instances, so it is disabled by default.
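For instance, the flag can be set in the process environment before Graphiti is initialized, either in your shell profile or, as a minimal sketch, from Python:

import os

# Opt in to Neo4j's parallel runtime for supported search queries.
# Has no effect on Neo4j Community Edition or small AuraDB instances.
os.environ["USE_PARALLEL_RUNTIME"] = "true"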
Using Graphiti with Azure OpenAI
Graphiti supports LLM inference and embedding generation on Azure OpenAI. To use Azure OpenAI, configure your Azure OpenAI credentials for both the LLM client and the embedder.
from openai import AsyncAzureOpenAI

from graphiti_core import Graphiti
from graphiti_core.llm_client import OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

api_key = "<your API key>"
api_version = "<your API version>"
azure_endpoint = "<your Azure endpoint address>"

# One Azure OpenAI client shared by LLM inference, embeddings, and reranking
azure_openai_client = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=azure_endpoint,
)

graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=OpenAIClient(client=azure_openai_client),
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(embedding_model="text-embedding-3-small"),
        client=azure_openai_client,
    ),
    cross_encoder=OpenAIRerankerClient(client=azure_openai_client),
)
Make sure to replace the placeholders (<your API key>, <your API version>, <your Azure endpoint address>) with your actual Azure OpenAI credentials, and use an embedding model name that is actually deployed in your Azure OpenAI service.
Using Google Gemini with Graphiti
Graphiti supports using Google's Gemini models for LLM inference and embedding generation. To use Gemini, configure your Google API key for both the LLM client and the embedder.
Install Graphiti:
poetry add "graphiti-core[google-genai]"# oruv add "graphiti-core[google-genai]"
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig

# Google API key configuration
api_key = "<your-google-api-key>"

# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash",
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001",
        )
    ),
)

# Now you can use Graphiti with Google Gemini