Local-first, embedded graph memory for AI agents.
A small, inspectable, Python-first memory layer for agents that want structured recall without separate infrastructure.
Natural language in, persistent graph memory out. No Cypher. No servers. Just pip install clawgraph.
Today the shipped path is embedded Kuzu plus OpenAI-compatible APIs. Broader provider and backend support is part of the longer-term direction, but the current product is intentionally local-first.
No Cypher knowledge required. Just tell it facts in plain English and it extracts entities & relationships automatically.
Embedded Kuzu: no server, no Docker, just a local graph database on disk with snapshot portability.
The LLM infers and maintains your graph schema. Or constrain it to a fixed set of entity labels and relationship types.
from clawgraph import Memory: designed for agentic loops. Initialize once, reuse across calls.
Process multiple facts in a single LLM call. Efficient for bulk ingestion of knowledge.
Query the graph directly, export JSON, and inspect ontology evolution instead of treating memory as an opaque blob.
The easiest way to get started with ClawGraph.
Up and running in under a minute.
# Store facts
$ clawgraph add "John works at Acme Corp as a software engineer"
$ clawgraph add "Alice is a data scientist at Google"
$ clawgraph add "John and Alice are friends"
# Query the graph
$ clawgraph query "Where does John work?"
┏━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃ a.name ┃ r.type   ┃ b.name    ┃
┡━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ John   │ WORKS_AT │ Acme Corp │
└────────┴──────────┴───────────┘
# Batch add (one LLM call for multiple facts)
$ clawgraph add-batch "Bob is a designer" "Bob works at Netflix"
# Export the graph as JSON
$ clawgraph export graph.json
from clawgraph import Memory
mem = Memory()
# Add facts
mem.add("John works at Acme Corp")
mem.add("Alice is a data scientist at Google")
# Batch add: multiple facts, one LLM call
mem.add_batch([
    "Bob is a designer at Netflix",
    "Carol manages engineering at Acme",
    "Bob and Carol are married",
])
# Query
results = mem.query("Who works where?")
# [{"a.name": "John", "r.type": "WORKS_AT", "b.name": "Acme Corp"}, ...]
# Direct access
mem.entities() # all entities
mem.relationships() # all relationships
mem.export() # full graph + ontology as dict
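Since export() returns a plain dict, a snapshot can round-trip through JSON with the standard library alone. The shape below is illustrative (field names guessed from the query output above), not the guaranteed export format:

```python
import json
import os
import tempfile

# Illustrative snapshot shape; the real export() format may differ.
snapshot = {
    "entities": [{"name": "John", "label": "Person"}],
    "relationships": [{"a.name": "John", "r.type": "WORKS_AT", "b.name": "Acme Corp"}],
    "ontology": {"labels": ["Person", "Company"], "relationship_types": ["WORKS_AT"]},
}

# Persist and reload: dict in, dict out, so snapshots stay portable.
path = os.path.join(tempfile.gettempdir(), "graph.json")
with open(path, "w") as f:
    json.dump(snapshot, f, indent=2)

with open(path) as f:
    restored = json.load(f)
```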
from clawgraph import Memory
# Constrain extraction to a fixed schema
mem = Memory(
    allowed_labels=["Person", "Company", "Skill"],
    allowed_relationship_types=["WORKS_AT", "HAS_SKILL", "MANAGES"],
)
mem.add("Alice is a Python developer at Acme Corp")
# Entities: Alice (Person), Python (Skill), Acme Corp (Company)
# Relationships: Alice -WORKS_AT-> Acme Corp, Alice -HAS_SKILL-> Python
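One way to picture the constraint step: extraction output is filtered against the allowed labels and relationship types, and anything outside the schema is dropped. A hypothetical sketch in plain Python, not clawgraph's actual implementation:

```python
ALLOWED_LABELS = {"Person", "Company", "Skill"}
ALLOWED_REL_TYPES = {"WORKS_AT", "HAS_SKILL", "MANAGES"}

def filter_extraction(entities, relationships):
    """Keep only entities and relationships that fit the fixed schema."""
    kept_entities = [e for e in entities if e["label"] in ALLOWED_LABELS]
    kept_names = {e["name"] for e in kept_entities}
    kept_rels = [
        r for r in relationships
        if r["type"] in ALLOWED_REL_TYPES
        and r["from"] in kept_names
        and r["to"] in kept_names
    ]
    return kept_entities, kept_rels

raw_entities = [
    {"name": "Alice", "label": "Person"},
    {"name": "Acme Corp", "label": "Company"},
    {"name": "Python", "label": "Skill"},
    {"name": "Tuesday", "label": "Day"},  # not in schema: dropped
]
raw_rels = [
    {"from": "Alice", "type": "WORKS_AT", "to": "Acme Corp"},
    {"from": "Alice", "type": "HAS_SKILL", "to": "Python"},
    {"from": "Alice", "type": "MET_ON", "to": "Tuesday"},  # dropped
]
entities, rels = filter_extraction(raw_entities, raw_rels)
```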
Two simple pipelines: one to store memories, one to retrieve them.
ClawGraph supports different categories of memory for richer agent behavior.
People, companies, tools, and how they connect. The core knowledge graph — who knows whom, what belongs where.
"Alice manages the ML team at Acme"
Multi-step processes, pipelines, and decision trees. Capture how tasks flow from one step to the next.
"Deploy flow: build → test → stage → prod"
Time-aware facts and event sequences. Track when things happened and how state changes over time.
"Alice joined Acme in March 2024"
Skills your agent discovers and builds over time. As it solves problems, it stores reusable capabilities in the graph — tools used, APIs called, patterns learned — so it gets better with every interaction.
"Agent learned: use Stripe API to process refunds via /v1/refunds endpoint"
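As a sketch, the four categories above might land in the graph as records like these. The tuples and field names are illustrative only, not clawgraph's storage format:

```python
# Entity memory: who connects to what.
entity_memory = ("Alice", "MANAGES", "ML team")

# Procedural memory: a process captured as a chain of NEXT_STEP edges.
procedural_memory = [
    ("build", "NEXT_STEP", "test"),
    ("test", "NEXT_STEP", "stage"),
    ("stage", "NEXT_STEP", "prod"),
]

# Temporal memory: a time-aware fact, an edge plus a timestamp property.
temporal_memory = {
    "edge": ("Alice", "JOINED", "Acme"),
    "since": "2024-03",
}

# Skill memory: a capability the agent learned, stored for reuse.
skill_memory = {
    "edge": ("Agent", "LEARNED_SKILL", "process refunds"),
    "tool": "Stripe API",
    "endpoint": "/v1/refunds",
}
```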
| Component | Library | Why |
|---|---|---|
| CLI | Typer | Type-hint driven, minimal boilerplate |
| LLM | OpenAI SDK | OpenAI-compatible APIs today, simple direct integration |
| Graph DB | Kuzu | Embedded, no server, native Cypher |
| Output | Rich | Tables, panels, colors |
Today ClawGraph is tuned for OpenAI-compatible APIs via the OpenAI SDK. Start with gpt-5.4-mini for frequent writes, move up to gpt-5.4 for harder extraction, and treat broader provider support as a longer-term direction.
Recommended default for frequent writes and agent loops. Strong balance of speed and extraction quality.
Better fit for more ambiguous or higher-stakes extraction where accuracy matters more than latency.
The current integration is built around the OpenAI SDK, including standard OpenAI endpoints and compatible APIs behind a base URL.
Near-term work focuses on making the OpenAI-compatible path solid and predictable before expanding provider-specific support.
Additional provider-specific integrations and local-model stories may come later, but they are not the primary product path today.
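Because the integration rides on the OpenAI SDK, pointing it at a compatible server should come down to the SDK's standard environment variables. A sketch (exactly which variables ClawGraph honors is an assumption here):

```python
import os

# The OpenAI SDK reads these standard environment variables by default.
# Whether ClawGraph exposes additional configuration knobs is not shown here.
os.environ["OPENAI_API_KEY"] = "sk-..."  # key for any OpenAI-compatible API
os.environ["OPENAI_BASE_URL"] = "https://my-endpoint.example/v1"  # optional: non-OpenAI server
```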
Most agent memory today is just appending to a Markdown file. Here’s why a graph is fundamentally better.
| | .md / text files | Graph DB |
|---|---|---|
| Querying | Full-text search or dump entire file into context window | Precise traversals — find exactly what’s connected to what |
| Deduplication | Same fact appended multiple times, growing forever | MERGE semantics — idempotent by default |
| Relationships | Implicit, buried in prose. LLM must re-infer every time | First-class edges with types. Traversable without LLM |
| Scaling | File gets huge → exceeds context window → truncated or lost | Query only what you need. Graph stays fast at any size |
| Multi-hop reasoning | “Who does Alice’s manager work with?” requires reading everything | One Cypher query: `MATCH (a)-[:MANAGES]->(b)-[:WORKS_WITH]->(c)` |
| Structure | Unstructured text with no schema. Format drifts over time | Typed entities & relationships with automatic or custom ontology |
| Token cost | Dump full memory into every prompt. Expensive at scale | Only retrieve relevant subgraph. Minimal token overhead |
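The deduplication row is the easiest to demonstrate: keying each edge by (source, type, target) makes repeated adds idempotent, which is the essence of MERGE semantics. A toy sketch in plain Python, not clawgraph's or Kuzu's internals:

```python
from datetime import date

# Edges keyed by (source, relationship type, target): adding the same
# fact twice updates the one existing edge instead of appending a copy.
edges = {}

def merge_edge(src, rel, dst, **props):
    """MERGE-style upsert: create the edge if absent, else update its properties."""
    edges.setdefault((src, rel, dst), {}).update(props)

merge_edge("Alice", "WORKS_AT", "Acme Corp")
merge_edge("Alice", "WORKS_AT", "Acme Corp")  # same fact again: no duplicate
merge_edge("Alice", "WORKS_AT", "Acme Corp", since=date(2024, 3, 1))  # enriches the edge
```

Contrast with a Markdown file, where each of those three writes would append another line.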
ClawGraph is intentionally built around embedded Kuzu today. Other backends are part of the longer-term horizon, not the current product promise.
Embedded, zero-config, local-first. The default backend.
A possible future backend for teams that already run managed or self-hosted graph infrastructure.
A possible future backend for low-latency graph workloads, but not part of the current local-first core path.
ClawGraph plugs into agent stacks as a local-first memory layer for structured recall.
Persistent graph memory for OpenClaw skills and agents
Use as structured persistent memory inside LangChain pipelines
Persistent graph memory across CrewAI tasks and teams
Any Python agent via from clawgraph import Memory
ClawGraph is an independent open-source project and is not affiliated with or endorsed by OpenClaw, LangChain, or CrewAI.
A place to test and benchmark agents on repeatable tasks, compare runs side by side, and see how memory changes outcomes over time.
Compare agents, prompts, and configurations side by side instead of judging a single run in isolation.
Benchmark on stable workflows and browser tasks so changes in quality, speed, and reliability are visible.
Measure whether persistence actually improves agent performance across sessions, retries, and longer task chains.
Install ClawGraph and give your agents structured recall in minutes.
Get updates on new releases, features, and graph memory tips delivered to your inbox.