Give Your OpenClaw Agent a Real Knowledge Base

OpenClaw has taken the AI agent world by storm. Its ability to autonomously plan, execute, and iterate makes it one of the most capable coding agents available.

But there's a gap: OpenClaw doesn't remember anything between sessions. Every time you start a new task, your agent starts from scratch — no project history, no past decisions, no accumulated knowledge.

You can paste context into the prompt, but that doesn't scale. What happens when you have dozens of architecture decisions, research findings, and meeting notes that might be relevant?

The missing piece: persistent context

What OpenClaw needs is a knowledge base it can query on demand — one that:

  • Persists across sessions so context isn't lost
  • Searches by meaning so the agent finds relevant info even with different wording
  • Returns citations so you can verify what the agent is referencing
  • Stays in sync across every machine you use

That's exactly what Lore provides.

How it works with OpenClaw

Lore connects to OpenClaw through MCP (Model Context Protocol). Once connected, your OpenClaw agent gains access to tools like search, ingest, research, and get_brief — all backed by your personal knowledge base on Lore Cloud.
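Under MCP, each of those tool invocations is a JSON-RPC 2.0 message exchanged over stdio. As a rough sketch of what OpenClaw sends when it invokes Lore's search tool (the `query` argument name is an assumption, not confirmed by Lore's docs):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request (JSON-RPC 2.0, sent over stdio)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical search invocation; the "query" argument name is assumed.
msg = make_tool_call(1, "search", {"query": "authentication decisions"})
```

You never write these messages yourself; OpenClaw constructs them whenever its plan calls for one of Lore's tools.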

Step 1: Install and set up Lore

npm install -g @getlore/cli
lore setup

The setup wizard walks you through API key configuration and sign-in. It takes about 30 seconds. Lore Cloud is currently free to use.

Step 2: Add the MCP server to OpenClaw

Add Lore to your OpenClaw MCP configuration:

{
  "mcpServers": {
    "lore": {
      "command": "npx",
      "args": ["-y", "@getlore/cli", "mcp"]
    }
  }
}
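If your config file already lists other MCP servers, the lore entry just sits alongside them under mcpServers. A small sketch of merging it in programmatically (the existing "github" server is a made-up example):

```python
import json

# The Lore server entry from the config snippet above.
LORE_SERVER = {"command": "npx", "args": ["-y", "@getlore/cli", "mcp"]}

def add_lore_server(config: dict) -> dict:
    """Merge the Lore entry into an MCP config, preserving existing servers."""
    servers = config.setdefault("mcpServers", {})
    servers["lore"] = LORE_SERVER
    return config

# Example: a config with a hypothetical pre-existing server keeps it.
existing = {"mcpServers": {"github": {"command": "gh-mcp"}}}
merged = add_lore_server(existing)
```

The exact location of OpenClaw's MCP config file depends on your install, so check its documentation for where this JSON lives.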

Step 3: Feed it context

Add your project documents, research, and notes:

lore sync add --path ~/project/docs

Or tell your OpenClaw agent directly:

"Save this architecture decision to Lore for future reference."

The agent uses Lore's ingest tool to store it. Next session, it's searchable.

Step 4: Watch your agent get smarter

Now when your OpenClaw agent encounters a question about your project, it can search your knowledge base first:

"Before implementing this feature, search Lore for any previous decisions about the authentication system."

The agent gets back cited results — the actual text from your documents, with source attribution. No hallucinated context. No stale summaries.

What your agent can do with Lore

Once connected, your OpenClaw agent has access to these tools:

Tool          What it does
search        Find relevant context by meaning
get_source    Read a full document
ingest        Store new knowledge
research      AI-powered deep research across your knowledge base
get_brief     Get a project summary with key evidence
log           Record decisions and progress

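OpenClaw discovers these tools at startup through the standard MCP handshake: the client sends an initialize request, then lists the server's tools. A sketch of that first message (the clientInfo values are illustrative; OpenClaw supplies its own):

```python
import json

def make_initialize(request_id: int, client_name: str, client_version: str) -> str:
    """Build the MCP initialize request that opens every client-server session."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    })

# Illustrative client identity, not OpenClaw's real values.
init = make_initialize(0, "openclaw", "1.0.0")
```

Once the handshake completes, everything in the table above shows up in the agent's tool list automatically.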
Real-world example

Imagine you're building an e-commerce platform. Over the past month, you've made dozens of decisions about payment processing, inventory management, and user authentication.

Without Lore: Your OpenClaw agent starts each session blind. You either re-explain everything or it makes decisions that conflict with past choices.

With Lore: Your agent searches for "payment processing decisions" and gets back the exact conversation where you decided on Stripe, the reasons why, and the constraints you identified. It builds on your existing decisions instead of starting over.

Getting started

The whole setup takes under a minute:

  1. npm install -g @getlore/cli
  2. lore setup (enter API keys, sign in with email)
  3. Add the MCP config to OpenClaw
  4. Start adding context

Your knowledge base syncs across every machine through Lore Cloud, so your OpenClaw agent has the same context whether you're on your laptop or your desktop.


Learn more in the Lore documentation, or explore the full MCP tools reference.