How AI agents should install, configure, and use Lore to maintain shared context across tools and machines.
This guide is for AI agents that want to use Lore. If you're an agent reading this via llms.txt or the docs, this is your primary reference.
Lore is a research knowledge repository. You store documents (meeting notes, interviews, decisions, specs), search them by meaning, and cite original sources. Unlike memory systems that store summaries, Lore preserves complete originals so you can quote exactly what was said.
Before installing, ask the user for their email address. They will also need two API keys: an OpenAI key and an Anthropic key.
npm install -g @getlore/cli
Requires Node.js 18+. Alternatively, use npx -y @getlore/cli to run without installing.
Present the user with two options for providing their API keys:
Recommended — user sets keys themselves (keys never enter chat history):
Tell the user to run these commands in their terminal:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
Then you run setup referencing the environment variables:
lore setup \
--openai-key $OPENAI_API_KEY \
--anthropic-key $ANTHROPIC_API_KEY \
--email user@example.com \
--data-dir ~/.lore
Convenient but riskier — user pastes keys into chat:
The user can give you the keys directly and you run setup with them. If they choose this, warn them: "API keys shared in chat may be stored in conversation history. If that's a concern, set them as environment variables instead."
The setup command sends a 6-digit verification code to the user's email and exits.
Ask the user: "Check your email for a 6-digit code from Lore and paste it here."
Once they provide the code, re-run the same setup command with --code appended:
lore setup \
--openai-key $OPENAI_API_KEY \
--anthropic-key $ANTHROPIC_API_KEY \
--email user@example.com \
--code 123456 \
--data-dir ~/.lore
This verifies the code and completes setup (data repo, welcome doc, background daemon). After this, Lore operates autonomously — no further user input is needed.
If setup was already completed and you only need to authenticate:
# Step 1: Send OTP
lore auth login --email user@example.com --send-only
# Step 2: Ask user for code, then verify
lore auth login --email user@example.com --code 123456
If auth is already configured (from a previous setup), add --skip-login to skip the login step entirely.
To update to the latest version:
lore update
This checks npm, installs the latest @getlore/cli, and restarts the background daemon so it picks up the new code. To check without installing:
lore update --check
If using npx -y @getlore/cli, the npm package is always fetched fresh. You only need to restart the daemon:
lore sync restart
After running lore setup, add Lore as an MCP server. No env vars needed — the MCP server reads API keys and auth from ~/.config/lore/ automatically:
{
  "mcpServers": {
    "lore": {
      "command": "lore",
      "args": ["mcp"]
    }
  }
}
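If the user runs Lore via npx rather than a global install, an equivalent entry might look like the following. This is a sketch, not a confirmed client format: it assumes the MCP client spawns the command directly, and mirrors the `npx -y @getlore/cli` invocation noted earlier. Check MCP Setup for your client's exact schema.

```json
{
  "mcpServers": {
    "lore": {
      "command": "npx",
      "args": ["-y", "@getlore/cli", "mcp"]
    }
  }
}
```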
See MCP Setup for client-specific instructions and advanced env var overrides.
When working on a project, start with get_brief. It gives you immediate context — current state, key evidence, open questions, and trajectory — without searching from scratch.
get_brief("backend-rewrite")
If the brief is stale (new sources added since generation), it tells you how many. If no brief exists or you need to refresh it, suggest the user run lore brief generate <project> via CLI.
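A minimal sketch of that flow (the exact response fields are an assumption; the brief's actual shape may differ):

```
brief = get_brief("backend-rewrite")
// If the brief reports new sources added since generation,
// suggest the user run: lore brief generate backend-rewrite
```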
Use search for specific lookups after you have the big picture from the brief.
search("authentication user feedback", project: "backend-rewrite")
Use research only when you need to cross-reference multiple sources or synthesize findings. It runs asynchronously (returns a job_id — poll research_status for results) and makes 10-30 internal LLM calls. When a project brief exists, the research agent uses it as a warm-start — fewer turns, more focused results.
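Because research runs asynchronously, a typical call-and-poll sketch looks like this (the result field names are assumptions drawn from the workflow examples later in this guide):

```
job = research("What do we know about authentication approaches?")
// ... later, or on a polling loop:
result = research_status(job.job_id)
// When complete, the result includes a summary, findings with
// citations, conflicts, and follow-ups.
```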
research("What do we know about authentication approaches?")
Use ingest to push content — both full documents and short insights/decisions. Always pass source_url and source_name when available — they enable citation linking.
// Full document
ingest({
  content: "...",
  title: "Sprint Planning Notes",
  project: "backend-rewrite",
  source_type: "meeting",
  source_url: "https://notion.so/...",
  source_name: "Notion: Sprint Planning"
})

// Short decision (title auto-generated, LLM extraction skipped)
ingest({
  content: "Chose JWT over session cookies due to mobile app requirements",
  project: "backend-rewrite"
})

// Update existing content
ingest({
  action: "update",
  id: "source-id-here",
  content: "Updated content with Chris's final response..."
})

// Delete a source (soft-delete — recoverable via CLI)
ingest({
  action: "delete",
  id: "source-id-here"
})
Ingestion is idempotent — calling with identical content returns immediately without cost. Deleted sources are soft-deleted and can be restored via lore docs restore in the CLI.
When you are coding inside a repo and the user wants local context visible to native coding agents, prefer a project working copy:
cd ~/workspace/my-code-repo
lore context add <project>
Then read .lore/context.md and relevant files in .lore/sources/ before planning. These are copied snapshots, not symlinks. context add registers the repo for daemon refresh, so clean snapshots can update after sync cycles. Do not edit .lore/base/ or .lore/manifest.json.
If the project has no sources yet, lore context add <project> still creates the scaffold. A repo-local .lore/ working copy currently tracks one Lore project; adding a different project replaces clean context, and active proposals require review or explicit --force.
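A pre-planning read under the layout above might look like this sketch, where `read` stands in for whatever file-reading tool your environment provides:

```
read(".lore/context.md")        // project context overview
read(".lore/sources/<file>")    // relevant source snapshots
// Never edit .lore/base/ or .lore/manifest.json
```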
If you find a factual issue, edit only the matching file in .lore/sources/ and tell the user to review:
lore context status
lore corrections diff
Canonical Lore changes only after explicit approval:
lore corrections approve <id>
If a correction is conflicted, use lore corrections reconcile <id> to create a merged proposal. The merge is not canonical until the user approves it.
If the user wants to discard a local proposal, use lore corrections reject <id>; lore corrections restore <id> is a Git-style alias for the same operation.
Use lore context list to see repos with daemon-managed Lore context. If the user wants to turn off repo-local context, use lore context remove. If proposals exist, review them first or use lore context remove --force only when the user explicitly wants to discard local working-copy edits.
The generated .lore/ folder includes its own .gitignore so generated context is local by default. Do not add .lore/ as a sync source. Daemon refresh never approves corrections and must preserve local edits as proposals.
After setup, configure at least one sync source so Lore has documents to search. Point it at a directory containing your documents:
lore sync add --name "Meeting Notes" --path ~/notes --project meetings
By default this syncs all files (**/*). Lore supports Markdown, JSON, JSONL, plain text, CSV, HTML, XML, PDF, and images. To restrict to specific types, pass --glob:
lore sync add --name "Docs" --path ~/docs --glob "**/*.md" --project docs
Then run the initial sync:
lore sync
The background daemon (started during setup) watches for new files and auto-indexes them. If the daemon wasn't started, you can start it manually:
lore sync start
Lore ships a welcome document that gets indexed during setup. You can search for it immediately to verify everything works:
lore search "getting started"
Lore provides skill files that teach AI agents how to use it effectively. After setup, install skills for the tools you use:
# See available skills
lore skills list
# Install for your tools
lore skills install claude-code
lore skills install gemini
lore skills install codex
Skills are instruction files placed in the right location for each tool. They teach the agent when to search, when to ingest, and how to cite sources properly. Install skills for every AI tool in your workflow so they all share the same Lore context.
All machines logged in with the same email share the same knowledge base. Lore deduplicates by content hash, so the same document ingested from different machines is only stored once.
Lore Cloud/Supabase is the primary sync layer across machines. The data directory (~/.lore) can also be a Git repo for optional local history, but generated caches should not block normal Lore use. If lore sync status reports repair-needed Git state, run:
lore sync repair
This rebuilds generated caches, imports deletion identities into Lore Cloud, and untracks cache files that should not be merged by Git.
1. search("database migration approach", project: "backend-rewrite")
2. get_source(id, include_content: true) // read the full document
3. Use the cited information to inform your response
1. ingest({
     content: meeting_transcript,
     title: "Product Review - Jan 15",
     project: "product",
     source_type: "meeting",
     participants: ["Sarah", "Mike", "Lisa"]
   })
1. research("What are the main user complaints about export?")
// Returns: summary, findings with citations, conflicts, follow-ups
2. ingest({
     content: "Key finding: 3/5 users cited export speed as top pain point",
     project: "product"
   })
| Tool | Cost | Use For |
|---|---|---|
| search | Low | Quick lookups, finding relevant sources |
| get_source | Low | Full document retrieval by ID |
| list_sources | Low | Browse what exists |
| list_projects | Low | Discover projects |
| get_brief | Low | Get project context — start here |
| log | Low | Log status updates, decisions, progress notes |
| ingest | Low–Medium | Push content — documents, insights, or decisions |
| research | High | Cross-reference and synthesize |
| research_status | Low | Poll for research results |
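Putting the table together, a typical escalation path looks like this. All calls appear elsewhere in this guide; this is a sketch of a sensible default order, not a mandated sequence:

```
get_brief(project)                     // cheap: start with project context
search(query, project: project)        // cheap: targeted lookups
get_source(id, include_content: true)  // cheap: read a full document
research(question)                     // expensive: only for cross-source synthesis
```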