How AI agents should install, configure, and use Lore to maintain shared context across tools and machines.
This guide is for AI agents that want to use Lore. If you're an agent reading this via llms.txt or the docs, this is your primary reference.
Lore is a research knowledge repository. You store documents (meeting notes, interviews, decisions, specs), search them by meaning, and cite original sources. Unlike memory systems that store summaries, Lore preserves complete originals so you can quote exactly what was said.
Before installing, ask the user for their email address. They will also need two API keys: an OpenAI key and an Anthropic key.
npm install -g @getlore/cli
Requires Node.js 18+. Alternatively, use npx -y @getlore/cli to run without installing.
Present the user with two options for providing their API keys:
Recommended — user sets keys themselves (keys never enter chat history):
Tell the user to run these commands in their terminal:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
Then you run setup referencing the environment variables:
lore setup \
--openai-key $OPENAI_API_KEY \
--anthropic-key $ANTHROPIC_API_KEY \
--email user@example.com \
--data-dir ~/.lore
Convenient but riskier — user pastes keys into chat:
The user can give you the keys directly and you run setup with them. If they choose this, warn them: "API keys shared in chat may be stored in conversation history. If that's a concern, set them as environment variables instead."
The setup command sends a 6-digit verification code to the user's email and exits.
Ask the user: "Check your email for a 6-digit code from Lore and paste it here."
Once they provide the code, re-run the same setup command with --code appended:
lore setup \
--openai-key $OPENAI_API_KEY \
--anthropic-key $ANTHROPIC_API_KEY \
--email user@example.com \
--code 123456 \
--data-dir ~/.lore
This verifies the code and completes setup (data repo, welcome doc, background daemon). After this, Lore operates autonomously — no further user input is needed.
If setup was already completed and you only need to authenticate:
# Step 1: Send OTP
lore auth login --email user@example.com --send-only
# Step 2: Ask user for code, then verify
lore auth login --email user@example.com --code 123456
If auth is already configured (from a previous setup), add --skip-login to skip the login step entirely.
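For example, re-running setup on a machine that already has credentials stored might look like this. This is a sketch, assuming setup accepts --skip-login alongside the flags shown earlier; the data directory path is the default from above:

```shell
# Re-run setup without prompting for a login code
lore setup \
  --email user@example.com \
  --data-dir ~/.lore \
  --skip-login
```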
To update to the latest version:
lore update
This checks npm, installs the latest @getlore/cli, and restarts the background daemon so it picks up the new code. To check without installing:
lore update --check
If using npx -y @getlore/cli, the npm package is always fetched fresh. You only need to restart the daemon:
lore sync restart
After running lore setup, add Lore as an MCP server. No env vars needed — the MCP server reads API keys and auth from ~/.config/lore/ automatically:
{
"mcpServers": {
"lore": {
"command": "lore",
"args": ["mcp"]
}
}
}
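If the lore binary isn't on the client's PATH (for example, when using npx instead of a global install), an equivalent config might look like the following. This is a sketch, assuming your MCP client can resolve npx:

```json
{
  "mcpServers": {
    "lore": {
      "command": "npx",
      "args": ["-y", "@getlore/cli", "mcp"]
    }
  }
}
```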
See MCP Setup for client-specific instructions and advanced env var overrides.
Use search for quick lookups. It's fast and cheap.
search("authentication user feedback", project: "backend-rewrite")
Use research only when you need to cross-reference multiple sources or synthesize findings. It runs asynchronously (returns a job_id — poll research_status for results) and makes 10-30 internal LLM calls.
research("What do we know about authentication approaches?")
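Because research is asynchronous, a typical call sequence looks like this (pseudocode in the style of the examples above; the job_id field follows the description of the return value):

```
job = research("What do we know about authentication approaches?")
// job_id identifies the running job
research_status(job.job_id)  // poll until findings with citations are ready
```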
Use ingest to push content — both full documents and short insights/decisions. Always pass source_url and source_name when available — they enable citation linking.
// Full document
ingest({
content: "...",
title: "Sprint Planning Notes",
project: "backend-rewrite",
source_type: "meeting",
source_url: "https://notion.so/...",
source_name: "Notion: Sprint Planning"
})
// Short decision (title auto-generated, LLM extraction skipped)
ingest({
content: "Chose JWT over session cookies due to mobile app requirements",
project: "backend-rewrite"
})
Ingestion is idempotent — calling with identical content returns immediately without cost.
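This means repeated ingestion of the same text is safe. For example (pseudocode as above):

```
// First call: content is indexed normally
ingest({
  content: "Chose JWT over session cookies due to mobile app requirements",
  project: "backend-rewrite"
})

// Second call with byte-identical content: matched by content hash,
// returns immediately without re-indexing or LLM cost
ingest({
  content: "Chose JWT over session cookies due to mobile app requirements",
  project: "backend-rewrite"
})
```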
After setup, configure at least one sync source so Lore has documents to search. Point it at a directory containing your documents:
lore sync add --name "Meeting Notes" --path ~/notes --project meetings
By default this syncs all files (**/*). Lore supports Markdown, JSON, JSONL, plain text, CSV, HTML, XML, PDF, and images. To restrict to specific types, pass --glob:
lore sync add --name "Docs" --path ~/docs --glob "**/*.md" --project docs
Then run the initial sync:
lore sync
The background daemon (started during setup) watches for new files and auto-indexes them. If the daemon wasn't started, you can start it manually:
lore sync start
Lore ships a welcome document that gets indexed during setup. You can search for it immediately to verify everything works:
lore search "getting started"
Lore provides skill files that teach AI agents how to use it effectively. After setup, install skills for the tools you use:
# See available skills
lore skills list
# Install for your tools
lore skills install claude-code
lore skills install gemini
lore skills install codex
Skills are instruction files placed in the right location for each tool. They teach the agent when to search, when to ingest, and how to cite sources properly. Install skills for every AI tool in your workflow so they all share the same Lore context.
All machines logged in with the same email share the same knowledge base. Lore deduplicates by content hash, so the same document ingested from different machines is only stored once.
The data directory (~/.lore) is a git repo. lore setup auto-discovers your existing data repo URL when setting up a new machine, so knowledge follows you.
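Setting up a second machine therefore reuses the same email. A sketch using only the flags documented above (data-repo discovery is handled by setup itself):

```shell
# On a new machine, with the same email as before
npm install -g @getlore/cli
lore setup \
  --openai-key $OPENAI_API_KEY \
  --anthropic-key $ANTHROPIC_API_KEY \
  --email user@example.com \
  --data-dir ~/.lore
```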
1. search("database migration approach", project: "backend-rewrite")
2. get_source(id, include_content: true) // read the full document
3. Use the cited information to inform your response
1. ingest({
content: meeting_transcript,
title: "Product Review - Jan 15",
project: "product",
source_type: "meeting",
participants: ["Sarah", "Mike", "Lisa"]
})
1. research("What are the main user complaints about export?")
// Returns: summary, findings with citations, conflicts, follow-ups
2. ingest({
content: "Key finding: 3/5 users cited export speed as top pain point",
project: "product"
})
| Tool | Cost | Use For |
|---|---|---|
| search | Low | Quick lookups, finding relevant sources |
| get_source | Low | Full document retrieval by ID |
| list_sources | Low | Browse what exists |
| list_projects | Low | Discover projects |
| ingest | Low-Medium | Push content — documents, insights, or decisions |
| sync | Variable | Refresh from source directories |
| research | High | Cross-reference and synthesize |