# LLM Wiki — Second Brain for Claude Code + Obsidian
Inspired by Andrej Karpathy's LLM Wiki pattern (gist). This skill turns Claude Code (or any agent CLI) into a disciplined wiki maintainer that incrementally builds and maintains a persistent, interlinked Obsidian vault as you feed it sources. The knowledge compounds — cross-references, contradictions, and synthesis are already there when you query.
## Core principle
Most LLM+docs workflows are RAG: retrieve fragments at query time, synthesize from scratch, forget. The wiki is compounding: sources are read once, integrated into a persistent markdown knowledge base, and kept current. You curate and ask; the LLM reads, files, cross-references, and maintains.
Obsidian is the IDE. The LLM is the programmer. The wiki is the codebase.
## When to use
- Personal: track goals, health, psychology, journaling, self-improvement
- Research: deep dives over weeks on a topic — papers, articles, reports, evolving thesis
- Book companion: file chapters as you read; build a fan-wiki-style companion for characters, themes, plot threads
- Business/team: internal wiki fed by Slack, meeting notes, calls — LLM does maintenance nobody else wants to do
- Competitive analysis, due diligence, trip planning, course notes, hobby deep-dives
**Do NOT use when:** you need one-shot Q&A over a fixed document (use RAG), you don't plan to add sources over time, or you don't want Obsidian in the loop.
## Architecture (three layers)

```text
vault/
├── raw/                 # Layer 1 — IMMUTABLE source of truth
│   ├── <source files>   # Articles, papers, PDFs, images, data
│   └── assets/          # Downloaded images from clipped articles
├── wiki/                # Layer 2 — LLM-owned knowledge base
│   ├── index.md         # Content catalog (LLM updates every ingest)
│   ├── log.md           # Append-only timeline (## [YYYY-MM-DD] <op> | <title>)
│   ├── entities/        # Person/Org/Place pages
│   ├── concepts/        # Ideas, theories, frameworks
│   ├── sources/         # One summary page per ingested source
│   ├── comparisons/     # Cross-source analysis pages
│   └── synthesis/       # High-level syntheses, theses, overviews
├── CLAUDE.md            # Schema + conventions (Claude Code)
└── AGENTS.md            # Same content, for Codex/Cursor/Antigravity
```
- **Layer 1 (`raw/`)** — you own it. The LLM only reads; it never writes.
- **Layer 2 (`wiki/`)** — the LLM owns it. It creates, updates, and cross-references pages. You read it.
- **Layer 3 (CLAUDE.md / AGENTS.md)** — the schema: conventions, workflows, frontmatter rules. Co-evolved by you and the LLM.
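A schema file can be short. An illustrative CLAUDE.md excerpt, not the shipped template (the exact rules and frontmatter wording here are assumptions):

```markdown
## Ownership
- `raw/` is immutable: read it, never write to it.
- All writes go under `wiki/`. Update `index.md` and append to `log.md` on every ingest.

## Page frontmatter
Every wiki page opens with YAML frontmatter recording its category, dates, and source count.
```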
## Three core operations
- **Ingest** — the LLM reads a source, discusses takeaways with you, writes a source summary, updates 10–15 relevant pages, updates the index, and appends to the log. See `references/ingest-workflow.md`.
- **Query** — the LLM reads the index first, drills into relevant pages, and synthesizes an answer with citations. Good answers get filed back into the wiki so explorations compound. See `references/query-workflow.md`.
- **Lint** — a health check: contradictions, stale claims, orphan pages, missing cross-refs, concepts mentioned but lacking their own page, and data gaps to fill with web search. See `references/lint-workflow.md`.
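The log format shown in the architecture tree (`## [YYYY-MM-DD] <op> | <title>`) is simple enough to append with a few lines of stdlib Python. A minimal sketch, assuming the log lives at `wiki/log.md`; the shipped helper's interface may differ:

```python
from datetime import date
from pathlib import Path

def append_log_entry(log_path: Path, op: str, title: str) -> str:
    """Append a '## [YYYY-MM-DD] <op> | <title>' line to the append-only log."""
    entry = f"## [{date.today().isoformat()}] {op} | {title}\n"
    with log_path.open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```

Because the log is append-only, every ingest, query re-filing, or lint pass leaves a durable timeline entry.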
## Quick start

```bash
# 1. Initialize a vault (in Obsidian's vault directory)
python scripts/init_vault.py --path ~/vaults/research --topic "LLM interpretability"

# 2. Drop a source into raw/, then ingest
/wiki-ingest ~/vaults/research/raw/anthropic-monosemanticity.pdf

# 3. Ask questions (answers can be re-filed into the wiki)
/wiki-query "how does monosemanticity compare to mechanistic interpretability?"

# 4. Periodic health check
/wiki-lint

# 5. See the timeline
/wiki-log --last 10
```
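Step 1 above is plain directory scaffolding. A minimal sketch of what vault initialization does, assuming the layout from the architecture section (the shipped `init_vault.py` also seeds the CLAUDE.md and AGENTS.md templates):

```python
from pathlib import Path

WIKI_SUBDIRS = ["entities", "concepts", "sources", "comparisons", "synthesis"]

def init_vault(root: Path, topic: str) -> None:
    """Create the raw/ + wiki/ skeleton and seed index.md / log.md."""
    (root / "raw" / "assets").mkdir(parents=True, exist_ok=True)
    for sub in WIKI_SUBDIRS:
        (root / "wiki" / sub).mkdir(parents=True, exist_ok=True)
    index = root / "wiki" / "index.md"
    if not index.exists():  # never clobber an existing catalog
        index.write_text(f"# Index — {topic}\n", encoding="utf-8")
    log = root / "wiki" / "log.md"
    if not log.exists():
        log.write_text("# Log\n", encoding="utf-8")
```

Initialization is idempotent by design: re-running it on an existing vault creates nothing and overwrites nothing.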
## Slash commands (this plugin ships)

| Command | Purpose |
|---|---|
|  | Bootstrap a fresh vault with schema files + starter structure |
| `/wiki-ingest` | Read a source, discuss, update the wiki, log it |
| `/wiki-query` | Search the wiki, synthesize an answer, offer to file it back |
| `/wiki-lint` | Run the health check — contradictions, orphans, stale claims, gaps |
| `/wiki-log` | Show recent log entries (uses Unix tools on `wiki/log.md`) |
## Sub-agents (this plugin ships)

| Agent | When dispatched |
|---|---|
| `wiki-ingestor` | Delegated ingest flow — reads the source, proposes updates, applies them after your approval |
| `wiki-linter` | Runs the health-check workflow independently and reports findings |
| `wiki-librarian` | Answers queries using index-first search, synthesizes with citations |
## Python tools (`scripts/`)

All tools are standard library only (no pip installs). Run any of them with `python scripts/<tool>.py --help`.

| Script | Purpose |
|---|---|
| `init_vault.py` | Create the folder structure + seed CLAUDE.md, AGENTS.md, index.md, log.md |
|  | Helper: extract text/frontmatter from a source file, ready for LLM review |
|  | Regenerate `index.md` from wiki page frontmatter (category, date, source count) |
|  | Append a standardized log entry: `## [YYYY-MM-DD] <op> \| <title>` |
|  | BM25 search over wiki pages (standalone fallback when `index.md` isn't enough) |
|  | Find orphans (no inbound links), stale pages, missing cross-refs, broken links |
|  | Compute link graph stats — hubs, orphans, clusters, disconnected components |
|  | Render a wiki page (or subtree) to a Marp slide deck |
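The orphan check above reduces to building an inbound-link map from `[[wikilinks]]`. A sketch of the heuristic, assuming pages are keyed by filename stem; the shipped lint script's exact behavior may differ:

```python
import re

# Capture a wikilink target, stopping before any ']', alias '|', or heading '#'.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(pages: dict) -> set:
    """Return page names (name -> markdown body) with no inbound [[wikilink]]."""
    inbound = set()
    for name, body in pages.items():
        for target in WIKILINK.findall(body):
            target = target.strip()
            if target != name:  # self-links don't count as inbound
                inbound.add(target)
    return {name for name in pages if name not in inbound}
```

The same inbound map is the starting point for hub detection and disconnected-component stats.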
## Cross-tool compatibility

The vault's schema lives in CLAUDE.md (Claude Code) or AGENTS.md (Codex/Cursor/Antigravity/OpenCode). The same content works in both, and this plugin ships both templates. For per-tool setup instructions, see `references/cross-tool-setup.md`.

- CLAUDE.md → Claude Code
- AGENTS.md → Codex CLI, Cursor, Antigravity, OpenCode, Gemini CLI
- .cursorrules → legacy Cursor (pre-AGENTS.md)

The scripts are pure Python stdlib, so they run identically everywhere. Only the loader file changes per tool.
## Obsidian setup (recommended)

- Obsidian Web Clipper — browser extension; converts web articles to markdown and drops them in `raw/`
- Download images locally — Settings → Files and links → set Attachment folder path to `raw/assets`. Settings → Hotkeys → bind "Download attachments for current file" to a convenient hotkey
- Graph view — see hubs/orphans; essential for spotting structural problems
- Marp plugin — Markdown-based slide decks directly from wiki pages
- Dataview plugin — dynamic tables/lists over page frontmatter (tags, dates, source counts)
- Git — the vault is a plain markdown repo; version it

Full setup walkthrough: `references/obsidian-setup.md`
## Why this works (vs plain RAG)

| Plain RAG | LLM Wiki |
|---|---|
| Rediscover knowledge each query | Knowledge accumulates |
| Cross-references re-computed every time | Cross-references pre-written and maintained |
| Contradictions surface only if you ask | Contradictions flagged during ingest |
| Exploration disappears into chat history | Good answers re-filed as new pages |
| Scales by embeddings infrastructure | Scales by plain markdown + optional local search |

At ~100 sources / hundreds of pages, filesystem search is enough. Past that, layer in a local search tool like qmd, or use the bundled BM25 search script.
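If you do outgrow filesystem search, BM25 over markdown pages needs only the standard library. A simplified scoring sketch with the usual parameters (k1 = 1.5, b = 0.75); the bundled search script's interface may differ:

```python
import math
import re
from collections import Counter

def bm25_rank(query: str, docs: dict, k1: float = 1.5, b: float = 0.75) -> list:
    """Rank doc names (name -> text) by BM25 score against the query, best first."""
    tokenize = lambda s: re.findall(r"\w+", s.lower())
    tfs = {name: Counter(tokenize(text)) for name, text in docs.items()}
    n = len(docs)
    avgdl = (sum(sum(tf.values()) for tf in tfs.values()) / n) if n else 1.0
    avgdl = avgdl or 1.0  # guard against all-empty docs
    scores = {}
    for name, tf in tfs.items():
        dl = sum(tf.values())
        score = 0.0
        for term in set(tokenize(query)):
            df = sum(1 for t in tfs.values() if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))
        scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)
```

Rare terms dominate via the IDF factor, which is exactly the behavior you want when a query names one specific concept page.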
## Related skills

Other skills can chain into this one:

- A PARA-method memory skill — complementary as long-term personal memory that feeds sources into the wiki
- A lightweight Obsidian note helper (mattpocock) — this skill is the maintained-wiki layer on top
- rag-design — when the wiki outgrows ~500 pages, use it to bolt on a retrieval layer
- Expose the wiki as an MCP tool
- Multi-agent wiki maintenance (ingestor + linter + librarian)
## Reference docs

- `references/wiki-schema.md` — full vault layout, page frontmatter, naming conventions
- `references/page-formats.md` — entity, concept, source, comparison, synthesis templates
- `references/ingest-workflow.md` — the detailed ingest flow the wiki-ingestor agent follows
- `references/query-workflow.md` — query patterns, citation format, re-filing answers
- `references/lint-workflow.md` — health-check heuristics
- `references/obsidian-setup.md` — Obsidian plugins, hotkeys, vault config
- `references/cross-tool-setup.md` — per-tool setup (Codex, Cursor, Antigravity, etc.)
- `references/memex-principles.md` — Bush's Memex, and why the LLM changes the maintenance math
## Templates

- CLAUDE.md, AGENTS.md, .cursorrules — schema loaders per tool
- index.md, log.md — starter index and log
- Page templates — entity, concept, source-summary, comparison, synthesis
- A small worked example you can study or copy
## Iron rule

The LLM never edits files in `raw/`. Ever. Sources are immutable. All LLM writes go to `wiki/`. If you need to correct a source, do it in `raw/` yourself — then re-ingest.
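If the LLM's file writes pass through a tool layer, the iron rule can be enforced mechanically rather than by convention. A sketch of a path guard, assuming vault-relative write paths; the function name is illustrative, not part of the plugin:

```python
from pathlib import Path

def assert_writable(vault_root: Path, target: Path) -> Path:
    """Resolve a vault-relative write target, rejecting anything inside raw/."""
    resolved = (vault_root / target).resolve()
    raw_dir = (vault_root / "raw").resolve()
    if resolved == raw_dir or raw_dir in resolved.parents:
        raise PermissionError(f"raw/ is immutable: {resolved}")
    return resolved
```

Resolving before comparing also blocks `raw/../raw/`-style paths that would slip past a naive string prefix check.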