v0.9.36 — stable release

Your AI tech lead,
always available

rekipedia scans any repository into a portable SQLite knowledge store and gives every developer an LLM-powered tech lead they can ask anything — grounded in your actual codebase.

↓ Install rekipedia View on GitHub →
# scan your repo once
$ reki init . && reki scan .
Extracted 1,248 symbols across 94 files
Wiki generated — 32 pages in 14 sections
Stored in .rekipedia/store.db (4.2 MB)

# ask anything
$ reki ask "How does the auth flow work?"
rekipedia The auth flow uses JWT tokens issued by AuthService
(src/auth/service.py:42). Tokens expire after 24h and are
refreshed via /api/auth/refresh. See wiki/auth-flow for
the full sequence diagram.

Features

Everything you need to
understand any codebase

No hallucinations, no guessing — every answer is grounded in your actual source code and wiki.

🗺️
Agentic wiki orchestration
PlannerAgent dynamically designs the wiki structure, assigns importance scores (0–100), and groups pages into logical sections like architecture and getting-started.
🔍
Hybrid RAG Q&A
FAISS-indexed code chunks + wiki pages give the LLM full codebase context. Answers cite real file paths and line numbers — always grounded.
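To make the idea concrete, here is a minimal sketch of hybrid retrieval in plain Python: score code chunks and wiki pages against a query embedding, then merge both pools into one ranked context list. This is illustrative only, not rekipedia's implementation; the data shapes, the `wiki_weight` parameter, and the toy two-dimensional vectors are all invented for the example.

```python
from math import sqrt

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query_vec, code_chunks, wiki_pages, k=3, wiki_weight=0.5):
    """Score both pools against the query, merge, and keep the top k."""
    scored = [(cosine(query_vec, c["vec"]), c) for c in code_chunks]
    scored += [(wiki_weight * cosine(query_vec, w["vec"]), w) for w in wiki_pages]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [item for _, item in scored[:k]]

# toy corpus: two code chunks and one wiki page
chunks = [{"id": "src/auth/service.py:42", "vec": [1.0, 0.0]},
          {"id": "src/db/models.py:10", "vec": [0.0, 1.0]}]
pages = [{"id": "wiki/auth-flow", "vec": [0.9, 0.1]}]
top = hybrid_retrieve([1.0, 0.2], chunks, pages)
print([t["id"] for t in top])
```

A real deployment would swap the hand-rolled cosine loop for a FAISS index over model embeddings; the merge-and-rank step stays the same.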
🔄
Incremental updates
Only re-processes changed files after the first scan. Your knowledge base stays fresh without re-scanning the entire repo every time.
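Change detection of this kind can be sketched with content hashes: record a digest per file, and on the next pass re-process only files whose digest moved. The dictionary below stands in for rekipedia's SQLite store; the function and its behaviour are an illustration, not the tool's actual code.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    # hash the content, so touching a file without editing it is not a "change"
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: Path, seen: dict[str, str]) -> list[Path]:
    """Return files whose content hash differs from the recorded one,
    updating the record in place (a stand-in for the on-disk store)."""
    out = []
    for path in sorted(root.rglob("*.py")):
        h = digest(path)
        key = str(path.relative_to(root))
        if seen.get(key) != h:
            seen[key] = h
            out.append(path)
    return out
```

On the first call every file is "changed"; afterwards only edited files come back, which is what keeps incremental updates cheap.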
🗄️
Portable SQLite store
Everything lives in a single .rekipedia/store.db file. No servers, no cloud sync — commit it, share it, or keep it local.
🤖
Any LLM provider
Powered by litellm — use Ollama locally for free, or plug in OpenAI, Anthropic, Google Gemini, or any OpenAI-compatible endpoint.
📦
Wiki export
Bundle the entire wiki to a single Markdown file, ZIP archive, or structured JSON with reki export — ready to share or import.
🌐
Local web UI
Browse wiki pages, run full-text search, and ask questions through a sleek dark web interface served locally, with a category sidebar, live search, and the dependency graph all built in.
🕸️
File-level dependency graph
Interactive force-directed graph showing real file-to-file import relationships. Instantly spot tightly-coupled modules and architectural bottlenecks.
✂️
Context slicing
Each wiki page receives only the data it needs: roughly 40–60% fewer tokens than a fixed-layout approach. Faster, cheaper, more accurate.
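The core idea is a greedy packer: rank candidate chunks by relevance to the page being generated and admit them until a token budget runs out, rather than pasting a fixed layout of everything into every prompt. The scores, token counts, and budget below are invented for illustration.

```python
def slice_context(chunks, budget):
    """Greedily pack the most relevant chunks under a token budget."""
    picked, used = [], 0
    for chunk in sorted(chunks, key=lambda c: c["score"], reverse=True):
        if used + chunk["tokens"] <= budget:
            picked.append(chunk)
            used += chunk["tokens"]
    return picked, used

chunks = [
    {"id": "AuthService", "score": 0.9, "tokens": 400},
    {"id": "README intro", "score": 0.2, "tokens": 600},
    {"id": "refresh handler", "score": 0.7, "tokens": 300},
]
picked, used = slice_context(chunks, budget=800)
print([c["id"] for c in picked], used)
```

With an 800-token budget the low-relevance README chunk is dropped, which is where the token savings come from.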
🔌
Embed provider choice
Use --embed-provider openai|ollama|azure|... — any litellm-compatible embedding model for semantic RAG search.

Install

Get started in seconds

Choose your preferred installation method

# Zero install required — runs directly via npx
npx rekipedia init .
npx rekipedia scan .
npx rekipedia serve .

Requires Node.js 18+. The npm package wraps the Python CLI via uvx.

# Zero install required — runs directly via uvx
uvx rekipedia init .
uvx rekipedia scan .
uvx rekipedia serve .

Requires uv. Fastest cold-start with automatic venv isolation.

# Core install (scan + serve + ask)
pip install rekipedia

# With semantic RAG support (FAISS + NumPy, ~100 MB)
pip install "rekipedia[rag]"

# Or with uv tool (isolated, globally available)
uv tool install rekipedia

Requires Python 3.11+. Both rekipedia and reki commands are available after install.

# Add the tap (first time only)
brew tap unrealandychan/tap

# Install the Go single binary — no Python required
brew install rekipedia

# Verify
reki --version

The Homebrew formula installs a single statically linked binary with no runtime dependencies. Recommended for macOS and Linux.


Commands

Everything at your fingertips

All commands work with both reki (short) and rekipedia (full).

Command               Description
reki init [REPO]      Scaffold .rekipedia/ with config.yml and update .gitignore
reki scan [REPO]      Full analysis — extract symbols, synthesise wiki pages, export JSON
reki update [REPO]    Incremental refresh — re-processes only changed files, keeps the rest
reki ask [QUESTION]   Interactive Q&A REPL — streaming answers grounded in your codebase
reki serve [REPO]     Start a local web UI to browse wiki pages and ask questions
reki embed [REPO]     Build or rebuild the FAISS semantic search index for hybrid RAG Q&A
reki export [REPO]    Bundle the wiki to a single file — --format md|zip|json

Configuration

Works with any LLM

Powered by litellm — plug in any model with a one-line change in .rekipedia/config.yml.

# .rekipedia/config.yml
version: 1
llm:
  model: ollama/llama4       # any litellm string
  api_key: ""              # or REKIPEDIA_API_KEY
  base_url: ""             # for local endpoints
  temperature: 0.2
ignore:
  - .git
  - node_modules
  - __pycache__
languages:
  - python
  - typescript
Ollama (local, free)    ollama/llama4
OpenAI                  gpt-4o
Anthropic               claude-opus-4-6
Google Gemini           gemini/gemini-2.0-pro
Azure OpenAI            azure/gpt-4o
Any OpenAI-compatible   set base_url in config
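For a self-hosted, OpenAI-compatible server, the same config keys shown above are all that changes. The host, port, and model name in this fragment are placeholders, not defaults shipped with rekipedia:

```yaml
# .rekipedia/config.yml, pointed at an OpenAI-compatible endpoint
# (host and model name below are placeholders)
llm:
  model: openai/my-model               # routed via the OpenAI protocol by litellm
  api_key: ""                          # or set REKIPEDIA_API_KEY / OPENAI_API_KEY
  base_url: "http://localhost:8000/v1"
```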

Runtime overrides: REKIPEDIA_API_KEY, REKIPEDIA_MODEL, OPENAI_API_KEY


Download

Pre-built binaries

Single statically linked binary — no Python, no runtime. Latest release: v0.9.36

🍎
macOS
darwin_arm64.tar.gz
↓ Apple Silicon
🍎
macOS
darwin_amd64.tar.gz
↓ Intel
🐧
Linux
linux_amd64.tar.gz
↓ x86_64
🐧
Linux
linux_arm64.tar.gz
↓ ARM64
🪟
Windows
windows_amd64.zip
↓ x86_64
📦
All packages
.deb / .rpm / .apk
↗ GitHub Releases

Start understanding
your codebase today

No servers, no cloud sync, no hallucinations. Just run reki scan . and ask anything.