🚨 Andrej Karpathy just dropped something that could replace a lot of RAG workflows. It's called LLM Wiki.

The idea is simple: most AI systems retrieve context from scratch every time you ask a question. LLM Wiki doesn't. It builds a persistent knowledge base that gets better every time you add a new source.

So instead of:
• search docs
• pull fragments
• answer
• forget everything
• repeat

it does this:
• ingest a source
• extract the important ideas
• update entity pages
• revise topic summaries
• connect related concepts
• flag contradictions
• keep compounding the knowledge over time

That shift matters. RAG is great for retrieval, but a lot of people are really trying to build memory. Not just "find me the right chunk again."
More like: "help me build an evolving model of this topic over time."

That's what this is. Karpathy's examples are strong too:
• personal knowledge
• long-horizon research
• books and topics
• internal company knowledge
• meeting transcripts
• customer calls

Basically, anything where the knowledge should accumulate, not reset every session.

The best way to think about it:

Obsidian is the IDE.
The LLM is the programmer.
The wiki is the codebase.

You don't manually maintain the system. You feed it sources, ask questions, and the AI keeps the structure alive.

That's a much bigger idea than "better RAG." 100% open source.
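The compounding loop described above (ingest a source, extract ideas, update entity pages, connect concepts, flag contradictions) can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not Karpathy's actual implementation: the `wiki/` directory, the page schema, and the toy `NOT <fact>` convention for spotting contradictions are all assumptions, and `extract` stands in for an LLM call.

```python
# Minimal sketch of a persistent "LLM wiki" ingest loop.
# Hypothetical names throughout -- not the real LLM Wiki codebase.
import json
from pathlib import Path

WIKI_DIR = Path("wiki")  # assumed on-disk wiki: one JSON page per entity


def load_page(entity: str) -> dict:
    """Load an entity page, or start an empty one if it doesn't exist yet."""
    path = WIKI_DIR / f"{entity}.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"entity": entity, "facts": [], "related": [], "contradictions": []}


def save_page(page: dict) -> None:
    """Persist the page so knowledge survives across sessions."""
    WIKI_DIR.mkdir(exist_ok=True)
    (WIKI_DIR / f"{page['entity']}.json").write_text(json.dumps(page, indent=2))


def ingest(source_text: str, extract) -> None:
    """Fold one source into the wiki instead of answering and forgetting.

    `extract` stands in for an LLM call that returns a list of
    {"entity": ..., "fact": ..., "related": [...]} dicts.
    """
    for idea in extract(source_text):
        page = load_page(idea["entity"])
        # Toy contradiction check: an incoming fact "NOT <x>" where <x>
        # is already recorded gets flagged instead of silently merged.
        if idea["fact"] in {f"NOT {fact}" for fact in page["facts"]}:
            page["contradictions"].append(idea["fact"])
        elif idea["fact"] not in page["facts"]:
            page["facts"].append(idea["fact"])  # update the entity page
        for rel in idea.get("related", []):
            if rel not in page["related"]:
                page["related"].append(rel)  # connect related concepts
        save_page(page)  # the knowledge compounds rather than resets
```

The point of the sketch is the contrast with stateless RAG: each `ingest` call starts from the pages already on disk, so every new source revises an accumulated model instead of rebuilding context from scratch.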
→ View original post on X — @scobleizer, 2026-04-06 15:06 UTC
