Your Personal Site as a Second Mind: The Why

Apr 6, 2026

I’ve been thinking seriously about knowledge management for years. I’ve tried Notion, Obsidian, Loop, and a half-dozen other tools. Each time the system works brilliantly for about three months and then quietly falls apart. Not because the tool is bad. Because maintenance is hard, and the system is only as good as the maintenance you put in.

The pattern is always the same. Capture is easy: every tool makes it trivial to jot something down. Organisation is manageable at first. But distillation, the work of revisiting what you’ve captured, extracting the real insight, and connecting it to everything else you know, is where it breaks down. Tiago Forte calls this Progressive Summarisation: the practice of revisiting notes layer by layer until you’ve surfaced the essential idea. In practice, it’s a huge ongoing effort. And when distillation fails, expression fails too: you end up rediscovering things you already knew, starting from scratch instead of building on prior work.

Then I read James Croft’s piece Why Your Second Brain Needs an AI Companion. He describes exactly this: the CODE stages (Capture, Organise, Distil, Express) reinforce each other’s failures. Poor organisation makes distillation hard. When distillation stalls, expression stalls, and when expression stalls you lose the motivation to capture in the first place. The whole system decays.

I’d lived with this for years. Reading Croft’s post was what finally pushed me to act. He wasn’t just diagnosing the problem, he had a concrete answer. And I knew what the fix had to do: reduce the maintenance burden dramatically, without reducing my control over what gets published.

What Karpathy adds

Andrej Karpathy’s piece on LLM wikis reframed the problem for me. His argument: retrieval-augmented generation (RAG), where an AI fetches context at query time, still tends to rediscover knowledge from scratch every time. What you actually want is a persistent, maintained knowledge layer, a wiki that an LLM incrementally builds as you add new information. The human curates, the AI maintains.

That distinction matters. An AI that answers questions from your notes is useful. An AI that actively maintains the connections between your notes, by cross-linking, surfacing gaps, and suggesting what is ready to publish, is a different thing entirely. The second one compounds over time. The knowledge graph gets richer the more you use it, rather than staying static.

Croft’s framing of the three pillars holds here too: a structured knowledge store (so AI has reliable patterns to follow), a knowledge graph (so AI knows what’s connected and what’s missing), and specialist AI skills (instructions grounded in your specific system, not generic prompts). The level of specificity is what separates a reliable AI collaborator from a chat interface that hallucinates.

What it actually gets you

Once the system is maintained, which is the whole premise, the payoff is different in kind, not just degree.

The obvious benefit is not forgetting things. The deeper payoff is semantic retrieval: search by meaning, not exact words. Instead of keyword matching, the system uses embeddings, numerical representations of meaning, across the full persistent store: notes, links, reading highlights, journal entries, project logs, all of it. A note from eighteen months ago about attention mechanisms surfaces when you’re thinking about knowledge graphs today, because the ideas sit close together in meaning even if the words don’t overlap.
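To make the idea concrete, here is a minimal sketch of meaning-based ranking, not the actual implementation (which a later post covers). The vectors are hand-picked placeholders; a real system would get them from an embedding model, and they would have hundreds of dimensions rather than three.

```python
import math

def cosine(a, b):
    """Similarity of two embedding vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings", hand-picked for illustration: the attention note sits
# close to the query in meaning despite sharing no keywords with it.
notes = {
    "2024-10 attention mechanisms": [0.9, 0.4, 0.1],
    "2025-01 sourdough starter":    [0.1, 0.2, 0.9],
    "2025-03 conference packing":   [0.2, 0.8, 0.3],
}
query = [0.8, 0.5, 0.2]  # embedding of "how do knowledge graphs relate ideas?"

# Rank every note in the store by closeness in meaning, not word overlap.
ranked = sorted(notes, key=lambda n: cosine(query, notes[n]), reverse=True)
```

The eighteen-month-old attention note ranks first for the knowledge-graph query, which is the whole point: nothing in the query text matches the note text, only the meaning does.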

This is exactly the distinction Karpathy draws. Every new thing you capture doesn’t just add to the pile. It lands in a space where old knowledge can connect to it. The graph gets denser. You stop rediscovering things you already knew and start building on yourself. The system compounds rather than decays.

But that only works if the persistent store is actually fed. Which means capture has to be effortless, and it has to work across every channel where useful things actually show up.

Why a Hugo site, not Obsidian

I already had a Hugo site, stephlocke.com, for published content: blog posts, talks, bio, press. It now lives in a private GitHub repo, which means one thing immediately: draft: true content in git is genuinely private, not just hidden. No “public but unlisted” leakage. The content never leaves the repo unless I explicitly publish it.

Hugo’s draft system turns out to be a nearly perfect visibility toggle. Inner mind = draft: true. Outer mind = draft: false. The entire distinction between private thinking and public output is already baked into the tool I was using.
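In front-matter terms the toggle is a single key (the title and date here are illustrative, not a real entry):

```yaml
---
title: "Attention mechanisms vs knowledge graphs"
date: 2026-04-06
draft: true   # inner mind; flip to false to promote to the outer mind
---
```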

The alternative, a separate Obsidian vault plus a site, means two systems to maintain and no easy bridge between raw thinking and published writing. I wanted the raw material and the finished product to live in the same place, with a clear but low-friction path between them.

The design

The site now has two build modes:

hugo                               # public site, drafts hidden
hugo server --environment private  # full second mind, everything visible
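Under Hugo’s config-directory layout, the private environment can be wired with a small override file, roughly like this (a sketch; the exact config is the subject of the next post):

```toml
# config/private/hugo.toml — merged over config/_default/hugo.toml when
# building with --environment private (file layout is an assumption)
buildDrafts = true   # render draft: true content: the inner mind
```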

The private environment shows six new content sections alongside the existing blog and talks:

  • Links: bookmarks with AI-generated summaries, captured with one tap on mobile
  • Notes: atomic thoughts that might grow into posts
  • Journal: personal reflections, never published
  • Reading: highlights and notes from books and articles
  • Projects: ongoing work with status tracking
  • Resources: curated tools and references (public by default)

The system also has to meet you where you are. Knowledge worth keeping doesn’t arrive through one channel: it comes from a conversation, a talk, something skimmed on your phone, or a thought mid-run. If capture requires opening a specific app and forming a proper note, most of it disappears. So capture is spread across channels: a one-tap Android shortcut files a GitHub Issue that a script processes into a link or note entry; reading highlights sync in from whatever you’re reading on; Claude Code slash commands handle structured capture at the desk; newsletters and longer reads come in through email. Everything feeds the same store regardless of origin. The semantic index sees all of it.

The AI doesn’t decide what gets published. It summarises, suggests, and drafts. Every promotion to the outer mind is a deliberate human decision. The inner mind is where AI helps with cognitive load. The outer mind is still curated by hand.

In the next post I’ll cover the Hugo implementation in detail: the config environment approach, the archetypes, the connection graph, and how the private/public split works in practice. The post after that covers the AI workflows: Claude Code slash commands, the Android to GitHub Issues pipeline, and the semantic search layer.

If you want to start now, take 10 minutes to audit your current system: list where ideas enter, where they get stuck, and one place they die. That list is your migration brief.

This is part 1 of a 5-part series. Next: The Hugo Architecture →