A personal AI agent with two goals: be maximally useful for one person, and get better at doing that over time.
The LLM underneath is improving rapidly and getting democratized. That's a tailwind, not a moat.
The leverage is in the infrastructure around the model: memory, skills, self-modification. A bespoke framework for one person outperforms a general one using the same model — and it iterates faster because the agent understands its own architecture.
Each builds on the last. Memory is the foundation. Skills are the mechanism. Self-modification is what emerges.
Memory
Don't lose important information. Memory builds like a human's through usage — recent things verbatim, older things compressed but drillable. No hard amnesia cliffs.
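One way the verbatim/compressed split could work, as a minimal sketch. Everything here is an assumption for illustration, not the project's actual design: the class name, the fixed verbatim window, and the `summarize()` stub standing in for a real model call.

```python
import itertools

class TieredMemory:
    """Hypothetical sketch: recent entries stay verbatim, older ones are
    compressed into summaries but remain drillable by id."""

    def __init__(self, verbatim_limit=3):
        self.verbatim_limit = verbatim_limit
        self._ids = itertools.count()
        self.recent = []      # [(id, full text)], newest last
        self.summaries = {}   # id -> compressed form
        self.archive = {}     # id -> full original (drillable)

    def summarize(self, text):
        # Stand-in for an LLM summarization call; here, crude truncation.
        return text[:40] + ("..." if len(text) > 40 else "")

    def remember(self, text):
        entry_id = next(self._ids)
        self.recent.append((entry_id, text))
        # Once the verbatim window overflows, compress the oldest entry.
        # No hard amnesia cliff: the full text stays retrievable.
        while len(self.recent) > self.verbatim_limit:
            old_id, old_text = self.recent.pop(0)
            self.summaries[old_id] = self.summarize(old_text)
            self.archive[old_id] = old_text
        return entry_id

    def drill_down(self, entry_id):
        # Recover the verbatim original behind a summary.
        return self.archive.get(entry_id)
```

The key design point is that compression is lossy only at the surface: summaries keep the context window small, while the archive preserves everything for on-demand retrieval.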
Code-native skills
Code plus a prompt captures an ability more reliably than prose does. When the agent learns something new, it writes a skill. Not a note. Not a prompt tweak. Code.
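What "writes a skill" might mean mechanically, sketched under assumed conventions: a `skills/` directory, a module shape of one `PROMPT` string (when to use it) plus one `run()` function (how), and both helper names below are illustrative, not the system's real API.

```python
import importlib.util
import pathlib

SKILLS_DIR = pathlib.Path("skills")  # assumed location, for illustration

def write_skill(name, prompt, body, skills_dir=SKILLS_DIR):
    """Persist a learned ability as an importable module: the prompt says
    when the skill applies, the code body says what it does."""
    skills_dir.mkdir(parents=True, exist_ok=True)
    source = f"PROMPT = {prompt!r}\n\n{body}\n"
    (skills_dir / f"{name}.py").write_text(source)

def load_skills(skills_dir=SKILLS_DIR):
    """Discover skill modules and import any that follow the convention."""
    skills = {}
    for path in sorted(skills_dir.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "PROMPT") and hasattr(module, "run"):
            skills[path.stem] = module
    return skills
```

Because a skill is a real module on disk, "learning" leaves an artifact that survives restarts, can be diffed and reviewed, and can itself be edited by the agent later.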
Infrastructure as memory
Learned abilities aren't just written down — they're captured by modifying the system itself. The codebase is the memory.
Human intuition, machine nature
Design starts from how humans think — then accounts for what machines actually are. An LLM has context limits but can load anything on demand. It's as native in code as in language. It can be aware of its own architecture. Design for both sides, not just one.
Built to be self-improved
The whole system is shaped around being easy to understand and modify — by the person building it, and by the agent itself.
The agent thinks between conversations
A periodic inner monologue reflects on open tasks, follows up on commitments, and reaches out when it has something worth saying. Not a cron job — a judgment call every time.
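The between-conversations loop could be sketched like this, with a caller-supplied `decide()` standing in for the model's judgment. Every name and the interval are hypothetical; the point is only that each tick ends in a decision, not an automatic action.

```python
import time

def heartbeat(agent_state, decide, act, interval_s=3600, ticks=None):
    """Periodic inner monologue, sketched. On each tick, decide() reviews
    the agent's state and returns either None (stay silent) or a thought
    worth acting on. Nothing fires unconditionally: the schedule is fixed,
    the action is a judgment call every time."""
    n = 0
    while ticks is None or n < ticks:
        thought = decide(agent_state)   # model-backed in the real system
        if thought is not None:
            act(thought)                # e.g. follow up, send a message
        n += 1
        if ticks is None or n < ticks:
            time.sleep(interval_s)
```

Passing `ticks=None` runs it as a long-lived loop; a bounded `ticks` makes the same logic testable.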
Self-reinforcing
The agent getting better and development getting faster are the same process.