Most productivity tools fail at the exact moment they’re needed most — when you’re deciding what to work on.
They handle tasks fine. They’ll track deadlines, send reminders, organize projects into boards and lists and Gantt charts. But ask them “should I take this project?” or “why does this feel off-track?” and you get silence. They’re execution engines with no understanding of purpose.
The Intent Stack is an attempt to solve this by introducing a different kind of structure — one that makes why as legible to machines as what already is.
What it actually is
The Intent Stack is a hierarchical framework for organizing human intention. Five layers, each inheriting context from the one above:
LIFETIME INTENT — Identity, principles, values
└─ 5-YEAR INTENT — Strategic direction
└─ ANNUAL INTENT — This year's focus
└─ OPERATIONAL INTENTS — Active goals (3-6 at a time)
└─ PROJECT-SPECIFIC INTENTS — Tactical execution
A project-level intent doesn’t need to restate your values or your five-year vision — it inherits automatically. Every action traces back to a foundational reason without requiring re-derivation each time.
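The inheritance mechanics above can be sketched in a few lines. This is a minimal illustration, not part of the framework itself; the layer names and intent statements are made up for the example:

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Intent:
    """One node in the stack. Fields are illustrative, not a spec."""
    layer: str                      # e.g. "lifetime", "annual", "project"
    statement: str                  # the intent itself
    parent: Intent | None = None    # the layer above
    children: list[Intent] = field(default_factory=list)

    def add(self, layer: str, statement: str) -> Intent:
        """Attach a child intent one layer down."""
        child = Intent(layer, statement, parent=self)
        self.children.append(child)
        return child

    def context(self) -> list[str]:
        """Inherited context: every statement from the root down to here,
        so a project-level intent never restates the layers above it."""
        chain = self.parent.context() if self.parent else []
        return chain + [f"{self.layer}: {self.statement}"]


# Hypothetical three-layer stack:
lifetime = Intent("lifetime", "Compound expertise while staying healthy")
annual = lifetime.add("annual", "Build a consulting practice")
project = annual.add("project", "Ship the client onboarding toolkit")

for line in project.context():
    print(line)
```

The point of the sketch is the `context()` walk: asking any node for its context yields the full chain of reasons above it, which is exactly the inheritance the stack relies on.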
The stack sits inside what I call a Personal Context Document (PCD) — a structured representation of who someone is, what they’re working toward, and how they operate. Where the PCD says “this is who I am,” the Intent Stack says “this is what I’m trying to do and why.”
Each operational intent has sub-intents that evolve. A fitness intent might include handstands, pull-ups, regular padel. A professional effectiveness intent might include decision-first communication with a rule: open every proposal with the recommendation. These get added over time as new insights surface from therapy, assessments, feedback, journal reflections.
The stack is a living document, not a static plan.
The gap it fills
Anyone working with AI daily knows this friction: You tell Claude to use GitHub and it suggests GitLab. You specify a testing framework once and have to re-specify it every conversation. You ask for direct and get diplomatic. You nudge, correct, re-state, nudge again.
Real tools are emerging to address this. Claude Code has claude.md files. Cursor has rules files. Custom skills encode workflows. These work — they reduce repetition and persist context across sessions.
But they’re implementation-level solutions. They tell AI how to behave, not why you’re doing what you’re doing. A claude.md can say “always use pytest” but it can’t say “I’m building toward a consulting practice that compounds expertise, and this testing discipline serves that.”
The tools handle preferences. The Intent Stack handles purpose. They’re complementary — the stack provides the conceptual frame, tools like claude.md provide the mechanism for acting on it.
Three uses in practice
AI context layer. The stack gets consumed by AI agents — journal generators, project planners, code assistants. They understand why you’re doing what you’re doing. When an AI generates a morning journal prompt, it references active operational intents. When it processes new content, it scores relevance against the stack. When it reviews work, it checks alignment with stated goals.
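That relevance scoring could take many forms; here is a deliberately naive keyword-overlap sketch (a real system would more likely use embeddings, and the intent strings are invented for illustration):

```python
def relevance(text: str, intents: list[str]) -> dict[str, float]:
    """Score a piece of content against each active intent by
    word overlap. Naive on purpose; illustrates the shape of the
    operation, not a production approach."""
    words = set(text.lower().split())
    scores = {}
    for intent in intents:
        intent_words = set(intent.lower().split())
        scores[intent] = len(words & intent_words) / max(len(intent_words), 1)
    return scores


# Hypothetical active operational intents:
active = ["improve fitness with regular training",
          "write decision-first proposals"]
print(relevance("notes from today's training session", active))
```

New content that scores zero against everything in the stack is itself a signal, which connects to the decision-filter use below.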
Self-authoring tool. The stack updates through reflection. Journal entries, personality assessments, therapy sessions, colleague feedback. New sub-intents get added when you discover patterns. A personality assessment revealing high novelty-seeking led to a new sub-intent: “Before abandoning a project, ask: Am I bored or is this actually not serving my goals?” The stack becomes a structured record of self-knowledge in action.
Decision filter. When faced with a choice — new project, new commitment, new direction — the stack provides an alignment check. Does this serve an active intent? Which layer? If it doesn’t connect to anything in the stack, it’s either a distraction or a signal that the stack needs updating.
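The decision filter can be sketched as a top-down walk of the stack. Everything here is illustrative, including the layer names, the intent strings, and the crude word-matching:

```python
def alignment_check(choice: str, stack: dict[str, list[str]]) -> str:
    """Walk the stack from the highest layer down and report the first
    intent the choice connects to. Word matching stands in for whatever
    a real system would use."""
    for layer, intents in stack.items():  # insertion order = top-down
        for intent in intents:
            if any(word in intent.lower() for word in choice.lower().split()):
                return f"serves '{intent}' at the {layer} layer"
    return "no match: distraction, or the stack needs updating"


# Hypothetical stack, ordered from higher to lower layers:
stack = {
    "annual": ["grow the consulting practice"],
    "operational": ["publish one essay per month"],
}
print(alignment_check("consulting workshop", stack))
print(alignment_check("crypto trading", stack))
```

The fall-through return captures the essay's point: a non-match is not automatically a "no," it is a prompt to either decline the choice or revise the stack.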
The design space that opens up
Most personal productivity tools operate at the task level. Some operate at the project level. Very few operate at the intent level — the reason behind the project, the goal behind the goal.
This matters because AI systems can increasingly execute tasks and manage projects autonomously. What they lack is intent — understanding why something matters and how it connects to everything else the person cares about.
Making intents first-class objects changes what’s possible:
Intents are hierarchical. “Be healthy” contains “exercise regularly” contains “do pull-ups on Tuesdays.” Current systems flatten this into a task list and lose the structure.
Intents inherit context. A project-level intent inherits values, strategic direction, annual focus without restating them. This matches how humans actually think — we don’t re-derive our life philosophy before every action.
Intents can be implicit. Some get detected from behavior rather than declared. If your AI observes you’ve been building infrastructure systems for three months, it can surface “Compound My Capabilities” as an implicit intent and ask if you want to make it explicit. Different interaction pattern from “add a task.”
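The detect-and-ratify pattern might look like the following sketch, with a made-up activity log and a frequency threshold standing in for real behavioral analysis:

```python
from collections import Counter


def detect_implicit_intents(activity_log: list[str],
                            threshold: int = 3) -> list[str]:
    """Surface recurring activity themes as candidate intents for the
    person to ratify, rather than silently adding them. The theme
    labels and threshold are illustrative."""
    counts = Counter(activity_log)
    return [f"Candidate intent: '{theme}' appeared {n} times; "
            f"make it explicit?"
            for theme, n in counts.items() if n >= threshold]


# Hypothetical log of weekly activity themes:
log = ["infrastructure", "infrastructure", "writing", "infrastructure"]
for proposal in detect_implicit_intents(log):
    print(proposal)
```

The key design choice is that the function returns questions, not stack mutations: the system proposes, the person ratifies.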
Intents are temporal. Different time horizons, different activation patterns, different priorities that shift based on context. A task is either done or not done. An intent is a direction you’re traveling in.
Intents compose. “Bridge Ideas to Execution” and “Show Up Integrated” are separate intents that interact — self-awareness supports professional effectiveness. The stack makes these connections visible and workable.
There’s precedent for this kind of hierarchical structure: Asimov’s Three Laws of Robotics. The First Law overrides the Second, which overrides the Third. Each layer constrains the ones below it. The Intent Stack works similarly — a project-level intent can’t override core values. Except instead of three rigid rules imposed from outside, it’s a living hierarchy authored by the person themselves.
What this means for product design
The Intent Stack suggests a layer missing from current AI-powered tools. Today we give AI systems context through system prompts, conversation history, explicit instructions. The PCD and Intent Stack propose a persistent, structured, user-owned context layer that AI systems can read from and write to over time.
The design space opens around questions like:
- Intent declaration: How do people express what they want at different abstraction levels?
- Intent detection: How can AI infer intents from behavior and propose them for ratification?
- Intent alignment: How do you check whether a specific action serves a higher-level intent?
- Intent evolution: How do intents change over time, and how should the system handle that?
- Intent conflict: What happens when two intents compete for the same resources?
The person who figures out the right interface for this — intent-aware AI that’s genuinely useful without being creepy or prescriptive — will have built something that matters.
From doing to being
Here’s what I didn’t expect. The Intent Stack looks like a productivity framework, but it’s actually a reconnection tool.
It forces the question most people avoid: who am I being today?
The stack’s top layer is identity, values, principles. When you trace any action back up the stack, you eventually hit: what do I actually care about? And that’s where the discomfort lives — because most people discover a gap between stated values and revealed time allocation.
The Intent Stack makes you more coherent. It asks you to be something before it asks you to do anything.
AI keeps getting better at doing. The human contribution is knowing what matters. Figuring that out is the work worth doing.
The Intent Stack is a concept developed through daily use since late 2025 as part of a Personal Knowledge Management system. It draws on Jeff Hawkins’ hierarchical intelligence model (“A Thousand Brains”), Internal Family Systems therapy, and practical experience with AI-augmented journaling and project management.