Decision Making Frameworks
The cost of a decision is not just the outcome of what you choose — it is the cost of the time spent choosing, the organizational energy consumed, and the opportunity cost of everything that waited while you decided.
The Decision Taxonomy — What Kind of Decision Is This?
Before reaching for a framework, classify the decision. The classification determines how much process you need.
Bezos: One-Way Door vs. Two-Way Door
Jeff Bezos (2015 Amazon shareholder letter) introduced the most practically useful decision classification for engineering leaders:
| Type | Characteristics | Process | Speed |
|---|---|---|---|
| Type 1 (One-way door) | Irreversible or very costly to reverse. High consequences. | Deliberate. Seek input broadly. Sleep on it. Document reasoning. | Slow is acceptable. |
| Type 2 (Two-way door) | Reversible. Low cost of being wrong. Recoverable. | Lightweight. Decide quickly. Adjust based on data. | Fast. Bias toward action. |
The critical insight: Most decisions are Type 2, but organizations treat them as Type 1. The result is decision paralysis at scale. When everything requires three meetings and a DACI document, you have turned two-way doors into one-way doors through process overhead.
Examples in engineering:
| Decision | Type | Why |
|---|---|---|
| Choosing your primary programming language for a new platform | Type 1 | Migration cost is enormous, hiring pipeline depends on it, ecosystem lock-in |
| Choosing a state management library for a frontend app | Type 2 | Can be swapped with moderate effort, contained blast radius |
| Signing a 3-year enterprise contract with a cloud provider | Type 1 | Financial commitment, migration cost, team skill investment |
| Picking a logging format (JSON vs. structured text) | Type 2 | Can be changed with a config update across services |
| Organizational restructuring (splitting/merging teams) | Type 1 | People disruption, relationship damage, 6+ months to stabilize |
| Deciding sprint scope for next two weeks | Type 2 | Can be adjusted mid-sprint if priorities change |
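The classification above can be sketched as a small triage function. This is a rough heuristic to make the idea concrete, not anything from the shareholder letter: the `reversal_cost_weeks` and `blast_radius` fields and their thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    reversal_cost_weeks: float  # estimated effort to undo the decision
    blast_radius: str           # "team", "org", or "company" (illustrative scale)

def decision_type(d: Decision) -> str:
    """Classify a decision as Type 1 (one-way door) or Type 2 (two-way door).

    Illustrative heuristic: anything that is expensive to undo or
    company-wide gets the deliberate path; everything else gets the
    lightweight path.
    """
    if d.reversal_cost_weeks > 12 or d.blast_radius == "company":
        return "Type 1: deliberate, seek input broadly, document reasoning"
    return "Type 2: decide quickly, adjust based on data"

print(decision_type(Decision("state management library", 4, "team")))
print(decision_type(Decision("primary platform language", 52, "company")))
```

The point of encoding it at all is the forcing function: you cannot call `decision_type` without first estimating reversal cost and blast radius, which is exactly the conversation most teams skip.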
The Speed-Quality Tradeoff
Amazon’s leadership principle “Bias for Action” is often misquoted as “move fast.” The actual principle is: most decisions should be made with about 70% of the information you wish you had. If you wait for 90%, you are too slow. If you act on 50%, you are reckless.
The formula: Decision quality = (information quality) x (speed) x (reversibility consideration)
For Type 2 decisions: optimize for speed. For Type 1 decisions: optimize for information quality.
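One way to read the 70% rule as an operational check (the numeric bounds are taken from the text — 50% is reckless, 90% is too slow, ~70% is the target — but mapping the 50–70% band to reversibility is my illustrative extension):

```python
def decision_readiness(info_fraction: float, two_way_door: bool) -> str:
    """Map the 70% heuristic onto an action.

    info_fraction: your estimate of how much of the information you
    wish you had is actually in hand (0.0-1.0).
    """
    if info_fraction <= 0.5:
        return "too little information: keep gathering"
    if info_fraction >= 0.7:
        return "decide now"
    # 50-70%: let reversibility break the tie
    return "decide now" if two_way_door else "gather a bit more, set a deadline"

print(decision_readiness(0.6, two_way_door=True))   # a Type 2 call at 60%
print(decision_readiness(0.6, two_way_door=False))  # a Type 1 call at 60%
```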
DACI / RACI — Decision Ownership Models
These frameworks answer the question: Who decides?
RACI
| Role | Definition | Trap |
|---|---|---|
| Responsible | Does the work | Too many Rs = diffusion of responsibility |
| Accountable | Owns the outcome, has final authority | Must be exactly one person |
| Consulted | Provides input before the decision | Too many Cs = slow decisions |
| Informed | Told after the decision | Under-informing creates political problems |
DACI (Preferred for Engineering Decisions)
DACI is a better fit for engineering because it explicitly separates the driver from the approver, which matters in matrix organizations.
| Role | Definition | Key Behavior |
|---|---|---|
| Driver | Drives the process, gathers input, makes a recommendation | Not the decision-maker. Runs the process. |
| Approver | Makes the final call. One person. | Must actually decide, not defer to consensus. |
| Contributors | Provide input, expertise, perspective | Their input matters, but they do not have a veto. |
| Informed | Stakeholders who need to know the outcome | Told after, with reasoning. |
When DACI Fails
- The approver who will not approve. They ask for more data, more input, another meeting. This is usually a sign they are afraid of being wrong. Fix: set a decision deadline. “We will decide by Thursday. If no decision is made, the Driver’s recommendation stands.”
- Everyone is a Contributor. When 15 people are consulted on every decision, the process collapses under its own weight. Fix: limit Contributors to people with unique, material input. “Would this decision be meaningfully different without this person’s input?” If no, they are Informed, not Contributor.
- DACI without follow-through. The decision is made but never communicated to the Informed group, or Contributors who disagreed keep relitigating. Fix: document the decision with reasoning, share it broadly, and hold the line. “We decided X because [reasons]. We considered Y and Z but chose not to because [reasons]. This is final unless new information emerges.”
- Confusing Driver with Approver. The EM writes the RFC, gathers input, and then asks their VP to approve. The VP defers back: “You are closer to this — you decide.” Neither takes ownership. Fix: assign roles explicitly before the process begins.
Cynefin Framework — Matching the Decision to the Domain
Dave Snowden’s Cynefin (2007) is the most useful meta-framework for deciding how to decide. It categorizes situations by the relationship between cause and effect.
The Five Domains
| Domain | Cause-Effect | Approach | Engineering Example |
|---|---|---|---|
| Clear (formerly Simple/Obvious) | Obvious. Everyone can see it. | Sense-Categorize-Respond. Apply best practice. | “The build is failing because a test is broken.” Fix the test. |
| Complicated | Discoverable, but requires expertise. | Sense-Analyze-Respond. Apply good practice. Expert analysis needed. | “Our P95 latency spiked. Need to analyze traces, DB queries, and cache hit rates to find root cause.” |
| Complex | Cause-effect only visible in retrospect. | Probe-Sense-Respond. Run safe-to-fail experiments. | “Will this AI agent architecture scale to 50 use cases?” Run 3 pilot agents, learn, adapt. |
| Chaotic | No perceivable cause-effect. | Act-Sense-Respond. Stabilize first, then figure out what happened. | Production is down, customers are affected, root cause unknown. Stop the bleeding first. |
| Confusion | Do not know which domain you are in. | Break the situation into parts and categorize each. | “Our team velocity dropped 40%.” Could be clear (bad tooling), complicated (architecture issues), complex (team dynamics), or chaotic (organizational turmoil). |
Why Cynefin Matters for Engineering Leaders
The most common leadership failure is applying Complicated-domain thinking to Complex-domain problems. When an engineering leader says “let me analyze this thoroughly, create a comprehensive plan, and then execute” for a problem like “how should we adopt AI across the organization?” — they are treating a complex problem as complicated.
Complex problems require probing, not planning. You cannot analyze your way to the right AI strategy. You need to run small experiments, learn from them, and let the strategy emerge from what works.
Conversely, treating Complicated problems as Complex wastes time. If your deployment pipeline is slow, you do not need to run experiments — you need an expert to profile it and fix the bottlenecks. Analysis, not experimentation.
The Disorder Trap
The center of Cynefin is “Confusion” — you do not know which domain you are in. The danger: people in confusion default to the approach they are most comfortable with. Engineers default to Complicated (analyze more). Managers default to Clear (apply a process). Neither works if the actual domain is Complex or Chaotic.
The fix: When confused, ask: “Can we predict the outcome of an intervention?” If yes with expertise, it is Complicated. If genuinely unpredictable, it is Complex. If everything is on fire, it is Chaotic.
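That triage question can be sketched as a small routing function. This is a deliberate simplification of Snowden’s model — the three boolean inputs and their precedence order are my assumptions, chosen to mirror the fix above:

```python
def cynefin_domain(predictable_with_expertise: bool,
                   obvious_to_everyone: bool,
                   actively_on_fire: bool) -> str:
    """Route a situation to a Cynefin domain and its response pattern.

    Simplified triage: stabilize chaos first; then ask whether the
    outcome of an intervention is predictable, and by whom.
    """
    if actively_on_fire:
        return "Chaotic: act-sense-respond (stabilize first)"
    if obvious_to_everyone:
        return "Clear: sense-categorize-respond (apply best practice)"
    if predictable_with_expertise:
        return "Complicated: sense-analyze-respond (bring in experts)"
    return "Complex: probe-sense-respond (run safe-to-fail experiments)"

# "Will this AI agent architecture scale?" -- nobody can predict it
print(cynefin_domain(predictable_with_expertise=False,
                     obvious_to_everyone=False,
                     actively_on_fire=False))
```

Note that “Complex” is the default branch: if you cannot honestly answer yes to any of the three questions, you should be probing, not planning.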
Decision Journals — Learning from Past Decisions
A decision journal is the most underused leadership tool. The concept is simple: before making a significant decision, write down:
- The decision — What are you deciding?
- The context — What do you know right now? What are the constraints?
- The options — What alternatives did you consider?
- Your reasoning — Why are you choosing this option over others?
- What you expect to happen — Specific predictions with timelines.
- What would change your mind — Pre-commit to what evidence would make you reverse course.
Then revisit the entry 3-6 months later.
Why Decision Journals Work
They combat outcome bias — judging decisions by their results rather than the quality of reasoning at the time. A good decision with bad luck looks like a bad decision in hindsight. A bad decision with good luck looks brilliant. The journal lets you evaluate your reasoning process independently of outcomes.
They also combat hindsight bias — the tendency to believe you “knew it all along.” By writing down what you expected before the outcome, you get honest feedback on your predictive accuracy.
Practical Format for Engineering Leaders
```markdown
## Decision: [Title]
**Date:** YYYY-MM-DD
**Type:** Type 1 / Type 2
**Domain:** Clear / Complicated / Complex / Chaotic

### Context
- What is happening that requires this decision?
- What constraints exist?
- What do I know? What am I uncertain about?

### Options Considered
1. Option A: [description] — Pros: ... Cons: ...
2. Option B: [description] — Pros: ... Cons: ...
3. Option C (status quo): [description] — Pros: ... Cons: ...

### Decision
Chose Option [X] because [reasoning].

### Predictions
- I expect [specific outcome] by [date].
- Risk: [what could go wrong] — probability: [high/medium/low]
- If [trigger], I will reconsider and potentially switch to Option [Y].

### Retrospective (fill in later)
**Date:** YYYY-MM-DD
- What actually happened:
- Was my reasoning sound?
- What did I miss?
- What would I do differently?
```
Consensus vs. Consent — Two Very Different Things
Most engineering organizations default to consensus decision-making without realizing it. Understanding the distinction matters enormously.
Consensus
Definition: Everyone agrees with the decision. All objections are resolved. The group aligns.
When it works: Small teams (3-5 people), decisions that require full buy-in to execute, values-level decisions.
When it fails: Larger groups, time pressure, any situation where one person can veto. Consensus in a group of 12 is either impossibly slow or produces watered-down decisions that no one is excited about.
The consensus trap in engineering: “We need everyone aligned on the architecture.” No, you do not. You need the architect and the EM to make a sound decision, Contributors to provide input, and the team to commit to executing even if they would have chosen differently.
Consent
Definition: No one has a paramount objection. People may not prefer the decision, but they can live with it and commit to executing it.
The key question: Not “do you agree?” but “can you live with this? Do you see a risk that would make this harmful?”
Why consent is superior for most engineering decisions:
- It is faster — you are looking for objections, not agreement.
- It respects dissent — people can say “I would choose differently, but I do not see a reason this will fail.”
- It prevents tyranny of the minority — one person cannot block a decision by simply not liking it.
- It produces better commitment — people commit to what they can live with, even if it was not their first choice.
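The consent question reduces to a check you can run around a table (or in a script). A minimal sketch — the response labels are my own, not from any formal consent protocol:

```python
def consent_reached(responses: dict[str, str]) -> bool:
    """Consent: proceed unless someone raises a paramount objection.

    responses maps each participant to one of:
      "agree", "can live with it", or "paramount objection: <reason>".
    Contrast with consensus, which would require everyone to answer "agree".
    """
    return not any(r.startswith("paramount objection")
                   for r in responses.values())

votes = {"alice": "agree",
         "bob": "can live with it",
         "carol": "can live with it"}
print(consent_reached(votes))  # True: no paramount objections, so proceed
```

Note what the function does not do: it never counts how many people said “agree.” Preference is irrelevant; only a reasoned objection that the decision would cause harm can block it.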
Amazon’s “Disagree and Commit”
Amazon formalized this as a leadership principle. The mechanics: after a decision is made, everyone commits to executing it fully, even if they disagreed during the discussion. The person who disagreed does not get to say “I told you so” if it fails. They committed.
The prerequisite: This only works if the dissenter genuinely felt heard during the discussion. If people feel railroaded, “disagree and commit” becomes “shut up and comply,” which breeds resentment and passive resistance.
Group Decision-Making Dysfunctions
Groupthink (Janis, 1972)
The group converges on a decision because social pressure overrides individual critical thinking. Symptoms:
- No one challenges the leader’s preferred option
- Dissenters self-censor
- The group rationalizes warnings away
- Illusion of invulnerability and unanimity
Engineering version: The Staff Engineer proposes an architecture. Everyone nods. No one asks “what happens if this does not scale?” because the Staff Engineer is the most senior person in the room.
Countermeasures:
- Assign a deliberate devil’s advocate (rotate the role)
- Have people write down their position before discussion (prevents anchoring)
- The leader speaks last, not first
- Conduct a pre-mortem: “It is 6 months from now and this decision failed. Why?”
HiPPO (Highest Paid Person’s Opinion)
Decisions default to whatever the most senior person in the room thinks, regardless of evidence. The antidote is data and structured frameworks. “What does the data say?” is more powerful than “what does the VP think?”
Analysis Paralysis
The team cannot decide because they keep seeking more information. This is usually a symptom of either unclear ownership (no one wants to be the Approver) or fear of being wrong.
Fix: Set a decision deadline. Make the decision reversibility explicit. “This is a Type 2 decision. We will decide by Friday with 70% confidence. If we are wrong, we can reverse course in two weeks.”
Pre-Mortems and Decision Quality
Gary Klein’s pre-mortem technique (2007) is the single most effective tool for improving decision quality.
The process:
- The team has tentatively decided on a course of action.
- Before finalizing, the leader says: “Imagine it is [6 months from now]. We went ahead with this decision and it was a disaster. What went wrong?”
- Each person independently writes down failure scenarios for 2-3 minutes.
- Share and discuss.
Why it works: It gives people social permission to voice concerns. In a normal discussion, saying “I think this will fail” feels negative and confrontational. In a pre-mortem, imagining failure is the explicit task — it is safe.
When to use it: Type 1 decisions, high-stakes technical bets, org restructuring, major vendor selections. Do not use it for Type 2 decisions — the overhead is not justified.
Anti-Patterns
| Anti-Pattern | Description | Fix |
|---|---|---|
| Decision by committee | No single owner; the group endlessly discusses | Assign one Approver with a deadline |
| Decision by avoidance | Nobody makes the call; the default wins by inaction | Name the default explicitly: “If we do not decide by Friday, we are choosing the status quo. Is that acceptable?” |
| Decision by loudest voice | The most aggressive person wins, not the best argument | Structured process: written input first, then discussion, then decision by Approver |
| Relitigating decided issues | Decisions get reopened without new information | Document decisions with reasoning. “This was decided on [date] based on [reasoning]. To reopen, bring new information that changes the analysis.” |
| Over-indexing on reversibility | “We can always change it later” used to avoid rigor on genuinely high-stakes decisions | Not all reversible decisions are cheap to reverse. A “reversible” database migration that takes 3 months is effectively irreversible. |
| Pseudo-data-driven | Using data to justify a predetermined conclusion rather than to inform | Present data before stating a recommendation. Let the data speak first. |
References
- Bezos, J. (2015). Amazon shareholder letter. — One-way door vs. two-way door decision framework.
- Snowden, D. & Boone, M. (2007). “A Leader’s Framework for Decision Making.” Harvard Business Review. — Cynefin framework.
- Janis, I. (1972). Victims of Groupthink. Houghton Mifflin. — Groupthink dynamics and countermeasures.
- Klein, G. (2007). “Performing a Project Pre-Mortem.” Harvard Business Review. — Pre-mortem technique.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. — Cognitive biases in decision-making, System 1 vs System 2.
- Grove, A.S. (1983). High Output Management. Random House. — Decision-making as a manager’s core output.
- Larson, W. (2019). An Elegant Puzzle. Stripe Press. — Engineering decision-making in practice; “good process is lightweight process.”
- Lafley, A.G. & Martin, R. (2013). Playing to Win. Harvard Business Review Press. — Strategy as a cascade of choices.
- Dalio, R. (2017). Principles. Simon & Schuster. — Believability-weighted decision-making.
- Duke, A. (2018). Thinking in Bets. Portfolio. — Separating decision quality from outcome quality.