Technical Strategy & Roadmapping

A technical strategy is a set of bets about which technical investments will create the most business value over time. Without business alignment, it is a hobby project list. Without technical depth, it is a slide deck that ships nothing.

What a Technical Strategy Actually Is

A technical strategy is not a technology list or a migration plan. It answers three questions:

  1. Where are we? — Current state of architecture, infrastructure, team capabilities, and debt
  2. Where do we need to be? — Target state driven by business goals (not technology trends)
  3. How do we get there? — Sequenced investments with clear milestones and decision points

The Strategy Stack

| Layer | Owns | Horizon | Example |
|-------|------|---------|---------|
| Business strategy | CEO / GM | 3-5 years | “Become the leading AI-powered electronics retailer in Europe” |
| Product strategy | CPO / PM | 1-2 years | “Launch AI-assisted product recommendations in 3 markets” |
| Technical strategy | CTO / VP Eng | 1-2 years | “Build a platform for AI agent deployment with centralized governance” |
| Engineering roadmap | EM / Tech Lead | 1-4 quarters | “Q2: Agent orchestration layer. Q3: Guardrails framework. Q4: First consumer-facing agent.” |
| Sprint plan | Team | 2 weeks | “This sprint: implement agent routing with fallback logic” |

The alignment test: Every item on the engineering roadmap should trace upward to the technical strategy, which traces to the product strategy, which traces to the business strategy. If you cannot draw that line, the item is either wrong or the strategy is incomplete.
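The alignment test can be sketched as a simple traceability check. All item names and the link structure below are illustrative, not a real planning schema:

```python
# Each item points at the strategy item it supports (None = top of the stack).
# Names are hypothetical examples for the alignment test described above.
strategy_links = {
    "Business: lead AI-powered electronics retail in Europe": None,
    "Product: AI-assisted recommendations in 3 markets":
        "Business: lead AI-powered electronics retail in Europe",
    "Tech: AI agent platform with centralized governance":
        "Product: AI-assisted recommendations in 3 markets",
    "Roadmap: Q2 agent orchestration layer":
        "Tech: AI agent platform with centralized governance",
    "Roadmap: rewrite build scripts in Rust": "???",  # no real parent
}

def traces_to_business_strategy(item: str) -> bool:
    """Follow parent links upward; True iff we reach a top-level strategy."""
    seen = set()
    while item is not None:
        if item not in strategy_links or item in seen:
            return False  # broken or circular link: the item is unaligned
        seen.add(item)
        item = strategy_links[item]
    return True

unaligned = [i for i in strategy_links if not traces_to_business_strategy(i)]
```

Any item that lands in `unaligned` is either wrong or exposes a gap in the strategy.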


Tech Radar

What It Is

A tech radar (popularized by ThoughtWorks) classifies technologies into four rings based on your organization’s confidence in them:

| Ring | Meaning | Action |
|------|---------|--------|
| Adopt | Proven in production, standard choice for new projects | Use by default |
| Trial | Promising, used in non-critical contexts, gathering evidence | Use in new projects with appropriate risk tolerance |
| Assess | Interesting, worth investigating, not yet used | Spike or POC only; do not build production systems on it |
| Hold | Do not start new work with this; migrate away over time | Existing usage is fine, but no new adoption |

Building a Tech Radar for a 16-Person Org

At this scale, a tech radar prevents two failure modes:

  1. Technology sprawl: Every engineer picks their favorite framework, creating unmaintainable diversity
  2. Stagnation: Nobody adopts new tools because “we always use X”

Process:

  1. Principal engineer or tech leads draft the radar based on current stack and known needs
  2. Team reviews and challenges in a 90-minute session (quarterly)
  3. Published as a living document (Obsidian page, Confluence, or a simple spreadsheet)
  4. Updated quarterly — technologies move between rings based on experience

Example entries for an MMS AI platform team:

| Quadrant | Technology | Ring | Rationale |
|----------|------------|------|-----------|
| Languages | TypeScript | Adopt | Standard for frontend and backend services |
| Languages | Python | Adopt | Required for AI/ML workloads |
| Frameworks | LangChain | Trial | Used in agent POCs, evaluating stability |
| Frameworks | LangGraph | Assess | More structured than LangChain for agent workflows |
| Infrastructure | GCP Vertex AI | Adopt | Company standard for AI platform |
| Infrastructure | Terraform | Adopt | IaC standard |
| Databases | AlloyDB | Trial | GCP-managed PostgreSQL, evaluating for new services |
| Practices | ADRs | Adopt | Decision documentation standard |
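A radar like the one above is small enough to keep as plain data. The sketch below is one hypothetical way to represent it and to flag ring moves that skip a ring (e.g. Assess straight to Adopt) during the quarterly review; the one-ring-at-a-time policy is an assumption, not part of the ThoughtWorks format:

```python
# Hypothetical tech radar as data; entries mirror the example table above.
RINGS = ["adopt", "trial", "assess", "hold"]

radar = {
    ("languages", "TypeScript"): "adopt",
    ("languages", "Python"): "adopt",
    ("frameworks", "LangChain"): "trial",
    ("frameworks", "LangGraph"): "assess",
    ("infrastructure", "GCP Vertex AI"): "adopt",
    ("infrastructure", "Terraform"): "adopt",
    ("databases", "AlloyDB"): "trial",
    ("practices", "ADRs"): "adopt",
}

def move(tech_key, new_ring):
    """Quarterly review: move an entry, returning False when the move jumps
    more than one ring so it gets explicit discussion before adoption."""
    old_ring = radar[tech_key]
    jump = abs(RINGS.index(new_ring) - RINGS.index(old_ring))
    radar[tech_key] = new_ring
    return jump <= 1  # False = needs extra justification in the review

ok = move(("frameworks", "LangChain"), "adopt")    # trial -> adopt: routine
flagged = move(("frameworks", "LangGraph"), "adopt")  # assess -> adopt: flag
```

Publishing this as a checked-in file keeps the radar a living document rather than a one-off slide.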

Architecture Decision Records (ADRs)

Why ADRs Matter

Architecture decisions are the most expensive to reverse and the most likely to be forgotten. ADRs capture the why behind decisions — not just what was decided, but what alternatives were considered and why they were rejected.

Without ADRs: In 6 months, a new engineer asks “why do we use Kafka instead of Pub/Sub?” and nobody remembers. The decision gets relitigated, wasting everyone’s time.

ADR Template

```markdown
# ADR-NNN: [Decision Title]

**Status:** Proposed | Accepted | Deprecated | Superseded by ADR-NNN
**Date:** YYYY-MM-DD
**Decision Makers:** [Names]

## Context

What is the situation? What problem are we solving? What constraints exist?

## Decision

What did we decide? State it clearly in one sentence.

## Alternatives Considered

| Alternative | Pros | Cons | Why Rejected |
|-------------|------|------|-------------|
| Option A | ... | ... | ... |
| Option B | ... | ... | ... |

## Consequences

What becomes easier? What becomes harder? What new constraints does this create?

## Review Date

When should we revisit this decision? (Usually 6-12 months)
```

ADR Practices

  • Store ADRs near the code they affect — in the repo, not in a wiki nobody reads
  • Number them sequentially — ADR-001, ADR-002. Makes it easy to reference
  • Never delete an ADR — mark it Deprecated or Superseded. The history is the value
  • Require ADRs for irreversible decisions — database choice, API design, authentication mechanism. Do not require them for reversible decisions (library choice, naming conventions)
  • Lightweight governance: Any engineer can propose an ADR. Senior engineer or principal reviews. EM ensures it exists for significant decisions.
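The “number them sequentially” and “store them in the repo” practices are easy to automate. This is a hypothetical helper, assuming a `docs/adr/` directory and an `adr-NNN-slug.md` filename pattern (both assumptions, not a standard):

```python
# Hypothetical ADR scaffolding helper: find the next sequential number in
# an ADR directory and create a stub. Directory layout and filename
# pattern (adr-NNN-slug.md) are assumptions for illustration.
import re
from pathlib import Path

def next_adr_number(adr_dir: Path) -> int:
    """Scan existing adr-*.md files and return the next free number."""
    numbers = [
        int(m.group(1))
        for p in adr_dir.glob("adr-*.md")
        if (m := re.match(r"adr-(\d+)", p.name.lower()))
    ]
    return max(numbers, default=0) + 1

def create_adr(adr_dir: Path, title: str) -> Path:
    """Create a numbered ADR stub with a Proposed status."""
    adr_dir.mkdir(parents=True, exist_ok=True)
    n = next_adr_number(adr_dir)
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    path = adr_dir / f"adr-{n:03d}-{slug}.md"
    path.write_text(f"# ADR-{n:03d}: {title}\n\n**Status:** Proposed\n")
    return path
```

Keeping creation this cheap lowers the barrier enough that engineers actually write them.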

When to Write an ADR

Write one when:

  • The decision affects more than one team
  • The decision is hard or expensive to reverse
  • The team disagrees and needs to commit to a path
  • You find yourself explaining the same decision to multiple people

Do not write one for:

  • Choosing between two equivalent libraries
  • Standard patterns already covered by the tech radar
  • Decisions that can be easily reversed with a PR

Tech Debt Management

The Four Quadrants of Tech Debt

Martin Fowler’s tech debt quadrant (based on the debt metaphor from Ward Cunningham):

| | Prudent | Reckless |
|---|---------|----------|
| Deliberate | “We know this is a shortcut but we need to ship. We will clean it up in Q3.” | “We don’t have time for design” (and never will) |
| Inadvertent | “Now we know how we should have built it” (learning-driven debt) | “What’s layering?” (ignorance-driven debt) |

Only deliberate-prudent debt is acceptable. It requires:

  1. An explicit decision to take on debt (documented, ideally in an ADR)
  2. A repayment plan with a timeline
  3. Understanding of the interest rate (how much does this debt slow us down?)

Quantifying Tech Debt

Abstract “tech debt” is hard to prioritize. Make it concrete:

| Debt Item | Interest Rate | Repayment Cost | Priority |
|-----------|---------------|----------------|----------|
| Flaky test suite (40% of builds fail, need manual retry) | 2 hours/developer/week wasted | 3 engineer-weeks to fix | High — compounds across entire team |
| Legacy auth service (works but unmaintainable) | 1 incident/quarter, 4 hours to debug | 6 engineer-weeks to replace | Medium — known risk but manageable |
| No API versioning (breaking changes require coordinated deploys) | Blocks independent deployment | 2 engineer-weeks to add versioning | High — removes deployment coupling |

Interest rate = ongoing cost of living with the debt (developer time, incidents, slower delivery). Repayment cost = effort to fix the debt. Priority = high interest rate relative to repayment cost.
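The stack-ranking logic reduces to a ratio. The sketch below converts the table’s figures to rough engineer-hours per quarter; the conversions (16 developers, 13-week quarters, 40-hour engineer-weeks, an assumed 80-hour coordination cost for the versioning item) are illustrative assumptions:

```python
# Sketch of debt prioritization: rank by interest rate relative to
# repayment cost. Figures approximate the table above in engineer-hours;
# the unit conversions are assumptions for illustration.
debt_items = [
    # (name, interest in eng-hours/quarter, repayment in eng-hours)
    ("Flaky test suite", 2 * 16 * 13, 3 * 40),  # 2 h/dev/week, 16 devs, 13 wks
    ("Legacy auth service", 4, 6 * 40),          # 1 incident/quarter, 4 h debug
    ("No API versioning", 80, 2 * 40),           # assumed coordination cost
]

def ranked(items):
    """Highest interest-to-repayment ratio first."""
    return sorted(items, key=lambda i: i[1] / i[2], reverse=True)

order = [name for name, interest, repayment in ranked(debt_items)]
```

Even crude numbers like these usually separate the “High” items from the “Medium” ones more defensibly than gut feel.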

The 20% Rule and Why It Fails

“Spend 20% of each sprint on tech debt” sounds reasonable but often fails because:

  • Tech debt work gets deprioritized when feature pressure rises (which is always)
  • 20% is not enough for large systemic issues
  • It fragments effort — lots of small cleanups, no strategic improvement

Better approaches:

  1. Tech debt sprints: Every 4th sprint is entirely tech debt. Gives enough focus for meaningful work.
  2. Embedded in features: “We will build Feature X, and while we are in that area, we will refactor the surrounding code.” Debt gets paid as part of value delivery.
  3. Dedicated capacity: One engineer permanently assigned to infrastructure/tooling improvement. At 16 engineers, that is about 6% of capacity — high leverage if staffed with the right person.
  4. Quarterly tech debt review: Stack-rank all debt items by interest rate. Fund the top 3. Track completion.

Innovation Time

Models

| Model | How It Works | Who Does It | Success Rate |
|-------|--------------|-------------|--------------|
| Google 20% time | Engineers spend 20% of time on self-directed projects | Individual | Low in practice — most engineers cannot afford 20% given OKR pressure |
| Atlassian ShipIt Days | 24-hour hackathons, quarterly | Teams | High for engagement and prototyping; low for production impact |
| Shape Up cooldown | 2 weeks between 6-week cycles for self-directed work | Teams | Moderate — structured enough to produce results |
| Dedicated innovation sprint | One sprint per quarter fully for experimentation | Teams | High — enough time for meaningful exploration |
| Tiger teams | Temporary cross-functional team for a specific innovation goal | Selected engineers | High — focused mandate with clear goal |

Recommendation for a 16-person org: Quarterly innovation sprints or Shape Up cooldowns. The key is making it real: no feature work during innovation time, demos at the end, and a path to production for promising results.


Aligning Tech and Product Roadmaps

The Tension

Product wants features. Engineering wants infrastructure. Both are right. The failure mode is treating them as competing priorities rather than complementary investments.

The Dual-Track Roadmap

Run two tracks on the same timeline:

| Quarter | Product Track | Engineering Track | Dependency |
|---------|---------------|-------------------|------------|
| Q2 | AI-assisted product search | Agent orchestration layer, LLM gateway | Engineering track enables product track |
| Q3 | Personalized recommendations | Guardrails framework, A/B testing for AI | Same — infrastructure unlocks features |
| Q4 | Multi-channel AI assistant | Observability and eval pipeline | Same |

The key insight: Engineering track items should be selected because they enable product track items, not because engineers find them interesting. The pitch to leadership: “We need to build X so that we can ship Y. Without X, Y takes 3x longer and is fragile.”

Roadmap Communication

| Audience | Format | Frequency | Content |
|----------|--------|-----------|---------|
| Engineers | Detailed roadmap with technical milestones | Continuously updated | What, how, dependencies |
| Product/PM | Quarterly roadmap with feature-enabling view | Quarterly planning | What it unlocks, timeline |
| Leadership | 3-slide summary: current state, target state, next quarter | Quarterly or monthly | Business outcomes, risks, asks |
| Stakeholders | Release timeline with dates | As needed | When, not how |

Anti-Patterns

| Anti-Pattern | Symptom | Fix |
|--------------|---------|-----|
| Resume-driven development | Adopting Kubernetes for a 3-service app, or GraphQL for internal APIs | Tech radar process with explicit justification for new technologies |
| Strategy without execution | Beautiful strategy deck, no quarterly milestones, nothing ships | Break strategy into quarterly deliverables with owners |
| Tech debt denial | “We will clean it up later” with no plan or timeline | Quantify debt with interest rates, make it visible to product |
| Big bang migration | “We will rewrite everything in the new stack” | Strangler fig pattern — migrate incrementally, prove value early |
| Architecture astronaut | Over-engineering for scale you do not have | Solve today’s problem with tomorrow’s extension points. YAGNI. |
| Invisible tech strategy | Only the EM and principal know the strategy | Publish it. If the team cannot explain the strategy, it does not exist. |

Real-World Application

Spotify’s Tech Radar

Spotify open-sourced their tech radar format and tool (backstage.io/docs/features/tech-radar). Their radar is maintained by a guild of senior engineers who review and update it quarterly. The key: it is descriptive (what they actually use), not prescriptive (what they wish they used).

Amazon’s Reversible vs Irreversible Decisions

Bezos’s “one-way door vs two-way door” framework maps directly to when you need formal architecture decisions:

  • One-way door (irreversible): Database choice, API contract, cloud provider. Requires ADR, senior review, deliberation.
  • Two-way door (reversible): Library choice, internal naming, feature flag. Decide fast, change if wrong.

Most decisions are two-way doors that organizations treat as one-way doors. This is the more common failure mode — not moving fast enough on reversible decisions.

Stripe’s Approach to Technical Strategy

Stripe maintains a living document called “The Technical Agenda” — a ranked list of the most important technical investments for the company. It is reviewed monthly by senior engineering leadership and updated based on what has been learned. The ranking forces trade-offs: adding something means removing something else.

Google’s Design Docs

Google requires design documents for any project expected to take more than one engineer-week. The doc template includes: context, goals/non-goals, design, alternatives considered, and monitoring plan. Design docs are reviewed by peers and senior engineers before work begins. This is ADRs at scale.


References

  • Larson, W. (2019). An Elegant Puzzle: Systems of Engineering Management — Chapters on technical strategy and migrations
  • Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate — Evidence for what technical practices drive performance
  • Singer, R. (2019). Shape Up — Appetite-based planning and cooldown periods
  • Fowler, M. (2009). “Technical Debt Quadrant” — martinfowler.com
  • Nygard, M. (2011). “Documenting Architecture Decisions” — Original ADR blog post
  • ThoughtWorks Technology Radar — thoughtworks.com/radar
  • Spotify Backstage Tech Radar — backstage.io
  • “Design Docs at Google” — Industrial Empathy blog
  • Cunningham, W. (1992). “The WyCash Portfolio Management System” — Original “technical debt” metaphor
  • Elliott-McCrea, K. — “Architecture Without an Architect” (O’Reilly talk)

This post is licensed under CC BY 4.0 by the author.