Platform as a Product
Delivery teams are your customers. The moment you stop treating them as a captive audience and start competing for their adoption — with real discovery, a public roadmap, and metrics that expose whether anyone actually uses what you build — your platform becomes something engineers choose rather than endure.
Core Properties
| Property | Value |
|---|---|
| Origin | Evan Bottcher, What I Talk About When I Talk About Platforms (2018); Skelton & Pais, Team Topologies (2019) |
| Primary customer | Delivery (stream-aligned) teams |
| Success measure | Adoption rate, time-to-value, NPS — not ticket throughput |
| Key role added | Platform Product Manager (or Technical Product Owner) |
| Anti-pattern it replaces | Ticket-taker ops team; cost-centre infrastructure group |
| Industry consensus | DORA, CNCF, Gartner, Thoughtworks — all converge on this model |
| Gartner growth projection | 80% of large engineering orgs will have platform teams by 2026 (from 45% in 2022) |
| DORA 2025 finding | 90% of orgs now report using an IDP; 76% have a dedicated platform team |
When to Use / Avoid
Use When
- You have 5+ stream-aligned teams whose cognitive load is visibly suffering from duplicated infra concerns, inconsistent pipelines, or repeated “how do I deploy this” conversations.
- You can commit a dedicated platform team (even 2-3 engineers + 1 PM); without sustained investment you ship features no one wants.
- Your platform team is willing to treat adoption as optional, not mandated. Delivery teams should be able to go off-piste at their own cost, not be forbidden from it.
- You have or can build the discovery muscle: structured interviews, journey mapping, instrumented usage data.
- Leadership will fund based on adoption outcomes, not headcount or features shipped.
Avoid When
- You have a single product team where direct tooling already works and a shared platform would add ceremony without value.
- You’re solving a product-market fit problem. Platform engineering accelerates delivery throughput; it doesn’t fix the wrong product.
- You plan to mandate use of the platform instead of competing for adoption — a mandated platform is a cost centre wearing a product hat.
- You don’t yet know what delivery teams actually need. Build for 2-3 stream-aligned teams first, discover the friction, then abstract it.
- The “PM” role is actually a project manager managing platform team tickets. That’s not discovery; that’s backlog administration.
The Mindset Shift — From Cost Centre to Internal Product
The failure mode of almost every central infrastructure team is identical: it operates as a cost centre, tracks work through tickets, measures success by throughput (tickets closed, deploys handled), and treats delivery teams as captive consumers who have no choice but to wait. Manuel Pais and Matthew Skelton named this the “ticket-taker anti-pattern” in Team Topologies — a platform team perpetually in request-response mode with its consumers instead of in X-as-a-Service mode.
Evan Bottcher articulated the corrective in his 2018 martinfowler.com article:
“A digital platform is a foundation of self-service APIs, tools, services, knowledge and support which are arranged as a compelling internal product. Autonomous delivery teams can make use of the platform to deliver product features at a higher pace, with reduced co-ordination.”
The phrase “compelling internal product” does real work. A platform that is merely available is not compelling. A platform succeeds only when it is more attractive than the alternative — where “alternative” means a delivery team managing their own Kubernetes cluster, writing their own CI pipelines, or rolling their own secret management. Netflix formalised this as the “paved road”: the platform is the well-lit path teams take because it is faster and safer, not because they are told to.
The mindset shift has three concrete operational consequences:
- Competition. Platform teams that treat their work as a product accept that delivery teams can opt out. This forces the platform to remain useful and low-friction rather than metastasising into bureaucracy.
- Measurement. Success is adoption — what percentage of eligible teams use each capability, whether they stick with it, and whether it reduces their time-to-deploy. Tickets closed per week is an irrelevant vanity metric.
- Discovery. You cannot build a compelling product without knowing what customers actually struggle with. User interviews, journey mapping, and usage telemetry replace assumption-driven roadmaps.
```mermaid
graph LR
  A[Delivery Teams] -->|use| B[Platform]
  B -->|collects| C[Usage Data & Feedback]
  C -->|informs| D[Platform Roadmap]
  D -->|prioritises| E[Platform Capabilities]
  E -->|reduces friction for| A
  F[Platform PM] -->|owns| D
  F -->|runs discovery with| A
```
Source: Evan Bottcher — What I Talk About When I Talk About Platforms — the original formulation of “compelling internal product” and the self-service definition
The Role of a Platform Product Manager
This is the single biggest structural change when adopting the Platform-as-a-Product model. Without a dedicated PM, the roadmap defaults to “whatever the loudest VP asked for last quarter” or “whatever the platform engineers find technically interesting.” Both produce platforms that don’t solve the actual friction delivery teams face.
The Platform PM (sometimes called Technical Product Owner or TPO) is not a project manager. The distinction matters:
| Dimension | Project Manager | Platform PM |
|---|---|---|
| Primary concern | Delivery timeline, resource allocation | Customer outcomes — is adoption growing? |
| Roadmap input | Stakeholder requests, capacity plans | Discovery findings, usage data, OKRs |
| Relationship to delivery teams | Tracks their requests in a backlog | Treats them as customers; conducts interviews |
| Definition of done | Feature shipped | Feature adopted and retained |
| Key skill | Gantt charts, status reporting | Product discovery, prioritisation, saying no |
| Who is their customer? | The platform team itself | Delivery teams |
The last row matters most. A PM who treats the platform team as the customer optimises for the team’s preferences. A PM who treats delivery teams as the customer optimises for adoption and developer experience — which is what the organisation actually needs.
The PM owns four things operationally:
1. Discovery. Regular structured interviews with delivery teams (at least monthly). Jobs-to-be-done framing: not “what features do you want?” but “what is the most painful part of getting from code to production today?” Journey mapping for specific personas — the new hire onboarding in their first deploy, the staff engineer rolling out a new service, the on-call engineer dealing with an incident.
2. Roadmap. A public, outcome-oriented roadmap with OKRs tied to adoption metrics, not feature counts. Platform teams that publish their roadmap build trust with delivery teams and make trade-offs explicit. Teams that keep their roadmap internal breed speculation and shadow-IT.
3. Prioritisation. The PM’s most important job is saying no. Platform teams face constant pressure to build the next feature before the last one is mature. The right instinct is to drive adoption of what already exists, harden it, document it, and retire the prior solution — before building anything new.
4. Internal marketing. Launch posts on internal wikis, demo sessions at engineering all-hands, office-hours slots, onboarding guides. Delivery teams do not discover platform capabilities by themselves. The platform team must actively sell its work to its internal customers.
```mermaid
graph TD
  PM[Platform PM] -->|runs| D[Discovery — interviews, journey maps]
  PM -->|owns| R[Public Roadmap & OKRs]
  PM -->|drives| P[Prioritisation — saying no]
  PM -->|leads| M[Internal Marketing & launch comms]
  D -->|feeds| R
  R -->|tied to| KR[Adoption KRs not feature count]
```
Adoption Metrics That Actually Matter
Platform teams commonly measure the wrong things: number of features shipped, tickets resolved, uptime percentages. These are output metrics. The right metrics are outcome metrics — do delivery teams adopt the platform, does it accelerate them, and do they stay?
The MONK Framework
A set of four adoption metrics that industry practice has converged on for platform teams:
| Metric | What it measures | Target benchmark |
|---|---|---|
| Market share | % of eligible teams or services using each capability | Depends on capability maturity — track trend, not absolute |
| Onboarding time | Time from a developer starting with a new codebase to first commit in production | Elite: under 2 hours; struggling: more than 1 day |
| NPS | How likely are delivery teams to recommend the platform? | Elite: +40 or above; negative NPS is a signal of structural failure |
| Keep rate (retention) | % of teams that adopt a capability and continue using it 90 days later | High churn means the capability solved the initial pain but failed to mature |
Beyond MONK, three operational metrics reveal day-to-day platform health:
- Self-service success rate — what % of platform interactions require no human intervention from the platform team? A low rate means the platform is not actually self-service; it is still in ticket-taker mode.
- Time-to-first-deploy — how long does a brand new service take to reach production the first time, using only the platform’s golden path? This is the single number that tells you whether the paved road is actually paved.
- Platform SLO compliance — the platform team should publish and track SLOs for its own services (portal uptime, CI pipeline success rate, environment provisioning time). See Developer Experience and DX Metrics for the full DX metrics treatment.
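To make these definitions concrete, the sketch below computes market share, keep rate, NPS, and self-service success rate from platform usage telemetry. The event schema, the 30-day activity window, and the survey-score handling are illustrative assumptions for the sketch; real instrumentation will differ.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative event schema; real telemetry will differ.
@dataclass
class AdoptionEvent:
    team: str
    capability: str
    day: date
    needed_human_help: bool  # did the platform team have to intervene?

def market_share(events, eligible_teams, capability):
    """Fraction of eligible teams that have used the capability at least once."""
    adopters = {e.team for e in events if e.capability == capability}
    return len(adopters & set(eligible_teams)) / len(eligible_teams)

def keep_rate(events, capability, today, window_days=90, active_days=30):
    """Fraction of adopting teams still active recently, measured only for
    teams whose first use is at least window_days in the past."""
    first_use, last_use = {}, {}
    for e in events:
        if e.capability != capability:
            continue
        first_use[e.team] = min(first_use.get(e.team, e.day), e.day)
        last_use[e.team] = max(last_use.get(e.team, e.day), e.day)
    cohort = [t for t, d in first_use.items()
              if (today - d).days >= window_days]
    if not cohort:
        return None  # no cohort old enough to measure yet
    retained = [t for t in cohort if (today - last_use[t]).days < active_days]
    return len(retained) / len(cohort)

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6), range -100..+100."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

def self_service_rate(events):
    """Fraction of platform interactions that needed no human intervention."""
    return sum(not e.needed_human_help for e in events) / len(events)
```

Note that `keep_rate` returns `None` when no adopting team is old enough to measure: publishing a retention figure before the 90-day window has elapsed is an easy way for a platform team to flatter its own numbers.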
The 2024 DORA research found that “developer independence” (i.e., being able to complete a task without coordination with another team) correlates with a 5% improvement in individual and team-level productivity. That 5% is the quantifiable case for a self-service platform.
Common Failure Modes
Understanding why Platform-as-a-Product initiatives fail is more useful than restating why they succeed.
The Field-of-Dreams Fallacy
The assumption: “If we build it, they will come.” The reality: delivery teams have running systems, established workflows, and high switching costs. A platform capability needs to be dramatically better — not marginally better — to justify the cognitive load of migration. The most common platform engineering failure mode is spending months building a comprehensive IDP in isolation, then launching it only to find that no one wants to use it because it solves no burning problem.
The fix: build an MVP that solves one specific, high-friction, obviously-painful problem. Ship it to two teams who have that pain. Iterate. Only then generalise.
Roadmap Captured by the Loudest VP
When the platform lacks a real PM and a data-driven discovery process, the roadmap fills up with whatever the highest-ranked stakeholder last asked for. This produces platforms that serve executive optics rather than delivery teams — elaborate self-service portals that engineers don’t use because the actual friction (flaky CI, opaque error messages, manual secret rotation) was never addressed.
Too Many Features, None Mature
Platform teams under pressure to demonstrate velocity ship multiple capabilities at v0.5 maturity simultaneously. Each capability has gaps, footguns, and missing documentation. The delivery teams who try each one hit a sharp edge and give up. The platform team interprets the lack of adoption as “they just don’t want to change” — when the real problem is that no single capability is good enough to absorb the switching cost.
The right model: one capability at a time, driven to maturity, adopted widely, before starting the next.
The Platform Team That Doesn’t Talk to Its Customers
Platform engineers who have never shadowed a delivery team doing their first deploy, never attended an incident that exposed a platform gap, and never run a structured interview with a senior engineer in a stream-aligned team will build the wrong things confidently. Discovery is not optional; it is the core of the product loop.
PM Who Serves the Platform Team, Not Delivery Teams
The most subtle failure: a PM who manages the platform team’s work, tracks sprint velocity, and reports on features shipped — but does not own an adoption OKR and does not conduct regular delivery team interviews. This person is a project manager. Renaming them a PM while leaving their incentives unchanged does not fix the ticket-taker anti-pattern.
```mermaid
flowchart TD
  A{Does the PM run regular delivery-team interviews?} -->|No| F1[Ticket-taker anti-pattern]
  A -->|Yes| B{Is roadmap tied to adoption OKRs?}
  B -->|No| F2[Feature shipping theatre]
  B -->|Yes| C{Does the team say no to low-adoption requests?}
  C -->|No| F3[Too many features, none mature]
  C -->|Yes| D[Platform-as-a-Product in practice]
```
Org Structure for the Platform PM
There are two placement models, each with meaningful trade-offs:
| Model | Description | Works well when | Breaks down when |
|---|---|---|---|
| Embedded in platform team | PM sits within the platform team, attends standups, owns the platform roadmap directly | Platform team is small-to-mid; PM has deep eng context | PM gets captured by platform team’s priorities; loses the delivery-team perspective |
| Central product org | PM reports into a product management hierarchy; platform team is a cross-functional product squad | Org has strong product management culture; PM can maintain independence | PM lacks the technical depth to challenge architecture decisions; becomes a ticket-relay |
The embedded model is more common and generally more effective for platform teams, with one condition: the PM must maintain regular, structured contact with delivery teams independent of what the platform team asks them to prioritise. The risk is “going native” — gradually optimising for the platform team’s preferences rather than delivery team outcomes.
A useful structural safeguard: the PM’s performance review should include adoption OKR achievement, not just platform team sprint delivery. This aligns incentives with actual customer outcomes.
What a Platform Product Strategy Doc Looks Like
Every platform team claiming the Platform-as-a-Product model should maintain a living strategy document. It forces the articulation of who the customers are and how success is measured — two things that remain dangerously vague in most platform teams. A minimal structure:
- Platform vision (one paragraph): What does the ideal state look like in 18-24 months? What problem has been permanently retired for delivery teams?
- Customers (explicit list): Which stream-aligned teams? What are their distinct contexts — different tech stacks, different regulatory requirements, different deployment frequencies? New-hire developer persona, senior engineer persona, on-call persona.
- Jobs to be done (top 5): Not “we need a CI pipeline” but “when I want to deploy a new service, I want to go from git push to production in under 15 minutes without learning Kubernetes.” JTBD framing keeps the focus on customer outcomes rather than platform features.
- Target outcomes + key results: What does success look like, measurably? Example: “90% of delivery teams self-onboard a new service without a ticket to the platform team within 3 months of capability launch.” OKRs tied to adoption, not shipped features.
- Roadmap (next 2 quarters): Two columns: capability being hardened/adopted, capability being built. Anything in column two that does not have a delivery team waiting to use it should be challenged.
- What we will not build: Explicitly listing declined requests builds trust and prevents scope creep. If a delivery team needs something the platform team will not build, say so and explain why — so they can build it themselves or find another path.
Source: Manuel Pais — Platform as a Product (PlatformCon) — the jobs-to-be-done and discovery framework for internal platforms
FinOps Connection — Showback and Chargeback
Platform teams control significant cloud spend. Connecting that spend back to delivery teams is a FinOps practice with a direct Platform-as-a-Product implication: when delivery teams see the cost of the platform resources they consume, the platform’s value proposition becomes explicit and falsifiable.
Showback: Delivery teams receive a cost report — “your services consumed $X of platform resources last month” — but are not billed. Showback builds cost awareness without creating accounting overhead. It also makes the platform’s value visible: “you consumed $40K of Kubernetes compute that you did not have to provision or manage.”
Chargeback: Delivery teams are actually billed for platform consumption against their budget. This sharpens incentives but requires accurate cost attribution and a degree of org maturity that many teams lack early on.
The Platform-as-a-Product relevance: showback is a form of making value measurable. A delivery team that sees its platform consumption costs and can compare them to what self-managed infrastructure would have cost becomes a more engaged customer. The conversation shifts from “why should we use your platform?” to “here is the quantifiable value you are getting.”
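Mechanically, a showback report is little more than a group-by over cost records tagged with the owning team. A minimal sketch, assuming a record shape and tag convention invented for the example (the figures are hypothetical):

```python
from collections import defaultdict

# Hypothetical cost records as exported from a cloud billing feed;
# the "team" tag is an assumed attribution convention, and the
# figures are invented for the example.
cost_records = [
    {"team": "checkout", "service": "k8s-compute", "usd": 28_400},
    {"team": "checkout", "service": "managed-postgres", "usd": 6_100},
    {"team": "search", "service": "k8s-compute", "usd": 11_750},
    {"team": "search", "service": "ci-runners", "usd": 2_300},
]

def showback_report(records):
    """Aggregate platform spend per delivery team: reported, not billed."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += r["usd"]
    return dict(totals)

for team, usd in sorted(showback_report(cost_records).items()):
    print(f"{team}: consumed ${usd:,.0f} of platform resources last month")
```

Chargeback uses the same aggregation; the difference is organisational, in that the totals are debited against team budgets rather than merely reported.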
How Organisations Apply This
Spotify — Backstage as a Product, Not a Tool
Spotify did not build Backstage and mandate adoption. The Backstage team ran discovery with Spotify engineers, found that service catalogue fragmentation was the sharpest pain (2,000+ microservices spread across 280+ engineering teams with no central discovery), and built the developer portal to address that specific job-to-be-done. By the time Spotify open-sourced Backstage in 2020, those teams were managing 2,000+ microservices, 300+ websites, and 4,000 data pipelines through a single portal. The growth was opt-in — teams adopted because onboarding time dropped 55%, not because it was mandated.
This is the canonical Platform-as-a-Product success pattern: deep discovery → MVP solving one burning problem → wide voluntary adoption → then generalise and extend.
Netflix — Paved Road with a Product Mindset
Netflix’s internal platform principle is “paved road, not railroad.” Centralised platform teams build and support a set of libraries, frameworks, and services — Spinnaker, Titus, Atlas, Hystrix — that are the well-lit paths. Teams choose the paved road because it embeds security, observability, and reliability defaults they would otherwise have to build themselves, not because they are prohibited from going off-road.
The product-team orientation shows in how Netflix introduces new platform capabilities: internal “launch posts” on the engineering wiki, demo sessions, and a clear statement of what problem the capability solves and for whom. The platform team treats internal adoption as a product problem — awareness, onboarding friction, retention — exactly as an external product team would. The result is a culture where “full-cycle developers” own their services end-to-end on top of a platform they genuinely want to use.
Mercado Libre — FURY Platform with Customer Voting
Mercado Libre’s internal developer platform, FURY, has an explicit mechanism for delivery-team input: developers vote on features to prioritise. The platform team reviews votes alongside usage data to decide what to build next. This is a lightweight but effective discovery practice — it externalises prioritisation signals from delivery teams rather than relying solely on what the platform team believes is important.
FURY has been in production for eight years, serving thousands of engineers across Latin America’s largest e-commerce platform. The longevity and scale validate the Platform-as-a-Product instinct: a platform that continuously incorporates delivery-team feedback doesn’t become stale or bypassed.
Source: Mercado Libre’s IDP journey (platformengineering.org) — developer-voting mechanism and FURY maturity
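As a toy illustration of blending developer votes with usage signals, the sketch below scores feature requests by raw popularity and by breadth of impact. The weighting, field names, and numbers are invented for the example and are not FURY's actual model:

```python
# Hypothetical feature requests; votes and team counts are invented.
feature_requests = [
    {"name": "ephemeral-envs", "votes": 112, "teams_affected": 18},
    {"name": "dark-mode-portal", "votes": 240, "teams_affected": 3},
    {"name": "secret-rotation", "votes": 57, "teams_affected": 31},
]

def priority(req, vote_weight=1.0, reach_weight=10.0):
    """Blend raw popularity (votes) with breadth of impact (teams affected)."""
    return vote_weight * req["votes"] + reach_weight * req["teams_affected"]

for req in sorted(feature_requests, key=priority, reverse=True):
    print(f'{req["name"]}: score {priority(req):.0f}')
```

In this toy data the lowest-vote request ranks first because it touches the most teams: exactly the correction that usage data provides over counting votes alone.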
Adevinta — Platform with Explicit SLOs as the Value Proposition
Adevinta’s Common Platform, serving 50+ marketplace product teams across Europe, made platform SLOs central to its product contract with delivery teams. The platform team publishes reliability contracts — deploy success rate, pipeline duration, environment provisioning time — and treats breaches as P1 incidents exactly as product teams treat customer-facing outages.
This is important product positioning. Most delivery teams don’t trust a platform until they understand what reliability guarantees it carries. Publishing SLOs makes the value proposition concrete: “use our paved road and you get 99.9% pipeline success rate and sub-10-minute deploys; build your own and you own that responsibility.” The platform becomes something delivery teams purchase with their attention and adoption, not something imposed on them.
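A minimal sketch of what publishing and checking such a contract could look like, using the 99.9% pipeline success rate and sub-10-minute deploy figures quoted above as targets; the structure and field names are assumptions, not Adevinta's implementation:

```python
# Published platform SLOs; targets taken from the value proposition above.
PLATFORM_SLOS = {
    "pipeline_success_rate": 0.999,  # fraction of CI pipeline runs that succeed
    "deploy_duration_p95_s": 600,    # 95th-percentile deploy time, in seconds
}

def check_slos(measured: dict) -> list[str]:
    """Return the list of breached SLOs; a breach is treated as a P1 incident."""
    breaches = []
    if measured["pipeline_success_rate"] < PLATFORM_SLOS["pipeline_success_rate"]:
        breaches.append("pipeline_success_rate")
    if measured["deploy_duration_p95_s"] > PLATFORM_SLOS["deploy_duration_p95_s"]:
        breaches.append("deploy_duration_p95_s")
    return breaches

# Example with hypothetical measurements from last week
breached = check_slos({"pipeline_success_rate": 0.9985, "deploy_duration_p95_s": 540})
if breached:
    print(f"P1: platform SLO breach on {', '.join(breached)}")
```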
References
- 🔗 Evan Bottcher — What I Talk About When I Talk About Platforms (martinfowler.com, 2018) — the original definition of “compelling internal product” and the self-service platform model
- 🎥 Manuel Pais — Platform as a Product (PlatformCon 2022) — discovery practices, jobs-to-be-done, and delivery teams as internal customers
- 🎥 Manuel Pais — Platform as a Product: Latest Insights (Team Topologies, Dec 2024) — updated examples and internal resistance patterns
- 📖 Camille Fournier & Ian Nowland — Platform Engineering: A Guide for Technical, Product, and People Leaders (O’Reilly, 2024) — covers the product mindset, PM role, and developer-centric IDP design
- 📖 Matthew Skelton & Manuel Pais — Team Topologies, 2nd ed. (IT Revolution, 2025) — ticket-taker anti-pattern, X-as-a-Service interaction mode, platform team definition
- 🔗 DORA — Platform Engineering Capability (dora.dev) — research finding that developer independence (5% productivity gain) depends on self-service platform quality
- 🔗 DORA 2025 Report — Platform Quality and AI Effectiveness — finding that platform quality amplifies AI adoption impact; 90% of orgs now have an IDP
- 🔗 Gartner — Platform Engineering for I&O Leaders — “I&O leaders should shift from infrastructure projects to infrastructure products”; 80% of large orgs target by 2026
- 🔗 platformengineering.org — What Does a Platform PM Do? — discovery, roadmap, and prioritisation practices for internal platform PMs
- 🔗 Mercado Libre — IDP Journey (platformengineering.org) — FURY platform, developer voting, eight-year maturity trajectory
- 🔗 9 Platform Engineering Anti-Patterns That Kill Adoption (jellyfish.co) — field-of-dreams fallacy, ivory tower engineering, and other failure modes with concrete examples
- 🔗 Cortex — Platform Engineers Guide to Building Platform like a Product — MONK framework (market share, onboarding time, NPS, keep rate) and adoption benchmarks