Internal Developer Platforms

An IDP is not a CI/CD system with a nicer UI. It is an opinionated, product-managed abstraction layer that converts developer intent — “I need a service with a Postgres and a cache” — into provisioned, policy-compliant cloud infrastructure, and does so without the developer needing to understand Kubernetes, Terraform, or security policy.


Core Properties

| Property | Value |
| --- | --- |
| Primary goal | Reduce cognitive load on delivery teams via self-service |
| Tenant contract | Declarative workload spec → running, policy-compliant workload |
| Platform team size | Typically 5–15 platform engineers |
| Serves | 50–500+ product engineers per platform team |
| Engineer ratio | ~1 platform engineer per 8–12 product engineers |
| CNCF project home | tag-app-delivery.cncf.io/whitepapers/platforms |
| Distinguishing from CI/CD | CI/CD is layer 4 of 6; the IDP is all six layers end-to-end |
| Distinguishing from PaaS | IDP is company-specific, opinionated, composable; PaaS (Heroku, Cloud Foundry) is generic, external |

When to Build / Avoid

Build When

  • You have 5+ stream-aligned teams shipping in parallel and their cognitive load from cloud + Kubernetes + security is measurably slowing delivery.
  • You see inconsistent reliability, security posture, or cost control across teams with no shared floor.
  • You have senior engineers solving the same infra bootstrapping problem repeatedly across teams.
  • You are above ~50 engineers and can staff 3+ dedicated platform engineers (minimum viable team to avoid single-point-of-failure risk).
  • You have executive support for treating the platform as a product, not a ticket queue.

Avoid When

  • Your org is under 30 engineers — SaaS tooling (Render, Railway, Fly.io) is cheaper and faster.
  • Your bottleneck is product-market fit, not delivery speed. Build product first.
  • You cannot commit a dedicated platform team. A part-time IDP is a fragmentation hazard, not a productivity multiplier.
  • You would be building before discovering — invest in 2–3 stream-aligned teams first to learn what actually needs paving.
  • No product manager available. Platforms built without a PM become ticket queues or YAML factories within 12 months.

What an IDP Is — and Is Not

The CNCF Platforms White Paper defines a platform as: “a foundation of self-service APIs, tools, services, knowledge and support which are arranged as a compelling internal product.” An Internal Developer Platform is the concrete instantiation of that philosophy.

Three things it is explicitly not:

Not a CI/CD pipeline. A pipeline ships code. An IDP abstracts everything the pipeline, the infrastructure provisioner, the secrets manager, the policy engine, and the service catalogue do — and presents it as a single self-service surface. CI/CD (Argo CD, Flux, Tekton) is layer 4 of the stack; the IDP is the whole stack.

Not a PaaS. Heroku and Cloud Foundry are general-purpose external products with fixed constraints. An IDP is company-specific, reflects the org’s own technology choices, and is composable — you pick what sits at each layer. The tradeoff: you own the operational complexity that a PaaS vendor absorbs.

Not a developer portal. Backstage, Port, Cortex, and OpsLevel are developer portals — the UI and service catalogue layer (L1). The portal is the front door to the platform. Conflating portal with platform is the most common conceptual mistake in the industry.

The Team Topologies framing (see Platform Engineering Foundations) is the canonical rationale: a platform team reduces cognitive load on stream-aligned teams by offering capabilities X-as-a-Service. The IDP is the technical mechanism by which that happens. The full six-layer stack is detailed in the next section.

Source: CNCF Platforms White Paper — foundational definition of platform attributes, capabilities, and the self-service contract


The Six-Layer Reference Stack

These layers are conceptual, not organisational — one team typically owns the full stack. Thinking in layers clarifies what the platform does, which pieces are replaceable, and where failure localises.

```mermaid
graph TD
    L1[L1: Developer Portal & Service Catalogue]
    L2[L2: Scaffolding & Golden-Path Templates]
    L3[L3: Workload Specification & Orchestration]
    L4[L4: Continuous Delivery & GitOps]
    L5[L5: Runtime Plane]
    L6[L6: Policy, Supply Chain & Observability]

    L1 --> L2
    L2 --> L3
    L3 --> L4
    L4 --> L5
    L6 -. guards .-> L4
    L6 -. guards .-> L5
```

L1: Developer Portal & Service Catalogue

Purpose. The single pane of glass for all developers. Provides service discovery (who owns what), dependency maps, documentation, SLOs, runbook links, and the launch point for scaffolding and self-service actions.

What it solves. Before a portal, service ownership, API contracts, deployment status, and on-call contacts live in wikis, chat, and tribal knowledge. With one, a developer can find the owner of a service in 30 seconds, not 30 minutes.

Tenant-facing contract. A catalogue with entities (services, APIs, libraries, teams) linked to metadata: owner, SLO, repo, pipeline status, docs URL.
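
In Backstage terms, that contract is a catalog-info.yaml committed alongside the service and ingested into the catalogue. A minimal sketch, assuming a hypothetical example-org repository and runbook URL; the entity kind, spec fields, and annotation key are standard Backstage catalogue schema:

```yaml
# catalog-info.yaml — illustrative Backstage catalogue entry for one service
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payment-service
  description: Handles card payments and refunds
  annotations:
    github.com/project-slug: example-org/payment-service   # links repo and CI status into the portal
  links:
    - url: https://runbooks.example.com/payment-service     # hypothetical runbook location
      title: Runbook
spec:
  type: service
  lifecycle: production
  owner: team-payments
  system: checkout
```

Automated ingestion (discovery processors that scan repositories for this file) is what keeps such entries current rather than hand-maintained, which matters for the failure mode below.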

| Product | Type | Key characteristic |
| --- | --- | --- |
| Backstage | OSS (CNCF Incubating) | Plugin ecosystem; full customisability; high setup cost |
| Port | SaaS | Low time-to-value; composable data model |
| Cortex | SaaS | Scorecard/maturity-model focus; integrates with Backstage |
| OpsLevel | SaaS | Rubrics for service maturity; strong for compliance orgs |
| Roadie | Managed Backstage | Backstage without self-hosting; plugin marketplace |

Failure modes. The catalogue goes stale within weeks if ingestion is not automated — manually maintained metadata is doomed. The portal becomes “just another wiki” if it is not the single authoritative source linked from all other tools.

L2: Scaffolding & Golden-Path Templates

Purpose. Creates new services, repos, and pipelines with org-approved defaults baked in: CI/CD config, Dockerfile standards, dependency scanning, observability instrumentation, CODEOWNERS, IaC stubs. The paved road, expressed as code.

What it solves. Without scaffolding, bootstrapping every new service takes 1–3 days of copy-paste and tribal knowledge. With it, a developer answers 5–10 form fields and gets a production-ready repo in minutes.

Tenant-facing contract. A template catalogue in the portal; developer fills a form; scaffolder runs actions (create repo, push code, configure pipeline, register in catalogue).
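
With the Backstage Scaffolder, that contract is expressed as a Template entity: the parameters block renders the form, the steps block runs the actions. A trimmed sketch, assuming a hypothetical example-org GitHub organisation and a ./skeleton directory of template files; verify action names and output fields against your Backstage version:

```yaml
# template.yaml — illustrative Scaffolder golden-path template
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: go-service
  title: Go microservice (golden path)
spec:
  owner: group:platform-team
  type: service
  parameters:
    - title: Service details
      required: [name, owner]
      properties:
        name:
          type: string
          description: Unique service name
        owner:
          type: string
          ui:field: OwnerPicker            # renders a team picker in the portal form
  steps:
    - id: fetch
      action: fetch:template               # render ./skeleton with the form answers
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          owner: ${{ parameters.owner }}
    - id: publish
      action: publish:github               # create the repo and push the rendered code
      input:
        repoUrl: github.com?owner=example-org&repo=${{ parameters.name }}
    - id: register
      action: catalog:register             # close the loop: L2 output lands in the L1 catalogue
      input:
        catalogInfoUrl: ${{ steps['publish'].output.repoContentsUrl }}/catalog-info.yaml
```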

| Product | Type | Key characteristic |
| --- | --- | --- |
| Backstage Scaffolder | OSS | Tightly coupled to Backstage; YAML-defined; extensible actions |
| Cookiecutter | OSS | Language-agnostic file templates; no portal integration out of the box |
| Copier | OSS | Jinja2 templates; supports updates to existing projects |
| Projen | OSS | Code-generates project config from a TS/Python DSL; AWS CDK lineage |

Failure modes. Templates diverge from reality after 6 months if there is no automated drift detection. Teams fork templates instead of updating them, creating a long tail of subtly incompatible service shapes.

L3: Workload Specification & Orchestration

Purpose. The most important layer and the one with the least industry consensus in 2024–2025. The tenant declares what their workload needs (compute, a Postgres, a cache, env vars, resource dependencies); the orchestrator resolves how to deliver it using the org’s preferred IaC and Kubernetes tooling.

What it solves. Without this layer, delivery teams own Kubernetes YAML, Terraform modules, secrets configuration, and environment promotion logic. All of that is platform responsibility masquerading as app responsibility.

Tenant-facing contract. A declarative workload spec (YAML or CUE) naming the workload, its resource dependencies, and constraints. The orchestrator resolves the rest.

| Product | Type | Key characteristic |
| --- | --- | --- |
| Score | OSS CNCF Sandbox spec | Tool-agnostic; define once, deploy anywhere |
| Humanitec Platform Orchestrator | Commercial SaaS | Dynamic config management; RMCD pattern; SLA-backed |
| Kratix | OSS (Syntasso) | Kubernetes-native; Promises as self-service API; multi-cluster |
| Crossplane | OSS (CNCF Graduated) | Kubernetes control-plane extension; composites + providers for IaC |
| Radius | OSS (Microsoft) | Cloud-agnostic app model; first-class dependency graph |
| KusionStack | OSS | Kusion language; unified app + infra model for large enterprises |

Failure modes. The orchestrator becomes a black box only one person understands. Vendor lock-in through proprietary resource schemas that survive for years. Score or CUE adoption requires buy-in that teams frequently resist without strong platform PM support.

L4: Continuous Delivery & GitOps

Purpose. Ships code and config from the last commit to running workloads. Maintains cluster state continuously reconciled to git.

What it solves. Manual kubectl applies, environment drift, “works in staging, broken in prod” class failures.

Tenant-facing contract. A git push triggers a pipeline that builds, tests, promotes, and syncs to the appropriate cluster. The tenant does not interact directly with Kubernetes.
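
Behind that contract, the platform typically owns one Argo CD Application (or Flux Kustomization) per workload per environment. A representative Argo CD manifest; the repository URL, project, and paths are placeholders:

```yaml
# Illustrative Argo CD Application owned by the platform, not the tenant
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: payments                       # Argo CD project scoping RBAC and allowed destinations
  source:
    repoURL: https://github.com/example-org/payment-service-deploy   # hypothetical config repo
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                         # delete resources removed from git
      selfHeal: true                      # revert manual kubectl drift
```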

| Product | Type | Key characteristic |
| --- | --- | --- |
| Flux | OSS (CNCF Graduated) | GitOps toolkit; image automation; multi-tenant via Flux Tenancy |
| Argo CD | OSS (CNCF Graduated) | UI-first; Application Sets; strong multi-cluster story |
| Tekton | OSS (CD Foundation) | Kubernetes-native pipeline engine; Argo CD peer for CI side |
| Spinnaker | OSS (Netflix) | Multi-cloud CD; sophisticated deployment strategies |

Failure modes. GitOps without policy at L6 means any developer with repo write access can deploy malformed or non-compliant workloads. Argo CD application sprawl in large orgs requires Application Sets or a Kustomize overlays strategy from day one.

L5: Runtime Plane

Purpose. The compute, networking, and day-2 operational fabric the workloads run on. Kubernetes is the de facto standard. This layer also includes ingress, secrets injection, autoscaling, and certificate management.

What it solves. Without a runtime abstraction layer, every team manages their own cluster configuration, certificate renewal, and secrets rotation, which is both duplicated effort and inconsistent security posture.

Tenant-facing contract. A running container cluster with automatic secrets injection, cert rotation, horizontal scaling, and ingress — invisible to the tenant.
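
“Automatic secrets injection” usually means the platform templates an ExternalSecret per workload, so credentials are pulled from Vault and materialised as a Kubernetes Secret; the tenant only ever sees the resulting env var. A sketch with a placeholder secret store and paths:

```yaml
# Illustrative External Secrets Operator resource generated by the platform
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-service-db
  namespace: payments
spec:
  refreshInterval: 1h                     # re-sync window; long intervals worsen rotation lag
  secretStoreRef:
    name: vault-backend                   # hypothetical ClusterSecretStore pointing at Vault
    kind: ClusterSecretStore
  target:
    name: payment-service-db              # resulting Kubernetes Secret mounted by the workload
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: secret/data/payment-service/db
        property: password
```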

| Component | Leading choices |
| --- | --- |
| Managed Kubernetes | GKE (Autopilot), EKS, AKS |
| Service mesh | Istio, Linkerd |
| Event-driven autoscaling | KEDA |
| Secrets management | External Secrets Operator (ESO) + Vault |
| Certificate management | cert-manager |

Failure modes. No isolation between tenant workloads in multi-tenant clusters — a noisy neighbour causes cascading OOM kills. KEDA and HPA conflicts when both are applied to the same Deployment. ESO lag during secret rotation causing transient auth failures.

L6: Policy, Supply Chain & Observability

Purpose. The non-negotiable floor that every workload inherits. Policy enforcement (only compliant images deploy), software supply chain integrity (signed images, SBOMs, provenance), and unified observability (metrics, logs, traces).

What it solves. Without this layer, security and compliance are per-team concerns that are unevenly applied. Supply chain attacks (SolarWinds, XZ Utils) reach production because no admission gate verifies provenance.

Tenant-facing contract. Workloads that pass L3 and L4 are implicitly compliant with policy. Non-compliant workloads are rejected at the cluster admission gate, not in a manual security review.
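
As a concrete admission gate, a Kyverno policy can require that every image deployed to the cluster carries a valid cosign signature, closing the build-time-only verification gap noted in the failure modes below. A trimmed sketch; the registry pattern and key are placeholders, and the exact schema should be checked against your Kyverno version:

```yaml
# Illustrative Kyverno policy: only signed images from the internal registry may run
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce        # reject at admission rather than merely audit
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"    # hypothetical internal registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-        # platform-held cosign public key
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```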

| Concern | Leading choices |
| --- | --- |
| Admission policy | Kyverno, OPA/Gatekeeper, Kubewarden |
| Image signing & provenance | cosign / Sigstore, SLSA frameworks |
| SBOMs | Syft, Grype, SPDX / CycloneDX |
| Metrics | Prometheus / VictoriaMetrics, Grafana |
| Distributed tracing | OpenTelemetry, Jaeger, Tempo |
| Logging | Fluentd / Vector, Loki |

Failure modes. Policy-as-code drift — Kyverno policies that are not tested against real workloads create deployment surprises. cosign verification only at build time, not at admission, leaves a window for image substitution attacks.

Source: CNCF TAG App Delivery — Platforms White Paper — six-capability model; internaldeveloperplatform.org — layer definitions and product categorisation


Two Architectural Patterns

Pattern 1: Platform Orchestrator (Dynamic Configuration Management)

Humanitec popularised this pattern and published a reference architecture for it. The core insight: configuration drift across environments is inevitable if configurations are committed statically. Instead, configurations are generated at deploy time from a declarative application model plus environment-specific resource bindings.

The orchestrator’s execution loop (Read, Match, Create, Deploy — RMCD):

  1. Read — parse the workload spec and resource requirements.
  2. Match — look up resource definitions matching the requested type (e.g. postgres.dev → shared RDS in dev, dedicated Aurora in prod).
  3. Create — invoke the IaC provider (Terraform, Crossplane) to provision or retrieve the resource; inject secrets via ESO.
  4. Deploy — emit Kubernetes manifests with injected connection strings and credentials; push to the GitOps engine.

The result: the developer never writes environment-specific configuration. The orchestrator handles promotion from dev → staging → prod by re-resolving resource bindings.

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Spec as score.yaml
    participant Orch as Platform Orchestrator
    participant IaC as IaC Provider
    participant GitOps as GitOps Engine
    participant K8s as Kubernetes

    Dev->>Spec: declare workload + resources
    Spec->>Orch: feed spec on git push
    Orch->>IaC: resolve resource bindings (Crossplane / TF)
    IaC-->>Orch: connection strings, credentials
    Orch->>GitOps: emit K8s manifests + secrets
    GitOps->>K8s: reconcile desired state
    K8s-->>Dev: workload running
```
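
When Crossplane is the IaC provider in step 3, the Match/Create output is typically a claim against a composite resource definition the platform team owns. Everything here (API group, kind, parameters) is defined by the platform, so treat the names as illustrative:

```yaml
# Illustrative Crossplane claim emitted for a `type: postgres` resource request
apiVersion: database.example.org/v1alpha1  # hypothetical platform-defined XRD
kind: PostgreSQLInstance
metadata:
  name: payment-service-db
  namespace: payments
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      environment: production              # prod binds to dedicated Aurora, dev to shared RDS
  writeConnectionSecretToRef:
    name: payment-service-db-conn          # connection details injected back into the workload
```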

Pattern 2: Workload Specification as a Standard (Score)

Score is a CNCF Sandbox project: a deliberately minimal, tool-agnostic workload spec. A score.yaml describes only what a workload needs — container image, ports, resource dependencies (a postgres, a redis) — without specifying how those resources are provisioned.

```yaml
# Example score.yaml — the full tenant-facing contract
apiVersion: score.dev/v1b1
metadata:
  name: payment-service
containers:
  payment-service:
    image: .
    variables:
      DATABASE_URL: ${resources.db.host}
      REDIS_URL: ${resources.cache.host}
resources:
  db:
    type: postgres
  cache:
    type: redis
service:
  ports:
    http:
      port: 8080
```

The platform team writes the Score implementation that translates this into Kubernetes manifests (via score-k8s) or Docker Compose (via score-compose). Developers never touch cluster config. The separation is explicit: developer owns the workload spec; platform owns the resource resolution.

Score’s value is portability and standardisation. Platforms get rewritten; a score.yaml survives. Multiple orchestrators (Humanitec, Argo CD, custom) can consume the same spec.

Source: Score specification — workload-centric developer contract; Humanitec dynamic config management — RMCD pattern


Build vs Buy vs Hybrid

No IDP choice is purely build or buy. The real question is which layers to buy vs assemble vs build.

| Approach | When it fits | Risk |
| --- | --- | --- |
| Buy (commercial) — Humanitec, Port, OpsLevel | Small platform team, fast time-to-value required, budget available | Vendor lock-in at L3; cost scales with deployment volume |
| Assemble OSS — Backstage + Flux/Argo + Crossplane + Kyverno | Strong Kubernetes expertise, budget-constrained, customisation required | High initial investment (~3–6 months to production-ready); operational ownership burden |
| Hybrid — OSS portal + commercial orchestrator | Most common path for 50–200 engineer orgs | Integration seams require ongoing maintenance |
| Full build — bespoke everything | Extreme scale + unique requirements (Netflix, Google) | 4.5× 3-year TCO vs commercial (Developers.dev analysis of 50+ enterprise clients) |

Sizing thresholds:

  • Under 20 engineers: use SaaS (Render, Railway, Fly.io). No IDP team justified.
  • 20–50 engineers: thin OSS stack (Backstage or Port + GitHub Actions + managed K8s). 2 platform engineers maximum.
  • 50–200 engineers: hybrid. 3–8 platform engineers. Backstage + commercial orchestrator or Crossplane-based in-house orchestrator.
  • 200+ engineers: full IDP investment. 8–15 platform engineers. Dedicated PM. Explicit roadmap.

The Humanitec DevOps Benchmarking Study (2023) found that 93% of top-performing teams report using an IDP maintained by a platform team that follows a platform-as-a-product approach. Only 5% of medium performers and 2% of low performers report the same.


The Tenant-Facing Contract

The contract is the most important design decision in any IDP. Get it wrong and teams build workarounds that become as complex as the problem the platform was meant to solve.

A good tenant contract has four properties:

  1. Declarative, not imperative. The developer states what the workload needs, not how to provision it. type: postgres is better than aws_db_instance.main.
  2. Minimal surface area. Expose only what teams need to vary per workload. Hide everything that the platform should own: cluster selection, resource sizing defaults, image registry, mTLS config.
  3. Escapable. Teams must be able to escape the golden path for legitimate needs without rebuilding the platform from scratch. Score supports extension points; Backstage Scaffolder supports custom actions.
  4. Verifiable. The spec should be validatable before deploy — syntax checking, policy pre-flight, resource quota estimation. Shift-left compliance.

```mermaid
flowchart TD
    A[Developer writes score.yaml] --> B{Platform lint + policy pre-flight}
    B -- pass --> C[Orchestrator resolves resources]
    B -- fail --> D[Blocked: policy violation reported to developer]
    C --> E[GitOps engine syncs to cluster]
    E --> F[Workload running with paved-road defaults]
    F --> G[Telemetry flows to L6 observability]
```

The contract boundary also defines on-call ownership. Everything above the contract line is the team’s responsibility. Everything below is the platform team’s SLO. This clarity is what Team Topologies calls X-as-a-Service interaction mode — see Platform Engineering Foundations.


CNCF Platform Engineering Maturity Model

The CNCF Platform Engineering Maturity Model (TAG App Delivery, first published October 2023) organises platform adoption across five aspects — Investment, Adoption, Interfaces, Operations, Measurement — each with four maturity levels.

```mermaid
stateDiagram-v2
    [*] --> Provisional
    Provisional --> Operational : team formed\nfirst capabilities live
    Operational --> Scalable : platform treated\nas product
    Scalable --> Optimizing : measured outcomes\ndriving roadmap
```

| Level | Definition | Indicators |
| --- | --- | --- |
| Provisional | Temporary, ad-hoc capabilities; platform team forming | Capabilities built reactively; no SLOs; minimal documentation |
| Operational | Reliable capabilities in production; team is established | Stable CI/CD, basic portal, golden paths for dominant use case; informal roadmap |
| Scalable | Platform treated as a product; self-service by design | PM role filled; NPS/adoption metrics tracked; multiple golden paths; policy automated |
| Optimizing | Measured outcomes drive continuous improvement | DORA metrics correlated to platform changes; cost attribution; contribution model from delivery teams |

Most organisations surveyed in the Humanitec 2024 State of Platform Engineering (281 respondents) are Provisional to Operational: 56% of platform teams are under two years old, and nearly 45% do not measure platform outcomes at all. Being at Operational is table stakes for an org above 100 engineers; Scalable is the target for orgs with serious platform investment.

Source: CNCF Platform Engineering Maturity Model — five aspects × four levels framework


Common Failure Modes

The IDP failure catalogue is well-established enough that every pattern has a name.

Over-customisation / stretched platform. Measuring the platform team on internal market share incentivises adding golden paths for every technology, which defeats the goal of consolidation. A stretched platform serves the long tail poorly and the primary use case only adequately. Discipline: say no to the long tail; the platform supports the dominant technology choices.

No product manager. Without a PM, the platform team optimises for engineering elegance over developer experience. The resulting platform is technically correct and organisationally invisible. Platforms without a PM become ticket queues within 12 months (Puppet 2024 State of DevOps). The PM role is not a staffing luxury; it is a load-bearing function.

Golden paths nobody uses. If the golden path requires more YAML than the alternative it replaces, developers will use the alternative. “Build it and they will come” has a 100% failure rate in platform engineering. Golden paths need developer co-design, a frictionless migration story, and an internal marketing function.

Platform team as ticket-takers. The default anti-pattern: delivery teams submit requests (“create me a database”) and the platform team provisions manually. This looks like DevOps but is the Ops model in disguise. The escape hatch is always self-service: the platform team builds the automation, delivery teams trigger it.

Platform-of-platforms sprawl. Organisations that adopt every CNCF project that ships a press release end up with three portals, two orchestrators, and a GitOps engine that nobody fully understands. Each layer should have one chosen product. Consolidation is a feature.

Day-2 drift. Golden paths accurate at launch diverge as Kubernetes versions change, security policies evolve, and org standards update. Without automated drift detection and a template update process, the platform fragments into as many distinct shapes as it has tenants.

Golden cage. Excessive standardisation that prevents teams from using the right tool for genuinely novel problems. The platform should enforce defaults, not prohibit deviation. Escape hatches with a clear request process are better than rigid enforcement.


How Real Systems Use This

Spotify — Backstage as the L1 Foundation

Spotify built Backstage internally starting around 2016 to manage what had become an unnavigable service topology — hundreds of microservices with inconsistent ownership metadata. The core insight was catalogue-first: before building any self-service capability, ensure every service has an authoritative owner, SLO, and documentation link in one place.

Backstage was open-sourced in March 2020 and donated to the CNCF (now Incubating). As of early 2026 it is used by more than 3,400 organisations serving over two million developers, with approximately 89% market share among organisations that have adopted a developer portal framework. Notable adopters include American Airlines, Expedia, LinkedIn, HP, Mercedes-Benz, Siemens, Vodafone, LEGO, Wayfair, PagerDuty, and Twilio.

Within Spotify, approximately 50% of all employees use Backstage on a monthly basis; most engineers use it daily. The Scaffolder (L2) is the primary self-service mechanism: creating a new service involves selecting a template, filling 5–10 form fields, and receiving a repo with CI/CD, observability, and catalogue registration pre-wired.

The organisational lesson: a portal without template-enforced standards is a fancy wiki. Backstage’s power comes from Scaffolder closing the loop — the template guarantees that the catalogue entry is created, the CI/CD pipeline is wired, and the monitoring dashboard exists before the first line of product code is written.

ING Bank — Golden Paths at 7,000-Engineer Scale

ING’s internal Backstage initiative started four years ago when their 7,000+ engineers were navigating too many portals across different business units. The challenge at ING’s scale is not building a portal; it is making one portal convincing enough that engineers migrate from every existing alternative.

Their approach to the scaffolding layer is distinctive: the Golden Path acts as a meta-template that stitches multiple templates owned by different teams into one seamless developer journey. A developer triggering “create microservice” gets a multi-step guided workflow that spans infra provisioning, security scanning, monitoring setup, and production readiness review — all owned by different platform sub-teams but presented as a single coherent flow that can be paused and resumed.

The scale challenge at ING illustrates the L2 failure mode precisely: templates that cannot be composed across team ownership boundaries require developers to execute multi-day handoff sequences manually. The meta-template approach solves the composition problem without requiring one team to own the full bootstrapping surface.

Netflix — Full-Cycle Developers on a Paved Road

Netflix’s platform is the canonical example of what a mature IDP enables organisationally. “Full-cycle developers” own their services from development through on-call — but they do so on top of a deep internal platform, not instead of one.

The Netflix platform stack includes Spinnaker (L4, multi-cloud CD, open-sourced in 2015), Titus (L5, container management platform built on Apache Mesos), Atlas (L6, time-series metrics), and a rich library of paved-road client libraries covering distributed tracing, circuit breaking, service discovery, and structured logging. Developers opt into the paved road; the platform is positioned as the path of least resistance, not a mandatory gate.

The key design principle: the platform team ships capabilities as internal OSS libraries that delivery teams pull in. When a library has a breaking change or security fix, rollout happens via automated dependency bumps across all services. This makes the platform team a force-multiplier rather than a bottleneck — they can push security fixes to hundreds of services in a single PR.

Zalando — Learning the Cost of Unconstrained Autonomy

Zalando’s “Radical Agility” phase (2015–2018) gave every team full autonomy including infrastructure: separate Kubernetes clusters, separate tooling choices, separate operational runbooks. The cognitive load savings went to product teams; the coordination cost accumulated on a small SRE team.

By 2018–2019 the cost of that autonomy was clear: provisioning a new cluster took weeks, observability was inconsistent across teams, and security audits required touching hundreds of distinct configurations. Zalando migrated toward a shared Kubernetes platform — what became their “Developer Control Plane” — without reverting to centralised Ops.

The lessons that shaped their IDP: (1) autonomy without a paved road is entropy, not freedom; (2) the migration from cluster-per-team to shared cluster requires a hard political decision about platform ownership that cannot be bottom-up; (3) Backstage’s software catalogue is most valuable when it replaces tribal knowledge that previously existed only in Slack, not when it duplicates a functioning wiki.

Monzo — Abstraction Layer for 2,000 Microservices

Monzo’s ~2,000-microservice platform on Kubernetes + Linkerd demonstrates how a disciplined abstraction layer at L3 lets a small platform team support a large service count. Their internal RPC framework abstracts the service-mesh layer — services declare their dependencies in a manifest; the platform handles mTLS bootstrapping, load balancing, retry policies, and circuit breaker configuration.

The scaffolding layer (L2) generates new services from a strict template that includes RPC stubs, structured logging, Prometheus metrics registration, and a Grafana dashboard stub. Bootstrapping a new service at Monzo takes under an hour, and the result arrives production-instrumented. The key constraint that makes this work: Monzo’s platform team enforces the golden path vigorously and has maintained very low technology diversity — Go as the primary language, a single service framework — which keeps the template space small.

The scale lesson: 2,000 services with five distinct language runtimes, three RPC frameworks, and two observability stacks cannot be managed by a small platform team. Monzo’s homogeneity is a feature, not an accident.

Mercedes-Benz — Enterprise IDP at Automotive Scale

Mercedes-Benz built a Backstage-based IDP (documented in a CNCF case study) to serve software engineers across both cloud services and in-car software (MB.OS). The challenge distinguishing their case from pure-web companies: automotive software development has hard certification requirements (ISO 26262, ASPICE) that must be encoded into golden paths, not left to individual teams.

Their L6 (policy + supply chain) layer is particularly hardened: Kyverno policies enforce image provenance requirements from their internal container registry; SLSA provenance checks are gates in the delivery pipeline for any software component destined for the vehicle. The compliance story is not a separate audit process — it is baked into the IDP as an admission policy that runs on every deployment.

The enterprise lesson: regulated industries can adopt platform engineering, but the policy layer (L6) is not optional and must be designed before the portal and templates are built. Starting with L1 and retrofitting L6 requires renegotiating every golden path template.

