Kubernetes vs Internal Developer Platform: Do You Need Both for AWS Deployments?
A practical breakdown for engineering teams choosing between raw Kubernetes and an IDP on AWS
The majority of AWS teams running containers have landed on Kubernetes. According to the 2025 CNCF Annual Cloud Native Survey, 82% of container users now run Kubernetes in production, and that share continues to grow across EKS deployments. It is the default choice for containerised workloads, and EKS makes it accessible enough that most teams land there eventually.
But at some point the same teams start looking at internal developer platforms. And the question that comes up is a reasonable one: if Kubernetes already handles deployments, container orchestration, scaling, and health checks, what does an IDP actually add? Are these two separate tools solving two different problems, or is one replacing the other?
The answer is not obvious. On AWS, the boundaries blur quickly. EKS integrates deeply with IAM, networking, and other managed services, which makes Kubernetes feel like it should be enough. But in practice, teams still run into gaps around how developers interact with that infrastructure.
This post breaks down where each one starts, where it ends, and whether you actually need both running together on AWS.
TL;DR
Amazon EKS runs your containers. An internal developer platform defines how engineers actually deploy and operate them.
A well-designed IDP on AWS does not just connect to a cluster. It standardises how infrastructure like VPCs, EKS, CI/CD, and observability are provisioned and used.
Developers push code. The platform handles everything underneath.
You still need Kubernetes. The real question is whether every engineer should be dealing with it directly on every deploy.
The shift toward IDPs is generally the right call, but only if the platform is designed with escape hatches. When something breaks at the Kubernetes level, engineers with zero cluster knowledge cannot debug it.
Most teams do not make this shift intentionally. They make it when the alternative stops working.
What Kubernetes Handles on AWS and Where It Stops
Kubernetes is a container orchestration system. It schedules containers across a pool of compute, manages service-to-service networking, restarts failed workloads, and scales pod replicas based on load. On AWS, EKS is the managed Kubernetes service. AWS handles the control plane — the API server and etcd — so you do not operate those components yourself.
What stays with your team in a standard EKS setup: VPC design, subnets, NAT gateways, and security group rules. IAM setup, including role bindings and service account mapping. Choosing and managing node groups. Installing and configuring cluster add-ons like CoreDNS, VPC CNI, and the AWS Load Balancer Controller. Planning and executing Kubernetes version upgrades.
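That add-on responsibility lends itself to a quick automated check. Below is a minimal sketch using a hypothetical helper over a pre-fetched add-on list; in practice the list would come from boto3's `eks.list_addons` or a Helm release inventory, since not every add-on in the list above ships as a managed EKS add-on:

```python
# Add-ons a standard EKS setup needs, from the list above. Hypothetical
# check -- your actual required set depends on your networking and
# ingress choices.
REQUIRED_ADDONS = {"coredns", "vpc-cni", "aws-load-balancer-controller"}

def missing_addons(installed: list[str]) -> set[str]:
    """Return required add-ons not present on the cluster.

    `installed` would typically be fetched separately, e.g. from
    boto3's eks.list_addons(clusterName=...) plus a Helm inventory.
    """
    return REQUIRED_ADDONS - {name.lower() for name in installed}

# Example: a cluster missing the load balancer controller.
print(missing_addons(["coredns", "vpc-cni"]))
# → {'aws-load-balancer-controller'}
```

With an IDP, a check like this runs at the platform layer instead of living in one engineer's head.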
EKS Auto Mode extends AWS management further into the data plane and handles more of the node lifecycle automatically. But even with Auto Mode, platform design, developer workflows, environment management, and delivery standardisation remain your responsibility.
This is where the internal developer platform question starts. Kubernetes handles the runtime. It does not handle how your developers interact with that runtime. It does not create environments, wire CI/CD pipelines, or give a backend engineer a self-service path to deploy a new service without understanding the cluster underneath.
That layer has to come from somewhere. On AWS, that is what an IDP is for.
Kubernetes vs IDP: What Each One Actually Does on AWS
Most teams assume Kubernetes and an IDP overlap significantly. They do overlap in deployment automation, but they operate at different abstraction levels and solve different problems.
Kubernetes is the orchestration and runtime layer. It schedules containers, maintains workload state, handles service discovery, and scales pods. An internal developer platform is the developer experience and automation layer above it. It shapes how engineers create environments, deploy services, access observability, and interact with shared infrastructure — without needing to touch the cluster directly.
The confusion usually comes from the fact that both touch deployments.
But Kubernetes manages how containers run. An IDP manages how developers deploy.
On AWS, a platform like LocalOps does not sit beside EKS — it provisions EKS, manages surrounding AWS resources, and abstracts cluster complexity away from engineers who should not need to think about it on every deploy.
Kubernetes runs the workloads. The IDP simplifies how developers consume the platform. You need both.
How an IDP Handles What Kubernetes Does Not on AWS
Kubernetes does not provision environments. It does not wire CI/CD pipelines. It does not give a backend engineer a self-service path to deploy a new service without touching cluster config. Those are not gaps in Kubernetes — it was never designed to do those things. But someone on your team ends up doing them anyway, usually the person who set up the cluster.
An IDP takes that work off the individual and puts it at the platform level. When a developer pushes to a branch, the platform handles VPC provisioning, EKS cluster setup, EC2 node configuration, CI/CD pipeline wiring, auto-scaling, SSL, and deployment. The developer writes a service config file. The infrastructure side is handled by the platform.
No Dockerfile. No Terraform. No Helm required from the developer’s side.
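To make the "service config file" idea concrete: the actual LocalOps schema is not shown here, so the sketch below uses an invented config shape and a small validator purely to illustrate the division of labour. The developer declares intent (name, port, scaling bounds); the platform owns everything underneath. All field names are hypothetical.

```python
# Hypothetical service config -- field names are illustrative,
# not the actual LocalOps schema.
service_config = {
    "name": "payments-api",
    "port": 8080,
    "health_check_path": "/healthz",
    "min_replicas": 2,
    "max_replicas": 10,
}

def validate(config: dict) -> list[str]:
    """Return a list of problems; an empty list means deployable."""
    errors = []
    for key in ("name", "port", "health_check_path"):
        if key not in config:
            errors.append(f"missing required field: {key}")
    if config.get("min_replicas", 1) > config.get("max_replicas", 1):
        errors.append("min_replicas cannot exceed max_replicas")
    return errors

print(validate(service_config))  # → []
```

The point of the shape: nothing in the file references a VPC, a node group, or a Helm chart. Those stay on the platform side.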
The trade-off is real, though. Full abstraction means engineers lose visibility into what is running underneath. When a pod enters CrashLoopBackOff or a service fails a health check, an engineer who has never touched kubectl cannot diagnose it. A well-built IDP handles this by exposing controlled access to the cluster when needed. Engineers should not need Kubernetes knowledge for routine deploys, but they should be able to get to it when something goes wrong.
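As an example of what that escape hatch surfaces, the first triage step is usually finding containers whose waiting reason is CrashLoopBackOff. The sketch below works over pod-status data in the shape the Kubernetes API returns; in a real cluster it would be fed from `kubectl get pods -o json` or the Python kubernetes client rather than an inline list:

```python
def crash_looping(pods: list[dict]) -> list[str]:
    """Return 'pod/container' names stuck in CrashLoopBackOff.

    Each dict mirrors the Kubernetes pod-status shape; live data would
    come from `kubectl get pods -o json` or the kubernetes client.
    """
    hits = []
    for pod in pods:
        for cs in pod.get("containerStatuses", []):
            reason = cs.get("state", {}).get("waiting", {}).get("reason")
            if reason == "CrashLoopBackOff":
                hits.append(f'{pod["name"]}/{cs["name"]}')
    return hits

# Inline sample data standing in for live cluster state.
pods = [
    {"name": "payments-api-6f9", "containerStatuses": [
        {"name": "app", "state": {"waiting": {"reason": "CrashLoopBackOff"}}}]},
    {"name": "web-7c2", "containerStatuses": [
        {"name": "app", "state": {"running": {}}}]},
]
print(crash_looping(pods))  # → ['payments-api-6f9/app']
```

A good IDP runs this kind of check for you and surfaces the result; the escape hatch matters when you need the next step, the container logs and events, directly.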
What an IDP Actually Sets Up in Your AWS Account
When you create a new environment, a production-grade IDP provisions the following inside your AWS account:
Dedicated VPC with private and public subnets, NAT gateway, and internet gateway
Managed EKS cluster with EC2 compute nodes
Elastic Load Balancer for inbound HTTP/HTTPS traffic
Prometheus, Loki, and Grafana for metrics, log aggregation, and dashboards
Managed AWS services on demand: RDS, S3, SQS, ElastiCache
CI/CD pipeline triggered on branch push
Auto-renewing SSL certificates, encrypted secrets storage, and role-based access control
Everything runs inside your AWS account. The vendor does not hold your data or access your infrastructure directly.
Each environment is isolated at the VPC level. For BYOC deployments where enterprise customers bring their own AWS account, the entire stack gets provisioned inside the customer’s account. Each customer gets their own cluster, their own VPC, their own compute. That is the architecture enterprise compliance frameworks typically require.
SuprSend, a notification infrastructure company, used LocalOps to handle this entire setup for their BYOC (bring-your-own-cloud) distribution. Before that, every enterprise customer deal required spinning up dedicated infrastructure manually.
LocalOps now provisions each per-customer AWS environment in about 30 minutes without changing how their engineering team works: same git-push workflow, same branch-based deploys, just running inside each customer's own AWS account. They are able to close enterprise deals faster without adding DevOps headcount. For the full picture, read the case study from their CTO: How SuprSend Unlocks Enterprise Revenue with BYOC
Without an IDP, someone on your team is doing all of this manually, per environment, every time a new one is needed.
Backstage, Port and an IDP: Which One Works for AWS Teams
Only 28% of organisations have a dedicated DevOps / platform engineering team responsible for internal platforms, according to the Q1 2026 CNCF Technology Landscape Radar report. That number matters when evaluating IDP options because most DevOps tools assume you have that team already.
Before comparing tools, one distinction worth clarifying: an internal developer portal vs platform is not just a naming difference. A portal surfaces information about existing infrastructure. A platform provisions and manages cloud resources. This matters because Backstage, the most widely adopted open source option in this space, is actually a portal. It gives you a service catalog and a UI layer but does not provision infrastructure or manage deployments out of the box.
Teams searching for a Backstage internal developer platform often discover this gap after they have already invested months in setup. Backstage needs integrations and plugins to act as a full platform, and you build those yourself. That is the right call if you have the platform engineering capacity to sustain it internally.
Port works well as a catalog and visibility layer on top of existing infrastructure. A cloud native IDP fits teams that need production-grade AWS environments running without the upfront platform investment. Not sure which fits your stack? Book a demo with us and our engineers will walk you through it.
Do You Still Need a DevOps Engineer If You Have an IDP on AWS?
Short answer: yes. But the actual role changes significantly.
According to CNCF survey data, organisations typically allocate one platform engineer per 17 to 50 developers — roughly 2 to 6% of total engineering headcount. That ratio only works if the platform is handling routine infrastructure work. Without an IDP, that one person becomes the bottleneck for every deployment question, every new environment, and every EKS config change on the team.
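To make the ratio concrete, here is the arithmetic as a small sketch; `devs_per_engineer` is just a chosen point inside the survey's 1:17-to-1:50 band, not a recommendation:

```python
import math

def platform_engineers_needed(developers: int,
                              devs_per_engineer: int = 30) -> int:
    """Headcount at a chosen point in the 1:17-1:50 survey band."""
    return math.ceil(developers / devs_per_engineer)

# A 60-developer org at the conservative (1:17) and lean (1:50) ends:
print(platform_engineers_needed(60, 17))  # → 4
print(platform_engineers_needed(60, 50))  # → 2
```

Either way the band implies a small team, which is exactly why that team cannot afford to be the manual path for every environment request.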
With an IDP, that same engineer sets the platform up once. Developers provision environments, deploy services, and access observability without filing a ticket. The DevOps or platform engineer shifts to work that actually requires their expertise: cost architecture, security posture, compliance requirements, and reliability engineering.
High-maturity platform teams report 40 to 50% reductions in cognitive load for developers. That is not just a developer experience metric. It directly affects how fast product teams ship and how much of your engineering budget goes toward infrastructure overhead versus product work.
What an IDP still cannot replace on AWS:
Reserved Instance and Savings Plan strategy
Custom VPC architectures for specific compliance frameworks
Multi-account setups with complex permission boundaries
Incident response when something breaks at the infrastructure level
The abstraction trade-off is real. When a pod enters CrashLoopBackOff or a node group fails to scale, someone needs to know what they are looking at. An IDP reduces how often engineers hit those situations. It does not eliminate them. Teams should maintain baseline Kubernetes literacy even if developers do not use it daily.
Teams that delay building this layer tend to accumulate technical debt quietly. Helm chart configurations drift across services, cluster knowledge stays siloed with one or two people, and onboarding new engineers to the deployment process takes longer than it should.
What to Look for Before Choosing an IDP for AWS
Not all IDPs that claim AWS support are built the same way. Before committing, these are the questions worth asking:
Does it provision EKS or just connect to one you already have? Connecting to an existing cluster means you still own the setup, configuration, and upgrade cycle. Provisioning means the platform handles the full lifecycle.
Does it require developers to write Helm charts? Helm support for engineers who need it is fine. Requiring it from everyone means you have moved the complexity rather than removed it.
Is observability included or a separate integration? Prometheus, Loki, and Grafana should come with the platform. Wiring observability after the fact is a project in itself.
Does it provision managed AWS services from the same interface? RDS, S3, SQS, ElastiCache — if these require a separate Terraform repo, you have two systems to maintain instead of one.
Does your data stay in your AWS account? The vendor should not have direct access to your application data or infrastructure. Everything should run inside your own account.
Can you eject if you need to? Vendor lock-in is a real consideration. If you stop using the platform, you should be able to take the infrastructure and run it independently.
Does it fit your deployment model? SaaS, single-tenant, BYOC, and self-hosted have different infrastructure requirements. The platform should support your model without requiring custom tooling for each.
To see how LocalOps specifically handles these on AWS, the LocalOps developer documentation covers environment provisioning, EKS setup, observability, BYOC, and the eject path in full detail.
FAQs
1. What is the best internal developer platform for AWS teams?
The best internal developer platform for AWS depends on your team size and whether you have a dedicated platform engineering team. Backstage is the most widely adopted open source internal developer platform, but it requires significant setup and maintenance investment, typically 6 to 12 months before developers are using it consistently. For teams that need AWS environments running quickly without dedicated DevOps overhead, a cloud native IDP like LocalOps provisions EKS, observability, and CI/CD inside your AWS account out of the box. Not sure what fits your stack? You can talk to our engineers to help figure it out.
2. What should an AWS internal developer platform actually do?
An AWS internal developer platform should provision and manage EKS clusters, handle VPC and subnet configuration, wire CI/CD pipelines, set up observability, and manage access control, all inside your own AWS account. It should give developers a self-service path to deploy services without touching Kubernetes directly. If it only connects to an existing cluster rather than provisioning one, you still own most of the infrastructure complexity yourself.
3. What does internal developer platform architecture look like on AWS?
A production-grade internal developer platform architecture on AWS includes a dedicated VPC with private and public subnets, a managed EKS cluster, EC2 compute nodes, an Elastic Load Balancer, Prometheus and Grafana for observability, managed AWS services like RDS and S3, and a CI/CD pipeline wired to branch pushes. Each environment runs in isolation at the VPC level. For BYOC deployments, that entire architecture gets replicated inside the customer’s AWS account.
4. Should you build an internal developer platform or buy one for AWS?
Building gives you full control but requires significant engineering investment. SuprSend estimated that building their BYOC infrastructure setup in-house would have taken 10 to 12 engineer months. Buying a platform like LocalOps reduces that to under 30 minutes for a production-ready environment. Build makes sense if you have a dedicated platform team and specific requirements that off-the-shelf platforms cannot meet. Buying makes sense if your engineering team’s time is better spent on product work rather than platform infrastructure.
5. How does platform engineering relate to an internal developer platform?
Platform engineering and internal developer platform adoption are growing in parallel, but they are not the same thing. Platform engineering is the practice of building and owning developer infrastructure as a product. An internal developer platform is what that practice produces, the actual system engineers use to deploy, provision environments, and access infrastructure. You can run an IDP without a formal platform engineering team. Many smaller teams buy a pre-built IDP specifically to avoid needing one.
So Do You Need Both Kubernetes and an IDP on AWS?
Yes. But that is not really the right question.
Kubernetes and an internal developer platform are not competing for the same job. EKS handles container orchestration. An IDP handles how your engineers interact with that orchestration layer without needing to understand it on every deployment. Removing either one creates a gap the other cannot fill.
The more useful question is what happens when you have Kubernetes but no IDP above it. Environment setup stays manual. Deployment workflows differ across services. New engineers spend days getting cluster access before they contribute anything. The one person who understands the EKS setup becomes the path of least resistance for every infrastructure question on the team.
An IDP does not make Kubernetes disappear. It makes Kubernetes someone else’s problem — specifically, the platform layer’s problem — so your product engineers can stay focused on the product.
Developers are increasingly accessing Kubernetes indirectly through internal developer platforms rather than directly, according to a March 2026 CNCF report covering 12,500 developers across 100 countries. That shift is not happening because Kubernetes is being replaced. It is happening because teams have realised that exposing cluster complexity to every engineer is a choice, not a requirement.
On AWS, you have the tooling to make that choice cleanly. The question is whether you build the layer above EKS yourself or use a platform that already has it.
If you’re figuring out how this would fit into your setup, the LocalOps team can help you work through it:
Book a Demo → Walk through how environments, deployments, and AWS infrastructure are handled in practice for your setup.
Get started for free → Connect an AWS account and stand up an environment to see how it fits into your existing workflow.
Explore the Docs → A detailed breakdown of how LocalOps works end-to-end, including architecture, environment setup, security defaults, and where engineering decisions still sit.



