Heroku's Hidden Infrastructure Limitations: What CTOs Only Discover at Scale
No VPC. No real compliance. Manual scaling. Here’s what breaks on Heroku when your company starts growing.
Heroku’s infrastructure limitations are not hidden in the sense that Heroku conceals them. They are hidden in the sense that they are invisible until a specific trigger surfaces them: a failed enterprise deal, a compliance audit, a scaling incident, or an architecture decision that gets made differently because of what Heroku cannot support.
By the time these limitations become visible, they are no longer theoretical. They are active constraints shaping product decisions, blocking revenue, and accumulating technical debt. The CTO who discovers Heroku’s compliance ceiling during a live enterprise deal is not making a calm architectural decision; they are managing a crisis that adequate lead time would have prevented.
This guide covers the five infrastructure limitations that Heroku does not surface until scale, what each one costs when it surfaces, and what the AWS-native architecture that replaces Heroku actually looks like.
TL;DR
What this covers: Heroku’s five hidden infrastructure limitations (VPC isolation, SOC 2 and HIPAA compliance risks, access control and audit logging gaps, stateful workload limitations, and dyno scaling failures) and the AWS-native alternatives that solve each structurally
Who it is for: CTOs and engineering leaders who are on Heroku, approaching Series A or beyond, and want to understand the infrastructure constraints before they surface as crises
The pattern: Every limitation on this list is invisible at small scale and becomes a strategic constraint at the growth stage. The teams that navigate this well discover the constraints before they become urgent.
Want to see what your infrastructure looks like on AWS without these limitations? Speak with the LocalOps team →
Limitation 1: No VPC Isolation, No Private Networking, No Infrastructure You Control
Heroku’s networking model is simple by design: your application runs on Heroku’s shared infrastructure and communicates with the outside world over the public internet. There is no VPC. There is no private subnet configuration. There is no IAM-based access control at the network layer. Your application, your database, and your cache all communicate over public endpoints, secured at the application layer with credentials, but not isolated at the network layer.
For early-stage applications, this model is acceptable. The operational simplicity it provides is genuine and valuable. The limitations become visible when two things happen: the team starts selling to enterprise customers and its architecture begins to evolve toward inter-service communication.
Enterprise procurement processes require infrastructure controls that Heroku’s networking model cannot provide. VPC configuration, private subnets between services, network isolation between environments, and the ability to describe your infrastructure’s security posture in a vendor security review: none of these are possible on Heroku, because the infrastructure is not yours to configure.
The architecture problem surfaces separately. As applications decompose into microservices, the services need to communicate with each other. On Heroku, all inter-service communication traverses the public internet. There is no private DNS. There is no service mesh. Services communicate through public endpoints with application-layer security. For architectures where internal services should never be publicly accessible, Heroku’s networking model is a fundamental mismatch.
What AWS-native alternatives provide:
Moving to AWS via an Internal Developer Platform like LocalOps automatically provisions a dedicated VPC for every environment. Private subnets separate application tiers. Security groups control traffic flow between services at the network layer. Services communicate over private IP addresses, never over the public internet. IAM roles govern access to every AWS resource with least-privilege policies applied automatically.
From the first deployment, the network architecture around which enterprise security questionnaires are built is in place. VPC configuration, private networking, and network isolation between environments are defaults, not configuration projects.
Every environment LocalOps provisions follows AWS Well-Architected standards by default, with private subnets, security group rules, and network ACLs applied automatically without manual configuration.
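To make the network-layer model concrete, here is a minimal Python sketch of how security-group ingress evaluation works conceptually. The rule shapes mirror AWS's source-security-group and CIDR semantics, but the group names and helper functions are illustrative, not an AWS API:

```python
import ipaddress

# Illustrative ingress rules for a database tier: only the app tier's
# security group may reach Postgres. Group names are hypothetical.
DB_INGRESS_RULES = [
    {"protocol": "tcp", "port": 5432, "source_sg": "sg-app-tier"},
]

def is_allowed(rules, protocol, port, source_sg=None, source_ip=None):
    """Return True if any rule admits the connection (conceptual sketch)."""
    for rule in rules:
        if rule["protocol"] != protocol or rule["port"] != port:
            continue
        if "source_sg" in rule and rule["source_sg"] == source_sg:
            return True
        if "source_cidr" in rule and source_ip is not None:
            if ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["source_cidr"]):
                return True
    return False  # security groups deny anything no rule admits

# The app tier can reach the database; an arbitrary public address cannot.
assert is_allowed(DB_INGRESS_RULES, "tcp", 5432, source_sg="sg-app-tier")
assert not is_allowed(DB_INGRESS_RULES, "tcp", 5432, source_ip="203.0.113.9")
```

This default-deny posture, enforced at the network layer rather than inside the application, is precisely what Heroku's shared networking model cannot express.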
See how LocalOps handles network security by default →
Limitation 2: Heroku Cannot Support HIPAA or SOC 2 Workloads, and Most Teams Discover This During an Enterprise Deal
This is the limitation with the highest business cost, and the one that surfaces at the worst possible moment.
Heroku offers a Heroku Shield product for teams with compliance requirements, but even with Shield, the fundamental constraint remains: your infrastructure runs on Heroku’s systems. Your compliance posture is bound by what Heroku chooses to certify and support. When an enterprise security questionnaire asks about infrastructure ownership, data residency, VPC configuration, or audit logging, the honest answer on Heroku is that the team does not control those things.
For B2B SaaS teams selling to healthcare organizations, financial institutions, or any enterprise with a structured security review process, this constraint is not a technical inconvenience. It is a revenue blocker.
HIPAA compliance on Heroku: HIPAA requires administrative, physical, and technical safeguards around protected health information. The technical safeguards include access controls, audit logging, data integrity mechanisms, and transmission security. On Heroku, the infrastructure implementing these safeguards is Heroku’s, not the team’s. A Business Associate Agreement with Heroku provides some coverage, but the team cannot independently audit, configure, or demonstrate control over the infrastructure handling PHI. Enterprise healthcare customers consistently require infrastructure that the vendor controls, not infrastructure that a third party controls on the vendor’s behalf.
SOC 2 compliance on Heroku: SOC 2 Type II requires demonstrating consistent control over infrastructure over time. The controls around logical access, change management, risk assessment, and monitoring all require the ability to configure and audit the underlying infrastructure. Teams on Heroku cannot configure VPC access controls, cannot implement custom IAM policies, and cannot generate infrastructure-level audit logs, because those capabilities belong to Heroku, not the team.
What AWS-native alternatives eliminate:
When infrastructure runs in the team’s own AWS account, the compliance surface is AWS, which holds SOC 2 Type II, PCI DSS, and FedRAMP certifications, supports HIPAA workloads under a BAA and GDPR compliance, and maintains dozens of additional attestations. Every environment LocalOps provisions includes private subnets, least-privilege IAM policies, encrypted secrets via AWS Secrets Manager, and audit logging through AWS CloudTrail.
The compliance architecture is not assembled after migration. It is in place from the first deployment, as a default, not as a configuration project initiated by a compliance audit or an enterprise deal.
For teams evaluating the best Heroku alternatives for compliance-sensitive workloads, infrastructure ownership in their own AWS account is the only path that satisfies enterprise security requirements without a compliance ceiling defined by a vendor.
Limitation 3: Heroku Has No Least-Privilege Access, No Role-Based Permissions, and No Audit Logging
Access control and audit logging are the two capabilities that SOC 2, HIPAA, and virtually every enterprise security framework require as baseline infrastructure controls. Heroku provides neither at the infrastructure level.
On Heroku, access control is application-level. Developers are granted access to Heroku applications, not to the infrastructure running underneath. There is no concept of least-privilege access to specific infrastructure resources. A developer with access to a Heroku application has the same access surface as every other developer with access to that application. Granular, role-based access to specific infrastructure components (the EKS cluster, the RDS database, specific S3 buckets) does not exist.
Audit logging at the infrastructure layer does not exist on Heroku. There is no equivalent to AWS CloudTrail, no log of who accessed what infrastructure resource, when, from where, and what action was taken. For teams undergoing SOC 2 Type II audits or responding to enterprise security questionnaires asking about infrastructure audit trails, this is a gap that cannot be closed while running on Heroku.
Implementing least-privilege access on an AWS-native platform:
AWS Identity and Access Management provides role-based access control at every layer of the infrastructure stack. IAM roles can be scoped to specific resources, specific actions, and specific conditions. A developer role can be granted read access to application logs without granting access to the production database. A CI/CD pipeline role can be granted permission to update a specific EKS deployment without any other AWS access.
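As an illustration of how narrowly a role can be scoped, the sketch below builds an IAM policy document granting read-only access to a single CloudWatch log group. The account ID and log group name are placeholders; the document shape (`Version`, `Statement`) and the `logs:` actions follow the real IAM policy format:

```python
import json

def read_only_logs_policy(region, account_id, log_group):
    """Least-privilege IAM policy: read one CloudWatch log group, nothing else.
    The account ID and log group name passed in are illustrative placeholders."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["logs:GetLogEvents", "logs:FilterLogEvents"],
            "Resource": f"arn:aws:logs:{region}:{account_id}:log-group:{log_group}:*",
        }],
    }

# A developer role built from this policy can read application logs but has
# no path to the production database or any other AWS resource.
policy = read_only_logs_policy("us-east-1", "123456789012", "/app/web")
print(json.dumps(policy, indent=2))
```

Attached to a developer role, this grants exactly the log-reading access described above and nothing more; the database, the cluster, and every other resource remain out of scope by default.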
LocalOps provisions IAM roles following least-privilege principles automatically. Every service gets a role scoped to exactly the AWS resources it needs. Developers access infrastructure through the LocalOps interface; the AWS account is always accessible, but direct infrastructure access is governed by IAM policies that the team controls.
AWS CloudTrail logs every API call to every AWS service, who made the call, when, from which IP, with which credentials, and what the response was. For SOC 2 audits, this audit trail is comprehensive, searchable, and exportable. It exists by default in every AWS account, not as a configuration project.
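A sketch of what that audit trail enables in practice: answering "who read the production secret, when, and from where?" The record fields used below (`eventName`, `eventTime`, `sourceIPAddress`, `userIdentity.arn`) are real CloudTrail record fields; the sample events themselves are fabricated for illustration:

```python
# Fabricated CloudTrail-style records for illustration only.
events = [
    {"eventName": "GetSecretValue", "eventTime": "2025-11-03T09:12:44Z",
     "sourceIPAddress": "10.0.2.17",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/web-service"}},
    {"eventName": "DescribeInstances", "eventTime": "2025-11-03T09:13:01Z",
     "sourceIPAddress": "10.0.3.4",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/ci-deploy"}},
]

def who_called(events, event_name):
    """Return (caller ARN, timestamp, source IP) for every matching event."""
    return [(e["userIdentity"]["arn"], e["eventTime"], e["sourceIPAddress"])
            for e in events if e["eventName"] == event_name]

accesses = who_called(events, "GetSecretValue")
assert accesses == [("arn:aws:iam::123456789012:role/web-service",
                     "2025-11-03T09:12:44Z", "10.0.2.17")]
```

This is the question an SOC 2 auditor asks and the question that has no infrastructure-level answer on Heroku.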
For teams evaluating Heroku alternatives in 2026 with compliance requirements, the access control and audit logging gap is the most technically specific reason managed PaaS platforms fail enterprise security reviews. AWS-native infrastructure closes this gap structurally.
See how LocalOps implements least-privilege access automatically →
Limitation 4: Heroku Is Architecturally Unsuitable for Persistent, Stateful Workloads
Heroku’s application model is built around the twelve-factor app methodology: stateless processes, ephemeral filesystems, and external services for all persistence. This model works well for web applications following these patterns. It creates real constraints for workloads that do not.
Heroku dynos have an ephemeral filesystem. Any data written to the local filesystem within a dyno is lost when the dyno restarts, which can happen for any number of reasons, including Heroku platform events that are outside the team’s control. For applications that need to write temporary files, process large datasets, or maintain any local state, this is a constraint that requires architectural workarounds.
The dyno model also creates problems for workloads that need to maintain connections across restarts. Database connection pools, WebSocket connections, and long-running background jobs all behave differently when the underlying process can be restarted at any time by a platform the team does not control. Teams running these workloads on Heroku accumulate workarounds (connection retry logic, session state externalization, job queue durability mechanisms) that add complexity specifically to compensate for Heroku’s architectural model.
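The kind of workaround this produces looks something like the generic reconnect-with-backoff wrapper below. Nothing in it is Heroku-specific, which is exactly the point: the code exists only to absorb restarts the team does not control. The `connect` callable is a placeholder for any real connection factory:

```python
import time
import random

def connect_with_retry(connect, attempts=5, base_delay=0.1):
    """Reconnect wrapper of the kind teams add to survive unannounced
    process restarts. `connect` is any zero-arg callable that returns a
    connection object or raises ConnectionError."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter to avoid thundering herds
            # when many workers reconnect at once after a restart.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection reset")
    return "connection"

assert connect_with_retry(flaky_connect, base_delay=0.001) == "connection"
assert calls["n"] == 3
```

Every service accumulates a wrapper like this, and the aggregate is complexity that buys no product value.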
Persistent, stateful workloads (databases that need to run close to the application, stateful processing pipelines, machine learning inference services with large model files, legacy applications with filesystem dependencies) are all architecturally difficult on Heroku’s ephemeral, shared infrastructure model.
How AWS-native alternatives solve stateful workloads:
Kubernetes-based platforms running on AWS handle stateful workloads through persistent volumes: storage that survives pod restarts and is attached to specific workloads. StatefulSets provide stable network identities and persistent storage for workloads that require them. Amazon EFS provides shared filesystem access across multiple pods when applications need it.
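A minimal sketch of the manifest fields involved, expressed as a Python dict whose field names follow the Kubernetes `apps/v1` API; the image name, replica count, and storage size are placeholders:

```python
import json

# Sketch of the StatefulSet fields that give each replica its own
# persistent volume and a stable network identity.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "worker"},
    "spec": {
        "serviceName": "worker",  # gives pods stable DNS names (worker-0, worker-1, ...)
        "replicas": 2,
        "selector": {"matchLabels": {"app": "worker"}},
        "template": {
            "metadata": {"labels": {"app": "worker"}},
            "spec": {"containers": [{
                "name": "worker",
                "image": "example/worker:latest",  # placeholder image
                "volumeMounts": [{"name": "data", "mountPath": "/var/data"}],
            }]},
        },
        # Each replica gets its own PersistentVolumeClaim from this template;
        # the volume survives pod restarts and rescheduling.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}
print(json.dumps(statefulset, indent=2))
```

The `volumeClaimTemplates` section is the structural difference from Heroku's model: local state written to `/var/data` is still there after a restart.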
More significantly, the AWS service ecosystem provides purpose-built managed services for every category of stateful workload. Amazon RDS runs inside the team’s VPC, private, configurable, and not shared with any other tenant. ElastiCache provides Redis with VPC isolation and persistence configuration. Amazon SQS provides reliable message delivery for background job queues with dead-letter queue handling and retry logic.
LocalOps supports web services, background workers, cron jobs, internal services, and stateful workloads as first-class service types. Each is configured and scaled independently based on its own workload signals, rather than being forced into Heroku’s dyno model, which treats all workloads the same.
For teams evaluating Heroku open source alternatives or AWS Heroku alternative platforms for stateful workload support, the Kubernetes persistent volume model, combined with AWS managed services, is the correct architectural foundation. It is what modern production SaaS applications require and what Heroku’s design explicitly does not support.
Limitation 5: Heroku’s Manual Dyno Scaling Fails Under Real Traffic Patterns
Heroku’s scaling model is manual. When an application needs more capacity, the options are upgrading to a larger dyno tier or adding more dynos. Both decisions require human intervention. Both result in paying for the selected capacity level continuously, whether or not the traffic justifies it.
This model has three failure modes at scale that surface consistently across engineering teams.
The over-provisioning trap. Teams running workloads with variable traffic (B2B applications that peak during business hours, consumer applications that spike around campaigns, event-driven systems with bursty processing requirements) must provision for peak capacity and pay for it at all times. There is no mechanism to automatically scale down when traffic drops, so teams pay peak-capacity rates through every off-peak period, and the cost compounds with the service count.
The tier-jump problem. Heroku’s pricing scales in discrete tiers, not proportionally with usage. When resource requirements cross a tier boundary, the cost jumps to the next tier regardless of whether actual usage justifies the full tier ceiling. For finance teams preparing infrastructure forecasts, this makes cost modeling unreliable. Infrastructure spend jumps at irregular intervals unrelated to business metrics.
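The tier-jump effect is easy to see with made-up numbers. The tier prices below are hypothetical and are not Heroku's actual rates; they exist only to show how a small change in requirement produces a large jump in cost:

```python
# Hypothetical tiered pricing vs. proportional usage billing.
# (ceiling in GB of RAM, price in $/month) — illustrative numbers only.
TIERS = [(1.0, 50), (2.5, 250), (14.0, 1500)]

def tiered_cost(ram_gb):
    """Pay for the ceiling of whichever tier fits the requirement."""
    for ceiling, price in TIERS:
        if ram_gb <= ceiling:
            return price
    raise ValueError("requirement exceeds largest tier")

def proportional_cost(ram_gb, price_per_gb=25):
    """Usage-based billing: cost tracks the requirement linearly."""
    return ram_gb * price_per_gb

# Crossing a tier boundary: needing 1.1 GB instead of 1.0 GB quintuples
# the tiered bill, while the proportional bill barely moves.
assert tiered_cost(1.0) == 50
assert tiered_cost(1.1) == 250
assert proportional_cost(1.5) == 37.5
```

This discontinuity is why tier-based infrastructure spend jumps at irregular intervals that finance teams cannot map to business metrics.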
The response latency problem. When a traffic spike arrives and the team has not pre-provisioned adequate capacity, Heroku’s response time for manual scaling is measured in minutes of human decision-making plus minutes of dyno startup time. For high-concurrency APIs serving real-time workloads, this latency is visible to customers as performance degradation during exactly the moments when reliable performance matters most.
How event-driven horizontal autoscaling solves this:
Kubernetes horizontal pod autoscaling responds to real workload signals (CPU utilization, memory pressure, request queue depth, and custom application metrics) automatically and within seconds. When traffic increases, the platform scales out. When traffic drops, it scales back in. Teams pay for actual compute consumption proportional to real usage, not for the tier ceiling required to handle the peak.
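The core of this behavior is a simple formula. The sketch below implements the published Kubernetes HPA calculation, `desired = ceil(current_replicas * current_metric / target_metric)`; the real controller layers tolerance bands and stabilization windows on top of it:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=20):
    """The core Kubernetes HPA scaling calculation, clamped to the
    configured replica bounds. Metrics are in any consistent unit
    (e.g. average CPU percentage across pods)."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6.
assert desired_replicas(4, 90, 60) == 6
# Traffic drops to 15% average CPU -> scale back in to 2.
assert desired_replicas(6, 15, 60) == 2
```

No human is in the loop for either decision, and the scale-in step is what eliminates the over-provisioning cost described above.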
The scaling configuration on LocalOps is set once based on the application’s resource requirements and traffic patterns. From that point, scaling decisions are made by the platform in response to real signals, without human intervention, without manual dyno configuration, and without the over-provisioning that Heroku’s model structurally requires.
For teams evaluating alternatives to Heroku, specifically because of scaling problems, the difference between manual vertical scaling and event-driven horizontal autoscaling is not marginal. It is the difference between an infrastructure model designed for predictable linear workloads and one designed for the variable, bursty traffic patterns that real SaaS applications experience.
See how autoscaling works on LocalOps →
The Pattern Across All Five Limitations
These five limitations share a common structure. Each one is invisible at small scale, where Heroku’s simplicity provides genuine value and the constraints are either absent or manageable. Each one becomes a strategic constraint at the growth stage (Series A and beyond), when enterprise deals arrive, when architecture needs to evolve, when compliance frameworks become sales requirements, and when infrastructure cost compounds past the point of easy justification.
The teams that navigate this transition well are the ones that discover these limitations before they surface as crises. The CTO who identifies Heroku’s compliance ceiling six months before the first enterprise deal closes has time to plan a migration under calm conditions. The CTO who discovers it during a live security review does not.
How LocalOps + AWS Addresses All Five Limitations
LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.
Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles with least-privilege policies, encrypted secrets via AWS Secrets Manager, CloudTrail audit logging, and a complete Prometheus + Loki + Grafana observability stack, automatically. No Terraform. No Helm charts. No manual configuration. Production-ready in under 30 minutes.
From this point onwards, the developer experience is identical to Heroku. Push to your configured branch. LocalOps builds, containerizes, deploys, runs health checks, and handles rollback automatically. Preview environments spin up on every pull request. Horizontal autoscaling runs by default based on real traffic signals. Stateful workloads run as first-class service types with persistent storage.
The infrastructure runs in your AWS account. VPC isolation, private networking, IAM-based access control, audit logging, and compliance-ready defaults are all present from the first deployment. If you stop using LocalOps, the infrastructure keeps running. Nothing needs to be rebuilt.
“Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. Partnering with LocalOps has been one of our best technical decisions.” – Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy
“Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10–12 man months of effort, all of which LocalOps has saved for us.” – Gaurav Verma, CTO and Co-founder, SuprSend
Get started for free, first environment on AWS in under 30 minutes →
Frequently Asked Questions
How do teams get VPC isolation and private networking when moving off Heroku?
Moving to an AWS-native Internal Developer Platform provisions a dedicated VPC for every environment automatically. Private subnets separate application, database, and cache tiers. Security groups control traffic between services at the network layer. Services communicate over private IP addresses, never over the public internet. LocalOps provisions this VPC architecture automatically from the first deployment, following AWS Well-Architected standards. There is no manual VPC configuration required and no separate security project to complete before going to production.
What are the specific compliance risks of running HIPAA or SOC 2 workloads on Heroku?
HIPAA requires demonstrable control over the infrastructure handling protected health information: access controls, audit logging, and transmission security, all implemented and auditable by the team. SOC 2 Type II requires consistent control over infrastructure over time. On Heroku, the infrastructure implementing these controls belongs to Heroku; the team cannot independently configure, audit, or demonstrate control over it. Moving to AWS via LocalOps puts the compliance surface in the team’s own AWS account, which holds SOC 2 Type II and PCI DSS certifications and supports HIPAA workloads under a BAA and GDPR compliance. The compliance architecture is in place from the first deployment as a default.
How do teams implement least-privilege access and audit logging after leaving Heroku?
AWS IAM provides role-based access control at every layer of the infrastructure stack. LocalOps provisions IAM roles following least-privilege principles automatically; every service gets a role scoped to exactly the AWS resources it needs. AWS CloudTrail logs every API call to every AWS service automatically, providing a comprehensive audit trail for SOC 2 audits and enterprise security reviews. Both are present by default in every environment LocalOps provisions, not as separate configuration projects.
Why is Heroku unsuitable for stateful workloads?
Heroku’s dyno filesystem is ephemeral; any data written locally is lost when the dyno restarts. There is no persistent storage that survives dyno restarts. Database connection pools, WebSocket connections, and long-running stateful processes all behave unpredictably when the underlying process can restart at any time without the team’s control. Kubernetes running on AWS solves this through persistent volumes that survive pod restarts, StatefulSets that provide stable network identities, and Amazon EFS for shared filesystem access. LocalOps supports stateful workloads as a first-class service type.
How does event-driven autoscaling differ from Heroku’s dyno scaling model?
Heroku scaling is manual: humans decide when to add dynos or upgrade tiers, and teams pay continuously for whatever capacity level is configured. Kubernetes horizontal pod autoscaling responds to real workload signals (CPU, memory, and request queue depth) automatically and within seconds. When traffic increases, the platform scales out. When it drops, it scales back in. Teams pay for actual consumption rather than the tier ceiling required to handle peak load. For applications with variable traffic, the cost and reliability difference is significant.
Is LocalOps the best Heroku alternative for teams with compliance requirements?
For teams with SOC 2, HIPAA, GDPR, or enterprise compliance requirements, the defining characteristic of any Heroku alternative is whether infrastructure runs in the team’s own cloud account. Managed PaaS alternatives like Render and Railway run on the vendor’s shared cloud; the compliance ceiling is vendor-defined, the same structural problem as Heroku. LocalOps provisions infrastructure into the team’s own AWS account with compliance-ready defaults, private subnets, least-privilege IAM, encrypted secrets, and CloudTrail audit logging. The compliance surface is AWS, which holds the relevant certifications, not a vendor’s representations about what they support.
What does a Rails hosting Heroku alternative look like for compliance-sensitive Rails applications?
Rails applications have specific infrastructure requirements: Sidekiq workers, Postgres with connection pooling, Action Cable with Redis, Active Storage with object storage, and scheduled tasks. LocalOps handles all of these as first-class service types running inside a dedicated VPC. Web processes and Sidekiq workers scale independently. RDS provides Postgres with VPC isolation. ElastiCache provides Redis for Action Cable and job queuing. All services communicate over private networking. For Rails teams with compliance requirements, this is the Rails hosting Heroku alternative that satisfies enterprise security reviews and infrastructure ownership requirements, with the developer experience intact.
Key Takeaways
The infrastructure limitations Heroku hides from small teams are the same limitations that become strategic constraints at the growth stage. VPC isolation and private networking become enterprise deal requirements. HIPAA and SOC 2 compliance become sales blockers. Least-privilege access and audit logging become audit requirements. Stateful workload support becomes an architectural necessity. Event-driven autoscaling becomes a cost and reliability requirement.
None of these surfaces as a crisis at small scale. All of them surface before Series B for any B2B SaaS team with enterprise ambitions.
The best Heroku alternatives in 2026 are those that solve all five limitations simultaneously, not by adding compliance features on top of a managed PaaS, but by running infrastructure in the team’s own AWS account with compliance-ready defaults from the first deployment.
For engineering leaders evaluating alternatives to Heroku, the frame that produces the best decisions is the three-year question: what infrastructure limitations will constrain the business at the next stage? The answer to that question consistently points toward infrastructure ownership on AWS, with a platform layer that preserves the developer experience Heroku provided without the constraints it imposed.
Schedule a Migration Call → Our engineers review your current Heroku setup and walk through the AWS migration for your specific stack.
Get Started for Free → First production environment on AWS in under 30 minutes. No credit card required.
Read the Heroku Migration Guide → Full technical walkthrough, database migration, environment setup, DNS cutover.



