Rails Hosting After Heroku: The Best Alternatives for Production Ruby Applications
What actually breaks on Heroku for Rails at scale, and how production teams rebuild their stack for reliability, performance, and control.
TL;DR
What this covers: Why Heroku’s architecture creates specific failure modes for Rails applications at production scale, what capabilities a genuine Rails hosting alternative must provide, how modern platforms handle the workloads Heroku manages with fragile add-ons, and what a production-grade CI/CD pipeline looks like for a Rails team moving off Heroku.
Who it is for: CTOs and VPs of Engineering running Ruby on Rails applications on Heroku who are evaluating production alternatives, specifically teams with Postgres databases, Sidekiq background workers, and growth-stage traffic that is making Heroku’s limitations visible.
The conclusion: Heroku was the right Rails hosting choice for a long time. It understood the Rails application model, handled Procfile-based process management naturally, and abstracted infrastructure decisions that most Rails teams did not need to make. The reason teams move off it is not that Heroku stopped understanding Rails; it is that Rails applications at production scale need infrastructure capabilities that Heroku’s architecture cannot provide: persistent stateful workloads that survive dyno cycling, background job queues that do not depend on fragile add-on integrations, Active Storage and Action Cable deployments that work without platform workarounds, and CI/CD workflows that match modern Git-based development practices. This guide covers what those capabilities look like on a modern alternative.
See what your Rails stack looks like on AWS (EKS + RDS + Redis + S3), fully set up in your own account
→ Get a live environment in under 30 minutes
The Best Heroku Alternative for Rails in Production: What the Stack Needs to Cover
For a Rails application running in production with Postgres, Sidekiq, and real traffic, the hosting alternative needs to satisfy a specific set of requirements. The list is not long, but each item is non-negotiable for a production-grade deployment.
A genuine Rails hosting alternative in 2026 needs to handle the full application model.
Want to see how this full Rails stack is provisioned (EKS, RDS, Redis, S3) without writing Terraform or Kubernetes YAML?
→ Explore how LocalOps sets up production-ready infrastructure
The platform that comes closest to this for Rails teams is an AWS-native Internal Developer Platform like LocalOps. Not because AWS is the only option, but because the combination of EKS for persistent workloads, RDS for Postgres, ElastiCache for Redis, and S3 for Active Storage maps the Rails application model to managed AWS services that are production-proven, priced on actual consumption rather than arbitrary tiers, and controllable by the platform team in ways Heroku does not allow.
Why EKS specifically for Rails workloads:
Kubernetes on EKS allows the Rails application model to be expressed correctly. Web processes, Sidekiq workers, scheduled jobs (Whenever or sidekiq-scheduler), and cable servers can all run as separate Kubernetes Deployments with independent scaling policies, independent resource allocation, and independent restart behaviour. A Sidekiq worker that needs to process a memory-intensive job can be allocated 2GB of RAM without affecting the web process’s resource allocation. A web process under traffic pressure can scale to fifteen replicas without triggering a Redis tier jump. None of this requires application code changes; it requires a hosting model that cleanly expresses the Rails multi-process architecture.
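Expressed as Kubernetes manifests, that separation is direct. A hedged sketch of the web tier (the image name, labels, and resource numbers are all illustrative; a platform like LocalOps generates the equivalent for you):

```yaml
# Web tier as its own Deployment with its own scaling and resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-web
spec:
  replicas: 3                      # scale web without touching workers
  selector:
    matchLabels: { process: web }
  template:
    metadata:
      labels: { process: web }
    spec:
      containers:
        - name: web
          image: registry.example.com/app:sha-abc123   # illustrative tag
          command: ["bundle", "exec", "puma", "-C", "config/puma.rb"]
          resources:
            requests: { cpu: 250m, memory: 512Mi }
```

A second, analogous Deployment would run bundle exec sidekiq with its own replica count and, say, a 2Gi memory request; that is how the independent resource allocation described above is actually expressed.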
See how Rails workloads map to Kubernetes in practice (web, Sidekiq, schedulers, Action Cable)
→ Book a walkthrough with an engineer
Unified Platform for Rails, Node.js, Python, Django, and Go: Why It Matters for Platform Teams
Most engineering organisations running Rails are not running only Rails. The Rails application is the core product, but the surrounding architecture includes Node.js services for specific workloads, Python services for data processing or ML inference, Go services for high-performance API layers, and Django applications for internal tooling.
The failure mode of Heroku at this scale is not that it cannot run these workloads (it can), but that each language runtime requires a separate buildpack, each buildpack has its own behaviour and failure modes, and the operational model across language runtimes is inconsistent enough to create cognitive overhead for the platform team.
More significantly, Heroku’s observability story is fragmented across services regardless of language. A Rails service and a Python service running on the same Heroku team both send logs to the same Papertrail drain, and both have their own New Relic agents, but correlating a request that touches both services during an incident requires stitching together data from multiple places with no unified service map.
Running multiple services beyond Rails?
→ See how LocalOps handles multi-service deployments on one platform
What a unified platform looks like across language runtimes:
A container-native deployment platform is inherently language-agnostic. Docker containers encapsulate the runtime; the platform does not know or care whether the application inside is Rails, Django, Node.js, or Go. The deployment model is identical: push to branch, platform builds the container image, deploys to Kubernetes, serves traffic.
LocalOps handles multi-language stacks with a consistent deployment model across all services. A Rails web process, a Python background worker, a Node.js API service, and a Go microservice all deploy through the same pipeline, log to the same Loki instance, and surface metrics in the same Grafana dashboard. An incident that crosses service boundaries (say, a Rails request that calls a Python inference service that responds slowly) is traceable in a single observability interface.
For platform teams managing heterogeneous stacks, this consistency is the difference between having a platform model and having a collection of separately managed services with a common billing account.
CI/CD consistency across language runtimes:
The CI/CD story on Heroku is language-runtime-specific. Rails applications use the Ruby buildpack. Node.js applications use the Node.js buildpack. When buildpack versions change, when build-time dependencies differ across services, and when environment variable requirements differ between language runtimes, the CI/CD behaviour is inconsistent in ways that are difficult to reason about at the platform level.
A container-native platform uses Dockerfiles (or auto-generated container builds) that fully specify the build environment per service. The build environment for a Rails application (Ruby version, Bundler version, Node.js version for asset compilation, and system library dependencies) is explicitly declared and reproducible. The same Dockerfile builds the same image in every environment, eliminating the class of “works on staging, fails on production because the buildpack version differs” failures that Heroku’s buildpack model produces.
How Modern Alternatives Handle Rails-Specific Workloads Heroku Manages Badly
Active Storage: From Workaround to Native
On a container-native platform using AWS EKS, Active Storage’s persistent storage requirement is satisfied without application-level workarounds. The Rails application mounts S3 as its Active Storage backend, not as a compromise, but as the correct production architecture. File uploads go directly to S3. Variant generation stores outputs to S3. Temporary files in the container filesystem are genuinely temporary and do not affect application state.
The operational difference from Heroku is that the ephemeral filesystem constraint no longer shapes application behaviour. Variant generation works predictably. Direct upload flows do not depend on temp file staging that survives in some dyno configurations and fails in others. The application behaves consistently across all replicas because no replica depends on local filesystem state.
LocalOps environments include S3 bucket provisioning as part of the standard environment setup. Teams configure config/storage.yml to point at the provisioned S3 bucket. Active Storage works in production from day one without add-on configuration, drain setup, or architectural accommodations.
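On the Rails side this is the standard S3 service declaration in config/storage.yml; a sketch assuming the region and bucket name arrive as environment variables (the variable names here are illustrative):

```yaml
# config/storage.yml
amazon:
  service: S3
  region: <%= ENV["AWS_REGION"] %>
  bucket: <%= ENV["ACTIVE_STORAGE_BUCKET"] %>   # provisioned per environment
```

config/environments/production.rb then selects it with config.active_storage.service = :amazon, and no further filesystem accommodation is needed.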
Action Cable: Persistent Connections Without Platform Constraints
Action Cable on Kubernetes eliminates the connection timeout constraints that shape Action Cable architecture on Heroku. Kubernetes pods support long-lived WebSocket connections natively. The platform does not impose connection timeouts that require client-side reconnection logic as a reliability mechanism.
The Redis connection count problem from Heroku also resolves structurally. On EKS with ElastiCache, Redis connection limits are governed by the ElastiCache node type’s actual capacity, not by Heroku’s tier pricing model, which forces Redis upgrades based on connection count thresholds rather than actual resource consumption. A Rails application with 500 concurrent Action Cable subscribers connecting through 10 web pods uses 10 Redis connections from the server side. ElastiCache at an appropriate node type handles this without tier-jump pricing pressure.
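Nothing about the Rails configuration changes to take advantage of this; it is the stock Redis adapter pointed at the ElastiCache endpoint. A sketch (the environment variable and prefix are illustrative):

```yaml
# config/cable.yml
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") %>   # e.g. the ElastiCache endpoint
  channel_prefix: myapp_production     # namespace within the shared Redis
```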
Sidekiq: Persistent Workers Without Daily Interruption
Sidekiq workers on Kubernetes run as persistent Deployments. They are not subject to daily restarts as part of normal platform operation. They restart only when a new deployment is pushed or when a pod fails health checks.
When a new deployment is pushed, Kubernetes performs a rolling update: new Sidekiq pods start and become healthy before old ones terminate. Sidekiq’s shutdown signal handling (SIGTERM) gives in-flight jobs a configurable timeout to complete before the process exits. Jobs that cannot complete within the timeout are requeued for processing by the new pod.
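The two timeouts involved should agree: Sidekiq’s own shutdown timeout needs to be shorter than the pod’s termination grace period, so Sidekiq can requeue unfinished jobs before Kubernetes sends SIGKILL. A hedged sketch with illustrative values:

```yaml
# config/sidekiq.yml
# After SIGTERM, Sidekiq waits up to 25s for in-flight jobs,
# then pushes whatever is still running back onto the queue.
:timeout: 25
:concurrency: 10
:queues:
  - default
  - mailers

# The matching Kubernetes pod spec would set a slightly longer window:
#   terminationGracePeriodSeconds: 30
```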
This is meaningfully more reliable than Heroku’s daily dyno restart model for Sidekiq workloads. Teams running batch processing, data pipelines, or long-running background jobs on Sidekiq find that the daily restart window on Heroku requires explicit engineering investment to handle safely: retry logic, idempotency guarantees, and job state persistence. On Kubernetes, the restart behaviour is controlled, predictable, and aligned with how long-running background workloads should behave.
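The idempotency guarantees mentioned above usually come down to making each job safe to run twice. A minimal Ruby sketch of the pattern (the ledger class and method names are invented for illustration; in a real application the “already done” check would be a database status column or unique constraint):

```ruby
require "set"

# Stand-in for a database table tracking which payments are settled.
class PaymentLedger
  def initialize
    @settled = Set.new
  end

  # Idempotent: if a worker is killed after settling but before the job
  # is acknowledged, the retried job sees the record and does nothing.
  def settle(payment_id)
    return :already_settled if @settled.include?(payment_id)
    @settled << payment_id
    :settled
  end
end

ledger = PaymentLedger.new
puts ledger.settle(42)   # first run does the work
puts ledger.settle(42)   # a retry after an interrupt is a safe no-op
```

With this shape, an interrupted-and-requeued job is harmless whether the interruption comes from a Heroku dyno cycle or a Kubernetes rolling update.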
Asset Pipeline: Reproducible Builds Without Buildpack Fragility
Container-based builds for Rails applications handle asset precompilation in a Dockerfile layer that is fully specified and reproducible. The build environment (Ruby version, Node.js version, and system library versions) is declared explicitly in the Dockerfile. The assets:precompile step runs in the same environment on every build, in every environment. There is no buildpack version drift, no build-time environment variable injection that affects compilation behaviour, and no memory pressure from shared build infrastructure.
Teams running Webpacker, Shakapacker, or Vite Ruby alongside Sprockets benefit directly from this model. The JavaScript build toolchain is specified in the Dockerfile: the Node.js version is pinned, npm dependencies are installed from package-lock.json, and the Webpack or Vite build executes in a layer with access to the full build environment. The compiled assets are baked into the container image and deployed consistently to every replica. There is no per-dyno asset compilation, no CDN configuration required to make assets available across dynos, and no compile-time failures caused by configuration differences between the build environment and the runtime.
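A typical multi-stage Dockerfile for this looks roughly as follows. This is a sketch, not a drop-in file: the Ruby and Node versions, system packages, and dummy SECRET_KEY_BASE trick are illustrative and depend on the application:

```dockerfile
# ---- build stage: full toolchain, explicitly pinned versions ----
FROM ruby:3.3-slim AS build
RUN apt-get update && apt-get install -y build-essential nodejs npm libpq-dev
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Assets compile in the same pinned environment on every build.
RUN SECRET_KEY_BASE=dummy bundle exec rails assets:precompile

# ---- production stage: runtime only, compiled assets baked in ----
FROM ruby:3.3-slim
RUN apt-get update && apt-get install -y libpq5 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY --from=build /app /app
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```

The production stage carries no Node.js and no build tooling, which is where the smaller image and the reproducibility guarantees both come from.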
Why Heroku Is Architecturally Unsuitable for Persistent, Stateful Workloads
This is the structural incompatibility that underlies most of the Rails-specific failure modes described above, and it is worth stating clearly rather than leaving it implicit.
Heroku’s architecture is designed for stateless, ephemeral processes. The dyno model treats compute as fungible: dynos start, serve requests or process jobs, and stop. The ephemeral filesystem means no state persists between dyno restarts. The daily restart cycle means no process runs indefinitely. This model is intentional; it makes Heroku simple to reason about and resilient to individual process failures.
The problem is that production Rails applications are not fully stateless and ephemeral. They have components that are legitimately persistent and stateful:
Background job queues (Sidekiq) maintain in-progress job state in memory during execution. A job interrupted mid-execution is in an indeterminate state. Heroku’s restart model treats this as acceptable because the ephemeral design expects processes to be interruptible. Sidekiq’s operational model, by contrast, expects processes to complete in-flight work before restarting.
WebSocket connections (Action Cable) are inherently persistent. A WebSocket connection to a Heroku dyno is subject to the dyno’s connection timeout policies and restart behaviour. The connection abstraction that Heroku provides is not designed for long-lived stateful connections.
File system operations (temp file staging, variant generation, direct upload flows) assume that the local filesystem persists at least for the duration of a request-response cycle and in many cases across requests. Heroku’s ephemeral filesystem satisfies the first assumption but not reliably the second, particularly across dyno restarts.
What modern infrastructure design does instead:
Kubernetes pods on EKS are persistent by default. A pod runs until explicitly replaced by a deployment update or until it fails a health check. Pods are not subject to scheduled daily restarts. Persistent volumes can be mounted for workloads that genuinely need local filesystem persistence. Long-lived connections are supported without platform-imposed timeouts.
The design principle is different: instead of treating all compute as ephemeral and requiring applications to accommodate that, Kubernetes allows workloads to declare their persistence requirements explicitly. Stateless web processes use rolling deployments and horizontal autoscaling. Stateful background workers use persistent pod lifecycles with graceful shutdown handling. Workloads with persistent storage requirements mount persistent volumes. Each workload type gets the persistence model it needs.
For Rails applications, this means the application can be designed around what the feature needs rather than around what the platform will reliably support. That is the structural difference between Heroku and modern alternatives for production Rails workloads.
Why CI/CD Workflows Built on Heroku Fail at Scale, and What Production-Grade Looks Like
Heroku’s deployment model was built around a specific Git-based workflow: push to a branch, Heroku deploys. For individual developers or small teams, this is simple and effective. For engineering organisations with multiple teams, feature branch workflows, review environments, and deployment promotion across staging and production, the model breaks in specific ways.
The specific CI/CD failure modes on Heroku:
No native review app reliability at scale. Heroku’s review apps feature spins up ephemeral environments per pull request. In practice, review apps on Heroku have reliability problems at scale: slow provisioning, environment variables that do not correctly inherit from the parent app configuration, and ephemeral environments that do not accurately replicate production because Heroku’s add-on provisioning in review apps does not match production configuration.
The deployment pipeline lacks integration with modern Git workflows. Heroku Pipelines, the mechanism for promoting builds from staging to production, works for simple linear workflows. Teams using trunk-based development with feature flags, teams with multiple staging environments for different workstreams, or teams that need to deploy specific commits rather than the latest main branch find that Heroku’s pipeline model does not accommodate their workflow without significant workarounds.
Build failures are opaque. When a Heroku build fails (during slug compilation, buildpack execution, or asset precompilation), the failure message is often insufficient to diagnose the root cause quickly. Buildpack builds are black boxes with limited introspection. Teams spend engineering time decoding build failures that a container build with explicit Dockerfile layers would surface clearly.
No native canary or blue-green deployment. Heroku Pipelines support build promotion but not sophisticated deployment strategies. Blue-green deployments require manual Heroku preboot configuration. Canary deployments (routing a percentage of traffic to a new version to validate it before full rollout) are not natively supported. For teams deploying to production multiple times per day with reliability requirements, the absence of native traffic-splitting deployment strategies is a meaningful operational gap.
What a production-grade Rails CI/CD pipeline looks like:
A production-grade CI/CD pipeline for a Rails application in 2026 has a few defining characteristics:
Container-based builds that are environment-consistent. The same Docker image that is tested in CI is deployed to staging and then to production. There is no slug re-compilation, no buildpack re-execution, no environment-specific build behaviour. The image is built once, tested, and promoted. What passes CI is exactly what runs in production.
Branch-triggered environment provisioning. Feature branches trigger the provisioning of isolated review environments automatically. The review environment is not a stripped-down approximation of production; it is the same Kubernetes deployment with the same configuration, the same managed Postgres instance, and the same observability stack. Developers can test against an environment that accurately represents production behaviour.
Deployment strategy configuration per service. Rolling deployments by default. Blue-green for services where zero-downtime cutover is critical. Canary traffic splitting for high-risk releases. These are configuration choices per service, not platform limitations that require workarounds.
Integrated observability in the deployment event stream. Deployments appear as events in the metrics and log timeline. When a deployment at 2:35 PM correlates with an error rate spike at 2:36 PM, that correlation is visible in Grafana without cross-referencing deployment logs in a separate interface.
LocalOps delivers this pipeline model for Rails teams. A push to a branch triggers a Docker build, an image push to ECR, and a deployment to the target EKS environment automatically. Review environments are provisioned when a pull request opens and torn down on merge. Deployments are visible as events in the Grafana dashboard. Rolling deployments run by default. Pipeline configuration is per-service and per-environment, not global and opaque.
How LocalOps Addresses the Full Rails Production Stack
LocalOps is an AWS-native Internal Developer Platform built for teams replacing Heroku. For Rails teams specifically, it handles the full production stack:
Web processes run on EKS with horizontal autoscaling driven by CPU and request-rate metrics. Web pods scale out under traffic pressure and back in during off-peak periods. No manual replica count management.
Sidekiq workers run as persistent Kubernetes Deployments with independent resource allocation from web processes. Daily restarts do not occur. Rolling deployments give in-flight jobs time to complete before old pods terminate.
Postgres runs on Amazon RDS with automated backups, read replica support, Multi-AZ availability, and storage autoscaling. Pricing scales with actual resource consumption, not with row count tiers.
Redis runs on Amazon ElastiCache with connection count driven by actual workload, not by pricing tier thresholds. Sidekiq, Action Cable, and Rails cache all share the ElastiCache instance with appropriate namespace separation.
Active Storage connects to S3 buckets provisioned as part of the environment. No architectural workarounds. No ephemeral filesystem fragility.
Action Cable runs on web pods without platform-imposed connection timeouts. Redis pub/sub backend connects to ElastiCache.
Asset pipeline builds inside Docker during CI. Compiled assets are baked into the container image. No per-dyno compilation, no buildpack fragility.
Observability (Prometheus, Loki, and Grafana) is included in every environment. Rails application metrics, Sidekiq job throughput and error rates, database performance, and Redis connection metrics are all available from day one without add-on configuration.
Sign up for free at LocalOps, connect your AWS account, and connect your GitHub repository. LocalOps provisions the full environment automatically in under 30 minutes. Your first Rails application is deployed to AWS without writing Terraform, Helm charts, or Kubernetes YAML.
“Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. Partnering with LocalOps has been one of our best technical decisions.” – Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy
“Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10–12 man months of effort, all of which LocalOps has saved for us.” – Gaurav Verma, CTO and Co-founder, SuprSend
Frequently Asked Questions
What is the best Heroku alternative for Rails in production with Postgres, Sidekiq, and autoscaling?
For teams that need production-grade Rails hosting with all three (Postgres, Sidekiq, and horizontal autoscaling), an AWS-native Internal Developer Platform like LocalOps is the strongest option. It maps the full Rails application model to managed AWS services: EKS for web and worker processes, RDS for Postgres, ElastiCache for Redis and Sidekiq, and S3 for Active Storage. Autoscaling runs horizontally on EKS rather than vertically through dyno tiers. Sidekiq workers run as persistent Kubernetes Deployments without daily interruption. The developer experience stays close to Heroku’s (push to branch, service deploys) without requiring Kubernetes expertise from the engineering team.
How does the Sidekiq migration from Heroku to Kubernetes actually work?
Sidekiq on Kubernetes runs as a separate Kubernetes Deployment from the Rails web process. The Procfile convention from Heroku maps directly: worker: bundle exec sidekiq becomes a separate Deployment in Kubernetes with its own replica count, resource allocation, and scaling policy. LocalOps reads the Procfile and generates the corresponding Kubernetes resources automatically during environment setup. The Sidekiq configuration (concurrency, queue weights, and the Redis connection URL) is passed through environment variables or Kubernetes secrets exactly as on Heroku. The operational difference is that Sidekiq workers are no longer subject to daily dyno restarts, and rolling deployments give in-flight jobs time to finish before old pods terminate.
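Concretely, the worker line of a Procfile maps to a Deployment along these lines (a sketch of the kind of resource a platform generates; names, replica counts, and the secret reference are illustrative):

```yaml
# Procfile:  worker: bundle exec sidekiq -C config/sidekiq.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-worker
spec:
  replicas: 2                            # independent of the web tier
  selector:
    matchLabels: { process: worker }
  template:
    metadata:
      labels: { process: worker }
    spec:
      terminationGracePeriodSeconds: 30  # longer than Sidekiq's own timeout
      containers:
        - name: worker
          image: registry.example.com/app:sha-abc123
          command: ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]
          envFrom:
            - secretRef:
                name: app-secrets        # REDIS_URL and friends, as on Heroku
```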
Can teams run Rails, Node.js, and Python services from the same deployment platform on LocalOps?
Yes, LocalOps deploys any containerised workload to the same EKS cluster. A Rails API, a Node.js frontend server, and a Python ML inference service all deploy through the same pipeline, log to the same Loki instance, and surface metrics in the same Grafana dashboard. Language runtime differences are encapsulated in each service’s Dockerfile. The platform layer is language-agnostic. For platform teams managing heterogeneous stacks, this means a consistent deployment model, consistent observability, and consistent incident response across all services regardless of language runtime.
What does the Rails asset pipeline look like in a Docker-based deployment?
The standard pattern for Rails asset precompilation in Docker uses a multi-stage build: a build stage installs all dependencies, runs bundle exec rails assets:precompile, and produces compiled assets; a production stage copies only the compiled assets and production dependencies from the build stage, leaving build tooling behind. This produces a smaller container image than a single-stage build and ensures that asset compilation always runs in a controlled, reproducible environment. Node.js is available during the build stage for Webpacker, Shakapacker, or Vite Ruby compilation. The compiled assets are baked into the image and consistent across all replicas: no per-dyno compilation, no CDN required to serve assets consistently across a multi-replica deployment.
How do review environments on LocalOps compare to Heroku Review Apps for Rails applications?
LocalOps provisions review environments as full Kubernetes deployments in isolated namespaces upon opening a pull request. The review environment includes the Rails application, a Postgres database (RDS instance or shared cluster with namespace isolation), Redis, and the full observability stack. It is not a stripped-down approximation of production; it is the same deployment configuration. Environment variables are inherited from the parent environment’s configuration. The review environment tears down automatically on PR merge or close. For Rails teams doing feature branch development, the practical difference from Heroku Review Apps is reliability: review environments on Kubernetes are provisioned consistently and behave like production rather than like an approximation of it.
Why is Heroku architecturally unsuitable for long-running Rails background jobs?
Heroku’s dyno model is built for ephemeral processes that start, do work, and stop. Daily dyno restarts are a feature of this model, not a bug; the platform is designed to treat compute as interruptible. Sidekiq workers are not interruptible without consequences: a job interrupted mid-execution may leave application state in an inconsistent condition, and the engineering investment required to make every job safely interruptible is significant. On Kubernetes, pods are persistent and only restart when explicitly replaced by a deployment update or when health checks fail. Deployment updates use rolling restart with a configurable termination grace period that allows Sidekiq to finish in-flight jobs before the old pod exits. The platform’s restart model is aligned with how long-running background workloads actually behave.
Key Takeaways
Rails teams leave Heroku for a specific set of reasons that are tied to the Rails application model, not to generic infrastructure concerns. Sidekiq workers that cannot run persistently. Active Storage that requires application-level workarounds for ephemeral filesystem constraints. Action Cable deployments shaped by connection timeout policies and Redis tier pricing. Asset pipeline builds that fail in opaque ways due to buildpack environment drift. CI/CD workflows that do not accommodate modern Git-based development practices at the team scale.
These failure modes share a common root: Heroku’s architecture optimises for stateless, ephemeral processes. Production Rails applications need a hosting model that correctly handles persistent, stateful workloads alongside the stateless web tier.
Modern alternatives built on AWS address this structurally. EKS runs persistent workloads with graceful shutdown handling. RDS provides Postgres without tier-jump pricing. ElastiCache provides Redis without connection-count-driven pricing pressure. S3 makes Active Storage work correctly without filesystem workarounds. Container-based builds make asset precompilation reproducible. Rolling deployments with graceful termination make Sidekiq migrations safe.
What changes when Rails teams move to LocalOps: the infrastructure is in the team’s AWS account, workloads run persistently without daily interruption, observability is integrated rather than assembled from add-ons, and the CI/CD pipeline accommodates modern Git workflows rather than constraining them.
What stays the same: the deployment model. Push to branch, service deploys. The Rails application does not know it moved.
Get Started with LocalOps → First Rails production environment on AWS in under 30 minutes. No credit card required.
Schedule a Migration Call → Our engineers walk through your specific Rails stack, Sidekiq configuration, Active Storage setup, Action Cable deployment, and map the migration path.
Read the Heroku Migration Guide → Full technical walkthrough: database migration, Sidekiq configuration, DNS cutover, asset pipeline setup.



