<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Keep Shipping]]></title><description><![CDATA[Ideas, mental models and strategies for AIOps, platform engineering, and making cloud infrastructure self-driven & invisible.]]></description><link>https://blog.localops.co</link><image><url>https://substackcdn.com/image/fetch/$s_!athx!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59ff1079-82e4-445b-bea1-9d58ed4ad9f5_240x240.png</url><title>Keep Shipping</title><link>https://blog.localops.co</link></image><generator>Substack</generator><lastBuildDate>Sun, 12 Apr 2026 18:01:42 GMT</lastBuildDate><atom:link href="https://blog.localops.co/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[LocalOps Inc.]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[localops@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[localops@substack.com]]></itunes:email><itunes:name><![CDATA[LocalOps Inc]]></itunes:name></itunes:owner><itunes:author><![CDATA[LocalOps Inc]]></itunes:author><googleplay:owner><![CDATA[localops@substack.com]]></googleplay:owner><googleplay:email><![CDATA[localops@substack.com]]></googleplay:email><googleplay:author><![CDATA[LocalOps Inc]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Heroku Alternatives the Engineering Community Actually Recommends in 2026]]></title><description><![CDATA[Heroku Alternatives Backed by Real Migrations: Lessons from Engineering Teams in 2026]]></description><link>https://blog.localops.co/p/heroku-alternatives</link><guid 
isPermaLink="false">https://blog.localops.co/p/heroku-alternatives</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Fri, 10 Apr 2026 06:40:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QJY-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QJY-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QJY-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!QJY-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4282653,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/193586542?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QJY-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!QJY-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bdfb6a0-af5a-4377-b5c8-210a1aea29f3_2400x1345.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The most useful signal when evaluating Heroku alternatives is not vendor marketing or feature comparison pages. It is the pattern of decisions made by engineering leaders who have already undergone the migration and are willing to share what worked, what did not, and what they wish they had known before starting.</p><p>Across Reddit threads on r/devops, r/rails, r/node, and r/selfhosted, and Hacker News discussions on platform engineering and infrastructure decisions, consistent patterns have emerged in 2026 that did not exist with the same clarity two years ago. 
The community has by now been through enough Heroku migrations, some successful, some failed, and some that had to be done twice, to have developed a genuine consensus on what works at production scale.</p><p>This guide synthesizes those patterns, maps them to the structural differences between platform categories, and gives engineering leaders a framework for choosing a Heroku alternative that does not replicate the vendor lock-in problem they are trying to solve.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> What the engineering community actually recommends as Heroku alternatives in 2026, how the landscape has shifted, how top alternatives compare on TCO, and how to choose a platform that avoids recreating Heroku&#8217;s lock-in.</p><p><strong>Who it is for:</strong> CTOs and engineering leaders evaluating Heroku alternatives who want a community-validated signal alongside structural analysis.</p><p><strong>The community consensus:</strong> Managed PaaS alternatives are a transitional step, not a destination. 
Infrastructure ownership on AWS, with a platform layer that preserves developer experience, is what the community consistently validates for production SaaS at scale.</p><p><strong>Want to see what LocalOps looks like for your specific stack?</strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Schedule a walkthrough</a></p><h2><strong>How the Heroku Alternatives Landscape Has Shifted in 2026</strong></h2><p>The Heroku alternatives landscape in 2026 looks meaningfully different from what it was two years ago, and the shift is not primarily about new platforms entering the market.</p><p>The shift is in how engineering leaders are framing the decision.</p><p>In 2023 and 2024, the dominant question in the community was: <em>What is the easiest migration from Heroku?</em> Teams were evaluating alternatives primarily on migration friction, how quickly they could get off Heroku, and how similar the new experience would feel.</p><p>In 2026, the dominant question has changed: <em>What infrastructure foundation does our business need for the next three years?</em> Teams are evaluating alternatives on strategic fit, compliance capability, cost structure at scale, exit optionality, and whether the platform they choose will be the last migration they make or the first of two.</p><p>That shift in framing produces significantly different answers. The easiest migration is often not the right strategic choice. A team that moves from Heroku to Render solves the immediate cost and developer experience problem. But if they are selling to enterprise customers, they will face the same compliance conversation 18 months later, with more accumulated dependencies and far less room to address it on their own terms.</p><p><strong>Where compliance-sensitive teams are landing:</strong></p><p>Enterprise-grade, compliance-sensitive SaaS teams have converged on AWS-native infrastructure with a platform layer on top. 
The reasons are consistent across community discussions:</p><ul><li>SOC 2 and HIPAA requirements demand infrastructure running in the team&#8217;s own cloud account.</li><li>Enterprise security questionnaires require VPC configuration, private networking, and IAM audit trails that managed PaaS platforms cannot provide.</li><li>Cost efficiency at scale requires direct AWS pricing, not a PaaS margin that compounds with every service added.</li><li>Architectural flexibility requires Kubernetes-grade infrastructure, not dyno-based compute.</li></ul><p>The best Heroku alternatives for this profile in 2026 are platforms that run on AWS infrastructure the team owns, with enough abstraction that developers do not need to interact with that infrastructure directly.</p><h2><strong>What the Engineering Community Is Actually Recommending</strong></h2><p>The community signal on Heroku alternatives is worth examining directly, because it captures the failure modes that vendor comparisons do not surface.</p><h3><strong>Pattern 1: The Managed PaaS Stepping Stone</strong></h3><p>The most frequently discussed migration pattern in the community is one that involves two migrations, not one.</p><p>A team moves from Heroku to Render or Railway to reduce friction and cost. The migration goes smoothly. Developer experience is preserved. Costs drop modestly. The platform feels like a clear upgrade from Heroku for the first 12&#8211;18 months.</p><p>Then something changes. An enterprise prospect sends a security questionnaire. Or a compliance audit surfaces infrastructure requirements that the platform cannot meet. Or the cost structure at 15+ services starts looking familiar: platform margins compounding across services the same way Heroku&#8217;s did.</p><p>The team migrates again. 
This time to infrastructure they own.</p><p>The community commentary on this pattern is consistent and pointed: <em>the second migration is more expensive than going directly to infrastructure ownership would have been.</em> More accumulated dependencies to untangle. More technical debt from the intermediate platform. More urgency, because the enterprise deals are now in an active pipeline rather than a theoretical future scenario.</p><p>The observation that surfaces in nearly every thread where this pattern is discussed: <em>the teams that went to infrastructure ownership directly made the migration once.</em></p><h3><strong>Pattern 2: The Raw AWS Complexity Trap</strong></h3><p>The second common pattern is the team that moves directly to raw AWS and discovers that getting from &#8220;AWS is provisioned&#8221; to &#8220;any developer can deploy independently&#8221; is a multi-month platform engineering project.</p><p>The infrastructure is technically sound. The compliance posture is correct. The cost structure is right. But product engineers who used to deploy every 20 minutes on Heroku now file tickets with a platform team and wait 48 hours. Shipping velocity drops. The migration is blamed, even though the infrastructure itself is fine.</p><p>The community diagnosis is consistent: the technical migration succeeded; the developer experience migration failed. The team built the infrastructure layer without building the platform layer that makes the infrastructure accessible to product engineers.</p><p>The resolution discussed in these threads is almost always the same: adopt a platform layer on top of the AWS infrastructure. 
In many cases, this is specifically what brings teams to AWS-native Internal Developer Platforms: they already have the AWS foundation, and they need the developer experience layer on top.</p><h3><strong>Pattern 3: The Rails Hosting Question</strong></h3><p>Rails teams are among the most active in these discussions, and their requirements surface a specific evaluation dimension that generic platform comparisons miss.</p><p>The community consensus on Rails hosting alternatives to Heroku is clear and specific: any platform being evaluated for production Rails hosting needs to handle Sidekiq workers, Postgres with connection pooling, Active Storage with object storage, Action Cable with Redis, and scheduled tasks as first-class service types, not as workarounds or add-on integrations.</p><p>Platforms that handle these as edge cases are consistently recommended against for production Rails applications. Platforms that treat them as native deployment patterns (web processes and workers scaling independently, Redis inside the VPC, cron jobs as a first-class service type) consistently receive positive recommendations.</p><h3><strong>Pattern 4: The Infrastructure Ownership Conclusion</strong></h3><p>The most significant shift in community sentiment between 2024 and 2026 is the emergence of a clear conclusion on infrastructure ownership.</p><p>In 2024, the community was still debating whether infrastructure ownership was worth the operational complexity. In 2026, that debate has largely been settled for B2B SaaS teams with an enterprise go-to-market motion.</p><p>The observation that captures the current community position most precisely:</p><p><em>&#8220;The teams that moved to infrastructure they own early are the ones having the smoothest conversations with enterprise prospects. 
The teams still on managed platforms are the ones explaining to their board why a $200K deal is stuck in security review.&#8221;</em></p><p>This is not a technical observation. It is a business one. And it reflects how the community conversation has matured from infrastructure optimization to strategic positioning.</p><p><a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read how engineering teams navigate this transition</a></p><h2><strong>How Top Heroku Alternatives Compare on Total Cost of Ownership</strong></h2><p>For teams running 20+ production services, the cost structure differences between Heroku alternative categories are significant and compounding. Surface-level pricing comparisons miss the structural dynamics that determine actual TCO at scale.</p><p><strong>Managed PaaS alternatives: Render, Railway, Fly.io</strong></p><p>These platforms reduce cost versus Heroku, but they do not eliminate the platform margin. Every compute resource, managed database, cache instance, and monitoring capability still carries a vendor margin layered on top of the underlying infrastructure cost.</p><p>At 20+ services, this margin compounds. Each new service adds compute margin, database margin, Redis margin, and monitoring cost simultaneously. The efficiency ceiling is lower than direct AWS because the platform margin persists regardless of scale. And observability typically requires additional add-ons with separate billing, recreating one of the most frustrating cost dynamics of Heroku at a slightly lower price point.</p><p><strong>Open-source self-hosted alternatives: Coolify, Dokku, CapRover</strong></p><p>These platforms eliminate the platform margin. Infrastructure runs at direct cloud pricing with no vendor markup. 
For teams with dedicated platform engineering capacity, the compute and managed service costs are as low as they can be.</p><p>The TCO calculation changes when engineering time is included. Provisioning, security patching, observability setup, autoscaling configuration, and on-call response for platform issues all fall to the team. For most product-focused engineering teams, the engineering hours required to operate a self-hosted platform in production represent a higher total cost than a managed platform fee, even accounting for the elimination of the platform margin.</p><p><strong>AWS-native Internal Developer Platforms: LocalOps</strong></p><p>The cost structure is fundamentally different from both categories above. LocalOps charges a flat platform fee. The underlying infrastructure runs at AWS list pricing with no markup. Observability (Prometheus, Loki, and Grafana) is included at no additional cost regardless of service count.</p><p>At 20+ services, the difference compounds in the IDP&#8217;s favour. Every additional service adds only AWS infrastructure cost. No observability cost increment. No platform margin on database or cache. 
The gap between managed PaaS total cost and AWS-native IDP total cost widens with every service added.</p><p>For an accurate TCO comparison based on your current Heroku invoice and service count,<a href="https://go.localops.co/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> the LocalOps team will model it directly</a>.</p><h2><strong>Structural Differences Between First-Generation Alternatives and AWS-Native IDPs</strong></h2><p>This is the distinction that most Heroku alternative evaluations underweight, because the surface-level experience of managed PaaS platforms and AWS-native IDPs can look similar to developers, while the underlying architecture is fundamentally different.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/itDJS/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/61f5269d-d6d6-4a6c-887f-21181905d9d9_1220x1080.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f3b447c-abf2-4fd5-8a27-0144f2766b07_1220x1080.png&quot;,&quot;height&quot;:540,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/itDJS/1/" width="730" height="540" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The structural difference is not about developer experience on day one. 
Both categories can provide a git-push deployment workflow. The structural difference is about what happens at month 18, when enterprise deals arrive, compliance requirements sharpen, and the cost structure at scale comes under scrutiny.</p><p>First-generation alternatives (Render, Railway, and Fly.io) improve on Heroku&#8217;s developer experience and pricing. But the fundamental model is the same: your infrastructure runs on someone else&#8217;s cloud. Your compliance posture is bound by what the vendor chooses to support. Your exit path requires rebuilding infrastructure from scratch. You are trading one managed dependency for another.</p><p>AWS-native Internal Developer Platforms change the model entirely. Infrastructure runs in your cloud account. Developer experience is preserved: git push still deploys, with no Kubernetes knowledge required. Observability, CI/CD, autoscaling, and secrets management are built in. And if you stop using the platform, your infrastructure keeps running. Nothing needs to be rebuilt.</p><p>This structural difference is what the community discovered through the stepped migration pattern described above. The teams that migrated once recognized this distinction before choosing. The teams that migrated twice discovered it afterwards.</p><h2><strong>How to Choose a Heroku Alternative That Avoids Replicating Vendor Lock-in</strong></h2><p>This is the question that separates a good infrastructure decision from a decision that creates the same problem in a different form.</p><p>Heroku&#8217;s vendor lock-in has a specific mechanism: your infrastructure lives in Heroku&#8217;s systems. When you leave, it disappears. You start from scratch. Every year you stay on Heroku, the dependencies accumulate, and the eventual migration becomes more expensive.</p><p>The risk when choosing a Heroku alternative is choosing a platform that replicates this mechanism under a different vendor name. 
The managed PaaS category does this structurally; Render, Railway, and Fly.io all use the same model. Your infrastructure lives in their systems. When you leave, you start from scratch.</p><p><strong>The infrastructure design decisions that future-proof the platform choice:</strong></p><p><strong>Decision 1: Infrastructure must run in your cloud account.</strong></p><p>This is the binary decision that determines everything downstream. If infrastructure runs in your account (your VPC, your EKS cluster, your RDS database), then the platform vendor you use to manage that infrastructure is replaceable. The infrastructure is yours. The management layer is a service you pay for, not a dependency you are locked into.</p><p>If infrastructure runs in the vendor&#8217;s systems, you are locked in structurally, regardless of how the platform markets itself.</p><p><strong>Decision 2: The platform must use standard, portable technology.</strong></p><p>Kubernetes is the standard. Helm charts are standard. Terraform is standard. Any platform that runs your workloads on standard Kubernetes in your own account gives you the option to manage that infrastructure directly if you ever need to change the platform layer.</p><p>Platforms that run on proprietary runtimes, proprietary deployment formats, or proprietary infrastructure abstractions create lock-in even if the infrastructure nominally runs in your account.</p><p><strong>Decision 3: Verify the exit path before committing.</strong></p><p>Ask every platform vendor directly: <em>If we stop using your platform tomorrow, what does our infrastructure look like, and can we continue operating it independently?</em></p><p>LocalOps answers this question specifically and directly. Every resource LocalOps provisions lives inside the team&#8217;s own AWS account. 
EKS clusters, RDS databases, VPCs, and load balancers: all owned by the team, all running in the team&#8217;s account, all manageable directly through the AWS console or CLI if the team ever stops using LocalOps. There is no data to export. There is no infrastructure to rebuild. The exit path is always open.</p><p>Platforms that cannot answer this question clearly, or that answer it with migration timelines, data export processes, or infrastructure rebuild requirements, are creating vendor lock-in regardless of how they describe their model.</p><p><strong>Decision 4: Evaluate the compliance ceiling, not just current compliance.</strong></p><p>Managed PaaS platforms have a compliance ceiling defined by what the vendor chooses to support. That ceiling may be adequate today. It may not be adequate in 18 months, when enterprise procurement processes become part of the sales cycle.</p><p>AWS-native IDPs running in the team&#8217;s own account have no compliance ceiling. The compliance surface is AWS, which holds SOC 2, HIPAA, GDPR, PCI DSS, and dozens of additional certifications. The compliance architecture grows with the business rather than constraining it.</p><p><a href="https://localops.co/features/secure-by-default?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles compliance by default</a></p><h2><strong>How LocalOps Fits the Community&#8217;s Validated Pattern</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack (Prometheus, Loki, and Grafana) automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><p>From that point onwards, the developer experience is identical to Heroku. Push to your configured branch. 
LocalOps builds, containerizes, and deploys to AWS automatically. Preview environments spin up on every pull request. Logs and metrics are available from day one. Autoscaling and auto-healing run by default.</p><p>The infrastructure runs in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt. This is the architectural model the community has converged on: infrastructure ownership with developer simplicity, no new vendor dependency, no compliance ceiling.</p><blockquote><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. Partnering with LocalOps has been one of our best technical decisions.&#8221;</em><strong> &#8211; Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;We saved months of DevOps effort by using LocalOps.&#8221;</em> <strong>&#8211; Shobit Gupta, Ex-Uber, CTO and Co-founder, Segwise</strong></p></blockquote><p><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get started for free, first environment live in under 30 minutes</a></p><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>How has the Heroku alternatives landscape shifted in 2026, and which platforms have emerged as most viable for enterprise-grade SaaS teams?</strong></p></li></ol><p>The most significant shift is not in the platforms available; it is in how engineering leaders are framing the decision. In 2023 and 2024, teams optimized for migration ease. In 2026, teams are optimizing for the three-year infrastructure foundation their business needs. This shift produces different answers. Managed PaaS alternatives remain viable for early-stage teams without current enterprise compliance requirements. 
For enterprise-grade, compliance-sensitive SaaS teams, AWS-native Internal Developer Platforms running in the team&#8217;s own account have emerged as the clear choice, because they are the only category that satisfies compliance requirements without a ceiling, eliminates platform margin at scale, and provides a genuine exit path.</p><ol start="2"><li><p><strong>What are engineering teams on Reddit and Hacker News actually recommending in 2026?</strong></p></li></ol><p>The community consensus has coalesced around three consistent positions. First: managed PaaS alternatives like Render and Railway are a transitional step, not a destination; teams that go there first often end up migrating again when compliance requirements arrive. Second: going directly to raw AWS without a platform layer creates developer experience problems that erode the infrastructure benefits. Third: AWS-native Internal Developer Platforms (infrastructure in your own account with a developer experience layer on top) are the pattern the community validates for production SaaS teams with enterprise ambitions. Rails teams specifically require platforms that handle Sidekiq, Postgres, Active Storage, and Action Cable as first-class concerns.</p><ol start="3"><li><p><strong>How do Render, Railway, Fly.io, and AWS-native IDPs compare on TCO at 20+ services?</strong></p></li></ol><p>At 20+ production services, the structural cost differences become significant. Managed PaaS alternatives maintain a platform margin on every component (compute, database, cache, and monitoring), which compounds with each new service. At scale, observability alone adds meaningfully to the bill as per-service monitoring costs multiply. AWS-native IDPs like LocalOps charge a flat platform fee, with AWS list pricing on all infrastructure and observability included at no additional cost. The cost gap widens with every service added, because every service adds another component where the margin difference applies. 
For an accurate comparison based on your current stack, the LocalOps team will model it from your Heroku invoice.</p><ol start="4"><li><p><strong>What are the structural differences between first-generation Heroku alternatives and AWS-native IDPs?</strong></p></li></ol><p>The fundamental difference is infrastructure ownership. First-generation alternatives (Render, Railway, and Fly.io) run infrastructure on the vendor&#8217;s shared cloud: no VPC ownership, a compliance ceiling defined by the vendor, an exit path that requires a full infrastructure rebuild, and a platform margin that persists regardless of scale. AWS-native IDPs run infrastructure in the team&#8217;s own AWS account: full VPC isolation, no compliance ceiling (the surface is AWS itself), an exit path that is always open because the infrastructure continues running independently, and direct AWS pricing with no platform margin. The developer experience on day one can look similar. The strategic implications diverge significantly at month 18.</p><ol start="5"><li><p><strong>How do engineering leaders choose a Heroku alternative that avoids replicating vendor lock-in?</strong></p></li></ol><p>Four infrastructure design decisions future-proof the platform choice. First: infrastructure must run in your cloud account, not the vendor&#8217;s. Second: the platform must use standard, portable technology (Kubernetes), not proprietary runtimes. Third: verify the exit path explicitly before committing; ask what happens if you stop using the platform tomorrow and evaluate the answer carefully. Fourth: evaluate the compliance ceiling against 18-month requirements, not just current requirements. LocalOps specifically addresses all four: infrastructure in your AWS account, standard Kubernetes, an explicit exit path with infrastructure running independently, and an AWS compliance surface with no vendor-defined ceiling.</p><ol start="6"><li><p><strong>Is LocalOps a viable Heroku alternative for Rails applications specifically?</strong></p></li></ol><p>Yes. 
Rails applications require specific infrastructure handling: Sidekiq background workers, Postgres with connection pooling, Action Cable with Redis, Active Storage with object storage, and scheduled tasks. LocalOps handles all of these as first-class service types. Web processes and Sidekiq workers are configured and scaled independently. Amazon RDS provides Postgres inside your VPC with connection pooling configuration. ElastiCache provides Redis for Action Cable and job queuing. Native cron jobs replace Heroku Scheduler. As a Rails hosting alternative to Heroku, LocalOps preserves the git-push deployment workflow while running on infrastructure the team owns.</p><ol start="7"><li><p><strong>What is the difference between a Heroku self-hosted alternative and LocalOps?</strong></p></li></ol><p>A Heroku self-hosted alternative like Coolify or Dokku gives full infrastructure ownership with no platform vendor dependency. The team owns the complete operational burden: provisioning, security patching, observability setup, scaling configuration, and on-call response for platform issues. For teams without dedicated platform engineering capacity, the operational cost of running a self-hosted platform in production consistently exceeds initial estimates. LocalOps provides the same infrastructure ownership; everything runs in your own AWS account, with the platform layer managed. The infrastructure ownership is equivalent. The operational overhead is not. LocalOps is designed for teams that want infrastructure ownership without building and maintaining the platform themselves.</p><h2><strong>Key Takeaways</strong></h2><p>The engineering community&#8217;s consensus on Heroku alternatives in 2026 is clearer than it has ever been, because enough teams have now been through the full cycle of migration, operation, and in some cases re-migration to know what works at production scale.</p><p>Managed PaaS alternatives are a transitional step, not a destination.
They solve the immediate Heroku problem but recreate the structural lock-in problem. Teams with enterprise ambitions discover this ceiling within 12&#8211;18 months.</p><p>Raw AWS without a platform layer solves the infrastructure ownership problem but creates a developer experience regression that erodes the infrastructure benefits. The two problems require two solutions (infrastructure ownership and developer experience preservation), not one.</p><p>AWS-native Internal Developer Platforms are the pattern the community validates for production SaaS teams at scale. Infrastructure in your own account. Developer experience preserved. No new vendor dependency. No compliance ceiling. Cost structure that scales proportionally with usage rather than in tier jumps.</p><p>The best Heroku alternatives in 2026 are the ones that solve the immediate migration problem and the long-term infrastructure ownership problem simultaneously, so the migration is made once, under conditions the team controls, and does not need to be repeated.</p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers review your current Heroku setup and walk through what the migration looks like for your specific stack.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First production environment on AWS in under 30 minutes.
No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Heroku Migration Guide &#8594;</a></strong> Full technical walkthrough, database migration, environment setup, DNS cutover.</p>]]></content:encoded></item><item><title><![CDATA[The Real Cost of Heroku at Scale: A Teardown for CTOs Evaluating Alternatives]]></title><description><![CDATA[Beyond the Invoice: Understanding Heroku&#8217;s True Cost for Scaling SaaS Teams]]></description><link>https://blog.localops.co/p/the-real-cost-of-heroku-at-scale</link><guid isPermaLink="false">https://blog.localops.co/p/the-real-cost-of-heroku-at-scale</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Tue, 07 Apr 2026 05:32:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_xn7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_xn7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_xn7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 424w, https://substackcdn.com/image/fetch/$s_!_xn7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 
848w, https://substackcdn.com/image/fetch/$s_!_xn7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 1272w, https://substackcdn.com/image/fetch/$s_!_xn7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_xn7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png" width="2400" height="1408" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1408,&quot;width&quot;:2400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6718208,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/193316405?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a2f06c-3709-4140-8fba-406b71956f3b_2400x1600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_xn7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 424w, 
https://substackcdn.com/image/fetch/$s_!_xn7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 848w, https://substackcdn.com/image/fetch/$s_!_xn7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 1272w, https://substackcdn.com/image/fetch/$s_!_xn7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ca9cd71-566e-4ef8-9138-08e85acfdf46_2400x1408.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The real cost of Heroku is not what appears on the invoice.</p><p>The invoice is the visible portion: dyno tiers, database add-ons, Redis instances, and monitoring tools. It is real, it compounds, and it grows faster than engineering leaders expect. But for SaaS teams at Series A and beyond, the invoice cost is the smallest component of what Heroku actually costs the business.</p><p>The higher costs are the ones that do not appear on any statement: the engineering hours spent working around platform limitations instead of building products, the architectural decisions shaped by what Heroku supports rather than what the system needs, and the enterprise deals that stall or never close because the infrastructure cannot satisfy the security questionnaire.</p><p>This guide is a complete cost teardown. It is written for CTOs who are evaluating whether the infrastructure decision in front of them is an operational one or a strategic one.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> The complete cost of Heroku at scale: invoice cost, add-on compounding, engineering opportunity cost, observability stack costs, and the true total cost calculation versus migrating to AWS</p><p><strong>Who it is for:</strong> CTOs and engineering leaders evaluating whether the financial case for migrating from Heroku justifies the migration investment</p><p><strong>The conclusion:</strong> The invoice cost is only one of three cost components.
For B2B SaaS teams at Series A and beyond, the compliance cost and engineering opportunity cost together exceed the infrastructure invoice, and neither appears in standard infrastructure reviews.</p><p><strong>Want to model what your Heroku setup costs on LocalOps + AWS?</strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Speak with the LocalOps team</a></p><h2><strong>When Heroku&#8217;s Pricing Becomes Financially Indefensible</strong></h2><p>Heroku&#8217;s pricing model does not become a problem at a specific team size or traffic level. It becomes a problem at a specific combination of service count, add-on depth, and business ambition, and that combination arrives faster than teams expect.</p><p>The inflection point for B2B SaaS teams arrives somewhere between five and fifteen engineers. Not because the team is large. Because product complexity at that stage drives service count past the point where add-on costs become a significant and compounding line item.</p><p><strong>What the inflection looks like in practice:</strong></p><p>A team running a single production application on Heroku has a manageable bill. One dyno tier. One Heroku Postgres instance. One Redis instance. Maybe Papertrail for logs. The total is meaningful but explainable.</p><p>A team running five production services on Heroku has a fundamentally different cost structure. Each service has its own dyno configuration. Each service typically requires its own database tier. Heroku Postgres pricing is per-instance, not shared across services. Each service adds to the Redis connection count, pushing toward higher Redis tiers. Log volume across five services pushes Papertrail into higher pricing tiers. APM costs multiply across services.</p><p>The relationship between service count and cost is not linear on Heroku. It is multiplicative. Every new service does not add one cost layer.
It adds five: compute, database, cache, logging, and monitoring, each carrying a platform margin.</p><p><strong>What the comparison looks like when migrating to AWS:</strong></p><p>The cost difference between Heroku and AWS via an Internal Developer Platform comes from two structural sources. First: the platform margin disappears <em>entirely</em>. Compute, database, cache, and job queue resources run at AWS list pricing with no markup. Second: observability is included. LocalOps includes Prometheus, Loki, and Grafana pre-configured in every environment at no additional cost, eliminating the Papertrail, New Relic, and APM add-on line items entirely.</p><p>The direction of this difference is structural. It does not change with scale; AWS pricing without a platform margin is 3&#8211;4x lower than PaaS pricing with one. The size of the difference depends on stack composition. For a model based on your current Heroku invoice,<a href="https://go.localops.co/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> the LocalOps team will calculate it directly</a>.</p><h2><strong>Why Heroku Add-On Costs Grow Faster Than Revenue</strong></h2><p>This is the cost dynamic that surprises engineering leaders when they first examine it systematically. Heroku&#8217;s add-on costs do not scale with revenue. They scale with product complexity, and product complexity grows faster than revenue at SaaS companies in the growth stage.</p><p><strong>The Heroku Postgres compounding problem.</strong></p><p>Heroku Postgres pricing is structured around tiers defined by row limits, connection limits, and storage. As applications grow, databases move through these tiers, but not in proportion to actual usage growth. A database that grows from 5 million to 7 million rows may jump a full pricing tier even though the actual resource consumption increase is modest.
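</p><p>A toy model makes the two dynamics above concrete: per-service layer stacking and the gap versus consumption pricing. The dollar figures below are hypothetical placeholders, not actual Heroku or AWS prices; only the structure is taken from the text, five stacked cost layers per service on a tiered PaaS versus list-price infrastructure with observability included.</p>

```python
# Illustrative sketch of per-service cost stacking. All figures are
# made-up placeholders, not real Heroku or AWS prices.

# Hypothetical monthly cost layers added by EACH new service on a
# tiered PaaS (every layer carries a platform margin):
PAAS_LAYERS = {"compute": 250, "database": 200, "cache": 100,
               "logging": 75, "monitoring": 100}

# Hypothetical consumption-based equivalent: compute/database/cache at
# list price, logging and monitoring included at no extra cost:
AWS_LAYERS = {"compute": 120, "database": 90, "cache": 40,
              "logging": 0, "monitoring": 0}

def monthly_cost(layers, services):
    """Each new service re-adds every cost layer."""
    return sum(layers.values()) * services

for n in (1, 5, 20):
    print(f"{n:>2} services: PaaS ${monthly_cost(PAAS_LAYERS, n):,}/mo "
          f"vs consumption ${monthly_cost(AWS_LAYERS, n):,}/mo")
```

<p>The point of the sketch is the shape, not the numbers: the gap between the two totals widens with every added service because each service re-applies the per-layer margin.</p><p>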
More significantly, in a multi-service architecture, each service typically requires its own Heroku Postgres instance. The database cost compounds per service, not per application.</p><p><strong>The Heroku Redis compounding problem.</strong></p><p>Heroku Redis pricing is structured around connection limits and memory. As more services connect to Redis (for session management, job queuing, caching, and pub/sub), the connection count drives tier upgrades. Redis tier upgrades on Heroku are significant price jumps. And like Postgres, a multi-service architecture typically requires multiple Redis instances, each on its own billing tier.</p><p><strong>The monitoring add-on compounding problem.</strong></p><p>Papertrail pricing scales with log volume. As service count grows, log volume grows, typically faster than traffic growth, because more services mean more internal log output independent of external request volume. New Relic and Scout APM pricing scales with host count or service count. Adding a new service does not just add compute cost. It adds monitoring cost across every observability add-on in the stack.</p><p><strong>The AWS-native alternative cost structure:</strong></p><p>On AWS via LocalOps, the cost structure is fundamentally different. Amazon RDS pricing is based on instance type and storage, not on row counts or arbitrary tier boundaries. A database with 7 million rows costs the same as a database with 5 million rows if the instance type handles both. Amazon ElastiCache pricing is based on node type and replication configuration, not on connection count tiers that force upgrades as services scale. And observability (logs, metrics, and dashboards) is included in LocalOps at no additional cost, regardless of service count or log volume.</p><p>The structural difference: Heroku add-on costs are designed around tier boundaries that create forced upgrades as applications grow.
AWS-native services are priced on actual resource consumption with no artificial tier boundaries driving cost jumps.</p><p><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how the full Heroku to AWS stack mapping works</a></p><h2><strong>The True Total Cost of Heroku: What CTOs Miss in the Analysis</strong></h2><p>The Heroku cost analysis that surfaces in infrastructure reviews covers only the invoice. For a CTO preparing a board-level infrastructure recommendation, the invoice is the wrong starting point.</p><p>The true total cost of Heroku has three components.</p><h3><strong>Component 1: The Invoice Cost</strong></h3><p>The visible portion. Dyno tiers, database add-ons, Redis instances, monitoring tools, scheduler add-ons, and log management. This is the number that appears on the credit card statement and in the finance team&#8217;s questions.</p><p>It is real, and it compounds, but for teams at Series A and beyond, it is not the largest cost component.</p><h3><strong>Component 2: The Engineering Opportunity Cost</strong></h3><p>The hours engineering teams spend working around Heroku&#8217;s limitations rather than building the product. This cost does not appear on any invoice. It accumulates in recognizable patterns that every engineering leader at a scaling SaaS company has observed.</p><p>A senior architect scopes a feature differently because the technically correct implementation requires a storage pattern that Heroku handles poorly. A backend engineer spends three days building a workaround for a networking limitation that VPC-native infrastructure would handle natively. A team defers a microservices decomposition they know is right for the product because the operational complexity on Heroku is prohibitive without underlying networking primitives.</p><p>None of these decisions appears as an infrastructure cost.
All of them are real costs, paid in engineering time, in technical debt, and in product decisions made to serve the platform rather than the customer.</p><p>For a Series B SaaS company with fifteen engineers at an average fully-loaded cost of $200,000 per year, every engineering hour is worth approximately $100. If Heroku&#8217;s limitations consume two hours per engineer per week in workarounds, delayed decisions, and architectural compromises, the annual opportunity cost exceeds $150,000. This cost does not appear anywhere in infrastructure reviews. It is often the largest cost component.</p><h3><strong>Component 3: The Compliance Revenue Cost</strong></h3><p>For B2B SaaS teams with an enterprise go-to-market motion, this is frequently the most significant cost component and the least visible until an enterprise deal surfaces.</p><p>Enterprise procurement processes require infrastructure controls that Heroku cannot provide: VPC isolation, private networking between services, IAM-based access control with audit logging, and dedicated infrastructure with data residency in a specified region. When the security questionnaire arrives, and the honest answer to every infrastructure question is &#8220;we don&#8217;t control that,&#8221; the deal quickly starts going south.</p><p>The revenue impact of this infrastructure gap is real and calculable. A single $150,000 ARR enterprise deal delayed by one quarter because of infrastructure compliance questions costs $37,500 in revenue timing. A single deal that never closes because the infrastructure cannot satisfy the security review costs the full contract value.
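</p><p>The arithmetic behind both figures above can be checked in a few lines. The inputs (fifteen engineers, $200,000 fully loaded, two hours per week of workarounds, a $150,000 ARR deal slipping one quarter) are the assumptions stated in the text, not measured data.</p>

```python
# Back-of-envelope check of the two hidden-cost figures in the text.
engineers = 15
fully_loaded = 200_000                 # $/year per engineer (assumed)
hours_per_year = 2_000                 # ~40 h/week x 50 weeks
hourly_rate = fully_loaded / hours_per_year     # $100/hour

workaround_hours = 2                   # per engineer per week (assumed)
weeks = 50
opportunity_cost = engineers * workaround_hours * weeks * hourly_rate
print(f"Annual opportunity cost: ${opportunity_cost:,.0f}")   # $150,000

deal_arr = 150_000                     # enterprise deal ARR (assumed)
delay_cost = deal_arr / 4              # one quarter of revenue timing
print(f"One-quarter delay cost: ${delay_cost:,.0f}")          # $37,500
```

<p>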
For a company with multiple enterprise deals in the pipeline, the compliance cost of staying on Heroku can dwarf every other cost component combined.</p><h3><strong>The Total Cost Calculation</strong></h3><p>When CTOs present the infrastructure transition to their board or CEO, the analysis that generates alignment is the one that includes all three components.</p><p><strong>Invoice savings:</strong> structural, predictable, and begin immediately on migration. The platform margin disappears. Observability add-on costs disappear.</p><p><strong>Engineering opportunity cost recovery:</strong> directionally clear, grows with team size. Senior engineering hours redirected from platform workarounds to product development.</p><p><strong>Compliance revenue unlock:</strong> the component that makes the migration financially obvious for any B2B SaaS team with enterprise ambitions. Infrastructure that answers the security questionnaire cleanly is infrastructure that does not block deals.</p><p>Together, these three components reframe the infrastructure migration from an operational cost to a strategic investment with a compounding return.</p><h2><strong>Why Heroku&#8217;s Tier-Based Pricing Fails SaaS Companies</strong></h2><p>Heroku&#8217;s pricing model was designed for simplicity at a small scale. It is structurally misaligned with how SaaS businesses actually grow and operate at scale.</p><p><strong>The tier-jump problem.</strong></p><p>Heroku pricing scales in tiers, not proportionally with usage. When resource requirements grow past a tier boundary, the cost jumps to the next tier regardless of whether actual usage justifies the full tier ceiling. Teams pay for the tier ceiling, not for actual consumption.</p><p>For finance teams preparing infrastructure forecasts, this makes cost modeling unreliable. Infrastructure spend jumps at irregular intervals unrelated to business growth metrics. 
A 20% increase in traffic does not produce a 20% increase in infrastructure cost; it might produce a 0% increase or a 40% jump, depending on where the team sits relative to tier boundaries.</p><p><strong>The seasonal traffic problem.</strong></p><p>Many SaaS applications have non-linear traffic patterns. B2B applications peak during business hours and drop to near-zero overnight and on weekends. Consumer applications spike around product launches and marketing campaigns. Event-driven workloads process jobs in bursts that may be 10x the average load.</p><p>Heroku&#8217;s response to all of these patterns is identical: provision for peak capacity and pay for it continuously. The implicit answer is always the same: move up to the performance tier and pay more. Teams either over-provision, paying for idle capacity at all times, or under-provision and accept performance degradation during spikes.</p><p>AWS horizontal autoscaling on EKS responds to this directly. Workloads scale out when traffic increases and back in when it drops, automatically, without human intervention. Teams pay for actual compute consumption proportional to real usage, not for the tier ceiling required to handle the peak.</p><p><strong>The predictability gap.</strong></p><p>For a CTO preparing a 12-month infrastructure budget, Heroku&#8217;s tier-based model creates a forecasting problem. The budget for next year is not last year&#8217;s Heroku invoice scaled by growth. It is last year&#8217;s invoice scaled by growth, plus the tier-jump events triggered by crossing the service count and traffic thresholds the product roadmap implies.</p><p>AWS-native infrastructure priced by actual consumption solves this forecasting problem directly. Infrastructure spend grows in proportion to actual usage. Budget modeling is straightforward.
Surprises are eliminated.</p><p><a href="https://localops.co/features/auto-scaling?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how autoscaling works by default on LocalOps &#8594;</a></p><h2><strong>The Real Cost of the Heroku Observability Stack</strong></h2><p>This is the cost teams underestimate, until they look at the Heroku invoice line by line and add up what monitoring actually costs.</p><p>A typical production Heroku stack assembles observability from multiple paid add-ons:</p><p><strong>Papertrail</strong> for log management. Pricing scales by log volume, which grows with service count and traffic regardless of optimization. At production scale with multiple services, Papertrail costs accumulate quickly as log volume grows past free tier limits.</p><p><strong>New Relic or Scout</strong> for application performance monitoring. APM pricing on Heroku add-ons scales with host count or agent count. Every new service added to the production stack adds another APM agent, another billing line item that compounds with each new service deployment.</p><p><strong>Additional tools</strong> for error tracking, uptime monitoring, and alerting, each with its own pricing tier, its own billing cycle, and its own failure modes.</p><p><strong>The operational cost beyond the financial one:</strong></p><p>The financial cost of the Heroku observability stack is significant. The operational cost is often larger.</p><p>When an incident occurs at 2 am, a Heroku team correlates information across multiple dashboards from multiple vendors with different data models and different refresh rates. Logs in Papertrail. Metrics in New Relic. The relationship between a spike in error rates and a specific deployment requires context-switching between tools. Each tool switch adds minutes to incident response time.
For SaaS applications with customer-facing SLAs, those minutes matter.</p><p>The tools are often configured independently with no unified alerting model. An alert threshold set in New Relic does not automatically correlate with a log pattern in Papertrail. Building that correlation requires manual work, or accepting that incidents will be identified more slowly than they would be on a platform with integrated observability.</p><p><strong>What integrated observability looks like:</strong></p><p>LocalOps includes Prometheus, Loki, and Grafana pre-configured in every environment at no additional cost.</p><p>Prometheus collects metrics automatically from every service: CPU, memory, request rate, error rate, and custom application metrics. No agent installation. No per-service configuration.</p><p>Loki aggregates logs from all services through standard output. No log drain configuration. No Papertrail account. No log volume pricing tiers.</p><p>Grafana provides unified dashboards with pre-built views for infrastructure metrics and application logs in a single interface. When something breaks at 2 am, logs and metrics are in the same place, with the same timestamps, correlated automatically.</p><p>The observability tools that are monthly line items on a Heroku invoice, adding up to hundreds of dollars per month for a typical production stack, are included in LocalOps as infrastructure. There is no add-on to configure. There is no additional cost. There is no vendor to manage.</p><p><a href="https://localops.co/features/builtin-monitoring?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how built-in monitoring works on LocalOps</a></p><h2><strong>How LocalOps Addresses the Cost Problem Structurally</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository.
LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack (Prometheus, Loki, and Grafana) automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><p>From that point, the developer experience is identical to Heroku. Push to your configured branch. LocalOps builds, containerizes, and deploys to AWS automatically. Logs and metrics are available from day one. Autoscaling and auto-healing run by default.</p><p>The cost structure is fundamentally different from Heroku. LocalOps charges a flat platform fee. The underlying infrastructure runs at AWS list pricing with no markup. Observability is included. The tier-jump cost model is replaced by proportional pricing that scales with actual usage.</p><p>The infrastructure runs in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt.</p><blockquote><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches.
Partnering with LocalOps has been one of our best technical decisions.&#8221;</em> <strong>&#8211; Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man months of effort, all of which LocalOps has saved for us.&#8221;</em> <strong>&#8211; Gaurav Verma, CTO and Co-founder, SuprSend</strong></p></blockquote><p><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get started for free, first environment on AWS in under 30 minutes &#8594;</a></p><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>At what point does Heroku&#8217;s pricing become financially indefensible?</strong></p></li></ol><p>The inflection point varies by stack composition but consistently arrives when service count grows past five and add-on costs begin compounding across multiple services simultaneously. The signal is not the absolute invoice amount; it is when the invoice becomes difficult to attribute cleanly across services, difficult to forecast accurately, and impossible to optimize without changing the underlying platform. For B2B SaaS teams, this happens between five and fifteen engineers, driven by product complexity rather than team size directly.</p><ol start="2"><li><p><strong>Why do Heroku add-on costs grow faster than revenue as SaaS teams scale?</strong></p></li></ol><p>Heroku add-on costs scale with product complexity rather than with revenue. Adding a new service to a Heroku production stack does not add one cost layer; it adds compute, database, Redis, logging, and monitoring costs simultaneously, each carrying a platform margin. Database tier pricing is driven by row counts and connection limits that force upgrades independently of revenue growth.
Log volume and APM agent counts scale with service count rather than with business metrics. The result is a cost structure where infrastructure spend grows faster than revenue at precisely the growth stage where unit economics matter.</p><ol start="3"><li><p><strong>How should a CTO calculate the true total cost of Heroku?</strong></p></li></ol><p>The full calculation has three components. Invoice cost: dyno tiers, database add-ons, Redis, monitoring tools, scheduler, totalled across all production services. Engineering opportunity cost: hours spent on platform workarounds, architectural compromises made to serve Heroku&#8217;s limitations, and deferred technical decisions that accumulate as debt. Compliance revenue cost: deals delayed or lost because the infrastructure cannot satisfy enterprise security questionnaires. For B2B SaaS teams at Series A and beyond with enterprise ambitions, the compliance revenue component is the largest and least visible, and the one that makes the migration decision strategically obvious when it surfaces.</p><ol start="4"><li><p><strong>Why is Heroku&#8217;s tier-based pricing misaligned for seasonal or variable traffic?</strong></p></li></ol><p>Heroku requires teams to provision for peak capacity and pay for it continuously; there is no automatic scale-down when traffic drops. For B2B applications with sharp usage peaks during business hours, consumer applications with campaign-driven spikes, or any application with variable traffic patterns, the choice is between over-provisioning at continuous cost or under-provisioning and accepting performance degradation. AWS horizontal autoscaling on EKS scales out when load increases and back in when it drops, automatically.
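</p><p>The cost asymmetry is easy to see in a sketch. The hourly load profile and per-instance rate below are invented for illustration; the comparison assumes only that the tier model bills peak capacity around the clock while autoscaling bills the instances actually running.</p>

```python
# Illustrative: provision-for-peak versus autoscaled cost over one day.
RATE = 0.10                      # $/instance-hour, hypothetical

# Instances needed each hour: near-idle overnight, peak in business hours.
hourly_need = [1] * 8 + [8] * 10 + [2] * 6      # 24 hourly samples

peak_cost = max(hourly_need) * len(hourly_need) * RATE  # pay peak all day
autoscaled_cost = sum(hourly_need) * RATE               # pay actual usage

print(f"Provision-for-peak: ${peak_cost:.2f}/day")
print(f"Autoscaled:         ${autoscaled_cost:.2f}/day")
```

<p>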
Teams pay for actual compute consumption, not for the tier ceiling required to handle the peak.</p><ol start="5"><li><p><strong>What does the Heroku observability stack actually cost at production scale?</strong></p></li></ol><p>A typical production Heroku stack assembles observability from Papertrail for log management, New Relic or Scout for APM, and potentially additional tools for error tracking and uptime monitoring. Each add-on has its own pricing tier that scales with usage: log volume for Papertrail, host or service count for APM tools. The combined cost compounds with the service count. Beyond the financial cost, the operational cost of correlating logs and metrics across multiple disconnected tools adds meaningful time to incident response. LocalOps includes Prometheus, Loki, and Grafana pre-configured in every environment at no additional cost, replacing the entire assembled observability stack with integrated tooling that provides better correlated visibility.</p><ol start="6"><li><p><strong>What is the difference between a Heroku self-hosted alternative and an AWS-native IDP in terms of cost?</strong></p></li></ol><p>A Heroku self-hosted alternative like Coolify or Dokku eliminates platform margin on infrastructure but requires the team to own the full operational burden: provisioning, security patching, observability setup, and on-call response for the platform itself. The infrastructure cost is lower. The engineering cost of running the platform is high and ongoing. An AWS-native IDP like LocalOps provides the same infrastructure cost efficiency (direct AWS pricing, no platform margin) with the platform layer managed.
For teams without dedicated platform engineering capacity, the total cost of a self-hosted alternative consistently exceeds the total cost of a managed IDP once engineering hours for platform maintenance are included.</p><ol start="7"><li><p><strong>How do Heroku&#8217;s open source alternatives compare on observability cost?</strong></p></li></ol><p>Heroku&#8217;s open-source alternatives eliminate the platform margin on compute and managed services. They do not eliminate the observability cost problem; they shift it. Rather than paying for Papertrail and New Relic, teams running open-source alternatives take on the engineering cost of setting up, configuring, and maintaining their own observability stack. Prometheus, Loki, and Grafana are available as open-source tools, but setting them up correctly, integrating them with application infrastructure, and maintaining them over time requires engineering investment. LocalOps includes this observability stack pre-configured as part of the platform; the setup work is done, the maintenance is handled, and the cost is zero beyond the platform fee.</p><h2><strong>Key Takeaways</strong></h2><p>The real cost of Heroku at scale has three components, and most infrastructure reviews examine only one of them.</p><p>The invoice cost is real and compounds with every service added. The engineering opportunity cost is rarely measured but consistently significant for teams running more than five services. The compliance revenue cost is the largest component for any B2B SaaS team with enterprise ambitions, and the one that makes the migration decision strategically obvious rather than operationally optional.</p><p>The observability cost is a specific case study in how Heroku&#8217;s add-on model creates financial and operational overhead that integrated platforms eliminate.
Hundreds of dollars per month in add-on fees, plus the operational cost of correlating incidents across disconnected tools, are replaced by a pre-configured observability stack at no additional cost.</p><p>For CTOs preparing the business case for infrastructure migration, the frame that generates board-level alignment is not &#8220;we should save money on infrastructure.&#8221; It is &#8220;we are currently paying a tax on every enterprise deal we close, and the migration eliminates that tax while also reducing infrastructure costs and recovering engineering capacity.&#8221;</p><p>That is the real cost of Heroku at scale. And that is the case for moving.</p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers model your current Heroku costs against LocalOps + AWS and walk through the migration for your specific stack.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First production environment on AWS in under 30 minutes. 
No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Heroku Migration Guide &#8594;</a></strong> Full technical walkthrough, database migration, environment setup, DNS cutover.</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes vs Internal Developer Platform: Do You Need Both for AWS Deployments?]]></title><description><![CDATA[A practical breakdown for engineering teams choosing between raw Kubernetes and an IDP on AWS]]></description><link>https://blog.localops.co/p/kubernetes-vs-internal-developer-platform-aws</link><guid isPermaLink="false">https://blog.localops.co/p/kubernetes-vs-internal-developer-platform-aws</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Fri, 03 Apr 2026 12:27:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Tr-6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tr-6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tr-6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 424w, 
https://substackcdn.com/image/fetch/$s_!Tr-6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!Tr-6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!Tr-6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Tr-6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5024677,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/193060504?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Tr-6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!Tr-6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!Tr-6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!Tr-6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42f3414e-fb3b-4c5b-ac8b-cb70c1681c90_2400x1600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The majority of AWS teams running containers have landed on Kubernetes. According to the 2025 CNCF Annual Cloud Native Survey, 82% of container users now run Kubernetes in production, a number<a href="https://aws.amazon.com/blogs/containers/aws-at-kubecon-eu-2026-open-source-leadership-meets-production-innovation/"> AWS continues to see grow</a> across EKS deployments. It is the default choice for containerised workloads and EKS makes it accessible enough that most teams land there eventually.</p><p>But at some point the same teams start looking at internal developer platforms. And the question that comes up is a reasonable one: if Kubernetes already handles deployments, container orchestration, scaling, and health checks, what does an IDP actually add? Are these two separate tools solving two different problems, or is one replacing the other?</p><p>The answer is not obvious. On AWS, the boundaries blur quickly. EKS integrates deeply with IAM, networking, and other managed services, which makes Kubernetes feel like it should be enough. But in practice, teams still run into gaps around how developers interact with that infrastructure.</p><p>This post breaks down where each one starts, where it ends, and whether you actually need both running together on AWS.</p><h2>TL;DR</h2><ul><li><p>Amazon EKS runs your containers. An internal developer platform defines how engineers actually deploy and operate them.</p></li><li><p>A well-designed IDP on AWS does not just connect to a cluster. It standardises how infrastructure like VPCs, EKS, CI/CD, and observability are provisioned and used.</p></li><li><p>Developers push code.
The platform handles everything underneath.</p></li><li><p>You still need Kubernetes. The real question is whether every engineer should be dealing with it directly on every deploy.</p></li><li><p>The shift toward IDPs is generally the right call, but only if the platform is designed with escape hatches. When something breaks at the Kubernetes level, engineers with zero cluster knowledge cannot debug it.</p></li><li><p>Most teams do not make this shift intentionally. They make it when the alternative stops working.</p></li></ul><h2>What Kubernetes Handles on AWS and Where It Stops</h2><p>Kubernetes is a container orchestration system. It schedules containers across a pool of compute, manages service-to-service networking, restarts failed workloads, and scales pod replicas based on load. On AWS, EKS is the managed Kubernetes service. AWS handles the control plane &#8212; the API server and etcd &#8212; so you do not operate those components yourself.</p><p>What stays with your team in a standard EKS setup:</p><ul><li><p>VPC design, subnets, NAT gateways, and security group rules</p></li><li><p>IAM setup, including role bindings and service account mapping</p></li><li><p>Choosing and managing node groups</p></li><li><p>Installing and configuring cluster add-ons like CoreDNS, VPC CNI, and the AWS Load Balancer Controller</p></li><li><p>Planning and executing Kubernetes version upgrades</p></li></ul><p>EKS Auto Mode extends AWS management further into the data plane and handles more of the node lifecycle automatically. But even with Auto Mode, platform design, developer workflows, environment management, and delivery standardisation remain your responsibility.</p><p>This is where the internal developer platform question starts. Kubernetes handles the runtime. It does not handle how your developers interact with that runtime. It does not create environments, wire CI/CD pipelines, or give a backend engineer a self-service path to deploy a new service without understanding the cluster underneath.</p><p>That layer has to come from somewhere.
On AWS, that is what an IDP is for.</p><h2>Kubernetes vs IDP: What Each One Actually Does on AWS</h2><p>Most teams assume Kubernetes and an IDP overlap significantly. They do overlap in deployment automation, but they operate at different abstraction levels and solve different problems.</p><p>Kubernetes is the orchestration and runtime layer. It schedules containers, maintains workload state, handles service discovery, and scales pods. An internal developer platform is the developer experience and automation layer above it. It shapes how engineers create environments, deploy services, access observability, and interact with shared infrastructure &#8212; without needing to touch the cluster directly.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/So8V9/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fad25816-906d-4387-b466-1d158a3a2fed_1220x816.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/93077a61-1952-480f-b477-050a4f4c879a_1220x816.png&quot;,&quot;height&quot;:406,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/So8V9/1/" width="730" height="406" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The confusion usually comes from the fact that both touch deployments. </p><blockquote><p>But Kubernetes manages how containers run. 
An IDP manages how developers deploy. </p></blockquote><p>On AWS, a platform like LocalOps does not sit beside EKS &#8212; it provisions EKS, manages surrounding AWS resources, and abstracts cluster complexity away from engineers who should not need to think about it on every deploy.</p><p>Kubernetes runs the workloads. The IDP simplifies how developers consume the platform. You need both.</p><h2>How an IDP Handles What Kubernetes Does Not on AWS</h2><p>Kubernetes does not provision environments. It does not wire CI/CD pipelines. It does not give a backend engineer a self-service path to deploy a new service without touching cluster config. Those are not gaps in Kubernetes &#8212; it was never designed to do those things. But someone on your team ends up doing them anyway, usually the person who set up the cluster.</p><p>An IDP takes that work off the individual and puts it at the platform level. When a developer pushes to a branch, the platform handles VPC provisioning, EKS cluster setup, EC2 node configuration, CI/CD pipeline wiring, auto-scaling, SSL, and deployment. The developer writes a service config file. The infrastructure side is handled by the platform.</p><p>No Dockerfile. No Terraform. No Helm required from the developer&#8217;s side.</p><p>The trade-off is real though. Full abstraction means engineers lose visibility into what is running underneath. When a pod enters CrashLoopBackOff or a service fails a health check, an engineer who has never touched kubectl cannot diagnose it. A well-built IDP handles this by exposing controlled access to the cluster when needed. 
Engineers should not need Kubernetes knowledge for routine deploys, but they should be able to get to it when something goes wrong.</p><h2>What an IDP Actually Sets Up in Your AWS Account</h2><p>When you create a new environment, a production-grade IDP provisions the following inside your AWS account:</p><ul><li><p>Dedicated VPC with private and public subnets, NAT gateway, and internet gateway</p></li><li><p>Managed EKS cluster with EC2 compute nodes</p></li><li><p>Elastic Load Balancer for inbound HTTP/HTTPS traffic</p></li><li><p>Prometheus, Loki, and Grafana for metrics, log aggregation, and dashboards</p></li><li><p>Managed AWS services on demand: RDS, S3, SQS, ElastiCache</p></li><li><p>CI/CD pipeline triggered on branch push</p></li><li><p>Auto-renewing SSL certificates, encrypted secrets storage, and role-based access control</p></li></ul><p>Everything runs inside your AWS account. The vendor does not hold your data or access your infrastructure directly.</p><p>Each environment is isolated at the VPC level. For BYOC deployments where enterprise customers bring their own AWS account, the entire stack gets provisioned inside the customer&#8217;s account. Each customer gets their own cluster, their own VPC, their own compute. That is the architecture enterprise compliance frameworks typically require.</p><p>SuprSend, a notification infrastructure company, used LocalOps to handle this entire setup for their BYOC (bring your own cloud) distribution. Before that, every enterprise customer deal required spinning up dedicated infrastructure manually.</p><p>LocalOps now provisions each per-customer AWS environment in 30 minutes without changing how their engineering team works &#8212; same git-push workflow, same branch-based deploys, just running inside each customer&#8217;s own AWS account. They are able to close enterprise deals faster without adding DevOps headcount.
For the full picture, read the case study from their CTO:<a href="https://localops.co/case-study/suprsend-unlocks-enterprise-revenue-byoc"> How SuprSend Unlocks Enterprise Revenue with BYOC</a></p><p>Without an IDP, someone on your team is doing all of this manually, per environment, every time a new one is needed.</p><h2>Backstage, Port and an IDP: Which One Works for AWS Teams</h2><p>Only 28% of organisations have a dedicated DevOps / platform engineering team responsible for internal platforms, according to the Q1 2026 CNCF Technology Landscape Radar report.<a href="https://www.prnewswire.com/news-releases/cncf-and-slashdata-report-finds-platform-engineering-tools-maturing-as-organizations-prepare-for-ai-driven-infrastructure-302722721.html"> PR Newswire</a> That number matters when evaluating IDP options because most DevOps tools assume you have that team already.</p><p>Before comparing tools, one distinction worth clarifying: an internal developer portal vs platform is not just a naming difference. A portal surfaces information about existing infrastructure. A platform provisions and manages cloud resources. This matters because Backstage, the most widely used open source internal developer platform, is actually a portal. It gives you a service catalog and a UI layer but does not provision infrastructure or manage deployments out of the box.</p><p>Teams searching for a Backstage internal developer platform often discover this gap after they have already invested months in setup. Backstage needs integrations and plugins to act as a full platform. You build those yourself. 
Right call if you have the platform engineering capacity/team to sustain it internally.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/94C8j/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06fad217-4625-4ea7-a96a-0a055caa1de4_1220x1092.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b01320d0-0395-42bc-93e3-4257ff736b46_1220x1092.png&quot;,&quot;height&quot;:548,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/94C8j/1/" width="730" height="548" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>Port works well as a catalog and visibility layer on top of existing infrastructure. A cloud native IDP fits teams that need production-grade AWS environments running without the upfront platform investment.  Not sure which fits your stack?<a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> Book a demo with us</a> and our engineer will walk you through it.</p><h2>Do You Still Need a DevOps Engineer If You Have an IDP on AWS?</h2><p>Short answer: yes. 
But the actual role changes significantly.</p><p>According to CNCF survey data, organisations typically allocate one platform engineer per 17 to 50 developers &#8212; roughly 2 to 6% of total engineering headcount.<a href="https://byteiota.com/platform-engineering-2026-80-adoption-devops-dead/"> byteiota</a> That ratio only works if the platform is handling routine infrastructure work. Without an IDP, that one person becomes the bottleneck for every deployment question, every new environment, and every EKS config change on the team.</p><p>With an IDP, that same engineer sets the platform up once. Developers provision environments, deploy services, and access observability without filing a ticket. The DevOps or platform engineer shifts to work that actually requires their expertise: cost architecture, security posture, compliance requirements, and reliability engineering.</p><p>High-maturity platform teams report 40 to 50% reductions in cognitive load for developers.<a href="https://dev.to/meena_nukala/platform-engineering-in-2026-the-numbers-behind-the-boom-and-why-its-transforming-devops-381l"> DEV Community</a> That is not just a developer experience metric. It directly affects how fast product teams ship and how much of your engineering budget goes toward infrastructure overhead versus product work.</p><p>What an IDP still cannot replace on AWS:</p><ul><li><p>Reserved Instance and Savings Plan strategy</p></li><li><p>Custom VPC architectures for specific compliance frameworks</p></li><li><p>Multi-account setups with complex permission boundaries</p></li><li><p>Incident response when something breaks at the infrastructure level</p></li></ul><p>The abstraction trade-off is real. When a pod enters CrashLoopBackOff or a node group fails to scale, someone needs to know what they are looking at. An IDP reduces how often engineers hit those situations. It does not eliminate them. 
Teams should maintain baseline Kubernetes literacy even if developers do not use it daily.</p><p>Teams that delay building this layer tend to accumulate technical debt quietly. Helm chart configurations drift across services, cluster knowledge stays siloed with one or two people, and onboarding new engineers to the deployment process takes longer than it should.</p><h2>What to Look for Before Choosing an IDP for AWS</h2><p>Not all IDPs that claim AWS support are built the same way. Before committing, these are the questions worth asking:</p><p><strong>Does it provision EKS or just connect to one you already have?</strong> Connecting to an existing cluster means you still own the setup, configuration, and upgrade cycle. Provisioning means the platform handles the full lifecycle.</p><p><strong>Does it require developers to write Helm charts?</strong> Helm support for engineers who need it is fine. Requiring it from everyone means you have moved the complexity rather than removed it.</p><p><strong>Is observability included or a separate integration?</strong> Prometheus, Loki, and Grafana should come with the platform. Wiring observability after the fact is a project in itself.</p><p><strong>Does it provision managed AWS services from the same interface?</strong> RDS, S3, SQS, Elasticache &#8212; if these require a separate Terraform repo, you have two systems to maintain instead of one.</p><p><strong>Does your data stay in your AWS account?</strong> The vendor should not have direct access to your application data or infrastructure. Everything should run inside your own account.</p><p><strong>Can you eject if you need to?</strong> Vendor lock-in is a real consideration. If you stop using the platform, you should be able to take the infrastructure and run it independently.</p><p><strong>Does it fit your deployment model?</strong> SaaS, single-tenant, BYOC, and self-hosted have different infrastructure requirements. 
The platform should support your model without requiring custom tooling for each.</p><p>To see how LocalOps specifically handles these on AWS, the<a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> LocalOps developer documentation</a> covers environment provisioning, EKS setup, observability, BYOC, and the eject path in full detail.</p><h2>FAQs</h2><p><strong>1. What is the best internal developer platform for AWS teams?</strong></p><p>The best internal developer platform for AWS depends on your team size and whether you have a dedicated platform engineering team. Backstage is the most widely adopted open source internal developer platform, but it requires significant setup and maintenance investment, typically 6 to 12 months before developers are using it consistently. For teams that need AWS environments running quickly without dedicated DevOps overhead, a cloud native IDP like LocalOps provisions EKS, observability, and CI/CD inside your AWS account out of the box. Not sure what fits your stack?<a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> You can talk to our engineers</a> to help figure it out.</p><p><strong>2. What should an AWS internal developer platform actually do?</strong></p><p>An AWS internal developer platform should provision and manage EKS clusters, handle VPC and subnet configuration, wire CI/CD pipelines, set up observability, and manage access control, all inside your own AWS account. It should give developers a self-service path to deploy services without touching Kubernetes directly. If it only connects to an existing cluster rather than provisioning one, you still own most of the infrastructure complexity yourself.</p><p><strong>3. 
What does internal developer platform architecture look like on AWS?</strong></p><p>A production-grade internal developer platform architecture on AWS includes a dedicated VPC with private and public subnets, a managed EKS cluster, EC2 compute nodes, an Elastic Load Balancer, Prometheus and Grafana for observability, managed AWS services like RDS and S3, and a CI/CD pipeline wired to branch pushes. Each environment runs in isolation at the VPC level. For BYOC deployments, that entire architecture gets replicated inside the customer&#8217;s AWS account.</p><p><strong>4. Should you build an internal developer platform or buy one for AWS?</strong></p><p>Building gives you full control but requires significant engineering investment. SuprSend estimated that building their BYOC infrastructure setup in-house would have taken 10 to 12 engineer months. Buying a platform like LocalOps reduces that to under 30 minutes for a production-ready environment. Building makes sense if you have a dedicated platform team and specific requirements that off-the-shelf platforms cannot meet. Buying makes sense if your engineering team&#8217;s time is better spent on product work than on platform infrastructure.</p><p><strong>5. How does platform engineering relate to an internal developer platform?</strong></p><p>Platform engineering and internal developer platform adoption are growing in parallel, but they are not the same thing. Platform engineering is the practice of building and owning developer infrastructure as a product. An internal developer platform is what that practice produces: the actual system engineers use to deploy, provision environments, and access infrastructure. You can run an IDP without a formal platform engineering team. Many smaller teams buy a pre-built IDP specifically to avoid needing one.</p><h2>So Do You Need Both Kubernetes and an IDP on AWS?</h2><p>Yes.
But that is not really the right question.</p><p>Kubernetes and an internal developer platform are not competing for the same job. EKS handles container orchestration. An IDP handles how your engineers interact with that orchestration layer without needing to understand it on every deployment. Removing either one creates a gap the other cannot fill.</p><p>The more useful question is what happens when you have Kubernetes but no IDP above it. Environment setup stays manual. Deployment workflows differ across services. New engineers spend days getting cluster access before they contribute anything. The one person who understands the EKS setup becomes the path of least resistance for every infrastructure question on the team.</p><p>An IDP does not make Kubernetes disappear. It makes Kubernetes someone else&#8217;s problem &#8212; specifically, the platform layer&#8217;s problem &#8212; so your product engineers can stay focused on the product.</p><p>Developers are increasingly accessing Kubernetes indirectly through internal developer platforms rather than directly, according to a March 2026 CNCF report covering 12,500 developers across 100 countries.<a href="https://www.cncf.io/announcements/2026/03/24/cncf-and-slashdata-report-finds-cloud-native-community-reaches-nearly-20-million-developers/"> Cloud Native Computing Foundation</a> That shift is not happening because Kubernetes is being replaced. It is happening because teams have realised that exposing cluster complexity to every engineer is a choice, not a requirement.</p><p>On AWS, you have the tooling to make that choice cleanly. 
The question is whether you build the layer above EKS yourself or use a platform that already has it.</p><p>If you&#8217;re figuring out how this would fit into your setup, the LocalOps team can help you work through it:</p><p><strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Book a Demo</a> &#8594;</strong> Walk through how environments, deployments, and AWS infrastructure are handled in practice for your setup.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Get started for free</a> &#8594;</strong> Connect an AWS account and stand up an environment to see how it fits into your existing workflow.</p><p><strong><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Explore the Docs</a> &#8594;</strong> A detailed breakdown of how LocalOps works end-to-end, including architecture, environment setup, security defaults, and where engineering decisions still sit.</p><h2>Related Articles</h2><ol><li><p><a href="https://blog.localops.co/p/what-is-an-internal-developer-platform-idp">What Is an Internal Developer Platform? 
Definition, Core Components and Real-World Use Cases</a></p></li><li><p><a href="https://blog.localops.co/p/internal-developer-platform-build-vs-buy-cost-comparison">How Much Does It Cost to Build an Internal Developer Platform In-House vs Buying One?</a></p></li><li><p><a href="https://blog.localops.co/p/standardize-dev-staging-prod-internal-developer-platform?">How to Standardize Dev, Staging and Production Environments with an Internal Developer Platform</a></p></li></ol>]]></content:encoded></item><item><title><![CDATA[Why Your Team Is Outgrowing Heroku - And the Architecture That Comes Next]]></title><description><![CDATA[The cost, scaling, and compliance inflection points that push teams beyond Heroku, and how AWS-native platforms replace it without losing developer experience.]]></description><link>https://blog.localops.co/p/why-your-team-is-outgrowing-heroku</link><guid isPermaLink="false">https://blog.localops.co/p/why-your-team-is-outgrowing-heroku</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Tue, 31 Mar 2026 06:30:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!johs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!johs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!johs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 424w, https://substackcdn.com/image/fetch/$s_!johs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 848w, https://substackcdn.com/image/fetch/$s_!johs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 1272w, https://substackcdn.com/image/fetch/$s_!johs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!johs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png" width="2400" height="1583" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1583,&quot;width&quot;:2400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7957189,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192598190?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F349fd1a1-4702-458b-9e8e-3c4b6ca6169f_2400x1808.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!johs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 424w, https://substackcdn.com/image/fetch/$s_!johs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 848w, https://substackcdn.com/image/fetch/$s_!johs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 1272w, https://substackcdn.com/image/fetch/$s_!johs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9f38c67-1a77-48c1-a797-8353066275e1_2400x1583.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Outgrowing Heroku is not a sudden event. It is a pattern that develops over 12 to 18 months and becomes undeniable at a specific inflection point, usually when a cost review, a failed enterprise deal, or a production incident forces the conversation that has been quietly building.</p><p>Most engineering leaders recognize the pattern in retrospect. The Heroku bill was manageable at $500 per month. Then it was $2,000. Then it was $5,000 and growing, fragmented across dyno tiers, database add-ons, monitoring tools, and Redis instances, each scaling independently with no unified optimization lever. The CFO started asking questions that the CTO could not answer cleanly.</p><p>Or the pattern shows up in architecture. The product that started as a monolith now has background workers, event-driven components, and services that need to communicate privately. Heroku handles these patterns poorly. Workarounds accumulate. Senior engineers start spending time on platform constraints rather than product features.</p><p>Or it shows up in a deal. An enterprise prospect sends a security questionnaire. The infrastructure questions on VPC configuration, private networking, and IAM audit logging reveal that the team does not control the infrastructure on which their product runs.</p><p>These are not isolated problems. They are the predictable sequence of constraints that surface as SaaS products mature past what Heroku was designed to support. 
This guide covers each one, what causes it, and what the architecture that comes next actually looks like.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> The specific points at which Heroku&#8217;s pricing, scaling model, reliability, and architecture become constraints, and what the migration path to a modern alternative looks like</p><p><strong>Who it is for:</strong> CTOs and engineering leaders who recognize the Heroku constraints described above and are evaluating what comes next</p><p><strong>The architecture that replaces Heroku:</strong> AWS-native infrastructure with an Internal Developer Platform layer: infrastructure you own, developer experience you keep, direct AWS pricing with no platform margin</p><p><strong>Want to see exactly what a Heroku to AWS migration looks like?</strong> We have covered it in detail:<a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> localops.co/migrate-heroku-to-aws</a></p><h2><strong>The Pricing Inflection Point: When Heroku Becomes Financially Indefensible</strong></h2><p>Heroku&#8217;s pricing model is not inherently expensive at a small scale. The inflection point arrives at a specific combination of team size, service count, and traffic volume, and it arrives faster than most teams expect.</p><p>The structural problem is not the per-dyno cost in isolation. It is the compounding of the platform margin across every component of the stack simultaneously.</p><p>A team running five production services on Heroku is typically paying for: Standard-2X dynos at $50 per dyno per month, Heroku Postgres tiers per service, Heroku Redis tiers for caching and job queues, Papertrail or equivalent for log management, New Relic or Scout for APM, and Heroku Scheduler for background jobs. Each component carries a platform margin. Each component scales independently. 
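To see how these line items compound per service, here is a rough sketch. Only the $50 Standard-2X dyno price comes from Heroku's published pricing; the add-on figures are illustrative placeholders, not real quotes:

```python
# Rough sketch of how Heroku line items compound per service.
# Only the $50/month Standard-2X dyno price is from Heroku's published
# pricing; all add-on figures below are illustrative placeholders.
DYNO_STANDARD_2X = 50  # $/dyno/month

def monthly_cost_per_service(dynos=2, postgres=50, redis=15, logging=20, apm=25):
    """Estimate one service's monthly Heroku bill (hypothetical add-on tiers)."""
    return dynos * DYNO_STANDARD_2X + postgres + redis + logging + apm

def monthly_cost(services):
    """Each new service brings its own dynos *and* its own add-on stack."""
    return services * monthly_cost_per_service()

# Cost grows with service count, and every increment carries the full
# add-on stack, which is why invoices feel like they compound.
for n in (1, 5, 10):
    print(n, "services:", monthly_cost(n))
```

The point of the sketch is structural, not the specific dollar amounts: adding a service never adds just compute, it adds the whole margin-bearing stack.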
And each new service added to the product adds another compounding layer of platform cost.</p><p>The inflection point for most B2B SaaS teams arrives when the Heroku invoice stops being explainable as a simple infrastructure cost and starts requiring a detailed breakdown to justify. This typically happens between five and fifteen engineers, not because the team is large, but because product complexity at that team size drives service count past the point where add-on costs become significant.</p><p><strong>What the comparison looks like when migrating to AWS:</strong></p><p>The cost difference between Heroku and AWS via an Internal Developer Platform comes from two structural sources. First: the platform margin disappears. Compute, database, cache, and job queue resources run at AWS list pricing with no markup. Second: observability is included. LocalOps includes Prometheus, Loki, and Grafana pre-configured in every environment at no additional cost, eliminating the Papertrail, New Relic, and APM add-on line items entirely.</p><p>The size of the cost reduction depends on stack composition and scale. The direction is structural and does not change with scale. AWS infrastructure pricing without a platform margin is lower than PaaS pricing with one. For a model based on your current Heroku invoice,<a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> speak with the LocalOps team</a>.</p><p><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See what your Heroku setup costs on LocalOps + AWS</a></p><h2><strong>The True Total Cost of Heroku: What CTOs Miss in the Analysis</strong></h2><p>The Heroku cost analysis that surfaces in most infrastructure reviews covers only the invoice. 
The invoice is the smallest component of the true total cost.</p><p><strong>Component 1: The invoice cost.</strong></p><p>The visible portion. Dyno tiers, database add-ons, Redis instances, monitoring tools, scheduler add-ons, and log management. This is the number that appears on the credit card statement and in the finance team&#8217;s questions. It is real, and it compounds, but it is not the largest cost component for most Series A and beyond teams.</p><p><strong>Component 2: The engineering opportunity cost.</strong></p><p>The hours engineering teams spend working around Heroku&#8217;s limitations rather than building a product. This cost does not appear on any invoice. It accumulates in recognizable patterns.</p><p>A senior architect scopes a feature differently because the technically correct implementation requires a storage pattern that Heroku handles poorly. A backend engineer spends three days building a workaround for a networking limitation that VPC-native infrastructure would handle natively. A team defers a microservices decomposition they know is right for the product because the operational complexity on Heroku is prohibitive without the underlying networking primitives.</p><p>None of these decisions appears as an infrastructure cost. All of them are real costs, paid in engineering time, in technical debt, and in product decisions made to serve the platform rather than the customer.</p><p><strong>Component 3: The compliance revenue cost.</strong></p><p>For B2B SaaS teams with an enterprise go-to-market motion, this is frequently the largest cost component and the least visible until an enterprise deal surfaces.</p><p>Enterprise procurement processes require infrastructure controls that Heroku cannot provide: VPC isolation, private networking between services, IAM-based access control with audit logging, and data residency in a specified region. 
When the security questionnaire arrives and the honest answer to every infrastructure question is &#8220;we don&#8217;t control that,&#8221; the deal goes into extended security review. Some deals never return from it.</p><p>The revenue impact of infrastructure compliance gaps is difficult to quantify precisely before it surfaces, and difficult to ignore once it does. For teams building toward enterprise, it is the cost component that makes the Heroku migration decision strategic rather than operational.</p><p><strong>The total cost calculation:</strong></p><p>When CTOs present the infrastructure transition to their board or CEO, the analysis that generates alignment is the one that includes all three components, not just the infrastructure invoice. Invoice savings are structural and begin immediately. Engineering opportunity cost recovery is directional and grows with team size. Compliance revenue unlock is the component that makes the migration financially obvious for any B2B SaaS team with enterprise ambitions.</p><p><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Walk through the cost comparison with a LocalOps engineer.</a></p><h2><strong>The Vertical Scaling Problem: Why Heroku&#8217;s Dyno Model Breaks at Scale</strong></h2><p>Heroku&#8217;s scaling model is vertical. When an application needs more capacity, the answer is always the same: upgrade to a larger dyno tier or add more dynos. The unit of scale is the dyno. The mechanism is manual.</p><p>This model works for linear, predictable workloads where traffic grows steadily and scaling decisions can be made deliberately. It does not work for the traffic patterns that characterize most SaaS applications at the growth stage.</p><p><strong>Why vertical scaling fails for high-concurrency APIs:</strong></p><p>High-concurrency APIs do not have linear traffic. 
They experience request bursts driven by user behavior, webhook deliveries, batch processing jobs, and external events. A payment processor webhook that triggers processing for 10,000 accounts simultaneously. A B2B application that sees 80% of its daily traffic between 9 am and 12 pm in a single timezone. A consumer application that spikes 5x normal volume during a marketing campaign.</p><p>Heroku&#8217;s response to all of these patterns is identical: manually add more dynos before the spike, pay for them whether or not the traffic materializes, and manually remove them afterwards. There is no event-driven scaling that responds to real traffic signals. There is no automatic scale-down when traffic drops. The choice is between continuously over-provisioning, paying for idle capacity, or under-provisioning and accepting degraded performance during spikes.</p><p><strong>How Kubernetes-based alternatives handle burst traffic differently:</strong></p><p>Kubernetes horizontal pod autoscaling responds to real workload signals (CPU utilization, memory pressure, request queue depth, and custom application metrics) automatically and in seconds. When a traffic spike arrives, the platform scales out to handle it. When traffic drops, it scales back in. Teams pay for actual compute consumption rather than for the tier ceiling required to handle the peak.</p><p>For high-concurrency APIs, the operational difference is significant. Kubernetes can scale a service from two instances to twenty in under two minutes in response to a traffic spike, then scale back to two when the spike passes. Heroku requires a manual decision and a manual dyno configuration, and accepts the cost of over-provisioning during the waiting period.</p><p>LocalOps runs workloads on EKS with horizontal pod autoscaling configured by default. No manual scaling decisions. No dyno tier upgrades. 
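The scaling decision behind this is a simple, documented rule: the horizontal pod autoscaler sets desired replicas in proportion to how far the observed metric sits from its target, and skips the change when it is close enough. A minimal sketch in Python (the 10% tolerance mirrors the HPA's default; the example numbers are illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.10):
    """Sketch of the Kubernetes HPA scaling rule:
    desired = ceil(current * current_metric / target_metric),
    with no scaling event when the ratio is within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(1.0 - ratio) <= tolerance:
        return current_replicas  # close enough to target: leave replicas alone
    return math.ceil(current_replicas * ratio)

# CPU at 90% against a 50% target: 2 pods scale out to 4.
print(desired_replicas(2, 0.90, 0.50))   # 4
# Load drops to 10%: 20 pods scale back in to 4, automatically.
print(desired_replicas(20, 0.10, 0.50))  # 4
```

This is the contrast with dyno tiers: capacity tracks the measured metric in both directions, with no manual step in either.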
Services scale based on actual traffic signals and scale back automatically when the load drops.</p><p><a href="https://localops.co/features/auto-scaling?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how autoscaling works on LocalOps.</a></p><h2><strong>What Happens to Reliability When a SaaS Product Outgrows Heroku</strong></h2><p>Reliability degradation on Heroku follows a predictable sequence as applications grow past what the Standard dyno tier was designed to support.</p><p><strong>The cold start problem.</strong></p><p>Eco dynos on Heroku sleep after 30 minutes of inactivity and require a cold start when traffic arrives. For production applications, this means the first request after a quiet period experiences significantly elevated response time. For applications with consistent traffic, this is manageable. For applications with variable traffic patterns, common in B2B SaaS, cold starts create periodic reliability events that are visible to customers and difficult to eliminate without upgrading to an always-on dyno tier at higher cost.</p><p><strong>The resource ceiling problem.</strong></p><p>Standard-2X dynos provide 1GB of memory. For applications with growing data processing requirements, ML inference, or complex query patterns, this ceiling creates memory pressure that manifests as intermittent performance degradation and occasional dyno restarts. The upgrade path to Performance dynos is a significant cost jump with no intermediate steps.</p><p><strong>The shared infrastructure problem.</strong></p><p>Heroku&#8217;s dynos run on shared infrastructure. Noisy neighbor effects, where other tenants on the same physical infrastructure consume resources that affect application performance, are documented and acknowledged by Heroku but are not preventable by teams running on the platform. 
For SaaS applications with customer-facing SLAs, this is an infrastructure risk that cannot be mitigated without leaving the platform.</p><p><strong>The safest migration path:</strong></p><p>The migration path that minimizes customer-facing reliability risk runs both environments in parallel before any DNS cutover. The new environment, provisioned by LocalOps in the team&#8217;s own AWS account, receives all the verification traffic while Heroku remains the production environment. Database migration runs with AWS DMS replication to keep both databases synchronized. DNS cutover happens only after the new environment has handled real traffic patterns for a sufficient observation period.</p><p>This approach means there is no forced downtime window. Heroku stays live throughout. The cutover is a DNS switch, not a service migration under pressure. LocalOps&#8217;s<a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> white-glove migration service</a> handles this process end-to-end for teams that prefer not to manage it themselves.</p><p>Read the zero-downtime migration playbook.</p><h2><strong>Why Heroku Is Architecturally Incompatible With Modern SaaS</strong></h2><p>The deepest reason engineering teams outgrow Heroku is not cost and not compliance. It is architecture. Heroku was designed around a specific application model (a single web application with stateless processes and external services for persistence), and that model constrains the architectural patterns that production SaaS applications require as they mature.</p><p><strong>Microservices.</strong></p><p>Heroku&#8217;s model is built around individual applications. Each application is a separate Heroku app with its own dyno configuration, add-ons, environment variables, and deployment pipeline. 
As a product decomposes into microservices, managing the relationships between these Heroku apps (routing, service discovery, shared configuration, and deployment coordination) becomes increasingly complex without the underlying networking primitives that VPC-native infrastructure provides.</p><p>Private communication between Heroku applications requires going over the public internet. There is no service mesh. There is no private DNS. Services communicate through public endpoints that must be secured at the application layer rather than the network layer. For microservices architectures where internal services should never be publicly accessible, this is a fundamental mismatch.</p><p><strong>Event-driven systems.</strong></p><p>Event-driven architectures depend on reliable message delivery, consumer group management, and dead-letter queue handling. Heroku&#8217;s add-on marketplace offers CloudAMQP for RabbitMQ and various Kafka-as-a-service options, but these run outside the Heroku networking model, require external service accounts, and add cost and operational complexity that compounds with every event-driven component added.</p><p>AWS-native services (SQS, SNS, EventBridge, and MSK) run inside the team&#8217;s VPC with native IAM integration, no external service accounts, and direct pricing with no platform margin. The operational model for event-driven systems on AWS is fundamentally cleaner than assembling it from Heroku add-ons.</p><p><strong>Sidecar patterns.</strong></p><p>Modern application deployment patterns increasingly rely on sidecars: containers running alongside the main application container to handle concerns like logging, metrics collection, service mesh proxying, and secret rotation. Heroku&#8217;s application model does not support multi-container deployments. The sidecar pattern does not exist on Heroku.</p><p>On Kubernetes, which LocalOps runs on EKS, sidecars are a first-class pattern. 
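As an illustration of what a pod with a sidecar looks like, here is a sketch of a pod spec rendered as a Python dict rather than YAML. All names and images are hypothetical placeholders, not a real deployment:

```python
# Hypothetical pod spec showing the sidecar pattern: the app container and a
# log-shipping sidecar live in one pod, sharing its network namespace and
# volumes. Container names and images are placeholders for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "api"},
    "spec": {
        "containers": [
            {"name": "app", "image": "example/api:1.0",
             "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}]},
            {"name": "log-shipper", "image": "example/log-shipper:1.0",
             "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}]},
        ],
        "volumes": [{"name": "logs", "emptyDir": {}}],
    },
}

# Both containers mount the same log directory: the app writes, the sidecar ships.
assert len(pod["spec"]["containers"]) == 2
```

The shared `emptyDir` volume is what makes the pattern work: the sidecar reads what the app writes without either container knowing about the other's internals.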
Logging agents, metrics collectors, Envoy proxies for service mesh, and secret management sidecars all run alongside application containers within the same pod. This is the deployment model that modern SaaS architectures assume.</p><p><strong>What platforms support these architectures natively on AWS:</strong></p><p>AWS-native Internal Developer Platforms running on Kubernetes support all three patterns natively. Private networking between services through VPC. Event-driven architectures through native AWS services with IAM integration. Sidecar containers through Kubernetes pod specifications. LocalOps provides this infrastructure foundation, provisioned automatically, configured to AWS Well-Architected standards, running in the team&#8217;s own AWS account.</p><p><a href="https://docs.localops.co/environment/services/micro-services?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps supports microservices on AWS and modern architectures.</a></p><h2><strong>The Architecture That Comes Next</strong></h2><p>The architecture that replaces Heroku for scaling SaaS teams has five consistent characteristics.</p><p><strong>Infrastructure in the team&#8217;s own cloud account.</strong> VPC isolation. Private networking between services. IAM-based access control with audit logging. Data residency in a specified region. The compliance foundation that enterprise deals require.</p><p><strong>Developer experience that does not regress.</strong> Git-push deployments. Self-serve environment management. Preview environments on every pull request. Developers deploy without tickets, without infrastructure knowledge, without platform team involvement. The autonomy that made Heroku valuable survives the migration.</p><p><strong>Observability built into the platform.</strong> Prometheus for metrics. Loki for log aggregation. Grafana for unified dashboards and alerting. Available from the first deployment at no additional cost. 
Not assembled from add-ons after the fact.</p><p><strong>Horizontal autoscaling by default.</strong> Workloads scale based on real traffic signals automatically. No manual dyno configuration. No over-provisioning for anticipated peaks. Cost proportional to actual usage.</p><p><strong>No new vendor lock-in.</strong> Standard Kubernetes in the team&#8217;s own AWS account. Infrastructure that continues running independently of any platform vendor. An exit path that is always open.</p><p>LocalOps provisions all five as the default configuration. Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and the full Prometheus + Loki + Grafana observability stack, automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><blockquote><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. 
Partnering with LocalOps has been one of our best technical decisions.&#8221;</em> <strong>&#8211; Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man-months of effort, all of which LocalOps has saved for us.&#8221;</em> <strong>&#8211; Gaurav Verma, CTO and Co-founder, SuprSend</strong></p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get started for free - first environment live in under 30 minutes.</a></strong></p></blockquote><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>At what point does Heroku&#8217;s pricing become financially indefensible?</strong></p></li></ol><p>The inflection point varies by stack composition but consistently arrives when service count grows past five and add-on costs begin compounding across multiple services simultaneously. The signal is not the absolute invoice amount; it is when the invoice becomes difficult to attribute cleanly across services and difficult to optimize without changing the underlying platform. For most B2B SaaS teams, this happens between five and fifteen engineers, not because of team size directly, but because product complexity at that stage drives the service count and add-on accumulation that makes the cost structure opaque.</p><ol start="2"><li><p><strong>How should a CTO calculate the true total cost of staying on Heroku?</strong></p></li></ol><p>The full calculation has three components. Invoice cost: dyno tiers, database add-ons, Redis, monitoring tools, and scheduler, totaled across all production services. Engineering opportunity cost: hours spent on platform workarounds, architectural compromises made to serve Heroku&#8217;s limitations, and deferred technical decisions that accumulate as debt. 
Compliance revenue cost: Deals are delayed or lost because the infrastructure cannot satisfy enterprise security questionnaires. For most Series A and beyond B2B SaaS teams with enterprise ambitions, the compliance revenue component is the highest and least visible cost, and the one that makes the migration decision strategically obvious when it surfaces.</p><ol start="3"><li><p><strong>Why does Heroku&#8217;s dyno scaling fail for high-concurrency production workloads?</strong></p></li></ol><p>Heroku scales vertically in fixed tiers and requires manual intervention to adjust capacity. There is no event-driven autoscaling that responds to CPU, memory, or request queue signals automatically. For high-concurrency APIs that experience traffic bursts, common in B2B SaaS with peak business-hours usage patterns, the choice is between over-provisioning continuously or accepting degraded performance during spikes. Kubernetes horizontal pod autoscaling on EKS responds to real traffic signals in seconds, scales to handle burst load, and scales back automatically when traffic drops. Teams pay for actual compute consumption rather than for the tier ceiling required to handle the peak.</p><ol start="4"><li><p><strong>What is the safest migration path when a production application has outgrown Heroku?</strong></p></li></ol><p>The safest path runs both environments in parallel before any DNS cutover. Provision the new AWS environment with LocalOps and verify the full application stack, web services, background workers, scheduled jobs, and third-party integrations against the new environment before moving any production traffic. Use AWS DMS to replicate database changes from Heroku Postgres to RDS in near-real time during the transition period. Lower DNS TTL 48 hours before the planned cutover. Switch DNS only after the new environment has handled real traffic patterns for a sufficient observation period, with Heroku remaining live throughout. 
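The parallel run described above hinges on continuous database replication. A sketch of the DMS task behind it, built with boto3's `create_replication_task` parameters, might look like the following. The ARNs, task identifier, and table-mapping rule are placeholder assumptions, not values from the article:

```python
import json

def dms_task_request(source_arn, target_arn, instance_arn):
    """Build the request for an AWS DMS replication task that does a full
    load of Heroku Postgres into RDS and then streams ongoing changes (CDC),
    keeping both databases in sync until DNS cutover. ARNs are placeholders."""
    return {
        "ReplicationTaskIdentifier": "heroku-to-rds",  # hypothetical name
        "SourceEndpointArn": source_arn,       # Heroku Postgres endpoint
        "TargetEndpointArn": target_arn,       # RDS Postgres endpoint
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc = initial copy plus continuous replication
        "MigrationType": "full-load-and-cdc",
        "TableMappings": json.dumps({
            "rules": [{
                "rule-type": "selection", "rule-id": "1",
                "rule-name": "all-tables",
                "object-locator": {"schema-name": "public", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    }

# With real ARNs, this would be submitted via boto3, e.g.:
#   import boto3
#   boto3.client("dms").create_replication_task(**dms_task_request(src, tgt, inst))
req = dms_task_request("arn:src", "arn:tgt", "arn:inst")
```

The `full-load-and-cdc` migration type is what allows Heroku to stay live throughout: writes that land during and after the initial copy keep flowing to RDS until the DNS switch.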
For teams that prefer not to manage this themselves, LocalOps&#8217;s white-glove migration handles the process end-to-end.</p><ol start="5"><li><p><strong>Why is Heroku incompatible with microservices and event-driven architectures?</strong></p></li></ol><p>Heroku&#8217;s application model assumes a single web application with stateless processes. Private communication between separate Heroku applications requires traversing the public internet; there is no VPC, no service mesh, and no private DNS. Event-driven architectures assembled from Heroku add-ons run outside the Heroku networking model, require external service accounts, and add operational complexity with every component added. Sidecar container patterns (logging agents, metrics collectors, and service mesh proxies) are not supported because Heroku does not support multi-container deployments. On Kubernetes running inside a VPC, all three patterns are first-class: private inter-service networking, native AWS event services with IAM integration, and pod-level sidecar support.</p><ol start="6"><li><p><strong>What does the architecture look like after migrating from Heroku to AWS?</strong></p></li></ol><p>The post-migration architecture runs on EKS inside a dedicated VPC with private subnets, least-privilege IAM policies, and encrypted secrets via AWS Secrets Manager, all provisioned automatically by LocalOps. Developers push to a configured branch, and the application deploys. Services communicate over private networking. Prometheus collects metrics automatically. Loki aggregates logs from all services. Grafana provides unified dashboards from day one. Horizontal autoscaling responds to real traffic signals without manual intervention. The developer experience is identical to Heroku. 
The infrastructure underneath is the team&#8217;s own AWS account, with no platform margin, no compliance ceiling, and no vendor lock-in to unwind.</p><h2><strong>Key Takeaways</strong></h2><p>Engineering teams outgrow Heroku in a predictable sequence. Cost predictability breaks down as service count grows and add-on costs compound. Infrastructure control becomes a compliance requirement when enterprise deals arrive. Vertical scaling fails for variable-traffic workloads. Reliability degrades as applications push against Standard dyno limits. And modern architectural patterns (microservices, event-driven systems, and sidecars) hit fundamental platform incompatibilities.</p><p>The architecture that comes next is not more complex to operate. It is different in model: infrastructure the team owns, running on AWS, with a platform layer that preserves the developer experience Heroku provided. For engineering teams at Series A and beyond, this is the foundation that supports the next stage of growth rather than constraining it.</p><p>The teams that navigate this transition well are the ones who recognize the sequence before any single constraint becomes a crisis, and move from a position of clarity rather than under pressure.</p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers review your current Heroku setup and walk through what the transition looks like for your specific stack.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First environment on AWS in under 30 minutes. 
No credit card required.</p><p><strong><a href="https://localops.co/vs/heroku-alternative?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Migration Guide &#8594;</a></strong> Full walkthrough, database migration, environment setup, DNS cutover.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.localops.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[🎉 New release: Switch organizations/teams & New CLI update]]></title><description><![CDATA[Handling multiple engineering teams and environments, just got easier in your Internal developer platform.]]></description><link>https://blog.localops.co/p/new-release-switch-organizationsteams</link><guid isPermaLink="false">https://blog.localops.co/p/new-release-switch-organizationsteams</guid><dc:creator><![CDATA[Anand]]></dc:creator><pubDate>Tue, 31 Mar 2026 06:21:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TKvM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We released quite a few enhancements today.</p><p>If you run multiple products and multiple engineering teams handling their own qa, uat and production environments, you will love this update.</p><h3>Switch between 
multiple organizations/teams:</h3><p>Users can now belong to multiple organizations using the same login/email address. And they can easily switch between organizations from the top-left menu like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TKvM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TKvM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 424w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 848w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 1272w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TKvM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png" width="1412" height="985" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:985,&quot;width&quot;:1412,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:548426,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192689175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TKvM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 424w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 848w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 1272w, https://substackcdn.com/image/fetch/$s_!TKvM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7f2e296-9bd5-4d53-9db3-df5a448ffee1_1412x985.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Each organization can have its own:</p><ul><li><p>GitHub org</p></li><li><p>ECR Registries</p></li><li><p>Environments</p><ul><li><p>qa</p></li><li><p>uat</p></li><li><p>production</p></li></ul></li><li><p>Deployments</p></li></ul><h3>Enhanced CLI Login:</h3><p>The LocalOps CLI now uses your existing web login session to authenticate. 
</p><p>To log in, just type this in your terminal:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:null}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">$ ops login</code></pre></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ylUn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ylUn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 424w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 848w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 1272w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ylUn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png" width="1456" height="1322" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1322,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1032206,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192689175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!ylUn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 424w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 848w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 1272w, https://substackcdn.com/image/fetch/$s_!ylUn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64d0e0ca-cc6b-4cc6-93c2-af66c1eddc62_1864x1692.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You will get a link to click, which opens the browser. If you&#8217;re already logged in to the LocalOps console (console.localops.co), you will see an authorization form asking you to let the CLI use your current login. Once you authorize, boom! 
You can access LocalOps services via CLI.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pjex!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pjex!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 424w, https://substackcdn.com/image/fetch/$s_!pjex!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 848w, https://substackcdn.com/image/fetch/$s_!pjex!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 1272w, https://substackcdn.com/image/fetch/$s_!pjex!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pjex!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png" width="1160" height="1040" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1040,&quot;width&quot;:1160,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:102224,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192689175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pjex!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 424w, https://substackcdn.com/image/fetch/$s_!pjex!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 848w, https://substackcdn.com/image/fetch/$s_!pjex!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 1272w, https://substackcdn.com/image/fetch/$s_!pjex!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133c2275-998e-4086-a022-52f0a68be04f_1160x1040.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You will need to update the CLI version to v3.0.0 to get this update. Check out <a href="https://docs.localops.co/cli/install-macos">https://docs.localops.co/cli/install-macos</a> (for macOS) or <a href="https://docs.localops.co/cli/install-windows">https://docs.localops.co/cli/install-windows</a> (for Windows) or <a href="https://docs.localops.co/cli/install-linux">https://docs.localops.co/cli/install-linux</a> (for Linux) to learn more.</p><p>Reach out to us to get a quick tour of LocalOps - <a href="https://go.localops.co/tour">https://go.localops.co/tour</a>. 
</p><p>Or sign up for free at <a href="https://console.localops.co/signup">https://console.localops.co/signup</a>.</p><p>Cheers.</p>]]></content:encoded></item><item><title><![CDATA[How Internal Developer Platforms Enable Control Across Multi-Cloud, Regions, and Microservices]]></title><description><![CDATA[Bring consistency to multi-cloud deployments and keep visibility, security, and control as your systems scale.]]></description><link>https://blog.localops.co/p/manage-multi-cloud-multi-region-deployments-with-idp</link><guid isPermaLink="false">https://blog.localops.co/p/manage-multi-cloud-multi-region-deployments-with-idp</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Sun, 29 Mar 2026 05:30:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LaCc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LaCc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LaCc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 424w, https://substackcdn.com/image/fetch/$s_!LaCc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!LaCc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!LaCc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LaCc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png" width="992" height="1200" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:992,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LaCc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 424w, https://substackcdn.com/image/fetch/$s_!LaCc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!LaCc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!LaCc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb98781b-fa2c-44f6-b565-843b3ed5712b_992x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Multi-cloud and multi-region setups look simple at first. 
In practice, they don&#8217;t stay that way.</p><p>A team runs the same service on two cloud providers. It works in the beginning. Then small differences start to show up. Load balancers behave differently. IAM policies don&#8217;t map cleanly. Network defaults are not the same. Teams add fixes to make things work, and those fixes stay.</p><p>After a while, staging and production stop matching. Debugging depends on which cloud the service is running on.</p><p>At that point, the problem is not provisioning. It is keeping environments consistent when the underlying systems behave differently.</p><p>Traditional DevOps workflows struggle here. They assume environments behave the same. In multi-cloud setups, they don&#8217;t. Pipelines get duplicated. Infrastructure definitions drift. Ownership becomes unclear.</p><p>An internal developer platform changes how this is handled. It does not try to hide the differences between cloud providers. It adds structure to how environments are created, updated, and maintained.</p><p>That shift is what keeps environments consistent as systems grow.</p><h2>TL;DR</h2><ul><li><p>Multi-cloud breaks on four gaps: provisioning inconsistency, credential sprawl, environment drift, and lack of clear visibility into what is running in each environment</p></li><li><p>Traditional DevOps workflows slow down as systems grow. Infrastructure changes move through tickets, and simple environment updates take days</p></li><li><p>An internal developer platform standardizes how environments are defined and requested. Developers work with a consistent model, while the platform handles provider-specific differences</p></li><li><p>Multi-region deployments need more than reusable templates. 
Data residency, region-specific credentials, and environment parity need to be enforced at creation time</p></li><li><p>Managing microservices across environments fails on dependency visibility, not deployment mechanics</p></li><li><p>Preview environments fail when treated as full clones of production. Most systems cannot support that model reliably</p></li><li><p>Security needs to be part of environment provisioning. Adding it later leads to inconsistent policies and access gaps</p></li></ul><h2>Why Multi-Cloud and Multi-Region Deployments Break Down: The Four Gaps</h2><p>Multi-cloud setups break in predictable ways. Most teams running these systems run into the same four gaps.</p><h4>Provisioning inconsistency</h4><p>The same service is not provisioned the same way across providers.</p><p>An API in an AWS internal developer platform setup might use an ALB with a 60-second idle timeout. The same service on GCP sits behind a Cloud Load Balancer with a<a href="https://cloud.google.com/load-balancing/docs/backend-service"> 30-second backend timeout default</a>. Health check intervals and thresholds differ too. These are not configuration preferences. They affect how the service handles slow clients, retries, and upstream failures.</p><p>Over time, provider-specific fixes get added to patch these differences. Environments stop matching. What works in one cloud fails silently in another.</p><h4>Credential sprawl</h4><p>IAM models do not align across cloud providers.</p><p>AWS uses IAM roles with policy documents. GCP uses service accounts with IAM bindings. Azure uses managed identities with role assignments. None of these map cleanly to each other. When teams manage them independently, permissions get widened to unblock deployments. An S3 read policy becomes s3:* because narrowing it requires time nobody has. 
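</p><p>For contrast, the narrow policy that gets widened away is not large. A least-privilege S3 read policy looks roughly like this (the bucket name is a placeholder):</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;json&quot;,&quot;nodeId&quot;:null}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::app-uploads",
        "arn:aws:s3:::app-uploads/*"
      ]
    }
  ]
}</code></pre></div><p>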
A GCP service account gets project-level editor access because the specific resource-level permission took too long to figure out.</p><p>Without a consistent access control layer sitting above all three providers, permissions become impossible to audit at scale. You end up with a spreadsheet mapping roles to resources across three different IAM models, and it is out of date the moment someone widens a permission to unblock a deployment.</p><h4>Environment drift</h4><p>Environments diverge at the infrastructure level over time.</p><p>A database parameter group tuned for performance in us-east-1 never gets applied to eu-west-1. A Kubernetes node pool configuration updated in staging never propagates to production. A security group rule added manually in the AWS console does not exist in the Terraform state. Each change is small. Collectively they mean staging and production are running on different infrastructure even when the application code is identical.</p><p>Drift is usually discovered during failures. A latency spike in eu-west-1 looks like a code problem for two hours before someone checks the RDS parameter group and finds it was never updated after the us-east-1 tuning.</p><h4>Lack of visibility</h4><p>Teams lose track of what is actually running.</p><p>At 30 services across three clouds, answering a basic question like &#8220;what version is in production right now&#8221; requires checking the AWS console, the GCP deployment history, a Terraform state file, and maybe a Slack message from two weeks ago. None of these agree with each other because none of them are the source of truth. They are all partial records of different parts of the system.</p><p>The problem is not that the data does not exist. It is that it lives in too many places to be useful during an incident. 
By the time you have reconstructed the state of the system, the debugging window has already cost you an hour.</p><h2>Why Traditional DevOps Models Collapse at Multi-Cloud Scale</h2><p>Traditional DevOps works when environments are consistent. Multi-cloud breaks that assumption.</p><h4>Pipelines diverge</h4><p>Teams maintain separate CI/CD pipelines that evolve differently across providers.</p><p>An AWS pipeline pushes container images to ECR, deploys to EKS using kubectl, and runs health checks against an ALB target group. A GCP pipeline pushes to Artifact Registry, deploys to GKE using Helm, and checks against a Cloud Load Balancing backend service. Both deploy the same application. The deployment logic shares nothing. Rollback mechanisms differ. Environment variable injection differs. A new engineer moving between teams spends the first week learning the pipeline instead of shipping.</p><h4>Infrastructure definitions split</h4><p>Infrastructure as code does not prevent variation.</p><p>Terraform modules fork across providers. An AWS module defines a VPC with specific CIDR ranges, subnet layouts, and NAT gateway configuration. The GCP equivalent uses a different network model entirely because GCP VPCs are global, not regional. The fork starts as a necessary difference. Over time, unrelated changes get applied to one module and not the other. The divergence goes undocumented. Six months later nobody knows which differences are intentional and which are drift. Without a strong internal developer platform, these differences compound silently.</p><p>This is the hidden cost teams discover when they attempt to build an internal developer platform on top of existing IaC tooling. The modules exist but the consistency layer does not.</p><h4>Ownership becomes fragmented</h4><p>Responsibility spreads across teams with no technical enforcement layer.</p><p>A developer needs a new environment with a specific RDS instance class and a particular security group configuration. 
They file a ticket. Three days later they get an environment with a different instance class because the platform team defaulted to what they normally provision. The misconfiguration does not surface until a load test shows the environment cannot handle the expected throughput.</p><p>This is the core problem platform engineering and internal developer platforms are meant to solve: enforcing standards through the system, not through documentation.</p><p>Application teams implement standards differently in practice because nothing in the toolchain enforces alignment at provisioning time. Changes move through tickets instead of a self-service system with guardrails.</p><h4>Feedback loops slow down</h4><p>Debugging becomes environment-specific in a way that is expensive to resolve.</p><p>A deployment passes all tests in us-east-1 staging and fails in eu-west-1 production with a connection timeout. The timeout traces back to a security group rule that exists in us-east-1 but was never applied in eu-west-1 because the Terraform state for that region was last updated four months ago. Finding this requires manually comparing security group rules across two regions, two Terraform state files, and the actual AWS console output, none of which are guaranteed to match.</p><p>Without a canonical environment definition to compare against, every cross-environment debugging session starts from reconstructing what the environment is supposed to look like. That takes time and is often incomplete.</p><p>Traditional DevOps does not fail because the approach is wrong. It fails because coordination overhead grows faster than the system can handle.</p><h2>What is an Internal Developer Platform?</h2><p>An internal developer platform is an abstraction layer above cloud infrastructure. It exposes a consistent interface for provisioning environments, deploying services, and managing configuration across cloud providers. 
The underlying cloud-specific resources (EKS, GKE, AKS, RDS, Cloud SQL) are provisioned by the platform based on which cloud account the environment targets. Engineers interact with the platform, not directly with cloud APIs.</p><p>The platform owns four things: provisioning logic, credential management, state tracking, and environment lifecycle. These need to work as one system for the abstraction to hold.</p><p>For a deeper breakdown of how internal developer platforms are defined and where they fit in modern engineering teams, read the<a href="https://blog.localops.co/p/what-is-an-internal-developer-platform-idp?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> What Is an Internal Developer Platform</a> post.</p><h2>How Does an IDP Handle Environment Provisioning Across Cloud Providers?</h2><p>Cloud providers do not share infrastructure primitives. A VPC in AWS is not the same as a VPC in GCP. EKS and GKE both run Kubernetes but differ in how node pools, IAM, and networking are configured. Writing separate provisioning logic per provider is how teams end up with forked infrastructure definitions that diverge silently over time.</p><p>An internal developer platform solves this through four layers that work together.</p><h4>Environment model</h4><p>The platform defines what an environment is, independent of any cloud provider. A developer requests an environment by specifying a target cloud account and a region. They do not specify cloud resources directly. The environment model describes what needs to exist: a network layer, a compute layer, an orchestration layer, an observability layer. The platform owns that definition.</p><h4>Translation layer</h4><p>The platform translates the environment model into provider-specific resources at provisioning time. The same environment definition produces the correct networking, compute, and Kubernetes resources for whichever cloud account it targets. The developer interface does not change across providers.
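</p><p>A rough sketch of the idea. The resource names and mapping below are illustrative only, not any platform&#8217;s real schema; a production translation layer maps to far more provider detail than this:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentSpec:
    """Cloud-agnostic environment model: what must exist, not how."""
    name: str
    region: str
    needs: tuple = ("network", "compute", "orchestration", "observability")

# Illustrative per-provider translation tables (made-up resource kinds).
PROVIDER_RESOURCES = {
    "aws": {"network": "vpc", "compute": "ec2_node_group",
            "orchestration": "eks_cluster", "observability": "cloudwatch"},
    "gcp": {"network": "global_vpc", "compute": "gce_node_pool",
            "orchestration": "gke_cluster", "observability": "cloud_monitoring"},
}

def translate(spec: EnvironmentSpec, provider: str) -> list:
    """Turn the provider-neutral spec into provider-specific resource kinds."""
    table = PROVIDER_RESOURCES[provider]
    return [f"{table[layer]}:{spec.name}-{spec.region}" for layer in spec.needs]

staging = EnvironmentSpec(name="staging", region="us-east-1")
print(translate(staging, "aws"))  # same spec, two providers
print(translate(staging, "gcp"))
```

<p>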
Provider differences are handled inside the platform, not distributed across individual team workflows.</p><h4>Orchestration</h4><p>Provisioning is not just creating resources in isolation. Network, compute, and Kubernetes components have dependencies. The platform provisions them in the correct order, wires them together, and validates that the environment is functional before marking it ready. Observability tooling gets installed and connected to the environment at this stage, not added separately afterward. Logs and metrics are available from the first deployment. This matters because debugging cross-environment differences requires consistent instrumentation across all environments. When observability is set up manually per environment, one environment ends up better instrumented than another.</p><h4>Lifecycle management</h4><p>Application-specific cloud resources (databases, queues, storage buckets) are defined at the service level rather than the environment level. The platform provisions them when a service is created and removes them when the service is deleted. Resource lifecycle stays coupled to service lifecycle. This prevents orphaned infrastructure from accumulating across environments over time, which is one of the more common sources of unexpected cloud spend in multi-environment setups.</p><p>If you want to see what a full environment actually includes, you can explore the <a href="https://docs.localops.co/environment/inside?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps breakdown of what&#8217;s inside an environment</a>.</p><h2>How Do You Manage Multi-Region Deployments Inside an IDP?</h2><p>Running the same environment definition in two regions is straightforward.
What breaks after provisioning is keeping environments equivalent across regions, enforcing where data is allowed to exist, and preventing credentials from one region being used to provision resources in another.</p><p>An internal developer platform handles multi-region through three mechanisms.</p><h4>Account-level region constraints</h4><p>Region selection does not happen at environment creation time. It happens when a cloud account is connected to the platform. The account configuration determines which regions are available for environments targeting that account. An account configured for EU data residency only surfaces EU regions. A developer creating an environment against that account cannot select a US region because the platform does not present it as an option. Data residency gets enforced at the infrastructure level, not through documentation or process that depends on developers remembering the constraint.</p><h4>Parity through a shared environment definition</h4><p>Two environments provisioned from the same definition in different regions come out structurally identical: same network layout, same cluster configuration, same observability stack. The platform guarantees this at creation time. What breaks parity is changes made outside the platform, a configuration edit applied directly in a cloud console in one region that never reaches the other. The platform has no visibility into changes that bypass it. Parity only holds for what the platform provisions and manages. Any change applied outside the platform becomes undocumented drift that will surface as a debugging problem later.</p><h4>Explicit cross-region dependencies</h4><p>A service calling a dependency in another region introduces latency and potentially crosses a compliance boundary. Within an environment, services communicate through stable internal references that the platform assigns and maintains. These references do not change when infrastructure is updated. 
Cross-environment or cross-region dependencies cannot use these internal references. They need to be configured explicitly in the service&#8217;s configuration. This keeps cross-region calls visible in configuration rather than embedded in application code where they are difficult to audit.</p><p>See how LocalOps <a href="https://docs.localops.co/accounts/aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">handles cloud account and region configuration</a></p><h2>How Do You Manage Microservices Across Multiple Environments in an IDP?</h2><p>Deploying a container to a Kubernetes namespace is not the hard part. What gets hard at scale is knowing what is actually running: which version of which service is in which environment, whether service interfaces are still compatible after a recent change, and who is responsible when something breaks.</p><p>An internal developer platform handles microservice management across environments through four mechanisms.</p><h4>Per-service configuration isolation</h4><p>Each service carries its own configuration per environment. A change to one service&#8217;s configuration does not affect any other service in the same environment. Services are deployed independently against their own repository and branch. There is no shared configuration file that multiple services read from. When shared configuration exists at the wrong level, deploying one service requires coordinating with teams that own other services. That coordination overhead is what the isolation is designed to remove.</p><h4>Stable service references</h4><p>Services in a microservice architecture depend on each other. Dependencies expressed as hardcoded hostnames or IP addresses break when infrastructure changes underneath them. The platform assigns each service a stable internal reference that maps to its hostname within the environment. That reference does not change for the lifetime of the service. 
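</p><p>A minimal model of how such a reference might resolve. The alias table and hostnames below are hypothetical; the actual mechanism in any given platform is an implementation detail:</p>

```python
# Platform-maintained alias table for one environment. The platform
# updates the right-hand side on redeploys; the alias on the left
# stays fixed for the lifetime of the service. Hostnames are made up.
ALIASES = {
    "orders-api": "orders-api.internal.staging.svc",
    "billing-api": "billing-api.internal.staging.svc",
}

def resolve(alias: str) -> str:
    """Resolve a stable service reference to its current internal hostname."""
    try:
        return ALIASES[alias]
    except KeyError:
        raise LookupError(f"no service registered under alias {alias!r}")

# A dependent service configures the alias, never the hostname:
print(resolve("orders-api"))
```

<p>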
Dependent services use this reference rather than direct addresses. When infrastructure is updated or a service is redeployed, the reference continues to resolve correctly without any changes in application code.</p><h4>Independent deployability</h4><p>One service can be deployed, updated, or rolled back without touching any other service&#8217;s configuration or notifying another team. Each service has its own deployment pipeline triggered by commits to its configured branch. The platform enforces this independence structurally. When cross-service coordination is happening regularly before deployments, it usually indicates shared configuration has been introduced somewhere it should not be.</p><h4>Deployment state per environment</h4><p>The platform tracks deployment state per service per environment. Which version is running, when it was last deployed, whether the deployment is healthy. At 20 or 30 services across multiple environments this state cannot be tracked manually. Without a centralized view, teams find out a service is unhealthy when a dependent service starts returning errors or when a user reports a problem. With it, the state of every service across every environment is visible in one place.</p><p>Here&#8217;s an example of <a href="https://docs.localops.co/environment/services/micro-services?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">how LocalOps handles service dependencies and aliases across environments</a></p><h2>Internal Developer Platform Architecture: Control Layers and Orchestration</h2><p>An internal developer platform is not a single tool. It is a set of layers that need to work together. When one layer is missing or disconnected from the others, the complexity it was supposed to hide leaks back to engineers.</p><h4>Developer interface layer</h4><p>This is how engineers interact with the platform: a web console, a CLI, or an API. 
It accepts environment and service requests and passes them to the provisioning layer. The interface should be opinionated enough to prevent misconfigured requests but not so rigid that every new environment type requires platform team involvement to support.</p><p>This layer is what most people refer to as the internal developer portal. The internal developer portal vs platform distinction matters here. A portal handles the interface: service catalog, documentation, self-service UI. What it does not do is provision environments, manage credentials, or track infrastructure state. Teams that deploy a portal without building the layers underneath get a catalog with no operational capability. The portal works in the demo. Nothing actually provisions.</p><p>The best internal developer platform is not the one with the most features in the interface layer. It is the one where the provisioning, credential, and state layers work reliably underneath.</p><h4>Provisioning layer</h4><p>This is where environment definitions get translated into cloud resources. The provisioning layer owns the environment model, the per-provider translation logic, and the dependency ordering that ensures resources get created in the correct sequence. Security baselines get applied here, at provisioning time, not as a separate step afterward. Disk encryption, network isolation, and IAM scoping are part of the provisioning definition. An environment provisioned without these and hardened later ran without them for some period of time.</p><h4>Credential layer</h4><p>The platform needs access to cloud accounts to provision resources. That access should be role-based and keyless. The platform assumes a scoped role at provisioning time rather than holding long-lived credentials. No engineer holds direct cloud credentials. 
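</p><p>Conceptually, every operation looks something like the sketch below. This is a model of the flow, not any provider&#8217;s actual STS or federation API; the operation names and account identifiers are made up:</p>

```python
import time
import secrets

AUDIT_LOG: list = []

def assume_scoped_role(operation: str, account: str, ttl_seconds: int = 900) -> dict:
    """Model of keyless access: a short-lived, operation-scoped credential.

    Stands in for mechanisms like AWS STS role assumption or workload
    identity federation; nothing long-lived is ever handed to a person.
    """
    cred = {
        "token": secrets.token_hex(8),  # ephemeral, never stored on a laptop
        "scope": operation,
        "account": account,
        "expires_at": time.time() + ttl_seconds,
    }
    # Every issuance is recorded, so access is auditable by construction.
    AUDIT_LOG.append({"operation": operation, "account": account,
                      "issued_at": time.time()})
    return cred

cred = assume_scoped_role("provision:environment", "aws:example-account")
print(cred["scope"], len(AUDIT_LOG))
```

<p>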
Access is auditable because it flows through the platform, not through individually managed keys scattered across team members.</p><h4>State and observability layer</h4><p>The platform tracks what it has provisioned across all cloud accounts, all regions, and all environments. This is the layer that closes the visibility gap described earlier. Which version is deployed where, what changed recently, which environments are out of sync. These questions have answers because the platform is the system of record for everything it has provisioned. Observability tooling running inside each environment feeds into this layer, giving teams logs and metrics without manual setup per environment.</p><p>These four layers need to be integrated. A portal connected to a separate provisioning tool with no shared state between them is not a platform. It is two tools that happen to be used together, and the gap between them is where coordination overhead lives.</p><p>To understand how these four layers work together as one system, you can check out the<a href="https://docs.localops.co?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> LocalOps docs</a>, which walk through the architecture end to end.</p><h2>How Do You Measure Control in a Multi-Cloud IDP?</h2><p>Control in a multi-cloud internal developer platform is measurable. If the platform is working, specific signals stay stable across environments and providers.</p><p><strong>Provisioning time per environment.</strong> A well-built platform provisions consistently regardless of which cloud account it targets. If provisioning time varies significantly between AWS and GCP for the same environment definition, the translation layer is not stable. Inconsistency here usually means provider-specific logic is leaking into the provisioning path.</p><p><strong>Environment parity failures.</strong> Track how often a deployment passes in staging but fails in production due to an environment difference.
Each occurrence is a parity failure. These get caught in postmortems. If the number is not trending toward zero, the environment definition is not enforcing consistency at provisioning time.</p><p><strong>Drift detection rate.</strong> How often does the platform detect that live infrastructure differs from its provisioned state? If the answer is never, the platform has no visibility into out-of-band changes. Changes made directly in cloud consoles are invisible to the platform and accumulate as undocumented divergence.</p><p><strong>Deployment success rate across environments.</strong> Not just whether a deployment completed, but whether the service behaved consistently across environments after deployment. Failures that are environment-specific point to configuration or infrastructure differences the platform did not catch.</p><p><strong>Time to debug cross-environment issues.</strong> This exposes visibility gaps directly. If root cause analysis requires manually checking multiple cloud consoles, the state layer is not doing its job.</p><p>Three operational signals that sit underneath these metrics:</p><p>Can a developer provision an environment without involving another team? If not, the self-service model is not working. Can the platform tell you what is running in each environment right now without manual reconstruction? If not, the state layer is incomplete. How long does it take a new engineer to deploy their first service? Weeks means the platform has not reduced the knowledge barrier. Days means it has.</p><h2>How Does an IDP Handle Security Across Cloud Environments?</h2><p>Security in multi-cloud environments fails in predictable ways. Policies exist as documentation that teams implement inconsistently. Credentials get distributed to individuals rather than managed by the platform.
Environments get provisioned without a security baseline and controls get added afterward, which means they were absent for some period of time.</p><h4>Security at provisioning time, not after</h4><p>Every cloud provider has security configurations that are not enabled by default but should be on in every production environment. Disk encryption, VPC flow logs, security groups with minimal open ports, database encryption at rest. A platform that applies these at provisioning time through the environment definition makes them non-optional. A developer cannot provision an environment without them because the template does not offer that option. Security teams stop reviewing individual provisioning requests and start reviewing the template instead. One review covers every environment provisioned from it.</p><h4>Keyless credential management</h4><p>The platform connects to cloud accounts using role-based, keyless access. In AWS this is IAM role assumption. In GCP it is workload identity federation. In Azure it is managed identity. The platform assumes the role it needs at provisioning time, scoped to that specific operation. No engineer holds a long-lived access key for any cloud account. If an engineer&#8217;s machine is compromised, no cloud credentials are exposed because none were stored there.</p><h4>Per-environment secret isolation</h4><p>Application secrets should not be shared across environments. A production database credential should not be accessible in a staging environment. The platform provisions isolated secret storage per environment and scopes access to secrets at the environment level. Services access secrets through the platform&#8217;s credential mechanism, not through hardcoded values or shared configuration files.</p><h4>Network isolation by default</h4><p>Each environment gets its own dedicated network. Private subnets host resources with no public IP. Public subnets are limited to resources that explicitly need internet access.
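</p><p>A sketch of what such a default layout could look like. The CIDR ranges and structure here are illustrative, not a real provisioning template:</p>

```python
def default_network_layout(env_name: str, az_count: int = 2) -> dict:
    """Sketch of a non-optional network baseline: private by default,
    public only where internet access is explicit. CIDRs are illustrative."""
    layout = {"vpc": f"{env_name}-vpc", "private": [], "public": []}
    for i in range(az_count):
        # Workloads and databases land here; no public IPs are assigned.
        layout["private"].append({"cidr": f"10.0.{i}.0/24", "public_ip": False})
        # Only internet-facing pieces (e.g. load balancers) go here.
        layout["public"].append({"cidr": f"10.0.{100 + i}.0/24", "public_ip": True})
    return layout

layout = default_network_layout("production")
print(layout["vpc"], len(layout["private"]), len(layout["public"]))
```

<p>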
Services communicate internally through private DNS. This is not a configuration option. It is the default network layout for every environment the platform provisions.</p><h2>FAQs</h2><p><strong>1. How does an internal developer platform handle secrets across cloud environments?</strong></p><p>Each environment gets its own isolated secret storage. A production database credential is not accessible in staging because secrets are scoped at the environment level, not shared across environments. Services access secrets through the platform&#8217;s credential mechanism at runtime. No hardcoded values, no shared configuration files. The secret storage backend varies by cloud provider (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault), but the access model is consistent across all of them because the platform abstracts it.</p><p><strong>2. How do IDPs ensure consistency across regions?</strong></p><p>Consistency across regions comes from provisioning both environments from the same definition. Same network layout, same cluster configuration, same observability stack. The platform guarantees structural equivalence at creation time. What breaks consistency is changes made outside the platform directly in cloud consoles or state files that the platform cannot track. Consistency only holds for what the platform provisions and manages.</p><p><strong>3. Can IDPs handle service discovery for microservices?</strong></p><p>Yes. Each service gets a stable internal reference that maps to its hostname within the environment. That reference does not change when infrastructure is updated or a service is redeployed. Dependent services use this reference as an environment variable rather than hardcoded hostnames or IP addresses. At runtime the platform resolves it to the actual internal hostname. This means service discovery works without a separate service mesh or DNS configuration per environment.</p><p><strong>4.
How do IDPs reduce manual provisioning overhead?</strong></p><p>By replacing ticket-driven infrastructure requests with self-service. A developer selects a target cloud account and region. The platform provisions the full environment: networking, compute, Kubernetes cluster, and observability tooling. No engineer on the platform team needs to be involved. The provisioning definition is maintained once by the platform team and applied consistently across every environment request. Manual work shifts from handling individual requests to maintaining the platform itself.</p><p><strong>5. How do IDPs enforce compliance in multi-region setups?</strong></p><p>Compliance constraints get enforced at the cloud account level, not at environment creation time. When a cloud account is connected to the platform, its configuration determines which regions are available for environments targeting that account. An account configured for EU data residency only surfaces EU regions. A developer cannot select a non-compliant region because the platform does not present it as an option. Security baselines (disk encryption, network isolation, and IAM scoping) are part of the provisioning definition and applied to every environment automatically.</p><p><strong>6. What is the difference between a managed and open source internal developer platform?</strong></p><p>An open source platform like Backstage gives you the portal layer: service catalog, documentation, and a self-service interface. What it does not include out of the box is a provisioning engine, a credential layer, or state management. Teams that deploy Backstage without building those layers get a catalog with no operational capability.</p><p>A managed internal developer platform provides all four layers (provisioning, credentials, state, and the developer interface) as one integrated system. The tradeoff is flexibility versus time to operational capability.
Open source gives you full control but requires significant engineering investment to build and maintain the provisioning layer. A managed platform reduces that investment but operates within the boundaries the vendor has defined.</p><h2>Conclusion</h2><p>Multi-cloud control is often mistaken for having infrastructure as code per provider. It is not.</p><p>Having Terraform modules for AWS, GCP, and Azure feels like a solved problem. Until someone leaves and the modules go undocumented. Until staging and production diverge and nobody can explain why. Until a new engineer needs three weeks to get their first environment running because the knowledge is in someone&#8217;s head, not in the system.</p><p>IaC is an input to a platform. It is not the platform itself.</p><p>The gap between what teams think they have and what they actually have usually comes down to the same missing pieces: no consistent environment definition above the cloud layer, no centralized credential management, no observability provisioned by default.</p><p>Each of these gaps was manageable when the system was small. At scale they compound. Debugging takes longer. Incidents are harder to reproduce. New engineers take longer to become productive. The platform team becomes a bottleneck instead of an enabler.</p><p>An internal developer platform closes these gaps by owning the provisioning layer. One environment definition that translates across cloud providers. Role-based, keyless credential access scoped to the platform. Observability provisioned inside every environment at creation time, not added afterward. 
The conditions for drift get removed at the source because every environment starts from the same definition, not because the platform detects and corrects drift after the fact.</p><p>That is what control actually looks like.</p><p>If your team is dealing with environment inconsistency across cloud providers, manual provisioning overhead, or visibility gaps across regions, LocalOps is designed to handle these problems at the platform layer.</p><p>If you&#8217;re figuring out how this would fit into your setup, the LocalOps team can help you work through it:</p><p><strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Book a Demo</a> &#8594;</strong> Walk through how environments, deployments, and AWS infrastructure are handled in practice for your setup.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Get started for free</a> &#8594;</strong> Connect an AWS account and stand up an environment to see how it fits into your existing workflow.</p><p><strong><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Explore the Docs</a> &#8594;</strong> A detailed breakdown of how LocalOps works end-to-end, including architecture, environment setup, security defaults, and where engineering decisions still sit.</p>]]></content:encoded></item><item><title><![CDATA[How Internal Developer Platforms Help a Growing SaaS Engineering Team Scale Without Hiring More DevOps]]></title><description><![CDATA[Reduce bottlenecks, speed up releases and scale your engineering team without increasing headcount.]]></description><link>https://blog.localops.co/p/how-to-scale-saas-engineering-team-without-hiring-more-devops</link><guid isPermaLink="false">https://blog.localops.co/p/how-to-scale-saas-engineering-team-without-hiring-more-devops</guid><dc:creator><![CDATA[Madhushree 
Sivakumar]]></dc:creator><pubDate>Sat, 28 Mar 2026 05:30:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ApGQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ApGQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ApGQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 424w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 848w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ApGQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6214855,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192283373?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ApGQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 424w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 848w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!ApGQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2b4e43-ed4c-4454-894e-208708027de3_2400x2400.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>As SaaS teams grow, infrastructure complexity doesn&#8217;t just increase, it compounds.</p><p>What starts as a simple setup with a few services quickly turns into multiple environments, fragmented pipelines, access controls, and constant operational overhead. Over time, even routine tasks like spinning up an environment or deploying a feature begin to depend on a small DevOps team.</p><p>That&#8217;s when the bottleneck shows up.</p><p>Most teams respond by hiring more DevOps engineers. But that approach only adds more people to manage an already complex system.
It increases cost and coordination overhead without fixing the underlying issue.</p><p>The real challenge isn&#8217;t a lack of DevOps capacity, it&#8217;s a lack of standardization and self-service.</p><p>Internal Developer Platforms (IDPs) address this by turning infrastructure and deployment workflows into reusable, self-service systems, allowing teams to scale engineering output without scaling DevOps headcount.</p><h2>TL;DR</h2><ul><li><p>Most SaaS teams that hit an infrastructure bottleneck assume they need more DevOps engineers. They hire, the backlog clears briefly, then the same problems come back.</p></li><li><p>The issue is not capacity. It is that infrastructure work is still manual, inconsistent, and dependent on a small number of people who know how things are set up.</p></li><li><p>An internal developer platform removes that dependency. Developers provision environments, deploy services, and manage configuration without routing through anyone. Standards are enforced by the platform, not by whoever happens to be available.</p></li><li><p>The teams that get this right do not just deploy faster. They change how the whole infrastructure function works, from a request-driven queue to a self-service system developers can operate without waiting on anyone.</p></li></ul><h2>What Actually Breaks as Your SaaS Team Scales</h2><p>As a SaaS system scales, the failure point isn&#8217;t code velocity, it&#8217;s the lack of standardized infrastructure and repeatable workflows. 
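</p><p>A minimal sketch of what repeatable infrastructure means in practice, with entirely illustrative field names rather than any real IDP API: every environment is derived from one template, and only explicitly whitelisted parameters may vary per stage.</p>

```python
import copy

# Illustrative template only; the field names are invented for this sketch.
BASE_TEMPLATE = {
    "instance_type": "t3.medium",
    "replicas": 2,
    "env_vars": {"LOG_LEVEL": "info"},
}

def make_environment(stage, overrides=None):
    """Derive an environment from the shared template.

    Only explicitly allowed keys may differ per stage, which is what
    keeps dev, staging, and production from drifting apart.
    """
    allowed = {"instance_type", "replicas"}
    env = {"stage": stage, **copy.deepcopy(BASE_TEMPLATE)}
    for key, value in (overrides or {}).items():
        if key not in allowed:
            raise ValueError(f"{key} cannot be overridden per stage")
        env[key] = value
    return env

staging = make_environment("staging")
production = make_environment("production", {"replicas": 6})

# Identical structure, controlled differences: a manual hotfix cannot
# silently make production diverge from staging.
assert staging["env_vars"] == production["env_vars"]
```

<p>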
The same patterns show up across teams once you move beyond a handful of services.</p><h4>Environment Drift and Configuration Inconsistency</h4><p>Teams typically maintain separate dev, staging, and production environments, but they&#8217;re rarely identical.</p><ul><li><p>Different instance types, env variables, or secrets</p></li><li><p>Manual hotfixes applied only in production</p></li><li><p>Inconsistent Terraform or incomplete IaC coverage</p></li></ul><p>This leads to:</p><ul><li><p>Bugs that cannot be reproduced outside production</p></li><li><p>Failed deployments due to missing or mismatched configs</p></li><li><p>Increased time spent debugging environment-specific issues</p></li></ul><p>Without strict environment templating, drift becomes inevitable.</p><h4>DevOps as a Request-Driven Bottleneck</h4><p>In most growing teams, infrastructure access is centralized for safety. In practice, this creates a ticket-driven workflow:</p><ul><li><p>&#8220;Create a new service&#8221;</p></li><li><p>&#8220;Provision a database&#8221;</p></li><li><p>&#8220;Update IAM permissions&#8221;</p></li><li><p>&#8220;Fix CI/CD pipeline&#8221;</p></li></ul><p>Each request requires:</p><ul><li><p>Context switching for DevOps</p></li><li><p>Manual validation and setup</p></li><li><p>Back-and-forth communication</p></li></ul><p>As request volume increases, lead time grows linearly. 
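</p><p>The linear growth is just queue arithmetic. A toy model makes it concrete; the capacity and demand numbers below are invented for illustration.</p>

```python
# Assumed throughput: requests the DevOps team can clear per week.
CAPACITY_PER_WEEK = 20

def lead_time_weeks(requests_per_week, weeks):
    """Backlog-driven lead time: a new request waits behind the backlog."""
    backlog = 0
    for _ in range(weeks):
        backlog = max(0, backlog + requests_per_week - CAPACITY_PER_WEEK)
    return (backlog + 1) / CAPACITY_PER_WEEK

# Demand just 25% over capacity keeps pushing lead time up:
print(lead_time_weeks(25, 4))   # 1.05 weeks after a month
print(lead_time_weeks(25, 12))  # 3.05 weeks after a quarter
```

<p>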
Deployment frequency drops, even if engineering capacity increases.</p><p>Industry reports from Atlassian and Puppet consistently show that a significant share of DevOps time is spent on maintenance and operational tasks rather than innovation.</p><h4>Fragmented CI/CD Pipelines</h4><p>Pipelines evolve organically per service or team:</p><ul><li><p>Different GitHub Actions / Jenkins configs</p></li><li><p>Inconsistent build, test, and deploy stages</p></li><li><p>No shared rollback or failure handling strategy</p></li></ul><p>This creates:</p><ul><li><p>Unpredictable deployment behavior</p></li><li><p>Difficult debugging across services</p></li><li><p>Lack of enforceable standards (security, testing, approvals)</p></li></ul><p>Without a unified pipeline abstraction, every service becomes a snowflake.</p><h4>Lack of Reusable Infrastructure Patterns</h4><p>Common components are repeatedly reimplemented:</p><ul><li><p>Service templates (API, worker, cron jobs)</p></li><li><p>Database provisioning patterns</p></li><li><p>Networking and service discovery setup</p></li></ul><p>Instead of reusable modules, teams copy-paste configs and modify them. 
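</p><p>The alternative to copy-paste is one parameterized module that stamps out service configs. A sketch, with invented field names and the service kinds listed above:</p>

```python
def service_config(name, kind, image):
    """Shared template for the common service kinds (API, worker, cron)."""
    base = {
        "name": name,
        "image": image,
        "healthcheck": "/healthz",    # defined once, inherited everywhere
        "restart_policy": "on-failure",
    }
    extras = {
        "api":    {"port": 8080, "replicas": 2},
        "worker": {"queue": f"{name}-jobs", "replicas": 1},
        "cron":   {"schedule": "0 * * * *"},
    }
    return {**base, **extras[kind]}

billing_api = service_config("billing", "api", "billing:1.4.2")
billing_jobs = service_config("billing", "worker", "billing:1.4.2")

# A later change to the healthcheck path is one edit in the template,
# not a hunt through N copy-pasted configs.
assert billing_api["healthcheck"] == billing_jobs["healthcheck"]
```

<p>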
Over time:</p><ul><li><p>Divergence increases</p></li><li><p>Bugs get duplicated</p></li><li><p>Upgrades become risky and inconsistent</p></li></ul><h4>Increasing Cognitive Load on Developers</h4><p>Developers are expected to handle:</p><ul><li><p>Kubernetes manifests or ECS task definitions</p></li><li><p>Networking (VPCs, subnets, security groups)</p></li><li><p>Secrets management and IAM roles</p></li><li><p>CI/CD configuration</p></li></ul><p>This leads to:</p><ul><li><p>Slower feature delivery</p></li><li><p>Higher onboarding time for new engineers</p></li><li><p>More production mistakes due to partial understanding</p></li></ul><p>At scale, this isn&#8217;t a skills issue, it&#8217;s a systems design issue.</p><h4>Poor Observability and Debugging Across Environments</h4><p>Monitoring and logging are often:</p><ul><li><p>Configured differently per service</p></li><li><p>Missing in non-production environments</p></li><li><p>Not tied to deployment events</p></li></ul><p>As a result:</p><ul><li><p>Failures are detected late</p></li><li><p>Root cause analysis takes longer</p></li><li><p>Teams rely on manual investigation instead of structured signals</p></li></ul><h4>The Core Pattern</h4><p>All of these issues point to the same underlying problem:</p><ul><li><p>Infrastructure is not standardized</p></li><li><p>Workflows are not repeatable</p></li><li><p>Systems depend on individuals instead of abstractions</p></li></ul><p>Until those are fixed, adding more DevOps engineers only increases the system&#8217;s coordination cost.</p><h2>Why Hiring More DevOps Doesn&#8217;t Solve It</h2><p>When infrastructure bottlenecks appear, the default response is to hire more DevOps engineers. It feels like a capacity problem. 
More requests, more people to handle them.</p><p>In reality, it&#8217;s a systems problem.</p><h4>Linear Scaling of an Operational Model</h4><p>As systems grow, the number of operational tasks increases rapidly:</p><ul><li><p>Provisioning infrastructure</p></li><li><p>Managing IAM roles and access</p></li><li><p>Maintaining CI/CD pipelines</p></li><li><p>Handling incidents and rollbacks</p></li></ul><p>Each new service or environment adds more surface area. But hiring increases capacity only linearly, while system complexity grows non-linearly.</p><p>This creates a persistent gap:</p><ul><li><p>Request volume keeps increasing</p></li><li><p>Backlogs grow despite hiring</p></li><li><p>Lead times for changes remain high</p></li></ul><h4>Increased Coordination Overhead</h4><p>Adding more DevOps engineers introduces more coordination layers:</p><ul><li><p>More handoffs between developers and DevOps</p></li><li><p>More communication required for each change</p></li><li><p>More dependencies across team members</p></li></ul><p>Instead of speeding up execution:</p><ul><li><p>Requests take longer to process</p></li><li><p>Context gets fragmented</p></li><li><p>Small changes require multiple touchpoints</p></li></ul><p>The system becomes slower not because of lack of effort, but because of increased coordination cost.</p><h4>Knowledge Silos and Operational Risk</h4><p>Infrastructure knowledge is often:</p><ul><li><p>Distributed across individuals</p></li><li><p>Built through experience rather than systems</p></li><li><p>Poorly documented or inconsistently applied</p></li></ul><p>As the team grows:</p><ul><li><p>Each engineer owns a subset of the system</p></li><li><p>Debugging requires multiple people</p></li><li><p>Onboarding new engineers takes longer</p></li></ul><p>This leads to:</p><ul><li><p>Slower incident resolution</p></li><li><p>Higher reliance on specific individuals</p></li><li><p>Increased operational risk</p></li></ul><h4>Inconsistent Practices at 
Scale</h4><p>Without a shared abstraction layer:</p><ul><li><p>Naming conventions differ</p></li><li><p>Configurations diverge</p></li><li><p>Deployment workflows vary across services</p></li></ul><p>Over time:</p><ul><li><p>Infrastructure becomes harder to reason about</p></li><li><p>Changes become riskier</p></li><li><p>Debugging becomes more expensive</p></li></ul><p>Every service starts behaving like its own system instead of part of a cohesive platform.</p><h4>DevOps Becomes a Gatekeeper Function</h4><p>In a request-driven model, DevOps becomes the checkpoint for:</p><ul><li><p>Deployments</p></li><li><p>Environment provisioning</p></li><li><p>Configuration updates</p></li></ul><p>This results in:</p><ul><li><p>Slower release cycles</p></li><li><p>Reduced developer autonomy</p></li><li><p>Bottlenecks during high-demand periods</p></li></ul><p>Even simple changes are delayed because they depend on a centralized team.</p><h4>The Structural Issue</h4><p>The core problem isn&#8217;t team size. It&#8217;s the operating model.</p><ul><li><p>Workflows are request-driven instead of self-service</p></li><li><p>Infrastructure is manually managed instead of abstracted</p></li><li><p>Systems depend on individuals instead of standardized platforms</p></li></ul><p>This pattern is widely observed across modern DevOps and platform engineering practices.</p><p>As teams scale, adding more DevOps engineers increases coordination overhead, fragments knowledge, and reinforces ticket-driven workflows instead of eliminating them. 
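</p><p>A back-of-the-envelope count shows why hiring cannot close the gap; the team sizes below are arbitrary.</p>

```python
def coordination_paths(team_size):
    """Pairwise handoff channels in a request-driven model: n*(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (3, 6, 12):
    print(n, coordination_paths(n))
# 3 people  ->  3 channels
# 6 people  -> 15 channels (2x the team, 5x the coordination)
# 12 people -> 66 channels
```

<p>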
Without standardized, self-service systems, infrastructure complexity grows faster than the team managing it.</p><p>The result is predictable:</p><ul><li><p>Slower delivery</p></li><li><p>Higher operational overhead</p></li><li><p>Increasing cost without proportional gains in efficiency</p></li></ul><h2>How Internal Developer Platforms Solve This Structurally</h2><h4>What is an Internal Developer Platform?</h4><p>An Internal Developer Platform (IDP) is a centralized layer that standardizes infrastructure, deployment workflows, and operational practices, and exposes them as self-service tools that developers can use independently.</p><p>IDPs don&#8217;t just improve workflows; they replace the underlying operating model. Instead of scaling DevOps teams to handle growing complexity, they standardize infrastructure and expose it through self-service systems that developers can use directly.</p><p>This shifts the model from DevOps-driven execution to platform-enabled autonomy, where developers can provision environments, deploy services, and manage changes without relying on manual intervention.</p><p>For a deeper breakdown of how internal developer platforms are defined and where they fit in modern engineering teams, read the<a href="https://blog.localops.co/p/what-is-an-internal-developer-platform-idp?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> What Is an Internal Developer Platform</a> post.</p><h4>Self-Service Infrastructure via Environment-Based Provisioning</h4><p>IDPs move infrastructure from ad-hoc provisioning to standardized environment templates.</p><p>Each environment (development, staging, production) is provisioned using predefined configurations that include:</p><ul><li><p>Networking and access controls</p></li><li><p>Compute and storage resources</p></li><li><p>Container orchestration setup</p></li><li><p>Supporting services required to run applications</p></li></ul><p>These environments are created through reusable templates, not
manual setup.</p><p>Developers don&#8217;t request infrastructure. They create environments.</p><p>Result:</p><ul><li><p>No ticket-based provisioning</p></li><li><p>Identical environments across stages</p></li><li><p>Elimination of configuration drift</p></li></ul><h4>Push-to-Deploy with Standardized CI/CD</h4><p>Instead of maintaining separate pipelines for each service, IDPs provide centralized and reusable CI/CD workflows.</p><p>A typical flow:</p><ul><li><p>Connect repository</p></li><li><p>Select branch</p></li><li><p>Trigger build and deployment automatically on code push</p></li></ul><p>Pipelines are preconfigured with:</p><ul><li><p>Build and test stages</p></li><li><p>Deployment logic</p></li><li><p>Rollback strategies</p></li></ul><p>This ensures:</p><ul><li><p>Consistent deployment behavior across services</p></li><li><p>Reduced failure rates</p></li><li><p>Faster release cycles</p></li></ul><p>CI/CD becomes a platform capability rather than a team-level responsibility.</p><h4>Infrastructure Abstraction Without Losing Control</h4><p>IDPs introduce an abstraction layer over infrastructure.</p><p>Developers interact with simple actions such as:</p><ul><li><p>Create service</p></li><li><p>Deploy application</p></li><li><p>Scale workloads</p></li></ul><p>Behind the scenes, the platform handles:</p><ul><li><p>Resource provisioning</p></li><li><p>Container orchestration</p></li><li><p>Networking and permissions</p></li></ul><p>This creates a clear separation:</p><ul><li><p>Developers define intent</p></li><li><p>The platform executes it using standardized configurations</p></li></ul><p>At the same time, governance is preserved through built-in controls and policies.</p><h4>Built-in Observability and Operational Tooling</h4><p>Observability is often inconsistent across services in growing systems.</p><p>IDPs embed monitoring and logging into the platform by default:</p><ul><li><p>Centralized logging</p></li><li><p>Metrics collection</p></li><li><p>Preconfigured 
dashboards</p></li></ul><p>This leads to:</p><ul><li><p>Faster detection of issues</p></li><li><p>Easier debugging across environments</p></li><li><p>Consistent visibility across services</p></li></ul><p>Observability becomes a default capability, not an additional setup step.</p><h4>Eliminating DevOps Work Through Standardization</h4><p>In traditional setups, DevOps teams repeatedly:</p><ul><li><p>Write infrastructure configurations</p></li><li><p>Maintain deployment pipelines</p></li><li><p>Manage service-specific setup</p></li></ul><p>IDPs convert these into reusable system-level components:</p><ul><li><p>Standard service templates</p></li><li><p>Predefined deployment workflows</p></li><li><p>Shared infrastructure patterns</p></li></ul><p>Developers no longer need to manage these details, and DevOps doesn&#8217;t need to rebuild them for every service.</p><h4>From Ticket-Driven Ops to Platform Engineering</h4><p>The most important change is operational.</p><p>Before:</p><ul><li><p>DevOps operates through request-driven workflows</p></li><li><p>Every change requires manual intervention</p></li></ul><p>After IDP:</p><ul><li><p>Developers use self-service systems</p></li><li><p>Infrastructure and deployments are automated</p></li><li><p>DevOps focuses on building and improving the platform</p></li></ul><p>This marks the shift from reactive operations to platform engineering.</p><h4>The Structural Shift</h4><p>IDPs solve the root problem by changing how systems operate:</p><ul><li><p>From manual to automated</p></li><li><p>From fragmented to standardized</p></li><li><p>From request-driven to self-service</p></li></ul><p>Instead of adding more DevOps engineers to manage growing complexity, teams build systems that absorb that complexity once and apply it consistently across all services and environments.</p><p>This is what enables engineering teams to scale output without increasing operational overhead.</p><p>To understand how this works end-to-end, including 
environment setup, deployment flow, and infrastructure defaults, take a look at <a href="https://docs.localops.co/howitworks?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">how LocalOps IDP works</a>.</p><h2>Before vs After Internal Developer Platforms</h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/aEJy2/2/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f3d473e6-67d6-49dc-a34a-72df4b6f0ece_1220x1088.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09931166-5b10-417e-bf87-8f72f9925576_1220x1088.png&quot;,&quot;height&quot;:547,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/aEJy2/2/" width="730" height="547" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><h2>Measurable Impact</h2><p>Shifting to an internal developer platform does not just reduce manual work. It shows up in metrics that engineering leaders actually track.</p><p><strong>Deployment frequency increases.</strong> When developers can ship without waiting on infrastructure setup or DevOps approval, release cycles shorten. Teams move from batching changes into infrequent releases to shipping smaller updates continuously.</p><p><strong>Lead time for changes drops.</strong> Environment provisioning that took days becomes self-service. 
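</p><p>One simple way to track this metric is the median commit-to-deploy gap. A sketch with invented sample timestamps:</p>

```python
from datetime import datetime
from statistics import median

# Invented sample data: (commit time, deploy time) pairs.
deploys = [
    ("2026-04-01T09:00", "2026-04-01T11:30"),
    ("2026-04-01T14:00", "2026-04-02T10:00"),
    ("2026-04-02T08:15", "2026-04-02T09:45"),
]

def lead_time_hours(commit, deploy):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(deploy, fmt) - datetime.strptime(commit, fmt)
    return delta.total_seconds() / 3600

print(median(lead_time_hours(c, d) for c, d in deploys))  # 2.5
```

<p>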
A change that previously sat in a queue now goes from commit to deployed in hours.</p><p><strong>DevOps ticket volume falls.</strong> Routine requests (environment setup, access configuration, secrets management, service deployment) stop generating tickets. The DevOps team handles genuinely complex work instead of a backlog of repetitive tasks.</p><p><strong>New engineers ramp up faster.</strong> Onboarding stops depending on tribal knowledge. A new developer connects a repo, picks a branch, and deploys without needing someone to walk them through the infrastructure setup.</p><p><strong>Environment-related incidents drop.</strong> Standardized environments mean staging behaves like production. Inconsistencies that only surface in production become rare because every environment is built from the same template.</p><p><strong>Rollbacks become predictable.</strong> Consistent deployment pipelines mean that when something goes wrong, the rollback path is known and tested. There is no guessing which environment has which configuration.</p><h2>What to Look for in an Internal Developer Platform</h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/aEJy2/2/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4532e8a2-b521-4443-b02d-7a8e12932cb6_1220x1088.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e9135257-f6b1-4298-91c4-b09c060713d1_1220x1088.png&quot;,&quot;height&quot;:547,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/aEJy2/2/" width="730" height="547" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void
0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>If you want to see how these criteria map to a real implementation, you can explore it with the LocalOps team by <a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">booking a demo </a>or <a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">trying it out yourself for free</a>.</p><h2>Common Mistakes Teams Make</h2><p>One of the most common mistakes is introducing an internal developer platform without clearly defining the problems it should solve. Teams adopt or build a platform, but continue operating the same way, so the underlying bottlenecks remain.</p><p>Another issue is not driving adoption across teams. Even a well-designed platform fails if developers continue using old processes. If it&#8217;s not clearly better, faster, and easier, it won&#8217;t be used.</p><p>Many teams also skip proper standardization. They introduce a platform but still allow multiple patterns for deployments, environments, and configurations. This brings back the same inconsistency the platform was meant to eliminate.</p><p>A frequent mistake is focusing only on infrastructure and ignoring developer experience. In platform engineering, the goal is not just automation, but enabling developers to move faster with less friction. Without that, even the best internal developer platform fails in practice.</p><p>As teams start scaling, many begin thinking about how to build an internal developer platform internally. This often leads to trying to solve too many problems at once or building for hypothetical future needs. 
Instead of reducing complexity, the effort shifts into maintaining the platform itself.</p><p>Building can make sense in specific cases, but during the scaling phase, it introduces additional overhead:</p><ul><li><p>Time spent designing and maintaining internal tooling</p></li><li><p>Slower time to value while the platform is still evolving</p></li><li><p>Ongoing effort required to keep workflows and integrations up to date</p></li></ul><p>This is why teams evaluating the best platform for internal developer experience often prioritize faster adoption and standardization over building everything from scratch.</p><p> If your team is weighing this decision,<a href="https://blog.localops.co/p/internal-developer-platform-build-vs-buy-cost-comparison?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> here is a detailed breakdown of what building vs adopting an internal developer platform actually costs</a>.</p><p>Some teams also don&#8217;t define clear ownership. Without a dedicated team responsible for maintaining and improving the platform, it becomes inconsistent over time.</p><p>There&#8217;s also a tendency to overcomplicate workflows by adding too many steps, approvals, or abstractions, which recreates the same friction the platform was meant to remove.</p><h2>FAQs</h2><p><strong>1. Is an open source internal developer platform or a managed IDP better for a growing SaaS company?</strong></p><p>Open source tools like the Backstage internal developer platform give you flexibility but the build and maintenance cost sits entirely with your team. Backstage covers the portal layer. You still need separate tooling for provisioning, CI/CD, secrets, and observability. Integrating and maintaining that stack requires dedicated platform engineering capacity most growing SaaS teams do not have.</p><p>The complexity is not upfront. It compounds. Every upgrade, patch, and new service type adds more platform team work. 
Without dedicated ownership, the stack drifts, which defeats the standardization it was meant to create.</p><p>A managed IDP comes pre-integrated and maintained by the vendor. For teams between 15 and 60 engineers, that tradeoff usually makes more sense.</p><p><strong>2. What does an internal developer platform architecture include?</strong></p><p>An internal developer platform sits on top of your cloud infrastructure and abstracts it into layers developers can use directly. Those layers are infrastructure provisioning (environments, networking, compute), a deployment layer triggered by git push, service configuration (secrets, environment variables, custom domains), role-based access control across environments, and observability covering logs, metrics, and alerting.</p><p>In a well-built IDP these are not separate tools the platform team wires together. They come pre-integrated. A developer creates a service and gets all of it by default.</p><p><strong>3. How is platform engineering related to internal developer platforms?</strong></p><p>Platform engineering and internal developer platforms go hand in hand. Platform engineering is the practice. An internal developer platform is the output.</p><p>Platform engineering teams design systems that reduce infrastructure friction for developers. The IDP is what those systems look like in practice. It packages provisioning, deployments, and environment management into self-service workflows developers can use without understanding what runs underneath.</p><p><strong>4. Internal Developer Portal vs Platform: What is the difference?</strong></p><p>A portal is a catalog. It gives developers a place to find services, documentation, and tooling that already exists. Backstage is the most common example.</p><p>A platform provisions and manages the infrastructure itself. The difference matters because a portal does not remove manual work. It organizes it. A platform automates it.</p><p><strong>5.
Does an internal developer platform deploy on your own cloud account?</strong></p><p>Yes. For clouds like AWS, an internal developer platform provisions infrastructure directly inside your own account, not on shared infrastructure managed by the vendor. The VPCs, Kubernetes clusters, IAM roles, databases, and compute resources all live in your account and are billed to you by the cloud provider. This matters for a few reasons. Your data stays within your own cloud boundary. You retain full visibility and control over the underlying infrastructure. And if you ever need to move away, the infrastructure is already yours.</p><p>For growing SaaS teams this also covers enterprise customer requirements. When a customer needs a dedicated deployment in their own cloud account, the internal developer platform provisions it there using the same templates. No custom work per customer, no separate DevOps project, same process regardless of whose account it runs in.</p><h2>Takeaway</h2><p>As SaaS teams grow, the real challenge is not writing more code; it&#8217;s managing the increasing complexity of infrastructure, environments, and deployments.</p><p>Relying on hiring more DevOps engineers might work temporarily, but it doesn&#8217;t solve the underlying problem. It adds coordination overhead, slows down workflows, and makes systems harder to manage over time.</p><p>The shift is not about scaling teams. It&#8217;s about scaling systems.</p><p>Internal developer platforms enable this shift by standardizing infrastructure, automating workflows, and making them accessible through self-service. Instead of depending on a few people to manage complexity, teams build systems that handle it consistently across every service and environment.</p><p>Platform engineering and internal developer platforms go hand in hand in making this possible.
Together, they reduce cognitive load, improve developer experience, and allow teams to move faster without compromising reliability.</p><p>For growing teams, the goal is simple: remove friction, not add more layers to manage it.</p><p>Not sure where to start? The LocalOps team can help you figure out what fits your setup:</p><p><strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Book a Demo</a> &#8594;</strong> Walk through how environments, deployments, and AWS infrastructure are handled in practice for your setup.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Get started for free</a> &#8594;</strong> Connect an AWS account and stand up an environment to see how it fits into your existing workflow.</p><p><strong><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Explore the Docs</a> &#8594;</strong> A detailed breakdown of how LocalOps works end-to-end, including architecture, environment setup, security defaults, and where engineering decisions still sit.</p>]]></content:encoded></item><item><title><![CDATA[AWS as a Heroku Alternative: How Scaling Teams Cut Infrastructure Costs]]></title><description><![CDATA[A CTO&#8217;s guide to using AWS as a Heroku alternative, cut infrastructure costs, eliminate paying platform margin, and preserve developer experience with an AWS-native platform.]]></description><link>https://blog.localops.co/p/aws-as-a-heroku-alternative</link><guid isPermaLink="false">https://blog.localops.co/p/aws-as-a-heroku-alternative</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Fri, 27 Mar 2026 08:17:17 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!YtkB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YtkB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YtkB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 424w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 848w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 1272w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YtkB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png" width="2400" height="1286" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1286,&quot;width&quot;:2400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2592026,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192286413?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ae60fa7-888a-4228-887d-69bdd7cefe87_2400x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YtkB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 424w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 848w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 1272w, https://substackcdn.com/image/fetch/$s_!YtkB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dcf44a5-f666-4ded-b1ac-93a6e88b0ffd_2400x1286.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Using AWS as a Heroku alternative means running on the same underlying infrastructure Heroku uses, without the platform margin Heroku charges on top of every resource it provisions.</p><p>That distinction matters because it reframes what the migration decision actually is. Engineering teams moving from Heroku to AWS are not switching cloud providers. They are removing a cost layer that sits between them and the infrastructure they are already paying for, and gaining the infrastructure control that Heroku&#8217;s abstraction deliberately withholds.</p><p>For early-stage teams, that abstraction is worth the cost. The operational simplicity Heroku provides in exchange is genuinely valuable when speed matters more than efficiency. For scaling SaaS teams at Series A and beyond, the calculus changes. 
The platform margin compounds with every service added. The compliance ceiling becomes a sales blocker. The scaling model stops matching real traffic patterns. And the cost difference between running on AWS directly and running through Heroku becomes a strategic number, not just an infrastructure one.</p><p>This guide covers the practical path: how to use AWS as a Heroku alternative without losing developer experience, what the full stack mapping looks like, and what the cost reduction is grounded in.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> How to use AWS as a Heroku alternative, stack mapping, cost comparison, pricing model analysis, and developer experience preservation</p><p><strong>The core challenge:</strong> AWS gives you everything Heroku cannot. The challenge is accessing AWS&#8217;s capabilities without requiring developers to become infrastructure engineers.</p><p><strong>The answer:</strong> An AWS-native Internal Developer Platform that handles infrastructure complexity invisibly, so developers keep git-push deployments, and the business gets AWS-grade infrastructure at AWS pricing.</p><p><strong>Want to see what your Heroku stack looks like on AWS?</strong><a href="https://go.localops.co/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Speak with the LocalOps team</a>.</p><h2><strong>Why Engineering Teams Move to AWS as a Heroku Alternative</strong></h2><p>The decision to move from Heroku to AWS is rarely driven by cost alone. Cost is usually what makes the decision visible, the invoice that prompts the conversation. The underlying reasons are structural.</p><p><strong>No infrastructure you control.</strong> No VPC. No private networking between services. No IAM-based access control. No data residency in a region you choose. For teams selling to enterprise customers, this is not an operational inconvenience. 
It is a compliance blocker that prevents deals from closing.</p><p><strong>A scaling model that does not match modern workloads.</strong> Heroku scales vertically. Pick a larger dyno. Add more dynos. The model works for predictable, linear workloads. It does not work for SaaS applications with variable traffic, product launches, seasonal spikes, and B2B usage patterns with sharp peaks and troughs. Teams either over-provision at continuous cost or under-provision and degrade under load. There is no middle ground.</p><p><strong>Observability is assembled from parts.</strong> Every monitoring capability on Heroku is a paid add-on with its own billing, its own interface, and its own failure modes. Logs in one tool. Metrics in another. Errors in a third. This fragmentation adds cost and slows incident response at exactly the moments when speed matters most.</p><p><strong>A cost model that compounds.</strong> The platform margin applies to every resource: compute, database, cache, and monitoring. As teams add services, the margin compounds. The cost difference between Heroku and AWS direct pricing does not stay constant. It grows with every service added.</p><p>AWS solves all of these structurally. The challenge is accessing those solutions without losing what made Heroku valuable in the first place.</p><h2><strong>Operational Challenge: AWS Without Kubernetes Expertise</strong></h2><p>AWS gives engineering teams everything Heroku cannot. VPC isolation. IAM-based access control. Horizontal autoscaling based on real traffic signals. Direct infrastructure pricing. Compliance certifications covering SOC 2, HIPAA, GDPR, and additional frameworks.</p><p>What AWS does not provide automatically is a developer-friendly deployment experience.</p><p>Deploying a production application to EKS requires configuring the cluster, the VPC, the load balancers, the IAM roles, the security groups, and the CI/CD pipeline. Writing Kubernetes manifests. Managing Helm charts. 
Configuring health checks and rollback logic. None of this is unreasonable work for a platform engineer. All of it is unreasonable work for a product engineer whose job is building features.</p><p>This is the gap that causes most AWS migrations to fail from a developer experience perspective. The infrastructure moves to AWS. The technical configuration is correct. And then developers who could deploy themselves every 20 minutes on Heroku now file tickets with a platform team and wait 48 hours.</p><p>The failure mode has a name in the engineering community: trading a PaaS dependency for a platform team dependency. The infrastructure problem is solved. The developer autonomy problem is recreated in a different form.</p><p>The most practical way to use AWS as a Heroku alternative is through an Internal Developer Platform, a layer that sits on top of AWS infrastructure and gives developers the same self-service deployment experience they had on Heroku, while handling every AWS operation invisibly underneath.</p><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams making this transition. Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack (Prometheus, Loki, and Grafana) automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><p>From this point, the developer experience is identical to Heroku. Push to a configured branch. LocalOps builds the container image, pushes it to Amazon ECR, updates the Kubernetes deployment on EKS, runs health checks, and handles rollback if the deployment fails. 
No Kubernetes knowledge required from product engineers.</p><p><a href="https://localops.co/features/continuous-deployments?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles deployments automatically.</a></p><h2><strong>The Full Stack Mapping: Heroku to AWS</strong></h2><p>For teams evaluating AWS as a Heroku alternative, the practical question is what AWS services replace each component of the current Heroku stack. The mapping is straightforward. With the right platform layer, the operational complexity is low.</p><p><strong>Compute and scaling.</strong> Heroku dynos are fixed-tier compute units scaled manually by choosing a larger tier or adding more dynos. Amazon EKS runs containerized workloads on managed Kubernetes infrastructure with horizontal pod autoscaling. Workloads scale automatically based on CPU utilization, memory pressure, and custom metrics. No manual scaling decisions. No dyno tier upgrades. Teams pay for actual compute consumption rather than for the tier ceiling required to handle peak load.</p><p><strong>Data layer.</strong> Heroku Postgres runs on Heroku&#8217;s shared infrastructure with limited configuration control. Amazon RDS runs inside your own VPC with full configuration: instance type, storage, read replicas, connection pooling, and automated backups. Heroku Redis communicates over the public internet with TLS. Amazon ElastiCache runs inside your VPC with private networking and no public internet exposure for session data or application state. CloudAMQP, the common Heroku add-on for message queuing, maps directly to Amazon SQS: native to AWS, VPC-integrated, and priced by actual message volume rather than by queue tier.</p><p><strong>Observability and scheduling.</strong> This is where the cost and operational differences are most significant. Heroku has no native observability. Log management, APM, and monitoring all require separate paid add-ons with separate billing. 
LocalOps includes Prometheus for metrics, Loki for log aggregation, and Grafana for unified dashboards, pre-configured in every environment at no additional cost. Logs and metrics are available from the first deployment with no setup required. Heroku Scheduler,  a basic add-on with limited reliability and no retry logic, is replaced by native cron jobs in LocalOps with configured schedules, retry logic, and execution logging as a first-class service type.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/XFgJT/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48b8298e-6e16-49ee-98fd-afeb2e303c45_1220x1140.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47c701a0-3cb5-465c-a932-af99f11fcf0f_1220x1210.png&quot;,&quot;height&quot;:605,&quot;title&quot;:&quot;Complete Stack Reference:&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/XFgJT/1/" width="730" height="605" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See the full migration guide, including add-on mapping.</a></p><h2><strong>Why Heroku&#8217;s Pricing Model Fails Scaling SaaS Teams</strong></h2><p>Heroku&#8217;s tier-based dyno pricing is structurally misaligned with how SaaS businesses grow and 
operate. Understanding why matters for building the internal case for migration.</p><p><strong>The tier-jump problem.</strong> Heroku pricing scales in tiers, not proportionally with usage. When an application&#8217;s resource requirements grow past a tier boundary, the cost jumps to the next tier regardless of whether actual usage justifies the full tier ceiling. Teams pay for the tier ceiling, not for actual consumption. For finance teams and for board-level infrastructure reporting, this makes cost attribution difficult and forecasting unreliable; infrastructure spend jumps at irregular intervals unrelated to business growth metrics.</p><p><strong>The seasonal traffic problem.</strong> Many SaaS applications have non-linear traffic patterns. B2B applications peak during business hours and drop to near-zero overnight and on weekends. Consumer applications spike around product launches and marketing campaigns. Heroku&#8217;s response to all of these patterns is the same: provision for peak capacity and pay for it continuously. There is no mechanism to scale down automatically when traffic drops. Teams either over-provision, paying for idle capacity, or under-provision and accept performance degradation during spikes.</p><p>AWS horizontal autoscaling on EKS responds to this directly. Workloads scale out when traffic increases and back in when it drops, automatically, without human intervention. Teams pay for actual compute consumption proportional to real usage, not for the tier ceiling required to handle the peak.</p><p><strong>The add-on compounding problem.</strong> Every capability beyond basic compute on Heroku carries its own pricing tier. For a team running 10+ production services, each with its own database, cache, and observability requirements, the add-on cost structure compounds significantly. Every new service adds not just a compute cost but a database tier, a Redis tier, a logging volume increment, and a monitoring seat. 
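To make the compounding concrete, here is a toy cost model. Every number in it is hypothetical, chosen only to show the shape of the curve: when a fixed per-resource margin applies to the compute, database, cache, and monitoring line items of every service, the absolute gap between direct infrastructure pricing and marked-up PaaS pricing grows with each service added.

```python
# Toy model of platform-margin compounding.
# All figures are hypothetical illustrations, not real AWS or Heroku prices.
BASE_MONTHLY = {"compute": 120, "database": 90, "cache": 40, "monitoring": 30}
PAAS_MARGIN = 0.35  # hypothetical per-resource platform markup

def monthly_cost(services: int, margin: float = 0.0) -> float:
    """Monthly cost for N similar services, with an optional per-resource margin."""
    return services * sum(BASE_MONTHLY.values()) * (1 + margin)

for n in (1, 5, 10):
    direct = monthly_cost(n)
    paas = monthly_cost(n, PAAS_MARGIN)
    print(f"{n:>2} services: direct ${direct:,.0f}, "
          f"with margin ${paas:,.0f}, gap ${paas - direct:,.0f}")
```

The point the model illustrates is structural, not numeric: because the margin applies per resource, the gap scales with service count instead of flattening out.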
AWS eliminates this compounding structure. LocalOps eliminates the observability compounding; Prometheus, Loki, and Grafana are included in every environment at no additional cost, regardless of service count.</p><p><a href="https://localops.co/features/auto-scaling?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how autoscaling works on LocalOps.</a></p><h2><strong>AWS vs Heroku: Cost, Control, and Reliability at Scale</strong></h2><p><strong>Cost.</strong> Heroku&#8217;s cost model applies a platform margin to every resource: compute, database, cache, and monitoring. That margin does not decrease at scale. It compounds as services are added because each new service adds another component where the margin applies. AWS infrastructure pricing has no platform margin. Teams pay AWS list pricing directly for every resource. The observability tools that are monthly line items on a Heroku invoice (Papertrail, New Relic, and Scout) are included in LocalOps at no additional cost. The direction of the cost difference is structural: AWS pricing without a platform margin is lower than PaaS pricing with one. The size of the difference depends on stack composition and scale. For a model based on your current Heroku invoice,<a href="https://go.localops.co/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> speak with the LocalOps team</a>.</p><p><strong>Control.</strong> Heroku makes infrastructure decisions on the team&#8217;s behalf. This is the source of its simplicity and the source of its compliance ceiling. Teams cannot configure VPCs, security groups, IAM policies, or network isolation, because those decisions belong to Heroku. When an enterprise security questionnaire asks about VPC configuration, private networking, or infrastructure audit logging, the honest answer on Heroku is that the team does not control those things. AWS gives teams full ownership of infrastructure configuration. 
Every environment LocalOps provisions follows<a href="https://aws.amazon.com/architecture/well-architected"> AWS Well-Architected standards</a> by default: private subnets, least-privilege IAM policies, and encrypted secrets via AWS Secrets Manager. The compliance surface is the team&#8217;s own AWS account, which holds SOC 2, HIPAA, GDPR, and additional certifications. This is what makes enterprise deals closeable, not just technically possible.</p><p><strong>Reliability.</strong> On Heroku, a platform incident affects customer-facing application availability. Heroku&#8217;s management plane and runtime plane are not separated; when Heroku has issues, applications running on Heroku are affected. LocalOps provisions infrastructure into the team&#8217;s own AWS account. Once running, that infrastructure operates independently of LocalOps. If LocalOps experiences downtime, applications running on EKS in the team&#8217;s account continue operating without interruption. Applications depend on AWS uptime, not on any platform vendor&#8217;s uptime. This runtime independence is a deliberate architectural decision, and it is the opposite of how Heroku and most managed PaaS alternatives to Heroku work.</p><h2><strong>How LocalOps Makes AWS Practical as a Heroku Alternative</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.</p><p>Connect your AWS account through keyless IAM role assumption; credentials never leave your cloud. Connect your GitHub repository. Create an environment. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and the full Prometheus + Loki + Grafana observability stack automatically. No Terraform. No Helm charts. No manual configuration. Production-ready in under 30 minutes.</p><p>Developers push to a configured branch. LocalOps builds, containerizes, deploys, runs health checks, and handles rollback automatically. Logs and metrics are available from day one. 
Autoscaling runs by default. Preview environments spin up on every pull request.</p><p>The infrastructure runs in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt. This is what makes LocalOps a genuine AWS Heroku alternative rather than a managed platform that replaces one vendor dependency with another.</p><blockquote><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. Partnering with LocalOps has been one of our best technical decisions.&#8221; <strong>&#8211;</strong></em><strong> Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man months of effort, all of which LocalOps has saved for us.&#8221;</em> <strong>&#8211; Gaurav Verma, CTO and Co-founder, SuprSend</strong></p></blockquote><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>What is the most practical way to use AWS as a Heroku alternative without Kubernetes expertise?</strong></p></li></ol><p>The most practical path is an AWS-native Internal Developer Platform that sits on top of AWS infrastructure and provides a Heroku-equivalent developer experience. LocalOps is built specifically for this use case. Developers push code to a configured branch, and it deploys, with no Kubernetes knowledge, no Helm charts, and no Terraform required. The platform handles every AWS operation invisibly: cluster management, container builds, load balancer configuration, IAM role setup, and observability wiring. Product engineers interact with git and a deployment interface. 
The AWS infrastructure layer is never exposed to them directly.</p><ol start="2"><li><p><strong>How does running on AWS compare to Heroku on cost, control, and reliability?</strong></p></li></ol><p>On cost: AWS infrastructure pricing without a platform margin is structurally lower than Heroku&#8217;s PaaS pricing with one. The difference compounds with every service added. On control: AWS gives teams full ownership of VPC configuration, IAM policies, private networking, and compliance architecture, none of which Heroku provides. On reliability: applications running on AWS via LocalOps operate independently of any platform vendor&#8217;s uptime. A LocalOps outage does not affect running applications. On Heroku, a platform incident directly affects customer-facing availability because the management plane and runtime plane are not separated.</p><ol start="3"><li><p><strong>What AWS services replace the full Heroku stack?</strong></p></li></ol><blockquote><p>The complete mapping: Heroku dynos &#8594; EKS with horizontal autoscaling. Heroku Postgres &#8594; Amazon RDS in your VPC. Heroku Redis &#8594; ElastiCache in your VPC. Papertrail + New Relic &#8594; Built-in Prometheus + Loki + Grafana included in LocalOps at no extra cost. Heroku Scheduler &#8594; Native cron jobs. CloudAMQP &#8594; Amazon SQS. The operational complexity of each replacement is low with LocalOps: the platform provisions and configures each AWS service automatically. Application code does not change for most migrations. Connection strings and environment variables change.</p></blockquote><ol start="4"><li><p><strong>Why is Heroku&#8217;s tier-based pricing misaligned for SaaS teams with variable traffic?</strong></p></li></ol><p>Heroku requires teams to provision for peak capacity and pay for it continuously, whether or not the traffic is present. There is no automatic scale-down when traffic drops. 
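For reference, the automatic scaling that Kubernetes-based platforms use instead is governed by the Horizontal Pod Autoscaler, whose core sizing rule is desiredReplicas = ceil(currentReplicas &#215; currentMetricValue / targetMetricValue). A minimal sketch of that rule (the min/max bounds here are illustrative defaults, not platform-specific values):

```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Kubernetes HPA sizing rule: scale replicas in proportion to how far
    the observed metric sits from its target, clamped to configured bounds."""
    desired = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods running at 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))
# Traffic drops: 6 pods at 15% CPU against a 60% target -> scale in to 2
print(desired_replicas(6, 15, 60))
```

Heroku exposes no equivalent control loop, which is what forces the tradeoff between over- and under-provisioning.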
Teams either over-provision and pay for idle capacity, or under-provision and accept performance degradation during spikes. AWS horizontal autoscaling on EKS responds to real traffic signals automatically, scaling out when load increases and back in when it drops. Teams pay for actual compute consumption proportional to usage, not for the tier ceiling required to handle the peak.</p><ol start="5"><li><p><strong>Is AWS a good Heroku alternative for Rails applications?</strong></p></li></ol><p>Yes, with the right platform layer. Rails applications have specific infrastructure requirements: Sidekiq workers, Postgres with connection pooling, Action Cable with Redis, Active Storage with object storage, and scheduled tasks. LocalOps handles all of these natively as first-class service types. Web processes and Sidekiq workers scale independently based on their own workload signals. RDS provides Postgres inside your VPC. ElastiCache provides Redis for Action Cable and background job queuing. Native cron jobs replace Heroku Scheduler. This Rails hosting path through LocalOps preserves the git-push deployment workflow Rails teams depend on, running on infrastructure the team owns.</p><ol start="6"><li><p><strong>What is the difference between a Heroku self-hosted alternative and an AWS-native IDP?</strong></p></li></ol><p>A Heroku self-hosted alternative like Coolify or Dokku runs on infrastructure the team provisions and maintains. The team owns the full operational burden: server provisioning, security patching, platform updates, and on-call response for platform issues. An AWS-native IDP like LocalOps runs on Kubernetes in the team&#8217;s own AWS account, providing the same infrastructure ownership, but the platform layer is managed by LocalOps rather than the team. The infrastructure ownership is equivalent. The operational overhead is not. 
For teams without dedicated platform engineering capacity, the AWS-native IDP model provides infrastructure ownership without the cost of building and maintaining the platform themselves.</p><ol start="7"><li><p><strong>How do Heroku open source alternatives compare to AWS-native IDPs for production workloads?</strong></p></li></ol><p>Heroku open source alternatives give full infrastructure control at no licensing cost. For production SaaS workloads, the tradeoff is significant operational overhead: provisioning, security, observability setup, autoscaling configuration, and platform on-call all fall to the team. Most Heroku open source alternatives also have meaningful feature gaps in production-grade autoscaling and integrated observability. AWS-native IDPs provide equivalent infrastructure ownership with the platform layer managed. For product-focused engineering teams at Series A and beyond, the engineering hours required to operate an open-source platform in production consistently represent a higher cost than a managed IDP platform fee.</p><h2><strong>Key Takeaways</strong></h2><p>AWS is not an alternative to Heroku in the direct sense. It is the infrastructure foundation that makes all genuine alternatives to Heroku possible, and the one Heroku itself runs on.</p><p>The question for engineering teams is not whether AWS infrastructure is more capable than what Heroku provides. It is, structurally, on cost, control, compliance, and reliability. The question is how to access those capabilities without losing the developer experience that made Heroku valuable.</p><p>An Internal Developer Platform that runs on AWS in the team&#8217;s own account answers that question. Developers keep git-push deployments. 
The business gets infrastructure it owns, compliance architecture that supports enterprise deals, and a cost model that scales proportionally with usage rather than in tier jumps driven by a platform margin.</p><p>For engineering teams evaluating the best Heroku alternatives in 2026, the AWS Heroku alternative path through an IDP like LocalOps is the one that solves the immediate cost and compliance problem and the long-term infrastructure ownership problem at the same time.</p><p><strong><a href="https://go.localops.co/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers review your current Heroku setup and walk through the AWS migration for your specific stack.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First production environment on AWS in under 30 minutes. 
No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Heroku Migration Guide &#8594;</a></strong> Full technical walkthrough, database migration, environment setup, DNS cutover.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.localops.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to Deploy to AWS Without a Dedicated DevOps Engineer]]></title><description><![CDATA[What an Internal Developer Platform Makes Possible on AWS]]></description><link>https://blog.localops.co/p/how-to-deploy-to-aws-without-a-devops-engineer</link><guid isPermaLink="false">https://blog.localops.co/p/how-to-deploy-to-aws-without-a-devops-engineer</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Fri, 27 Mar 2026 08:10:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qpvg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!qpvg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qpvg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 424w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 848w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 1272w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qpvg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png" width="1456" height="1097" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1097,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4093620,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192277078?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qpvg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 424w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 848w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 1272w, https://substackcdn.com/image/fetch/$s_!qpvg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F974def64-6adb-49f3-aebc-82d85710e2f0_2400x1808.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Configuring AWS for a production-grade setup is not a one-day job. A team that wants CI/CD, isolated environments, observability, autoscaling, and sensible security defaults is looking at weeks of work: EKS or ECS setup, IAM role configuration, VPC and subnet design, CodePipeline wiring, CloudWatch setup, and a rollback strategy. Each has its own learning curve.</p><p>For most small teams, this work falls on the engineer who knows the most about infrastructure. They were not hired to do it. They have a backlog of features. But the alternative is deploying from a laptop with a shell script, which works until the first serious incident.</p><p>An internal developer platform exists to take this work off the team entirely. 
This post covers what that looks like in practice on AWS, where the boundaries are, and what becomes possible once the infrastructure problem is solved.</p><h2>TL;DR</h2><ul><li><p>An internal developer platform (IDP) abstracts AWS complexity into a self-service layer developers can use without learning Kubernetes, Terraform, or CI/CD pipelines</p></li><li><p>It handles environment provisioning, CI/CD wiring, observability, and security guardrails out of the box</p></li><li><p>Developers deploy by pushing to a GitHub branch. No Dockerfiles, no pipeline YAML, no manual AWS console work</p></li><li><p>IDPs still have limits: VPC design for regulated workloads, FinOps strategy, and complex networking still need engineering judgment</p></li><li><p>Once the deployment bottleneck is cleared, teams ship faster, support BYOC for enterprise customers, and any platform hire can focus on real architecture instead of pipeline maintenance</p></li></ul><h2>What Is an Internal Developer Platform?</h2><p>An internal developer platform is a self-service layer that sits between developers and infrastructure. It encodes infrastructure best practices, CI/CD pipelines, environment standards, and security policies into a product developers can use directly, without needing to understand the underlying cloud primitives.</p><p>Developers work with higher-level concepts like services, environments, and branches, rather than VPC route tables, IAM roles, or Helm charts. The platform handles the rest.</p><p>AWS describes internal developer platforms as internal products that let developers independently manage environments, deployments, and configurations, guided by automated best practices. 
The industry term for those curated, opinionated workflows is &#8220;golden paths.&#8221;</p><p>For a deeper breakdown of how internal developer platforms are defined and where they fit in modern engineering teams, read our<a href="https://blog.localops.co/p/what-is-an-internal-developer-platform-idp?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> What Is an Internal Developer Platform</a> post.</p><h2>Where AWS Gets Complicated</h2><p>AWS has everything on paper: ECS or EKS for compute, RDS for databases, S3 for storage, CodePipeline for CI/CD, CloudWatch for observability. The problem is wiring all of it into something that works reliably, repeatedly, and safely.</p><p>A realistic production-grade setup means configuring a VPC, subnets, security groups, IAM roles, an ECR registry, container orchestration, a CI/CD pipeline, a monitoring stack, and a rollback strategy. For a team doing this the first time, that&#8217;s weeks of work. Sometimes more.</p><p>Two patterns show up when small teams skip it. The first is unmanaged DevOps workflows. The most senior engineer quietly absorbs all the infra work on top of their actual job. Features slow down, that person burns out, and nothing gets documented. The second is manual, out-of-band deployments. Releases run from someone&#8217;s laptop via shell scripts, things hold together until they don&#8217;t, and the first real incident exposes how fragile the whole setup is.</p><p>Neither is a DevOps problem. It&#8217;s an abstraction problem. The team doesn&#8217;t need someone who knows AWS inside out. They need the infrastructure complexity abstracted away so engineers can focus on shipping software.</p><h2>What an Internal Developer Platform Actually Does on AWS</h2><p>An internal developer platform doesn&#8217;t replace AWS.
It sits on top of it and handles the parts that don&#8217;t need to be custom every single time.</p><p>When you connect a cloud account such as AWS, the internal developer platform provisions a full environment: VPC, private and public subnets, a managed Kubernetes cluster via EKS, compute, and storage. Every environment gets its own isolated set of resources. Test, staging, and production each live in their own infrastructure bubble, with no shared state between them.</p><p>CI/CD wires in automatically. Connect a GitHub repo, pick a branch, and any push to that branch triggers a build and deployment. The platform handles image builds, container orchestration, and rollout. Developers don&#8217;t write Dockerfiles or pipeline YAML. They push code.</p><p>Observability is provisioned as part of every environment. Loki, Prometheus, and Grafana are set up inside each environment by default, connected and configured, so logs and metrics are available from day one without buying Datadog or configuring anything separately.</p><p>Security defaults are on. Disk encryption, VPC isolation, auto-renewing SSL certificates, encrypted secrets, and role-based access come with every environment. You don&#8217;t configure these individually. They&#8217;re part of the baseline.</p><h2>Core Components of an Internal Developer Platform</h2><ul><li><p><strong>Environment provisioning:</strong> spin up test, staging, production, or per-customer stacks on your AWS account with isolated VPCs, subnets, and compute. No AWS console, no Terraform, no manual networking setup</p></li><li><p><strong>CI/CD abstraction:</strong> connect a GitHub branch and the platform builds, containerises, and deploys to your EKS cluster automatically. No CodePipeline config, no Dockerfiles, no deployment scripts to maintain</p></li><li><p><strong>Built-in observability:</strong> every environment on AWS gets its own Loki, Prometheus, and Grafana stack, pre-wired and running.
Logs, metrics, and alerts are available from the first deploy without routing anything through CloudWatch manually</p></li><li><p><strong>Security guardrails:</strong> disk encryption, VPC isolation, private subnets, auto-renewing SSL certificates, encrypted secrets, and role-based access are on by default in every environment. These follow AWS security best practices and require no manual configuration per service</p></li><li><p><strong>Deployment model support:</strong> run your product as standard SaaS in your own AWS account, spin up dedicated single-tenant infrastructure for large enterprise customers in the same account, or deploy directly into a customer&#8217;s AWS account via BYOC. Each model is a configuration choice, not a separate engineering project</p></li></ul><h2>How Developers Deploy to AWS With Just a GitHub Push</h2><p>Here is how the flow usually looks with LocalOps:</p><p><strong>Step 1: Connect your GitHub and AWS accounts</strong> Link your GitHub repositories and your AWS account via keyless, role-based access. No long-lived credentials, no IAM user keys sitting in config files. LocalOps uses this to watch for new commits and to provision infrastructure directly in your AWS account.</p><p><strong>Step 2: Create an environment</strong> Spin up a named environment: test, staging, production, or a dedicated stack for a specific customer. Each environment gets its own VPC, subnets, EKS cluster, and compute. Fully isolated. Takes a few minutes, not a few days.</p><p><strong>Step 3: Define your services</strong> Create a service for each component of your application: API, frontend, background workers, cron jobs. Assign a GitHub repo and branch to each one. That branch becomes the deployment trigger.</p><p><strong>Step 4: Push code to deploy</strong> From this point, every commit pushed to the configured branch triggers an automatic build and deployment. 
LocalOps pulls the latest code, builds the container, and rolls it out to the Kubernetes cluster in your AWS account. No manual steps, no deployment scripts, no one watching a terminal.</p><p><strong>Step 5: Preview environments for every pull request</strong> Every pull request automatically gets an ephemeral environment with its own URL, spun up in your AWS account, connected to your existing databases and services. Your team can review, test, and catch issues before anything merges to the main branch.</p><p>This is the entire path from code to production. No Dockerfiles to write, no CodePipeline to configure, no Helm charts to maintain. The monitoring stack (Loki, Prometheus, Grafana) is provisioned and wired up inside each environment automatically. Your team just ships.</p><p>Read more about how LocalOps connects to your AWS account and provisions environments in our<a href="https://docs.localops.co/accounts/aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> AWS setup guide</a>.</p><h2>IDP vs. 
DIY DevOps on AWS</h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/Ht2Eg/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/da02adc2-17f3-4312-a564-c449056320f9_1220x1160.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cec37cbe-935a-4070-8337-1deb99b6d114_1220x1160.png&quot;,&quot;height&quot;:584,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/Ht2Eg/1/" width="730" height="584" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The in-house IDP path using Backstage, Argo CD, and Terraform is legitimate. Large orgs with dedicated platform engineering teams do it well. For a team of five to twenty engineers who need AWS working now, building and maintaining that stack is its own multi-quarter project before it saves anyone any time.</p><p><strong>There is also a hybrid adoption path that many growing teams land on naturally:</strong></p><ul><li><p>Start with a cloud-native IDP to get AWS working in days, not months. 
Standard workloads, CI/CD, environments, and observability are handled from day one</p></li><li><p>As the team grows, a platform engineer joins and extends the platform where needed: custom Terraform modules, specific AWS service integrations, or compliance-driven networking changes</p></li><li><p>The IDP continues handling 80-90% of standard workloads. The platform engineer focuses on architecture, security posture, and cost strategy rather than rebuilding deployment infrastructure from scratch</p></li><li><p>Teams that need even more control can extend LocalOps with their own Terraform or Pulumi scripts directly, without ejecting from the platform entirely</p></li></ul><p>This is a more practical internal developer platform architecture than the binary choice of &#8220;build everything in-house&#8221; vs &#8220;hand it all to a platform&#8221; suggests. Most teams don&#8217;t make a single infrastructure decision and stick with it. They evolve.</p><p>It also reframes how to think about platform engineering and internal developer platforms as a concept: not a one-time tool decision, but a foundation you grow on top of. The best internal developer platforms for AWS are the ones that meet you where you are today and don&#8217;t box you in tomorrow.</p><h2>Where an IDP Still Has Limits</h2><p>An internal developer platform handles most of the heavy lifting, but knowing where the boundaries are helps teams plan better.</p><p>VPC architecture for regulated industries still needs deliberate design. If you&#8217;re building toward SOC 2, HIPAA, or regional data residency requirements, you need someone who understands how AWS account structure, network segmentation, and encryption policies interact with those frameworks. A platform sets the foundation, but those decisions need human input.</p><p>FinOps is a separate discipline. 
An IDP can enforce resource tagging and standardise instance types, but budget visibility, reserved instance strategy, and rightsizing analysis sit outside what most platforms cover today.</p><p>Complex networking, including Direct Connect for hybrid cloud, on-prem integration, or multi-region setups, requires additional configuration beyond standard abstractions. The same applies to stateful workloads or services with unusual compute requirements.</p><p>Non-standard workflows occasionally need custom handling. Most teams find that the common 80% of their deployment patterns fit well within what an IDP supports, and the edge cases can usually be addressed as the platform evolves.</p><p>Adoption takes some investment. Teams moving from custom tooling benefit from a clear onboarding plan, and the earlier that conversation happens, the smoother the transition.</p><p>On costs, the economics typically improve at scale, though it&#8217;s worth modelling your expected growth before committing to any managed platform.</p><h2>Real-World Use Cases</h2><h4>Use Case 1</h4><p><strong>Shipping a B2B SaaS product to AWS with no DevOps hire</strong></p><p>A five-person team with a Node.js API, React frontend, background worker, and Postgres database. No DevOps engineer on the team.</p><p><strong>The problem without an IDP</strong></p><ul><li><p>Configure EKS, set up ECR for container images, wire CodePipeline to GitHub</p></li><li><p>Set up IAM roles with least-privilege access, VPC with public and private subnets</p></li><li><p>Figure out CloudWatch for logs, set up rollback strategy</p></li><li><p>Each of those is its own rabbit hole. Together they are weeks of work before a single line of product code ships to production</p></li></ul><p><strong>With an IDP</strong></p><ul><li><p>Connect GitHub and AWS, create an environment, define services for API, frontend, worker, and cron job</p></li><li><p>Each service gets a branch assignment. 
Every push deploys automatically</p></li><li><p>RDS provisions without writing a Terraform module. Monitoring runs inside the environment from day one</p></li><li><p>Preview environments spin up per pull request, wired into the existing database</p></li></ul><p><strong>Why it matters</strong></p><p>According to <a href="https://www.atlassian.com/blog/developer/developer-experience-report-2024">Atlassian&#8217;s 2024 State of Developer Experience report</a>, 69% of developers lose eight or more hours every week to inefficiencies, most of which trace back to environment access and deployment friction. The same report found that 63% of developers consider developer experience a key factor in deciding whether to stay at their current job, which matters when a five-person team cannot afford attrition.</p><h4>Use Case 2</h4><p><strong>How SuprSend unlocked enterprise revenue with BYOC on AWS</strong></p><p>SuprSend builds notification infrastructure for developer teams. They were initially SaaS-only. Customers in regulated industries including fintech, insurance, and healthcare wanted to self-host SuprSend in their own cloud to avoid sharing sensitive PII like email addresses and phone numbers with a third-party SaaS platform.</p><p><strong>The problem without an IDP</strong></p><ul><li><p>Build a full BYOC distribution pipeline from scratch: per-customer VPCs, EKS clusters, IAM roles, Helm charts, and CI/CD pipelines</p></li><li><p>Parameterise Helm charts for each customer&#8217;s cloud environment</p></li><li><p>Build a release and versioning workflow for self-hosted packages</p></li><li><p>Maintain deployment tooling alongside core product development</p></li></ul><p>SuprSend&#8217;s CTO Gaurav Verma estimated the in-house build would have taken 12-15 man months of engineering effort, pulling the entire team away from core product development. 
With a high-revenue enterprise customer waiting and a tight delivery deadline, that timeline was not realistic.</p><p><strong>With LocalOps</strong></p><ul><li><p>BYOC became a configuration choice, not a separate engineering project</p></li><li><p>GitHub integration slotted into their existing commit, push, and deploy workflow</p></li><li><p>Self-hosted versions could be built, tested, and released privately using licence tokens</p></li><li><p>Pre-sales engineers could independently set up POCs on enterprise customer cloud environments</p></li></ul><p><strong>The outcome</strong></p><p>SuprSend went from zero BYOC capability to delivering a self-hosted version to a new enterprise customer in under a day. They saved 12-15 man months of engineering effort and unlocked an entirely new enterprise customer segment that was previously out of reach. Read the<a href="https://localops.co/case-study/suprsend-unlocks-enterprise-revenue-byoc?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> full case study</a> here.</p><h4>Use Case 3</h4><p><strong>Migrating off hand-rolled CI/CD before a compliance requirement hits</strong></p><p>A startup with GitHub Actions pipelines nobody fully understands, manual deploys from the same laptop, no rollback, no audit trail. 
A compliance requirement arrives: SOC 2, or an enterprise customer&#8217;s security questionnaire asking about access controls, encryption at rest, and deployment audit logs.</p><p><strong>The problem without an IDP</strong></p><ul><li><p>No audit trail for who deployed what and when</p></li><li><p>Security groups configured ad-hoc, some open wider than they should be</p></li><li><p>Secrets stored in environment variables, not a secrets manager</p></li><li><p>No rollback mechanism &#8212; a bad deploy means manually reverting and redeploying</p></li><li><p>Passing a security review with this setup means months of remediation work</p></li></ul><p><strong>With an IDP</strong></p><ul><li><p>Every environment provisions with VPC isolation, encrypted secrets, disk encryption, and RBAC on by default</p></li><li><p>Audit logs exist from the first deployment</p></li><li><p>Rollbacks are a platform-level operation, not a manual process</p></li><li><p>Migration path: connect lower-risk services first, validate, then move critical services over one by one</p></li></ul><p><strong>Why it matters</strong></p><p>The security defaults that feel like overhead when moving fast become the exact thing that unblocks enterprise deals and passes security reviews later. Building them in from the start costs nothing extra on a platform. Retrofitting them onto a hand-rolled setup costs weeks.</p><p>If your team is dealing with any of these situations, it helps to see how this maps to your own infrastructure.</p><p>You can<a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> book a demo </a>with LocalOps to walk through it.</p><h2>FAQs</h2><p><strong>1. How does an IDP reduce DevOps bottlenecks for product teams?</strong></p><p>By making environment provisioning and deployment self-service. Developers don&#8217;t open tickets or wait for an ops engineer to spin up an environment or push a deployment. 
The platform handles it through standard workflows any developer can trigger.</p><p>The bottleneck in most small teams is not a lack of DevOps skill. It is that one or two people hold all the infrastructure context and everyone else waits on them. An IDP encodes that context into the platform itself. New environments spin up in minutes. Deployments trigger on a git push. A developer joining the team on day one can ship to staging without asking anyone how the pipeline works.</p><p><strong>2. Can developers deploy to AWS without learning Kubernetes?</strong></p><p>Yes, on a platform that abstracts the orchestration layer. LocalOps runs workloads on Kubernetes under the hood but developers never interact with it directly. They create a service, assign a branch, and push code.</p><p>This matters because Kubernetes expertise is genuinely hard to acquire and maintain. Understanding pod scheduling, resource limits, ingress controllers, persistent volumes, and cluster upgrades is a full-time concern. An IDP that manages the Kubernetes layer means your developers focus on application code. The cluster gets created, configured, and managed by the platform. Your team never needs to write a Helm chart or debug a failing pod unless they choose to go deeper.</p><p><strong>3. Should you build an internal developer platform or buy one?</strong></p><p>Building gives you full control but costs significant engineering time. A typical in-house IDP on AWS, wiring Backstage internal developer platform, Argo CD, Terraform, and a monitoring stack together, takes a platform team multiple quarters to build before it reliably saves anyone time. You are essentially building and maintaining a product alongside your actual product.</p><p>Buying, or using a cloud-native IDP, means trading some configurability for speed. You get CI/CD, environment provisioning, observability, and security defaults on AWS without writing a line of infrastructure code. 
The tradeoff is that edge cases, highly regulated workloads, or exotic networking requirements may sit outside what the platform handles.</p><p>The practical answer for most teams: start with a cloud-native IDP, ship your product, and build custom tooling only where the platform has genuine gaps. Most teams never hit those gaps with standard web workloads.</p><p><strong>4. Do I still need a DevOps engineer if I use an IDP?</strong></p><p>For most standard web workloads, not in the early stage. An IDP handles what a DevOps engineer would otherwise own: environment provisioning, CI/CD, monitoring, and security defaults. Your developers deploy themselves.</p><p>As the team grows, a platform engineer becomes valuable. But their job looks different. Instead of maintaining pipelines and spinning up environments, they focus on cloud architecture, cost strategy, and compliance. The day-to-day deployment work is already handled.</p><p>If you ever outgrow the platform, a good IDP gives you a full eject path so you take the underlying infrastructure with you.</p><p><strong>5. Internal developer portal vs platform: which one do you actually need for AWS?</strong></p><p>For AWS, you need a platform. A portal handles service catalog and discoverability. It does not provision VPCs, configure IAM roles, or wire CI/CD. It has no infrastructure layer.</p><p>A platform is what actually runs on AWS. It provisions environments, manages deployments, and enforces security defaults. Backstage is often called an &#8220;internal developer platform&#8221; but it is technically a portal. Teams that adopt it for AWS deployments quickly find they still need to build the full infrastructure stack underneath it.</p><p>For small teams, discoverability is rarely the problem. Shipping to AWS reliably without a DevOps engineer is. 
That is a platform problem, not a portal problem.</p><h2>Key Takeaways: What an IDP Actually Changes for Your Team</h2><p>The value of an internal developer platform isn&#8217;t just faster deploys. It&#8217;s what becomes possible when engineers aren&#8217;t waiting on infrastructure.</p><p>Product teams ship on their own schedule. Nobody is blocked on a ticket to get a staging environment or a preview URL. The senior engineer who was quietly doing infra on the side goes back to building features.</p><p>Distribution models that previously required months of engineering work become available much earlier. BYOC support lets you pitch enterprise customers who won&#8217;t use shared infrastructure. Single-tenant stacks let you offer dedicated environments to large accounts without custom work per customer. Self-hosted deployments let you reach customers with strict data residency requirements.</p><p>When a platform engineer does eventually join the team, they don&#8217;t spend their first quarter reverse-engineering ad-hoc scripts. They work on cloud architecture, security posture, and cost strategy. The things that actually matter at scale.</p><p>Finding the best platform for internal developer experience is not just a tooling decision. It directly affects how fast your team ships, which enterprise deals you can close, and whether your first platform hire spends their time on architecture or pipeline maintenance.</p><p>An IDP doesn&#8217;t remove the need for engineering judgment. 
It removes the need to re-solve the same infrastructure problems from scratch every time, which is a different thing entirely.</p><p><strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Book a Demo</a> &#8594;</strong> Walk through how environments, deployments, and AWS infrastructure are handled in practice for your setup.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Get started for free</a> &#8594;</strong> Connect an AWS account and stand up an environment to see how it fits into your existing workflow.</p><p><strong><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Explore the Docs</a> &#8594;</strong> A detailed breakdown of how LocalOps works end-to-end, including architecture, environment setup, security defaults, and where engineering decisions still sit.</p>]]></content:encoded></item><item><title><![CDATA[Best Heroku Alternatives for Production SaaS Teams: A CTO's Evaluation Framework]]></title><description><![CDATA[A CTO&#8217;s guide to self-hosted Heroku alternatives in 2026, comparing build vs buy, real operational costs, and achieving AWS infrastructure ownership without platform lock-in.]]></description><link>https://blog.localops.co/p/best-heroku-alternatives-for-production</link><guid isPermaLink="false">https://blog.localops.co/p/best-heroku-alternatives-for-production</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Thu, 26 Mar 2026 14:13:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FWyN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" 
target="_blank" href="https://substackcdn.com/image/fetch/$s_!FWyN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FWyN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 424w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 848w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 1272w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FWyN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png" width="2400" height="1511" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1511,&quot;width&quot;:2400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5384594,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/192201464?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99540626-76f0-48ca-b3ee-68f3827eb25b_2400x1808.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FWyN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 424w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 848w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 1272w, https://substackcdn.com/image/fetch/$s_!FWyN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14eb3ca9-fc60-458e-96a7-8188b1c76275_2400x1511.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The best Heroku alternative for a production SaaS team is not the one with the most features. It is the one that solves the right problems for the specific stage the business is at, and does not create new ones.</p><p>For engineering leaders evaluating Heroku alternatives in 2026, the landscape is more complex than it appears. The number of credible options has grown. The structural differences between them are significant. And the cost of choosing the wrong one (a second migration 18 months later, a compliance blocker in an enterprise deal, a developer experience regression that kills shipping velocity) is high enough to justify a rigorous evaluation framework before committing.</p><p>This guide is that framework.
It is written for CTOs and VPs of Engineering at B2B SaaS companies scaling from a few thousand in MRR to 100K-2M+ ARR, the stage where Heroku&#8217;s limitations become strategic constraints, and the infrastructure decision has real business consequences.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> How to evaluate the best Heroku alternatives for production SaaS, total cost of ownership, production-readiness criteria, git-push deployment requirements, and the native capabilities any viable replacement must have</p><p><strong>Who it is for:</strong> CTOs and VPs of Engineering evaluating alternatives to Heroku for teams running 20+ production services</p><p><strong>The framework:</strong> Six evaluation criteria that separate production-grade Heroku alternatives from platforms that work for early-stage or hobbyist workloads</p><p><strong>Want to see how LocalOps maps to each of these criteria for your specific stack?</strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Request a walkthrough</a>.</p><h2><strong>Why the Evaluation Framework Matters More Than the Feature List</strong></h2><p>Most platform comparisons focus on features. This one focuses on criteria, because the feature list is not what determines whether an alternative to Heroku is right for your business.</p><p>The question is not whether a platform has autoscaling. It is whether the autoscaling responds to real production signals automatically or requires manual configuration and human intervention.</p><p>The question is not whether a platform has observability. It is whether observability is built into the platform at no additional cost or assembled from paid add-ons that recreate Heroku&#8217;s fragmented monitoring model.</p><p>The question is not whether a platform supports deployments. 
It is whether any developer on the team can deploy independently without tickets, waiting, or infrastructure knowledge, and whether that remains as true at 20 engineers as it did at five.</p><p>Feature lists answer the first version of each question. The evaluation framework in this guide answers the second, which is the version that determines whether the migration succeeds in production.</p><h2><strong>The Four Categories of Heroku Alternatives in 2026</strong></h2><p>Before applying the evaluation framework, it is worth understanding what you are evaluating. The Heroku alternatives landscape in 2026 has matured into four distinct categories. Each solves a different problem and suits a different team profile.</p><h3><strong>Managed PaaS: Render, Railway, Fly.io</strong></h3><p>The most direct alternatives to Heroku in terms of developer experience. Git-based deployments, managed databases, familiar workflows. Migration from Heroku is faster than any other path.</p><p><strong>The structural limitation:</strong> Infrastructure runs on the vendor&#8217;s shared cloud. No VPC ownership. Compliance requirements that block teams on Heroku frequently block them here, too. The platform margin on compute and managed services means the cost-efficiency ceiling is lower than on direct AWS. And the exit path requires rebuilding infrastructure from scratch: a new version of the same vendor lock-in problem.</p><p><strong>Suited for:</strong> Early-stage teams that need to move quickly and have not yet encountered enterprise compliance pressure.</p><h3><strong>Open-Source Self-Hosted: Coolify, Dokku, CapRover</strong></h3><p>Full infrastructure ownership. No platform vendor dependency. Your application runs on servers you control in a cloud account you own.</p><p><strong>The structural limitation:</strong> Your team owns the full operational burden: provisioning, security patching, observability setup, scaling configuration, and on-call response for the platform itself. 
Most Heroku open source alternatives have meaningful feature gaps for production workloads. Autoscaling, built-in observability, and multi-environment management require significant additional configuration.</p><p><strong>Suited for:</strong> Teams with dedicated platform engineering capacity and specific requirements that no managed platform can meet.</p><h3><strong>Raw AWS</strong></h3><p>Maximum control. Maximum cost efficiency. Full compliance capability. No platform margin.</p><p><strong>The structural limitation:</strong> No developer-friendly deployment experience out of the box. Deploying to ECS or EKS requires significant infrastructure configuration before a product engineer can deploy independently. Without a platform layer, developer autonomy disappears post-migration.</p><p><strong>Suited for:</strong> Teams with the platform engineering investment to build and maintain the developer experience layer on top of AWS infrastructure.</p><h3><strong>AWS-Native Internal Developer Platforms: LocalOps</strong></h3><p>The category where scaling SaaS teams are converging and consolidating. An IDP built on AWS Kubernetes runs in your own cloud account, preserves git-push developer workflows, and handles the infrastructure complexity that makes raw AWS inaccessible to product engineers.</p><p><strong>Why this category wins the evaluation for Series A&#8211;B teams:</strong> Infrastructure ownership without operational overhead. Developer experience without platform team dependency. Cost efficiency of AWS without the setup complexity. 
Compliance capability without a compliance ceiling.</p><p><strong>Suited for:</strong> Series A and beyond B2B SaaS teams with enterprise customers, compliance requirements, or cost structures where infrastructure ownership is a business requirement.</p><p><a href="https://localops.co/solution/internal-developer-platform?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps works as an AWS-native IDP</a>.</p><h2><strong>The Evaluation Framework: Six Criteria That Matter for Production SaaS</strong></h2><h3><strong>Criterion 1: Total Cost of Ownership</strong></h3><p>For teams running 1-2 services, infrastructure cost differences between Heroku alternatives are manageable. Beyond that, the structural differences in the cost model become significant and compound.</p><p><strong>How to evaluate TCO for alternatives to Heroku:</strong></p><p>Map your current Heroku stack to each alternative&#8217;s pricing model. Include every component: compute, database, cache, job queues, observability, and secrets management. The comparison is not compute cost versus compute cost. It is the total platform cost across every component your production stack requires.</p><p><strong>What the comparison reveals:</strong></p><p>Managed PaaS alternatives reduce costs versus Heroku but maintain a platform margin on every component. At 20+ services, this margin compounds. The efficiency ceiling is lower than direct AWS because the vendor margin never disappears, regardless of scale.</p><p>AWS-native IDPs like LocalOps charge a flat platform fee. The underlying infrastructure runs at AWS list pricing with no markup. Observability is included: Prometheus, Loki, and Grafana come pre-configured at no additional cost. The observability tools that are monthly line items on a Heroku invoice disappear as cost line items entirely.</p><p>The cost difference is structural, not marginal. 
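</p><p>As a back-of-the-envelope sketch, the margin-versus-flat-fee difference can be modeled in a few lines. All figures below are hypothetical placeholders (monthly cost units), not published pricing for any platform:</p>

```python
# Illustrative TCO sketch: per-service vendor margin vs. flat platform fee.
# All numbers are hypothetical placeholders, not published pricing.

def managed_paas_cost(services, aws_cost_per_service=200, margin=0.35):
    """Managed PaaS: AWS-equivalent cost plus a vendor margin on every service."""
    return services * aws_cost_per_service * (1 + margin)

def aws_native_idp_cost(services, aws_cost_per_service=200, flat_platform_fee=500):
    """AWS-native IDP: AWS list pricing plus one flat fee, no per-service markup."""
    return services * aws_cost_per_service + flat_platform_fee

# The gap widens as services are added: the margin compounds, the flat fee does not.
for n in (5, 20, 50):
    gap = managed_paas_cost(n) - aws_native_idp_cost(n)
    print(n, managed_paas_cost(n), aws_native_idp_cost(n), round(gap))
```

<p>With these placeholder numbers the flat fee actually dominates at very small service counts, consistent with the point above that cost differences are manageable for teams running one or two services; the gap then flips and widens steadily as services are added.</p><p>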
It exists at every scale and widens as services are added, because every new service adds another component where the margin difference applies.</p><p>For an accurate TCO comparison based on your current Heroku invoice, the LocalOps team will model it directly from your stack.<a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Request a TCO analysis</a>.</p><p><strong>What this criterion eliminates:</strong></p><p>Any platform where the cost model scales in steps rather than proportionally with usage. Any platform where observability, secrets management, or CI/CD are add-ons with separate billing rather than native platform capabilities.</p><p><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Request a TCO analysis based on your current Heroku invoice.</a></p><h3><strong>Criterion 2: Git-Push Deployment on Infrastructure You Own</strong></h3><p>This criterion directly answers one of the most important questions in any Heroku alternative evaluation: which alternatives support git-push style deployment while running infrastructure on the team&#8217;s own cloud account rather than a shared third-party platform?</p><p>The answer narrows the field significantly.</p><p><strong>Managed PaaS alternatives</strong> support git-push deployment. They do not run on your cloud account. Infrastructure is the vendor&#8217;s.</p><p><strong>Open-source self-hosted alternatives</strong> can run on your cloud account. Git-push deployment typically requires significant additional configuration and maintenance.</p><p><strong>AWS-native IDPs</strong> provide both: git-push deployment, on infrastructure in your own AWS account. This is the combination that the best Heroku alternatives for production B2B SaaS teams require.</p><p>A developer pushes code to a configured branch. 
LocalOps detects the push, builds the container image automatically, pushes to Amazon ECR, updates the Kubernetes deployment on EKS, runs health checks, and handles rollback if the deployment fails. The developer sees the deployment in progress. Within minutes, the new version is live.</p><p>No Kubernetes knowledge required. No Helm charts. No Terraform. No platform team to notify. The workflow is identical to Heroku. The infrastructure underneath is the team&#8217;s own AWS account.</p><p><strong>What this criterion eliminates:</strong></p><p>Any platform where git-push deployment is not native, requiring external CI/CD configuration to replicate what Heroku provides out of the box. Any platform where the git-push simplicity exists, but the infrastructure lives in a shared third-party cloud rather than the team&#8217;s own account.</p><p><a href="https://localops.co/features/continuous-deployments?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles continuous deployments.</a></p><h3><strong>Criterion 3: Native Platform Capabilities - The Non-Negotiable Four</strong></h3><p>Before a team considers any platform a viable Heroku replacement for production workloads, four capabilities must be native to the platform, not assembled from third-party add-ons.</p><p><strong>Capability 1: CI/CD</strong></p><p>CI/CD must be triggered automatically on git push with no external pipeline configuration required. Any platform that requires teams to configure GitHub Actions, CircleCI, or other external tools to replicate Heroku&#8217;s deployment automation has not replaced Heroku&#8217;s developer experience; it has delegated it.</p><p>LocalOps&#8217;s CI/CD triggers automatically on every push to a configured branch. 
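</p><p>The rollout-verify-rollback loop can be sketched in a few lines. The function names and the in-memory &#8220;cluster&#8221; below are illustrative stand-ins, not LocalOps&#8217;s actual API:</p>

```python
# Hypothetical sketch of a push-triggered deploy with automatic rollback.
# The "cluster" dict and function names are illustrative, not a real platform API.

def deploy_on_push(cluster, service, new_image, health_check):
    """Roll out a newly built image, verify it, and roll back on failure."""
    previous_image = cluster.get(service)   # remember the last known-good version
    cluster[service] = new_image            # roll out the new version
    if health_check(new_image):             # e.g. probe the service until ready
        return "deployed"
    cluster[service] = previous_image       # health check failed: restore old image
    return "rolled_back"

cluster = {"api": "api:v41"}
print(deploy_on_push(cluster, "api", "api:v42", lambda img: True))   # healthy release
print(deploy_on_push(cluster, "api", "api:v43", lambda img: False))  # bad release
print(cluster["api"])  # still the last healthy image
```

<p>The essential property is that the last known-good version is retained until the new one passes its health check, so a failed deploy converges back to a working state without human intervention.</p><p>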
Build, containerize, deploy, health check, rollback: all handled without external pipeline configuration.</p><p><strong>Capability 2: Observability</strong></p><p>Observability must be built into the platform from day one, with metrics, logs, and alerts available from the first deployment without add-on configuration. Any platform that requires teams to assemble observability from Papertrail, New Relic, Scout, or equivalent tools has recreated Heroku&#8217;s fragmented monitoring model with different vendor names.</p><p>LocalOps includes Prometheus for metrics, Loki for log aggregation, and Grafana for dashboards, pre-configured in every environment at no additional cost. Logs and metrics are available from the first deployment with no setup required.</p><p><a href="https://localops.co/features/builtin-monitoring?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how built-in monitoring works on LocalOps.</a></p><p><strong>Capability 3: Autoscaling</strong></p><p>Autoscaling must respond automatically to real workload signals (CPU utilization, memory pressure, and request queue depth) without manual intervention. Any platform where scaling requires a human decision, a manual configuration change, or a dyno tier upgrade has not solved Heroku&#8217;s scaling model problem.</p><p>LocalOps runs workloads on EKS with horizontal pod autoscaling configured by default. Services scale up under load and back down automatically when traffic drops. Teams pay for actual compute consumption, not for the tier above what they need.</p><p><a href="https://localops.co/features/auto-scaling?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how autoscaling works on LocalOps.</a></p><p><strong>Capability 4: Secrets Management</strong></p><p>Secrets must be stored securely, encrypted at rest, and injected into containers at runtime, without developers having direct access to production credentials. 
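</p><p>The injection pattern can be sketched as resolving secret references at container start, so only an opaque reference is ever committed to configuration. The names and the reference scheme below are hypothetical; a real platform would resolve them against AWS Secrets Manager under an IAM-scoped role:</p>

```python
# Minimal sketch of runtime secret injection. The "secret://" scheme and the
# dict-backed store are hypothetical stand-ins for a real secrets manager.

def inject_secrets(declared_env, resolve):
    """Replace 'secret://<name>' references with values fetched at container start."""
    runtime_env = {}
    for key, value in declared_env.items():
        if value.startswith("secret://"):
            runtime_env[key] = resolve(value.removeprefix("secret://"))
        else:
            runtime_env[key] = value
    return runtime_env

# Developers commit only the reference; the credential itself exists only
# in the running container's environment.
declared = {"DATABASE_URL": "secret://prod/db_url", "LOG_LEVEL": "info"}
store = {"prod/db_url": "postgres://user:pw@db:5432/app"}
env = inject_secrets(declared, store.__getitem__)
print(env["DATABASE_URL"])
```

<p>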
Any platform that stores secrets as plain-text environment variables, or where secrets management requires external tooling, has not met the baseline security requirement for B2B SaaS production workloads.</p><p>LocalOps stores all secrets in AWS Secrets Manager. Secrets are encrypted at rest, accessible to services through IAM-scoped roles, and never exposed to developers directly. Secrets management is part of the platform, not an integration requirement.</p><p><strong>What this criterion eliminates:</strong></p><p>Any platform where one or more of these four capabilities is missing, requires external tooling to achieve, or is available only on higher pricing tiers. For production SaaS teams at Series A and beyond, all four are baseline requirements, not advanced features.</p><h3><strong>Criterion 4: Production-Readiness, How to Distinguish Genuine From Superficial</strong></h3><p>This is the criterion most commonly underweighted in Heroku alternative evaluations. Many platforms work well for early-stage applications and side projects. Fewer are genuinely production-ready for B2B SaaS teams running 20+ services with enterprise customers, SLA commitments, and compliance requirements.</p><p><strong>Multi-environment management.</strong> A production-ready platform manages development, staging, and production environments as first-class concepts, with isolated VPCs, environment-specific configuration, and blast radius containment between environments. A platform that requires manual environment management or shares infrastructure between environments is not production-ready.</p><p><strong>Preview environments on pull requests.</strong> Production-ready platforms spin up ephemeral environments automatically on every pull request, giving QA and code review a live URL before merge. This is one of the most valuable Heroku features. 
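</p><p>The lifecycle is simple to state: provision an isolated environment when a pull request opens, tear it down when the pull request closes. The event shape and URL scheme below are illustrative, not a real webhook payload:</p>

```python
# Hypothetical sketch of preview-environment lifecycle driven by PR events.
# Event fields and the URL scheme are illustrative placeholders.

def handle_pr_event(environments, event):
    """Provision an isolated environment on PR open; tear it down on close."""
    pr = event["number"]
    if event["action"] == "opened":
        environments[pr] = f"https://pr-{pr}.preview.example.com"  # live review URL
    elif event["action"] == "closed":
        environments.pop(pr, None)  # release the underlying cloud resources
    return environments

envs = {}
handle_pr_event(envs, {"action": "opened", "number": 128})
print(envs[128])                  # reviewers get a URL before merge
handle_pr_event(envs, {"action": "closed", "number": 128})
print(128 in envs)                # environment and its resources are gone
```

<p>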
Its absence on an alternative is a signal about production maturity.</p><p>LocalOps provisions preview environments automatically on every pull request, complete, isolated environments with their own URL, running the full application stack. No configuration required. When the PR closes, the environment is torn down, and AWS resources are released.</p><p><a href="https://localops.co/features/preview-environments?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how preview environments work on LocalOps.</a></p><p><strong>Persistent workload support.</strong> Production SaaS applications run background workers, cron jobs, and stateful services that need to run reliably without ephemeral filesystem issues. A production-ready Heroku alternative handles these as first-class service types, not as workarounds.</p><p>LocalOps supports web services, background workers, cron jobs, internal services, and job-type workloads natively. Each service type is configured independently and scales independently based on its own workload signals.</p><p><strong>Incident response capability.</strong> When something breaks at 2 am, a production-ready platform provides complete information immediately, logs, metrics, and recent deployment history in one interface. Platforms where incident response requires correlating four separate tools are not production-ready in the operational sense.</p><p><strong>What this criterion eliminates:</strong></p><p>Any platform where preview environments require additional configuration or third-party tooling. Any platform where background workers and cron jobs are not first-class service types. Any platform where multi-environment management shares infrastructure between staging and production.</p><h3><strong>Criterion 5: Compliance and Security Architecture</strong></h3><p>For B2B SaaS teams scaling from Series A to Series B, compliance is not a future consideration. 
It is an active sales requirement.</p><p>Enterprise procurement processes ask specifically about VPC configuration, private networking between services, IAM-based access control, infrastructure audit logging, and data residency. These are not advanced requirements. They are the baseline that enterprise security questionnaires are built around.</p><p><strong>What production-grade compliance architecture requires:</strong></p><p>Infrastructure in the team&#8217;s own cloud account, not on a vendor&#8217;s shared platform. Data residency in a configurable AWS region. VPC isolation with private subnets. Least-privilege IAM policies applied automatically to every environment. Encrypted secrets via AWS Secrets Manager. Audit logging through AWS CloudTrail.</p><p>LocalOps applies all of these by default to every environment, following<a href="https://aws.amazon.com/architecture/well-architected"> AWS Well-Architected standards</a>. The compliance architecture is not a configuration option. It is the default.</p><p><strong>The compliance ceiling problem:</strong></p><p>Managed PaaS alternatives (Render, Railway, and Fly.io) have a compliance ceiling defined by what the vendor chooses to support. Teams selling to enterprise customers consistently discover this ceiling 12&#8211;18 months after migrating to a managed PaaS alternative to Heroku. The security questionnaire arrives, and the platform cannot answer it.</p><p>AWS-native IDPs running in the team&#8217;s own account have no such ceiling. The compliance surface is AWS itself, which holds SOC 2, HIPAA, GDPR, PCI DSS, and dozens of additional certifications.</p><p><strong>What this criterion eliminates:</strong></p><p>Any platform where infrastructure runs on a shared third-party cloud. Any platform where compliance certification depends on the vendor&#8217;s decisions rather than AWS&#8217;s. 
Any platform that cannot provide VPC isolation, private networking, and IAM-based access control as defaults.</p><p><a href="https://localops.co/features/secure-by-default?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles security by default.</a></p><h3><strong>Criterion 6: Exit Optionality and Vendor Lock-in</strong></h3><p>The final criterion is one that most teams underweight at evaluation time and overweight in retrospect: what happens if you need to change platforms in three years?</p><p>Platforms that run standard Kubernetes in the team&#8217;s own AWS account answer this question clearly. The infrastructure runs independently of the platform vendor. Stopping use of the platform means the infrastructure continues running, managed directly through the AWS console or CLI. No data to export. No infrastructure to rebuild. No migration to execute.</p><p>Platforms where infrastructure lives in the vendor&#8217;s cloud, including Heroku and all managed PaaS alternatives, require a full rebuild to exit. This is the vendor lock-in that teams are trying to escape from Heroku. Choosing a managed PaaS alternative to Heroku replaces one lock-in with another.</p><p><strong>How LocalOps handles this:</strong></p><p>Every resource LocalOps provisions lives inside the team&#8217;s own AWS account. EKS clusters, RDS databases, VPCs, load balancers, all owned by the team, all running in their account. If a team stops using LocalOps, the infrastructure continues running without interruption. Nothing depends on LocalOps&#8217;s systems to stay operational.</p><p>This is a deliberate architectural decision. Infrastructure ownership is the core premise of the product. 
The exit path is always open.</p><h2><strong>How the Framework Applies to Your Decision</strong></h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/0qZ3p/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21ce1de1-2609-4093-a807-3a91150b60ea_1220x1208.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/493ca457-7a0d-414e-b3fb-c085ea855375_1220x1378.png&quot;,&quot;height&quot;:704,&quot;title&quot;:&quot;For teams actively comparing specific platforms, the framework produces a clear differentiation.&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/0qZ3p/1/" width="730" height="704" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>For teams actively comparing specific platforms, the framework produces a clear differentiation.</p><p>The framework does not produce a single universal winner. 
It produces the right answer for a specific team profile.</p><p>For early-stage teams with no current enterprise compliance pressure: Render and Railway are reasonable transitional options.</p><p>For teams with dedicated platform engineering capacity and specific customization requirements, open-source self-hosted alternatives can work.</p><p>For Series A&#8211;B B2B SaaS teams with enterprise customers, compliance requirements, and 20+ production services: AWS-native IDPs are the only category that meets all six criteria simultaneously.</p><p><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps maps to your team&#8217;s specific requirements.</a></p><h2><strong>How LocalOps Delivers Against the Framework</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack, Prometheus, Loki, and Grafana, automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><p>From here, the developer experience is identical to Heroku. Push to your configured branch. LocalOps builds, containerizes, and deploys to AWS automatically. Preview environments spin up on every pull request. Logs and metrics available from day one. Autoscaling and auto-healing run by default.</p><p>The infrastructure runs in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt.</p><blockquote><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. 
Partnering with LocalOps has been one of our best technical decisions.&#8221;</em> <strong>&#8211; Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man months of effort, all of which LocalOps has saved for us.&#8221;</em> <strong>&#8211; Gaurav Verma, CTO and Co-founder, SuprSend</strong></p></blockquote><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>What are the best Heroku alternatives in 2026 for production SaaS teams running 20+ services?</strong></p></li></ol><p>For teams running 20+ production services, the best Heroku alternatives are AWS-native Internal Developer Platforms. At this scale, the structural differences in cost model become significant; managed PaaS alternatives maintain a platform margin on every component that compounds across 20+ services, while AWS-native IDPs run at direct AWS pricing with a flat platform fee. The compliance, observability, and autoscaling requirements of teams at this scale also eliminate managed PaaS alternatives that cannot provide VPC isolation, native observability, or event-driven horizontal autoscaling. LocalOps is built specifically for this profile, production SaaS teams at Series A and beyond who need infrastructure ownership without the operational overhead of building a platform from scratch.</p><ol start="2"><li><p><strong>Which Heroku alternatives support git-push deployment while running on the team&#8217;s own cloud account?</strong></p></li></ol><p>This combination, git-push deployment on infrastructure the team owns, is what separates AWS-native Internal Developer Platforms from other alternatives to Heroku. Managed PaaS alternatives support git-push but run on shared third-party infrastructure. 
Open-source self-hosted alternatives run on your own infrastructure but require significant platform engineering to provide a genuine git-push experience. LocalOps provides both natively: push to a configured branch, and LocalOps handles build, containerization, deployment, health checks, and rollback automatically, running entirely within the team&#8217;s own AWS account. No Kubernetes knowledge required. No platform team involvement.</p><ol start="3"><li><p><strong>What native capabilities must a Heroku alternative have before it is production-ready?</strong></p></li></ol><p>Four capabilities must be native to the platform, not assembled from add-ons or external tools. CI/CD that triggers automatically on git push without external pipeline configuration. Observability that includes metrics, logs, and alerts from the first deployment at no additional cost. Autoscaling that responds to real workload signals without manual intervention. And secrets management that stores credentials encrypted at rest and injects them at runtime without developer access to raw credentials. Any platform missing one or more of these natively is not a viable Heroku replacement for production B2B SaaS workloads, regardless of how it presents these capabilities on a feature page.</p><ol start="4"><li><p><strong>How do you evaluate whether a Heroku alternative is genuinely production-ready?</strong></p></li></ol><p>The production-readiness signals that matter are specific. Multi-environment management with isolated VPCs and blast radius containment between environments. Preview environments that spin up automatically on every pull request without manual configuration. Persistent workload support: background workers, cron jobs, and stateful services as first-class service types. And incident response capability: complete logs, metrics, and deployment history in a unified interface without correlating multiple tools. 
Platforms that require manual environment management, do not support preview environments natively, or fragment observability across add-ons are not production-ready for teams with SLA commitments and enterprise customers.</p><ol start="5"><li><p><strong>What is the difference between the best Heroku alternatives for early-stage versus Series A teams?</strong></p></li></ol><p>Early-stage teams are optimizing for migration speed and operational simplicity. Managed PaaS alternatives like Render and Railway are reasonable options; they reduce costs modestly versus Heroku, preserve developer experience, and require minimal migration effort. Series A and beyond teams are optimizing for infrastructure ownership, compliance capability, and TCO at scale. These requirements eliminate managed PaaS alternatives structurally, not because they are bad products, but because they cannot provide the VPC isolation, compliance architecture, or cost efficiency at scale that B2B SaaS teams with enterprise customers require. The best Heroku alternative for an early-stage team and the best Heroku alternative for a Series A team are genuinely different answers to genuinely different questions.</p><ol start="6"><li><p><strong>Is AWS a good Heroku alternative for teams without a dedicated DevOps engineer?</strong></p></li></ol><p>AWS is the right infrastructure foundation, but raw AWS is not a developer-friendly deployment platform without a layer on top. Configuring EKS, VPCs, load balancers, IAM roles, CI/CD pipelines, and observability from scratch is a three- to six-month platform engineering project before a single product engineer can deploy independently. LocalOps makes AWS a practical Heroku alternative for teams without dedicated DevOps by handling all of that infrastructure configuration automatically. Your engineers connect an AWS account and a GitHub repository. LocalOps provisions a production-ready environment in under 30 minutes. 
Developers deploy the same way they did on Heroku from day one.</p><h2><strong>Key Takeaways</strong></h2><p>The best Heroku alternatives for production SaaS teams are not evaluated on features. They are evaluated on whether they solve the right problems for the specific business stage, and whether they create new problems in the process.</p><p>For B2B SaaS teams scaling from Series A to Series B, the evaluation framework is clear. Infrastructure ownership is a business requirement. Developer experience preservation is non-negotiable. Native CI/CD, observability, autoscaling, and secrets management are baseline requirements. Production-readiness is specific and testable. Compliance architecture determines enterprise deal outcomes. And exit optionality is the difference between the infrastructure you own and the infrastructure you rent.</p><p>The best Heroku alternative in 2026 for this profile is an AWS-native Internal Developer Platform that meets all six criteria simultaneously, infrastructure in your own account, developer experience preserved, and operational complexity handled by the platform rather than your team.</p><p><strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers review your current setup and walk through what the migration looks like for your specific stack.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First production environment live in under 30 minutes. 
No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Heroku Migration Guide &#8594;</a></strong> Step-by-step database migration, environment setup, and DNS cutover.</p>]]></content:encoded></item><item><title><![CDATA[Heroku to AWS Migration: A Zero-Downtime Playbook]]></title><description><![CDATA[The six-step playbook for moving from Heroku to AWS without downtime, and the three planning mistakes that cause most migrations to stall.]]></description><link>https://blog.localops.co/p/heroku-to-aws-migration-a-zero-downtime</link><guid isPermaLink="false">https://blog.localops.co/p/heroku-to-aws-migration-a-zero-downtime</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Wed, 25 Mar 2026 04:30:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!og24!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!og24!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!og24!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 424w, 
https://substackcdn.com/image/fetch/$s_!og24!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 848w, https://substackcdn.com/image/fetch/$s_!og24!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 1272w, https://substackcdn.com/image/fetch/$s_!og24!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!og24!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png" width="2400" height="1509" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1509,&quot;width&quot;:2400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6383928,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191838355?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86610395-465d-4ae4-ac9d-84b57d93204b_2400x2084.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!og24!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 424w, https://substackcdn.com/image/fetch/$s_!og24!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 848w, https://substackcdn.com/image/fetch/$s_!og24!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 1272w, https://substackcdn.com/image/fetch/$s_!og24!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42223a12-0890-45b2-86fa-61343a4bf663_2400x1509.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Engineering teams evaluating Heroku alternatives in 2026 face a consistent challenge. The infrastructure move to AWS is straightforward. The organizational discipline required to execute it without downtime is not.</p><p>Most Heroku to AWS migrations do not fail because of technical mistakes. They fail because the migration was treated like a technical project. In reality, it is an operational one. The infrastructure moves successfully. Then something nobody documented (a webhook pointing to the old URL, a background job hitting the wrong database, an SSL certificate missing before DNS cutover) causes an incident. Two extra days of preparation would have prevented it.</p><p>The best Heroku alternative is only as good as the migration that gets you there. This playbook is written for engineering teams who want to get it right the first time.</p><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> Step-by-step migration from Heroku to AWS with zero downtime, database cutover, DNS migration, cost reduction, and the planning mistakes that cause delays</p><p><strong>Who it is for:</strong> Engineering teams actively planning a migration away from Heroku who need a practical operational playbook</p><p><strong>The core principle:</strong> Operational discipline determines migration outcomes. 
Not technical complexity.</p><p><strong>Want to skip straight to the technical steps?</strong> <a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">The full step-by-step reference is here</a>.</p><h2><strong>Why Heroku to AWS Migrations Stall</strong></h2><p>Engineering teams evaluating the best Heroku alternatives in 2026 encounter the same failure modes repeatedly. Understanding them before starting is how teams avoid them.</p><p>Three planning mistakes account for the majority of delayed migrations.</p><p><strong>Mistake 1: Starting before the inventory is complete.</strong></p><p>Teams begin provisioning AWS infrastructure before documenting what they are migrating. Halfway through, they find a Heroku-specific buildpack nobody remembered. Or a scheduled job not captured in any deployment manifest. Or a third-party integration that assumes the Heroku environment and breaks silently in the new setup.</p><p>Each undocumented dependency adds days. Several at once add weeks. The inventory is not overhead. It is the foundation on which everything rests.</p><p><strong>Mistake 2: Treating database migration as a single step.</strong></p><p>Database migration is the highest-risk part of any Heroku alternative migration. Teams that treat it as one step (export, restore, switch connection strings) consistently run into problems. A parallel running approach would have caught these issues before customers were affected.</p><p>Running both databases simultaneously, verifying integrity at each stage, and keeping rollback options open is the operational standard. Not a precaution.</p><p><strong>Mistake 3: Skipping the observation period.</strong></p><p>Teams that move directly from &#8220;testing looks good&#8221; to &#8220;update DNS&#8221; skip the most valuable risk mitigation available: time. Real production traffic surfaces edge cases that testing never reveals. 
Background jobs behave differently under real load. Third-party webhooks go to stale URLs. Queries that passed testing degrade under concurrent traffic.</p><p>Teams that execute Heroku alternative migrations cleanly treat parallel running as non-negotiable.</p><h2><strong>Step 1: Run the Inventory Sprint</strong></h2><p>Every successful Heroku to AWS migration begins with documentation. Not infrastructure.</p><p>Before provisioning a single AWS resource, complete a full inventory of everything being migrated. For most production applications, this takes two to three days. It consistently eliminates the undocumented dependencies that cause mid-migration delays.</p><p><strong>What to inventory:</strong></p><p>Every Heroku service, dyno type, dyno count, buildpack configuration, the GitHub branch it deploys from, and any Heroku-specific environment assumptions in the code.</p><p>Every add-on, including ones that look unused. Some were enabled for features that no longer exist but were never removed. All need to be evaluated before migration begins.</p><p>Every config var, exported from Heroku and mapped to equivalents in the new environment. Pay close attention to Heroku-specific values: callback URLs pointing to herokuapp.com, DATABASE_URL values in Heroku&#8217;s format, and add-on credentials that need replacement.</p><p>Every third-party integration, services sending webhooks to the Heroku application, external APIs with the Heroku domain whitelisted, and email services with Heroku-specific callback URLs. These are the most commonly missed items. They are also the most common source of post-migration incidents.</p><p>Every scheduled job, whether running through Heroku Scheduler, the whenever gem, or a clock process. Scheduled jobs are easy to miss because they only run at specific intervals. 
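</p><p>Most of this inventory can be pulled in minutes with the Heroku CLI. A sketch, assuming the CLI is logged in and the app is named <code>myapp</code> (a hypothetical name):</p>

```shell
# Dyno formation: web, worker, and clock processes
heroku ps -a myapp

# Every add-on, including ones that may no longer be used
heroku addons -a myapp

# All config vars, for mapping to the new environment
heroku config -a myapp

# Buildpacks in use
heroku buildpacks -a myapp

# Note: Heroku Scheduler jobs are configured in the dashboard and do
# not appear in `heroku ps` output; inventory them manually.
```

<p>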
They rarely surface during standard testing.</p><p><strong>Mapping Heroku dependencies to AWS equivalents:</strong></p><p>Teams with years of Heroku-specific buildpacks, Heroku Postgres, and add-on dependencies do not need to rewrite their application stack. They need to map each dependency to its AWS equivalent and validate the replacement before cutover. LocalOps handles this mapping automatically for the most common stacks, including Rails, Node.js, Python, and Go. This makes it a practical Rails hosting Heroku alternative as well as a general AWS deployment platform.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/Do8F4/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5486b1e4-4b1a-4d39-a5c2-9ce746fff578_1220x694.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/180aa902-7623-443c-8ce8-9b7e03dda715_1220x694.png&quot;,&quot;height&quot;:341,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/Do8F4/1/" width="730" height="341" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>Application code does not change for most of these migrations. Connection strings change. Environment variables change. 
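</p><p>The config-var rewrite can be partly automated: export the vars in shell format, then flag values tied to Heroku infrastructure. A sketch; the file contents below are a made-up example, not real credentials:</p>

```shell
# In practice the export comes from the Heroku CLI:
#   heroku config -s -a myapp > heroku.env
# Simulated export for illustration:
cat > heroku.env <<'EOF'
DATABASE_URL=postgres://user:pass@ec2-1-2-3-4.compute-1.amazonaws.com:5432/db
STRIPE_WEBHOOK_URL=https://myapp.herokuapp.com/webhooks/stripe
SECRET_KEY_BASE=abc123
EOF

# Flag values pointing at Heroku infrastructure; each flagged line
# needs an AWS-side replacement before cutover.
grep -nE 'herokuapp\.com|compute-1\.amazonaws\.com' heroku.env
```

<p>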
The underlying logic stays the same.</p><h2><strong>Step 2: Provision the AWS Environment</strong></h2><p>With the inventory complete, provision the target environment in AWS before touching Heroku production.</p><p>With LocalOps, this takes under 30 minutes. Connect your AWS account through keyless IAM role assumption; credentials never leave your cloud. Connect your GitHub repository. Create a new environment. LocalOps then provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack (Prometheus, Loki, and Grafana) automatically.</p><p>No Terraform. No Helm charts. No manual configuration.</p><p>Every environment follows<a href="https://aws.amazon.com/architecture/well-architected"> AWS Well-Architected standards</a> by default. Private subnets, least-privilege IAM policies, encrypted secrets via AWS Secrets Manager, and security group configurations are applied automatically. This is what makes LocalOps a genuine Heroku self-hosted alternative: your infrastructure, your AWS account, your compliance posture.</p><p><strong>Verify before moving on:</strong></p><p>Confirm the environment is provisioned in the correct AWS region. Confirm VPC configuration meets your security and compliance requirements.</p><p><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get started and provision your first environment in under 30 minutes.</a></p><h2><strong>Step 3: Deploy and Verify the Application</strong></h2><p>Push application code to a &#8216;prod&#8217; branch. Create a service in the environment created by LocalOps and configure the prod branch as the source. LocalOps detects the push, builds the container image automatically, and deploys to EKS. Rails, Node.js, Python, Go, and .NET are all detected and configured automatically.</p><p>Heroku buildpack replacement happens transparently. 
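</p><p>The push-to-deploy workflow from Step 3, as a plain-git sketch (branch name per above; the remote name <code>origin</code> is an assumption):</p>

```shell
# One-time: create the deploy branch the platform watches
git checkout -b prod
git push -u origin prod

# Every subsequent release is just a merge + push
git checkout prod
git merge main
git push origin prod   # the push triggers the build and deploy
```

<p>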
If your team has Heroku-specific buildpack configurations, LocalOps&#8217;s container build process handles the equivalent setup. No application code changes are required. This is one of the most common concerns teams raise when evaluating alternatives to Heroku. In practice, it is one of the simpler parts of the migration.</p><p>Configure secrets in the service. Use the same Heroku Postgres connection string for now.</p><p><strong>Do not proceed to database migration until all of these pass:</strong></p><ul><li><p>Web service endpoints respond correctly</p></li><li><p>Background workers start and process test jobs</p></li><li><p>Scheduled jobs execute at configured intervals</p></li><li><p>File storage operations work with the new backend</p></li><li><p>All environment variables load correctly</p></li><li><p>Third-party integrations connect and respond</p></li><li><p>Logs and metrics appear in Grafana dashboards</p></li></ul><p>Teams that rush this phase find the issues later, during the database migration phase, under production pressure.</p><h2><strong>Step 4: Migrate the Database</strong></h2><p>Database migration is the highest-risk phase of any Heroku migration. Treat it as a sequence of distinct steps with verification gates.</p><p>The full technical reference is in the<a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> LocalOps Heroku migration guide</a>. The operational steps are:</p><p><strong>Create and verify a backup.</strong> Create a manual backup of Heroku Postgres using the Heroku CLI. Verify the backup file size matches the expected database size before proceeding.</p><p><strong>Provision Amazon RDS.</strong> LocalOps provisions RDS inside your VPC automatically. 
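</p><p>The backup capture and size check above can be scripted with the Heroku CLI; a sketch, with a hypothetical app name:</p>

```shell
# Capture a fresh manual backup
heroku pg:backups:capture -a myapp

# Download it (saved locally as latest.dump)
heroku pg:backups:download -a myapp

# Sanity-check the dump size against the expected database size
ls -lh latest.dump
heroku pg:info -a myapp   # compare against the reported data size
```

<p>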
The database runs privately, not publicly accessible, with credentials stored in AWS Secrets Manager.</p><p><strong>Restore and run integrity checks.</strong> Restore the backup to RDS using an EC2 instance inside the same VPC. Run integrity checks before switching any application traffic. Check row counts on critical tables, spot-check recent data, and verify all Postgres extensions are present. Do not proceed until every check passes.</p><p><strong>Run both databases in parallel.</strong> For production applications where downtime is unacceptable, use<a href="https://aws.amazon.com/dms"> AWS Database Migration Service</a> to replicate ongoing changes from Heroku Postgres to RDS in near-real time. Heroku Postgres keeps receiving application writes; DMS replicates them so RDS stays in sync. Keep both running until confidence in the target database is high, not just until initial tests pass.</p><p><strong>Handle background workers.</strong> Scale background workers to zero before switching the database connection string. Let in-flight jobs complete. Then switch the connection string. Verify connectivity. Scale workers back up. Monitor job processing closely for the first hour after the switch.</p><p><strong>Switch connection strings and monitor.</strong> Update environment variables to point to RDS. Deploy the change. Monitor logs and database metrics for the first 30 minutes. Watch specifically for connection pool exhaustion, query performance changes, and connectivity errors.</p><p><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the full database migration walkthrough.</a></p><h2><strong>Step 5: The Parallel Running Period</strong></h2><p>Before updating DNS, run both the Heroku app and the AWS environment simultaneously with real production traffic.</p><p>This is the step most commonly compressed under deadline pressure. It is also the source of most post-migration incidents. 
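</p><p>A lightweight way to compare the two stacks during parallel running is to hit both origins directly, bypassing DNS. A sketch; the hostnames, health-check path, and load balancer IP are hypothetical:</p>

```shell
# Old stack: query the Heroku origin directly
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' \
  https://myapp.herokuapp.com/healthz

# New stack: pin the custom domain to the new AWS load balancer IP
# so the request reaches AWS regardless of current DNS
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' \
  --resolve app.example.com:443:203.0.113.10 \
  https://app.example.com/healthz
```

<p>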
Teams that skip parallel running consistently discover in production the edge cases that a proper observation period would have caught safely.</p><p><strong>What parallel running catches:</strong></p><ul><li><p>Background jobs behaving differently under real load</p></li><li><p>Third-party webhooks going to stale URLs</p></li><li><p>Cron jobs creating conflicts under real concurrency</p></li><li><p>Queries degrading under concurrent production traffic</p></li></ul><p><strong>Minimum periods:</strong></p><p>Small applications: 48 to 72 hours.</p><p>Mid-size production applications: 5 to 7 days.</p><p>Large applications with complex dependencies: 2 weeks.</p><p>If unexpected behavior surfaces, extend the period. Do not compress it to meet a deadline.</p><h2><strong>Step 6: DNS Cutover</strong></h2><p>DNS cutover is the second-highest-risk step when migrating to an AWS Heroku alternative. Execute it with preparation and a documented rollback plan.</p><p><strong>48 hours before:</strong></p><p>Lower TTL on all DNS records to 60 seconds. Verify SSL certificates are active for all custom domains in AWS Certificate Manager. LocalOps provisions these automatically, but verify before cutover day. Document the rollback procedure and confirm the team member responsible is available during the cutover window.</p><p><strong>On cutover day:</strong></p><p>Update DNS records to point to the new LocalOps environment. Both environments receive traffic briefly during propagation; keep both operational with monitoring active. Watch the new environment closely for the first four hours. Look specifically for elevated error rates, response time changes, job processing failures, and integration errors.</p><p><strong>After cutover:</strong></p><p>Keep Heroku running for 24&#8211;48 hours. The cost of two extra days is far lower than the cost of rolling back under pressure. 
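</p><p>The TTL change and the cutover itself can be verified from the command line; a sketch with a hypothetical domain:</p>

```shell
# Before cutover: confirm the lowered TTL took effect
# (the second field of each answer is the TTL in seconds)
dig +noall +answer app.example.com

# After updating records: watch propagation until only the
# new target is returned
watch -n 60 'dig +short app.example.com'
```

<p>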
Do not decommission Heroku until the observation period is complete.</p><p><a href="https://cal.com/anand-localops/migrate-from-heroku-to-aws?duration=30&amp;utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Prefer to have LocalOps engineers handle the cutover for you? Schedule a migration call.</a></p><h2><strong>What the Cost Reduction Looks Like</strong></h2><p>The cost reduction from moving to an AWS Heroku alternative via LocalOps comes from two structural sources.</p><p><strong>Source 1: No platform margin.</strong></p><p>Heroku runs on AWS and charges a markup on every resource it provisions. Every dyno carries a Heroku margin. Every Heroku Postgres instance is an Amazon RDS instance with a Heroku margin. Every Heroku Redis instance is an ElastiCache instance with a Heroku margin.</p><p>With LocalOps, you pay AWS list pricing for every resource. The margin is gone. The size of the difference depends on dyno count, database tier, and traffic volume. However, the direction is structural; AWS pricing without a platform margin is lower than PaaS pricing with one.</p><p><strong>Source 2: No observability add-on fees.</strong></p><p>A typical production Heroku stack includes paid add-ons for log management and application performance monitoring. LocalOps includes Prometheus, Loki, and Grafana pre-configured in every environment. 
These tools are included, not sold separately.</p><p>This is a structural difference from Heroku open source alternatives, where observability setup is entirely your team&#8217;s responsibility, and from first-generation alternatives to Heroku like Render or Railway, where monitoring add-ons remain separate cost items.</p><p>For an estimate based on your current Heroku setup,<a href="https://go.localops.co/heroku"> speak with the LocalOps team</a>.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/vUjHW/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b4f2015-62aa-440a-bede-3ee3d73bd18b_1220x716.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d007c0e8-7bae-4361-b66a-5e27cae91909_1220x786.png&quot;,&quot;height&quot;:389,&quot;title&quot;:&quot;Realistic Migration Timelines&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/vUjHW/1/" width="730" height="389" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The timeline is determined by documentation quality and parallel running discipline. 
Teams that invest in both migrate faster, because they are not spending the migration window debugging issues the inventory would have caught upfront.</p><p><strong>White-glove migration available.</strong> LocalOps engineers handle the entire process end-to-end, including database migration, environment variables, custom domains, and DNS cutover. Most teams are live within a day.<a href="https://cal.com/anand-localops/migrate-from-heroku-to-aws?duration=30&amp;utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Schedule a migration call</a>.</p><h2><strong>How LocalOps Works as the Best AWS Heroku Alternative</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a full observability stack automatically. No Terraform. No Helm charts. No manual steps. First environment ready in under 30 minutes.</p><p>From that point, the developer experience matches Heroku exactly. Push to your configured branch. LocalOps builds, containerizes, and deploys to AWS automatically. Logs and metrics are available from day one. Autoscaling and auto-healing run by default.</p><p>The infrastructure lives in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt. 
This is the fundamental difference between LocalOps and alternatives to Heroku that simply replace one managed platform with another.</p><blockquote><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man-months of effort, all of which LocalOps has saved for us.&#8221;</em> <strong>&#8211; Gaurav Verma, CTO and Co-founder, SuprSend</strong></p><p><em>&#8220;We saved months of DevOps effort by using LocalOps.&#8221;</em> <strong>&#8211; Shobit Gupta, Ex-Uber, CTO and Co-founder, Segwise</strong></p></blockquote><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get started for free - first environment live in under 30 minutes.</a></strong></p><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>What is the highest-risk step when migrating to an AWS Heroku alternative?</strong></p></li></ol><p>Database migration carries the highest risk. It involves moving the most critical stateful component with limited rollback options if something goes wrong. The safe path runs both databases in parallel using AWS DMS, switches connection strings only after integrity is verified, and keeps Heroku Postgres running until confidence in RDS is established. DNS cutover is the second-highest-risk step. Lower TTL 48 hours before the switch, document the rollback plan before starting, and keep Heroku running through the full observation period after propagation.</p><ol start="2"><li><p><strong>What are the best Heroku alternatives in 2026 for zero-downtime migration?</strong></p></li></ol><p>AWS-native Internal Developer Platforms are the strongest Heroku alternatives in 2026 for teams with strict uptime requirements. The migration approach uses AWS DMS for near-real-time database replication. Both environments run simultaneously until confidence is high. 
DNS cutover happens after a full parallel running period. The Heroku application stays live throughout, with no forced downtime window. First-generation alternatives to Heroku, like Render and Railway, do not support this model because they require rebuilding infrastructure rather than running parallel environments.</p><ol start="3"><li><p><strong>How long does a Heroku to AWS migration realistically take?</strong></p></li></ol><p>Small applications (two to three services, database under 2GB) take three to seven days. Mid-size production applications (five to ten services, background workers, custom domains) take two to four weeks. Large applications with ten or more services and compliance requirements take six to twelve weeks. Documentation quality before migration begins is the primary timeline variable. A two to three-day inventory sprint consistently reduces overall timelines by eliminating undocumented dependencies.</p><ol start="4"><li><p><strong>Do we need to rewrite the application when choosing the best Heroku alternative?</strong></p></li></ol><p>No. Teams with years of Heroku-specific buildpacks, Heroku Postgres, and add-on dependencies do not need to rewrite their stack. Application code stays the same. Connection strings change. Environment variables change. LocalOps handles buildpack replacement automatically through container build detection. The DATABASE_URL connection string format requires a configuration update, not a code change.</p><ol start="5"><li><p><strong>What is a Heroku self-hosted alternative?</strong></p></li></ol><p>A Heroku self-hosted alternative is a deployment platform running on infrastructure the team owns, in their own AWS account, rather than on a shared third-party cloud. 
Teams should consider this path when compliance requirements demand data residency in their own cloud, when enterprise customers require VPC isolation and private networking, or when the platform margin of a managed PaaS has become a meaningful cost concern. LocalOps provides a self-hosted Heroku alternative without the operational overhead of building and running the platform layer from scratch.</p><ol start="6"><li><p><strong>How do Heroku open source alternatives compare to LocalOps?</strong></p></li></ol><p>Heroku open source alternatives like Coolify and Dokku give full infrastructure control but require the team to own the complete operational burden: provisioning, security patching, observability setup, scaling, and on-call response for the platform itself. LocalOps provides the same infrastructure ownership with the platform layer managed. For teams without dedicated platform engineering capacity, the operational cost of running a Heroku open source alternative in production consistently exceeds initial estimates. LocalOps is designed for teams that want infrastructure ownership without building and maintaining the platform themselves.</p><h2><strong>Key Takeaways</strong></h2><p>A zero-downtime Heroku migration is an operational achievement. Not a technical one.</p><p>The engineering work is straightforward with the right tooling. LocalOps provisions production-ready AWS infrastructure in under 30 minutes, preserves git-push developer workflows, and includes built-in observability. What this playbook provides is the operational discipline that turns a technically correct migration into a production-safe one.</p><p>For engineering teams evaluating Heroku alternatives in 2026, the right decision is not just about what is cheapest or fastest. It is about what still works when the system, compliance requirements, and team are significantly larger than they are today. 
The best Heroku alternative solves the immediate migration problem and the long-term infrastructure ownership problem at the same time.</p><p><strong><a href="https://cal.com/anand-localops/migrate-from-heroku-to-aws?duration=30&amp;utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> LocalOps engineers handle the migration end-to-end: database, environment variables, domains, and DNS.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First environment live in under 30 minutes. No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Technical Migration Guide &#8594;</a></strong> Step-by-step commands, configuration, and RDS setup.</p><p><em><strong>Related reading:</strong></em></p><p><a href="https://blog.localops.co/p/heroku-alternatives-in-2026-the-complete">Heroku Alternatives in 2026: The Complete Guide for Engineering Leaders</a></p><p><a href="https://localops.substack.com/p/how-to-migrate-from-heroku-to-aws">How to Migrate from Heroku to AWS Without Losing Developer Experience</a></p>]]></content:encoded></item><item><title><![CDATA[How to Migrate from Heroku to AWS Without Losing Developer Experience]]></title><description><![CDATA[Git-push deployments, preview environments, zero-downtime database cutover, the full migration path, without the developer experience regression.]]></description><link>https://blog.localops.co/p/how-to-migrate-from-heroku-to-aws</link><guid isPermaLink="false">https://blog.localops.co/p/how-to-migrate-from-heroku-to-aws</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Tue, 24 Mar 2026 10:40:33 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!IV4n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IV4n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IV4n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IV4n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3632598,&quot;alt&quot;:&quot;How to Migrate from Heroku to AWS Without Losing Developer Experience.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191838772?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="How to Migrate from Heroku to AWS Without Losing Developer Experience." title="How to Migrate from Heroku to AWS Without Losing Developer Experience." 
srcset="https://substackcdn.com/image/fetch/$s_!IV4n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!IV4n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66a0468f-9fb7-4608-9bf4-65bbdc22b3a5_2400x1345.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>For engineering teams evaluating Heroku alternatives in 2026, migrating to AWS is not primarily a technical challenge. It is an organizational one.</p><p>The infrastructure move is straightforward. The hard part is ensuring developers who could deploy themselves every 20 minutes on Heroku can still do the same after the migration, without learning Kubernetes, Helm, or Terraform.</p><p>Teams searching for the best Heroku alternative consistently name developer experience preservation as their top concern. Not cost. Not compliance. Developer experience. Because when that regresses, the migration fails regardless of how sound the underlying infrastructure is.</p><p>This guide covers the full path from Heroku to AWS: how to preserve git-push deployments, replicate Heroku Review Apps, maintain genuine developer self-service, execute a zero-downtime database cutover, and plan realistic timelines for a production migration.</p><p><strong>Want to see exactly what a Heroku to AWS migration looks like?</strong> <a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read everything you need to know here.</a></p><h2><strong>TL;DR</strong></h2><p><strong>The core challenge:</strong> AWS gives you everything Heroku cannot: VPC isolation, cost efficiency, and compliance controls. But it does not give you a developer-friendly deployment experience automatically</p><p><strong>The solution:</strong> An Internal Developer Platform that handles AWS infrastructure complexity so developers never interact with it directly</p><p><strong>The outcome:</strong> Git-push
deployments preserved, observability built in, autoscaling by default, infrastructure running in your own AWS account</p><h2><strong>Why Engineering Teams Are Moving to AWS as a Heroku Alternative</strong></h2><p>Most teams do not look for an alternative to Heroku because the platform is broken. They look because their requirements outgrew it.</p><p>Costs fragment across dynos and add-ons with no clear optimization lever. Compliance questions arrive, and the infrastructure is not theirs to control. Scaling models do not match modern workloads. Observability depends on paid add-ons instead of being native to the platform.</p><p>AWS solves all of these structurally. VPC isolation and private networking. IAM-based access control. Horizontal autoscaling based on real traffic signals. Direct infrastructure pricing with no platform margin. Full compliance controls for SOC 2, HIPAA, and GDPR. Infrastructure running in your own cloud account.</p><p>What AWS does not provide automatically is a developer-friendly deployment experience. This is the gap that causes migrations to fail. Teams move to raw AWS, the infrastructure is technically correct, and developers lose the self-service capability they depended on.</p><p>The solution is an Internal Developer Platform, an abstraction layer that handles AWS infrastructure complexity invisibly so the developer-facing workflow stays identical to Heroku. This is what the best Heroku alternatives in 2026 actually provide.</p><p><a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps closes this gap.</a></p><p><strong>What Losing Developer Experience Actually Means</strong></p><p>On Heroku, any developer on the team can deploy. Push to a branch. Done. No tickets. No waiting. No infrastructure knowledge required.</p><p>The most common migration failure is not technical. It is organizational. 
The infrastructure moves to AWS successfully, but developers who used to deploy themselves now file tickets with the platform team and wait 48 hours. Shipping velocity drops. Engineers are frustrated. The migration gets blamed, even though the underlying infrastructure is sound.</p><p>This failure has a name in the engineering community: trading a PaaS dependency for a platform team dependency. The infrastructure problem is solved. The developer autonomy problem is recreated in a different form.</p><p>A successful migration away from Heroku is measured by one thing: whether any developer on the team can deploy their service, access their logs, and check the health of their application, without asking anyone for help.</p><h2><strong>How to Preserve Git-Push Deployments on the Best Heroku Alternative</strong></h2><p>The git-push deployment workflow is the single most important thing to preserve. Losing it is what causes migrations to fail organizationally.</p><p>Preserving it on an AWS Heroku alternative requires an abstraction layer that translates a git push event into the Kubernetes operations required to deploy the new version, without the developer ever touching Kubernetes directly.</p><p><strong>Here is what that looks like with LocalOps:</strong></p><p>A developer pushes code to a configured branch. LocalOps detects the push, builds a container image automatically, pushes it to Amazon ECR, updates the Kubernetes deployment on EKS, runs health checks, and handles rollback if the deployment fails. Within minutes, the new version is live.</p><p>From the developer&#8217;s perspective, the workflow is identical to Heroku. Push code. Deployment happens. No Kubernetes knowledge required. No Helm charts. No Terraform. No platform team to notify.</p><p><strong>Replacing Heroku buildpacks on AWS:</strong> LocalOps replaces Heroku buildpacks with container image builds automatically. If your team has a Dockerfile, LocalOps uses it directly. 
If not, LocalOps detects the language and framework and generates container configuration automatically. Rails, Node.js, Python, Go, and .NET are all supported out of the box. The build is triggered by a git push, identical to what your team did on Heroku.</p><p>This is particularly relevant for teams evaluating a Rails hosting Heroku alternative. Rails-specific requirements such as Sidekiq workers, Postgres with connection pooling, Active Storage, and Action Cable are all handled natively without fragile add-on integrations.</p><p><strong>What the platform must provide:</strong> Pre-configured CI/CD pipelines that trigger on every push. Deployment status visibility without kubectl or AWS console access. Rollback capability that any developer can trigger themselves, without platform team involvement.</p><p><a href="https://localops.co/features/continuous-deployments?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles continuous deployments.</a></p><h2><strong>Replicating Heroku Review Apps on AWS</strong></h2><p>Heroku Review Apps (ephemeral, per-pull-request environments with a live URL) are one of the most valuable features teams lose when they move to alternatives to Heroku. Losing them slows QA, makes code review harder, and reduces confidence in deployments before they reach production.</p><p>Replicating this on AWS requires spinning up a full environment automatically when a pull request is opened and tearing it down when the PR is closed. Technically possible on Kubernetes, but configuring it from scratch requires significant platform engineering work.</p><p>LocalOps handles this automatically. Every pull request triggers a complete, isolated environment with its own URL running the full application stack. No additional configuration.
No platform team involvement.</p><p><a href="https://localops.co/features/preview-environments?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how preview environments work on LocalOps</a>.</p><p>Each preview environment gets its own isolated namespace in the EKS cluster. Environment variables and secrets are inherited from the base configuration. The URL is posted automatically to the pull request. When the PR is closed, the environment is torn down, and AWS resources are released.</p><p>Preview environments on LocalOps do not share a database with production or staging. Each is fully isolated, configurable to use a dedicated test database or a seeded copy of production data. A broken preview environment has zero blast radius on other environments.</p><h2><strong>Developer Self-Service on an AWS-Native Heroku Alternative</strong></h2><p>Developer self-service is not just about deployment. It is the full scope of infrastructure interactions a developer needs, without filing a ticket.</p><p>On Heroku, this was implicit in the platform design. On a raw AWS migration without a platform layer, all of it requires explicit design decisions. Without them, the platform team becomes a bottleneck for every infrastructure interaction.</p><p><strong>What genuine self-service requires on a Heroku alternative:</strong></p><p>Deployment without tickets. Any developer pushes code and sees it deployed. No approval workflow. No waiting.</p><p>Environment management without ops involvement. Developers create environments, configure variables, and manage secrets through a self-service interface, without understanding VPCs, IAM roles, or Kubernetes namespaces.</p><p>Log and metric access without AWS console knowledge. 
Developers access logs and metrics through a unified interface without navigating CloudWatch or writing Prometheus queries.</p><p><strong>How platform teams enforce guardrails without becoming bottlenecks:</strong></p><p>The key is encoding security controls into the platform rather than into an approval process.</p><p>With LocalOps, every environment is provisioned from hardened infrastructure templates following<a href="https://aws.amazon.com/architecture/well-architected"> AWS Well-Architected standards</a>. Private subnets, least-privilege IAM policies, encrypted secrets via AWS Secrets Manager, and security group configurations are applied automatically to every environment. Developers cannot provision insecure infrastructure because insecure options are not available in the self-service interface.</p><p>Platform teams set the guardrails once. Developers work within them without knowing they exist. Security is enforced at the infrastructure level, not through a ticket queue. This is the model that makes Heroku self-hosted alternatives and AWS-native IDPs genuinely viable for compliance-sensitive teams.</p><p><a href="https://localops.co/features/secure-by-default?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps handles security by default.</a></p><h2><strong>The Highest-Risk Step: Zero-Downtime Database and DNS Cutover</strong></h2><p>Of all the steps in a migration from Heroku to AWS, database migration carries the highest risk. It involves moving the most critical stateful component with a limited ability to roll back cleanly.</p><p><strong>The safe database migration path:</strong></p><p>Start by creating a verified backup of Heroku Postgres before touching anything. Restore it to Amazon RDS. Run integrity checks: row counts on critical tables, spot checks on recent data, and verification of all database extensions. 
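</p><p>A minimal sketch of that comparison step, assuming hypothetical table names and counts (in practice, each number would come from a <code>SELECT count(*)</code> run against Heroku Postgres and against RDS):</p>

```python
# Hedged sketch: compare per-table row counts captured from the source
# (Heroku Postgres) and target (Amazon RDS) databases before cutover.
# Table names and counts below are illustrative, not from a real migration.
def find_count_mismatches(source_counts: dict, target_counts: dict) -> dict:
    """Return {table: (source, target)} for tables whose counts differ,
    including tables present on only one side (missing side is None)."""
    mismatches = {}
    for table in set(source_counts) | set(target_counts):
        src = source_counts.get(table)
        tgt = target_counts.get(table)
        if src != tgt:
            mismatches[table] = (src, tgt)
    return mismatches

if __name__ == "__main__":
    source = {"users": 10231, "orders": 55310, "events": 901223}
    target = {"users": 10231, "orders": 55298}  # drift plus a missing table
    print(find_count_mismatches(source, target))
```

<p>The same pattern extends to spot checks on recent rows and to the list of installed extensions.</p><p>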
Do not proceed until integrity is confirmed.</p><p>For production applications where downtime is not acceptable, use<a href="https://aws.amazon.com/dms"> AWS Database Migration Service</a> to replicate changes from Heroku Postgres to RDS in near real time. Both databases run in parallel and stay in sync. Switch connection strings only after the target database has been verified as consistent. Monitor closely for the first 30 minutes after switching.</p><p>The full step-by-step process is covered in the<a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> LocalOps Heroku migration guide</a>.</p><p><strong>The safe DNS cutover path:</strong></p><p>Lower your DNS TTL to 60 seconds 24&#8211;48 hours before the planned cutover. Update DNS records to point to the new service created within the LocalOps environment. Keep Heroku running during propagation. Have a documented rollback plan ready before starting. Keep Heroku running for 24&#8211;48 hours after DNS propagation completes.</p><p><strong>The most common mistake:</strong></p><p>Cutting over too quickly. Teams that skip parallel running consistently discover edge cases that testing did not catch: background jobs that behave differently under real load, webhooks that fail because the new URL is not configured with external services, and queries that degrade under concurrent traffic.</p><p>Parallel running is not a waste.
It is the primary risk mitigation mechanism in any migration away from Heroku.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/foH9z/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b237cdc-ae90-4270-8fc3-f6b873fc38c1_1220x716.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3400cf87-a5d2-4265-a5fd-d2be37d3b20a_1220x786.png&quot;,&quot;height&quot;:389,&quot;title&quot;:&quot;Realistic Timelines for Heroku to AWS Migration&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/foH9z/1/" width="730" height="389" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The variable that most affects the timeline is documentation quality before migration begins. Teams with every service inventoried, every add-on catalogued, and every config var recorded migrate significantly faster than teams with accumulated technical debt.</p><p>The first investment worth making: a two to three-day documentation sprint before writing any migration code. Catalogue every Heroku service, add-on, config var, third-party integration, and buildpack configuration. 
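</p><p>As a hedged example, the config-var portion of that inventory can be pulled straight from the Heroku CLI&#8217;s JSON output; the variable names and the secret-detection heuristic below are illustrative:</p>

```python
import json

# Sketch: turn the output of `heroku config --json` into a
# documentation-ready inventory, redacting values that look like secrets
# so credentials never land in the inventory document.
SECRET_HINTS = ("KEY", "SECRET", "TOKEN", "PASSWORD", "DATABASE_URL")

def inventory_config_vars(heroku_json: str) -> list:
    """Return sorted (name, value-or-'<redacted>') pairs."""
    config = json.loads(heroku_json)
    rows = []
    for name in sorted(config):
        redact = any(hint in name.upper() for hint in SECRET_HINTS)
        rows.append((name, "<redacted>" if redact else config[name]))
    return rows

if __name__ == "__main__":
    sample = '{"RAILS_ENV": "production", "SECRET_KEY_BASE": "abc123"}'
    for name, value in inventory_config_vars(sample):
        print(f"{name} = {value}")
```

<p>Run once per app and paste the result into the inventory; add-ons and buildpacks can be captured the same way from the CLI.</p><p>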
This consistently compresses overall timelines by weeks.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/Q5MDt/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/861e3bb3-d12e-4bab-bf11-c93dd5d016a2_1220x620.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0fd545c2-59f3-48bc-ad2a-4dbbf1945d72_1220x690.png&quot;,&quot;height&quot;:339,&quot;title&quot;:&quot;Heroku add-on to AWS equivalent:&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/Q5MDt/1/" width="730" height="339" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p><strong>White-glove migration:</strong> For teams that prefer not to manage the migration themselves, LocalOps engineers handle the entire process end-to-end. Most teams are live within a day.<a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026"> Schedule a migration call</a>.</p><h2><strong>How LocalOps Fits In as the Best AWS Heroku Alternative</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built specifically for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack, Prometheus, Loki, and Grafana, automatically. No Terraform. No Helm charts. 
No manual configuration. First environment ready in under 30 minutes.</p><p>From that point, the developer experience is identical to Heroku. Push to your configured branch. LocalOps builds, containerizes, and deploys to AWS automatically. Preview environments spin up on every pull request. Logs and metrics available from day one. Autoscaling and auto-healing run by default.</p><p>The infrastructure lives in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt. This is what separates a true Heroku open source alternative or AWS-native IDP from a managed PaaS that simply replaces one vendor dependency with another.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Migration Guide &#8594;</a></strong> Full technical walkthrough, database migration, environment setup, DNS cutover.</p><p><em>&#8220;Their thoughtfully designed product and tooling entirely eliminated the typical implementation headaches. Partnering with LocalOps has been one of our best technical decisions.&#8221;</em> <strong>&#8211; Prashanth YV, Ex-Razorpay, CTO and Co-founder, Zivy</strong></p><p><em>&#8220;We saved months of DevOps effort by using LocalOps.&#8221;</em> <strong>&#8211; Shobit Gupta, Ex-Uber, CTO and Co-founder, Segwise</strong></p><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>What are the best Heroku alternatives in 2026 for teams migrating to AWS?</strong></p></li></ol><p>The best Heroku alternatives in 2026 for production SaaS teams fall into three categories: managed PaaS alternatives like Render and Railway, Heroku open source alternatives like Coolify and Dokku, and AWS-native Internal Developer Platforms like LocalOps. For teams with compliance requirements, enterprise customers, or cost structures where infrastructure ownership is necessary, AWS-native IDPs are the strongest option.
They deliver Heroku-equivalent developer experience with infrastructure running in your own AWS account, built-in observability, and horizontal autoscaling, at direct AWS pricing without a platform margin.</p><ol start="2"><li><p><strong>Is AWS a good Heroku alternative for small engineering teams?</strong></p></li></ol><p>Yes, with the right platform layer. Raw AWS requires significant infrastructure expertise to operate. An AWS Heroku alternative built on an Internal Developer Platform abstracts that complexity entirely. LocalOps handles VPC setup, EKS cluster management, IAM configuration, security hardening, and observability automatically. Teams of five to ten engineers run production-grade AWS infrastructure without a dedicated DevOps hire. The threshold where dedicated infrastructure expertise becomes necessary is when requirements exceed what the platform handles automatically, well above where most small teams operate.</p><ol start="3"><li><p><strong>What is the best Heroku alternative for Rails applications?</strong></p></li></ol><p>Rails teams have specific infrastructure requirements: Sidekiq background workers, Postgres with connection pooling, Active Storage, Action Cable, and asset pipeline management. The best Rails hosting Heroku alternative handles all of these natively rather than through fragile add-on integrations. LocalOps supports Rails deployments out of the box with automatic container configuration, native cron job support for scheduled tasks, Amazon RDS for Postgres, and ElastiCache for Redis, all running in your own AWS VPC. Web and worker processes scale independently based on real load signals.</p><ol start="4"><li><p><strong>What is a Heroku self-hosted alternative, and when should teams consider one?</strong></p></li></ol><p>A Heroku self-hosted alternative is a deployment platform that runs on infrastructure the team owns and controls, typically in their own AWS account, rather than on a shared third-party cloud.
Teams should consider this path when compliance requirements demand data residency in their own cloud, when enterprise customers require VPC isolation and private networking, or when infrastructure costs at scale make the platform margin of a managed PaaS unsustainable. LocalOps is an AWS-native IDP that gives teams a self-hosted Heroku alternative without the operational overhead of building and running the platform themselves.</p><ol start="5"><li><p><strong>How do Heroku open source alternatives compare to AWS-native IDPs for production workloads?</strong></p></li></ol><p>Heroku open source alternatives like Coolify, Dokku, and CapRover offer full infrastructure control at no licensing cost. For production workloads, the tradeoff is operational overhead; your team owns provisioning, security patching, observability setup, scaling configuration, and on-call response for the platform itself. AWS-native IDPs provide the same infrastructure ownership with the platform layer managed for you. For teams without dedicated platform engineering capacity, the operational cost of running a Heroku open source alternative in production consistently exceeds the licensing cost of a managed alternative.</p><ol start="6"><li><p><strong>Can we migrate one service at a time from Heroku to AWS?</strong></p></li></ol><p>Yes,  and it is the recommended approach for any team with multiple services. Moving service by service reduces the blast radius significantly if something goes wrong. Start with the least critical service, verify it fully in the new environment, then move to the next. Most teams run Heroku and LocalOps in parallel for two to four weeks during a phased migration, shifting traffic service by service until everything is confirmed stable on AWS.</p><ol start="7"><li><p><strong>What happens to our infrastructure if we stop using LocalOps?</strong></p></li></ol><p>Your AWS infrastructure continues running without interruption. 
Everything LocalOps provisions lives inside your own AWS account; nothing depends on LocalOps&#8217;s systems to stay operational. Your EKS clusters, RDS databases, load balancers, and VPC remain fully functional. Unlike Heroku, leaving LocalOps does not mean rebuilding from scratch. This is the core architectural difference between an infrastructure-ownership model and a managed PaaS like Heroku or its first-generation alternatives.</p><h2><strong>Key Takeaways</strong></h2><p>Migrating from Heroku to AWS without losing developer experience requires one thing above all else: a platform layer that handles AWS infrastructure complexity so developers never have to interact with it directly.</p><p>Git-push deployments, preview environments, developer self-service, and zero-downtime database cutover are all achievable on the best Heroku alternatives in 2026. None requires developers to learn Kubernetes, Helm, or Terraform. They require a platform designed to absorb that complexity invisibly.</p><p>For teams evaluating alternatives to Heroku at Series A and beyond, the right decision is not just about what is cheapest or easiest to migrate to. It is about what still works when your system, compliance requirements, and team are significantly larger than they are today.</p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong> Our engineers review your Heroku setup and walk through the migration end-to-end.</p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong> First environment live in under 30 minutes. 
No credit card required.</p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Migration Guide &#8594;</a></strong> Full technical walkthrough, database migration, environment setup, DNS cutover.</p><p><em>Related reading: </em><strong><a href="https://blog.localops.co/p/heroku-alternatives-in-2026-the-complete">Heroku Alternatives in 2026: The Complete Guide for Engineering Leaders</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Heroku Alternatives in 2026: The Complete Guide for Engineering Leaders]]></title><description><![CDATA[Every Heroku alternative category compared, managed PaaS, self-hosted, and AWS Heroku alternative IDPs, and how to choose the one you won't migrate away from again.]]></description><link>https://blog.localops.co/p/heroku-alternatives-in-2026-the-complete</link><guid isPermaLink="false">https://blog.localops.co/p/heroku-alternatives-in-2026-the-complete</guid><dc:creator><![CDATA[Nidhi Pandey]]></dc:creator><pubDate>Mon, 23 Mar 2026 10:36:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-B4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-B4q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!-B4q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-B4q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4651445,&quot;alt&quot;:&quot;Heroku Alternatives in 2026: The Complete Guide for Engineering 
Leaders.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191838418?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Heroku Alternatives in 2026: The Complete Guide for Engineering Leaders." title="Heroku Alternatives in 2026: The Complete Guide for Engineering Leaders." srcset="https://substackcdn.com/image/fetch/$s_!-B4q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 424w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 848w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!-B4q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb76effc-f7d5-4e3e-815b-0e02fc24daef_2400x1600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>TL;DR</strong></h2><p><strong>What this covers:</strong> The best Heroku alternatives in 2026, how the landscape has shifted, structural differences between platform categories, and the long-term architectural risks of each choice.</p><p><strong>Who it is for:</strong> Engineering leaders evaluating alternatives to Heroku for production SaaS workloads.</p><p><strong>The short answer:</strong> The right Heroku alternative depends on where your team is. Managed PaaS alternatives work for early-stage teams.
AWS-native Internal Developer Platforms are where scaling, compliance-sensitive SaaS teams are converging in 2026.</p><p><strong>Want to see exactly what a Heroku to AWS migration looks like?</strong> <a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">We have covered it in detail here.</a></p><h2><strong>Why Are Engineering Teams Evaluating Heroku Alternatives?</strong></h2><p>Most teams don&#8217;t look for a Heroku alternative because the platform broke. They look because their requirements outgrew it.</p><p>Costs fragment across dynos and add-ons with no clear optimization lever. Compliance questions arrive, and the infrastructure isn&#8217;t yours to control. Scaling models don&#8217;t match modern, high-concurrency workloads. Observability depends on multiple paid add-ons instead of being native to the platform.</p><p>These constraints don&#8217;t show up on day one. They compound quietly, and by the time they&#8217;re visible, they&#8217;re already shaping architecture decisions.</p><p>That&#8217;s what drives the search for alternatives to Heroku in 2026.</p><h2><strong>What Are the Best Heroku Alternatives in 2026?</strong></h2><p>The best Heroku alternatives for production SaaS teams fall into three categories. Each solves a different problem and suits a different team profile.</p><h3><strong>Managed PaaS Alternatives: Render, Railway, Fly.io</strong></h3><p>The most direct alternatives to Heroku in terms of developer experience. Git-based deployments, managed databases, and familiar workflows make migration fast.</p><p><strong>What they do well:</strong> Fast migration path. Familiar mental model. Lower pricing than Heroku&#8217;s tier-based model. Good for early-stage teams that need to move quickly.</p><p><strong>What they don&#8217;t solve:</strong> Infrastructure still runs on the vendor&#8217;s shared cloud. No VPC ownership.
Compliance requirements that block you on Heroku frequently block you here, too. You are trading one managed dependency for another, with the same infrastructure-disappears-on-exit risk.</p><p><strong>Best for:</strong> Early-stage teams prioritizing migration speed over infrastructure ownership.</p><h3><strong>Open-Source Self-Hosted Alternatives: Coolify, Dokku, CapRover</strong></h3><p>For teams that want full infrastructure control without a platform vendor. These are the most common Heroku open source alternatives and Heroku self-hosted alternatives discussed in the engineering community.</p><p><strong>What they do well:</strong> True infrastructure ownership. Data in your own cloud account. No platform margin on compute or services.</p><p><strong>What they don&#8217;t solve:</strong> Your team owns the full operational burden: provisioning, patching, observability setup, and on-call response for the platform itself. Most open-source alternatives lack production-grade autoscaling and built-in observability out of the box.</p><p><strong>Best for:</strong> Teams with dedicated platform engineering capacity and specific customization requirements.</p><h3><strong>Raw AWS: Amazon ECS as a Heroku Alternative</strong></h3><p>Some engineering teams evaluate Amazon ECS (Elastic Container Service) as a direct path to AWS before reaching for a full Internal Developer Platform. It is worth understanding what ECS provides and where it stops short.</p><p><strong>What ECS does well:</strong> ECS runs Docker containers on AWS infrastructure, with the underlying compute managed by AWS. It handles server provisioning automatically and integrates natively with IAM, VPC, ALB, CloudWatch, and ECR. For teams with a platform engineer who can own the configuration, ECS is a capable and cost-efficient compute layer.</p><p><strong>Where ECS falls short as a Heroku alternative:</strong> ECS is a compute service, not a deployment platform. 
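</p><p>To make that concrete, here is the kind of artifact you maintain by hand on raw ECS before any deployment tooling exists: a task definition. This is a hedged sketch, not a working configuration; every name, ARN, and size below is an illustrative placeholder.</p>

```python
import json

# A minimal Fargate task definition for one web container. On raw ECS,
# documents like this (plus services, target groups, and listener rules)
# are yours to write and keep current. All identifiers are hypothetical.
task_definition = {
    "family": "web-app",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024",
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web-app",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
}

# You would register this with `aws ecs register-task-definition`, then
# build and maintain the deploy pipeline around it yourself.
print(json.dumps(task_definition, indent=2))
```

<p>Heroku replaced all of this with a git push. That is the gap the rest of this section describes.</p><p>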
There is no built-in CI/CD, no self-serve developer experience, and no native observability beyond basic CloudWatch metrics. The git-push workflow that made Heroku valuable does not exist on raw ECS without deliberate platform engineering work to build and maintain it. Environment management (dev, staging, production, with proper isolation) is entirely manual. Preview environments on pull requests must be built from scratch.</p><p><strong>The honest assessment:</strong> ECS is a viable path for teams with a dedicated platform engineer and the capacity to build the developer experience layer on top. For product-focused teams without that capacity, ECS without a platform layer trades Heroku&#8217;s simplicity for significant operational overhead. The infrastructure cost savings are real. The engineering cost to replace what Heroku provided is also real.</p><p>This is precisely the gap that AWS-native Internal Developer Platforms are designed to close.</p><p><strong>Best for:</strong> Teams with a dedicated platform engineer, existing AWS expertise, and the capacity to build and maintain the deployment and observability tooling on top of ECS.</p><h3><strong>AWS-Native Internal Developer Platforms: LocalOps</strong></h3><p>This is where scaling SaaS teams are converging in 2026. An AWS-native IDP provides a Heroku-equivalent developer experience (git-push deployments and self-serve environments) with infrastructure running in your own AWS account.</p><p><strong>What they do well:</strong> Infrastructure ownership without operational overhead. Built-in observability (Prometheus, Loki, Grafana). Horizontal autoscaling by default. Direct AWS pricing, no platform margin. 
No new vendor lock-in, everything runs on standard Kubernetes in your AWS account.</p><p><strong>Best for:</strong> Series A and beyond teams with compliance requirements, enterprise customers, or cost structures where infrastructure ownership is necessary.</p><p>If this feels like a solution to you, take a demo with LocalOps.</p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call</a>.</strong></p><h2><strong>Structural Differences Between First-Generation Alternatives and AWS-Native IDPs</strong></h2><p>This is the question most engineering leaders underestimate when making the migration decision.</p><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/2boVw/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c87b918c-21a2-4219-ac35-f8f99ef11ada_1220x900.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e877043f-9a6e-433d-abaf-8491ee1b5a95_1220x900.png&quot;,&quot;height&quot;:448,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/2boVw/1/" width="730" height="448" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>The structural difference is not just technical. 
It is architectural and strategic.</p><p><a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See how LocalOps compares to Heroku side by side.</a></p><p>First-generation Heroku alternatives (Render, Railway, and Fly.io) improve on Heroku&#8217;s developer experience and pricing. But the fundamental model is the same: your infrastructure runs on someone else&#8217;s cloud. Your compliance posture is bound by what the vendor chooses to support. Your cost efficiency ceiling is lower than that of direct AWS. And your exit path still involves rebuilding infrastructure from scratch.</p><p>AWS-native Internal Developer Platforms change the model entirely. Infrastructure runs in your cloud account. Developer experience is preserved: git push deploys, with no Kubernetes knowledge required. Observability, CI/CD, autoscaling, and secrets management are built in. And if you stop using the platform, your infrastructure keeps running. Nothing needs to be rebuilt.</p><p>This is the structural difference that makes AWS-native IDPs the more viable long-term choice for enterprise-grade SaaS teams in 2026.</p><h2><strong>How Has the Heroku Alternatives Landscape Shifted Today?</strong></h2><p>The 2026 landscape of Heroku alternatives looks significantly different from two years ago.</p><p><strong>What has changed:</strong></p><p>First-generation managed PaaS alternatives have matured. Render, Railway, and Fly.io are more reliable and feature-complete than they were in 2023. For early-stage teams, they are a stronger option than they used to be.</p><p>At the same time, their structural limitations have become more visible. 
Teams that moved to Render or Railway as a first step are now facing, 18 months later, the same compliance and infrastructure-control conversations they had on Heroku, with more accumulated dependencies.</p><p><strong>Where compliance-sensitive teams are landing:</strong></p><p>Enterprise-grade, compliance-sensitive SaaS teams are converging on AWS-native infrastructure with a platform layer on top. The reasons are consistent:</p><ul><li><p>SOC 2 and HIPAA requirements demand infrastructure in your own cloud account</p></li><li><p>Enterprise security questionnaires require VPC configuration, private networking, and IAM audit trails</p></li><li><p>Cost efficiency at scale requires direct AWS pricing, not a PaaS margin</p></li><li><p>Architectural flexibility requires Kubernetes-grade infrastructure, not dyno-based compute</p></li></ul><p>The best Heroku alternatives for this profile in 2026 are platforms that run on AWS infrastructure the team owns, with enough abstraction that developers don&#8217;t need to interact with that infrastructure directly.</p><p><strong>The AWS Heroku alternative conversation:</strong></p><p>AWS as a Heroku alternative is no longer a theoretical discussion. The question engineering teams are asking in 2026 is not whether to move to AWS; it is how to do it without losing the developer experience that made Heroku valuable in the first place. 
AWS-native IDPs are the practical answer to that question.</p><p><a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read how LocalOps makes AWS practical as a Heroku alternative.</a></p><h2><strong>What Is the Engineering Community Recommending in 2026?</strong></h2><p>Across Reddit threads on r/devops, r/rails, and r/node, and Hacker News discussions on platform engineering, consistent patterns emerge from teams that have been through this decision.</p><p><strong>The managed PaaS stepping-stone pattern:</strong></p><blockquote><p><em>&#8220;Moved to Render to get off Heroku quickly. Hit the same compliance walls 18 months later. Now on AWS with a platform layer. Should have gone there first.&#8221;</em></p></blockquote><p><strong>The raw AWS complexity pattern:</strong></p><blockquote><p><em>&#8220;Went straight to EKS. Took four months before a single product engineer could deploy independently. Not the right call for a twelve-person team.&#8221;</em></p></blockquote><p><strong>The Rails hosting question:</strong></p><p>Rails teams consistently surface as the most active in these discussions. The community consensus: any Heroku alternative used for Rails hosting needs to handle Sidekiq workers, Postgres with connection pooling, Active Storage, and Action Cable natively, not through fragile add-on integrations. Platforms that handle these as first-class concerns are consistently recommended over those that treat them as edge cases.</p><p><strong>The free Heroku alternatives question:</strong></p><p>Since Heroku removed its free tier in 2022, the conversation about free Heroku alternatives has shifted. The community recommendation in 2026: rather than looking for a free managed PaaS, use AWS&#8217;s own free tier allowances through a platform that runs on your AWS account directly. 
The combined cost is lower, and the infrastructure is yours.</p><p><strong>The community consensus:</strong></p><p>Managed PaaS alternatives are a transitional step, not a destination. Teams that move to infrastructure they own, on their own cloud account, running standard Kubernetes, make the migration once. Teams that move to another managed PaaS frequently make it twice.</p><p><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">See what the migration looks like for your specific stack.</a></p><h2><strong>Long-Term Architectural Risks of Managed PaaS vs AWS-Native IDPs</strong></h2><p>This is the question most engineering leaders underweight when making the migration decision. The risks of managed PaaS alternatives are not visible at the moment of choosing. They become visible 12&#8211;18 months later.</p><p><strong>Risk 1: Compliance ceiling</strong></p><p>Every managed PaaS platform has a compliance ceiling defined by what the vendor chooses to support. Your SOC 2 posture, HIPAA controls, and GDPR data residency are all bound by the vendor&#8217;s infrastructure decisions. AWS-native IDPs running in your own account have no such ceiling; the compliance surface is AWS, which holds the relevant certifications.</p><p><strong>Risk 2: Recreated vendor lock-in</strong></p><p>Moving from Heroku to Render or Railway solves the immediate cost problem. It recreates the structural lock-in problem. Your infrastructure still lives in someone else&#8217;s cloud. Leaving still requires rebuilding. The exit path is still expensive. With an AWS-native IDP like LocalOps, every resource provisioned lives in your AWS account. Your infrastructure continues running the moment you stop using the platform. There is nothing to rebuild.</p><p><strong>Risk 3: Cost ceiling at scale</strong></p><p>Managed PaaS platforms reduce cost versus Heroku. They do not eliminate the platform margin. 
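</p><p>A rough, purely illustrative sketch of what a percentage margin does to a growing bill. The 30% markup and the per-service AWS cost below are hypothetical numbers, not any vendor&#8217;s actual pricing:</p>

```python
# Hypothetical figures: a 30% platform markup over raw AWS spend, and an
# average of $400/month of underlying AWS cost per service.
PAAS_MARKUP = 0.30
AWS_COST_PER_SERVICE = 400

def monthly_cost(services, markup=0.0):
    """Raw infrastructure spend, plus an optional platform margin."""
    return services * AWS_COST_PER_SERVICE * (1 + markup)

for services in (5, 20, 50):
    direct = monthly_cost(services)              # direct AWS pricing
    paas = monthly_cost(services, PAAS_MARKUP)   # same stack behind a PaaS
    print(f"{services:>2} services: ${direct:,.0f}/mo direct "
          f"vs ${paas:,.0f}/mo on a PaaS (+${paas - direct:,.0f})")
```

<p>The exact figures will differ for every team; the shape of the curve is the point.</p><p>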
As your application grows (more services, more databases, more traffic), the margin compounds. AWS-native infrastructure has no margin on compute or managed services. The cost difference becomes significant at scale and widens with every service you add.</p><p><strong>Risk 4: The migration you make twice</strong></p><p>Teams that choose a managed PaaS alternative frequently find themselves making the same infrastructure migration decision 18&#8211;24 months later, this time under more pressure, with more complexity, and more accumulated dependencies. The teams that move to infrastructure ownership early make the migration once, under conditions they control, with time to do it properly.</p><p><strong>Risk 5: Architecture constraints</strong></p><p>Managed PaaS platforms make infrastructure decisions on your behalf. These constraints are acceptable early on. They become limiting as architecture evolves toward microservices, event-driven systems, and complex inter-service communication. AWS-native IDPs inherit AWS&#8217;s full architectural flexibility; any pattern that runs on Kubernetes is available to your team.</p><p><a href="https://localops.co/migrate-heroku-to-aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Understand the full architecture difference between Heroku and LocalOps.</a></p><h2><strong>How LocalOps Fits In</strong></h2><p>LocalOps is an AWS-native Internal Developer Platform built for teams replacing Heroku.</p><p>Connect your AWS account. Connect your GitHub repository. LocalOps provisions a dedicated VPC, EKS cluster, load balancers, IAM roles, and a complete observability stack (Prometheus, Loki, and Grafana) automatically. No Terraform. No Helm charts. No manual configuration. First environment ready in under 30 minutes.</p><p>From that point, the developer experience is identical to Heroku. Push to your configured branch. LocalOps builds, containerizes, and deploys to AWS automatically. 
Logs and metrics are available from day one. Autoscaling and auto-healing run by default.</p><p>The infrastructure lives in your AWS account. If you stop using LocalOps, it keeps running. Nothing needs to be rebuilt.</p><blockquote><p><em>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10&#8211;12 man months of effort &#8212; all of which LocalOps has saved for us.&#8221;</em> <strong>&#8212; Gaurav Verma, CTO, SuprSend.</strong></p><p><em>&#8220;We saved months of DevOps effort by using LocalOps.&#8221;</em> <strong>&#8212; Shobit Gupta, Ex-Uber, CTO, Segwise.</strong></p></blockquote><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku">Read the Migration Guide</a>.</strong></p><h2><strong>Frequently Asked Questions</strong></h2><ol><li><p><strong>Is LocalOps a true Heroku replacement?</strong></p></li></ol><blockquote><p>If you mean feature-for-feature parity with Heroku&#8217;s add-on marketplace, no platform is a perfect drop-in replacement. If you mean a platform that delivers the same operational simplicity, git-based deployments, managed environments, and automatic scaling, while fixing Heroku&#8217;s core limitations on cost, ownership, and lock-in, then yes. LocalOps is a direct alternative to Heroku for most production workloads.</p></blockquote><ol start="2"><li><p><strong>Do we need AWS expertise or a DevOps engineer to use LocalOps?</strong></p></li></ol><blockquote><p>No. Infrastructure provisioning, security configuration, networking, IAM policy management, and autoscaling are all handled by LocalOps automatically. Your developers interact with a clean deployment interface. 
The AWS complexity is abstracted entirely, but your AWS account is always fully accessible to your team.</p></blockquote><ol start="3"><li><p><strong>What happens to our infrastructure if we stop using LocalOps?</strong></p></li></ol><blockquote><p>Your AWS infrastructure continues running without interruption. Everything LocalOps provisions lives inside your own AWS account; nothing depends on LocalOps&#8217;s systems to stay operational. Your EKS clusters, RDS databases, load balancers, and VPC remain fully functional. Unlike Heroku, leaving does not mean rebuilding from scratch.</p></blockquote><ol start="4"><li><p><strong>Do I need to write Terraform or automation scripts to use LocalOps?</strong></p></li></ol><blockquote><p>No. LocalOps handles all infrastructure provisioning through pre-built, hardened templates. You connect your AWS account, connect your repository, and LocalOps provisions your full AWS stack automatically. No Terraform files. No Helm charts. No custom automation scripts.</p></blockquote><ol start="5"><li><p><strong>If LocalOps goes down, will my applications go down?</strong></p></li></ol><blockquote><p>No. Your applications run on AWS infrastructure inside your own AWS account, not on LocalOps&#8217;s servers. Once your EKS cluster is running, it operates independently of LocalOps. Your applications depend on AWS uptime, not LocalOps uptime. On Heroku, a platform outage means your applications go down. On LocalOps + AWS, your infrastructure runs independently of the platform that manages it.</p></blockquote><h2><strong>Key Takeaways</strong></h2><p>The Heroku alternatives landscape in 2026 has matured into three clear categories with distinct tradeoffs. Managed PaaS alternatives are the fastest migration path, but recreate vendor dependency. Open-source self-hosted alternatives give full control at a high operational cost. 
AWS-native Internal Developer Platforms combine infrastructure ownership with developer simplicity and no new vendor lock-in.</p><p>The teams that make this decision well are the ones who evaluate it before the constraints become a crisis. Not when the Heroku bill becomes a board conversation. Not when a compliance audit flags the shared infrastructure. Before those moments.</p><p><strong>Ready to see what your stack looks like on LocalOps?</strong></p><p><strong><a href="https://go.localops.co/heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Schedule a Migration Call &#8594;</a></strong></p><p><strong><a href="https://console.localops.co/signup?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Get Started for Free &#8594;</a></strong></p><p><strong><a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=heroku_alternatives_2026">Read the Migration Guide &#8594;</a></strong></p>]]></content:encoded></item><item><title><![CDATA[How Much Does It Cost to Build an Internal Developer Platform In-House vs Buying One?]]></title><description><![CDATA[A practical breakdown of real costs, hidden trade-offs, and opportunity cost CTOs should consider before building or buying an IDP]]></description><link>https://blog.localops.co/p/internal-developer-platform-build-vs-buy-cost-comparison</link><guid isPermaLink="false">https://blog.localops.co/p/internal-developer-platform-build-vs-buy-cost-comparison</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Mon, 23 Mar 2026 06:16:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6-U4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a 
class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6-U4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6-U4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6-U4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3564242,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191827325?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6-U4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!6-U4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb8c7f1d-108f-40fd-be32-01afe8bf3ee0_2400x1345.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Building an internal developer platform sounds like a straightforward engineering investment. It rarely is.</p><p>Most teams that attempt it budget for 2-3 engineers and 4-6 months. What they get is a multi-year platform program that pulls senior DevOps engineers off other work, generates its own internal support queue, and still isn&#8217;t fully adopted 18 months later.</p><p>This blog breaks down what an internal developer platform actually costs to build, what buying one looks like in real numbers, and where the decision genuinely tips one way or the other. 
If you are in the middle of evaluating the best internal developer platforms against a build decision, the framework in this post should give you a clearer picture of where the real costs sit.</p><h2>TL;DR</h2><ul><li><p>Building an IDP in-house costs significantly more than most engineering teams budget for, in time, headcount, and opportunity cost</p></li><li><p>The hidden expense isn&#8217;t the build. It&#8217;s the maintenance, the adoption work, and the BYOC layer nobody scopes for</p></li><li><p>Open source isn&#8217;t free to run. Backstage is the most common example of this</p></li><li><p>Buying a commercial platform trades control for speed, with real lock-in tradeoffs worth understanding</p></li><li><p>The right answer depends on your org size, delivery model, and whether platform engineering is your core business or just something you need to support it</p></li></ul><h2>What Is an Internal Developer Platform (IDP)?</h2><p>An internal developer platform (IDP) is a self-service layer built by platform engineering teams that enables developers to provision environments, deploy services, and manage infrastructure without relying on manual processes or ticket-based workflows.</p><p>It sits between infrastructure and application teams, abstracting underlying complexity such as cloud resources, Kubernetes clusters, CI/CD pipelines, secrets, and observability, and exposing them through standardised workflows developers can use directly.</p><p>This is different from an internal developer portal, which is typically a UI layer for discoverability covering service catalogs, documentation, and API registries. A portal is part of a platform. A platform is the full system underneath. Many teams build a portal and think they have a platform. 
They do not.</p><p>If you want to go deeper on what an IDP actually involves, we have covered it in detail in <a href="https://blog.localops.co/p/what-is-an-internal-developer-platform?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">this guide</a>.</p><h2>What Does Internal Developer Platform Architecture Actually Include?</h2><p>This is where most build estimates go wrong.</p><p>Teams scope for a deployment tool and discover they are building something much larger. Here is what a production-grade internal developer platform actually needs:</p><h4>Infrastructure orchestration</h4><p>VPC design, subnet layout, cluster provisioning, IAM policies, storage, and networking across one or more clouds. Not a one-time setup. It needs to be repeatable, auditable, and version-controlled.</p><h4>Control plane vs. data plane separation</h4><p>The control plane manages desired state, policies, and orchestration logic. The data plane handles actual workload execution. Conflating the two is one of the most common architectural mistakes in early IDP builds. It creates systems that are hard to scale, hard to debug, and impossible to hand off.</p><h4>Environment lifecycle orchestration</h4><p>Not just provisioning. Creation, promotion, teardown, drift detection, and state reconciliation across dev, staging, production, and customer environments. Most teams underscope this until they are managing 20+ environments manually.</p><h4>Secrets management</h4><p>Distinct from RBAC. Covers secret injection at runtime, rotation policies, per-environment secret scoping, and integration with Vault, AWS Secrets Manager, or GCP Secret Manager. Self-built IDPs frequently have security gaps here: secrets hardcoded in CI pipelines, shared across environments, rotated manually. 
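</p><p>The contrast is easy to show in code. Below is a minimal sketch of the runtime-injection pattern, using environment variables to stand in for a real backend such as Vault or AWS Secrets Manager; every name and value is hypothetical:</p>

```python
import os

# Anti-pattern: a secret baked into the codebase or CI pipeline. It is the
# same in every environment, visible in git history, and rotated by hand.
DATABASE_URL = "postgres://app:s3cret@db.internal/app"  # hypothetical value

# Runtime injection: the platform scopes the secret per environment and can
# rotate it without a code change. The application only knows the name.
def database_url(env: str) -> str:
    value = os.environ.get(f"{env.upper()}_DATABASE_URL")
    if value is None:
        raise RuntimeError(f"secret {env.upper()}_DATABASE_URL was not injected")
    return value

# Simulate the platform injecting a staging-scoped secret at deploy time.
os.environ["STAGING_DATABASE_URL"] = "postgres://app:other@staging-db/app"
print(database_url("staging"))
```

<p>Per-environment scoping and rotation policies build on that same pattern.</p><p>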
This is where audits get uncomfortable.</p><h4>Deployment abstraction layer </h4><p>Whether you are targeting Kubernetes, ECS, Nomad, or bare metal, the IDP needs a layer that normalises deployment primitives so developers do not need to know what is underneath. Harder to build correctly than it looks. Needs to stay current as infrastructure evolves.</p><h4>Golden paths and CI/CD </h4><p>Service scaffolding, GitOps workflows, security baselines, and guardrails. Not optional configurations. Default behavior.</p><h4>Self-service workflows</h4><p> Environment provisioning, dependency management, and service creation without tickets or manual intervention.</p><h4>RBAC and governance</h4><p>Fine-grained access control, audit trails, and policy enforcement. Required by enterprise customers and auditors.</p><h4>Observability layer </h4><p>Per-environment logs, metrics, and traces, pre-integrated with deployed services. Running Prometheus, Loki, and Grafana yourself adds operational overhead that compounds over time.</p><h4>BYOC and self-hosted delivery </h4><p>Private Helm chart generation, license token enforcement, and customer cloud provisioning. A product capability, not just an infrastructure concern. This is where most self-built IDPs either stall or never start.</p><p>A CTO reading this list should be asking one question: which of these do we already have, which do we need to build, and which could a vendor replace? 
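</p><p>One way to run that exercise is to literally enumerate the layers and classify each one. The verdicts below are placeholder examples; the value is in filling them in for your own org:</p>

```python
# Build-vs-buy worksheet over the platform layers listed above.
# Every classification here is a hypothetical example answer.
capabilities = {
    "infrastructure orchestration": "buy",
    "control plane / data plane separation": "buy",
    "environment lifecycle": "buy",
    "secrets management": "have",        # e.g. already standardised on Vault
    "deployment abstraction": "buy",
    "golden paths and CI/CD": "build",   # org-specific by nature
    "self-service workflows": "buy",
    "RBAC and governance": "buy",
    "observability layer": "buy",
    "BYOC delivery": "buy",
}

to_build = [name for name, verdict in capabilities.items() if verdict == "build"]
print(f"{len(to_build)} of {len(capabilities)} layers left to build in-house:", to_build)
```

<p>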
That is the actual evaluation.</p><p>If you want to see how LocalOps handles each of these layers out of the box, <a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">the docs are a good place to start.</a></p><h2>What CTOs Are Actually Signing Up For When They Decide to Build an IDP In-House</h2><p>The first planning document usually says: 2-3 engineers, 6 months, MVP by Q3.</p><p>Here is what actually happens.</p><h4>How your team structure changes</h4><p>A side project becomes a standing platform team. Once internal teams depend on the platform, you cannot wind it down. You now have a product, with internal customers, a backlog, and an on-call rotation.</p><h4>The kind of talent you actually need</h4><p>A serious IDP requires staff-level engineers with deep Kubernetes knowledge, cloud networking experience, and a security engineering background, plus a platform PM to manage internal stakeholder requests. These roles are expensive. They are also hard to retain. Platform engineers who build good IDPs get recruited aggressively.</p><h4>What gets delayed on your roadmap</h4><p>The engineers building your IDP are typically your best engineers. They are not building product features for 12-18 months. That is the real cost most teams miss entirely.</p><h4>The internal overhead you take on</h4><p>Once the platform launches, it generates support tickets, onboarding requests, documentation gaps, and feature requests from every team using it. Practitioner data from Puppet State of DevOps shows this work consumes roughly half of platform team capacity after launch.</p><h4>The extra layer BYOC adds</h4><p>If your enterprise sales motion requires BYOC or self-hosted options, you are not building one platform. 
You are building two programs simultaneously: the internal IDP and the customer-facing delivery layer on top of it.</p><p>That second layer introduces its own requirements such as per-customer provisioning, versioned deployments, secure distribution, and upgrades outside your control. This significantly increases operational complexity.</p><h2>The Real Cost of Building an IDP In-House</h2><h4>How many engineers does it take?</h4><p>Across platform engineering and internal developer platform research, practitioner guidance from platformengineering.org puts the minimum at 3-5 engineers for sub-100-developer orgs, scaling to 5-10+ for larger organizations. These are not junior hires.</p><p>An <a href="https://roadie.io/blog/the-true-cost-of-self-hosting-backstage">independent analysis of a Backstage-based portal</a> for 300 developers estimated 7 full-time engineers (FTEs) for the first 12 months to reach an initial production portal, followed by 6 FTEs ongoing. Total over 3 years, including infrastructure: <strong>approximately $3.25M</strong>. That figure accounts for fully-loaded salaries, not base pay.</p><p>Separate estimates put ongoing Backstage maintenance at roughly $150,000 per year per 20 developers once the portal becomes central to delivery. Multiple organizations report needing between 3 and 15 FTEs just to maintain Backstage long-term, based on Backstage community reports and independent analyses.</p><h4>How long does it take?</h4><p>The honest answer: 12-18 months to a usable platform. Longer for full adoption.</p><p>Optimistic estimates of 8-16 weeks exist. These describe an MVP, not a production-ready system. A first slice of golden paths and a basic service catalog is not the same as a platform your entire engineering org depends on.</p><p>DORA research and platform engineering practitioner surveys consistently report 12-18 months as the realistic minimum.
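</p><p>The payroll side of that timeline can be sketched from the figures in this article: 3-5 FTEs, $130K-$180K fully-loaded salaries, and 20% overhead for tooling and infrastructure. The function below is illustrative arithmetic, not data from any survey:</p>

```python
# Rough build-cost model using the figures in this article: 3-5 FTEs,
# $130K-$180K fully-loaded salaries, 20% overhead for tooling and
# infrastructure, 12-18 months to a usable platform. Illustrative only.
def build_cost(ftes: int, salary: int, months: int, overhead: float = 0.20) -> int:
    """Fully-loaded payroll for the build phase, in USD."""
    return round(ftes * salary * (months / 12) * (1 + overhead))

low = build_cost(ftes=3, salary=130_000, months=12)   # -> 468,000
high = build_cost(ftes=5, salary=180_000, months=18)  # -> 1,620,000
print(f"Build phase payroll: ${low:,} to ${high:,}")
```

<p>Run it against your own numbers; the point is that the spread is wide before opportunity cost is even counted.</p><p>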
Some teams report 3+ years to reach adoption levels that justify the investment. During that entire period, the platform team is on payroll and senior engineers are pulled from product work.</p><h4>What does maintenance actually cost?</h4><p>This is where the real ongoing cost lives.</p><p>Kubernetes releases new versions regularly. Cloud providers deprecate APIs. The Backstage internal developer platform alone requires ongoing plugin maintenance, version tracking, and security updates that compound over time. Security baselines evolve. Every one of these generates platform team work that does not stop.</p><p><a href="https://www.puppet.com/resources/state-of-devops-report">Puppet State of DevOps data</a> shows 60-80% of platform team capacity goes to maintenance after launch, keeping existing functionality working rather than building new capabilities. The observability stack alone, if self-managed, can consume several SRE-months per year.</p><h4>Why BYOC Adds Significant Cost</h4><p>Most IDP cost analyses stop at the internal platform. For B2B SaaS teams, that is the wrong place to stop.</p><p>Enterprise customers increasingly require dedicated single-tenant environments, BYOC deployments into their own cloud account, or fully self-hosted installations with no dependency on your infrastructure.</p><p>Building BYOC support requires:</p><ul><li><p>Private Helm chart generation, signing, versioning, and hosting</p></li><li><p>License token enforcement for self-hosted installs</p></li><li><p>Per-customer environment templates across AWS, GCP, and Azure</p></li><li><p>Upgrade workflows customers can run without access to your internal systems</p></li></ul><p>This is not an extension of your internal IDP. 
It is a separate engineering program that typically runs 2-4 additional quarters on top of the base platform build.</p><p>SuprSend, a notification infrastructure company, documented saving 12-15 man-months by using LocalOps for their BYOC distribution pipeline instead of building it in-house. That figure is consistent with what the component breakdown above suggests.</p><p><a href="https://localops.co/case-study/suprsend-unlocks-enterprise-revenue-byoc?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Read how SuprSend did it.</a></p><h2>What Buying a Commercial IDP Actually Costs</h2><p>Commercial IDPs are typically priced on some combination of users, environments, and consumption. The cost structure looks very different from an in-house build.</p><h4>Common pricing models:</h4><ul><li><p>Per-seat fees covering platform access and build minutes</p></li><li><p>Per-environment fees for provisioned infrastructure environments</p></li><li><p>Consumption-based fees for compute, storage, and egress in some models</p></li></ul><h4>The real tradeoffs of buying:</h4><ul><li><p>Vendor lock-in is real. Migrating off a platform once your deployment workflows depend on it is non-trivial</p></li><li><p>Roadmap dependency: features you need may not be on the vendor&#8217;s roadmap</p></li><li><p>Feature constraints: opinionated platforms make certain architectural decisions for you</p></li><li><p>Support quality varies significantly between vendors and pricing tiers</p></li></ul><p>These tradeoffs are worth taking seriously. But for most B2B SaaS teams under 100 engineers, they are significantly smaller problems than a failed in-house build.</p><p>If you&#8217;re thinking through these tradeoffs, <a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">book a demo</a> and let&#8217;s talk. 
The LocalOps team is happy to help you figure out what actually makes sense for your setup and team.</p><h4>On Backstage specifically:</h4><p>Backstage is the most widely adopted open source internal developer platform. It is free to use. It is not free to run. The $3.25M TCO figure cited earlier comes entirely from the engineering cost of operating Backstage at scale, not from licensing. That distinction matters when teams evaluate it as a &#8220;<strong>free</strong>&#8220; option.</p><h4>On infrastructure cost model:</h4><p>For teams running on cloud accounts like AWS, an internal developer platform that provisions directly into your AWS account rather than sitting on top of a PaaS layer changes the cost model significantly. You pay AWS directly, startup credits apply, and you avoid markup on infrastructure you do not control. The same applies to GCP and Azure. The question is not just what the platform costs but where the infrastructure bill actually lands.</p><h2>Build vs. Buy: 3-Year TCO Comparison</h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/kg9Vu/1/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38ddc735-1cd4-4061-86c1-65ab4082f192_1220x854.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/61c876f1-42f9-4448-bf0f-838cf863cf60_1220x854.png&quot;,&quot;height&quot;:425,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/kg9Vu/1/" width="730" height="425" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in 
e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p>Salary assumptions use $130K-$180K fully-loaded, US market baseline. Numbers will vary by geography and org size. All figures sourced from published platform engineering analyses and vendor documentation.</p><h2>Why So Many In-House IDP Builds Fail</h2><p>A significant share of internal platform initiatives fail to reach the adoption levels that justify the investment. Gartner research on platform engineering and DORA reports on DevOps transformation consistently surface this. The failure rate is not marginal.</p><h4>Scope underestimation</h4><p>Teams often start by scoping a developer portal, but the requirements expand into a full platform. That gap can add 12+ months of work and significant engineering cost.</p><h4>Losing your core engineer </h4><p>Platform teams built around one or two staff engineers are inherently fragile. When those engineers leave, so does most of the system&#8217;s context. What remains is a partially documented platform that nobody else fully understands or feels safe changing.</p><h4>Adoption failure</h4><p>Building the platform is not the hardest part. Getting hundreds of engineers to change how they build and deploy software is.</p><p>Adoption breaks down when the platform does not make the default path easier than what teams already have. Gaps in documentation, missing golden paths, and a poor developer experience will stall adoption, even if the underlying system is technically sound.</p><h4>Waiting increases the cost of change</h4><p>Teams that stall at month 14 rarely make a clean decision to stop. They keep investing, hoping adoption improves. When they eventually evaluate commercial platforms, they do it with less leverage, more urgency, and a partially-built internal system they now need to migrate away from.</p><p>Vendor-side risks are real too. 
Lock-in is not hypothetical. Migration paths from commercial platforms vary in quality. Support at lower pricing tiers is often inadequate for production incidents.</p><h2>Real Scenarios: What This Decision Looks Like in Practice</h2><h4>Sub-50 engineer team needing BYOC to close enterprise deals</h4><p>At this team size, engineering capacity is the constraint for everything.</p><p>There are usually one or two engineers who understand Kubernetes and cloud infrastructure at the level required to build a serious IDP. Those same engineers are carrying product infrastructure responsibilities at the same time. They are not waiting for a platform project.</p><p>When a team this size decides to build an IDP, what typically happens is this: the platform work starts, the product infrastructure gets less attention, and both move slower than planned. Six months in, the IDP is partially built, the product has accumulated infrastructure debt, and the engineers who started the project are stretched across both.</p><p>For most sub-50 teams, the question is not whether an IDP would be useful. It clearly would. The question is whether building one from scratch is the best use of the engineering capacity available.</p><h4>Growing Teams Trying to Standardise Deployments</h4><p>This is where most IDP conversations start.</p><p>The team has grown from 15 to 50 engineers over 18 months. Three teams use slightly different CI setups. Environment configs live across Terraform files, hand-edited YAML, and a Notion doc someone wrote in 2022 that may or may not still be accurate.</p><p>Onboarding a new engineer takes two weeks just to understand how to get something into production. Senior engineers spend meaningful time every week answering questions that should have a documented answer somewhere.</p><p>The instinct is right. You need a platform.</p><p>The question is whether building one from scratch is the fastest path to fixing the problem.</p><p>In most cases at this stage, it is not. 
A commercial platform gets you standardised golden paths, self-service environments, and consistent CI/CD in days or weeks. Building in-house takes 12-18 months to deliver a robust, adopted platform. Not a thin MVP. The system your entire engineering org actually depends on.</p><p>By the time an in-house build is stable enough to rely on, the team has usually grown again and the requirements have already shifted.</p><p>LocalOps was built specifically for teams at this stage. You can <a href="https://localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">try it for free</a> or <a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">book a demo</a> to see how it fits your setup.</p><h2>When Building In-House Actually Makes Sense</h2><p><strong>You have 150+ engineers and can staff a permanent platform team.</strong> Below 150 engineers, platform engineering competes directly with product engineering for the same people. If you cannot commit to staffing a dedicated team of 8-12 engineers permanently, the build decision will cost you more than it saves.</p><p><strong>Regulatory or security constraints genuinely rule out external control planes.</strong> FedRAMP High boundaries, classified infrastructure, strict data sovereignty mandates where no third-party control plane is acceptable. Most compliance requirements that fall short of full air-gap are satisfied by commercial platforms with self-hosted control plane options.</p><p><strong>You have the specific talent and can retain it.</strong> Building a serious IDP requires engineers who already understand Kubernetes internals, multi-tenancy patterns, and secret management at scale. If one of those engineers leaves, the institutional knowledge goes with them.</p><p><strong>Even then, build the right layers.</strong> Own the abstraction layer: your tenancy model, deployment abstractions, and domain-specific golden paths.
Buy the infrastructure plumbing underneath. Environment provisioning, observability wiring, and BYOC distribution are solved problems. Building them creates a maintenance surface, not competitive advantage.</p><h2>How to Run This Evaluation: A 5-Step Framework</h2><p>This works regardless of what you decide.</p><h4>Step 1: Estimate FTE requirements conservatively</h4><p>Use the ranges above: 3-5 full-time engineers minimum for sub-100-developer orgs. Apply fully-loaded salary costs, not base salary. Add 20% for tooling, infrastructure, and overhead.</p><h4>Step 2: Model time-to-value realistically</h4><p>12-18 months to a usable platform. Map that against your current roadmap. Which features get delayed? Which enterprise deals require capabilities you will not have for 12 months? Quantify that as a cost.</p><h4>Step 3: Map mandatory capabilities against vendor coverage</h4><p>Take the component list from the production-grade IDP section above. Mark what a vendor covers out of the box. Mark what you would still need to build. The delta is your actual build scope.</p><h4>Step 4: Compare fully-loaded 3-year TCO</h4><p>Salaries plus infrastructure plus opportunity cost for in-house. Subscription fees plus infrastructure for commercial platforms. Use real vendor pricing. Model the ramp: you will not be at full environment count on day one.</p><h4>Step 5: Stress-test the failure scenario</h4><p>What happens if your in-house build stalls at month 14? What is the rollback path? What does evaluating vendors under time pressure actually cost? If you cannot answer this, your risk model is incomplete.</p><h2>Frequently Asked Questions</h2><p><strong>1.
How long does it take to build an internal developer platform in-house?</strong></p><p>For most organizations, 12&#8211;18 months is the realistic minimum to reach a usable platform, based on DORA research and platform engineering practitioner data.</p><p>That timeline gets you a system stable enough for early internal use, not full adoption. Rolling it out across an entire engineering org typically takes longer, with some teams reporting 2&#8211;3+ years before the platform fully delivers value.</p><p>Shorter timelines like 8&#8211;16 weeks usually refer to an MVP, not a production-ready platform your entire engineering org depends on.</p><p><strong>2. Internal developer portal vs platform: which is better for a growing SaaS team?</strong></p><p>They solve different problems so the comparison is not really either/or.</p><p>A portal gives your team a place to find services, read documentation, and understand what exists. A platform is what actually provisions environments, manages deployments, handles secrets, and wires up observability. One is a UI. The other is the operational system underneath it.</p><p>For a growing SaaS team, the platform layer is what unblocks engineering velocity. The portal becomes useful once you have enough services and teams that discoverability is a real problem. Most teams under 50 engineers need the platform first. The portal can come later.</p><p><strong>3. What is the difference between an open source internal developer platform, a managed platform, and building your own?</strong></p><p>Open source platforms like Backstage give you the codebase for free. You still need engineers to deploy, maintain, and integrate it with your infrastructure. The license costs nothing. Running it at scale does.</p><p>A managed commercial platform handles the infrastructure layer, provisioning, observability, and in some cases BYOC distribution for you. 
You pay a subscription and trade some control for faster time to value and a lower maintenance burden.</p><p>Building your own means writing everything from scratch: provisioning logic, deployment abstractions, secrets management, observability integration, and golden paths. You own every layer and maintain every layer. This rarely makes sense below 150 engineers unless your requirements are specific enough that no existing option accommodates them.</p><p><strong>4. Can a small engineering team realistically build and maintain their own IDP?</strong></p><p>Technically yes. Practically, it is a difficult trade.</p><p>A team of 30-50 engineers typically has one or two people with the depth required to build a serious IDP. Pulling them onto platform work for 12-18 months has a direct product cost. Those same engineers are usually also carrying core infrastructure responsibilities alongside product work.</p><p>Most teams at this scale are better served by a commercial platform until they grow past the point where a dedicated platform org makes economic sense. The build conversation becomes more defensible around 150+ engineers with a permanent platform team.</p><p><strong>5. What does it cost to maintain a homegrown IDP?</strong></p><p>More than most teams budget for. Puppet State of DevOps data shows 60-80% of platform team capacity goes to maintenance after launch, not new features.</p><p>Kubernetes version upgrades, cloud API deprecations, security baseline changes, observability stack management, and internal developer support all generate ongoing work that does not stop. 
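</p><p>Those line items are easier to reason about as numbers. A back-of-envelope model that scales the estimate cited in this article, roughly $150,000 per year per 20 developers, with team size; the linear scaling is an assumption for illustration, not something the underlying analyses claim:</p>

```python
# Back-of-envelope maintenance model. The only sourced figure is the
# ~$150,000/year per 20 developers cited in this article; scaling it
# linearly with team size is an assumption for illustration.
MAINTENANCE_PER_20_DEVS = 150_000  # USD per year

def annual_maintenance(developers: int) -> int:
    """Scale the per-20-developer estimate linearly to a given team size."""
    return round(developers / 20 * MAINTENANCE_PER_20_DEVS)

for team in (40, 100, 300):
    print(f"{team} developers -> ~${annual_maintenance(team):,}/year")
```

<p>Even at the low end, that is a standing budget line, not a one-off project cost.</p><p>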
The observability stack alone, if self-managed, can consume several SRE-months per year.</p><p>For a mid-size org running a Backstage-based platform, independent analyses estimate roughly $150,000 per year per 20 developers in ongoing maintenance costs once the platform becomes central to delivery.</p><h2>Conclusion</h2><p>For most teams, building an internal developer platform is not a question of technical feasibility. It is a question of cost, time, and focus.</p><p>In-house platforms make sense for a narrow set of organisations with the scale, constraints, and long-term commitment to support them. Everyone else is trading months of engineering time and significant opportunity cost for something that does not directly move the product forward.</p><p>Buying is often the more practical choice. You get what you need without taking on the maintenance.</p><p>The real decision is not build vs buy in the abstract. It is whether owning this layer is core to your business, or whether it is infrastructure you need to get out of the way.</p><p>Choose based on that, not on instinct.</p><p>If your goal is to standardise environments and ship faster without building and maintaining an internal platform, you can<a href="https://localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> try LocalOps for free</a> or <a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">book a demo</a> to see how it fits your workflow.</p>]]></content:encoded></item><item><title><![CDATA[How to Standardize Dev, Staging, and Production Environments with an Internal Developer Platform]]></title><description><![CDATA[From manual setup to repeatable environment workflows]]></description><link>https://blog.localops.co/p/standardize-dev-staging-prod-internal-developer-platform</link><guid 
isPermaLink="false">https://blog.localops.co/p/standardize-dev-staging-prod-internal-developer-platform</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Fri, 20 Mar 2026 12:22:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IIr8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IIr8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IIr8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!IIr8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/565dd961-535f-4394-8468-03557448cb2b_2400x1345.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5069489,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191566496?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IIr8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!IIr8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F565dd961-535f-4394-8468-03557448cb2b_2400x1345.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Environment drift is not a discipline problem. It is an infrastructure problem. When dev, staging, and production are configured differently, deployments break in ways that are hard to debug.</p><blockquote><p>An internal developer platform (IDP) fixes this by making environment definitions the single source of truth, not the engineers configuring them manually.</p></blockquote><h2><strong>TL;DR</strong></h2><ul><li><p>Environments drift when engineers build them by hand instead of from a shared blueprint.
An internal developer platform fixes this by treating that blueprint as the source of truth across every environment.</p></li><li><p>True environment parity means enforcing identical structure, networking, and deployment behavior, not identical compute size</p></li><li><p>Shared staging breaks as teams grow. Per-PR ephemeral environments give every developer isolated, on-demand environments</p></li><li><p>Secrets scoped per environment and encrypted in your own cloud account eliminate the .env file drift behind many silent production failures</p></li><li><p>Day-2 operations, including monitoring, drift detection, and auto-healing, need to be built into every environment from day one</p></li></ul><h2><strong>What Is an Internal Developer Platform?</strong></h2><p>An internal developer platform is a self-service layer between your developers and cloud infrastructure. It handles environment provisioning, CI/CD pipelines, secrets management, observability, and access control.</p><p>The platform owns the operational layer. Developers stop worrying about Kubernetes configs, Terraform scripts, and cloud account setup. They push code and the platform handles the rest.</p><p>A well-built IDP does not just automate deployments. It enforces consistency across every environment, every team, and every cloud account your organization runs.</p><p>If you want a deeper look at what an internal developer platform is and how it works, <a href="https://blog.localops.co/p/what-is-an-internal-developer-platform?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">we have covered it in detail here</a>.</p><h2><strong>Why Environment Standardization Fails Without an IDP</strong></h2><p>Most teams do not set out to create inconsistent environments. The inconsistency accumulates.</p><h4><strong>Manual Provisioning Creates Snowflake Environments</strong></h4><p>One engineer sets up staging. Another sets up production.
Each makes slightly different decisions about instance types, security groups, and database configs. Six months later, none of them match.</p><p>This is the default state for teams without an internal developer platform enforcing consistency. The environment definition exists only in someone&#8217;s memory or a stale wiki page.</p><h4><strong>Shared Staging Serializes Your Team</strong></h4><p>When five developers work against one staging environment, a broken commit from any one of them stops everyone. Deployments queue. Work stops while someone tracks down why staging is returning 502s.</p><p>This is a structural problem. It gets worse with every engineer you add.</p><h4><strong>Spinning Up a New Environment Takes Weeks</strong></h4><p>Without an IDP, a new environment means writing Terraform, configuring a Kubernetes cluster, setting up VPCs, wiring monitoring, and manually configuring secrets. The realistic timeline is two to four weeks. The resulting environment is still slightly different from production in ways that will matter later.<br><br>This kind of fragmentation is exactly why developers lose 6 to 15 hours every week to tool sprawl and context switching, according to the <a href="https://www.port.io/state-of-internal-developer-portals">2025 State of Internal Developer Portals report by Port</a>.</p><h4><strong>The DevOps Team Becomes the Gatekeeper</strong></h4><p>Every environment request goes through DevOps. The team, already stretched, becomes the bottleneck every request waits on.</p><p>This is where platform engineering and internal developer platforms directly impact delivery speed. Without automated provisioning, every new environment requires a DevOps engineer to review configs, apply Terraform, validate networking, and ensure everything is wired correctly.</p><p>It is not just slow; it is sequential. One request blocks the next.</p><p>The team ends up spending hours on repetitive provisioning instead of improving infrastructure.
Ticket queues grow, environment requests pile up, and developers wait days just to start testing their code.</p><h2><strong>The Environment Parity Problem</strong></h2><p>Parity is often interpreted as making dev, staging, and production identical. In practice, that is neither necessary nor maintainable.</p><p>Parity should be treated as consistency of behavior, not symmetry of infrastructure.</p><p>The goal is to keep the components that affect runtime behavior identical, while allowing controlled differences in scale and cost.</p><h4><strong>What Must Be Identical</strong></h4><ul><li><p>The container image. Same artifact, built once, promoted through environments</p></li><li><p>Core networking: VPC layout, subnet configuration, security group rules</p></li><li><p>Secrets handling: how secrets are stored, injected, and rotated</p></li><li><p>Deployment behavior: same CI/CD pipeline, same rollout strategy</p></li><li><p>Observability: same logging stack, same metrics collection</p></li><li><p>IAM and access policies: same permission boundaries across environments</p></li></ul><h4><strong>What Can Differ Intentionally</strong></h4><ul><li><p>Instance size and replica count. Staging runs smaller</p></li><li><p>Data volume. Staging uses anonymized subsets</p></li><li><p>Backup frequency. Production has daily backups; staging may not</p></li></ul><p>A well-built IDP enforces this by making constraints explicit and versioned. When staging drifts from production, the platform surfaces it.</p><h4><strong>Enforcement via an IDP</strong></h4><p>An IDP enforces parity by making these constraints explicit and version-controlled:</p><ul><li><p>Environment definitions are codified as templates</p></li><li><p>Changes are versioned and promoted across environments</p></li><li><p>Drift is detectable when an environment deviates from the expected state</p></li></ul><p>Without this, parity depends on convention. 
With it, parity becomes enforceable.</p><h4><strong>Why Parity Failures Cause Production Incidents</strong></h4><p>Most production issues caused by parity gaps are not logic errors. They are environment-induced failures.</p><p>Common patterns:</p><ul><li><p>Timeout values differ across environments, masking latency issues</p></li><li><p>IAM policies restrict access paths only exercised in production</p></li><li><p>Autoscaling policies are never triggered in staging due to lower load</p></li><li><p>Network rules allow traffic in staging but block it in production</p></li></ul><p>These issues pass staging because the system being tested is not identical in behavior to production.</p><p>By the time they surface, they are already user-facing.</p><h2><strong>How an IDP Standardizes Environments</strong></h2><p>Environments should be declared, not built by hand every time someone needs one.</p><p>The moment you treat environment config as a versioned artifact rather than a runbook, things get a lot more predictable.</p><h4><strong>Blueprint-Based Provisioning</strong></h4><p>The core principle: environments are declared, not improvised. </p><p>A well-built internal developer platform architecture treats the environment definition as code. Dev, staging, production, and customer-dedicated environments all come from the same blueprint.</p><p>A solid blueprint encodes:</p><ul><li><p>VPC topology: private and public subnets, CIDR ranges, routing rules</p></li><li><p>Kubernetes cluster: node groups, autoscaling configuration</p></li><li><p>Ingress layer: load balancers, TLS termination</p></li><li><p>Observability stack: logging, metrics, tracing backends</p></li></ul><p>Differences between environments are explicit parameters in that definition:</p><ul><li><p>Instance size</p></li><li><p>Replica count</p></li><li><p>Backup policies</p></li></ul><p>Any difference that is not a parameter is unmanaged drift. 
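</p><p>As one illustration of what &#8220;explicit parameters&#8221; can mean, here is a minimal Python sketch of a versioned environment definition. The schema and names (<code>EnvSpec</code> and its fields) are hypothetical, not LocalOps&#8217;s actual format; the point is that anything not declared as a field counts as unmanaged drift:</p>

```python
from dataclasses import asdict, dataclass, replace

# Hypothetical blueprint: every allowed difference between environments
# is an explicit, versioned field. Anything else that differs is drift.
@dataclass(frozen=True)
class EnvSpec:
    image: str          # same artifact, promoted through every environment
    vpc_cidr: str       # must stay identical across environments
    instance_type: str  # explicit parameter: may differ
    replicas: int       # explicit parameter: may differ
    backup_daily: bool  # explicit parameter: may differ

production = EnvSpec("app:sha-4f2c", "10.0.0.0/16", "m5.large", 6, True)
# Staging derives from production; only declared parameters change.
staging = replace(production, instance_type="t3.medium", replicas=2,
                  backup_daily=False)

def drift(spec: EnvSpec, running: dict) -> dict:
    """Return fields where the running environment deviates from its spec."""
    want = asdict(spec)
    return {k: (want[k], running.get(k)) for k in want if running.get(k) != want[k]}

# A manual hotfix changed the image in staging only -> surfaced as drift.
observed = {**asdict(staging), "image": "app:sha-9e1b"}
print(drift(staging, observed))  # {'image': ('app:sha-4f2c', 'app:sha-9e1b')}
```

<p>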
Drift compounds over time.</p><h4><strong>Git-Push Deployments With No Manual Steps</strong></h4><p>In a properly built IDP, developers push to a branch and the platform handles the rest.</p><p>The flow:</p><ul><li><p>Code is pushed</p></li><li><p>One artifact is built</p></li><li><p>That artifact is promoted through each environment</p></li><li><p>Deployment config is resolved from the environment definition</p></li></ul><p>No Dockerfiles to write per environment. No manual kubectl or Helm steps. No per-team CI scripts that only one person understands.</p><p>The pipeline is owned by the platform, not held together by team convention. This is what separates a real internal developer platform from a collection of deployment scripts.</p><h4><strong>What Actually Changes</strong></h4><p>Without a platform, every team owns their own Terraform. Kubernetes config lives in scripts or someone&#8217;s head. Pipelines diverge slowly until nobody is sure what is running where.</p><p>With an opinionated IDP, infrastructure patterns come pre-built. Kubernetes complexity is abstracted behind platform APIs. Pipelines are standardized and reused across every service.</p><blockquote><p>The goal is not just automation. It is reducing the number of decisions developers have to make to ship safely.</p></blockquote><p>Here is an example of <a href="https://docs.localops.co/environment/inside?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">what gets provisioned inside a production-grade environment.</a></p><h2><strong>Environment Isolation and Ephemeral Environments</strong></h2><p>Shared  staging feels efficient. In practice, it breaks more than it helps. Isolation by default is how you actually keep environments stable.</p><h4><strong>What Isolation Means in Practice</strong></h4><p>A well-architected IDP gives every environment:</p><ul><li><p><strong>Network isolation:</strong> A dedicated VPC per environment. Services within an environment communicate freely. 
Services across environments cannot unless explicitly configured</p></li><li><p><strong>Secrets isolation:</strong> Credentials in staging are separate entries from credentials in production, encrypted and stored in your cloud account&#8217;s secret manager</p></li><li><p><strong>Compute isolation:</strong> Each environment runs its own Kubernetes cluster with its own compute nodes. No resource contention between staging and production</p></li></ul><h4><strong>Ephemeral Environments Fix the Shared Staging Problem</strong></h4><p>Per-PR ephemeral environments work like this:</p><ol><li><p>A developer opens a pull request</p></li><li><p>The platform spins up a full-stack copy of the service automatically</p></li><li><p>The preview gets its own secrets, cloud resources, and a public URL</p></li><li><p>New commits to the PR branch trigger automatic rebuilds</p></li><li><p>When the PR closes or merges, the preview and all its resources are deleted automatically</p></li></ol><p>Staging becomes a stable integration environment where only merged commits land. Feature work happens in isolated environments where breaking something affects only the developer who broke it.</p><p>Here is an example of <a href="https://docs.localops.co/use-cases/ephemeral?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">how this works end to end with full-stack preview environments.</a></p><h2><strong>Secrets and Config Management Across Environments</strong></h2><p>Most production incidents trace back to a secret or config that was different across environments. 
Teams rarely notice until something breaks.</p><h4><strong>The Typical Problem Without an IDP</strong></h4><ul><li><p>Local dev uses .env files passed around manually</p></li><li><p>Staging uses a mix of cloud secret managers, configured differently by whoever set it up</p></li><li><p>Production uses a different setup, sometimes undocumented</p></li><li><p>Nobody is confident which keys exist in which environment</p></li></ul><p>A developer adds a new environment variable, updates local and staging, forgets production. The service deploys fine to staging. It fails silently in production.</p><h4><strong>How a Well-Built IDP Handles Secrets</strong></h4><ul><li><p>Secrets are scoped per environment and per service with one consistent mechanism</p></li><li><p>Values are encrypted at rest in your cloud account&#8217;s secret manager</p></li><li><p>Secrets are injected as environment variables at runtime. Code reads them identically across all environments</p></li><li><p>Cross-service references eliminate duplication. Change a value once and every service referencing it picks it up on next deploy</p></li><li><p>Preview environments inherit the right secrets automatically, preventing them from accidentally hitting production databases</p></li></ul><p>Here is how <a href="https://docs.localops.co/environment/services/secrets?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps handles secrets across environments</a> if you want a concrete reference.</p><h2><strong>The Platform Team Bottleneck</strong></h2><p>The platform team, built to eliminate bottlenecks, becomes one. It happens in almost every organization that builds developer infrastructure internally.</p><p>Product teams grow faster than the platform team. New environment requests pile up. A security policy change needs rolling out across 15 environments and falls to two engineers. 
The team that was supposed to unblock product development ends up blocking it.</p><p>Self-service provisioning breaks this cycle. A developer connects their cloud account, selects an environment template, links their GitHub repo, and deploys. The best internal developer platforms let developers provision environments without filing a ticket.</p><p>The platform team maintains the platform. They do not service every environment request individually.</p><p>SuprSend, a notification infrastructure company, estimated that building their BYOC deployment capability in-house would have required 10 to 12 engineer-months. That is engineering time that ships zero product features.</p><p>&#8220;Even if we had diverted all our engineering resources to doing this in-house, it would have easily taken 10 to 12 man months of effort,&#8221; said Gaurav Verma, CTO and Co-founder of SuprSend.</p><p>Teams that rebuild the same infrastructure from scratch stay stuck in ticket queues. Teams that get the platform right move faster.</p><p><a href="https://localops.co/case-study/suprsend-unlocks-enterprise-revenue-byoc?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Read the full SuprSend case study</a></p><h2><strong>Governance and Compliance Across Environments</strong></h2><p>Governance is where most IDP content goes quiet. For engineering leaders selling into enterprise, it is often the deciding factor.</p><h4><strong>Security Defaults in Every Environment</strong></h4><p>A production-grade IDP bakes security into the provisioning blueprint so every environment inherits:</p><ul><li><p>Dedicated VPCs with network isolation per environment</p></li><li><p>Encrypted storage volumes and encrypted secrets at rest</p></li><li><p>Auto-renewing SSL certificates</p></li><li><p>Role-based access control scoped per environment</p></li><li><p>IAM-based keyless access to cloud resources</p></li></ul><p>These are not optional configurations. 
They are the baseline every environment starts from.</p><h4><strong>Auditability for SOC2 and HIPAA</strong></h4><p>Every environment creation, deployment, secret update, and access event should be recorded in an immutable audit log with the user, timestamp, and action. Compliance auditors ask for exactly this. Most teams without an IDP assemble it manually from CloudTrail logs and deployment records after the fact.</p><p>A well-built IDP captures this automatically across all environments and cloud accounts. Every action is attributed to a specific user. No entry can be deleted or modified. This is the traceability that SOC2, HIPAA, and ISO 27001 audits require, without building a separate logging infrastructure.</p><p>For SaaS companies selling into finance, healthcare, telecom, and energy, this combination of isolation, auditability, and consistent defaults is what closes regulated industry deals.</p><p>See how <a href="https://docs.localops.co/security?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps handles security and compliance</a></p><h2><strong>Day-2 Operations After the Environment Is Live</strong></h2><p>Provisioning a consistent environment is day-1. Keeping it healthy over time is day-2. Most IDP content stops at day-1. That is where teams get into trouble.</p><h4><strong>Observability Parity Across Environments</strong></h4><p>If staging and production run different observability stacks, comparing behavior between them is guesswork. A well-built IDP pre-configures the same observability tooling in every environment from day one: structured logs, system metrics, and consistent alerting. When a latency spike appears in production but not in staging, the investigation starts from a place of confidence.</p><h4><strong>Drift Detection and Auto-Healing</strong></h4><p>Environments drift from their defined state over time. A manual change under pressure. A resource accidentally deleted. 
A configuration updated in one environment but not others.</p><p>A mature IDP detects when the running state diverges from the defined state and surfaces it before the drift causes an incident. Auto-healing that restarts failed services and replaces unhealthy nodes reduces the operational burden significantly.</p><h2><strong>How to Implement Environment Standardization with an IDP</strong></h2><h4><strong>Day-0: Define Your Blueprint</strong></h4><ul><li><p>Connect your version control system and cloud accounts to the IDP</p></li><li><p>Define services: web services, workers, cron jobs, microservices</p></li><li><p>Declare cloud resource dependencies per service: databases, caches, queues, storage</p></li><li><p>Set naming conventions and branch-to-environment mappings</p></li><li><p>Define which secrets each service needs and how they are scoped</p></li></ul><p>This is the source of truth for every environment provisioned from it.</p><h4><strong>Day-1: Roll Out Environments</strong></h4><p>Create dev, staging, production, and preview environments, each linked to the appropriate branch. Enable per-PR ephemeral environments per service. Configure auto-deployment on each service so every commit to the linked branch triggers a build and deploy automatically. From this point, developers push to a PR branch, get an isolated environment with a shareable URL, and merge when ready. 
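</p><p>The branch-to-environment mapping described above can be sketched in a few lines of Python. The mapping and naming conventions here are illustrative assumptions, not a specific platform&#8217;s configuration format:</p>

```python
# Illustrative branch-to-environment mapping. Long-lived branches map to
# shared environments; any other branch gets its own ephemeral preview.
BRANCH_MAP = {"main": "production", "develop": "staging"}

def target_environment(branch: str) -> str:
    if branch in BRANCH_MAP:
        return BRANCH_MAP[branch]
    # PR branches get an isolated preview environment with a safe name.
    return "preview-" + branch.replace("/", "-")

print(target_environment("develop"))         # staging
print(target_environment("feature/login"))   # preview-feature-login
```

<p>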
Staging and production update automatically.</p><h4><strong>Day-2: Keep Things Healthy</strong></h4><ul><li><p>Review the audit log regularly for deployments, changes, and access events</p></li><li><p>Monitor for drift between running state and the defined blueprint</p></li><li><p>Rotate secrets through the IDP console and trigger a deployment to propagate</p></li><li><p>Delete environments cleanly when no longer needed</p></li></ul><p>If you are still figuring out where an internal developer platform fits into your workflow, <strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">book a demo</a></strong>. Sometimes a 30 minute conversation is faster than three days of research.</p><h2><strong>FAQs</strong></h2><p><strong>1. How do internal developer platforms manage multiple deployment environments like staging and production?</strong></p><p>An IDP uses a single blueprint to create every environment. Dev, staging, and production all get the same network setup, the same pipeline, and the same monitoring.</p><p>Any differences between environments are set on purpose. Smaller instance sizes in staging, for example, are a defined choice, not an accident.</p><p>If an environment drifts from its blueprint, the platform flags it.</p><p><strong>2. What does environment isolation mean in an internal developer platform?</strong></p><p>Isolation means each environment runs on its own separate infrastructure. It gets its own network, its own cluster, its own compute, and its own secrets.</p><p>Environments cannot talk to each other unless you set that up explicitly.</p><p>So a broken staging deploy stays in staging. One developer&#8217;s test environment cannot affect anyone else.</p><p><strong>3. Why does it take so long to spin up a new staging environment?</strong></p><p>Without an IDP, every new environment is built by hand. 
Someone has to write Terraform, set up a Kubernetes cluster, configure networking, wire up monitoring, and sort out secrets.</p><p>That takes weeks. And every new environment starts from zero.</p><p>With an IDP, environments are built from a saved blueprint. What took weeks now takes under 30 minutes.</p><p><strong>4. How do platform engineering teams create reproducible cloud environments?</strong></p><p>They store the full environment definition in code. That includes networking, compute, secrets, pipelines, and monitoring.</p><p>When a new environment is needed, the platform reads that definition and builds it automatically.</p><p>The result is the same structure every time. No manual steps, no variation, no relying on whoever set it up last.</p><p><strong>5. How do you stop developers from breaking each other&#8217;s environments?</strong></p><p>The real problem is shared environments. When everyone works in the same staging setup, one bad commit breaks things for the whole team.</p><p>The fix is to stop sharing. Give each developer their own isolated environment for every pull request.</p><p>It spins up when the PR opens. It shuts down when the PR closes. No one else is affected.</p><p><strong>6. How does an IDP solve scaling problems with shared staging environments?</strong></p><p>Shared staging works fine for small teams. Once you hit 10 to 15 engineers, it starts to break down.</p><p>Queued deploys, conflicting changes, and broken builds slow everyone down.</p><p>An IDP fixes this by giving each developer their own environment on demand. The team scales without staging becoming a bottleneck.</p><p><strong>7. How does an internal developer platform work on AWS?</strong></p><p>When you create an environment, the IDP sets up your full AWS stack automatically. That includes the VPC, subnets, EKS cluster, EC2 nodes, load balancer, and monitoring tools.</p><p>Secrets go into AWS Parameter Store, scoped to each environment. 
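</p><p>Scoping typically comes from a per-environment path convention in Parameter Store. The path layout below is an assumption for illustration, not LocalOps&#8217;s documented scheme:</p>

```python
# Hypothetical per-environment naming convention for secrets stored in
# AWS SSM Parameter Store. The platform injects the environment name, so
# the same code resolves the right value everywhere.
def parameter_name(environment: str, service: str, key: str) -> str:
    return f"/{environment}/{service}/{key}"

print(parameter_name("staging", "api", "DATABASE_URL"))
# -> /staging/api/DATABASE_URL

# At runtime the platform (or app) would fetch the decrypted value, e.g.:
#   import boto3
#   ssm = boto3.client("ssm")
#   value = ssm.get_parameter(
#       Name=parameter_name("production", "api", "DATABASE_URL"),
#       WithDecryption=True,
#   )["Parameter"]["Value"]
```

<p>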
IAM roles handle access so no one needs long-lived credentials.</p><p>Developers skip Terraform entirely. Teams can still add custom Terraform or Pulumi for anything outside the defaults. This is exactly how a well-built AWS internal developer platform removes infrastructure complexity without taking away control.</p><p><strong>8. Should I use an open source internal developer platform like Backstage?</strong></p><p>Backstage is the most used open source internal developer platform. It works well for service catalogs, developer portals, and documentation.</p><p>But Backstage is a framework, not a ready-made platform. To get it managing environments, secrets, and pipelines, your team has to build and maintain that themselves.</p><p>If you have a dedicated platform team and very specific needs, Backstage makes sense. If you want standardized environments without building the tooling yourself, a managed internal development platform is the faster path.</p><p><strong>9. Internal developer portal vs platform: which one actually standardizes environments?</strong></p><p>A portal shows you what is happening. A platform controls what happens.</p><p>Portals like Backstage give you a service catalog, documentation, and dashboards. They are useful for visibility but they do not provision environments or enforce consistency.</p><p>A platform handles the operational work. It provisions environments from a shared blueprint, manages secrets, runs pipelines, and keeps environments in sync.</p><p>If your goal is standardized dev, staging, and production environments, you need a platform. A portal alone will not get you there.</p><p><strong>10. Should I build an internal developer platform or buy one for better standardization?</strong></p><p>Building an internal developer platform takes longer than most teams expect. Networking, Kubernetes, CI/CD pipelines, secrets management, and observability can take 10 to 12 engineer-months to get right.</p><p>That is just the baseline. 
Every team rebuilds roughly the same 80% before getting to anything specific to their product.</p><p>Buying a managed platform gets you that baseline on day one. Your team focuses on work that actually matters for your product.</p><p>For most teams, buying is the faster path to standardized environments. Building only makes sense when you have requirements no existing platform can meet.</p><h2><strong>Conclusion</strong></h2><p>Most teams treat environment inconsistency as a process problem. They write better docs. They add more steps to the checklist. It works until the team grows further.</p><p>After that, drift grows faster than any process can fix it.</p><p>Environment standardization is an infrastructure problem. The only real fix is a platform. One that turns environment definitions into code, enforces consistency automatically, and gives every developer their own isolated environment on demand.</p><p>Without that, your DevOps team becomes a bottleneck. Staging stays broken. Production incidents keep coming from differences nobody caught.</p><p>The decision engineering leaders face is simple. Do you spend 10 to 12 engineer-months building that platform from scratch? Or do you start with something that handles the baseline so your team can focus on the product?</p><p>If you want to see what standardized environments look like in practice, LocalOps provisions production-grade environments on your cloud with no Dockerfiles, Helm charts, or Terraform required.</p><p><a href="https://localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Try LocalOps</a> or<a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog"> book a demo</a>. 
Most teams have their first environment running in under 30 minutes.</p>]]></content:encoded></item><item><title><![CDATA[What Is an Internal Developer Platform (IDP)?]]></title><description><![CDATA[Definition, Core Components, and Real-World Use Cases]]></description><link>https://blog.localops.co/p/what-is-an-internal-developer-platform-idp</link><guid isPermaLink="false">https://blog.localops.co/p/what-is-an-internal-developer-platform-idp</guid><dc:creator><![CDATA[Madhushree Sivakumar]]></dc:creator><pubDate>Wed, 18 Mar 2026 12:58:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xrME!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xrME!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xrME!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!xrME!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 848w, https://substackcdn.com/image/fetch/$s_!xrME!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 1272w, 
https://substackcdn.com/image/fetch/$s_!xrME!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xrME!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5122058,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/191351763?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xrME!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 424w, https://substackcdn.com/image/fetch/$s_!xrME!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 848w, 
https://substackcdn.com/image/fetch/$s_!xrME!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 1272w, https://substackcdn.com/image/fetch/$s_!xrME!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbe69060-68a6-4e99-8724-3b3a9493c63c_2400x1345.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>We are helping CTOs at startups, growing businesses and enterprise companies streamline their cloud operations 
using the LocalOps Internal Developer Platform. Most managed DevOps and platform engineering in-house at first, but eventually felt the drag of manual cloud management and adopted an Internal Developer Platform (IDP) to</p><ul><li><p>Accelerate their move to the cloud</p></li><li><p>Streamline and standardize cloud environments</p></li><li><p>Ship faster and improve release velocity</p></li><li><p>Deliver a smooth developer experience for deploying applications to the cloud</p></li></ul><p>We want to share lessons on IDPs with our community here at &#8220;<a href="https://blog.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Keep Shipping</a>&#8221; and engage in conversations that help modern CTOs transform their engineering practices.</p><h2><strong>TL;DR</strong></h2><ul><li><p><strong>What an IDP is:</strong> A self-service infrastructure layer owned by a platform team and used by developers</p></li><li><p><strong>What it covers:</strong> Environment provisioning, CI/CD pipelines, observability, secrets, and security guardrails in one platform</p></li><li><p><strong>How developers use it:</strong> Familiar workflows like git push instead of waiting on infra tickets</p></li><li><p><strong>The outcome:</strong> Faster deployments, consistent environments, and lower operational overhead</p></li></ul><p><strong>An Internal Developer Platform (IDP) </strong>is a self-service layer over your infrastructure that lets developers build, deploy, and operate applications without opening infra tickets or manually managing cloud resources.</p><p>A platform team owns it. Developers use it. The best platform for internal developer experience is one that removes infrastructure friction entirely and lets developers focus on shipping. 
As B2B SaaS companies grow and take on more deployment models like standard SaaS, dedicated single-tenant, and BYOC, managing environments manually stops scaling. IDPs are how engineering teams stay fast without adding DevOps headcount.</p><h2><strong>What Is an Internal Developer Platform?</strong></h2><p><strong>An IDP is the internal stack of tools and automation </strong>that gives developers self-service access to the infrastructure they need.</p><p>Instead of filing tickets for a new environment or asking ops to set up a pipeline, developers interact with golden-path workflows the platform team has already built, tested, and hardened.</p><p>It is not a single tool. A real IDP bundles:</p><ul><li><p><strong>Infrastructure orchestration</strong> to provision environments from standardized templates</p></li><li><p><strong>CI/CD pipelines</strong> using golden-path workflows that encode deployment best practices</p></li><li><p> <strong>Runtime configuration</strong> for consistent settings across every environment</p></li><li><p><strong>Secrets management</strong> for secure, centralized access to credentials</p></li><li><p> <strong>Observability</strong> with logs, metrics, and dashboards wired in by default</p></li><li><p><strong>Security controls</strong> with hardened defaults built into every environment template</p></li></ul><p>The platform team designs and maintains all of this. 
Developers consume it, usually without needing to know what is running underneath.</p><p><strong>In short: </strong>An IDP removes infrastructure bottlenecks by giving developers self-service access to standardized, pre-hardened environments.</p><h2><strong>Internal Developer Portal vs Platform vs PaaS</strong></h2><div id="datawrapper-iframe" class="datawrapper-wrap outer" data-attrs="{&quot;url&quot;:&quot;https://datawrapper.dwcdn.net/3ZN4C/2/&quot;,&quot;thumbnail_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c577bd7b-29c0-4e19-90dd-3909feaefdce_1220x934.png&quot;,&quot;thumbnail_url_full&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/de2cc8fd-ad87-4548-89aa-8bca64f1c719_1220x934.png&quot;,&quot;height&quot;:469,&quot;title&quot;:&quot;Created with Datawrapper&quot;,&quot;description&quot;:&quot;&quot;}" data-component-name="DatawrapperToDOM"><iframe id="iframe-datawrapper" class="datawrapper-iframe" src="https://datawrapper.dwcdn.net/3ZN4C/2/" width="730" height="469" frameborder="0" scrolling="no"></iframe><script type="text/javascript">!function(){"use strict";window.addEventListener("message",(function(e){if(void 0!==e.data["datawrapper-height"]){var t=document.querySelectorAll("iframe");for(var a in e.data["datawrapper-height"])for(var r=0;r<t.length;r++){if(t[r].contentWindow===e.source)t[r].style.height=e.data["datawrapper-height"][a]+"px"}}}))}();</script></div><p><strong>Key takeaway:</strong> The portal is just the UI. The IDP is the engine: it provisions real infrastructure in your cloud, wires in CI/CD and observability, and enforces security defaults without requiring you to build any of it.</p><h2><strong>Why Engineering Teams Need an IDP</strong></h2><p><strong>IDPs matter because </strong>manual infrastructure management compounds over time. 
The symptoms show up everywhere in your delivery cycle and your operations.</p><p>Here is what that looks like in practice:</p><p><strong>Delivery bottlenecks</strong></p><ul><li><p><strong>TicketOps:</strong> A developer needs a new staging environment. They open a ticket. Someone gets to it a few days later. In the meantime, work stops or goes in a direction that causes problems later.</p></li><li><p><strong>Environment drift:</strong> Staging was set up months ago and has slowly drifted from production. A bug shows up in prod that nobody caught in staging because the two environments were never actually the same.</p></li><li><p><strong>New service onboarding takes longer than it should:</strong> Every new service needs a Dockerfile, a pipeline, database connections, and monitoring wired up. None of it is reused from the last service. It gets done from scratch each time.</p></li></ul><p><strong>Operational bottlenecks</strong></p><ul><li><p><strong>Manual scaling and teardown:</strong> A test environment spins up for a release cycle and stays running after it is done. Nobody turned it off. The cost shows up in the next AWS bill.</p></li><li><p><strong>Observability gaps after new releases:</strong> Existing services have dashboards. New services and new deployments often do not. The gap gets noticed when something breaks in production.</p></li><li><p><strong>Enterprise customers create per-customer DevOps:</strong> An enterprise customer needs your software in their own AWS account. Without a repeatable process, each one is a manual effort to set up and maintain separately.</p></li><li><p><strong>Bespoke AWS operations:</strong> Security groups, IAM roles, VPCs, RDS instances set up differently each time, by different people, with no shared baseline. Every engineer has their own way of doing it. What works in one environment does not carry over to the next.</p></li></ul><p>None of these are edge cases. 
They are the everyday reality for most SaaS engineering teams that have not standardized how they provision and operate infrastructure.</p><p><strong>In short:</strong> An IDP removes these DevOps bottlenecks by standardizing how SaaS teams provision, deploy, and operate infrastructure. The same process runs for every environment, every service, and every customer deployment.<br><br>If your team is facing any of these bottlenecks, let&#8217;s talk. <a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Book a demo</a></p><h2><strong>Core Components of an Internal Developer Platform</strong></h2><p>A complete internal developer platform architecture covers six areas that, together, let teams ship without friction across environments.</p><p><strong>1. Infrastructure Orchestration and Environment Management</strong> Provisions environments from standardized templates: VPCs, compute clusters, databases, supporting services.</p><ul><li><p>Every environment is consistent and repeatable</p></li><li><p>No manual setup, no one-off configs only one person understands</p></li><li><p>Same template works across regions and cloud accounts</p></li></ul><p><strong>2. CI/CD and Golden-Path Pipelines</strong> Ready-made workflows that encode deployment best practices. Push to a branch, and build and deploy happen automatically.</p><ul><li><p>No pipeline YAML to write from scratch</p></li><li><p>Covers builds, tests, security scans, and deployments</p></li><li><p>Reduces variability and misconfiguration across teams</p></li></ul><p><strong>3. 
Developer Self-Service Workflows</strong> Developers spin up environments, create services, and deploy code without opening infra tickets.</p><ul><li><p>New service, new environment, new customer deployment &#8212; all self-serve</p></li><li><p>Removes dependency on ops for routine tasks</p></li><li><p>Frees up platform team time for higher-value work</p></li></ul><p><strong>4. Built-In Observability</strong> Logging, metrics, and dashboards are pre-configured in every environment.</p><ul><li><p>Every new service inherits the same observability stack</p></li><li><p>No separate monitoring setup required per environment</p></li><li><p>Logs, metrics, and alerts available from the first deployment</p></li></ul><p><strong>5. Security and Compliance Guardrails</strong> Security defaults are built into the environment template, not applied after the fact.</p><ul><li><p>VPC isolation, disk encryption, encrypted secrets, and RBAC on by default</p></li><li><p>Applies to internal and customer-facing environments alike</p></li><li><p>Reduces risk of misconfiguration at the environment level</p></li></ul><p><strong>6. DORA Metrics and Feedback Loops</strong> Tracks whether your delivery process is working: deployment frequency, lead time, change failure rate, and MTTR.</p><ul><li><p>Embedding these into the platform makes performance visible over time</p></li><li><p>Helps platform teams identify bottlenecks and prioritize improvements</p></li><li><p>Gives engineering leaders data to back infrastructure investment decisions</p></li></ul><p><strong>In short: </strong>Each component targets a specific DevOps bottleneck. Together they give SaaS teams a repeatable foundation so engineering effort goes into building product, not managing infrastructure.</p><p><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Explore the docs</a> to understand how LocalOps implements each of these out of the box. 
</p><h2><strong>How an IDP Works: The Developer Workflow</strong></h2><p>A common developer workflow in an internal developer platform looks like this: connect, provision, configure, push, monitor.</p><p>Here is each step in practice:</p><ol><li><p><strong>Connect your repository and cloud account.</strong> Link your GitHub account and your cloud account (AWS, GCP, or Azure). Access is keyless and role-based. Ready in under 30 minutes. If you want to see how this can work in practice, here&#8217;s how <a href="https://docs.localops.co/accounts/aws?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps handles cloud connections</a>.</p></li><li><p><strong>Spin up an environment.</strong> Select the type: test, staging, production, or customer-specific. The platform provisions the full stack: VPC, Kubernetes cluster, observability, security defaults. Here&#8217;s an example of <a href="https://docs.localops.co/environment/inside?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">what a fully provisioned environment looks like in LocalOps</a>.</p></li><li><p><strong>Create services and configure branches.</strong> For each application component (API, frontend, workers, cron jobs), create a service, point it at the right GitHub branch, and set up auto-deployment. See <a href="https://docs.localops.co/environment/services/create?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">how service creation and branch mapping works in LocalOps.</a></p></li><li><p><strong>Push code to deploy.</strong> Any commit to the configured branch triggers a build and deployment. No tickets. No manual pipeline runs. 
Here&#8217;s how <a href="https://docs.localops.co/environment/services/deploy?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">deployments are triggered and managed in LocalOps.</a></p></li><li><p><strong>Monitor from a single console.</strong> Logs, metrics, and health dashboards are available from the first deployment. Troubleshooting starts with real data, not log hunts. Here&#8217;s <a href="https://docs.localops.co/environment/monitoring/howto?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">how monitoring is structured inside LocalOps</a>.</p></li></ol><p><strong>In short: </strong>Connect once. Provision in minutes. Push to deploy. Monitor out of the box.</p><h2><strong>Real-World Use Cases</strong></h2><p><strong>IDPs are not just for large engineering orgs. </strong>Any team managing multiple environments or delivery models will feel the benefit.</p><h3><strong>SaaS Teams Managing Multiple Environments</strong></h3><p>A B2B SaaS company with 15 engineers runs test, staging, production, and regional variants for EU compliance. They&#8217;re spending engineering time on:</p><ul><li><p>Environment drift between staging and production</p></li><li><p>Ad hoc pipeline fixes when deployments break</p></li><li><p> Infra onboarding every time a new service is added</p></li></ul><p>With internal developer platforms, they define their environment template once. Environments across regions spin up in minutes. Every new service inherits the same CI/CD, monitoring, and security configuration automatically. The platform team stops being a bottleneck.</p><p>See how teams like yours <a href="https://docs.localops.co/use-cases/public-saas?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">manage SaaS environments on LocalOps</a></p><h3><strong>B2B SaaS Supporting BYOC Enterprise Customers</strong></h3><p>An enterprise customer needs your product deployed in their own AWS account for data residency or compliance. 
Without a standard process, this means:</p><ul><li><p>Manually replicating your infrastructure per customer</p></li><li><p>Debugging environment-specific issues with no standard baseline</p></li><li><p>Managing deployments separately, outside your normal release cycle</p></li></ul><p>An IDP solves this by treating customer environments the same way it treats internal ones. The customer connects their cloud account. You spin up an environment from your standard template on an AWS internal developer platform, point a dedicated branch at it and manage updates through the same console you use for everything else. No bespoke setup per customer. No separate release process.<br><br>Read how SuprSend unlocked enterprise revenue using LocalOps for BYOC deployments. <a href="https://localops.co/case-study/suprsend-unlocks-enterprise-revenue-byoc?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Read case study</a><br></p><h3><strong>Teams Migrating Off PaaS</strong></h3><p>Teams leaving Heroku, Render, or Vercel because of pricing or compliance constraints face a hard choice:</p><ul><li><p> Go directly to hand-rolled Kubernetes and Terraform, which is expensive and time-consuming</p></li><li><p>Stay on PaaS, which means limited control and increasing cost</p></li></ul><p>An IDP offers a third path. Developers keep a familiar push-to-deploy workflow with monitoring already configured. The runtime is Kubernetes running in their own cloud account. 
The complexity of managing clusters, networking, and pipelines sits inside the platform, not on individual engineers.</p><p>For teams making this move, LocalOps provides specific migration guides for <a href="https://docs.localops.co/migrate-to-aws/from-heroku?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Heroku</a>, <a href="https://docs.localops.co/migrate-to-aws/from-render?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Render</a>, <a href="https://docs.localops.co/migrate-to-aws/from-vercel?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Vercel</a>, and <a href="https://docs.localops.co/migrate-to-aws/from-fly?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">Fly.io.</a></p><h3><strong>Teams Migrating from On-Prem to AWS</strong></h3><p>Many engineering teams running workloads on on-prem data centers eventually need to move to cloud infrastructure. The challenge is not just the migration itself. It is rebuilding deployment workflows, monitoring, and environment management from scratch on a new platform. Without a standard process, this means:</p><ul><li><p>Recreating infrastructure manually for each workload</p></li><li><p>No consistent baseline across migrated services</p></li><li><p>Monitoring and pipelines built differently every time</p></li></ul><p>An IDP makes this transition repeatable. Teams provision cloud environments from standardized templates they will continue using going forward. The migration becomes a structured process, not a per-service project.</p><p>With teams using LocalOps, we see this move happening at a rapid pace. A migration that used to take years is now done in weeks.</p><p><strong>In short: </strong>IDPs are most valuable when environments multiply: multiple stages, multiple regions, multiple customers.</p><h2><strong>Frequently Asked Questions</strong></h2><p><strong>1. 
What is an internal developer platform and what should it include?</strong></p><p>An IDP is a self-service layer over your infrastructure that lets developers build, deploy, and operate applications without working directly with cloud consoles or ops teams. A platform team owns it; developers use it through standardized workflows. A complete IDP includes environment provisioning, CI/CD pipelines, observability, secrets management, security guardrails, and feedback loops like DORA metrics.</p><p><strong>2. What is the difference between an IDP and a CI/CD pipeline?</strong></p><p>A CI/CD pipeline is one component inside an IDP. It handles building, testing, and deploying code. An IDP is the full platform: it provisions environments, manages infrastructure, handles secrets, wires in observability, and enforces security defaults. CI/CD is a piece of that, not the whole thing.</p><p><strong>3. What is the difference between an IDP and a PaaS?</strong></p><p>A PaaS like Heroku runs on vendor-managed infrastructure. You get convenience but limited control over networking, data residency, and compliance. An IDP runs in your own cloud account, giving you full control over the underlying infrastructure. The developer experience can be just as smooth. The difference is that the infrastructure belongs to you.</p><p><strong>4. What problems does an IDP solve for SaaS engineering teams?</strong></p><p>Mostly bottlenecks and inconsistency. Without one, developers wait on infra tickets to get environments, staging drifts from production, every new service requires manual setup, and supporting enterprise customers means replicating infrastructure by hand for each one. An IDP standardizes and automates all of it.</p><p><strong>5. When should a team invest in an IDP?</strong></p><p>When manual infrastructure work starts slowing engineering down. 
The clearest signals: developers opening tickets for environments, staging and production behaving differently, new service onboarding taking days, or enterprise customers asking for BYOC or dedicated deployments with no standard process to handle it.</p><p><strong>6. How is an internal developer platform different from a DevOps tool like Jenkins or Terraform?</strong></p><p>Jenkins and Terraform are individual tools that solve specific problems: Jenkins automates builds, Terraform provisions infrastructure. An IDP sits above these tools and orchestrates them together. It uses tools like Terraform under the hood but abstracts them away. Developers do not interact with Jenkins or Terraform directly. They interact with the platform, which handles the tooling underneath.</p><p><strong>7. Open-source internal developer platform vs managed: which should you choose?</strong></p><p>An open-source internal developer platform like Backstage gives you full flexibility to customize every layer of the stack. But you are responsible for hosting, maintaining, and evolving it. Most teams underestimate what that takes. It is not a one-time setup. It requires ongoing platform engineering effort to keep it stable, secure, and up to date. A managed platform trades some of that flexibility for a system that works from day one and is maintained for you.</p><p><strong>8. Should you build an internal developer platform or buy one?</strong></p><p>Building means assembling your own stack from scratch, writing the automation, integrating the tools, and owning everything that breaks. It makes sense if your requirements are genuinely unique and no existing platform covers them. For most SaaS teams those requirements do not exist. Buying gets you a working platform in days, not months, and lets your engineers focus on product instead of infrastructure tooling.</p><h2><strong>Key Takeaways</strong></h2><p>Manual infrastructure management doesn&#8217;t get better as your team grows. 
TicketOps, environment drift, and per-customer DevOps work are symptoms of the same underlying problem: no standard, automated way to provision and operate environments at scale.</p><p>An IDP fixes this by moving complexity to the right place: encoded in templates and automation the platform team maintains, not scattered across engineers and ad hoc scripts.</p><p>The result:</p><ul><li><p>Developers get self-service workflows and stop waiting on infra</p></li><li><p>Platform teams get control over standards, security, and cost</p></li><li><p>Engineering orgs ship faster and support more deployment models without adding headcount</p></li></ul><p>If you are managing multiple environments, supporting enterprise customers, or feeling the drag of manual infrastructure work, LocalOps is one of the best internal developer platforms for SaaS teams running on their own cloud, without the build-it-yourself cost.</p><p>If you&#8217;re looking to put this into practice, you can explore<strong> <a href="https://localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps</a>, </strong>dive deeper into the <strong><a href="https://docs.localops.co/?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">LocalOps docs</a></strong>, or <strong><a href="https://cal.com/anand-localops/tour?utm_source=substack&amp;utm_medium=content&amp;utm_campaign=idp_blog">book a demo</a></strong> to see how it works in your setup.</p>]]></content:encoded></item><item><title><![CDATA[What's new at LocalOps: Custom IP ranges, Custom Node groups, ARM, 50% savings, Windows support and more]]></title><description><![CDATA[Your internal developer platform just got way, way more sophisticated.]]></description><link>https://blog.localops.co/p/whats-new-at-localops-custom-ip-ranges</link><guid isPermaLink="false">https://blog.localops.co/p/whats-new-at-localops-custom-ip-ranges</guid><dc:creator><![CDATA[Anand]]></dc:creator><pubDate>Tue, 10 Mar 2026 15:21:48 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!f7FC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!f7FC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!f7FC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 424w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 848w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!f7FC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png" width="1456" height="1761" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1761,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6984572,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!f7FC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 424w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 848w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!f7FC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2c93d6-573b-4e8c-b5d7-b32f3fe1ee0c_1984x2400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We released a ton of new capabilities this week. All of them make it easier for teams to customise how services are deployed and run.</p><p>Starting with custom network IP ranges.</p><h3>Custom CIDRs:</h3><p>When creating a new environment for production or staging setups, you can now pick custom CIDR ranges. This lets you choose a custom network IP range for your environment and fulfill requirements set by your cloud operations team or those set by your customer&#8217;s network team. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3iDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3iDb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 424w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 848w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 1272w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3iDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png" width="1456" height="1446" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1446,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1266546,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3iDb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 424w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 848w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 1272w, https://substackcdn.com/image/fetch/$s_!3iDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d1b339f-0569-4b59-b8fa-1339794bf378_1968x1954.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Custom node groups:</h3><p>So far, all environments were powered by one node group. In AWS, these were <em>on-demand</em> Amazon Linux nodes on the AMD64 architecture, specifically picked to run: </p><ul><li><p>system components of the environment</p></li><li><p>the monitoring stack: Prometheus, Loki, and Grafana backends</p></li><li><p>and your services.</p></li></ul><p>It meant your services could only be Linux containers built for the AMD64 architecture. 
</p><p>From now on, you can create custom node groups with any</p><ul><li><p>OS: Linux or Windows</p></li><li><p>Capacity type: ON_DEMAND or SPOT</p></li><li><p>Instance type: any instance type supported by AWS</p><ul><li><p>Architecture: AMD64 or ARM64.</p></li></ul></li></ul><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ujii!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ujii!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 424w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 848w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ujii!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png" width="1456" height="400" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:400,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1333673,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ujii!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 424w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 848w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!Ujii!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdcd4a9a8-aa2b-42c8-b9e1-18c7ab7117c9_3796x1044.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VOhU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VOhU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 424w, 
https://substackcdn.com/image/fetch/$s_!VOhU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 848w, https://substackcdn.com/image/fetch/$s_!VOhU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 1272w, https://substackcdn.com/image/fetch/$s_!VOhU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VOhU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png" width="728" height="863.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1727,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:1834441,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!VOhU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 424w, https://substackcdn.com/image/fetch/$s_!VOhU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 848w, https://substackcdn.com/image/fetch/$s_!VOhU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 1272w, https://substackcdn.com/image/fetch/$s_!VOhU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc43d7dee-900c-4b93-8a85-7befef815aca_1642x1948.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#128070; And services can be assigned to any node group so that their containers run only on the assigned nodes. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t5cs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t5cs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 424w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 848w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 1272w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!t5cs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png" width="1456" height="735" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:735,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1038798,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t5cs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 424w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 848w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 1272w, https://substackcdn.com/image/fetch/$s_!t5cs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff37acc9b-3d18-4301-8c23-5270cd15f397_2168x1094.png 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="pullquote"><p>This gives your teams the freedom to run their services on any node set, improving efficiency and saving costs. </p></div><p><strong>Spot instances &amp; 50% savings:</strong></p><p>While creating node groups, you can pick &#8220;Spot&#8221; as the capacity type. Spot instances are spare capacity that AWS makes available temporarily at roughly 50% lower cost; AWS can reclaim these instances with a 2-minute notice. 
Idempotent services, i.e. services that can be stopped and restarted on a new node without side effects, can run on Spot node groups, and you can enjoy the ~50% savings.</p><p><strong>Windows nodes:</strong></p><p>Today&#8217;s changes introduce Windows support too. </p><p>While creating node groups, you can choose any supported Windows OS as the AMI family. You can then create services based on Windows Docker images and assign them to these Windows node groups. </p><p>Observability for Windows-based services, including node metrics, container logs and container metrics, is available within the Grafana dashboard, just as for Linux services.</p><p><strong>ARM64 &amp; ~30% savings:</strong></p><p>And yes, you can also spin up node groups with an ARM64 instance type like t4g.medium that offers 20-30% more performance per dollar on AWS.</p><blockquote><p>The unique aspect of all this is that it is cloud neutral. The same mechanics exist in environments created on both Azure and Google Cloud.</p></blockquote><h3>Custom resource tags:</h3><p>All resources provisioned for environments already come with two standard tags:</p><ul><li><p>id</p></li><li><p>name</p></li></ul><p>This lets teams analyze costs by filtering cloud resources on these properties.</p><p>In addition, you can now add custom resource tags under Account settings to attach your own tags to all cloud resources. For example:</p><ol><li><p><code>team: api</code></p></li><li><p><code>org: acme-corp</code></p></li><li><p><code>bu: org-name</code></p></li></ol><h3>Real-time run status:</h3><p>Services will now display the exact run status of all underlying pods. 
So, you will see:</p><ul><li><p><code>running (2/2)</code></p><ul><li><p>if both replicas/pods are running as desired</p></li></ul></li><li><p><code>degraded (1/2)</code></p></li><li><p><code>failing</code></p></li></ul><p>Devs can now skip our CLI, k9s and kubectl when digging into the run status of their services. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bduS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bduS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 424w, https://substackcdn.com/image/fetch/$s_!bduS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 848w, https://substackcdn.com/image/fetch/$s_!bduS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 1272w, https://substackcdn.com/image/fetch/$s_!bduS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bduS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png" width="1392" height="882" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:882,&quot;width&quot;:1392,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:619011,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bduS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 424w, https://substackcdn.com/image/fetch/$s_!bduS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 848w, https://substackcdn.com/image/fetch/$s_!bduS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 1272w, https://substackcdn.com/image/fetch/$s_!bduS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11de5109-37da-43c9-8c01-98a2d4fb9ca8_1392x882.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>For crons and jobs, we have also introduced a &#8220;<strong>Runs</strong>&#8221; tab. 
It will show all current and historic job runs with failed/success status.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!O9mQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!O9mQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 424w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 848w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 1272w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!O9mQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png" width="1456" height="705" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:705,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:856257,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!O9mQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 424w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 848w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 1272w, https://substackcdn.com/image/fetch/$s_!O9mQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F299f5e02-f830-4a32-bfc5-6e5d2999a764_2028x982.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Deployment notes:</h3><p>Teams can now add a note while triggering new deployments. 
This is an optional field though.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VoSp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VoSp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 424w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 848w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 1272w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VoSp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png" width="1456" height="1338" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1338,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:788217,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VoSp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 424w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 848w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 1272w, https://substackcdn.com/image/fetch/$s_!VoSp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58412e95-9850-4296-ba6a-ffdad49c8bfa_1508x1386.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>And they appear as a chat bubble.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cTeV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cTeV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 424w, 
https://substackcdn.com/image/fetch/$s_!cTeV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 848w, https://substackcdn.com/image/fetch/$s_!cTeV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!cTeV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cTeV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png" width="1405" height="1044" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1044,&quot;width&quot;:1405,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:545809,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/190429843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!cTeV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 424w, https://substackcdn.com/image/fetch/$s_!cTeV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 848w, https://substackcdn.com/image/fetch/$s_!cTeV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 1272w, https://substackcdn.com/image/fetch/$s_!cTeV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5be1943-f0e6-49d4-823d-17560ec4981a_1405x1044.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>LocalOps is your Internal developer platform, making it easy for teams to deploy on any cloud via just <code>git push</code>, automate &amp; standardize their infrastructure setup &amp; deployments across any number of environments and cloud accounts. All from a single console which product engineering teams can understand and operate.</p><p>Schedule a quick demo now at <a href="https://go.localops.co/tour">https://go.localops.co/tour</a>. Or sign up for a free account at <a href="https://console.localops.co/">https://console.localops.co/</a></p><p>Cheers.</p><p></p>]]></content:encoded></item><item><title><![CDATA[The Hidden Cost of EC2 Architectures: Paying for Idle Compute]]></title><description><![CDATA[Paying for idle compute (EC2) is anti-cloud. 
Results in 80% wastage in AWS spend]]></description><link>https://blog.localops.co/p/the-hidden-cost-of-ec2-architectures</link><guid isPermaLink="false">https://blog.localops.co/p/the-hidden-cost-of-ec2-architectures</guid><dc:creator><![CDATA[Anand]]></dc:creator><pubDate>Tue, 03 Mar 2026 19:14:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sAsE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sAsE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sAsE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 424w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 848w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!sAsE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png" width="1456" height="1933" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1933,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2981612,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/189741335?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sAsE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 424w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 848w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!sAsE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F440a053d-f850-4da0-a1ab-4ad40f61963e_1808x2400.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="pullquote"><p>TLDR; Move off of static EC2 based hosting architectures to stop paying for idle compute and save up to 80% in your cloud bills.</p></div><p>One of the biggest hidden inefficiencies in cloud infrastructure today is <strong>idle compute</strong>.</p><p>Many companies running on AWS EC2 are unknowingly paying large cloud bills for infrastructure that isn&#8217;t doing any useful work most of the time.</p><p>This doesn&#8217;t happen because engineers are careless. 
It happens because traditional cloud architectures encourage over-provisioning.</p><h2>The EC2 Capacity Problem</h2><p>When teams deploy applications on EC2, infrastructure is usually sized for peak demand. For example:</p><ul><li><p>Normal weekday traffic: 30%</p></li><li><p>Weekend traffic: 15%</p></li><li><p>Product launch: 100%</p></li></ul><p>To stay safe, teams provision infrastructure for 100% capacity all the time.</p><p>This guarantees reliability during spikes. But the trade-off is obvious: most of the time, servers are sitting idle.</p><h2>What Idle Compute Looks Like in Real Systems</h2><p>Here are common patterns that lead to wasted compute in EC2 environments.</p><ol><li><p>Over-provisioned Kubernetes nodes</p></li><li><p>Static auto scaling groups</p></li><li><p>Always-on background services</p></li></ol><h4>Over-provisioned Kubernetes Nodes</h4><p>Clusters are typically sized for peak workloads.</p><p>During normal operation:</p><ul><li><p>nodes run at 20&#8211;40% utilization</p></li><li><p>entire nodes may remain mostly idle</p></li></ul><p>Yet each node continues generating costs.</p><h4>Static Auto Scaling Groups</h4><p>Even with autoscaling enabled, many teams configure high minimum instance counts to avoid cold starts.</p><p>Example:</p><pre><code>min instances: 10
average demand: 3</code></pre><p>Seven instances are effectively idle most of the time.</p><h4>Always-On Background Services</h4><p>Microservices frequently run continuously even when they have no active work.</p><p>Examples include:</p><ul><li><p>queue workers</p></li><li><p>batch processors</p></li><li><p>internal APIs</p></li></ul><p>Instead of scaling dynamically, they remain running 24/7 regardless of demand.</p><h2>Why Engineers Accept This</h2><p>Most teams knowingly accept this inefficiency for good reasons.</p><h3>Reliability Comes First</h3><p>Infrastructure failures are costly. Over-provisioning ensures traffic spikes never cause downtime.</p><h3>Autoscaling Is Hard to Tune</h3><p>Scaling systems require careful configuration:</p><ul><li><p>metrics</p></li><li><p>cooldown windows</p></li><li><p>traffic forecasting</p></li><li><p>load testing</p></li></ul><p>Many teams simply avoid the operational complexity.</p><h3>Traditional Infrastructure Is Static</h3><p>Many deployment tools assume:</p><ul><li><p>servers exist permanently</p></li><li><p>infrastructure is provisioned in advance</p></li><li><p>workloads are long-running</p></li></ul><p>As a result, compute becomes a fixed cost instead of an elastic one.</p><h2>The Cost Impact</h2><p>Let&#8217;s look at a simple example. 
A startup runs:</p><ul><li><p>12 EC2 instances</p></li><li><p>$80/month each</p></li></ul><p>Total monthly compute would be:</p><pre><code>$960/month</code></pre><p>But real usage averages <strong>30%</strong>.</p><p>Which means roughly 70% of that spend:</p><pre><code>$672/month</code></pre><p>is paying for idle capacity.</p><p>Multiply this across:</p><ul><li><p>staging environments</p></li><li><p>development environments</p></li><li><p>multiple services</p></li></ul><p>And suddenly companies are spending thousands every month on unused compute.</p><h2>The Future: Infrastructure That Shrinks</h2><p>The cloud promised <strong>elastic infrastructure</strong>, but many EC2 setups still behave like traditional servers with hourly billing.</p><p>A better model is infrastructure that can:</p><ul><li><p>scale up quickly when demand appears</p></li><li><p>aggressively scale down when workloads are idle</p></li><li><p>sometimes even <strong>scale all the way to zero</strong></p></li></ul><p>This dramatically reduces wasted compute.</p><blockquote><p><em>At <a href="https://localops.co">LocalOps</a>, one of the capabilities we&#8217;re working on is making this behavior a built-in property of the infrastructure itself. Instead of engineers manually tuning autoscaling policies, the environments LocalOps provisions are designed to expand and shrink automatically based on real workload demand. In other words, scaling down becomes just as automatic as scaling up.</em></p></blockquote><p>The cloud should behave like electricity. You shouldn&#8217;t pay for capacity just because it exists. You should pay when it&#8217;s actually being used.</p><p>And the cheapest server in the cloud will always be the one that isn&#8217;t running.</p><h4>Free cloud wastage assessment:</h4><p>If this is a problem in your team, talk to us. We will assess and potentially provide quick wins and practical guidelines to save up to 50% on your cloud wastage. 
<a href="https://cal.com/anand-localops/cloud-wastage-assessment">Schedule a call now</a>. </p>]]></content:encoded></item><item><title><![CDATA[LocalOps + SOC2]]></title><description><![CDATA[Enterprise-grade security for your deployments and AI investigations with SOC2 Type 2.]]></description><link>https://blog.localops.co/p/localops-soc2</link><guid isPermaLink="false">https://blog.localops.co/p/localops-soc2</guid><dc:creator><![CDATA[Anand]]></dc:creator><pubDate>Thu, 26 Feb 2026 08:39:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MEGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MEGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MEGE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 424w, https://substackcdn.com/image/fetch/$s_!MEGE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 848w, https://substackcdn.com/image/fetch/$s_!MEGE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 1272w, 
https://substackcdn.com/image/fetch/$s_!MEGE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MEGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4981246,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/189228144?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MEGE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 424w, https://substackcdn.com/image/fetch/$s_!MEGE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 848w, 
https://substackcdn.com/image/fetch/$s_!MEGE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!MEGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3883238d-ee6c-4fde-84b3-c2dbd5207c8e_2400x2400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>We&#8217;re happy to share that LocalOps is pursuing SOC2 Type 2 compliance now. 
</p><p>This means our security controls safeguard the environments and deployments we make in your cloud accounts, and that the AI-powered investigations run across those environments are secured to the latest industry standards for data security.</p><p>Beyond security, we will be covering all three major criteria:</p><ol><li><p>Security</p></li><li><p>Availability (so that our systems are resilient &amp; reliable at all times)</p></li><li><p>Data confidentiality</p></li></ol><p>Our seasoned engineering team at LocalOps has more than 10 years of experience in achieving and maintaining SOC2 Type 2 compliance for production systems on AWS. So when we designed and built the LocalOps company &amp; platform, we made all our security controls SOC2 aligned - across product, infrastructure and organization.</p><p>For example:</p><ol><li><p>Org: Everyone in the org is required to set up 2FA for their logins, and this is enforced by our internal SSO system.</p></li><li><p>Product: All login sessions expire in 24 hours, and an audit log lets users see who did what.</p></li><li><p>Code/SDLC: All code is reviewed and tested before release.</p></li><li><p>Infrastructure: Our cloud infrastructure, hosted on AWS, has isolated test and production environments, and only authorized personnel get access to production data. Data and backups are encrypted at rest.</p></li></ol><p>We have documented more details on these existing controls in our security documentation here - <a href="https://docs.localops.co/security">https://docs.localops.co/security</a>.</p><p>All our controls are already SOC2 aligned. The certification we&#8217;re pursuing now just puts things in writing.</p><p>Let us know if you want more details on our SOC2 compliance. Reach out to <a href="mailto:anand@localops.co">anand@localops.co</a> and we will share the relevant documents.</p><p>Or set up a time to take a tour of LocalOps - <a href="https://go.localops.co/tour">https://go.localops.co/tour</a>. 
</p><p>Get started with LocalOps for free at <a href="https://console.localops.co/signup">https://console.localops.co/signup</a>. </p><p>Cheers.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Introducing LocalOps BYOC]]></title><description><![CDATA[Enterprise users can run LocalOps developer platform in their AWS account. LocalOps team will remotely setup and manage the instance 24/7.]]></description><link>https://blog.localops.co/p/introducing-localops-byoc</link><guid isPermaLink="false">https://blog.localops.co/p/introducing-localops-byoc</guid><dc:creator><![CDATA[LocalOps Inc]]></dc:creator><pubDate>Fri, 06 Feb 2026 08:54:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sjg3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sjg3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sjg3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 424w, https://substackcdn.com/image/fetch/$s_!sjg3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 848w, 
https://substackcdn.com/image/fetch/$s_!sjg3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!sjg3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sjg3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png" width="1456" height="1761" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1761,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6828163,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/187063946?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sjg3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 424w, 
https://substackcdn.com/image/fetch/$s_!sjg3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 848w, https://substackcdn.com/image/fetch/$s_!sjg3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 1272w, https://substackcdn.com/image/fetch/$s_!sjg3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb9568de-cb0e-4a98-845c-7e8c8b7c8d4c_1984x2400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>The LocalOps platform helps teams deploy their code to any environment - cloud or bare-metal, their own cloud or their customers&#8217; clouds - all in minutes instead of weeks, without hiring a DevOps team or writing Dockerfiles or Helm charts.</p><p>Starting today, teams can host &amp; run the LocalOps platform itself in their own cloud account (bring your own cloud).</p><h2>Why?</h2><p>While we have <a href="https://docs.localops.co/security">strong NIST/SOC2-aligned data security controls</a> in our SaaS version, data residency laws and related policies might require enterprise users to keep data in a specific country or location, and might stop them from using our global SaaS solution (on console.localops.co).</p><p>We understand that, so we have come up with a BYOC (bring your own cloud) version of the LocalOps platform. It allows our enterprise users to host and run the LocalOps platform in their own cloud account and region.</p><h2>How it works</h2><p>You give us access to one of your AWS accounts. Then our team will provision and manage a private, exclusive instance of LocalOps in that account. 
And your team(s) can access it on your domain like - <code>deploy.yourcompany.com</code>.</p><p>At a high level, here is the difference between our SaaS and BYOC version.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KMM5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KMM5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 424w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 848w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 1272w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KMM5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png" width="1240" height="740" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:740,&quot;width&quot;:1240,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:92446,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.localops.co/i/187063946?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KMM5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 424w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 848w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 1272w, https://substackcdn.com/image/fetch/$s_!KMM5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba9428f0-0e49-4239-9c2b-c4b82f4ecb4a_1240x740.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Before we deploy LocalOps in your AWS account, we will share the list of resources we will create and the estimated cost you will incur in that AWS account. Once you approve, we will proceed to set things up.</p><ol><li><p>Unlike SaaS, LocalOps BYOC version will be setup in your AWS account.</p></li><li><p>You decide which regions amongst <a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/">all the regions supported by AWS</a> must host your private/exclusive instance of LocalOps.</p></li><li><p>Your Co. and teams alone can access the private instance of LocalOps. </p></li><li><p>You can create any number of accounts, any number of projects and users. This will all be owned by you and your Co.</p></li><li><p>Your Co. 
has to pay for all the resources running in your AWS account as part of the private LocalOps instance.</p></li><li><p>LocalOps will fully manage this private instance in terms of software setup, updates and uptime.</p></li></ol><p><strong>Data ownership:</strong></p><p>LocalOps BYOC enables you to host and own all the data (projects, users, code, Docker images, deployment pipelines and more) in your premises / cloud account, fulfilling your security policies.</p><h2>Our Mission:</h2><p>LocalOps Inc. was started with a mission to give data ownership back to users. We are more than excited to bring out this BYOC version and support it first class, for years to come!</p><h2>Contact us now:</h2><p>Talk to us at <a href="mailto:anand@localops.co">anand@localops.co</a>. We will set up a call, answer all your questions and get you started the same day. If you&#8217;d like to schedule a call, pick a 30-min time slot at <a href="https://go.localops.co/tour">https://go.localops.co/tour</a>.</p>]]></content:encoded></item></channel></rss>