The Cloud Is Just Someone Else’s Computer And It’s Costly
There’s a meme that haunts every infrastructure engineer who’s ever opened an AWS bill: “The cloud is just someone else’s computer.” It’s funny because it’s true, and painful because it’s expensive.
In 2025, enterprises will waste $44.5 billion on cloud infrastructure they’re paying for but not fully using. That’s not a rounding error. That’s 21% of total cloud spend evaporating into idle instances, over-provisioned resources, and development environments running 24/7 that nobody remembers spinning up. For every hundred thousand pounds you spend on cloud, roughly twenty-one thousand of it goes up in smoke.
And the really uncomfortable bit? Most companies know this. They’ve known for years. Yet cloud costs keep climbing at 21% annually whilst organisations frantically hire FinOps teams to figure out where all the money went.
Something has gone profoundly wrong with how we think about computing infrastructure.
The Promise vs The Reality
Remember when cloud computing was going to save everyone money? The pitch was elegant: stop buying expensive servers that sit idle most of the time. Pay only for what you use. Scale infinitely. Focus on your business, not your data centre.
For startups and companies with unpredictable workloads, this promise largely held true. If you don’t know whether you’ll have ten users or ten million next month, the cloud is genuinely brilliant. You can spin up resources when you need them and shut them down when you don’t.
But something happened on the way to the promised land. Companies that weren’t startups, companies with predictable, steady workloads, companies that knew exactly how many servers they needed, all moved to the cloud anyway. Because that’s what you were supposed to do. Cloud-first became cloud-only became cloud-always.
The industry convinced everyone that on-premises infrastructure was legacy thinking. Old-fashioned. Something only dinosaurs still did. Never mind that the economics might not make sense for your particular situation. Never mind that you might be paying a massive premium to rent computers you’d be using constantly anyway.
The cloud became ideology, not infrastructure choice.
What $3.2 Million Buys You
In 2022, 37signals (the company behind Basecamp and HEY) looked at their AWS bill and had a moment of clarity that felt a bit like waking up from a fever dream. They were spending $3.2 million annually to rent computers from Amazon.
David Heinemeier Hansson, the company’s CTO, did something radical. He did the maths. Not the sophisticated financial modelling that justifies what you’ve already decided to do, but actual, straightforward arithmetic. What would it cost to buy the servers instead of renting them?
The answer was uncomfortable. They could spend $600,000 on Dell servers that would handle their workloads just fine. Their infrastructure costs would drop from $3.2 million per year to under a million. Same team size. Same capabilities. Less complexity.
So they left. Moved everything off AWS and onto their own hardware. And by the end of 2024, after all their long-term cloud contracts finally expired, they were projecting over $10 million in savings over five years.
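The arithmetic behind that projection is almost insultingly simple. Here’s a back-of-the-envelope sketch using the figures above; the $1 million annual running cost for owned hardware is an assumption based on “under a million”, and real numbers would also include colocation, power, and hardware refresh cycles:

```python
# Rent-vs-buy over five years, using 37signals' published figures.
# The owned-infrastructure running cost is an assumption ("under a million").
YEARS = 5
cloud_annual = 3_200_000      # reported annual AWS spend
hardware_upfront = 600_000    # one-off Dell server purchase
owned_annual = 1_000_000      # assumed yearly running cost once owned

cloud_total = cloud_annual * YEARS                     # $16,000,000
owned_total = hardware_upfront + owned_annual * YEARS  # $5,600,000

print(f"Five-year cloud bill:  ${cloud_total:,}")
print(f"Five-year owned cost:  ${owned_total:,}")
print(f"Difference:            ${cloud_total - owned_total:,}")  # $10,400,000
```

No discounted cash flows, no sophisticated modelling. Just the kind of arithmetic most companies never bother to run.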
Let that sink in. A company that had been cloud-native, that helped popularise modern web development practices, that knew AWS inside and out, saved millions by going back to owning their infrastructure.
The cloud defenders had responses, naturally. “But what about disaster recovery? What about scaling? What about maintenance?” All reasonable questions. All things 37signals addressed without adding headcount or sacrificing reliability.
The uncomfortable truth that nobody wanted to say out loud: for companies with predictable workloads, the cloud is grotesquely expensive.
The Waste Nobody Talks About
Whilst 37signals made headlines by leaving entirely, most companies are stuck in a different kind of hell. They know they’re overpaying. They can see the waste. They just can’t seem to stop it.
According to multiple industry studies, the average organisation wastes between 27% and 35% of its cloud budget. Some estimates put it higher. When organisations are asked what percentage of their cloud spend is wasted, more than half admit it’s over 25%.
Think about that. A quarter to a third of all cloud spending buys you absolutely nothing. It’s idle instances still running. It’s storage nobody’s accessed in months. It’s over-provisioned resources because someone made a conservative guess and nobody ever checked if the guess was right. It’s development environments that spin up Monday morning and run until someone remembers to shut them down Friday night, if they’re shut down at all.
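That last one, at least, is fixable with embarrassingly little code. Here’s a minimal sketch of a nightly scheduled job, assuming dev machines carry an env=dev tag; the tag convention is our assumption, not anything AWS enforces:

```python
# stop_dev_instances.py -- a minimal sketch for a nightly scheduled job.
# Assumes dev machines are tagged env=dev; that convention is something
# you'd have to enforce yourself, not an AWS default.
import boto3

ec2 = boto3.client("ec2")

# Find all running instances tagged as dev environments.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} dev instances: {instance_ids}")
```

Point a cron job or scheduled task at that for 7pm every evening and the round-the-clock dev environment problem largely disappears.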
Here’s where it gets properly dark. These aren’t secrets. Cloud providers give you dashboards showing your waste. Consultants will happily sell you FinOps services to optimise your spend. Tools exist to identify idle resources and rightsize instances.
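Even the detection side isn’t exotic. A first-pass idle-instance detector against the standard AWS APIs fits on a page. A sketch follows; the 5% CPU threshold and two-week window are arbitrary assumptions, and real rightsizing tools also weigh memory, network, and disk:

```python
# find_idle_instances.py -- a rough first pass at idle detection.
# Flags running instances whose CPU never averaged above 5% in two weeks.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86_400,            # one averaged datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        peak = max((d["Average"] for d in datapoints), default=None)
        if peak is not None and peak < 5.0:
            print(f"{instance_id}: likely idle (peak daily CPU {peak:.1f}%)")
```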
Yet waste barely budges. It was 35% in 2023. It’s still around 32% in 2025. After years of “optimisation” and dedicated teams and expensive platforms, companies have reduced their waste by roughly three percentage points.
Why? Because the problem isn’t technical. It’s organisational. It’s human. And it’s structural.
The Disconnect That’s Costing You Millions
Here’s the core dysfunction: the people who can see the cost problem can’t fix it, and the people who can fix it can’t see the cost problem.
Finance teams watch cloud costs climb and create FinOps groups to understand where the money’s going. These teams generate lovely dashboards showing cost by service, by team, by project. They identify waste. They write reports. They recommend optimisations.
Then nothing happens.
Because the developers who actually provision resources don’t have access to these dashboards. Only 43% of developers can see real-time data on idle cloud resources. Only 39% can see unused or orphaned resources. Just 33% have visibility into whether their workloads are over- or under-provisioned.
The result? 55% of cloud purchasing commitments are based on guesswork. Developers spin up an instance, make their best guess about what size it needs to be, and move on to the next task. The cost implications won’t be visible for weeks, by which time they’re working on something completely different.
It takes an average of 31 days to identify and eliminate cloud waste. A month. That’s how long it takes from “we’re wasting money on this resource” to “we’ve actually turned it off.” In that month, you’ve burned through thousands on resources you knew you didn’t need but couldn’t quite get around to shutting down.
This isn’t malice. It’s not even negligence really. It’s what happens when you split responsibility from visibility. Finance teams who can see costs don’t understand the technical architecture well enough to know what’s safe to shut down. Developers who understand the architecture don’t have cost data integrated into their workflow, so cost becomes someone else’s problem.
FinOps was supposed to bridge this gap. Instead, it created a new silo sitting between two other silos, generating reports nobody acts on.
The Hidden Costs of Elastic Everything
The cloud’s killer feature is also its most expensive one. You can scale instantly. Spin up a thousand servers in minutes. Add storage on demand. It’s brilliant when you need it. It’s ruinous when you’re paying for the capability even though you never use it.
Most companies have predictable workloads. Your Monday morning traffic spike happens every Monday morning. Your end-of-month batch processing runs every end of month. Your development teams work roughly the same hours every week. There’s variance, but it’s not wild unpredictable chaos.
Yet you’re paying cloud prices optimised for wild unpredictable chaos. You’re renting excavators by the hour because theoretically you might need to dig a hole at midnight on a Sunday, even though in practice you dig holes Tuesday through Thursday between 9 and 5.
AWS operates at roughly 30-40% profit margins on cloud services. That’s not a criticism exactly. They built the infrastructure, they maintain it, they should profit from it. But it does mean that for every pound you spend, 30-40 pence is pure profit for Amazon before we even talk about your own waste.
When 37signals moved off AWS, their infrastructure costs didn’t drop by 20% or 30%. They dropped by two-thirds. Same capabilities. Same team. Same reliability. Just different ownership model.
The cloud was never expensive because servers are expensive. Servers are cheap. The cloud is expensive because convenience is expensive, and because flexibility you don’t use still costs money.
The Repatriation Nobody Admits To
The dirty secret of 2025: companies are quietly moving workloads back off the cloud. Not with fanfare. Not with blog posts announcing their departure. Just steadily, workload by workload, bringing things back on-premises or into colocation facilities.
According to recent surveys, 86% of CIOs are planning to repatriate at least some workloads from public cloud. Eighty-six percent. That’s not a fringe movement. That’s a consensus.
Most aren’t doing full exits like 37signals. They’re being strategic. Stable, predictable workloads that run constantly? Those are coming home. AI training workloads that need massive compute for short bursts? Those stay in the cloud. The goal isn’t cloud-free. It’s cloud-appropriate.
Financial services companies are bringing regulated data back on-premises because compliance is easier when you control the infrastructure. Gaming companies are moving to hybrid setups where they own baseline capacity and burst to cloud for launches. Manufacturing firms are pulling IoT data processing back to local data centres because latency matters.
Even Amazon’s own Prime Video team moved its audio-video monitoring service from distributed serverless microservices (Step Functions and Lambda) to a monolith, cutting that workload’s infrastructure costs by over 90%. Amazon. The company that sells serverless architecture for a living. Saved 90% by walking away from the distributed cloud-native design it recommends to everyone else.
If that doesn’t tell you something about when cloud makes sense and when it doesn’t, nothing will.
Why You’re Still Paying
So if cloud is expensive, if waste is endemic, if companies know they’re overpaying, why isn’t everyone leaving?
Because leaving is hard. Genuinely hard, not just “requires effort” hard.
If you built everything cloud-native, you used cloud-native services. Managed databases. Serverless functions. Proprietary APIs. Moving off means rebuilding pieces of your architecture. That’s months of work. Maybe years. All whilst maintaining your existing system.
There’s the skills problem. Your team knows AWS or Azure. They know how to debug cloud-specific issues, how to navigate the console, how to work with cloud-native tools. Bringing infrastructure in-house means either retraining your team or hiring different people.
There’s the capital expenditure problem. Cloud is operational expense. You’re renting. Moving to owned infrastructure means buying servers. That’s capital expenditure. Different budgets. Different approvals. Different accounting. The CFO who’s been complaining about cloud costs might still baulk at a $600,000 equipment purchase, even if it saves millions over five years.
And there’s the fear. What if you’re wrong? What if you move off cloud and suddenly you do need to scale massively? What if something breaks and you don’t have AWS support to call? What if this turns into a career-limiting decision?
Staying on cloud, even whilst bleeding money, is the safe choice. Nobody got fired for choosing AWS. But you might get fired for moving off AWS and having something go wrong.
So companies stay. They hire FinOps teams. They buy optimisation tools. They generate reports about their waste. They know they’re overpaying. They know roughly where the money’s going. They just can’t quite bring themselves to actually change anything fundamental.
It’s organisational inertia dressed up as prudent risk management.
The AI Wildcard
There’s a twist coming that nobody’s quite prepared for. AI workloads are different from everything that came before, and they’re about to make cloud economics even more complicated.
Training large AI models requires enormous compute for relatively short periods. That’s exactly what cloud is good at. Fine-tuning existing models needs substantial but predictable resources. That’s exactly what cloud is expensive at.
In 2024, 31% of organisations were managing AI spend. In 2025, it’s 63%. Companies that thought they had their cloud costs under control are discovering that AI adds entirely new expense categories that don’t replace existing infrastructure spending; they supplement it.
Running AI inference at scale on predictable workloads? The economics point toward owned infrastructure again. But companies are so used to cloud-first thinking that they’re not even considering it. They’re looking at their AI cloud bills, wincing, and asking how to optimise spend within the cloud rather than whether they should be in the cloud at all for these workloads.
Meanwhile, cloud providers are happy to help. AWS, Azure, and Google Cloud are rolling out AI-specific infrastructure, new GPU instance types, managed AI services. Each one makes it a bit easier to run AI in the cloud and a bit harder to run it anywhere else.
The lock-in gets tighter whilst the bills get bigger.
What Actually Works
Some companies are getting this right, and the pattern is clear: they think about infrastructure placement strategically rather than ideologically.
They keep things in the cloud that benefit from cloud. Genuinely variable workloads. New products where demand is unknown. Geographic expansion. Short-term projects. Anything that needs rapid scaling you can’t predict.
They move things out of the cloud that don’t benefit from cloud. Stable, predictable workloads. Regulated data that’s easier to manage on-premises. Latency-sensitive applications. Anything running constantly at known capacity.
They invest in portability. Open standards. Containerisation that isn’t tied to a specific cloud provider. Infrastructure-as-code that can deploy to multiple environments. Data formats that aren’t proprietary. This costs more upfront but avoids lock-in later.
They give developers cost visibility. Not in standalone dashboards they have to remember to log into, but integrated into their deployment pipelines. Cloud costs displayed in pull requests. Real-time feedback on infrastructure decisions. Make cost a technical concern, not just a financial one.
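In practice this can be as crude as a budget gate in CI. A sketch follows; the cost_estimate.json file and its schema are placeholders for whatever your cost-estimation tooling actually emits, not any specific tool’s format:

```python
#!/usr/bin/env python3
# ci_cost_gate.py -- fail the pipeline when an infra change raises
# estimated spend past a limit. Assumes an earlier CI step wrote
# cost_estimate.json; the file name and schema are illustrative only.
import json
import os
import sys

limit = float(os.environ.get("MAX_MONTHLY_DELTA_USD", "250"))

with open("cost_estimate.json") as f:
    delta = json.load(f)["monthly_delta_usd"]

print(f"Estimated monthly cost change: ${delta:+,.2f} (limit ${limit:,.2f})")

if delta > limit:
    print("Cost gate failed: rightsize the change or raise the limit deliberately.")
    sys.exit(1)
```

The threshold matters less than the timing: the cost conversation happens in the pull request, whilst the engineer still has context, rather than in a finance report a month later.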
They set clear ownership. This team owns this cost centre. They have budget authority and cost accountability. No diffusion of responsibility where nobody knows whose job it is to shut down the idle resources.
None of this is revolutionary. It’s all reasonably obvious. The hard part isn’t knowing what to do. The hard part is actually doing it.
The Uncomfortable Truth
The cloud isn’t a bad choice. It’s a choice with specific economics that make sense in specific situations and don’t make sense in others.
For startups with unpredictable growth, the cloud is brilliant. For companies launching new products where demand is unknown, the cloud is brilliant. For workloads that genuinely vary wildly, the cloud is brilliant.
For established companies with stable, predictable workloads running constantly at known capacity, the cloud is expensive. Grotesquely expensive. Paying $3 million annually to rent computers you use the same way every day expensive.
The industry convinced everyone that cloud-first was the only sensible approach, that owning infrastructure was old-fashioned, that anyone not in the cloud was behind the times. Companies adopted this wholesale without doing the actual mathematics on whether it made sense for their specific situation.
Now they’re stuck. Not technically stuck; cloud repatriation is entirely possible. Organisationally stuck. Financially stuck with long-term contracts. Architecturally stuck with cloud-native dependencies. Culturally stuck with teams that only know cloud.
And whilst they’re stuck, they’re wasting billions on resources they’re not using, paying premium prices for flexibility they don’t need, renting computers at a markup when buying would be cheaper.
The cloud is just someone else’s computer. And like anything you rent long-term instead of own, eventually you pay more than it would have cost to buy.
The only question is whether you’ll do the maths before or after you’ve haemorrhaged millions.