🔥 Hot Take

Serverless: The Future of Cloud, or Just a More Expensive Way to Run a Container?


Serverless was supposed to free us from infrastructure. Instead, we got vendor lock-in, unpredictable performance, and bills that grow faster than the features they fund.

⚡
Spicy Opinion Alert: This is a deliberately provocative take. We're here to start conversations, not end them.

You upload your code. It runs. You only pay for what you use. Infinitely scalable. No servers to manage. The serverless pitch was a masterpiece of marketing that promised to finally free developers from infrastructure hell.

What we got instead was a different kind of hell, just with better marketing.

Serverless didn’t remove servers—it just hid them behind your credit card and locked you into a vendor’s ecosystem.

The cold start problem is the first red flag no one wants to admit. Your user clicks a button. Nothing happens for 500 milliseconds. A second passes. Two seconds. Your function finally wakes up and processes their request. This isn’t an edge case—it’s the default behavior for any function that hasn’t been invoked recently.

The workarounds are absurd. You implement “keep-warm” strategies where you pay AWS to ping your function every few minutes, effectively paying to defeat the entire “pay-for-what-you-use” model. You’re now paying for idle compute time on top of invocation fees, turning your serverless savings into a premium tax.
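A typical keep-warm hack looks something like this sketch. The handler name and the shape of the scheduled ping event are assumptions for illustration; in practice the ping usually comes from an EventBridge rule firing every few minutes:

```python
import json

# Sketch of a Lambda-style handler with a keep-warm short circuit.
# The "source": "keep-warm" event shape is an assumed convention, not
# an AWS standard -- real setups invent their own marker field.
def handler(event, context=None):
    if event.get("source") == "keep-warm":
        # Do nothing useful. This invocation exists purely to keep the
        # runtime resident -- and it is billed like any other.
        return {"statusCode": 204, "body": ""}
    # Normal request path.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Note the irony: every one of those 204 responses is paid compute whose only purpose is to defeat pay-per-use billing.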

Then comes the vendor lock-in that catches everyone by surprise. Your code isn’t serverless—it’s AWS-specific. S3 triggers, DynamoDB streams, IAM roles, Lambda-specific error handling. Every convenience AWS provided to make your job easier is now a chain binding you to their platform. Moving to Google Cloud or Azure isn’t a refactor—it’s a complete rewrite.
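To see how deep the coupling goes, here's a minimal sketch of an S3-triggered handler. The nested event layout follows AWS's documented S3 notification format (bucket and key values are illustrative); nothing about this structure exists on Google Cloud or Azure:

```python
# Sketch of an S3-triggered Lambda handler. The Records -> s3 -> bucket/object
# nesting is AWS-specific: GCS and Azure Blob events look nothing like it,
# so a cloud migration means rewriting every handler's input parsing.
def handle_s3_upload(event, context=None):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(f"processed s3://{bucket}/{key}")
    return results
```

The business logic here is one line; the other lines are vendor-shaped plumbing, and that plumbing is what you rewrite when you leave.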

The cost model is a lie. Serverless is sold as “pay-per-use,” implying you only pay for what you need. In practice, it’s more like “pay-per-use-unless-your-usage-is-consistent, in which case you overpay dramatically.”

Run a function 1 million times a day at $0.00001667 per invocation (roughly one GB-second of Lambda compute). Do the math: that's about $16.70 a day, or $500 a month. Now compare that to a $20/month container that handles those same 1 million invocations with zero additional cost. For any workload with consistent traffic, serverless can be more than an order of magnitude more expensive.
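Working that math out in full (assuming the per-invocation rate above, which corresponds to a 1 GB function running for about one second):

```python
# Back-of-the-envelope comparison using the figures from the text.
invocations_per_day = 1_000_000
cost_per_invocation = 0.00001667   # ~1 GB-second of Lambda compute

lambda_daily = invocations_per_day * cost_per_invocation
lambda_monthly = lambda_daily * 30
container_monthly = 20.0           # flat monthly fee

print(f"Lambda:    ${lambda_monthly:,.2f}/month")   # $500.10/month
print(f"Container: ${container_monthly:,.2f}/month")
print(f"Ratio:     {lambda_monthly / container_monthly:.0f}x")  # 25x
```

Per-request fees, data transfer, and CloudWatch would push the Lambda figure higher still; free-tier credits would pull it down a little. Either way, the gap is real.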

And that’s before you factor in the monitoring bill. AWS CloudWatch logs for a serverless application can easily cost more than the compute itself. You’re paying to debug a distributed system of hundreds of tiny functions, each generating telemetry, each creating opportunities for things to go wrong in ways that are invisible until you’re looking at your bill.

The serverless success story obscures a fundamental failure: it optimized for the vendor’s problem, not yours.

For AWS, serverless is genius. They sell you compute without the responsibility of managing infrastructure. They lock you in so completely that leaving is economically painful. They charge you for observability tools to make their black box visible. It’s a brilliant business model.

For developers and organizations, it’s often a trap.

The cold, hard truth: serverless is excellent for a very specific use case—asynchronous, event-driven workloads with unpredictable traffic. It’s perfect for the function that resizes images when files are uploaded to S3. It’s perfect for webhooks that fire sporadically. It’s perfect for batch jobs that run at odd hours.

For everything else—your core API, your background workers, anything with consistent traffic or strict latency requirements—the boring, reliable, portable container is a better choice.

Containers give you performance predictability (no cold starts), cost predictability (a fixed monthly bill), and architectural freedom (Docker runs the same everywhere). A container is still the universal standard for portable software deployment.

The dogma that we're all inevitably moving toward a serverless-first future is just the cloud vendors' marketing strategy disguised as technological progress.

The best modern architectures don’t choose between serverless and containers. They use containers for what containers are good at (core services with consistent load) and serverless for what serverless is good at (event-driven glue code).

Stop treating serverless as the default, modern choice. It’s a specialized tool that solves a specific problem elegantly. Use it for that problem. For everything else, reach for the technology that gives you control over your performance, your costs, and your future.

The vendors want you serverless-first because it locks you in and maximizes their revenue. What you should want is architecture-first: choose the tool that actually fits your problem, not the tool that fits the vendor’s business model.