ezrast 4 hours ago

Another article that, by the third sentence, namedrops seven different AWS services they want to build their app on and then spends the rest of the argument pretending like that ecosystem has zero in-built complexity. My friend, each one of those services has its own security model, limitations, footguns, and interoperability issues that you have to learn about independently. And you don't even mention any of the operational services like CloudWatch, CloudTrail, VPCs (even serverless, you'll need them if you want your lambdas to hit certain other services efficiently), and so on. Those are not remotely free. Your "real developers" can't figure out how to write a YAML document, but you trust them to manage infrastructure-as-code for motherloving API Gateway? Absolutely wild.

Kubernetes and AWS are both complex, but one of them frontloads all the complexity because it's free software written by infrastructure dorks, and one of them backloads all of it because it's a business whose model involves minimizing barriers to entry so that they can spring all the real costs on you once you're locked in. That doesn't mean either one is a better or worse technical solution to whatever specific problem you have, but it does make it really easy to make the wrong choice if you don't know what you're getting into.

As for the last point, I don't discourage serverless solutions because they make less work for me, I do it because they make more. The moment the developers decide they want any kind of consistency across deployments, I'm stuck writing or rewriting a bunch of Terraform and CI/CD pipelines for people who didn't think very hard about what they were doing the first time. They got a PoC working in half an hour clicking around the AWS console, fell in love, and then handed it to someone else to figure out esoterica like "TLS termination" and "logs" and "not making all your S3 buckets public by accident."
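
To make that last point concrete, here's a rough sketch of the safe-by-default settings someone ends up codifying once the PoC lands in their lap. It's AWS CDK in TypeScript rather than the Terraform I'd actually write, and every name here is invented:

    import { Stack, StackProps } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    // Hypothetical stack: the boring defaults that prevent the
    // "all your S3 buckets public by accident" failure mode.
    export class HardenedBucketStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);
        new s3.Bucket(this, 'AppData', {
          blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL, // no accidental public buckets
          encryption: s3.BucketEncryption.S3_MANAGED,        // encrypt at rest
          enforceSSL: true,                                  // TLS-only access
          serverAccessLogsPrefix: 'access-logs/',            // the "logs" esoterica
        });
      }
    }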

  • moltar 3 hours ago

    I can do all of these stacks well, including the serverless approach described, pure ECS Fargate, or Kubernetes.

    In my experience, Kubernetes is the most complex, with the most footguns and the most churn.

    • cybrox 3 hours ago

      Is it? For a fair comparison with serverless you'd almost have to use AWS EKS on Fargate, and with that there's a lot less operational overhead. You still have to learn ingress, logging, networking, etc., but you'd have to do that with serverless as well.

      I'd argue that between AWS serverless and AWS EKS on Fargate, the initial complexity is about the same. But serverless is a lot harder to scale cost-efficiently without accidentally going wild with function or SNS loops.
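
      One concrete guardrail for the loop problem, as a sketch (AWS CDK in TypeScript; the function name and asset path are invented): cap a function's reserved concurrency so a runaway loop throttles instead of scaling into a big bill.

          import { Stack } from 'aws-cdk-lib';
          import { Construct } from 'constructs';
          import * as lambda from 'aws-cdk-lib/aws-lambda';

          export class WorkerStack extends Stack {
            constructor(scope: Construct, id: string) {
              super(scope, id);
              new lambda.Function(this, 'Worker', {
                runtime: lambda.Runtime.NODEJS_20_X,
                handler: 'index.handler',
                code: lambda.Code.fromAsset('dist/worker'),
                // Hard ceiling on parallel instances: an accidental
                // function/SNS loop gets throttled instead of fanning out.
                reservedConcurrentExecutions: 10,
              });
            }
          }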

      • dontlaugh 2 hours ago

        ECS Fargate is simple to set up and scales just fine.

        • NomDePlum an hour ago

          This is my experience too. We served fairly complex data requests, around 200,000 per day, for mobile and commercial users, using ECS Fargate and Aurora Postgres as our main technologies, and it coped fine.

          We used Golang, optimised our queries and data structures, and rarely needed more than 2 of whatever the smallest ECS Fargate task size is; when we did need more, it scaled in and out without any issues.
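
          For anyone curious what that shape looks like as code, here's a rough sketch in AWS CDK (TypeScript). It's not our actual infrastructure code and every name is invented:

              import { Stack } from 'aws-cdk-lib';
              import { Construct } from 'constructs';
              import * as ecs from 'aws-cdk-lib/aws-ecs';
              import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

              export class ApiStack extends Stack {
                constructor(scope: Construct, id: string) {
                  super(scope, id);
                  const api = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Api', {
                    cpu: 256,            // smallest Fargate task size
                    memoryLimitMiB: 512,
                    desiredCount: 2,     // what we ran day to day
                    taskImageOptions: {
                      // hypothetical image name
                      image: ecs.ContainerImage.fromRegistry('example/api:latest'),
                    },
                  });
                  // Scale out on CPU if load ever exceeds two small tasks.
                  api.service
                    .autoScaleTaskCount({ minCapacity: 2, maxCapacity: 10 })
                    .scaleOnCpuUtilization('Cpu', { targetUtilizationPercent: 60 });
                }
              }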

          I realise that isn't "at scale" for some, but it's probably a relatively common point for a lot of use cases.

          We put some effort into maintenance, mostly ensuring we kept on an upgrade path but barely touched the infrastructure code other than that.

          One thing we did do was limit the number of other AWS services we adopted and kept it fairly basic. Seen plenty of other teams go down the rabbit hole.

mnahkies 3 hours ago

I don't think the author has seen k8s done well. They imply that serverless is necessary to achieve a "you build it you run it" setup, but that's false.

We operate in a self-serve fashion predominantly on kubernetes, and the product teams are perfectly capable of standing up new services and associated infrastructure.

This is enabled through a collection of opinionated Terraform modules and Helm charts that pave a golden path for our typical use cases (HTTP server, queue processor, etc.). If they want to try something different or new, they're free to, and if it proves successful we'll incorporate it back into the golden path.
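
The golden-path idea in miniature (sketched here as plain TypeScript, since I can't paste our actual Terraform/Helm, and every name is invented): product teams fill in a handful of knobs, and the platform's opinionated defaults come along for free.

    // Hypothetical "golden path" wrapper. The team supplies a few knobs;
    // the org's opinionated defaults (probes, metrics, ingress, alerts)
    // are baked in. Our real version is Terraform modules + Helm charts.
    interface HttpServiceProps {
      name: string;
      image: string;
      replicas?: number; // sensible default applied if omitted
    }

    function httpService(props: HttpServiceProps): void {
      const replicas = props.replicas ?? 2;
      // ...here the real module would render the Deployment, Service,
      // and Ingress manifests with standard labels and resource limits...
      console.log(`deploying ${props.name} (${props.image}) x${replicas}`);
    }

    // A product team's entire "infrastructure" for a typical service:
    httpService({ name: 'orders-api', image: 'registry.example.com/orders-api:1.2.3' });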

As the author somewhat acknowledges, the answer isn't k8s or serverless, but both. Each has its place, but as a general rule of thumb, if it's going to run more than about 30% of the time, it's probably more suitable for k8s, assuming your org has that capability.

I think it's also worth noting that k8s isn't the esoteric beast it was ~5-8 years ago - the managed offerings from GCP/AWS and projects like ArgoCD make it trivial to operate and maintain reliable, secure clusters.

kryptn 5 hours ago

> To a k8s engineer, serverless means “no servers”!

I'd assume the majority of people working with k8s know what serverless is and where Functions as a Service fit more generally.

The rest of the post just seems to be full of strawman arguments.

Who is this Kubernetes-engineer villain? It sounds like a bad coworker at a company with a toxic culture, or a serverless advocate complaining at a bar after a bad meeting.

> k8s is great for container orchestration and complex workloads, while serverless shines for event-driven, auto-scaling applications.

> But will a k8s engineer ever admit that?

Of course. I manage k8s clusters in AWS with EKS. We use Karpenter for autoscaling. A lot of our system is Argo Workflows, but we've also got a dozen or so services running.

We also have some large Step Functions workflows written by a team that chose to use Lambda, because AWS can handle that kind of scaling much better than we would have wanted to in k8s.
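
For flavor, the rough shape of that in AWS CDK (TypeScript), with invented names and paths; the point is that Step Functions plus Lambda scale without us planning any cluster capacity.

    import { Stack, Duration } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
    import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

    export class PipelineStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);
        const processItem = new lambda.Function(this, 'ProcessItem', {
          runtime: lambda.Runtime.NODEJS_20_X,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('dist/process-item'),
          timeout: Duration.seconds(30),
        });
        // AWS absorbs the fan-out; no node pools for us to size.
        new sfn.StateMachine(this, 'Pipeline', {
          definitionBody: sfn.DefinitionBody.fromChainable(
            new tasks.LambdaInvoke(this, 'Process', { lambdaFunction: processItem })
          ),
        });
      }
    }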

  • aduwah 4 hours ago

    I think what happened is: "ChatGPT, generate me some ragebait for HN about serverless and a k8s engineer."

    • renatovico 4 hours ago

      Haha, busted! Trying a new shiny thing :)

  • teekert 4 hours ago

    Can't we think of another name for "serverless"? Like OS-less? Stackless? By now it's pretty well known, but it's confusing to anyone hearing it for the first time. It's like calling a city "buildingless": oh, but when you ask for a building, it's just there.

    • sirtaj 3 hours ago

      Perhaps CGI 2.0, or EvenFasterCGI.

KronisLV an hour ago

> If you’re arguing with a k8s purist, you’ll never convince them.

I feel like the whole article amounts to constructing a strawman and arguing against it. The way I see it, there can be advantages and disadvantages to either approach.

If you really find a good use case for serverless, then try it out, summarize the good and the bad and go from there. Maybe it's a good fit for the problem but not for the team, or vice versa. Maybe it's neither. Or maybe it works and then you can implement it more. Or maybe you need to value consistency over an otherwise optimal solution so you just stick with EC2.

Most of the deployments I've seen don't really need serverless, nor do they need Kubernetes. More often than not, Docker Swarm is more than enough from a utilitarian perspective and often something like Docker/Compose with some light Ansible server configuration is also enough. Kubernetes seems more like the right solution when you have strong familiarity and organizational support for it, much like with orgs that try to run as much of their infra as possible on a specific Linux distro.

It's good when you can pick tech that's suited for the job (that you have now and in the near future, vs the scale you might need to be at in N years), the problems seem to start when multiple options that are good enough meet strong opinions.

I will admit that I do quite like containers for packaging and running software, especially since they're the opposite of vendor lock-in (OCI).

trynumber9 5 hours ago

>As long as you keep the cost down, you will never need to move away.

Yes, as long as the $2 trillion American corporation beholden to shareholders to maximize profits doesn't try to milk its captive customers, you'll be fine. Shouldn't be a problem.

  • onli 5 hours ago

    You shouldn't repeat the shareholder-value myth; it is not true. See https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?arti... for example.

    Whether that means that Amazon won't try to squeeze profits is a different question.

    • trynumber9 5 hours ago

      I didn't, as far as I'm aware. They are indeed beholden to their shareholders, and I said nothing about "value". Investors desire shares of profitable companies with consistent growth. As an AWS customer, you are the consistent growth: first as a new customer, a statistic paraded to investors; later through price increases bringing real revenue.

      Brushing off lock-in is a short-term luxury.

    • mrkeen 3 hours ago

      I can't quite follow the article. Is it trying to argue that it's bad when it happens, or that it doesn't happen, or both?

      • onli 3 hours ago

        I understood the parent as repeating the claim that companies are beholden to their shareholders to maximize (short-term) profit. The article I linked argues from several angles that this is a myth: companies are not forced (for example by law, as repeaters of the myth often claim) to maximize short-term profit for shareholders. They can aim for different values and strategies.

    • OrderlyTiamat 4 hours ago

      If society at large, and judges in particular, think it's true, then it's true. A partially socially constructed world is like that.

    • sa-code 4 hours ago

      While that's an interesting read, reality would disagree.

snicker7 2 days ago

We literally had a major us-east-1 incident on AWS today. The only thing we can do is sit on our butts and wait for it to end so that we can clean up. This happens every few months. I am unimpressed with the "thousands of engineers" argument.

  • swiftcoder 5 hours ago

    Even if you had deployed Kubernetes into us-east-1, you'd likely still be down during the incident

hn_throw2025 an hour ago

> But the real question is, why will you migrate? It is not like AWS is like Orkut, which can be shutdown overnight. As long as you keep the cost down, you will never need to move away.

Seems like a shallow take. Prices could rise and reliability fall, but you’d still be married to them.

bob1029 2 hours ago

Serverless is such a trap. The vendor's need to standardize the execution model is poorly aligned with the developer's need for control and stability over time. I gave Azure Functions a genuine try and was greeted with piles of deprecation notices about the in-process execution model just a few months in. Perhaps AWS is better (I suspect it is), but the concern remains. I don't know how anyone is driving meaningful business value with the amount of distraction these ecosystems bring.

I also don't see the scalability argument. Being able to own a whole CPU indefinitely means I can take better advantage of its memory architecture over time. Caches actually have meaning. Latency becomes something you can control. Sixty seconds of full load that a t2.large running proper software handles could cost $10-20 if the same work were pushed through AWS Lambda. The difference is truly absurd.

TCO-wise, serverless is probably the biggest liability in any cloud portfolio, just short of the alternative database engines and "lakes".

solatic 4 hours ago

The promise of both Kubernetes and serverless was to abstract away the infrastructure from the developer, who can stick to writing line-of-business code. In both cases, companies end up needing to hire infrastructure teams to manage the underlying infrastructure.

The author is making a moot argument that doesn't resonate. The real struggle is about steady-state load versus spiky load. The best place to run steady-state load is on-prem (it's cheapest). The best place to run spiky workloads is in the cloud (the cheapest way of eliminating exhausted-capacity risk). Then you have crazy cloud egress networking costs throwing a wrench into things. Then you have C-suite folks concerned about appearances and trading off stability (functional teams) versus agility (feature teams), with very strong arguments for treating infrastructure teams not as feature teams ("platform teams") but as functional teams (the "Kubernetes team" or the "serverless team").

And yes, there would be a "serverless" team, because somebody has to debug why DynamoDB is so expensive (why is there a table scan here...?!) and cost-optimize provisioned throughput, and somebody has to look at those ECS Fargate steady-state costs and wonder whether managing something like auto-patching Amazon Linux is really that hard considering the cost savings. At the end of the day, infrastructure teams are cost centers, and knowing how to reduce costs while supporting developer agility is the whole game.

biot 3 hours ago

> Serverless Advocate: Yes, but instead of paying for infrastructure overhead and hiring 5–10 highly specialized k8s engineers, you pay AWS to manage it for you.

This argument lost me. If you’re running your own k8s install on top of servers, you’re doing it wrong. You don’t need highly specialized k8s engineers. Use your cloud provider’s k8s infrastructure, configure it once, put together a deploy script, and you never have to touch yaml files for typical deploys. You don’t need Lambda and the like to get the same benefits. And as a bonus, you avoid the premium costs of Lambda if you’re doing serious traffic (like a billion incoming API requests/day).

Every developer should be able to deploy at any time by running a single command to deploy the latest CI build. Here’s how: https://engineering.streak.com/p/implementing-bluegreen-depl...
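
For the curious, the single command can be as thin as a Node script. This is a TypeScript sketch with hypothetical deployment and registry names; the linked post adds the blue/green part:

    // deploy.ts: point the cluster at a CI build with one command.
    // Names are hypothetical; the real script does blue/green shifting.
    import { execSync } from 'node:child_process';

    const tag = process.argv[2] ?? 'latest';
    const image = `registry.example.com/api:${tag}`;

    // kubectl swaps the Deployment's image; Kubernetes handles the rollout.
    execSync(`kubectl set image deployment/api api=${image}`, { stdio: 'inherit' });
    execSync('kubectl rollout status deployment/api', { stdio: 'inherit' });
    console.log(`deployed ${image}`);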

  • cybrox 3 hours ago

    Also: as if you didn't need "5-10 highly specialized engineers" (neither needs that number, but alas) to get all the AWS serverless services to coexist and scale cost- and compute-efficiently with proper monitoring, logging, permissions, tracing, etc.

ducksinhats 3 hours ago

>whynotboth.jpg

Strangely, there's no mention of Knative in this thread. There are a lot of tradeoffs in going full serverless, and the promised reduction in infra costs/wages doesn't always pan out.

It's a fairly mature CNCF project at this point and makes running your own serverless setup quite simple.

I doubt the fight between microservices and batch processing will end any decade soon, but it's easy enough to run both on the same infra, which, most importantly, you control.

Wouldn't call it the best of both worlds, but it's reasonable enough to offer you the option of both worlds.

https://knative.dev

dovys 6 hours ago

You are trying to convince the team they don't need to exist while their livelihood depends on the opposite.

  • thrwaway55 5 hours ago

    This is an issue of comp, no? Given the chance, I'd delete my own team if it made sense, because we are all shareholders as well.

    • dovys 5 hours ago

      If your company is big enough to have a dedicated k8s team, chances are deleting an entire team won't directly boost your comp. Better to sell the entire endeavor as a change of responsibilities: from a team that manages k8s to one that's responsible for uptime. Set constraints and let the team find the best tool for the job.

hnarayanan 5 hours ago

I like how, in this context, k8s is considered the raw metal thing. :)

  • madduci 4 hours ago

    The assumption is that you can always install k8s on bare metal if cloud providers aren't good anymore.

QuinnyPig 3 days ago

I want to like this, but it really needs an editing pass to make it not grating to read.

layoric 5 hours ago

I’m not a fan of k8s, but “serverless” IMO is not a good trade either, especially AWS Lambda. ECS with an ALB is probably an OK middle ground, but still expensive, even with heavy use of Spot instances. Cloud isn’t going to get cheaper...

  • politelemon 5 hours ago

    It is certainly cheaper than the engineer time it takes to run and manage k8s at enterprises. It's a perfectly good trade.

    • hbogert 2 hours ago

      Aren't we exaggerating the whole "Kubernetes is difficult" thing a bit? Unix is also more complex than DOS; I'm still glad we didn't get stuck with the latter because it was "good enough".

florbnit 5 hours ago

K8s team insists k8s team needs to exist. Just fire them already, hire a consultant to set you up if you can’t do it yourself. And the difference in cost will certainly be covered by the savings on the k8s team.

No need to worry about them, they’ll easily get a job at Amazon running the infrastructure that you will use instead of running the infrastructure you would have built for them.

  • hbogert 2 hours ago

    Letting consultants do this as a one-off is the worst thing you could do. If you can't do it yourself, go fully managed with your cloud provider, or hire a party that maintains k8s on a subscription basis.

elric 4 hours ago

Huzzah for promoting more vendor lock-in.

Are there any open standards for "serverless" yet?

moomin 3 hours ago

Is it just me or do these arguments not even convince a disinterested bystander? I particularly dislike “stupidity should be costly”. All of us are stupid sometimes. I’d rather that didn’t tank our entire firm.

  • elygre 3 hours ago

    That was the exact point where I stopped reading, to be honest.

politelemon 5 hours ago

Both in the conversations and in these comments, I see two sets of people talking past each other. Immediately that phrase comes to mind: "It is difficult to get a man to understand something when his salary depends on his not understanding it."

Embracing k8s is a monumental decision for a company, and from that point forward it becomes the galactic center of everything you do. Any new solution you introduce needs to fit into the k8s ecosystem you will have created. It insists upon itself, and it insists upon people's time. It is also a lock-in of its own kind, and an insidious one too. You are not going to escape lock-in; the only thing you can do is accept it, or perform an appropriate set of mental gymnastics to convince yourself that it isn't one.

Many of us are technologists, and part of our role is to understand that cost and impact. After 10 years, hopefully it is evident by now, and we as an industry have learned that k8s is not something to be taken on lightly (spoiler: we haven't learned squat).

It absolutely is possible to go managed well. Forget serverless itself; I mean managed services, where you work with what a cloud provider gives you. The point of managed services isn't just cost; it is reducing the amount of time and effort your humans are spending. Part of using managed services is actually understanding what they do, rather than going in blindly and then acting surprised when they do something else. Small functions: Lambda. Containers: ECS Fargate. Databases: RDS. The service costs and boogeymen often trotted out are irrelevant in the face of human time; if your humans are having to maintain and manage something, that is wasted time, and they are not delivering actual things of value.

renatovico 4 hours ago

It's not about collaboration or finding the right tool for the job, but rather about "my tech is better than yours."

K8s and Lambda serve different scopes and use cases. You can adopt a Lambda-style architecture using tools like Fargate. But if a company has already committed to k8s, and this direction has been approved by engineering leadership, then pushing a completely different serverless model without alignment is a recipe for friction.

IMHO, the author seems to genuinely want to try something new, and that's great. But they may have overlooked how their company's architecture and team dynamics were already structured. What comes across in the post isn't just a technical argument; it reads like venting frustration after failing to get buy-in.

I've worked with "Lambda-style" architectures myself. And yes, while there are limitations (layer size, deployment package limits, cold starts), the real charm of serverless is how close it feels to the old CGI-bin days: write your code, upload it, and let it run. But of course, that comes with new challenges: observability, startup latency, vendor lock-in, etc.
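
That CGI-bin feel, concretely (a minimal sketch in TypeScript; the handler and types are just the standard shape, nothing from the post): write the handler, zip it, upload it, and it runs.

    // The whole "application" in the CGI-bin spirit: one exported handler.
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

    export const handler = async (
      event: APIGatewayProxyEvent
    ): Promise<APIGatewayProxyResult> => {
      const name = event.queryStringParameters?.name ?? 'world';
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'text/plain' },
        body: `hello, ${name}\n`,
      };
    };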

On the other side, the engineer in this story could have been more constructive. There's clearly a desire from the dev team to experiment with newer tools. Sometimes the dev just wants to try out that "cool shiny thing" in a staging environment, and that should be welcomed, not immediately shut down.

The biggest problem I see here is culture. The author wanted to innovate, but did it by diminishing the current status quo. The engineer felt attacked, and the conversation devolved into ego clashes. When DevOps loses the trust of developers, it creates long-term instability and resentment within teams.

Interestingly, k8s itself was born from that very tension. If you read Beautiful Code or the original Borg paper (which inspired it), you'll see it was designed to abstract complexity away from developers, not dump it on their heads in YAML format.

At the end of the day, this shouldn’t be a religious debate. Good architecture comes from understanding context, constraints, and cooperation, not just cool tech.

vasco 4 hours ago

Reminds me of the "this is what devops means" posts of the old days. Just another guy who has found the one right way of doing things while everyone around them is wrong.

  • tyingq 3 hours ago

    Yep. And even if there's some chance the person is right... they're right for one very specific use case, scenario, team, etc. With no guarantee this setup is good for the next thing.

user32489318 an hour ago

“A is better than B.” B is “bad, bro!” for vaguely relevant reasons; therefore, “A is the best possible solution.”

JojoFatsani 2 days ago

[flagged]

  • tbrownaw 5 hours ago

    Perhaps there are legitimate non-fear-based reasons to not like it?