
You Are Already Paying for the Server. Stop Paying Someone Else to Run It.

Vanilla Kubernetes in the cloud is not a radical move. It is the obvious one nobody told you about.

By Catalin Lichi · Sugau


This is not an article about leaving the cloud.

You are not going to wake up tomorrow and buy servers. Your procurement process alone would take six months. Your board would have questions. Your engineers would be nervous. I understand all of that, and I am not asking you to do any of it.

What I am asking you to do is think carefully about what you are actually buying when you pay your cloud bill — and whether you could buy less of it, on the same infrastructure you already have, without changing a single thing about where your workload runs.

The answer, for most organisations, is yes. And the mechanism is simpler than the people selling you managed services would prefer you to believe.


What you are actually renting

When you run a workload in AWS or GCP or Azure, you are renting compute — CPU, RAM, disk, network. That is the foundational transaction. Everything else is optional.

The cloud providers figured out, some years ago, that the margin on raw compute is thin and getting thinner. Customers comparison-shop. Spot instances, reserved instances, committed use discounts — the market for raw compute is competitive and the prices show it.

Managed services are a different business. When you use RDS instead of running PostgreSQL yourself, you are no longer buying compute. You are buying compute plus an operational wrapper, priced as a single unit at a margin that does not appear on any comparison page. The wrapper is convenient. It is also expensive in ways that compound quietly over time.

The question nobody asks at procurement time is this: what exactly is in the wrapper, and could your engineering team provide it themselves?

In most cases, the answer is yes. In most cases, they already have the skills. What they lack is the infrastructure pattern that makes self-hosting straightforward — and that pattern has existed, in mature and production-tested form, for several years now.

It is called Kubernetes. And you are probably not using it to its full potential.


What vanilla Kubernetes actually means

Kubernetes is an orchestration platform. You describe the workloads you want to run, and Kubernetes runs them — scheduling containers onto available compute, restarting failures, managing networking between services, handling configuration and secrets. It was designed at Google to run at a scale most organisations will never approach, and it has been open-source and free since 2014.
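The declarative model is concrete enough to show in a few lines. A minimal sketch of a Deployment manifest — the name, labels, and image are illustrative, not taken from any real system:

```yaml
# Hypothetical example: a three-replica web service. You declare the
# desired state; Kubernetes keeps three copies running, rescheduling
# and restarting any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # illustrative image
          ports:
            - containerPort: 80
```

This file means the same thing on any conformant cluster — managed or vanilla, cloud or bare metal.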

When I say vanilla Kubernetes, I mean exactly that. Not EKS. Not GKE. Not AKS. Not any managed Kubernetes product from any cloud provider. The open-source distribution, installed and operated directly on cloud virtual machines you already control.

The immediate objection is predictable: that sounds harder than using EKS. Let me address it directly.

EKS manages the Kubernetes control plane for you — the API server, the scheduler, the etcd cluster that stores state. This is genuinely useful, and I am not dismissing it. But it comes with constraints. EKS determines the upgrade cadence. EKS determines which networking plugins are supported. EKS integrates naturally with AWS services in ways that are convenient in the short term and binding in the long term. And EKS costs money — a flat fee per cluster per hour, before a single workload runs.
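The flat fee is easy to put a number on. A sketch of the arithmetic, assuming the commonly published rate of $0.10 per cluster per hour — verify against current AWS pricing, since this figure is an assumption, not a quote:

```python
# Assumed EKS control-plane fee: $0.10 per cluster per hour.
# Check current AWS pricing before relying on this number.
FEE_PER_CLUSTER_HOUR = 0.10
HOURS_PER_YEAR = 24 * 365

def annual_control_plane_fee(clusters: int) -> float:
    """Yearly EKS control-plane cost before a single workload runs."""
    return clusters * FEE_PER_CLUSTER_HOUR * HOURS_PER_YEAR

# Three clusters (dev, staging, prod) in control-plane fees alone.
# A kubeadm control plane on VMs you already pay for has no such line item.
print(round(annual_control_plane_fee(3), 2))  # → 2628.0
```

Small in isolation, but it is a fee for the privilege of not operating three processes your team could learn to operate once and run everywhere.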

A vanilla control plane deployed with kubeadm or k3s or Talos on three virtual machines you already own costs nothing beyond the compute you are already paying for. The operational overhead of maintaining it is real and should be acknowledged honestly — but it is finite, learnable, and once learned, transferable to any environment you ever run Kubernetes in. Including your own hardware, if that conversation ever becomes relevant.


The cloud is already Kubernetes

Here is something the managed services marketing never says out loud.

GCP runs on Borg — the internal Google cluster manager whose design Kubernetes was built to bring to the world as open source. AWS and Azure run their own equivalents: vast internal orchestration systems managing hundreds of thousands of Linux machines, scheduling workloads, restarting failures, handling networking. The cloud is not a mysterious proprietary system. It is Linux machines in data centres, orchestrated at scale, by systems that are conceptually identical to Kubernetes.

Every managed service they sell you — RDS, ElastiCache, MSK, OpenSearch — runs on that same orchestrated Linux infrastructure. You are paying a premium to not use the open version of the tool that their own systems are built on.

Vanilla Kubernetes does not add a foreign layer on top of the cloud. It removes the toll booth placed between you and what the cloud already is. You are not fighting the infrastructure. You are using it as the people who built it actually intended.


What you stop paying for

The financial case is not complicated once you look at it directly.

A mid-size organisation running a production application typically consumes some combination of the following managed services: a relational database, a caching layer, a message queue, possibly a search index. In AWS terms: RDS, ElastiCache, SQS or MSK, OpenSearch. Each one carries a managed services premium over the raw compute cost of running the equivalent open-source software yourself.

PostgreSQL, operated on Kubernetes via CloudNativePG, delivers the core of what RDS delivers — automated failover, streaming replication, point-in-time recovery, connection pooling via PgBouncer — on compute you already own, with no per-hour service premium. Redis on Kubernetes with Sentinel for high availability replaces ElastiCache. Kafka on Kubernetes replaces MSK. OpenSearch on Kubernetes replaces the managed OpenSearch service.
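To make the PostgreSQL case concrete, here is a sketch of what the CloudNativePG side looks like. The cluster name and storage size are illustrative; consult the operator's documentation for the full specification:

```yaml
# Hypothetical three-instance PostgreSQL cluster managed by the
# CloudNativePG operator: one primary plus two streaming replicas,
# with failover handled automatically by the operator.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 3
  storage:
    size: 100Gi
```

A dozen lines of declared intent, standing in for the operational wrapper you currently rent by the hour.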

None of these are experimental. All of them are running in production at organisations that examined their bills honestly and made a decision. The software is mature. The Kubernetes operators that manage them are mature. The skills exist in the market — in many cases they exist inside your own engineering team, underutilised because the path of least resistance at procurement time was to click the managed service button.

The savings vary by workload and scale. For smaller deployments the absolute number may not be dramatic. For organisations running significant data infrastructure the difference between managed and self-hosted, calculated over a year, is typically measured in the tens of thousands. Sometimes more. The number is knowable — your cloud bill contains every line item needed to calculate it.
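The calculation itself fits in a few lines. A hedged sketch — every price below is a placeholder, to be replaced with the actual line items from your own bill:

```python
# All figures are placeholders, not real quotes. Substitute the
# monthly line items from your own cloud bill.
managed = {              # what the bill shows today, $/month
    "RDS": 2400.0,
    "ElastiCache": 900.0,
    "MSK": 1800.0,
    "OpenSearch": 1500.0,
}
self_hosted = {          # compute + storage for the same workloads, $/month
    "PostgreSQL on k8s": 800.0,
    "Redis on k8s": 300.0,
    "Kafka on k8s": 700.0,
    "OpenSearch on k8s": 600.0,
}

# The annual gap between the managed premium and raw compute.
annual_gap = 12 * (sum(managed.values()) - sum(self_hosted.values()))
print(f"annual gap: ${annual_gap:,.0f}")  # → annual gap: $50,400
```

The self-hosted column should also carry an honest estimate of the engineering hours spent operating the stack; even with that added, the gap at these illustrative scales rarely closes.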


The reversibility argument

Here is the part that matters most for risk-averse organisations, and it is the part that is least often made clearly.

This is reversible.

If you run PostgreSQL on Kubernetes in AWS today and decide next quarter that you want RDS back, you migrate your data and switch your connection strings. The cloud provider will take you back without complaint. Nothing about running vanilla Kubernetes on cloud compute burns any bridges or forecloses any options.

The reverse is not equally true. Every month you run RDS, you accumulate tooling, runbooks, operational muscle memory, and institutional knowledge tied to a proprietary interface. Leaving becomes incrementally harder — not because the data cannot move, but because everything built around the assumption that RDS is permanent has to be rebuilt around something else.

Optionality has value. The ability to change your mind, to move workloads, to renegotiate with your cloud provider from a position of genuine independence — this is worth something on its own, independent of any cost calculation. Vanilla Kubernetes preserves it. Managed services quietly erode it.


What this requires from your organisation

Honesty demands that I say this plainly: self-hosting is not free. It requires engineering time to set up, engineering judgment to operate, and engineering discipline to maintain. These are not trivial requirements.

What they are not is exotic. The skills required to run PostgreSQL on Kubernetes, to maintain a Kubernetes cluster on cloud VMs, to operate Redis with proper persistence and failover — these are standard infrastructure engineering skills. They are taught, certified, and available in the market. If your organisation does not have them internally, they can be acquired — through hiring, through training, or through a consulting engagement that builds capability rather than dependency.

The managed services model implicitly assumes your engineering team cannot be trusted to operate infrastructure. In some cases that assumption is correct and the premium is justified. In most cases it is a comfortable story that benefits the cloud provider more than it benefits you.

Your engineers are more capable than the bill implies. The infrastructure patterns that make self-hosting reliable and maintainable are more accessible than the managed services marketing suggests. And the compute you are already paying for is sitting there, ready to run whatever you ask it to run.


The only question that matters

Somewhere in your organisation, someone knows exactly what you spend on managed database services each month. They know the RDS line. The ElastiCache line. The MSK line. They have probably looked at it and moved on, because the alternative seemed complicated and the status quo seemed safe.

Ask them to calculate what it would cost to run the equivalent workload on compute you already own. Not as a commitment. Not as a migration plan. Just as a number.

Then decide whether the gap is worth understanding better.

That is the only ask. Look at the number. The rest follows from there on its own.


There are engineers in your industry who have spent years learning to run this infrastructure well — on Kubernetes, on bare metal, in air-gapped environments where no managed service was ever an option. Most of them are not hard to find. If the number turns out to be interesting and you want to talk through what the path looks like, Sugau is one place to start. If you find someone else, that is equally fine. The important thing is that you looked at the number.