When to Cut the Wire. And When Not To.
Storage Series — Part 4
Three articles ago we started with a spinner. A user tapping a screen and waiting. A feeling everyone recognises and almost nobody traces back to its source.
We traced it back. The source is the wire — the network hop between your application and its storage that EBS requires by design. We showed the architecture that eliminates it. We showed the performance gap it closes. We showed the cost recovery in under seven months.
Now the honest part.
Because the goal was never to tell you cloud storage is always wrong. The goal is to give you the analysis your cloud vendor will not.
When Bare-Metal Local NVMe is the Clear Answer
Your workload is IO-intensive. Databases, high-frequency reads, media serving, AI inference, real-time analytics — anything where storage latency shows up directly in application response time. If your PostgreSQL instance is the thing standing between your user and their answer, local NVMe is not a nice-to-have. It is the correct engineering decision.
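You do not have to take that on faith; you can measure it. A minimal sketch in Python that times small synchronous writes, which is roughly what a database pays on every commit. The 4 KiB payload and iteration count are illustrative choices, not a benchmark standard:

```python
import os
import statistics
import tempfile
import time

def fsync_write_latency_ms(directory=None, iterations=200):
    """Time write+fsync round-trips to stable storage. Each fsync forces the
    data to the device, so these samples approximate what a WAL commit
    (e.g. PostgreSQL) pays per transaction."""
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    samples = []
    try:
        payload = b"x" * 4096  # one 4 KiB page, a typical database block size
        for _ in range(iterations):
            start = time.perf_counter()
            os.write(fd, payload)
            os.fsync(fd)  # do not return until the device acknowledges
            samples.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
        os.unlink(tmp_path)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(len(samples) * 0.99) - 1],
    }

if __name__ == "__main__":
    print(fsync_write_latency_ms())
```

Run it on the volume your database actually lives on. If the p99 number is a meaningful fraction of your application's response-time budget, the wire is visible to your users.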
Your data volume is predictable. You know roughly how much storage you need for the next two to three years. The economics of bare-metal storage depend on utilisation — a drive you own and fill is dramatically cheaper than a drive you rent and fill. If your capacity needs are stable, the payback period shrinks and the long-term savings compound.
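The payback arithmetic is simple enough to sketch. The numbers below are placeholders, not quotes; substitute your own EBS bill and hardware invoice before drawing conclusions:

```python
def payback_months(hardware_cost, monthly_cloud_cost, monthly_ops_cost):
    """Months until the one-off hardware spend is recovered by the gap
    between the recurring cloud bill and the recurring ops cost."""
    monthly_saving = monthly_cloud_cost - monthly_ops_cost
    if monthly_saving <= 0:
        return None  # cloud is cheaper on a recurring basis; no payback point
    return hardware_cost / monthly_saving

# Illustrative figures only: a mid-range NVMe server versus a comparable
# provisioned-IOPS EBS footprint.
months = payback_months(
    hardware_cost=18_000,      # server, drives, ECC RAM: one-off
    monthly_cloud_cost=3_500,  # volumes, IOPS, snapshots: recurring
    monthly_ops_cost=800,      # colo, power, amortised admin time: recurring
)
print(f"payback in {months:.1f} months")
```

With these placeholder figures the payback lands just under seven months, which is the shape of the result the earlier articles reported. The point of the sketch is the structure, not the numbers: stable utilisation is what makes the monthly saving reliable.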
You have or are building infrastructure expertise. This is not a managed service. ZFS, ECC RAM, RAID-10, drive health monitoring, firmware updates — these require someone who knows what they are doing. If that person exists on your team, or is someone you are bringing in, the operational overhead is manageable and the capability you build is permanent. If that person does not exist and you have no path to finding one, the managed abstraction of EBS has real value.
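Some of that overhead is smaller than it sounds. Drive health monitoring, for instance, can be a short script around `smartctl` from smartmontools rather than a platform. A sketch, assuming `smartctl` is installed; the device path is illustrative, and the parsed line is the overall-health verdict `smartctl -H` prints:

```python
import subprocess

def parse_health(smartctl_output):
    """Find the overall self-assessment verdict in `smartctl -H` output.
    The tool prints a line of the form:
      SMART overall-health self-assessment test result: PASSED
    """
    for line in smartctl_output.splitlines():
        if "overall-health self-assessment" in line:
            return line.rstrip().endswith("PASSED")
    return False  # no verdict found: treat as unhealthy and investigate

def drive_is_healthy(device):
    """Invoke smartctl against a device and parse its verdict."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True, check=False,
    )
    return parse_health(result.stdout)

if __name__ == "__main__":
    print(drive_is_healthy("/dev/nvme0"))  # illustrative device path
```

Wire something like this into whatever alerting you already run. The capability you are building is exactly this kind of small, owned tooling.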
Your workload has sovereignty or compliance requirements. Data that cannot leave a jurisdiction, infrastructure that cannot touch a shared hypervisor, air-gapped environments where cloud connectivity is not an option — these are not edge cases anymore. They are growing requirements across finance, defence, healthcare, and any organisation operating under GDPR, Australian Privacy Act, or sector-specific regulation. Local bare-metal is not just faster in these contexts. It is sometimes the only legal answer.
When Cloud Storage Still Makes Sense
Your workload is genuinely unpredictable. If you are a startup that might need 2TB this month and 20TB next month, the elasticity of EBS has real value. Bare-metal storage scales in drive-sized increments with lead time. Cloud storage scales in API calls with no lead time. For volatile, early-stage workloads that elasticity is worth paying for.
Your access patterns are cold. Infrequent access, archival data, disaster recovery targets — S3 Glacier and similar cold tiers exist for good reason and price accordingly. Nobody is suggesting you run your seven-year compliance archive on local NVMe. Use the right tool for the temperature of your data.
You genuinely need global distribution. Content that needs to be close to users across multiple continents, data that needs to replicate across regions automatically — cloud infrastructure solves this elegantly and the operational cost of replicating that globally with bare-metal is significant. This is a real cloud advantage and it deserves to be said plainly.
Your team cannot support the infrastructure. Honest assessment matters here. A well-run EBS deployment maintained by a competent team will outperform a poorly configured bare-metal environment maintained by nobody. The platform matters less than the thinking behind it and the people in front of it.
The Decision Framework
One question cuts through most of the analysis: is storage latency showing up in your application response time today?
If yes — and for most IO-intensive workloads it is, whether or not anyone has measured it — the wire is costing you user experience and money simultaneously. The case for cutting it is strong and the numbers support it within two quarters.
If no — your bottleneck is elsewhere, your workload is cold or unpredictable, or your team cannot support the infrastructure — cloud storage is a rational choice and you should not let anyone tell you otherwise, including us.
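Reduced to code, the framework above is a handful of booleans. An illustrative sketch, not a substitute for measuring your own workload; the criteria are the ones laid out in the sections above:

```python
def storage_recommendation(
    latency_in_response_time,   # is storage latency visible to users today?
    capacity_predictable,       # known needs for the next two to three years?
    team_can_operate,           # ZFS/RAID/monitoring expertise exists or is coming?
    needs_global_distribution=False,
    mostly_cold_data=False,
):
    """Encode the decision framework: cut the wire only when latency is the
    problem, capacity is stable, and someone can actually run the hardware."""
    if mostly_cold_data or needs_global_distribution:
        return "cloud"  # cold tiers and multi-region replication are cloud strengths
    if latency_in_response_time and capacity_predictable and team_can_operate:
        return "bare-metal local NVMe"
    return "cloud"  # bottleneck elsewhere, volatile capacity, or nobody to run it

print(storage_recommendation(True, True, True))
print(storage_recommendation(True, True, False))
```

The first call is the IO-bound, stable, well-staffed case this series has argued for; the second shows that expertise alone is a hard gate, exactly as the honest-assessment point above says it should be.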
What This Series Was Really About
Not cloud versus on-premises. That framing is twenty years old and it has never been useful.
This series was about one specific, measurable, fixable problem. Storage that moved off the machine introduced latency that users feel, costs that compound monthly, and performance ceilings that no amount of spending fully removes.
Local NVMe in ZFS RAID-10 on ECC hardware eliminates all three. It pays for itself in six months. It performs at a level EBS cannot reach regardless of tier. And it gives your team a storage platform they own, understand, and can reason about completely.
The web got slower when storage moved off the machine. Moving it back is a decision, not a revolution. The hardware exists. The software is mature. The economics are not close.
The wire is optional. You just have to decide to cut it.