You’re Paying for the Wire
Storage Series — Part 1
You notice it more than you admit.
You tap a product image and wait. You open a dashboard and watch a spinner complete a full rotation before anything appears. You click a link on a site that should be fast — the company is well-funded, the engineering team is experienced — and there is still that pause. That small but unmistakable moment where the app is thinking.
The web did not always feel like this. Ten years ago, a well-run server felt immediate. Today that feeling costs extra.
Most people blame the frontend. JavaScript bundles, too many third-party scripts, unoptimised images. Those things matter. But there is a quieter culprit sitting further down the stack, one that affects every read, every write, every database query your application makes.
The Wire
When your application running on AWS reads from or writes to an EBS volume, that IO does not stay local. It leaves your instance, travels across a network, hits a storage node somewhere in AWS’s infrastructure, and comes back. Every single time. For every query. For every image. For every user, all day.
EBS is not a disk. It is a network block device. AWS has done impressive engineering to make that round trip as invisible as possible. But the wire is still there. And under real mixed load — concurrent users, database queries, background jobs all competing for the same volume — that latency accumulates. The spinner is not a coincidence. It is the wire doing its job.
This is not a criticism of AWS. It is physics. And most web applications running in the cloud are built on top of it without ever questioning it.
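One way to feel the wire directly is to time small random reads on both kinds of storage. A minimal Python sketch (an assumption of this article, not a rigorous benchmark: it uses buffered reads, so page-cache hits will understate the gap; a serious measurement would use fio with direct IO):

```python
import os
import random
import time

def median_read_latency_us(path, block=4096, samples=200):
    """Time single 4 KiB reads at random offsets in a file and
    return the median latency in microseconds. Buffered IO only,
    so this is a rough sketch, not a substitute for fio."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    latencies = []
    try:
        for _ in range(samples):
            offset = random.randrange(0, max(1, size - block))
            start = time.perf_counter()
            os.pread(fd, block, offset)
            latencies.append((time.perf_counter() - start) * 1e6)
    finally:
        os.close(fd)
    latencies.sort()
    return latencies[len(latencies) // 2]
```

Run it against a file on an EBS-backed instance and again on local NVMe; the difference in medians is, roughly, the wire.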
What the Alternative Looks Like
Four Samsung 9100 PRO NVMe drives in RAID-10 on a bare-metal server. The IO does not go anywhere. It travels a few centimetres across a PCIe 5.0 slot. In RAID-10, reads can be served by either drive in each mirror and writes to the two mirror pairs commit in parallel, so you are not limited to the performance of one drive; you get roughly double.
Sequential reads approach 28,000 MB/s. Random IOPS exceed 4 million.
EBS gp3 has a hard ceiling of 1,000 MB/s throughput and 16,000 IOPS. That ceiling is not a configuration limit you can pay to raise. It is the product.
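Putting those ceilings side by side makes the gap concrete. A quick back-of-the-envelope, using the array figures quoted above:

```python
# gp3 per-volume ceilings versus the quoted RAID-10 NVMe figures.
gp3_throughput_mb_s = 1_000
gp3_iops = 16_000
nvme_throughput_mb_s = 28_000   # sequential read, four-drive RAID-10
nvme_iops = 4_000_000           # random IOPS, four-drive RAID-10

print(nvme_throughput_mb_s // gp3_throughput_mb_s)  # 28x the throughput
print(nvme_iops // gp3_iops)                        # 250x the IOPS
```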
Your users do not feel IOPS numbers. They feel the gap between clicking and seeing. That gap is measured in the distance between your application and its storage. On EBS it is a network hop. On local NVMe it is a PCIe slot.
The Cost
A 16TB gp3 volume in us-east-1 costs $1,280 every month at baseline. Add a production database workload with real IOPS requirements and you are at $1,350/month conservatively. That number does not shrink. It renews every month, forever.
Four Samsung 9100 PRO 8TB NVMe drives cost roughly $4,400. A refurbished Dell PowerEdge R750 with dual Xeon Gold, ECC RAM (non-negotiable for ZFS) and iDRAC runs $3,500–5,000. Total investment: under $10,000.
Payback period: six to seven months.
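The payback arithmetic is simple enough to sketch directly, using the figures above (the article's estimates, not vendor quotes):

```python
# Payback period: upfront hardware cost divided by the recurring
# EBS bill it replaces. Figures are the article's estimates.
ebs_monthly = 1_350                    # 16TB gp3 + IOPS, $/month
drives = 4_400                         # four 8TB NVMe drives
server_low, server_high = 3_500, 5_000 # refurbished R750 range

for server in (server_low, server_high):
    total = drives + server
    months = total / ebs_monthly
    print(f"${total:,} upfront -> payback in {months:.1f} months")
```

At the low end that is about 5.9 months; at the high end about 7.0. Hence six to seven.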
After that, your storage costs little beyond power and eventual drive replacement. The wire costs $1,350 every month, forever.
The web got slower because storage moved off the machine. The fix is not complicated. It is just a decision most teams have not been asked to make.
Next: how four NVMe drives become a production-grade storage platform — ZFS RAID-10, ECC RAM, ARC caching, snapshots, and compression. Everything EBS charges extra for, or simply does not offer.