Big Lie Revealed: Commodity Servers Not Cheaper Than Mainframe
For decades, CIOs have been sold the Big Lie that running workloads on commodity servers is cheaper than running them on the mainframe.
Recent studies by Gartner Senior Adviser and MIT Associate Dr. Howard Rubin, however, reveal what many of us knew all along: Ownership of commodity infrastructure is so horrifically expensive—especially when we try to achieve mainframe-like levels of reliability, security and scale—that it’s far more economical to simply stay on the mainframe.
In fact, according to Rubin’s research, total infrastructure costs are 60% higher at companies that depend heavily on commodity servers than they are at mainframe-centric companies. IT costs per dollar of goods are also 41% higher at server-centric companies than at mainframe-centric ones.
Commodity servers, in other words, are more expensive than the mainframe. A lot more expensive.
The Real Score
Rubin’s study looks at MIPS and TCO—taking into account the fact that some industries are more IT-intensive than others.
In retail, for example, the study quantifies IT costs per SKU in mainframe-heavy environments at $184.09. In server-heavy environments, that number skyrockets to $252.27—a unit-cost premium of $68.18, or a whopping 37%.
In insurance, IT costs per claim were found to be approximately $56 in mainframe-heavy environments and $92 in server-heavy environments. That’s a premium of $36—or a whopping 64% greater cost.
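The premiums above are straightforward unit-cost arithmetic. Here is a minimal sketch that reproduces them, assuming a mainframe-heavy retail figure of $184.09 per SKU (the value consistent with the $68.18 / 37% premium quoted):

```python
# Reproduce the unit-cost premiums quoted from Rubin's study.
def premium(mainframe_cost, server_cost):
    """Return (absolute premium, percent premium relative to mainframe)."""
    diff = server_cost - mainframe_cost
    return diff, 100 * diff / mainframe_cost

# Retail: IT cost per SKU (mainframe-heavy vs. server-heavy)
retail_diff, retail_pct = premium(184.09, 252.27)
print(f"Retail: ${retail_diff:.2f} premium ({retail_pct:.0f}%)")
# → Retail: $68.18 premium (37%)

# Insurance: IT cost per claim
ins_diff, ins_pct = premium(56, 92)
print(f"Insurance: ${ins_diff:.2f} premium ({ins_pct:.0f}%)")
# → Insurance: $36.00 premium (64%)
```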
Of course, we know from direct experience that distributed infrastructure is a money pit. We’ve spent countless billions patching operating systems, virtualizing CPU allocation, overhauling storage architectures, and trying to keep massive racks powered and cooled. Plus, every time we’ve needed more capacity, we’ve added non-trivial opex.
IDC’s numbers confirm this. Capital spending on commodity servers has actually declined since the late 1990s. But server-related opex is completely out of control—multiplying more than five-fold over the same period.
The mainframe, in stark contrast, allows IT to add workloads without adding staff or hardware. And attributes such as security and virtualization are inherent in the platform.
What’s a CIO to Do?
How should IT decision-makers respond to these truths? How can they take advantage of the mainframe’s superior economics?
I have three suggestions:
- Admit the truth. Numbers don’t lie. Every time you add a workload to your server environment, you waste money—and create technology debt that will keep sucking cash out of your company for years to come.
- Re-examine the mainframe platform. The mainframe has evolved magnificently. IBM z13 can run Java and Linux. Specialty processors allow workloads to be run at lower cost. Costs for mobile back-end support have been reduced as well. These advances and others are definitely worth a look.
- Call my company. We can show you how to intelligently maintain and advance mainframe applications, accelerate DevOps processes to support greater business agility and—perhaps most important of all—equip Millennials to become your next generation of mainframe professionals.
There is no Tooth Fairy. And commodity servers are not economical. The sooner IT acts rationally in light of this truth, the better.
Originally published on LinkedIn Pulse.