Countless articles have been written on the subject of mainframe computing and whether you should stay with it or replace it with a “more modern” technology. As you can imagine, most mainframe-specific vendors will recommend that you stay on the platform because it is cheaper and more reliable than any other platform out there. This is actually true, particularly for the large-scale revenue-generating applications in use by the Fortune 500 today. In fact, BMC recently released its annual mainframe survey, which indicated that mainframe use is likely to continue this year and beyond, and even increase somewhat. I get opportunities to talk with many CFOs and CIOs of the largest companies in the world, and their comments support that finding. I am not hearing a lot of anti-mainframe sentiment; instead I hear the common issues facing business today: my business is hard, competition is tough, I need to cut costs and increase market share.
That’s not really a mainframe issue per se, but rather an issue of making good choices with technology. Companies that rely on DevOps and its related practices may tell you that you can increase your reliance on mainframes by adopting agile processes. That’s likely true, but then you really need to be looking at best-of-breed performance and optimization practices to get the most out of your mainframe system. Investing not only in the DevOps paradigm but also in the IT analytics needed to tie company revenue to IT investment will help you manage and direct those costs, and will lead you to the right IT investment decisions. You’ll be pulling on two levers at once: increasing profitability while reinforcing a system to handle more transactions. You will do more with less.
Vendors specializing in mainframe migration will suggest that their tools provide a straightforward process for moving large-scale enterprise applications to other platforms such as Windows or Linux, and they will likely do this by focusing on a target platform that is typically Java- or .NET-based. This may make sense if you are one of the few people who still have a mainframe application that does not account for revenue and is more of a functional piece of logic required to run your business, such as payroll or logistics. These applications account for cost, not revenue, and you likely should move them off your mainframe to free up machine cycles to do what they do best for you: make you money. Be careful, though, when considering moving revenue-generating applications off the mainframe. For the most part, migration vendors fail to mention that many mainframe applications include workloads written in COBOL, PL/I, and possibly mainframe assembler, and moving those workloads is simply not straightforward; it comes with risk that may impact revenue.
Are Mainframes ‘Legacy’?
Are mainframe computers legacy systems? Well, they have been around for a long time, but for good reason. It’s easy to find the stats surrounding today’s mainframe systems, and what they do, have done, and will likely continue to do is impressive. IBM invests heavily in the mainframe and has come up with technology within that machine that is genuinely groundbreaking, yet some of the vendors that discredit the mainframe have successfully created a marketing buzzword, “legacy,” to describe mainframe technology. The word is intended to mislead IT professionals unfamiliar with mainframe value, performance, reliability, security, and, even more importantly, total cost of ownership (TCO) and return on investment (ROI). It’s legacy, it’s old, you have to replace it in a hurry! The truth is that serious studies show that, in certain mainframe computing environments, TCO is actually lower (and the environment less complex) than when workloads are moved from a single mainframe system to a herd of low-cost commodity servers.
If you look at IBM’s newest mainframe processor, it includes not only a substantial amount of computing power but also processing that manages memory, security, and complex computational workloads right within the silicon itself. This has the potential to drive improved profitability for anyone running transaction-intensive mainframe environments, and that includes many of the largest financial services companies in the world. It should be of great interest to business leaders facing ever-tighter shareholder demands.
Enterprises have kept their mainframes and, if the BMC study is to be believed, are increasing their use of the platform. “Why are they staying on the mainframe?” is a legitimate question in the face of all the market activity aimed at getting enterprises to change. According to the CEOs, CIOs, and CFOs I have talked to, it has a lot to do with revenue and profitability, and with fitting the right type of computing to the right type of workload. That, coupled with analysis of IT data (really, they’re co-dependent), will enable IT to be the catalyst that helps the business forecast revenue and shareholder dividends with greater accuracy. The disruption of FinTech, with technologies like blockchain and personalized unified commerce, is just around the corner, and to embrace these technologies to drive more revenue, it makes sense to engage as close as possible to the system of record. That system of record is precisely the mainframe, which seems to corroborate what the BMC survey results are telling us.
In the end, enterprises have to decide whether, in their IT environment and with their applications, a distributed computing environment would be better than a centralized one. They must also consider whether a patchwork-quilt management and monitoring environment would really deliver simplicity and cost savings over a more centralized management approach. Further, they should weigh the security and reliability issues that might arise in a distributed, multi-platform, multi-vendor computing environment. Of course, this is a complex set of considerations, and you can’t really make any one decision in the absence of the others, so it’s no wonder that we are seeing hybrid environments in today’s large businesses that include distributed, mainframe, on-premises, and cloud-based computing.
Like many things in life, there is no easy answer that universally applies. I have found that every customer is unique, and the challenges of their IT strategy and implementation are just as unique as the customers themselves and the way they serve their own customers.
I do think we will continue to see mainframe computing into the 2020s and likely beyond. Why? Because it doesn’t make sense to force varying types of computing requirements onto a single platform type. Running the cost side of your business is not the same thing as running its profit and revenue generation; each is better served by a specialized computing platform. A single platform will do one well and marginalize the other, and we can’t have that.
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. As a “Friend of the Mainframe”, Allan’s experience addressing both the technical and business needs of Global Fortune 500 customers has provided him with great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.