One common-sense solution, and three less-than-great ones
Business has been booming, for the most part, for large enterprises in the US and across the globe for the last seven to eight years. The continuous growth in the Dow Jones, the SSE Composite Index, the DAX, the Nikkei, the FTSE, and most other major indices attests to this.
But increases in business are a double-edged sword: they bring with them increases in business transaction processing costs. That goes doubly today, as more items than ever are purchased with credit and debit cards, and as smaller, less expensive items are bought this way, all of which tends to drive up the cost per transaction.
As corporate IT organizations cope with rising costs all around—with rising hardware and software costs, rising infrastructure costs, and rising personnel costs piled on top—CIOs and managers strive to find ways to balance or offset as much as is practical.
For better or for worse, there are four solutions that seem to be the go-to solutions for IT leadership, some effective, others not so much, and none of them particularly creative.
Contemporary go-to solutions
Together, these fall into four general categories: good housekeeping, migrating off the mainframe, pay-the-man, and give-up solutions. Of these, one is just good common sense, but the other three are problematic and have serious drawbacks.
1. Good Housekeeping
These are common-sense solutions that require little if any added operational expense, other than your personnel spending time on them. They include things like getting rid of shelfware: software that has been purchased and installed but is not being used, and in some cases was never used. The sad part of the shelfware story is that the IT organization is paying for this software everywhere it's installed: on every LPAR of the mainframe, and on every server in the server farm (local and outsourced). It's estimated that in the US alone, almost $10 billion is wasted every year on unused software. Is it worth it to look into this? You bet it is.
Another housekeeping challenge is duplication: multiple software solutions that do the same thing. Obviously, it makes sense to investigate and settle on a single solution where possible. It will more than likely require an exhaustive audit project but, if at the end of it you can eliminate a few unnecessary applications, it pays for itself. These are just two examples of good housekeeping practices that can save on operational costs, every single year.
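The audit described above can start very simply. The sketch below is purely illustrative, with a made-up in-memory inventory (the software titles, hosts, and idle-day figures are all hypothetical); in practice this data would come from your asset-management or software-metering tooling. It flags installs untouched for a year (shelfware candidates) and counts how many hosts carry each title (duplication candidates).

```python
from collections import defaultdict

# Hypothetical inventory: (title, host, days since last recorded use).
# Real data would come from an asset-management / software-metering tool.
inventory = [
    ("AcmeETL",      "lpar1",  412),
    ("AcmeETL",      "lpar2",  398),
    ("ReportMaster", "srv01",    3),
    ("ReportMaster", "srv02",  510),
    ("DataMover",    "srv03",    8),
]

SHELFWARE_DAYS = 365  # flag anything untouched for a year

def find_shelfware(inventory, threshold=SHELFWARE_DAYS):
    """Return (title, host) pairs not used within the threshold."""
    return [(title, host) for title, host, idle in inventory
            if idle >= threshold]

def installs_per_title(inventory):
    """Count hosts per title, to spot duplicated licensing."""
    counts = defaultdict(int)
    for title, _host, _idle in inventory:
        counts[title] += 1
    return dict(counts)

print(find_shelfware(inventory))     # candidates for license removal
print(installs_per_title(inventory)) # titles licensed on multiple hosts
```

Even a rough first pass like this narrows the exhaustive audit down to a short list of installs worth investigating.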
2. Migrate from the Mainframe
This is the never-ending debate that usually starts with someone talking about how costly mainframe hardware is, and how inexpensive commodity servers are—a very common apples-to-oranges argument. When comparing apples-to-apples, however, it quickly becomes apparent that mainframe computing is less costly than distributed computing within large transaction processing environments.
The numbers really show themselves when all aspects of a computing solution are put on the table for a fair and reasonable comparison: power consumption, hardware costs for equal (five-nines) environments, software costs for all servers in the datacenter, support personnel costs, air conditioning, server room footprint, upgrade time and cost, and more. This is not always done; for example, the support personnel requirements for distributed server environments are often hidden elsewhere in the budget. The problem with hidden costs is that any cost advantage you hope will come out the other end just won't. Hiding personnel costs isn't going to make a difference when those bills come in for server software.
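An apples-to-apples comparison of the kind described above is, at bottom, just a complete line-item sum. The sketch below uses entirely hypothetical placeholder figures (every dollar amount is invented for illustration; substitute your own audited numbers), but it shows the point: categories that are easy to hide, like personnel, power, and cooling, must appear in both columns before the totals mean anything.

```python
# All figures are hypothetical placeholders for illustration only.
mainframe = {
    "hardware":    1_200_000,
    "software":      900_000,
    "personnel":     600_000,
    "power":          80_000,
    "cooling":        40_000,
    "floor_space":    30_000,
}

distributed = {
    "hardware":      400_000,
    "software":    1_100_000,  # per-server licensing across the farm
    "personnel":   1_300_000,  # often hidden elsewhere in the budget
    "power":         250_000,
    "cooling":       180_000,
    "floor_space":   120_000,
}

def annual_tco(costs):
    """Total annual cost of ownership: the sum of every line item."""
    return sum(costs.values())

for name, costs in (("mainframe", mainframe), ("distributed", distributed)):
    print(f"{name}: ${annual_tco(costs):,}")
```

With these invented numbers the distributed column comes out higher, but the real takeaway is the method: the comparison is only fair once every category is on the table for both platforms.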
But more than that, everything you do now on the mainframe will have to be duplicated on your networked servers, and that includes your COBOL applications that represent decades of investment in IP. Rebuilding from the ground up is not going to be a trivial task, and there’ll be no hiding a project that may very well span years of work—much of it reinventing the wheel—while you continue to run (and pay for) your mainframe systems during the entire migration process.
And at the end, there may be one or two applications that you simply cannot port to the server network. If you’re going for this option, you’d better do your homework, because the unknowns will hurt when they come around to bite you.
3. Pay-the-Man
This is self-explanatory. It can be a quick-fix solution and, in some cases, it works just fine, thank you very much. But the truth is, many problems cannot be solved just by throwing money at them. Usually, there is a time component that goes hand-in-hand with the pay-the-man solution.
In instances where workloads are growing and machine capacity has been reached, shelling out to buy more machine—a mainframe upgrade, or the addition of new banks of servers—can actually work out well. And sometimes it can be a relatively quick fix. Problem solved by throwing cash at it? That’s sometimes just fine for a large multi-billion-dollar company, or a large government department with a huge budget.
But some problems just can’t be solved quickly and easily by a big wad of cash. If your batch windows are filling up fast, you might find that no amount of extra machine can solve the problem. If you need to provide new mobile access to legacy systems, you might find that you’ll need a couple of years’ worth of development and consulting time on top of that big wad of cash to solve the problem. Throwing a ton of money at a serious server security issue may not work either—you may be forced to tear things down and rebuild them—and that’s never a quick fix.
4. Give-Up Solutions
I use the term "give-up" solutions to make a point: nobody is just throwing their hands in the air and giving up. When faced with complex problems and unsatisfactory solutions, there isn't always a quick fix. Sometimes what seems like the best solution in the here and now is to wait. When the recommended solutions are all large capital-expenditure solutions, like getting off the mainframe or complete application rebuilds from the ground up, it's often wise not to make snap decisions, or more to the point, uninformed decisions. The "give up" option often looks like the least painful path, at least at present. There may be pain and pressure to change, but if change in and of itself will bring pain, then why jump in?
Budgetary reasons may also exist for the "do nothing" option. Sure, we need to make a change, but whose department pays for it? The hurry-up-and-wait scenario is a pretty common thing in enterprise IT; it always has been. It's not about incompetence, or laziness, or any other seemingly negative reason. It's about uncertainty and risk avoidance. And it's understandable, as there are plenty of disaster stories out there caused by knee-jerk or even reckless decisions that have resulted in cost overruns and ended careers.
The key is to look at all of the solutions available to you. Not just the two or three most common solutions, or the favorite solutions of your IT services partner, but all of them. You may find some that are less costly, some that provide good ROI and payback without requiring capex-heavy, highly visible, multi-year building projects, and even some that are low-risk, high-reward solutions.
There are also six other solutions out there that are not as well known, but can be both creative and highly effective—we’ll explore these in Part II of this series next week.