3-dimensional thinking

This is not an article about multidimensional databases, 3D rendering, or spatial databases, but rather about how IT CIOs think, based on their education, experience and professional motivations.

A good friend of mine, Pete, recently asked me how many servers it takes to equal a mainframe. He’s actually quite a smart guy, a salesman and former software engineer, but he has never worked in the mainframe business and knows nothing about it. Still, it was a fair question, and I had to answer “x”—a variable. This did not satisfy him at all, as you might imagine, but that’s the truth—it depends upon many variables.

This immediately reminded me of a LinkedIn conversation from a few years back. An individual—Dan was his name, I believe—explained to a LinkedIn group in which we were both members that PCs were faster than mainframes. He went on to explain that the processor of a fast PC is actually faster than the processor on a mainframe, and that a migration from the mainframe to a fast PC could be accomplished successfully by a couple of smart people in a couple of months. In fact, he did this for a living at one time. I didn’t bother replying, but someone else suggested that in most cases it would take a couple of buildings full of smart people, and a couple of years.

But to be fair to Dan, this LinkedIn conversation occurred several years ago, and I’m sure that his successes in this area happened many years before that. There was a time—in the 1970s and before—when any company needing to do business computing needed a mainframe. Even small companies, like the advertising agency in Mad Men, needed one: either an IBM System/360, or any of a number of similar systems produced by IBM’s competitors of the day.

One-dimensional IT thinking

By the 1980s, CIOs actually were able to dump their costly mainframe systems for much cheaper IBM PCs or PC clones based on x86 architecture, and run their computing tasks on these machines just fine, thank you very much. So, in a way, Dan was right: a CIO could do that, if the organization performed only small-scale computing. Companies like that have no business running mainframe systems today, and I dare say that none of them do anymore. And people like Dan made a fortune helping CIOs of small organizations migrate off the mainframe. The important point was that processing speed was sufficient on the new systems; memory, capacity, reliability and the rest did not really play an important role. For them.

So for my friend Pete, in cases like that, we could answer his question by saying that a single x86 server is just as capable as a huge, expensive mainframe. Does the conversation end there? Well, certainly not. And any CIO who took this example as applying to any and all mainframe migration projects soon learned it was a career-limiting miscalculation, born of “one-dimensional IT thinking.”

One-dimensional IT thinking focuses on a single dimension, whether it’s processing speed, hardware cost, or whatever else is the sky-is-falling, must-solve challenge of the day. Wearing blinders. The problem is that in a three-dimensional world, CIOs can miss a whole lot by focusing on a single dimension. Think about trying to figure out the mass of a cube of material by focusing on just one edge of the cube, ignoring things like volume and density.

Two-dimensional IT thinking

Certainly, vendors won’t get far trying to convince CIOs these days that they can replace their mainframe systems with an x86 server because its processor might be as fast as one of the mainframe’s processors. However, CIOs do talk a lot about replacing mainframe systems with server rooms full of cheaper x86 servers, the idea being that a large number of servers will equal what can be done on the mainframe. That’s how Google does it, and how Amazon does it, so we should be able to do it, too.

This is undoubtedly a step beyond the one-dimensional-thinking CIO, but it still misses some very important information. While it’s true that the world’s biggest computing networks—Google, Amazon, Facebook, etc.—run on commodity x86 server farms, there are huge costs associated with this. And once again, if CIOs aren’t careful to compare apples to apples (as opposed to oranges), they can be shocked to learn that they’re worse off than they were before a migration.

For example, if a CIO were to look at the costs on the mainframe side—hardware, software, monthly licensing, maintenance and upgrades—the result is a pretty big number. When planning a migration, if that same CIO were to focus on only those items for the planned distributed computing replacement, the number might look pretty good. But it’s actually an apples-to-oranges comparison. What will the difference in system reliability mean in terms of cost on a yearly basis? If transaction throughput is properly considered (it often isn’t), what will that mean for yearly cooling costs? What about the extra personnel support costs for a data center full of servers, as opposed to one, two or three mainframe systems?
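To make the apples-to-apples point concrete, here is a minimal sketch in Python. Every line item and dollar figure below is an invented placeholder, not data from any real migration; the only point is how a hardware-only comparison can flip once the other cost dimensions are included.

```python
# Illustrative yearly cost comparison. All dollar figures are
# hypothetical placeholders, not real benchmarks.

def yearly_tco(costs):
    """Sum every line item; any category left out silently biases the comparison."""
    return sum(costs.values())

mainframe = {
    "hardware_amortized": 400_000,
    "software_licensing": 600_000,
    "maintenance_upgrades": 150_000,
    "power_cooling": 50_000,
    "personnel": 300_000,
    "downtime_risk": 20_000,        # highly reliable, so small
}

server_farm = {
    "hardware_amortized": 150_000,  # the line item that looks great in isolation
    "software_licensing": 500_000,  # per-server OS/middleware licenses add up
    "maintenance_upgrades": 250_000,
    "power_cooling": 200_000,       # hundreds of boxes need a lot of cooling
    "personnel": 450_000,           # more machines, more admins
    "downtime_risk": 120_000,
}

# Comparing only hardware makes the farm look much cheaper...
partial = server_farm["hardware_amortized"] / mainframe["hardware_amortized"]
# ...but the full comparison tells a different story.
full = yearly_tco(server_farm) / yearly_tco(mainframe)
print(f"hardware-only ratio: {partial:.2f}, full TCO ratio: {full:.2f}")
```

With these made-up numbers, the hardware-only ratio flatters the server farm while the full TCO ratio comes out above 1.0; the specific values don’t matter, only that the two comparisons can point in opposite directions.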

And that’s just the tip of the iceberg. This is akin to two-dimensional thinking in a three-dimensional world. The problem is that CIOs are still missing a whole lot if they focus on only two of three dimensions. Think about trying to figure out the mass of a cube of material by focusing on just one surface or an internal plane of the cube. You still just don’t have all of the information you need.

So for people like my friend Pete, that “x” variable is becoming a little better defined, but there’s still more…

Three-dimensional IT thinking

This is the utopia of IT thinking: it seems obvious, but it is hard to do. The costs associated with mainframe computing are fairly well understood, especially when it comes time to pay the monthly bill, when the next revision of software is ready, or when it is time to upgrade the system hardware. But these costs don’t necessarily translate directly to distributed computing. There are similarities, like hardware cost and per-machine software licensing. But there are massive differences, too.

For example, if you discover you need twice as many servers as you planned, the hardware cost is relatively insignificant, but the OS and software licensing costs could be quite significant, and will result in an unplanned and ongoing cost increase. The same goes for datacenter environment costs and personnel support costs. Three-dimensional IT thinking means considering everything that will impact a migration project: the direct hardware and software cost dimension, the patch/update/upgrade dimension, the datacenter management dimension, and much more.

Things that have a minor impact now may have a huge impact after a migration, and vice versa. Project forward and estimate how the comparison will look after three years, considering all aspects. Consider future upgrades, software updates, and maintenance. Consider the personnel costs associated with all this activity. Don’t let your biases blind you to important considerations. If you’re not sure, consult your vendor. But also consult third-party experts like Dr. Howard Rubin. Strangers on LinkedIn can help, as some of them have succeeded in migrations just like yours; but many others have failed, and not all of them will be forthcoming about it. Take everything into consideration—that’s the secret sauce.
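That three-year projection can be sketched as a toy model. Again, every per-server figure here is an invented assumption; the only point is that costs which scale with server count (licensing, support) swamp the one-time hardware cost when the estimate slips.

```python
# Toy three-year projection for a distributed replacement.
# All numbers are invented assumptions for illustration only.

def three_year_cost(servers, hw_per_server=2_000,
                    license_per_server_per_year=1_500,
                    support_per_server_per_year=1_000):
    """One-time hardware plus three years of per-server recurring costs."""
    one_time = servers * hw_per_server
    recurring = servers * (license_per_server_per_year
                           + support_per_server_per_year) * 3
    return one_time + recurring

planned = three_year_cost(200)   # the migration plan
actual = three_year_cost(400)    # the same plan after a 2x server overrun

print(f"planned: ${planned:,}  actual: ${actual:,}")

# Doubling the servers doubles the whole bill in this model -- and
# recurring licensing/support, not hardware, makes up the bulk of it.
hw_share = (400 * 2_000) / actual
print(f"hardware share of actual 3-year cost: {hw_share:.0%}")
```

In this sketch the one-time hardware is only about a fifth of the three-year total, which is why a server-count overrun hurts far more through licensing and support than through the boxes themselves.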

So what is x?

So that original question posed by my friend Pete—how many servers does it take to equal a mainframe?—now has an answer. The answer is still “x”—a variable. But now we know that variable depends upon how you think about IT. Do you want to look at the big picture? Do you want to carefully consider all the variables, whether they’re expected or not? Do you have any biases? Are you willing to take another look at your plan? And another? Have you missed anything obvious? Do you fully understand everything involved in a comparison between mainframe computing and distributed computing? Are you sure? Are you willing to bet your job on it?

Answer all those questions to solve for “x”.

Allan Zander

Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. As a “Friend of the Mainframe”, Allan’s experience addressing both the technical and business needs of Global Fortune 500 customers has provided him with great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.


2 comments

  1. Vikas Pujar

    Very good post. It makes sense to stick with the mainframe if you’re processing huge chunks of data…


  2. Carlos Morales

    Very good material. Excellent reflections.

    I have been part of a downsizing project (390 to PC LAN) … and yes, of course, it even seemed like the CICS programs running under Realia were faster. But none of us could ignore the fact that it was not comparable, since every PC was the entire CICS and operating system FOR ONLY ONE USER !!! No multiuser or multiprocessing at all …

    In a few words: every situation calls for its own adequate capacity planning, but that planning does require the 3D thinking mentioned here…

    Anyway, as the article says, when the numbers rule without comparing everything else … it is probable that some inexperienced CIO will make the same mistake again.

