
Innovation in Our Genes

“Thank you, Gene!”

My fellow nerd and I were just passing through an automatically opening door, and he was pointing out the role of Star Trek, and therefore of its creator Gene Roddenberry, in inspiring many innovations we now take for granted, including automatic doors.

In the world of mainframe computing, innovation has also been in our Genes – especially thinking of Gene Amdahl, who contributed so much to the creation of the original System/360, and then to the advancement of the ecosystem by founding an important competitor during the adolescence of the modern mainframe.

Inspiration, innovation, competition: all hallmarks of a healthy ecosystem. They seemed to have evaporated from the world of the mainframe beginning in the early 1980s, when consumer electronics computing began to be widely adopted for business activities. We watched with fascination, and participated, as Apple, Commodore, Radio Shack and other makers of personal-sized computers, many mid-range platform manufacturers, and then IBM with their PCs and the clone manufacturers flooded the market with affordable devices that gave us everything from spreadsheets and word processing to video games, and even some advanced business and engineering applications.

The Pandora’s Box of consumer computing innovation turned into an endless cornucopia, and as graphical interfaces became the norm, we practically forgot about the need for reliable back ends. And when people tried to trust these devices with important and sensitive processing and ended up getting their RASes in a sling, their response was often to ask for more innovation on the front end – firewalls, anti-virus, two-factor authentication – while taking the back end for granted, either because they thought the soft, chewy centres of their servers were somehow protected by a hardened outer shell, or because they were invisibly reliant on mainframes.

Roughly four decades since the beginning of the personal computing revolution, the world is moving on to other consumer electronics and letting IT inherit the mess of all the heterogeneous pigs in pokes that attractive salespeople have foisted on flattered management as alternatives to mainframe processing. We’re pushing all our processing out to “the cloud,” which really just means, “don’t bother me with the details: I want a nebulous back end supporting front-end results.”

In practice, this should mean that “cloud processing” would gravitate to the platforms that have the right combination of qualities of service, price and availability to meet the business need – eventually. That’s a very encouraging thought to us mainframers, as we are generally of the opinion that no back end can touch what the mainframe offers, at least at a certain scale and above.

This raises the question of how we got to a place where the mainframe was misperceived to be extinct (call it “point A”) rather than the default option for industrial-scale back-end processing (call that “point B”). I like to observe that it’s pretty hard to get from point A to point B if you’re not willing to admit you’re at point A. And part and parcel of admitting that is often figuring out how we got here.

How Did We Get Here?

Alexander Graham Bell and Thomas J. Watson, Sr. were contemporaries, both born in the 19th century, and both leading the technological advancement of the world with their companies. However, during the century that followed, each of their organizations was permanently changed by a very different, but related, event, arising from a similar set of circumstances.

Bell’s company, which was so successful as to have become a telephone monopoly, was broken up into smaller companies by the U.S. Department of Justice (DoJ) in 1984 in order to maintain a sufficiently competitive business landscape (https://en.wikipedia.org/wiki/Regional_Bell_Operating_Company).

Meanwhile, during the fifteen years prior to that event, the DoJ had also been carefully investigating IBM to determine whether they were behaving anti-competitively, and while IBM survived this scrutiny, it was a call to account that IBM carefully heeded, doing their best not to even seem anti-competitive during those formative years of IT.

Partly as a consequence of this, and partly as an expression of their natural scrupulousness, IBM took steps to ensure competition in the mainframe space and in IT generally, and one may be tempted to suspect that this was a motivator in their fostering of non-mainframe computing platforms, and maybe even in their allowing clone manufacturers to gain such leading roles in the ongoing saga of personal and small-scale business computing.

Certainly, IBM’s use of Non-Disclosure Agreements (NDAs) was naturally consistent with this context, as IBM tightly controlled information about any new strategic product directions or innovations that weren’t yet commercially available, thereby avoiding undermining competitors who already had similar offerings for sale.

These have become such a core part of the IBM mainframe ecosystem that I like to joke that there are two kinds of people in the world of mainframe: those who don’t know what’s going on, and those who are under NDA.

That may be humorous, but it’s also sad in its own way, as it restrains the mainframe from competing in the field of “sizzle,” where other platform vendors can sell “motivational vision” while we are tied to substance.

On the other hand, substance is what the mainframe has always been about, a fact that has resulted in a culture of the kind of people who care about important things like reliability, availability and serviceability (RAS) more than shallow appearances.

That’s good, and it’s important, but sometimes matters of appearance start to become matters of substance, and need people of substance to get them properly configured and managed. And that’s what has happened to the world of business processing for all those platforms that lacked the strengths of the mainframe. A wide range of concerns, from viruses and hacking, to regulatory compliance, to a shortage of qualified staff, have pushed organizations to realize they need to find someone and somewhere they can trust to reliably take care of the technical details while they refocus on their core competencies.

The initial nebulous answer that has been offered to these organizations has been “cloud,” which essentially amounts to offering results as a service over the internet. But as that concept has taken the journey toward becoming commoditized, we have begun to seriously discover how many metrics, including various qualities of service, are nowhere near being satisfied by generic back ends.

Going Metric

A favourite old saying goes: “Price, Quality, Speed: pick two.”

As performance management and capacity planning experts have found over the decades, the balance between different metrics and qualities of service and costs is a non-trivial thing. It can require in-depth experience and insight, advanced calculation and analytical tools, and the ability to adjust according to business requirements.

Of course, on the mainframe we were blessed with Workload Manager (WLM) just before the turn of the millennium, after many years of having to manually tweak the IEAIPSxx and IEAICSxx members of SYS1.PARMLIB. Cheryl Watson consequently spent years cajoling mainframe shops to begin taking advantage of this important innovation by adopting Goal Mode, and her efforts certainly made a difference as mainframe shops grudgingly began trusting the mainframe itself with this important area of control.
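For readers who never lived through that transition, here is a minimal, purely illustrative sketch in Python of the goal-mode idea: rather than hand-tuning dispatching priorities, an automated manager compares each workload’s achieved service against its stated goal and shifts resources on its own. All of the service-class names and numbers below are invented, and this is emphatically not how WLM is implemented; it merely shows the shape of the control loop.

    # Toy goal-mode control loop. Invented names and numbers; not WLM internals.
    from dataclasses import dataclass

    @dataclass
    class ServiceClass:
        name: str
        importance: int        # 1 = most important
        goal_ms: float         # target average response time
        actual_ms: float       # observed average response time
        dispatch_share: float  # share of CPU the class currently receives

        @property
        def performance_index(self) -> float:
            # Above 1.0 the goal is being missed; below 1.0 it is exceeded.
            return self.actual_ms / self.goal_ms

    def rebalance(classes: list[ServiceClass], step: float = 0.05) -> None:
        """Shift a small slice of CPU share from the biggest over-achiever
        to the most important goal-misser, mimicking goal-mode adjustment."""
        missers = [c for c in classes if c.performance_index > 1.0]
        donors = [c for c in classes
                  if c.performance_index < 1.0 and c.dispatch_share > step]
        if missers and donors:
            receiver = min(missers, key=lambda c: (c.importance, -c.performance_index))
            donor = min(donors, key=lambda c: c.performance_index)
            donor.dispatch_share -= step
            receiver.dispatch_share += step

    classes = [
        ServiceClass("ONLINE", importance=1, goal_ms=200, actual_ms=350, dispatch_share=0.40),
        ServiceClass("BATCH", importance=3, goal_ms=5000, actual_ms=2500, dispatch_share=0.60),
    ]
    rebalance(classes)
    for c in classes:
        print(f"{c.name}: PI={c.performance_index:.2f}, share={c.dispatch_share:.2f}")

The “performance index” vocabulary is borrowed from WLM; the arithmetic here is only a caricature of it.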

That was just the beginning. Capacity on Demand, and now Tailored Fit Pricing, have given mainframe shops the ability to dynamically adjust their costs based on real-time usage and business requirements.
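As a purely hypothetical illustration of why that matters, compare paying for peak capacity against paying for actual consumption. Every rate and utilization figure below is invented; real terms under offerings like Tailored Fit Pricing are negotiated and considerably more nuanced.

    # Invented numbers contrasting peak-based and consumption-based charging.
    hourly_msu = [120, 150, 400, 380, 140, 110]  # hypothetical hourly utilization

    peak_rate = 3.00         # hypothetical $ per MSU of peak capacity
    consumption_rate = 1.10  # hypothetical $ per MSU-hour actually consumed

    peak_based = max(hourly_msu) * peak_rate                # pay for the spike
    consumption_based = sum(hourly_msu) * consumption_rate  # pay for what you use

    print(f"Peak-based charge:        ${peak_based:,.2f}")
    print(f"Consumption-based charge: ${consumption_based:,.2f}")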

A key advantage of Capacity on Demand and Tailored Fit Pricing is their ability to run automatically once they’re initiated, not requiring a human “driver” to closely monitor their behaviours. And yet, there remain many other relevant factors for mainframe professionals to configure and watch and adjust, some of which are unique to the platform.

In fact, one of our biggest challenges on the mainframe throughout the history of IT is that we have had unique value propositions that no other platform could touch, and so competitors marketed around them rather than addressing them, and customers bought the simplified pitch.

It reminds me of the story of Underwood Typewriters and speed typing. By the Roaring Twenties, speed-typing competitions were sorting out winners from losers, and the winning typists were increasingly sponsored by, and using, Underwood equipment. By the end of that decade, Underwood’s dominance had become so commonplace that the media and other manufacturers began shifting their attention to other matters in order to meet their bottom-line requirements. While Underwood took a great deal of marketing advantage from all their wins during that decade (see https://oztypewriter.blogspot.com/2014/11/last-days-of-speed-typing-glory.html), once everyone moved on, Underwood became an also-ran, and was eventually acquired by Olivetti, who retired the brand.

For me, this story is a reminder that flying the flag of unique value can actually hurt your competitiveness, as your competitors will refuse to engage if they’re sure to lose, and will instead change the discourse to give themselves relevance.

In other words, one of the main reasons the mainframe has gone dormant in the public consciousness isn’t that it didn’t measure up, but that its competition failed to do so, and the mainframe’s public image didn’t “measure down” to the generic features that other platforms were more easily able to advance and compete on.

In order for the world to rediscover the strengths and qualities and advantages of the mainframe, then, there needs to be some way for people to become aware of metrics that other platforms have not traditionally been able to respond to. And there are a few ways I can see that this might happen:

  1. Other platforms rise to the occasion so there is a sufficiently level playing field that the mainframe’s strengths can be offered as a legitimate option in a competitive scenario.
  2. Purchase/configuration/adjustment interfaces need to be made available that allow organizations to check off which features matter to them, use a sliding scale to choose how much of each metric they’re willing to pay for, and see the consequent price of those choices; a toy sketch of such pricing logic follows this list. (See my article at https://destinationz.org/Mainframe-Solution/Trends/needed-innovations-for-the-ibm-z-ecosystem for more on this idea.)
  3. Professional practices need to be defined that explicitly involve taking these matters into account and taking responsibility for how much of each is chosen and paid for.
  4. Regulations need to recognize the type of RAS that is available on the mainframe and its impact on privacy and legal responsibility, and require those qualities of any production platform tasked with handling sensitive data of record.
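To make point 2 above concrete, here is a minimal, purely hypothetical sketch in Python of the pricing logic such an interface might expose. Every metric, rate and figure below is invented; a real implementation would be driven by actual capacity models and negotiated terms.

    # Toy sliding-scale pricing: all metrics and rates are hypothetical.
    BASE_PRICE = 10_000.00  # invented monthly base charge

    # For each metric: (price with slider at 0.0, price with slider at 1.0).
    METRIC_RATES = {
        "availability":   (0.0, 25_000.0),
        "security":       (0.0, 15_000.0),
        "throughput":     (0.0, 30_000.0),
        "recoverability": (0.0, 12_000.0),
    }

    def quote(selections: dict[str, float]) -> float:
        """Given slider positions (0.0 to 1.0) for the metrics an organization
        cares about, return the consequent price. Unchecked metrics cost nothing."""
        total = BASE_PRICE
        for metric, position in selections.items():
            low, high = METRIC_RATES[metric]
            total += low + position * (high - low)
        return total

    # A shop that wants very high availability and moderate throughput:
    price = quote({"availability": 0.9, "throughput": 0.5})
    print(f"Quoted monthly price: ${price:,.2f}")

The point is not the arithmetic but the transparency: an organization could see immediately what each increment of each quality of service would cost it.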

Fortunately, unlike Underwood’s speed advantage, which was an optional benefit as long as the average typist was already going as fast as their abilities allowed, the IBM mainframe’s strengths are essential, and still substantially unique. But as the requirements for these qualities of service become more and more ingrained for world-class workloads, and the world starts to recognize that they are available through a number of different business models, including cloud (again, using Tailored Fit Pricing as an example), this historically unique platform has strong potential to become something of a gravity well, consolidating all of these strengths at a substantially negotiable combination of price, quality and performance.

But, and this is VERY IMPORTANT: the mainframe culture is not ready for this to happen, and we need to wake up and get ready fast if we’re not going to become victims of our own success.

Check in at Planet Mainframe next week for the balance of Reg’s article.

Reg Harbeck is Chief Strategist at Mainframe Analytics Ltd., responsible for industry influence, communications (including zTALK and other activity on IBM Systems Magazine), articles, whitepapers, presentations and education. He also consults with people and organizations looking to derive greater business benefit from their involvement with mainframe technology.
