A Guide to Debunking Mainframe Myths, Criticisms, and More

Why is defending the mainframe so difficult? One reason is that many people do not fully understand what is meant by the term “mainframe.” Ubiquitous computing leads many to believe that the mainframe is an amorphous type of computer from the mid-20th century that was made obsolete by laptop computers and mobile technologies. Nothing could be further from the truth. Mainframe computing still plays a vital role in global computing, just as personal computing does.

To be clear, by “mainframe” our focus is on enterprise-class mainframes, specifically IBM Z. While years ago the term “mainframe” could refer to any number of large computer systems, today the remaining representative of this class of computers is the IBM Z. In March 1991, Stewart Alsop, one-time editor in chief of InfoWorld, said, “I predict that the last mainframe will be unplugged on March 15, 1996.” The perception is that “everyone knows that the mainframe is dead.” Yet, nearly 25 years later, the mainframe is still prevalent in many industry sectors, particularly for government and financial operations.

The Value of the Mainframe

A few years ago, in the paper “Don’t Believe the Myth-Information About the Mainframe,” the following statistics were noted about IBM’s System z mainframes (now known as IBM Z):

  • 96 of the world’s top 100 banks, 23 of the top 25 U.S. retailers, and 9 out of 10 of the world’s largest insurance companies run System z
  • 71% of global Fortune 500 companies are System z clients
  • 9 out of the top 10 global life and health insurance providers process their high-volume transactions on a System z mainframe
  • Mainframes process roughly 30 billion business transactions per day, including most major credit card transactions and stock trades, money transfers, manufacturing processes, and ERP systems

Having covered the reasons given for replacing mainframes, let’s now discuss the attempts to support them. For many years, IBM has tried to explain the value of its mainframe. Remarkably, IBM, which has the most to gain by defending its solution, has been somewhat ineffective in defining a compelling, unique value proposition. Some efforts have focused on protecting the existing install base, while others have tried to articulate new workloads, most notably the consolidation of individual servers onto fewer mainframes.

IBM spent many years touting the Total Cost of Ownership (TCO) of the mainframe relative to individual servers. The most immediately visible aspect of a mainframe’s TCO is the smaller number of physical servers; in other words, it takes many more individual servers to deliver the same results as a single mainframe. Some of the resulting lower-cost-of-ownership characteristics can be shown as follows:

  • 1/25th the floor space: 400 sq. ft. versus 10,000 sq. ft.
  • 1/20th the energy requirement: $32/day versus $600/day
  • 1/5th the administration: fewer than 5 people versus more than 25 people

To summarize the list above: less floor space, less power consumption, and lower staffing. Add to that a simplified environment, simplified software management, and the flexibility to adapt to business requirements, and the result is an easy-to-scale-up model that can handle spikes in business demand, along with enhanced security, reliability, availability, and serviceability.
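As a rough illustration of what the per-day energy figures above imply over a year, here is a small Java sketch. The $32/day and $600/day numbers are the ones quoted in the list; the annualization itself is a simple, hypothetical back-of-the-envelope calculation, not data from IBM.

    public class TcoComparison {
        public static void main(String[] args) {
            // Daily energy cost figures quoted above (illustrative; actual costs vary by site).
            double mainframePerDay = 32.0;
            double serverFarmPerDay = 600.0;

            // Annualize both figures over 365 days.
            double mainframePerYear = mainframePerDay * 365;
            double serverFarmPerYear = serverFarmPerDay * 365;

            System.out.printf("Mainframe energy:   $%,.0f per year%n", mainframePerYear);
            System.out.printf("Server farm energy: $%,.0f per year%n", serverFarmPerYear);
            System.out.printf("Difference:         $%,.0f per year (roughly %.0f times lower)%n",
                    serverFarmPerYear - mainframePerYear, serverFarmPerDay / mainframePerDay);
        }
    }

At these rates, the energy line item alone differs by more than $200,000 per year, before floor space and staffing are even considered.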

As a result of these TCO characteristics, IBM transitioned naturally to driving server consolidation efforts. Combining technical thought leadership presentations with migration services, IBM attempted to guide clients back toward mainframes.

A major proof point was IBM’s Project Big Green plan (pre-2010) to consolidate thousands of its servers onto mainframes in an effort to reduce environmental impact. The basics of this plan were as follows:

  • IBM would consolidate and virtualize thousands of servers onto approximately 30 IBM System z mainframes
  • Substantial savings expected in multiple dimensions: energy, software, and system support costs
  • The consolidated environment would use 80% less energy and 85% less floor space
  • This transformation was enabled by System z’s sophisticated virtualization capability

In the first three years of the project, IBM claimed it had doubled its computing capacity without increasing energy use. These results provide evidence for IBM Z as a virtualization platform. While the TCO case helps to defend the existing IBM install base, the server consolidation argument targets new client acquisition: prospects experiencing pain from a perpetually growing server farm and its corresponding costs and management issues. But even this is not a unique offering. Cloud providers can also claim to relieve the problems of server sprawl. By moving workloads to the cloud, server management and power requirements are offloaded to the cloud provider; of course, since the provider bears the burden of those requirements, the costs are passed along as part of its fees.

What’s Missing?

The value of the mainframe is complex and multifaceted. This, it turns out, is one of the key factors that makes it difficult to express. And being difficult to articulate makes it even harder to compress into a sound bite, what is often referred to as an “elevator pitch.”

The elevator pitch should capture the unique value of the mainframe in terms that speak to the needs of prospective clients. In the years since the height of the mainframe era, personal computers have made access to computing services ubiquitous and given everyone exposure to computing. There is a feeling that the computer on your desk, with its graphics and its ease-of-use characteristics, represents the only computing platform anyone needs. Many of the commonly perceived benefits of the mainframe seem to have been matched by desktop computers. It is up to computer professionals and computer educators to help businesses understand how the massively parallel capabilities of the mainframe can deliver better service to both internal and external consumers. Let’s look at some of the issues:

  • The proposed value of the mainframe varies depending on the objective: protecting the existing mainframe install base, expanding it with new workloads, or winning net new mainframe installs. The message needs to be customized to the situation.
  • Is the value proposition unique and compelling? Can the problem ONLY be solved with a mainframe? Possible benefits include:

    – Speed, reliability, security – these are not unique
    – Lower cost of ownership – apparently not compelling
    – Massively parallel processing – unique and compelling

Unfortunately, it is my belief that the most powerful argument for mainframes — massively parallel processing — is impossible to visualize, difficult to quantify, and hard to describe.

The value of the mainframe needs to be viewed from the perspective of commercial business needs, not just consumer needs. A consumer’s view of speed is a one-to-one request-response relationship: a single individual, using their personal computer, makes a request and gets the corresponding response within 1–2 seconds, which is considered reasonable. A business’s view of speed is a many-to-many request-response relationship. Think of airline reservation agents, multiple insurance agents requesting quotes, or multiple people making withdrawals from ATMs: many concurrent users making requests at the same time, all of them expecting instantaneous responses. How do you handle thousands of requests and still deliver reliable, immediate responses? Massively parallel processing.
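To make the many-to-many pattern concrete, here is a minimal, hypothetical Java sketch: thousands of simulated requests arrive at once and are served by a pool of parallel workers, so each caller still gets a prompt reply. The handler, the thread-pool size, and the 50 ms of simulated work are illustrative assumptions, not mainframe code.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class ManyToManyDemo {

        // Stand-in for one business request (e.g., an ATM withdrawal or an insurance quote).
        static String handleRequest(int requestId) {
            try {
                Thread.sleep(50); // simulate a short unit of work, such as a database read and update
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "response for request " + requestId;
        }

        public static void main(String[] args) throws Exception {
            int concurrentUsers = 10_000;                              // hypothetical burst of users
            ExecutorService pool = Executors.newFixedThreadPool(200);  // parallel workers

            long start = System.nanoTime();
            List<Future<String>> responses = IntStream.range(0, concurrentUsers)
                    .mapToObj(id -> pool.submit(() -> handleRequest(id)))
                    .collect(Collectors.toList());

            for (Future<String> response : responses) {
                response.get(); // every caller receives an answer
            }
            pool.shutdown();

            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(concurrentUsers + " requests answered in " + elapsedMs + " ms");
        }
    }

Scaled up, this is the shape of the workload the mainframe is built for: the more truly parallel workers the hardware can run, the more of those concurrent callers get an answer without queuing.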

Maybe the need to deliver many-to-many request-response is no longer well understood. Students in university computer science programs, and the professors teaching them, do not seem to be learning and promoting the merits of many-to-many parallel processing; or perhaps the problem is assumed to be solved by adding more servers rather than by better exploiting the technologies designed for parallel processing. Another point of confusion is multitasking versus parallel processing, and this distinction is part of the difficulty of creating unique differentiation.
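The distinction can be illustrated with another small, hypothetical Java sketch: a fixed amount of CPU-bound work is first merely multitasked through a single worker, then run in parallel on one worker per available processor. Only the parallel run shortens the elapsed time; the task sizes and counts here are arbitrary assumptions chosen for illustration.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class MultitaskingVsParallelism {

        // A fixed chunk of CPU-bound work; with no I/O to wait on, time-slicing cannot hide anything.
        static long busyWork() {
            long sum = 0;
            for (long i = 0; i < 100_000_000L; i++) sum += i;
            return sum;
        }

        // Run `tasks` copies of busyWork on `threads` workers; return elapsed wall-clock milliseconds.
        static long timeRun(int tasks, int threads) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            Callable<Long> work = MultitaskingVsParallelism::busyWork;
            long start = System.nanoTime();
            List<Future<Long>> futures = new ArrayList<>();
            for (int i = 0; i < tasks; i++) {
                futures.add(pool.submit(work));
            }
            for (Future<Long> f : futures) {
                f.get();
            }
            pool.shutdown();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            int tasks = 16;
            int cores = Runtime.getRuntime().availableProcessors();

            // One worker: tasks are merely queued and interleaved (multitasking);
            // elapsed time is roughly the sum of all the task times.
            System.out.println("1 worker:  " + timeRun(tasks, 1) + " ms");

            // One worker per processor: tasks genuinely run at the same time (parallel processing);
            // elapsed time shrinks toward the total work divided by the number of processors.
            System.out.println(cores + " workers: " + timeRun(tasks, cores) + " ms");
        }
    }

Multitasking shares one processor among many tasks; parallel processing applies many processors to the work at once, and only the latter reduces elapsed time for compute-heavy, high-volume workloads.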

Cycles in Computing

As the demands for computing continue to increase, perhaps there could be a resurgence of the need for mainframe computing. Computing seems to go through cycles between centralized and decentralized facilities and control. Over the past decade, one of the biggest crazes in computing has been the cloud. Yet this is not really new.

Going back to the early days of computing (pre-1960s), centralized computing was all there was, since only large companies could afford computing. Equipment was expensive, and the facilities needed to support mainframes were complex. Then, in the 1960s and ‘70s, while large mainframe computing centers were still fairly rare, a solution emerged that offered computing to a broader audience: time sharing. Companies with large computing facilities would sell time (processor time, data storage, connectivity) on their computers. Sound familiar? It sounds like cloud.

In the 1970s and ‘80s, computing became a competitive differentiator, the cost of mainframe processors declined, and mainframe computing centers proliferated as so-called “data centers.” During the 1980s, as budgets for computing moved from centralized computing organizations toward business units, giving those units better accountability and delivery from computing, central data centers declined and distributed processing (departmental or client-server) grew. The rise of personal computing and internet connectivity accelerated the growth of distributed computing.

The next cycle (mid-2000s and beyond) was, in a sense, an extension of distributed computing that leveraged paid computing services, now outsourced to external providers, which became known as “the cloud.”

A Future Role for the Mainframe

So, is it possible that the next iteration could be a move back toward in-house centralized processing? Factors leading to such a change could include better control of data and resources, greater flexibility to respond to business needs, protection of sensitive data, the increasing cost of cloud services, and fear of lock-in to cloud providers. In other words, bring the cloud story in-house. The mainframe delivers its legendary performance and security qualities while reducing network latency by co-locating data with the processing of that data, and it reduces dependencies on third parties (i.e., external cloud providers). The skills to maintain a mainframe are somewhat specialized, but they can be acquired, and the number of staff required to manage a mainframe is significantly smaller than the number needed to manage multiple individual servers.

Not enough internal consumers to justify bringing centralized computing back in-house? In the early days, the mainframe’s role was to provide centralized computing services to internal departments. With a little twist, mainframes could take on an additional role: supporting external entities, such as smaller companies or allied partners, that need computing but do not want to lose control of their data.

Could state government agencies provide computing to county or city agencies? Could universities provide computing to local high school districts? The mainframe easily compartmentalizes the different consumers and has sufficient capacity and parallel processing to ensure that each organization, whether internal, external, or both, gets high-quality computing.

Another way to think of this is that the earlier server consolidation story was Infrastructure-as-a-Service (IaaS). Now, the mainframe can provide Platform-as-a-Service (PaaS), giving departments (or external partners) the ability to develop their own applications while delivering the responsiveness and accountability that was missing in earlier data center models. The mainframe could also be a venue for Software-as-a-Service (SaaS): for departments or partners willing to cede the evaluation and selection of software solutions to centralized IT, the resulting software could be delivered on the mainframe. Meanwhile, the expertise and purchasing power for computing are pulled together into the centralized IT organization.

To drive new mainframe adoption, a new vision of the role of the mainframe needs to be shared. This starts in university computer science programs articulating the value of massively parallel processing. Universities need to reexamine the needs of businesses and recognize that many of the issues associated with the mainframes of the 20th century have been addressed, and that the benefits of in-house computing outweigh the remaining perceived concerns. The vision extends to business leaders who seek business agility and competitive advantage while divesting themselves of the details of managing and operating computing systems. IT services return to an internal organization that specializes in delivering superior computing services on the mainframe and manages a computing platform that is agile, responsive, secure, and accountable.

Janet Sun is Senior Consultant at Sun Coast, responsible for z/OS storage and cloud portfolios.
