When I heard about the HuffPost article highlighting a video debunking the myths that Hollywood has been repeating about the mainframe, I was cautiously optimistic. Unfortunately, the writer chose to use only one reference book and focused on the negative points.
Here at SHARE, we believe that the mainframe is the most secure, lowest cost and best performing mixed workload computing platform on the planet. SHARE continues to serve the mainframe community, helping to show our members the best practices for managing the mainframe environment and optimizing the value that the mainframe delivers. The mainframe is the core of the computing environment for many companies.
“Most companies don’t use mainframes,” they said. Seriously? SHARE, the mainframe community, represents more than 20,000 individuals from nearly 2,000 companies. Those companies include state and federal government agencies, universities, retailers, energy and manufacturing firms, banks, and insurance companies. More specifically:
- 96 of the world’s top 100 banks, 23 of the 25 top US retailers, and 9 out of 10 of the world’s largest insurance companies run System z
- Seventy-one percent of global Fortune 500 companies are System z clients
- Nine out of the top 10 global life and health insurance providers process their high-volume transactions on a System z mainframe
- Mainframes process roughly 30 billion business transactions per day, including most major credit card transactions and stock trades, as well as money transfers, manufacturing processes, and ERP workloads.
That doesn’t exactly sound like a technology that’s no longer in use, or even going away anytime soon. So, what are the other most common myths about the mainframe?
- Mainframes are old
- Mainframes don’t run modern applications
- Mainframes are expensive
- The skills to manage mainframes are not available, or mainframes take more people to manage
Mainframes Are Old?
Well, the mainframe is celebrating its 50th birthday next year. But there have been generational changes between the mainframe that was introduced in 1964 and today’s mainframe. The automobile is more than 100 years old, but no one suggests that automobiles are an old or outdated technology.
Are the cars of today different from the cars of 1964? Absolutely. Likewise, today’s mainframe is faster, has more capacity, and is more reliable and energy efficient than the mainframes of the ’60s, ’70s, ’80s, or even those delivered in 2010.
The mainframe delivered in 2010 improved single system image performance by 60 percent while staying within the same energy envelope as previous generations. The next generation, which shipped in 2012, has up to 50 percent more total system capacity, as well as availability and security enhancements.
It uses 5.5 GHz hexa-core chips – hardly old technology. It is scalable to 120 cores with 3 terabytes of memory. Clearly larger (more capacity) and faster than anything available in the 60’s, with a smaller physical footprint and better energy consumption characteristics.
IBM has a corporate directive for every generation of mainframe: each successive mainframe model must be more reliable than the previous one. Incremental and breakthrough improvements have been made over 20 generations of mainframes. Fault tolerance, self-healing capabilities, and concurrent maintainability are characteristics of the mainframe that are lacking in many other systems. The integration of mainframe hardware, firmware, and the operating system enables the highest reliability, availability, and serviceability capabilities in the industry.
Mainframes Don’t Run Modern Applications?
Mainframes have been running Linux workloads since 2000 and the Linux workloads on the mainframe are growing. From IBM’s 2012 Annual Report – “The increase in MIPS (i.e. capacity) was driven by the new mainframe shipments, including specialty engines, which increased 44 percent year over year driven by Linux workloads.”
The mainframe also has a specialty processor, the zAAP, that is specifically intended to run Java workloads. How about Hoplon Infotainment running their TaikoDom game hosted on System z?
You say that green screens are ugly? There are graphical interfaces, and even iPhone and Android apps, that put a pretty face on the green screens for those who are trying to use business applications. More and more, interfaces that the general public is familiar and comfortable with are being used in business contexts to make access to the mainframe easier and more transparent. (How many people access a mainframe regularly today and don’t know it? Most of them!)
Those who manage the mainframe often prefer the green screens. These are incredibly fast interfaces that can deliver sub-second response time. When is the last time you clicked your mouse and got sub-second response from your Java application?
What about “cloud”? The “cloud” is actually an online computer environment consisting of components (including hardware, networks, storage, services, and interfaces) in a virtualized environment that can deliver online services (including data, infrastructure, storage, and processes), just in time or based on user demand. By this definition of Cloud Computing, System z has been an internalized cloud for decades.
System z has been “in the clouds” for more than 40 years! Whether you are thinking cloud computing (e.g., Infrastructure-as-a-Service) or simply server virtualization, System z is a great platform. Instead of running dozens of virtual images, a mainframe can run hundreds. And besides Infrastructure-as-a-Service, you could also implement Platform-as-a-Service or Software-as-a-Service.
Starting in 2007, IBM embarked on its own server consolidation project called “Project Big Green”. They consolidated 3,900 servers onto 16 mainframes, decreasing energy and floor space requirements by more than 80 percent.
The electrical power ($600/day vs. $32/day), floor space (10,000 sq ft vs. 400 sq ft), and cooling costs for those mainframes were less than those of distributed servers handling a comparable load. In addition, those mainframes required 80 percent less administration and labor (>25 people vs. <5 people), and their mean time between failure is measured in decades, versus months for other servers.
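The consolidation figures above can be sanity-checked with a few lines of arithmetic. This is a minimal sketch using only the numbers quoted in this post (3,900 servers onto 16 mainframes; $600/day vs. $32/day power; 10,000 vs. 400 sq ft; the >25 vs. <5 staffing bounds):

```python
def reduction_pct(before: float, after: float) -> float:
    """Percent reduction going from `before` to `after`."""
    return (before - after) / before * 100

# Figures quoted for Project Big Green.
consolidation_ratio = 3900 / 16              # ~244 distributed servers per mainframe
power_saving = reduction_pct(600, 32)        # daily electricity cost
space_saving = reduction_pct(10_000, 400)    # floor space
admin_saving = reduction_pct(25, 5)          # staffing, using the quoted bounds

print(f"consolidation ratio: {consolidation_ratio:.0f}:1")
print(f"power reduction:     {power_saving:.1f}%")
print(f"floor space saved:   {space_saving:.1f}%")
print(f"admin reduction:     {admin_saving:.0f}%")
```

Each of the computed reductions comes out above the "more than 80 percent" claimed in the text, so the headline figure is, if anything, conservative for power and floor space.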
Need more? Ask the City and County of Honolulu about their cloud implementation on System z. In their Windows environment, planning, deployment, and maintenance of hardware and services took weeks. They created a mainframe cloud environment and offered ‘Software as a Service’ to other departments.
They saw immediate benefits:
- Planning, deployment and maintenance of hardware and services can be done in hours vs. weeks
- Lower costs enable the expansion of database services for a fraction of the distributed costs
- Improved performance and response time to end users
- Sharing of resources with other statewide jurisdictions
Ready for more mainframe misconceptions? Janet Sun continues the ‘Don’t Believe the Myths about the Mainframe’ series very soon on Planet Mainframe.
Originally published on the SHARE.org blog.