So what is a mainframe? The answer might, at first glance, seem obvious to those of us who have worked with mainframe systems our entire careers. But have you ever really stepped back and thought about the question? What if your Mom asked you what you do? Could you talk to her about mainframes in terms she would understand? When you were done, would she know, at any level, how a mainframe is different from other types of computers? And would she have any idea what it is that you do all day?

Well, I was thinking about this very question, which prompted me to write this post. The first thing I did, of course, was to run an Internet search on the term “mainframe” to see if that would help. What did I find?

The very first result was the dictionary definition shown here:

[Image: dictionary definition of “mainframe”]

This is not a bad starting point, especially the first definition. But it is not definitive, IMHO, because there are many “large, high speed computer(s)” that can support “numerous workstations or peripherals.” Arguably, Linux, Unix, and Windows servers possess all of these qualities.

The next result was the Wikipedia entry for “Mainframe computer,” which reads as follows:

“Mainframe computers (colloquially referred to as ‘big iron’) are computers used primarily by large organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and transaction processing…The term originally referred to the large cabinets called ‘main frames’ that housed the central processing unit and main memory of early computers. Later, the term was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but continue to evolve.”

Hmmm… I think that is a bit better, but I’m not sure my Mom would nod her head and say that she understood it. I like that it tackles the use cases (at a high level) and even helps to clarify the second dictionary definition above. But this is not really what I was looking for.

The next entry in my search results was a series of mainframe photos and images. I spent a few moments looking at those (for old times’ sake) but came away from that experience a bit frustrated. There were a lot of old pictures showing tape drives, card punches, and printers spewing green-bar reports. Sure, those are mainframes… historically. And I’m glad those pictures are archived “out there” for posterity. But if I search on laptop I’m not going to get a picture of this old Osborne 1:

[Image: Osborne 1 portable computer]


I know, because I tried it. I had to specifically search for the old laptop to get that picture! But search for mainframe and you get historical pictures (even older than that Osborne) instead of the beautiful, sleek IBM z13 that is the current state of the art in the mainframe world. Here is a z13…

[Image: IBM z13 mainframe]

OK, what was next in my search results? It was the TechTarget WhatIs definition for “mainframe,” shown below:

[Image: TechTarget WhatIs definition of “mainframe”]

It starts off similarly to the earlier definitions, but I like this one better. It offers up a point of comparison against other computers, namely that a mainframe is “used for large-scale computing purposes that require greater availability and security than a smaller-scale machine can offer.”

The next result of my “mainframe” search took me to Webopedia, and things took a turn for the worse, at least in the first sentence: “A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously.” Expensive compared to what? The notion that mainframes are intrinsically more costly than other forms of computing has been debunked multiple times (here’s one example from SHARE.org).

The Webopedia definition improves somewhat as you read on:

“In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.”

There were many other entries in my search results, but we have kind of winnowed down the general ideas in most of the definitions. Here’s what I’d tell Mom: A mainframe computer is large and powerful, can support lots of users simultaneously, and is used by the world’s biggest organizations. It can support complex tasks that smaller computers cannot. I think Mom should be able to understand that!

What do you think? How do you define “mainframe” these days?

Regular Planet Mainframe Blog Contributor
Craig Mullins is President & Principal Consultant of Mullins Consulting, Inc., and the publisher/editor of The Database Site. Craig also writes for many popular IT and database journals and websites, and is a frequent speaker on database issues at IT conferences. He has been named by IBM as a Gold Consultant and an Information Champion. He was recently named one of the Top 200 Thought Leaders in Big Data & Analytics by AnalyticsWeek magazine.

One thought on “How to Tell Your Mom about the Mainframe”
  1. I agree, however,

    “[The cloud] is large and powerful, can support lots of users simultaneously, and is used by the world’s biggest organizations. It can support complex tasks that smaller computers cannot.” would be accurate as well. What is the important distinction between mainframe and cloud? Maybe there is none, but the following may be good candidates technically: cloud means distributed, i.e. data is most often separated from the business logic, whereas on the mainframe they are all on the same machine, with all the expected benefits that follow from that. The machines are also widely different: the cloud uses cheap commodity hardware, which can fail often (has a low MTBF), whereas the mainframe consists of hardware components meant to be reliable and, in case of failure, to be exchangeable without much hassle. That last point may look like a sales point to the ignorant, but it is a fact that the hardware is made for very different purposes; decimal versus floating-point arithmetic is just one difference (see the small sketch below). It all goes back to Intel and ARM being used for microcomputing, not business on a big scale. Science has had success using clusters of cheap commodity hardware as supercomputers, but business less so, and then mostly for inertia and historic reasons.
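
    A minimal sketch of that decimal point, in Python (its standard decimal module standing in here for the decimal arithmetic that mainframe hardware performs natively): binary floating point cannot represent most decimal fractions exactly, which is the wrong behavior when the numbers are money.

        # Binary floating point rounds most decimal fractions,
        # so tiny errors creep into currency arithmetic.
        from decimal import Decimal

        print(0.10 + 0.20)          # 0.30000000000000004
        print(0.10 + 0.20 == 0.30)  # False

        # Decimal arithmetic is exact for these values,
        # which is what ledgers and billing code expect.
        print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
        print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True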

    OK, but how do I explain the difference to my mom, who may actually be aware of Azure and iCloud, but does not know the difference between those and a mainframe? These days we have banks based entirely in the cloud, not on a mainframe, so things have indeed become a bit clouded, pun intended.

    Thank you for a fun article!
