There was a time, not so long ago, when serious IT executives were announcing that the mainframe was obsolete, overpriced technology whose days were numbered. It was a dinosaur. But these folks weren’t well informed, and perhaps had biases that clouded their judgement. Gone are the days when a CIO would start a new job with the intent of “getting off the mainframe” just for the sake of doing it. There have been too many failed attempts at that (here’s an example), and even successful migrations have left IT departments with less capability than they had before the migration.

I think we’re past that now, and for the most part, CIOs realize that the mainframe is just one platform in a range of options available to IT organizations as they strive to balance performance, capacity, transaction throughput, cost, reliability, flexibility, security and a handful of other top-level business concerns. The fact is that the mainframe is the most cost-effective AND the best-performing platform available for large, high-volume, transaction-intensive IT environments. In fact, this has been the case for some time now, and nobody really doubts that.

But it may still come as a surprise to many just how prevalent the mainframe is in everyday life. Many realize that most ATMs connect to the bank’s own online CICS mainframe applications. Many large retailers, most large automotive service centres and all large airlines use mainframe systems for transaction processing, parts inventory, reservations, and much more.

With all these mainframe systems in place, is it really a surprise to find out that many of the everyday things you do with your Apple Watch will initiate mainframe activity? When your bank sends you an email about some new promotion, that’s initiated by the bank’s mainframe-based rules engines. Even the newest buzz in IT, Big Data and analytics, often runs on the mainframe. Why? Because that’s where most of the transaction records are, where the customer information is, and where that information is freshest.

The mainframe has been with us for a long time, and it is as relevant as ever today as businesses look to evolve, leveraging their data to transform into future success.


Allan Zander

Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. As a “Friend of the Mainframe”, Allan’s experience addressing both the technical and business needs of Global Fortune 500 customers has provided him with great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.

8 comments

  1. Robert Barnes

    I agree, but mainframes have a problem. Because they are such a specialized area of IT, they don’t get the attention of more modern platforms (Windows, Web, Apps, etc.), and so you still have to develop CICS functions in 40-year-old languages. There is a need, but few companies are providing innovations in this space. When an innovation such as our MANASYS Jazz product exists (see http://www.jazzsoftware.co.nz/default.aspx), it is extremely difficult to be heard: we’ve been trying for over a year to find a user who will look at our new product and work with us to identify the functional gaps so that we can fill them. Perhaps mainframe users are at the extreme-conservative end of the adoption curve, and simply don’t want to consider any new idea, even one with the potential to trim large amounts from their development budget.

    Reply

    • Craig S Mullins

      It is a misconception to say that you HAVE to develop CICS functions in 40-year-old languages. The mainframe supports development in many languages (yes, including old ones, which preserves the many years of development and business logic written over those decades)… Java, for example, was released in 1995 (20 years ago)…

      https://www-304.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmidtrmg/zmiddle_16.htm

      Reply

      • Robert Barnes

        While that might be technically true, doesn’t IBM documentation say that the languages supported by CICS are Assembler, COBOL, PL/I, C, and C++? And the support for C and C++ seems to be pretty basic. I don’t think that you could interface with BMS screen maps (of course this is only necessary if you’re working in a transitional project), and I suspect that it would be more difficult to write a Java program accessing a VSAM file or DB2 database than it would be in COBOL. That’s why the first intermediate language used by MANASYS Jazz is COBOL, not Java. We may later add Java if it provides access to markets or problem areas that COBOL doesn’t support.

        My feeling is that I’d use Java for new web service apps (which are managed by CICS), but not for anything that requires access to data that is not within the z/OS UNIX environment but is managed within the “old” MVS environment. Is this correct, or have I got it wrong?

        Obviously new SOA systems will include components operating in both of these z/OS environments, as well as components operating in Windows, Web, and mobile apps.

        Reply

        • Ajay

          I’ve run into that quite often, particularly over the past 15 years. You would think that, as a contractor, none of these things would be relevant. But recruiters (a.k.a. buzzword matchers) and personnel pukes operate on the same page, so I run into the same thing. Unfortunately, the responsible manager/executive is also clueless at times. That’s where it becomes really difficult.

          I usually meet the “over-experienced” objection by simply stating, “That just means there’s a good chance that I know how to avoid the mistakes, or resource misallocation, of my less experienced competitors.” Then I follow it up with, “Do you want the project/job done, or are you screening for a popularity contest?” Then I point out how I can not only bring in the end deliverable on time and within budget, assuming their estimates and expectations reflect reality to begin with, but also minimize their maintenance costs thereafter by applying all those 40+ years of experience to the entire effort. I also point out that I will review their estimates, deliverables, and expectations up front, and either agree or disagree. If I disagree, I’ll tell them explicitly why. Depending on the scope of the effort, about 50% of the time I’ll do the up-front assessment for free. The other 50% of the time I charge for that assessment, if the scope of the effort/project is large or the risk extremely high. Some of the contracts that I do are for exactly that: independent review of project plans, scope, costs, timeline, etc.

          That works (or used to) about half of the time. I haven’t been doing low-level (straight programming) contracts for quite a while now, though, primarily because of exactly this phenomenon. For those of us old dinosaurs, I think we now have to make our own market. Perhaps this can be done by looking at what is being advertised as a need by companies, and then coming up with a short (1-2 pg.) proposal including scope, cost, and duration estimates, submitted independently if possible. For example, if someone is advertising for development requiring expertise in Java, HTML, XML, SQL, C, C++, and SOA, we can infer that what is to be developed is going to be on a network, probably a front end for a client/server architecture with a GUI. Usually there will be a brief description of what the developed software is to do. From this, we can extrapolate an estimate of the overall effort it’s going to entail, and come up with a high-level proposal that describes what we will do, how long it is estimated to take, and how much it will cost. Has anybody tried this for low-level contracts? If so, what’s been your experience? Anyone else have any other, or better, ideas?

          As for COBOL, it’s never going away. I’ve seen languages come, go, change, and resurrect themselves over the years. Two things are never going away: Assembler (no matter the platform or processor) and COBOL. Assembler, by definition, can never go away. COBOL has been around for 60 years now, and there are trillions of dollars invested in the core systems of any large application. To make it go away would require trillions of dollars of investment, again! Nobody is going to make that kind of investment. Why fix what doesn’t need fixing?

          All the other languages I’ve seen over the years come and go. In my younger years in the field, just like everybody else, I wanted to reinvent the wheel, because it would be so much better, and cheaper. That’s the key pitch that has been used over the years, even with COBOL. The idea was, and is, that code generators will decrease development time, be cheaper thereby, be more flexible, and reduce maintenance cost. How many times have you heard this? I can think of at least a dozen variations for COBOL alone, such as Meta-COBOL, PseudoCOBOL, TELON, eGen, GenTran, Gen-Code, Flexus, and countless others. None of them really do the job, and usually they increase development costs. And runtime performance: what a joke!

          All the other high-level languages I’ve seen, with a few exceptions, have come and gone. Examples: APL, ACP/TPF, PL/1, Fortran, Prolog, dBase, ALGOL, Ada, Autocoder, BASIC (not VB), Cecil, Clarion, Clipper, LISP, Compass, REXX, DataTrieve, Delphi, Easytrieve, DIBOL, FoxPro, Forth, Forte, JOVIAL, MARK IV, MUMPS, Adabas/NATURAL, Pascal, PDL (although arguably PDL is not a language, IMHO), SNOBOL, J, J++, and the list goes on and on. These are just a few off the top of my head. The exceptions (not going away) are: C, C++, C#, Java (and related), PL/C, UNIX scripting (bash, perl, etc.), and HTML. These have become entrenched, standardized, and heavily invested in as well, and for good reason: higher-level languages that work on multiple platforms and are flexible enough to adapt to any environment. Same as COBOL.

          Given the entrenched, heavily invested development in the languages above, there is no financial incentive, and doubtfully ever will be, to reinvent the wheel. The world will not stand for another upheaval in IT the likes of when IBM completely changed from the 360 architecture to the 370 baseline architecture! Not gonna happen again. Any comments on the above will be appreciated.

          Reply

      • Robert Barnes

        Correction: I should have clicked the link that you gave me before replying. However, do my other comments about Java and access to VSAM, DB2, etc. still hold?

        Reply

        • Keith Allingham

          Hi Robert.
          Not really. There are products out there that are designed to interface directly with the mainframe to access these legacy assets. All new development can take place on a LUW platform (Java, .NET, etc.), while products like HostBridge and DataKinetics VTS Edge can be used to allow new LUW applications to access mainframe-based DB2 or VSAM data via WebSphere and TCP/IP (or MQ, or other transports). Many of these workloads can also be made to run on the mainframe’s zIIP processors, keeping the increased resource usage on the mainframe side in check.
          Hope this helps.

          Reply

  2. Pingback: The Mainframe Dilemma: Coping With an Aging Workforce - Planet Mainframe

  3. Pingback: The Third Platform and the Mainframe - Planet Mainframe
