
Application Architecture: an Evolutionary View

The Mainframe Era

In 1970 I joined a company, Databank Systems Ltd, that operated a national network of six computer centres, each containing an S/360-40.  In total this was less than 1/10,000th of the power of my current laptop, at several thousand times the cost, yet with it we ran almost all of New Zealand’s banking.  Typical of the era, processing was entirely batch, and centres communicated by sending tapes between them by courier.  Programmers were cheap and computers were expensive, so Databank’s main systems were written in Assembler, although most of the nascent IT industry used COBOL for commercial systems and Fortran for scientific work.  Whatever the language, we programmed with 80-column punched cards, compiled by batch jobs into executable programs.

With the S/370 came virtual memory, and database systems such as DL/I (IMS) and IDMS became common.  We gained a better understanding of how we should structure data, formalised further as relational databases like DB2 appeared and we learnt about data normalisation.

By the mid-70s on-line systems, usually managed by CICS, had become common, providing enquiry and sometimes update through the ubiquitous 3270 terminal.  On-line programming was very different from batch programming: its preoccupations were rapid task switching and lock management, in contrast to batch programs, which took exclusive control of files for extended periods.  A typical approach used on-line processing for data entry, with the entered data later updating files through batch runs.

The world understood that computers were expensive and could only be made personally affordable through time sharing.  There were mainframes and mini computers, but either way nobody expected to have their own computer.  All that was about to change.

Personal Computers

By the early 1970s single-chip microprocessors were available to OEMs and hobbyists, and by the early 1980s the first home computers were being sold.  The computer-on-a-chip slashed the cost of manufacturing a computer system, and computers quickly developed from the first 8-bit microcomputers to 16-, 32-, and now 64-bit processors, with essentially all the features we take for granted in a PC today.

But PCs are not cheap mainframes: they are designed for a different task, and they behave very differently from a timeshare terminal.  Critically, a personal computer is intended for interactive individual use, so keystrokes are processed immediately, and graphical interfaces with icons and pointers have been standard since about 1990.  Spreadsheets and word processing on a PC provide a hugely better experience than their shared-computer predecessors, and they enabled new applications like CAD and games.

Running Windows, UNIX, or Linux, a small business could have a central PC managing shared documents and databases through a local area network, sharing them to each user’s individual PC at a fraction of the cost of a timeshared computer.  Windows dominated the desktop environment.  New languages (VB, Java, C#, …) were used to program Windows applications; COBOL and Fortran were rarely, if ever, seen.

Within larger companies, office networks of PCs appeared and emulators replaced actual 3270 terminals.  Using the emulator, users could communicate with the mainframe as before; outside it, they had all their preferred office software to prepare documents, send emails, arrange meetings, and so on.  But communication between the two environments was difficult, as was communication outside the LAN.  Then, in the mid-90s, the Internet changed all this.

The Internet – PCs learn to communicate

The Internet grew from several wide area networking projects in the US, UK, and France, and originally connected universities and defence and research establishments.  The invention in 1989–90 by Tim Berners-Lee of the World Wide Web and the first web browser, and the opening of the Internet to general use in the mid-90s, have had a revolutionary impact on culture, commerce, and technology.  Wikipedia records that from handling 1% of the information flowing through two-way networks in 1993, the Internet was handling over 50% by 2000, and 97% by 2007.  Communication via the Internet, using TCP/IP and HTTP, has almost completely replaced the alternatives.

The Internet has brought us near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites.  Today we can access all these services from our phone, a pocket-sized device with more computer power than the supercomputers of a decade earlier.

Web pages can be much more than a static page of text with pictures.  Technologies like Microsoft’s ASP can retrieve database information and respond to queries as a 3270 application did, but with much more programming and display flexibility.  All of this from commodity hardware costing a few thousand dollars.  So why pay millions for a mainframe?

Where does this leave the Mainframe?

When I say that I work with mainframes, the usual reaction is amazement: “Are any still around?”  The last mainframe was supposed to have been switched off decades ago, but IBM’s mainframe sales continue to grow, and it is vendors like Sun, who loudly proclaimed the “Death of the Mainframe”, that are gone.  IBM continues to invest heavily in mainframe development.  Yet the general perception remains that mainframes are a relic of last century, to be retired as quickly as possible.

This perception is not surprising.  Most programmers, even within companies with mainframes, focus on front-end development – presentation and client-side logic – and very few have the skills required to maintain a CICS system.  COBOL is no longer taught in most colleges.  And the sheer volume of PCs and phones makes it seem that they’ve replaced mainframes.  They haven’t: there are as many mainframes as ever; it’s just that there are thousands of PCs for every mainframe.  PCs have, however, replaced 3270s.

“Application Modernization” is assumed to mean “rewrite the legacy systems in new languages for new platforms”, but major projects to replace these legacy systems have proven expensive and risky.  Not only is it difficult to migrate software built for the mainframe platform, but the mainframe’s massive I/O capability and the very high OLTP capacity possible with transaction management systems like CICS are difficult to match.  For businesses like banks and insurance companies with large OLTP loads, the mainframe remains the most cost-effective platform.

But it must adapt! Your customers want to be able to order your products and pay their bills from their phone or PC. If you can’t satisfy this need you won’t have customers for long. Of course, your customers must be able to trust what you tell them: lose this trust and your business is finished. Legacy systems, developed over decades, have earned this trust. Replacing them is expensive, risky, and unnecessary.  “Application Modernization” doesn’t mean “replace the mainframe” or “replace your COBOL with Java”. It means making your enterprise data available in the right way on the right platforms. This means that you need a controlled way of making data available where it is needed.  You need web services.

Web Services

Web services send data from one program to another over the Internet, following standards that both programs understand even when they were developed with different software on different platforms.  Think of them as web pages designed to be read by a program, not a human.
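For example, a consuming program might receive and parse a JSON payload like the one below. The field names and values are invented for illustration; they are not from any real service:

```python
import json

# A hypothetical response body from a customer-enquiry web service.
response_body = """
{
  "customerId": "C10042",
  "name": "A. Customer",
  "balance": 1234.50,
  "currency": "NZD"
}
"""

record = json.loads(response_body)        # parse the machine-readable "page"
print(record["name"], record["balance"])  # the program uses fields directly
```

Because the format is standardised, the consumer could equally be written in Java, C#, or COBOL: only the agreed structure matters, not the platform at either end.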

For simple situations mainframe data can be exposed with general software like z/OS Connect, but for anything more complicated than accessing a single record you’ll need web service logic, written in one of the languages supported by the server (COBOL, PL/I, Java, etc.). Writing such web service programs is not easy: there is a lot of detail that you must get exactly right, especially if the service might update the data.  A tool like MANASYS Jazz can handle this detail, leaving you to concentrate on the business problem. A few clicks can create a web service communicating with SOAP (WSDL) or REST (JSON), handling one or several records, using complete records or selected fields, and providing enquiry only or full update. If the generator doesn’t provide the flexibility you want, you can edit the MANASYS logic. If even that is insufficient, you can add your own COBOL logic. Click here to see more detail, including an example of the dialog, the Jazz record definition and program, the generated COBOL program, and links to video demonstrations. The result: ready-to-consume web services, hosted directly on the mainframe, written by the people who understand the data.

Web services that provide data for display are conceptually simple, but there are challenges if they are to update it.  Web services are self-contained and atomic, and everything they need must be available from their input message, so we cannot pass intermediate data from one request/response to the next step in the conversation by saving it in a COMMAREA or an equivalent.  MANASYS Jazz implements CICS pseudo-conversational logic in web services using a checksum: a cryptographically secure hash total calculated on enquiry, and recalculated when records are re-read for update.
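The idea is a form of optimistic concurrency control, and can be sketched as follows. This Python model is illustrative only, not the actual MANASYS Jazz implementation; SHA-256, the field layout, and the in-memory "database" are all assumptions:

```python
import hashlib

def record_checksum(record: dict) -> str:
    # Canonicalise the record's fields, then hash them. SHA-256 stands in
    # for whatever hash algorithm the real product uses.
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

# A stand-in "database" holding one record.
db = {"C10042": {"name": "A. Customer", "balance": 100}}

# Enquiry: read the record and send its checksum out with the response.
token = record_checksum(db["C10042"])

def try_update(key: str, new_record: dict, client_token: str) -> bool:
    # Re-read the record; if its checksum no longer matches the one issued
    # at enquiry time, another update intervened, so reject the request.
    if record_checksum(db[key]) != client_token:
        return False  # stale: the client must re-enquire
    db[key] = new_record
    return True

ok = try_update("C10042", {"name": "A. Customer", "balance": 80}, token)
# A second update quoting the now-stale token is rejected.
stale = try_update("C10042", {"name": "A. Customer", "balance": 60}, token)
```

No state is held between requests: the checksum travels in the messages themselves, which is exactly what a self-contained, atomic web service needs in place of a COMMAREA.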

Future-proofing Application Architectures

As we distribute function around a network, if we are to avoid creating future legacies that are even harder to maintain than current ones, we must not lose sight of the key principles we’ve learnt over the last 60 years.  Firstly, it is vital to organize data well.  Fields and records should be cleanly and clearly defined, with related information kept together and unrelated information separated.  If we don’t understand our data model, then we don’t have a firm foundation on which to build our logic.  Secondly, our logic should be structured and encapsulated.  The ideal is objects that provide all the functions required with their data, exposed through clean interfaces.  Data should be validated at source, but revalidated by the service that processes it.  But these principles, so clear in a classroom setting, are not so clear amid the real-world complexities of big data, distributed function, and artificial intelligence.  We are going to have some interesting challenges.
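Revalidation at the service can be as simple as re-running the rules the client already applied, because the service cannot trust that every caller ran the same checks. A minimal sketch, with invented field names and rules:

```python
def validate_customer(record: dict) -> list:
    # Server-side revalidation: repeat the checks even though the client
    # validated at source. The rules here are illustrative assumptions.
    errors = []
    if not str(record.get("customerId", "")).startswith("C"):
        errors.append("customerId must start with 'C'")
    if record.get("balance", 0) < 0:
        errors.append("balance may not be negative")
    return errors

good = validate_customer({"customerId": "C10042", "balance": 50})   # no errors
bad = validate_customer({"customerId": "X1", "balance": -5})        # two errors
```

Keeping one authoritative set of rules at the service, close to the data, is what stops a well-behaved client and a careless one from producing different data quality.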

Robert Barnes