April 2018 marked the 54th anniversary of the IBM mainframe. It is doubtful anyone could have predicted such a long and successful life for it. A quick historical sketch: IBM launched the first modern mainframe, the System/360, on April 7, 1964. Capable of 229,000 calculations per second, it was the mainframe that played a key role in taking mankind to the moon. Fast forward to the current era: banks, the healthcare industry, insurers, and the government sector all still rely on mainframes. Mainframes may not be trending as the hottest technology of the decade, but they remain the backbone of these and various other industries thanks to their reliability, availability, security, and transaction processing speed.

IBM defines a mainframe as “a large computer, in particular one to which other computers can be connected so that they can share facilities the mainframe provides. The term usually refers to hardware only, namely, main storage, execution circuitry and peripheral units.”

In many ways the mainframe closely resembles the present-day “cloud” in its huge infrastructure, high computing power, and storage capabilities. There are significant differences, of course (the cloud is distributed while mainframes are centralized, for example), but rather than holding endless debates about the cloud killing or surpassing mainframes, perhaps we can bring the two together and get the best of both. We can leverage our current investments in mainframes while adopting the cloud model: back up mainframe data to the cloud, use the cloud as a disaster recovery solution for mainframes, and so on; the possibilities are endless.

In this article, we explain some key points that can help mainframe administrators, and organizations currently using mainframes, get started and progress on their cloud journey. We will explain the differences between the mainframe and cloud operating models, and highlight use cases that can smooth the transition to a hybrid model that takes advantage of both mainframes and the cloud.

Essential Knowledge Before Setting Off on Your Cloud Journey

If you are a mainframe administrator, or part of an organization planning to head off on its cloud journey, you probably have many doubts and questions you’d like answered. With the framework below, we will try to lay out the right approach to help your transition go smoothly:

  • Let’s do a basic training course – Before embarking on any new journey, it is always good to do some homework. As an organization, plan and arrange a basic cloud administration training course for the core team so that they can learn the jargon and the important technical terms. If you think it will be easier to hire external resources and get started, keep in mind that they may need a good amount of time to learn the organizational culture, slowing down the implementation cycle.
  • Consider it a lab experiment – Public cloud resources are free up to a defined usage level, so there is little up-front cost to experimenting. Companies should encourage these experiments and appreciate the lessons learned from failures.
  • Link up with a good partner – As an organization, you can link up with an external partner who can help you set up the framework. Later on, once the team is mature enough, the show can be run in-house, without external support.
  • Be cautious; start low and slow – It is always good to start with a low-criticality use case. Migrating the database of a tier-one application to the cloud is probably not the right first step; a failure there can have a huge impact on the overall journey to the cloud. Archiving or backing up data in the cloud, or standing up a disaster recovery site in the cloud, are good starting points.

Cloud vs. Traditional IT: Life Is Not Going to Be the Same for an Administrator

The cloud operating model represents a complete paradigm shift from the traditional way we have been running our IT shops, and mainframe administrators should take note of it. We will touch upon some of the key differences:

  • Elasticity and Scalability – The cloud is elastic: resources such as compute and storage are added to and removed from servers on the fly as demand rises and falls. As an administrator, you no longer need to worry about augmenting servers with extra compute and memory to tackle month-end peak loads, or adding storage to hold more data; the cloud takes care of it automatically (see the scaling sketch after this list).
  • Resilience – Although there have been multiple cloud-related outages, we must still agree that the cloud is one of the most resilient architectures ever built. Applications and data are distributed across data centers in different countries and continents. Replicating this resiliency in your own corporate data centers, at the prices the various cloud players offer, is next to impossible.
  • OpEx model – The cloud operates on an operating expenses (OpEx) model: you pay at regular intervals for the resources you actually use, and there is no single monthly peak four-hour interval (as with the mainframe’s rolling four-hour average pricing) that sets the bill for the whole month. There is also no need to order storage boxes, compute chassis, and other infrastructure components, track shipments and match bills of materials, or rack and stack servers; huge infrastructure is available at the click of a button.
  • Shared responsibility – Information security in the cloud follows a shared responsibility model, and administrators should clearly understand their role. Your cloud service provider is responsible for physical security: the data center, the physical servers, and the storage and network devices operating out of the data center. The customer, however, has to protect customer data through proper configuration of the various infrastructure components, encryption, and user and access management (a minimal configuration sketch also follows this list).
  • Compliance – Compliance is one of the key areas where the role of admins will change drastically; they will have to be more vigilant from now on. Which data sets can be moved to the cloud in line with regulatory and compliance guidelines, and which have to stay in-house, are some of the important decisions they will now be tasked with. Additionally, some key questions you should be ready to answer are: Which country can the data reside in? Is it a shared cloud infrastructure? How are on-premise controls enforced in the cloud? Having the answers ready will save you a lot of time.
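
To make the elasticity point concrete, here is a minimal sketch of a target-tracking scaling policy using boto3, the AWS SDK for Python. It assumes an AWS account with an existing EC2 Auto Scaling group; the group name, policy name, and CPU target are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a target-tracking policy: the group adds or removes instances
# automatically to hold average CPU utilization near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="month-end-batch-asg",  # hypothetical group name
    PolicyName="keep-cpu-near-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```

With a policy like this in place, a month-end peak simply triggers extra instances, which are released once the load subsides.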
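
On the customer side of the shared responsibility model, much of the day-to-day work is exactly this kind of configuration. Below is a minimal sketch, again with boto3 and a hypothetical bucket name, that enables default encryption and blocks public access on an S3 bucket:

```python
import boto3

s3 = boto3.client("s3")
bucket = "corp-mainframe-exports"  # hypothetical bucket name

# Encrypt every new object by default with an AWS-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```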

How to Get Started on Your Cloud Journey

Given the advantages the cloud offers, such as elasticity, high availability, and agility, it is clear that you cannot neglect it without lagging behind your competitors. In this section, we will explain some of the use cases that make the best starting points when setting out on a cloud journey:

  • Backup – It is essential that you back up your critical data so that, in the event of data corruption or an outage, recovery can be initiated quickly. Many regulatory authorities also mandate a proper backup and recovery plan before you go live with a new application. Cloud-based backup is one of the easiest use cases to get started with: if you are running critical workloads on mainframes, you can opt for cloud-based backups and eliminate tape architecture and other expensive solutions (see the upload sketch after this list).
  • Disaster recovery (DR) – A secondary site really comes in handy when there is a natural disaster or power failure. The cloud can act as your DR data site by storing data at different data centers within the same country or on two different continents, according to your needs. Predefined service level agreements guarantee a quick recovery, saving your company both time and money.
  • Archive – The cloud is ideal for storing your infrequently accessed data. Heavily regulated sectors such as the financial and healthcare industries are bound to store data for long durations, often 10 years or more. Keeping this cold data in your on-premise data center is not a forward-thinking strategy; instead, offload it cheaply to the cloud and restore it whenever needed. Not only does this save you the effort of buying disk or tape hardware, it is also the most cost-effective solution (a lifecycle sketch also follows this list).
  • Test setup – If you are planning a lab environment where your teams will play around with different technologies, the cloud is the way to go. Instead of waiting out a long provisioning cycle, the cloud delivers infrastructure in no time, and it can be decommissioned in the blink of an eye (see the last sketch below). Time to market is the key differentiating factor, and the cloud guarantees your company this expediency.
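
For the backup use case, a simple pattern is to unload the data set on the mainframe, transfer it to a staging server, and push the flat file to object storage. A minimal boto3 sketch; the file path, bucket name, and key layout are hypothetical:

```python
import boto3
from datetime import date

s3 = boto3.client("s3")

# A mainframe data set that has already been unloaded and transferred
# to a flat file on a staging server (hypothetical path).
dump_file = "/staging/prod.payroll.unload"
key = f"backups/{date.today():%Y-%m-%d}/prod.payroll.unload"

# Push the dump to S3; one object per data set per day.
s3.upload_file(dump_file, "corp-mainframe-backups", key)  # hypothetical bucket
```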
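
For archiving, you often do not even move the cold data yourself; a lifecycle rule migrates it to a cheaper storage tier automatically. A sketch, assuming S3 and a hypothetical bucket, that moves objects to Glacier after 90 days and expires them after roughly 10 years to match a long retention window:

```python
import boto3

s3 = boto3.client("s3")

# Objects under the archive/ prefix move to Glacier after 90 days
# and are deleted after ~10 years (3650 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="corp-mainframe-archive",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```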
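
And for the lab use case, provisioning and decommissioning come down to a pair of API calls. A hedged sketch with boto3; the AMI ID is a placeholder and the instance type is just an example:

```python
import boto3

ec2 = boto3.client("ec2")

# Spin up a small instance for an experiment (placeholder AMI ID).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "lab"}],
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

# ...and tear it down the moment the experiment is over, so nothing
# keeps billing overnight.
ec2.terminate_instances(InstanceIds=[instance_id])
```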

Summary

Mainframes have been the most stable and mature platform for the last 50+ years, hands down. These days, however, it is all about collaboration, innovation, and new economics. The business landscape is changing at a lightning pace and requires that you dig deep to climb new heights. Hence, as a company you should make full use of your current investments in mainframes while leveraging the technological advancements of the last decade, the cloud being one of the biggest breakthroughs. This will require adapting to a new working style, as the cloud is quite different from the traditional way of working; the results, however, will speak for themselves in the rewards you reap in huge cost savings and time-to-market reduction.

With over two decades of hands-on experience in enterprise computing, data center management, mainframe system programming, and storage development, I’m now on a mission to accelerate cloud adoption at large enterprises by making their most trusted core business platforms more flexible, affordable, and cloud compatible.

Connect with Gil on LinkedIn.
