I read an interesting article a while back on how to pull off a DevOps implementation successfully, "10 Ways to Win at DevOps," which identified some important ideas requiring attention in any new DevOps implementation. Some are pretty obvious: getting buy-in from the C-suite, creating and maintaining real feedback loops between groups, and carefully measuring progress. However, tucked away under the heading "Remember The Systems" was something that caught my eye: DevOps gives you the opportunity to get rid of wasteful processes, streamline inefficiencies, and bring IT systems into alignment with business needs.
That is a sentence that made me say "uh-oh." Taken the wrong way, that advice could lead to an IT disaster of the first order. Let me tell you what I mean by that. But first, understand that I am a champion of DevOps; it is clearly in the best interest of any serious IT organization, at least any organization interested in things like profitability and competitive advantage. It applies to the newest startups as well as it does to the IT organizations of the largest banks and insurers running both distributed computing systems and mainframe systems.
There are many CIOs who, given the chance, would gladly divest their departments of their aging mainframe systems. You know the story: the clunky systems, the retiring gray-beard support staffs, the crazy-expensive monthly licensing payments. These are the CIOs who will one day bring the whole thing down on themselves. I've talked about this many times before: anyone who focuses on the disadvantages of mainframe computing and ignores (or is ignorant of) what that technology is actually doing for the organization will soon learn some very hard lessons (best case) or is doomed to fail (worst case).
My point is this: if, in the course of preparing for a serious DevOps implementation, CIOs see an opportunity to divest their organizations of a technology they view as wasteful and inefficient, without fully understanding its actual value (the big picture), they may very well be sabotaging that DevOps project right from the start. And that includes CIOs who have some experience with small mainframe migration projects.
A failed or unsatisfactory mainframe migration bolted onto the side of a wholesale DevOps culture change has the potential to completely sidetrack that DevOps implementation. A large-scale mainframe migration is a gargantuan project that can overshadow and overwhelm any simultaneous culture-change effort. How so? Large-scale mainframe migration projects can easily run into the tens of millions of dollars, which makes them more than just IT projects: they become highly visible to the entire organization. That budgetary impact puts a giant magnifying glass on the whole effort. And you want upper management to buy into that AND a simultaneous DevOps implementation project? Doable, but it's a tall order.
Now, certainly there will be mainframe migration projects that make sense, especially small-scale ones, if there are any of those left. But organizations running mainframe systems today are doing so largely because they have to. It's not so much the cost and effort involved in getting off the mainframe; it's that the mainframe is the right platform for the job. Broken record time: the mainframe is singularly the best machine available for large-scale transaction processing, and nothing can beat its throughput capacity.
Therefore, the mainframe and its support and development groups must be fully integrated into any DevOps implementation. Remember that DevOps is an organizational culture change, and it must encompass the entire organization, not just selected parts of it. Let's not let Bimodal IT, or as I like to call it, Rogue IT, mess this up as well.
Originally Published on LinkedIn Pulse.