DevOps

DevOps is all the rage, positioned by some as the panacea of new-age IT management: shorter development times, improved business processes, reduced time to market, more engaged teams, greater business agility, and more. If you’re not doing it, you’re doing everything wrong. Your organization is a dinosaur. Your competition is doing it, and they’re going to take away your market share. And so on.

Conversely, for two years now, people have been writing about (and demonstrating) what can short-circuit your DevOps efforts: how it can result in no real improvements at all, and can even bring your IT organization screeching to a standstill. And it’s still happening a lot. In fact, when IBM first experimented with the approach (before the name DevOps even existed), the effort failed completely. The main reason, among many, was that people didn’t properly understand their own workflows, and trying to fix something you don’t understand pretty much guarantees failure.

Worst-case scenarios aside, DevOps can, in fact, transform your IT organization (and even your business) for the better, but only with serious forethought. That means ensuring that the organization is making the effort to understand current workflows, implementing standard processes, insisting on cooperation between all groups, and fostering a cultural change within IT and, more often than not, within the entire organization.

But you’ve read all this before. What has been missing is the all-inclusiveness required in data centers that leverage many different platforms: data centers that run mainframe systems alongside their UNIX, Linux, WinTel and cloud-based servers. In many cases, mainframe systems process 75% or more of a company’s revenue streams, yet the mainframe group is typically not included in DevOps activity. So think about this: how can a business effectively implement DevOps in its IT organization while excluding the very IT department(s) that generate the revenue for the business? It makes no sense at all.

So why on earth do companies bar the mainframe from DevOps activity? It comes down to another buzzword we’ve all heard: bimodal IT. Bimodal IT is a concept Gartner came up with some time ago that really just described what was already going on in business anyway: IT divided into two distinct areas, with the IT people supporting the mainframe on one side and the rest of the IT organization on the other. What we’re really describing is the encasing of the mainframe and its supporting IT personnel in a silo: costs frozen inside, new funding allowed only outside.

The thinking behind bimodal IT was to control the high cost of computing on the mainframe. Unfortunately, it was based on faulty (or perhaps misleading) information: the truth is that in transaction-intense environments, the mainframe is actually the most cost-effective platform on the planet. But we digress.

If you want to make DevOps work for your organization, you absolutely must have a complete understanding of all things IT before you put DevOps in place. A big part of that is knowing what IT systems are costing the business – what costs how much, and who’s using what.

There are excellent visualization tools available now, but most of the offerings in this space are just reporting tools rather than tools that offer actionable IT intelligence. They are capable visual tools that transform IT data into graphs and bar charts, which definitely adds value over the default scenario of forcing CIOs, DBAs and managers to delve through reams of text-based log data. But beyond that, they generally don’t add any real intelligence to the picture.

There are some tools that do offer IT business intelligence, but precious few that will give you the big picture across all IT systems. Most of these tools apply only to the distributed side of the data center, and offer no intelligence on the mainframe side. And while there are some similar technologies available for the mainframe, they are narrow in scope and do not offer the same level of intelligence as those on the distributed side.

What is needed is a data center-wide solution that provides IT business intelligence on all platforms – UNIX, WinTel and the mainframe. And fortunately, there are a small number of solutions that provide this transparency into the data center.

The key is to combine the reams of IT data already being collected with a minimal amount of cost information on server attributes (server memory, disk space, CPU core capacity and so on), along with similar cost information on mainframe CPU capacity, MSU usage and the like. Added to this would be an even smaller amount of business information: department names, asset ownership, system names, LPAR names, application names and so on.
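To make that combination concrete, here is a minimal sketch in Python of joining collected usage data with unit-cost rates and department ownership to produce per-department cost figures. All of the record fields, rates, system names and departments here are hypothetical illustrations, not the schema or pricing of any particular product.

```python
# Minimal sketch: turning raw usage data plus cost and business
# metadata into per-department IT cost figures.
# All values and field names below are illustrative assumptions.

# Raw IT data: resource usage per system, tagged with the small
# amount of business information (department, platform) described above.
usage = [
    {"system": "LPAR01", "department": "Payments",  "platform": "mainframe", "msu_hours": 420},
    {"system": "LPAR02", "department": "Claims",    "platform": "mainframe", "msu_hours": 150},
    {"system": "web-07", "department": "Marketing", "platform": "linux",     "cpu_core_hours": 2600},
]

# Minimal cost information: assumed unit rates per platform metric.
rates = {
    "msu_hours": 3.75,        # cost per MSU-hour (illustrative)
    "cpu_core_hours": 0.06,   # cost per core-hour (illustrative)
}

# Combine usage, cost and business metadata into cost per department.
cost_by_department = {}
for record in usage:
    cost = sum(record.get(metric, 0) * rate for metric, rate in rates.items())
    dept = record["department"]
    cost_by_department[dept] = cost_by_department.get(dept, 0.0) + cost

for dept, cost in sorted(cost_by_department.items()):
    print(f"{dept}: ${cost:,.2f}")
```

Even a toy join like this shows who is using which resources and what that usage costs; a real IT business intelligence solution does the same thing at data center scale, across platforms.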

Adding cost and business information to IT data transforms it into IT business intelligence: you can easily see who uses specific resources and what that usage costs the company. It can even shine a light on the impact of new applications, changes to business processes, or business mergers. The reality is that this transparency can transform the IT organization from a huge cost center into a window on business efficiency.

These solutions are truly DevOps-enabling technologies, helping CIOs and managers better understand their own workflows, costs and IT resource usage. Without this level of data center-wide intelligence, and without the transparency it provides, DevOps is handicapped, relying instead on company personnel to supply the intelligence. Unfortunately, few people (if any) in most organizations have access to, and an understanding of, all of this information. Without a capable IT business intelligence system in place, the information needed for effective DevOps just won’t be available.

If you’re dedicated to effective, systems-wide DevOps, you owe it to yourself to take a look at IT business intelligence.

Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. As a “Friend of the Mainframe”, Allan’s experience addressing both the technical and business needs of Global Fortune 500 customers has provided him with great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.
