Accelerating Db2

Your Db2 database

The IBM mainframe, with its Db2 database, is the transaction processing workhorse for the world’s biggest banks, insurers and financial services companies. It was designed to execute their high-intensity transaction processing workloads very efficiently. Other platforms are not optimized for these workloads – and that’s the main reason you’re using a mainframe system and a Db2 database today. But as time goes on, your Db2 database is becoming more and more sluggish. And you know what that means – it’s upgrade time. Time to spend serious coin on your mainframe systems. Again. And it may seem like you’re doing this more often than before.

Worse, sometimes upgrades may not even help that much. Some consultants will advise that, eventually, you’ll need to spend a LOT more. Some will tell you that you need to re-architect some of your key applications. Others may warn you that you need to offload some of your processing, or even migrate. (That would be good news for the IT implementation side of their businesses…) Either way, though, you’re probably facing some type of large IT upgrade spend.

But does it really have to be that way? Do you need a system upgrade when your Db2 database gets sluggish? Do you need to upgrade mainframe system resources across the board? Is a sluggish database really an indication that you need to up your IT spending enormously? Well, it might interest you to know that the answer to these questions doesn’t have to be a resounding YES!

Tell me more!

In many cases, the performance of your Db2 database has very little to do with the actual database. As has been drilled into you from day one, IBM’s Db2 database is THE database for business operations. It was designed from the ground up for processing performance, throughput performance, reliability, maintainability, security, and even backwards compatibility. So if it’s so hot, why is it so sluggish?

Well, more often than not, it’s your applications – applications that were very well designed from the outset, but are being asked to process much more than they were originally designed to handle. Originally, they handled batch or online processing – sometimes both. Now, some of these rock-solid applications are buckling under the sheer weight of the transactions being thrust upon them. No longer are they just handling your 9-to-5 transaction processing; they’re handling more types of processing than their designers ever imagined – still the legacy-style workloads, but with 24/7 web browser and mobile workloads added on top.

So what is the answer to this complete paradigm shift for your Db2 applications? It’s simple: optimize the applications. And how do you optimize them? By accelerating them – making very subtle changes to certain database calls so that they leverage memory, both to soften the demands they make on your Db2 database and to make the applications run much faster.

Getting to the Bottom of Your Database Performance Issues

First, you need to understand why your database is so sluggish. It could very well be because your Db2 applications are accessing tremendous amounts of database data for every single business transaction. They’ll typically access what’s known as reference data – data that rarely, if ever, changes and is largely read-only – multiple times per business transaction; maybe dozens or even hundreds of times. And for each and every access, that long database overhead path is being taken.
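
To make the pattern concrete, here is a minimal sketch of what that access pattern often looks like in application code. It uses Java/JDBC purely for illustration – the CURRENCY_REF table, its columns, and the method names are all invented for this example, not taken from any particular application.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical example: one business transaction that repeatedly looks up
// reference data (a currency table) from Db2. Even when the rows sit in a
// buffer pool, every executeQuery() still travels the full database code path.
public class TransactionWithReferenceLookups {

    static String lookupCurrencyName(Connection conn, String code) throws SQLException {
        // Reference data: rarely changes, largely read-only -- yet this
        // query may run dozens or hundreds of times per business transaction.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT NAME FROM CURRENCY_REF WHERE CODE = ?")) {
            ps.setString(1, code);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    static void processTransaction(Connection conn, String[] lineItemCurrencies)
            throws SQLException {
        for (String code : lineItemCurrencies) {
            // One database round trip per line item, all for read-only data.
            String name = lookupCurrencyName(conn, code);
            // ... business logic using the reference data ...
        }
    }
}
```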

(Before you say “wait, what about in-memory data buffering?” – know that even for buffered data, those repeated database accesses still run through that database overhead path.)

Now imagine if you could access that data from unencumbered memory rather than from your database. That is, copy that reference data – maybe 10, 5, or 2 percent or less of your business data – into high-performance in-memory tables, and then allow your Db2 applications to access it directly from there. That would make a tremendous difference to your heavily burdened Db2 database: those multiple database hits to the same data can be eliminated. Fortunately, you can get that today – it’s called in-memory optimization.
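
The “copy once, read many” step might look like the sketch below. A plain Java HashMap stands in here for a vendor’s high-performance in-memory table product; the table and column names continue the hypothetical example above.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

// Sketch of the copy step: read the read-only reference data out of Db2 a
// single time, into an in-memory structure. Every later lookup is then a
// memory access rather than a trip through the database overhead path.
public class ReferenceDataLoader {

    static Map<String, String> loadCurrencyRef(Connection conn) throws SQLException {
        Map<String, String> inMemoryTable = new HashMap<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT CODE, NAME FROM CURRENCY_REF")) {
            while (rs.next()) {
                inMemoryTable.put(rs.getString("CODE"), rs.getString("NAME"));
            }
        }
        return inMemoryTable;
    }
}
```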

Optimizing Your Db2 Applications

This is done very simply – by isolating these repeated database accesses and swapping them for high-performance in-memory table accesses. To implement that, you copy specific, highly accessed read-only data into high-performance in-memory tables. Your Db2 applications still access most of their data normally from your database, but access the highly accessed data from memory using a very short code path. How short? A typical buffered database access may consume 10,000 to 100,000 machine cycles; accessing data from high-performance in-memory tables typically consumes around 400 machine cycles.
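
Continuing the same hypothetical example, the swap itself can be this small. The business logic is untouched; only the reference-data lookup changes. (A real product would expose its own tight API – Map.get() is just a stand-in.)

```java
import java.util.Map;

// The swapped version of the earlier transaction: identical logic, but the
// reference-data lookup now hits the in-memory table instead of issuing SQL.
public class TransactionWithInMemoryLookups {

    static void processTransaction(Map<String, String> currencyRef,
                                   String[] lineItemCurrencies) {
        for (String code : lineItemCurrencies) {
            // A few hundred machine cycles instead of a full database path.
            String name = currencyRef.get(code);
            // ... identical business logic using the reference data ...
        }
    }
}
```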

Effecting this optimization is not complex. None of your Db2 application logic changes. Most database accesses are unchanged. Only a small number of data accesses use a tight API to access high-performance in-memory tables. But the difference in terms of both access time and run time can be astounding.

Optimized applications can have a beneficial effect on your database. When multiple applications are optimized using high-performance in-memory technology, the cumulative effect can result in significant improvements to overall database performance. Take away billions of repetitive read-only accesses, and your database performance will be much improved.

Expected Results

High-performance in-memory technology is being used today by many of the largest banks, financial services companies and insurers, helping them to optimize their Db2 databases and Db2 applications significantly: batch processing times reduced from eight hours to less than one hour, application I/O reduced by 99%, CPU usage cut in half. If you’re running a mainframe shop and you feel that your Db2 database is holding you back, you owe it to yourself to look into high-performance in-memory technology.

Who Provides This Technology?

High-performance in-memory technology has been around for a while, and there are two companies offering similar solutions – DataKinetics, a leader in Mainframe Performance and Optimization Solutions, and of course, IBM.

Originally published on Planet Db2.

Regular Planet Mainframe Blog Contributor
Larry Strickland, Ph.D. is the Chief Products Officer of DataKinetics. He has been a C-level executive for years, making technology simple for the end user and understanding the impact of technology on the user. He is passionate about product innovation that helps clients solve their most challenging issues while delivering solutions that transform their business. Larry’s long-term work with IBM and the mainframe community at large has earned him the honor of being recognized as an IBM Champion.
