Recently, we helped one of our clients – a large American investment bank – “find” 30 million dollars per year: money that, for technical reasons, they had not been able to collect. When we helped them get it, it was like found money to them.

Before our association, they were running mainframe systems for their business-critical transaction processing, using their own state-of-the-art custom mainframe applications for both online and batch processing.

But they had a small problem: their batch windows were full. Their total batch window time was 8 hours, dispersed throughout the day, and they needed more than 7 of those hours to complete processing of the day’s worldwide business transactions. That wasn’t a real problem – unless something went wrong. Something going wrong with a business computer? Does that ever happen? (I’ll let you answer that one, based on your personal experiences with your own computers.)

They have the best business computers available, and their custom applications are excellent high-performance transaction processing systems, but sometimes batch runs crap out. They still do occasionally. Why? The reasons are varied, but it mostly comes down to human error. For example, applications are updated daily so the bank can react to changing business conditions – which is normal – but on a busy day, buggy code sometimes makes it into production. Sometimes it does no harm; other times a batch application can underperform or even abend.

The bank, being a cutting-edge technical shop, has plenty of smart people who can fix serious issues like this in a few minutes or an hour, and get the production system up and running again to finish the day’s processing. Their problem was timing: if a batch application died halfway through a batch window, they weren’t going to be able to complete the processing within that window. And that often meant the previous day’s processing was not completed before the next business day.

Well, that’s a big deal: the bank could not collect interest on transactions left incomplete that day. If that happened 3 or 4 times a month (and it did), they stood to lose as much as 30 million dollars a year (and they did – every year)! Yes, you read that right – they were leaving 30 million dollars a year on the table!

But is that a big deal? Well, this bank had assets in excess of 100 billion dollars and was growing its business at a rate of about 10 billion dollars a year. Against that, a measly 30 million dollars is like a raindrop in the ocean. But let’s not kid ourselves: 30 million dollars lost is not a raindrop in the ocean – it’s more like a teardrop. Or 30 million of them…

Either way, this was a significant business problem that the bank’s IT organization – and no doubt the board of directors – took very seriously. What were the bank’s options to mitigate this problem?

One option was to rework their business applications into something that wouldn’t break, or that would run faster. Great idea, but as I mentioned, their applications were THE best anywhere at that time. There wasn’t anything faster, more reliable, or better.

Another option was simply to buy more hardware and fix the problem with lots of money and brute force – surely that has to work! Well, it wouldn’t matter how much hardware you threw at a problem like this: faster machines would end up waiting for data just as their current machines did…

Still another option was to look at their business practices – perhaps their applications were too valuable to update so often; maybe they needed to keep them steady-state, updating them only once a month or every six months. Surely 30 million dollars could justify demanding that of the IT department? Well, there’s that pesky ever-changing business environment to deal with – changing regulations, interest rates and exchange rates. And those darn competing banks always coming up with ways to steal market share from them. And their own annoying marketing geniuses coming up with ways to stay two steps ahead of the competition. Nope, these applications had to be changed on a regular basis just to maintain their lead in the market. They weren’t going to voluntarily put handcuffs on the business.

Yet another option was to find ways to make their applications simply run faster. But they already had the best minds working on that, and they were on the leading edge as it was. So, what about a technology company that specializes in this type of work? Unfortunately, the big mainframe software companies didn’t have much to offer. But there was one company already solving this problem for a couple of competing investment banks in New York City.

And that’s when DataKinetics came into the picture at the biggest investment bank in America. Our product allowed them to modify their existing applications – yes, just modify them, not rewrite or replace them – to make them run faster. A whole lot faster.

The secret sauce that we offered was data analysis – figuring out what data was being accessed most often – think interest rates, exchange rates, and customer information like account numbers, account balances, and even names and addresses – and providing ways to access just that data faster. The interesting thing is that this type of data accounts for a very small proportion of the total data – 5% or less – yet it represents the bulk of the workload. A simplified sketch of the idea follows below.
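To make that concrete, here is a minimal, hypothetical sketch in Python – not our actual tooling, and the log format and function names are invented for illustration – of the kind of access-frequency analysis that identifies the small set of “hot” keys doing most of the work:

```python
# Hypothetical sketch (not DataKinetics' tooling): tally which record keys
# an application touches most often, from a simple list of accesses.
from collections import Counter

def hot_keys(access_log, coverage=0.80):
    """Return the smallest set of keys that accounts for `coverage`
    of all accesses -- typically a tiny fraction of the total keys."""
    counts = Counter(access_log)          # accesses per key
    total = sum(counts.values())
    covered, hot = 0, []
    for key, n in counts.most_common():   # most-accessed keys first
        hot.append(key)
        covered += n
        if covered / total >= coverage:
            break
    return hot

# Example: a handful of reference keys (rates, active accounts) dominate.
log = ["USD/CAD", "ACCT-1001", "USD/CAD", "PRIME", "USD/CAD", "ACCT-1001",
       "USD/EUR", "USD/CAD", "PRIME", "ACCT-9042"]
print(hot_keys(log))   # e.g. ['USD/CAD', 'ACCT-1001', 'PRIME']
```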

The solution was to take that data and put it into memory – high-performance in-memory tables – and let the bank’s applications access it from there. It required no changes to the application logic and no changes to the bank’s databases. And what difference did that make? It was astounding.
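Here is a similarly hedged sketch of that pattern, assuming a generic key-value read path: the hot reference data is loaded once into an in-memory table, and anything not in it falls through to the existing database read, so the surrounding application code keeps calling the same lookup it always did. The class and function names are hypothetical, not the product’s API.

```python
# Hypothetical sketch, not the product's API: an in-memory table that sits
# in front of the database read path for the hottest reference data.
class InMemoryTable:
    def __init__(self, db_fetch, preload_keys):
        self._db_fetch = db_fetch                              # existing DB read routine
        self._table = {k: db_fetch(k) for k in preload_keys}   # loaded once into memory

    def get(self, key):
        # Hot keys are served straight from memory; everything else falls
        # through to the database exactly as before, so application logic
        # and the database itself stay unchanged.
        if key in self._table:
            return self._table[key]
        return self._db_fetch(key)

# Usage with a stand-in for the real database call:
def db_fetch(key):
    return {"key": key, "value": "...fetched from DB..."}

rates = InMemoryTable(db_fetch, preload_keys=["USD/CAD", "USD/EUR", "PRIME"])
print(rates.get("USD/CAD"))   # served from memory
print(rates.get("GBP/JPY"))   # falls through to the database
```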

The bank’s problems in this area were solved. Their batch runs, which at one time took more than 7 hours in total to complete – across several batch windows – were now finishing in less than 30 minutes in total. Yes, you read that right: a better than 90% reduction in total batch time, just by adding a little magic to their existing systems.

The business results were no less astounding. That 30 million dollars? Well, they got all of it after they implemented an in-memory technology solution. That was 30 million dollars of found money – money that had been lost to them, unattainable. They collect it now, and they won’t ever be in that position again. Even better, the in-memory solution sharply reduced their machines’ resource usage, resulting in an additional monthly saving of 150 thousand dollars in operational expense. Now that truly is a drop in the ocean, but ask any banking exec: any found money is a good thing.

You can see the customer success story here.

Originally published at www.dkl.com

Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. A “Friend of the Mainframe”, Allan has experience addressing both the technical and business needs of Global Fortune 500 customers, which has given him great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.
