Reducing or eliminating costs is an important goal for every IT department. Each decision regarding your computing resources must be weighed not only against the value it can deliver to your organization, but also against its cost to procure, implement, and maintain. And in most cases, if a positive return on investment cannot be calculated, the software won’t be adopted or the hardware won’t be upgraded.

An often overlooked opportunity for cost containment comes from within the realm of your capacity planning group. Capacity planning is the process of determining the production capacity needed by an organization to meet changing demands for its products. But capacity planning is perhaps a misnomer, because this group should not only be planning your capacity needs, but also managing your organization’s capacity. Actively managing your resources to fit your demand can reduce your IT department’s software bills… especially in a mainframe environment.

Why is the mainframe especially relevant? Well, the perception that the total cost of mainframe computing is high continues to linger… and software is a large portion of that cost. The pricing model for most mainframe software remains based on the capacity of the machine on which the software will run. Note that this pricing model reflects the potential usage based on the capacity of the machine, not the actual usage. Some vendors offer usage-based pricing. You should actively discuss this with your current ISVs as it is becoming more common, more accurately represents fair usage, and can save you money.

IBM offers license charging methods that are based on variable workload usage, the predominant one being Advanced Workload License Charges, or AWLC. AWLC applies to many of IBM’s popular software offerings, including products such as z/OS, DB2, IMS, CICS, MQSeries, and COBOL. It is a monthly license pricing metric designed to more closely match software cost with actual usage. Some of the benefits of AWLC include the ability to:

  • Grow hardware capacity without necessarily increasing your software charges
  • Pay for key software at LPAR-level granularity
  • Experience a low cost of incremental growth
  • Manage software cost by managing workload utilization

With AWLC, your MSU usage is tracked and reported by LPAR, and you are charged based on the maximum rolling four-hour (R4H) average MSU usage. R4H averages are calculated each hour, for each LPAR, for the month. Each product is then charged based on the LPARs in which it runs. All of this information is collected and reported to IBM using the SCRT (Sub-Capacity Reporting Tool) or the MWRT (Mobile Workload Reporting Tool). Both use SMF data, namely SMF 70-1 and SMF 89-1 / 89-2 records. So you pay for what you use… sort of. You actually pay based on LPAR usage. Consider, for example, if you have both DB2 and CICS in a single LPAR, but DB2 is only minimally used while CICS is used heavily. Because they are in the same LPAR, you’d be charged for the same amount of usage for both. But it is still better than being charged based on the full capacity of your entire CEC, right?
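The rolling-average arithmetic described above can be sketched briefly. This is a minimal illustration assuming one MSU utilization sample per hour per LPAR; in practice the figures come from SMF 70-1 records processed by SCRT, and the sample data here is purely made up.

```python
# Sketch of the rolling four-hour (R4H) average calculation that drives
# AWLC charges. Assumes one MSU sample per hour; real data comes from
# SMF 70-1 records via SCRT. Sample values below are illustrative only.

def r4h_averages(hourly_msu):
    """Return the R4H average for each hour (a window of up to 4 samples)."""
    averages = []
    for i in range(len(hourly_msu)):
        window = hourly_msu[max(0, i - 3):i + 1]
        averages.append(sum(window) / len(window))
    return averages

def monthly_peak_r4h(hourly_msu):
    """The billable figure is the month's maximum R4H average."""
    return max(r4h_averages(hourly_msu))

# A one-hour spike is diluted across the four-hour window:
usage = [100, 100, 100, 500, 100, 100, 100]
print(monthly_peak_r4h(usage))  # 200.0, not the instantaneous peak of 500
```

Note how the four-hour window smooths out short spikes: a single hour at 500 MSUs among hours at 100 MSUs yields a billable peak of only 200 MSUs.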

There are other variations of AWLC, including recent IBM pricing models such as Mobile Workload Pricing (MWP), zCAP (Collocated Application Pricing), and Country Multiplex Pricing (CMP). Or you may have heard of VWLC, an earlier version of AWLC that applies to older mainframe models. The actual prices and discounts vary from model to model, but at their core they all work basically the same as AWLC, so in this post I will simply use the term AWLC to apply to all of IBM’s sub-capacity pricing models.

Soft Capping

You can take things a step further by implementing soft capping on your system. Soft capping is a way of setting the capacity for your system such that you are not charged for the entire capacity of your CPC, but at some lower defined capacity.

Without soft capping you are charged the maximum R4H average per LPAR. With soft capping, your charge by LPAR is based on the maximum R4H average or the defined capacity that you set, whichever is lower.

The downside to soft capping is that you are setting limits on the usage of your hardware. Even though your machine has a higher capacity, you have set a lower defined capacity. If the R4H average exceeds the defined capacity, the workload your system can process is capped at the defined capacity level until the R4H average dips back below it, at which point your system is no longer capped.

Sites that avoid soft capping usually do so because of concerns about performance or the size of their machines. This can be a misguided approach because soft capping coupled with capacity management can result in significant cost saving for many sites.

Of course, it can be complicated to set your defined capacity appropriately, especially when you get into setting it across multiple LPARs. There are tools on the market to automate the balancing of your defined capacity setting and thereby manage to your R4H average. The general idea behind such tools is to dynamically modify the defined capacity for each LPAR based on usage. The net result is that you manage to a global defined capacity across the CPC, while increasing and decreasing the defined capacity on individual LPARs. If you are soft capping your systems but are not seeing the cost-savings benefits you anticipated, such a tool can pay for itself rather quickly.
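The balancing idea behind such tools can be sketched in a few lines. This is an assumption-laden illustration of the principle only: it holds the global defined capacity constant across the CPC while redistributing it among LPARs in proportion to their current R4H usage. Real products apply far more sophisticated policies than straight proportional allocation.

```python
# Hedged sketch of dynamic defined-capacity balancing: the CPC-wide
# total stays constant while individual LPAR caps shift toward the
# LPARs that need the capacity. Names and figures are illustrative.

def rebalance(current_caps, r4h_usage):
    """Redistribute the global defined capacity in proportion to R4H usage."""
    global_cap = sum(current_caps.values())     # total never changes
    total_usage = sum(r4h_usage.values())
    return {lpar: global_cap * use / total_usage
            for lpar, use in r4h_usage.items()}

caps = {"PROD": 400, "TEST": 200}               # current defined capacities
usage = {"PROD": 450, "TEST": 50}               # PROD is being squeezed
print(rebalance(caps, usage))                   # PROD grows, TEST shrinks,
                                                # global cap of 600 is preserved
```

The key property is that the CPC-wide defined capacity, and therefore the billing ceiling, never grows; capacity is only moved from idle LPARs to busy ones.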

Summary

Managing mainframe software costs by adopting AWLC and soft capping techniques can help ensure a cost-effective IT organization. In today’s cost-cutting, ROI-focused environment, doing anything less is short-sighted.

Regular Planet Mainframe Blog Contributor
Craig Mullins is President & Principal Consultant of Mullins Consulting, Inc., and the publisher/editor of The Database Site. Craig also writes for many popular IT and database journals and web sites, and is a frequent speaker on database issues at IT conferences. He has been named by IBM as a Gold Consultant and an Information Champion. He was recently named one of the Top 200 Thought Leaders in Big Data & Analytics by AnalyticsWeek magazine.
