Contributed by Rubin Worldwide
Written by Dr. Howard A. Rubin, Professor Emeritus of Computer Science, City University of New York

This is the first of a two-part article on IT benchmarking.  Watch for part two in the April issue of WSTA Digital News.

Part 1: Background and “Benchmarks Behaving Badly”

To some in the banking and financial sector, the announcement that the company is about to embark on an IT benchmark evokes concerns about an endless number-seeking exercise of questionable value.  To others, it stirs fear of being graded against an external consultant’s “IT report card.”  To still others, the whole process is clouded by doubt that any organization (or data set) exists that could serve as the basis of a valid comparison, coupled with the belief that the company itself is so unique that external comparability simply does not exist.  This latter perspective is usually supported by discussions of the company’s business mix, geographic footprint, risk profile, etc.

But to others, perhaps the most informed, such an announcement is not noticed at all, because benchmarking has been woven into the fabric of the way the organization works: benchmarking is not an episodic event but a continuous evaluation of the organization’s position relative to meaningful external data, in a sense the “market data” of IT.  Those organizations mine the benchmark data and their own data using analytics that, like a “heat-seeking missile,” enable the formulation of hypotheses (and actions) to improve both the technical performance and the business value performance of the technology organization.

From a purely definitional vantage point, let’s define a benchmark as “a standard or point of reference against which things may be compared or assessed.”  Other definitions are broader and add important context and objectives: “A measurement of the quality of an organization’s policies, products, programs, strategies, etc., and their comparison with standard measurements, or similar measurements of its peers.  The objectives of benchmarking are (1) to determine what and where improvements are called for, (2) to analyze how other organizations achieve high performance levels, and (3) to use this information to improve performance.”

However, the dominant reasons why IT benchmarking is not a standard, integrated practice have to do with the external comparability issue mentioned earlier and with past experiences of “victims” of benchmarks that were poorly developed, misused, or simply decayed in relevance under scrutiny.  Imagine the reaction at a company when the IT benchmarks being provided, although based on a large sample, represent the IT profiles of businesses with lower margins and growth than the company, or perhaps one-tenth its revenue!

With this short background as a foundation, the best way to proceed in “Building a Better Benchmark” is to work backwards from what the organization hopes to learn and how to attain those learnings.

Start with More Questions Than Answers – The Basics

Clearly there is no shortage of questions to be asked and addressed via benchmarking.  However, organizations that have been able to turn benchmarking into a value-adding tool (as opposed to a time-draining exercise) tend to ask similar types of questions, including:

  • How do we look as compared to our peer landscape?
  • How do we look as compared to the technology scale economics of our sector?
  • What can we learn from other sectors?
  • How do we compare in our technology economics with the best-in-class profile of top-performing peers, overall and for each of our business segments? (synthetic benchmark)
  • How does our yield on our technology compare (technology intensity versus margin)?
  • How does our core technology operating efficiency compare? (Run the Business costs)
  • How does our investment profile compare? (Change and Build the Business costs)
  • How are our technology expenses “moving” with our business (change in revenue versus change in technology expense)?
  • Do we have an appropriate expense structure to support the volatility of our business (fixed vs. variable costs, and response time)?
  • Is our infrastructure size and expense consistent with our business and business mix?
  • Is the economic efficiency of our infrastructure moving with the market? (Moore’s law, etc.)
  • How is our infrastructure economic efficiency moving against us? How do we compare with the best-performing peers? From what other sectors, the broader “marketplace,” can we learn?
  • Does our infrastructure have the needed elasticity?
  • Are we effectively managing demand? Are we sized right?
  • Other key aspects that can be considered via benchmarking include:
    • Staffing, mix, skills, and use of outsourcing
    • Applications: Portfolio size and aging; Development and Maintenance practices
    • IT Supply Chain and Vendor Management
    • Cybersecurity, risk and compliance
    • Innovation and adoption of new technologies

If you examine this list, it should be evident that all of these questions address issues an IT organization needs answers to in the course of doing the business of IT.  These are in many ways business questions, not just pure technology questions.


Picking a Peer Group and Why You Need a “Synthetic Benchmark”

The value and validity of benchmarks rise and fall with the selection (and availability) of what is typically called a peer group.  Selecting the peer group is perhaps the single most critical success factor in doing a benchmark.

For example, would it make sense to compare JPMorgan Chase against Bank of America, Citi, and Wells Fargo?  From a revenue-size perspective, perhaps “Yes,” as they all fall into the $70B to $100B range.  But for a top-level comparison (or benchmark) of their technology costs, the answer is “No.”  While their revenue class is roughly the same, the sources of revenue (the revenue balance by business segment) are quite different.  JPMorgan Chase, Bank of America, and Citi generate perhaps 30% or more of their revenue from investment banking; Wells Fargo does not.  Compared to Wells Fargo, JPMorgan Chase operates in more countries, and Citi in more than any of them; Citi’s retail business is also quite different.

As another example, consider Morgan Stanley and think about which peers come to mind.  The likely candidates include Goldman Sachs, Deutsche Bank, Credit Suisse, and UBS, as well as business segments of the aforementioned big banks.  But wait, that won’t work well either, as Morgan Stanley is roughly 50:50 Capital Markets (its Institutional Securities group) and Wealth Management.  None of these possible peers is a good match based on business mix.  So, how should a peer group be selected?

It is safe to benchmark “at the top,” looking at the macroeconomics of these companies’ technology and producing a comparison at that level to address the conventional questions asked about any of them in such natural groupings.  But to do a meaningful benchmark tuned to business mix and to business aspirations for operating margin, ROE (return on equity), risk profile, and the like, a synthetic benchmark should be constructed.  Specifically, break the company down into its business segments, identify meaningful peers by segment, and then create a “synthetic benchmark” as the sum of the parts of the individual analyses.  Seems too hard?  It is not.

For example, looking at JPMorgan Chase’s 2014 annual report, we can see how they model their businesses and their overall aspirations.  As a business, they targeted the performance of “best-in-class” peers in the context of efficiency and returns.  Looking at the fine print, the peers by business segment include Wells Fargo in Consumer and Community Banking and Citi in Corporate and Investment Banking.

Source: JPMorgan Chase 2014 Annual Report

Consequently, it makes sense to develop your IT benchmark using the following model:

  1. Identify target performance by business segment; and
  2. Benchmark against the IT characteristics of those you intend to “meet or beat” in the marketplace for each segment.
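
To make the mechanics concrete, here is a minimal sketch of that two-step model in Python.  Every segment name, peer name, and intensity figure is a hypothetical placeholder rather than benchmark data; the point is simply that the synthetic benchmark is the sum of segment-level targets weighted by the company’s own revenue mix.

```python
# Minimal sketch of constructing a "synthetic benchmark": pick the peers you
# intend to "meet or beat" in each business segment, take their segment-level
# IT intensity (IT cost / segment revenue), and weight by your own revenue mix.
# All names and figures below are hypothetical, for illustration only.

# Step 1: target performance by business segment -- segment-level IT
# intensities of the chosen "meet or beat" peers (illustrative values).
segment_peer_intensities = {
    "Retail": {"Peer R1": 0.055, "Peer R2": 0.065},
    "Capital Markets": {"Peer C1": 0.115, "Peer C2": 0.130},
    "Wealth Management": {"Peer W1": 0.060},
}

# Step 2: the company's own revenue mix by segment (shares sum to 1.0).
revenue_mix = {"Retail": 0.50, "Capital Markets": 0.40, "Wealth Management": 0.10}

company_revenue_busd = 100.0  # a mythical $100B-revenue bank


def segment_target(peers):
    """Best-in-class target for a segment: illustratively, the lowest peer
    intensity; a median or average could be used instead."""
    return min(peers.values())


# The synthetic benchmark is the sum of the parts: each segment's target
# intensity applied to that segment's share of revenue.
synthetic_it_spend = sum(
    company_revenue_busd * share * segment_target(segment_peer_intensities[seg])
    for seg, share in revenue_mix.items()
)

print(f"Synthetic benchmark IT spend: ${synthetic_it_spend:.1f}B "
      f"({synthetic_it_spend / company_revenue_busd:.1%} of revenue)")
```

The same construction works for any metric for which segment-level peer data exist (staffing ratios, infrastructure unit costs, and so on).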

Following is an example with four different banking profiles for a mythical $100B bank.

  • Bank A has three business segments: Retail, Capital Markets, and Wealth Management with a 50:40:10 revenue balance.
  • Bank B has only Retail and Capital Markets with a 40:60 balance.
  • Bank C has only Capital Markets and Wealth Management with a 50:50 balance.
  • Bank D has only Retail and Wealth Management with a 40:60 balance.

Notice that the estimated IT expenses range from a low of $7.6B to a high of $9.6B.  Business mix is the factor that drives the numbers.  Look at the NIE/revenue ratio (non-interest expense as a share of revenue): it ranges from 52% to 63%.  As you can see, business performance and the related benchmarks are quite sensitive to business mix, and that is why you need a “synthetic benchmark.”

Source: Dr. Howard A. Rubin
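
As a rough illustration of that mix sensitivity, the sketch below applies one fixed, hypothetical set of per-segment IT intensities to the four mythical banks.  These intensities are illustrative assumptions, not the figures behind the exhibit above, so the totals will not match it exactly; the point is that identical revenue and identical unit assumptions still produce materially different IT expense totals once the business mix changes.

```python
# Hypothetical per-segment IT intensities (IT cost / segment revenue).
# Illustrative assumptions only -- not the figures behind the exhibit above.
IT_INTENSITY = {"Retail": 0.06, "Capital Markets": 0.12, "Wealth Management": 0.07}

REVENUE_BUSD = 100.0  # each mythical bank has $100B in revenue

# Revenue mix by segment for Banks A-D (shares of total revenue).
BANKS = {
    "Bank A": {"Retail": 0.50, "Capital Markets": 0.40, "Wealth Management": 0.10},
    "Bank B": {"Retail": 0.40, "Capital Markets": 0.60},
    "Bank C": {"Capital Markets": 0.50, "Wealth Management": 0.50},
    "Bank D": {"Retail": 0.40, "Wealth Management": 0.60},
}

for bank, mix in BANKS.items():
    # Estimated IT expense = sum over segments of (segment revenue x intensity).
    it_expense = sum(
        REVENUE_BUSD * share * IT_INTENSITY[seg] for seg, share in mix.items()
    )
    print(f"{bank}: estimated IT expense ${it_expense:.1f}B "
          f"({it_expense / REVENUE_BUSD:.1%} of revenue)")
```

Swapping segment-level NIE/revenue ratios in place of the IT intensities, the same loop would show the mix-driven spread in the efficiency ratio as well.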


This is the first of a two-part article on IT benchmarking.  Watch for part two in the April issue of WSTA Digital News.

Dr. Howard A. Rubin is Professor Emeritus of Computer Science at City University of New York and an MIT CISR Associate.  He can be reached at and +1 914-420-8568.