Applying a Metrics Report Card

© 2008 Martin Klubeck and Michael Langthorne. The text of this article is licensed under the Creative Commons Attribution-NonCommercial-No Derivative Works 3.0 license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
EDUCAUSE Quarterly, vol. 31, no. 2 (April–June 2008)
Evaluating the IT department from the customers' point of view reveals strengths and weaknesses in IT performance

"So, how are you doing?"

If the president of your university asked you this question about the health of the IT department, what would you answer? And, more importantly, how would you know?

Students can check their transcripts at the end of each term to see how they're doing. The GPA quickly communicates overall academic performance, and they can see the grades received for each class.

Called a report card in grade school, the transcript highlights a student's strengths and weaknesses. Using a grading rubric, the student can pinpoint specific areas of weakness. The grading rubric for each class combines multiple items: tests, quizzes, participation, homework, a final exam, and projects. If a student did poorly in one of these areas, he can determine where the problem lies. Did he do poorly on tests because he didn't know the material or because he does poorly on tests in general? How did he do on each test? How did he do on tests in other classes?

Grades are not prescriptive—you can't know what caused the poor test results by looking at the score—they are diagnostic. The complete report card (transcript and breakdown of grades) reflects the student's academic "health" and helps in identifying problem areas. If the professor shares results of the graded components during the semester, the student can adjust accordingly. The most beneficial aspect of grading is frequent feedback.

The same holds true for simply, clearly, and regularly communicating the health of the IT department to university leadership, the IT membership, and our customers. Providing a report card enables the IT department to check its progress and overall performance—and adjust accordingly. A report card won't tell us how efficiently the IT department functions, but it gives the insight needed for identifying areas for improvement.

Balanced Scorecard versus Report Card

So, what's wrong with using the balanced scorecard (BSC) for this purpose? Isn't the report card the same thing? Not really.

The BSC is based on four quadrants: financial, customer, internal business processes, and learning and growth.1 The quadrants can be tailored to align more directly with each organization's needs, however. Where most explanations of the BSC fall short is in the specific metrics used to develop each view.

The four quadrants are very different. While this gives a wide view of the organization, looking at each quadrant might also require considering the environmental criteria for that quadrant. In addition, a unique set of metrics makes up the score for each quadrant, complicating use of the BSC even more.

Instead of the BSC quadrants, the report card is based on four categories derived from how we measure rather than what we measure:

  • Effectiveness—the customer's view of our organization
  • Efficiency—the business view of our organization
  • Human resources—the worker's view
  • Visibility—management's need for more insight into the organization

Each of the four categories provides a different view of the organization. The report card uses the effectiveness category exclusively because it offers the greatest return for the investment required. The organization should strive to reach a state of maturity that allows it to measure all four categories, but effectiveness is an excellent place to start. If you ignore customers and lose their support, it won't matter how efficient you are, how happy your workers are, or how much insight your management has. Even in the monopolistic academic IT environment, we have to please our customers first and foremost.

The report card lets us look at ourselves through our customers' eyes, focusing on our services and products. For each key service the organization provides, we receive a grade. Like any rubric-based grade, it is made up of components. In the case of effectiveness metrics, the graded components are:

  • Delivery of service (quality of product)
    • Availability
    • Speed
    • Accuracy
  • Usage
  • Customer satisfaction

This grading rubric is a good standard partly because it allows for customization. Each component is worth a portion of the final grade for the service or product. The component weights can differ for each key service, and the aggregate of the service grades becomes the organization's GPA for the term (the sketch below shows the arithmetic). The organization might be failing at one facet of effectiveness while excelling at others. Even with a decent overall grade, we would know which areas needed attention. A weak area might require working harder, getting additional help, or dropping that service or product completely. Dropping a service should be an option if we are failing at it or if we realize it is not a core part of our business.
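To make the arithmetic concrete, here is a minimal sketch in Python. The services, component weights, and scores are invented for illustration; only the idea of weighting graded components and rolling service grades up into a GPA comes from the rubric itself.

```python
# Minimal sketch of the weighted-rubric idea: each key service gets a grade
# built from the effectiveness components (delivery, usage, satisfaction),
# and the service grades roll up into an organizational "GPA".
# All services, weights, and scores below are illustrative.

# Component scores for each key service, on a 0-100 scale.
services = {
    "Email":     {"delivery": 92, "usage": 85, "satisfaction": 78},
    "Help desk": {"delivery": 70, "usage": 95, "satisfaction": 60},
}

# Rubric weights per component (sum to 1.0); these can differ per service.
component_weights = {"delivery": 0.4, "usage": 0.3, "satisfaction": 0.3}

def service_grade(scores, weights):
    """Weighted average of the graded components for one service."""
    return sum(scores[c] * w for c, w in weights.items())

def organizational_gpa(services, weights):
    """Simple average of all service grades for the term."""
    grades = [service_grade(s, weights) for s in services.values()]
    return sum(grades) / len(grades)

for name, scores in services.items():
    print(f"{name}: {service_grade(scores, component_weights):.1f}")
print(f"Overall GPA: {organizational_gpa(services, component_weights):.1f}")
```

In practice the weights would be set for each key service, ideally with input from the customers being served.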

An Example of the Report Card Applied

A human resource office (HRO) of a large organization decided to use the report card as a first step in implementing a metrics program. The HRO offers many services, but it was important to identify the credit-earning items, that is, the key services and products. With the help of its customers, the HRO selected the following as essential:

  • Provide training
  • Counsel on benefits
  • Counsel on hiring
  • Counsel on firing
  • Provide assistance through a help desk

For each of these services, the HRO diligently identified metrics for each graded component. Let's look closer at one of the services, "provide training."

Delivery (Quality) of Training

For the key service of providing training, the HRO identified metrics for determining how well it delivered the service. It asked the following questions:

  • Was training available when wanted? When needed? (Availability)
  • Was training delivered in a timely manner? How long did it take to go from identification of the training need to development of the course and actual presentation? (Speed)
  • Was it accurate? How many times did the HRO have to adjust, change, or update the course because it wasn't done correctly the first time? (Accuracy)

Before going on with the example, it's worth repeating that these measures are only indicators. In this case, it might not be essential for the HRO to achieve perfection in each training offering. Perhaps it is acceptable to update and improve the offering with each presentation. The accuracy metric is not intended to demand perfection the first time out. Accuracy should be used for identifying how much effort, time, and resources go into reworking a task; tracking that rework points to where improvements along the way could save time and resources.
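The sketch below, with hypothetical field names and numbers, shows one way the three delivery questions might be turned into figures for a single course: a simple availability ratio, a cycle time for speed, and rework counts and hours for accuracy.

```python
# Hypothetical sketch: quantifying availability, speed, and accuracy
# for one training course. Field names and values are invented.
from datetime import date

course = {
    "need_identified": date(2008, 1, 15),
    "first_offered":   date(2008, 3, 10),
    "sessions_requested": 12,        # times customers asked for the course
    "sessions_offered":   10,        # times it was actually available
    "revisions_after_release": 3,    # accuracy: times it had to be reworked
    "rework_hours": 18,              # effort spent on those revisions
}

availability = course["sessions_offered"] / course["sessions_requested"]
speed_days = (course["first_offered"] - course["need_identified"]).days

print(f"Availability: {availability:.0%}")
print(f"Cycle time (need identified to first delivery): {speed_days} days")
print(f"Rework after release: {course['revisions_after_release']} revisions, "
      f"{course['rework_hours']} hours")
```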

Usage of Training Offerings

For the next measurable area, usage, the HRO asked the following questions:

  • How many potential customers are using other means of training?
  • How many people are using the training? Of the potential audience, how many are attending?
  • How many departments are asking for customized courses?

If no one attends the training, the training is ineffective, no matter how well delivered. The time, money, and effort spent in developing the training were wasted. In essence, usage is the customers' easiest way of communicating their perceived benefit from the service. Regardless of what they say in customer satisfaction questionnaires, usage sends an empirical message about the value of the service.
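As a small illustration, again with invented numbers, the usage questions reduce to a few simple rates:

```python
# Hypothetical counts for one training offering.
potential_audience = 400       # staff who could benefit from the course
attendees = 250                # staff who actually attended
using_other_training = 90      # staff known to use alternative training
custom_course_requests = 6     # departments asking for customized versions

adoption_rate = attendees / potential_audience
alternative_rate = using_other_training / potential_audience

print(f"Adoption: {adoption_rate:.0%} of the potential audience")
print(f"Going elsewhere for training: {alternative_rate:.0%}")
print(f"Custom-course requests: {custom_course_requests}")
```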

Customer Satisfaction with Training

While usage is a good indicator of customer happiness, we should still ask about satisfaction with our offerings. Why? What if your offering is the only one available? Usage alone would then tell you little about quality. Even given a choice, customers choose services and service providers for more reasons than satisfaction with the given service. Customers might tolerate poor service if other factors make it logical to do so.

For customer satisfaction the HRO asked:

  • How important to you are the training offerings?
  • How satisfied are you with the offerings?
  • Would you recommend our offerings to a coworker?

These questions were all asked in a survey with a numeric scale. We're deliberately not recommending a specific tool or scale; the important thing about whatever tool you choose is simply that you believe the answers it gives you. Find a scale you can believe in.
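As an illustration only, here is how responses on an assumed 1-to-5 scale might be averaged and converted into a component score for the rubric; the respondents and the scale itself are assumptions, not a recommendation.

```python
# Illustrative survey responses: (importance, satisfaction, would_recommend)
# per respondent, each on an assumed 1-5 scale.
responses = [
    (5, 4, 5),
    (4, 3, 4),
    (5, 5, 5),
    (3, 2, 2),
]

def average(values):
    return sum(values) / len(values)

importance   = average([r[0] for r in responses])
satisfaction = average([r[1] for r in responses])
recommend    = average([r[2] for r in responses])

# Convert the satisfaction average to a 0-100 component score for the rubric.
satisfaction_score = (satisfaction - 1) / 4 * 100

print(f"Importance: {importance:.1f}, Satisfaction: {satisfaction:.1f}, "
      f"Recommend: {recommend:.1f}  ->  component score {satisfaction_score:.0f}")
```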

A Warning

The three factors of delivery, usage, and customer satisfaction can tell us how effective we are, but many times the recipient of the data doesn't believe the data are accurate. Management decides the data must be wrong because they do not match preconceived notions of what the answers should be. Management rarely asks to see data to identify the truth; they normally ask to see data to prove they were right in the first place. If (and when) the data don't match their beliefs, the following accusations might be made:

  • The data must be wrong.
  • The data weren't collected properly.
  • The wrong people were surveyed.
  • Not enough people were surveyed.
  • The analysis is faulty.

Actually, challenging the validity of the data is a good thing. Challenging the results pushes us to ensure that we collected the data properly, used the proper formulas, analyzed the data correctly, and came to the correct conclusions. Many times, a good leader's intuition will be right on the mark. But, once we've double (and triple) checked the numbers, we have to stand behind our work. If, once the data are proven accurate, management still refuses to accept the results, it might mean the leadership (and the organization) is not ready for a metrics program. This possibility is another good reason to start with effectiveness metrics rather than trying to develop a more comprehensive scorecard.

Conclusion

Balanced scorecards are very useful in helping organize and communicate metrics. The report card takes the BSC approach a step further (by doing less) and simplifies the way we look at our data. Rather than grouping the metrics by type of data, we look at our services in light of the customers' view of our performance.

The report card communicates the IT organization's overall GPA, including the grades for each key service or product, and allows drilling down into the rubric to see the graded components. This provides a simple, meaningful, and comprehensive picture of the health of the IT organization from the customers' viewpoint. It makes a good starting point for a metrics program. It also allows the IT organization to appreciate the value and benefits metrics can provide, especially as a means of communication, before venturing into more threatening data.

Endnote
1. Robert S. Kaplan and David P. Norton, "The Balanced Scorecard—Measures That Drive Performance," Harvard Business Review (January/February 1992), pp. 71–79.
Martin Klubeck ([email protected]) is Strategy and Planning Consultant for the Office of Information Technologies, and Michael Langthorne ([email protected]) is a Program Manager for the Office of Information Technologies, at the University of Notre Dame, Indiana.