Cybersecurity: When Will We Know If What We Are Doing Is Working?


© 2009 Clint Kreitner. The text of this article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.

EDUCAUSE Review, vol. 44, no. 5 (September/October 2009): 62–63

Clint Kreitner, Founding President/CEO of the Center for Internet Security (CIS) from 2000 to 2008, currently serves CIS as Strategic Advisor.


In less than a generation, the electronic neighborhood called the Internet has established itself as the connecting mechanism bringing individuals, governments, corporations, colleges/universities, and other entities into a truly global system operating at electronic speed. This mechanism has affected political, economic, social, and educational interactions in a way that has produced significant benefits. However, when it comes to knowing how to cost-effectively protect the cyberinfrastructure and the information that flows through it, we are all in uncharted territory.

This is why, after expending considerable financial, human, and technology resources in recent years to improve cybersecurity, senior public- and private-sector executives and legislators who have responsibility for allocating these resources are asking questions about the effectiveness of these expenditures:

  • Are we more secure now than we were last year?
  • Are we spending too little? Too much?
  • Which of our security investments are producing the most cost-effective results?

These questions are appropriate and timely because they motivate us to define and measure the desired outcomes (results) of our cybersecurity efforts.

This new cyberinfrastructure presents potential not only for significant benefits but also for serious disruption in the conduct of human affairs — for the United States and globally as well.1 Sensing the consequences and likelihood of such disruptions, we have thus far focused on articulating risk-assessment scenarios, creating security policies, striving for compliance with a bewildering array of regulations and standards, and taking a shotgun approach to implementing protections we think might be effective. In a real sense, information security remains more of an art than a science at this point in its short history. If we are honest with ourselves, we will acknowledge that we are basically doing seemingly rational things and hoping for favorable results even though we lack consensus on a definition of "favorable results."

The subtitle of this column asks, "When will we know if what we are doing is working?" Answering that question requires three essential systemic components that have yet to emerge:

  • A widely accepted, overarching definition of success
  • Consensus metrics for measuring progress toward success
  • A comprehensive feedback learning mechanism

The remainder of this column will put forth a conceptual vision/framework for these three essential elements.

A Definition of Success

Every enterprise — be it a government agency, a corporation, a sports team, a church, a college/university, a hospital, or a committee — exists for the purpose of achieving a mission in the form of one or more specified outcomes, usually in terms of products and/or services. The mission defines the purpose of the enterprise.

A vehicle-manufacturing plant, for example, employs thousands of processes to produce a high-quality finished vehicle (the outcome). Hospitals integrate the processes of radiology, anesthesia, surgery, and nursing to remove a cancerous tumor and (hopefully) produce a cancer-free patient (the outcome). Colleges and universities utilize instructional processes to produce students with certain knowledge and critical thinking skills (the outcome).

By focusing on the desired outcome, we can provide the following logical definition of cybersecurity success: "a sustained reduction in the impact of security incidents." It is perhaps helpful to think of the cyberinfrastructure in the same way we think of the electrical grid: both are successful when they function as designed and as needed. Like a power outage, a security incident is an undesirable outcome in the form of an impairment of function or availability. If we were not experiencing the costly operational disruptions and disclosure costs associated with security incidents, would we even be worrying about cybersecurity? Probably not.

A notable benefit of this conceptual construct is its applicability to confronting cybersecurity incident damage at the individual enterprise level as well as the national or international level.

Metrics for Measuring Progress toward Success

Keeping "a sustained reduction in the impact of security incidents" in mind as the desired overall outcome, users and operators of the cyberinfrastructure can work together to come to consensus on outcome metrics that will foster the insights and learning so crucial for bending the security incident impact trend curve downward.

A security metrics consensus effort currently under way, led by the Center for Internet Security (CIS), has produced an initial set of twenty metrics definitions in six business functions: Incident Management, Vulnerability Management, Patch Management, Configuration Change Management, Application Security, and Financial.2 For example, the following are the consensus metrics for Incident Management:

  • Mean-Time to Incident Discovery
  • Incident Rate
  • Percentage of Incidents Detected by Internal Controls
  • Mean-Time between Security Incidents
  • Mean-Time to Recovery

This is clearly just a beginning, but the value of the consensus effort is its determination of precise definitions for each of the metrics so that these metrics can be widely used, without ambiguity, within or among various enterprises. Unlike other fields (such as finance, where everyone knows what the "net income" metric means), cybersecurity to date has lacked uniformly accepted metrics definitions.
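To make the idea concrete, here is a minimal Python sketch of how an enterprise might compute metrics like these from its own incident records. The record fields, time units, and formulas shown are illustrative assumptions for this sketch and do not reproduce the CIS document's precise definitions.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records; the field names are assumptions
# for illustration, not the CIS consensus definitions.
incidents = [
    {"occurred": datetime(2009, 1, 5),  "discovered": datetime(2009, 1, 8),
     "recovered": datetime(2009, 1, 9),  "detected_internally": True},
    {"occurred": datetime(2009, 3, 2),  "discovered": datetime(2009, 3, 3),
     "recovered": datetime(2009, 3, 6),  "detected_internally": False},
    {"occurred": datetime(2009, 5, 20), "discovered": datetime(2009, 5, 21),
     "recovered": datetime(2009, 5, 22), "detected_internally": True},
]

def days(delta: timedelta) -> float:
    """Express a time interval in days."""
    return delta.total_seconds() / 86400

# Mean-Time to Incident Discovery: average gap from occurrence to discovery.
mttd = mean(days(i["discovered"] - i["occurred"]) for i in incidents)

# Mean-Time to Recovery: average gap from discovery to recovery.
mttr = mean(days(i["recovered"] - i["discovered"]) for i in incidents)

# Percentage of Incidents Detected by Internal Controls.
pct_internal = 100 * sum(i["detected_internally"] for i in incidents) / len(incidents)

# Mean-Time between Security Incidents: average gap between
# consecutive incident occurrences.
ordered = sorted(i["occurred"] for i in incidents)
mtbsi = mean(days(b - a) for a, b in zip(ordered, ordered[1:]))

print(f"MTTD: {mttd:.1f} days, MTTR: {mttr:.1f} days")
print(f"Detected internally: {pct_internal:.0f}%")
print(f"Mean time between incidents: {mtbsi:.1f} days")
```

Tracked period over period, trend lines in numbers like these are what would let an enterprise see whether incident impact is actually bending downward.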

A Learning/Feedback Performance Improvement Mechanism

The systems view posits that an unacceptable outcome generally indicates that one or more of the processes that produced that outcome need improvement. Monitoring the outcome of enterprise processes supports a critical feedback loop required for producing better products and services in the future. This learning/feedback mechanism is a fundamental characteristic of the competitive free enterprise system and is an essential component of any performance improvement effort.

Examples of this commonly used mechanism come from many sectors. Recent news accounts reported a number of "proximity events" at a major New York airport, where the paths of airplanes taking off and of others landing had come unacceptably close. Each of those events triggered an immediate event analysis, which resulted in procedural changes to the control tower protocols for handling takeoffs and landings. The outcome was unacceptable, so the underlying processes were changed.

The collapse of the I-35W highway bridge in Minneapolis revealed a serious flaw in the design of its steel structure. The learning/feedback from that incident was immediately relayed by the National Transportation Safety Board to engineering firms engaged in bridge design throughout the United States so that they could modify their design methods to prevent another bridge failure from the same cause. A review of existing bridges was also initiated to determine whether any others were at risk of the same kind of failure.

Most people are willing to drive on highway bridges and travel by air because the mechanisms for recording and promulgating incident information have produced an acceptable level of risk. For example, every time an aircraft accident occurs, information is gathered to learn what caused the accident. That learning is fed back into aircraft manufacture and operation. Likewise, most people are willing to consider surgery as treatment for cancer because the mechanisms for recording and promulgating medical incident information have produced an acceptable level of risk when weighed against the risks of letting the disease take its course without treatment. An example of an undesirable surgical incident outcome that hospitals and surgeons diligently track is post-surgery staph infection rates. A rate above acceptable norms results in an immediate evaluation and in changes to the surgery treatment protocols.

The concept here is one of a causality-driven learning loop followed by implementation of corrective actions, based on what was learned, to prevent recurrence of similar undesirable outcomes. Finding out what caused an incident and implementing fixes so as to reduce the likelihood of the incident happening again is a simple yet time-honored approach to improving performance in areas as diverse as medical surgery and industrial safety. However, there are no systematic, comprehensive, and efficient mechanisms for learning from cybersecurity incidents.


It is beyond the scope of this column to suggest the details of a national or international mechanism for recording and promulgating security incident information across individual enterprise boundaries. Presumably, the U.S. Computer Emergency Readiness Team (US-CERT) and its international counterparts could play a key role in this task. But in the meantime, by adopting "a sustained reduction in the impact of security incidents" as the primary definition of success, by using consensus security metrics to track incident impact trends, and by learning from analyzing those trends against the security practices in use, individual enterprises can begin to make security investment decisions based on data rather than intuition. The profound dependence of U.S. national and economic security and well-being on the cyberinfrastructure demands no less.

  1. At the national level, the 2009 National Infrastructure Protection Plan and supporting Sector-Specific Plans represent early-stage efforts to confront the cybersecurity challenge, with an initial focus primarily on organizational structures plus collaborative and awareness mechanisms. See the U.S. Department of Homeland Security website: <>.
  2. See <> to download the CIS Consensus Security Metrics document. If you are interested in participating in the CIS Consensus Security Metrics effort, please send an e-mail to <>.