Data: Roads Traveled, Lessons Learned


© 2005 Kenneth C. Green, David Smallen, Karen Leach, and Brian L. Hawkins

EDUCAUSE Review, vol. 40, no. 2 (March/April 2005): 14–25.

Show me the data! In their efforts to better understand, plan for, and make decisions about information technology on campus, higher education leaders need data. They need benchmarking and longitudinal data. They need data about IT budgets, expenditures, and investments. They need data about IT staffing. And they need data about IT programs, planning, policy, and procedural issues at colleges and universities across the United States.

But collecting data about campus activities is not an easy task. What are some of the challenges and complexities involved? How have the key IT data issues changed over the years? What is being done well in the collection of the data, and what needs to be done better? To address these and related questions, EDUCAUSE Review interviewed the directors of three widely cited and respected IT data-collection projects: Kenneth C. (Casey) Green, for the Campus Computing Project (http://www.campuscomputing.net/); David Smallen and Karen Leach, for the COSTS Project (http://www.costsproject.org); and Brian L. Hawkins, for the EDUCAUSE Core Data Service (http://www.educause.edu/coredata/).

1. Let’s begin with a little history about these three projects. How would you describe your project? How is it similar to and/or different from the others?

Green: As the participant with the oldest project—but not the oldest participant in this conversation—I guess I get to go first. The Campus Computing Project began in the spring of 1990. At the time, there was very little information about campus IT planning and policy issues affecting academic computing: the role of computers—and later of a wide range of technologies—in teaching, learning, instruction, and research. Charlie Warlick, at the University of Texas, used to survey his colleagues about "heavy metal": the number and kinds of computers on campus. Concurrently, the CAUSE ID Service of the early 1990s, the precursor to today’s EDUCAUSE Core Data Service (CDS), focused on administrative computing. But there were no data about academic computing per se or, equally important, the emerging role of personal computers in campus life.

The Campus Computing Project differs from the EDUCAUSE CDS and the COSTS Project in that we focus primarily on IT planning and policy issues. In contrast, the other two projects focus more on IT organizations, budgets, and ERP-deployment issues. There is, I should add, very little overlap between Campus Computing and the other projects. We feel that we complement and supplement, rather than compete with, the EDUCAUSE CDS and the COSTS Project.

Smallen/Leach: The initial focus of the COSTS Project was on determining the cost of providing IT services for institutions of higher education. Although there were some data about IT services in the for-profit sector, those data, particularly for staffing, seemed out of line with what we observed on campuses. We also knew that this information isn’t too helpful by itself, so we began to develop a set of benchmarks to help institutional leaders understand their IT investments, particularly in comparison with peer institutions. By benchmarks, we mean ratios that can be compared across institutions. The data that participating institutions collect for COSTS include institutional demographics as well as IT budget and staffing data. We use the demographics to normalize the comparisons for institutional size and employee base. Areas can then be identified for closer examination by looking at the detailed data. This process resulted in the first useful IT benchmarks related to higher education.

A primary goal of the COSTS Project is to provide current, relevant financial data. We don’t look at policy or organizational issues. That is an important contribution of the two other surveys. The COSTS data are timely, since we collect current budget information. We begin data collection each fall for the current year, and institutions that submit data by the fifteenth of a month get a benchmarking-analysis report at the end of that month, and updated reports each month after as additional institutions join. We don’t have a single deadline: the earlier an institution submits data, the earlier it will get results. We also capture institution-wide IT costs, including the support provided beyond the central IT organization.

The EDUCAUSE CDS collects data on spending and staffing in a similar way to COSTS, but the CDS focuses on actual expenditures rather than budgets. Thus COSTS is releasing 2004–05 data to participants now; the EDUCAUSE CDS will release 2004–05 data in the summer of 2006. The staffing categories in these surveys are somewhat different, but institutions should have no problem sorting their staff into either set of categories. In addition, the EDUCAUSE CDS tracks detailed policy decisions related to staffing and expenditures, such as reporting structure or the approach to developing a replacement plan for technology.

Probably the least overlap exists between the COSTS Project and the Campus Computing Project surveys. The Campus Computing Project data are heavily focused on policy decisions, with few survey questions related to staffing and budgeting.

Hawkins: The EDUCAUSE Core Data Service began in 2000. EDUCAUSE brought together a special task force to define a data-collection program. The CDS was, in some ways, a follow-on to the CAUSE ID (Institution Database) Service, which was discontinued in 1996 because respondents felt it had become too cumbersome. Members of the task force felt that it would be most beneficial if individual data captured with a survey instrument could be examined through an interactive database service. This would allow participants to define peer groups of directly comparable institutions whose identifiable data could then be examined. The result is the Core Data Service, with a focus on the service—not the survey. Its value lies in the Web-based tools, the trend-analysis tools, the statistics, the ratio analyses, and the graphic displays, not just the questionnaire itself. It is this total service that has made the EDUCAUSE CDS successful thus far.

We believe that the CDS is complementary to the COSTS Project and the Campus Computing Project. Each does something quite different. As Casey noted, the Campus Computing Project focuses on a wide range of campus planning and policy issues, such as services on the campus Web site and IT deployment for instruction, and also on CIO opinions about selected campus IT issues, such as a rating of the IT infrastructure and major IT developments affecting the campus over the next few years. The EDUCAUSE CDS, on the other hand, covers the key—or, as the name suggests, the "core"—issues in five areas: IT organization, staffing, and planning; financing and management; faculty and student computing; networking and security; and information systems.

The COSTS Project was the first major effort to examine IT financial and support data and to develop benchmarks. Dave Smallen was a member of the CDS task force; we wanted to tap his expertise from the COSTS Project. As Dave has noted, the EDUCAUSE CDS and the COSTS Project differ in that the COSTS Project uses projected budgets for the upcoming year, whereas the CDS captures actual IT funding received for the previous year. The goal of the EDUCAUSE CDS was to get as many schools to participate as possible. For those institutions that generate a significant portion of their IT operating budget from chargeback revenue (most characteristic of large, public research universities), these figures are generally not budgeted; therefore, using budgeted figures either eliminates the participation of these institutions or grossly underestimates their actual funds and operations. Furthermore, many campuses have experienced, in the last few years, fairly dramatic midyear budget adjustments in these difficult economic times. Therefore we felt that examining actual funding received and expended for the previous fiscal year would provide a more accurate reflection of IT costs.

Finally, a key difference between the CDS and the other two survey efforts is that with the EDUCAUSE CDS, participants can see individual campus responses. They can then analyze these responses by clustering various institutions into peer groups using built-in filters, allowing for in-depth analysis without having to know statistics and without relying on aggregate data, which can muddle interpretation.

2. How has your project changed over time?

Hawkins: Although we have made minor tweaks to definitions and have given clarifications for a very few questions, the EDUCAUSE CDS effort has remained quite stable over the last three years. The response from our nearly 850 participants has indicated that this tool is incredibly useful to them. The main changes have concerned ways to enhance the service, by adding more functionality and built-in benchmark ratios. The plan is to continue with this approach.

Smallen/Leach: We have fine-tuned and simplified the data collection over time. For example, we used to collect data on both actual spending from the previous year and current budgets. But people told us it was too much trouble to collect that much data. They wanted the most current data, so we switched to collecting only budget data.

At the very beginning of the COSTS effort, we focused on understanding one service at a time. We collected data only about computer repair services or only about help-desk services. After two years, participants indicated that it would be more valuable to understand budgeting and staffing across all IT service areas. In 1997 we started to collect the broader IT data and to develop the benchmarks. One thing hasn’t changed: the survey participants are mostly institutions from the liberal arts and the master’s Carnegie Classifications, although we have had a few doctoral and research institutions participate in recent years.

We think that the audience and the value of the survey have broadened over time. In the early years, we focused on creating a tool that would help us, as IT directors, and that would be useful to our colleagues. In recent years, we have used data and interpretation of the benchmarks to try to open a dialogue between senior administration and IT leaders about strategy choices. That effort resulted in a white paper, published by the Council of Independent Colleges: Information Technology Benchmarks: A Practical Guide for College and University Presidents (June 2004). We view the facilitation of this dialogue as a major achievement of the COSTS Project.

Green: Compared with the COSTS Project and the EDUCAUSE CDS, the Campus Computing Project has probably experienced more change in the focus of the questionnaire. The project began with a focus on academic computing. However, over the past decade, the boundaries between what were once the separate domains of academic and administrative computing have become porous. Additionally, the Internet and the Web, if they did not change everything—the old mantra of the Internet economy—certainly changed many things, including computing and technology services on campus.

Consequently, the Campus Computing Project questionnaire has changed—it had to change—to reflect these changes. For example, in 1998 we began tracking campus services on the Web: online course registration and some twenty-four other services. Similarly, we’ve been monitoring the use of course management systems since 2000 and portals since 2002. In the past two years we’ve added other new questions—about open source, appropriate use policies to stem inappropriate peer-to-peer (P2P) activity, e-portfolios, and campus efforts to deal with spam.

3. What are we doing well regarding the collection of data about campus IT issues? What do we need to do better?

Green: I think the campus IT community has made great progress over the past decade, and I’ll credit all three projects with aiding, informing, and enhancing the campus conversations about IT issues by providing real data. That said, from my perspective, the COSTS Project and the EDUCAUSE CDS have undertaken the most difficult task—staffing, budget, and financial data. I’ve long maintained that a small army of forensic accountants would be needed to help leaders understand just how much money any one campus spends on computing and IT. The COSTS Project and the EDUCAUSE CDS have done this extremely well and at great benefit to the campus community.

Hawkins: There is no question that all three projects provide IT leaders and other campus executives with much useful planning and management information. That said, the area in which all three projects probably have the same inherent limitation is getting at accurate information regarding computer equipment and staffing external to the central IT organization. The larger and more complex the institution, the less likely it is that highly accurate information can be obtained for decentralized IT. Yet such data are very important: they show how much IT is actually costing the institution, and they underscore that IT is not just the central IT unit’s concern but an institution-wide asset that needs to be viewed holistically.

Smallen/Leach: Looking at trends over several years has helped to establish the use of benchmarks. In addition, we have promoted the use of the "typical range"—the middle 50 percent, or the range between the 25th and 75th percentiles—as a way to use benchmarks to identify areas requiring further study. By looking at benchmark values that fall outside of the typical range, institutions can search for opportunities to develop efficiencies or to confirm strategic emphases.
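[To make the "typical range" idea concrete, here is a minimal sketch in Python. The peer-group values and the institution's own value are invented for illustration; COSTS does not publish such a tool. The sketch simply computes the 25th and 75th percentiles of a peer group's benchmark values and flags a value that falls outside that middle 50 percent.]

```python
# Sketch of the "typical range" approach described above: flag a benchmark
# value that falls outside the middle 50 percent (25th to 75th percentile)
# of a peer group. All values are hypothetical.

from statistics import quantiles

# Hypothetical peer-group values for one benchmark,
# e.g., central IT spending per capita (dollars).
peer_values = [410, 455, 480, 510, 525, 560, 590, 640, 700, 815]

# quantiles(..., n=4) returns the three quartile cut points:
# [25th percentile, median, 75th percentile].
q1, median, q3 = quantiles(peer_values, n=4)

def outside_typical_range(value, low=q1, high=q3):
    """Return True if a benchmark value falls outside the middle 50 percent."""
    return value < low or value > high

our_value = 760  # hypothetical value for our own institution
print(f"Typical range: {q1:.0f} to {q3:.0f} (median {median:.0f})")
if outside_typical_range(our_value):
    print(f"Our value {our_value} is outside the typical range; worth a closer look.")
else:
    print(f"Our value {our_value} is within the typical range.")
```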

We agree with Casey and Brian that together, the three surveys offer an abundance of complementary data about what IT services cost, what policy choices institutions make, and the way IT services are organized. The biggest missing piece is the "quality" dimension. As with many surveys in higher education, the focus is on what is easily measured, namely inputs. Unfortunately, by not relating these data to outcomes, we miss an important opportunity to inform the discussion. For example, we know from the COSTS data that IT staff at master’s institutions typically support twice as many people as IT staff at selective liberal arts colleges. Do the benchmarks mean that master’s institutions are able to deliver equivalent services through economies of scale or better processes, or are there different expectations that reflect institutional rather than IT considerations? Even among similar institutions, we haven’t been able to factor in the quality of services when comparing budget or staff benchmarks.

Complicating the discussion on quality is that although we can probably agree on what the service is, we have a more difficult time agreeing on how to define service standards and relate those definitions to cost. A college can run a network, but what does it mean to have a reliable network with adequate throughput and response time? How secure is the network, and what does that security cost? What is an appropriate expectation for the repair of desktop computer equipment? If a certain proportion of resources is allocated to Web services, does the college have a "better" Web presence that brings it competitive advantage, or is this simply a waste of funds?

4. What have been the major changes in the key IT data issues over the past decade? What’s your best guess about the "big issues" ahead for the next decade?

Hawkins: The changes in data collection in the last decade vary in both content and methodology. When CAUSE did the ID Service, the only methodology available was paper-and-pencil; now, Web-based tools make the whole process much easier and quicker. In terms of content, the growth and use of the Web, the challenges of security, the focus on campus decision-support systems, the growth of instructional computing, the use of course management tools, and the extent of campus networking and wireless connectivity have changed the focus and breadth of IT on campus. We therefore need to keep the content of such surveys up-to-date, but we also need to provide longitudinal perspectives on the many IT areas that do not change in such a volatile fashion. These efforts involve comparing practices, as well as quantitative data; they involve looking at the way IT is managed, with more focus on solutions and approaches and less on simply counting things.

Regarding the future, it is probably safe to say that wireless and security issues will continue to escalate. The key to data collection in this area is to have the ability to modify the survey instrument as needed to cope with the changing landscape and not to focus on every fad that comes along.

Smallen/Leach: Each year, a greater percentage of the IT operating budget goes to supporting what already exists. Colleges have equipped their employees with computers, and many now strongly recommend that their students buy computers. Campuses have built extensive infrastructure (networks, servers, computer labs) and are now struggling with how to sustain, replace, and support these complex environments. The result has been a focus on understanding sustainability issues. Over the last decade, there has been a general recognition that the replacement costs for technology need to be made part of the annual operating budget rather than treated as one-time costs. This has changed the data for equipment expenditures.

In recent years, constraints in overall institutional budgets have encouraged the alignment of IT priorities with institutional priorities. That’s a good thing! It requires that we understand the staffing levels needed to provide IT services for those priorities.

In the next decade, colleges will continue to struggle to support increasingly complex IT environments, and to fund new ideas for innovation, with constrained resources. For example, larger percentages of IT budgets will have to be devoted to network and desktop security issues. Curricular uses of technology are still immature and require substantial levels of support. All this will play out in an institutional environment in which revenues cannot be increased sufficiently. Whether benchmarks can shed light on this increasing complexity remains to be seen.

Green: Clearly, the emergence of the Web and the Internet in the mid-1990s constituted a major change for computing and IT issues on campus. Wireless may have similar consequences in terms of a new, ubiquitous technology. Certainly, the campus conversations and aspirations for open source will be important. And security—as Brian, Dave, and Karen noted—looms very large.

My best guess about the next big issue? I think that when we step back, we can see that the first two decades of the much-hyped "computer revolution in higher education" were fostered and fueled by great aspirations for the role of computing and technology, particularly in teaching, learning, and instruction. I sense that we’re in a transition—in the movement from aspirations to accountability and assessment. Faculty, presidents, provosts, and others on campus are asking all of us involved with IT to document the impact and benefits of technology: "Show me the benefit of IT in instruction, show me the benefits of the millions we are spending on new ERP systems." To date, the evidence of impact—of a "return on investment" affecting learning outcomes, academic productivity, or organizational effectiveness—is ambiguous at best. Much of what we do is based on evidence by individual epiphany or a kind of "sum, ergo" expertise. We need to do better.

5. How does the decentralized nature of IT activities, budgets, expertise, and operations affect your efforts to collect data for your survey?

Smallen/Leach: The COSTS Project focuses on institution-wide IT budget and staffing issues, so we attempt to collect data about IT wherever it exists on campus. That’s not always easy. As IT has permeated the operations of every part of the institution, the question of who—what unit—is providing IT support has become more difficult to answer. It is important to disaggregate central and non-central IT. The COSTS Project calculates most benchmarks in two ways: based only on centralized support, and based on total institution-wide support. For example, in 2003–04, among the liberal arts colleges and master’s institutions, roughly 90 percent of IT support was provided centrally, whereas at doctoral/research institutions that number was closer to 80 percent.

Smaller institutions, which still provide their IT support in a highly centralized fashion, have an easier time accounting for it. IT leaders count what they can identify, so we are fairly certain that COSTS participants understate the cost and staffing needed to support IT, but we have to start somewhere. Representatives of many participating institutions have told us that just the process of collecting data has often given them an improved understanding of their own IT environments.

Green: In one sense, all these projects depend on "the kindness of strangers." And on many campuses, the respondent—typically the CIO or the most senior campus IT officer—may also be dependent on the "kindness" and cooperation of colleagues in other units.

The Campus Computing Project questionnaire focuses primarily on institutional planning and policy issues, along with some opinion items. The underlying assumption is that the respondent, typically the CIO or a senior campus technology officer, is familiar with key issues that affect IT planning and policy. Also, unlike COSTS and CDS, we do not collect much data on budgets. Consequently, we try—indeed, we hope!—to minimize the dependency on other units and other campus officials for the survey data.

Hawkins: The increasing decentralization is a reality on many campuses, and it creates some serious challenges in collecting this information. The EDUCAUSE CDS asks the central campus IT officer (who receives the survey) to estimate the expenditures, staff, and hardware outside of the central organization. This is clearly an estimate, and we exclude from any subsequent analyses those respondents who declare that they cannot make such estimates. Clearly, the measurement of these data is less refined, but it seems better to collect the best information possible than to simply ignore this area because it is difficult to quantify or potentially less accurate. In some cases, the very act of asking the questions has caused some campuses to try to get a better handle on this information.

6. What are some of the complex issues that you grapple with in the design of your survey?

Green: Anyone reading this interview who has completed the Campus Computing Project survey will probably say that the major design challenge is length. The questionnaire is, admittedly, long—about eight pages. I suspect this may be a common complaint about all our surveys: our respondents want all the data we can collect, but they complain that our questionnaires are too long.

Beyond the length issue, the key question is currency: identifying and anticipating those issues that are emerging and important and adding them to the annual survey. One example here from the Campus Computing Project involves campus policies about P2P networks: we added this item to the 2003 survey. Our data reveal that most campuses do have appropriate use policies. I think this has helped to inform the broader conversation, both on campus and with critics who continue to target campuses on P2P issues. Similarly, new survey items on open source, spam control, and e-portfolios have, I believe, proven to be informative for the campus community.

I depend on the kindness of colleagues and corporate sponsors to keep the survey current. Each year I send a draft copy of the survey to 120–150 people in both the campus and the corporate communities for comments and suggestions. This process has been essential to keeping the questionnaire up-to-date and to adding new items that reflect emerging and important issues.

Hawkins: Designing the EDUCAUSE CDS survey involved many challenges, but probably the most complex issue was deciding whether we would collect actual IT funding received for a fiscal year or projected IT budgets. As mentioned above, this is a key difference between the EDUCAUSE CDS and the COSTS Project. The CDS examines actual funding received and expended for the previous fiscal year rather than the funding data projected for the coming year. Ours is clearly a retrospective approach, looking back to the previous year, rather than a prospective approach, anticipating the upcoming year. A good case can be made for either methodology, but for the EDUCAUSE CDS, the inclusion of larger institutions and public institutions—resulting in a larger database of participating institutions—warranted the use of the retrospective approach.

Another challenge involved designing a survey instrument that could accommodate all types of institutions. The EDUCAUSE membership is as broad as higher education itself, ranging from small private colleges to community colleges to private research universities to state comprehensive universities that are part of a multicampus system. The survey thus has to be worded so that every participant can relate to the questions.

Smallen/Leach: For the COSTS Project, one complex issue involves choosing the data elements that make up a benchmark ratio. For example, when we were developing a high-level measure of the support load for the IT organization, we considered things like the number of full-time equivalent (FTE) students, FTE employees, and FTE faculty as estimates of the potential support load for the campus. We ultimately decided that the total headcount (employees + students) was the best estimate. It is well known among IT staff that the best proxy for the IT support needed by an individual isn’t the number of courses a student takes or the number of hours an employee works; rather, it is the individual’s IT knowledge and the complexity of what the student or employee is doing. So we use the headcount as the numerator of the support-load benchmark and the FTE IT staff (including student workers) as the denominator. We think it is important that student help be factored into the support equation because most IT organizations could not function without these students. Thus we arrive at the Staff Support Level benchmark (total headcount [employees + students] / total FTE IT staff) as a good barometer of the workload a given IT organization is handling.
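[As a worked illustration of the Staff Support Level calculation just described, here is a short Python sketch. The enrollment, employee, and staffing figures are hypothetical, not COSTS data.]

```python
# Sketch of the Staff Support Level (SSL) benchmark described above:
# total headcount (employees + students) divided by total FTE IT staff,
# with student workers included in the staff figure.
# All numbers are hypothetical, not actual COSTS data.

def staff_support_level(employees, students, it_staff_fte, student_worker_fte):
    """Headcount supported per FTE of IT staff (including student workers)."""
    total_headcount = employees + students
    total_it_fte = it_staff_fte + student_worker_fte
    return total_headcount / total_it_fte

# Hypothetical liberal arts college: 1,800 students and 600 employees,
# supported by 28 FTE IT staff plus 6 FTE of student help.
ssl = staff_support_level(employees=600, students=1_800,
                          it_staff_fte=28, student_worker_fte=6)
print(f"Staff Support Level: {ssl:.0f} people per FTE IT staff")
# A higher SSL means each FTE of IT staff supports more people.
```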

We have also determined that looking at multi-year trends tends to be more informative than looking at a particular year. This is partly related to the "lumpy" nature of IT spending. For example, implementing an ERP system or replacing all the network electronics might have a major impact in one budget year, throwing off the benchmarks during that year. Looking at benchmarks over time, however, gives the institution a better sense of its "baseline" IT support needs.

Another decision involves whether to collect data for the expected IT budget or for the actual amount spent. Timing is important. We wanted the data to be useful for proactive planning, and we felt institutions could provide the IT budget almost a year earlier than the actual expenditures. We also felt the budget reflected the "plan" for IT rather than the unexpected allocations or constraints that might appear during the year. In addition, people told us that it was easier to provide the budget number. So, to promote high-level understanding and to maximize participation, we collected the budget data.

As mentioned above, the decentralization of IT support presents challenges. We decided that it was important to try to collect all IT support and budgets, wherever they reside in the institution, so that we could get a more complete picture. This decision probably accounts for the low participation by doctoral/research institutions, which are often so decentralized that they have great difficulty collecting the data for decentralized support.

New investments in infrastructure and other major capital projects also present problems for data collection. Often these IT projects appear as part of multi-year construction efforts outside of the operating budget. Although the ongoing support needs of these IT investments are often incorporated in the operating budget, the replacement costs are not always included.

7. All three projects report disaggregated data—in other words, means are reported by campus sectors and segments. Why is this important?

Hawkins: Although the annual CDS Summary Report does report data by Carnegie Classifications and by public and private control of institutions, the ability to completely disaggregate data, down to the individual campus, constitutes the unique value of the EDUCAUSE CDS. When data are aggregated, important differences are lost: differences due to the size of institutions, the control (public or private) of institutions, and the mission of institutions, as reflected in Carnegie Classifications. Being able to understand differences in these groupings is critical to making the data understandable and relevant for decision-making. An example is the question, "What percent of undergraduate students at your institution use their own personal computers?" The average across all institutions is 65 percent, but when the data are disaggregated by control, the average for private institutions is 78 percent, compared with 55 percent for public institutions. Very important information is hidden if the data can’t be "cut" multiple ways. The EDUCAUSE CDS tool allows this cutting.
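[A small sketch of the arithmetic behind that example: with invented counts of institutions per sector, an overall weighted average of roughly 65 percent can coexist with very different sector averages, which is exactly what the disaggregated view reveals. Only the two sector percentages come from the discussion above; the institution counts are hypothetical.]

```python
# Sketch of how an aggregate figure can disguise sector differences,
# using the student-computer-ownership percentages quoted above.
# The number of institutions per sector is invented for illustration.

responses = [
    # (control, number of institutions reporting, avg % of students owning computers)
    ("private", 435, 78.0),
    ("public", 565, 55.0),
]

# Aggregate (weighted) mean across all institutions.
total_institutions = sum(n for _, n, _ in responses)
overall = sum(n * pct for _, n, pct in responses) / total_institutions
print(f"All institutions: {overall:.0f}%")  # roughly 65%

# Disaggregated means by control reveal the hidden difference.
for control, n, pct in responses:
    print(f"{control:>7}: {pct:.0f}% (n={n})")
```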

Even when these basic filters or disaggregated groupings are applied, the complexity and differences in higher education institutions create ambiguity and potential misinterpretation of data. The environment at a large private research university can be quite different from that at many other institutions with these same characteristics. There are huge differences in financial or support responses, depending on whether the institution has a medical school or an engineering program, for example. That is why being able to group data based on the characteristics that best define institutions comparable to your own allows the most meaningful comparisons. And that is what makes the EDUCAUSE CDS effort unique. Most of the time, survey respondents are unwilling to allow their data to be shared because of concerns about misuse of the data by outsiders. But the trust and credibility that EDUCAUSE enjoys, accompanied by a very strong CDS agreement on use of the data, permit this kind of data access, a relative rarity in survey research.

Smallen/Leach: In benchmark comparisons, it is clear that institutional wealth (e.g., measured by total institutional budget per capita [employees + students]) plays a big part in IT decisions. That is, institutions that have more financial resources per capita tend to spend more on IT per capita. Institutional size, mission, and wealth all provide a context that must be understood to interpret IT data. In our view, the most informative comparisons are among true "peer" institutions, especially if a school can identify peers of similar size. In fact, we encourage peer institutions to participate together in the COSTS Project so that they can easily compare their benchmarks with those of similar institutions.

A high-level surrogate for peer-comparison purposes is the Carnegie Classification of Institutions of Higher Education. Although there are variations among institutions within the same classification, we have seen greater variation in the COSTS benchmarks across different Carnegie Classifications than within any one of them. In 2003–04, for example, the Staff Support Level benchmark (total headcount [employees + students] / total FTE IT staff) for liberal arts colleges (71) differed significantly from the SSL benchmark for master’s institutions (151).

Green: I agree with Brian’s statement: aggregate data disguise important differences. A single number portraying all U.S. colleges and universities—"x percent of students own computers, or y percent of institutions have online course registration"—may be interesting, but such numbers are not particularly informative, especially for campus planning and policy efforts. Campuses are most interested in tracking their peer institutions. For example, the user-access and faculty-support issues confronted by community college CIOs differ from those confronted by their counterparts in research universities or private liberal arts colleges. Consequently, I’d argue that it is essential to disaggregate data by segments and sectors.

8. There’s an old joke about benchmarking data: if the data show that you lag behind your peers, you use the data to document the need for more money to catch up; but if the data show that you lead the pack, you lobby for more money to maintain your leadership. Does data collection really inform institutional decision-making, or does data collection simply fuel the competition between institutions, what some would call the "technology race"?

Smallen/Leach: This story has a variation, known among institutional researchers. It is said that institutions choose their peer groups so that whatever is being compared, they will be in the middle. Institutions that appear below the middle try to get to the middle, and those that are above the middle attempt to maintain position. The result of these strategies is that institutions, for the most part, maintain the same relative positions. To some extent, this is what we’ve seen among institutions using comparative data, such as the benchmarks developed by the COSTS Project. Participants sometimes tell us that they need the data to make a case for more IT staff or more spending. The benchmark comparison with peer institutions, together with the case for needed services, probably enables the institution to make a more informed decision.

Our sense is that institutional goals and objectives ultimately drive spending in all areas of the budget. Institution-wide budget constraints provide the broad parameters in which IT must operate. If the COSTS benchmarks can help inform the cost of services, then institutions might at least be able to avoid adding new services without sufficient funding and staffing to support them. In the end, the culture of institutional decision-making will probably have more to do with how the data are used than any conclusions reached by IT surveys.

Green: We’ve seen a number of instances where planning and policy-making have clearly been informed by the survey data. Several state systems and consortia mandate that all institutions or members participate in the Campus Computing Project annual survey, and they ask for special profiles. Campuses use custom reports from the Campus Computing Project as a resource for accreditation reports, strategic planning, and presentations to advisory groups and trustees.

But my other, informal indicator of the importance of data—in this case, the Campus Computing Project—is the number of people who, over the years, have tagged me in the hallways at the EDUCAUSE annual conference and elsewhere to introduce themselves and say: "Thanks for the good work that you do. The questionnaire has helped me think broadly about IT issues. The survey data have been very useful." I’m sure that Dave, Karen, and Brian have had similar hallway experiences.

I suspect that the "technology race" is a secondary consequence. Yes, campus officials want to know how their institutions compare to peers. But my campus conversations suggest that institutions are not primarily concerned with beating the competition; rather, each institution simply wants to "do better" than it has in the past.

Hawkins: It is our strongest belief that accurate and reliable information is essential for institutional decision-making; that is the reason we started the EDUCAUSE CDS. We also believe that the data should be related to institutional goals and outcomes, as Dave and Karen noted earlier, and should not be treated as input measures to be examined in isolation. Yes, the CDS data and the data from the other surveys can be used to fuel a "technology race." But this information can also be used to precipitate campus discussions. We have received anecdotal information that simply filling out the survey instrument has caused some CIOs to start to think about various financial and operational issues differently and to start to talk about them as well. This result has real value.

Ultimately, I think Casey, Dave, Karen, and I all agree that IT investments must be judged in the context of campus goals and mission. The measure that matters is whether IT advances the institutional capacity in terms of learning, discovery, engagement, and service.