An Interview with Brian Voss, CIO at Louisiana State University

Richard Katz (RK): Brian, thanks for taking time out of your busy schedule to talk with me, and congratulations again on becoming CIO of Louisiana State University (LSU).

Brian Voss (BV): Thank you, Richard. I have to tell you that I was proud to be associated with LSU from the moment they appointed me, but I’m especially proud after having been through Katrina and its aftermath. This place and these people are just outstanding.

RK: As the events unfolded in Louisiana, we all watched with a sense of dread, wonder, horror, anger, and admiration. What’s it like now in Baton Rouge? How has LSU been affected?

BV: Baton Rouge and the LSU campus came through Katrina relatively unscathed. We had 70 mph gusts on this, the “light side” of the storm. There were significant power outages in and around the city; some lasted for days. The campus as a whole never lost power, although some buildings did sporadically, and we did run the data center off back-up generator power for about thirty-six hours to reduce the load on the campus.

As the storm passed on the afternoon of Monday, August 29, things looked “good to go” for the rest of a relatively normal week on campus. The university administration met at five o’clock that Monday evening, and we decided to stay with a “closed” status on Tuesday. We planned to reopen the campus for faculty and staff on Wednesday and have the students return on Thursday morning. There was some clean-up to do on campus, and a few buildings needed attention. But there was nothing major or scary. Then, as word began to spread about how ravaged southeastern Louisiana was, things started to get worrisome. And then, very early on that Tuesday morning, the levees failed in New Orleans, and things became very bad, very fast.

By afternoon on that Tuesday, it was apparent that an epic disaster was occurring in New Orleans. The state’s emergency response agencies notified LSU Chancellor Sean O’Keefe about a plan to use LSU facilities—specifically, the Pete Maravich Assembly Center (PMAC) and the Field House—as medical triage and special needs shelters, respectively. And by nightfall on that very day, all this began to form up in earnest.

As a result, Baton Rouge saw its population double in the space of less than a week. A metropolitan area of about 250,000 grew to over a half-million. Every hotel room remains occupied, and more than 75 percent of the houses that were on the market before the storm are now sold. Aside from everything else, this influx alone has compounded the impact of the disaster.

When classes did resume at LSU on the day after the Labor Day holiday, our student population had grown as the university—through arrangements made by the Louisiana Board of Regents (our higher education commission)—took in students from the affected institutions. By the time a special late-enrollment period ended on September 9, more than 3,000 new undergraduate students and nearly 300 new graduate students had been enrolled. This was a growth of more than 10 percent in ten days!

RK: What role has your organization been playing?

BV: As these special facilities came online, they needed IT services and support. Our first calls were for telephone connections and services, data connections, and mobile computers (laptops). As these centers started to receive patients and evacuees, their need to keep track of information and to have open lines of communication off campus increased steadily with each passing hour. So in the early hours, we performed mostly networking and desktop IT functions, to provide support to the agencies, doctors, and volunteers who were streaming into the PMAC and the Field House.

We continued to expand the IT infrastructure, with more than 120 telephones and nearly as many data connections (on top of wireless). You have to understand that these facilities were not that well-connected; we had no plan in place for this level of infrastructure in those buildings, so we had to build from scratch, with what we had in place as a base and using what we had in our IT storerooms. The facilities have long been designated as “special needs capable” on normal hurricane evacuation routes; we even activated them during Hurricane Dennis early this past summer. But they weren’t prepared for what they were ultimately going to be: by some reports, the largest field hospital and special needs facility in the history of modern disasters in the United States. The PMAC and the Field House treated more than five thousand individuals in the course of just over a week.

As that response system grew, so did its IT needs, as with any other function in the twenty-first century. As volunteers responded to the call, we needed to get information systems in place to track them. We did a lot of what I call “quick and dirty” IS work over the course of the days that followed. Sometimes we’d put these things together based on needs expressed at 8 a.m., and by the time they were ready later that day, the need would have shifted to something else. But our systems staff kept up with this, holding on to what “stuck” as useful and simply setting aside those things that became unnecessary. By the height of the response supporting the activities on campus, all parts of our IT service organization were involved. Even our High Performance Computing group was busy, implementing a previously scheduled consolidation of system support with the Center for Computation & Technology, led by Ed Seidel, and working those resources hard in support of the post-hurricane analysis being conducted by researchers associated with the LSU Hurricane Center.

Even those folks in my shop who weren’t called on for IT support were charged with giving comfort to our colleagues on campus. To quickly implement a 7x24 information hotline, we set it up in the Frey Computing Services Center, staffed by LSU’s Public Affairs organization and bolstered by volunteers from around campus. We made sure that they had, in their temporary location, food, drink, and the comforts we could provide. This was a bit of a stretch from our IT mission, but it epitomized the service culture of our shop. I was very proud.

RK: Has the corporate community stepped up to the challenge? How are you working with vendors?

BV: Almost instantly, we started to run low on supplies. So we put out emergency calls and orders for things such as IP phones (much quicker to set up than regular telephony) and laptop computer equipment. Our vendors responded by quickly filling and shipping our orders and then topping them off with donations of equipment. I have to acknowledge Avaya (our telephone equipment provider), IBM (which donated one hundred laptops and brought in its emergency response team, which ultimately plugged in to the state’s efforts), and Microsoft (which donated equipment and made available a team of experts—one based in Baton Rouge—to provide its emergency response systems in support of the operations at the PMAC). Microsoft had some equipment brought in with a National Guard convoy over the weekend of September 3. The Humvees dropped off about thirty-five boxes of equipment and materials. Cisco provided a “care package” of donated networking gear, and Panasas gave us an “indefinite loan” of 20 TB of high-performance disk, which we loaded up with FEMA images of the post-Katrina landscape of southeastern Louisiana for research analysis. I found out, in this time of need, just how strong our vendor partners were!

RK: How would you characterize the state of IT at the colleges and universities most heavily affected by Katrina?

BV: They were put temporarily out of business. The University of New Orleans (UNO)—part of the LSU system, as is the LSU Health Science Center (HSC), which is also based in New Orleans—could not get to its campus immediately (although the chancellor of UNO did go onto his campus with a military escort and managed to snag the data drive from the mission-critical support server!). UNO resumed classes at its Jefferson Center on October 10, becoming the first New Orleans–based institution to reopen (albeit outside of New Orleans). I know they’re proud of that. The plan is to resume classes on the UNO campus, if possible, for the spring semester. The LSU HSC has temporarily relocated to Baton Rouge and is using a variety of facilities in the area and on the LSU campus to continue its operations. It has even acquired a large ship (docked in the Port of Baton Rouge) and trailers to provide interim housing for faculty, staff, and students.

Tulane, Xavier, and Loyola were also put out of business, as were parts of the Louisiana Technical College (LTC) system and other community colleges in the New Orleans area (LTC was further affected by Hurricane Rita). A quick tour of their Web sites indicates that Tulane is operating its administration out of temporary headquarters in Houston (where it had to deal with the potential approach of Hurricane Rita later in September) and plans to resume classes in January. Xavier and Loyola are also attempting to resume classes this coming spring. I know all are using their Web sites to provide updates on their status, and I urge those interested to monitor these sites for specific details.

In the first days, one of the UNO staff got the Web pages out on a laptop! And we at LSU helped UNO reassign its DNS entries and get a splash Web page operational within forty-eight hours. LSU is going to play host to UNO’s IT infrastructure as it starts to rebuild to provide basic services, and I’m working with UNO CIO Jim Burgard to give him the support he needs to get things working in a “candlelight” mode. As the weeks have passed, that particular operation has started to rebuild and reestablish IT services. I stopped by the large room we made available to them as a temporary home, and it was packed with IT staff working diligently. We also put them in touch with the company to which LSU outsources student e-mail, and they were able to get a new e-mail service up and running for their students in a matter of days. But as the UNO folks begin to think beyond a recovery action with temporary locations and services and contemplate a return to their own facility, I know they’re going to be facing a host of new challenges.
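To make the kind of DNS cutover Voss describes concrete, here is a minimal sketch in Python. The hostnames and addresses are hypothetical stand-ins, not UNO’s actual records (the splash-page address comes from the reserved documentation range); the script simply checks whether repointed names have propagated to a temporary host:

```python
# A minimal sketch, with hypothetical hostnames and addresses, of verifying
# that repointed DNS records now resolve to a temporary "splash page" host.
import socket

REPOINTED_HOSTS = ["www.example-uno.edu", "mail.example-uno.edu"]  # hypothetical names
TEMP_SPLASH_IP = "192.0.2.10"  # documentation-range address standing in for the temporary host

def check_cutover(hosts, expected_ip):
    """Report whether each hostname has propagated to the temporary address."""
    for host in hosts:
        try:
            resolved = socket.gethostbyname(host)
        except socket.gaierror:
            print(f"{host}: does not resolve yet")
            continue
        status = "cut over" if resolved == expected_ip else f"still at {resolved}"
        print(f"{host}: {status}")

if __name__ == "__main__":
    check_cutover(REPOINTED_HOSTS, TEMP_SPLASH_IP)
```

In practice a check like this would be run repeatedly after the DNS change is applied, since propagation depends on the time-to-live values cached from the old records.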

RK: Are different sectors of higher ed, such as community colleges, being affected differently? Are all our colleagues getting the help they need?

BV: I know firsthand what’s going on with UNO and the LSU Health Science Center; we’re providing some temporary support to Southeastern Louisiana University as well. I’m getting information about other area colleges and universities indirectly or through their Web sites, as I mentioned previously. I think the work EDUCAUSE did to make available a place for these schools to request support and get it matched with help from member institutions is going to be of great value in the weeks ahead.

The state made some good calls. The affected students are being accommodated at other state schools (LSU, primarily), and faculty at the affected institutions are going to be paid through the fall semester and are being urged to find teaching assignments at the host schools to help handle the added class loads. It’s actually working.

This aspect of help, though, is a two-edged sword. I know there have been generous offers of support and assistance from our colleagues around the nation. These are a comfort, but we also need to realize that some of these offers will best be taken advantage of later on. I think in the aftermath of this kind of catastrophe, there’s a desire to rush in and help. But the kind of help needed in the early days is different from that needed later on. What’s going to be key is to be responsive to requests that come from these institutions, when they come—even if that’s well into 2006. Some institutions will welcome support down the road, once they’ve had a chance to assess the situation and have also seen the surrounding infrastructure settle down. It’s hard for a CIO in an affected area to accept personnel help when the CIO can’t find a safe and comfortable place to house that help when it arrives. I know I had some generous offers from vendors, but they required places for their staff to stay, something we couldn’t provide because all available rooms in the region were full! So I hope that we can give these CIOs the time they need to assess what kind of help they need and that when they make the inevitable request for support, we’ll be responsive to their needs.

RK: What will it take to get the IT operations at these institutions up and running?

BV: They’re going to need a place to start to rebuild, to get “online” as best they can before their campuses are reopened for onsite recovery activities. We’ve basically provided UNO with floor space and some loaned servers to get that process started, but they too got quick responses from some of their vendor partners and actually had more equipment delivered the second week after the storm. Things are still developing in that regard, but to my knowledge, all the affected institutions have made some good progress in recovering and in reestablishing their operations. Let me say again that although the offers of help have come quickly and generously, this is not going to be something that fades with the news cycle. These institutions are going to need help for a long time. I’d ask our colleagues to be responsive to requests for the next several months—and possibly throughout this academic year. Many are still in shock, dealing not only with professional issues but with personal ones as well. Remember, these people didn’t lose just their offices and data centers; many also lost their homes and loved ones. Dealing with the issues surrounding recovery of their operations may be down the road a bit as they deal with these personal losses first.

RK: Data from the EDUCAUSE Core Data Service and from ECAR studies suggest that many of our colleagues do not have disaster recovery and business continuity plans, let alone hot sites. What have you been observing?

BV: That’s accurate. Disaster recovery (DR) has long been “that thing we’ll get to when we get done building infrastructure and services.” For the affected institutions, that’s biting hard right now. Some had their off-site back-ups stored only blocks away from their campus—a good strategy for a tornado, maybe, but not for the kind of citywide or regional disaster we saw with Katrina. I took a hard look at my own DR here. Had Baton Rouge experienced such a disaster directly, we’d have been in this same boat of IT darkness. From what I’ve observed, DR was not done adequately to prepare for this kind of disaster. Not by a long shot.
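Voss’s point about back-ups stored “only blocks away” lends itself to a simple sanity check. The sketch below, in Python with entirely hypothetical site coordinates and an assumed separation threshold, flags backup locations that sit inside the same disaster radius as the primary data center:

```python
# A minimal sketch, with hypothetical sites, of flagging backup locations
# that sit inside the same disaster radius as the primary data center.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3959.0  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

PRIMARY = ("Primary data center", 30.41, -91.18)   # hypothetical coordinates
BACKUP_SITES = [
    ("Tape vault downtown", 30.45, -91.15),        # only a few miles away
    ("Regional partner campus", 32.30, -90.18),    # well outside the metro area
]
MIN_SEPARATION_MILES = 100  # assumed threshold for a region-wide disaster

for name, lat, lon in BACKUP_SITES:
    d = haversine_miles(PRIMARY[1], PRIMARY[2], lat, lon)
    verdict = "OK" if d >= MIN_SEPARATION_MILES else "TOO CLOSE"
    print(f"{name}: {d:.0f} miles from primary -> {verdict}")
```

The threshold is a planning assumption, not a standard; the point, as Voss notes, is that a separation adequate for a tornado is not adequate for a storm that takes out an entire metropolitan region.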

RK: How have these observations changed the manner in which you go about your job? Has this experience altered your priorities?

BV: How could it not? I was also one of those CIOs who looked at DR as a downstream step. I figured that once we had built something worth recovering, we’d start on disaster recovery. But it wasn’t on my immediate horizon. Now, that’s changed. I know I’m going to need to chase funding for building IT infrastructure across teaching and learning, enabling research, providing quality information systems and environments for LSU, and improving the student IT experience. But now, DR is taking its place right alongside those other efforts. We have to plan for this kind of thing.

RK: Do disaster recovery and business continuity planning need to move up on our priority lists?

BV: Absolutely they do. I think we’ve had a failure of imagination here, even after 9/11. DR seems to be focused on a direct hit to one’s own data center or campus (and even that focus, as I said, was down at the bottom of the list of priorities). But Katrina has taught us what happens when not just a building or a campus is lost but a whole city. Many colleges and universities are in major cities. What if they had some sort of similar wide-scale disaster? I think no one is prepared for that. No one.

But I’ve also learned there’s a related aspect: what if you’re the last one standing? What if everyone else is in the ditch, and your data center is the only one with lights and network connectivity? What will you do if called on to help restore systems and IT services used by your neighbors? And what if—as happened at LSU—the call goes beyond higher ed IT and requires your large, public university to serve the greater community in a broad set of ways? CIOs know just how strategic IT is to the university’s function. We have a lot of literature on that, and I think for the most part, we have been successful in getting executives to see the role of IT (thanks in large part to the efforts of EDUCAUSE). But CIOs also must realize that our entire society’s function is now heavily based in IT enablement. When disaster strikes, if your campus is on its outskirts, you’re going to find yourself in a situation like LSU’s. Overnight, the CIO’s mission will broaden unimaginably (well, not beyond imagination any longer!). So I think CIOs need to prepare for both sides of the DR issue: what if we’re hit—and what if we’re the only ones not.

And finally, I think the entire community needs to be thinking about this on a nationwide basis. Off-site storage, stores of equipment, and even local hot sites are not going to be enough to deal with a calamity of this scale. We need to think about regional and national collaborations that might allow us all to build a grid of DR infrastructure, deployed from the unaffected parts of the country to serve the parts that have been hit.

RK: EDUCAUSE discussion lists have been alive with reactions to Katrina. People want to help. As you know, EDUCAUSE has deployed a site to circulate hurricane-affected members’ needs and other institutions’ offers of help. How are you advising people who need help?

BV: If we can’t help them—and we can’t help them all, given the tasks we’ve been engaged to do—we’re sending them to the EDUCAUSE site. I’m advising people not to be bashful about asking for help, no matter how far away that help may be. The higher education community—as represented by EDUCAUSE and Internet2 members—is very generous. People want to help. We just need to match needs with capabilities.

RK: How has this experience changed you? Is there a single piece of advice that you might want to offer our readers?

BV: What it has done is made me even more dedicated to building a first-rate IT environment at my institution. LSU has a good IT environment and infrastructure, and there was a desire to make it better, as discussed in the National Flagship Agenda (LSU’s strategic plan vision). But I’m now even more dedicated to building the best here, so that we can serve the campus and institution in the best of times and so that we’re ready to do an even better job than we did in the worst of times. And you can bet I’ll be including a “worst case scenario” in building that infrastructure.

This disaster has also shown me what wonderful staff I have—how dedicated they are to this institution and to their state. I thought that this staff was good before I took the job, and I felt more strongly about this as my first few months passed. But through this event, I’ve come to know it, to be convinced of it. They have what it takes to handle a crisis; they have what it takes to advance LSU’s IT future over the long run. My staff were a silver lining in an otherwise very dark situation.

As I said above, this experience has made me reevaluate the priority I will place on DR in my upcoming IT strategic planning process. During my job interview, I was asked: “How will you prioritize everything that needs to be done here, given scarce resources?” My response was to say that this is the wrong question. The right question is: “How will you go about getting the resources you need to get everything done that needs to be done?” Now, part of that “everything” is disaster recovery and business continuity planning. In my recent (September 9!) reorganization of my unit, I elevated DR to a higher level and put one of our most experienced (and DR-devoted) staff members in charge of the function, reporting through the IT Policy & Security and Information Integrity function in the Office of the CIO. We’re starting on this right now. We’ve put together a list of about $3 million worth of things we need to bulletproof our data center and added the list to the institution’s emergency needs assessment (which was formed in the first week after Katrina). But as we go along, we’ll be looking at that second facet I mentioned more directly—because we now know what it means to be the last one standing.

A single piece of advice? Pay heed to the two encounters with hurricane destruction in the southeastern United States this fall and what they mean for your institution’s IT disaster recovery and business continuity planning. Not everyone lives in an area threatened by hurricanes. But the areas threatened by hurricanes and tornadoes, those endangered by earthquakes, and those exposed to wide-area disasters such as a terrorist attack, industrial accident, or pandemic disease cover nearly all of us. Don’t have a failure of imagination. Imagine what would happen if your campus were destroyed and left unusable for weeks or months while your institutional community still needed IT service and support. Imagine, further, that the metropolitan area surrounding your institution was also left unreachable for days, weeks, or months. And imagine what would happen on your campus if a nearby major metropolitan area suffered a total disaster and you found yourself on the edge of it. The role of your institution would quickly broaden beyond your traditional focus.

CIOs can no longer say they can’t imagine what could happen—because it just did.