In this episode recorded live at the 2024 EDUCAUSE Annual Conference, hosts Sophie and Jenay discuss how higher education can use available resources to address cybersecurity risks surrounding artificial intelligence with guest David Seidl.
Takeaways from this episode:
- Learn why the advent of artificial intelligence is highlighting foundational competencies in higher education technology.
- Understand why personalized cybersecurity training is key to creating safer institutions.
- Consider how to balance the business cases for using AI and data with cybersecurity and privacy concerns.
- Think about how data is "the new gold rush" and why that changes our approaches to everything from data privacy to business strategy.
Transcript
Sophie White: All right. Hi everyone. Thank you so much for joining us today, and welcome to EDUCAUSE Shop Talk. I'm Sophie White, a content marketing and program manager for EDUCAUSE.
Jenay Robert: And I'm Jenay Robert, a senior researcher at EDUCAUSE.
Sophie White: Today we have with us our guest, David Seidl, vice president of IT and CIO at Miami University of Ohio, not Miami, Florida, on the show to discuss AI and cybersecurity. I'll tell you all a little bit about David, we'll have a conversation, and then we'll ask for audience questions at the end if you have any for us. In his role at Miami University, David has been heavily involved in Miami's move to Workday and has led Miami IT through major initiatives including identity management modernization with Rapid Identity, CrowdStrike EDR implementation, and navigating Google's storage changes. During his time at Miami, IT Services has been named a top 100 place to work in IT three times in a row and has won two CIO 100 awards for innovative IT partnerships. David has a background in cutting-edge work and co-led Notre Dame's move to the cloud. He also served as Notre Dame's director of information security, leading its information security program, and taught a popular course on networking and security for Notre Dame's Mendoza College of Business. He has written twenty-four books. I had to double check this was not a typo.
David Seidl: It just keeps happening.
Sophie White: Okay, we'll talk about that with a focus on security certification and cyber warfare. David is also a member of our EDUCAUSE board of directors, so he's very involved in EDUCAUSE as well.
David Seidl: I was going to try and get the intro to be David is a big geek and they wanted a little bit more than that.
Sophie White: We want a little more. So yeah, twenty-four books. Can you tell us how that happened? How do you do anything else other than write books?
David Seidl: I always start by fessing up to my origin story and that is that my parents are both librarians.
So if you are raised around books and think about books, then when somebody says, hey, would you like to write a book? Your answer is, of course, yes. Little do you know that that's a slippery slope. And so when I started at Notre Dame, my coworker Mike Chapple was writing a very successful security certification book, and I kept going into his office and going, I'm kind of interested in that. And I annoyed him enough by pestering him that he eventually said, would you like to be a tech editor? Which is a much harder job, because then you're correcting your friend about the things he got wrong on a technical basis. And I did that for three authors, and they said, you were really helpful and you actually helped us make the book better. And then Mike said, would you like to co-write with me? That is a little bit over a decade ago now. We did the first book, and we really enjoyed working together. On the second book, we really enjoyed working together. We have now done twenty-two of those books together across that time, and it's just a fun thing to do, although it does kind of sneak into your nights and weekends every so often.
Jenay Robert: Can you say a little bit more about what the books are? I mean these are, you said certification?
David Seidl: Yeah, so if you want to get a security certification like the CISSP, I write the test prep book for that, the questions, so if you want to take a bunch of practice exams, I write that. The Security+, the CySA+, I've done PenTest+, a bunch of the certifications that you do in the security space. Then I've done two editions of a textbook on cyber warfare for Jones and Bartlett.
Jenay Robert: Not like curl up on a cool day with a fireplace, and we're here to talk . . .
David Seidl: Cybersecurity, maybe. Maybe so. It used to be that I would give a copy of the book to my parents; my first one I dedicated to them. And my mom eventually said, stop, we're not reading them. I tried to open it up and I don't understand any of this. And she said, we're glad other people buy them because that's probably good for your career. But yeah, they're in a niche. If you want to get a certification, we try and write really approachable certification books.
Sophie White: You need a new bookshelf, too. By the 24th one . . .
David Seidl: I am actually almost out of space for the next two when they come out. But it's kind of fun that I have my shelf of books, seeing how security has changed over the decade that I've been writing and the new topics that have come up. AI is a very recent addition that's been becoming more visible, and so that's been a thing that we're starting to include. It keeps me playing in and enjoying a space that I came from, and I really appreciate that.
Jenay Robert: I love that, and I love that cool fact about you now. It's just a neat thing to note.
Sophie White: On that note, I'm curious: in writing all of these books, and you mentioned cybersecurity has changed over the years since you've been writing them, what have you seen that look like? What's the transition been like from point A to point B?
David Seidl: I'm going to go back almost twenty years. When I started as a cybersecurity professional at Purdue, we would show up in people's offices and say, we're here with the security team. And they said, basically, what's that? And we would say, we need to do security. And they'd say, why? And so we had a very uphill battle through 2003 to 2007, when I was there, just getting people to actually talk to us. And I'm really proud that by the time I left, people would call us before they started doing things instead of after they had done them, and they'd say, hey, we're thinking about doing something, we'd like you to be engaged. And so we now live in a world where everybody has some awareness of cybersecurity, even if it's only because it's on CNN and something bad happened, or there was a CrowdStrike issue and your flight got delayed.
All of those things are out there and in the space, and at the same time, organizations and institutions have learned that it is a discipline, and it is now treated more like a discipline. It's funded more like a discipline. It's staffed more like a discipline. The other change is that because it's so common, you don't always have to have a security person in the room to represent security, and that's transformative. It also means that security has been a problem and we've had to address it. The thing I'm looking at now is that accessibility is on that same curve. And so my accessibility director says, nobody listens to me. And I said, okay, you're in 2003 to 2007; I understand this path, we're going to walk it. He's actually made a lot of those steps. Now people talk to us about accessibility, really. So that's exciting too.
But it's changed a lot. The human factor is still a big issue, and a lot of the time we joke that the technology is different but the underlying problems are the same. So AI is a new technology, but the underlying problems with AI are, at a meta scale, the same sort of problems we've had before. Is your data where you expect it to be? Are people sending data where you wouldn't want it to be? Does the new technology introduce new risks? Those are all the same questions. It's just got AI on the end of it.
Jenay Robert: Yeah, I'm having those conversations all over the place, because my last year and a half or two years has been completely consumed by AI-related research. But what's exciting about that to me is that it gives me a chance to dabble across the institution, because this is something that, as you said, is really just shining a light on some things that we've always been concerned about, or should have been concerned about in some cases. It's just shining a brighter light on those things: data governance, privacy and security, accessibility, the gamut of things across the institution.
David Seidl: And the inflection point that some of those AI issues come from came before AI, and it was the app economy and people being able to adopt things in the cloud very easily. When you had to go buy a piece of software and install it on a machine, that added some difficulty, though most of our faculty figured out their end runs around that already anyway. But now if you just click on it, it's on your machine; if you use it through the web, it's in a browser. And so the adoption curve and the difficulty of adoption is negligible, if it's there at all. And that means it's easier to bypass institutional policies without even thinking about it. We don't even always know when we're using AI. You're not always aware that AI is behind the scenes. You're not always aware that the contract or the license that you've clicked through says, we can use all of your data, thank you. And so all of those concerns have been real for a long time. This is just bringing them to the forefront in ways that we are finding harder to ignore than before.
Jenay Robert: Yeah. We just published this cybersecurity and privacy Horizon Report, and I was one of the researchers on that project, and I've worked with the community for a while now through EDUCAUSE. I've said a couple of times lately that I used to work at higher ed institutions before I worked at EDUCAUSE, and no matter where I worked or what my role was, I always thought that privacy and security were someone else's job. And you're laughing, because it was my job. It's everybody's job, and all this time I just didn't know. I'm a hard worker. I care about the mission of higher ed, I care about the safety of our community and our students as individuals, and yet I didn't realize that I had a lot of responsibility in that space. So I appreciate the work your community does to raise that awareness.
David Seidl: I was in a giant auditorium for a security Halloween event almost twenty years ago, and somebody said something very similar. They said, this isn't my job, and the whole room turned around and looked at her, because that showed a misunderstanding of our role, particularly in the context of students and their data and why we should protect that data. The whole room is thinking, wow, you don't get the job. We're higher ed. We are here for our students all the way. If a student is having a problem, we should be there as an institution to help them through that problem. If we are trusted by that student with their data, we should be doing our best to protect that data regardless of our role across the institution, because we have been trusted with it. And I think she caught on that the whole room was looking at her, but I'm glad that more and more people are recognizing that and having that moment and realizing it.
Sophie White: Yeah. I'm curious if you have specific strategies you've found for when someone says, this is not my job. You understand the mission of higher ed and why this applies, but do you have any tips for cybersecurity professionals to help them spread the word about why it's important?
David Seidl: I think that the first thing about understanding security for most folks is having it personalized: why does it matter to me? You had that moment where it suddenly was like, oh, this is me. For a long time, as technologists, we tended to tell people the technical explanation, or a you-have-to-because-it's-what-the-institution's-adopting, or some other thing that wasn't personal. And so it helps if you can build that sense of connection. Get out and find their why. Why are you here? A lot of us are in higher ed because we care about the social good that higher ed generates. We care about giving students the education that will allow them to transform the world. Okay, that's a good why. So then can we put cybersecurity into a context that is meaningful for that why?
We can do that. We can talk about it that way. Take a phishing scam. All of us get phishing scams in our email all the time. Why does awareness matter? Because if you give up your password and your user account, the data that belongs to your students, that student record, could be exposed. And that's not okay. So I have to care. I have to look at my email a little more carefully. I have to think about things. I have to turn my brain on. I have to have my coffee before I click on things. All of those are true. So find the why and meet people where they are, instead of expecting them to come to you with your reasoning.
Jenay Robert: Yeah. When I think back to when I was working at institutions and didn't understand cybersecurity and privacy the way I do now, that missing link for me was I didn't realize that I was an entry point into the institution no matter where I was. And we talk about this in the cybersecurity and privacy Horizon Report, about how the quote-unquote perimeter is changing. This is something you'll have to speak to because I'm not in the field, but when I was writing it this year, I was looking at the data from the panel and thinking, I never realized how nebulous this idea is. You think about security in my house: I have walls around my home, I have property lines, there are things that I can very clearly mark as my space. And that's not really as clear with an institution, right? Can you talk a little about that?
David Seidl: When I started working in security, we tended to draw diagrams that looked a lot like a medieval castle to talk about our security infrastructure. We'd talk about layers of security: the moat is the firewall, and the firewall protects us from all of the invaders. And then as we've brought phones onto networks, as we bring personal devices, as we allow apps in and out, we run these hybrid infrastructures. The moat has bridges and tunnels and helicopters flying over it. The moat doesn't work anymore. And so we started to talk about concepts you've heard of, like Zero Trust, where we assume that everything might be insecure and we have to do validation and verification all the way through the process and just consistently check. We've looked at whether our individual devices should have security tools on them, so we deploy something like CrowdStrike to our endpoints to make sure that they are protected where we once would have just set up a firewall.
Now we watch everything that's happening on that device, and our faculty members and our students and our staff say, you're watching everything? And we say, yes, we're watching for bad things; we're not watching for all the rest of the things that you're doing, so you're going to be fine. We don't have the time or resources to look at everything that you do, and we don't want to. But you take all those layers and apply them throughout your infrastructure, because you are very porous in a lot of ways. You can still have secured things, you could still build that box, but in most cases we're pretty porous and things are going to come through.
Sophie White: That's a great point. And then how is that perimeter changing with the advent of AI, do you think?
David Seidl: It's another set of helicopters and bridges.
So there's a temptation to put everything that you do into an AI to get some feedback on it. In a session I did yesterday, I asked how many people use Grammarly, and I'll ask our audience: how many of you use Grammarly? A few. Do you think about the fact that all the stuff that you put into Grammarly is leaving your institution and going somewhere else? A couple of half nods. If I were a malicious actor (and malicious actors who are watching, don't listen to this; this is not for you; stop scrolling, just go right past), gee, wouldn't it be really interesting to see all the things that Grammarly sees in a day from our organization? Because I think you'd see everything from maybe performance reviews and people who are being fired to things that could help you invest in the stock market, because you might get disclosures early.
I don't know how many of us think about things like Grammarly as a potential data leakage issue, because most of us think of Grammarly as an excellent tool for helping you build better writing. And we're doing the same thing with many other AI tools, where we go ask ChatGPT to make us smarter, and that means that some of that data is leaving. Now, the good news is that you can sign an enterprise contract. Our Google contract says that Google's not going to use what we feed to Gemini to train the large language model and make it better. In many other cases, we are clicking on things and saying, sure, that's fine, I get a free service, and so that data is leaving. Samsung banned AI internally for a while because some of their critical information, about a chip they were developing, I believe, was leaked because an engineer asked some questions about it with ChatGPT, and then the model trained on that data.
And if you asked it, what would be a great way to compete with Samsung, it would provide you some hints based on Samsung's internal documentation. So there are issues around all of this, and it's also a very quickly moving space. One of the biggest challenges for organizations is when technology moves faster than human understanding does, and AI is new and it's cool and it's appealing, and none of us are ready for all the things that we have to think about there. How many of you actually read contracts? Nobody is going to say yes. Nobody's read the entire contract. Did you click through on it? Yes. When's the last time you clicked okay on something? Probably in the past week you've clicked okay on something and didn't read the contract. None of us read those contracts, but we're all agreeing to them, and that data's going somewhere. And in some cases that's data we've signed a contract about, saying I'm going to be careful with this, or it's policy at our institutions that we're going to be careful with it. Our heads are just not ready for that. Our institutions are not ready for that, and we have to start thinking about it and making policy and awareness decisions that match that need.
Jenay Robert: Something that gives me a feeling of overwhelm (I don't know how you're in this career, because I would be constantly worried; I would never sleep) is to think that even if my institution dotted all the i's, crossed all the t's, and we were only contracting with trusted partners, now our security is only as good as their security. They're not going to use our data poorly, but somebody might get into those systems. How do you manage that?
David Seidl: Some of it is in vendor assessment for the things that are critical; you probably can't do this for every single thing. The software that runs the linear accelerator in your science department that three physicists use, you're probably not going to look at that as thoroughly as you're going to look at your Microsoft contract or your ERP contract. So you need to triage. We need to spend the time where it's most important, where it will have the largest impact. But you go look at those vendors and say, hey, can you show me some external audit data that says you have good security processes? How many breaches have you had in the past year, in the past three years, in the past five years? And what did you do afterwards? Did you do good things or bad things? We have removed a vendor that had a breach, didn't learn, and then had another breach, and they were one of our critical software packages.
At the contract renewal timeframe, we switched vendors, because we said two strikes is plenty. We don't need three strikes; two strikes is plenty. We will switch. And we took the overhead of doing that because we knew that we wanted to reduce that risk. So you approach it institutionally, and then you have to approach it from the other end and make sure that people are aware and making as good decisions as you can equip them to make on a day-to-day basis. That's education, that's awareness, that's getting in front of them. And you know that you will have people who inadvertently or on purpose make different decisions, but you can reduce the risk. The way you sleep as a CISO, the way you sleep as a security person, is: have you done all of the reasonable things that you should and can do? Have you expressed the risks to leadership so that they're aware? And do you have a plan that you've exercised for what happens when it goes wrong? If you've done those, you can try to sleep, at least.
Sophie White: So I'm curious. I just want to bring it back to AI a little bit, because everyone's asking about it. When we were starting this conversation, we talked about how AI is exacerbating existing cybersecurity issues that we've had for a long time. Can you talk a little bit about what those issues are and how the landscape is changing with AI? We talked about data governance and quality, et cetera.
David Seidl: Yep. So with AI, the new gold rush is data. And we as higher education institutions have some very valuable tracts of data. We have our research data, we have our data about our students and our institutions. Think about your alumni; think about all of that data that you have. We have years and years of human-sourced writing, and that's very valuable. So the major AI companies are all contracting with Reddit, because why not? Because all the stuff on Reddit is brilliant. But it is a huge amount of human-generated content, and it is a huge amount of people interacting and Q&As. That's really valuable if you're trying to build a system that will respond in ways that make sense to humans when you ask it a question. And so data is big. I will also point out that when you feed your AI all of Reddit's data and then you ask it a question like, how many rocks should a human eat per day, do you know what the answer was from AI? One to two is a healthy number. It turns out that if you ask Reddit data sources some of those questions, well, people have been trolls on the internet, and some of those answers come back. So I'll give you a second question.
Sophie White: Same thing on my rock intake . . .
David Seidl: But I have not been nearly where I should be . . .
Jenay Robert: It's been a while since I enjoyed a good rock.
David Seidl: Are either of you cooks? I'm going to ask you one of the hardest questions you've ever had as a cook. What's a great way to keep cheese on pizza?
Sophie White: Oh, I've heard about this. With glue, right?
David Seidl: It's true.
Jenay Robert: That's new to me.
David Seidl: Apparently you add glue, non-toxic glue. So we are in a data gold rush where we want the AI to have a bunch of useful data, and in some cases we would like it to have knowledge. But remember, AI does not currently know things; it just understands how to respond like a human. In some cases I call this round of AI highly advanced Mad Libs.
Sophie White: Love that.
David Seidl: It's giving you reasonable things that could go into a sentence, but it doesn't understand what the sentence is, in its current version. That's beginning to change, where logic and some of the other things are starting to be more coherent, but data is really important. At the same time, AI is a very useful tool. One of my staff members did a thirty days of AI, and every day he would use AI for a different task, including asking it what he should ask the doctor, because he and his partner both say, we're not great at asking the doctor the questions we should when we go in for an appointment, and he said AI helped a lot with that. So it's compelling to do that, to ask that question. But what did he provide to the AI engine? Information about medical status for him and his partner. And when you ask that kind of question, your data is leaving.
If you are a researcher and you ask AI about something in your research area, that data is going to the AI and being processed, and what the AI vendor is doing with that data can vary contractually and also based on whether they're securing that data well, because it's probably going somewhere and sitting there for a while. If you are a 23andMe customer (I'm not going to ask anybody to raise their hands), 23andMe might be getting ready to sell your data as they get acquired, because they're likely to go out of business as they currently exist. Are you concerned about your DNA data being out there and then being ingested into an AI system? That might be interesting. Would you like to get the email that says, hey, it looks like you're susceptible to this medical disorder; we bought 23andMe data, and we'd like to sell you insurance for that? Would you feel weird about that? So we have all of these issues and a super compelling tool that really can make us more effective in our day-to-day work in many ways, and they are both the same thing.
Jenay Robert: I might be showing too much of my naivete here, but this idea of companies acquiring other companies purely because of the data was something completely new to me in the last few years. I didn't know this was happening in the world, and I learned it through my research. I'm a smart person. This is why it's so important to talk about these things: there are really smart people on our campuses who just don't know, and that's why I'm happy to say I didn't know. It drives the point home of how important it is to raise awareness. But that aside (that's a whole thing), companies are sitting on data that we've said, yeah, you can use my data for this, and you don't realize that ten years from now that contract you had is just gone, and now your data is with some other company entirely.
David Seidl: I'm going to ask you, I'm a history buff and geek, so I'm going to ask you a question that you will not expect. Do you know why steel from sunken battleships from the beginning of World War II is valuable in today's economy?
Jenay Robert: No idea.
David Seidl: Because modern steel has a slight amount of radioactivity because of the nuclear era, and so steel that's been underwater does not have that radioactivity. And you're going to say, why is David talking about this? Because data from before the advent of AI has not been polluted by AI, and data after the advent of AI in many cases now has AI involved, whether it is bots or something else. We are now living in an era of polluted data, where AI is interacting with AI. And in some cases, if AI talks to AI, it goes just as far off the rails as you'd expect. If you go look at a modern X conversation, you're going to see all kinds of bots out there arguing with each other, to varying levels of, wow, that's dumb. We now value pre-AI data even more, and people who have pre-AI data sets are going to be shopped for and looked for and used. And yeah, that one blew my mind. Like, oh yeah, there's a pre-AI moment where my data might be more valuable because it's actually human generated.
Sophie White: Around when is that era? I'm trying to think. Obviously ChatGPT brought AI to the forefront, but it was around . . .
David Seidl: It was around before, doing things. It was not as universal . . .
Sophie White: Right?
David Seidl: So you could go back two years and probably be mostly okay. If you went back ten years, the data sets start to be less comprehensive, because not as many people were using the internet. So it's somewhere in there. It's probably never going to be completely pure, but there are conversations about what if you want an only-humans-made-it dataset.
Jenay Robert: Right? I'm picturing two robots sitting at a restaurant, and then the robot waiter comes over with a bottle: this is a fine bottle of data from 1980 . . .
David Seidl: Would you like to consume this so you can sound more like a human?
Sophie White: Yeah, the older the data, the better. That's a great image. I'd like to make an AI image; I'll ask ChatGPT. We need to generate that.
I just lost my train of thought. I was thinking about shifting a little bit to the Top 10 that we talked about today. The number one issue is the data-empowered institution and just how powerful data is in terms of decision making and also addressing even these trust issues that we're seeing in higher ed. That's tricky, because data is so dangerous on one hand, when it's taken by threat actors, taken by the wrong people, and at the same time it's so valuable. So how do you balance that in terms of data collection?
David Seidl: And it's compelling, because the more data you have, the more you could do with it, at least in theory, and putting it somewhere central to make it useful to you also creates a really nice target. And so you will see malicious actors looking for data sets to acquire and then potentially sell or resell. And yet if we don't do that for our institutions, we have failed, and the same if we don't give our institutions AI tools. A lot of us, when we think AI, just think LLMs right now: large language models, ChatGPT. That is a tiny fraction of the things that we can call AI. There's machine learning, where we're doing algorithmic improvement, where we're doing things that can identify cancer or something like that; that space is real. And there are other types of AI that we're going to see as we develop new models, as we talk about quantum computing, as we build more human-like AI; that also is there.
So it's not just ChatGPT when we talk about this. We need those things to make our organizations better, because we as humans do not have the attention spans or the capability to work with data at that scale as effectively as we should. Those few people in organizations who are really good at it are rare and wonderful resources, but wouldn't it be nice if everybody was able to do that relatively easily? So, an example at a very small scale that came up: we do a recurring data summit at Miami where we bring all of our data practitioners together, and we talk about how we're using data, what the tools are, and we help each other think. And so I was in a room with one of our staff, and she has all of the questions that get asked of Miami on social media. It's in an Excel spreadsheet; they just put them there, the interns put that in there.
They said, we have a problem: we have all the questions that get asked, and we need to do some rough sorts of these to figure out what the categories of questions are. And historically, how would we have all done this? We would've grabbed a student employee and said, here's a pile of Post-it notes and a whiteboard; I need you to go through 40,000 entries and tell me the thirteen buckets you're going to get out of this. And that poor, miserable student employee would've done that, because that's part of their internship, and you would've gotten some buckets out of it, and it would've probably taken a week or two with a large volume of data. And I said to her, what if we fed it to ChatGPT and asked it to just summarize this data set? She said, I haven't tried that yet.
She says, I'm going to do that while we're asking the next question. So the next question gets asked, and then she raises her hand and says, I've got it categorized. I know what the most common categories of questions are, and I looked at it. Human in the loop is important at this phase; always do human in the loop and look at what you're doing. She says, it all looks reasonable, and we've got a week of our students' time back to do something else with in our social media group. And so we're looking at solutions like that, where we can use AI tools that are now commodity tools to save us time and make us better. We still know it's a little dangerous, but we also know that we want that benefit, and so we're trying to make informed decisions about what we do. If somebody magically got our spreadsheet of 40,000 questions, a FOIA request or something like that, that's probably fine; that's on social media, that's public. And so that was a really good decision, and it helped them make a job less miserable and got us information right away.
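For readers who want to picture what that kind of quick categorization pass might look like, here is a minimal sketch in Python. It is not Miami's actual workflow: the file name, model choice, example category labels, and use of the OpenAI Python SDK are illustrative assumptions. The point it demonstrates is the same one David emphasizes, batching questions through an LLM for a rough sort and keeping a human review step at the end.

```python
# Sketch: rough-sort a spreadsheet of questions into categories with an LLM,
# then hand the bucket counts to a human for review (human in the loop).
import csv
from collections import Counter
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def categorize(questions, batch_size=50):
    """Ask the model to assign each question a short category label."""
    labels = []
    for i in range(0, len(questions), batch_size):
        batch = questions[i:i + batch_size]
        prompt = (
            "Assign each numbered question a short category label "
            "(for example: admissions, financial aid, housing). "
            "Reply with one 'number: label' pair per line.\n\n"
            + "\n".join(f"{n + 1}. {q}" for n, q in enumerate(batch))
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        for line in response.choices[0].message.content.splitlines():
            if ":" in line:
                labels.append(line.split(":", 1)[1].strip().lower())
    return labels


# Hypothetical export of the social media question spreadsheet.
with open("social_media_questions.csv", newline="", encoding="utf-8") as f:
    questions = [row[0] for row in csv.reader(f) if row]

labels = categorize(questions)
print(Counter(labels).most_common(15))  # a person still sanity-checks these buckets
```

The design choice worth noting is the last line: the model only proposes buckets, and a staff member decides whether they look reasonable before anyone acts on them.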
Sophie White: And that seems like a valuable exercise for any students who are involved, too: let's double check that this data looks reasonable before we use it, and have that whole conversation. Because as we look at the mission of higher education, teaching how to use these AIs is going to be essential too, whether we like it or not.
David Seidl: If you're upgrading to the most recent version of iOS on your iPhone, it is now doing text message summaries when you have not read a series of text messages, including summaries like, I'm breaking up with you, I want my stuff back. It will summarize anything, and it gets it wrong sometimes, because it's trying to do a summary. And so I'm reminding people, the AI summary that you get when you search, the AI summary that you get when you look at your text messages: go look a little deeper. It may not have done what you wanted it to do. It might be a little dangerous, but we're going to be surrounded by it. And I suspect some of you in our audience have kids or grandkids who are TikTok users, and when they want to know something, they search TikTok instead of Google; there's a whole generation that does that. We are now getting to the next generation, who will ask an AI. So you might want to think about how many rocks you should eat every day, and how you will teach the people who are growing up in a world where you search with AI to question that and to make good decisions about the responses they get from these useful tools that we still have to think about.
Jenay Robert: Yeah, that reminds me of a scenario in the cybersecurity and privacy Horizon Report where we talk about cybersecurity and privacy education becoming a foundational element from kindergarten through lifelong learning programs. For those who aren't familiar with the Horizon Report scenarios, they're not meant to predict the future; they're meant to help us think about potential possibilities, pieces of things that we might see and plan for and aspire to or avoid. And so our panelists helped us think about this potential future where, from kindergarten on, in developmentally appropriate ways, we are constantly bringing cybersecurity and privacy to our students. And the reason is because we are living in this digital world. Just as you were saying, David, you've got kids going to AI to ask these questions from day one. So I think that cybersecurity and privacy education from an early age is not out of the realm of possible things in the near future.
David Seidl: It's compelling and easy. And it's not just do-my-homework-for-me; it is interaction. I think probably most of this audience (if you're here talking about AI, you're interested in AI) saw OpenAI's demo where it was teaching math, adaptively teaching math to a student. And as somebody who struggled with math in high school because I didn't respond well to the teacher that I had, I wish I'd had an AI to say, hey, I'm not learning this well, can you teach me in a different way than my current teacher is? So there are very transformative things that are important for us to get right, but also that are going to open up a whole new world for our students that we all have to adjust to, so that our institutions remain relevant and inclusive of the students we're going to be getting over the next couple of decades.
Sophie White: Absolutely. I love that. I think that's a great hopeful note on the promise of AI. So from there, I want to turn it over to the audience. If you all have any questions for us. If you have a question, raise your hand and Kelli will bring you a microphone. If you could say your name, institution and your question, that'd be great.
Jenay Robert: We did such a good job that there's no rock left unturned. You see how I looped back to rocks there, all the way back?
David Seidl: All the way back. There were a lot of rocks in the Top 10 slides today, so that was probably part of it.
Sophie White: Rocks are the theme of the EDUCAUSE Top 10, the theme of the day. It's okay, what's the word? If you don't have questions, we can also keep yapping, as I'm seeing on social media.
David Seidl: I can ask myself a question that I might find interesting. We started by talking about cybersecurity programs and AI, and one of the things we were talking about yesterday was how cybersecurity programs can safely leverage AI. That's interesting because we're now bringing a risk agent into our organizations. The Horizon Report has one example from Miami, from one of my security analysts, who is not a programmer by trade; if needed, he can read some code and write a little bit. We needed to do an institution-wide web crawl for a specific plugin that was problematic, and all of our developers were, strangely enough, working on our ERP. So the question was, do you disrupt the tens-of-millions-of-dollars effort to go find a security issue, or do we do something else? And he said, I actually want to go use ChatGPT and see if I can build a tool, using it for coding, and get that to do all the identification.
So he actually built an end-to-end web crawler tool that allowed us to find that, tweaked it using ChatGPT through a feedback cycle, and built a tool that was extensible enough that we were able to report to our web team every location that plugin was used, because in some cases it was not part of the template; in some cases it was manually added. And so he gave them the list of every location, and we were able to purge that plugin across the entire institution because of that. He took a couple of days to work through it to get to that product, and that was a really good use of it. And there was a human in the loop: he was looking at it, and when he would demo it, he would keep an eye on the output to make sure it didn't go astray. But he was able to do something he couldn't do by himself. He saved us a developer, he built a tool, and he taught the rest of the security team how to do that. So it was a big win across the board.
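A stripped-down version of that kind of crawl might look like the sketch below. The starting URL, the plugin signature string, and the use of the requests and BeautifulSoup libraries are assumptions for illustration; the actual Miami tool was built iteratively with ChatGPT and was considerably more extensible than this.

```python
# Sketch: crawl an institution's web space and report every page that
# references a problematic plugin, so the web team gets a complete list.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.edu/"        # hypothetical starting point
PLUGIN_SIGNATURE = "problematic-plugin.js"     # hypothetical marker to search for


def crawl(start_url, signature, max_pages=500):
    domain = urlparse(start_url).netloc
    to_visit, seen, hits = [start_url], set(), []
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that time out or refuse the connection
        if signature in response.text:
            hits.append(url)  # record every location the plugin appears
        # queue same-domain links so the crawl stays inside the institution
        for link in BeautifulSoup(response.text, "html.parser").find_all("a", href=True):
            next_url = urljoin(url, link["href"])
            if urlparse(next_url).netloc == domain:
                to_visit.append(next_url)
    return hits


if __name__ == "__main__":
    for page in crawl(START_URL, PLUGIN_SIGNATURE):
        print(page)  # hand this list to the web team; a person still verifies it
```

As in the anecdote, the value is the exhaustive list of locations; someone still reviews the output before anything gets purged.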
Sophie White: Great.
David Seidl: And none of that was particularly risky for us. We can do data analysis, we can take large data sets and have them looked at in reasonable ways, and teams can query those data sets using AI. For those of you who operate in the security field: looking at your SIEM, looking at the amount of data that comes in through logs, it's too much for humans to comprehend, so you use your SIEM. What if you could query your SIEM and have it actually acting like a security analyst, because you told it, dear ChatGPT, you're a security analyst, you consume threat feeds, and when you see things that are wrong, you bring them up, and then you ask another AI and you both agree? You can start building these things into your program and increasing your effectiveness. We are now at the point where we can start doing those cool things. So I think it's powerful. I think we should be cautious, but we should also be excited.
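A toy version of that role-prompted analyst idea might look like the snippet below. The model name, the example log line, and the wording of the system prompt are illustrative assumptions, not a product feature; a real deployment would feed SIEM events and threat intelligence programmatically and keep an analyst reviewing every call.

```python
# Sketch: ask an LLM to play security analyst over a single log excerpt.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security analyst. You review log events against known threat "
    "patterns. When you see something that looks wrong, flag it and explain why."
)

# Hypothetical SIEM event pulled from a log pipeline.
log_excerpt = "2024-10-22 03:14:07 sshd: 412 failed logins for 'root' from 203.0.113.7"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Review this event and flag anything suspicious:\n{log_excerpt}"},
    ],
)
print(response.choices[0].message.content)  # a human analyst still makes the call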
Jenay Robert: I think that's a good ending point right there. Be cautious, but be excited. I love that. Thank you for that example. Thank you.
David Seidl: Thank you.
Sophie White: Yeah, I think that's good. Last call. Anyone have a question? Oh, we've got one. All right. Question. Thanks for being our brave volunteer.
Audience Member: So David, thank you very much for your insight on this, actually looking forward and looking backward at the same time. Here's my question: I'm of the opinion that existing policies often cover the need for AI. We've had these policies in place for years that govern the data and set it up accordingly. But if we follow that and apply it toward how AI is using and consuming and then regurgitating the data, is that applicable or not? Or am I completely off the mark with that?
David Seidl: I more than a hundred percent, I a thousand percent agree with you on that. Our existing policies, if they're well written, already cover most of the cases that AI is involved in. I think it's good to have an ethos or an ethics statement from the institution about how and where you use AI. So you might say that we will have a human in the loop, that we will be cautious about data, that we will be cautious about algorithmic bias, because there are things that are specific to AI that you might want to write about. But your acceptable use policy and your data policies already cover the things that are there for AI; people just don't think about them, because it's AI and it's clearly totally different. We've gone through this in multiple rounds over a couple of decades: oh no, it's the internet, it's totally different.
It's cell phones, it's totally different. All of those things. It's not that it's different; we just don't think about it that way, we don't categorize it that way. So what do you do? A lot of institutions are looking at building AI policies, and again, we may not need many, but you may need a communications campaign that says, hey, about AI: here's how the policy is relevant. All the folks in the room who have been involved in student discipline know that when you have to point to an acceptable use policy or a student discipline policy, you have to explain to the student why the thing they did was against the policy, and they say, oh no, no, that doesn't apply to me, I'm doing it this way. Nope, the policy still applies, and you explain that. I think we're going to do that a lot with our data policies.
Jenay Robert: Yeah, I'll add that our research team published an AI Policies and Guidelines Action Plan a few months ago, and we took a similar approach, again working with panelists as we do, to giving advice to the community about how to create those policies and guidelines. We said, start with the policies you have in place, and our panelists were adamant about this: a lot of policies are already going to cover it, but you do need a process to uncover those use cases. And that's really challenging in higher ed because of this little thing called higher ed silos that you might be familiar with. It's really hard to communicate across those silos and really understand; maybe you're sitting in a CISO position and you don't know what's happening in every unit across perhaps multiple campuses. So that communication piece is really important. And then, what are the policies? What new policies do we need? And then what new guidelines? As David was saying, there's going to be some communication around how policies are applicable, but I think it really starts with those use cases, and you don't know what's happening on the campus unless you talk to people.
David Seidl: I had an example in a procurement policy. Our procurement policy said that you cannot buy hardware or software without working with central IT, and people are like, oh, but it's a cloud service. And that is both software and, behind the scenes, hardware somewhere. But in people's minds it was different. And so I said, okay, we're going to do a policy update; we are just going to write "or cloud service or service" into that policy, so that when people read it, it's visible that that's included, because people just weren't reading it that way. So you may go through your existing policies and just say, "and AI" or something else, and write something broad enough to serve you.
Jenay Robert: That brings up a really good point about how challenging some of these things are, because the common understanding is so different. And I think with AI this is exacerbated, because of something you said earlier about how it's not just generative AI. AI is a large field, and there are many technologies that use it.
David Seidl: Everybody wants to bake it in now. So even if you don't think you have ai, your vendor's probably running some sort of AI tooling or wants to sell you AI tooling, and so it's going to be everywhere if it's not already.
Jenay Robert: So this is where another research finding comes in, from our AI landscape study for 2024 (and stay tuned, there's a 2025 version coming out in February). In our 2024 version, training for faculty, students, and staff were the top three elements of institutions' operationalization (that's a big word) of their AI strategy. We gave them a long list of things: what are all the things that you're including? And it was training, training, and training, and that's just so vital at this point.
Sophie White: I am curious. I think we only have a couple of minutes left, but in terms of AI policies and AI governance at your institution, what stakeholders have you been collaborating with to decide, are we adapting these existing policies to include AI, are we building something new? What does that collaboration look like?
David Seidl: We have been doing it, and we know you've got to work all the way across the institution. So not only were we using existing governance structures to be able to do things rapidly, but we built an AI task force and began our work there. That's paused; there are some unionization-related items that we've been working through. But that task force was across the entire institution. We believe that's important and critical to how we're going to do it.
Sophie White: Okay. Are you involving students at all in that work?
David Seidl: We are, more in the feedback loop, because we've been talking to a bunch of students about how they're using AI. As we take it through more formal governance, the goal was to get it out to our university senate, which is a full university senate, and then get the feedback through there. That is still waiting, but we'll involve all of that.
Jenay Robert: Great. Thank you. We'll stay tuned for an update.
David Seidl: Maybe next time we get together. Yeah.
Jenay Robert: Beautiful.
This episode features:
David Seidl
Vice President of IT and CIO
Miami University
Jenay Robert
Senior Researcher
EDUCAUSE
Sophie White
Content Marketing and Program Manager
EDUCAUSE