In this episode, we explore how cross-institutional collaboration can help institutions keep pace with advancements in artificial intelligence (AI), how the growing digital divide in AI adoption can be addressed, and how AI is reshaping teaching, learning, and the value of a degree.
Takeaways from this episode:
- Collaboration across higher education can help institutions stay up-to-date on advances in artificial intelligence (AI).
- The advent of AI is exacerbating the digital divide—both between institutions with disparate resources and within institutions. This episode explores how institutions can address these gaps.
- AI is challenging higher education to rethink its value and what it means to teach and evaluate students.
- Institutions can implement AI policies while still fostering innovation.
This episode discusses the 2025 EDUCAUSE AI Landscape Study: Into the Digital AI Divide.
Transcript
Sophie White: Hello everyone and welcome to EDUCAUSE Shop Talk. I am Sophie White, I'm a content marketing and program manager with EDUCAUSE, and I'll be one of your hosts for today's show.
Jenay Robert: And I'm Jenay Robert, I'm a senior researcher at EDUCAUSE, and I'm your other host.
Sophie White: Great. So today we'll be talking about AI and higher education. We'll dive a bit into the EDUCAUSE AI Landscape Study as well. And we're really psyched to have two special guests on the show with us today. Both are real innovators in the AI and higher education space, and they both also support our EDUCAUSE AI Community Group, so we're really excited to have them. Joe Sabado and Danny Liu are here with us today, so I'll introduce them and then we will get started with our discussion. Joe Sabado is a deputy chief information officer at UC Santa Barbara. He brings over twenty-seven years of IT experience and holds a BA in political science with a minor in Asian American studies from UCSB, and an MBA in information technology management from Capella University. He co-directs UCSB's AI community of practice and co-leads the EDUCAUSE AI Community Group.
A first-generation immigrant from the Philippines who arrived in the US at age eleven, he serves as a possibility model for other immigrants and underrepresented students, having risen from web developer to technology executive while maintaining a deep commitment to mentoring students and staff from marginalized communities. Joe's life mission centers on making positive global impacts through education, compassion, and inspiration. Wow, that's great, Joe. Thank you for all you do and for being a possibility model. I love that phrase, so we'll have to dive into that in a bit. Danny Liu is a professor of educational technologies at the University of Sydney. He is a molecular biologist by training, programmer by night, researcher and faculty developer by day, and educator at heart. A lot of things going on there, that's great. A multiple international and national teaching award winner, he co-chairs the university's AI and education working group and leads the Cogniti initiative that puts educators in the driver's seat of AI. So Danny, you're doing a lot too, and thank you also for joining us. I know with time zones things are hard sometimes, and you woke right up and joined this conversation, so we appreciate it. Great. So yeah, let's dive in. AI and higher education. There's a lot to talk about here. Jenay, I know you've been going all over the world to discuss AI in higher ed too. We're recording this in February 2025. So what is top of mind for all of you right now in terms of what those AI discussions look like in higher education?
Jenay Robert: I'm so glad that you said when we're recording it too, because this is a topic that changes. There are a handful of topics in the world right now where you have to know exactly what time and date they're being discussed to understand the relevance, and AI is one of them. Thanks for that framing.
Sophie White: Of course. February 3rd, 2025, if that helps, at 4:11 PM Eastern Time.
Jenay Robert: Oh yeah. What's top of mind for you, Joe and Danny?
Joe Sabado: Well first of all, Sophie, Jenay, thank you so much for the invite. There are a lot of practitioners you could have invited, so I really appreciate this opportunity to partake in this conversation. And Danny, I've been a big fan of Danny's from across the world, so I always follow what he says. I've always said he is one of the most credible people in AI and higher education, and I really mean that. So it really is a pleasure to be able to discuss AI in this forum. A couple of things come to mind. One is just the speed at which this technology is being introduced. I've been in this profession long enough that when the web came out, or the internet, it was like, oh yeah, give it about three years. Then all of a sudden it sped up, and nowadays I spend a lot of time online just watching what's going on.
But if you look at what's happened over the last couple of months, and of course all the way back to 2022, when I think ChatGPT was introduced, the velocity at which the technology is being introduced is amazing. And I think the gap between higher education and the technologies being introduced is getting wider and wider. So that's the first piece. The second piece is the equity issue, the digital divide amongst institutions, amongst departments, and even individuals. I worry about that because implementing AI, generative AI, on campuses requires a lot of investment and competency to do well, and I worry about the many institutions that don't have the capacity or competency to provide this very transformative technology at the institutional level. There's a lot going on. I mean, Danny of course operates at the institutional level, and I do too somewhat, but I also experiment a lot individually. It's this leap for institutions to provide access to the technology and the competency. Those are the things on my mind.
Danny Liu: Thanks for having me on board as well. It's really great to be here, and I love the work that Joe is doing and also the work that EDUCAUSE does in this space to drive the conversation internationally. Joe mentioned the idea of velocity. I want to mention another, similar word: value. What I'm seeing is a lot of rethinking about the value of higher education, and I split value into two main parts: integrity and relevance. I think this whole AI thing has made a lot of higher education institutions rethink the integrity value, but also the relevance value, of their award programs. Another thing, also a bit high level, is how a lot of the mindsets that we've had in higher education now need to shift because of this kind of seismic earthquake that is coming through. And so it's really interesting seeing how people adapt and react to this, and how different institutions and different individuals actually adapt.
Jenay Robert: Great, I'm glad you hit on that piece about really questioning what we are even doing here. I think that's what makes this such a different technology. We've gotten excited about technologies before, we've had revolutionary technologies before, but I think from the beginning we all felt that there was something quite different about AI and generative AI. And not just because the technology itself is different, but because it made us feel very different. I know for me, and for a lot of us, it pulls on that string of, okay, what does it actually mean to be a thinking, reasoning, metacognitive human? What does it mean to teach? Why am I teaching? What am I trying to teach? What does it all mean, basically? And I think that's why we're seeing something we mention in the AI Landscape Study this year: we're now starting to see a bit of a division between faculty and staff who are very excited about the technology and some who are very much opposed to it, and there aren't a lot of technologies that create this kind of division at our institutions. So that's one of the things as we watch this develop over time: there was initial panic and excitement, and now we're trying to find what our normal is with this. That's one of the really interesting developments that I have on my mind now.
Sophie White: Yeah, I think that's a great place to dive into the AI Landscape Study, but Danny or Joe, if you have examples from your institutions, let us know. I thought it was really interesting as I was reviewing it that there was a quote about even an alarming lack of collegiality between faculty and staff who are on different sides of this divide. So there are multiple kinds of divides going on right now, which is a really interesting cultural phenomenon. And then Joe, I was glad you mentioned the digital divide too, because that was a theme of this AI Landscape Study. I thought it was interesting that in terms of personal use for efficiency and things like that, folks at less-resourced and more well-resourced institutions seem to be using AI about the same, but in terms of major programmatic initiatives, there were major differences in the resources that could be put toward AI. So I could see that digital AI divide only increasing as time goes on.
Joe Sabado: Yeah, I was really surprised by the comment about the division between faculty and staff, and that's really something that I am proud of UCSB for. We have different perspectives among faculty, staff, and students, but there's a sense that we're learning together. And it goes back to the idea of how we connect people through AI. As a matter of fact, one of the wise student leaders we had for a symposium last year, I use her quote all the time now, her comment was that we need to have more conversations between staff, faculty, and students, because if we don't have that, the conflict will continue to divide us. What we found with our community of practice, which is about 275 people with four special interest groups, application development, workplace productivity, teaching and learning, and research, is that faculty and staff all want to work together. So again, it goes back to the idea of how we provide access to the tools at the institutional level. I was really surprised by that finding, but we talk a lot about technology, and at the end of the day it is about people and where they fit in, and relevance, as Danny said.
Danny Liu: Yeah, the collegiality angle is interesting. I think it's interesting to see it from the perspective of collegiality between individuals in the institution, but it's also interesting to see it in terms of collaboration between institutions too. Some institutions are very good at collaborating and sharing, and some are quite a bit less so; they're thinking, no, I need to make my own stamp here, so I'm going to reinvent the wheel. We don't have time to reinvent the wheel. This issue is so complex, so fast moving, like Joe said, that we can't do this alone. We need to work together much more than we already are. So I think that's a very important part of it too.
Joe Sabado: Danny, can I ask you to expand on the latest white paper you did? Again, I'm a fan of yours and I think it makes sense. You posed some questions about the purpose of higher education, and you also developed a framework called CRAFT. I'd love to hear more about that. I think it's pretty cool.
Danny Liu: Alright, thanks Joe. So we were working with the Association of Pacific Rim Universities, which is about 60-odd universities around the Pacific Rim. Over 2024, we were working with a bunch of them to talk about how they, institutionally, were able to adopt AI or not, and all the barriers and benefits that they saw from adopting it. From every conversation we had, we tried to put things together into a model or framework that would make sense for other institutions, and we came up with this CRAFT acronym. C-R-A-F-T stands for Culture, Rules, Access, Familiarity, and Trust. We found that all five of those elements are quite necessary to allow an institution, at all levels, to actually adopt AI well. And I actually saw a lot of CRAFT in the landscape report, which is fantastic. So we might explore various parts of that today.
Jenay Robert: Yeah, if you want to call out now a couple of examples of things that you saw in that overlap, I would love to hear where those connections are. I’m very interested in that.
Danny Liu: Yeah, fantastic. I've got notes I was reading today. Perfect. So there was a lot in the landscape study about making sure that there are policies and guidelines and those kinds of things in place, and examples of those. That goes toward the R of CRAFT, the Rules. We've found that the rules are actually a really important part of getting people thinking along the right lines; if you don't have the right rules in place, then you get unguided use of AI, and that's not the place we want to be. The next thing, as Joe was saying before, is a very big thing on equity. There has to be equitable access, otherwise there is a massive digital divide that is just going to grow stronger. And so the A part of CRAFT is about Access, making sure that everyone, staff, faculty, students, has the same access to frontier tools that they can actually use.
There was a lot in the landscape study as well about increasing staff and faculty training, which was really great to see, and also that kind of collaboration and sharing between educators. That was fantastic because it corresponds with Familiarity in CRAFT, which is about building the skill, the awareness, and the capacity around using AI well. The C and the T of CRAFT, the Culture and the Trust, came through a little bit too. I saw in the landscape study some thinking around inter-institutional sharing. Do we have a culture of being able to share? Do we have a culture of collegiality around these really fast-moving technology spaces? Do we have the ability to build trust between the different stakeholders here? Do teachers trust students anymore? Do we trust AI vendors? Do universities trust their faculty and staff to make the right decisions, and those kinds of things. So yeah, a lot of really great crossover between them.
Sophie White: Beautiful. Yeah, the collaboration. I love hearing from different institutions, different countries; we're representing multiple parts of the world here, and the ways that we can partner together to address these AI issues. So thanks so much for sharing that and adding to the research. I'm curious, Jenay, I know you've been traveling around quite a bit talking about AI in different places too. Would you agree with these themes as key ones that are resonating for you, or is there anything else you want to add?
Jenay Robert: Oh, for sure, especially thinking about the cultural piece. I think that's really important, not just in the micro sense of what the culture of the institution is and how both leaders and individual contributors are working together to create that culture. There are also the larger cultural questions: where are you in the world, what are the social and political issues where you are, and what's that backdrop? The people who get into the deep nerdy conversations with me know that I will ask, okay, what are the social and political ramifications of this? What are the drivers? And that ties into our Horizon work; we talk about that too. But yeah, the cultural piece is so incredibly important. There's that phrase about culture eating, what is it? Culture eats strategy for breakfast. There you go. It's so true, and we're definitely seeing that with AI. I'm curious if you all feel the same way, but I'm seeing that institutions that already had that culture of exploration and excitement around new things, and collaboration across the institution, and good data governance, all of those cultural pieces, were positioned really well to take on this new era.
Joe Sabado: Absolutely. If you look at the front-runners, the leaders in higher education, Sydney of course is one of them, but UC San Diego, UC Irvine, ASU, Notre Dame, Harvard, those are institutions that have committed, from my informal studies and knowing some of the folks at these institutions; they've had the foundation to make sure that AI is adopted very well. I think that also comes from the top. I know these institutions have leadership that's open to innovation, and they invest in a lot of these AI initiatives. Culture does matter, right? And in terms of the rate of adoption, or whether AI is accepted, higher education has this concept of shared governance, so you're always going to have those different perspectives, but leadership really matters quite a bit, and then a culture of innovation and just letting people go. Risk aversion matters as well, right? How risk-averse is the institution? You see that playing out in different places. For example, ASU adopted OpenAI in, I think, February of last year, and now they've had 450 or more projects; they just went all in. I'm pretty sure that was not risk-free, I know that to be true, but it got them to where they are now.
Danny Liu: So Joe, I'm curious, actually. You have a very broad and deep remit as deputy CIO, and I'm wondering if you could tell us a bit more about that cultural piece, the leadership piece, the innovation piece. What else are you seeing around your own institution and other institutions about how to actually get this done? What needs to happen, who needs to make it happen, and those kinds of things?
Joe Sabado: Yeah, this is where I look at the frameworks, the readiness framework from EDUCAUSE, in terms of the elements that enable institutions to adopt this: people, literacy, technology, governance. The culture piece, again, matters quite a bit. Those are the things that come to my mind. And strategy, of course; AI should be seen as a strategic investment as opposed to just yet another technology to adopt. I think that matters quite a bit. So it does come down to mindset, along with having those different components. But Jenay, Sophie, I feel like I've forgotten one or two elements of the readiness assessment. What are the other ones?
Sophie White: Jenay, do you know?
Jenay Robert: Oh, so you're actually putting me on the spot. No, no, no. Listen, never put a researcher on the spot. No, I'm kidding. We are rewriting the assessment right now. In the new version, which I think will not be out by the time this episode is released, but will be soon, I want to say April-ish, you'll see a lot of focus not just on the data, systems, and things that we have to have in place technology-wise, but also on the culture, like what we're talking about now: the cultural pieces, how leadership is valuing the technologies, how leadership is showing up and engaging in the process. We're looking at workforce impacts really carefully as well, such as how AI responsibilities are codified into job roles, for example. That's something that we look at in the landscape study pretty well. I think we're doing a pretty good job of keeping an eye over time on impacts and how things are changing.
We're seeing an increase in workforce impacts, so that's something that we really want to keep an eye on as well. I know it's not out yet, but I do love this project and want to talk about it. The biggest change coming with the assessment is that we've built in more flexibility for institutions taking it. Instead of saying that there's only one way for an institution to be ready for AI, and then you take this assessment and it tells you how ready you are, we're acknowledging that institutions are all approaching this from different angles and with different goals. So in the way we built the response options and the way that you can gauge yourself, there is that acknowledgement: yes, I've reached this goal to the extent that I want to. I can't remember the last time I took an assessment where one of the answers was, I've done as much of that as I want to. But we built that in to give that flexibility. The other thing is that we're giving a much more robust framework for people to interact with each other after they take the assessment, going from assessment to action planning. I know that was a very long answer to your very simple question of what's in the assessment, but as soon as you mentioned it, I thought, I'm excited about this and I want to talk about it.
Joe Sabado: Thank you. And Jenay, thank you for inviting me to be part of that conversation. I do like the second version of this because it allows a lot of co-creation and, again, more variability in the approach. But one thing: Jenay and I were part of a panel sometime in December, or it might've been November, and one thing you said is that when you were asked how you know if an institution even has an AI program, you said to look for their principles. Are they even on their institutional website?
Jenay Robert: Yeah, this comes from my training as a qualitative researcher. I always say, don't tell anybody, but I'm only pretending to be a survey researcher; at heart, I'm a qualitative researcher. So the question was, I think, how do we know institutions are upholding trustworthy AI, or something along those lines. How can we measure how institutions are evaluating AI? And my answer is: look at the artifacts. That's something that comes from my training, to always look at what the products are, especially what's public facing. But if you can get behind the scenes and talk to some people, that'll tell you some answers too.
Sophie White: That's really interesting. I need to find the statistic, I'm just scrolling through the landscape study right now, but the share of institutions that have a holistic plan for how they will address AI was lower than I expected, even several years after ChatGPT came out. Jenay, I don't know if you want to chime in on that, but your comment in the landscape study was that this could improve. I feel like looking at artifacts and talking to folks about the culture is one way to see that. But what do you think institutions should be doing in order to build a more holistic plan for AI?
Jenay Robert: First, I'm so glad you brought this up. I feel like this is a much more nuanced issue than we had space for in the report, so this is great, exactly why I love these conversations. One: we say there's room for improvement in this area, and yes, that's true. I think institutions need to work on making sure they have some agreement between different colleges and different functional areas on how they're approaching AI. Some agreement is the key there, because we still want to leave space for flexibility. I was asked recently whether institutions should have standard policies and guidelines for all students on their campuses in all classes, and I said no, I don't think so. I think we need to leave space for differences in what it is that you're learning. There are content-area differences, and we value faculty autonomy in higher education; that hasn't gone away.
We still value that. So I think it has to be a balance. There are certain things we all agree on: we need AI tools that are safe, that protect our data, that are not amplifying societal biases, and so forth. We're trying to put those safeguards in place across the institution, but at the same time, we recognize that different disciplines, different course levels, and different individual instructors will want to use these tools differently. And operational units too; it's not just student facing, it's behind the scenes as well. Different operational units have different needs, and we want to allow for that flexibility.
Joe Sabado: Interesting. One of the areas I've been studying is institutional policies. There's one research paper that examined the Big Ten, I think it was, and how those institutions constructed their policies and guidelines. And then I also went through, I think they're called AASCU now, I think they changed the name, more than a hundred colleges and universities, and it's rare for me to find a policy that's at the highest level. Typically it's from the provost or EVC level, in partnership with IT. The pattern I found is that there are policies when it comes to security, and that has to be in place, data policy, security policy, but then there's a mix of guidelines. And to your point, can you impose a policy across the board in terms of academics? I don't know if that's ever going to happen, for the reasons that you said. The pattern I've seen is that there's policy from IT and there are guidelines when it comes to teaching and learning.
Danny Liu: Oh, sorry. Were you going to jump in?
Sophie White: Oh no, you can go. I'm still processing it. So dive in.
Danny Liu: I was going to say, one of the other interesting takeaways that I got from the landscape study was the impacts that people thought were on us. Joe's mentioned policy and the importance of that, which is really, really important. I picked up on other things around the on-the-ground stuff that we educators deal with day to day, including academic integrity, coursework, curriculum design, and the impact of AI on all of those things. And I'm wondering about the faculty autonomy angle. Something that we're seeing, at least in this part of the world, is that yes, we do want to give faculty autonomy and agency in this, and also students autonomy and agency. But the other side of that coin is confusion: if there's too much flexibility, then students get confused. In every course they go to, every module they do, there's a different rule, and they don't know what to do, especially in terms of academic integrity. One of the things that I really fear is seeing a headline in five years' time that reads something like: bridge collapses, killing hundreds; engineer used generative AI in their university assignments and got away with it, and was never found out.
I'm not trying to say we shouldn't use AI. I think we should use AI as part of assessments, but we also need to have that integrity there: there need to be some assessments that can actually check that that engineer knows how to build a bridge. So I wonder where that fine line is between faculty autonomy and, one, reducing confusion, but also, two, allowing higher education to know that its graduates have the knowledge, skills, and dispositions to actually be able to enter the workforce.
Joe Sabado: That is one of the things that I'm trying to pick up on too: workplace readiness. I do feel like there's a gap there. Even before AI existed, there was always a gap between workforce expectations and academics saying, oh yeah, we're preparing our students for the workforce. But even more so now. I do follow groups that say no AI at all in my classroom, and I worry about that. But it also brings up something really important. Jenay, I think you had reposted an article about a student who was accused of academic misconduct. And that's a piece where, even early on when ChatGPT came out, I said: if, for example, the conduct office doesn't know how AI works because they're not using it, how are they supposed to properly adjudicate those cases? I worry that students are being brought into the hearing process because they're accused of something they didn't do, because the faculty or the institution don't understand how AI works. It's not a judgment call; it's a question that I have. We can argue about, I don't want this in my space, but at the end of the day, we work for institutions that serve students and research. So that's the piece where I scratch my head: where is that point, as Danny says, between autonomy and actually having some standards?
Sophie White: It's complicated. Workplace readiness might mean not using AI to do your engineering exams to become an engineer, but it also might mean, in the future, knowing how to use AI in your workplace effectively and within boundaries. So where do you find the balance between the two?
Jenay Robert: Yeah, and I think it's an exciting element of the AI conversation. For me, my PhD is in curriculum and instruction, so for a long time I've been part of these conversations around assessment. How are we assessing students? What are the skills that we want students to come out of college with, and how do we make sure that we're actually assessing those things? I usually joke that you know you're ready to defend your dissertation when somebody asks you, but what even is learning? and you think, okay, I need to get out of here. But point taken; it's true. What does it actually mean to learn the things you need to learn, whether it's for the workforce or for being a well-rounded human? It's not like we have a guidebook for a lot of these things. Even the best learning scientists have a lot of unknowns around them.
So yeah, I think it's very complicated. There's the layer around learning what you need to learn in your discipline, but also learning how to use new tools. There's also something I don't hear about quite as much as I wish I did: learning progressions in higher education. My undergrad degree is in chemistry, so I came in and started off learning, whatever, differential equations, calculus, trying to understand some of these basic math principles. Did I ever do those things when I was a practicing chemist? No, I never had to derive an equation. But I had to learn the logic around that. I had to understand how the world works at that tiny, smaller-than-you-can-see level so that I could understand chemistry, so that maybe when I had a tool like ChatGPT, I could say, oh, that doesn't ring true, that's not right; I need to ask it a different question or a better question. But the challenge for educators is that for those very basic initial skills that set up how the world works, you can use ChatGPT to just get the answer. So that's where I think the biggest challenge is: not so much in the upper-level courses or graduate courses, but in those introductory courses where you're learning those basic things, those pieces of knowledge. Curious what you all think about that.
Danny Liu: So Jenay, as a curriculum and instruction specialist, that's really great to hear. I want to ask you, or everyone here, about that idea: what is learning anyway, and how do we encourage as well as evaluate learning? There are terms thrown around: assessment of learning, assessment for learning, assessment as learning. Obviously we need all of those in the AI space in higher education. How do you assure that students have learned while also using assessment for learning, with AI an integral part of that?
Jenay Robert: Joe, just answer this question and fix it for all of us. You got the answer. I know you do. Come on.
Joe Sabado: I did think about this. I'm a first-generation student, and I still remember my experience as an undergrad and how difficult it was for me to learn, because, again, maybe it's just me, maybe I'm unique in this sense, but this idea of how you assess learning has always been an issue. Back then it was just, get the grade, get the grade, get the grade, and my mind doesn't work that way, to the point where I need to understand what something actually is first. So the idea of seat time, the credit hours, and the grades, I could never get into that system, and I struggled with it. But when I left university and went to the workforce and did my MBA, it was competency-based learning that really got me excited about learning. So to me there's this divide between institutional-level learning, where you get an A or a C and you pass on, and whether I can actually apply it. I think there's a divide there: competency-based versus grade-based, and then the butts-in-seats of the Carnegie credit system.
I'm not a professor, though I've taught classes, but it really is something: what do you even assess? What are the learning outcomes? I defer to you educators to answer this: what are the learning outcomes in the first place for higher education, and what is the purpose of higher education? I know I didn't solve the problem.
Sophie White: These are the questions we've been wrangling with a lot with AI, which is fun for me. I think AI is really bringing up these questions of what it means to be human, what it means to learn, how we interact with each other, which is a really fun and scary and interesting conversation for all of us to be able to have.
Joe Sabado: Well, can we ask Danny what he thinks?
Jenay Robert: Yeah, let's turn his question back on him. Danny, what's your answer to this question? What does it mean to learn?
Danny Liu: I think Joe's absolutely right that we need to reevaluate and reassess our learning outcomes and think about what we value in higher education and what students need to learn now, given that there are intelligences in the world that humans have created which are more intelligent than us and are only going to improve. So I think that is really a key thing for us. In Australia, well, Australia is known as a nanny state in many parts of the world, so we're a bit more controlled than other places. For us, the academic integrity piece is balanced around the idea that we'll need to have assessments that lock AI out, and we'll need those assessments in order to assure that students have learned and can do the things that they need to do. And we can only do that kind of AI lockout in secure, supervised, face-to-face scenarios. We can't do it in any other scenario; no other scenario is actually secure and can assure learning. And then for everything else, basically, it's engage with AI: let's scaffold and support students in how they work with AI, and those more open assessments can integrate AI and actually accept it. So I think we need both: the assurance element, but also the integration element.
Joe Sabado: That's the two-lane assessment approach that I read from you, right?
Danny Liu: Yeah.
Jenay Robert: Tell us about that.
Danny Liu: So we're fans of analogies here, and we're saying, basically, that into the future, the only path forward that we see is steering assessments into one of two lanes. Lane one is that assurance kind of assessment: face-to-face, secure, supervised, where you can lock AI out if you want to, or include AI if you want to, but obviously control it in that environment. Lane two assessments are basically everything else, situations where we cannot ban, block, detect, or restrict AI use. And so we shouldn't try, because if you have an assessment where you say you can only use AI for brainstorming, or you can only use AI for editing, you have no way of stopping students from doing anything else with AI, and no way of detecting it. It makes your assessment invalid to put in restrictions you cannot police or enforce. So for our lane two assessments, the open assessments, we're basically saying, well, we need to engage with students on this. We've got to support and scaffold the use of AI, and this is assessment as learning and also assessment for learning, versus lane one, which is assessment of learning.
Joe Sabado: I've got a follow-up question, Danny, about Deep Research, the function that came out from OpenAI a couple of days ago. Higher education is still trying to figure out whether a student wrote the paper or not, and this thing can actually go through the research and do all of it. One of the solutions being put out there, I think Grammarly does this, is to track the activity, for example on a Google Doc. But how do you consider that when you have agents now that can go do the research from one prompt and write it for you? How would you approach that?
Danny Liu: Yeah, I think the key question is, how do you know the student has written the paper? And you can't, unless they're sitting in front of you. That's just the way of the world now, with these AI tools getting better and better and more and more undetectable. People needed to start getting rid of the idea of an AI-proof assessment or an AI-resistant assessment two years ago already. There is no such thing, absolutely no such thing. The only AI-proof or AI-resistant assessment is one where you have the student in front of you, speaking to you, writing in front of you, typing in front of you, or something like that. That's the only AI-proofness you can get, until we all get brain implants and then we're on a whole other planet. But for now, at least, the student in front of you is the only assurance you can have.
Jenay Robert: Yeah, an analogy comes to me that a friend of mine talks about; she calls it option three. She says, you have two options in front of you, and you don't like either of them, so you keep hoping for option three. But it doesn't exist, and you're frozen in analysis and discussion and debate because you really want a third option. So you'll argue against option one, you'll argue against option two, but those are your only options. It's a choice between something that hurts and something that hurts worse, and you've got to decide which one you want.
Danny Liu: And that's why we're using that really simple two-lane analogy. There is no third lane right now. Many of our assessments are driving between the two lanes of traffic, and that's a super dangerous place to drive, because we cannot assure that learning has happened, so we're going to get those engineers who build bad bridges, and we also have no way to actually help students with AI.
Jenay Robert: And I think there's a real risk, I was talking about this with some colleagues last week, of starting to put a lot of blame and punishment on the end user for a problem that's actually systemic. This is a common thing that happens across all industries, across every part of the world: our system is broken, we haven't quite figured out how we want to get the outcomes we want, and you're not making it easy, so we're going to punish you for not complying, or for not self-policing, if you will. That's a real concern in the AI space, especially as, Joe, you mentioned some news stories that have been coming out lately. More and more we're hearing about students getting into trouble. Whether they're using these tools appropriately or inappropriately, I can't comment on that; to Danny's point, we don't know. But we do know that students are starting to get a lot of pressure from that, and that's a concern.
Danny Liu: So that systemic issue is really interesting. Joe, I'm wondering, from your perspective, what are you seeing as the systemic issues of higher education that are leading us to put band-aids on things that actually need true fixing?
Joe Sabado: Wow, that's deep. It reminds me of this question around the idea of resilience, of imposing that responsibility on our staff and end users, when we have to ask ourselves: what's the role of the system, the institution, in that? We often ask our students to adapt to our practices, which may be obsolete or may not be working anymore. But the question is, how do we get the institution to adapt to the reality of today? We have educators who maybe did their studies ten or twenty years ago and are still holding onto those practices. So to me, it goes back, beyond AI, to how you meet people where they are. And when I say where people are, I don't mean geographically; I mean the mindset. We have different generations, and the way we learned and conducted ourselves even five years ago, three years ago, is different now.
So to me, the challenge and opportunity is, again, how administrators, educators, staff, and faculty can go out there and meet students. It comes down to understanding who we serve and how we serve them, and vice versa, co-creating that value for higher education rather than just treating students as regurgitators of knowledge the way we think of them. To me, that's the exciting thing about higher education: critical thinking goes back to that. How do we co-create value for the future? Students are also co-producers. So that's my answer: the system, we call it resilience, or you could call it adherence to tradition, may in some ways be hindering us from adapting to this new reality.
Jenay Robert: Yeah, we're getting close to the end, but I do want to underscore how important that is. I remember as a grad student starting to learn about resilience and grit and all of these things that we used to study in higher ed, thinking, oh, wouldn't it be great if we could teach students how to be resilient or how to have grit? And then the evolution was, oh wait, no, we need to teach our institutions how to be resilient and how to support people where they are. So I want to reiterate how important that point is. But since we are getting close to the end, Sophie and I were thinking maybe it would be nice for the two of you to talk a little bit about the community group that you lead. I know you have a co-leader you wanted to give a shout-out to, and I think it's such a nice resource for people listening to come join the community group and keep having these conversations.
Sophie White: And in general, just any tips you have. As we've talked about, this issue is changing rapidly, and it requires systemic conversations across higher ed. So if you have tips for how you engage with the larger higher ed community to figure out these AI issues, we'd love to hear them. I think that really underscores resilience: the ways that we can come together as a community to solve these issues together.
Danny Liu: Yeah, I think the community angle is really important. We can only do this by collaborating with each other. Heather Brown from Tidewater Community College, who was one of the founding co-leaders of that EDUCAUSE community group, has an amazing ability to bring people together. That kind of passion for sharing, for collaboration, for collegiality is something I think we all really need in this. And Joe, likewise. It's been a really interesting group to be part of, because you see a lot of people with really real, raw questions coming through, and people just helping each other out. Joe, what's your perspective and experience?
Joe Sabado: That's been my experience too, and thank you, Danny and Heather, for inviting me to be a co-leader of this. I think it really does model what a community should be like. You'll see differences of opinion, but there's a sense of sharing. Danny, Heather, and I share resources and our perspectives in our different ways, but there's also respect there. It's so much fun to see the community come together. Here's something we're working on, based on the survey that Heather prepared and, I think, shared at the EDUCAUSE conference: we're looking at maybe doing a monthly institutional spotlight. Like I mentioned, we already have several institutions, including the University of Sydney, and we're going to highlight practitioners, scholars, and institutions that have done it well. So that's coming our way before this year is over.
Sophie White: That's exciting. Thank you for spearheading that too. I guess I'll add a metaphor here: I am always so inspired by the power that community has to overcome challenges. If anyone's watching, I have a sling on; I dislocated my shoulder this weekend, so it's been kind of rough to do things with one hand. But I've already found the power of community in friends helping me carry groceries up my stairs and coworkers commenting on Google Docs because I can't do that with one hand. We're just all so much stronger when we work together, and that's something that inspires me about higher ed: there's competition in the best of ways, but there's also so much collaboration that really elevates all of us worldwide. So thank you for all that you do in the community group and for all of the fantastic examples you shared today.
This episode features:
Joe Sabado
Deputy Chief Information Officer
UC Santa Barbara
Danny Liu
Professor of Educational Technologies
The University of Sydney
Jenay Robert
Senior Researcher
EDUCAUSE
Sophie White
Content Marketing and Program Manager
EDUCAUSE