Artificial Intelligence: Lessons Learned from a Graduate-Level Final Exam

The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.

Higher education is accustomed to incremental adjustments over long periods. The introduction of consumer-grade artificial intelligence (AI) caught most educators and administrators off guard, leaving them unprepared for its impact on the classroom and the speed of student adoption.

In the fall of 2024, the Health Sciences educational leadership team at the Dartmouth Geisel School of Medicine anticipated the need to address student use of AI in the Master of Public Health (MPH) program. As members of that team, we worked with faculty and staff before classes started to create rules that were incorporated into the student handbook and course syllabi. Despite our intentions to anticipate and address how this new technology would influence equitable learning, an unusual situation led us to engage deeply with students and revise our rules and approach. This article describes how we dealt with a potential academic-integrity issue and what we learned from the experience.

An Unusual Situation

A professor notified us that thirteen of forty students in an MPH course had submitted final-exam answers with similar paragraph structures, the same new content that had not been taught in the course, and the same novel vocabulary (also not part of the course). Additionally, students referenced an updated cognitive model, first introduced in 2021, that went beyond the presented curriculum. When the teaching team ran the exam questions through a commercial AI search engine, the output displayed the same paragraph structure, new content, and vocabulary found in the thirteen students' answers, raising suspicions that commercially available AI software had been used to generate them.
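
The teaching team's comparison was done by hand and with a commercial AI search engine. For illustration only, the following hypothetical Python sketch (not the tooling used at Dartmouth; it assumes each answer is saved as a plain-text file in an answers/ directory and that scikit-learn is installed) shows one way a teaching team might triage a stack of submissions by flagging unusually similar pairs for human review:

    # Hypothetical sketch: flag pairs of exam answers with unusually similar
    # wording. Assumes one plain-text file per student in ./answers.
    from itertools import combinations
    from pathlib import Path

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    answers = {p.stem: p.read_text() for p in Path("answers").glob("*.txt")}
    names = list(answers)

    # TF-IDF weighting downplays common words, so shared *novel* vocabulary
    # (terms rare across the class) pulls a pair of answers closer together.
    matrix = TfidfVectorizer(stop_words="english").fit_transform(answers.values())
    scores = cosine_similarity(matrix)

    # The threshold is arbitrary; a flag is a prompt for a conversation with
    # the student, never evidence of misconduct on its own.
    THRESHOLD = 0.8
    for i, j in combinations(range(len(names)), 2):
        if scores[i, j] >= THRESHOLD:
            print(f"{names[i]} <-> {names[j]}: similarity {scores[i, j]:.2f}")

As the rest of this article shows, such a flag should mark the beginning of an open-minded inquiry, not the end of one.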

It is important to note that the final exam was an open-book, take-home test that students had three days to complete. Clear instructions prohibited collaboration with other students on the content once the testing window opened.

A Unique Response

In response to the observed similarities across the thirteen exams, the professor contacted the MPH educational leadership team, and we agreed not to respond to the students right away (we also held off on releasing grades). This delay allowed us to challenge the assumption of an honor-code violation and to investigate the issue with open minds while learning about the students' experiences. Working as a team, we established a set of principles to guide our analysis of the situation (see table 1).

Table 1. Guiding Principles and Action Items

Guiding Principles

  • Assume the best intentions.
  • Establish bidirectional trust between students and faculty.
  • Maintain the integrity of the degree program.
  • Be fair to all students in the course.
  • Keep faculty engaged.
  • Optimize bidirectional learning with students, faculty, and staff.

Action Items

  • Review the policies in the student handbook.
  • Review the course syllabus and assignment instructions.
  • Hypothesize and generate scenarios that could explain the thirteen similar responses:
    • They used AI to prepare prior to the exam window.
    • They studied in groups with AI-generated preparatory materials.
    • They created and circulated AI-generated study guides among themselves.
  • Create an interview schedule with students based on these principles:
    • Students must feel psychologically safe to respond candidly during an interview.
    • Students must be assured of their anonymity to avoid any reputational harm.
    • Students should feel trusted and know that we hold them in the highest esteem.

We also created an interview checklist to ensure that each student had a similar interview experience (see table 2). In addition to the checklist, we sought to reinforce the following points:

  • The exam was open-book and permitted the use of study materials.
  • Studying with AI before the exam was allowed; however, using AI during the exam was not.

An important goal was to create a spirit of partnership with the students during the interview by asking for their help understanding the situation. For this reason, we emphasized the confidential nature of the process and our desire to learn from them. Depending on the nature of the exchange, we asked impromptu questions to get their advice.

Table 2. Interview Checklist
  • Address the issue.
  • Defuse unnecessary anxiety.
  • Reiterate that the exams showed similarities in both content and structure.
  • Tell the students that exploring this issue is a learning opportunity for all of us.
  • Remind the students that we know and trust them.
  • Inform the students that we recognize plausible explanations for the results.
  • Tell the students that we are not accusing them of plagiarism.
  • Inform the students that we are treating this as a teachable moment.
  • Let each student know that no one knows about our meeting beyond the director of student affairs, the course director, and the teaching assistant.
  • Reassure the students that no one in the administration harbors any suspicion regarding their integrity.
  • Close with a teaching moment:
    • There are a lot of ways this could have occurred.
    • Do not let something as small as a numerical assessment challenge your sense of pride.
  • Invite the students to respond.

A Positive Resolution

The director of student affairs conducted all thirteen individual interviews over five days via Zoom. Overall, students appreciated the transparency around the issue and the level of detail discussed during the interview. Students also appreciated how the process was framed and the assurance that nobody was being accused or threatened. After the interviews, students said they felt a heightened sense of trust in the degree program. Several themes emerged from the interviews (see table 3).

Table 3. Interview Themes (Student quotations are used with permission.)

Reactions that Indicated Surprise

  • "I was stunned and surprised to get an email from you. When I read the email from [the course director], I didn't even read the whole thing because I knew it wasn't going to involve me."
  • "I didn't think twice about AI and cheating because it wasn't relevant to me. And then I got this email from you. I am sure this is a mistake."
  • "Thank you for keeping this so private. Even being suspected is hurtful."
  • "I really appreciate the approach you took and remembered who we are. We're not cheaters."
  • "I don't know who else got called in, but all of my friends are so thankful that you kept our names from administration and leadership."

Explanations that Were Revelatory

  • "The TA gave us a study guide with things we should definitely know. I used AI for all of the questions to prep before the exam."
  • "Everything I did was all in bounds. I used AI like I would use Google to look stuff up when I was studying."
  • "You can check the security camera and the history on my computer right now. I never left the library and never logged into ChatGPT during the exam."
  • "Isn't this the appropriate way to use it for studying? Wait, are we allowed to use it to study?"
  • "To be honest, I program AI, and I could easily use it in a way that could never be detected and make everything be in my voice and link to my previous homework answers. I didn't cheat. I used it to prepare."

Employers Expect Us to Know How to Use AI

  • "Every consulting firm uses it. And at job interviews, they ask me if I am skilled in using AI. I need to know how to use it."
  • "Imagine . . . applying for a hospital job, and [you] don't know how to use Google or PubMed. I was told to get comfortable using AI in an [job] interview. So, I figured I'd start practicing in school."
  • "My friend said if I wanted to work at [health policy consulting group], I should get in the habit of checking my work against AI to make sure I didn't miss anything."
  • "I am practicing for the workplace in school. I thought it was OK to study with AI. Is it?"

Discussion

Throughout the interview process, we operated with a collaborative spirit. The interviews were so helpful that we continued working with students the following spring term to establish AI-use rules for a required MPH course (see table 4).

Co-Creating AI Use Rules

At the beginning of the spring term, we worked with students to determine how best to navigate AI use so that it optimizes learning and evaluation integrity. Together, we created a table with three columns to identify (1) what is always OK because it is ubiquitous (e.g., grammar checks built into Microsoft Word) or a simple productivity enhancer; (2) what is never OK unless it is specifically assigned (because it would otherwise violate academic integrity); and (3) what needs permission because it depends on the circumstances.

Table 4. Coproduced Rules for Using AI on Assignments
Always OK
  • Checking grammar
  • Using as a thesaurus
  • Using as a dictionary
  • Using for learning enhancement (having AI explain course content)

Needs Permission*
  • Using for background research (must cite)
  • Writing an email
  • Using AI-generated images (must cite)
  • Brainstorming ideas/topics for papers
  • Writing figure labels and titles
  • Summarizing pre-work and other class readings
  • Writing statistical software code

Never OK (unless part of a specific assignment)
  • Writing responses to quiz/exam/homework questions
  • Looking up answers to quiz/homework/exam questions
  • Generating data that undermines academic integrity
  • Rewriting responses using direct language offered by an AI program
  • Plagiarizing AI (i.e., direct copying without citation)

*Students must ask the course director or refer to syllabi for AI policies in these circumstances.

We reviewed the table with students several times throughout the course and adjusted it based on students' questions and suggestions.
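
Programs that want to make such rules machine-readable (for example, to power a lookup on a course site) could encode the table directly. The sketch below is a hypothetical Python encoding of a few rows of table 4, not code used in the MPH program; note that unlisted activities deliberately default to "needs permission," mirroring the footnote above.

    # Hypothetical encoding of a few rows of the coproduced AI-use rules.
    from enum import Enum

    class Rule(Enum):
        ALWAYS_OK = "always OK"
        NEEDS_PERMISSION = "needs permission (ask the course director)"
        NEVER_OK = "never OK unless part of a specific assignment"

    AI_USE_RULES = {
        "checking grammar": Rule.ALWAYS_OK,
        "using as a thesaurus": Rule.ALWAYS_OK,
        "background research (must cite)": Rule.NEEDS_PERMISSION,
        "writing statistical software code": Rule.NEEDS_PERMISSION,
        "writing responses to exam questions": Rule.NEVER_OK,
        "direct copying without citation": Rule.NEVER_OK,
    }

    def check_use(activity: str) -> str:
        # Anything not explicitly listed defaults to "needs permission."
        rule = AI_USE_RULES.get(activity.lower(), Rule.NEEDS_PERMISSION)
        return f"{activity}: {rule.value}"

    print(check_use("Checking grammar"))
    print(check_use("Summarizing a journal article"))  # unlisted -> ask first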

Concluding Thoughts

As educators, we have the privilege of sharing our expertise and life experiences with students who are eager to learn. But how can we evaluate students fairly to validate their knowledge? AI complicates knowledge assessment because it fundamentally alters how we gauge students' understanding of curricular content. This experience provided us with several key takeaways:

  • Students study with AI to check their understanding of course concepts against sources outside the assigned materials.
  • Employers want students who are proficient in AI use (akin to proficiency using search engines and spreadsheet software).
  • Communicating our learning goal clearly and openly created conditions that allowed students, faculty, and staff to be candid and forthright.
  • Embracing AI technology (as opposed to vilifying it) helped reframe the conversation to explore solutions for current and future situations.
  • Managing the situation from a position of trust strengthened the relationship between administrators, faculty, staff, and students. Things could have gone sideways if we had presumed students were being dishonest.

Of course, this experience occurred while all Dartmouth faculty members were grappling with how to deal with AI in the classroom and looking for direction from program leadership. Many important concerns were shared during a series of faculty meetings throughout the year, including the need to require students to do the work necessary to gain the intended skills, knowledge, and competencies from each assignment. AI can play a role in this, but it cannot be a substitute for student learning.

Several professors expressed concern about their proprietary content being shared with AI engines and becoming more widely available. The AI policy section of our syllabus template was promptly updated to address this crucial intellectual property issue.

Faculty recognize the potential of AI to enhance teaching and learning; however, they also understand that harnessing this new technology requires a deliberate approach to redesigning assignments, which takes careful thought and often significant changes to existing coursework. Although incorporating AI into classes can seem daunting given the time needed to design and implement it thoughtfully, faculty acknowledged that AI is already a part of everyday life and that employers expect new hires to use it to enhance productivity and efficiency.

Although our faculty has not reached a consensus on how to use AI in the classroom, we agree that embracing it as a new, ubiquitous tool requires rethinking many assignments to maximize learning and assess mastery. The Dartmouth MPH program is stronger because of this experience, which could have easily resulted in a generational rift if it had been handled differently. AI will certainly pose new challenges and require ongoing engagement between students and faculty based on integrity, openness, and—most importantly—trust.


Craig R. Westling is Associate Dean for Health Sciences Education at Dartmouth Geisel School of Medicine.

Manish K. Mishra is Assistant Professor, Director of the Learning Environment Office, and Director of Professional Education at The Dartmouth Institute for Health Policy and Clinical Practice at Dartmouth Geisel School of Medicine.

© 2024 Craig R. Westling and Manish K. Mishra. The content of this work is licensed under a Creative Commons BY-ND 4.0 International License.