Teaching with GAI in Mind


In the age of generative artificial intelligence, how should instructors change the way they teach?

Image: a hand holding a drawing of a lit lightbulb with "AI" inside it. (Credit: Nirat.pix / Shutterstock.com © 2023)

"Yesterday's home runs don't win today's games."

—Babe Ruth

It's a few weeks before the semester, and you have started going through everything you will teach next semester, but this time it is different. In the past, you may have relied on papers and long-answer essay questions to test your students' knowledge, but that no longer works. Yes, there were a few students who plagiarized their way through, getting caught by your trusty plagiarism checker and sent to the Academic Integrity Committee, but that wasn't an endemic problem. Now, in the age of generative artificial intelligence (GAI), you find yourself adrift. What can you teach, and how should you teach it? Should you try to stop your students from using GAI by spending your semester trying to police the unpoliceable, or should you embrace the tool? What is the best strategy? Let's take a deep dive into this topic so that you can make these decisions in an informed way.

The Teaching Landscape Has Shifted

On November 30, 2022, ChatGPT launched. Although large language models (LLMs) already existed, this was the first easy-to-use, web-based interface to an LLM. It was also free, widely available, and very human-sounding. One only had to enter a prompt, and the dang thing would pump out an entire academic essay on any topic, or an application letter, or a page of code. It could correctly identify which formula to apply to solve a physics problem and even explain complex legal, medical, and philosophical issues. Particularly for academics, this was a nightmare of horrific proportions. We went through a moment of disbelief, followed by anger, and then an acute existential crisis: "What can I teach my students when GAI can replace anything I might teach?"

In my department, English, instructors were especially bereft. What was the point of teaching writing when GAI made even our most basic writers into magnificent communicators overnight? I remember grading final essays in our last week of classes at Georgia State and noticing that my non-native speakers were suddenly fluent, my basic writers were capable of beautiful phrasing, and my competent writers did not make even the smallest grammatical error. I had used Grammarly in my teaching for years, but this went far beyond Grammarly. It seemed that all my students had suddenly become competent and fluid, as if they no longer needed any assistance from me to become great writers.

At this point, many of us were at the end of our grading rope. OK, maybe students had used this tool, but frankly, it was too late in the semester to worry about it. We would figure out what to do about ChatGPT after winter break. For now, we just graded the assignments we had, but our discussions in the hallways suddenly became philosophical: What was the nature of humanity, and what differentiated us from machines? Could computers have a soul? What is the essence of education? Why have grades become so much more important than education? And was that instructor over there, the one who has never taught using computers and has made his students physically go to the library every semester, actually right? Maybe he was ahead of the curve and not behind it, as many of us had thought.

The Rise of AI Detectors

By the start of spring semester, we were all asking questions about how to teach with GAI in the mix. I wrote a short piece, "A Brief Summary of the Capabilities of ChatGPT," and circulated it first within my department and then through the blog for the Center for Excellence in Teaching, Learning, and Online Education (CETLOE).Footnote1 Although we were promised relief by the release of tools such as GPTZero and the somewhat late addition of a Turnitin tool to alert us to the use of GAI, we soon found that these tools came with a strong caveat: they were not reliable. They often flagged perfectly innocent students who hadn't used GAI at all and let students who had used GAI go undetected. The Turnitin detector includes the following strongly worded notification:

Turnitin's AI writing detection capability is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify both human and AI-generated text) so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any academic misconduct has occurred.Footnote2

Some instructors became obsessed with trying to determine whether their students had indeed "cheated" by using GAI to write their papers. They spent endless hours trying to figure out which students were cheating and which were not. There was even a case of a professor in Texas who failed the entire class for using GAI, which delayed graduation for the students in the class.Footnote3 GAI is not like regular plagiarism. It cannot be easily distinguished from human writing because the text is generated word by word by a generative pretrained transformer (GPT) model rather than copied from an existing source:

The GPT models are transformer neural networks. The transformer neural network architecture uses self-attention mechanisms to focus on different parts of the input text during each processing step. A transformer model captures more context and improves performance in natural language processing (NLP) tasks.Footnote4

The task of creating content using GAI is performed in such a way that every output is different, even if the prompts are exactly the same, making detection much more difficult—in fact, virtually impossible. In addition, it is easy to camouflage GAI use by substituting phrases, rewriting sections, and making intentional mistakes in the text. In fact, this problem is so serious that OpenAI has shut down its own AI detection tool because of poor accuracy.Footnote5 Right now, there are hundreds of tutorials on fooling AI detectors on YouTube (e.g., "How to Trick AI Content Detectors"). Our students, to be sure, are watching every tutorial out there in an effort to keep their GAI usage secret.

Some instructors have suggested that they can tell what is GAI text and what is not. A colleague said, with a straight face, that she always knows when a student has used GAI to create a paper. I listened to her and smiled at her innocence, knowing full well that my students—less than a week after ChatGPT was released—had already told me that they had easily trained the model to replicate their own writing style. These students had completed papers in their courses without any problems, suggesting that the papers she could "always" pick out were not the product of GAI. Instead, they were the product of poor prompting by students who hadn't learned how to train the model. Even if my colleague could tell that they had used GAI, she wouldn't be able to prove it. Students who are sly in their use of GAI have an advantage. It is an academic arms race. Will instructors catch up? It is unlikely. Instead, we need to embrace the inevitable and make our classes relevant again.

The Issue of GAI Hallucination

During the spring of 2023, I realized that years ago, I had made a pedagogical decision that was very useful in the age of GAI: I had focused on teaching texts that were not widely known and that had been mostly ignored by the academic community. I had done this because I wanted my students to rely on their own research into the texts and not on specific critical perspectives of the texts. What I found out in the age of GAI is that LLMs, which are normally able to rely on thousands of academic articles about specific texts, were unable to draw any information from the texts I used, resulting in terrible GAI hallucinations (i.e., made-up, false information). Even though the short stories I used were very brief, some students relied only on the GAI text and not on their own knowledge. This resulted in giant swaths of fabricated materials generated by the GAI—from crazy analyses of texts that didn't exist to in-text citations and works cited for authors, books, and articles that were complete fantasy. Since then, my students have learned to add those short stories to their prompts in order to get more accurate information, but there are still problems with their essays when they do this because the GAI doesn't always stick to the information they have fed it.

Needless to say, I had no need for any AI detectors in the course. Those students who had clearly not even read or edited the works they were submitting for a grade were presented with an F. There was no need for policing, worrying about, or trying to track down what they might have done. I did let them rewrite those papers, but they were academically chastened. They had learned a valuable lesson about trusting GAI, and their revised papers (which had to be on a completely different subject) turned out to be carefully reasoned and even more carefully documented.

Although GAI can produce research papers and documentation, it is especially prone to hallucinations when it must swim into the depths of academic, legal, medical, or philosophical discourse. If it can't find relevant information on the topic, it will make it up. So, in-text citations and works cited not only are incorrectly formatted (which I have found is almost always the case), but also contain fabricated authors, dates, and publications. A New York law firm learned this lesson the hard way when it used ChatGPT to generate legal briefs in an aviation lawsuit: "The chatbot, which generates essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn't been able to find through usual methods used at his law firm. Several of those cases weren't real, misidentified judges or involved airlines that didn't exist." The two lawyers and their firm were fined $5,000, to serve as an example to other lawyers who might try the same tactic.Footnote6 Although reading about this case may be humorous, the effects of such serious errors on the case were irreparable. One can only surmise the serious repercussions of falsified medical studies, incorrect legal opinions, and faked academic works. It is important to stress the importance of verifying and double-checking every source in a paper and to warn students of the danger of hallucinations.

The judge in the case said that the lawyers had "abandoned their responsibilities" by using falsified information. The use of that language struck a chord with me. When a student submits a paper to an instructor, the student takes responsibility for the information and stands behind the sources provided. Even students who are just starting out in their academic journey must get in the habit of documenting every fact, checking every source, and providing the correct works cited for their papers. Especially in the age of GAI, we must hold students responsible for proper documentation to ensure that the documentation of sources becomes ingrained in their scholarly process.

When using GAI, one usually does not know the source of the information. GAI models collect knowledge from the internet and do not provide the sources of the information provided in the textual output, requiring GAI users to invert the process of writing a research paper. The usual process involves identifying one's research interest, identifying research sources, reading those sources, taking notes on the information available, and then writing and documenting the paper. With GAI, the process may be more like the following: identifying one's research interest, writing the paper, eliminating hallucinations, identifying knowledge in the paper that must be documented, finding the sources, and documenting the paper.

Writing Research before GAI
  1. Identifying research interest
  2. Identifying source material
  3. Reading and writing research notes
  4. Writing paper with documentation
  5. Completing works cited or bibliography

Writing Research after GAI
  1. Identifying research interest
  2. Writing paper with GAI
  3. Checking for hallucinations
  4. Identifying academic knowledge
  5. Identifying source material
  6. Documenting sources
  7. Completing works cited or bibliography

Yes, students may spend less time writing their papers, but they may spend as much or more time "fact checking," analyzing, and documenting. This approach will require retraining our students to pinpoint factual content and differentiate it from false content (hallucinations). It will also require that students focus on what must be documented and what does not have to be documented (i.e., academic facts vs. personal opinions/experience). This process may open an entirely new avenue of pedagogical concern as students begin the arduous task of analyzing their "own" generated texts to correctly identify areas that require attention to academic integrity. Students will need critical thinking and documentation skills that may not have been emphasized in the past.

Teaching with or against GAI

GAI is here, and it is not going away. The world of teaching that we knew before November 30, 2022, will never return. So, ignoring GAI, pleading with your students not to use it, and then trying to police them is a terrible waste of time. You may be able to prove that one or two of your students have used GAI, but chances are that the ones you catch will unfairly fall into the categories of non-native speakers, those who have learning or physical disabilities, and those who have been subjected to inadequate education because of economic or social disparity.Footnote7

Instead, as an educator, make sure that you protect academic integrity while finding ways to live with GAI. To be ready for the challenges ahead, you must teach either with or against GAI in everything you do in your courses. This does not mean that you have to choose between "teaching with GAI" and "teaching against GAI" for everything. You can make this decision on a case-by-case basis for each exercise, exam, and paper in the course.

Teaching with or against GAI involves two important steps that every instructor should take right away. The first step is to make a syllabus statement. Again, the syllabus statement does not have to say that you have a uniform policy regarding GAI. It can indicate when and how GAI can be used in your course, including a serious statement about academic integrity in the age of GAI. For example, my syllabus statement is in the form of "Best Practices" guidelines (see table 1).

Table 1: Best Practices for Using GAI in My Class

GAI definitely has a place in academia, but you should use GAI tools very carefully. The following are "Best Practices" for using text-based GAI:

  1. BE AWARE OF THE UNIVERSITY'S HONOR CODE. The honor code is no joke. You can fail a class, be suspended, or be expelled for academic dishonesty. Make sure you are using GAI only to create outlines and drafts but NEVER the final copies of an essay. In addition, NEVER use ChatGPT or other GAI models to complete exams or discussion posts within the LMS program.
  2. BE AWARE OF GAI "HALLUCINATIONS." If a large language model (LLM) like ChatGPT can't find an answer, it will make one up. This is called a "hallucination." GAI will hallucinate not only "facts" in your paper but also references. If you don't know what you are writing about, please do not use GAI.
  3. START WITH A STRONG CLAIM. For any essay assignment, you must begin with a claim. The claim should be specific and arguable. If you need help drafting a strong claim, ask your friendly neighborhood chatbot to help you make it stronger. Remember, when you write this paper, you are making an argument for the way you see the material. To make a great outline or draft for your essay using GAI, you must begin with a strong, clear, and arguable claim.
  4. USE WELL-CRAFTED PROMPTS. "Prompt engineering" refers to the way in which you enter the prompt into a GAI model. Prompts must clearly identify the task you want the GAI to perform, how you want the GAI to perform the task, and what output you want. Here is a link to some useful instructions about how to correctly prompt ChatGPT.
  5. CAREFULLY READ THE DRAFT. You may have to ask the GAI model to revise the draft several times before you get anything you might consider plausible or usable. You can also ask ChatGPT how to improve your draft.
  6. REVISE, REVISE, REVISE! Do NOT leave the draft as it is. You need to carefully revise it, often adding levels of detail and clarity to what the GAI model has given you. You will also need to add in-text citations and a reference page in MLA 9. You must add these separately, since most GAI models are incapable of adding real citations in the proper format.
  7. USE ZOTERO. Zotero is an awesome tool that should help you throughout your college years and beyond. Please spend the 20 minutes needed to learn it. Don't use ChatGPT for any in-text citations, works cited, or reference pages.
  8. FORMAT YOUR WORK. First, remove GAI formatting from your work. Remove the box and highlighting. Then make sure your work is formatted according to the required style manual.
  9. CHECK YOUR WORK. Grammarly has become expensive, but there are some great GAI alternatives. Try Quillbot or Wordtune to check your work, especially in Google Docs.
  10. USE GAI AS A TUTOR. Here is a prompt you might consider using to improve your writing:

    I want you to act as a GAI writing tutor. I will provide you with an essay that I need help improving, and your task is to use artificial intelligence tools, such as natural language processing, to give me feedback on how I can improve the composition. You should also use your rhetorical knowledge and experience of effective writing techniques to suggest ways that I can better express my thoughts and ideas in written form. Write "Please paste your essay in the text box" and wait for me to paste my essay into the text box. When you have completed your analysis of the essay, end with the prompt, "Please paste your essay in the text box" so that I can paste my next essay.

Depending on your teaching field and what you will be focusing on, you should find a syllabus statement that will help you teach both with and against GAI in your class, depending on the assignment. Again, you don't have to make a uniform statement either for or against GAI, but you should be both clear and clearheaded about how your students might use GAI in your classroom. Remember, GAI is here to stay; students need to know how to use it for their future, and they need to understand the ethical and academic implications of using it within the safe parameters of your classroom. Let them experiment a bit, but keep them where you are comfortable while allowing them to roam. Let them know that there is an effective way to use this tool but that you don't want them to use it all the time or substitute it for learning. Most of your students will be grateful that you have been clear about your policies—whatever they are—and happy that you haven't completely outlawed GAI in your classroom.

The second step is to go through every assignment, discussion, and exam you give in your class. Ask yourself, "If I were a student, how would I try to use GAI to get around the work for this lesson?" You must analyze every lesson against that standard. I recommend that you don't try to do this all at once at the beginning of the semester. Instead, change your lessons weekly, since there is a strong possibility that what you think is GAI-proof might become something GAI can do at any moment during the semester. If you don't know what GAI is capable of doing (this changes daily), spend some time researching the current capabilities of GAI tools. Along with ChatGPT (OpenAI), tools such as Bard (Google), Llama (Meta), and Bing (Microsoft) are becoming increasingly sophisticated and capable by the day. Some are better at text, some at code, and some at analysis. (I haven't even discussed text-to-graphic programs, which are improving daily as well.) I have been setting aside Sunday afternoons to check my lessons and research GAI tools by using a Google filter for "the past 24 hours." I also subscribe to TLDR's AI list, and I have a Google Alert for "AI in Higher Education," which I check daily. (But then again, I am a bit obsessed with this subject!)

Teaching with GAI

Let's say I want to teach with GAI. What is the best way to integrate GAI into a class? My best advice is to learn as much as you can about GAI now, while you have some time. The most important step I took toward this goal was to enroll in a free Coursera course, "Prompt Engineering," offered by Vanderbilt University (they will also provide a certificate of completion for $49). I learned about the basics of prompt engineering, which was very helpful, but I also learned something even more important: some of the capabilities of ChatGPT. This gave me valuable ideas about how to use GAI in my courses and put me one small step ahead of most of my students, which is always a good thing. For example, I learned how to give ChatGPT several examples, called "few-shot prompts," to increase its effectiveness, how to expand outlines, and how to use formatting for output. I also realized something that has become basic in my understanding of how to use ChatGPT and other GAIs: I have been doing prompt engineering for over 30 years! I haven't been prompt engineering LLMs, but I have been prompt engineering my students. With every assignment I have written over the years, I have included the purpose of the assignment, what kind of output I want, how I want that output formatted, and examples of what I want. This is exactly what we need to do when we prompt GAI models to give us the output we're looking for.
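To make the idea of a few-shot prompt concrete, here is a minimal sketch of my own (an illustration, not material from the Vanderbilt course). The instruction is paired with one worked example of the desired output before the real task is given:

    Rewrite each thesis statement so that it is specific and arguable, and keep it to one sentence.
    Example input: "Social media is bad for teenagers."
    Example output: "Because it rewards constant comparison, image-driven social media undermines teenagers' self-confidence far more than it builds community."
    Now rewrite this one: "Shakespeare's plays are still important today."

The worked example does two jobs at once: it shows the model the level of specificity I expect, and it shows the format I want the output to take.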

My takeaway from the prompt engineering course was that I should ask my students to do the following with AI:

  • Check research claims to see if they can be improved
  • Make, and expand, outlines for their papers
  • Provide drafts for papers they are having difficulty starting
  • Use text-to-graphic programs to illustrate their essays
  • Provide graphic examples of metaphors
  • Use prompts I give them so that GAI can act as a tutor that will point out areas where their papers can be improved

I also required my students to share any outlines, drafts, or analyses that were done with ChatGPT, and I compared those drafts with their final product to ensure that they weren't using the tool to do their work for them. (See the "compare documents" tool in Microsoft Word for this task.) In experimenting with this strategy, I was pleasantly surprised to find that my students were writing much more scholarly work, their arguments were more beautifully nuanced, and they had spent much more time documenting their work. Of course, as I mentioned, I use short stories that are unavailable in GAI programs, and this has made my work much easier to manage.

What can you do if you are teaching a class in mathematics, physics, computer science, or other STEM-based courses? The process is similar. You need to check ChatGPT and other GAI programs for what they can do, and then begin integrating that information into your lessons. For example, let's say you are teaching computer science, and you give your students code that doesn't work. Tell them to run the code to find out what it does or doesn't do, use a GAI program to debug the code, and then run it again. Was the problem solved? How? Was there any other way that the problem could have been solved? These kinds of lessons not only will make your students aware of what GAIs can do to help them with tasks, but also will alert them to ways in which they can work independently to find errors in the code. I have found that compare-and-analyze exercises are very helpful in clarifying my students' thinking about a task that they have completed with GAI.
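A minimal sketch of such an exercise, written here in Python as a hypothetical illustration (not an assignment from any particular course), might look like this. The function is supposed to compute an average, but it contains a deliberate bug for students to find—first on their own, then with a GAI assistant:

    # Deliberately buggy function for a debugging exercise (hypothetical example).
    # It is supposed to return the average of a list of numbers.
    def average(numbers):
        total = 0
        for n in numbers:
            total += n
        return total / (len(numbers) - 1)  # Bug: should divide by len(numbers)

    print(average([2, 4, 6]))  # Prints 6.0; the expected answer is 4.0

Students run the code, notice that the output is wrong, ask a GAI tool to diagnose the bug, and then explain in their own words which line changed and why the original version failed.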

If you teach mathematics or physics and you have relied heavily on story problems to test your students, I'm sorry to say that those story problems are exactly where GAI excels. Instead, give students the mathematical problem, and then ask them to develop a story. (Test this in ChatGPT first, but I think it will work.) Try to think of different ways of presenting a problem that would be difficult for your students to enter into ChatGPT or another GAI. Better yet, ask ChatGPT or another GAI program how you can present a story problem to your students in a way that cannot be solved with GAI. ChatGPT is great at coming up with teaching strategies.
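For example (an illustration of my own, not a tested assignment), instead of handing students a story problem about a road trip, give them the equation 180 = 60t + 30 and ask them to write a one-paragraph, realistic scenario that the equation models, explain what t and each constant represent, and then solve for t and interpret the answer within their own story.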

Teaching against GAI

There have been many suggestions about teaching "against GAI," but they mainly involve face-to-face courses in which the instructor can physically observe students in the classroom. However, many of us cannot do this because we teach online or in lecture-style courses in which completing problems on the whiteboard, answering questions in person, or writing with pen and paper are not logistically possible. How can someone teach "against AI" in, for example, an online course? Again, I highly recommend learning as much as you can about GAI before you attempt to craft a lesson. What is impossible this week may be possible next week because GAI constantly changes and evolves. Also, you may assume that GAI cannot do something that it can, in fact, do (e.g., access information after 2021). In addition, you may find that basic GAI programs (e.g., ChatGPT-3.5) may not be able to accomplish what a premium version (e.g., ChatGPT-4) can do. This leads to the general problem of possible unfairness, with access depending more on a student's economic condition than their knowledge.Footnote8

Although I can't give you foolproof anti-AI instructional materials, I can offer some ideas for assignments that discourage the use of GAI, at least at this moment in time:

  • Lessons requiring personal opinions and/or experience
  • Group work
  • Lessons based on materials that have not been widely circulated or discussed
  • Video and audio presentations
  • Lessons based on the physical performance of a task

Many educators have called for new assessment tools that are based less on outcomes and more on the assessment of the development and progression of skills. Most of us are familiar with project-based learning (PBL)—from building dioramas in second grade to nursing practicums in college—but many of our educational programs have not adopted PBL in meaningful ways, especially in the humanities and social sciences.Footnote9 Even when we do use PBL lessons, we often require our students to provide reports of PBL activities rather than assess their actual performance. However, in the age of GAI, it is important to utilize the video and audio artifacts of PBL, especially in online courses. Unfortunately, evaluating video and audio artifacts is more difficult, but if instructors have a clear rubric for evaluation, students and instructors may find PBL evaluation as easy as, if not easier than, evaluating reports of PBL activity.

In STEM fields, many have agreed with the idea, which was true in the earliest moments of ChatGPT-3.5, that GAIs perform poorly in mathematics. However, with ChatGPT-4 and the Wolfram Alpha plugin, this is not necessarily the case. If you take a quick look at these tools, you will find amazing things they can do for your students. As I write this article, I have observed that students can access natural language answers to almost any query, including algebra, geography, geometry, algorithms, step-by-step instructions, and statistics. Simple ChatGPT-3.5 is capable of accessing information only from before 2021, whereas GPT-4 makes current information available to any student who can afford the $20 per month that the premium service requires. Programs such as Bard (Google), Copilot and Bing (Microsoft), and new tools such as Perplexity AI are capable of accessing the web for free and even include references.

So, if you are planning to teach against GAI, you must be as prepared as someone who plans to teach with it. You will have to carefully plan your lessons and test each one right before you give it to ensure that your students must depend on their own intelligence to complete the assignments.

Some Final Words

What is left for you to teach when GAI tools are all around us, and how can you stay relevant? Figuring that out may take some time. When I started this journey, I asked myself, "When ChatGPT is doing the writing, how am I relevant?" That was when I started to make a list of the things that my students don't understand when they approach scholarship, even when they have access to a powerful tool like a GAI. The truth is that after achieving a PhD and teaching for 30 years, I have realized that this time was dedicated to knowing what has already been done in my field. This is essential knowledge because you cannot innovate until you know what is innovative. This is why we lead our students through a careful pathway of knowledge and experience on their way to a degree. Students must understand the difference between the sun (human knowledge) and the moon (the reflection of that knowledge). Human innovation is the sun, and AI is the moon. We can't have the reflection of human knowledge without human knowledge. Human knowledge is what will feed and innovate AI, and a critical understanding of subject knowledge is essential to that process. You can't write with AI unless you understand what is being written and what is correct and incorrect. The essence of meaning, the core knowledge of ethics, the humanities, and the democratic nature of education are what fuel AI and keep it going. Although AI is a conveyor of knowledge, it is not the origin of knowledge.

I realize that my students, as smart and capable as they are, haven't scratched the surface of knowledge. So, it is with full discernment of my subject area that I will direct most of my teaching in the coming years. Yes, my students can write a paper that I have assigned, but it is my comprehensive knowledge (which they lack) that allows me to choose a reading, appreciate its significance, and provide students with guidance for a deeper understanding. Essentially, my students don't understand the following:

  • What they don't understand
  • Why they don't understand
  • How they don't understand
  • What is a problem and what is not
  • The context of a problem
  • How to ask a question
  • How to focus
  • How to make a claim
  • Where to look for answers
  • How to look for answers
  • What has already been said on this topic
  • What is original thought about the topic and what is not
  • What is significant knowledge and what is not
  • What to do with new information

With the help of GAI, my students can write. They can get past the issues of fundamental communication in English. That is a huge burden lifted from my shoulders. I don't have to teach grammar or punctuation in the Age of AI. Now, instead, I can get to the real work: teaching my students the significance of ideas within the context of a subject area and helping them develop the critical thinking that is needed to analyze and understand complex ideas. This is the essence of education, and it is something we can unlock in our students with the assistance of GAI.

Notes

  1. Michelle Kassorla, "A Brief Summary of the Capability of ChatGPT," Moving Forward: Teaching in Uncertain Times, January 4, 2023. Jump back to footnote 1 in the text.
  2. "AI Writing Detection," Turnitin (website), accessed December 1, 2023. Jump back to footnote 2 in the text.
  3. Natalie O'Neill, "Texas Professor Flunked Whole Class after ChatGPT Wrongly Claimed It Wrote Their Papers," New York Post, May 18, 2023. Jump back to footnote 3 in the text.
  4. "What Is GPT?" Amazon Web Services (website), accessed 12/1/23. Jump back to footnote 4 in the text.
  5. Jason Nelson, "OpenAI Quietly Shuts Down Its AI Detection Tool," Decrypt, July 24, 2023. Jump back to footnote 5 in the text.
  6. "Lawyers Fined for Filing Bogus Case Law Created by ChatGPT," CBS News, June 23, 2023. Jump back to footnote 6 in the text.
  7. Andrew Myers, "AI-Detectors Biased Against Non-Native English Writers," Stanford University Human-Centered Artificial Intelligence news release, May 15, 2023. Jump back to footnote 7 in the text.
  8. Michael Trucano, "AI and the Next Digital Divide in Education," Brookings, July 10, 2023. Jump back to footnote 8 in the text.
  9. Pengyue Guo et al., "A Review of Project-Based Learning in Higher Education: Student Outcomes and Measures," International Journal of Educational Research 102 (2020). Jump back to footnote 9 in the text.

Michelle Kassorla is Associate Professor of English at Georgia State University.

© 2023 Michelle Kassorla.