When used thoughtfully and transparently, generative artificial intelligence can augment creativity and challenge assumptions, making it an excellent tool for exploring and developing ideas.

Just about everyone seems to be using generative artificial intelligence (GenAI) tools like ChatGPT these days. After all, ChatGPT reached 100 million users within two months of its launch—faster than any tool in history. However, two years after the introduction of ChatGPT, sustained engagement with the tool has grown only slowly.[1] Although ChatGPT can perform many search-like tasks, its query volume remains a small fraction of the billions of searches Google processes daily. The gap between perceived ubiquity and actual user engagement suggests a widespread misunderstanding about the capabilities of large language model-based tools like ChatGPT.
The glaring contrast between the perceived ubiquity of GenAI and its actual use also reveals fundamental challenges associated with the practical application of these tools. This article explores two key questions about GenAI to address common misconceptions and encourage broader adoption and more effective use of these tools in higher education.
What Makes GenAI Unique?
To begin addressing the misconceptions around the capabilities of GenAI, we in higher education need to confront a core misunderstanding: GenAI is not just another app. It is fundamentally different from traditional software programs. This difference is categorical, challenging past assumptions about what software can do.
For example, traditional software is entirely dependent on predetermined instructions. It can pass a test such as the bar exam if it is programmed to take the test and has a database with all the questions and answers. This foundation of creating a fixed process to produce a predictable result is the essence of coding.
GenAI is fundamentally different. Instead of following preset instructions, it predicts probable responses based on its training. In this way, it can achieve excellent results on a test it was never programmed to encounter, without access to any answer key. How does it estimate correct answers with such precision without any knowledge of the topic? This tension between knowing and not knowing is the essence of what makes GenAI categorically different.
"Not knowing is most intimate." Book of Equanimity, Case 20Footnote2
During a test, humans rely on memory, recalling previously acquired knowledge when answering questions. However, memory retrieval is fallible. We may believe we have recalled the correct answer or read the question correctly when, in fact, we have not. When we can't remember a precise answer, our brain may employ pattern recognition to predict the most likely answer based on contextual clues.
The neural network used by GenAI isn't like the human brain, but there are similarities, such as pattern recognition and probabilistic reasoning.[3] The software is trained on data before it is operational but does not retain that data as knowledge. The training process could be called "learning," but GenAI is not capable of memory recall. Instead, its ability to recognize patterns allows it to generate output that looks like recall. When GenAI predicts a wrong answer, it is called a "hallucination." Yet, regardless of whether it gives the right or the wrong answer, it is making the answer up! It has no knowledge and is not referencing any verified data source.
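A toy sketch in Python can make this distinction concrete. Everything in it, from the context sentence to the probabilities, is invented for illustration; a real model scores tens of thousands of candidate tokens using billions of learned parameters. But the principle is the same: every answer, right or wrong, is sampled from a learned probability distribution, never retrieved from a store of facts.

```python
import random

# Toy illustration (not an actual LLM): a language model assigns a
# probability to each candidate next token given the context so far,
# then samples from that distribution. Its "knowledge" is nothing but
# these learned probabilities; there is no database lookup.
# The context and probabilities below are invented for demonstration.
next_token_probs = {
    "Paris": 0.87,    # a pattern seen often during training
    "Lyon": 0.06,
    "Berlin": 0.04,   # plausible but wrong: a "hallucination" if sampled
    "banana": 0.03,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its learned probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The capital of France is"
print(context, sample_next_token(next_token_probs))
```

Run this enough times and it will occasionally print "Berlin," which is precisely the mechanism behind a hallucination: a lower-probability pattern being sampled, not a memory being misretrieved.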
Therefore, skeptics conclude that since GenAI sometimes generates inaccurate responses, and since a classical computer program pulling from a database will answer every test question correctly, GenAI is inferior to traditional software. This conclusion is rational, but it misses the point. Software that can pass a complicated and nuanced test based solely on highly precise predictions is remarkable. GenAI is the first tool capable of mimicking a form of human thought and reasoning.
GenAI is not a replacement for human expertise, creativity, or critical thinking. It is a powerful tool that can augment everyday knowledge work in new ways. Its ability to solve complex problems and engage in iterative conversation can spark meaningful insights. It's a leap forward, much like the shift from early calculators to modern computers. When its strengths and weaknesses are properly understood, GenAI changes how we interact with technology: from something we use to get answers to something we work with to develop ideas. However, the unique capabilities of GenAI also make it easy to misuse.
How Is GenAI Misused?
Misuse of GenAI manifests in three distinct ways. The first relates to the perceived quality of GenAI output: perceptions of how it compares to human-vetted sources, such as Wikipedia or scholarly publications, are misaligned. There is a presumption that GenAI outputs are always factual. However, GenAI does not have access to facts, does not understand truth, and does not use fact-checking or source review.[4]
On the other hand, Wikipedia provides citations and has a community of editors. Although Wikipedia is not perfect—it includes a disclaimer reminding readers that its content is not always reliable—because it is human-vetted and grounded in citations, it has a fundamental connection to truth.[5] In contrast, GenAI operates through abstraction: Its output is detached from verifiable truth.
Because its responses are well-phrased, confident, and often accurate, GenAI gives the impression of being consistently factual. As a result, we are surprised when a GenAI tool generates an incorrect response. How can something that appears so authoritative and sophisticated be wrong? This tension can lead us to dismiss it or downplay its capabilities. Instead, we should frame GenAI as a sounding board to help develop our thoughts, explore possibilities we may not have considered, and challenge our assumptions. Like Wikipedia, GenAI is a great starting point for exploring ideas.
When it comes to addressing open-ended questions, GenAI often surpasses search engines like Google. Although Google Search remains an essential tool for finding facts and authoritative sources, GenAI is better at providing relevant answers. Yet as Google folds GenAI into Google Search, as it has been doing for the past year, access to authoritative sources becomes intermediated by a tool that fundamentally does not have access to truth.
The second way GenAI misuse manifests is through casual engagement. When given incomplete or vague questions, GenAI may misinterpret the question or provide a generic answer. For example, a student who prompts a GenAI tool to write a five-page paper on religious diversity in the Ottoman Empire is not engaging in any work or learning. Any pursuit of AI that instantly answers our questions before we even understand our own thoughts is profoundly dehumanizing. What some faculty dismiss as laziness is far more serious: It is an act of substituting superficial machine prediction for human thought.
By contrast, a student who asks, "Help me think through how different religious communities in the Ottoman Empire coexisted. What kinds of tensions and collaborations existed, and what role did external forces play in shaping the legacy of this system?" is already in the midst of critique and driving a thought process. The student is using the tool to push their thinking further. They are not asking for the answer but instead using GenAI as a catalyst for exploration, creating new lines of inquiry, and refining and deepening their own thought process. However, even with a thoughtful and well-crafted question, the chatbot's initial response often does not reflect the tool's full capabilities; drawing those out typically requires ongoing, iterative engagement. In one sense, this type of engagement highlights an inefficiency in the current state of the technology; in another, it reflects the time it takes to fully express a thought.
GenAI is inherently conversational and is designed to explore intricate ideas and diverse perspectives. Rigor is required to use it effectively. That means time and effort must be put into the conversational process: asking follow-up questions and directly challenging parts that are unclear or possibly wrong. This may sound like more effort than it is worth, but most of the time, GenAI is a massive time multiplier: a focused conversation significantly amplifies the quality of what the tool would otherwise produce.
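For readers who script against these tools, this conversational rigor has a direct programmatic analogue. The sketch below is a minimal illustration, assuming the OpenAI Python SDK; the model name and prompts are placeholders, and any chat-capable model would serve. The key detail is that the full message history is resent with every follow-up, which is what makes iterative challenge and refinement possible.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; any chat-capable model works

def ask(messages: list[dict]) -> str:
    """Send the full conversation so far and return the model's reply."""
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

# The opening question frames a thought process, not a finished deliverable.
messages = [{"role": "user", "content": (
    "Help me think through how different religious communities in the "
    "Ottoman Empire coexisted. What tensions and collaborations existed?"
)}]
messages.append({"role": "assistant", "content": ask(messages)})

# Iterative engagement: each follow-up challenges or refines the last
# answer, and the whole history is resent so the conversation builds.
for follow_up in (
    "Which of those claims do historians contest most, and why?",
    "Challenge your previous answer: what evidence cuts against it?",
):
    messages.append({"role": "user", "content": follow_up})
    messages.append({"role": "assistant", "content": ask(messages)})

print(messages[-1]["content"])  # the final, conversation-refined response
```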
Deception is the third type of misuse. GenAI can produce a large amount of information, but if no effort is made to evaluate or revise its output, it's just noise. Passing off such information as human work is dishonest and unethical. Information produced by GenAI is unmoored from subjective experience or objective fact, making it especially problematic if presented as original or creative thought.
Consider again the example of a student submitting a paper produced entirely by GenAI: the issue goes beyond academic dishonesty. Faculty reviewing these papers believe they are engaging with human-generated ideas. An AI-generated paper may be well-written, but it can still be irrelevant, redundant, or incoherent. Its human-like quality implies intention and creates an expectation of coherence. Yet when the internal logic of such a paper breaks down, faculty are left not merely recognizing flawed content but confronting the illusion of thought where there is none. This ethical harm amounts to synthetic plagiarism: a false claim of authorship that fundamentally undermines trust and falsifies human intellectual discourse.
Summary
Over twenty years ago, Google Search transformed how people access information. Today, GenAI is changing how we engage with the simulation of knowledge. While search engines retrieve established answers, GenAI creates new possibilities. Not only does it provide a space for exploration and ideation, it also enables a novel form of learning. Its value is not in delivering facts but in helping people develop more comprehensive forms of expression. GenAI is an incredibly powerful tool for higher education, not as an authoritative source but as a partner in thought development.
We are only two years into the GenAI experience, and the hype has already been excessive. The hype has been fueled partly by AI companies presenting speculative future capabilities as though they are present realities and blurring the line between what GenAI is today and what it may become. The result is a public discourse that struggles to distinguish between the appearance of capability (let alone the assumption of inevitable future advancement) and the actual limitations of these tools.
Leading GenAI companies like OpenAI recognize that a significant gap exists between the potential of the technology and its everyday use. They also understand that competing models such as DeepSeek are becoming more accessible and widespread. GenAI companies have responded to these developments by designing shortcuts for various new use cases, presenting a challenging shift for educators. OpenAI, for instance, has launched a series of tools aimed at broadening engagement: ChatGPT search (October 2024), Sora for video generation (December 2024), Operator for task automation (January 2025), Deep Research for cited, multistep research (February 2025), and, most recently, advanced image-generation capabilities (March 2025). While some of these new tools will increase exposure to ChatGPT by simplifying routine tasks, their design tends to prioritize efficiency over intellectual engagement. Yet, it is only through reflective and critical use that educators can unlock the deeper value of GenAI as a thought partner.
Regardless of future advancements, GenAI is already a powerful tool for ideation. Writers can use it as a conversation partner for a back-and-forth about word choices. Linguists can use it to critique translations and to provide alternate renderings and nuanced interpretations. Students can use it to help them understand particular subjects, and it is a thought partner with inexhaustible patience.
Using GenAI regularly can help us understand not only its usefulness across our many interests and endeavors but also its limitations and imperfections. There is value in engaging with GenAI in its current, imperfect form because doing so prepares our community for how the technology might evolve. When the internet was new, it felt overwhelming. GenAI evokes a similar feeling. Now is a wonderful time to rigorously explore this new tool in the classroom.
Notes
1. Krystal Hu, "ChatGPT Sets Record for Fastest-Growing User Base – Analyst Note," Reuters, February 2, 2023. ChatGPT has approximately 400 million weekly users as of February 2025; see Shubham Singh, "ChatGPT Statistics (April 2025): Number of Users & Queries," Demandsage, February 28, 2025. Google, by contrast, handles 115 billion searches per week, which puts ChatGPT's usage at around 0.35 percent of Google's; see Anthony Cardillo, "How Many Google Searches Are There Per Day? (March 2025)," Exploding Topics, March 6, 2025.
2. It may sound paradoxical, but the strength of GenAI is its lack of "knowledge." This runs counter to Western epistemology, but in this Zen koan, "not knowing" is intimacy, not ignorance. It describes a mind unburdened by assumptions and open to what is. Of course, software does not possess knowledge in the human sense. But this framing helps clarify the contrast between GenAI and traditional software: GenAI is effective precisely because it does not rely on static information but is instead open to possibility.
3. The ways in which ChatGPT differs from the human brain are too numerous to list, but the most important is that ChatGPT does not possess consciousness. It does not have intentionality. Because the tool has no subjective experience, its output has a certain consistency. ChatGPT tends to reflect learned patterns, which are sometimes novel, but it cannot create original ideas. Consequently, although its output may, in some ways, appear creative, such "creativity" is fairly uniform and repetitive.
4. Several companies, including OpenAI, Microsoft, Google, and Anthropic, have been developing layered systems to augment their LLMs and partially address this gap. These retrieval-augmented generation systems have significantly improved model outputs, but don't be deceived: the LLM itself remains fundamentally separated from truth. As of this writing, it is perfectly capable of taking a valid citation and fabricating the attribution. (A minimal sketch of the retrieval pattern appears after these notes.)
5. It is seldom recognized, but consider this: How many websites on the internet include disclaimers about not being an authoritative source? See "Wikipedia:Citing Wikipedia," Wikipedia, updated February 14, 2025.
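As an aside to note 4, the following is a toy sketch of the retrieval pattern behind retrieval-augmented generation. The documents and the word-overlap scoring are stand-ins invented for illustration; production systems use vector embeddings and a search index. The point is architectural: a retrieved passage, not the model's pattern memory, supplies the verifiable grounding.

```python
# Toy retrieval-augmented generation (RAG): retrieve the most relevant
# passage first, then hand it to the model alongside the question, so the
# response is steered by verifiable text rather than pattern memory alone.
documents = [
    "Wikipedia articles cite their sources and are reviewed by editors.",
    "The Ottoman millet system granted religious communities legal autonomy.",
]

def retrieve(query: str) -> str:
    """Score each document by naive word overlap and return the best match."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

question = "How were religious communities governed in the Ottoman Empire?"
grounding = retrieve(question)

# The retrieved passage is prepended to the prompt that would be sent to
# the LLM; here we simply print it to show the grounded prompt structure.
prompt = f"Answer using only this source:\n{grounding}\n\nQuestion: {question}"
print(prompt)
```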
Brian Basgen is Interim Vice President, Technology & Facilities, at Emerson College.
© 2025 Brian Basgen. The content of this work is licensed under a Creative Commons BY-NC-SA 4.0 International License.