Khan’s explanation of this exercise’s value to the student is that it “unlocks learning literature.” How it does so is apparently left as self-evident.
A literature teacher might view this a little differently.
Far from unlocking learning literature, Khanmigo provided an easy way for the student to stop thinking or interpreting. No need to struggle through the text or seek to understand it in one’s own terms. The fact that the chatbot is pretending to be Jay Gatsby is a gimmick. The answer would have been the same if the student had asked another AI chatbot without any role-playing. (I know this because I tried it with ChatGPT and Claude.)
Having a chatbot give answers like this forecloses the kind of inquisitive analytical thinking process that comes through human-to-human interaction, whether teacher-student or peer-to-peer. It declares a problem solved.
Imagine the difference if students who had just read aloud passages from the book were paired up and told to play the same role-playing game. That would be creative, challenging, and open-ended. If Khan’s TED Talk teaches us anything, it is how little those selling AI for education understand the processes of human creativity, interpretation, and learning.
Universities are clamoring to partner with companies selling chatbots and generative AI products and providing workshops for faculty on how to teach with AI. Some are circulating Khan’s TED Talk; there is, after all, no substantial difference between the Khanmigo tutor and the chatbots universities are now subscribing to for their students and faculty. The value of generative AI to higher education is simply presumed, because it is impressive technology being sold to us by powerful corporations.
If AI holds any potential to improve education, it will not be by enabling shortcuts to humanistic learning. The humanities and other disciplines can thrive in the age of AI only by fostering and showcasing distinctively human capacities. And as AI becomes embedded in K-12 education, the importance of teaching students in universities to think for themselves will only grow. In an educational environment that is otherwise saturated with AI, the humanities can provide a human-centered haven.
There may be advanced research applications for AI in the humanities, for example, in decoding ancient languages. Certainly history, literature, philosophy, and other programs should be teaching about AI and the relationship between technology and society. But when it comes to the teaching and learning done at universities, opening the door to AI is inviting in a virus that can destroy the enterprise itself.
I worry that universities have been too quick to accept the premise that society in general and universities in particular are likely to benefit from generative AI.
The temptation for universities to go all-in on AI is understandable. Anyone can see that we are only at the beginning of a new stage of the knowledge revolution and that nearly all realms of human labor will be affected. Universities naturally want their students to be employable in a changing economy that looks set to rely on this emerging technology.
They also want their faculty to use the new technology to be competitive in research and not to appear to be falling behind when new teaching methods emerge. In addition to partnering with technology companies to make AI tools available to students and faculty, universities are busy drafting guidelines for responsible AI use. They’ve created courses and online modules to show faculty how to incorporate AI into their teaching.
The presumption is always that generative AI is preordained to play a positive role in teaching and learning and that the risks of harming the way students learn (let alone the integrity of their degrees) can be controlled.
The arguments for integrating AI into teaching are framed around benefits in “efficiency,” “speed,” and “freeing time,” as if these qualities will necessarily improve the students’ experience or what they learn. I have heard repeatedly that using AI to reduce instructors’ supposedly tedious work, such as grading, syllabus-writing, and course preparation, will free time for more one-on-one faculty-student interactions.
But I am highly dubious that the time savings will go from one to the other. And if universities do not require their faculty to do their own grading and prepare their own course materials, they will inevitably dilute the value of their product.
The integrity of both the teaching and the learning done in universities is bound to be harmed by this AI leap of faith. For students interested only in a grade on the way to a credential, there have always been shortcuts, from outright cheating to what we might call bullshitting. Teachers tend to be adept (albeit imperfectly so) at picking out the bullshit artist who tries to get by, out of necessity or choice, with minimal engagement with whatever they are meant to master. But the marketing of generative AI as a learning tool by both corporations and universities is turning otherwise good students into bullshit artists — a problem that accelerates as good students see the bullshit artists rewarded and their own hard and naturally imperfect work comparatively devalued. Even the most ambitious and intellectually curious students will take the shortcut if they see themselves insufficiently rewarded for doing the hard work themselves.
The humanities are particularly vulnerable to being undermined from within by AI. If all we want is answers, Khanmigo can provide them and can even parrot F. Scott Fitzgerald’s literary creation in the process. If we want to grade faster or understand the gist of a book or article instead of reading it, generative AI tools can help with that too. And if we want to train students to produce comprehensible texts, rather than creatively compose them, AI can do that all on its own. If we embrace that model, the humanities are indeed headed for the irrelevance some predicted long before the ready availability of ChatGPT.
Much ink has been spilled on the humanities disciplines’ mixed record of demonstrating their utility to students. Defenders of the humanities correctly point to the important skills students develop, such as writing, analytical thinking and reading, and argumentation, which make them better at almost any career, and to the power of knowledge to make people more complex thinkers.
Equally important, however, the study of human culture acts as both a generator of creativity and a storehouse of human knowledge. Without all of those people writing (and painting, and composing, and so on), the machines would have no corpus from which to learn. But as computer-generated imitations of human creativity become better and better, distinguishing between the imitation and the human becomes increasingly difficult, to the point that people are bound to question whether one is qualitatively better than the other.
There is an alternative path. It starts by stating emphatically that without preserving the study of human creativity and culture at universities, this mode of human knowledge will be lost. When universities began to sprout about a millennium ago, human thought made no distinction between philosophy, theology, and science: All higher education was about seeking to understand the human, divine, and natural worlds. Over time, disciplines and fields of humanistic study — including literature, history, art, music, philosophy, religion, politics, and law — developed at universities to criticize, think about, and comprehend what we broadly call culture. Now, much of that human creativity and knowledge, compiled in the form of centuries of writing, is being used to train machines to write, compute, and generate images and sounds as humans do. The readily available large language models, chatbots, and other generative AI tools are highly skilled parrots, with a capacity for recitation far vaster than a single human can possess.
The opportunity and challenge that AI presents to the humanities is for scholars, teachers, and students of the humanities to demonstrate that the parrots are poor substitutes for human creativity and human-to-human engagement. We need to make the argument — to administrators, students, and the public — that human creativity is unique and there is value in the intellectual struggle to understand our place in the world. For our societies to maintain our level of creativity and preserve subsequent generations’ abilities to access, understand, and contribute to the totality of human knowledge, students need to develop the ability to think, write, read, compose, and understand without generative AI.
This means some humanities disciplines will have to unshackle themselves from the models of educating and evaluating students that are the current norm. Educators have an opportunity to make their fields more rigorous, because they will have to demand more from students than what the parrots can deliver.
To expect students to master a deep understanding of texts without AI, educators will have to engage students in environments where human intellect is on its own. To train students to produce texts and other forms of human thought, experimentation, argumentation, and creativity, educators will need to shift away from modes of teaching and assessment where AI provides seductive shortcuts. Likely more of the educating will need to be done in the classroom itself. It might mean smaller classes, a tutorial model, writing assignments that are more closely supervised and scaffolded, and placing greater weight on oral and written examinations that require true mastery of the subject materials. The growing field of digital humanities has a role to play here too, as new tools for digital storytelling, analysis, and exhibition have created novel ways of organizing and presenting humanities research that are still wholly human in creation and curation.
The more our lives get pulled into an algorithmic vortex, the greater the desire, especially among younger people, for real life not lived online or in front of a screen. People are not buying LP records and film cameras purely out of nostalgia. Jonathan Haidt’s “The Anxious Generation,” about the ill effects of smartphones and social media on young people, has sat high on the nonfiction bestseller lists for well over a year. And we have already forgotten that the much-hyped educational promise of laptops in classrooms turned out to be something worse than a mirage. Yet our societal embrace of AI as the next essential technological development, and our universities’ rapid embrace of the technology companies’ claims about the benefits to education, look set to harm the intelligence and happiness of another generation.
In this environment, the human creativity at the essence of the humanities is their greatest asset. Purveyors of the humanities can chase the tides, accepting the argument that training “prompt engineers” is the future for all of us in all fields, and get washed away.
Or we can build an island oasis of human creativity, where students are taught by people who do their own reading, writing, and class preparation, and where they learn to read, write, think critically, and create using their own minds.
The opposite is also true. If we accept the argument that the parrot is as good as the human, why should people develop these skills in themselves? Humanists need to articulate that what’s at stake is our ability to think about ourselves, a job we are dangerously close to handing over to machines.