SHASS professors share wide-ranging views on AI in the classroom
Prof. Graham Jones: “Practically everyone in SHASS will need to take some kind of stance on AI”
Since OpenAI launched ChatGPT in November 2022, the chatbot has generated trillions of words, upending traditional modes of humanistic education in the process. At MIT, all students are required to take eight humanities, arts, and social science (HASS) classes, including two introductory Communication-Intensive HASS (CI-H) courses, but the vast majority major in technical subjects. Thus, for HASS professors who wish to meaningfully engage with their students, the challenges and opportunities posed by AI may be particularly profound.
Large language models (LLMs) can summarize philosophy papers, make flashcards about the Civil War, and translate ancient Chinese poetry. One of their primary uses, though, is in the act of writing itself: workshopping the user’s words or producing new ones out of whole cloth. A Sept. 2025 National Bureau of Economic Research study found that 24% of consumer ChatGPT messages asked for writing assistance, making it the third-most common query category after “Practical Guidance” and “Seeking Information.”
It’s not hard to see why. When The Tech asked ChatGPT to generate a sentence to follow the one opening this article, the tool came up with the following: “Its influence has forced educators and institutions to rethink not only assessment and authorship, but the very skills students must cultivate to thrive in an AI-saturated world.”
The sentence is grammatically and factually correct. It expresses a cogent point that flows naturally from the initial prompt. If a prospective news writer turned it in for editing, they’d probably be told to trim the word count, and maybe to mention MIT, but to keep the main idea.
So, in an uncanny echo of ChatGPT’s pronouncement, we might ask the following: What happens when students never learn how to write by themselves? What will be the effect on their academic, professional, and personal lives? What and how will they think? How might a HASS instructor structure their classes to accommodate this new world while continuing to instill fundamental skills?
Graham Jones, a professor of anthropology and the Undergraduate Education Liaison for the School of Humanities, Arts, and Social Sciences (SHASS), has helped organize several workshops this fall centered on the problems and possibilities of AI. The first, led by Professor Dwai Banerjee of MIT’s Program in Science, Technology, and Society, introduced key concepts about LLMs, while the second, led by Professor of History William Broadhead, focused on “AI-proofing” reading or writing assignments.
A third workshop, led by Per Urlaub, the director of Global Languages, will discuss how AI can enhance learning. Finally, over IAP, faculty will workshop their spring syllabi. Chanh Phan, the SHASS Academic Programs Specialist, said that 86 faculty members have registered for one or more of the first three workshops and that about 50 attended the first two.
Jones has observed a wide range of views in the workshops. Still, he said, “it has become unavoidable that practically everyone in SHASS will need to take some kind of stance on AI.” He believes that his own teaching reflects the SHASS faculty’s “deep ambivalence” regarding AI. This semester, Jones is teaching two courses. Humane User Experience Design (21A.S10), co-taught with computer science professor Arvind Satyanarayan, considers how to build AI companions. In his more traditional Magic, Science, and Religion class (21A.520), Jones has pivoted towards “in-class experiential activities” in an effort to “AI-proof” coursework.
Although Jones is unsure whether MIT or SHASS can create generally applicable AI policies, he thinks that any assessment of the recommendations from the forthcoming Task Force on the Undergraduate Academic Program (TFUAP) will need to consider AI.
Some might wonder whether these efforts are even worth it. In a widely discussed piece for The New Yorker, author and professor Hua Hsu writes that humanities professors may ask, “Why bother teaching writing now?” He describes teachers inured to widespread AI usage and students unable to get through class without it.
Still, Jones remains optimistic. In his opinion, students “want their education to help them be their authentic selves,” and one of SHASS’s strengths lies in its “personally meaningful” modes of learning. He believes that even though teaching methods will require adaptation, the humanities’ historical benefits can endure alongside AI tools.
The Tech reached out to several CI-H instructors to ask about their strategies for teaching and learning in an age of AI.
Urlaub hopes that his CI-H course, European Thought and Culture (21G.059), can help his students grow as “empathic listeners, close readers, critical thinkers, sophisticated writers, and passionate debaters.” He believes that in the modern world, all the qualities above will require an understanding of AI’s role as a “mirror” to human values. Urlaub highlighted the humanities’ ability to “offer laboratories for exploring the affordances and limitations of technology.”
Urlaub has incorporated AI into his teaching with a custom GPT called “PostwarSocrates” and a “low-stakes” assignment wherein students used Perplexity to identify the locations of scenes from a European TV show. Although he prohibits his students from using LLMs in their weekly writing reflections, for the final project, he has invited students to pick from six distinct categories of AI use. These range from complete independence to having an AI “co-drafter” with a human “editor.” Students who choose to use AI are asked to submit a 500-word reflection on their choice and its effects on their learning. Urlaub hopes that this multifaceted approach will “help students evaluate technologies critically and constructively” and guide them towards “productive engagement rather than superficial use.”
On the other hand, Ken Urban, a Senior Lecturer for Dramatic Writing in the Theater Arts department, believes that “AI is robbing us of the difficult work of thinking and creating.” He is currently teaching the CI-H class Script Analysis (21T.131) and Writing the Full-Length Play (21T.350), a more advanced course. Students in both classes must sign an honor code at the beginning of the semester. For most assignments, they cannot use AI, but when they are permitted to use it, they must cite it.
If a student’s work seems to violate the policy and “features the common tropes of AI-generated text,” Urban researches the likely sources online, then speaks with the student if his initial suspicion is supported. He says that students who violate his policy usually “didn’t realize the ethical reasons behind [his] rationale.”
Philosophy and Women’s and Gender Studies Professor Sally Haslanger has taught at MIT since 1998. As a defense against AI, she now requires students in her CI-H course, Classics of Western Philosophy (24.01), to draft their first two papers in closed-book, closed-note in-class exams, revise them after receiving instructor feedback, and take an oral exam based on their third paper. Haslanger noted, however, that writing exams by hand can be difficult for students. She hopes that MIT will invest in technology for computer-based, closed-book, closed-note exams and that the Subcommittee on the Communication Requirement will reassess the CI-H requirements in light of AI-induced changes.
Benjamin Mangrum, an associate professor of Literature, has taught CI-H courses every year since joining MIT in 2022. In that time, he has seen both students who “hate” AI and those who are “pretty explicit in their belief that it makes writing and other traditional humanistic activities seem obsolete.” In his current CI-H course, Reading Nonfiction (21L.050), which centers on the “craft of writing,” Mangrum prohibits AI use. However, he has revised his policy each semester and is “not opposed” to all types of AI use. He also uses AI to generate writing for in-class student analysis, as “it tends to be very competent but bland, and it’s useful to think about those qualities of prose.”
Mangrum believes that AI has intensified an existing popular view of the humanities as irrelevant. He said, “It tells many people what they already believe about the value of spending time studying history, philosophy, art, literature.”
Noel Jackson, also in the Literature Department, has addressed this problem by developing classes in which AI itself is “less relevant.” Last semester, he taught a course on the practice of walking in literature and film. Every week, students had to document their walks in journal entries that reflected on relevant reading material. Jackson knows that a student could feed a chatbot enough information to generate a “plausible” entry. Still, he noted that “it would be easier, and I think more enjoyable too, if that student simply took a walk.”
In another course, Jackson has centered his pedagogy on questions generated by students. He believes that if students take the time to enjoy their own thoughts and those of their classmates, “we may find that the ‘superintelligence’ that purports to generate definitive answers for everything is not nearly as interesting or as useful as what we can accomplish ourselves.”