Course 6 and 18 faculty members share mixed perspectives on AI in the classroom
Some faculty encourage the use of AI, others ask students to report AI usage, and a few ask students to avoid it completely
Recently, the rise of generative AI technologies, along with accompanying calls for AI fluency, has shaped how undergraduate computer science programs approach their curricula. At MIT, faculty have been experimenting with how best to handle AI usage. While some electrical engineering and computer science (EECS, Course 6) faculty remain optimistic about AI in education, others have banned AI in the classroom, citing the need to build strong technical foundations in Course 6 classes.
According to MIT’s undergraduate majors count for 2025-2026, 44% of the undergraduate population is majoring in Course 6; double majors in EECS make up 33% of all double majors.
Given MIT’s top rankings in computer science, along with aerospace, chemical, computer, materials, and mechanical engineering, the trajectory of these programs amidst AI usage remains under scrutiny. Will the caliber of MIT engineering students deteriorate or improve, especially those in coding-intensive majors like computer science and engineering (Course 6-3) or artificial intelligence and decision making (Course 6-4)? How will courses adapt to advances in generative AI?
To answer these questions, The Tech reached out to Samuel Madden, College of Computing Distinguished Professor and the Faculty Head of Computer Science, MIT EECS. Madden acknowledged the impact of AI on teaching and research in EECS, pointing to the addition of Course 6-4 in Fall 2022 and the class-level exploration of AI usage in assignments.
Course 6 classes are split broadly into two categories: applied and theoretical. Classes where students are expected to code, like Introduction to Computer Science Programming in Python (6.100A and 6.1000), Fundamentals of Programming (6.1010), and Software Construction (6.1020), tend to adopt a more open-ended approach to AI. For example, 6.100A permits AI use for understanding concepts and even advertises an AI tutor, Pytutor, on its website, though it prohibits using AI to generate code. Problem sets still comprise a large portion (35%) of the total grade.
6.1000 also has a fairly liberal AI policy that welcomes AI usage for conceptual understanding and for generating code examples, but prohibits students from using AI to generate code solutions. In their reasoning on the class website, 6.1000 faculty note that “seasoned programmers frequently collaborate and increasingly use AI tools to boost their productivity,” but such programmers have solid foundations. The website says that 6.1000 aims to develop independence in programming skills.
However, the weighting of psets is significantly lower than in 6.100A, accounting for only 20% of a student’s final grade. Furthermore, the autograder score is worth two-thirds of the credit on each pset, while the remaining one-third comes from a checkoff conversation with course staff. According to the class website, checkoffs are short interviews during office hours about code design and overall conceptual understanding, which could address AI usage.
6.1010 uses a red-yellow-green light visual graphic for its policies around labs. Collaboration with chatbots and other AI tools is yellow (maybe): discouraged for most purposes and forbidden for code-level work. Consultation of any outside code not written by the student is red (no), including work produced by AI-based code-completion tools like GitHub Copilot.
By contrast, 6.1020 takes a stricter stance, outright prohibiting the use of ChatGPT, Copilot, or any other tool to generate code. The weighting of psets, however, is a comparatively high 45%.
More theoretical and algorithmic classes like Mathematics for Computer Science (6.1200) and Introduction to Algorithms (6.1210) tend to prohibit AI use entirely. For 6.1200 in Fall 2025, the class bans the copying of completed solutions from generative AI, and psets are weighted at just 25%, a significant decrease from Fall 2023, when they were weighted at 35%. The most drastic change comes from 6.1210: Fall 2024 grading assigned 25% to psets and 75% to exams, while Fall 2025 assigns 5% to psets and 95% to exams.
Professor of Computer Science Manolis Kellis believes AI usage has both benefited and limited Course 6 research, classwork, and long-term initiatives. Nonetheless, Kellis isn’t as worried about AI’s impact on Course 6. “Students who care learn more, deeper, faster, better,” he wrote. “Students who don’t cheat more easily.” Kellis pointed to new AI-powered resources for students, perhaps a nod to generative AI’s ability to act as a tutor and supplement in-class learning. Kellis teaches Machine Learning for Computational Biology (6.8701) and allows ChatGPT on psets but requires a link to the chat.
Since Mathematics (Course 18) is a popular double major for Course 6 majors, especially for Course 6-3 and 6-4 students because of requirement overlaps, The Tech also reached out to Professor of Mathematics and Associate Department Head William Minicozzi. According to Minicozzi, the department is now requesting that all undergraduate and introductory graduate classes include in-person assessments. For example, Geometry of Manifolds (18.965) had an in-class midterm and will also have a three-hour final exam; before this term, the grade was solely based on psets. Some Communication Intensive in the Major (CI-M) classes use oral presentations instead of exams. “AI was a big factor in this change, but not the only factor,” Minicozzi wrote.
Minicozzi also surveyed a few other Course 18 classes and found a common theme of reducing weights for psets. In Multivariable Calculus (18.02), psets now count for just 20% of the final grade, whereas they counted for 30% of the final grade in Fall 2024. In general, though, Minicozzi believes that AI policies are being subjected to experimentation in various ways. “Some faculty encourage the use of AI, others ask students to report AI usage, a few ask the students to avoid it completely, and one class has instituted pset checks where students are given a short oral exam about their psets,” he wrote.
Minicozzi is “cautiously optimistic” about the eventual impact of AI on [Course] 18 classes. “That said,” he conceded, “it certainly does make grading more difficult.”
Aside from its impact on academics, there has also been discussion of the ethical use of AI. According to Madden, MIT EECS has been working jointly with the Social and Ethical Responsibilities of Computing (SERC) initiative of the Schwarzman College of Computing to incorporate SERC-related material and modules into its classes.
The new Course 6-4 major includes a SERC requirement, chosen from a list of subjects that include “substantial consideration of the ethics and broader impacts of AI,” such as Ethics of Computing (6.C40J) or AI, Decision Making, and Society (6.3950). Madden expects issues such as AI ethics and AI safety — the design of AI systems to prevent unintended harm to humanity — to “increasingly be integrated into [EECS] modules and courses.”