News

Over a thousand MIT affiliates respond to The Tech’s LLM usage survey

About two-thirds of undergraduates are “very concerned” about over-relying on LLMs

In the last few years, generative artificial intelligence (AI) tools and large language models (LLMs) such as ChatGPT have significantly changed how MIT students work, study, and live. 

Interested in quantifying the impact of LLMs on the MIT community, The Tech sent out an LLM usage survey from Nov. 4 to Nov. 18. Over 1,000 MIT affiliates responded, including 659 undergraduates, 248 graduate students or postdocs, 18 faculty, and 72 staff members. 

The Tech collaborated with the Undergraduate Association to collect responses and was supported by MIT’s Institutional Research office in survey design and methodology. The anonymous survey asked people about their involvement in AI, how they use LLMs in their lives, and their views on LLM usage in different areas, such as their education, career, and research. 

To account for potential response bias, The Tech collected information about each respondent’s major or department. Overall, undergraduate respondents aligned with student body enrollment demographics. Among graduate students and postdocs, however, representation was disproportionately high from the Mechanical Engineering and Aerospace Engineering Departments, and low from the Electrical Engineering and Computer Science and Mathematics Departments. The data were not adjusted for these representation biases. 

Majority not involved in AI development and/or alignment 

Although over half of MIT undergraduates major in electrical engineering and computer science (Course 6), only about a third of undergraduate respondents stated that they are “currently involved” or “plan to become involved” in AI development, which involves creating or improving AI systems. Less than 25% of undergraduates are “currently involved” or “planning to become involved” in AI safety or alignment work. A much larger percentage of undergraduates (over 40%) reported currently using or planning to use AI applications in their work or research.  

Among Course 6 undergraduates, however, over half are currently involved in or plan to become involved in AI development, and about half are currently involved in AI applications. Furthermore, about a third are currently involved or plan to become involved in AI safety or alignment work. 

Respondents use LLMs for a variety of tasks

Out of 592 undergraduates who responded to the question about LLM usage frequency, nearly half reported using LLMs every day, while around 30% use them a few times a week. This means that only about a quarter use LLMs a few times a month, rarely, or never. Like undergraduates, most graduate students and postdocs use LLMs regularly: out of 239 respondents, 46% use LLMs every day, and 35% use them a few times a week. 

Undergraduates stated that they used LLMs for a wide range of tasks in their academic, personal, and professional lives. Over 80% of respondents use LLMs for studying and explaining concepts from class materials (lectures, problem sets), and around 70% use LLMs for coding help. Over 50% reported using LLMs for completing coursework, and over 50% for summarizing papers. Meanwhile, graduate students and postdocs used LLMs the most for programming help (88%), brainstorming ideas (55%), summarizing papers (52%), and writing essays (48%). 

Over half of undergraduates reported using LLM output for coding help, either directly with minimal edits or as a draft with heavy revisions. For completing coursework and writing essays (e.g. CI-H assignments), this percentage was much lower, at under 20% and 15%, respectively. Instead, over 40% use LLMs on coursework only as a source of inspiration or to check completed work, and under 30% use them that way for essays. 

A large fraction of respondents agreed that LLM usage has helped them save time. Over 35% of undergraduates said that LLMs have saved them a lot of time, while over 45% believe that LLMs have saved them a bit of time. The results for graduate students and postdocs closely mirrored those of undergraduates. 

Although most respondents use LLMs regularly (a few times a week or daily), a majority expressed concerns about LLM usage, especially overreliance. About two-thirds of undergraduates stated that they were “very concerned” about overreliance, closely paralleling the roughly 60% of graduate students and postdocs who said the same. The next greatest concern was inaccurate or misleading outputs: around 45% of undergraduates are “very concerned,” as are over 50% of graduate students and postdocs. 

Views of AI usage in the classroom are mixed

Undergraduates reported that their instructors hold mixed views on AI usage in the classroom. Some encourage use but require reporting, while others discourage or ban usage. Nearly 60% of respondents said that none of their instructors heavily encourage usage, but about half stated that all of their instructors mention AI usage. Although undergraduates noted that their instructors hold a spectrum of views regarding AI usage, over 75% believe that their professors are clear about their expectations for using AI in coursework. 

When asked about the possibility of AI-based learning tools such as teaching assistants or LLM-based oral exams, the majority of students were “very uncomfortable” or “somewhat uncomfortable” with these potential technologies. About two-thirds had unfavorable views of AI teaching assistants, and over three-quarters were against the idea of LLM-based oral exams, in which a student would have a conversation with an LLM that tests their knowledge. 

About half of undergraduates believe that AI usage in coursework should be left up to the professor and departments to decide. Around 20% believe that AI should be allowed with limited use in all coursework, while another 20% believe that AI should not be allowed except for a few situations. Meanwhile, only about 40% of graduate students and postdocs agree that professors should decide on AI usage, and over 30% believe that limited or transparent AI usage should be allowed. 

Influence of AI in academics and careers is not strong 

Only a minority of undergraduates found AI to be an influence on their choice of major (22%) or activities (15%). The role of AI in influencing career choices and course selection was higher, at over 30% for both. Despite this, nearly 70% of undergraduates agreed that proficiency with AI tools would be critical for their careers. However, only about a quarter of respondents believe that MIT is preparing students to use AI tools in professional settings, a view echoed by graduate students and postdocs. 

Respondents wary of LLM usage’s impact on information integrity and education 

The last survey question asked respondents to rate how they felt about the impact of LLM usage in the following categories: economic equality, healthcare access, democratic institutions, information integrity, scientific research productivity, and education outcomes. There were five options for the rating, ranging from “very negative impact” to “very positive impact.”  

According to the undergraduate survey results, over 75% of respondents rated information integrity as negatively affected by LLM usage, and over 50% said the same of education outcomes. Trailing behind these two areas were democratic institutions, at over 40%. On the other hand, more than half of undergraduates rated scientific research productivity as positively affected by LLM usage. Meanwhile, nearly 60% of students weren’t sure about the impact of LLM usage on economic equality and healthcare access. 

Many respondents provided additional comments about their concerns over people’s overreliance on LLMs and the negative impact of LLM usage on learning outcomes. A lab assistant (LA) for several classes noted that students tend to produce lower-quality work and show poorer understanding when over-relying on LLMs, citing the example of students who autocomplete a line of code incorrectly and then ask the LA why it produces an error. While they believe that LLMs can be useful in industry, they don’t think the tools should be used when learning foundational skills. 

Other respondents explained why they barely use LLMs despite their widespread use in education. One said that they restricted their use of ChatGPT to answering “very specific questions” about lectures when they couldn’t determine the answer themselves or were unable to go to office hours. “I’m worried about becoming over-reliant on it and allowing my own research skills to fall into disuse,” they said.

Another said that they insisted on wanting to use their “actual brain” instead of AI for classes, and disapproved of classes such as 7.012 asking students to use AI. “It’s encouraging cognitive offloading and deskilling,” they said. 

Although many comments about LLM usage in education tended to be pessimistic, some took a more neutral or mixed view. “If students rely on AI for cheating and doing everything for them, then the outcome is very negative,” one wrote. However, they believed that if AI were used as an “adaptive tutor for everyone,” it could enable personalized learning and have a positive impact.