Opinion

Your evaluations are meaningful to us

Subject evaluations help departments build better courses

With another end of the semester upon us, finding time to complete subject evaluations can be difficult amid the usual crunch of papers, projects, and exams. I want to briefly describe how these evaluations are used at the Institute, encourage students to fill them out, and offer some suggestions for improving the way subjects are evaluated.

Subject evaluations are largely (if not completely) done online at the Institute. While some programs run their own online systems (in particular Courses 2 and 6), the Institute has also developed an online system that is now used by most other programs, including my home department, Course 16. Speaking as an instructor, I find subject evaluations to be a critical source of information about what worked and what did not work in a subject. By combining that information with the instructor’s sense of the course, as well as student performance, a much more accurate picture emerges of how to improve the subject and how it is taught. I like to think of this as a state estimation and feedback control system — by combining a number of incomplete data sources, a better estimate of the state of the subject can be determined. This lets us make changes to improve the subject and better understand the dynamics of teaching it.

Subject evaluations are also an important part of faculty annual performance reviews and tenure and promotion cases. For example, all faculty in the School of Engineering report in their Faculty Personnel Record (a more detailed form of the curriculum vitae used at MIT for performance and promotion reviews) the average rating of overall teaching effectiveness and overall subject quality for every subject they have taught.

In Course 16, we have a formal reflective process that began over ten years ago. All faculty who teach an undergraduate subject in our department write a memo at the end of each semester to reflect on whether students have met the subject learning objectives, describe what changes were made to improve the subject since the last offering, and suggest changes for the next offering. Then, each faculty member meets with the associate department head to discuss these reflective memos. As the current associate department head, I prepare for these meetings by reading the memo and looking at the student evaluations to ensure that student feedback has been considered in the faculty member’s reflections. These reflective memos and the subsequent conversations are among the most thoughtful discussions I have had about improving teaching.

What might we do better in evaluating our subjects? I will focus largely on what I know best — that is, the evaluation process we use in Course 16. The new online Institute-wide subject evaluation system offers a lot of flexibility in the questions that can be asked, yet I find the default set of questions to be constraining and not representative of the wide range of teaching techniques used throughout the Institute (e.g. there are no questions relevant to problem/project-based learning, laboratory experiences, hands-on activities, etc.). Furthermore, a common set of questions might be developed for CI-M subjects to allow best practices in teaching CI-Ms to emerge.

The reports produced by the online system could also be much richer. Currently, for quantitative questions, the reports include only averages and standard deviations, though I find histograms to be much more useful in many circumstances. Another improvement would be to allow automated comparisons among a set of subjects. Finally, in this age of budgetary pressure, I wonder if we might have an opportunity to move toward a single online evaluation system and in the process not only reduce cost but also learn more about effective teaching by facilitating comparison.

MIT is a data-driven place, and subject evaluations are a critical part of the data used to improve an MIT education. If you are taking a subject, please take the time to complete an evaluation. We want to hear a range of opinions — from constructive criticism to positive feedback.

Best wishes to all for a successful close to another semester.

David L. Darmofal PhD ’91 is the associate department head and professor in the Department of Aeronautics and Astronautics, and a MacVicar Fellow.