News

3.091 to return to traditional lecture format

Prof. Cima says edX trial results ‘not yet conclusive’

CLARIFICATION TO THIS ARTICLE: 3.091 is returning to a lecture format for the Spring 2014 semester. No decisions have been made on whether to continue the online assessment format for Fall 2014 and beyond. A full report on the class will be submitted in late February.

3.091 is reverting to its original lecture/recitation format as the semester-long experiment comes to an end. Findings from the experiment are not yet conclusive, 3.091 Professor Michael J. Cima stressed in an email to The Tech; the full report to the Committee on the Undergraduate Program is due at the end of February.

The experiment took elements from Cima’s 3.091 course on edX, a massive open online course platform developed by MIT and Harvard, and translated them to classroom learning. Before each lecture, students were instructed to watch online learning sequences. Instead of p-sets, midterms, and a final, students in 3.091 took a series of 37 online assessments throughout the semester.

Despite the changes, pass rates this semester did not differ much from those of previous years. According to Cima, 5 percent of the class failed last semester, compared with 6 percent in 2012 and 4 percent in 2011, a difference that is not statistically significant. However, “outcomes, as measured by success at solving problems, is substantially improved over the traditional format,” said Cima.

The number of fifth-week flags was substantially lower this term: only two students were issued flags this year, compared with the 29 given out in 3.091 in fall 2012. Cima said the new format of the class seemed to help the lower half of the class more.

“My impression is that the new format had greater positive impact on the less motivated of the students,” Cima said. “That is, those students who would have been in the lower half of the class substantially improved their performance. This is why we had only one tenth the number of fifth week flags as compared with past years.”

Conversely, he commented that the more motivated students did not seem to have as much of a chance to shine with this new format.

Sabrine A. Iqbal ’17 felt that the assessments shortchanged the material. “Since passing a certain number of assessments was almost the only requirement, it meant most people didn’t really care about the material,” Iqbal said. “Also, the assessments could be taken multiple times, so many people I know would not even study and would just guess the answers because they could go back and try again.”

Due to concerns that some students would try to pass the class by only taking the assessments, Cima implemented mandatory attendance: students had to attend at least 80 percent of lectures and 80 percent of recitations or risk failure. Cima told The Tech in September that the policy was put in place to preserve the integrity of the experiment as well as to avoid breaking rules that bar GIRs from being offered completely online. “The data supports that attendance does correlate with outcomes,” Cima said.

Student Reactions

Subject evaluations did not differ considerably from previous years, with the overall rating of the subject slipping from 4.6 in Fall 2012 to 4.2 in Fall 2013.

Some students appreciated what the experiment did for the class. For Helen L. Zhou ’17, the new format made for a lower-stress environment. “Multiple attempts at each assessment allowed students to be more relaxed while taking these assessments,” Zhou said.

Stephen Guo ’17 also appreciated the reduced stress. As he pointed out, the assessment format meant that there were fewer points of concentrated stress, though “it was challenging because people essentially had to pass on a week by week basis,” he said.

Others, such as Anna Jungbluth ’17, did not regret taking the class but said the new format is not something they would recommend. Compared to her other classes, Jungbluth felt as if she didn’t learn much. For her, midterms were the times when she sat down and fully learned the material, “the time when I understand the connections between several topics.”

The assessments meant she never really went back over the material; once she passed an assessment, she wouldn’t worry about it anymore, Jungbluth said.

Jonathan T. Morrell ’17 found that the online learning sequences gave him a lot more flexibility in learning, but he thought that the assessments should have been better at evaluating student understanding. “Currently they assess the answer you get from doing a problem, however the answer is obviously secondary to the process you take to get there. If there was a way to better assess that, the class would be much improved,” Morrell said.

Nevertheless, according to TA Zhaohong Han, the new format changed students’ approach to learning; instead of coming in right before exams, they would “pay more attention to the content in recitation now and ask more questions every time.”

TA William F. Dickson ’14 felt that students “understood connections between stuff earlier in the semester, much better than students in the past” as a result of the assessment-based format.

He acknowledged that “the only problem might be that there is no cumulative exam.” However, he felt that had students been given a final exam, the majority would not have had to cram.

The course ran into some bugs early in the semester. One issue many students encountered, according to Dickson, was scheduling conflicts with the original assessment times; the staff resolved this by allowing extensions when conflicts arose. Some students also ran into problems with the assessment software itself, which the staff fixed as the semester progressed.



3 Comments
1. Anonymous, about 10 years ago

took 3.091

still know nothing about chemistry

2. Lorenzo Sadun '81, about 10 years ago

The reactions to the flipped class are fairly predictable. The same thing happens across the country, and across subjects. Student learning outcomes generally go up, especially when measured by mastery of concepts, rather than rote problem-solving. Student PERCEPTIONS of how much they learned generally go DOWN, at least for the first year or so. (My student evaluations dropped by almost an entire point when we flipped the calculus classes at U. Texas.) Then the instructors get the hang of the game, and student evaluations rebound and settle higher than they started.

3. Anonymous, about 10 years ago

This is completely embarrassing. You're a seasoned researcher and entrepreneur, not a God. Of all people, you should know best that you're capable of failure.

It's no secret to anyone that your experiment failed. Your strategy of having separate content in class, online, and in recitation was poorly conceived. It operated on the assumption that people would do the online materials and listen in class.

On family weekend, you presented data which showed that a minority of students completed the online materials. You also showed that instead, students were visiting the assessment room multiple times. You characterized this as a success because it was evidence that students were being forced to learn.

The truth is that students went on Day 1 to figure out what the hell the exam was about, Day 2 to pass, and on Day 3 the material promptly exited their head. You used non-cumulative unit tests which enabled students to not learn anything.

What's most embarrassing is that you failed to adapt despite being well aware of your failure. Your report is predictable. It's going to say that there's evidence your strategy was a success, but there's not enough data to know for sure. You're going to point out how last year students were able to avoid learning a concept throughout the entire semester. You'll mention how students could just skip a concept question multiple times and still pass, and that's bad. You're going to say that your strategy, which forced students to learn each unit, is better because students learned more. That's what this quote means, to anyone who was wondering:

"outcomes, as measured by success at solving problems, is substantially improved over the traditional format"

Unfortunately though, you don't have data to show students were able to retain content beyond 10 days after the material was taught. Let's call that a happy accident because deep down you know, and TAs and students have told you, that students didn't retain anything. They just wanted to pass the assessments as naively as possible and get on with their day.

Your experiment failed. It's not a big deal, but do us a favor and be honest about the shortcomings. Plenty of people believe in MOOCs. Just because your implementation failed doesn't mean they all will. MIT is smart enough to recognize this. It's okay that you failed.