Opinion

LETTERS TO THE EDITOR

EdX is committed to ensuring rigor

The Tech has informed me that in his most recent opinion article, Tea Dorminy raises an excellent basic question about MITx and edX content: is it as hard as the content in an MIT course? The Tech has also asked me to respond, and I’m delighted to do so.

When we launched MITx and then edX, we were very clear with ourselves that courses must not in any way be dumbed down: courses offered on edX are in fact as rigorous as the on-campus courses, and are generally taught by the same faculty. We believe that this commitment makes edX unique among platforms. In the February 8, 2013 issue of The Tech, Ethan Solomon wrote favorably about this online content’s rigor. Moreover, I myself have taught 6.002x on edX and 6.002 (Circuits and Electronics) on campus for many, many years, and I can say that the two courses are equally hard!

I also understand that Dorminy argues that OCW should be supported by MIT. I couldn’t agree more. OCW is a great resource (and in fact 6.002x leverages OCW material), and over the past decade it has been a leader in expanding access to education.

—Anant Agarwal, president of edX

The Tech has invited me to add to Anant Agarwal’s thoughts on the issues we are told Tea Dorminy’s column raises. I think Tea raises important issues, and I agree with Professor Agarwal’s thoughts. I’m very excited by what online learning can bring to the world and to our campus, and I am committed to seeing that the instruction of MIT content is of equal rigor, whether it is done in a classroom or through the Web. Having taught an online version of 6.00 (Introduction to Computer Science and Programming), I can attest that while the presentation of content and interactive problem solving may vary because of the medium, the rigor does not: the questions we asked students to solve using online tools were comparable to those we ask of our on-campus students. And let me also echo Anant’s enthusiasm for OCW: MIT is committed to it.

—Chancellor Eric Grimson PhD ’80

A gun for every student is not the answer

A recent article by Tea Dorminy, which advocated gun ownership among MIT students to protect the community from gun violence, left me in disbelief. The author claimed that if MIT students all had guns, they would not have panicked when warned of the presence of a gunman on campus. This claim worried me almost as much as last week’s warning did. A consequence of this proposition, if realized, would surely be that we wouldn’t get any warning at all the next time someone decides to brandish a weapon with the intention of using it. I don’t see how that’s not a frightening thought.

Is taking on a gunman yourself really as enticing as Dorminy makes it sound? I was fond of marksmanship and rifle training a while ago too, but I would do anything to avoid a face-off with someone who wants to shoot people, even if I were equally equipped and sufficiently skilled. In fact, I think most people would fall short of being as good as Bruce Willis or Sylvester Stallone at picking up weapons and effortlessly kicking a bad guy’s butt. Even if valor were abundant, how exactly does Dorminy propose to put a gun in the hands of every student? I know I probably wouldn’t be able to put in the time, diligence, and money required to own one. If there are others at MIT who feel the same way, allowing ownership would leave only part of the campus population bearing arms, which would hardly make all of us feel safer.

I’m not the hero Dorminy wants each of us to be, and I would rather not have to try to be one in a situation that is overrun with villains. Unlike the characters played by Bruce Willis and Sylvester Stallone, we do have the choice of trying to avoid such situations altogether.

—Rishabh Kabra ’14

We are too quick to venerate Nate Silver

You give Nate Silver too much credit in saying that he correctly predicted the 2012 electoral outcome in all 50 states and D.C.

From May 31 to Nov. 6, the FiveThirtyEight blog gave 160 daily forecasts for the number of electoral votes that President Obama would win. Not a single one of those 160 predictions turned out to be exactly right. Every forecast understated the president’s eventual victory. The closest they came was on Oct. 4, when FiveThirtyEight estimated that President Obama would win 321.2 electoral votes. He won 332. But that was the closest.

In 2004, my colleagues and I published an evolving election prediction in The Tech, based on propagating the uncertainty of the state polls. We weren’t the first to do this and obviously we weren’t the last. We also “called” that election correctly, in that The Tech’s final published prediction, on Oct. 29, 2004, was that President Bush was more likely to win, and, lo and behold, he won.
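
To give a flavor of what “propagating the uncertainty of the state polls” means, here is a minimal sketch of that general technique. The states, margins, standard errors, and electoral-vote counts below are invented for illustration; they are not the inputs of the 2004 model.

    import random

    # Invented inputs for illustration only: each state's estimated margin
    # for candidate A (in points), the poll's standard error, and its
    # electoral votes. These are not real election numbers.
    states = {
        "StateA": (+2.0, 3.0, 20),
        "StateB": (-1.5, 2.5, 27),
        "StateC": (+0.4, 3.5, 11),
    }

    def simulate(states, trials=50_000):
        """Propagate poll uncertainty: draw each state's margin from a normal
        distribution, tally candidate A's electoral votes, and repeat."""
        total_ev = sum(ev for _, _, ev in states.values())
        outcomes = []
        for _ in range(trials):
            ev = sum(votes for margin, stderr, votes in states.values()
                     if random.gauss(margin, stderr) > 0)
            outcomes.append(ev)
        expected = sum(outcomes) / trials                # mean; can be fractional
        modal = max(set(outcomes), key=outcomes.count)   # single most likely total
        p_win = sum(o > total_ev / 2 for o in outcomes) / trials
        return expected, modal, p_win

    print(simulate(states))

The mean of the simulated totals is an expectation and can be fractional; the modal total and the win probability are different summaries of the same distribution.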

Not that impressive, I agree, but in general there is no great way to judge the accuracy of an estimate for the probability of an event that only happens once. Our model was public and replicable, like the ones social scientists publish every day, but anybody who tries this is going to end up using the same boring textbook methods. Venerating Nate Silver’s results because they came from a secret model that seems like magic, exaggerating their accuracy, and calling him a “witch” are the opposite of what good scientists should do. Our job is to dispel mysteries!

—Keith Winstein G



3 Comments

1. Anonymous

Nate Silver never "estimated that President Obama would win 321.2 electoral votes." You are confusing his expectation (the average number of electoral votes received across all simulated trials) with what he deemed the most likely outcome. Indeed, before the election, his final map showed all 50 states and DC correctly (meaning he had the probabilities in favor of the eventual winner), including a late shift of Florida to Obama that most other polls didn't catch.

2. Keith Winstein

I'm afraid I don't agree. The #1 forecast on the 538 site is a forecast (aka estimate or prediction) of the number of electoral votes each side would win. Whatever this represents, it never got closer than 10.8 votes to the eventual outcome. It's mistaken to say that 538 got the result exactly right when the headline forecast was never quite right, even across 160 separate forecasts.

Then there are predictions of Obama's chance of winning (#2) and of the popular vote (#3).

The #4 item was an estimate of the state-by-state probabilities of an Obama victory. You make a big deal of the fact that 538 estimated, the morning of the election, a 50.3 percent probability of an Obama win in Florida, and Obama indeed won Florida. Here are three reasons that's not so great:

(1) Estimating the probability at 50.3 percent, and then having the event happen, is just not that impressive! If you flip a coin and estimate the chance of heads at slightly more than 50-50, and then get heads, that doesn't mean much. You weren't "right," any more than you would have been "wrong" if the coin had landed on tails. The 538 blog did not make a call that Florida would go for Obama. It said the probabilities were almost even.

(2) We can assess the accuracy of the probability estimates by looking at predictions in multiple states. E.g., what if a poll aggregator had predicted a 50.3 percent probability of an Obama or Romney win in each state, and each state was won by the candidate to which it had assigned slightly more probability? By your metric, the site did fantastically, but in reality that is terrible performance. An event with 50.3 percent probability should only occur about 50.3 percent of the time. If it occurs every time, the probability estimates are wrong.

(It's a little trickier because the probabilities aren't independent, but 538 doesn't let us see the realizations to figure out whether this makes a difference.)
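
To put a rough number on this point, here is a minimal sketch assuming, purely for illustration, nine close states each given a 50.3 percent chance and treated as independent (exactly the simplification just noted); the count of nine is made up.

    # Illustration only: if nine independent events are each assigned a
    # 50.3% probability, how often should all nine occur?
    p, n = 0.503, 9
    p_all = p ** n
    print(f"Chance of going {n}-for-{n} if the estimates were right: {p_all:.4f}")
    # Roughly 0.002. Going nine-for-nine on supposed near-coin-flips suggests
    # the stated probabilities were too close to even, not that the forecast
    # was uncannily accurate. (Correlated state outcomes soften, but do not
    # eliminate, this point.)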

(3) When you mathematically assess how closely FiveThirtyEight's odds forecasts matched the real outcomes, you find that it was empirically the third-most accurate site, behind Votamatic and the Princeton Election Blog, and ahead of two other poll aggregators. See http://appliedrationality.org/2012/11/09/was-nate-silver-the-most-accurate-2012-election-pundit/

So, as far as we can empirically evaluate their accuracy, FiveThirtyEight's election-day odds estimates were literally in the middle of the pack.
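
One standard way to do that kind of assessment is a Brier score, the mean squared difference between the forecast probability and the actual outcome; I am assuming that is roughly what the linked comparison computes, and the numbers below are invented for illustration rather than taken from any 2012 forecast.

    def brier_score(pairs):
        """Mean squared error between forecast probabilities and outcomes
        (1 = event happened, 0 = it didn't). Lower is better; always
        saying 50% scores 0.25."""
        return sum((p - outcome) ** 2 for p, outcome in pairs) / len(pairs)

    # Invented (probability of an Obama win, actual outcome) pairs for a
    # handful of hypothetical states; these are not real 2012 numbers.
    site_a = [(0.91, 1), (0.503, 1), (0.79, 1), (0.16, 0), (0.02, 0)]
    site_b = [(0.97, 1), (0.62, 1), (0.88, 1), (0.08, 0), (0.01, 0)]

    print("site A:", round(brier_score(site_a), 4))   # ~0.065
    print("site B:", round(brier_score(site_b), 4))   # ~0.033
    # Both sites put the higher probability on the eventual winner in every
    # state, yet site B scores markedly better: "calling" every state is
    # not the same as having the most accurate probabilities.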

3. Keith Winstein

(4) So far we have only been talking about the forecast made on the morning of election day. But this is the least interesting forecast, because by that point the actual outcome will be known within a few hours anyway! Because FiveThirtyEight uses different methods before election day (incorporating the stock market and other economic data, with a diminishing contribution) versus on election day (polling only), the election-day accuracy may tell us little about the accuracy one month prior.