New Media Lab algorithm brings Halloween scares to the world of AI
Visitors to “The Nightmare Machine” website can help train an algorithm to create scary images
A team from the Media Lab recently debuted an artificial intelligence project called The Nightmare Machine in time for Halloween. The project involves training a deep learning algorithm to generate scary depictions of buildings and human visages.
Visitors to the project website, nightmare.mit.edu, can vote on the scariness of computer-generated “Haunted Places” and “Haunted Faces” to help train the algorithm to create optimally scary images.
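The team has not published the details of its pipeline, but the idea of turning visitor votes into a training signal can be illustrated with a minimal sketch. The snippet below is purely hypothetical (the names Rating and scariness_scores are illustrative, not the team's code): it simply aggregates crowd votes into a per-image scariness score, which could then guide which generated images the algorithm favors.

```python
# Hypothetical sketch: aggregating visitor votes into per-image scariness scores.
# The Nightmare Machine's actual training pipeline is not described in the article;
# the names below (Rating, scariness_scores) are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Rating:
    image_id: str  # identifier of a generated "haunted" image
    scary: bool    # True if the visitor voted the image as scary


def scariness_scores(ratings):
    """Return the fraction of 'scary' votes each image received."""
    votes = defaultdict(lambda: [0, 0])  # image_id -> [scary_votes, total_votes]
    for r in ratings:
        votes[r.image_id][0] += int(r.scary)
        votes[r.image_id][1] += 1
    return {img: scary / total for img, (scary, total) in votes.items()}


# Example: three simulated visitor votes on two generated faces.
sample = [Rating("face_001", True), Rating("face_001", False), Rating("face_002", True)]
print(scariness_scores(sample))  # {'face_001': 0.5, 'face_002': 1.0}
```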
The Tech reached out to the team, which comprises postdoc Pinar Yanardag, research scientist Manuel Cebrian, and associate professor Iyad Rahwan, to learn more about the motivations and goals of the project.
The Tech: What motivated your team to come together for this project?
Manuel: Following the tradition of MIT hacks, we wanted to playfully commemorate humanity’s fear of AI, which is a growing theme in popular culture. We thought it fitting to explore how machines themselves can generate scary content. So we launched the Nightmare Machine, a website that showcases horror imagery created by cutting-edge artificial intelligence.
Pinar: We know that AI terrifies us in the abstract sense. Scholars have long commented on the uncanny valley, the phenomenon in which people feel eeriness and revulsion toward robots that appear almost, but not exactly, like real human beings. But can AI elicit visceral reactions more akin to what we see in a horror movie? That is, can AI creatively imagine things that we find terrifying?
TT: How will you evaluate the effectiveness of your machine learning algorithm and the capability of the AI?
Pinar: It’s interesting to note that the generated faces are equally creepy from the AI’s point of view, but people find some of them quite scary and others not so much. That reveals that there is extra information in how humans perceive horror that can be exploited to make even scarier faces. Maybe in the future, we can generate “personalized” horror images, where we tailor the generation process to the individual’s data.
TT: What’s the next step after this project?
Manuel: For now, this is just a fun experiment, in the spirit of Halloween, to explore a new way in which machines can scare us in a more visceral sense.
Iyad: Our research group’s main goal is to understand the barriers to cooperation between humans and machines. Psychological perceptions of what makes humans tick and what makes machines tick are important barriers for such cooperation to emerge. This project tries to shed some light on that front, of course in a goofy, hackerish Halloween manner!
TT: What has the reaction been and how much participation have you seen?
Manuel: So far, we have collected over 800,000 individual evaluations of our fully computer-generated images, and we exceeded one million visitors in just one week! We’ve also gotten encouraging feedback through social media channels. Here are some of our personal favorites, chosen for how insightful they are:
—drstefdirusso from Twitter: “Just checked out @nightmare_mit. What scares me more than the images is that a computer knows no boundaries as to what is too grotesque...”
—dia80, from hackernews: “Deep torture, anyone? Gradient descent on stimuli to get what you want out of adversaries. Kind of scary.”
—Andrew McAfee from Twitter: “‘Nightmare Machine’ is cool, but how hard was it to make the 2016 debate look terrifying?”
—LPSandroni from Twitter: “teaching a machine how to be terrifying....just let them live they will discover by themselves :-)…”