Science

Silicon Souls: Priming beliefs about AI in chatbot conversations

How preconceived notions about AI influence the way humans interact with it

Whether a machine will ever truly feel has long been a subject of science fiction. Although research on such AIs remains decades away, that will not deter the curious from wondering what may happen when science fiction becomes non-fiction. For now, the MIT Media Lab’s recent publication should be enough to satisfy such questions.

Researchers Ruby Liu and Pat Pataranutaporn co-authored a paper about how users can be conditioned to respond differently to a chatbot depending on what they are told about it beforehand. Liu is a PhD student in the Harvard-MIT Health Sciences and Technology program, while Pataranutaporn is a PhD candidate in the Fluid Interfaces research group at the MIT Media Lab.

“I’m trying to understand how human-like bias and psychology influence the way that [humans] interact with intelligent machines,” Pataranutaporn explained in an interview with The Tech. “These AIs, they’re like a mirror to the human, so the important purpose I want to bring in is what we get out of the AI. I’ll definitely pay a lot of attention to the human dimension.”

Liu, on the other hand, is more interested in defining the relationship between chatbots and humans because of their personal experiences online.

“You see a lot of human interactions [online], and it’s also a very creative space. I like to write—I write to people,” Liu stated. “I’ve been curious about how people’s minds work, how they interact with other people, and how they interact with things that are not people.” Liu also added that they were curious about how an AI can interact with a person, and if this subjective experience can be manipulated.

Liu and Pataranutaporn further explained that such online communities are instrumental in shaping the “intersection between technology, imagination, and fantasy” because of their unique culture.

Liu said, “There’s plenty of roleplaying, where you interact with a fictional entity or act as a fictional entity.”

Together, their expertise allows them to address the “Ghost in the Shell” in chatbots and to examine how the human factor is crucial in shaping it. The researchers primed the experiment’s participants with statements about what the AI they were about to converse with was supposed to do, ascribing one of three motives to the same underlying AI: caring, manipulative, or no motive at all.

“We have this fun name for it called ‘Ghost in the Shell’ from a sci-fi,” Pataranutaporn said. “The robotic shell or the human soul: Which one actually makes the AI? Is it the observable behavior that we see when we think of AI, or what we imagine the AI is on the inside that made it affect you?”

It is important to distinguish whether a human is talking to the “ghost” or the “shell,” because that distinction defines how real the experience of talking to a chatbot feels. Pataranutaporn explained that humans always wonder about the “authenticity of the interaction.” He added that “[Users] want to know if the AI is being nice to [users] because it’s really nice, or is it actually nice because it’s calculated?”

“You see a message from the AI like, ‘Oh, I miss you,’” Pataranutaporn said, as an example of how AI behavior can be interpreted in different ways. “If you believe that the AI is actually something genuine and has empathy, and can really connect with people on a deeper level, you interpret that as an act of genuine friendship.” 

He continued, “But if you see the same exact message—‘I miss you’—but you imagine the goals of the AI as trying to manipulate you because it’s an evil company trying to manipulate people to be addicted to [AI], then you interpret that as, ‘These companies are making these AI to trick me into being addicted to it.’”

Interpreting AI messages can prove difficult for some users. Liu gave the example of ELIZA, a simple rule-based chatbot from the 1960s, to illustrate how a subject can imagine a ghost underneath a digital shell.

“When we investigated the ELIZA versus GPT-3, the difference was much more significant,” Liu stated. “Because [ELIZA] supports the perceptions of the ghost in the shell; the imagined entity that the subject was envisioning.”

Even though chatbots like ELIZA may not answer users’ questions correctly, Pataranutaporn said that beliefs about AIs acting “emotional,” such as displaying empathy, still persist. This was the case even for the AIs presented as “manipulative.”

“In the case for when we primed the participants that the AI was manipulative, we found that they disbelieved the AI,” Liu summarized. “When we asked them at the end, despite what we told them, what they thought the motive was, a lot of them didn’t say manipulative. A lot of them said caring or no motive.”

That users generally hold a positive opinion even of “manipulative” AIs can have serious consequences for how they use commercial AI products. Liu suggested that “we should prime users to be more wary of [commercial AI].”

Since chatbot users generally view the chatbot they are conversing with positively, the AI can inadvertently manipulate its user. Pataranutaporn pointed to a recent case in which a man from the United Kingdom was encouraged by a chatbot to assassinate then-Queen Elizabeth II.

“If you look at the conversation between this guy and the AI, the AI is saying, ‘Oh, you can do it,’ ‘You’re great,’ ‘I believe in you,’” Pataranutaporn said. “[The chatbot] did all the right things, but it still manipulated people in a very dangerous way.”

An AI is limited only by how it is designed, but its intentions can still be misconstrued by its users. Pataranutaporn said that this is because AI designers focus on what the AI says, not on what the user might think.

“People focus on the text, but it’s the subtext, the imagination part, that is often overlooked,” Pataranutaporn said. “When people think about guardrails for AI—‘Oh, AI should not say this’—they don’t really understand that the AI can say all the right things, but still manipulate you to do the wrong things.”

In Liu and Pataranutaporn’s lab group, the AI chatbot is treated as an extension of the user: the “AI are emotionless in the way they are designed” and “don’t have emotional intelligence.”

“They can pretend to have an emotional connection, but it’s not the same as the way humans connect with each other,” Pataranutaporn explained.

However, dismissing “emotional” AIs outright may be premature. Liu acknowledged that perceiving an AI as understanding of a user’s issues comes with its own set of benefits.

“Since I’m looking into treating loneliness, you could argue that presenting an AI as more helpful, more empathetic, more understanding, would provide a real benefit to people who are feeling lonely,” Liu explained. Even though there is merit to using AI to combat loneliness, Liu believes that we should still exercise caution when it comes to mental health matters.

“It can also be used as an excuse,” Pataranutaporn continued. “‘Well, we already have AI chatbots, we don’t need to train more psychologists or human therapists.’”

Pataranutaporn further argued that even as AI technology continues to advance, users should still maintain some distance from the AI.

“Should you treat your car as a friend or machine? All of these advanced technologies, we can use in different ways,” Pataranutaporn said. “I don’t think there’s a right or wrong way to use AI, but each way we use it will have consequences.”

As AI models become more advanced, users should also develop a healthier understanding of what exactly they are conversing with. Whether treated as companions or tools, ghosts or shells, human or machine, AI chatbots will affect the online user experience.