Photo: Knight Center for Journalism via Flickr
Are you missing face-to-face conversations yet? It’s hard to believe that just a month ago the coronavirus was still largely a far-away problem that didn’t impede the average American’s day-to-day life. Real-life interactions with anyone outside of our immediate families have become a thing of the past, at least for the next month or so. As such, we’re all relying a whole lot more on texting to keep in touch with friends and family.
Texting can sometimes be a poor substitute for talking, however, and the nuances of speech are often lost in a text or iMessage. New research from Cornell University investigated the intricacies of remote communication and concluded that artificial intelligence can serve as a useful buffer, or mediator, when online chats are about to go off the rails.
In an experiment involving college students, researchers found that, across several online conversation scenarios, participants reported trusting the person they were speaking with more if that person used AI-generated smart replies. Even more fascinating was the finding that the AI was only seen as the driving force behind a conversation if the chat went badly.
When an AI-assisted talk went well, it was interpreted as one person simply using a tool to help them communicate; when the conversation went poorly, the fault was placed almost entirely on the AI.
These observations set the stage for smart replies to take a more proactive role in online chats than canned phrases like "On My Way!" An AI algorithm could conceivably detect that a conversation is becoming contentious, or is being misread on one end, and suggest conflict-resolution strategies.
“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” explains Jess Hohenstein, M.S. ’16, M.S. ’19, a doctoral student in the field of information science and the paper’s first author, in a press release. “This introduces a potential to take AI and use it as a mediator in our conversations.”
It’s probably safe to say that, given the choice, not many people would want an algorithm writing their conversations for them. That said, we’re all in a unique situation these days, and if these applications can help us better convey our complex human feelings over text from time to time, then so be it.
It’s certainly ironic: an AI algorithm may just help us maintain a level of “humanity” in our online conversations during this pandemic.
The research team originally set out to explore the various ways smart replies, and AI systems in general, influence how people communicate and interact with one another. Even before this pandemic, using a predetermined reply in a text conversation fundamentally changed that interaction, even if only slightly.
“Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we’re able to accomplish anything working together,” says co-author Malte Jung, assistant professor of information science at Cornell.
“This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact,” Jung adds. “We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other.”
The team behind this study believes AI-generated responses can act as a “moral crumple zone,” reducing the responsibility placed on a person when a text conversation goes badly.
“There’s a physical mechanism in the front of the car that’s designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash,” Hohenstein concludes. “Here we see the AI system absorb some of the moral responsibility.”
The full study is published in Computers in Human Behavior.