Teaching Machines To Think Before They Speak

Human conversations involve more than you might think. When you are conversing with a friend, you’re taking in more than just the words they’re saying. Your brain is simultaneously reading between the lines, interpreting the emotions and intentions beneath the surface of the interaction.

Imagine if machines could engage in conversations like humans do. What if artificial intelligence, given a chat or dialogue history with a human, could use context and reasoning to infer how the other person is feeling and give an appropriate response? 

Pei Zhou, a graduate research assistant at USC’s Information Sciences Institute (ISI), and Jay Pujara, Director of the Center on Knowledge Graphs at ISI, have been working on developing more human-like responses in machines since last spring.  

 “The idea is that models should have some internal thinking or interiority to contemplate what’s happening in the dialogue, not just blindly responding,” Pujara explained.  

Previous response generation (RG) models were trained to produce responses in an input-output style similar to a “muscle reflex”: they reply to a given statement immediately, without any intermediate reasoning about what the speaker means or feels.

Consider a scenario in which a friend has spilled the dinner they prepared. There are multiple angles you might consider before responding: should you order food now, or is your friend upset about the accident and in need of comfort?

Zhou says that current models trained on existing datasets are “more likely to perform on the generic side” in these types of scenarios. This makes the conversation seem “bland and plain,” he added, because a generic reply can come across as disinterest or apathy.

The key is programming machines to perceive what a human is conveying and, with that information, to choose an empathetic response that contributes to the conversation in a meaningful way.

In his preliminary research, Zhou examined theoretical work in the communication and psychology literature. He found that humans approach conversations through the lens of collaboration, with an end goal of reaching mutual beliefs and knowledge, a process called grounding.

 Having common sense is critical to building common ground. “We don’t always explicitly say things, so we need to make a lot of educated guesses during conversations,” he said. 

 These educated guesses, or inferences, require us to choose between different reasoning paths and evaluate what might work best in the current situation. For machines to be able to respond like humans, they must be able to exercise this aptitude as well. 

“We want to diversify machine responses given a particular scenario,” Zhou explained. “Each response should be guided by some inference question, such as ‘what is this person feeling?’”
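
To make that concrete, here is a minimal sketch, assuming an off-the-shelf text-generation model rather than the team’s actual system, of the difference between a “reflex” reply and one guided by an inference question. The model choice and prompt formats are illustrative assumptions.

    # A minimal sketch contrasting a "reflex" reply (dialogue in, reply out)
    # with a "reflect" reply that first answers an inference question.
    # The model (gpt2) and prompt wording are illustrative assumptions.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    dialogue = "Friend: I just spilled the dinner I spent two hours cooking."

    # Reflex style: map the dialogue straight to a reply, no deliberation.
    reflex = generator(dialogue + "\nYou:", max_new_tokens=25)[0]["generated_text"]

    # Reflect style: answer an inference question first, then let that
    # inference guide the reply.
    question = "How is this person feeling?"
    prompt = f"{dialogue}\nQuestion: {question}\nAnswer:"
    completed = generator(prompt, max_new_tokens=15)[0]["generated_text"]
    inference = completed[len(prompt):].strip()

    guided_prompt = f"{dialogue}\n(Inference: {inference})\nYou:"
    reflect = generator(guided_prompt, max_new_tokens=25)[0]["generated_text"]

Swapping in different inference questions (what might happen next, what the responder should do) is what lets one scenario yield several distinct, justifiable replies.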

 People-Based Research 

Since there wasn’t sufficient existing data on this kind of commonsense reasoning in conversation, the team gathered their own. They ran a two-stage data collection process: inference collection and response collection.

 The team secured volunteers and split them up into two groups. They asked the first group to answer a set of questions about a dialogue, considering what is happening now, what might happen next, and how the speaker and responder may feel.  

Then, they had a second group write a response based on the thought space the first group came up with, without reusing the same words. Having two independent groups was crucial to ensuring high-quality results.

 “We had different people paraphrase those ideas without having an overlap in the people doing the annotation, which made it so they were really motivated to come up with interesting and novel responses,” Pujara noted.  
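
One way to picture the result of this two-stage process is a record like the one below. The field names and example text are illustrative assumptions, not the actual schema of the collected dataset.

    # A hypothetical record from the two-stage collection; field names and
    # example text are assumptions for illustration only.
    record = {
        "dialogue": [
            "Friend: I just spilled the dinner I spent two hours cooking.",
        ],
        # Stage 1: one group answers inference questions about the dialogue.
        "inferences": {
            "What is happening now?": "They ruined a meal they worked hard on.",
            "What might happen next?": "They may have to order food instead.",
            "How does the speaker feel?": "Frustrated and disappointed.",
        },
        # Stage 2: a separate group writes a response grounded in each
        # inference, paraphrasing rather than copying the wording above.
        "responses": {
            "How does the speaker feel?": "Oh no, that's so frustrating "
            "after all that effort. Are you okay?",
        },
    }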

They then used these responses to train a model to execute a reasoning process, effectively teaching a machine to “think” and to use that thinking to come up with multiple justifiable responses.
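
A natural way to use such records, sketched below under the assumption of a simple text-to-text setup, is to pair each dialogue-plus-inference input with its human-written response, so the model learns to condition its reply on an explicit piece of reasoning. The serialization format is an assumption, not the paper’s.

    # A minimal sketch of turning each record into training pairs so a model
    # learns: dialogue + inference question + inference -> response.
    # The "[Q]"/"[A]" markers are an assumed serialization, not the paper's.
    def make_training_pairs(record):
        history = " ".join(record["dialogue"])
        for question, inference in record["inferences"].items():
            response = record["responses"].get(question)
            if response is None:
                continue  # not every inference got a written response
            source = f"{history} [Q] {question} [A] {inference}"
            yield source, response

Pairs like these could then fine-tune any standard sequence-to-sequence model, with each inference question steering the model toward a different kind of reply.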

 Striking Up a New Friendship–With a Robot? 

So, machines can converse similarly to humans when equipped with inference skills and the ability to find common ground. But wait, there’s more: the research is still developing, and the potential implications are huge.

 “We want to make a model that can chat like a friend,” Zhou explained. “We want them to be enjoyable to talk to and there if you need emotional support or help with a conundrum you are facing.”  

 There are many moving pieces that need to be worked through before bots reach this degree of humanity.  

 “We need to have models think about the implicit state of the world before they act,” Pujara continued. “We have these models that can think but we still haven’t figured out how to use what they think to optimize how they act.” 

In October, Zhou’s paper, “Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality,” was accepted to the Empirical Methods in Natural Language Processing (EMNLP) conference, which will take place in December 2022.

The ISI team is not stopping their work there. Next up is finding a way to give models the ability to discern the best response for a given situation from the multiple dialogue paths they have to choose from.

Games–Useful For Academic Research?

Zhou has been exploring this concept by continuing to work on his internship project from this past summer at the Allen Institute for Artificial Intelligence (AI2). His focus is learning more about the notion of intent by studying interactions in the adventure game “Dungeons and Dragons.”

In the game, a host player called the Dungeon Master leads a group of players, each with their own character. The game is goal-driven and structured like a mission, with the Dungeon Master guiding the players along a path.

Everything that happens in the game is “purely based on language interactivity”; its data is therefore an ideal testbed for studying the notion of intent.

 Zhou, alongside researchers from universities across the nation and in association with AI2, is using the goal-driven dialogues in Dungeons and Dragons to study the “intentionality and groundedness” of communication.  

 By focusing on how the Dungeon Master guides players through the game using language, the team aims to learn more about modeling machines after the human mind.  

 With these discoveries, we may very well be looking at a future with robots that think like us, act like us, and even become our friends. 
