The research announced at the AAAI-20 Conference in New York gives computer systems the ability to better comprehend and infer from natural language.
Researchers from the MIT-IBM Watson AI Lab, Tulane University and the University of Illinois this week unveiled research that allows a computer to more closely replicate human reading comprehension and inference.
The researchers have created what they termed “a breakthrough neuro-symbolic approach” to infusing knowledge into natural language processing, which they presented at the AAAI-20 Conference taking place this week in New York City.
Reasoning and inference are central to both humans and artificial intelligence, yet many enterprise AI systems still struggle to comprehend human language and textual entailment, the task of determining whether one natural language sentence can be inferred from another, according to IBM.
There have been two schools of thought, or “camps,” since the beginning of AI. One has focused on neural networks and deep learning, which have been highly effective and successful over the past several years, said David Cox, director of the MIT-IBM Watson AI Lab.
Neural networks and deep learning need large amounts of data and compute power to thrive. The widespread digitization of data has driven what Cox called “the neural networks/deep learning revolution.”
Symbolic AI is the other camp; it takes the view that there are things you know about the world around you based on reasoning, he said. However, “all the excitement in the last six years about AI has been about deep learning and neural networks,” Cox said.
Now, “there’s a growing idea that just as neural networks needed something like data and compute for a resurgence, symbolic AI needed something,” and the researchers theorized that what it needs may be neural networks, he said. There was a sense among researchers that the two camps could complement each other, capitalizing on each other’s strengths and compensating for each other’s weaknesses, Cox said.
“The work we’re doing in the AI lab is about neuro-symbolic AI. It’s a mix of the ideas of symbolic AI and neural networks.”
The paper provides examples of the ways in which researchers are starting to mix together classic symbolic AI with ideas from neural networks, he said.
For example, a human would know that if someone says they are walking outside, and they are inside eating lunch, those two statements are contradictory, Cox said.
“We find those are so natural, but we don’t have AI systems that can naturally” make those same interpretations. “This team is mixing together neural networks and symbolic AI and using a combined system to solve a problem.”
In the paper, the researchers wrote that they present an approach that supplements text-based entailment models, which address a fundamental task in natural language processing, with information from external knowledge sources.
The use of external knowledge makes the model more robust and improves prediction accuracy, the researchers wrote. They reported “an absolute improvement of 5-20% over multiple text-based entailment models.”
Sentiment analysis is in use today, Cox said, because “a relative understanding of shallow text will give a solution.” But to read a science textbook and then pass a quiz, for example, you need a deep understanding of what the text actually means.
The team found that infusing neural networks with knowledge graphs, which are structured representations of known facts, “was more powerful than any methods that have come before that just relied on neural networks without knowledge graphs,” he said. “This mixing of ideas was more effective.”
Cox stressed that the research is at a very early stage, but said he believes this is a technology “that we think will impact many industries.”