Press "Enter" to skip to content

The Future of Machine Learning: A New Breakthrough Technique

Researchers have developed a technique called Meta-learning for Compositionality (MLC) that enhances the ability of artificial intelligence systems to make “compositional generalizations.” This ability, which allows humans to relate and combine concepts, has been a debated topic in the AI field for decades. Through a unique learning procedure, MLC showed performance comparable to, and at times surpassing, human capabilities in experiments. This breakthrough suggests that traditional neural networks can indeed be trained to mimic human-like systematic generalization.

Research shows new promise for “compositional generalization”

Humans innately understand how to relate concepts; once they learn the notion of “skip,” they instantly grasp what “skip twice around the room” or “skip with your hands up” entails.

But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks—the engines that drive artificial intelligence and machine learning—are not capable of making these connections, known as “compositional generalizations.” In the decades since, scientists have developed ways to instill this capacity in neural networks and related technologies, but with mixed success, keeping this debate alive.

Breakthrough Technique: Meta-learning for Compositionality

Researchers at New York University and Spain’s Pompeu Fabra University have now developed a technique—reported in the journal Nature—that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks—the engines driving ChatGPT and related technologies for speech recognition and natural language processing—to become better at compositional generalization through practice.

Developers of existing systems, including large language models, have hoped that compositional generalization would emerge from standard training methods, or have built special-purpose architectures to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows such systems to unlock new powers, the authors note.

“For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the authors of the paper. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”

How MLC Works

In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
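To make the episode structure concrete, here is a minimal sketch in Python of how such training episodes might be generated. The pseudoword vocabulary, color-named actions, and modifier functions below are hypothetical toy stand-ins rather than the grammar from the paper, and the actual MLC model is a neural network that reads the study examples alongside each query; this sketch only shows the data a single episode might contain.

```python
import random

# Hypothetical toy vocabulary: pseudoword primitives and the actions
# (here, color symbols) they can be mapped to within one episode.
PRIMITIVES = ["dax", "zup", "wif", "lug"]
ACTIONS = ["RED", "BLUE", "GREEN", "YELLOW"]

# Compositional modifiers the learner must apply to any primitive,
# e.g. "dax twice" means the action for "dax", repeated.
MODIFIERS = {
    "twice": lambda seq: seq * 2,
    "thrice": lambda seq: seq * 3,
}

def make_episode(rng):
    """Sample one meta-learning episode: a fresh word-to-action mapping,
    study examples showing each primitive in isolation, and a held-out
    query that combines a primitive with a modifier."""
    mapping = dict(zip(PRIMITIVES, rng.sample(ACTIONS, len(ACTIONS))))
    study = [(word, [mapping[word]]) for word in PRIMITIVES]
    word, mod = rng.choice(PRIMITIVES), rng.choice(list(MODIFIERS))
    query = (f"{word} {mod}", MODIFIERS[mod]([mapping[word]]))
    return study, query

rng = random.Random(0)
for _ in range(3):  # in MLC, each episode would drive one network update
    study, (instruction, target) = make_episode(rng)
    print("study:", study)
    print("query:", instruction, "->", target)
```

Because the word-to-action mapping is resampled in every episode, a network trained this way cannot simply memorize word meanings; it is pushed to learn the reusable skill of composing whatever meanings the current study examples establish.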

Testing the Technique

To test the effectiveness of MLC, Lake, co-director of NYU’s Minds, Brains, and Machines Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and a professor in the Department of Translation and Language Sciences at Pompeu Fabra University, conducted a series of experiments with human participants, giving them tasks identical to those performed by MLC.

Rather than learning the meanings of actual words—terms humans would already know—participants had to learn the meanings of nonsensical terms (e.g., “zup” and “dax”) defined by the researchers and apply them in different ways. MLC performed as well as the human participants—and, in some cases, better than its human counterparts. MLC and people also outperformed ChatGPT and GPT-4, which, despite their striking general abilities, showed difficulties with this learning task.
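One straightforward way to score such a head-to-head comparison is exact-match accuracy over the full output sequence. The snippet below is an illustrative sketch using the toy episode format from above, not the paper’s published evaluation code.

```python
def exact_match(predictions, targets):
    """Fraction of queries whose full output sequence is reproduced
    exactly; a natural headline metric for this kind of comparison."""
    hits = sum(p == t for p, t in zip(predictions, targets))
    return hits / len(targets)

# Hypothetical outputs for two "dax twice"-style queries:
predictions = [["RED", "RED"], ["BLUE"]]
targets     = [["RED", "RED"], ["BLUE", "BLUE", "BLUE"]]
print(exact_match(predictions, targets))  # 0.5
```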

“Large language models such as ChatGPT still struggle with compositional generalization, though they have gotten better in recent years,” observes Baroni, a member of Pompeu Fabra University’s Computational Linguistics and Linguistic Theory research group. “But we think that MLC can further improve the compositional skills of large language models.”

Reference: “Human-like systematic generalization through a meta-learning neural network” by Brenden M. Lake and Marco Baroni, 25 October 2023, Nature.
DOI: 10.1038/s41586-023-06668-3

Source: SciTechDaily