Press "Enter" to skip to content

Decoding Human Memory and Imagination With Generative AI

UCL researchers used generative AI to model brain functions, uncovering how memories are formed, replayed, and used for imagination. The study emphasizes the reconstructive and predictive nature of memory, offering new perspectives on human cognition.

A UCL study using AI models advances our understanding of memory, showing how the brain reconstructs past events and imagines new scenarios.

Recent advances in generative AI help to explain how memories enable us to learn about the world, relive old experiences, and construct entirely new experiences for imagination and planning, according to a new study by UCL researchers.

AI Models Mimicking Brain Functions

The study, published in Nature Human Behaviour and funded by Wellcome, uses an AI computational model, known as a generative neural network, to simulate how neural networks in the brain learn from and remember a series of events (each one represented by a simple scene).

The model featured networks representing the hippocampus and the neocortex, allowing the researchers to investigate how the two interact. Both brain regions are known to work together during memory, imagination, and planning.

Lead author, PhD student Eleanor Spens (UCL Institute of Cognitive Neuroscience), said: “Recent advances in the generative networks used in AI show how information can be extracted from experience so that we can both recollect a specific experience and also flexibly imagine what new experiences might be like.

“We think of remembering as imagining the past based on concepts, combining some stored details with our expectations about what might have happened.”

Memory Replay and Prediction

Humans need to make predictions to survive (e.g. to avoid danger or to find food), and the AI networks suggest how replaying memories while we rest helps the brain pick up on patterns from past experiences that can be used to make these predictions.

The researchers presented 10,000 images of simple scenes to the model. The hippocampal network rapidly encoded each scene as it was experienced, then replayed the scenes over and over again to train the generative neural network in the neocortex.

The neocortical network learned to pass the activity of the thousands of input neurons (those receiving visual information) representing each scene through progressively smaller intermediate layers of neurons, the smallest containing only 20 neurons, and then to recreate the scenes as patterns of activity in its thousands of output neurons (those predicting the visual information).
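
The set-up described above is, in essence, an autoencoder: a network forced to squeeze each scene through a narrow bottleneck and then reproduce it. The sketch below, in Python with NumPy, illustrates that principle only; the input size, learning rate, activation, and training schedule are assumptions, since the article reports just the rough scale (thousands of input neurons, a smallest layer of 20 neurons, 10,000 scenes), and the study's actual generative network is more elaborate.

```python
import numpy as np

# Minimal autoencoder sketch of the "neocortical" network described above.
# All sizes and hyperparameters are illustrative assumptions, except the
# 20-unit bottleneck, which is the one layer size the article reports.

rng = np.random.default_rng(0)

n_input = 1000   # stands in for "thousands" of input neurons per scene
n_latent = 20    # the smallest intermediate layer reported in the study
lr = 0.01        # assumed learning rate

W_enc = rng.normal(0, 0.01, (n_latent, n_input))  # encoder weights
W_dec = rng.normal(0, 0.01, (n_input, n_latent))  # decoder weights

def forward(x):
    z = np.tanh(W_enc @ x)   # compressed "conceptual" code for the scene
    x_hat = W_dec @ z        # reconstruction in the output neurons
    return z, x_hat

# Stand-in for hippocampal replay: stored scenes are presented to the
# cortical network over and over, as if replayed during rest.
scenes = rng.normal(0.0, 1.0, (10_000, n_input))  # 10,000 toy "scenes"

for _ in range(3):               # a few replay passes, for illustration
    for x in scenes[:500]:       # a subset, to keep the sketch fast
        z, x_hat = forward(x)
        err = x_hat - x                         # reconstruction error
        grad_dec = np.outer(err, z)             # gradient for the decoder
        dz = (W_dec.T @ err) * (1.0 - z**2)     # backprop through tanh
        W_enc -= lr * np.outer(dz, x)           # update encoder
        W_dec -= lr * grad_dec                  # update decoder
```

Because every scene must pass through only 20 units, the network cannot memorize raw detail; it is pushed to keep whatever regularities best predict the rest of the scene, which is exactly the "conceptual" representation described next.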

Implications of the Study

This caused the neocortical network to learn highly efficient “conceptual” representations of the scenes that capture their meaning (e.g. the arrangements of walls and objects) – allowing both the recreation of old scenes and the generation of completely new ones.

Consequently, the hippocampus was able to encode the meaning of new scenes presented to it, rather than having to encode every single detail, enabling it to focus resources on encoding unique features that the neocortex couldn’t reproduce – such as new types of objects.
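
The division of labour just described can be shown with a toy, hypothetical example: the neocortex supplies the predictable gist of a scene, and the hippocampus need store only the residual, the unique details the cortex fails to predict. The numbers below are invented for illustration.

```python
import numpy as np

# Toy illustration of gist-plus-residual encoding. All values are invented.

gist = np.array([1.0, 1.0, 0.0, 0.0])    # cortical prediction of a scene
scene = np.array([1.0, 1.0, 0.9, 0.0])   # actual scene with a novel object

residual = scene - gist                  # what the cortex can't reproduce
stored = residual                        # the hippocampus keeps only this

recalled = gist + stored                 # recollection: gist plus details
assert np.allclose(recalled, scene)      # the full scene is reconstructed
```

Storing the sparse residual is far cheaper than storing the whole scene, which is why the hippocampus can focus its resources on genuinely novel features.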

The model explains how the neocortex slowly acquires conceptual knowledge and how, together with the hippocampus, this allows us to “re-experience” events by reconstructing them in our minds.

The model also explains how new events can be generated during imagination and planning for the future, and why existing memories often contain “gist-like” distortions – in which unique features are generalized and remembered as more like the features in previous events.  
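
A hypothetical worked example of such a gist-like distortion: if recall blends the stored trace with the expectation learned from previous events, an unusual feature drifts toward the typical value. The blending weight below is an arbitrary assumption, not a figure from the study.

```python
import numpy as np

# Toy model of "gist-like" distortion in recall. All numbers are invented.

previous_events = np.array([2.0, 2.1, 1.9, 2.0])  # a feature seen many times
gist = previous_events.mean()                     # learned expectation: 2.0

unique_feature = 5.0   # an unusual value in one particular event
alpha = 0.6            # assumed weight recall gives the stored detail

recalled = alpha * unique_feature + (1.0 - alpha) * gist
print(recalled)        # 3.8: the unique feature is pulled toward the norm
```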

Senior author, Professor Neil Burgess (UCL Institute of Cognitive Neuroscience and UCL Queen Square Institute of Neurology), explained: “The way that memories are re-constructed, rather than being veridical records of the past, shows us how the meaning or gist of an experience is recombined with unique details, and how this can result in biases in how we remember things.”

Reference: “A Generative Model of Memory Construction and Consolidation” by Eleanor Spens and Neil Burgess, 19 January 2024, Nature Human Behaviour.
DOI: 10.1038/s41562-023-01799-z

Source: SciTechDaily