
Russian scientists have proposed a new memory architecture for AI that mirrors the principles of the human brain. While learning, neural networks constantly forget old data. This "dementia" remains the main obstacle to creating self-driving cars or lifelong medical assistants, which need to adapt continuously to changing conditions. The researchers borrowed a principle from the human brain and embodied it in a computer model, giving the AI stable memories. They believe the technology will find application in unmanned vehicles, robots and drones.

AI memory modeled on the human brain

MIPT specialists have proposed an original way to combat so-called AI dementia, a typical problem in which an AI that learns new tasks "forgets" previously learned information. To solve it, the scientists developed a new memory architecture whose mechanism they borrowed from the human brain. For now it is implemented as a computer model, but work is already underway on neuromorphic processors, where the principle will be implemented physically.

AI
Photo: MIPT

— Perhaps we have found the answer to one of the main mysteries of the brain: how does it manage to learn new things without erasing the old "files"? It is all about the constant restructuring of neural connections, or rewiring. It is this rewiring that transforms fragile short-term memory into durable long-term memories, said Sergey Lobov, a leading researcher at the Laboratory of Neurobiomorphic Technologies at MIPT.

The scientists explained that the brain's neural network, like an artificial one, works on the principle of a map. During learning, "memory traces" form in it, like well-trodden paths in a forest. But if new routes are laid, the old trail quickly blurs and becomes invisible. The same happens inside artificial neural networks: as they absorb new information, they constantly rewrite their parameters and overwrite the old ones. This effect, in which memory becomes unstable as the network adapts to new conditions, is called catastrophic forgetting.
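The overwriting effect is easy to reproduce in miniature. The sketch below (my own illustration, not the MIPT model) trains a single linear neuron with gradient descent on task A, then on task B, and shows that the error on task A grows back after training on B, because both tasks share the same weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    # Normalize inputs so the gradient steps are well-conditioned.
    return v / np.linalg.norm(v)

# Two tasks: map a random input pattern to a target output.
x_a, x_b = unit(rng.normal(size=8)), unit(rng.normal(size=8))
y_a, y_b = 1.0, -1.0
w = np.zeros(8)

def train(w, x, y, steps=300, lr=0.1):
    # Squared-error gradient descent: w <- w - lr * (w.x - y) * x
    for _ in range(steps):
        w = w - lr * (w @ x - y) * x
    return w

w = train(w, x_a, y_a)
err_a_before = abs(w @ x_a - y_a)   # near zero: task A is learned

w = train(w, x_b, y_b)
err_a_after = abs(w @ x_a - y_a)    # task A degraded by training on B

print(f"task A error: {err_a_before:.4f} -> {err_a_after:.4f}")
```

Because every gradient step for task B moves the same weight vector, the memory of task A is partially overwritten, which is exactly the "blurred trail" the researchers describe.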

To rid AI of this ailment, the university's specialists borrowed a principle from the human brain. The approach is based on a mechanism for rebuilding neural connections, or rewiring, which works alongside conventional learning processes and allows the system to retain old information while assimilating new information.

Neurons
Photo: MIPT

— At first, the network learns under the influence of external signals: connections between neurons are strengthened and short-term memory forms. Then the external signal switches off and the AI is left alone with itself. At this point, rewiring is activated. The system independently rebuilds the network structure, literally "imprinting" the learned pattern into its connection map. We called this process self-organized memory consolidation: short-term memory is transformed into long-term memory, fixed as a stable structural change in the architecture of the neural network," explained Sergey Lobov.
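The two-phase process described in the quote can be caricatured as follows. This is a deliberately simplified sketch under my own assumptions (a fast, decaying "short-term" weight matrix plus a slow "structural" one; the threshold and decay rates are illustrative choices, not the published algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
fast = np.zeros((n, n))        # short-term weights: grow under input, decay over time
struct = np.zeros((n, n))      # long-term structural weights: the "connection map"

pattern = rng.random(n) > 0.5  # activity pattern presented to the network

# Phase 1: external signal on. Co-active neurons strengthen their fast
# (Hebbian) connections, which also decay each step -- short-term memory.
for _ in range(50):
    act = pattern.astype(float)
    fast += 0.1 * np.outer(act, act)
    fast *= 0.95

# Phase 2: signal off. "Rewiring" imprints connections whose fast weight
# crossed a threshold into the permanent structural map.
struct[fast > 0.5] = 1.0

# The fast trace keeps decaying away, but the structural trace remains.
for _ in range(1000):
    fast *= 0.95

print("fast trace gone:", fast.max() < 1e-6)
print("pattern kept in structure:",
      np.array_equal(struct > 0, np.outer(pattern, pattern)))
```

The point of the toy is the separation of timescales: the short-term trace vanishes, while the offline consolidation step has fixed the pattern as a structural change, echoing the "self-organized memory consolidation" idea.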

To test the effectiveness of the new memory architecture, the scientists simulated the learning process and tracked how many bursts of activity the neural network could withstand before losing information. Where a regular network forgot data after about a thousand bursts, a network with connection rewiring withstood up to 170 million.

Why is new memory needed in data technologies?

Yaroslav Seliverstov, a leading AI expert at University 2035, told Izvestia that the problem of AI forgetting is becoming critically important for autonomous robots, which need to accumulate experience interacting with objects, and for unmanned vehicles facing new traffic situations. It is the main barrier to creating truly flexible and independent machines capable of evolving like a living being.

— Instead of uniformly updating all the connections of the neural network during training, the new architecture selectively modifies only those synapses that have low weight and do not carry critical information. This resembles the mechanism of our memory, where new memories form without destroying old ones. The claimed increase in information storage duration by hundreds of thousands of times looks revolutionary, since it exceeds the capabilities of existing analogues by several orders of magnitude," the specialist said.
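The selective-update rule the expert describes amounts to masking the gradient by weight magnitude. A minimal sketch, assuming a simple fixed threshold (the threshold value and the freeze rule are my illustrative choices, not the published method):

```python
import numpy as np

def selective_step(w, grad, lr=0.1, freeze_above=0.5):
    # Only weak synapses (assumed to carry little old-task information)
    # remain plastic; strong synapses are frozen during new-task training.
    mask = np.abs(w) < freeze_above
    return w - lr * grad * mask

w = np.array([0.9, -0.8, 0.1, 0.05])   # two strong, two weak synapses
grad = np.array([1.0, 1.0, 1.0, 1.0])  # same gradient everywhere

w_new = selective_step(w, grad)
print(w_new)   # strong weights untouched, weak weights updated
```

Under this rule the strong synapses (0.9 and -0.8) pass through unchanged while only the weak ones absorb the new task, which is the intuition behind "new memories form without destroying old ones."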

Car
Photo: IZVESTIA/Dmitry Korotaev

In industrial robotics, such systems will allow the creation of universal robotic manipulators that can master new operations with parts without forgetting previous assembly skills. For self-driving cars and drones, this means the ability to adapt continuously to unusual road conditions or terrain, accumulating experience without engineers' intervention. Also promising is their use in personalized medical diagnostic systems that evolve along with a patient's medical history, or in smart homes that flexibly adapt to residents' habits, said Yaroslav Seliverstov.

When learning a new task, a neural network sometimes loses up to 90–99% of its accuracy on old ones, said Anton Averyanov, TechNet NTI market expert and CEO of the ST IT group of companies.

— In experiments on simple tasks, the new memory preserves almost 100% of old knowledge. However, there are concerns that none of today's biologically inspired mechanisms scale to modern large language or multimodal models with hundreds of billions or trillions of parameters. But within the next 5–10 years the approach could, in theory, be integrated into small autonomous drones and drone swarms that need to operate for years without retraining," said Anton Averyanov.

Brain
Photo: Global Look Press/Science Photo Library via www.im

Data on the workings of neural networks are now actively used in medicine, for example in neuroprosthetics, Olga Valaeva, a clinical psychologist and head of development programs at Lomonosov Moscow State University, told Izvestia. Work is actively underway on direct intervention in deep brain structures in diseases such as Parkinson's disease. Here, implantable devices using a spiking neural network can regulate the electrical activity of particular brain areas, leading to a significant reduction in symptoms and an improvement in patients' quality of life.

— When simulating such complex brain functions, spiking neural networks with long-term memory become vital. Long-term memory allows systems to memorize and adapt to new data, which is crucial for successfully modeling complex mental processes. These networks can learn from experience, which makes them more flexible and capable of self-learning," said Olga Valaeva.

The new approach may find application wherever AI requires long-term autonomous learning. Autonomous systems equipped with such "super memory" will be able to adapt constantly to a changing world. This is needed, for example, in freight forwarding, smart assistants or security systems, says Kirill Rappa, CEO of GTI and NTI expert.

Translated by the Yandex Translate service
