Study presents large brain-like neural networks for artificial intelligence

Dynamic neural responses of the LTC-SNN. (a) An MNIST sample is fed into the LTC-SNN sequentially, pixel by pixel along the row direction. (b) For illustration, histograms of the mean dt/τm values after training for this single sample. (c) Mean responses of neurons in terms of firing rate, binned by dt/τm values. (d) Example dynamics of the inverse time constant dt/τm for four randomly selected neurons during sequence presentation. Credit: Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00650-4

In a new study in Nature Machine Intelligence, researchers Bojian Yin and Sander Bohté of HBP partner CWI, the Dutch national research institute for mathematics and computer science, demonstrate a significant step towards artificial intelligence that can be used in local devices such as smartphones and in virtual-reality-like applications, while protecting privacy at the same time.

They show how brain-like neurons, combined with novel learning methods, make it possible to train fast and energy-efficient spiking neural networks at large scales. Potential applications range from wearable AI to speech recognition and augmented reality.

While modern artificial neural networks are the backbone of the current AI revolution, they are only loosely inspired by networks of real, biological neurons such as our brain. The brain, however, is a much larger network, far more energy-efficient, and it can respond ultrafast when triggered by external events. Spiking neural networks are special types of neural networks that more closely mimic how biological neurons operate: the neurons in our nervous system communicate by exchanging electrical pulses, and they do so only sparingly.
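To make that contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit most spiking neural networks are built from. The constants and function name are illustrative assumptions, not the specific neuron model used in the study:

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the kind of unit spiking
# neural networks are built from. All names and constants are illustrative.
import numpy as np

def lif_neuron(inputs, tau_m=20.0, dt=1.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the binary spike train: the neuron stays silent most of the
    time and emits a 1 only when its membrane potential crosses threshold.
    """
    v = 0.0                      # membrane potential
    decay = np.exp(-dt / tau_m)  # leak per time step, set by time constant tau_m
    spikes = np.zeros(len(inputs))
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t      # leaky integration of the input current
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_reset          # reset after spiking
    return spikes

rng = np.random.default_rng(0)
spike_train = lif_neuron(rng.uniform(0.0, 0.25, size=100))
print(f"{int(spike_train.sum())} spikes in 100 steps")  # sparse, binary output
```

The output is a sparse train of binary spikes rather than a continuous activation, which is what makes such networks so frugal when mapped onto hardware.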

Implemented in chips, known as neuromorphic hardware, such spiking neural networks promise to bring AI programs closer to users, on their own devices. These local solutions are good for privacy, robustness and responsiveness. Applications range from speech recognition in toys and appliances, through healthcare monitoring and drone navigation, to local surveillance.

Just like standard artificial neural networks, spiking neural networks need to be trained to perform such tasks well. However, the way these networks communicate poses serious challenges. “The algorithms needed for this require a lot of computer memory, allowing us to train only small network models, mostly for smaller tasks. This has held back many practical applications of AI so far,” says Sander Bohté of CWI’s Machine Learning group. Within the Human Brain Project, he works on architectures and learning methods for hierarchical cognitive processing.
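A back-of-the-envelope calculation illustrates the memory problem Bohté describes: standard training by backpropagation through time must store the activation of every neuron at every time step of an input sequence, while an online method keeps only a running state per neuron. The sequence length and data type below are assumptions for illustration; only the neuron count comes from the article:

```python
# Rough illustration of the memory cost of standard training (backpropagation
# through time, BPTT) versus an online method. seq_len and float32 are
# assumed; the neuron count is the figure mentioned later in the article.
n_neurons = 6_000_000   # network size reported in the article
seq_len = 1_000         # time steps per input sequence (assumed)
bytes_per_value = 4     # one float32 activation

bptt = n_neurons * seq_len * bytes_per_value  # whole history kept in memory
online = n_neurons * bytes_per_value          # one running value per neuron

print(f"BPTT stores   ~{bptt / 1e9:.0f} GB of activations")  # ~24 GB
print(f"online stores ~{online / 1e9:.3f} GB")               # ~0.024 GB
```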

Imitating the learning brain

The learning aspect of these algorithms is a big challenge, and they cannot match the learning capacity of our brain. The brain learns immediately from new experiences, changing connections or even making new ones. It also needs far fewer examples to learn something, and it works more energy-efficiently. “We wanted to develop something closer to how our brain learns,” says Bojian Yin.

Yin explains how it works: if you make a mistake during a driving lesson, you learn from it immediately. You correct your behavior right away, not an hour later. “You learn, so to speak, as you take in the new information. We wanted to mimic that by giving each neuron in the neural network a bit of information that is constantly updated. This way, the network learns how the information changes and doesn’t have to remember all the previous information. This is the big difference from current networks, which have to work with all the previous changes. The current way of learning requires enormous computing power, and therefore a lot of memory and energy.”
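In code, the idea Yin describes might look something like the toy loop below: each parameter carries a small running summary that absorbs new information at every step, so the model updates immediately and never stores the full history. This is only a schematic under assumed names and a toy regression task; the paper's actual method, forward propagation through time (FPTT), differs in its details:

```python
# Schematic of online, per-step learning: keep a running summary per weight
# and update immediately, instead of replaying the whole past. A toy sketch,
# not the FPTT algorithm from the paper.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=3)  # weights of a toy linear model
trace = np.zeros_like(w)           # running summary, refreshed each step
lr, beta = 0.01, 0.9               # learning rate and summary decay (assumed)

for step in range(2000):
    x = rng.normal(size=3)                    # new observation arrives
    target = x.sum()                          # toy target: true weights are all 1
    grad = (w @ x - target) * x               # gradient from this step only
    trace = beta * trace + (1 - beta) * grad  # fold new info into the summary
    w -= lr * trace                           # learn right away, then move on

print(np.round(w, 3))  # close to [1, 1, 1], without storing any history
```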

6 million neurons

The new online learning algorithm makes it possible to learn directly from the data, enabling much larger spiking neural networks. Together with researchers from TU Eindhoven and research partner Holst Centre, Bohté and Yin demonstrated this in a system designed to recognize and locate objects. Yin shows a video of a busy street in Amsterdam: the underlying spiking neural network, SPYv4, has been trained to distinguish cyclists, pedestrians and cars, and to tell exactly where they are.

“Previously, we could train neural networks with up to 10,000 neurons; now we can do the same quite easily for networks with more than six million neurons,” Bohté says. “With this, we can train high-capacity neural networks like our SPYv4.”

And where does it all lead? With access to such powerful AI solutions based on spiking neural networks, chips are being developed that can run these AI programs at very low power. They will eventually appear in many smart devices, such as hearing aids and glasses for augmented or virtual reality.

More information:
Bojian Yin et al, Accurate online training of dynamical spiking neural networks through forward propagation through time, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00650-4. www.nature.com/articles/s42256-023-00650-4

Journal information:
Nature Machine Intelligence

Provided by Human Brain Project

