New physics-based self-learning machines could replace existing artificial neural networks and save energy



Learning with light: This is what light-wave dynamics can look like at work in a physical self-learning machine. Both its irregular shape and its evolution, with strong contrast at its peaks (red), are significant. Credit: Florian Marquardt, MPL


Artificial intelligence not only delivers impressive performance but also creates significant energy demand. The more demanding the tasks it is trained for, the more energy it consumes.

Victor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which artificial intelligence can be trained more efficiently. Their approach relies on physical processes rather than the digital artificial neural networks currently used. The work has been published in the journal Physical Review X.

Open AI, the company behind ChatGPT, has not disclosed how much energy was needed to train GPT-3, the model that makes the chatbot so eloquent and seemingly well-informed. According to the German statistics company Statista, the training would have required 1,000 megawatt hours, roughly as much as 200 German households with three or more people consume in a year. This energy expenditure allowed GPT-3 to learn whether the word “deep” is more likely to be followed by “sea” or “learning” in its data sets, but by all accounts it did not understand the underlying meaning of such phrases.
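
As a rough plausibility check of that comparison, the figures are consistent if one assumes an annual consumption of about 5 megawatt hours per household; that per-household value is an illustrative assumption, not a number from the article:

    # Back-of-the-envelope check of the Statista comparison quoted above.
    training_energy_mwh = 1_000        # reported energy to train GPT-3
    household_annual_mwh = 5           # assumed use of a 3+ person household
    households = training_energy_mwh / household_annual_mwh
    print(f"~{households:.0f} households' annual consumption")   # -> ~200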

Neural Networks on Neuromorphic Computers

In order to reduce the energy consumption of computers, and of AI applications in particular, many research institutes have in recent years been exploring an entirely new concept of how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it actually has little to do with them, because artificial neural networks run on conventional digital computers.

This means that the software, or more precisely the algorithm, is modeled on the way the brain works, but the hardware is a conventional digital computer. It performs the computation steps of the neural network sequentially, one after the other, with data constantly moving back and forth between processor and memory.

“When a neural network is trained on hundreds of billions of parameters, that is, synapses, the data transfer between these two components consumes a large amount of energy,” says Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen.
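
A toy order-of-magnitude estimate shows why the transfers, not the arithmetic, dominate the energy bill. The per-operation energies below are illustrative assumptions rather than measured values; they only encode the commonly cited pattern that fetching an operand from off-chip memory costs around a hundred times more than an arithmetic operation on it:

    # Toy estimate: energy of moving parameters vs. computing with them.
    N_PARAMS = 175e9       # GPT-3-scale parameter count
    E_MAC_PJ = 4           # assumed pJ per multiply-accumulate in the ALU
    E_DRAM_PJ = 600        # assumed pJ per 32-bit off-chip memory access

    compute_j = N_PARAMS * E_MAC_PJ * 1e-12    # one MAC per parameter
    transfer_j = N_PARAMS * E_DRAM_PJ * 1e-12  # one memory fetch per parameter
    print(f"compute : {compute_j:6.1f} J per pass")
    print(f"transfer: {transfer_j:6.1f} J per pass")
    print(f"ratio   : {transfer_j / compute_j:.0f}x")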

The human brain works completely differently and would probably never have been evolutionarily competitive had it operated with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating.

The brain carries out the numerous steps of a thought process in parallel rather than sequentially. Nerve cells, or more precisely their synapses, are both processor and memory combined. Various systems around the world are being considered as potential candidates for neuromorphic counterparts to our neurons, including photonic circuits that use light instead of electrons. Their components act as switches and memory cells at the same time.


Artificial intelligence as a fusion of pinball machine and abacus: In this thought experiment, a blue, positively charged ball represents a set of training data. The ball is launched from one side of the plate to the other. Credit: Florian Marquardt, MPL


A self-learning physical machine adapts its synapses independently

Together with López-Pastor, a doctoral student at the Max Planck Institute for the Science of Light, Marquardt has now developed an efficient training method for neuromorphic computers. “We developed the concept of a self-learning physical machine,” explains Florian Marquardt. “The main idea is to perform the training as a physical process, in which the parameters of the machine are optimized through the process itself.”

When training conventional artificial neural networks, external feedback is required to adjust the strengths of the billions of synaptic connections. “Not requiring this feedback makes training more efficient,” says Marquardt. Implementing and training an artificial intelligence on a self-learning physical machine would save not only energy but also computing time.
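
For contrast, here is a minimal sketch of that external feedback loop as it appears in conventional training: a digital optimizer measures the output error and explicitly pushes corrections into every parameter. The toy model (plain gradient descent on a linear fit) is an illustrative assumption, not the authors' scheme; it is precisely this outer loop that a self-learning physical machine would absorb into its own dynamics:

    # Conventional training: feedback computed outside the model.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w

    w = np.zeros(3)                   # the "synapses"
    lr = 0.1
    for step in range(200):
        error = X @ w - y             # external measurement of the output
        grad = X.T @ error / len(X)   # feedback computed by the optimizer
        w -= lr * grad                # explicit parameter update
    print(w)                          # -> close to [1.5, -2.0, 0.5]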

“Our method works regardless of what physical process takes place in the self-learning machine, and we don’t even need to know the exact process,” explains Marquardt. “However, the process must meet certain conditions. Most importantly, it must be reversible, meaning it must be able to run forward or backward with minimal energy loss.”
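
The reversibility requirement can be illustrated with a toy simulation, assuming a harmonic oscillator as a stand-in for the machine's physics: run the dynamics forward with a time-reversible (symplectic) integrator, flip the momentum, run it again, and the initial state is recovered up to floating-point error:

    # Reversibility demo: forward, momentum flip, forward again.
    def leapfrog(q, p, dt, n, force=lambda q: -q):
        for _ in range(n):
            p += 0.5 * dt * force(q)
            q += dt * p
            p += 0.5 * dt * force(q)
        return q, p

    q, p = leapfrog(1.0, 0.0, dt=0.01, n=1000)   # run forward
    q, p = leapfrog(q, -p, dt=0.01, n=1000)      # reversed run
    print(q, -p)                                 # -> (1.0, 0.0) recovered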

“Additionally, the physical process must be non-linear, meaning sufficiently complex,” says Marquardt. Only non-linear processes can accomplish the complicated transformations between input data and results. A pinball rolling across a plate without colliding with anything follows a linear motion. If, however, it is deflected by another ball, its motion becomes non-linear.
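
The need for non-linearity can also be made concrete with a small calculation: stacking any number of linear transformations collapses into a single linear transformation, so without a non-linear step no amount of depth adds expressive power. The matrices below are arbitrary stand-ins for linear “layers”:

    # Composing linear maps yields just another linear map.
    import numpy as np

    rng = np.random.default_rng(1)
    A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))
    x = rng.normal(size=4)

    deep_linear = C @ (B @ (A @ x))    # three linear "layers" in sequence
    collapsed = (C @ B @ A) @ x        # one equivalent linear layer
    print(np.allclose(deep_linear, collapsed))   # -> True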

A practical test in optical neuromorphic computing

Examples of reversible, non-linear processes can be found in optics. Indeed, López-Pastor and Marquardt are already collaborating with experimental teams developing optical neuromorphic computers. These machines process information in the form of superimposed light waves, with suitable components regulating the type and strength of their interaction. The researchers aim to put the concept of the self-learning physical machine into practice.
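
A minimal sketch of those two ingredients, with scalar complex amplitudes standing in for the light waves: superposition is linear, while a Kerr-type element, assumed here purely for illustration, imprints a phase that depends on the intensity itself and thereby supplies the non-linearity. All amplitudes and coefficients are made up:

    # Superposition (linear) plus an intensity-dependent phase (non-linear).
    import numpy as np

    E1 = 1.0 * np.exp(1j * 0.0)            # first wave: amplitude and phase
    E2 = 0.8 * np.exp(1j * np.pi / 3)      # second, superimposed wave
    E = E1 + E2                            # fields add linearly

    kerr = 0.5                             # illustrative non-linear coefficient
    E_out = E * np.exp(1j * kerr * abs(E) ** 2)   # Kerr-type phase shift
    print(abs(E) ** 2, np.angle(E_out))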

“We hope to be able to present the first self-learning physical machine within three years,” says Florian Marquardt. By then, there will likely be neural networks with many more synapses, trained on significantly larger amounts of data than today’s.

As a result, there will be an even greater need to implement neural networks outside conventional digital computers and to replace them with efficiently trained neuromorphic computers. “That is why we are convinced that self-learning physical machines have great potential to be used in the further development of artificial intelligence,” says the physicist.

More information:
Victor López-Pastor et al., Self-Learning Machines Based on Hamiltonian Echo Backpropagation, Physical Review X (2023). DOI: 10.1103/PhysRevX.13.031020

Journal Information:
Physical Review X

Provided by the Max-Planck-Institut für die Physik des Lichts
