Scientists at the Forschungszentrum Jülich in Germany have successfully trained artificial intelligence (AI) to think like Albert Einstein or Isaac Newton. Using this learning, the model can recognize patterns in complex data sets and form a physical theory around them.
History remembers names like Einstein and Newton because they gave us theories that have stood the test of time, surviving myriad experiments conducted by scientists who came after them. Such theories explain not only the observations that inspired them but also other phenomena happening around us.
For instance, Newton's law of gravity explains the gravitational force not only on Earth but also lets us accurately predict the motion of other planets, the moon, and other celestial objects.
There are two major approaches to forming a new theory or hypothesis. One can begin from known laws of the field and derive new hypotheses from them or try to explain the behavior of an object or a new phenomenon with a new theory. The tricky part, however, is picking the right approach to arrive at the hypothesis.
Before attempting to train AI to think about physics, the researchers were using physics to understand the workings of AI itself. Researcher Claudia Merger at the institute used a neural network to map complex behavior accurately into a simpler system. The AI did this by simplifying complex interactions between system components.
Next, the research team used the trained AI to build an inverse map from the simplified system back to the original one. In working back from the simple components to the complex ones, the system developed a new theory. The approach is similar to what a physicist might take, except that here the interactions are readable directly in the parameters defined by the AI. The researchers call this the "physics of AI."
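The idea of a map whose parameters make interactions readable can be illustrated with a minimal sketch. This is a toy analogy, not the authors' actual model: a linear "network" maps two coupled variables (the complex system) to independent ones (the simple system), and the inverse map recovers the original system, with the coupling strength `J` (an assumed toy value) appearing explicitly as a parameter.

```python
# Toy coupling between two components; in the real work such parameters
# are learned by the neural network rather than chosen by hand.
J = 0.8

# Forward map to "simple" coordinates: remove the dependence of x2 on x1.
def forward(x1, x2):
    return x1, x2 - J * x1

# Inverse map rebuilds the complex system from the simple one.
# Because J sits explicitly in the map, the "theory" is human-readable.
def inverse(y1, y2):
    return y1, y2 + J * y1

x = (1.5, 2.0)
y = forward(*x)
x_back = inverse(*y)   # recovers (approximately) the original (1.5, 2.0)
```

The point of the sketch is only that the interaction structure lives in named parameters of the map, rather than being buried in an opaque network.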
Merger investigated how smaller substructures in handwritten numbers are made up of interactions between pixels to demonstrate how the AI model can think. With the help of AI, the researchers theorized that groups of brighter pixels contributed to the shape of the handwritten number.
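The kind of pixel-level analysis described above can be sketched in a few lines. This is a hypothetical illustration with toy data, not the researchers' dataset or method: it estimates pairwise interactions between four "pixels" across a handful of tiny binary images, so that pixels that brighten together show up as strongly coupled.

```python
# Toy binary "images", each a list of four pixel values (not real digit data).
images = [
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]

n_pix = 4
n = len(images)
means = [sum(img[i] for img in images) / n for i in range(n_pix)]

# Covariance between pixel i and pixel j: a simple stand-in for the
# learned pairwise interactions between pixels.
def cov(i, j):
    return sum((img[i] - means[i]) * (img[j] - means[j]) for img in images) / n

coupling = [[cov(i, j) for j in range(n_pix)] for i in range(n_pix)]
# Pixels 0 and 1 always light up together, so their coupling is positive,
# while pixel 0 and pixel 2 are uncorrelated in this toy set.
```

Groups of pixels with strong positive couplings are the co-varying bright substructures that, in the researchers' analysis, build up the shape of a handwritten number.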
Using AI is helpful because it allows a large number of interactions to be studied at the same time. Although this increases the computational effort, without AI only extremely small systems could be examined.
The system differs from other AI models released in the past year in how it is trained and in how well its workings are understood. Typically, AI models learn the theory underlying the data used to train them, but that learning stays hidden in the parameters of the trained system.
Using the physics of AI approach, the researchers successfully extracted the theory learned by the computer, along with the language it uses to describe interactions between system components. This can then serve as a bridge between AI's complex inner workings and theories humans can understand, Moritz Helias, the lead researcher, said in a press release.
The research findings were published in the journal Physical Review X.