Artificial intelligence speeds up analysis of gravitational lenses

4 Sep - by aiuniverse - In Artificial Intelligence

Researchers from the U.S. Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have shown that neural networks, a form of artificial intelligence, can analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.

The work, by a research team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, was detailed in a study published in Nature.

The researchers used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster. The distortions provide clues about how mass is distributed in space and how that distribution changes over time, which are linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that is accelerating the expansion of the universe.
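For a sense of why the distortions encode mass: in the simplest textbook case of a point-mass lens (a standard result, not a formula from the study), the angular size of the ring, the Einstein radius, depends directly on the lens mass and the distances involved:

```latex
% Point-mass Einstein radius (standard textbook result).
% \theta_E is the angular radius of the ring; M is the lens mass;
% D_L, D_S, D_{LS} are angular-diameter distances to the lens,
% to the source, and from lens to source, respectively.
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
```

Measuring the ring and arc geometry therefore constrains how much mass sits in the lens and how it is distributed.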

Until now, analyzing such images has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models, according to a news release from SLAC, originally named Stanford Linear Accelerator Center. It can take weeks to months for a single lens.
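The slowness comes from the shape of the traditional method: forward-simulate an image for every candidate set of lens-model parameters, then compare each simulation against the observation. The toy sketch below is purely illustrative (the "lens model" is a two-parameter ring, nothing like a real pipeline), but it shows why the cost explodes: it is the product of all the parameter grids, evaluated once per lens.

```python
# Hypothetical sketch of simulation-based fitting. The toy model and all
# names here are illustrative, not the actual KIPAC analysis code.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 32

def simulate_lens_image(einstein_radius, ellipticity):
    """Toy stand-in for a lensing simulation: a ring whose radius and
    shape are set by the two model parameters."""
    y, x = np.mgrid[-1:1:SIZE * 1j, -1:1:SIZE * 1j]
    r = np.sqrt(x ** 2 + (y * (1 + ellipticity)) ** 2)
    return np.exp(-((r - einstein_radius) ** 2) / 0.01)

# "Observed" image: one simulation plus noise.
true_params = (0.5, 0.2)
observed = simulate_lens_image(*true_params) \
    + 0.05 * rng.standard_normal((SIZE, SIZE))

# Grid search over the model parameters. With only two parameters this is
# quick; real fits have many more, and the grid grows multiplicatively,
# which is why a single lens can take weeks to months.
best, best_chi2 = None, np.inf
for er in np.linspace(0.3, 0.7, 41):
    for ell in np.linspace(0.0, 0.4, 41):
        model = simulate_lens_image(er, ell)
        chi2 = np.sum((observed - model) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = (er, ell), chi2

print(best)  # recovers parameters close to true_params
```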

To teach the neural networks what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously, with a precision comparable to that of traditional analysis methods.
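The recipe above amortizes the cost: pay once, up front, to fit a model on labelled simulations, then every new lens is a single cheap forward pass. The sketch below is a drastically simplified, hypothetical illustration of that idea: the study used deep convolutional networks and roughly 500,000 images, while here a tiny random-feature network fitted on toy "ring" images stands in.

```python
# Hypothetical illustration of train-once / infer-fast. The toy images,
# the tiny network, and the closed-form fit are all stand-ins for the
# deep-network training described in the article.
import numpy as np

rng = np.random.default_rng(1)
SIZE = 16

def ring_image(radius):
    """Toy simulated lens: a ring of the given radius on a 16x16 grid."""
    y, x = np.mgrid[-1:1:SIZE * 1j, -1:1:SIZE * 1j]
    r = np.sqrt(x ** 2 + y ** 2)
    return np.exp(-((r - radius) ** 2) / 0.02)

# Training set: simulated images labelled with the parameter that made them.
radii = rng.uniform(0.3, 0.8, size=2000)
X = np.stack([ring_image(r).ravel() for r in radii])

# "Training" phase (done once): a random tanh hidden layer plus a
# least-squares readout stands in for deep-network optimization.
W_hidden = rng.normal(0.0, 0.1, size=(SIZE * SIZE, 64))
H = np.tanh(X @ W_hidden)
w_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(64), H.T @ radii)

# Inference phase: one matrix product per new lens -- effectively
# instantaneous, versus weeks of simulation-based fitting.
new_lens = ring_image(0.55).ravel()
prediction = float(np.tanh(new_lens @ W_hidden) @ w_out)
print(prediction)  # close to the true parameter 0.55
```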

Inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information, the neural networks are able to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion, which is needed for future sky surveys that will look deeper into the universe and produce more data.

“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” postdoctoral fellow Laurence Perreault Levasseur, a co-author of the study, was quoted as saying. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”

In the team’s brain-mimicking neural networks, “neurons” are single computational units associated with the pixels of the image being analyzed. They are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature.
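The feature-within-a-feature idea can be sketched with two hand-rolled layers (the filters and the toy image below are illustrative choices, not anything from the study): the first layer fires wherever it sees a vertical edge, and the second layer looks for extended vertical structure in the first layer's output.

```python
# Minimal sketch of stacked layers: each layer's output is the next
# layer's input, so layer 2 responds to features built from layer 1's
# features. Filters here are hand-picked for illustration.
import numpy as np

def correlate2d(image, kernel):
    """Valid (no padding) 2-D cross-correlation, written out explicitly."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0              # a bright vertical line, stand-in for an arc

# Layer 1: a vertical-edge detector fires along the line;
# the ReLU keeps only positive responses.
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
layer1 = np.maximum(correlate2d(image, edge_kernel), 0.0)

# Layer 2: looks for *extended* vertical structure within layer 1's
# output -- a feature of a feature.
extent_kernel = np.ones((3, 1))
layer2 = correlate2d(layer1, extent_kernel)

print(layer2.max())  # → 9.0: strongest where three edge responses stack
```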
