NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE

Source: analyticsinsight.net

Traditional computing is approaching its limits. Without fresh innovation, it is difficult to push past the current technology threshold, so a major design transformation is needed, one that improves performance and changes the way we think about computers. Moore's law, named after Gordon Moore, who formulated it in 1965, states that the number of transistors on a dense integrated circuit doubles about every two years while their cost halves. That law is now losing its validity, and hardware and software experts have converged on two alternatives: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained largely a laboratory effort until Intel announced its neuromorphic chip, Loihi. This may mark the third wave of Artificial Intelligence.

The first generation of AI was defined by hand-crafted rules: it emulated classical logic to draw reasoned conclusions within narrow, well-defined problem domains, and it was well suited to tasks such as monitoring processes and improving efficiency. The second generation was dominated by deep learning networks that analyze content and data, largely concerned with sensing and perception. The third generation aims to parallel human thought processes such as interpretation and autonomous adaptation. In short, it mimics the spiking neurons of the human nervous system, relying on densely connected transistors that imitate the activity of ion channels. This lets such chips integrate memory, computation, and communication at higher speed, with greater complexity, and with better energy efficiency.
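To make the spiking idea concrete, here is a minimal sketch, in plain Python with NumPy, of the leaky integrate-and-fire model that spiking neural networks are commonly built from. It illustrates the general principle only, not Loihi's actual neuron model; the threshold, leak, and input values are arbitrary.

import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_current: 1-D array of input values, one per time step.
    Returns an array of 0/1 spikes of the same length.
    """
    v = 0.0                                  # membrane potential
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in                  # leak a little, then integrate the input
        if v >= threshold:                   # fire when the potential crosses the threshold
            spikes[t] = 1.0
            v = v_reset                      # reset after the spike
    return spikes

# A constant drive produces a regular spike train; a stronger drive spikes more often.
print(lif_neuron(np.full(20, 0.3)))

Memory (the membrane potential) and computation (the threshold test) sit in the same tiny unit, which is the property neuromorphic hardware scales up across thousands of cores.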

Loihi is Intel’s fifth-generation neuromorphic chip. Built on a 14-nanometer process, it has a 60-square-millimeter die and contains over two billion transistors, along with three Lakemont cores that handle orchestration. It also includes a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 neuromorphic cores, each with a built-in learning module, and around 131,000 computational “neurons” that communicate with one another, allowing the chip to make sense of stimuli. On March 16, Intel and Cornell University showcased a new system demonstrating the chip’s ability to learn and recognize ten hazardous materials by smell, even in the presence of data noise and occlusion. According to their joint paper in Nature Machine Intelligence, this could be used to detect explosives, narcotics, polymers, and other harmful substances, as well as signs of smoke and carbon monoxide. It can purportedly do this faster and more accurately than sniffer dogs, which it may eventually replace. The researchers achieved this by training the chip on a circuit diagram modeled on biological olfaction. The training data came from exposing ten hazardous chemicals, including acetone, ammonia, and methane, to a wind tunnel, where an array of 72 chemical sensors recorded the resulting signals.
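As a rough illustration of how analog readings from an array like those 72 chemical sensors can be fed into a spiking network, the sketch below rate-encodes each reading as a spike train whose density tracks the signal strength. This is one simple, generic encoding scheme; the actual method used in the Nature Machine Intelligence work is more elaborate, and the random "sample" here is a stand-in, not real data.

import numpy as np

rng = np.random.default_rng(0)

def rate_encode(sensor_readings, n_steps=100, max_rate=0.5):
    """Convert normalized sensor readings (0..1) into Poisson-like spike trains.

    Each sensor channel becomes a row of 0/1 spikes whose density is
    proportional to the reading, which is one simple way to turn analog
    data into input a spiking network can consume.
    """
    rates = np.clip(sensor_readings, 0.0, 1.0) * max_rate
    return (rng.random((len(sensor_readings), n_steps)) < rates[:, None]).astype(int)

# Illustrative only: 72 random "sensor" values standing in for one sniff sample.
sample = rng.random(72)
spike_trains = rate_encode(sample)
print(spike_trains.shape)        # (72, 100)
print(spike_trains.sum(axis=1))  # spike counts scale with each reading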

This technology has many applications, such as identifying harmful substances at airports and detecting diseases or toxic fumes in the air. Best of all, it constantly re-wires its internal network to support different types of learning, so future versions could transform traditional computers into machines that learn from experience and make cognitive decisions, adapting much as human senses do. To top it off, it uses a fraction of the energy consumed by current state-of-the-art systems, and it is predicted to displace Graphics Processing Units (GPUs).
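The "re-wiring" described above is on-chip synaptic plasticity. A common rule of this kind is spike-timing-dependent plasticity (STDP), sketched below in its simplest pairwise form. It is a generic textbook rule with made-up constants, not the specific learning rule Loihi implements.

import numpy as np

def stdp_update(w, pre_spike_time, post_spike_time,
                a_plus=0.01, a_minus=0.012, tau=20.0, w_min=0.0, w_max=1.0):
    """Adjust one synaptic weight with a basic spike-timing-dependent rule.

    If the presynaptic spike precedes the postsynaptic spike, the synapse is
    strengthened; if it follows, it is weakened. The size of the change decays
    exponentially with the time difference between the two spikes.
    """
    dt = post_spike_time - pre_spike_time
    if dt > 0:                                   # pre before post: potentiate
        w += a_plus * np.exp(-dt / tau)
    elif dt < 0:                                 # post before pre: depress
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))

w = 0.5
w = stdp_update(w, pre_spike_time=10.0, post_spike_time=14.0)  # strengthened
w = stdp_update(w, pre_spike_time=30.0, post_spike_time=22.0)  # weakened
print(round(w, 4))

Because updates like this run locally at each synapse while the chip operates, the network keeps adapting without a separate offline training phase.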

Although Loihi may soon become a household name, it is not the only effort of its kind. The neuromorphic approach is also being investigated by IBM, HPE, MIT, Purdue, Stanford, and others. IBM is in the race with TrueNorth, which has 4,096 cores, each with 256 neurons, and each neuron with 256 synapses through which it communicates with the others. Germany’s Jülich Research Centre’s Institute of Neuroscience and Medicine and the Advanced Processor Technologies Group at the University of Manchester in the UK are working on a low-power supercomputer called SpiNNaker, short for Spiking Neural Network Architecture. It is designed to simulate so-called cortical microcircuits, and hence the human brain’s cortex, and could help us understand complex diseases such as Alzheimer’s.
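For a sense of scale, the TrueNorth figures quoted above multiply out as follows; this is a back-of-the-envelope calculation using only the numbers in this article.

# Rough scale of IBM's TrueNorth, using the figures quoted above.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

total_neurons = cores * neurons_per_core              # 1,048,576 (~1 million) neurons
total_synapses = total_neurons * synapses_per_neuron  # 268,435,456 (~268 million) synapses
print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")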

Who knows what computational trends we will see in the coming years? One thing is certain: the team at Analytics Insight will be keeping a close watch.
