What's this neuromorphic computing you're talking about?
Semiconductors are often called the brains of electronics. Over time, the power of these tiny silicon chips has grown exponentially even as their circuitry has shrunk to unimaginably small sizes. Indeed, we've long been able to count on processor improvements to keep pace with the growing demands of hardware and software.
But with power-hungry autonomous vehicles, robots, drones, and other self-reliant machines requiring small yet mighty and energy-efficient chips—and with traditional semiconductors reaching the limits of miniaturization and capacity—researchers say a new approach to semiconductor design is needed.
One promising alternative is already generating considerable interest: neuromorphic computing.
Gartner predicts traditional computing technologies built on legacy semiconductor architecture will hit a digital wall by 2025, forcing a shift to new paradigms, including neuromorphic computing. Emergen Research, meanwhile, says the global neuromorphic processing market will reach $11.29 billion by 2027.
"Neuromorphic engineering is not going to replace general purpose hardware, but it could be hugely important for solving special or specific technology challenges, such as effectively implementing artificial intelligence at the edge," says Emre Neftci, assistant professor in cognitive sciences at the University of California, Irvine, and head of the university's Neuromorphic Machine Intelligence Lab. "Research into this technology is advancing rapidly."
Technology imitating biology
Neuromorphic computing mimics the physics of the human brain and nervous system by establishing what are known as spiking neural networks, where spikes from individual electronic neurons activate other neurons down a cascading chain. This is analogous to how the brain sends and receives signals from biological neurons that spark or recognize movement and sensations in our bodies. As opposed to more traditional approaches, where systems orchestrate computation in rigid binary terms (Is it a 1 or a 0? A yes or a no?), neuromorphic chips compute more flexibly and broadly, with spiking neurons that operate without any prescribed order.
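To make the cascading-spike idea concrete, here is a minimal sketch of a short chain of leaky integrate-and-fire neurons in plain Python: each neuron accumulates input, "leaks" a little charge every time step, and fires a spike that injects current into the next neuron in line. The class name, threshold, and leak values are illustrative assumptions, not the programming model of Loihi or any other neuromorphic chip.

```python
# Minimal sketch of a spiking cascade: a chain of leaky integrate-and-fire (LIF)
# neurons in which a spike from one neuron injects current into the next.
# Illustration only; not the programming model of any particular chip.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                # membrane potential
        self.threshold = threshold  # firing threshold (assumed value)
        self.leak = leak            # fraction of potential kept each time step

    def step(self, input_current):
        """Integrate input, apply leak, and emit a spike (1) if the threshold is crossed."""
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after firing
            return 1
        return 0

chain = [LIFNeuron() for _ in range(3)]   # three neurons wired in a line
weights = [0.6, 0.6]                      # synaptic weights between neighbors

for t in range(20):
    spikes = [chain[0].step(0.4)]         # constant external drive to neuron 0
    for i in range(1, len(chain)):
        spikes.append(chain[i].step(weights[i - 1] * spikes[i - 1]))
    print(t, spikes)                      # watch spikes ripple down the chain
```

Run for a few dozen steps, the first neuron fires periodically and its spikes gradually push the downstream neurons over their own thresholds; no global clock or instruction sequence dictates when each one fires.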
The idea is to get computing devices to the point where they can think creatively, recognize people or objects they've never seen before, and adjust their behavior to act on what they perceive. Most researchers admit they're not there yet. For instance, if you trained a traditional computer loaded with AI programming to play the classic Atari game "Breakout," where players use a digital paddle and ball to bust through a layer of bricks, it would probably always defeat humans. But if you moved the AI's paddle just 10 pixels higher or lower, it would most likely become confused and falter. Humans would probably start winning because, unlike AIs, our brains are remarkably adept at understanding cause and effect.
Getting technology over that hump won't be easy. But tech firms including Intel, Hewlett Packard Enterprise, IBM, Airbus SE, and Accenture, as well as government research hubs like Sandia National Laboratories and academics worldwide, are exploring it. And advances over the past few years suggest they are making considerable progress.
Last year, for example, Intel showed off a neuromorphic robot that can see and recognize unknown objects based on just one example, unlike traditional models that require extensive instruction and data. Similarly, Intel and Cornell University debuted mathematical algorithms, used on Intel's neuromorphic research chip, Loihi, that closely replicate how the brain "smells" something.
With some gentle guidance from researchers, Intel says Loihi rapidly recognized 10 different odors, a capability that could eventually find application in airport security, smoke and carbon monoxide detection, and quality control in factories. Separately, Accenture Labs demonstrated "Automotive Voice Command," a Loihi-based experiment that could bring voice, gesture, and contextual intelligence command capabilities to vehicles without draining their batteries. The system could recognize voice commands, such as "Start my engine," 200 milliseconds faster while consuming 1,000 times less power than a standard GPU, an Accenture researcher told The Wall Street Journal. UCI researchers and students also recently "taught" the Loihi chip to recognize hand gestures on the fly using a brain-inspired synaptic plasticity rule.
"This technology gets me excited because you can perform all sorts of learning capabilities directly on the chip itself without having to download a model that's been trained on a cloud, high-performance computer, or GPU," says Neftci. "So you can imagine, whether it's a drone or a smart watch, the device is able to learn on its own from basic patterns as it operates."
On the path to proficiency
Sapan Agarwal, a Sandia researcher based in Livermore, California, says this learning occurs much faster than with traditional computers, again because of how neuromorphic computing emulates biology. In our bodies, he notes, the brain performs computations with information that is physically nearby. But in traditional computers, there is separation between processing (in the CPU) and memory (in RAM). Computation depends on moving data between those locations, which creates a bottleneck that slows computing speed, adds to computational costs, and burns tremendous amounts of energy, especially when complex tasks are involved.
In neuromorphic computing, however, computation can be done in analog circuitry or in memory itself. If it is performed in memory, which experts say is the direction the technology is taking, latency is all but eliminated, and power drain is dramatically reduced, to the point where a device might operate on a tiny battery for years instead of months or weeks. This opens up a world of possibilities for putting the chips in machines that need to perform computationally intensive deep learning operations locally, such as autonomous vehicles, military drones, and high-performance computers, or in simple low-power devices that just need to run reliably for long periods of time, like office or store security cameras.
"The neuromorphic research community started out thinking analog was the way to go with this technology but fairly quickly determined there were more advantages to in-memory computing approaches," says John Paul Strachan, emerging accelerators team leader in the AI Lab at Hewlett Packard Enterprise, which is exploring applications for neuromorphic technology. "One of the principles that became clear early on was that you should not separate compute and memory units. Instead, you want distributed processors everywhere with local weights and computations so you are not moving all sorts of data around all the time. That's where you get your big win with neuromorphic computing."
Global development accelerating
As a concept, neuromorphic computing has been around since the 1980s. But it has picked up steam in the past few years because of rising interest in technology areas such as AI, machine learning, sensors, predictive analytics, and real-time data streaming and modeling, all of which could benefit significantly from more processing power and efficiency. Increased investment from automotive, aerospace, and defense companies looking to implement Industry 4.0 innovations is spurring progress as well.
Oddly enough, though, much of this activity is occurring outside of the U.S. China, for example, appears to be heavily outspending the U.S., according to researchers.
"For China, it's red hot," says Matthew Marinella, a technical researcher at Sandia in Albuquerque, New Mexico. "They are spending a lot of money, and they are really poising themselves extremely well in this area. In the U.S., I think [neuromorphic] has not been greatly understood. Other technologies, like quantum computing, have gained more attention and been pretty well funded."
Advances have also been going strong at the Swinburne University of Technology in Melbourne, Australia, where a team of academics and students recently demonstrated what it called "the world's fastest and most powerful optical neuromorphic processor" for AI. The work resulted from a collaboration among researchers at RMIT University and Monash University in Melbourne, the Institut national de la recherche scientifique (INRS) in Canada, and institutions in Hong Kong and China.
Prof. David Moss, who led the effort, says the chip is built on optical micro-combs capable of achieving 10 trillion operations per second, which is about 1,000 times faster than previous processors. The technology could revolutionize how calculations are performed in computing systems.
"The implications of micro-combs are just mind-boggling," says Moss. "We're a long way from this, but ultimately, you can imagine having a [system] clock in your iPhone with the accuracy of the best clocks in the world and being able to do things like detect the gravitational redshift or the time redshift of someone standing nearby. And, of course, you'd be able to get rid of GPS systems because everybody would have their own clock and motion sensor. I mean, the accuracy of this technology is just [wild]."
The Melbourne team has also been focused on optical neural network technology that could drastically improve the accuracy and precision of digital cameras embedded in autonomously operating machinery like self-driving cars, which will never succeed if they fail to recognize objects in time to stop. As neuromorphic capabilities continue to improve the ability of AI to recognize behaviors and patterns, the optical technology could also prove useful for high-end cancer or COVID-19 research, Moss says.
"The potential is massive; it's huge," he says. "At the moment, most neuromorphic computing research is focused on how it can be used for devices. But as time goes on and we figure this out, the possibilities will only be limited by our imaginations—not by technology."