The research team, co-led by bioengineers at the University of California San Diego, presented their results in the August 17 issue of the journal.

The NeuRRAM chip uses an innovative architecture that has been co-optimized across the stack. Credit: David Baillot/University of California San Diego
To solve this data transfer issue, researchers used what is known as resistive random-access memory (RRAM). This type of non-volatile memory allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan's advisor at Stanford and one of the main contributors to this work.
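The in-memory computation described above can be illustrated with a short numerical sketch. This is a generic model of an RRAM crossbar, not the NeuRRAM design itself: each cell stores a weight as a conductance, applying input voltages to the rows makes each cell pass a current proportional to voltage times conductance (Ohm's law), and the currents summing along each column (Kirchhoff's current law) yield a matrix-vector product in a single analog step, with no data shuttled to a separate compute unit.

```python
import numpy as np

# Hypothetical illustration of analog compute-in-memory with an RRAM
# crossbar (a generic model, not the actual NeuRRAM circuit).
rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))   # cell conductances = stored weights
V = np.array([0.5, -0.2, 0.8, 0.1])      # input voltages applied to the rows

# Each cell passes current G[i, j] * V[i] (Ohm's law); currents add
# along each column wire (Kirchhoff's current law), so the column
# currents form the output vector of a matrix-vector multiply:
I = G.T @ V

# The physics computes the same result as the digital matrix product.
assert np.allclose(I, V @ G)
print(I)
```

The point of the sketch is that the multiply-accumulate happens where the weights live: the memory array itself is the arithmetic unit, which is why this approach avoids the data-transfer bottleneck the article describes.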
“This chip now provides us with a platform to address these problems across the stack, from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering.

One key contribution of the paper, the researchers point out, is that all the results featured were obtained directly on the hardware. In many previous works on compute-in-memory chips, AI benchmark results were often obtained partly through software simulation.
In addition, Wan is a founding member of a startup that works on productizing compute-in-memory technology. “As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said.

The key to NeuRRAM's energy efficiency is an innovative method of sensing output in memory. Conventional approaches use voltage as the input and measure current as the result, but that leads to more complex and more power-hungry circuits.