Scientists from the University of California, San Diego, have developed a novel form of resistive memory (RRAM) that could change how neural network computation is done. Led by Duygu Kuzum, the team demonstrated learning algorithms running on the new RRAM device at December’s IEEE International Electron Devices Meeting (IEDM). The development could significantly improve many artificial intelligence applications, and it is particularly well suited to edge devices that need to learn incrementally in real time without access to the cloud.
Kuzum acknowledged the challenges traditional filamentary RRAM faces with massively parallel matrix operations, the computations that underpin the efficiency and power of today’s neural networks. Her team employs a different switching mechanism to produce highly compact 1-kilobyte arrays, fabricating several eight-layer stacks that eliminate the need for selectors and simplify manufacturing.
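To illustrate what those parallel matrix operations look like in a resistive crossbar, here is a minimal sketch, assuming a generic array in which stored weights are encoded as cell conductances and inputs as read voltages. The array size, conductance range, and voltage range are illustrative assumptions, not parameters of the San Diego devices.

```python
# Minimal sketch of in-memory matrix-vector multiplication in a resistive crossbar.
# By Ohm's law and Kirchhoff's current law, the current collected on each column
# is the dot product of that column's conductances with the row input voltages,
# so every column (output) is computed in one parallel read.
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 32, 32  # assumed array size, for illustration only

# Stored weights, encoded as device conductances (siemens), roughly 1-10 megaohm cells.
conductance = rng.uniform(1e-7, 1e-6, size=(rows, cols))

# Input activations, encoded as read voltages applied to the rows (volts).
voltage = rng.uniform(0.0, 0.2, size=rows)

# All columns are read at once: each column current is a weighted sum of the inputs.
column_currents = voltage @ conductance  # amperes, shape (cols,)

print(column_currents[:4])
```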
The researchers also made a significant advance by scaling the RRAM down to the nanoscale: each device now measures only 40 nanometers across. Shrinking the devices significantly improves performance and enables a much wider range of resistance levels. Kuzum’s team managed to program up to 64 distinct resistance states in their bulk RRAM, something that is very difficult to achieve with filamentary RRAM.
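As a rough illustration of what 64 resistance states provide, the sketch below treats each cell as a log2(64) = 6-bit weight store and quantizes a floating-point weight to the nearest of 64 evenly spaced conductance levels. The conductance range is an assumed placeholder, not a measured property of these devices.

```python
# With 64 distinguishable states, one cell holds log2(64) = 6 bits.
# Quantize a weight in [0, 1] to the nearest of 64 programmable levels.
import numpy as np

num_states = 64
g_min, g_max = 1e-7, 1e-6                 # assumed conductance range (siemens)
levels = np.linspace(g_min, g_max, num_states)

def weight_to_conductance(w: float) -> float:
    """Map a weight in [0, 1] to the nearest programmable conductance level."""
    idx = int(round(np.clip(w, 0.0, 1.0) * (num_states - 1)))
    return levels[idx]

print(np.log2(num_states))          # 6.0 bits per cell
print(weight_to_conductance(0.5))   # mid-range conductance level
```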
“We actually redesigned RRAM, completely rethinking the way it switches.” – Duygu Kuzum
According to Kuzum, the San Diego stack operates in the megaohm resistance range, which lets many devices be read in parallel. That makes it especially useful for neural networks, enabling more sophisticated calculations than conventional RRAM devices can handle. Because the bulk RRAM stack offers both higher resistance and a larger number of resistance states, it can execute complex tasks that were previously impractical.
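A back-of-the-envelope calculation suggests why megaohm-range cells help parallel readout: the current summed on a column grows with the number of cells, so higher cell resistance keeps that total manageable. The read voltage and column length below are assumptions chosen purely for illustration.

```python
# Compare worst-case column current for kilohm-range vs megaohm-range cells.
read_voltage = 0.2          # volts (assumed)
cells_per_column = 1024     # assumed column length

for resistance in (10e3, 1e6):                      # kilohm vs megaohm cells
    per_cell = read_voltage / resistance            # Ohm's law, current per cell
    total = per_cell * cells_per_column             # worst case: all cells conducting
    print(f"{resistance:>9.0f} ohm cells -> column current ~ {total*1e3:.2f} mA")
```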
Kuzum noted that their RRAM retains data at room temperature for years, making its retention comparable to that of conventional flash memory. This longevity is especially important for edge computing, where devices need to learn and adapt to their environments on their own.
The possible uses of this technology extend beyond data storage and raw processing speed. As Kuzum explained, “We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications.” This focused development underscores the San Diego team’s commitment to pushing the boundaries of RRAM technology for artificial intelligence.
Project collaborator Albert Talin noted that integration is a critical consideration in this emerging research field. “I think that any step in terms of integration is very useful,” he said, highlighting the ongoing efforts to refine and enhance these devices.
The San Diego group wasn’t the first to develop bulk RRAM devices, but it has made notable strides since: its advances in miniaturization and 3D circuit formation are a major step forward for this emerging technology. Thanks to its performance metrics and design, the team’s RRAM stands out as a strong contender in the rapidly developing field of memory technologies.

