Researchers at the University of California, San Diego, are breaking new ground in resistive random-access memory (RRAM) technology, an advance that could significantly improve the performance of applications that rely on artificial intelligence. Duygu Kuzum and her team have engineered a new kind of RRAM that holds data at room temperature for decades, rivaling legacy flash memory, and that solves many of the challenges facing current RRAM technologies.
In an invited talk at the IEEE International Electron Devices Meeting (IEDM), Kuzum described the durability of the team's RRAM design, stressing its ability to retain data for the long haul. Retention at higher temperatures, she said, is still unknown, which could be a decisive issue for use cases that need dependable data storage in extreme conditions.
Kuzum pointed to the advantages of their bulk RRAM design. Beyond incremental improvements, she explained that it can perform multiple costly operations that are key to neural network models running on edge devices. "We're still busy with characterization and optimization of materials. Our aim is to create a multi-purpose device purpose-built for AI workloads and to open up processing power in the process," she said.
The San Diego team has made impressive advances in miniaturizing RRAM technology, demonstrating devices as small as 40 nanometers in their most striking nanoscale result. They connected many such eight-layer stacks to create a 1-kilobyte array. The design operates without selectors, greatly streamlining the architecture and potentially increasing performance.
Kuzum elaborated on one of the biggest benefits of the new design: a large resistance range. Most filament-based memory cells have resistances of only a few kiloohms; their stack, by comparison, reaches the megaohm range. This improvement enables far better parallel operation, something that is especially important for neural networks.
The team's experiments demonstrated a remarkable 90 percent accuracy with their RRAM technology, close to the state-of-the-art performance of neural networks implemented digitally. Kuzum explained that an eight-layer stack can address 64 distinct resistance values with a single pulse of the same voltage, a testament to the device's versatility and its capacity for complex computation.
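One way to picture what 64 distinct resistance values buy you is weight quantization: a neural-network weight stored in such a cell can take one of 64 evenly spaced levels. The snippet below is a generic sketch of uniform 64-level quantization, not the team's encoding scheme; the weight range and level spacing are assumptions:

```python
# Hypothetical sketch: mapping an analog weight onto one of 64
# discrete levels, as a cell with 64 addressable resistance states
# could store. Uniform spacing over [-1, 1] is an assumption.

N_LEVELS = 64

def quantize(w, n_levels=N_LEVELS):
    """Snap a weight in [-1, 1] to the nearest of n_levels values."""
    w = max(-1.0, min(1.0, w))
    step = 2.0 / (n_levels - 1)          # spacing between levels
    return round((w + 1.0) / step) * step - 1.0

weights = [0.37, -0.82, 0.05, 0.999]
for w in weights:
    q = quantize(w)
    print(f"{w:+.3f} -> {q:+.5f} (error {abs(w - q):.5f})")
```

With 64 levels the worst-case quantization error is half a step, about 1.6 percent of the full range, which helps explain why accuracy close to a full-precision digital network is plausible.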
Kuzum went on to point out some of the challenges with filamentary RRAM, chief among them the difficulty of performing parallel matrix operations, a key requirement of modern neural networks. The new bulk RRAM design sidesteps these issues, delivering a highly energy-efficient solution for AI applications.
Albert Talin, another researcher who contributed to the work, shared his excitement about the progress achieved so far. "I think any step in the direction of integration is an extraordinary thing to do," he said, underscoring the importance of advancing device technologies to meet the growing demands of AI and machine learning workloads.
Through this research, the San Diego team has created a cutting-edge approach to RRAM technology that improves data retention while enhancing device operation. By rethinking how RRAM switches, Kuzum and her colleagues have laid the groundwork for memory solutions that better support the complex demands of modern neural networks.

