Researchers at the University of California, San Diego, have developed a new type of resistive random-access memory (RRAM) with properties that could help neural networks run efficiently, particularly on edge devices. Led by Duygu Kuzum, the team presented its findings at the IEEE International Electron Devices Meeting (IEDM) in December. The work points to a possible route around the memory constraints that have held back AI applications.
Electric shorts
Kuzum noted that filamentary RRAM poses challenges for the parallel matrix operations at the heart of today’s neural networks. The San Diego group’s RRAM instead switches at very high resistance, in the megaohm range, a characteristic Kuzum believes is exactly what those operations need. That rethinking of the device, rather than an incremental improvement, is what the team sees as the real advance.
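The appeal of resistive memory for these operations comes from basic circuit laws: program each cell’s conductance to a matrix entry, apply the input vector as row voltages, and the column currents are the matrix-vector product. Below is a minimal NumPy sketch of that idea, using assumed device values rather than figures from the paper:

```python
import numpy as np

# Illustrative sketch only (not the UCSD team's code): a crossbar of
# programmable conductances performs a matrix-vector multiply in one step.
# Each column current is the sum of voltage * conductance down that column,
# following Ohm's and Kirchhoff's laws.

rng = np.random.default_rng(0)

# Conductances in the microsiemens range correspond to megaohm-scale
# resistances, the regime the San Diego devices switch in.
G = rng.uniform(0.1e-6, 1.0e-6, size=(8, 8))   # conductance matrix, in siemens
v = rng.uniform(0.0, 0.5, size=8)              # input voltages on the rows

i_cols = v @ G   # column currents: one analog read yields the whole product
print(i_cols)
```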
Kuzum and her colleagues went further, combining several eight-layer stacks into a 1-kilobyte array that operates without selectors. Doing away with selectors reduces design complexity and, more importantly, serves as a proof of scalability and integration potential for AI applications.
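A rough, back-of-the-envelope sketch of why megaohm-range cells make a selector-free array more practical; the read voltage, array scale, and resistances below are assumptions for illustration, not numbers from the paper:

```python
# Illustrative estimate only: unselected cells in a selector-free array leak
# current through "sneak paths". For a given read voltage, the higher each
# cell's resistance, the smaller that parasitic current.

READ_VOLTAGE = 0.2          # volts (assumed)
UNSELECTED_CELLS = 1024     # roughly the scale of a 1-kilobyte array (assumed)

for r_cell in (10e3, 1e6):  # kiloohm-range vs. megaohm-range cells
    leak = UNSELECTED_CELLS * READ_VOLTAGE / r_cell   # amperes, worst case
    print(f"R_cell = {r_cell:,.0f} ohms -> sneak current ~ {leak * 1e3:.2f} mA")
```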
The researchers also scaled the RRAM devices down to nanoscale dimensions, as small as 40 nanometers in diameter. That miniaturization matters because smaller memory cells allow denser arrays and typically better energy efficiency.
In the work presented at IEDM, Kuzum and her team also demonstrated that they could run a learning algorithm on the new kind of RRAM. The device stored data reliably at room temperature for decades, a retention comparable to, or better than, that of traditional flash memory.
Kuzum said she is hopeful the findings will support neural network models on edge devices, which must learn from their surroundings without relying heavily on cloud connectivity. “We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications,” she stated.
The San Diego group reported an accuracy of 90 percent with its RRAM technology, a level of performance that rivals that of neural networks implemented digitally. That accuracy matters as industries increasingly turn to AI-based tools and services.
To build the device, the researchers fabricated a stack of eight individual cells in a repeating structure. Each cell in the stack can be set to one of 64 different resistance values, giving the device the flexibility to carry out complex, nuanced operations quickly and reliably.
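Storing one of 64 resistance values per cell amounts to 6-bit weight storage (2^6 = 64). Here is a hedged sketch of how a trained weight matrix might be mapped onto such discrete levels; the uniform quantizer below is an assumption for illustration, not the team’s actual programming scheme:

```python
import numpy as np

# Illustrative sketch (assumed mapping, not the team's): storing a neural-
# network weight as one of 64 discrete resistance levels is equivalent to
# 6-bit weight quantization (2**6 = 64).

LEVELS = 64

def quantize_to_levels(weights: np.ndarray, levels: int = LEVELS) -> np.ndarray:
    """Map real-valued weights onto `levels` evenly spaced values."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (levels - 1)
    codes = np.round((weights - w_min) / step)        # integer level 0..63
    return w_min + codes * step                       # back to weight units

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.05, size=(256, 128))            # a toy weight matrix

w_q = quantize_to_levels(w)
print("unique levels used:", np.unique(w_q).size)     # at most 64
print("max quantization error:", np.abs(w - w_q).max())
```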
Kuzum acknowledges that other groups have fabricated bulk RRAM devices before, but she points to her team’s redesign of the device and its switching mechanism as the key advance. “We actually redesigned RRAM, completely rethinking the way it switches,” she noted.
Albert Talin, a colleague involved in the research, added, “I think that any step in terms of integration is very useful.” This statement reflects the ongoing collaboration and shared vision among researchers to push the boundaries of what RRAM technology can achieve.

