Researchers in San Diego have achieved a breakthrough in resistive random-access memory (RRAM), a technology with enormous potential to keep pace with the growing computational demands of artificial intelligence (AI) and hyperscale computing. The team showed that RRAM can hold information at room temperature for up to five years, an important result for applications that require persistent data storage.
How RRAM retention behaves at higher operating temperatures remains an open question. Conventional memory technologies have been unable to keep pace with the increasingly complex requirements of advanced models, creating demand for new solutions such as RRAM.
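To see why high-temperature retention matters, consider how retention lifetimes are commonly extrapolated with an Arrhenius model, t(T) = t0 · exp(Ea / (kB·T)). The sketch below is purely illustrative: the activation energy and prefactor are hypothetical values chosen so that room-temperature retention comes out near five years, not figures reported by the San Diego team.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K
EA = 1.0        # hypothetical activation energy in eV (illustrative only)

def retention_seconds(temp_k, t0, ea=EA):
    """Arrhenius extrapolation: lifetime grows exponentially as T drops."""
    return t0 * math.exp(ea / (K_B * temp_k))

# Pick the prefactor t0 so that retention at room temperature (300 K)
# equals five years, matching the figure reported in the article.
FIVE_YEARS = 5 * 365 * 24 * 3600
t0 = FIVE_YEARS / math.exp(EA / (K_B * 300))

# At 85 C (358 K), a common elevated operating condition, the same model
# predicts a lifetime hundreds of times shorter -- illustrating why
# retention at higher temperatures remains an open question.
t_hot = retention_seconds(358.0, t0)
```

Under these assumed parameters, the modeled lifetime at 85 °C shrinks from years to days, which is why characterizing devices across temperature is part of qualifying any memory technology.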
Duygu Kuzum and her team designed an eight-layer stack of RRAM cells in which each cell can store up to 64 different resistance levels. This design allows data to be programmed with a single pulse at the same voltage level, significantly simplifying the programming procedure.
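A cell with 64 distinguishable resistance levels stores log2(64) = 6 bits. The sketch below is a generic illustration of how a continuous value, such as a neural-network weight, could be mapped onto one of 64 discrete states; the weight range and mapping are assumptions for illustration, not the team's actual programming scheme.

```python
import numpy as np

LEVELS = 64
BITS_PER_CELL = int(np.log2(LEVELS))  # 64 levels encode 6 bits per cell

def quantize_to_level(weight, w_min=-1.0, w_max=1.0, levels=LEVELS):
    """Map a continuous weight onto the nearest of `levels` discrete states."""
    # Normalize to [0, 1], then round to the closest level index.
    t = (np.clip(weight, w_min, w_max) - w_min) / (w_max - w_min)
    return int(round(t * (levels - 1)))

def level_to_weight(level, w_min=-1.0, w_max=1.0, levels=LEVELS):
    """Recover the approximate weight represented by a level index."""
    return w_min + (level / (levels - 1)) * (w_max - w_min)

# A weight of 0.5 lands on a nearby level and is recovered with an error
# of at most half the level spacing.
lvl = quantize_to_level(0.5)
approx = level_to_weight(lvl)
```

The payoff of multi-level cells is density: six bits per cell means six times fewer cells than a binary memory for the same capacity, at the cost of needing precisely controlled resistance states.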
Fabricating a 1-kilobyte RRAM array was itself an impressive feat for the San Diego researchers. The array operates entirely without selectors, a major advance in RRAM technology, and achieved 90 percent accuracy in testing, underscoring its potential effectiveness and efficiency in real-world applications.
“We actually redesigned RRAM, completely rethinking the way it switches.” – Duygu Kuzum
Alongside the eight-layer configuration, the team designed a bulk RRAM device that can execute complex operations. This multi-level device provides more resistance levels than conventional RRAM and operates reliably in the megaohm range. The higher resistance of bulk RRAM makes it better suited to parallel operations, an ability that is critical for addressing today's computing challenges.
Kuzum believes that bulk RRAM can dramatically improve neural network models, particularly those deployed on-device or at the edge. As AI continues to be built on larger and more complex models, better memory solutions will be crucial.
The San Diego team also pushed toward miniaturization, engineering RRAM devices only 40 nanometers wide. Miniaturization is essential for incorporating RRAM into ever-smaller systems without sacrificing performance.
Conventional filament-based RRAM is fundamentally limited by its narrow range of resistance states and its difficulty performing matrix operations in parallel. Bulk RRAM, by contrast, performs better thanks to its larger resistance states: filament-based cells typically max out in the kiloohm range, while bulk RRAM works well over a far broader range extending into the megaohms.
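The link between resistance range and parallelism can be sketched with the standard analog crossbar picture: stored values act as conductances G = 1/R, and applying read voltages to the rows produces column currents I = V·G (Ohm's law summed by Kirchhoff's current law), computing a matrix-vector product in a single step. The numbers below are illustrative assumptions, not measured device data.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 128, 64

def column_currents(resistances_ohm, volts):
    """Analog MVM on a crossbar: each column current sums V_i * G_ij."""
    conductance = 1.0 / resistances_ohm
    return volts @ conductance  # result has shape (cols,)

volts = rng.uniform(0.0, 0.2, size=rows)  # small read voltages per row

# Filament-based cells: kiloohm-range resistances. Summing many rows
# produces large column currents, which limits how many rows can be
# activated at once.
r_filament = rng.uniform(1e3, 10e3, size=(rows, cols))
i_filament = column_currents(r_filament, volts)

# Bulk cells: megaohm-range resistances draw roughly 1000x less current
# per cell, so far more rows fit within the same wiring current budget,
# enabling larger matrix operations in parallel.
r_bulk = rng.uniform(1e6, 10e6, size=(rows, cols))
i_bulk = column_currents(r_bulk, volts)
```

Lower per-cell current is the practical reason a megaohm-range device scales better: the bottleneck in crossbar arrays is usually the total current a column wire and its sense amplifier can handle.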
“We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications.” – Duygu Kuzum
Albert Talin, a third researcher on the project, underscored the need for integration in memory technology: “I think that any step in terms of integration is very useful.” His comment reflects a growing recognition that better memory solutions are needed, and that they must remain viable as computational demands evolve rapidly.
This is no small feat. The San Diego team's recent advances move the field significantly closer to overcoming the limitations of today's conventional memory technologies. Just as important, the researchers are iteratively improving their designs and optimizing materials. Their innovations could fundamentally change how memory is used in both AI and high-performance computing environments.

