Breakthrough in RRAM Technology Promises Enhanced Memory Solutions for AI Applications

By Tina Reynolds

Researchers at the University of California, San Diego have made notable recent breakthroughs in resistive random-access memory (RRAM) technology. Their work could address key hurdles in memory efficiency for artificial intelligence (AI) applications. The team, directed by Duygu Kuzum, has shrunk RRAM cells to just 40 nanometers and created a variant without the filament structures common to traditional RRAM. The technology could improve the processing efficiency that is crucial to neural networks.

This new filament-free RRAM is notable for another reason: it sidesteps the shortcomings of traditional filamentary RRAM. Kuzum noted that filament-based RRAM struggles with the parallel matrix operations required to run neural networks, because the high-resistance state of standard filamentary cells is limited to the kiloohm range. By comparison, the team's new RRAM stack achieves a high-resistance state of 100 kiloohms, drastically improving its usefulness in dense arrays.
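To see why a higher resistance state matters for parallel matrix operations, a quick Ohm's-law estimate helps: during a parallel read, every cell on a bit line contributes current at once. The sketch below uses hypothetical read-voltage and array-size numbers (assumptions for illustration, not figures from the paper) to compare the kiloohm-range filamentary cells with the new 100-kiloohm stack.

```python
# Back-of-the-envelope: current drawn during a parallel read of one
# RRAM column. The read voltage and column height are assumptions for
# illustration, not values reported by the UC San Diego team.

V_READ = 0.2             # volts (assumed)
CELLS_PER_COLUMN = 1024  # cells summing onto one bit line (assumed)

def column_current(r_high_ohms: float) -> float:
    """Worst-case bit-line current with every cell in its
    high-resistance state (Ohm's law: I = V / R per cell)."""
    return CELLS_PER_COLUMN * V_READ / r_high_ohms

i_filament = column_current(1e3)     # ~1 kiloohm HRS, filamentary RRAM
i_new_stack = column_current(100e3)  # ~100 kiloohm HRS, new stack

print(f"filamentary: {i_filament * 1e3:.1f} mA per column")
print(f"new stack:   {i_new_stack * 1e3:.3f} mA per column")
```

Under these assumed numbers, the 100-times-higher resistance cuts the summed read current by the same factor of 100, which is what makes large parallel arrays practical to power.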

Kuzum and her colleagues illustrated how their invention could be used in real-world applications by building a 1-kilobyte array out of hundreds of eight-layer stacks of their RRAM. Significantly, the array required no selector devices, which streamlines the design and improves integration possibilities.

“We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications,” – Duygu Kuzum

The multidisciplinary research team presented its findings at the IEEE International Electron Devices Meeting (IEDM), where the members demonstrated a learning algorithm trained on their freshly fabricated RRAM. The device can realize 64 distinct resistance values, allowing it to perform more complex operations than conventional RRAM technologies. In tests, it reached an accuracy of 90 percent, on par with neural networks implemented digitally.
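Sixty-four resistance values amount to 6 bits of precision per cell. To get a feel for what that means for a neural network, the sketch below snaps floating-point weights to the nearest of 64 evenly spaced levels. The linear mapping and the weight range are assumptions made for this illustration; the paper's actual conductance mapping may differ.

```python
import numpy as np

# Illustrative only: map floating-point network weights onto 64 evenly
# spaced levels (6 bits), matching the number of resistance states the
# reported device can realize. The [-1, 1] range and linear spacing are
# assumptions for this sketch, not details from the paper.

N_LEVELS = 64

def quantize(weights: np.ndarray, w_min: float = -1.0, w_max: float = 1.0) -> np.ndarray:
    """Snap each weight to the nearest of N_LEVELS evenly spaced values."""
    step = (w_max - w_min) / (N_LEVELS - 1)
    idx = np.clip(np.round((weights - w_min) / step), 0, N_LEVELS - 1)
    return w_min + idx * step

w = np.array([-0.83, 0.02, 0.5, 0.97])
print(quantize(w))
# Worst-case quantization error is half a step: (2 / 63) / 2, about 0.016.
```

The small worst-case rounding error suggests why a 6-bit device can approach the accuracy of a digital implementation for many inference workloads.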

Kuzum went on to point out that their bulk RRAM design would be especially well suited to neural network models deployed on edge devices. Such devices need to learn from the environments around them without relying on continuous, cloud-based computation. The San Diego group's innovation also allows the RRAM to execute learning algorithms using single, identical voltage pulses, streamlining processing.
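One way to picture why identical voltage pulses simplify on-chip learning: instead of programming each cell by a value-dependent amount, every selected cell receives one fixed-size conductance step in the direction of the gradient's sign. The sketch below shows such a sign-based, fixed-step update rule; it is an illustration of the general idea, not the team's actual training scheme.

```python
import numpy as np

# Sketch of learning with identical pulses: each cell's weight moves by
# one fixed step (one pulse) against the sign of its gradient, rather
# than by a gradient-scaled amount. STEP here is one level of an assumed
# 64-level range over [-1, 1]; both values are illustrative assumptions.

STEP = 2.0 / 63

def pulse_update(weights: np.ndarray, grads: np.ndarray) -> np.ndarray:
    """Apply one identical-magnitude pulse per cell, signed by the gradient.
    Cells with zero gradient receive no pulse."""
    return np.clip(weights - STEP * np.sign(grads), -1.0, 1.0)
```

Because every pulse is identical, the programming circuitry never has to generate per-cell analog voltages, which is what makes this style of update attractive for low-power edge hardware.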

In retention tests carried out at room temperature, the San Diego team's RRAM demonstrated long data retention, on the order of years, approaching that of the best available flash memory. Albert Talin, a researcher at Sandia National Laboratories, noted that retention performance at elevated temperatures, which many computing devices actually run at, is still unknown.

“I think that any step in terms of integration is very useful,” – Albert Talin

The San Diego researchers have gone even further, creating three-dimensional circuits with their RRAM technology. Stacking enables more compact designs and faster memory performance, a necessity as conventional memory technologies can no longer keep pace with ever-larger AI models. The resulting "memory wall" is a persistent threat to efficient computing, and this RRAM innovation could be an important step toward moving past it.

Kuzum highlighted the broader implications of their work, stating, "We actually redesigned RRAM, completely rethinking the way it switches." With this multi-pronged approach, memory technology may soon be much more in tune with the needs of advanced AI systems.

AI is a rapidly evolving technology that is affecting nearly every industry, and advances such as the RRAM approach developed at UC San Diego will be essential in meeting its escalating processing demands. Translating these breakthroughs into practical applications could unlock substantial computational savings across a wide range of fields.