Breakthrough in Memory Technology with Bulk RRAM

By Tina Reynolds

Researchers at UC San Diego have pushed memory technology forward on several fronts. They showed that a learning algorithm can run efficiently on a new class of resistive random-access memory (RRAM) known as Bulk RRAM. This disruptive technology promises significant gains for artificial intelligence (AI) and neural networks, among many other applications. Better still, it is well suited to edge devices that require strong, local processing power.

What sets Bulk RRAM apart from conventional RRAM is its far greater number of resistance levels. Each cell in this advanced memory technology can be set to one of 64 distinct resistance levels, which allows much richer data to be represented and processed. The higher resistance values, in the megaohm range, also keep operating currents low, making large parallel operations more practical. This kind of advance is overdue, and it is needed to meet the demands of today’s computing workloads.
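As a rough, purely illustrative note rather than a detail from the paper: 64 levels per cell corresponds to 6 bits of information, versus 1 bit for a conventional binary cell. The sketch below, using a hypothetical quantize_to_level helper, shows the arithmetic and how a normalized value might map onto those discrete states.

```python
import math

# A Bulk RRAM cell can be set to one of 64 distinct resistance levels,
# so each cell stores log2(64) = 6 bits of information.
LEVELS = 64
bits_per_cell = math.log2(LEVELS)
print(f"Bits per cell: {bits_per_cell:.0f}")  # -> 6

# Hypothetical illustration: quantize a normalized value in [0, 1]
# onto the nearest of the 64 discrete states.
def quantize_to_level(value: float, levels: int = LEVELS) -> int:
    """Map a value in [0, 1] to the nearest of `levels` discrete states."""
    return min(levels - 1, max(0, round(value * (levels - 1))))

print(quantize_to_level(0.73))  # -> 46
```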

The researchers built a vertical, eight-layer stack of Bulk RRAM cells that can be programmed with a single pulse of the same voltage. This approach makes it easier to combine several layers into a smaller overall structure, effectively increasing memory capacity without changing the physical size. The San Diego team then assembled multiple eight-layer stacks into a 1-kilobyte array. The design runs cool and works effectively without the costly, failure-prone selector devices that conventional arrays require, a breakthrough that could change the landscape of memory.
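To make the density argument concrete, here is a back-of-the-envelope sketch with assumed numbers (apart from the eight-layer figure, none of these values come from the study): stacking layers multiplies capacity while the footprint stays fixed.

```python
# Hypothetical capacity illustration: vertical stacking multiplies capacity
# without enlarging the chip area the array occupies.
LAYERS = 8               # vertical Bulk RRAM layers per stack (from the study)
CELLS_PER_LAYER = 1024   # assumed footprint, e.g. a 32 x 32 crossbar
BITS_PER_CELL = 6        # 64 resistance levels -> log2(64) = 6 bits

total_bits = LAYERS * CELLS_PER_LAYER * BITS_PER_CELL
print(f"{total_bits} bits in the footprint of a single {CELLS_PER_LAYER}-cell layer")
```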

According to Duygu Kuzum, the senior author of the paper, the work amounted to a ground-up redesign.

“We actually redesigned RRAM, completely rethinking the way it switches.” – Duygu Kuzum

The researchers have already made significant progress toward applying Bulk RRAM to AI. They have shrunk the devices to the nanoscale, with each unit measuring just 40 nanometers across. This miniaturization opens the door to more complex three-dimensional circuits that boost performance and capacity without increasing the memory’s physical footprint.

Extensive classification testing shows that Bulk RRAM can achieve an impressive accuracy of 90 percent, a performance that exceeds that of current state-of-the-art digitally implemented neural networks. That level of performance is hugely advantageous for neural network models running on edge devices, which need to learn and adapt on their own without relying on cloud computing resources.

Bulk RRAM is also remarkably adaptable. Its strength lies in its ability to execute large matrix operations in parallel, the bedrock of today’s neural networks. Conventional RRAM depends on conductive filaments, which creates scaling problems; Bulk RRAM eliminates that dependency, making it easier to adopt for general computational workloads.
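As a purely illustrative sketch of why parallel matrix operations map so naturally onto resistive memory (this is not code from the study, and the array sizes and values are made up): in a crossbar, each cell’s conductance acts as a weight, input voltages drive the rows, and the current summing on each column gives the corresponding dot product, so an entire matrix-vector multiply happens in one step.

```python
import numpy as np

# Illustrative simulation of in-memory matrix-vector multiplication on a
# resistive crossbar. Conductances encode weights; row voltages encode inputs;
# by Ohm's and Kirchhoff's laws the column currents are the dot products.
# All numbers here are hypothetical, not measurements from the UCSD devices.

LEVELS = 64  # distinct conductance states per cell, as in Bulk RRAM

def program_crossbar(weights: np.ndarray) -> np.ndarray:
    """Quantize a weight matrix onto LEVELS evenly spaced states."""
    w_min, w_max = weights.min(), weights.max()
    steps = np.round((weights - w_min) / (w_max - w_min) * (LEVELS - 1))
    return w_min + steps * (w_max - w_min) / (LEVELS - 1)

def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Column 'currents' = voltages @ conductances: one parallel multiply-accumulate."""
    return voltages @ conductances

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(8, 4))  # 8 inputs feeding 4 outputs
inputs = rng.uniform(0, 1, size=8)

g = program_crossbar(weights)
print("analog (64-level) result:", crossbar_mvm(g, inputs))
print("exact digital result:    ", inputs @ weights)
# Real hardware encodes signed weights with pairs of cells and contends with
# device noise; this sketch skips those details to show the parallelism idea.
```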

Recognizing the need for further optimization, Kuzum described ongoing efforts to tailor these devices specifically for AI.

“We are doing a lot of characterization and material optimization to design a device specifically engineered for AI applications.” – Duygu Kuzum

The researchers provided extensive evidence that Bulk RRAM can reliably retain data at room temperature for more than 10 years, outperforming conventional flash memory in this respect. However, challenges remain for data stored at high temperatures, and since computing devices commonly operate at room temperature or above, more work is needed to determine Bulk RRAM’s durability in those environments.

Albert Talin, a colleague directly engaged in this line of research, said achieving integration will be key to moving memory with on-chip processing closer to commercialization.

“I think that any step in terms of integration is very useful.” – Albert Talin

These advances make Bulk RRAM an inflection point in memory technology. By addressing traditional RRAM’s shortcomings, the researchers at the University of California, San Diego (UCSD) have delivered a much stronger alternative, one that opens the door to greater efficiency across AI applications and edge computing.