Recent breakthroughs in artificial intelligence (AI) have demonstrated the thrilling possibilities behind reservoir computing, a technology that can achieve dramatically greater efficiency and lower power consumption than classical neural networks. The idea is not a new one, though; its roots go back to the 1990s. A reservoir operates at the so-called edge of chaos, allowing an otherwise small neural network to represent many different potential states.
A research team at the University of California, Santa Barbara has recently demonstrated an exciting new alternative. Led by assistant professor Sanjukta Krishnagopal and TDK’s section head Tomoyuki Sasaki, the team designed a chip that showcases the strengths of reservoir computing. The chip consumes only 20 microwatts of power per core, and each of its four cores contains 121 nodes. This development marks one of the biggest leaps yet toward smarter, more energy-efficient AI systems.
Understanding Reservoir Computing
Reservoir computing employs a branching structure that links neurons together without sharp divisions or layers. It is remarkably efficient compared to traditional neural networks because it does not require complex architectures with dozens of layers. Each node within this system comprises three essential components: a non-linear resistor, a memory element, and a buffer amplifier.
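The chip itself is analog hardware, but the same structure can be sketched in software as an echo state network, the standard reservoir-computing model. In this sketch, the fixed random weight matrix plays the role of the untrained internal connections, `tanh` stands in for the non-linear resistor, and leaky state mixing stands in for the memory element; the node count matches the 121 reported per core, while the leak rate and spectral-radius scaling are illustrative assumptions, not the chip's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 121           # matches the node count reported per core
LEAK = 0.3              # assumed leak rate; leaky mixing stands in for the memory element
SPECTRAL_RADIUS = 0.95  # kept near 1.0 so the dynamics sit near the edge of chaos

# Fixed, random internal connections -- these are never trained.
W = rng.normal(size=(N_NODES, N_NODES))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N_NODES)  # input connections, also fixed

def step(state, u):
    """One reservoir update: tanh is the non-linearity, leaky mixing is the memory."""
    pre = W @ state + W_in * u
    return (1 - LEAK) * state + LEAK * np.tanh(pre)

# Drive the reservoir with a short sine-wave input.
state = np.zeros(N_NODES)
for u in np.sin(np.linspace(0, 4 * np.pi, 50)):
    state = step(state, u)
```

The key point the sketch makes concrete is that nothing inside the loop is learned: the reservoir only transforms the input stream into a rich internal state.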
Perhaps the most fascinating aspect of reservoir computing is its operation at the edge of chaos, which allows the reservoir to encode a huge variety of possible states with a small number of neurons. As Krishnagopal explains,
“Your reservoir is usually operating at what’s called the edge of chaos, which means it can represent a large number of possible states, very simply, with a very small neural network.”
This key feature enables reservoir computing to process intricate data patterns with neural-network-like performance in a highly efficient manner.
Power Consumption and Efficiency
Saving power has become one of the central issues driving AI technology development. Conventional neural networks demand extensive architectural tuning and the training of billions of weights, which costs a great deal of time and energy. Reservoir computing simplifies this process: the connections inside the reservoir stay fixed, and only the readout weights connecting the reservoir to the output are adjusted during training.
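This is where the training savings come from, and it can be shown in a few lines. A minimal sketch, assuming recorded reservoir states and scalar targets of hypothetical shapes: fitting the readout is a single ridge-regression solve rather than an iterative pass over billions of weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: 121-node reservoir states recorded over
# 200 time steps, paired with a scalar target at each step.
states = rng.normal(size=(200, 121))   # stand-in for recorded reservoir activity
targets = rng.normal(size=200)         # stand-in for training targets

# Ridge regression fits ONLY the readout weights; the reservoir is untouched.
RIDGE = 1e-6  # small regularizer to keep the solve well-conditioned
A = states.T @ states + RIDGE * np.eye(121)
w_out = np.linalg.solve(A, states.T @ targets)

predictions = states @ w_out  # readout applied to each recorded state
```

The entire "training" step is one linear solve over 121 weights, which is the source of the time and energy savings the article describes.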
Sasaki emphasizes the major efficiency gains enabled by this new technology, writing,
“However, the power consumption, the operation speed, is maybe 10 times better than the present AI technology. That is a big difference.”
By combining minimal power draw with fast operation, reservoir computing offers a promising alternative to traditional approaches.
Predictive Capabilities and Future Applications
The predictive capabilities of reservoir computing only add to its allure, making it an attractive alternative for myriad applications. The design allows data to flow in one direction, forward, lending itself naturally to predictive processing and prediction based on past data. Sasaki elaborates on this aspect:
“If what occurs today is affected by yesterday’s data, or other past data, it can predict the result.”
These characteristics make reservoir computing an exciting prospect for real-time and online applications, where fast data analysis and prediction are crucial.
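The kind of prediction Sasaki describes, where past data shapes today's output, can be sketched end to end with the same echo-state-network analogue used above. All parameters here (node count aside, which matches the reported 121) are illustrative assumptions: the reservoir is driven by a sine wave, and the readout is trained to predict each next value from the current reservoir state.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 121  # node count matching the reported per-core figure

# Fixed random reservoir; only the readout below is trained.
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # assumed spectral radius
W_in = rng.normal(size=N)

def run(signal):
    """Feed the signal through the reservoir, collecting the state at each step."""
    x = np.zeros(N)
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
        states.append(x)
    return np.array(states)

# Train the readout to map today's state to tomorrow's value.
series = np.sin(np.arange(300) * 0.1)
states = run(series[:-1])       # states seen up to each step
targets = series[1:]            # the next value at each step
w_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N),
                        states.T @ targets)

pred = states @ w_out                            # one-step-ahead predictions
err = np.max(np.abs(pred[50:] - targets[50:]))   # ignore the initial warm-up
```

Because the reservoir state at any moment carries a fading memory of earlier inputs, the trained readout can anticipate the next value, which is exactly the "yesterday's data predicts today" behavior quoted above.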