Stephen Whitelam, a staff scientist at Lawrence Berkeley National Laboratory in California, and a colleague introduced a thermodynamic version of a neural network on January 10. Their results, published in Nature Communications, showcase the potential of thermodynamic computing, a technique that could sharply reduce the energy needed to generate images.
Whitelam’s approach, implemented on a thermodynamic computer, represents one such advance: hardware that can process high-dimensional images with far less energy than standard digital hardware. “This research suggests that it’s possible to make hardware to do certain types of machine learning — here, image generation — with considerably lower energy cost than we do at present,” he said. Today’s digital neural networks consume large and growing amounts of energy; by the researchers’ estimates, a thermodynamic computer could perform the same task with roughly one ten-billionth of that energy.
As exciting as the technology sounds, Whitelam cautioned that thermodynamic computers are still in their infancy compared with their digital cousins. “We don’t yet know how to design a thermodynamic computer that would be as good at image generation as, say, DALL-E,” he remarked. The energy savings are so far estimates, and the approach remains a proof of concept, but it could plant the seeds for more sustainable AI.
Normal Computing, a New York City startup, built the prototype chip. The chip is composed of eight resonators, or “drums,” linked together by couplers. This design lets the chip draw its randomness from its own thermal fluctuations, so it can do its job without an expensive digital neural network or the pseudorandom number generators that digital hardware uses to mimic noise.
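To make that concrete, here is a minimal numerical sketch of how coupled, noise-driven resonators can sample from a distribution fixed by their couplings. It is not Normal Computing’s actual circuit design: the nearest-neighbor coupling matrix and every parameter value below are assumptions of this toy model.

```python
import numpy as np

# Minimal sketch of noise-driven coupled resonators (illustrative only;
# not Normal Computing's circuit design). Eight overdamped oscillators
# relax under a coupling matrix K while being kicked by Gaussian noise,
# which stands in for the chip's intrinsic thermal fluctuations. The
# parameter values (k, c, temperature, dt) are made up for this toy model.

rng = np.random.default_rng(0)

n = 8                # number of resonators ("drums")
k = 1.0              # on-site stiffness of each resonator
c = 0.2              # coupling strength between neighbors
temperature = 0.05   # sets the amplitude of the thermal noise
dt = 1e-3            # integration time step

# Stiffness matrix with nearest-neighbor couplings.
K = k * np.eye(n)
for i in range(n - 1):
    K[i, i + 1] = K[i + 1, i] = -c

x = np.zeros(n)      # resonator displacements
for _ in range(200_000):
    drift = -K @ x                                        # restoring force
    kick = np.sqrt(2 * temperature * dt) * rng.standard_normal(n)
    x += drift * dt + kick                                # Langevin update

# At long times, x samples a Gaussian whose covariance is temperature * K^-1:
# the physics itself performs the random sampling that digital hardware
# would delegate to a pseudorandom number generator.
print(np.round(x, 3))
```

The design point worth noticing is that noise here is a resource rather than a nuisance: the couplings shape which random states are likely, which is exactly the property a generative model needs.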
Whitelam’s continuing research is uncovering something promising. In experiments, he has found that a training process can coax such a thermodynamic computer into producing representations of handwritten digits. The link to mainstream image generation is direct: diffusion models work by progressively corrupting training images with noise, and neural networks are then trained to reverse each of these corruption steps. This is what allows diffusion models to create completely novel images from user-provided prompts; a toy version of the noising-and-denoising loop is sketched below.
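The sketch below is illustrative rather than Whitelam’s hardware scheme, and walks through the diffusion idea in one dimension. Because the “data” here is a simple Gaussian, the score function that a real diffusion model trains a neural network to approximate is known in closed form, which keeps the example self-contained; all parameter values are made up.

```python
import numpy as np

# Toy 1-D diffusion model (illustrative only; not Whitelam's scheme).
# Forward process: data is progressively mixed with Gaussian noise,
# x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
# Reverse process: starting from pure noise, each step nudges samples
# back toward the data using the score of the noised distribution.
# Here the data is Gaussian, so the score is exact; in a real diffusion
# model a trained neural network approximates it.

rng = np.random.default_rng(0)

T, beta = 500, 0.01                   # diffusion steps and noise rate
alpha = 1.0 - beta
abar = alpha ** np.arange(1, T + 1)   # cumulative signal retained by step t

mu, var0 = 2.0, 0.1                   # mean and variance of the "data"

def score(x, t):
    # Exact score of the noised marginal N(sqrt(abar_t)*mu,
    # abar_t*var0 + (1 - abar_t)); a neural network would learn this.
    m = np.sqrt(abar[t]) * mu
    v = abar[t] * var0 + (1.0 - abar[t])
    return -(x - m) / v

# Reverse (generative) process: denoise pure noise into data-like samples.
x = rng.standard_normal(5000)
for t in reversed(range(T)):
    z = rng.standard_normal(x.size) if t > 0 else 0.0
    x = (x + beta * score(x, t)) / np.sqrt(alpha) + np.sqrt(beta) * z

print(f"generated mean {x.mean():.2f} (data mean {mu})")
print(f"generated variance {x.var():.2f} (data variance {var0})")
```

The appeal described in the article is that, on a thermodynamic computer, the hardware’s physical noise could supply the random kicks in this loop, rather than a power-hungry digital simulation of them.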
Whitelam also pointed to the challenges ahead: “It will still be necessary to work out how to build the hardware to do this.” He published a related study on thermodynamic computing in Physical Review Letters on January 20; that work ran on conventional computers and only simulated the training a thermodynamic computer would need.
Researchers at the forefront of this field have begun to investigate what thermodynamic computing makes possible. If it scales, the approach could become a far more energy-efficient alternative to conventional digital neural networks, and a meaningful step toward shrinking the environmental footprint of machine-learning applications.