Researchers Harness Thermal Noise for Innovative Computing Breakthrough

What if the thermal noise that traditionally hampers the efficiency of computers could be transformed into a power source? This is the aim of a newly emerging field known as thermodynamic computing. A collaboration between researchers at the Molecular Foundry and the National Energy Research Scientific Computing Center (NERSC), both U.S. Department of Energy user facilities at Lawrence Berkeley National Laboratory, is making strides toward this ambitious goal. Their findings, published in Nature Communications, outline a design and training framework for a thermodynamic computer that mimics a neural network, potentially revolutionizing energy efficiency in machine learning.

Modern computing consumes significant energy: a single Google search uses enough energy to run a six-watt LED for three minutes. Much of this demand stems from the need to manage thermal noise, which arises from the vibrations of charge carriers such as electrons in conductive materials. Classical computers operate at energy scales thousands of times greater than these fluctuations, and maintaining reliability at that margin carries a substantial energy cost. Both classical and quantum computing approaches typically work to suppress thermal noise. Thermodynamic computing flips this idea on its head, using the fluctuations themselves as a power source. This could significantly decrease the external energy needed for computations while allowing operation at room temperature, setting it apart from many quantum systems.

Stephen Whitelam, a staff scientist at the Molecular Foundry and co-author of the study, explained, “Thermodynamic computing is noise-powered.” He elaborated that a physical device with an energy scale similar to thermal energy will change states over time due to thermal fluctuations. The objective is to program these devices to produce useful outputs without the traditional energy overhead associated with classical and quantum systems.
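As a toy illustration of the noise-powered picture Whitelam describes, the sketch below simulates an overdamped Langevin particle in a double-well potential whose barrier is comparable to the thermal energy. The potential, parameters, and flip-counting rule are illustrative assumptions, not the device studied in the paper.

```python
import numpy as np

# Illustrative sketch only: an overdamped Langevin particle in a double-well
# potential U(x) = (x^2 - 1)^2, with thermal energy kT comparable to the
# barrier height (U(0) = 1). Noise alone drives the "device" between its two
# states; no external drive is applied. Parameters are assumptions for
# illustration, not taken from the paper.
rng = np.random.default_rng(0)

def simulate(kT=0.5, dt=1e-3, steps=200_000):
    """Return the number of noise-induced hops between the two wells."""
    x, side, flips = -1.0, -1, 0          # start settled in the left well
    for _ in range(steps):
        force = -4.0 * x * (x**2 - 1.0)   # deterministic force, -dU/dx
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        if side < 0 and x > 0.5:          # crossed into the right well
            side, flips = 1, flips + 1
        elif side > 0 and x < -0.5:       # crossed back into the left well
            side, flips = -1, flips + 1
    return flips
```

Running `simulate()` typically yields dozens of state changes over the simulated window, all powered by the noise term; lowering `kT` suppresses them, which is the regime conventional computers pay energy to stay in.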

Despite its promise, two main challenges have hindered the practical application of thermodynamic computing. Current designs require the system to reach thermodynamic equilibrium before calculations can be performed. This waiting period can be unpredictable and impractically long for regular computational tasks. Furthermore, existing thermodynamic computers have been limited to linear algebra problems, restricting their broader application.

In their research, Whitelam and co-author Corneel Casert addressed these limitations by using digital simulations to demonstrate that nonlinear computations, akin to those executed by neural networks, are feasible with thermodynamic computers operating outside of equilibrium. They found that when the components of the computer exhibit nonlinearity, it becomes possible to train the system to perform specified computations without needing to wait for equilibrium to be achieved.

Whitelam noted, “A nonlinear thermodynamic circuit can behave like a neuron in a neural network,” emphasizing the potential of these devices to mimic the capabilities of traditional neural networks for machine learning tasks. By integrating thermodynamic neurons into a connected architecture, the researchers believe they can harness the expressive power of neural networks for complex computations.
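One way to picture such a thermodynamic neuron, purely as a hedged sketch since the paper's actual circuit model is not reproduced here, is a noisy relaxation process whose time-averaged state approaches a nonlinear activation of its input:

```python
import numpy as np

# Hedged sketch, not the paper's circuit: a noisy unit relaxing toward
# tanh(u) under thermal fluctuations. Any single trajectory is random, but
# the time-averaged state approximates the smooth nonlinear activation a
# digital neural network would compute deterministically.
rng = np.random.default_rng(1)

def noisy_neuron(u, kT=0.1, dt=1e-3, steps=100_000):
    """Time-average of dx = (tanh(u) - x) dt + sqrt(2 kT dt) * noise."""
    x, total = 0.0, 0.0
    for _ in range(steps):
        x += (np.tanh(u) - x) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        total += x
    return total / steps
```

For an input of `u = 2.0` the average lands near `tanh(2) ≈ 0.96` despite the noise; wiring many such units together with weighted connections is, loosely, what a connected thermodynamic architecture would do.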

While the design is promising, training such a system presents its own challenges. Thermodynamic computers operate stochastically, so outcomes vary from run to run, and the gradient-based methods used to train digital neural networks do not apply directly. To overcome this, Casert engineered a large-scale computational framework on the Perlmutter supercomputer at NERSC, running extensive evolutionary simulations across 96 GPUs in parallel and evaluating billions of dynamical trajectories to identify optimal network parameters.

Casert implemented a genetic algorithm, starting with various thermodynamic neural networks, assessing their effectiveness, and then mutating the top performers by introducing random noise to their parameters. This extensive simulation effort culminated in over a trillion runs of a thermodynamic computer, resulting in a system that can function with minimal energy once it is built and trained.
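The evolutionary loop described above can be sketched in a few lines. Everything below is a toy stand-in: the quadratic fitness function replaces the expensive step, simulating a thermodynamic network's stochastic dynamics, and the population sizes and mutation scale are arbitrary choices, not those used in the study.

```python
import numpy as np

# Toy genetic algorithm in the spirit of the training loop described above:
# keep a population of parameter vectors, score each one, retain the top
# performers, and mutate them with Gaussian noise. The fitness function here
# (distance to a hypothetical "ideal" parameter vector) stands in for the
# expensive simulation of the thermodynamic network's dynamics.
rng = np.random.default_rng(42)

TARGET = rng.standard_normal(8)              # hypothetical ideal parameters

def fitness(params):
    return -np.sum((params - TARGET) ** 2)   # higher is better

def evolve(pop_size=64, elite=8, sigma=0.1, generations=200):
    pop = rng.standard_normal((pop_size, 8))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        best = pop[np.argsort(scores)[-elite:]]           # top performers
        # mutate the elites with random noise to refill the population
        parents = best[rng.integers(elite, size=pop_size)]
        pop = parents + sigma * rng.standard_normal((pop_size, 8))
        pop[:elite] = best                                # keep elites intact
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]
```

After a few hundred generations the best parameter vector sits close to the target; the real training differs mainly in scale, with the fitness of each candidate requiring many simulated stochastic trajectories.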

“It’s a very different way of optimizing a neural network,” Casert explained. “Training a thermodynamic neural network digitally is expensive, but once it’s developed into physical hardware, low-energy inference becomes feasible.”

The combination of innovative design and training methodologies indicates that a machine-learning computer with significantly reduced energy requirements is within reach. As the field of thermodynamic computing is still in its infancy, next steps include transitioning these designs into practical hardware. Whitelam emphasized the importance of seeking experimental partners to help realize both hardware and software implementations, paving the way for further exploration of this technology.

Additionally, the researchers aim to develop new algorithms tailored for systems that are no longer constrained by the need to operate at equilibrium. Algorithms for nonlinear computations, similar to those used in digital neural networks, must also be created to fully exploit the potential of thermodynamic computing.

“This is an exciting field,” Whitelam concluded. “We’re exploring more efficient ways of computing, and thermodynamic computing is definitely one of them.”

The future of thermodynamic computing holds promise for addressing the growing demands for energy-efficient computing solutions while expanding the capabilities of machine learning applications. As researchers continue to innovate in this area, the implications for technology and energy consumption could be profound.