Professor Todd Austin Introduces LEAN Metric to Optimize Computing Efficiency

Todd Austin, a professor of electrical engineering and computer science at the University of Michigan in Ann Arbor, has introduced a new metric that may transform the design of computer chips. The LEAN metric, short for Logic Executing Actual Numbers, is a new framework, but an intuitive one. It aims to address the fundamental inefficiencies in how today's chips use silicon.

The LEAN metric has drawn a warm reception along with some pushback from the tech community. Even critics with concerns about how it would be implemented agree with the logic behind Austin's approach. The LEAN metric points to a deeper realization: today, a huge chunk of the silicon in modern processors goes toward capabilities that don't directly advance computing workloads. As Austin points out, in some leading chips only 4.64 percent of the silicon is dedicated to efficient computing, and in others the figure is a mere 1.35 percent.

Austin makes the case for shifting the paradigm of traditional processor architecture so that the goal becomes maximizing computing resources and minimizing everything else. This call for change is particularly relevant in an era when computing efficiency is compromised by two main factors: precision loss and speculation loss. Precision loss comes from spending silicon on wider number formats than a computation actually needs. Speculative execution, a technique in which processors predict upcoming instructions and act on them before they are confirmed, frequently results in wasted processing power: in high-end, out-of-order CPUs, it is not unusual for two speculatively executed results to be discarded for every one that is useful.
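To make the cost of speculation concrete, here is a minimal back-of-the-envelope sketch in Python. It is not Austin's model; the function and its inputs are illustrative, and it simply turns the two-discarded-per-one-useful ratio quoted above into a fraction of useful work.

```python
# Minimal sketch (not Austin's model): how much speculatively executed
# work actually survives, given counts of retired vs. squashed results.

def useful_fraction(retired: int, squashed: int) -> float:
    """Fraction of executed instructions whose results are actually kept."""
    executed = retired + squashed
    return retired / executed if executed else 0.0

# With the article's ratio of two squashed results per useful one, only a
# third of the speculative work a high-end out-of-order CPU does pays off.
print(f"{useful_fraction(retired=1, squashed=2):.1%}")  # -> 33.3%
```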

To counteract these inefficiencies, Austin envisions a new arrangement of transistors on processors. All he wants to do is reposition the same 20 billion transistors we already have so that more of them do valuable, useful work. A score of 100 percent on the LEAN metric would signify that every transistor is actively engaged in computing tasks that contribute directly to the final output of a program.
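Read as a formula, the metric the article describes is simply the share of a chip's transistors doing directly useful computation. Here is a hedged sketch of that accounting in Python; the function name and arguments are illustrative assumptions, not a published definition:

```python
# Back-of-the-envelope LEAN score as described above: the percentage of a
# chip's transistors actively engaged in computation that contributes to
# the program's final output. Names here are illustrative, not official.

def lean_score(computing_transistors: float, total_transistors: float) -> float:
    """Percentage of transistors doing directly useful computing work."""
    return 100.0 * computing_transistors / total_transistors

# A perfect score would require every one of the roughly 20 billion
# transistors in a modern processor to be doing useful computation.
print(lean_score(computing_transistors=20e9, total_transistors=20e9))  # 100.0
```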

The ramifications of Austin's work reach far beyond academia. The Nvidia Blackwell GPU design exemplifies the problem, as it reportedly allocates over 95 percent of its silicon to tasks unrelated to efficient computing. That is what makes the Groq chip so striking: its comparatively efficient design reserves a full 15.24 percent of its silicon for computing tasks. Such comparisons highlight how much room for improvement remains across the industry.
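For a rough sense of the gap, the arithmetic below compares the two designs. The under-5-percent figure for Blackwell is an upper bound inferred from the "over 95 percent" overhead cited above, not a published LEAN score:

```python
# Rough comparison using the figures reported in this article. Blackwell's
# compute share is inferred as an upper bound from its quoted overhead.
blackwell_lean = 100.0 - 95.0   # under 5% of silicon computing
groq_lean = 15.24               # Groq's reported share of computing silicon

# 15.24 / 5 is about 3.0, so Groq devotes at least three times the
# fraction of its silicon to computation that Blackwell does.
print(f"Groq: {groq_lean}% vs Blackwell: under {blackwell_lean:.0f}%")
print(f"At least {groq_lean / blackwell_lean:.1f}x leaner")
```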

The new LEAN metric has arrived at a timely juncture: the end of Moore's Law, the decades-long expectation that the number of transistors on a microchip would double every two years. With traditional scaling running out of steam, innovative approaches such as the LEAN metric are proving to be valuable tools for producing leaner, more efficient computer designs.