Qualcomm is at the forefront of a major shift in computing. It is among the first companies to announce NPUs (Neural Processing Units) for Windows laptops: dedicated chips meant to accelerate AI tools on PCs and anchor an AI-focused ecosystem, delivering substantial gains in both performance and power efficiency. Qualcomm's Snapdragon X chip is leading the way in AI processing, while competitors such as AMD and Intel are releasing NPUs of their own, creating a competitive environment in which innovation is likely to move quickly.
The integration of NPUs into laptops is a significant step: they can reach 40 to 50 trillion operations per second (TOPS). That level of performance builds on the mobile processing pedigree of Qualcomm's Snapdragon chips. With demand for AI-driven applications at an all-time high, introducing NPUs is only the first step toward giving software the computational resources it needs.
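To put a 40-50 TOPS rating in perspective, here is a rough back-of-the-envelope sketch. The per-inference cost is an illustrative assumption, not a vendor benchmark, and real throughput is far below the theoretical peak:

```python
# Back-of-the-envelope: what a 40-50 TOPS NPU rating implies.
# All numbers below are illustrative assumptions, not measurements.

NPU_TOPS = 45  # midpoint of the 40-50 TOPS range cited for laptop NPUs
ops_per_second = NPU_TOPS * 10**12

# Assume a hypothetical vision model costing ~8 GOPs (8e9 ops) per inference.
model_gops_per_inference = 8
inferences_per_second = ops_per_second / (model_gops_per_inference * 10**9)

print(f"{ops_per_second:.2e} ops/s")                         # 4.50e+13
print(f"~{inferences_per_second:.0f} inferences/s (peak)")   # ~5625
```

Peak TOPS assumes every compute unit is busy every cycle; memory bandwidth and scheduling overhead mean real workloads see only a fraction of this figure.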
Qualcomm’s Pioneering Role
Qualcomm’s position in the NPU market should not be underestimated. The company’s Snapdragon X chip includes an NPU that drives improved AI processing across its laptop lineup. With this advancement, devices can process more tokens per second, yielding quicker, more responsive results when running AI models.
The Snapdragon X has raised the bar on performance, but Qualcomm does not act alone. AMD has entered the fray with its Ryzen AI Max, which boasts an NPU rated at 50 TOPS. NPUs were still rare among AMD’s 2023 chips, so this is a major step forward for the company. What differentiates the Ryzen AI Max is that it integrates CPU cores, Radeon-branded GPU cores, and an NPU on the same piece of silicon, an architecture that simplifies AI deployment.
Intel is close behind, building its own NPUs to compete directly with the Snapdragon lineup. With each of these companies pushing the state of the art, AI processing in laptops is shaping into a fast-moving landscape, with major players battling for supremacy.
“NPUs are much more specialized for that workload. And so we go from a CPU that can handle three trillion operations per second (TOPS), to an NPU.” – Steven Bathiche
The Competitive Landscape
The market for NPUs is growing rapidly, and AMD and Intel are both pushing past current AI processing limits in response. AMD’s first Ryzen AI silicon debuted at roughly 10 TOPS in early 2023; its recent upgrades underscore an ongoing commitment to improving performance. AMD is also committed to putting different computing components under a common architecture, an approach that aims to streamline workloads that would otherwise require coordination across multiple, siloed units.
Bigger news still is Intel’s collaboration with Nvidia. The two have formed an alliance to sell hybrid chips that marry Intel’s CPU cores with Nvidia’s GPU cores, a pairing intended to unlock more advanced and effective AI tools. The partnership reflects the increasingly critical role customized hardware plays in meeting the performance demands of cutting-edge AI workloads.
Nvidia’s GeForce RTX 5090 raises the bar further, with specifications claiming AI performance of up to 3,352 TOPS, surpassing Qualcomm’s AI 100. Nvidia continues to dominate the high-end GPU market, and its innovations will almost surely influence how other companies design their NPUs.
“By bringing it all under a single thermal head, the entire power envelope becomes something that we can manage.” – Mahesh Subramony
Future Prospects for NPUs
Experts believe NPUs capable of tens of thousands of TOPS will arrive within the next few years, with some expecting this shift to local AI compute to take shape within two. That prediction marks the beginning of a new era in personal computing and artificial intelligence. As these devices grow more powerful, they will need to maximize throughput by running parallel workloads across multiple compute engines, including CPUs, GPUs, and NPUs.
Windows has already started to run AI tasks natively on local compute with its Windows ML runtime, which automatically routes AI workloads to the optimal hardware, whether that is a CPU, GPU, or NPU. This versatility opens the door to more sophisticated applications, such as AI personal assistants that run all day and adapt on the fly to your commands.
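The routing idea can be sketched conceptually. The code below is a hypothetical dispatcher, not the actual Windows ML API, and the policy is an assumed rule of thumb: sustained, power-sensitive workloads go to the NPU, very large parallel ones to the GPU, and everything else stays on the CPU:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    # Hypothetical descriptor; a real runtime inspects the model graph instead.
    ops_per_inference: float  # operations needed per inference
    always_on: bool           # e.g. an assistant listening in the background

def route(workload: Workload) -> str:
    """Toy device-selection policy, loosely mirroring the idea that a
    runtime routes each AI workload to the most suitable engine."""
    if workload.always_on:
        return "NPU"  # sustained, power-sensitive tasks favor the NPU
    if workload.ops_per_inference > 1e11:
        return "GPU"  # very large parallel work favors the GPU
    return "CPU"      # small, sporadic tasks can stay on the CPU

print(route(Workload(5e9, always_on=True)))     # NPU
print(route(Workload(5e11, always_on=False)))   # GPU
print(route(Workload(1e8, always_on=False)))    # CPU
```

An always-listening assistant is the motivating case: routing it to the NPU keeps the GPU free and the power draw low enough to run continuously.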
“You’ll want to be running this for a longer period of time, such as an AI personal assistant, which could be always active and listening for your command.” – Rakesh Anigundi
There is still a lot of room for improvement as companies work to figure out the best ways to incorporate NPUs into larger computing ecosystems. As Steven Bathiche notes, “There’s a lot of opportunity and runway to improve.”


