Qualcomm’s Snapdragon X chip is already having a huge impact on the fast-paced world of computing. It serves as the backbone for state-of-the-art artificial intelligence (AI) capabilities, including Microsoft’s Copilot+ features, and it marks a significant advance in how personal computers (PCs) handle AI workloads: they can now use neural processing units (NPUs) to deliver performance that was previously out of reach. Rivals Qualcomm, AMD, and Intel are preparing to duke it out for billions in the fledgling NPU market, and this rivalry will bring monumental change, for better or worse, to PC architecture and the user experience.
The Snapdragon X chip has put Qualcomm at the center of this technological evolution. With its NPU, Qualcomm has made a bold move by bringing specialized AI processing to Windows laptops for the first time. The change is representative of a broader trend remaking the PC industry: a move away from the legacy architectures that have driven planning and development since the late 20th century.
As much as Qualcomm is leading the charge, other chip manufacturers are racing neck and neck. AMD and Intel are both making aggressive moves into the NPU space, with future chips expected to deliver competitive performance measured in trillions of operations per second (TOPS). The arms race in AI silicon has accelerated rapidly, and we are on the cusp of a dramatic evolution in what AI processing on PCs will enable over the next few years.
NPU Performance and Specifications
The raw performance of NPUs is at the core of their allure. Nvidia’s GeForce RTX 5090 sets the bar with a claimed 3,352 TOPS of AI performance, a standard that Qualcomm’s AI 100 struggles to match. But the real competition between these companies is not just about raw speed; it is about how efficiently their chips can process the data types AI workloads depend on.
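To put such TOPS figures in perspective, here is a back-of-the-envelope sketch of how a peak rating is typically derived: multiply-accumulate (MAC) units, times two operations per MAC per cycle, times clock speed. The unit count and clock below are illustrative assumptions, not any vendor’s published specs.

```python
# Back-of-envelope TOPS estimate: peak ops = MAC units x 2 ops per MAC x clock.
# All figures below are illustrative assumptions, not published specs.

mac_units = 16_384      # hypothetical number of parallel multiply-accumulate units
ops_per_mac = 2         # one multiply + one add per MAC per cycle
clock_hz = 1.4e9        # hypothetical 1.4 GHz NPU clock

peak_tops = mac_units * ops_per_mac * clock_hz / 1e12
print(f"Theoretical peak: {peak_tops:.1f} TOPS")  # ~45.9 TOPS
```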
Steven Bathiche, a leading NPU architect, explained why NPUs are preferable to CPUs for many AI tasks:
“With the NPU, the entire structure is really designed around the data type of tensors [a multidimensional array of numbers].”
This specialization lets NPUs handle tensor-heavy workloads with ultra-low latency and high throughput. Bathiche went on to contrast the capabilities of CPUs and NPUs:
“NPUs are much more specialized for that workload. And so we go from a CPU that can handle three trillion operations per second (TOPS), to an NPU.”
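To make Bathiche’s point concrete, here is a minimal NumPy sketch of the tensor workload he describes: a typical neural-network layer reduces to a batched matrix multiply over multidimensional arrays, exactly the kind of operation NPUs are built to stream. The shapes here are arbitrary illustrations.

```python
import numpy as np

# A "tensor" is just a multidimensional array. A typical neural-network layer
# reduces to a batched matrix multiply over such arrays -- the operation NPUs
# are designed around. Shapes are arbitrary illustrations.

batch, seq, d_in, d_out = 8, 128, 512, 512
activations = np.random.rand(batch, seq, d_in).astype(np.float32)  # rank-3 tensor
weights = np.random.rand(d_in, d_out).astype(np.float32)           # rank-2 tensor

output = activations @ weights   # one dense layer: (8, 128, 512) @ (512, 512)
print(output.shape)              # (8, 128, 512)
```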
NPUs are becoming increasingly common in the marketplace, and their ability to accelerate AI tasks will surely be one of the major factors consumers weigh when choosing among computing devices.
The Architectural Shift in PCs
The introduction of NPUs provides more than a huge leap in processing power; it represents a revolutionary departure from traditional PC architecture. Companies are now focusing on system-on-chip (SoC) designs that integrate CPU cores, GPU cores, and NPUs into a single unit, enabling more advanced power management and better efficiency.
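In practice, applications reach these integrated NPUs through runtimes that route work across the SoC. As one hedged illustration, a Windows-on-Arm build of ONNX Runtime can be asked to schedule a model on Qualcomm’s NPU through its QNN execution provider, falling back to the CPU for unsupported operators; the model path and input name below are placeholders.

```python
import numpy as np
import onnxruntime as ort

# Ask ONNX Runtime to place the graph on Qualcomm's NPU via the QNN
# execution provider, with CPU fallback for unsupported operators.
# "model.onnx" and the input name/shape are placeholders for illustration.

session = ort.InferenceSession(
    "model.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)

dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {"input": dummy_input})
print(session.get_providers())  # shows which providers were actually engaged
```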
This architectural change opens up the opportunity for significant efficiency gains compared to older discrete GPU architectures. Joe Macri, another expert in chip design, discussed the potential drawbacks of having separate memory subsystems for CPUs and GPUs:
“When I want to share data between our [CPU] and GPU, I’ve got to take the data out of my memory, slide it across the PCI Express bus, put it in the GPU memory, do my processing, then move it all back.”

By integrating these components into a single SoC, companies can reduce power and latency through more efficient data flow while improving overall system performance.

“By bringing it all under a single thermal head, the entire power envelope becomes something that we can manage.”
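A rough calculation shows why Macri’s round trip matters. Assuming illustrative figures (a 512 MiB tensor and roughly PCIe 4.0 x16 bandwidth), the copy out and back alone costs tens of milliseconds, overhead that a unified-memory SoC largely avoids.

```python
# Rough cost of the round trip Macri describes: copy a tensor to
# discrete-GPU memory over PCIe, then copy the result back.
# Sizes and bandwidth are illustrative assumptions.

tensor_bytes = 512 * 1024 * 1024   # 512 MiB of activations/weights
pcie_bytes_per_s = 32e9            # ~PCIe 4.0 x16 bandwidth

round_trip_s = 2 * tensor_bytes / pcie_bytes_per_s
print(f"PCIe round trip: {round_trip_s * 1e3:.1f} ms")  # ~33.6 ms

# On an SoC with unified memory, the same hand-off is closer to a pointer
# exchange, so those milliseconds (and the watts spent driving the bus)
# largely disappear.
```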
NPUs are poised to take dramatic steps forward in the coming months and years. Industry insiders foresee chips rated at thousands of TOPS debuting within the next few years. This evolution will not just improve the impressive capabilities that exist today; it will create new possibilities for AI on our personal devices.
Future Prospects and Industry Implications
Vinesh Sukumar from Qualcomm expressed ambitious aspirations for the future of AI on their devices:
“I want a complete artificial general intelligence running on Qualcomm devices.”

This vision is indicative of the entire industry’s push toward more intelligent systems that use all the processing power around us as productively as possible. Every company is already working hard to design a stronger NPU; simultaneously, they are looking for new ways to balance workloads between CPUs and NPUs as efficiently as possible.
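What that balancing act might look like in code: the toy dispatcher below, with invented thresholds and operator lists, captures the basic heuristic of keeping small or unsupported operations on the CPU and sending large tensor math to the NPU. Real schedulers weigh far more, including power budgets and memory placement.

```python
# A toy dispatcher illustrating the CPU/NPU balancing problem.
# The threshold and operator names are invented for illustration.

def choose_device(op_name: str, tensor_elements: int, npu_supported: set) -> str:
    if op_name not in npu_supported:
        return "cpu"   # branchy / unsupported ops stay on the CPU
    if tensor_elements < 4096:
        return "cpu"   # small tensors: dispatch overhead dominates
    return "npu"       # large tensor math: NPU wins on throughput

NPU_OPS = {"matmul", "conv2d", "softmax"}
print(choose_device("matmul", 1 << 20, NPU_OPS))  # npu
print(choose_device("matmul", 256, NPU_OPS))      # cpu
print(choose_device("topk", 1 << 20, NPU_OPS))    # cpu
```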
As this competition progresses, it will not only influence hardware specifications but also dictate how software developers approach AI applications. Mike Clark emphasized the need for balance in design philosophy:
“We must be good at low latency, at handling smaller data types, at branching code—traditional workloads. We can’t give that up, but we still want to be good at AI.”
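Clark’s “smaller data types” point can be made concrete with a minimal sketch of symmetric INT8 quantization, the kind of conversion that lets NPUs trade a little precision for a lot of throughput. The scaling scheme here is the simplest possible; production toolchains are more sophisticated.

```python
import numpy as np

# Symmetric INT8 quantization of FP32 weights: map the largest magnitude
# to 127 and round everything else onto the integer grid. Minimal sketch;
# real quantizers use calibration data and per-channel scales.

weights_fp32 = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
restored = weights_int8.astype(np.float32) * scale

print("max abs error:", np.abs(weights_fp32 - restored).max())
```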

