When Qualcomm unveiled its new AI200 and AI250 data center accelerators this week, the market saw a familiar story: another chipmaker trying to chip away at Nvidia's stranglehold on AI infrastructure. Qualcomm stock jumped 22%, its biggest single-day gain in nearly seven years, as investors cheered the news.
But beneath the headlines, a more important shift is emerging, one that could upend how investors think about the next phase of the AI boom.
The AI hardware race, long dominated by compute horsepower, may be pivoting toward something far less glamorous but potentially more decisive: memory.
Qualcomm is betting that the AI chip race is shifting away from raw compute and toward memory capacity, and its massive LPDDR-based design could give it a real edge in the exploding inference market.
From Compute to Capacity: The Bottleneck No One Is Watching
For the past two years, GPUs have defined the AI gold rush, their raw computational power making them indispensable for training massive models like GPT-4 and Gemini. But as AI systems move from training to deployment (a phase known as inference), the physics of performance change.
Modern AI inference workloads are increasingly memory-bound rather than compute-bound. As models grow in size and context windows expand, the challenge isn’t how fast chips can compute. It’s how quickly they can feed data to those processors.
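The memory-bound claim can be made concrete with a back-of-envelope calculation. During token-by-token generation, roughly every model weight must be read from memory for each token produced, so memory bandwidth, not compute, sets the throughput ceiling. The sketch below uses hypothetical figures (the 3,350 GB/s and 800 GB/s bandwidths, the 70B-parameter model, and the one-byte-per-weight quantization are illustrative assumptions, not specifications from any vendor):

```python
# Back-of-envelope model of memory-bandwidth-bound inference.
# All figures below are hypothetical, for illustration only.

def max_tokens_per_sec(params_billions: float,
                       bytes_per_weight: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode throughput.

    Each generated token requires streaming (roughly) every model
    weight from memory once, so throughput is capped at
    bandwidth / model_size_in_bytes, regardless of compute power.
    """
    model_bytes = params_billions * 1e9 * bytes_per_weight
    return bandwidth_gb_s * 1e9 / model_bytes

# A 70B-parameter model with 8-bit (1-byte) weights:
fast_hbm_style = max_tokens_per_sec(70, 1, 3350)   # ~3.35 TB/s memory
wide_lpddr_style = max_tokens_per_sec(70, 1, 800)  # slower, higher-capacity memory

print(f"Fast-memory ceiling: {fast_hbm_style:.0f} tokens/s")
print(f"Slow-memory ceiling: {wide_lpddr_style:.0f} tokens/s")
```

The point of the arithmetic: a chip with enormous FLOPS but modest bandwidth hits the same ceiling as a weaker one, which is why capacity and bandwidth, not raw compute, dominate the economics of serving large models.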


