Qualcomm Goes All In on AI for Snapdragon 8 Gen 3 Mobile Processor
24.10.2023 - 19:09
/ pcmag.com
/ AI
Qualcomm on Tuesday announced the Snapdragon 8 Gen 3, its premium system-on-a-chip for mobile devices. The 8 Gen 3 will be the engine behind the leading Android phones and tablets over the coming year. Much like last year's 8 Gen 2, the new SoC banks on AI to set it apart from competing chips. What's different is that Qualcomm is pushing generative AI in particular directly onto devices for faster and more secure results.
In addition to on-device generative AI, the Snapdragon 8 Gen 3 makes improvements to core aspects of the SoC, including Snapdragon Connect, Snapdragon Elite Gaming, Snapdragon Sight, and Snapdragon Sound. AI calculations should lead to stronger connections between devices across longer distances, 240fps gaming with more realistic lighting, generative AI backgrounds and in-video object erasers, and higher-quality music streaming.
The first consumer devices are expected to reach the market in just a few weeks.
Qualcomm changed up the chip architecture again to better manage how devices balance performance with efficiency. The 8 Gen 3's Kryo CPU is built on a 4nm process and is still an octa-core design, but it now has a single prime core, five performance cores, and two efficiency cores; last year's 8 Gen 2 had a one-four-three core configuration. The prime core is an Arm Cortex-X4 clocked at up to 3.3GHz, while the performance cores are clocked at up to 3.2GHz and the efficiency cores at up to 2.3GHz. The CPU has access to a 12MB L3 cache and delivers 30% faster performance than the outgoing chip.
The Adreno GPU sees improvements across the board as well. Qualcomm says it pushes 25% faster performance along with 25% better power efficiency, and it can support variable refresh rates as low as 1Hz for low-power displays.
More memory means better multitasking. The 8 Gen 3 supports up to 24GB of LPDDR5x system memory at up to 4,800MHz. Phones that shipped with the 8 Gen 2 on board typically had between 8GB and 16GB of system memory. Jumping to 24GB is a big deal.
The Hexagon neural processing unit (NPU) received a lot of attention from Qualcomm. The NPU has dedicated power rails for its accelerators, upgraded micro tile inferencing, and double the shared memory bandwidth, which pushes data into the Tensor accelerator at higher speeds. The whole system runs at higher clock speeds, delivering 98% faster performance with a 40% improvement in efficiency. The NPU works together with the revised Sensing Hub (up to 3.5x more AI performance thanks to INT4 support), the Kryo CPU, the Adreno GPU, and system memory to form the Qualcomm AI Engine.
Qualcomm tuned the AI engine for the Meta Llama 2 large language model, along with Whisper for speech input on the front end and open-source text-to-speech for the output. Qualcomm