
    What is an NPU? And why is it key to unlocking on-device generative AI?

    The generative artificial intelligence (AI) revolution is here. With the growing demand for generative AI use cases across verticals with diverse requirements and computational demands, there is a clear need for a refreshed computing architecture custom-designed for AI. It starts with a neural processing unit (NPU) designed from the ground up for generative AI, while leveraging a heterogeneous mix of processors, such as the central processing unit (CPU) and graphics processing unit (GPU). By using an appropriate processor in conjunction with an NPU, heterogeneous computing maximizes application performance, thermal efficiency and battery life to enable new and enhanced generative AI experiences.

    Figure 1: Choosing the right processor, like choosing the right tool in a toolbox, depends on many factors and enhances generative AI experiences.

    Why is heterogeneous computing important?

    Because of the diverse requirements and computational demands of generative AI, different processors are needed. A heterogeneous computing architecture with processing diversity takes advantage of each processor’s strengths: an AI-centric, custom-designed NPU working alongside the CPU and GPU, each excelling in a different task domain. For example, the CPU handles sequential control and low-latency tasks, the GPU handles streaming parallel data, and the NPU handles core AI workloads built on scalar, vector and tensor math.
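
    To make that division of labor concrete, the hypothetical sketch below shows how a runtime might route work to the processor whose strengths match each workload type. The Processor and Workload names and the dispatch function are invented for illustration only and do not correspond to any Qualcomm API; a real heterogeneous runtime would also weigh power budget, thermal headroom and current utilization.

```python
from enum import Enum, auto

class Processor(Enum):
    CPU = auto()   # sequential control flow and low-latency "immediacy" tasks
    GPU = auto()   # streaming, highly parallel data
    NPU = auto()   # sustained scalar/vector/tensor math for AI inference

class Workload(Enum):
    CONTROL = auto()           # branching logic and orchestration
    PARALLEL_STREAM = auto()   # large, regular data-parallel kernels
    NEURAL_INFERENCE = auto()  # neural network layer computation

def dispatch(workload: Workload) -> Processor:
    """Pick the processor whose strengths match the workload (illustrative only)."""
    routing = {
        Workload.CONTROL: Processor.CPU,
        Workload.PARALLEL_STREAM: Processor.GPU,
        Workload.NEURAL_INFERENCE: Processor.NPU,
    }
    return routing[workload]

if __name__ == "__main__":
    for w in Workload:
        print(f"{w.name:>16} -> {dispatch(w).name}")
```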

    Heterogeneous computing maximizes application performance, device thermal efficiency and battery life, delivering the best possible generative AI end-user experiences.

    Figure 2: NPUs have evolved with the changing AI use cases and models for high performance at low power.

    What is an NPU?

    The NPU is built from the ground up to accelerate AI inference at low power, and its architecture has evolved along with the development of new AI algorithms, models and use cases. AI workloads primarily consist of calculating neural network layers composed of scalar, vector and tensor math followed by a non-linear activation function. A superior NPU design makes the right choices to handle these workloads and stays tightly aligned with the direction of the AI industry.
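
    To make that workload concrete, a single fully connected layer is essentially a tensor multiply, a vector add and a non-linear activation. The NumPy sketch below illustrates only the math; it is not NPU code, and the shapes and the choice of ReLU are assumptions for the example.

```python
import numpy as np

def dense_layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One neural network layer: tensor and vector math, then a non-linear activation.

    x:       input activations, shape (batch, in_features)
    weights: layer weights,     shape (in_features, out_features)
    bias:    per-output bias,   shape (out_features,)
    """
    z = x @ weights + bias      # tensor (matrix) multiply plus vector add
    return np.maximum(z, 0.0)   # ReLU, a common non-linear activation

# Arbitrary shapes chosen for illustration.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 128))    # batch of 4 input vectors
w = rng.standard_normal((128, 256))
b = rng.standard_normal(256)
print(dense_layer(x, w, b).shape)    # (4, 256)
```

    An NPU accelerates exactly this kind of math, executing large numbers of these multiply-accumulate and activation operations efficiently at low power.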

    Figure 3: The Qualcomm AI Engine consists of the Qualcomm Hexagon NPU, Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem.

    Our leading NPU and heterogeneous computing solution

    Qualcomm is enabling intelligent computing everywhere. Our industry-leading Qualcomm Hexagon NPU is designed for sustained, high-performance AI inference at low power. What differentiates our NPU is our system approach, custom design and fast innovation. By custom-designing the NPU and controlling the instruction set architecture (ISA), we can quickly evolve and extend the design to address bottlenecks and optimize performance.

    The Hexagon NPU is a key processor in our best-in-class heterogeneous computing architecture, the Qualcomm AI Engine, which also includes the Qualcomm Adreno GPU, Qualcomm Kryo or Qualcomm Oryon CPU, Qualcomm Sensing Hub, and memory subsystem. These processors are engineered to work together and run AI applications quickly and efficiently on device.

    Our industry-leading performance in AI benchmarks and real generative AI applications exemplifies this. Read the whitepaper for a deeper dive on our NPU, our other heterogeneous processors, and our industry-leading AI performance on Snapdragon 8 Gen 3 and Snapdragon X Elite.

    Figure 4: The Qualcomm AI Stack aims to help developers write once and run everywhere, achieving scale.

    Enabling developers to accelerate generative AI applications

    We enable developers by focusing on ease of development and deployment across the billions of devices worldwide powered by Qualcomm and Snapdragon platforms. With the Qualcomm AI Stack, developers can create, optimize and deploy their AI applications on our hardware, writing an application once and deploying it across different products and segments built on our chipset solutions.
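
    One common pattern that captures the "write once, deploy everywhere" idea is to export the model to a portable format and let the runtime pick the best available accelerator at load time. The sketch below uses ONNX Runtime with a provider preference list as a minimal, hedged example; the "QNNExecutionProvider" name, the model file and the input handling are assumptions for illustration, and the actual Qualcomm AI Stack workflow may differ.

```python
import numpy as np
import onnxruntime as ort

# Prefer an NPU-backed execution provider when present, fall back to CPU.
# "QNNExecutionProvider" is assumed here; check ort.get_available_providers()
# on the target device to see which providers are actually built in.
preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

# "model.onnx" is a placeholder for an exported model.
session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy input matching the model's first input (dynamic dims become 1).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})

print("Ran on:", session.get_providers()[0], "| output shape:", outputs[0].shape)
```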

    The combination of technology leadership, custom silicon designs, full-stack AI optimization and ecosystem enablement sets Qualcomm Technologies apart to drive the development and adoption of on-device generative AI. Qualcomm Technologies is enabling on-device generative AI at scale.

    DURGA MALLADI
    SVP & GM, Technology Planning & Edge Solutions,
    Qualcomm Technologies, Inc.
    PAT LAWLOR
    Director, Technical Marketing,
    Qualcomm Technologies, Inc.
