Mobile
AI in Your Pocket
Today's smartphones rely on AI to enhance the user experience. Designers use on-device AI to enable new features and reduce reliance on the cloud. Choosing the right AI processor IP for mobile SoC and ASIC designs is essential for a great user experience.

Enhancing the User Experience Through AI
Smartphone makers are incorporating ever more AI into their products, including advanced large language models (LLMs) and vision-language models (VLMs). This integration presents challenges, as manufacturers must balance growing computational and memory demands against strict power and size limitations. They can no longer depend on the general-purpose neural processing units (NPUs) typically found in application processors (APs), as these often underperform and are power-inefficient for such workloads. As a result, system architects are shifting toward AI co-processors, which allow AI processing to be tailored to specific smartphone use cases, delivering significant performance improvements without draining battery life or exceeding memory limits. Seamless on-device AI greatly enhances the user experience and serves as a key competitive advantage.
LLMs Moving to the Edge
Consumers expect their smartphones to feature the latest AI inference capabilities, from today's LLMs to more traditional CNN/RNN networks. As smartphone OEMs increasingly move inference processing onto the device, they face a unique set of challenges: compute efficiency, growing memory requirements, higher power consumption, and privacy concerns. Origin Evolution™ offers out-of-the-box compatibility with popular LLM and CNN networks. Attention-based processing optimization and advanced memory management ensure optimal AI performance across today's standard and emerging neural networks. Featuring a hardware/software co-designed architecture, Origin Evolution is the ideal AI processing architecture for next-generation smartphones: easy to integrate, highly extensible, and future-focused.
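To put the memory challenge in perspective, here is a rough back-of-envelope sketch in Python of how quickly an LLM's KV cache grows with context length on-device. The model figures are generic assumptions for a roughly 3B-parameter-class model, not tied to any specific product:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV-cache size: keys and values (the factor of 2) are
    stored per layer, per KV head, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem


# Assumed figures (illustrative only): 28 layers, 8 KV heads,
# 128-dim heads, 4k-token context, fp16 storage (2 bytes/element)
mib = kv_cache_bytes(28, 8, 128, 4096) / 2**20
print(f"KV cache at a 4k-token context: {mib:.0f} MiB")  # 448 MiB
```

At these assumed figures, the KV cache alone approaches half a gibibyte before weights or activations are counted, which is why memory management is as decisive as raw compute for on-device LLM inference.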
An Ideal Architecture for Smartphones
Accepting standard, custom, and black-box networks in a variety of AI representations, Origin Evolution offers a wealth of user features such as mixed-precision quantization. Expedera's unique packet-based processing breaks large networks into smaller, contiguous fragments, overcoming the hurdle of large memory movements and achieving much higher processor utilization. Packets are routed through discrete processing blocks, including Feed Forward, Attention, and Vector, which accommodate the varying operations, data types, and precisions required by different types of networks. Internal memory handles intermediate results, while a memory streaming interface connects to off-chip storage.
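As a conceptual illustration only, and not Expedera's actual implementation, the following Python sketch models the packet idea: a network's layers are split into tile-sized packets, each tagged with the processing block and precision it needs. All names and sizes are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Block(Enum):
    FEED_FORWARD = auto()  # matmul-heavy projection/MLP layers
    ATTENTION = auto()     # attention/softmax operations
    VECTOR = auto()        # elementwise ops, activations, normalization


@dataclass
class Packet:
    layer: str        # originating network layer
    block: Block      # processing block this packet is routed to
    precision: str    # e.g. "int8" or "fp16" under mixed-precision quantization
    tile_bytes: int   # sized to fit in internal (on-chip) memory


def packetize(layers: list[tuple[str, Block, str, int]],
              tile_bytes: int = 64 * 1024) -> list[Packet]:
    """Split each layer's working set into tile-sized packets so intermediate
    data stays on-chip instead of forcing large off-chip memory movements."""
    packets = []
    for name, block, precision, total_bytes in layers:
        n_tiles = -(-total_bytes // tile_bytes)  # ceiling division
        packets += [Packet(name, block, precision, min(tile_bytes, total_bytes))
                    for _ in range(n_tiles)]
    return packets


# Example: one transformer sub-block as (layer, block, precision, bytes)
packets = packetize([
    ("self_attention", Block.ATTENTION, "fp16", 256 * 1024),
    ("mlp_up_proj", Block.FEED_FORWARD, "int8", 512 * 1024),
    ("layer_norm", Block.VECTOR, "fp16", 32 * 1024),
])
print(f"{len(packets)} packets routed across "
      f"{len({p.block for p in packets})} block types")
```

The design intuition the sketch captures: small, contiguous packets keep every block busy and keep working data on-chip, rather than stalling a monolithic pipeline on large off-chip transfers.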
Purpose-Built for Your Application
Customization brings many advantages, including increased performance, lower latency, reduced power consumption, and the elimination of dark-silicon waste. Expedera works with customers during the design stage to understand their use case(s), PPA goals, and deployment needs. Using this information, we configure Origin IP into a customized solution that precisely fits the application.
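As a hypothetical sketch of what a design-stage configuration might capture, the Python below records target workloads alongside PPA goals. The field names and values are invented for illustration and are not Expedera's actual configuration interface:

```python
from dataclasses import dataclass


@dataclass
class NPUConfig:
    """Illustrative design-stage profile for configuring an AI processor IP."""
    target_networks: list[str]   # workloads profiled with the customer
    peak_tops: float             # performance target
    power_budget_mw: int         # power envelope for the AI subsystem
    on_chip_sram_kb: int         # internal memory sized to the workloads
    precisions: list[str]        # mixed-precision support


# A hypothetical smartphone profile (all values invented):
config = NPUConfig(
    target_networks=["llama-class LLM", "mobilenet-class CNN"],
    peak_tops=20.0,
    power_budget_mw=800,
    on_chip_sram_kb=4096,
    precisions=["int4", "int8", "fp16"],
)
print(config)
```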
Future-Proof with In-Field Updates
Ultra Power-Efficient Performance
Users want feature-rich devices with all-day battery life. With the ideal balance of power and performance, Origin IP enables new and emerging AI use cases while requiring far less power than general-purpose NPUs.