Unleash Performance in AI Applications
Expedera’s Neural Processing Unit (NPU) features a unified compute pipeline that eliminates memory bottlenecks to deliver breakthrough performance in Artificial Intelligence (AI) applications. Memory efficiency dictates performance, power, and cost in SoC designs. Expedera’s Origin™ line of neural engine IP products reduces memory requirements to the bare minimum, dramatically cutting overhead to unlock performance and power efficiency. But Origin does more: by moving software burdens into hardware, it enables a simplified software stack and allows TensorFlow models to execute directly in hardware.
Origin achieves sustained single-core performance of up to 128 TOPS with typical utilization rates of 70–90% (measured in silicon running common AI workloads such as ResNet). This best-in-class performance and utilization let users run visual, audio, or text-based (generative) AI models faster and with less power than alternative solutions, with native support for INT, floating-point, and transformer-based networks. And while performance and power are important, so is silicon area: Origin is third-party verified to deliver superior performance per mm² versus competitive solutions, assuring AI chip designers the best combination of processing, power, and area.
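The relationship between peak and sustained throughput above can be sketched with a few lines of arithmetic. This is an illustrative calculation only, using the 128 TOPS and 70–90% utilization figures from the text; the function name is an assumption, not part of any Expedera API.

```python
# Illustrative sketch: effective (sustained) throughput given peak TOPS
# and a utilization rate. Figures below are from the text; the function
# itself is hypothetical.

def effective_tops(peak_tops: float, utilization: float) -> float:
    """Return sustained throughput in TOPS at a given utilization rate."""
    return peak_tops * utilization

# At 128 peak TOPS, 70-90% utilization spans roughly 89.6-115.2 TOPS.
low = effective_tops(128, 0.70)
high = effective_tops(128, 0.90)
print(f"Effective throughput: {low:.1f}-{high:.1f} TOPS")
```

The point of the calculation is that utilization, not the peak TOPS number alone, determines real-world performance: a 128 TOPS engine at 85% utilization outperforms a nominally larger engine running at 40%.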
AI Enabled Applications
Industrial applications generate huge amounts of data, and due to cost, performance, and latency concerns, processing is moving to the edge. Origin increases system performance and lowers cost by enabling high-performance neural network processing in edge solutions instead of in the cloud.
Automotive systems require stable, reliable, and often complex models. While automotive requirements can vary greatly, Origin offers high performance, increased utilization, and reduced power requirements. Its low-latency, deterministic processing can handle high-resolution images at highway speeds in real time during autonomous driving.
The Origin E1 processing cores are individually optimized for a subset of neural networks commonly used in home appliances, edge nodes, and other small consumer devices. The E1 LittleNPU supports always-sensing cameras found in smartphones, smart doorbells, and security cameras.
The Origin E2 is designed for power-sensitive on-chip applications that require no off-chip memory. It is suitable for low power applications such as mobile phones and edge nodes, and like all Expedera NPUs, is tunable to specific workloads.
Origin E6, optimized to balance power and performance, utilizes SoC cache or DRAM access during runtime and supports advanced system memory management. Supporting dual jobs, the E6 runs a wide range of AI models in smartphones, tablets, edge servers, and other devices.
Origin E8 is designed for the high-performance applications required by autonomous vehicles/ADAS and datacenters. It offers superior TOPS performance while dramatically reducing DRAM requirements and system BOM costs, and it enables multi-job support. Even at 128 TOPS, its low power consumption allows the Origin E8 to be deployed in passively cooled environments.
TimbreAI T3 is an ultra-low-power Artificial Intelligence (AI) inference engine designed for noise-reduction use cases in power-constrained devices such as headsets. TimbreAI requires no external memory access, saving system power while increasing performance and reducing chip size.