
TimbreAI T3
Ultra-low Power AI Inference IP for Embedded Audio Applications
The Expedera TimbreAI™ T3 is an ultra-low-power Artificial Intelligence (AI) inference engine designed for audio noise reduction in power-constrained devices such as headsets and other consumer products. Delivering 3.2 GOPS (Giga Operations Per Second) of processing power, the TimbreAI T3 consumes 300 μW or less. Available off the shelf as soft IP, TimbreAI is portable to any foundry and silicon process. It supports quick, seamless deployment and delivers optimal performance within the strict power and area constraints of today's advanced audio devices.
Native Execution: a New NPU Paradigm
Typical AI accelerators—often repurposed CPUs (Central Processing Units) or GPUs (Graphics Processing Units)—rely on a complex software stack that converts a neural network into a long sequence of basic instructions. Execution of these instructions tends to be inefficient, with processor utilization as low as 20 to 40%. Taking a new approach, Expedera designed TimbreAI specifically as an NPU (Neural Processing Unit) that executes the neural network directly using metadata, achieving sustained utilization averaging 80%. The metadata describes the function of each layer (such as convolution or pooling) along with other important details, such as the size and shape of the convolution. No changes to your trained neural networks are required, and there is no perceivable reduction in model accuracy. This approach greatly simplifies the software, and Expedera provides a robust stack based on Apache TVM. Expedera's native execution eases the adoption of new models and reduces time to market.
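The idea of metadata-driven execution can be illustrated with a small sketch: rather than compiling the network into a long instruction stream, the engine walks a list of per-layer descriptors and dispatches each layer directly. The names here (`LayerMeta`, `run_network`) and the two toy layer types are purely illustrative assumptions, not Expedera's actual interface.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class LayerMeta:
    """Hypothetical per-layer metadata: the layer's function plus its
    key parameters (e.g. window size), as described in the text."""
    op: str       # layer function, e.g. "pool" or "relu"
    kernel: int   # window size for pooling layers


def run_network(metadata, x):
    """Execute the network by dispatching directly on each descriptor,
    with no intermediate instruction stream."""
    for layer in metadata:
        if layer.op == "relu":
            x = np.maximum(x, 0.0)
        elif layer.op == "pool":
            # Max-pool over non-overlapping windows of `kernel` samples.
            n = len(x) // layer.kernel * layer.kernel
            x = x[:n].reshape(-1, layer.kernel).max(axis=1)
    return x


# A two-layer toy "network" expressed purely as metadata.
net = [LayerMeta("relu", 0), LayerMeta("pool", 4)]
out = run_network(net, np.arange(-8.0, 8.0))
```

Because each descriptor carries everything the engine needs, adding a new model is a matter of emitting new metadata rather than regenerating and rescheduling low-level instructions.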
Silicon-Proven and Deployed in Millions of Consumer Products
Choosing the right AI processor can ‘make or break’ a design. The Expedera architecture is silicon-proven in leading-edge process nodes and successfully shipped in millions of consumer devices worldwide.
- 3.2 GOPS; ideally suited for audio AI applications such as active noise reduction
- Ultra-low <300μW power consumption
- Low latency
- Supported neural networks include RNN, LSTM, and GRU
- Supported data types include INT8 x INT8, INT16 x INT8, and INT16 x INT16
- Use familiar open-source platforms such as TVM, TensorFlow, TFLite, and ONNX
- Delivered as soft IP: portable to any process
| Performance | 3.2 GOPS |
| Number of jobs | Single |
| Neural networks supported | RNN, LSTM, GRU |
| Data types | INT8/INT16 activations; INT8/INT16 weights |
| Quantization | Channel-wise quantization |
| Latency | Optimized for lowest latency with deterministic guarantees |
| Memory | All on-chip; smart on-chip dynamic memory allocation algorithms |
| Frameworks | TVM, TensorFlow, TFLite, ONNX |
| Workloads | Audio de-noising |
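Channel-wise quantization, as listed in the table, assigns each output channel its own scale so that small-magnitude channels are not crushed by a single tensor-wide scale. The following is a minimal sketch of per-channel INT8 weight quantization; the function names and the symmetric [-127, 127] mapping are illustrative assumptions, not a description of TimbreAI internals.

```python
import numpy as np


def quantize_per_channel(weights):
    """Quantize each row (output channel) to INT8 with its own scale.

    Symmetric scheme: scale = max|w| / 127 per channel, so every
    quantized value lands in [-127, 127]. Assumes no all-zero channel.
    """
    scales = np.abs(weights).max(axis=1) / 127.0
    q = np.round(weights / scales[:, None]).astype(np.int8)
    return q, scales


def dequantize(q, scales):
    """Recover approximate float weights from INT8 values and scales."""
    return q.astype(np.float32) * scales[:, None]


# Two channels with very different magnitudes: per-channel scales keep
# the small channel's precision, where a single shared scale would not.
w = np.array([[0.5, -1.0],
              [0.02, 0.01]], dtype=np.float32)
q, s = quantize_per_channel(w)
w_hat = dequantize(q, s)
```

The second channel's values (0.02, 0.01) would quantize to only a few distinct codes under a tensor-wide scale of 1.0/127, but with their own scale they span the full INT8 range.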
Advantages
- Ultra-low-power implementation for battery-powered devices
- Portable to any process
- Drastically reduced memory requirements; no off-chip memory required
- Runs trained models unchanged, with no hardware-dependent optimizations needed
- Deterministic, real-time performance
- Delivered as soft IP
- Simple software stack
- Achieves the same accuracy as your trained model
- Simplified deployment for easy integration
Benefits
- Extended Battery Life: drastically lower your AI inference power budget and greatly extend battery life
- Smaller Implementation: TimbreAI is designed to require the smallest silicon area, allowing smaller, more cost-efficient chip designs
- Simplicity: eliminates complicated compilers, easing design complexity, reducing cost, and speeding time-to-market
- Predictability: deterministic performance with guaranteed quality of service (QoS)
