In the short time since emerging from stealth mode, Expedera has quickly become known as the Artificial Intelligence (AI) Inference IP company that delivers the best performance per watt and per area. Our Origin product family scales to 128 TOPS and addresses applications from the edge to automotive to the data center and beyond. We’ve received accolades from the press and analysts such as “Expedera’s architecture is the most efficient” and “We estimate Expedera’s design delivers at least 6x more ResNet-50 performance per watt than [ARM] Ethos.” We’ve gone to production with customers who’ve seen 20X the performance of their CPU-based solution at less than half the power consumption, and we’ve shown that our IP delivers superior IPS/W compared to market-leading AI chip companies. So, with all that, today we are revealing something a bit…unique. And perhaps unexpected!
Expedera is proud to announce the production readiness of our new TimbreAI™ product: an ultra-power-efficient, ultra-small AI IP designed to support noise reduction algorithms in battery-powered wearable devices. In these devices, system designers face the unenviable task of building hardware that delivers long single-charge battery life while also running noise reduction AI algorithms. And, as those in the industry know, AI and long battery life are not necessarily friends. The noise reduction algorithms deployed on these devices are typically not processor-intensive, requiring only a small fraction of the compute needed by devices like smartphones and automobiles. Noise reduction is most often measured in GOPS rather than TOPS—that’s billions, rather than trillions, of operations per second.
As a result of these lower GOPS requirements, some wearable hardware makers have deployed AI algorithms on the local CPU rather than on a dedicated NPU (Neural Processing Unit). As we discussed in a recent two-part webinar (here are the links to Part 1 and Part 2), CPUs can handle AI algorithms at this lower performance level, albeit not in a power-efficient way. Accordingly, those same wearable system designers have expressed a desire for performance-appropriate NPUs that address their AI needs without breaking strict power and area budgets.
That’s the beauty of the TimbreAI T3, a 3.2 GOPS AI accelerator IP that operates within an astonishingly low <300μW power envelope (numbers quoted for a TSMC 22nm process). Among the lowest-power AI accelerators ever brought to market (and certainly the lowest power for its performance level), the TimbreAI T3 is ideally suited for running leading audio noise reduction AI algorithms.
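To put those figures in perspective, here is a quick back-of-the-envelope efficiency calculation using only the numbers quoted above (3.2 GOPS at under 300μW); the resulting TOPS/W figure is our own derivation from those two numbers, not a separately published specification:

```python
# Efficiency implied by the quoted figures: 3.2 GOPS at <300 uW.
# Since 300 uW is an upper bound, the true efficiency is at least this value.
perf_ops_per_s = 3.2e9     # 3.2 GOPS
power_watts = 300e-6       # 300 microwatt ceiling

ops_per_watt = perf_ops_per_s / power_watts
tops_per_watt = ops_per_watt / 1e12

print(f"Implied efficiency: >= {tops_per_watt:.1f} TOPS/W")
```

At the 300μW ceiling this works out to roughly 10.7 TOPS/W; at any lower actual power draw, the efficiency is higher still.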
The TimbreAI T3 is a purpose-built NPU. Simply put, it’s a hardware- and software-optimized product, containing all the hardware required for accurate processing of noise reduction algorithms without the power and area overhead added by competing NPU or CPU solutions. We’ve optimized our software stack to support the neural networks most commonly used for noise reduction. The goal was to create the optimum NPU IP for any system designer faced with the challenging task of deploying AI in a headset—and we believe they’re going to be astonished at what can be accomplished with TimbreAI.
The product is available now and is offered as standard soft IP, portable to any process node. Expedera stands ready to support your low-power audio noise reduction needs. Contact us for more information.