It’s a momentous day in any company’s life when you get to announce your first customer. Many companies take years to get to this point—some never do. For Expedera, that day is today. As you may have seen in our news release, the first consumer device with Expedera’s Origin™ artificial intelligence (AI) processing IP is now available for sale.
This is huge for us. While we have plenty of data based on our test chip and third-party reports that detail the performance of our solution, there is no better validation than when one of the world’s largest consumer device makers takes it and integrates it as a hallmark feature of their device. And to have that happen less than 11 months after the company emerged from stealth mode is a testament to the value and competitive differentiation the Expedera solution provides.
Customers come to us with AI problems. In this case, the customer had specific power, performance, and area (PPA) targets for their next-generation product, but their current solution, and others they had examined, simply couldn’t meet them. This is a common occurrence. AI algorithm development is far outstripping Moore’s Law, meaning conventional silicon-based solutions cannot keep up with the advanced processing demands of AI. Often, this means that AI processors that are more than capable of handling the current product generation’s algorithms aren’t able to address the next generation’s needs. This was the case for our customer. They wanted to deploy a new, very powerful 4K video low-light denoising AI algorithm in their next-generation system. However, their existing AI solution could not handle the algorithm in a PPA-friendly manner. In fact, to get the performance they needed, that solution’s power and area would have had to grow by at least 10x!
In conversations with the customer, together we explored their desired neural networks and their PPA targets. To be frank, what they saw as constraints, we saw as opportunities.
We’ve detailed our packet-based architecture before (most recently in a webinar) and demonstrated that the cumulative advantages of our high utilization, tightly integrated hardware/software approach, minimal memory footprint, and flexible building-block design combine to deliver industry-leading PPA. In close collaboration with the customer, we were able to far exceed their performance requirements within their power and area constraints. Specifically, the customer was able to increase AI inference throughput by 20x while reducing power consumption by more than 50%, without adding area to their chip. Let’s also be very clear: when we talk about AI power consumption being reduced by more than 50%, we aren’t measuring that on a per-inference or per-frame basis—we are talking about the entire engine. Expedera’s Origin IP, quite literally, gave them 20x more throughput (performance) at less than half the power of their previous generation!
These results are in line with what we continue to see in our engagements. The promise of packet-based engines is a reality, and our approach of working closely with customers to understand their targets and limitations continues to show that we can deliver well beyond what system designers believe to be realizable.
Today is a good day!