At the recent Embedded Vision Summit, Expedera chief scientist and co-founder Sharad Chole detailed LittleNPU, our new AI processing approach for always-sensing smartphones, security cameras, doorbells, and other consumer devices. Always-sensing cameras persistently sample and analyze visual data to identify specific triggers relevant to the…
Can Compute-In-Memory Bring New Benefits To Artificial Intelligence Inference?
Compute-in-memory (CIM) is not necessarily an Artificial Intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI…
Sometimes Less is More—Introducing the New Origin E1 Edge AI Processor
All neural networks have similar components, including neurons, synapses, weights, biases, and functions. But each network has unique requirements based on the number of operations, weights, and activations that must be processed. This is apparent when comparing popular networks, as shown in the chart below….
Measuring NPU Performance
There is a lot more to understanding the capabilities of an AI engine than TOPS per watt. This rather arbitrary measure of an engine's operations per unit of power completely misses the point that a single operation on one…
Expedera Hears You: Wearables Need Low-Power AI
In the short time since emerging from stealth mode, Expedera has quickly become known as the Artificial Intelligence (AI) Inference IP company that delivers the best performance per watt and per area. Our Origin product family scales to 128 TOPS and addresses applications from the…