
Accelerate Automotive with AI
FROM IN-CABIN TO ADAS
Faster, Power-Efficient Processing
Automakers have relied on a human driver behind the wheel for more than a century. With Level 3 ADAS systems now in place, the road ahead leads to full autonomy and Level 5 self-driving. It will be a long climb, however: much of the technology that got the industry to Level 3 will not scale in all the needed dimensions, including performance, memory usage, interconnect, chip area, and power consumption. The scalability problem is perhaps nowhere better illustrated than in Artificial Intelligence (AI) processing.
Automobiles are increasingly data-centric, with AI leading the charge. Whether deployed in-cabin for driver-distraction applications or in the ADAS stack for object recognition and point cloud processing, AI forms the backbone of the safer, smarter cars of the future. NPUs (Neural Processing Units) are the ideal vehicle for AI.
Scaling TOPS – How Many TOPS Do I Need for L3, L4, and L5?
Self-driving may require six to eight 8K camera inputs plus data from LIDAR, radar, ultrasonic, and other sensors. How far ADAS will push TOPS (Trillions of Operations per Second) requirements is still anyone's guess. Just a few years ago, some estimated that 24 TOPS would be required for L3, a figure today's automakers have already surpassed. Factor in the exponential processing needs of L4 and L5, and even a cursory study of today's mainstream architectures shows that the industry needs a new AI processing paradigm. Active cooling of chips within cars is neither ideal nor commercially feasible, and the cost of using multiple large, leading-node semiconductors doesn't fit most carmakers' financial models.
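
To make the scaling problem concrete, the minimal Python sketch below estimates the compute demand from camera streams alone. Every figure in it, including the function name required_tops, the camera count, the frame rate, and the assumed 50,000 NN operations per pixel, is an illustrative assumption rather than a published requirement; the point is simply that high-resolution, high-frame-rate sensor suites push compute into the hundreds of TOPS before LIDAR, radar, or sensor fusion is even counted.

    # Back-of-envelope estimate of the compute demand from camera streams alone.
    # Every figure here (camera count, resolution, frame rate, ops per pixel) is
    # an illustrative assumption, not a published requirement.

    def required_tops(num_cameras: int, width: int, height: int,
                      fps: int, ops_per_pixel: float) -> float:
        """Throughput needed, in TOPS, to process identical camera streams."""
        ops_per_second = num_cameras * width * height * fps * ops_per_pixel
        return ops_per_second / 1e12  # trillions of operations per second

    # Six 8K cameras at 30 fps, assuming ~50,000 NN operations per pixel
    # (a rough figure in line with detection/segmentation backbones).
    print(f"{required_tops(6, 7680, 4320, 30, 50_000):.0f} TOPS")  # roughly 300 TOPS
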
Neural Network Diversity
Automotive NPUs need to perform at hundreds of TOPS or more. They also need to run multiple unique Neural Networks (NNs) efficiently, something not typically required in other markets. While a small consumer device may run a single NN at a small resolution (for example, ResNet50 at 224 x 224), automotive NPUs must run multiple high-resolution networks concurrently (for example, ResNeXt at 1920 x 1080 x 3, Swin Transformer at 2880 x 1860, and DETR at 1824 x 940). Automotive also introduces the unknown to NPUs: many OEMs have developed custom, proprietary NNs of their own.
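
That contrast can be made concrete with a rough comparison of per-frame compute. In the sketch below, the base op counts are approximations for the canonical input sizes, and scaling compute linearly with input area is a simplifying assumption (attention-based models such as Swin and DETR can scale worse than linearly); it is illustrative only.

    # Rough per-frame compute: one small consumer workload versus the concurrent
    # automotive mix named above. Base op counts are approximations, and linear
    # scaling with input area is a simplifying assumption.

    BASE = {  # model: (canonical input pixels, approx. GOPs per frame at that input)
        "ResNet50":         (224 * 224, 4),
        "ResNeXt-50":       (224 * 224, 4),
        "Swin Transformer": (224 * 224, 9),
        "DETR":             (1333 * 800, 86),
    }

    def gops_at(name: str, width: int, height: int) -> float:
        base_pixels, base_gops = BASE[name]
        return base_gops * (width * height) / base_pixels

    consumer = gops_at("ResNet50", 224, 224)
    automotive = (gops_at("ResNeXt-50", 1920, 1080)
                  + gops_at("Swin Transformer", 2880, 1860)
                  + gops_at("DETR", 1824, 940))

    print(f"single consumer NN: {consumer:7.0f} GOPs per frame")
    print(f"automotive NN mix : {automotive:7.0f} GOPs per frame "
          f"(~{automotive / consumer:.0f}x)")

Even under these simplifying assumptions, the automotive mix demands well over two orders of magnitude more compute per frame than the single consumer network, and it must run concurrently, at automotive frame rates, within a tight power budget.
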
Best AI for Automotive Uses
Simply put, many of today's NPUs cannot support the level of NN diversity automobiles require in a highly utilized, power-efficient manner, especially when future needs are considered. Expedera can: our Origin™ E6 and E8 families run multiple networks concurrently at up to 128 TOPS per single core (PetaOPS across multiple cores). Furthermore, at an industry-leading 18 TOPS/W, Expedera requires no active cooling while meeting both in-cabin and ADAS needs. With support for standard NNs, custom NNs, and as-yet-unknown future NNs, Expedera is the ideal IP partner for automotive chip design.
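
As a quick sanity check on the cooling claim, the arithmetic implied by the figures above is simple: dividing peak throughput by efficiency gives the power per core. The 128 TOPS and 18 TOPS/W values come from the text; the multi-core lines are a simplifying assumption that ignores interconnect and memory power.

    # Power implied by the quoted figures: 128 TOPS per core at 18 TOPS/W.
    # Multi-core scaling here is a simplification that ignores interconnect
    # and memory power.

    PEAK_TOPS_PER_CORE = 128
    TOPS_PER_WATT = 18

    for cores in (1, 2, 4):
        tops = cores * PEAK_TOPS_PER_CORE
        watts = tops / TOPS_PER_WATT
        print(f"{cores} core(s): {tops:4d} TOPS at roughly {watts:.1f} W")

A single 128 TOPS core works out to roughly 7 W at 18 TOPS/W, the kind of power envelope that can be handled without active cooling.
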

"The Future of Automotive SoCs"
