A high-performance workhorse engine for everyday inference applications
Origin E6 runs the breadth of models for general-purpose applications in devices such as smartphones, tablets, and edge servers. Expedera's advanced memory management ensures sustained DRAM bandwidth and optimal total system performance.
| Feature | Benefit |
| --- | --- |
| Power-efficient 18 TOPS/W | Industry-leading performance and power efficiency |
| Scalable performance from 18K MACs | Architected to serve a wide range of compute requirements |
| Processes HD images on chip | On-chip, L3, and DRAM memory work together to improve bandwidth |
| Advanced activation memory management | Drastically reduces memory requirements |
| Low latency | Deterministic, real-time performance |
| Compatible with a wide range of DNN models | Flexible for changing applications |
| Hardware-based neural network scheduler | Simple software stack |
| Processes models as trained, with no software optimizations required | Achieves the same accuracy as your trained model |
| Works with familiar open-source platforms such as TFLite | Simplifies deployment to end customers |
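The relationship between MAC count and peak throughput can be put in perspective with back-of-envelope arithmetic: each MAC unit performs two operations (a multiply and an add) per cycle, so peak TOPS follows from MAC count and clock rate. A minimal sketch, taking "18K" as a round 18,000 MACs and assuming a 1 GHz clock (both are illustrative assumptions, not vendor specifications):

```python
def peak_tops(mac_count: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS: each MAC contributes 2 ops (multiply + add) per cycle."""
    ops_per_second = 2 * mac_count * clock_ghz * 1e9
    return ops_per_second / 1e12

# Illustrative assumptions only: 18K MACs taken as 18,000; 1.0 GHz clock.
print(peak_tops(18_000, 1.0))  # 36.0 peak TOPS under these assumptions
```

Sustained throughput is lower than this peak and depends on utilization, which is where the memory-management features above come in.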
- Dramatically speeds up AI inference performance
- Avoids system over-design and bloated system costs
- Reduces power consumption while improving flexibility
- Delivers optimal performance for power-sensitive applications
- Suitable for system-critical applications
- Scalable architecture meets a wide range of application requirements
- No heavy software support burden
- Speeds deployment
- Best-in-class platform support