05 January 2018
A family of artificial intelligence (AI) processors launched by CEVA has been designed for deep learning inference at the edge.
The NeuPro family ranges from 2 TOPS (tera operations per second) for the entry-level processor to 12.5 TOPS for the most advanced configuration.
Ilan Yona, general manager of the vision business unit at CEVA, said: “It’s abundantly clear that AI applications are trending toward processing at the edge, rather than relying on services from the cloud. The computational power required, along with the low power constraints for edge processing, calls for specialised processors rather than using CPUs, GPUs or DSPs.”
The NeuPro family includes four AI processors, each comprising the NeuPro engine and the NeuPro VPU (vector processing unit).
The engine includes hardwired implementations of the principal neural network layer types, including convolutional, fully connected, pooling and activation layers.
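To illustrate what these four layer types compute (not CEVA's hardwired implementation, which is proprietary), here is a minimal NumPy sketch of each; all function names and shapes are illustrative assumptions:

```python
import numpy as np

def convolve2d(image, kernel):
    """Convolutional layer: naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Pooling layer: non-overlapping max pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    """Activation layer: rectified linear unit."""
    return np.maximum(x, 0)

def fully_connected(x, weights, bias):
    """Fully connected layer: flatten the input, then apply an affine transform."""
    return weights @ x.ravel() + bias
```

A dedicated engine hardwires exactly these dataflows so they run without instruction-fetch overhead, while anything outside this fixed set falls to the programmable VPU.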
The NeuPro VPU is a programmable vector DSP that handles CEVA’s CDNN software and provides software-based support for new advances in AI workloads.
NeuPro supports both 8-bit and 16-bit neural networks, with the optimal precision determined in real time. It will be available for licensing to select customers in the second quarter of 2018 and for general licensing in the third quarter of 2018.
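One plausible way to choose between 8-bit and 16-bit precision per tensor is to measure the quantization error of the cheaper format and fall back to the wider one when the error is too large. The sketch below is a generic illustration of that idea, not CEVA's actual selection logic; the error tolerance is an assumed parameter:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization to a signed integer grid, returned dequantized."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.round(x / scale) * scale

def choose_precision(x, tolerance=1e-3):
    """Use 8-bit when its mean-squared quantization error is tolerable, else 16-bit."""
    err8 = np.mean((x - quantize(x, 8)) ** 2)
    return 8 if err8 <= tolerance else 16
```

A decision like this can run per layer as data arrives, trading a small accuracy loss for the bandwidth and power savings of the narrower format.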
CEVA will also offer the NeuPro hardware engine as a convolutional neural network accelerator.