The ZIA DV700 Series is a neural processing unit designed for AI inference tasks. With high-precision FP16 floating-point arithmetic as a standard feature, it can run AI models trained on cloud servers directly, without re-training. This preserves inference precision and supports high reliability, making the series well suited to applications that demand high confidence, such as autonomous vehicles and robotics. Its architecture is built for deep learning and can process multiple AI models, including object detection and semantic segmentation networks, offering broad versatility.
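The key point of "FP16 without re-training" is that FP32 weights from a cloud-trained model can simply be cast to half precision, with a representational error that is usually negligible for typically scaled weights. A minimal sketch of that idea (using NumPy and random stand-in weights, not the DV700 toolchain itself):

```python
import numpy as np

# Hypothetical FP32 weights from a cloud-trained model (random stand-in data).
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((256, 256)).astype(np.float32)

# Cast to FP16 for on-device inference -- no retraining step is involved.
weights_fp16 = weights_fp32.astype(np.float16)

# The cast roughly halves storage and keeps the round-off error small
# for unit-scale weights (FP16 carries a 10-bit mantissa).
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
```

In practice a deployment toolchain also validates end-to-end accuracy on sample inputs, but the weight conversion itself is this direct.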
The ZIA DV700 handles a wide range of deep neural network (DNN) configurations, supporting real-time AI tasks. MobileNet, Yolo v3, and SegNet are a few examples of the supported models, illustrating its adaptable architecture. Its toolkit (SDK/Tool) bridges standard AI development frameworks such as Caffe, Keras, and TensorFlow, allowing models built in those frameworks to be executed on the hardware.
Moreover, the ZIA DV700 delivers up to 1 TOPS of processing power with high-bandwidth on-chip RAM configurable from 512 KB to 4 MB, and supports interchange formats including ONNX. It uses 8-bit weight compression to reduce memory footprint and bandwidth while maintaining accuracy, making it a robust solution for intensive AI applications.
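The DV700's exact compression scheme is not detailed here, but 8-bit weight compression is commonly realized as linear quantization: store each weight as a signed 8-bit integer plus a shared scale, then dequantize at compute time. A generic sketch of that technique (symmetric per-tensor quantization in NumPy; all names and the scheme itself are illustrative assumptions, not the DV700's documented method):

```python
import numpy as np

# Stand-in FP32 weight tensor (random data for illustration).
rng = np.random.default_rng(1)
w = rng.standard_normal((128, 128)).astype(np.float32)

# Symmetric per-tensor quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.max(np.abs(w)) / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)  # 8-bit storage

# Dequantize when the weights are needed for computation.
w_hat = q.astype(np.float32) * scale

# Storage shrinks 4x (int8 vs. float32); the reconstruction error of any
# element is bounded by half a quantization step (scale / 2).
max_err = np.max(np.abs(w - w_hat))
```

A per-channel scale instead of a per-tensor one is a common refinement when weight magnitudes vary widely across output channels.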