
Overview
» Supports the IEI iRIS-2400 remote intelligent solution
» Intel® Celeron® J1900 quad-core SoC
» Robust IP65 aluminum front bezel
» Aesthetic ultra-thin bezel for seamless panel mount installation
» Projected capacitive multi-touch and resistive single touch options
» Dual full-size PCIe Mini card expansion
» 9 V ~ 36 V wide-range DC input
Overview
» 5.7”, 8” and 10.4” fanless industrial panel PC
» 2.0 GHz Intel® Celeron® J1900 quad-core processor or 1.58 GHz Intel® Celeron® N2807 dual-core processor
» Low-power DDR3L memory supported
» IP65-compliant front panel
» 9 V ~ 30 V wide-range DC input
» mSATA SSD supported
» Dual GbE for network redundancy
Overview
•Operating Systems
Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
•OpenVINO™ toolkit
◦Intel® Deep Learning Deployment Toolkit
- Model Optimizer
- Inference Engine
◦Optimized computer vision libraries
◦Intel® Media SDK
•Currently Supported Topologies: AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
* For more information on supported topologies, please refer to the official Intel® OpenVINO™ Toolkit website.
•High flexibility: the Mustang-M2BM-MX2 is developed on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to execute on it after conversion to optimized IR (see the sketch below).
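As a minimal sketch of that conversion step, the snippet below invokes the Model Optimizer script (mo.py) on a trained Caffe model to produce the optimized IR (.xml/.bin pair). The install path and model file names are placeholders, not part of the product specification.

# Minimal sketch: convert a trained Caffe model to OpenVINO IR with the
# Model Optimizer. The MO path is a typical install location and may
# differ on your system; model file names are placeholders.
import subprocess

MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

subprocess.run(
    [
        "python3", MO,
        "--input_model", "model.caffemodel",  # trained Caffe weights (placeholder)
        "--input_proto", "deploy.prototxt",   # matching network definition
        "--data_type", "FP16",                # Myriad X VPUs run FP16 inference
        "--output_dir", "ir/",                # writes model.xml + model.bin
    ],
    check=True,
)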
Overview
•Operating Systems
Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
•OpenVINO™ toolkit
◦Intel® Deep Learning Deployment Toolkit
- Model Optimizer
- Inference Engine
◦Optimized computer vision libraries
◦Intel® Media SDK
•Currently Supported Topologies: AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
* For more information on supported topologies, please refer to the official Intel® OpenVINO™ Toolkit website.
•High flexibility: the Mustang-M2AE-MX1 is developed on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to execute on it after conversion to optimized IR (see the sketch below).
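Once a model is converted to IR, running it on the card's Myriad X VPU takes only a few lines. A minimal sketch, assuming the Inference Engine Python API (IECore); file names and the dummy input shape are placeholders.

# Minimal sketch: load an IR and run one inference on the MYRIAD device.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # Myriad X VPU

input_name = next(iter(net.input_info))               # first input blob name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder NCHW input
result = exec_net.infer({input_name: dummy})
print(list(result.keys()))                            # output blob names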
Overview
•Operating Systems
Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
•OpenVINO™ toolkit
◦Intel® Deep Learning Deployment Toolkit
- Model Optimizer
- Inference Engine
◦Optimized computer vision libraries
◦Intel® Media SDK
•Currently Supported Topologies: AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
* For more information on supported topologies, please refer to the official Intel® OpenVINO™ Toolkit website.
•High flexibility: the Mustang-V100-MX4 is developed on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to execute on it after conversion to optimized IR (see the sketch below).
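A quick way to confirm the card is visible to the toolkit is to list the Inference Engine's device plugins. This is a hedged sketch: a multi-VPU card such as the Mustang-V100-MX4 is typically exposed through the HDDL plugin, which assumes the HDDL daemon is installed and running.

# Minimal sketch: list the devices the Inference Engine can see.
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'HDDL'] when the card is up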
Overview
•Operating Systems
Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
•OpenVINO™ Toolkit
◦Intel® Deep Learning Deployment Toolkit
- Model Optimizer
- Inference Engine
◦Optimized computer vision libraries
◦Intel® Media SDK
◦*OpenCL™ graphics drivers and runtimes.
◦Currently Supported Topologies: AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
- For more information on supported topologies, please refer to the official Intel® OpenVINO™ Toolkit website.
•High flexibility: the Mustang-V100-MX8 is developed on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to optimized IR (see the sketch below).
*OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.
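To keep all eight Myriad X VPUs busy, requests are usually issued asynchronously through the HDDL plugin. A minimal sketch, assuming the IECore Python API and an already-converted IR; file names and the input shape are placeholders.

# Minimal sketch: asynchronous inference on the HDDL plugin, letting its
# scheduler spread requests across the card's eight VPUs.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
exec_net = ie.load_network(network=net, device_name="HDDL", num_requests=8)

input_name = next(iter(net.input_info))
frames = [np.zeros((1, 3, 224, 224), dtype=np.float32) for _ in range(8)]

for i, frame in enumerate(frames):
    exec_net.requests[i].async_infer({input_name: frame})  # non-blocking
for req in exec_net.requests:
    req.wait()  # block until that request's result is ready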
Overview
•Operating Systems
Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit (Windows 10 support is planned for the end of 2018; more operating systems are coming soon)
•OpenVINO™ toolkit
◦Intel® Deep Learning Deployment Toolkit
- Model Optimizer
- Inference Engine
◦Optimized computer vision libraries
◦Intel® Media SDK
◦*OpenCL™ graphics drivers and runtimes.
◦Currently Supported Topologies: AlexNet, GoogleNet, Tiny Yolo, LeNet, SqueezeNet, VGG16, ResNet (more variants are coming soon)
◦Intel® FPGA Deep Learning Acceleration Suite
•High flexibility: the Mustang-F100-A10 is developed on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to optimized IR (see the sketch below).
*OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.
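Loading a network onto the card typically goes through the HETERO plugin, so that layers supported by the FPGA bitstream run on the Mustang-F100-A10 while any remaining layers fall back to the CPU. A minimal sketch, assuming the IECore Python API and placeholder IR file names.

# Minimal sketch: heterogeneous execution with FPGA-first placement.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")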
Overview
The Intel® Distribution of OpenVINO™ toolkit is built around convolutional neural network (CNN) workloads; it extends those workloads across multiple types of Intel® platforms and maximizes performance.
It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models (including Caffe*, MXNet, TensorFlow*, ONNX*, and Kaldi*) for easier deployment across Intel® silicon products (CPU, GPU/Intel® Processor Graphics, FPGA, VPU).
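For the pre-trained models mentioned above, the toolkit ships an Open Model Zoo downloader script. The sketch below is illustrative only: the install path and model name are placeholders, and the script's --print_all flag lists the models actually available.

# Minimal sketch: fetch a pre-trained model with the Open Model Zoo
# downloader. Path and model name are placeholders.
import subprocess

DOWNLOADER = "/opt/intel/openvino/deployment_tools/tools/model_downloader/downloader.py"

subprocess.run(
    ["python3", DOWNLOADER,
     "--name", "face-detection-retail-0004",  # example model name
     "--output_dir", "models/"],
    check=True,
)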