In this article:
- What is the Movidius Myriad X Vision Processing Unit?
- What versions of the Myriad X are available?
- The First Vision Processing System-on-Chip
What is the Movidius Myriad X Vision Processing Unit?
The chip was developed by Movidius, a company Intel acquired back in 2016, specifically for the Neural Compute Engine it was creating. The idea behind this endeavour is to bring AI capabilities to everyday devices. Movidius already offered the Fathom Neural Compute Stick, which enables deep learning capabilities in embedded devices; Intel wanted to take this to a new level, where learning algorithms can train themselves to identify images and words or analyse video feeds. The Myriad X brings such capabilities to devices like drones, cameras, robots and VR / AR headsets. The new VPU is a 16 nm system-on-a-chip that integrates vision accelerators, imaging accelerators and the Movidius Neural Compute Engine, as well as 16 SHAVE vector processors paired with a CPU. The small chip can process up to 4 trillion operations per second, with a TDP of just 1.5 W.
What versions of the Myriad X are available?
There are two versions of the Myriad X: the MA2085, which ships without in-package memory and exposes an external memory interface, and the MA2485, which integrates 4 gigabits of in-package LPDDR4 memory. The VPU also supports PCIe interfaces, which allows OEMs to integrate several chips in a single device.
The First Vision Processing System-on-Chip
Intel introduced the Movidius Myriad X Vision Processing Unit (VPU), which the company called the first vision processing system-on-a-chip (SoC) with a dedicated neural compute engine to accelerate deep neural network inferencing at the network edge.
The introduction of the SoC closely follows the release of the Movidius Neural Compute Stick in July, a USB-based offering “to make deep learning application development on specialized hardware even more widely available.”
The VPU’s new neural compute engine is an on-chip hardware block specifically designed to run deep neural networks at high speed and low power. “With the introduction of the Neural Compute Engine, the Myriad X architecture is capable of 1 TOPS – trillion operations per second based on peak floating-point computational throughput of Neural Compute Engine – of computing performance on deep neural network inferences,” said Intel.
Commenting on the introduction, Steve Conway of Hyperion Research said, “The Intel VPU is an essential part of the company’s larger strategy for deep learning and other AI methodologies. HPC has moved to the forefront of R&D for AI, and visual processing complements Intel’s HPC strategy. In the coming era of autonomous vehicles and networked traffic, along with millions of drones and IoT sensors, ultrafast visual processing will be indispensable.”
In addition to its Neural Compute Engine, Myriad X combines imaging, visual processing and deep learning inference in real time with:
- “Programmable 128-bit VLIW Vector Processors: Run multiple imaging and vision application pipelines simultaneously with the flexibility of 16 vector processors optimized for computer vision workloads.
- Increased Configurable MIPI Lanes: Connect up to 8 HD resolution RGB cameras directly to Myriad X with its 16 MIPI lanes included in its rich set of interfaces, to support up to 700 million pixels per second of image signal processing throughput.
- Enhanced Vision Accelerators: Utilize over 20 hardware accelerators to perform tasks such as optical flow and stereo depth without introducing additional compute overhead.
- 2.5 MB of Homogenous On-Chip Memory: The centralized on-chip memory architecture allows for up to 450 GB per second of internal bandwidth, minimizing latency and reducing power consumption by minimizing off-chip data transfer.”
Remi El-Ouazzane, former CEO of Movidius and now vice president and general manager of Movidius, Intel New Technology Group, is quoted in the announcement release: “Enabling devices with humanlike visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much AI and vision compute power as possible, all within the unique energy and thermal constraints of modern untethered devices.”
Neural network technology and product development are moving quickly on both the training and inferencing fronts. It seems likely there will be a proliferation of AI-related “processing units” spanning chip-to-system level products as the technology takes hold both inside data centres and on network edges. Google, of course, has introduced the second generation of its Tensor Processing Unit (TPU), Graphcore has an Intelligence Processing Unit (IPU), and Fujitsu has a Deep Learning Unit (DLU).
El-Ouazzane has written a blog about the new SoC, in which he notes, “As we continue to leverage Intel’s unique ability to deliver end-to-end AI solutions from the cloud to the edge, we are bound to deliver a VPU technology roadmap that will continue to dramatically increase edge compute performance without sacrificing power consumption. This next decade will mark the birth of brand-new categories of devices.”
According to Intel, key features of the Neural Compute Stick, aimed at developers, include the following (a usage sketch follows the list):
- Supports CNN profiling, prototyping, and tuning workflow
- All data and power provided over a single USB Type A port
- Real-time, on-device inference – cloud connectivity not required
- Run multiple devices on the same platform to scale performance
- Quickly deploy existing CNN models or uniquely trained networks
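To make this workflow concrete, below is a minimal sketch of running a compiled CNN on the stick through the Movidius Neural Compute SDK’s Python API (mvnc). The graph file path, input shape and float16 input type are illustrative assumptions; the exact requirements depend on the compiled model.

# Hypothetical minimal example using the NCSDK v1 Python API (mvnc);
# the 'graph' file path and 224x224x3 float16 input are placeholders.
import numpy as np
from mvnc import mvncapi as mvnc

# Find an attached Neural Compute Stick (no cloud connectivity needed).
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Movidius device found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph file previously compiled from a trained CNN with the SDK tools.
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Run one on-device inference: send an input tensor, read back the result.
input_tensor = np.zeros((224, 224, 3), dtype=np.float16)
graph.LoadTensor(input_tensor, "user object")
output, user_obj = graph.GetResult()
print(output.shape)

# Release the graph and close the device.
graph.DeallocateGraph()
device.CloseDevice()

Because each stick runs independently over USB, scaling performance across multiple devices amounts to repeating this open/allocate/infer loop per enumerated device.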