FMS 2018: Doing AI in Storage

Source: | Editor: Linde | Updated: 2018-09-09

The keynote from the startup NGD introduced a proof-of-concept prototype completed together with Microsoft Research. The concept of in-situ processing integrates computational acceleration units into the storage controller. Like most schemes for acceleration on storage, it leverages proximity to the stored data to offload computation, thereby reducing latency.
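The payoff of this offloading is easy to see in a toy model. The sketch below is purely illustrative (the function names and the in-Python "device" are hypothetical, not NGD's actual interface): pushing a selective filter down to the device means only matching records cross the host interface, instead of the whole dataset.

```python
# Illustrative sketch of computation offloading to storage (hypothetical API,
# not a real device interface). The point: a predicate pushed down to the
# device cuts the data that must cross the host bus.

def host_side_scan(records, predicate):
    """Baseline: the host reads every record, then filters locally."""
    transferred = len(records)          # all records cross the bus
    result = [r for r in records if predicate(r)]
    return result, transferred

def in_storage_scan(records, predicate):
    """Offloaded: the device applies the predicate before transfer."""
    result = [r for r in records if predicate(r)]
    transferred = len(result)           # only matches cross the bus
    return result, transferred

records = list(range(100_000))
wanted = lambda r: r % 1000 == 0        # selective predicate (0.1% hit rate)

r1, t1 = host_side_scan(records, wanted)
r2, t2 = in_storage_scan(records, wanted)
assert r1 == r2                          # same answer, far less data moved
print(f"host transfer: {t1} records, in-storage transfer: {t2} records")
```

With a 0.1% hit rate the offloaded scan moves 100 records instead of 100,000; the more selective the workload, the bigger the win.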

NGD's acceleration is aimed mainly at CNN workloads such as image classification, face recognition, and edge applications like vehicle license-plate recognition. Unlike Google's TPU Pod, which lives in a cloud data center, applications on the edge are more demanding about real-time performance. Before the advent of 5G, the path for sending raw data from the end device to the cloud for processing is simply too long, so edge computing and so-called fog computing retain strong vitality. The benefits of using the edge's processing power are mainly: 1. lower latency; 2. less data transmission, because the raw data is not needed and the feature data is enough.
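The second benefit can be made concrete with a toy example. This is only a sketch under stated assumptions: a real edge device would ship CNN embeddings or a recognized plate string, not the naive 16-bin intensity histogram used here as a stand-in feature.

```python
# Sketch: instead of uploading a raw frame, an edge device sends a compact
# feature vector. The histogram here is a toy stand-in for real features
# (CNN embeddings, recognized plate strings, etc.).

def extract_features(frame, bins=16):
    """Reduce a grayscale frame to a fixed-size intensity histogram."""
    hist = [0] * bins
    for pixel in frame:                  # pixel: integer intensity 0..255
        hist[pixel * bins // 256] += 1
    return hist

# fake 1080p grayscale frame (one int per pixel)
frame = [(x * 37) % 256 for x in range(1920 * 1080)]
features = extract_features(frame)

print(f"raw frame: {len(frame)} values, features: {len(features)} values")
```

The raw frame is over two million values; the feature vector is 16. Whatever the real feature is, the ratio of raw data to transmitted features is what makes edge preprocessing pay off.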

Similarly, at FMS 2018 Marvell introduced a concept chip that combines an SSD controller with Nvidia's NVDLA to create a device that can run deep-learning inference on the end device.

Information on NVDLA is available at http://nvdla.org/primer.html. When it was released in 2017, many exclaimed that it might end the ASIC plans of numerous AI startups, because NVDLA is an open-source deep-learning accelerator designed with the benefit of Nvidia's expertise in deep learning. Most of Nvidia's earlier investment targeted the training market, while inference needed to be served with new hardware and architectures. From the perspective of occupying the market, open-sourcing the accelerator design matters greatly for Nvidia's influence.

No concrete information about the concept chip seems to have been shown at Marvell's booth, beyond what was released in the FMS 2018 press materials. The current focus of the AI SSD is the data-analysis market, where AI SSD acceleration can serve specific cloud or edge applications in big-data analysis and data annotation. As the internal block diagram of NVDLA shows, NVDLA is mainly aimed at CNN image applications.
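The core operation such a CNN engine accelerates is the 2D convolution. As a reminder of what the hardware is actually doing, here is a minimal pure-Python version (single channel, stride 1, no padding); NVDLA implements this same arithmetic as parallel multiply-accumulate arrays.

```python
# Toy 2D convolution, the core CNN operation an engine like NVDLA
# accelerates in hardware (single channel, stride 1, "valid" padding).

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1             # output height
    ow = len(image[0]) - kw + 1          # output width
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# 3x3 edge-detect kernel applied to a 4x4 linear ramp: a ramp has no
# edges, so every output value is exactly zero.
image = [[float(r + c) for c in range(4)] for r in range(4)]
kernel = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
print(conv2d(image, kernel))   # → [[0.0, 0.0], [0.0, 0.0]]
```

A dedicated accelerator wins because these multiply-accumulate loops dominate CNN inference and map naturally onto fixed-function hardware.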

It is not only Marvell; many enterprise-class controller vendors are thinking in this direction. For example, Netint.ca, from the former PMC team in Canada, showed its PCIe 4.0 SSD controller at FMS.


A video processing unit is integrated into the SSD controller. For high-definition H.265 processing, the mainstream solution today is still CPU-based: a single Intel x86 server-class solution handles roughly 4-6 channels, and hardware acceleration can deliver better performance and power consumption.

GOKE, an A-share listed company, also proposed the concept of near-data computing at the Wuhan Storage Semiconductor Summit and raised the question of how optimization for AI workloads should be reflected in SSD design. In China so far, the hottest AI development is closely tied to security and surveillance, to which leaders at all levels attach the greatest importance. Stability-maintenance techniques such as face recognition, behavior analysis, and people counting still face many technical challenges.


When it comes to accelerating storage for big-data analysis, we have to talk about BlueDBM and the Korean researcher behind the project. Although he graduated from MIT and moved to UC Irvine, he has persisted in research on big-data analysis acceleration, so the original framework has been enriched with a great deal of content.


Now that graph computing and vertex-centric computing have been done, anyone who has worked on machine learning can probably guess what comes next. I recommend keeping an eye on this star project; it was supported by the Chinese Academy of Sciences back when I was at MIT, though I do not know whether Samsung will support it going forward.
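For readers unfamiliar with vertex-centric computing, here is a minimal sketch of the style of workload BlueDBM-class systems accelerate. This is an illustrative toy PageRank, not BlueDBM's actual implementation: each iteration streams the edge list once and scatters rank along edges, which is exactly the access pattern that benefits from sitting close to flash.

```python
# Toy vertex-centric PageRank: each iteration is one streaming pass over
# the edge list, the access pattern near-storage graph engines accelerate.
# Illustrative only; not BlueDBM's actual code.

def pagerank(edges, n, damping=0.85, iters=20):
    out_deg = [0] * n
    for src, _ in edges:
        out_deg[src] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        contrib = [0.0] * n
        for src, dst in edges:           # streaming scatter along edges
            contrib[dst] += rank[src] / out_deg[src]
        rank = [(1 - damping) / n + damping * c for c in contrib]
    return rank

# small 3-node cycle: by symmetry every vertex converges to rank 1/3
edges = [(0, 1), (1, 2), (2, 0)]
ranks = pagerank(edges, 3)
print(ranks)
```

The edge list is read sequentially and can be far larger than DRAM, which is why pushing this loop next to the storage device is attractive.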

As with the computing acceleration discussed earlier, we can see a clear trend of growing support for near-data computing and of integrating relatively mature solutions into SSD controllers, which may be a new direction for enterprise-class SSD controllers. Finally, FMS 2018 closed with the introduction of an SSD controller from a Korean company, the first to use RISC-V in an SSD controller. Its performance and architecture differ from traditional controllers and are worth studying.