SSD Encounters with AI

Editor: admin | Updated: 2019-08-25

Data, the fuel of the AI era, is gushing out like an oil well.
According to statistics, people generate 2.5 quintillion bytes of data every day (a quintillion is ten to the eighteenth power). Ninety percent of all the data in the world was produced over the past two years, and IDC predicts the global total will reach 163 ZB by 2025. It is safe to say that the efficiency of data storage and processing determines a corporation's AI competitiveness and its future. Some researchers are exploiting memristors, whose behavior resembles that of memory cells, to develop neural-network processing mechanisms for deep learning, but the underlying technologies are immature and still at the laboratory stage.
The startup InnoGrit has launched a family of SSD controllers. One of them, Tacoma, targets data centers and embeds a neural-network accelerator; InnoGrit says this gives it an edge over its competitors in capability and performance.

As NAND flash prices show signs of bottoming out, these chips have emerged. OEMs and data centers are expected to take advantage of the lower prices and continue migrating laptop and server storage from hard disks to flash-based storage, which holds clear advantages in performance, power consumption, and size.

Wu Zining, CEO and founder of InnoGrit, points out that SSD applications are developing rapidly, especially with prices this low. "When we talk with our data-center customers, their new designs are based on flash memory," he added.

Wu Zining is Chinese; after 17 years at Marvell, he founded InnoGrit in 2016. He has appeared before Chinese media in the past, where he argued that storage has traditionally been responsible only for storing data, never for computing on it. If the storage side could do the computation itself, a great deal of data shuttling could be avoided and the system's bandwidth requirements greatly reduced.

The concrete idea is this: data sits on the drive, and when the computer needs to find specific content, it must read every record over the bus and compare each one itself to find the ones that match, then display or act on them. All of that traffic consumes a great deal of bandwidth. If the storage medium can compute on its own, the computer only needs to tell it what to find, and the rest of the job is handled by the compute-equipped storage. Because both search and comparison happen at the local end, nothing has to cross the bus just to be compared, which is efficient and saves energy.
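To make the bandwidth argument concrete, here is a rough, purely illustrative calculation. The record size, record count, and match rate below are assumptions chosen for the example, not figures from the article:

```python
# Illustrative sketch of the bus-traffic saving from in-storage
# ("computational storage") filtering. All numbers are assumed.

RECORD_SIZE = 4096          # bytes per record (assumed)
NUM_RECORDS = 1_000_000     # records on the drive (assumed)
MATCH_RATE = 0.001          # 0.1% of records satisfy the query (assumed)

# Conventional path: the host reads every record over the bus, then compares.
host_side_bytes = NUM_RECORDS * RECORD_SIZE

# In-storage path: the drive compares locally and returns only the matches.
in_storage_bytes = int(NUM_RECORDS * MATCH_RATE) * RECORD_SIZE

print(f"host-side transfer:  {host_side_bytes / 1e9:.2f} GB")
print(f"in-storage transfer: {in_storage_bytes / 1e6:.2f} MB")
print(f"bus traffic reduced by {host_side_bytes // in_storage_bytes}x")
```

Under these assumptions the host-side scan moves about 4.1 GB across the bus, while the in-storage search returns only about 4.1 MB of matches, a thousandfold reduction; the saving scales directly with how selective the query is.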

In the high-end segment, InnoGrit's Tacoma uses four PCIe Gen4 lanes and supports 16 NAND flash channels. At a peak power of 5 W, Tacoma delivers 1.5 million I/O operations per second. It includes an unnamed mid-range Arm core and Nvidia's open-source inference accelerator.
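As a rough sanity check on such IOPS claims (an illustrative calculation, not from the article), the four-lane PCIe Gen4 host link itself bounds how many 4 KiB I/Os per second any controller can deliver:

```python
# Rough ceiling on 4 KiB IOPS imposed by a PCIe Gen4 x4 link (illustrative).
lanes = 4
gt_per_s = 16e9                               # PCIe Gen4: 16 GT/s per lane
usable = gt_per_s * lanes * (128 / 130) / 8   # 128b/130b encoding, bits -> bytes
io_size = 4 * 1024                            # 4 KiB per I/O (assumed)

max_iops = usable / io_size
print(f"link bandwidth: {usable / 1e9:.2f} GB/s")
print(f"max 4 KiB IOPS: {max_iops / 1e6:.2f} million")
```

The link tops out near 7.9 GB/s, or about 1.9 million 4 KiB IOPS before any controller or flash overhead, so a claim in the range of 1.5 million IOPS sits plausibly below that ceiling.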

Wu said that with this combination, customers can use the toolchain Nvidia provides for intelligent processing such as data labeling, or install their own firmware. Because new customers' use cases are built on flash-based designs, he believes storage devices that embed AI computation will command more market value.

Samsung, however, took the first step on this concept before InnoGrit. In the fall of 2018 it launched an SSD with a built-in Xilinx Zynq FPGA, claimed to handle in-storage computing acceleration for AI, database, and video applications. Phison Electronics and Marvell, for their part, have both released chipsets similar to InnoGrit's, but unlike InnoGrit's, their designs embed no AI computation.


Tacoma uses a 64+8-bit data bus and supports DDR3/4 and LPDDR3/4 DRAM, and it provides AES-256, SHA-3, and ECC security. The mid-range Rainier controller is a scaled-down version of the same design, supporting eight NAND flash channels and 32-bit and 16-bit data buses, and is aimed at low-end servers and high-end laptops.

To realize this technology, Wu uses TSMC's 28 nm and 16/12 nm FinFET processes to develop four different controller designs supporting capacities from 2 TB to 32 TB. Rainier, for its part, delivers up to 1 million IOPS while staying under 3 W of peak power. Both Rainier and Tacoma support 7 GB/s sequential reads and 6.1 GB/s sequential writes.

Mr. Wu points out that these products lead their competitors on several metrics. However, at the beginning of July, competitor Phison Electronics announced an eight-channel SSD controller that likewise uses four PCIe Gen4 lanes.

Marvell has also announced three SSD controllers similar to InnoGrit's, but they support only four NAND flash channels and specify neither hardware AI acceleration nor peak power ratings.

At the low end, InnoGrit's Shasta controller is a DRAM-less SoC built on a 28 nm node for client systems. It uses two PCIe Gen3 lanes and supports 250,000 IOPS at 0.9 W peak power. A follow-on version will use four PCIe lanes and double that performance at 1.35 W peak power.

All of the chips support 2D and 3D NAND, including QLC. The Tacoma controller also supports Toshiba's new low-latency XL-Flash, with roughly 10 µs latency, but not Samsung's Z-NAND.

The venture-backed startup has offices in China, Taiwan, and the United States, but says it has no business relationship with the Taiwanese SSD manufacturer InnoDisk. The company has completed its Series B funding and put all of its products into production, though Wu declined to disclose the names of its investors or the amount raised.

Gregory Wong, chief analyst at Forward Insights, points out that many corporations are competing in this field. He has surveyed about forty controller manufacturers and finds that the majority are in China. Their chips went into an SSD market that last shipped 215 million client drives and 30 million server drives. Wong estimates that a client controller sells for only about $2, while a server chip can fetch around $15. He calls it a hard market: every seller's goal is to emulate SandForce, the startup LSI bought in 2011.

Jim Handy, chief analyst at Objective Analysis, said that developing a controller costs around $50 million, a charge small SSD companies cannot afford. But for an independent company it is worthwhile to build controllers that can be sold to many small firms. With luck, large SSD manufacturers such as Intel, Micron, Western Digital, and SanDisk may buy their controllers and stop producing their own. Wong adds that NAND flash prices have already risen, and OEM prices are estimated to hit bottom at the end of this year.

In fact, SSDs have always carried a considerable amount of computing power, aimed at protecting data integrity; those calculations were originally handled by CPUs and DSPs. Adding extra neural-network computing power today is therefore quite reasonable. As this type of design becomes more popular, compute-equipped SSDs can enable more diverse AI applications. Although these applications will first be tried in servers, they may well spread to storage devices in general computing environments, accelerating data processing across AI applications and reducing the extra power consumed by shuttling data around the system.

 
