How Intel and Nvidia GPUs will shape the future of artificial-intelligence chips
At the end of May, a major conference on artificial intelligence was held at Pennsylvania State University in central Manhattan. The attendees were mostly tech start-ups, along with Applied Materials (AMAT), the world's top manufacturer of semiconductor equipment. The conference was hosted by Pierre Ferragu, who had recently left his post as a technology analyst at Bernstein and now works on technology research at New Street Research, a pure research firm with no other lines of business.
From the left were Netronome CEO Niel Viljoen, Mipsology CEO Ludovic Larzul, Applied Materials' head of market intelligence Sundeep Bajikar, Syntiant CEO Kurt Busch, and Applied Materials CIO Jay Kerley.
The conference focused on which chips will run today's intelligent algorithms, such as "deep learning." It was also a good opportunity to examine whether Intel (INTC), Nvidia (NVDA), and Xilinx (XLNX) can dominate the computer industry in the future.
Speakers at the conference generally agreed that, by the most conservative estimate, demand for AI silicon will reach $15 billion per year within the next five years. The question was which chips will serve the artificial-intelligence market.
Netronome Systems, founded in August 2003 and headquartered in Pittsburgh, Pennsylvania, is a leading supplier of highly programmable semiconductors, with subsidiaries in North America, Europe, Asia, and Africa providing technical and sales support to customers worldwide. Netronome currently manufactures network accelerator chips.
Niel Viljoen, Netronome's founder and CEO, argued that with the advent of a new generation of chips, there will be less use of Intel microprocessors. "Every change reduces the load on the CPU," he said.
Viljoen added: "The architecture of the chip involves materials and accelerators, as well as things we haven't touched yet. Take DRAM memory chips, for instance: we don't know how their memory is organized and distributed."
Although Nvidia's GPUs are the best-recognized chips in artificial intelligence, panelists pointed out that there will be more types of chips in the future, and that they will pay more attention to memory circuits rather than just computing circuits. Another company, Mipsology, is creating software that makes better use of the field-programmable gate arrays (FPGAs) sold by Intel, Xilinx, and Lattice Semiconductor (LSCC), which illustrates the diversity of chip approaches.
Mipsology CEO Ludovic Larzul explained that some tasks in deep learning require traditional "branch" instructions, that is, computer commands that follow a narrow "if/then/else" structure, and this kind of control flow is a poor fit for GPUs.
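To see why branchy control flow is awkward for GPUs, consider the two styles below. This is a minimal illustrative sketch (not code from the article, and written in NumPy rather than GPU code): threads in a GPU's SIMD group that take different if/else paths must be executed one path at a time, so data-parallel hardware prefers the masked, branch-free formulation.

```python
import numpy as np

# Branchy, element-by-element control flow: one if/else decision per
# element. On a GPU, threads taking different paths serialize.
def branchy(values, threshold):
    out = []
    for v in values:
        if v > threshold:
            out.append(v * 2.0)
        else:
            out.append(v + 1.0)
    return out

# The same logic without per-element branches: both paths are computed
# for every element and a mask selects the result, which is the
# data-parallel style GPUs (and NumPy) prefer.
def vectorized(values, threshold):
    v = np.asarray(values, dtype=float)
    return np.where(v > threshold, v * 2.0, v + 1.0)

print(branchy([1.0, 5.0, 3.0], 2.0))      # [2.0, 10.0, 6.0]
print(vectorized([1.0, 5.0, 3.0], 2.0))   # same values, computed branch-free
```

The trade-off is that the branch-free version does the work of both paths for every element, which is exactly the kind of wasted computation Larzul's point about "if/then/else" workloads alludes to.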
Syntiant, which emerged from stealth mode just a few weeks ago, proposed a completely different approach to AI chips. Most of Syntiant's executives come from Broadcom (AVGO), and they are building a memory-focused chip that leverages the company's expertise in analog chips and is better suited to analog devices.
As Syntiant CEO Kurt Busch put it, "We can compute in memory, which completely eliminates the memory-bandwidth bottleneck of traditional memory chips."
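The intuition behind Busch's claim can be sketched in a few lines. This toy model is an assumption-laden illustration, not Syntiant's design: the weights stay "resident" where the multiply-accumulates happen, so only the small input and output vectors cross the (simulated) memory bus, whereas a conventional design would also move every weight per pass.

```python
import numpy as np

class InMemoryMatVec:
    """Toy model of compute-in-memory: weights never cross the bus;
    only activations in and results out are counted as traffic."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)  # stays in place
        self.bus_bytes = 0                               # traffic counter

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        self.bus_bytes += x.nbytes   # inputs travel in...
        y = self.weights @ x         # ...MACs happen "in memory"...
        self.bus_bytes += y.nbytes   # ...outputs travel out
        return y

unit = InMemoryMatVec([[1.0, 2.0], [3.0, 4.0]])
print(unit.forward([1.0, 1.0]))   # [3. 7.]
# Bus traffic here is only 2 inputs + 2 outputs (32 bytes of float64);
# a conventional design would also fetch all 4 weights every pass.
```

For a real neural-network layer the weight matrix dwarfs the activations, which is why keeping weights stationary removes the bandwidth bottleneck Busch describes.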
Syntiant specializes in using these memory chips for "edge" devices, such as cars and smart watches: devices that have strict power limits and are often not connected to the cloud data centers where machine learning runs.
One example is a traffic camera. Busch said: "There are traffic cameras on every corner, and they photograph every license plate they see and send all of those images to the cloud. Why not instead have the cloud tell the camera which license plate we are looking for, and have the camera send only the matching pictures to the cloud?"
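The logic of Busch's camera example can be reduced to a short sketch. Everything here is hypothetical (the `WANTED` query set and `frames_to_upload` helper are invented for illustration): the cloud pushes the query down to the edge device, the matching runs locally, and only hits are uploaded.

```python
# Hypothetical query the cloud sends down to the camera (illustrative only).
WANTED = {"ABC123"}

def frames_to_upload(observed_plates, wanted=WANTED):
    """Run the plate check locally on the edge device; return only the
    matches worth sending to the cloud."""
    return [p for p in observed_plates if p in wanted]

seen = ["XYZ999", "ABC123", "QQQ000"]
print(frames_to_upload(seen))   # ['ABC123'] -- one upload instead of three
```

The bandwidth saving scales with the miss rate: a camera that sees thousands of plates a day but matches a handful uploads almost nothing.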
In other words, such artificial intelligence has moved past the "training phase" into the so-called "inference phase," in which models that have already been trained on data are used to solve specific problems.
Busch believed that for the foreseeable future, system "training" will still be carried out in the cloud, where the necessary massive computation is available.
Applied Materials is the world's largest supplier of semiconductor manufacturing equipment and high-tech services, and a Fortune 500 company. Since its founding in 1967, Applied Materials has been a pioneer of the information age for more than 30 years, enabling the rapid growth of the global information industry.
Jay Kerley, Applied Materials' CIO, said the company has built a great deal of internal infrastructure for AI-related research, such as solving problems in the manufacturing of AI chips.
Kerley said: "In most cases, software is driving the development of hardware, and TensorFlow, developed by Google, is the most popular of the various frameworks." He added: "If GPUs can improve to meet these emerging needs, things will stay simple. But the problem is that we need GPUs to handle things they are not good at."
Meanwhile, Applied is building giant data centers using large numbers of Intel processors and Nvidia GPUs. If AI continues to make progress in the near future, it will be a very good opportunity for chip suppliers.