The self-driving car chip battle

At this year's International Consumer Electronics Show (CES 2016), chip makers' intent to expand into autonomous driving technology was unmistakable. In the autonomous vehicle chip market, beyond well-known companies such as Nvidia, Mobileye, NXP and Texas Instruments, many new faces are emerging, among them IP provider Ceva, Intel and Qualcomm. Automotive OEMs are welcoming the newcomers with open arms. "This area has suddenly become very lively," Egil Juliussen, director of research for infotainment and advanced driver assistance systems (ADAS) at IHS Automotive, said at CES.

The fog of war?

Until now, investors and the media have enthusiastically backed every flavor of autonomous vehicle technology: sensing, cameras, radar and lidar, mapping, algorithms, deep (and non-deep) neural networks, artificial intelligence and more.

But for most of them, it is still unclear how these technologies will play out in the evolution of autonomous vehicle design, let alone who will win and who will lose this war.


Mobileye co-founder and CEO Amnon Shashua
Mobileye co-founder and CEO Amnon Shashua said he initially suspected that competitors were deliberately spreading misinformation about these technologies to create a "fog of war." But he now realizes, "People are genuinely confused because they really don't understand."

At this year's CES, Nvidia's deep learning technology and Mobileye's mapping technology were the most dazzling stars of the show, and the two companies competed fiercely in ADAS and autonomous driving.

Mobileye bluntly criticized Nvidia at its press conference. Shashua pointed out, "What I found in Nvidia's announcement is a liquid-cooled supercomputer with a power consumption of 250W and a cost of nearly $10,000. I don't think such a thing belongs in our world at all."

Ceva CEO Gideon Wertheizer described the public dispute between the two powers as a set piece played out for investors. Indeed, Mobileye's stock fell nearly 10% shortly after Nvidia's announcement and recovered after Mobileye's own press conference at CES.

Crowdsourced mapping

However, Mobileye's announcement does carry real technical weight. In an interview with EE Times, Wertheizer discussed Mobileye's newly developed mapping technology, called Road Experience Management (REM), and said it may prove "the most threatening" both to competing chip suppliers such as NXP and to tier-one vendors such as Bosch and Denso.

According to Mobileye, REM creates crowdsourced, real-time data for precise localization and high-resolution lane information, an important information layer needed to support fully autonomous driving.


Back-end work with Mobileye REM technology (Source: Mobileye)

The technology is based on software running on Mobileye's EyeQ processing chips. It captures landmarks and road information at very low bandwidth: about 10KB per kilometer of travel (by contrast, Google's localization and HD mapping require roughly 1Gbit per kilometer). Back-end software running in the cloud, Mobileye explained, integrates the data fragments sent by all cars equipped with the software into a global map.
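
To put the bandwidth claim in perspective, here is a minimal Python sketch comparing the two per-kilometer figures cited above. The fleet size and daily mileage below are illustrative assumptions, not Mobileye or Google numbers.

```python
# Rough bandwidth comparison based on the per-kilometer figures cited above.
# Fleet size and daily mileage are illustrative assumptions only.

REM_BYTES_PER_KM = 10 * 1024        # ~10 KB/km of landmark and road data
HD_MAP_BYTES_PER_KM = 1e9 / 8       # ~1 Gbit/km (~125 MB/km) for HD mapping

FLEET_SIZE = 1_000_000              # assumed number of equipped cars
KM_PER_CAR_PER_DAY = 40             # assumed average daily driving distance

def daily_upload_gb(bytes_per_km: float) -> float:
    """Total fleet upload per day, in gigabytes."""
    return FLEET_SIZE * KM_PER_CAR_PER_DAY * bytes_per_km / 1e9

if __name__ == "__main__":
    print(f"REM-style data:  {daily_upload_gb(REM_BYTES_PER_KM):,.0f} GB/day")
    print(f"HD-map raw data: {daily_upload_gb(HD_MAP_BYTES_PER_KM):,.0f} GB/day")
```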

Mobileye's visual interpretation engine (which compresses the data) should help automakers create their own "Road Book."

Mobileye will lock you in

It is worth noting, however, that Mobileye's crowdsourced localization system only works on cars equipped with Mobileye EyeQ chips. In short, "Mobileye is locking in its customers," Ceva's Wertheizer pointed out.

Clearly, REM becomes more effective as more cars equipped with Mobileye chips hit the road. Shashua believes REM is attractive to automotive OEMs because "large automakers can take advantage of their scale when creating their own Road Books."

Enabling REM is not difficult for automakers, Shashua pointed out, because Mobileye's EyeQ chip already exists in the automotive ecosystem. All that is needed to build REM is the EyeQ chip and a communication link -- General Motors, for example, can use its own OnStar system.

GM and Volkswagen announced at CES that they support Mobileye's REM system. Two more customers of similar size will soon sign up for REM, Shashua revealed.

It is worth noting that one-third of the global automotive industry already uses the EyeQ chip. "We very much expect REM to be used across the entire automotive industry," Shashua pointed out. Currently, only Toyota and Daimler have not yet used Mobileye's chips.


Mobileye EyeQ chips are widely used in cars (Source: Mobileye)

Sensor fusion

So far, the electronics industry's chief evangelist for the self-driving car has been Nvidia CEO Jen-Hsun Huang.

Huang, an advocate of deep learning, often reminds people that autonomous vehicles need a powerful visual computing system to fuse data from cameras and other sensors. Accordingly, Nvidia's latest DRIVE PX 2, dubbed by Huang a "supercomputer designed for cars," is pitched as standard equipment for future cars: it can sense the car's location, identify the objects around it and compute the safest path.


Nvidia CEO Jen-Hsun Huang

At this year's press conference, Nvidia also released a deep learning platform called Digits, which it is already using to test its own autonomous vehicles. "Autonomous driving technology is incredibly difficult," Huang said. "It's not as simple as programming 'driving' according to the rules in a driver's handbook."

To significantly reduce the time required to develop and train deep neural networks, automakers need tools like Digits running on their supercomputer-class servers, Nvidia pointed out.

In Huang's vision, each car company will eventually own an end-to-end system -- from Nvidia Digits for training deep neural networks to the Nvidia DRIVE PX 2 that runs the resulting networks in the car.

Laszlo Kishonti, founder and CEO of Budapest-based AdasWorks, which develops artificial intelligence software for autonomous driving, said the company is working with Nvidia on a system for Volvo: a GPU-based system that can process information from multiple sensors in real time.

Kishonti said AdasWorks is not tied to any one processor. "We use GPUs, FPGAs, or any other embedded vision SoC available." But one key advantage of the Nvidia solution, he said, is that the code developed and verified on the server is exactly the same code that runs on the in-vehicle computer.
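
The write-once idea Kishonti describes can be illustrated with a minimal sketch: the same inference routine is used during development on a server and again at deployment time, so only the trained weights move between the two environments. This is a generic Python/NumPy illustration with assumed toy data, not AdasWorks' or Nvidia's actual toolchain.

```python
# Minimal sketch of "same code on the server and in the vehicle":
# the inference routine below is shared; only the trained weights differ.
import numpy as np

def infer(weights: np.ndarray, features: np.ndarray) -> float:
    """Tiny linear 'detector' used unchanged on both server and device."""
    return float(features @ weights)

def train_on_server(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares fit standing in for a real (GPU-based) training job."""
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 4)), rng.normal(size=100)
    w = train_on_server(X, y)      # heavy lifting happens off-board
    print(infer(w, X[0]))          # the identical infer() runs in the vehicle
```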


End-to-end deep learning platform for autonomous vehicles (Source: Nvidia)

In contrast to Mobileye's focus on visual processing, "our focus is on fusing data from all the different sensors. Vision is just one of many sensor inputs," said Dave Anderson, senior manager of automotive integration at Nvidia.

Nvidia's DRIVE PX 2 can handle input from 12 video cameras as well as radar, lidar and ultrasonic sensors. Anderson explained: "We combine these data streams so the system can accurately detect and identify objects, determine the car's position relative to the world around it, and then find the optimal path for safe driving."
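
As a rough illustration of the fusion step Anderson describes, the sketch below merges object detections reported by different sensors into a single object list by grouping detections that fall within a distance gate. The data structures, threshold and sensor names are illustrative assumptions, not Nvidia's DRIVE PX 2 implementation.

```python
# Toy sensor fusion: group per-sensor detections that refer to the same object.
# Positions, confidences and the 2-meter gate are illustrative values only.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", "lidar", ...
    x: float           # distance ahead of the car, meters
    y: float           # lateral offset, meters
    confidence: float

def fuse(detections: list[Detection], gate_m: float = 2.0) -> list[dict]:
    """Greedily cluster detections closer than gate_m into fused objects."""
    fused: list[dict] = []
    for det in detections:
        for obj in fused:
            if abs(obj["x"] - det.x) < gate_m and abs(obj["y"] - det.y) < gate_m:
                obj["sensors"].append(det.sensor)
                obj["confidence"] = max(obj["confidence"], det.confidence)
                break
        else:
            fused.append({"x": det.x, "y": det.y,
                          "sensors": [det.sensor], "confidence": det.confidence})
    return fused

if __name__ == "__main__":
    readings = [Detection("camera", 30.2, 1.1, 0.9),
                Detection("radar", 30.9, 1.4, 0.7),
                Detection("lidar", 65.0, -2.0, 0.8)]
    for obj in fuse(readings):
        print(obj)
```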

Ceva and Qualcomm are also actively involved

Nvidia, however, is not the only company promoting deep learning for autonomous vehicles. Ceva is actively promoting its own XM4 imaging and vision DSP, paired with the company's real-time neural network software framework, the Ceva Deep Neural Network (CDNN).

Wertheizer explained that customers start from a trained neural network of their own choosing. Using Ceva's DSP engine, firmware and CDNN, the floating-point networks and weights that describe the object parameters are converted into fixed-point custom networks and weights without loss of precision, he said.
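
The float-to-fixed-point conversion Wertheizer describes is, in essence, weight quantization. Below is a minimal, generic sketch of symmetric 8-bit quantization of a weight array and the matching dequantization, meant only to illustrate the idea; it is not CDNN's actual algorithm, and the bit width and rounding scheme are assumptions.

```python
# Generic symmetric 8-bit weight quantization sketch (not Ceva's CDNN algorithm).
import numpy as np

def quantize(weights: np.ndarray, bits: int = 8):
    """Map float weights to fixed-point integers plus a scale factor."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8 bits
    scale = float(np.max(np.abs(weights))) / qmax
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the fixed-point representation."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, scale = quantize(w)
    err = np.max(np.abs(w - dequantize(q, scale)))
    print(f"max reconstruction error: {err:.5f}")
```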

Running on Ceva's XM4 DSP, CDNN enables embedded systems to perform deep learning tasks "three times faster than leading GPU-based systems, with 30 times less power and 15 times less memory bandwidth," Ceva claims.


Ceva Deep Neural Network (Source: Ceva)

At CES, Qualcomm announced a Snapdragon 820 automotive product line that integrates LTE modems and machine intelligence. The series includes Qualcomm's Zeroth machine intelligence platform, which is designed to help automakers use neural networks to create deep learning-based solutions for ADAS and in-vehicle infotainment systems.

So far, however, Snapdragon design wins have been limited to infotainment: Audi chose the Snapdragon 602A processor for its 2017 vehicle lineup.

As Ceva's CEO pointed out, the Snapdragon 820A does not yet carry an Automotive Safety Integrity Level (ASIL) rating, the risk classification scheme defined by ISO 26262, the functional safety standard for road vehicles.

In contrast, Ceva's end-to-end system covering CDNN and the XM4 received ASIL B certification not long ago, he added.

The three pillars of autonomous vehicles

Not to be outdone by Nvidia and the other companies promoting deep learning, Shashua reminded CES attendees that Mobileye remains far ahead in the ADAS and autonomous vehicle markets. He emphasized that "a camera is not just a sensor, but part of the brain of an autonomous vehicle."

Mobileye CEO Shashua also teaches in the Department of Computer Science at the Hebrew University of Jerusalem. In his speech at Mobileye's press conference earlier this year, he said that "sensing, mapping and planning" are "the three pillars of autonomous driving."

In his view, two camps are currently working to solve the challenge of driverless navigation. The first camp, companies such as Google and Baidu, aims to create extremely detailed maps (centimeter-scale 3D maps) of specific regions and then pair them with lower-resolution sensors, allowing cars to drive in fully autonomous mode within the mapped areas.

The problem with this approach is that it is essentially impossible to scale to the global level, and it is equally difficult to keep the maps constantly updated, because the amount of information needed to create them in the first place is astronomical.
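
A rough back-of-the-envelope calculation shows why. The road-network length below is an illustrative assumption, and the per-kilometer data rates are the figures cited earlier in this article.

```python
# Back-of-the-envelope map-size comparison; the road-network length is an
# illustrative assumption, the per-km rates are the figures cited above.
WORLD_ROAD_KM = 60_000_000            # assumed total road length, km

HD_MAP_BYTES_PER_KM = 1e9 / 8         # ~1 Gbit/km (detailed HD mapping)
REM_BYTES_PER_KM = 10 * 1024          # ~10 KB/km (REM-style landmark data)

hd_total_pb = WORLD_ROAD_KM * HD_MAP_BYTES_PER_KM / 1e15
rem_total_tb = WORLD_ROAD_KM * REM_BYTES_PER_KM / 1e12

print(f"HD-map approach: ~{hd_total_pb:,.1f} PB for one pass over every road")
print(f"REM-style data:  ~{rem_total_tb:,.2f} TB for one pass over every road")
```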

The second approach is to create a low-resolution world map and then augment it with the higher-resolution sensors on the car, namely cameras and other sensors. Shashua calls this the approach preferred by the automotive industry, because it allows cars to "drive around with some autonomous functions." What this approach lacks, he pointed out, is the human-level artificial intelligence (AI) needed to process the information the sensors collect.


Two camps of autonomous vehicles (Source: Mobileye)

The goal for autonomous vehicles is to drive everywhere with full autonomy, Shashua pointed out, and that is exactly where Mobileye's REM comes in. REM is Mobileye's attempt to create high-resolution maps with "more powerful artificial intelligence."

The system is designed so that all cars using Mobileye technology collectively generate a world map, creating what Shashua calls a "Road Book": a detailed, cloud-based map of the world that can be updated on the fly and eventually used by all car manufacturers.

Road Book

Shashua pointed out that every car manufacturer -- Volkswagen, GM and the others -- will develop and own its own Road Book, which the automakers can cross-license with one another to assemble a global map. Asked whether Mobileye would also own a map, he said, "No, we are just the technology supplier." But if the automotive industry begins to discuss a pay-per-kilometer business model in the future, Shashua said, "We will join that discussion."

When it comes to Nvidia, Mobileye's CEO does not mince words. "From a certain perspective, the notion of the GPU as the golden architecture for visual processing is very wrong," he said. He regards CUDA as a high-quality programming tool that lets academics quickly train neural networks: "In fact, this tool lets you quickly complete 80% of the work of developing the necessary algorithms and building a quality demo." But the remaining 20%, from demo to production algorithms, is both difficult and time-consuming, he pointed out. Shashua joked, "The remaining 20% is what separates the men from the boys."

In the end, chip architecture is not what will win the autonomous car war, the Mobileye CEO believes: "Software and content are the key."
