
© Google

The NVIDIA Drive PX 2 platform will enable self-driving cars to capture their 360-degree environment in real time and with an extremely high level of accuracy, helping enhance their decision-making capabilities.

Close your eyes and place your trust in your 12 cameras. This is the basic message being put forward by NVIDIA, the graphics processing unit (GPU) specialist staking out a position in the self-driving car market and promising to speed up this new form of mobility.

Following on from Drive PX 1, unveiled in 2015 and tested in autonomous vehicles from BMW, Ford and Audi in particular, the Santa Clara, California-based company has presented the second version of its onboard supercomputer to players from the automotive industry, giving them a clear picture of how efficient tomorrow’s road travel could be. Manufacturer Volvo was the first to test the new version.

Drive PX 2 centralises the data collected in real time by 12 cameras, as well as by the sensors and radar already fitted to the first test vehicles. Its capacities are mind-blowing: 12 processor cores delivering 8 teraflops and 24 trillion (24,000 billion) deep-learning operations per second.

One major difference sets the new version apart from the sensors in the first self-driving cars: Drive PX 2 is capable of filming, in 360°, every physical object appearing on a car’s route. It can distinguish a pedestrian from a dog, or a bike from a scooter, for example, and will be able to analyse their trajectories with a very high degree of precision. Until now, self-driving car models have only been able to recognise rough representations of dynamic or static objects.
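The article does not describe the tracking algorithm Drive PX 2 uses, but the idea of analysing a detected object's trajectory can be illustrated with a minimal sketch. Everything here – the `Detection` class, its fields, and the constant-velocity assumption – is invented for illustration:

```python
from dataclasses import dataclass

# Illustrative toy only: a constant-velocity trajectory predictor for
# objects detected by the cameras. The class and its fields are
# hypothetical, not part of any NVIDIA API.

@dataclass
class Detection:
    label: str   # e.g. "pedestrian", "dog", "bike", "scooter"
    x: float     # position in metres, vehicle-centred frame
    y: float
    vx: float    # estimated velocity in m/s
    vy: float

def predict_position(d: Detection, dt: float) -> tuple[float, float]:
    """Extrapolate an object's position dt seconds ahead,
    assuming constant velocity between camera frames."""
    return (d.x + d.vx * dt, d.y + d.vy * dt)

# A pedestrian 10 m ahead, drifting towards the car at 1 m/s:
ped = Detection("pedestrian", x=10.0, y=2.0, vx=-1.0, vy=0.0)
print(predict_position(ped, 2.0))  # → (8.0, 2.0)
```

A real system would of course fuse radar and camera data and model acceleration and intent, but even this toy shows why per-class recognition matters: a "pedestrian" label and a "scooter" label imply very different plausible velocities.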

12 cameras filming in 360°

Drive PX 2 will be capable of adapting vehicle speed and precautionary measures by analysing things like the irrational and unpredictable behaviour of a drunk pedestrian or a daydreaming passer-by lost in his or her smartphone’s newsfeed.

Transmitting the collected data to NVIDIA’s neural network, DriveNet, also makes it possible to optimise decision-making by identifying vehicle models and manufacturer brands. This is useful for determining whether the vehicle ahead has ABS, or how well it absorbs shocks – a Mini being more sensitive to impacts than a Land Rover – should a collision be unavoidable and the supercomputer have to make a choice that limits the damage as far as possible.
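How a recognised vehicle model might feed such a damage-limiting choice can be sketched in a few lines. The lookup table, its tolerance values and the selection function below are invented for this sketch; they are not NVIDIA's actual logic or data:

```python
# Illustrative toy only: map recognised vehicle models to an invented
# "impact tolerance" score and, if a collision is unavoidable, prefer
# the option estimated to suffer the least damage.

IMPACT_TOLERANCE = {
    "Mini": 0.3,        # assumed more sensitive to shocks
    "Land Rover": 0.8,  # assumed to absorb impacts better
}

def least_damage_option(candidates: list[str]) -> str:
    """Among the vehicle models an unavoidable collision could involve,
    pick the one with the highest estimated impact tolerance.
    Unrecognised models get a neutral default of 0.5."""
    return max(candidates, key=lambda m: IMPACT_TOLERANCE.get(m, 0.5))

print(least_damage_option(["Mini", "Land Rover"]))  # → Land Rover
```

In practice such a decision would weigh many more factors (occupants, speeds, escape paths), but the sketch captures the article's point: classification turns raw pixels into model-level knowledge the planner can act on.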

Optimised travel times, reduced insurance costs, free time for drivers who become mere passengers: the advantages of enhanced visual analysis capacity in self-driving cars are as obvious in human terms as they are in economic terms.

12/10/2016