A commercial device uses powerful image and information processing to let cars interpret 360° camera views.
Many cars now include cameras or other sensors that record the passing world and trigger intelligent behavior, such as automatic braking or steering to avoid an obstacle. Today’s systems are usually unable to tell the difference between a trash can and a traffic cop standing next to it, though.
This week at the International Consumer Electronics Show in Las Vegas, Nvidia, a leading maker of computer graphics chips, unveiled a vehicle computer called the Drive PX that could help cars interpret and react to the world around them.
Nvidia already supplies chips to many car makers, but engineers at those companies usually have to write software to collect and process data from various sensor systems. Drive PX is more powerful than existing hardware, and it should also make it easier to integrate and process sensor data.
The computer uses Nvidia’s new graphics microprocessor, the Tegra X1. It is capable of processing information from up to 12 cameras simultaneously, and it comes with software designed to assist with safety or autonomous driving systems. Most impressive, it includes a system trained to recognize different objects using a powerful technique known as deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). The computer is also designed to generate realistic 3-D maps and other graphics for dashboard displays.
“It’s pretty cool to bring this level of powerful computation into cars,” said John Leonard, a professor of mechanical engineering at MIT, who works on autonomous-car technology. “It’s the first such computer that seems really designed for a car—an autopilot computer.”
The new Nvidia hardware can also be updated remotely, so car manufacturers can fix bugs or add new functionality. This is something few car companies besides Tesla currently do.
So far, Audi has emerged as an early buyer; at CES, the company showed off a luxury concept car called the Audi Prologue that includes the Drive PX. A year ago, the company announced at CES that it had developed a compact computer for processing sensor information (see “Audi Shows Off a Compact Brain for Self-Driving Cars”). That, too, included Nvidia chips.
The introduction of Nvidia’s product is a landmark moment for deep learning, a technology that processes sensory information efficiently by loosely mimicking the way the brain works. At CES, Nvidia showed that its software can detect objects such as cars, people, bicycles, and signs, even when they are partly hidden.
Yoshua Bengio, a deep-learning researcher at the University of Montreal, says the Nvidia chipset is an important commercial milestone. “I would not call it a breakthrough, but more a continuous advance in a direction that has been going for a number of years now,” he says.
Yann LeCun, a data scientist at New York University who leads deep-learning efforts at Facebook (see “Facebook Launches Advanced AI Effort to Find Meaning in Your Posts”), also sees the announcement as an important step: “It is significant because current solutions tend to be closed and proprietary, use custom and inflexible hardware, and tend to be ‘black boxes’ that equipment manufacturers cannot really customize.”
At a press event Sunday, Jen-Hsun Huang, Nvidia’s CEO, said the devices will provide “more computing horsepower inside a car than anything you have today.”