Inside NVIDIA’s AI infrastructure for self-driving cars
With the race to bring self-driving cars to market, deep learning is crucial to the automotive industry. At the 2018 @Scale Conference, Clément Farabet, VP of AI Infrastructure at NVIDIA, will discuss, for the first time and in depth, NVIDIA’s production-level, end-to-end infrastructure and workflows for developing AI for self-driving cars.
Building autonomous vehicles creates scale challenges across multiple areas of the development process and the infrastructure stack. Given the high-risk and costly nature of testing autonomous vehicles in the real world, billions of simulated and millions of real driving miles are needed to ensure safety requirements are met. More important than the miles themselves, however, is training and testing across all possible use cases and scenarios, including 100 different environmental conditions, 100 different transients or variations, and 100 different fault injections. Farabet will discuss how NVIDIA’s team is building an AI platform that can support simulation and validation with hardware-in-the-loop at this extreme scale, from rack-level designs optimized for AI to server design improvements that minimize bottlenecks in storage, network, and compute. NVIDIA’s supercomputing infrastructure supports continuous data ingest from multiple cars (each producing terabytes of data per hour) and enables autonomous AI designers to iterate on new neural network designs across thousands of GPU systems and validate their behavior over multi-petabyte-scale datasets.
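The scenario space described above multiplies quickly: crossing 100 environmental conditions with 100 variations and 100 fault injections yields a million distinct test scenarios. As a rough, hypothetical illustration (the category names below are placeholders, not NVIDIA’s actual taxonomy):

```python
from itertools import product

# Hypothetical scenario axes; real taxonomies would name concrete
# conditions (rain, glare, sensor dropout, etc.).
environments = [f"env_{i}" for i in range(100)]
variations = [f"var_{i}" for i in range(100)]
faults = [f"fault_{i}" for i in range(100)]

# The Cartesian product of the three axes is the full scenario matrix.
scenarios = product(environments, variations, faults)

total = len(environments) * len(variations) * len(faults)
print(total)  # 1000000 scenario combinations to simulate and validate
```

Even at one simulated mile per scenario, exhaustive coverage at this granularity motivates the hardware-in-the-loop scale Farabet describes.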
The obstacles faced by self-driving cars aren’t limited to the world of autonomous driving. Farabet will share how the problems NVIDIA has solved in building this infrastructure for training and inference are applicable to others deploying AI-based services, including innovations in NVIDIA’s inference platform that push the boundaries of what’s possible at scale.
Clément Farabet is VP of AI Infrastructure at NVIDIA. He received a PhD from Université Paris-Est in 2013, while at NYU, co-advised by Laurent Najman and Yann LeCun. His thesis focused on real-time image understanding, introducing multi-scale convolutional neural networks and a custom hardware architecture for deep learning. He co-founded Madbits, a startup focused on web-scale image understanding, which was sold to Twitter in 2014. He then co-founded Twitter Cortex, a team focused on building Twitter’s deep learning platform for recommendations, search, spam, NSFW content, and ads.
Farabet is one of three @Scale keynote speakers, along with David Patterson, Distinguished Engineer at Google and Professor at UC Berkeley, and Jason Taylor, VP of Infrastructure at Facebook. See the full agenda of the 2018 @Scale Conference here.
The 2018 @Scale Conference will take place on September 13 at the San Jose Convention Center. It’s an invitation-only technical event for engineers who work on large-scale platforms and technologies. Building applications and services that scale to millions or even billions of people presents a complex set of engineering challenges, many of them unprecedented. In addition to the keynote from NVIDIA, the event will feature technical deep dives from engineers from Adobe, Amazon, Cloudera, Cockroach Labs, Facebook, Google, Microsoft, NASA, Pinterest, and Uber.