AUGUST 31, 2016

GPUs and deep learning deployments at scale

GPUs are already important compute engines in the HPC Top 500, but GPU evolution has also made them a great fit for scale workloads like deep learning and inference. The rapidly increasing complexity of neural nets has driven exponential growth in computation demand, requiring GPUs not just for training but also for real-time, high-throughput inference and prediction. We’ll talk about GPU architectural evolution, changing neural net computation demands, and the development flow from training to inference deployment as a utility in the datacenter. We’ll also cover data from large-scale GPU deployments and platform evolution for utility use, where projecting ExaFlop/s (that’s 10^18 floating-point operations per second) of capability is no longer a crazy number.
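As a rough sense of scale (an illustrative back-of-envelope sketch, not a figure from the talk), the snippet below estimates how many GPUs an ExaFlop/s of aggregate throughput would require; the assumed ~10 TFLOP/s per GPU is only a ballpark for 2016-era accelerators.

    # Back-of-envelope sketch: GPUs needed for 1 ExaFlop/s of aggregate throughput.
    # The per-GPU throughput below is an assumption for illustration, not a quoted figure.

    EXAFLOP_PER_SEC = 1e18  # 1 ExaFlop/s = 10^18 floating-point operations per second

    def gpus_for_exaflop(per_gpu_flops: float) -> float:
        """Return the number of GPUs needed to reach 1 ExaFlop/s at the given per-GPU rate."""
        return EXAFLOP_PER_SEC / per_gpu_flops

    # Assuming roughly 10 TFLOP/s per GPU:
    print(gpus_for_exaflop(10e12))  # -> 100000.0, i.e. on the order of 100,000 GPUs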
