The @Scale Conference 2017

GPUs and deep learning


In the last year, GPUs and deep learning have gone from a hot topic to large-scale production deployment in major data centers. That's because deep learning works, and the evolution of GPUs has made them a great fit for deep learning training and inference. Neural nets, frameworks, and GPU architectures have also changed significantly over the past year, allowing better solutions to be built more quickly and in more places, moving deep learning from niche applications into the mainstream, including real-time use in industrial automation and human-interaction roles. We talk about GPU architecture and framework evolution; scaling training out and up; real-time inference improvements; security, VM isolation, and management; and overall improvements to the deep learning workflow that make development and deployment more DevOps-friendly.
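The "scaling out" of training mentioned above usually means synchronous data-parallel SGD: each worker computes gradients on its own data shard, the gradients are averaged across workers, and the averaged gradient updates a shared copy of the weights. The following is a minimal single-process sketch of that idea, not code from any talk or framework; all names (`gradient`, `average_gradients`, `train`) are illustrative, and the all-reduce step that NCCL or MPI would perform across GPUs is reduced to a plain mean.

```python
def gradient(w, batch):
    # Toy loss: mean squared error of a 1-D linear model y = w * x.
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x, averaged over the batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def average_gradients(grads):
    # The "all-reduce" step: in a real cluster this is a collective
    # communication across GPUs; here it is just a mean over workers.
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.1, steps=50):
    # Synchronous data-parallel SGD: every step, each worker computes a
    # gradient on its shard (in parallel, in practice), then all workers
    # apply the same averaged update, keeping the weights in sync.
    for _ in range(steps):
        grads = [gradient(w, shard) for shard in shards]
        w -= lr * average_gradients(grads)
    return w

# Two "workers", each holding a shard of data generated from y = 3 * x;
# training should converge to w close to 3.
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
shards = [data[:2], data[2:]]
print(round(train(shards), 3))  # converges toward 3.0
```

Because the update uses the mean of per-shard gradients, one synchronous step here is mathematically equivalent to a single large-batch step over all the data, which is why scale-out training of this form preserves the single-GPU convergence behavior.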
