RAY, A UNIFIED DISTRIBUTED FRAMEWORK FOR THE MODERN AI STACK

The recent revolution in LLMs and generative AI is triggering a sea change in virtually every industry. Building new AI applications, or incorporating AI into existing ones, requires developers to stitch together and scale a plethora of workloads: data ingestion, pre-processing, training, tuning/fine-tuning, and serving. This is a very challenging task, as different workloads require different systems, each with its own APIs, semantics, and constraints. Ray can dramatically simplify building these applications by providing a unified framework that supports and scales all of these workloads. As a result, Ray is increasingly used by companies across industries to build scalable ML infrastructure, platforms, and applications; examples include Uber, Spotify, Instacart, Netflix, Cruise, Ant Group, ByteDance, and OpenAI (to train ChatGPT and other large models). In this talk, I will present the design considerations behind Ray, our experience with using Ray, and the lessons we learned in the process.
