NOVEMBER 23, 2020

AI @Scale 2020: High Performance Observability Across the ML Lifecycle

The scale and breadth of ML applications have increased dramatically thanks to scalable model-training and serving technologies. Builders of enterprise ML systems often have to contend with both real-time inference and massive amounts of data, prompting increasing investment in tools for MLOps and ML observability. Data logging is a critical component of a robust ML pipeline, as it provides essential insight into the system’s health and performance. However, existing DevOps tooling and data-sampling approaches have proven inadequate for performant logging and monitoring of ML systems. Alessya will discuss the WhyLabs solution to this problem: using statistical fingerprinting and data profiling to scale to terabyte-sized datasets with whylogs, an open-source data logging library. She will also present the WhyLabs Observability platform, which runs on top of whylogs and provides out-of-the-box monitoring and anomaly detection to proactively address data-related failures across the entire ML lifecycle.
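To make the data-profiling idea concrete, here is a minimal sketch using the open-source whylogs Python package described in the talk. Note that the library's API has evolved since 2020, so the calls below follow the current quickstart rather than the version demonstrated at the event, and the example dataframe is purely illustrative.

```python
import pandas as pd
import whylogs as why

# Illustrative batch of data arriving at an ML pipeline (hypothetical columns).
df = pd.DataFrame({
    "prediction_score": [0.91, 0.42, 0.77, 0.05],
    "feature_country": ["US", "DE", "US", "BR"],
})

# Profile the batch: whylogs builds a compact statistical fingerprint
# (counts, types, distribution sketches) instead of storing raw rows.
results = why.log(df)

# Inspect the per-column summary statistics captured in the profile.
profile_view = results.view()
print(profile_view.to_pandas())
```

Because the profile is a fixed-size statistical summary rather than a sample of the raw data, it can be produced for terabyte-scale batches and later compared across time windows for drift and anomaly detection.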
