Scaling ML workflows for real-time moderation challenges at Twitch | Lukas Tencer, Lena Evans, Shiming Ren

Trust & Safety at Twitch is uniquely challenging: the vast majority of content and chat interactions unfold in real time, across a wide variety of communities with different needs, cultures, and audiences. Mitigating and preventing harm means building models that react quickly to bad actors' new attack vectors while giving Creators control over their communities. These models must act fast enough to prevent harm early and scale to all of our traffic, which can reach thousands of requests per second. In this talk we highlight one specific challenge, channel-level ban evasion, and discuss how we address the ML modeling and engineering challenges of fighting this behavior on our service. We will touch on a number of strategies, including the Twitch ML Infra & MLOps ecosystem, and how we build a complete ML stack spanning pipelining, real-time features, model serving, a feature store, and other systems.
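To make the abstract's real-time serving path concrete, here is a minimal sketch of how a chat event might be scored against a feature store and model. All names here (`FeatureStore`, `ChatEvent`, `score_ban_evasion`, the toy model and its feature keys) are hypothetical illustrations, not Twitch's actual components.

```python
from dataclasses import dataclass

@dataclass
class ChatEvent:
    """A single chat message event on a channel (hypothetical schema)."""
    channel_id: str
    user_id: str
    message: str

class FeatureStore:
    """In-memory stand-in for a real-time feature store keyed by entity."""
    def __init__(self):
        self._features = {}

    def put(self, key, features):
        self._features[key] = features

    def get(self, key):
        return self._features.get(key, {})

def toy_model(features):
    """Placeholder scorer: weighted sum of two illustrative risk features."""
    return (0.7 * features.get("recent_bans", 0)
            + 0.3 * features.get("account_age_penalty", 0))

def score_ban_evasion(event, store, model):
    """Join per-user and per-channel features with request-time features,
    then score the event with the model."""
    features = {
        **store.get(("user", event.user_id)),
        **store.get(("channel", event.channel_id)),
        "message_len": len(event.message),  # computed at request time
    }
    return model(features)

# Example: a user with prior bans sends a message in a channel.
store = FeatureStore()
store.put(("user", "u1"), {"recent_bans": 1, "account_age_penalty": 0.5})
event = ChatEvent(channel_id="c1", user_id="u1", message="hello")
score = score_ban_evasion(event, store, toy_model)  # 0.7*1 + 0.3*0.5 = 0.85
```

In a production system the feature store lookup would be a network call on the request's critical path, which is why precomputed (pipelined) features and low-latency stores matter at thousands of requests per second.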
