Event times below are displayed in PT.
Video @Scale is an invitation-only technical conference for engineers that develop or manage large-scale video systems serving millions of people. The development of large-scale video systems includes complex, unprecedented engineering challenges. The @Scale community focuses on bringing people together to discuss these challenges and collaborate on the development of new solutions.
Dr. Ioannis Katsavounidis is a member of Video Fundamentals and Research, part of the Video Infrastructure team, leading technical efforts to improve video quality across all video products at Facebook. Before joining Facebook, he spent 3.5 years at Netflix, contributing to the development and popularization of VMAF, Netflix's video quality metric, and inventing the Dynamic Optimizer, a shot-based video quality optimization framework that brought significant bitrate savings across the whole streaming spectrum. Before that, he was a professor for 8 years in the Electrical Engineering Department at the University of Thessaly in Greece, teaching video compression, signal processing, and information theory. He has over 100 publications and patents in video coding, as well as in high-energy experimental physics. His research interests lie in video coding, video quality, adaptive streaming, and hardware/software partitioning of multimedia processing.
Steven Robertson is an engineer at YouTube, where he works on streaming video performance. He has worked on a number of projects there, dating all the way back to the VP9 deployment, so he is helping make sure the AV1 rollout doesn't repeat the same old mistakes, and makes exciting new ones instead.
Rich Gerber is a software engineer on the encoding technologies team at Netflix. While the rest of the team worked on projects like VMAF and shot-based encoding (Dynamic Optimizer), Rich created the HDR encoding pipeline. When HDR content arrives at Netflix, his software performs quick quality checks and then creates the different color versions (Dolby Vision, HDR, and SDR) before compression. Before Netflix, Rich worked at Adobe and, even further back, at Intel on a variety of video- and processor-specific optimization projects.
During her 9 years at Amazon, Olga Hall has led a series of projects bringing better video quality to Amazon Video customers; launching 4K UltraHD was her favorite. She pioneered Resilience Engineering at Amazon Video. Her team builds tools to automate technical readiness for large-scale, high-profile launches such as The Grand Tour and Thursday Night Football. She is the global leader of Amazon Video availability; in this role she works with engineering groups across Amazon Retail and AWS to bring an uninterrupted streaming experience to all customers at all times. Prior to joining Amazon, Olga led engineering teams at Sun Microsystems, delivering large-scale distributed services such as java.sun.com and the software download center.
Twitch is a video-streaming platform that enables a huge number of individual content creators to share their live experiences (gaming, music, travel, etc.) with the community. Typically, broadcasting on Twitch involves interaction between broadcasters and their audiences through chat messages. Twitch’s user-generated-content (UGC) interactive live-streaming model offers a “lean-forward” experience, which many viewers find more engaging than traditional linear TV’s “lean-backward” experience. On the other hand, it requires the platform to constantly improve the broadcaster’s experience, so that content creators find performing live and engaging their audiences fun and rewarding. Because of this content and broadcasting model, Twitch’s video infrastructure is specifically designed to handle peak concurrent channel counts several orders of magnitude larger than those of professionally-generated-content (PGC) TV stations or OTT streaming services. Optimizations in transcoder and player design and in CDN architecture address video quality for gaming content, low latency for interactive broadcasts, and cost reduction for small-viewership channels.
Dr. Yueshi Shen is in charge of Twitch's core video technologies. He initiated and built a number of Twitch’s core video capabilities, including a cost-effective live-video transcoding farm supporting over 100,000 concurrent channels, a live ABR playback algorithm designed for highly interactive content, and HLS-based low-latency (<5s) live HTTP streaming. Prior to Twitch, Dr. Shen led the development of General Dynamics Mediaware’s award-winning H.264/TS ad-splicing system and Ambarella’s H.264/SVC codec firmware. He also developed OnLive’s <100ms-latency video streaming technology for their cloud gaming service. Dr. Shen is the inventor of AV1’s SWITCH_FRAME and has filed 15 patents.
Vijaya Chandra currently leads Video Understanding efforts at Facebook, building a critical machine learning platform to serve highly personalized and appropriate video content to Facebook and Instagram users. Previously, Vijaya spent a decade leading projects to build enterprise storage systems, developing expertise in distributed file systems and networking.
Rose Kanjirathinkal is a Research Scientist on the Multimodal Video Understanding team within the Facebook AI Applied Research (FAIAR) division. Her team's mission is to develop AI that interprets and acts on videos to improve our users' video creation and consumption experiences. She will be talking about work on contextual video ad safety currently underway on her team in NYC that will have a direct impact on all video and advertisement experiences on the platform. Prior to joining Facebook, Rose received her doctorate from Carnegie Mellon University's School of Computer Science in 2018. Before that, she was a Research Software Engineer at IBM Research - India. Her interests span deep learning, machine learning, recommender systems and personalization, natural language processing and generation, and social media analytics.
Kayvon Fatahalian will report preliminary results from his study of 10 years of cable TV news video.
Kayvon's research focuses on the design of high-performance systems for real-time graphics (3D rendering and image processing) and the analysis and mining of images and videos at scale. He is the Co-PI of the Intel Science and Technology Center for Visual Cloud Computing, a center for large-scale visual data analytics involving researchers at CMU and Stanford.
Sonal Gandhi is a software engineer on the Video Integrity Team at Facebook where she works on reducing harmful content and harmful actors in the video ecosystem. She has worked at Facebook for 4 years in roles ranging from video integrity to encoding systems and compression algorithms.
Zainab has been at Meta for over 8 years where she thrives on building...
Denise is an engineer at Meta leading improvements in video playback quality of experience...