Event times below are displayed in PT.
Video @Scale is an invitation-only technical conference for engineers who develop or manage large-scale video systems serving millions of people. Developing such systems involves complex, unprecedented engineering challenges. The @Scale community focuses on bringing people together to discuss these challenges and collaborate on the development of new solutions.
This year, we will be hosting our Video @Scale event virtually on Thursday, October 22.
Rajeev Rajan is Director of Engineering for Video at Facebook. This includes products such as Watch and Live, as well as the Creator Studio platform and the infrastructure powering all of video at Facebook. Facebook announced in September 2020 that Watch now has 1.25 billion monthly active people. Prior to video, Rajeev was Head of Engineering for Facebook Marketplace. Marketplace launched in October 2016 and serves over 800 million people every month in over 70 countries.
Prior to Facebook, Rajeev worked at Microsoft for over two decades on a number of products, including Exchange Server, SQL Server, Active Directory, and Office 365. In recognition of his deep and broad contributions, Microsoft named Rajeev a Distinguished Engineer in 2016. In his last role at Microsoft, Rajeev was VP of Engineering for the Office 365 Core Infrastructure team. Rajeev has a B.E. in Computer Science from the Birla Institute of Technology & Science, Pilani, and did his graduate studies in Computer Science at The Ohio State University, where he earned his M.S., and at the University of Illinois, Urbana-Champaign. Rajeev and his family live in the greater Seattle area.
With the standardization of VVC (Versatile Video Coding) by MPEG and ITU-T this past July, the release of AV1 by the Alliance for Open Media two years ago, and discussions about the next frontier in video coding, we would like to take a step back to discuss and highlight some of the lessons we've learned from three decades of creating new video coding standards.
We will see how three important factors in video standardization have remained the same throughout the years: test video content, quality metrics, and testing conditions. We will then propose alternatives that can hopefully help the video standardization community offer better answers to the question that video practitioners keep asking after they put a new video codec to the test: "Where are the advertised 50 percent bitrate savings?"
In modern internet video streaming applications, multiple encoded versions of a video are generated so that the player can adapt to varying network conditions and deliver the best video quality and playback experience; this is often described as “Adaptive Bitrate (ABR) streaming”. For each version, a resolution is typically selected along with a target bitrate or target quality, as specified by the QP/CRF value. Selecting these encoding parameters optimally across all versions is thus extremely important, yet often difficult. In this talk, we present an approach, based on the Dynamic Optimizer framework and convex hull video encoding, that utilizes a faster encoder with better energy efficiency to achieve near-optimal encoding parameter selection for a slower encoder with better quality and compression efficiency. This hybrid architecture outperforms the conventional approach, where a fixed set of parameters is typically used, with an improvement comparable to what can be achieved by adopting a newer generation of codec, but with much less computational complexity.
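The convex-hull idea can be illustrated in a few lines: given (bitrate, quality) points measured for candidate renditions, only the points on the upper convex hull are worth keeping, since every other operating point is dominated. This is a toy sketch of that selection step, not the Dynamic Optimizer itself; the function name and data are illustrative.

```python
def rd_hull(points):
    """Upper convex hull of (bitrate, quality) rate-distortion points.

    Points below the hull are dominated: a combination of two hull
    points yields better quality at the same bitrate.
    Returns hull points sorted by increasing bitrate.
    """
    pts = sorted(set(points))
    hull = []
    for bx, by in pts:
        while len(hull) >= 2:
            ox, oy = hull[-2]
            ax, ay = hull[-1]
            # cross >= 0: the last hull point lies on or below the line
            # from hull[-2] to the new point, so it is dominated.
            if (ax - ox) * (by - oy) - (ay - oy) * (bx - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append((bx, by))
    return hull
```

In the hybrid scheme described above, such hull points would be measured with the fast encoder and the surviving parameter choices handed to the slower, higher-quality encoder.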
This talk will cover our recent efforts towards making VMAF a more usable quality metric for the video community, including fixed point-based speed optimization, new BSD+ license, new libvmaf API, and the development of a “no enhancement gain” mode that is more suited for codec evaluation.
Most live video delivery solutions today are built using DASH/HLS or similar delivery formats, with “classic” CDN infrastructure providing scalability and players with relatively complex logic that includes fetching manifests and segments from the network, estimating bandwidth, and making ABR decisions.
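The ABR decision mentioned here can be reduced to a very small core: pick the highest rendition that fits under a discounted bandwidth estimate. A minimal sketch, with illustrative names and a hypothetical safety factor; real players layer buffer occupancy and switching heuristics on top of this.

```python
def pick_rendition(bitrates, est_bandwidth, safety=0.8):
    """Pick the highest bitrate that fits under the bandwidth estimate.

    `safety` discounts the estimate to absorb throughput variance;
    if nothing fits, fall back to the lowest rendition.
    """
    affordable = [b for b in sorted(bitrates) if b <= est_bandwidth * safety]
    return affordable[-1] if affordable else min(bitrates)
```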
In this talk, we would like to present an alternative architecture for live video streaming that draws inspiration from the Linux ‘tail’ command. Imagine a client that can “subscribe” to a live video stream and simply render audio/video data as it arrives from the network — sounds like “regular” TV, right? And the framework can still support adaptive bitrate (ABR), low latency, and all the features that DASH and HLS provide.
There are four major components in any live delivery system: ingest, processing/encoding, egress, and players. In this talk, we are going to cover the last two. We will describe the network- and application-level protocol we use to “pretend” that we are tailing a file, the CDN components we’ve built to scale the system, how we can do ABR in this system, and how we can control latency using HTTP/3 extensions that we’ve implemented.
At Facebook, videos have proliferated to include live, short-form content, high-quality videos, and much more. However, each of these types of video poses unique challenges, and to tackle those challenges, new stacks had to be built. This works, but we’ve found that it made it difficult to share innovations.
We discovered that one of the reasons is that each stack demands something different from storage with regard to consistency, availability, performance, and reliability. So, we set out to break the CAP theorem (kidding). Jokes aside, while there is no singular storage system that meets all these needs, we wondered if we could abstract them to look the same to the application.
In this talk, we present our work on a new file system abstraction called OIL, and explore how it helps build reusable video pipelines that work for both live broadcasts and VOD. We’ll explore surprising side effects of the work, such as how decoupling the problem has allowed developers to move quickly to take advantage of new storage innovations. And, we’ll peek under the covers to discuss how OIL can leverage the strengths of many storage systems via composition and DAGs.
When our team set out to improve the video playback experience for Facebook users in India, we looked broadly at the issues that mattered most to users and set off to fix those issues. In this talk, I’ll discuss our approach to identifying problems, defining success metrics, and improving our infrastructure across the networking stack, our video encodings, and video players. While we focused on solutions that would have the highest impact for users in India, many of these changes improved playback performance for our users all over the world.
What does MAX_SAFE_INTEGER have to do with CDN efficiency and player request rates? This talk examines achieving efficiency in low-latency streaming video through the use of byte-range addressing mode with LL-HLS. We’ll examine the general principles involved and look at some curious edge cases involving 206 responses, as well as the gains we get in reducing the player request rate. A working implementation will demonstrate commonality between LL-HLS and LL-DASH in using this approach.
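The core mechanism can be sketched as building a Range request for a part of a still-growing segment: an open-ended range lets the origin answer 206 and keep streaming bytes as they are produced. A minimal illustration, assuming an illustrative helper name; JavaScript players use `Number.MAX_SAFE_INTEGER` (2^53 − 1) where an explicit range end is required, which is treated as equivalent to an open-ended range here.

```python
MAX_SAFE_INTEGER = 2**53 - 1  # JS Number.MAX_SAFE_INTEGER

def range_header(start, end=None):
    """Build a Range header value for byte-range addressing.

    With no `end` (or an effectively unbounded one), the request is
    open-ended: the server can reply 206 and keep the response open
    while the segment grows, so one request serves many parts.
    """
    if end is None or end >= MAX_SAFE_INTEGER:
        return f"bytes={start}-"
    return f"bytes={start}-{end}"
```

Collapsing per-part requests into such open-ended range requests is where the reduction in player request rate comes from.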
David Zhang is a staff engineer at Alibaba working on databases and storage systems.
Denise is an engineer at Meta leading improvements in video playback quality of experience.