Event times below are displayed in PT.
When used in aggregate, user reports can provide valuable indicators that supplement automated systems. Algorithmic systems rely on classes of signals previously shown to correspond to spam attacks; however, spammers continuously work to obfuscate those signals with new techniques, and user reports can be a clear signal of new attack vectors. Facebook developed a robust, systematic approach to leveraging trends in user-generated reports by geography, method of attack, location of the spam (e.g., News Feed, Groups), and type of content (e.g., photos, shares) to identify spam attacks. Attacks identified through this system are categorized based on current and new signals, which are fed back into the algorithmic systems to prevent further attacks using these techniques.
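The core idea above, grouping reports along dimensions such as geography, surface, and content type and flagging segments that spike above a baseline, can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system; the record shape, baseline, and threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical report records: (geography, surface, content_type)
reports = [
    ("BR", "feed", "photo"), ("BR", "feed", "photo"),
    ("BR", "feed", "photo"), ("US", "groups", "share"),
]
# Hypothetical expected daily report volume per segment
baseline = {("BR", "feed", "photo"): 1.0, ("US", "groups", "share"): 1.0}

def spike_segments(reports, baseline, ratio=2.0):
    """Flag segments whose report volume exceeds `ratio` x baseline."""
    counts = Counter(reports)
    return [seg for seg, n in counts.items()
            if n >= ratio * baseline.get(seg, 1.0)]

print(spike_segments(reports, baseline))  # [('BR', 'feed', 'photo')]
```

Segments flagged this way would then be reviewed to extract new signals for the automated classifiers.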
Netflix is the world’s leading Internet television network with over 83 million members in over 190 countries enjoying more than 125 million hours of TV shows and movies per day. We’ll discuss the range of unique abuse-related challenges we face, including techniques we’ve developed to detect and remediate specific issues such as account takeover and payments fraud. We’ll cover techniques for understanding and disrupting the financial motivations and methods that adversaries use to monetize abuse and how we collaborate both internally and externally to keep our members safe.
To fight spam effectively, we need an unbiased estimate of how much bad content exists in the ecosystem and where it resides. In this presentation we discuss sampling schemes for identifying the small percentage of bad content in views of both user-generated content and commercially motivated content such as ads and sponsored posts. These methods employ ML-derived classifiers to weight the sampling, increasing the volume of bad content in the samples. With more bad content in hand, we can segment it further, measuring the prevalence of bad material within particular segments or under particular policies.
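Classifier-weighted sampling can remain unbiased if each sampled item is reweighted by its inverse inclusion probability (a Horvitz-Thompson-style estimator). The sketch below illustrates the principle on toy data; it is an assumption-laden simplification, not the presenters' method, and `score`/`label` stand in for a real classifier and human review.

```python
import random

def weighted_sample_prevalence(items, score, label, n, seed=0):
    """Estimate prevalence of bad content via importance sampling.

    Sample proportional to a classifier score (so bad content is
    oversampled), then reweight each draw by its inverse inclusion
    probability so the prevalence estimate stays unbiased.
    """
    rng = random.Random(seed)
    total = sum(score(x) for x in items)
    probs = [score(x) / total for x in items]
    # Draw n indices with replacement, proportional to score.
    draws = rng.choices(range(len(items)), weights=probs, k=n)
    return sum(label(items[i]) / (len(items) * probs[i])
               for i in draws) / n

# Toy check: 2 of 10 items are bad (true prevalence 0.2), and a
# hypothetical classifier scores bad items 9x higher than good ones.
bad = {0, 1}
est = weighted_sample_prevalence(
    list(range(10)),
    score=lambda x: 9.0 if x in bad else 1.0,
    label=lambda x: 1.0 if x in bad else 0.0,
    n=5000,
)
```

Because bad items are oversampled, the same review budget yields many more labeled bad examples for per-segment breakdowns, while the weighting keeps the overall estimate honest.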
This talk will share the app-hardening techniques Snapchat uses to prevent unofficial "third-party" apps, along with insights into what some of these apps do (and why), who creates them, and how their economy works. We'll also discuss compiler tricks, anti-debugging and tamper detection, how app hardening aids server-side machine learning, and some of our future research directions.
For most people, spam is a nuisance. Behind this nuisance, however, is a profitable business operation that continues to thrive despite the billions of dollars spent trying to disrupt it. This talk will cover the technical and economic foundations of spam and its evolution as a profit-seeking enterprise, including the emergence of affiliate programs. Using data collected across several research studies, Levchenko will explain the business of spam, from users who buy spam-advertised products to spammers making millions. The talk will describe prior work that identified payment processing as a key bottleneck in spam monetization as well as the results of applying this technique to stores selling spam-advertised goods, and its effect on spam operations. Levchenko will also cover the relationship between spam and other underground activities, including botnets and account compromise.
Effective fake-account defense systems are essential for preventing spam without impacting product growth. This presentation will discuss some of the methods Facebook uses to understand the performance of fake-account detection and remediation, using a bottom-to-top operational approach to drive improvement.
Fake accounts are a preferred means for malicious users of online social networks to send spam, commit fraud, or otherwise abuse the system. In order to scale, a single malicious actor may create dozens to thousands of fake accounts; however, any individual fake account may appear to be legitimate on first inspection, for example by having a real-sounding name or a believable profile. In this talk, we will describe LinkedIn's approach to finding and acting on clusters of fake accounts. We divide our approach into two parts: clustering, i.e., assembling groups of accounts that share one or more characteristics; and scoring, i.e., classifying each cluster into benign or malicious. We will describe different scoring mechanisms, propose some general classes of features used to score, and show how our modular approach allows us to scale to handle many different types of fake account clusters. We will also discuss tradeoffs between offline and online implementation of the algorithms.
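The two-part structure described above, clustering followed by scoring, can be sketched as follows. This is a deliberately tiny illustration under assumed data: the record shape, the choice of signup IP as the shared characteristic, and the size-based scoring rule are all hypothetical stand-ins for LinkedIn's richer feature sets and classifiers.

```python
from collections import defaultdict

# Hypothetical account records: (account_id, signup_ip, display_name)
accounts = [
    (1, "10.0.0.1", "Alice Smith"), (2, "10.0.0.1", "Alice Smyth"),
    (3, "10.0.0.1", "Alise Smith"), (4, "10.0.0.2", "Bob Jones"),
]

def cluster_by_ip(accounts):
    """Clustering step: group accounts sharing a signup IP."""
    clusters = defaultdict(list)
    for acct in accounts:
        clusters[acct[1]].append(acct)
    return clusters

def score_cluster(cluster, min_size=3):
    """Scoring step: toy rule -- unusually large clusters are
    suspicious even when each member looks legitimate alone."""
    return len(cluster) >= min_size

suspicious = [ip for ip, c in cluster_by_ip(accounts).items()
              if score_cluster(c)]
print(suspicious)  # ['10.0.0.1']
```

The modularity matters: swapping the clustering key (IP, name pattern, profile template) or the scoring function lets the same pipeline handle many fake-account cluster types, offline or online.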
End-to-end encryption, which protects message content so that only the sender and recipient can access it, is gaining popularity in messaging applications. At the same time, there is some concern about its deleterious effects on spam detection systems. At WhatsApp we have successfully launched such "e2e" encryption for over 1 billion people - while also reducing the amount of spam they receive. This talk will discuss techniques we've found successful for preventing spam without access to message content, and some of the challenges we faced along the way. It should help dispel concerns that e2e encryption necessarily means reduced effectiveness of spam detection.
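One family of content-free signals is behavioral metadata, for example, the fan-out of a sender's messages. The sketch below is a minimal illustration of that general idea, not WhatsApp's actual detection logic; the event shape and threshold are invented for the example.

```python
from collections import defaultdict

def flag_high_fanout(events, max_recipients=50):
    """Content-free spam signal: distinct recipients per sender.

    Uses only metadata (sender, recipient), never message bodies,
    so it works even when content is e2e encrypted. The threshold
    here is illustrative only.
    """
    recipients = defaultdict(set)
    for sender, recipient in events:
        recipients[sender].add(recipient)
    return {s for s, r in recipients.items()
            if len(r) > max_recipients}

# A hypothetical spammer blasts 60 strangers; a normal user chats
# repeatedly with one contact.
events = [("spammer", f"user{i}") for i in range(60)]
events += [("alice", "bob")] * 5
flagged = flag_high_fanout(events)
print(flagged)  # {'spammer'}
```

Real systems combine many such behavioral and reputation signals; no single threshold would survive contact with adversaries.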
Millions of hosts and guests confidently list and book on the Airbnb platform; our trust and safety features give them comfort and confidence in our marketplace. Bad actors can be incentivized to take advantage of the goodwill and excitement our product creates. While we face many risk vectors, this talk will focus specifically on the measures we take to detect fake inventory. From machine learning to image-similarity clustering, we're constantly working to keep our community safe.
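Image-similarity clustering for fake listings often starts from perceptual hashes: near-duplicate photos (e.g., the same stock image reused across fraudulent listings) hash to values a small Hamming distance apart. Below is a minimal average-hash sketch on pre-resized grayscale grids; it is an illustration of the general technique, not Airbnb's pipeline.

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel, set if the
    pixel is brighter than the image mean. `pixels` is assumed
    to be a small, pre-resized grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img1 = [[10, 200], [220, 30]]   # original listing photo (toy 2x2)
img2 = [[12, 198], [222, 28]]   # near-duplicate (slight re-encode)
img3 = [[200, 10], [30, 220]]   # unrelated image
print(hamming(average_hash(img1), average_hash(img2)))  # 0
print(hamming(average_hash(img1), average_hash(img3)))  # 4
```

Hashing every listing photo and clustering by small Hamming distance surfaces groups of listings reusing the same imagery, a strong fake-inventory signal.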
Spam fighting isn't just about writing policies, training classifiers, and combating attacks. Applying the "secure by design" principle to spam helps create better products with built-in features for preventing and managing large-scale attacks. In this talk, we'll share how Facebook's spam fighters extended their efforts to align incentives and enlist other product teams across the company in the fight against abuse.