EFFecting Digital Freedom

by Jason Kelley

Filter Bots Stifle Your Speech

Automated content filters are common on online platforms.

But anyone who's been put in "Facebook jail" or had their Twitter account dinged knows that these automated filters can't easily determine whether a post actually breaks community standards; they can't detect irony or humor, they miss context clues, and they simply aren't cut out to understand the nuance of written language.  What's more, revelations from the "Facebook Files" have shown that Facebook, at least, has a special program, dubbed "cross-check," which gives some "VIP" users a near-blanket ability to ignore the community standards entirely.

Unfortunately, filters are having a bit of a moment in Washington, D.C.

Though not explicitly calling for filters, the Kids Online Safety Act would require platforms to prevent certain types of legitimate speech - like conversations around substance abuse - from being shown to people below a certain age.  Companies that can afford to comply - the big players - will no doubt use their filters to do so, ensuring that users affected by the law see little to no discussion of the verboten topics.  The types of content targeted by these bills are complex, and sometimes dangerous - in addition to substance abuse, the law lists discussions of suicide and eating disorders - but discussing them is not bad by default.  It's very hard for a filter to differentiate between minors discussing these topics in a way that encourages them and discussing them in a way that discourages them.  What's perhaps worse is that the bill vaguely lists "other matters that pose a risk to physical and mental health of a minor" as content that should be limited.  As we've seen in the past, whenever the legality of material is up for interpretation, it is far more likely to be banned outright by oversensitive filters, leaving huge holes in what information is accessible online.

Likewise, Congress is considering a filter mandate bill that would task the Copyright Office with designating technical measures that Internet services must use to address copyright infringement.  Right now, sites like YouTube, Facebook, and Twitch use filter tools voluntarily, to terrible effect, but they are not doing so under any legal requirement.  Corporate copyright owners, however, complain that filters should be adopted far more broadly.  They point to one of the conditions of the legal safe harbors from copyright liability included in the Digital Millennium Copyright Act (DMCA) - safe harbors that are essential to the survival of all kinds of intermediaries and platforms, from a knitting website to your ISP.  To benefit from the safe harbors, sites must accommodate any "standard technical measures" for policing online infringement - essentially, they have to implement an agreed-upon mechanism for removing copyrighted material.

These measures were meant to be "developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process."

As a practical matter, no such broad consensus has ever emerged, partly because the number and variety of both service providers and copyright owners have exploded since 1998, and these industries and owners have wildly varying structures, technologies, and interests.  What has emerged instead is what you see today: privately developed automated filters like YouTube's Content ID, usually deployed at the platform level.  For decades, influential copyright owners have wanted to see those technologies become a legal requirement at every level - and the Strengthening Measures to Advance Rights Technologies (SMART) Copyright Act is the latest in a string of bad proposals that would do so.  The bill would require service providers to adopt "designated technical measures" approved by the Copyright Office to police potentially infringing activity - in many cases, filters - despite the fact that these filters aren't able to distinguish between lawful expression and copyright infringement, and that they regularly punish both people who make their living sharing videos online and everyday users.

There are tens of thousands of examples of these oversensitive and inaccurate automated takedowns.  Some highlights, which we've memorialized in our Takedown Hall of Shame, include YouTube's system flagging static as copyrighted material five separate times.  Another ironic example: the company flagged a video from a New York University Law School panel whose whole point was to explain how song similarity is analyzed in copyright cases.  The flag was eventually removed, but only after NYU Law reached out to YouTube through private channels, attempting to get an answer to its questions about how YouTube's Content ID filter system and takedown policy worked.

The problem isn't just on YouTube: Facebook heeded a takedown request from Sony and muted a musician's own video performance of Bach, because the platform's filters can't tell the difference between different classical musicians playing public domain pieces.  The company only backed down when the musician took his story to Twitter and then emailed the heads of Sony Classical and Sony's public relations.

The problem isn't just copyright filters, of course: in an attempt to battle COVID-19 misinformation, Facebook also kicked the page for Oakland-based punk rock band Adrenochrome offline.  (Adrenochrome is a word popularized by Hunter S. Thompson in two books from the 1970s, and one that has gained recent popularity amongst conspiracy theorists.)  The site restored the page, then removed it again, and only reinstated it after we reached out to them.

Twitch has also had its filter failings: when the live streaming site hosted Blizzard Entertainment's gaming conference BlizzCon on its official gaming channel, it replaced a live performance of Metallica with something resembling the music from an ice cream truck - all while leaving the music intact on Blizzard's own Twitch stream.  Given that Metallica put themselves on the frontlines of the fight against digital music downloads in the early 2000s, launching high-profile lawsuits and testifying in front of Congress, it's no surprise Twitch was so twitchy.

So if filters can't tell whether one thing is just a copy of another, how can they tell whether something is definitively hate speech, or a promotion of substance abuse (for example)?  What we've learned from the long, painful history of automated copyright filters is that filters don't work.  And mandates for filters don't just stifle speech; they also have downstream effects on the potential for new providers and platforms to challenge the big incumbents.  If a filter mandate were made law, the largest tech companies would find it easy to implement whatever the standard technical measures are (likely using something akin to their current measures, but turning the "filter" knob up to 11).  The burden of laws like this falls mainly on users and on small and medium-sized services.

Faced with these criticisms, government officials and politicians ought to back away from these ham-fisted plans to regulate online content through mandated technical measures.  But it will take people like you to convince them.  We hope you'll join us at the Electronic Frontier Foundation in fighting back against the filter mandates - before the Internet gets remade to serve the whims of today's politicians and the entertainment industry.