EFFecting Digital Freedom

by Jason Kelley

Why Did My Post Get Deleted?

You've probably experienced this, or at least seen it happen: a post or an account of yours on a social media platform is taken down because it supposedly violates the rules of the site.  "But this doesn't violate anything," you might say, wondering who made the decision and why - and how to fix it.  Even if you can get answers, they're often impersonal and inadequate.  These sorts of takedowns, and the opaque responses from companies, are among the biggest frustrations people face online.  And sometimes it isn't only frustrating - it has serious consequences for a society that functions, by and large, through the Internet.

As the number of social media sites has dwindled to basically three or four enormous global platforms, the impact that these few U.S.-based companies have on the ability of everyone in the world to express themselves freely has grown exponentially.  These massive platforms can make it easier for a person to reach larger audiences - but they also give the companies behind them a dangerous amount of power to control what you, and people all across the world, are able to say.  It's far past time that the companies responsible for takedowns - often tens of thousands per day - offered better answers about what is taken down, why, and how, and made it easier to push back against incorrect decisions.

Content moderation has serious consequences.  Some takedowns are high profile, like YouTube's deletion of evidence of Syrian war crimes, or Instagram's incorrect flagging of posts about the Al-Aqsa Mosque, the third holiest site in Islam, as incitement to violence.  Others are more anodyne, like a Brazilian user's Instagram post about breast cancer being automatically removed because it included images of female nipples (the company makes exceptions for breast cancer awareness).  In either case, if your content is wrongfully removed from one of these platforms, or your account is wrongfully suspended, recourse is limited, and users are often met with a faceless, bureaucratic auto-reply to their questions and concerns.

EFF has tracked global online censorship for years, and pushed companies to adopt better standards that make it clearer how many posts and accounts are taken down and why.  It's shocking that after nearly two decades, companies' increasingly aggressive moderation isn't more transparent and accountable.  New evidence from the Facebook Papers (disclosed by a former data scientist at Facebook) paints a picture of a company that is seriously grappling with (and often failing in) its responsibility as the largest social media platform - in content moderation and other areas as well.  The papers show decision-making hampered by the scale of the user base, fear of political blowback, piecemeal enforcement, a lack of local cultural and language expertise, and internal programs like "cross check" that classify some users differently from others.

There's no doubt that 2022 will see significant discussion about how we can improve the ways that online platforms moderate content.  That's why we've just completed two projects focusing on how content moderation works, and on how companies can be better stewards of free speech online.

The first project is the just-released revamp of "Tracking Online Global Censorship" at onlinecensorship.org.  The previous version collected examples of online censorship; the new one is a broader resource for anyone interested in the topic.  We've got explainers about the history of laws protecting online speech, how to appeal moderation decisions, how copyright fits into moderation, and a lot more.  Though censorship and free speech (as well as misinformation and disinformation) have become common discussion points over the last few years, many people still don't have a detailed grasp of the content moderation landscape, and the debate often falls into partisan arguments about what should or shouldn't be allowed online.  Whatever you think about specific types of online speech, wrongful takedowns happen, and will continue to happen, and understanding the policies and processes behind them is key to fruitful discussions - and to protecting free expression - going forward.

The second project is the "Santa Clara Principles 2.0," a set of recommendations for companies that EFF and several other digital rights and human rights organizations have now expanded upon.  These principles are initial steps that companies engaged in content moderation should take to provide meaningful due process, and to better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users' rights.  These are fairly simple guidelines - for example, companies should publish clear and precise rules and policies, and ensure that their enforcement takes into consideration the diversity of cultures and contexts in which their platforms and services are available.

The principles include implementation guidelines as well.  "The Santa Clara Principles on Transparency and Accountability in Content Moderation" have been endorsed by (at this writing) twelve major companies, including Apple, Facebook (Meta), Google, Reddit, Twitter, and GitHub.  Endorsement indicates a commitment to support content moderation best practices moving forward - not that the company has met the principles... yet.  Reddit, for its part, has fully implemented them.  You can view the principles at santaclaraprinciples.org.

These two projects should help you to understand how platforms can be better stewards of their users - and how we can build more open, transparent, and equitable online communities.  And perhaps most importantly, they can answer the question of why that post you made disappeared, and what you can do about it.