Tarleton Gillespie – Do Not Recommend: When Platforms Moderate Through Their Recommendation Algorithms

Tarleton Gillespie visited the institute virtually to discuss his approach to content moderation as part of the wider problem of platform governance, with a focus on how platforms are increasingly using reduction rather than removal as a moderation technique. His talk offered insight into content moderation in a way that most discussions of the issue overlook, and it particularly resonated with our own work on developing new strategies for displacing fake news and the circulation of junk content.

He began by contextualizing how critical discussion of platform content moderation has evolved over time and how platforms have responded. Early critics, around the time of Brexit and the 2016 US presidential election, thought of platforms as a “place, archive or playground for disinformation,” but by 2018 something had changed. Platforms were no longer seen simply as “a venue for information” but as “incentivizing and amplifying content in the design of the algorithm.” He illustrated this with YouTube’s strategy of reducing the visibility of videos that “brush up against policies but do not cross the line.” By 2019, YouTube’s approach had become removing and reducing misinformation, conspiracy theories and other “borderline” videos, while raising and rewarding other content through recommendation and monetization. Other platforms now do the same: Facebook reduces virality, Reddit quarantines communities, Instagram filters sensitive content, and so on.

[Source: Gillespie, T. September 8, 2021]
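To make the distinction concrete, the sketch below illustrates, in schematic form, what “reduction” can look like inside a recommendation pipeline: a borderline item is not taken down, it is simply down-weighted so it surfaces less often. This is a minimal, hypothetical illustration rather than any platform’s actual system; the classifier score, the threshold, the demotion factor, and the item fields are all assumptions invented for the example.

# A minimal, hypothetical sketch of "reduction": borderline items are
# down-weighted in the recommendation ranking rather than removed.
# The classifier probability, threshold, demotion factor, and item fields
# are all invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    item_id: str
    engagement_score: float   # base score from the recommender
    borderline_prob: float    # classifier's estimate that the item "brushes up against" policy

BORDERLINE_THRESHOLD = 0.8    # assumed cutoff for treating an item as borderline
DEMOTION_FACTOR = 0.1         # assumed down-weighting applied instead of removal

def rank_with_reduction(items: List[Item]) -> List[Item]:
    """Rank items by score, demoting (not deleting) those flagged as borderline."""
    def adjusted_score(item: Item) -> float:
        if item.borderline_prob >= BORDERLINE_THRESHOLD:
            return item.engagement_score * DEMOTION_FACTOR  # reduce visibility
        return item.engagement_score
    return sorted(items, key=adjusted_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("conspiracy-clip", engagement_score=0.95, borderline_prob=0.90),
        Item("news-explainer", engagement_score=0.70, borderline_prob=0.10),
    ]
    for item in rank_with_reduction(feed):
        print(item.item_id)   # the borderline clip stays available but drops in the ranking

The point of this toy example is the one Gillespie highlights: nothing is removed, yet the borderline item’s reach is quietly throttled.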

Tarleton explained that reduction is becoming an increasingly common content moderation strategy because, for platforms, it is more appealing than removal. According to Facebook, users are drawn to salacious content no matter where the policy line is drawn, so simply shifting the line will not solve the problem.

[Source: Gillespie, T. September 8, 2021. From “A Blueprint for Content Governance and Enforcement” posted to Facebook by Mark Zuckerberg]

From the platform’s perspective, content reduction also allows for more flexibility: “bright line rules are hard to write”, and this way platforms can “anticipate change and decide when to intervene”. Unsurprisingly, this approach is the one most aligned with the platform’s interests. As Gillespie described, platforms can continue to enjoy the benefits of keeping this kind of content, namely more users, ad revenue and data, while regaining public trust and avoiding accusations of censorship.

Perhaps the biggest takeaway from the talk came from Tarleton’s analysis of what this shift towards reduction signifies. He described two possible interpretations: 1) it is a more “mature approach to the problem of disinformation”, with platforms adopting new techniques in response to the call to moderate harmful content; and 2) reduction has long been a tactic of content moderation, though it was largely framed as quality control against things like clickbait and spam, so the technique may be less an innovation than a familiar intervention applied in a new context.

While these two perspectives might seem contradictory at first glance, his talk ultimately showed how both stories are simultaneously true. He left us with questions he continues to contend with, such as “why do some things sound like quality control and not content moderation?” and, more broadly, “how do we fairly go about finding things unacceptable?” These difficult questions will no doubt continue to be discussed here at the Digital Democracies Institute, and we are excited to see how his work develops in the fast-moving world of content and platform governance.