Op-Ed: Social Media Platforms’ Struggles with Misinformation and Racism: Challenges and Paths Forward

November 11, 2022

Authors: Dr. Josephine Lukito, Dr. Dhiraj Murthy, Bin Chen, Kathryn Kazanas, Akaash Kolluri, Pranav Venkatesh

From “fake news” screenshots to conspiratorial claims, the lead-up to the 2022 midterm elections has shown that misinformation remains a problem in public discourse. This is especially harmful for minority groups and underrepresented populations, who tend to be the targets of misinformation-motivated vitriol. The combination of misinformation and racist discourse makes political talk more toxic, increasing intolerance and polarization.

But current efforts to moderate racist misinformation have been insufficient at addressing this problem. From a policy perspective, U.S. federal laws such as Section 230 of the Communications Decency Act have made it easier for social media platforms to be opaque about their content moderation policies. Stronger digital rights efforts in Europe, such as the recently implemented Digital Services Act, may compel platforms to more fully disclose how they moderate harmful digital content, use algorithms to recommend content, and serve advertisements programmatically to users.

Our research studying misinformation and racist language on multiple social media platforms suggests that this unwanted content persists across both mainstream and alternative social media platforms. On Parler especially, we find that the combination of misinformation and racist language is used to criticize opposition politicians or to advance fear-based conspiracy theories.

While it is not clear whether U.S. legislators will enact tighter controls as Europe has done, public figures and citizens alike have expressed frustration with social media platforms' current tactics. Two key challenges make it especially difficult for citizens to trust social media platforms: first, ambiguity regarding content moderation strategies and algorithmic use, and second, inconsistency regarding how content moderation policies are enforced.

Based on this, our policy brief proposes the following four solutions. First, platforms should be more transparent with users about their content moderation policies, particularly with regard to which aspects are automated and what proportion of content is human-moderated. All terms of service, inclusive of both data access and user permissions, should be plainly written and unambiguous to clarify users’ basic rights on the platform.

Second, we advocate for a flagging and review system that combines algorithmic filtering with human moderators to identify and remove misinformation and racist content. While substantial efforts have been made with flagging systems in English, misinformation and hate speech in other languages (e.g., Spanish, Mandarin, and Hindi) receive far less moderation attention, allowing harmful content to thrive.
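To make the idea concrete, here is a minimal sketch of what such a hybrid pipeline could look like: an automated score routes high-risk posts to a language-specific human review queue rather than acting alone. The classifier, keywords, thresholds, and queue names are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    language: str  # e.g., "en", "es", "zh", "hi"

def automated_risk_score(post: Post) -> float:
    """Stand-in for a trained multilingual classifier; here we only
    match a couple of sample phrases for demonstration."""
    flagged_terms = {"fake ballots", "stolen election"}
    return 1.0 if any(term in post.text.lower() for term in flagged_terms) else 0.0

def route_post(post: Post, review_threshold: float = 0.5) -> str:
    """Send high-risk posts to a human review queue for the post's language,
    so non-English content is not left to the algorithm alone."""
    if automated_risk_score(post) >= review_threshold:
        return f"human-review-queue:{post.language}"
    return "published"

if __name__ == "__main__":
    sample = Post("p1", "They found fake ballots in the warehouse", "en")
    print(route_post(sample))  # -> human-review-queue:en
```

The key design choice in this sketch is that the algorithm only prioritizes content for humans; it does not remove anything on its own, and every language has its own review queue.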

Third, social media platforms should collaborate to remove malicious actors, including misinformation spreaders and disinformation agents. Sharing data about misinformation spreaders will make it easier for each platform to detect and remove misinformation and racist language before it spreads across a network. To build a more robust content moderation effort, platforms can also collaborate with policy makers, civil society groups, and researchers, as well as a range of social media platforms serving non-English-speaking communities.
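One way such sharing could work without exposing raw user data is for participating platforms to exchange salted hashes of known signals (for example, URLs of debunked articles) and check new content against the shared list locally. The sketch below is purely illustrative; the consortium, salt, and URLs are assumptions, not a real data-sharing API.

```python
import hashlib

SHARED_SALT = "example-consortium-salt"  # assumed secret shared by participating platforms

def fingerprint(signal: str) -> str:
    """One-way fingerprint of a signal (URL, handle, etc.) for cross-platform matching."""
    return hashlib.sha256(f"{SHARED_SALT}:{signal.strip().lower()}".encode()).hexdigest()

# Platform A publishes fingerprints of URLs its moderators confirmed as misinformation.
shared_blocklist = {fingerprint(url) for url in [
    "https://example.com/fake-ballots-story",
    "https://example.com/stolen-election-claim",
]}

# Platform B checks a newly posted link against the shared list before it spreads.
new_link = "https://example.com/fake-ballots-story"
print(fingerprint(new_link) in shared_blocklist)  # True -> flag for review
```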

Last, but certainly not least, social media platforms need to track misinformation for a prolonged period of time. The period after an election can be rife with misinformation, racist discourse, and calls for violence. Rather than tracking misinformation and hate speech only in the short term, efforts should be made to track them for a much longer period. This is especially important for finding super-spreaders: accounts that post prolific amounts of content containing misinformation or racist language. This can also include super-spreader bot accounts. Of the roughly 38,000 accounts with racist content we studied on Parler, we found that approximately 5% were bots and that their automated posting can have a particularly outsized impact on the spread of this type of malicious content.
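As a rough illustration of what long-term tracking might look like in practice, the sketch below counts already-flagged posts per account over a multi-month window and surfaces the most prolific accounts. The data, window, and threshold are hypothetical assumptions for demonstration, not the method used in our study.

```python
from collections import Counter
from datetime import datetime, timedelta

# (account_id, timestamp of a post already labeled as misinformation or racist content)
flagged_posts = [
    ("acct_1", datetime(2022, 11, 9)),
    ("acct_1", datetime(2022, 12, 2)),
    ("acct_2", datetime(2022, 11, 15)),
]

def super_spreaders(posts, window_days=180, min_flagged=2, now=None):
    """Return accounts with at least `min_flagged` flagged posts in the window."""
    now = now or datetime(2023, 1, 1)
    cutoff = now - timedelta(days=window_days)
    counts = Counter(acct for acct, ts in posts if ts >= cutoff)
    return [acct for acct, n in counts.items() if n >= min_flagged]

print(super_spreaders(flagged_posts))  # -> ['acct_1']
```

The point of the long window is that a super-spreader who posts steadily for months may never look alarming in any single week; only sustained tracking reveals the cumulative volume.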

These recommendations are built on a body of research studying the spread of misinformation and racist discourse across multiple social media platforms.

This research was conducted as part of the Good Systems project, Designing Responsible AI Technologies to Curb Disinformation.