Deplatforming online extremists reduces their followers – but it comes at a price


Conspiracy theorist and US far-right media personality Alex Jones has been ordered to pay US$45m (£37m) in damages to the family of a child killed in the 2012 Sandy Hook school shooting.

Jones claimed that being banned from major social media sites – or “deplatformed” – for his extremist views hurt him financially, comparing the situation to “prison”. But during the hearing, forensic economist Bernard Pettingill estimated that Jones made more money after his conspiracy website InfoWars was banned from Facebook and Twitter in 2018.

So, does deplatforming actually work? Its impact is difficult to measure rigorously, so it’s hard to know exactly what happens to a person’s or group’s overall influence when they are removed from a platform. Overall, research suggests that deplatforming can reduce the activity of malicious actors on those sites. However, it comes at a price. As deplatformed people and groups migrate elsewhere, they may lose followers but can also become more hateful and toxic.

Typically, deplatforming involves actions taken by the social media sites themselves. But it can also be carried out by third parties, such as the financial institutions that provide payment services on these platforms (PayPal, for instance).

Closing down a group is also a form of deplatforming, even if the individuals in it remain free to use the sites. For example, The_Donald subreddit (a forum on the Reddit website) was shut down for hosting hateful and threatening content, such as a post encouraging members to attend a white supremacist rally.

Sticky Post on The_Donald subreddit, August 2017.

A frame from the anti-CNN GIF tweeted by Donald Trump.

Does deplatforming work?

Research shows that deplatforming has positive effects on the platform the person or group was kicked off. When Reddit banned certain forums whose members harassed overweight people and African Americans, many users who had been active in those hateful subreddits stopped posting on Reddit altogether. Those who remained active posted less extreme content.

Jhaver et al. Activity levels and severe toxicity scores of supporters of three deplatformed Twitter celebrities, before and after deplatforming.

However, deplatformed groups or individuals can migrate. Alex Jones continues to operate outside of mainstream social networks, mostly through the InfoWars website and podcasts. A ban by big tech can be framed as punishment for challenging the status quo without censorship, strengthening the bonds and sense of belonging among followers.

Gab was created in 2016 as an alternative social network that welcomes users banned from other platforms. Since the US Capitol riot, Gab has tweeted about these bans as a badge of honor, claiming a surge in users and job applications.

My team’s research examined the subreddits The_Donald and Incels (a male online community hostile to women), which moved to standalone websites after being banned from Reddit. When dangerous communities migrated to different platforms, their footprint shrank, but their users became significantly more extreme. Likewise, users banned from Twitter or Reddit showed increased activity and toxicity after moving to Gab.

Timelines of the creation, quarantine and banning of the Incels and The_Donald subreddits.

Other studies of the emergence of fringe social networks such as Gab, Parler or Gettr have found broadly similar patterns. These platforms market themselves as bastions of free speech and welcome users banned or suspended from other social networks. Research shows that not only did extremism increase as a result of lax moderation, but also that early users of a site had a disproportionate influence on the platform.

The unintended consequences of deplatforming are not limited to political communities, but extend to health disinformation and conspiracy theory groups. For example, when Facebook banned groups discussing COVID-19 vaccines, users moved to Twitter and posted even more anti-vaccine content.

Alternative solutions

What else can be done to avoid the escalation of online hate that deplatforming can foster? Social networks have been experimenting with soft moderation interventions that don’t remove content or ban users. Instead, they limit the content’s visibility (shadow banning), restrict other users’ ability to engage with the content (by replying or sharing), or add warning labels.

Examples of soft moderation on Twitter: warning labels and shadow banning.

These approaches have shown encouraging results. Some warning labels have prompted site users to debunk false claims. Soft moderation sometimes reduces user interactions and extremism in comments.

However, there is potential for popularity bias (acting on or overlooking content based on the buzz around it) in which posts platforms like Twitter choose to intervene on. And warning labels appear to be less effective for false posts that are right-leaning.

It’s also still unclear whether soft moderation creates additional avenues for harassment, such as mocking users whose posts receive warning labels, or angering users who are prevented from resharing content.

Looking ahead

An important aspect of deplatforming is timing. The sooner platforms act to stop groups using mainstream sites to amplify extremist movements, the better. Rapid action could, in theory, curb these groups’ efforts to recruit and radicalize large user bases.

But this would require a coordinated effort from mainstream platforms as well as other media to work. Radio talk shows and cable news play a crucial role in promoting fringe narratives in the United States.

We need an open dialogue about the deplatforming trade-off. As a society, we need to debate whether it is worth having fewer people exposed to extremist groups if those who remain become even more extreme.

Currently, deplatforming is handled almost exclusively by big tech companies. Tech companies cannot solve the problem alone, but neither can researchers or politicians. Platforms must work with regulators, civil rights organizations and researchers to tackle extreme online content. The fabric of society may depend on it.

