TikTok and Reddit: The Efficacy of Curbing Misinformation

Social media platforms, whether centralized or decentralized, are an integral part of our everyday lives. The constant stream of information has created a breakneck pace for the news cycle. Entertainment, news, and social interactions have all become consolidated within these platforms. Two such platforms are TikTok and Reddit.

TikTok’s claim to fame is its revolutionary short-form content. Its signature “For You” page creates a bottomless feed that incentivizes the user to keep scrolling. Reddit’s niche is that it built upon the anonymous message boards of the early internet and transformed them into communities for knowledge sharing and conversation. It too boasts an extensive library of user-generated and cross-platform shared content, but its main appeal is the decentralized nature of anonymous posting.

Chronically online and dopamine addiction tirades aside, the sheer volume of information disseminated on these platforms, and the speed at which it is shared and consumed, make it incredibly difficult to moderate the validity of posts’ claims. Too often have I come across TikToks and Reddit posts glorifying hateful rhetoric, conspiracy theories, and even blatant, manufactured lies.

Users caught in the torrent of a seemingly escalating information war have also become so set in their ways that they opt out of typical fact-checking and media-literacy practices. The brain’s cognitive shortcuts offload the heavy lifting of double-checking information onto the presumed integrity of the platform it came from. Humans would rather outsource their reasoning to an external outlet so it can make room for more pressing matters, like the intrigue of the latest episode of RuPaul’s Drag Race.

Therein lies the impetus for platforms to implement some sort of internal mechanism to combat misinformation. Think of social media platforms like old Wild West towns, and misinformation spreaders as bandits. The locals can do their best to fend them off. However, they lack the legal authority or the gunslinging skill to save the town; they need a sheriff. Platform moderators are the sheriffs. However, not all sheriffs are created equal, and these bandits are pretty good at what they do. 

TikTok’s response to misinformation can be categorized into four levels of moderation: algorithmic detection, expert fact-checking, user reports, and redirects. First is automated detection. According to TikTok’s 2023 Transparency Report, the platform uses machine learning algorithms to scan for misinformation. Human inputs power this automated flagging system, which serves as a sonar for detecting misinformation in the feed. The report also notes direct requests from governments to remove certain types of content, typically harmful or illegal material. In addition, TikTok places a warning on certain posts stating that the actions shown are performed by professionals and should not be attempted at home; this label appears on race-car and stunt-related content, deterring impressionable viewers from imitating what they see.

Following a practice popularized by Meta during the COVID-19 era, TikTok partnered with outlets like PolitiFact, known for its Truth-O-Meter and SciVerify, to counteract political and medical misinformation. Having experts intervene is a slow yet effective method of combating misinformation, and the combination of automation and fact-checking services has been successful. In 2023 alone, according to the Transparency Report, over 66 million videos were removed globally in six months, with 5% flagged for misinformation.

The third level is user reporting: internal studies from TikTok report a 24% reduction in the resharing of user-flagged content. Finally, during elections, TikTok redirects users to information hubs where they can read and watch verified coverage. Still, all of this is not enough. A 2023 study from the Center for Countering Digital Hate deemed TikTok’s efforts to curb misinformation about climate change “inadequate,” with 50% of misinformed content still appearing in searches even after moderation efforts.

Reddit is a harder platform to navigate; with its anonymity and sprawling sub-communities, enforcing moderation is a monumental task. Reddit relies on its own policing system: each community has moderators, themselves members of the community, who flag and delete hateful and offensive posts. This self-imposed policing has proved somewhat useful over the years, as Reddit’s hive mind serves as a repository for the most niche bits of knowledge.

Reddit’s most potent weapon is its quarantine method. In a 2021 Admin update, Reddit began to “quarantine” subreddits that consistently spread misinformation, outright banning some that failed to comply with the platform’s terms and standards. Deleting these communities caused bad actors and the misinformed to scatter, attacking misinformation at its core by making it harder for compromised users to organize.

Where streamlined platforms like TikTok fall short, Reddit does well. Eliminating individual accounts and posts can only go so far; the audience hungry for misinformed content still thrives. Reddit, however, dismantles that audience’s ability to assemble, making misinformation more difficult to spread.
