Demystifying Portrayals of Harmful Content on YouTube and TikTok
October 16, 2023
Over the past 20 years, advancements in information and communication technology, especially in social media, have drastically altered how we communicate and access information. While these platforms offer numerous benefits, such as easy access to information and a space for self-expression, they also present serious risks.
Addressing this issue, Associate Professor Weiyu Zhang (NUS Communications and New Media) and her Civic Tech Lab collaborated with Truescope’s Ho Wei Yang to scrutinize the portrayal of harmful content on the dominant video platforms YouTube and TikTok. Titled ‘Demystifying Portrayals of Harmful Content on YouTube and TikTok’, their study seeks to understand the evasive tactics employed by users to bypass platform moderation, especially when posting harmful content.
The findings are alarming. Of the videos analyzed, nearly 40% contained harmful content associated with eating disorders, self-harm, and suicide. The vast majority of these videos were posted by young women, spotlighting their vulnerability to these issues. The study also shed light on the subtle tactics employed to sidestep platform moderation, including inventive hashtagging, deliberate misspellings, and coded language.
This echoes a concerning study conducted by the University of Oxford this year, which found that exposure to images of self-harm often leads to actual self-harm. Given their high levels of social media engagement, young people are particularly vulnerable to harmful content. Despite attempts by social media giants to regulate content, harmful material remains pervasive.
Ultimately, the study’s findings underscore the uphill battle platforms face in combating evolving harmful content. A/P Zhang emphasizes the need for independent third-party monitors to gauge moderation effectiveness, and recommends a holistic approach to online safety involving governments, platforms, and users.
Read the report here.