NUS Civic Tech Lab Joint Study Uncovers Ways Social Media Users Evade Policing of Harmful Online Content

October 4, 2023

IN BRIEF | 5 min read

  • A joint study by the Civic Tech Lab at NUS and media intelligence company Truescope revealed that close to 40 per cent of the YouTube and TikTok videos analysed contained harmful content related to eating disorders, self-harm and suicide.

Information and communication technology has evolved rapidly over the past 20 years, particularly with the emergence and entrenchment of social media in society. Indeed, social media platforms offer fast and easy access to information and the freedom to express oneself.

However, the hyperconnected nature of social media has given rise to new anxieties, including its impact on individuals’ physical and mental wellbeing through exposure to harmful online content such as hate speech and cyberbullying. A recent study by the University of Oxford found that viewing images of self-harm usually leads to harm. One key vulnerable group is the young and impressionable, who are typically active social media users.

Despite the commitment by popular social media platforms to regulate user-generated content and maintain a safe space for users, harmful content remains prevalent, as evidenced by the push by governments worldwide to enhance online safety. Amidst such growing concerns, the Civic Tech Lab at the NUS Faculty of Arts and Social Sciences collaborated with media intelligence company Truescope to study the portrayal of harmful content on popular video-based platforms YouTube and TikTok.

Elaborating on the reasons for the study, Associate Professor Weiyu Zhang, Director of the Civic Tech Lab, shared, “The prevalence of harmful content implies that users could have devised ways to evade platform policing. We felt it was important to uncover the creative techniques social media users adopt when posting harmful content to avoid platform moderation, in the hope that this will raise awareness of the topic and help in a whole-of-community effort to ensure a safer online environment, particularly for our youths.”

Their efforts have culminated in a recently published joint report titled “Demystifying Portrayals of Harmful Content on YouTube and TikTok”.

How Harmful Online Content is Concealed
Analysing 610 YouTube and 508 TikTok videos between 9 March and 9 April 2023, the team found that close to 40 per cent of these videos contained harmful content related to eating disorders, self-harm and suicide.

In addition, the majority of these users were young females: the team discovered that over 80 per cent of the videos were published by female users, and among users who publicly posted their age on their accounts, close to 70 per cent were below the age of 18.

Even more alarming was how content creators skilfully masked harmful content to evade platform moderation through techniques such as hashtag creation and purposeful content creation, with subtle differences between TikTok and YouTube.

Commonly used hashtags comprised abbreviations and acronyms (“#sh” for self-harm), deliberate misspellings of terms (“#suwerslide” to imply suicide), euphemisms (“#thinspo” for “thinspiration”), and hijacking of famous subjects (“#ednotsheeran” as a reference to eating disorders). Purposeful content creation came in the form of alterations to music and videos, and the use of symbols and coded language to portray self-harm behaviour while evading detection.

These findings reveal the challenge platforms face in keeping abreast of fast-evolving techniques for portraying harmful content. Assoc Prof Zhang, who is also a co-author of the report, further emphasised the critical ramifications of exposure to such pernicious content on physical and mental well-being.

“Third-party monitors are needed to independently evaluate the effectiveness of platforms’ content moderation measures,” she stressed. “Educators, parents, and young social media users must be made aware of the monitoring results, while legislators and regulators can use the evidence from both platforms and third-party monitors to make informed policy decisions.”

The report also shared recommendations targeted at a wide range of stakeholders, such as the government, online platforms, parents and educators, as well as youths and other vulnerable users. These range from assembling an independent monitoring body (as an alternative source of evidence, not a replacement for platforms’ self-reports), to greater human engagement in content moderation and improved access to social services.

Click here for the full report.


This story first appeared in NUSnews on 4 October 2023.
