Behind the content moderation strategies of social media giants

New research highlights how greater accountability on the part of social media platforms could curb the spread of misinformation and harmful content online

As social media platforms strive to balance profits with effective content moderation, the consequences of unchecked misinformation and hate speech loom large. The infamous 2017 Rohingya crisis in Myanmar is a stark example of what can happen when social media platforms fail to moderate harmful content quickly and adequately.

So how should social media platforms navigate accountability and content moderation? A recent study by Dr Conor Clune, Senior Lecturer in the School of Accounting, Auditing & Taxation at UNSW Business School, and his co-author, Emma McDaid, Assistant Professor in Accountancy at the School of Business, University College Dublin, seeks to answer this question. In their paper, Content moderation on social media: constructing accountability in the digital space, the authors analyse the current content moderation strategies employed by five of the world's most popular social media companies.

“Social media organisations are at an important juncture in how they control content posted on their platforms,” explains Dr Clune. “The algorithms they leverage to monitor content posted on their platforms are evolving, and in a few years, should be able to detect all violating content with a 95 per cent accuracy level, if not higher. This is an impressive achievement. However, while this will allow them to control the content posted on their platforms, it will do little to help social media organisations nudge or modify the behaviour of offending/violating users on their platforms.”

UNSW Business School's Dr Conor Clune says the algorithms used by social media platforms should soon be able to detect violating content with a 95 per cent accuracy level, if not higher. Photo: Supplied

What accountability do social media platforms have?

In 2021, the Australian government passed the Online Safety Act, which expanded the powers and responsibilities of the eSafety Commissioner, an independent statutory office, including the power to issue notices requiring social media companies to remove harmful content.

While this Act focuses on online safety, it does not explicitly define the accountability of social media platforms or set out guidelines for effective content moderation. This means social media platforms largely decide for themselves how to moderate content, with little legal accountability.

To examine what some companies do to prevent the spread of harmful content (and what they could do better), the researchers analysed the publicly available content moderation strategies of Facebook, Twitter, LinkedIn, Instagram and YouTube, rigorously comparing each platform's policies, practices and disclosures. In doing so, they identified five key stages of content moderation: setting community standards, identifying violations, enforcement, user appeals, and user reporting.

Read more: How to detect and deter customer misbehaviour on social media

Good accountability systems are designed to embed both control and conditioning effects, explains Dr Clune. All five social media companies have designed content moderation tools with impressive control capabilities: in most instances, they can identify and remove content posted by users that breaches community standards. However, these tools have minimal conditioning effects, which means the content moderation process is unlikely to modify users' behaviour.

He says these five social media companies could do more to highlight their community guidelines when a user signs up for the platform. “Short videos on what conduct is considered important would be a good first step. Similarly, after someone offends, it's often the case that the user was unaware that the content they shared violated the platform's community standards,” explains Dr Clune.

“The platform could set a task for the user to complete before they can create and share the content again. For instance, complete a short quiz where users are shown different posts or tweets and asked which community standard was violated in each case. That approach could have the desired learning/conditioning effect that accountability processes should stimulate.”
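To make this idea concrete, here is a minimal sketch, in Python, of how such a reinstatement quiz might gate a suspended user's posting rights. The class, function and pass mark below are illustrative assumptions for this article only, not any platform's real API or part of the study's methodology.

```python
# Hypothetical sketch of the "quiz before reinstatement" idea described above.
# Names and thresholds are illustrative assumptions, not a real platform system.

from dataclasses import dataclass


@dataclass
class QuizQuestion:
    post_text: str          # example post shown to the suspended user
    options: list[str]      # candidate community standards
    correct_index: int      # the standard the post actually violated


def passes_reinstatement_quiz(questions: list[QuizQuestion],
                              answers: list[int],
                              pass_mark: float = 0.8) -> bool:
    """Return True if the suspended user may create and share content again.

    The user is shown example posts and asked which community standard each
    one violated; posting rights are restored only when enough answers are
    correct, which is the learning/conditioning step Dr Clune describes.
    """
    correct = sum(1 for q, a in zip(questions, answers) if a == q.correct_index)
    return correct / len(questions) >= pass_mark


if __name__ == "__main__":
    quiz = [
        QuizQuestion("Post targeting a user with repeated insults",
                     ["Hate speech", "Spam", "Bullying and harassment"], 2),
        QuizQuestion("Post advertising counterfeit goods",
                     ["Regulated goods", "Misinformation", "Nudity"], 0),
    ]
    # A user who identifies both violated standards regains posting rights.
    print(passes_reinstatement_quiz(quiz, answers=[2, 0]))  # True
```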

Providing short educational videos about conduct and implementing learning tasks can help users become aware of platform community standards. Photo: Getty

Should social media platforms educate users on community standards?

According to Dr Clune, users should recognise that social media organisations are diligently removing content that breaches their community standards from their platforms, and that this is a positive and important step.

But for content moderation to have a more substantial conditioning effect, social media platforms need to consider how they can make users more aware of the community standards they must uphold when creating and sharing content online. They also need to educate users who breach these standards, so those users understand how to modify their behaviour and avoid future sanctions.

“However, social media organisations have shown less concern about educating users who are subject to content moderation, meaning many users are left confused or angry when access to these platforms is temporarily or permanently restricted,” says Dr Clune.

“For users who have content inappropriately flagged or removed, confidence in social media organisations would rightly be low. This may dissuade users from using these platforms regularly. I think this is an important consideration for social media organisations to reflect on. Some users are on social media platforms to be disruptive; they will never be conditioned.

“Others may be unaware that their content breaches the platform's community standards. When these users get notifications that they are suspended from the platform, they often react with confusion and anger. They wonder what they are being held accountable for.” 

Read more: Digital activism: how social media fuelled the Bersih movement

Does content moderation limit freedom of speech?

While responsible content moderation is vital in preventing the detrimental impact of misinformation and hate speech on society, there are concerns that limiting what people can say online could have profound implications for democracy, public discourse and social cohesion.

Dr Clune says accountability processes like content moderation should seek to modify user behaviour – and this doesn’t threaten freedom of speech or democracy. It just means ensuring that there are consequences (accountability) for misbehaviour.

“People often get concerned when they hear platforms like Facebook or Instagram are seeking to condition their interactions,” explains Dr Clune. “Usually, the conversation involves discussing how it violates individuals’ ‘freedom of speech’. People forget that freedom of speech does not mean freedom from consequences.

“If you bully or defame someone through social media, you are open to legal prosecution in the same way you would be if you did it at work or on the street. Social media organisations are responsible for intervening in such activities on their platforms. Legislation in the EU will soon mandate this. If social media organisations fail to moderate certain content, they will face monetary penalties or be banned in the EU. It’s almost certain that other jurisdictions will follow the EU’s lead in the years ahead.”

Accountability processes, including content moderation, aim to modify user behaviour without threatening freedom of speech. Photo: Getty

What are the implications for social media platforms?

Policymakers are starting to take an interest in content moderation, but mostly from the perspective of ensuring social media organisations identify, remove and report illegal content shared on their platforms, explains Dr Clune.

“It's clear social media organisations are developing the capacity to automatically and proactively – that is, before it is seen or reported by other users – identify and remove illegal content on their platforms. They are getting better at this every year. This means policymakers are not setting unrealistic expectations when devising legislation to mandate specific forms of content moderation. Most organisations are already doing, on a voluntary basis, what will become required of them,” he says.
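As an illustration of what proactive detection means in practice, the sketch below routes each post on an automated violation score: high-confidence violations are removed before other users see them, borderline cases go to human review, and everything else is allowed. The scoring scale, thresholds and action names are assumptions made for this example, not the platforms' actual detection pipelines.

```python
# Illustrative sketch only: a simplified, threshold-based version of the
# proactive moderation flow described above. The violation score, thresholds
# and action names are assumptions, not any platform's real pipeline.

from typing import Literal

Action = Literal["remove", "human_review", "allow"]


def route_post(violation_score: float,
               remove_threshold: float = 0.95,
               review_threshold: float = 0.60) -> Action:
    """Route a post based on an automated violation score between 0 and 1.

    High-confidence violations are removed before other users see or report
    them; borderline cases are queued for human moderators; everything else
    is allowed.
    """
    if violation_score >= remove_threshold:
        return "remove"
    if violation_score >= review_threshold:
        return "human_review"
    return "allow"


if __name__ == "__main__":
    for score in (0.98, 0.70, 0.10):
        print(f"score={score:.2f} -> {route_post(score)}")
```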

“The biggest concern with social media organisations, as we move further toward automated detection models for identifying violating content, is that they solely prioritise identifying and removing content that breaches their community standards. 

“In this approach, little focus is placed on educating and conditioning users to change their behaviours. Such an approach could have unintended effects where users get angry with the platform and escalate their inappropriate behaviours,” adds Dr Clune.

“This would mean more and more offending content gets posted on the platform, and users eventually reduce their engagement over time,” he concludes.
