This piece is co-published by the Centre for Humanitarian Dialogue (HD), Build Up and Protection Group International (PGI).

Social media’s harmful role in conflict is ever more apparent. From disinformation to influence operations and coordinated inauthentic behaviour, malign actors have a wealth of digital weapons at their disposal. Their tactics are growing in sophistication.

Mediators must be equipped not only to mitigate these risks but also to monitor them.

Together, we’re combining years of experience mediating physical conflicts with a deep understanding of the digital space to ensure that mediators and others have the right tools and skills to make peace in the digital world.

At RightsCon 2023, we hosted a joint discussion on “Keeping the digital peace: Monitoring social media peace agreements”.

As part of HD’s expanding digital conflict portfolio, social media peace agreements have been an especially promising avenue to help minimise online risks in fragile and high-risk settings around the world.

Back in 2021, we hosted a RightsCon session on what these accords could look like. Since then, HD has facilitated social media agreements in places as diverse as Bosnia and Herzegovina, Indonesia, Kosovo, Nigeria and Thailand – with more on the horizon.

These agreements can be tied to wider electoral codes of conduct, as in Thailand, or be part of mediation efforts between communities or conflict actors, as in Nigeria’s Plateau State.

While we’ve shown that agreements pertaining to the social media space are possible, the focus is now on pairing these agreements with robust monitoring mechanisms to ensure signatories stick to their commitments.

That’s why we hosted a joint workshop at RightsCon this year to explore what an ideal mechanism might look like – bringing together the mediation experience of HD with the digital investigations background of PGI and the digital peacebuilding expertise of Build Up to identify potential solutions for monitoring social media agreements.

The fascinating, constructive discussion focused on answering three key questions:

Who should monitor social media agreements?

We do not need to build a monitoring mechanism from scratch. Traditional ceasefire monitoring can provide the key ingredients for a social media monitoring body.

Ceasefire monitoring usually involves an international body and the parties to the agreement, so this could be replicated – as long as monitors are seen as legitimate and credible.

Participants discussed the possibility of an international network of monitors who could be deployed wherever and whenever needed, similar to an election observation mission.

We also discussed the need to include stakeholders from a range of sectors – including tech and law – and the role of social media platforms.

A monitoring body should engage with Facebook, Twitter, TikTok and others to help limit toxic or inflammatory content.

While the standards and scope of social media peace agreements often go beyond those of the platforms, tech companies could help the monitoring body by granting greater access to their data – access that has become increasingly difficult to obtain of late.

How should we monitor social media agreements?

There are many initiatives and actors around the world with the technical skills to analyse social media, but it is crucial to factor in the local context.

Beyond tools and techniques, monitors need local knowledge to understand the specific signals and behaviours associated with violations of a social media peace agreement.

The monitoring body must also establish clear definitions for these behaviours. We questioned whether these should be agreed upon universally or adjusted to each context.

Proxies often play significant roles in conflicts on the ground, and this extends to the online space. Signatories to an agreement can maintain the appearance of a clean social media presence while paying influencers and marketing firms to disseminate problematic content or carry out influence operations on their behalf.

To address this, monitors must conduct thorough actor mapping to identify potential third-party spoilers, strengthen accountability and help attribute proxy violations to the signatories behind them.
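To make the idea concrete, here is a minimal sketch of how the amplification side of actor mapping might look in practice, assuming monitors have already collected a list of share or repost events from platform data. The account names, data structure and repeat-amplification threshold are all hypothetical illustrations, not part of any existing monitoring mechanism.

```python
# Hypothetical sketch: flag accounts that repeatedly amplify a single
# signatory's content, as candidate proxies for closer investigation.
import networkx as nx

# Each tuple records one amplification event: (amplifier, original_poster).
# These example accounts and events are invented for illustration.
amplification_events = [
    ("influencer_a", "signatory_1"),
    ("influencer_a", "signatory_1"),
    ("marketing_firm_x", "signatory_1"),
    ("marketing_firm_x", "signatory_1"),
    ("influencer_b", "signatory_2"),
]

# Build a weighted directed graph: edge weight = number of amplifications.
graph = nx.DiGraph()
for amplifier, source in amplification_events:
    if graph.has_edge(amplifier, source):
        graph[amplifier][source]["weight"] += 1
    else:
        graph.add_edge(amplifier, source, weight=1)

# Accounts that amplify exactly one signatory, repeatedly, are worth
# qualitative follow-up; the threshold of 2 is an arbitrary placeholder.
for amplifier in graph.nodes:
    targets = list(graph.successors(amplifier))
    if len(targets) == 1 and graph[amplifier][targets[0]]["weight"] >= 2:
        print(f"{amplifier} repeatedly amplifies {targets[0]} – possible proxy")
```

A sketch like this only surfaces candidates; establishing that a flagged account is actually a paid proxy still requires the local knowledge and qualitative investigation described above.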

As artificial intelligence (AI) continues to evolve, conflict actors will gain access to more sophisticated techniques, making it easier for those looking to undermine peace through social media to do so.

That means monitors must be well-equipped to identify AI-related tactics impacting the online space.

What should we do with the results of monitoring efforts?

The “who” and “how” are important factors in developing a monitoring body, but they will ultimately lack impact without follow-up action.

The type of follow-up depends on the wider objectives of the monitoring mechanism.

Is there a technical goal or a political one? Gathering data on violations is interesting, but can we also use the results to engage with conflict actors and make agreements more impactful?

We considered how to manage violations of an agreement. What are the carrots and sticks? Which stick should the monitoring body wield?

A nuanced approach is required as not all violations can, or should, be dealt with in the same way.

A signatory with a minor, first-time violation may simply need informal engagement by the monitoring body to encourage stronger adherence to the agreement.

On the other hand, a signatory with repeated large-scale violations could face a public “naming and shaming” to exert pressure to behave responsibly.

We also explored how to include the public in monitoring efforts to bolster the long-term sustainability of social media agreements once an official monitoring body has stepped back.

What next?

The thoughtful ideas and rich insights shared at RightsCon 2023 have inspired us to keep refining monitoring mechanisms for social media agreements and to build a blueprint we can take forward.

As the workshop showed, there are plenty of people and organisations keen to help develop these ideas.

Our work does not stop here. We must push an approach that includes a wide array of stakeholders to create effective, legitimate and sustainable monitoring mechanisms to keep the digital peace.

Get in touch with us at digitalconflict@hdcentre.org to be a part of this innovative work.