Platform providers offer anti-terrorism ‘actions’

Published on 22/05/2019 | Written by Heather Wright


Christchurch Call signatories pen their response – but is it all just virtue signalling?…

The social media platforms that signed up to the Christchurch Call have (allegedly) now committed to corrective actions in a nine-step pledge.

Amazon, Facebook, Google, Microsoft and Twitter – all of whom were signatories to the Christchurch Call – say they’ve committed to an additional nine ‘actions’ to address ‘the abuse of technology to spread terrorist and violent extremist content’.


The steps include five individual actions each company will take, plus a further four collaborative actions they’ll take as a group. Like the Christchurch Call ‘pledges’, all of the actions are non-binding.

Bizarrely, Microsoft seems to be the only one to publish the full text of the pledge. Try googling “As online content sharing service providers, we commit to the following:” (including the quotation marks) – it comes up with just one result – from a Microsoft blog. So we’re taking Microsoft’s word here for what was actually pledged.

The individual actions include identifying appropriate checks on livestreaming to reduce the risk of extremist content being disseminated online.

Those actions could, the companies say, include enhanced vetting measures, such as streamer ratings or scores, account activity or validation processes, and moderation of some livestreaming events.
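
None of those measures is spelled out in any detail, but in outline a vetting gate built on such signals is straightforward enough. Here’s a minimal sketch in Python – every field name and threshold below is hypothetical, since the pledge specifies none:

```python
from dataclasses import dataclass

# Illustrative only: the fields and thresholds below are hypothetical,
# not drawn from any platform's actual policy.
@dataclass
class Account:
    days_old: int
    is_validated: bool      # e.g. passed an identity/validation process
    prior_violations: int   # community-standards strikes
    streamer_score: float   # 0.0-1.0 rating derived from past activity

def may_livestream(account: Account) -> bool:
    """Decide whether an account clears the bar to go live."""
    if account.prior_violations > 0:                   # a 'one-strike' rule
        return False
    if account.days_old < 30 and not account.is_validated:
        return False                                   # new, unvetted accounts wait
    return account.streamer_score >= 0.5               # minimum streamer rating

# A ten-day-old, unvalidated account is held back despite a good score:
print(may_livestream(Account(days_old=10, is_validated=False,
                             prior_violations=0, streamer_score=0.9)))  # False
```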

Just last week Facebook unveiled a one-strike policy for live streaming violations, saying those violating community standards could be banned from streaming for an (undefined) set time. The company, which came in for fierce criticism over the live streaming of the terrorist attack, has also pledged US$7.5 million toward research into image and video analysis technology with universities including Cornell.

Also flagged in the platforms’ nine-step plan is a commitment to invest in technology to improve the detection and removal of inappropriate content, including the extension or development of digital fingerprinting and AI-based technology solutions.
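
‘Digital fingerprinting’ generally means perceptual hashing: boiling an image or video frame down to a short signature that survives re-encoding and minor edits, then comparing it against signatures of known extremist material. Here’s a minimal sketch of the idea – an ‘average hash’, assuming the Pillow imaging library, with illustrative threshold and hash values:

```python
from PIL import Image  # assumes the Pillow library is installed

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))  # greyscale thumbnail
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:                       # 1 bit per pixel: above/below mean
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_known_bad(candidate: int, known_hashes: set[int], threshold: int = 5) -> bool:
    """Near-duplicates land within a few bits of a flagged hash."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)

# Two fingerprints one bit apart are treated as the same content:
print(is_known_bad(0b1011, {0b1010}, threshold=5))  # True
```

Matching like this catches straight re-uploads; it’s heavier edits and re-cuts that defeat it, which is where the AI-based side of the commitment comes in.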

As we’ve noted before, the video content moderation tools deployed by the platforms rely on decade-old technology, with under-resourced manual interventions also in play. Last week, two months on from the attacks, The Washington Post reported that videos of the shootings were still available on Instagram and Facebook.

The addition of one or more methods of flagging or reporting inappropriate content is also on the cards – albeit something most platforms already have, and something which apparently proved ineffectual in the aftermath of the Christchurch mosque attacks. However, the companies say “We will ensure that the reporting mechanisms are clear, conspicuous and easy to use, and provide enough categorical granularity to allow the company to prioritise and act promptly upon notification of terrorist or violent extremist content.”
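
The pledge doesn’t say what ‘categorical granularity’ should look like in practice, but the intent is that a report’s category determines its place in the review queue. A hypothetical sketch – the categories and priorities below are invented for illustration:

```python
from enum import Enum

# Hypothetical taxonomy and priorities; the pledge names neither.
class ReportCategory(Enum):
    TERRORIST_CONTENT = "terrorist or violent extremist content"
    HATE_SPEECH = "hate speech"
    SPAM = "spam"

PRIORITY = {  # lower number = reviewed sooner
    ReportCategory.TERRORIST_CONTENT: 0,
    ReportCategory.HATE_SPEECH: 1,
    ReportCategory.SPAM: 2,
}

def triage(reports: list[ReportCategory]) -> list[ReportCategory]:
    """Order the review queue so extremist content is acted on first."""
    return sorted(reports, key=PRIORITY.get)

queue = [ReportCategory.SPAM, ReportCategory.TERRORIST_CONTENT]
print(triage(queue)[0].value)  # terrorist or violent extremist content
```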

Regular transparency reports and updated terms of use, community standards, codes of conduct and acceptable use policies round out the individual actions each company has agreed to.

Meanwhile, as a group they’ve agreed to work with industry, governments, educational institutions and NGOs to develop ‘a shared understanding of the contexts in which terrorist and violent extremist content is published and to improve technology to detect and remove’ such content more effectively and efficiently. That includes shared data sets to accelerate machine learning and AI, and the development of open source and other shared tools for detecting and removing content, allowing companies of all sizes to contribute to detection and removal across platforms and services.
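
Shared hash databases along these lines already exist – the Global Internet Forum to Counter Terrorism runs one – and the mechanics are simple in outline: one company fingerprints confirmed material, and every participant checks uploads against the pooled list. A bare-bones sketch, with a plain in-memory set standing in for whatever shared infrastructure the companies actually build:

```python
import hashlib

# A plain in-memory set stands in for the shared industry database;
# the real infrastructure and API are unspecified in the pledge.
SHARED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

def contribute(data: bytes) -> None:
    """A participating company flags confirmed extremist material."""
    SHARED_HASHES.add(fingerprint(data))

def check_upload(data: bytes) -> bool:
    """Every participant can then block re-uploads of the same file."""
    return fingerprint(data) in SHARED_HASHES

contribute(b"bytes of a confirmed extremist video")
print(check_upload(b"bytes of a confirmed extremist video"))  # True
print(check_upload(b"bytes of a holiday video"))              # False
```

Byte-level hashes like these break as soon as a file is re-encoded – hence the pairing with shared machine learning data sets, so that models trained on the pooled material can generalise where exact matching cannot.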

Crisis protocols – which would enable sharing of information to shut down future incidents – and education are also on the list, along with the very general action to ‘combat hate and bigotry’. “We commit to working collaboratively across industry to attack the root causes of extremism and hate online,” the five say. “This includes providing greater support for relevant research – with an emphasis on the impact of online hate on offline discrimination and violence – and supporting capacity and capability of NGOs working to challenge hate and promote pluralism and respect online.”

The nine actions are in addition to the seven ‘pledges’ online service providers have committed to in the ‘Christchurch Call’. Those pledges include taking ‘transparent, specific measures’ to prevent the upload of terrorist and violent extremist content and to prevent its dissemination, including by immediately and permanently removing it. Also included is the pledge to implement immediate, effective measures to mitigate the risk of such content being disseminated through live streaming, including identification of content for real-time review and a review of algorithms and other processes that may drive users towards and amplify such content.

The actions and pledges sound good – albeit deliberately vague on any specific detail – and any action from the companies is better than none. But given that none are binding, they are largely symbolic and questions remain over whether any of what is being proposed is actually new or radical enough to stem the tide.

Indeed, a report from NZ’s Law Foundation – released ahead of the Christchurch Call meeting in Paris – says that while the Christchurch Call is a positive initiative, it falls short of dealing with the scale of the challenge – something it’s likely to also say of the platform companies’ plans.

“It is critical that the Prime Minister and her advisors look beyond immediate concerns about violent extremism and content moderation, to consider the wider context in which digital media is having a growing and increasingly negative impact on our democracy,” says lead researcher Marianne Elliott.

The report’s proposals include collective action to influence the major platforms, with groups such as technology workers and digital media users using their leverage to demand ethical product design.

Fake news, the study argues, can be countered by investing more in public interest media and alternative platforms, leading to a more democratic internet.

It also points to evidence that online platforms enabling citizen participation in decision-making can improve public trust and lead to more citizen-oriented policies.

The report, Digital Threats to Democracy, serves up its own six action points:

  1. Restore a genuinely multi-stakeholder approach to internet governance, including meaningful mechanisms for collective engagement by citizens/users;
  2. Refresh antitrust and competition regulation, taxation regimes and related enforcement mechanisms to align them across like-minded liberal democracies and restore competitive fairness;
  3. Recommit to publicly funded democratic infrastructure including public interest media and the online platforms that afford citizen participation and deliberation;
  4. Regulate for greater transparency and accountability from the platforms including algorithmic transparency and accountability for verifying the sources of political advertising;
  5. Revisit regulation of privacy and data protection to better protect indigenous rights to data sovereignty and redress the failures of a consent-based approach to data management; and
  6. Recalibrate policies and protections to address not only individual rights and privacy but also collective impact and wellbeing.
