Published on 14/11/2019 | Written by Heather Wright
NZ Privacy Commissioner says duty of care flouted by US social media firms…
The world’s digital platform ‘oligarchs’ have been put on notice by New Zealand Privacy Commissioner John Edwards (pictured), who served up a scathing assessment of companies hiding behind the protections of home jurisdictions and freedom to innovate arguments, and called for a duty of care requirement to be imposed on tech companies.
Speaking at the International Association of Privacy Professionals A/NZ conference in Sydney, Edwards said companies such as Facebook and Google needed to adapt to the jurisdictions in which they operate – not the other way around.
He called out Facebook over its comments that the ACCC’s Digital Platforms Inquiry recommendations would put Australia out of step with other countries and make targeted advertising practically unworkable there.
“But why should Australia have ‘alignment and consistency’ to suit Facebook?” Edwards says.
His speech outlined the challenges faced by countries worldwide as they grapple with policy to control the large social media platforms.
While data protection laws are being rushed into play in many regions, Edwards says these alone are unable to combat the challenges of the new digital economic order, where systems ‘are designed to be deployed at a scale that regards the entire population of New Zealand as trivial’ and freedom to innovate and free speech are popular mantras.
“It may be that we need some more agile consumer protection mechanisms to allow protection and privacy authorities to work together with other consumer safety regulators to respond more assertively to the emergence of harmful products in the marketplace,” Edwards says.
While other industries, such as pharmaceutical or aviation, face rigorous safety testing and regulations to keep consumers safe, Edwards says the tech giants continue to argue that their innovations outweigh potential harms and that the market should decide which ideas fly or die, avoiding any screening or preapproval processes.
It was on this front Edwards was most scathing, laying the blame for the livestreaming of the March Christchurch terror attacks fairly and squarely at Facebook’s feet, and noting the company launched its livestreaming service knowing full well ‘the platform could be used to broadcast rape, suicide, murder and other such content.’
“It knew, and failed to take steps to prevent its platform, and audience and technology, from being used in that way.
“It was predictable and predicted. And the company responsible was silent – confident that the protection afforded by its home jurisdiction [which says providers of interactive computer services are not the publisher of content] would shield it from liability everywhere,” Edwards says.
“They assured the public that a combination of artificial intelligence, and human moderation would mitigate those risks… But who examined the biases built into the AI?”
“The most chilling moment in Facebook’s post-Christchurch response – apart from admitting that the changes they belatedly put in place would have prevented the gunman from streaming his attack – was a statement to US Congress by Facebook’s policy director for counter terrorism, who said its algorithm did not detect the massacre livestream because there ‘was not enough gore’. AI trained on Isis beheadings did not detect and flag a rapidly firing AR15,” he says.
Facebook wasn’t the only focus of Edwards’ wrath. He noted a New Zealand case in which Google’s trending topics algorithms – doing exactly what they were designed to do – were triggered by high numbers of searches for the name of a murder accused, a name suppressed by court order in New Zealand but printed internationally. Details of the defendant’s identity were then pushed out in an alert to Kiwi subscribers, despite the suppression order.
New Zealand’s Minister of Justice laid the blame at Google’s feet, saying it had failed to ensure court orders were respected. Google apologised and suspended the alert system, but was later caught breaching the name suppression again.
“Perhaps a pre-approval or pre-screening requirement to certify digital products and services would be too cumbersome to administer. But if we are not going to insist on independent ex ante review, we should at the very least impose a duty of care on those who launch products without thinking through the potential harm and taking all reasonable steps to mitigate those harms,” Edwards says.
His speech didn’t delve into what a duty of care might look like or how it could be enforced.
Edwards says one of the lessons from Christchurch was that by joining together and presenting an ‘irrefutable moral right’, countries can force industry to act – as seen in the Christchurch Call.
“We need to combine internationally to push back against the one-sided offering we get from the companies that profit from our populations’ data.”