Privacy watchdogs lining up against Clearview AI

Published on the 14/07/2020 | Written by Heather Wright



Tech subterfuge and the dark arts of biometric algorithms…

Clearview AI’s future is becoming increasingly murky as the controversial AI facial recognition-for-cops system becomes mired in new investigations.

Latest to join the fray are the Office of the Australian Information Commissioner and the UK Information Commissioner’s Office, which last week opened a joint investigation into the company’s personal information handling practices – in particular its use of scraped data and biometrics.

The investigation into Clearview AI ‘follows preliminary enquiries’ with the company, the OAIC says.

The investigation will focus on Clearview’s use of scraped data and biometrics.

The OAIC and ICO investigation came just three days after Canada’s privacy watchdog – which is also investigating Clearview’s actions – announced that Clearview was ceasing to offer its services in Canada, including the ‘indefinite suspension’ of its contract with the Royal Canadian Mounted Police – its last remaining client in Canada. The Office of the Privacy Commissioner of Canada says its investigations into Clearview will continue, as will its related investigation into the RCMP’s use of the technology.

Clearview, founded by Australian (now US-based) Hoan Ton-That, is also facing legal action in several US states, including California.

Clearview AI, which bills itself as ‘a research tool used by law enforcement agencies to identify perpetrators and victims of crime’, allows users to upload a picture of an individual and match it against photos of that person collected from the internet. If a positive match is found, Clearview links to where the photos appeared – potentially providing personal details, such as a person’s name.
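To illustrate the general technique only – not Clearview’s proprietary system – the sketch below shows how photo-to-database face matching typically works, using the open-source face_recognition library. The scraped_images folder, the query.jpg photo and the 0.6 threshold are hypothetical stand-ins, not details from Clearview.

```python
# Minimal sketch of face matching against a database of collected images.
# Assumes the open-source face_recognition library; folder and file names are hypothetical.
from pathlib import Path
import face_recognition

# Build a small "database": one face encoding per image, keyed by source file.
database = {}
for image_path in Path("scraped_images").glob("*.jpg"):      # hypothetical folder of collected photos
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                                             # skip images with no detectable face
        database[str(image_path)] = encodings[0]

# Encode the uploaded query photo.
query_image = face_recognition.load_image_file("query.jpg")  # hypothetical uploaded picture
query_encodings = face_recognition.face_encodings(query_image)

if query_encodings:
    query = query_encodings[0]
    sources = list(database.keys())
    distances = face_recognition.face_distance([database[s] for s in sources], query)
    # Report any stored image whose face is close enough to count as a match,
    # along with where that photo came from.
    for source, distance in zip(sources, distances):
        if distance < 0.6:                                    # the library's default match threshold
            print(f"Possible match: {source} (distance {distance:.2f})")
```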

The company, whose primary investors include billionaire venture capitalist Peter Thiel, claims to have a database of more than three billion images – reportedly scraped from various social media platforms and other websites.

Thiel was the first big investor in Facebook (he has since sold most of his stake in the company but remains on Facebook’s board), is the co-founder of the CIA-backed and notoriously secretive big data startup Palantir, and is a one-time outspoken Trump supporter who in recent months has distanced himself from the US president’s re-election campaign. Palantir, incidentally, confidentially filed for an IPO last week.

Facebook, Twitter, Google and YouTube all reportedly sent cease and desist letters demanding the company stop taking photos and data from their sites.

The scraping of data may be the focus of the joint Australia-UK investigation, but it’s just one aspect raising the ire of privacy bodies around the world, with the company’s argument that it only uses publicly available images doing little to quell concerns.

Throw in rising concerns over the use of facial recognition, particularly in the wake of the BLM movement – witness IBM, Amazon and Microsoft’s recent backdowns on facial recognition for law enforcement – along with the opacity of AI biometrics and algorithms and an increased focus on privacy, and Clearview appears to be treading dangerous waters.

The technology to identify us all based on a photo of our face isn’t new – but companies capable of developing such tools have tended to shy away from doing so, thanks to privacy concerns. Not so for Clearview, which touts its software as ‘groundbreaking’. That it may be, but the expansive facial recognition database has raised serious privacy concerns, with the Electronic Frontier Foundation among those criticising the company and calling for stronger consumer privacy laws to provide protection and recourse against companies using consumers’ personal data without their knowledge or consent.

Not helping the situation is the secretiveness with which some law enforcement organisations have been using the system.

A data breach earlier this year exposed just how widely used the system is, revealing it had more than 600 customers. That data breach outed the Australian Federal Police and other Aussie law enforcement agencies as users. While the New Zealand Police declined to confirm or deny their use of the system to iStart at the time, it was later revealed they too had used the system – in an unauthorised trial.

 
