Published on 18/06/2020 | Written by Jonathan Cotton
IBM, Amazon and Microsoft won’t supply FR technology to US authorities. What gives?…
Against a backdrop of police violence and civil unrest in the United States, tech giants IBM, Amazon and Microsoft have separately announced they will be stepping back – to varying degrees – from any facial recognition technology deals with US police forces.
The news comes on the heels of Australia’s Department of Home Affairs launching a new biometric system, provided by Unisys, to identify persons of interest, and embarrassing gotchas for police on both sides of the ditch experimenting with the tech.
It is a market undergoing significant growth: valued at US$3.2 billion last year, the facial recognition industry is predicted by Grand View Research to reach US$7 billion by 2027.
The technology is evolving 'at an explosive rate', says the research company.
“Technologies such as biometrics are extensively used in order to enhance security… The biometrics are universal, unique and measurable and thus can be used to provide security solutions.”
Using connected or digital cameras to detect and measure faces, then matching those features against images stored in a database, facial recognition is versatile technology with new use cases emerging daily.
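At its core, that matching step reduces to comparing numerical "embeddings" of faces. The sketch below is a toy illustration only: real systems derive embeddings from neural networks, and the vectors, names and threshold here are hypothetical stand-ins.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """Return the best-matching enrolled identity above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, reference in database.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled embeddings (a real database holds many thousands).
database = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}

probe = [0.88, 0.12, 0.21]  # embedding extracted from a camera frame
print(identify(probe, database))  # closest enrolled identity: person_a
```

The threshold is where the much-debated error rates live: set it too low and the system produces false matches; too high and it misses genuine ones, and those error rates have been shown to vary across demographic groups.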
But following the death of George Floyd at the hands of Minneapolis police officers, the demonstrations that have followed, and the knowledge that facial recognition initiatives have repeatedly been found to exhibit and exacerbate race and gender biases, three of tech's biggest players have announced it's time to pull the plug on their law enforcement facial recognition projects.
IBM started the ball rolling, announcing earlier this month that it will be stepping back from the facial recognition game entirely.
“IBM no longer offers general purpose IBM facial recognition or analysis software,” wrote IBM CEO Arvind Krishna in a letter to Congress, dated 8 June.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”
That should go without saying, of course, but Krishna went one further, calling for ‘a national dialogue’ on ‘whether and how’ facial recognition technology should be used by domestic law enforcement.
“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
It’s a big statement, but also something of a symbolic gesture. After all, IBM has only a tiny share of the facial recognition market, and a smaller one still of the law enforcement segment. Nor is it actually cutting ties with US law enforcement: its contracts for AI-based crime-prediction technology remain in place.
Following Krishna’s statement, Amazon announced a 12-month moratorium on police use of its facial recognition technology, to give ‘Congress enough time to implement appropriate rules’.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” the company said in a post.
It might also be a case of once bitten, twice shy for Amazon. The company has been embarrassed by its algorithms before, famously ditching its AI-powered recruiting engine after it was discovered the system was discriminating against women.
“Its algorithm used all CVs submitted to the company over a 10-year period to learn how to spot the best candidates,” said Maude Lavanchy, research associate at IMD Business School, at the time.
“Given the low proportion of women working in the company, as in most technology companies, the algorithm quickly spotted male dominance and thought it was a factor in success.”
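How a learner picks up such a proxy can be shown with a toy sketch. The CVs, terms and labels below are entirely fabricated; the point is only that a frequency-based scorer will penalise any term that happens to correlate with past rejections, whether or not it has anything to do with ability.

```python
import math
from collections import Counter

def word_weights(pos_docs, neg_docs):
    """Log-odds weight per word, with add-one smoothing.
    Positive weight = associated with hiring; negative = with rejection."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    return {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}

# Fabricated historical data skewed by past male-dominated hiring.
hired = [
    "python engineer chess club",
    "java engineer chess club",
    "python engineer football",
]
rejected = [
    "python engineer netball womens club",
    "java engineer womens society",
]

weights = word_weights(hired, rejected)

# "womens" never appears in a hired CV, so it receives a negative
# weight: the model has turned a gendered proxy into a penalty.
print(weights["womens"] < 0, weights["engineer"] > 0)
```

Nothing in the data says the term is relevant to job performance; the skew in the historical labels alone is enough to produce the discriminatory weight, which is the failure mode Lavanchy describes.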
Now Microsoft has joined the party, with president Brad Smith declaring the company won’t sell facial recognition technology to US police forces until there are federal laws in place regulating it.
If IBM, Amazon and Microsoft wanted to make a statement about the use and misuse of facial recognition technology – and the behaviour of too many US law officials – now is surely their moment. But none of IBM, Amazon or Microsoft has a significant share of the market either. Rather, key players globally include Unisys, along with organisations such as FacePhi, NEC, Idemia, Cognitec Systems, Precise Biometrics and Fujitsu.
Perhaps a statement needs to be made on this side of the equator too. Earlier this year it was revealed that the Australian Federal Police, as well as police forces in Queensland, Victoria and South Australia, had been using facial recognition – without supervision or accountability – for some time.
Equally troubling, Australian police agencies initially denied they were using the service, delivered by controversial facial recognition technology company Clearview AI.
“The denial held until a list of Clearview AI’s customers was stolen and disseminated, revealing users from the Australian Federal Police as well as the state police in Queensland, Victoria and South Australia,” says Jake Goldenfein, lecturer at Swinburne University of Technology and board member of the Australian Privacy Foundation.
“This development is particularly concerning as the Department of Home Affairs, which oversees the federal police, is seeking to increase the use of facial recognition and other biometric identity systems.”
It’s a similar situation in New Zealand with recent revelations that police have trialled facial recognition technologies without clearance from, or even the knowledge of, superiors or the Police Commissioner.
Now that those lapses have been made public – and we’ve had announcements from three tech giants – where do we actually stand?
This is likely not the end of facial recognition technology in law enforcement – almost certainly the opposite. Given that, perhaps now is the time to make sure the technology is adopted in a way that serves the people who will ultimately have to pay for it.