When AI goes wrong: The AIID reveals all

Published on 16/06/2021 | Written by Heather Wright



Lessons in what not to do with AI…

It’s the AI database you probably don’t want to end up on: The AI Incident Database is a collection of incidents in which the deployment of intelligent systems has caused harm, or ‘near harm’, in the real world.

Just as road deaths are recorded in the Australian Road Deaths Database and by New Zealand’s Ministry of Transport with the aim of providing statistical evidence and insight into issues, so too the AI Incident Database (AIID) aims to record and learn from AI’s failings in order to develop safer, more ethical AI.

“Avoiding repeated AI failures requires making past failures known,” Sean McGregor, a machine learning architect at Syntiant and developer of the AIID, says in a blog post announcing the venture.

“Avoiding repeated AI failures requires making past failures known.”

He says the AIID, from the non-profit Partnership on AI, was inspired by similar databases in aviation and the Common Vulnerabilities and Exposures system which logs publicly disclosed cybersecurity vulnerabilities and exposures.

“Even well-intentioned intelligent system developers fail to imagine what can go wrong when their systems are deployed in the real world,” McGregor says.

“These failures can lead to dire consequences, some of which we’ve already witnessed, from a trading algorithm causing a market ‘flash crash’ in 2010 to an autonomous car killing a pedestrian in 2018 and a facial recognition system causing the wrongful arrest of an innocent person in 2019.”

While the behaviour of traditional software is, for the most part, well understood, machine learning and AI systems can’t be completely described and tested.

But worse, he says, is that the AI community has no formal systems or processes through which AI researchers and developers can learn from the mistakes of the past.

The AIID aims to rectify that. Launched in late 2020, it already contains nearly 100 incidents, including Microsoft’s racist, homophobic and just generally all-around offensive Tay Twitter chatbot, feuding Wikipedia bots, autonomous vehicle accidents, bear repellent-spraying robots and a worryingly high – sorry, ‘non-negligible’ – number of robotic surgery deaths and malfunctions.

Google tops the current list for AI-driven ‘incidents’, clocking up at least 15, including a number of autocomplete fails leading to defamation claims around the world, Australia included.

Google’s self-driving cars also feature heavily, as do Tesla and Delphi autonomous vehicles.

Amazon is also a repeat offender, with at least five incidents logged.

Microsoft, Apple, LinkedIn, Facebook, Wikipedia… the listing is a veritable who’s who of tech and AI companies, highlighting both the issues companies face in getting AI right and the high-profile nature of the technology. (It should be noted that the Partnership on AI, which hosts the AIID, was founded by AI researchers at the likes of Google, Facebook, Apple and Amazon.)

McGregor hopes the database, designed for product managers, risk officers, engineers and researchers, will provide ‘the most pragmatic coverage of AI incidents through time’, with the goal of reducing negative consequences from AI in the real world.

The AIID’s definition of an incident is broad – ‘a situation in which AI systems caused, or nearly caused, real-world harm’ – and entries range from Alexa going on a dollhouse shopping spree after a news anchor mentioned a girl saying ‘Alexa ordered me a dollhouse’, to incidents where robots have caused the deaths of co-workers. A suicidal robot which ‘drowned’ itself in an office fountain also makes the list.

Australia’s Centrelink debt recovery system is also flagged in the database, and New Zealand makes the list for a robot passport checker which rejected an Asian man’s passport photo for ‘closed eyes’ in 2016.

Many of the incidents centre around algorithmic bias.

The database can be queried for incidents by keyword, such as ‘facial recognition’, as well as by other inputs such as the source, and provides links to media reports detailing the issues.
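For readers who want to explore the data programmatically, here is a minimal sketch of that kind of keyword-plus-source query, assuming the incident reports have been exported to a local CSV file. The file name and column names used here are illustrative assumptions, not the AIID’s actual schema:

import csv

def search_reports(path, keyword, source=None):
    """Yield rows whose title or text mentions the keyword,
    optionally restricted to a single source domain."""
    keyword = keyword.lower()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            haystack = (row.get("title", "") + " " + row.get("text", "")).lower()
            if keyword in haystack and source in (None, row.get("source_domain")):
                yield row

# Example: list every report mentioning facial recognition
# (assumes "aiid_reports.csv" exists locally).
for report in search_reports("aiid_reports.csv", "facial recognition"):
    print(report["title"])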

McGregor offers examples of how each of those users – product managers, risk officers, engineers and researchers – can put the database to work.

“If a product manager discovers incidents where intelligent systems have caused harms in the past, they can introduce product requirements to mitigate risk of recurrence,” he says.

Engineers can benefit, he says, from checking the AIID to learn more about the real world their systems are deployed within and the biases that need to be corrected.

For risk officers, he cites the example of an organisation planning to launch an automatic translation feature. The database includes 40 reports matching ‘translate’, including one Facebook incident where a social media status update of ‘good morning’ was translated as ‘attack them’.

“After discovering that incident, the risk officer could read reports and analyses to learn that, although it is currently impossible to technologically prevent this sort of mistake from happening, there are a variety of best practices in mitigating the risk, such as clearly indicating the text is a machine translation.”
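That last mitigation is simple enough to sketch. The hypothetical helper below wraps whatever a translation service returns with an explicit disclosure; the function name and wording are illustrative, not a prescribed practice:

def present_translation(original, translated, target_lang):
    """Attach a clear machine-translation disclosure to translated text."""
    return (
        f"{translated}\n"
        f"[Machine translation to {target_lang} – may contain errors. "
        f"Original: \"{original}\"]"
    )

# Example: label a (stand-in) translation of a status update.
print(present_translation("good morning", "bonjour", "French"))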

While aviation and road fatality databases get much of their strength from being mandatory – all road deaths, and similarly all air accidents, are automatically fed into the New Zealand and Australian databases – the AI Incident Database has no such requirements.

Anyone can contribute to the database, with McGregor reportedly approving new additions.

 
