EU looks to ban ‘unacceptable’ AI use

Published on 28/04/2021 | Written by Heather Wright


EU Artificial Intelligence Act

Fines of €30m or six percent of turnover on the cards…

The EU is proposing fines of up to six percent of a company’s turnover, or €30 million, as it seeks to rein in ‘unacceptable’ use of AI.

To be clear, the EU isn’t looking to ban most AI use, but it is focused on regulating AI applications deemed to be ‘unacceptable’ or ‘high’ risk. Unlike New Zealand’s voluntary Algorithm Charter for government departments – dubbed a world first when it debuted last year – the EU’s proposed Artificial Intelligence Act would enshrine rules for AI development in law for all organisations, complete with weighty fines.

AI applications deemed unacceptable risk – that is, those considered ‘a clear threat to the safety, livelihoods and rights of people’ – would be banned under the proposed regulations. That includes AI systems or applications which manipulate human behaviour to circumvent free will – such as toys using voice assistance to encourage dangerous behaviour – and systems that allow ‘social scoring’ by governments, à la China’s social credit system. However, some ‘narrow exceptions’ would exist, such as in preventing terrorist threats or locating a missing child.

“Our rules will intervene where strictly needed.”

High risk systems cover a much wider range of applications, spanning sectors from critical infrastructure to education, worker management, law enforcement, migration and the administration of justice. High risk AI systems will be subject to ‘strict obligations’ both before they can be put on the market and after launch.

Among the requirements for high risk systems are risk assessment and mitigation systems, high quality datasets to minimise risks and discriminatory outcomes, and logging of activity to ensure traceability of results. Appropriate human oversight and clear information for users are also mandated.

The proposed regulation, which faces an arduous approval process, also includes restrictions and safeguards around the controversial use of remote biometric identification systems, such as by law enforcement – albeit with wide-ranging exemptions.

Limited risk systems, such as chatbots, would face transparency obligations, while the makers of ‘minimal risk’ systems – which the EU says make up the ‘vast majority of AI systems’ – would face no restrictions, though voluntary codes of conduct are proposed.

Military use of AI is excluded from the regulation, which is focused solely on the EU’s internal market, but the proposed Artificial Intelligence Act, like the GDPR, would apply to any company selling AI products or services in the EU, not just EU-based companies.

Violations of the rules could see fines of up to €30 million or up to six percent of global annual revenue for companies, whichever is higher.
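For illustration, the ‘whichever is higher’ rule means the flat €30 million figure acts as a floor for smaller companies, while the six percent figure dominates for larger ones. A minimal sketch in Python (hypothetical turnover numbers, purely illustrative, not legal guidance):

# Proposed penalty ceiling under the draft Act: the greater of €30m
# or six percent of a company's global annual turnover.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(30e6, 0.06 * global_annual_turnover_eur)

print(max_fine_eur(200e6))  # 30000000.0 - six percent is €12m, so the €30m floor applies
print(max_fine_eur(1e9))    # 60000000.0 - six percent is €60m, exceeding the €30m floor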

When it comes to governance, the EU Commission is proposing that national market surveillance authorities supervise the new rules, with a European Artificial Intelligence Board facilitating their implementation and driving the development of standards.

The proposed legal framework for AI is the latest in a line of work from the EU on AI. Back in 2017, the European Parliament issued recommendations on Civil Law Rules on Robotics, followed by a resolution containing recommendations for a civil liability regime. It’s also issued several reports and white papers on the topic.

The ‘Proposal for a Regulation laying down Harmonised Rules on Artificial Intelligence’, however, aims to create ‘the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally’.

Margrethe Vestager, the European Commission’s executive vice-president for a Europe fit for the Digital Age, says “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Innovation killer or loophole-ridden waste?
But while Vestager has dubbed the proposed rules innovation-friendly, some have already expressed concerns that they could limit innovation.

Cecilia Bonefeld-Dahl, director general of Digital Europe and a member of the European Commission’s High Level Expert Group on AI, says “After reading this regulation, it is still an open question whether future start-up founders in ‘high risk’ areas will decide to launch their business in Europe.”

She says the new rules need to be streamlined if Europe is to become a global innovation hub, and that smaller companies will need guidance and financial support, as well as simplified processes, to navigate the requirements.

“We need to nurture smaller companies through effective sandboxing, not bury them in new rules.”

Despite the wariness, Digital Europe welcomes the risk-based approach and calls the proposal ‘a good baseline’, but says there’s still work to be done.

“The inclusion of AI software into the EU’s product compliance framework could lead to excessive burden for many providers,” Bonefeld-Dahl says.

“This field is dominated by smaller firms with little to no experience in market access regulations, designed years ago for physical products. It is also an industry that needs to move quickly – to update AI software, apply the latest technological developments or address flaws.”

On the flip side, however, there’s plenty of criticism that the proposals contain too many ‘significant’ loopholes and lack meaningful safeguards to protect against discrimination.

Griff Ferris, legal and policy officer with criminal justice NGO Fair Trials, says that while the ‘much-needed legislative approach to regulate and limit the use of AI’ is welcome – and even more so the recognition that the use of AI in criminal justice is ‘high-risk’ – the legislation does not go far enough.

“The new legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice.

“The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial, which includes restricting the use of systems that attempt to profile people and predict the risk of criminality.”

Fair Trials argues that the exemption allowing the use of even high risk AI in order to safeguard public security gives law enforcement and justice authorities carte blanche to use the systems, ‘completely undercutting any attempts to safeguard against discrimination or protect the right to a fair trial’.

The Civil Liberties Union for Europe, which has previously been outspoken in its opposition to biometric surveillance – calling for it to be banned outright – has also hit out at the legislation.

 
