Legal issues likely to dampen AI/ML uptake

Published 20/06/2018 | Written by Heather Wright


[Image: AI/ML racial profiling and legal issues]

When machines are making the decisions, they’re not always black and white…

Legal issues are likely to slow the uptake of artificial intelligence and machine learning, according to a visiting analyst, who has cautioned A/NZ businesses to carefully consider the implications before embracing the technologies.

Gareth Herschel, Gartner research vice president, told iStart that while the impact of AI and machine learning (ML) will be ‘profound’ and will apply across almost all areas of businesses, it must be approached ‘with a degree of care’.

“ML is just another category of analytic technique and it is one that organisations should mature towards. ML and AI are potential endpoints on a long journey, but they are further along the journey than most organisations necessarily need to be today,” Herschel said.

“The impacts will be profound and there’s literally no part of the organisation that could not be affected because what ML and AI are doing is refining your understanding of the dynamics of a particular process or domain – and every part of an organisation could do with that refinement of understanding,” Herschel said.

However, he said limited skilled resources and potential restrictions on how ML can be applied from a social, ethical or legal perspective means some areas of business may ultimately prove to be off-limits.

“From a legal perspective, you need to be able to explain and justify what you did and how you did it,” Herschel said. “The challenge with some of the more advanced ML techniques where you start to get into AI-types of domain is that they work, but it is quite difficult to explain exactly how they work or what they’re focused on in order to work. You can control what data goes in but it’s very difficult to process exactly how that data is used.”

Herschel said while governments will play a role in setting guidelines around some data use, ultimately AI and ML uses are likely to be tested in a court of law.

“The problem is when you start getting into the implications of the data. It might be illegal to use ethnic background as a criterion for decision making, but it may be allowable to use hobbies and interests. The problem is that certain sports are more popular within certain ethnic groups than others. So if you don’t want to allow people interested in basketball to do this thing, say, that is effectively an indirect proxy for certain ethnic groups.

“That’s where it becomes a very difficult balance to make and I think it will end up getting tested in a court of law… because there will be concerns around whether certain actions occurred, whether there was discrimination and so on.”
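Herschel’s basketball example can be made concrete with a short sketch. The data, group names and rates below are invented for illustration (they are not from the article): a decision rule that never sees the protected attribute, only a legally permissible hobby feature, can still treat the two groups very differently when the hobby is correlated with group membership.

```python
import random

random.seed(42)

# Synthetic, illustrative data only (names and rates are assumptions):
# two groups, and a hobby feature that is legal to use as a decision
# criterion but is much more common in one group than the other.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    p_hobby = 0.7 if group == "B" else 0.2  # assumed, exaggerated correlation
    population.append({"group": group,
                       "likes_basketball": random.random() < p_hobby})

# A decision rule that never sees the protected attribute, only the hobby:
# approve anyone who does NOT have the hobby.
def approval_rate(group):
    members = [p for p in population if p["group"] == group]
    approved = [p for p in members if not p["likes_basketball"]]
    return len(approved) / len(members)

# The rule still approves the two groups at very different rates —
# the hobby acts as an indirect proxy for group membership.
print(f"approval rate, group A: {approval_rate('A'):.2f}")  # roughly 0.80
print(f"approval rate, group B: {approval_rate('B'):.2f}")  # roughly 0.30
```

This is the mechanism courts are likely to probe: no protected attribute was used directly, yet the outcome is measurably skewed by group.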

He said A/NZ companies looking to introduce ML or AI into their businesses should first consider what business outcome – such as improved efficiency or increased revenue – they’re seeking.

“Then they can think about the key decisions that are going to really help drive that outcome. Those are business questions. From there you get into what kinds of insights or analysis will allow you to make those decisions more effectively.”

He said at that point, companies need to strike a balance between the analysis used and its sophistication and predictive power versus the ability to explain that analysis.

“Simpler models tend to be easier to explain but don’t necessarily perform as well, and that’s where you get into this question of ‘what is the decision being made and how important is it to have a very high degree of accuracy versus a high degree of explainability’.”

The message for business?

Be very clear in defining the use cases where decisions or recommendations will be informed by AI/ML algorithms. Where these are non-controversial but important, aim for a higher degree of sophistication and accuracy. For anything that could be controversial, with potentially negative outcomes in the form of legal action or bad publicity, it may be more important to focus on the explainability and justification of the model.

And, as multiple recent media reports have shown, what you think is only for internal use can very quickly become public, so proceed with caution, openness and honesty regardless of whether the audience is internal or external.


Questions or comments...

  1. Anonymous

    That is not a photo of Gareth Herschel, but a very poorly chosen image to somehow relate the sub-title of this article: “When machines are making the decisions, they’re not always black and white…”. Is iStart so lacking in imagination you must resort to this to illustrate a completely unrelated point about AI?

    1. Hayden McCall

      Thanks for your comment – the story was missing the photo credit which has now been added. This is a (granted – fairly oblique) reference to the negative impacts of racial profiling, which was part of Herschel’s example of the risks of profiling basketballers as black. But noted – and you’re probably right – thank you.


