A/NZ AI governance ‘a mess’

Published on 28/11/2024 | Written by Heather Wright


Adding new stakeholders to the governance equation…

Luke Ellery has a stark warning for the many Australian and New Zealand organisations that view AI as just another technology: it’s not. And failure to recognise its differences, and to provide the governance structures to deal with those differences, can result in a sticky mess.

The Sydney-based Gartner analyst says AI behaving badly or going ‘off-piste’ is a business risk rather than a technology risk, and requires governance that considers a broader set of stakeholders, with collaboration between diverse teams to both test and monitor models.

“More and more you need a diverse range of people involved to consider all the implications.”

But AI governance across Australia and New Zealand is often ‘disorganised’ and ‘in a mess’, he told iStart.

And some of that starts at the very beginning, with why, and how, organisations are adopting AI, in particular generative AI.

“Some organisations are just trying to use AI wherever they can,” he says.

“They’re looking at different tools and solutions and throwing AI at things and hoping something sticks.”

Others, however, are being more structured, applying a business case focus to any addition of AI to their tech stack.

But either way, many organisations are running into ‘a bit of trouble’ when it comes to the governance side of things, and Ellery and his Gartner colleagues are increasingly fielding questions from customers on the topic.

“There is so much hype around AI and maybe an under-appreciation of the risks involved. Some organisations haven’t thought of AI in terms of their corporate governance and how they govern AI, and if they do have some governance structures in place, they haven’t changed them to address the types of risk that AI, especially genAI, poses.”

Bunnings recently found itself hauled over the coals for its use of facial recognition technology in Australia, while last year Levi’s found itself in the firing line after saying it was going to use AI to support diversity by superimposing its jeans on stock photos of people from different racial backgrounds and people with disabilities. Unsurprisingly, there was an uproar.

Ellery holds both cases up as examples of where any controls the companies had in place likely didn’t include the right people on the approval board.

“Especially for AI, it needs to be more than just enterprise architecture and security that are involved. More and more you need a diverse range of people involved to say ‘what are the implications?’ What are the regulatory implications, such as with Bunnings, but also what are the community expectations, or our own employee expectations?”

Gartner breaks AI adoption into three models:

– Defend, which is the everyday AI, such as using Microsoft Copilot and the like

– Extend, where AI is used to update or supplement existing processes, or where agents inject AI into service applications, for example to enable service reps to resolve customer problems and upsell at the same time, and

– Upend, where organisations – a very few organisations, it should be noted – are looking to do something completely new and transformative.

Even for those taking a defend stance, Ellery says governance is crucial.

“You have to ask what types of data are going to be fed into the AI systems, whether you can control it, is it contained, is it your own instance or a public instance.

“And the next question is whether the system learns off your use of that capability. Is it using your prompts and how you interface with that to inform and build the model?”

That applies as much to those using their own instance as to those using a public instance.

He cites a client who had their own instance and said they weren’t concerned about the model learning off the company’s use of the system.

“They gave the example that when they asked it a question and it sent a response, they updated and said change the response to include these elements. That learning is actually the IP in your organisation. It’s the way you are doing things, so there is a risk that if your organisation and other competitors are using the same system, you are diluting your unique IP.”

But Ellery isn’t advocating for separate AI governance boards or committees. On the contrary, he’s clear that organisations should always try to use their existing governance mechanisms and enhance them.

“If you have already got some sort of council who reviews new technology procurements, or an architecture review board, use those existing mechanisms and extend them, including the additional teams that can provide guidance around those more sensitive, ethical, regulatory and customer expectations.”

The issue of ongoing governance is one of the biggest areas of concern among many of Ellery’s clients.

It is often the business managing AI solutions, and Ellery says that can prove challenging as teams grapple with vendor relationships and the very different technical risks presented by AI models. While hallucinations are often noted publicly, a less commonly discussed issue, and a bigger concern from a business perspective, is drift, where a model’s performance degrades over time.

“Whenever I speak to someone who is not in tech and say AI is not a calculator, they get a bit confused. Every computer they have used has been reliable with regard to things such as mathematical answers. But with AI, and the flexibility you are expecting from the code, which makes it more human, it is more variable.

“So managing those systems is more intensive,” he says.

“We see that organisations that deploy AI, especially genAI, need to have some way of monitoring the model’s performance over time and ensuring its integrity, whether through audit or oversight. New approaches are required to monitor AI – you can’t just observe what staff are doing as a compliance team might do – and that’s another capability we need to help the business with.”
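Ellery doesn’t prescribe specific tooling, but the kind of ongoing oversight he describes can start with something as simple as tracking a model’s evaluation scores against a go-live baseline and flagging when quality slips. As a rough, hypothetical sketch (the scores, threshold and function names below are illustrative assumptions, not a Gartner recommendation), a basic drift check in Python might look like this:

```python
from statistics import mean

# All names and numbers here are hypothetical, for illustration only.
# Weekly evaluation scores for a deployed model (e.g. accuracy or
# human-rated answer quality on a fixed review set).
baseline_scores = [0.91, 0.92, 0.90, 0.91]  # scores at go-live
recent_scores = [0.88, 0.85, 0.84, 0.82]    # latest review period

DRIFT_TOLERANCE = 0.05  # flag an average drop of more than 5 points


def drift_detected(baseline, recent, tolerance=DRIFT_TOLERANCE):
    """Return True if average recent performance has degraded beyond tolerance."""
    return mean(baseline) - mean(recent) > tolerance


if drift_detected(baseline_scores, recent_scores):
    # In practice this would feed a dashboard or raise an item for the
    # governance committee, rather than simply printing a warning.
    print("Model drift detected: schedule a review of the model.")
```

Whatever form such checks take, the point Ellery makes is that their output should land with the governance committee, not stay buried in the technical team.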

Vendor management responsibilities are ‘a bit heightened’ and business teams will need help to manage AI vendors, some of whom are very large and powerful and less than flexible.

The AI governance framework should also include AI education and training, alongside AI test environments.

“All these things build up to make an overall AI governance framework. It’s not just about the committees, it’s about all the elements including monitoring.”

So what needs to be part of the governance framework and compliance measures?

Ellery says having an AI policy is the first step, especially as more regulation pops up. He advocates looking at what the EU is doing with its AI Act, which he expects to be influential for organisations, even locally.

“Have a policy that talks about the expectations regarding ethics, dangerous AI being prohibited… because at the top you want that policy, agreed at the board or appropriate committee level, that everyone else can work under.”

Then ensure you have the education and training to enable people to understand AI and what its limitations are – beyond just hallucinations.

Back that up, he says, with a supportive AI governance committee, where people can bring up ideas and get recommendations on how to approach it, including taking a business case approach so costs, risks and value can be assessed.

“Having a committee starts building the corporate knowledge that over time updates how we look at the policy, how we look at the training, and creates a more supportive environment for AI in the organisation.”
