Finding the AI use case eluding many

Published on 26/08/2024 | Written by Heather Wright


And as for governance…

AI uptake among local organisations may be on the up, but identifying specific use cases is proving a challenge for many, and organisations are being warned that without a use case, results can prove elusive.

Lou Campagnone, director of AI at Datacom, says it’s not uncommon to see organisations whose boards want them to ‘do something with AI’.

“It might be a cool idea, but you only want to be doing it if you can see the benefits.”

“To some extent you can just stumble upon a result if you’re using a generalist tool, but the customers we see having the best results are the ones who had a specific use case and matched a tool to that use case,” Campagnone says.

Research from Datacom shows two-thirds of Kiwi companies are now using some form of AI, up from 48 percent last year. Of those using AI in their organisation, 80 percent described it as having a positive impact on business operations, while two percent said it hasn’t had a positive impact and 18 percent were unsure.

Respondents – 200 senior managers – weren’t asked about the nature of those results, but Campagnone says many customers are looking at results beyond the financial perspective.

“It’s not so much just about output but outcome. That might be a financial improvement or an increase in efficiency, but you also need to think about improvement in outcomes, such as customer or staff experience, as well.”

For 29 percent of those not using AI, not having a use case was the biggest barrier.

Campagnone says that lack of use case and strategy, along with a lack of governance, were the big surprises from the research.

She says discovering the use case comes down to ‘almost surgically dissecting’ a role or function, and working out tasks that could be outsourced to AI.

“I talk to customers about what parts of their role they would like to outsource to a clever intern.

“Then we prioritise the tasks based on the benefits to the organisation. Is this a task that might help only one person in the organisation or is it something that if we augment or optimise it using AI, will help many people?”

Other factors, including efficiency gains and the risk factor of the task, also need to be taken into account, she says.

“It’s thinking about it in quite a scientific way and then focusing on that prioritised task.”

Once a use case is found, you can look at the data available and how it is going to be used.

“A lot of people get overwhelmed by data readiness for AI and think all their data needs to be ready at the same time. But if you actually start with the use case you can then start with ‘what data do we have available for that particular use case?’.”

On the flip side, she says data and AI can also be used to work out the best use cases, identifying the ‘smoke signals’ within an organisation, such as complaints coming into a contact centre, and enabling organisations to move into a more proactive space of experience enhancement, rather than complaint resolution.

For many organisations, once the first use case is identified, a flood follows, with organisations ‘moving from caution to utter enthusiasm’.

“You need to have an innovation pipeline within your organisation where you start to look at not just a use case, but how it will benefit you as an organisation. Because it needs to be a practical use case, but it also needs to be tangible. It might be a cool idea, but you only want to be doing it if you can see the benefits.”

Unsurprisingly, she suggests working on ‘back stage use cases’ – those not visible to the customer – in the first instance, and not going all in. And, crucially, don’t be afraid to drop an AI use case if you’re not seeing benefits, and move on to something else.

While Datacom’s survey shows positive sentiment from Kiwi companies around AI – 70 percent describe it as ‘exciting, I support it’ – that may not last, with Deloitte warning that while organisations are investing in AI, enthusiasm among senior executives and boards of directors on the generative AI front is beginning to wane.

The State of Generative AI in the Enterprise: Now Decides Next, based on a survey of 2,770 director to c-suite level respondents across 14 countries, found the ‘new technology’ shine is wearing off. While interest remains high, it was down 11 percentage points on last year to 63 percent.

Tellingly, the large majority of respondents said 30 percent or fewer of GenAI experiments have moved into production.

Gartner, too, has warned of a cooling of enthusiasm around generative AI, with the technology heading into the ‘trough of disillusionment’ as the rubber meets the road for companies seeking real benefits from the technology.

Deloitte says globally, governance is proving a challenge – 29 percent of respondents in its survey cited the lack of a governance model as a top barrier to successful GenAI deployment.

That echoes locally too. While Datacom found 61 percent of Kiwi organisations feel well educated on security risk, just 13 percent of those using AI have audit assurance and governance frameworks in place.

“Organisations might be feeling more confident with it… and starting to roll out solutions, but we’re seeing a lot wanting help to operationalise it and take it to the next level,” Campagnone says.

That includes developing AI charters, guardrails and governance frameworks and AI centres of excellence – something Campagnone says is needed ‘otherwise it can get to a point where you don’t have the visibility of all the different solutions in your organisation’.

“It’s really important as part of your AI Charter to think about your soft and hard governance.”

Soft governance covers the guidelines and recommendations for safe and responsible AI development at the proof-of-concept stage.

Hard governance is needed before moving into production and rolling out to the organisation or customers.

“That’s where we recommend having an AI governance forum within your organisation and having some really strict security guidelines.”

Keeping humans in the mix to check AI outputs also remains crucial.

She notes that in order to govern AI, you need good visibility of all the AI solutions across the organisation.

Justin Gray, Datacom New Zealand managing director, says guardrails don’t have to be restrictive or exhaustive, but need to provide clarity for the business around key issues such as AI risks and how these are being managed, and acceptable use of AI and of the data being used by AI tools and applications.

“It is critical that AI is not seen as an ‘add-on’ but is built into overall business strategy. Organisations need to be clear about the benefits that are being targeted and that can only happen if AI is part of your strategy and not an afterthought.”
