Six ChatGPT risks your business needs to evaluate

Published on the 30/05/2023 | Written by Heather Wright


Six ChatGPT risks your business needs to evaluate

Privacy Commissioner, Gartner and legal firms weigh in…

Demand for ChatGPT and large language models is apparently running high across Australia and New Zealand, with local companies keen to embrace the technology. But experts are also sounding warnings, saying organisations are underprepared and may lack the oversight and expertise required to manage risk.

According to a recent Gartner survey, CEOs worldwide say AI is the technology most likely to significantly impact their industry in the next three years.

Locally, Brian Ferreira, Gartner VP and executive programs team manager, says there are notable differences in AI views between private and public sector CEOs.

“All agencies should be thinking about the consequences of using generative AI before they start.”

While commercial CEOs in Australia and New Zealand are targeting growth and are more concerned about inflation and recession possibly not being shallow and short, government CEOs are concerned about increasing debt, targeting constrained economic support and curbing additional spending.

“A/NZ CEOs are open to spending more on technology if it supports growth or holds market share, which is why it continues to be a top priority,” Ferreira says,

“In some cases, they chose to increase spending risk for better technology outcomes rather than waiting for technology to prove itself and possibly risk business stability.”

He says generative AI has taken off in A/NZ, with many organisations starting new business, client and operating efficiencies.

“Fast adopters are already upskilling all staff on AI capabilities and view generative AI as a new core business capability that will impact industries and business models,” Ferreira says.

That’s something that’s being seen worldwide. A separate Gartner survey of 2,500 executive leaders worldwide, found 45 percent had increased investment in AI on the back of ChatGPT’s publicity.

But while enthusiasm is high, Frances Karamouzis, Gartner distinguished VP analyst, says organisations will likely encounter a host of trust, risk, security, privacy and ethical questions as they start to develop and deploy generative AI.

Her comments are echoed by partners at international law firm Baker McKenzie, who warn that left unaddressed, organisational blind spots around generative AI and ChatGPT-style technologies’ ethical and effective deployment could overshadow the transformative opportunities offered and cause organisations to lose pace with the explosive growth of the technology.

New Zealand Privacy Commissioner Michael Webster last week outlined his expectations around the use of generative AI by Kiwi agencies, businesses and organisations.

“I would expect all agencies using systems that can take the personal information of New Zealanders to create new content to be thinking about the consequences of using generative AI before they start.”

He’s warning companies to review whether generative AI tools are ‘necessary and proportionate’ or whether an alternative approach could be taken, and to conduct a privacy impact assessment before implementing any system.

Gartner has identified six ChatGPT risks it says organisations need to assess their exposure to in order to build appropriate measures to steer responsible use of ChatGPT.

Risk one is the well-known tendency for ChatGPT and other large language models to hallucinate answers, providing incorrect, though superficially plausible, information.

Ron Friedmann, Gartner Legal and Compliance practice senior director analyst, says legal and compliance leaders need to ensure employees review all ChatGPT output for accuracy, appropriateness and actual usefulness, before it is accepted.

That need for human review is echoed by Webster who says any review of output data should also assess the risk of reidentification of the inputted information.  

Data privacy and confidentiality is also a key concern noted by both Gartner and Webster, with Friedmann warning that companies need to be aware that any information entered into ChatGPT, if chat history is not disabled, may become part of its training dataset.

“Sensitive, proprietary or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” Friedmann warns.

He’s cautioning organisations to explicitly prohibit entering sensitive data into public large language model tools as part of the compliance framework.

Webster warns that personal or confidential information must not be input into tools ‘unless it has been explicitly confirmed that inputted information is not retained or disclosed by the provider’.

“An alternative could be stripping input data of any information that enables re-identification,” he says. “We would strongly caution against using sensitive or confidential data for training purposes.”

“Organisations who implement generative AI should assume that data fed into AI tools and queries will be collected by third-party providers of the technology,” Baker McKenzie adds. “In some cases, these providers will have rights to use and/or disclose these inputs.”

The law firm notes that IP considerations apply both on the input and output side.

In a piece for the World Economic Forum, the Baker McKenzie partners note that the first wave of generative AI IP litigation is already being seen in the US. Getty Images sued AI company Stability AI early this year in the US, accusing it of misusing more than 12 million Getty photos to train its model. It’s also taking action in the UK.

Friedmann says companies need to be aware of the potential to violate copyright or IP protections, given ChatGPT in particular, was trained on vast volumes of data scrapped from the internet data.

A recent paper by academics from the University of California, Berkeley found the data ChatGPT was trained on includes text from copyrighted books, including Harry Potter, George Orwell’s Nineteen Eighty-four, The Hitchhiker’s Guide to the Galaxy, the Lord of the Rings trilogy, the Hunger Games books and A Game of Thrones.

While their report, Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT4, doesn’t focus on the copyright ramifications, plenty of others have questioned whether the use of copyright-protected content to train generative AI models is legal – and whether compensation could be claimed for the works.

“ChatGPT does not offer source references or explanations as to how its output is generated,” Friedmann notes. He’s urging companies to keep a keen eye on any changes to copyright law that apply to ChatGPT output, and require users to scrutinise any output they generate to ensure it doesn’t infringe on copyright or IP rights.

Staying on top of laws governing AI bias – another well publicised issue – is also important. Friedmann says that may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls.

Even with that, however, he warns that complete elimination of bias ‘is likely impossible’.

Bias isn’t unique to AI of course, with Baker McKenzie noting virtually all decision making, whether AI-based or otherwise, creates bias. But it says the risk for companies lacking AI governance structures and oversight from key stakeholders, or ones who rely wholesale on third party tools, is the use of such tools in a way that creates organisational legal liability such as discrimination claims in the hiring process.

“Companies that use these tools must develop a framework that identifies an approach to assessing bias and a mechanism for testing and avoiding unlawful bias, as well as ensuring relevant data privacy requirements are met.”

Cyber fraud risks, and consumer protection risk round out Gartners six key risks for companies to consider before embarking on their generative AI journey.

The potential for ChatGPT to be used by bad actors have already prompted plenty of handwringing.

February survey of IT decision makers by Blackberry found 51 percent predicting a successful cyberattack, credited to ChatGPT, within a year, while 71 percent said they believed nation states were already likely using the technology for malicious purposes.

“Bad actors are already misusing ChatGPT to generate false information at scale – eg fake reviews,” Gartner says, noting that LLM models are also susceptible to prompt injection – a process of hijacking the output, enabling hackers to trick the model into performing tasks such as writing malware codes or developing phishing sites.

Audits of due diligence sources to verify quality of information, will be required Friedmann says.

On the consumer side, companies will need to make ‘appropriate’ disclosures to customers about the use of ChatGPT, such as in a customer support chatbot, and ensure they’re complying with relevant regulations and laws. Failure to do so could erode customer trust and cost business – and see businesses potentially charged with unfair practices under some laws, Gartner says.

Baker McKenzie, meanwhile, is urging organisations to take a more holistic approach, moving current approaches beyond siloed efforts and bringing together previously discrete functions under the umbrella of a strong governance framework.

“While many organisations rely on data scientists to spearhead AI initiatives, all relevant stakeholders, including legal, the c-suite, boards, privacy, compliance and HR, need to be involved throughout the entire decision making process,” the law firm says.

“Hand in hand with this, organisations should prepare and follow an internal governance framework that accounts for enterprise risks across use cases and allows the company to efficiently make the correct compliance adjustments once issues are identified.”

Post a comment or question...

Your email address will not be published.

This site uses Akismet to reduce spam. Learn how your comment data is processed.

MORE NEWS:

Processing...
Thank you! Your subscription has been confirmed. You'll hear from us soon.
Follow iStart to keep up to date with the latest news and views...
ErrorHere