Published on 19/02/2025 | Written by Heather Wright

The C-suite challenge…
The C-suite is split on the appropriate balance between innovation on one side and accountability and ethics on the other, as AI’s rapid growth outpaces effective governance.
An NTT Data survey of 2,300 C-suite leaders and decision makers across 34 countries shows enthusiasm, especially around GenAI’s potential, is tempered by concerns about how to advance AI responsibly, with tension between prioritising innovation and focusing on responsibility.
“While innovation and responsibility are often seen as opposing forces, locked in a tug-of-war, they don’t have to be.”
Nearly a third of respondents believe innovation matters more than responsibility, while another third say it’s the other way around; the remaining third rank both as equally important.
Despite the differences of opinion on what matters most, 60 percent agree there is a significant gap between innovation and responsibility.
The report, The AI Responsibility Gap: Why Leadership is the Missing Link, also highlights the need for leadership guidance, with 81 percent saying the guidance from their leadership team on balancing innovation with responsibility was ‘very important’. The number seeing the guidance as crucial rises as investment increases.
But if leaders’ guidance is what they want, it’s not forthcoming: 71 percent of CISOs say their organisation lacks clarity from leaders on responsibility – a shortcoming that leads to innovation taking precedence.
That desire for guidance extends to government regulations, with more than 80 percent of respondents saying unclear regulations are hindering AI investment and implementation.
The survey’s results mirror tensions seen globally on the political front, where the US has been pulling back on AI regulation, while Europe continues to tighten its approach to AI.
One of Donald Trump’s first acts as US President earlier this year was to revoke a 2023 executive order from Joe Biden which required developers of AI systems posing risks to national security, the economy, public health or safety to share the results of safety tests with the government before releasing their models to the public.
In New Zealand, the government has flagged a ‘light-touch’ approach to regulation, relying on existing technology-neutral laws, such as privacy, consumer protection, IP and human rights protections, rather than specific AI laws. A Public Service AI Framework was released earlier this year, with work on a private sector version underway.
In Australia, AI regulation remains a work in progress. A Voluntary AI Safety Standard including guardrails for best practice was released late last year, with the federal government also proposing mandatory guardrails for AI in high-risk settings.
Both countries have signed the Bletchley Declaration on AI safety, agreed at the UK’s AI Safety Summit, which includes a call for the safe and ethical development of AI, in particular ‘frontier AI’ systems which pose the most urgent risks.
The AI Responsibility Gap also digs into why organisations choose to prioritise innovation over responsibility, and vice versa. In organisations where innovation mattered more, the key factors flagged were a greater need for business growth, a lack of budget or resources to focus on responsibility, and a lack of perceived risk around compliance or ethics.
Conversely, organisations where responsibility mattered more had clear direction from leaders, a preference for a safe, established approach over industry disruption, and a need to comply with government and industry guidelines.
“A greater need for business growth drives leaders’ focus on innovation, while a rising insistence on responsibility swings the balance toward responsibility. The two are not intended to be exclusive, but the pressure on executives makes it feel that way,” the report notes.
It says while innovation and responsibility are often seen as opposing forces, locked in a tug-of-war, they don’t have to be.
“The most forward-thinking and high-performing organisations understand that integrating responsibility into the innovation process is both ethical and strategic. It supports sustainable progress while mitigating risk.”
The report recommends adopting a ‘responsible by design’ philosophy – building responsibility in from the ground up, end to end, and aligning initiatives with company values, regardless of what is mandated.
Also recommended are upskilling the workforce; employing multilevel governance that goes beyond legal requirements to meet standards set by stakeholders; and focusing on global collaboration – forging partnerships and working proactively with governments, academic institutions and global organisations to establish global standards.
While 90 percent of executives said they worried about AI security risks, only a quarter of CISOs said they have a robust governance framework in place. A workforce that isn’t ready for AI is also an issue, with 67 percent of respondents saying their employees lack the skills to work effectively with AI, and 72 percent admitting they don’t have an AI policy in place to guide responsible use.
There were also concerns about sustainability, with 75 percent of leaders saying their AI ambitions are conflicting with corporate sustainability goals, forcing them to rethink energy-intensive AI solutions.