Published on 06/08/2019 | Written by Jonathan Cotton
Academics say we need a government-level plan to cope with the ramifications of AI. But are we at risk of stifling innovation?
Is it time for a country-wide AI strategy? New reports from both sides of the Tasman say yes, and that the time to act is now.
First up, a new paper from the Australian Council of Learned Academies argues that the coming AI boom brings with it big benefits – improving wellbeing, lifting the economy, strengthening environmental sustainability and creating a more equitable, inclusive and fair society.
That’s just the beginning, however. The same paper warns of serious implications for workers and for isolated communities, along with risks to privacy and cultural values – just to name a few.
“This report was commissioned by the National Science and Technology Council to develop an intellectual context for our human society to turn to in deciding what living well in this new era will mean,” Australia’s chief scientist Alan Finkel says.
“What kind of society do we want to be? That is the crucial question for all Australians, and for governments as our elected representatives.”
It’s a good question, and to that end, the paper examines three broad categories of enquiry: The future AI transformation of the economy; the ethical, legal and social considerations required for broad AI uptake; and education, skill and infrastructure requirements to manage workforce transition and support ‘thriving and internationally competitive artificial intelligence industries’.
So what does the report actually find?
First and foremost, the paper finds that AI promises myriad opportunities to improve economic, societal and environmental wellbeing, while also presenting ‘potentially significant global risks, including technological unemployment and the use of lethal autonomous weapons’.
“Further development of AI must be directed to allow well-considered implementation that supports our society in becoming what we would like it to be – one centred on improving prosperity, reducing inequity and achieving continued betterment.”
That focus on broad community wellbeing is certainly a central theme of the paper. The report highlights the importance of communication and community awareness, and advocates for ‘strong governance and a responsive regulatory system that encourages innovation’; digital infrastructure that works to include isolated communities; education opportunities and supportive immigration policies; and the creation of an independent AI body made up of government, academia and members from the public and private sectors.
So yes, quite broad, but with a clear central thesis: Regulation today will save us tears tomorrow.
The New Zealand release, dubbed The age of Artificial Intelligence in Aotearoa, is a far slimmer document but hits many of the same marks.
“Issues of disruption and inequality are likely as [AI transformation] occurs,” the report warns.
“However, it is difficult to predict the changes that artificial intelligence will bring. Yet, [as] with any advance in technology, society does not need to blindly adopt it – we can decide how and where the technology can be used.”
It all seems extremely reasonable. But not everyone is convinced that central planning is the solution to AI uncertainty.
“Most discussions about artificial intelligence are characterised by hyperbole and hysteria,” says Wim Naudé, Professorial Fellow at Maastricht Economic and Social Research Institute on Innovation and Technology, United Nations University.
“Though some of the world’s most prominent and successful thinkers regularly forecast that AI will either solve all our problems or destroy us or our society, and the press frequently report on how AI will threaten jobs and raise inequality, there’s actually very little evidence to support these ideas.”
What’s more, says Naudé, that could actually end up turning people against AI research, ‘bringing significant progress in the technology to a halt’.
“As a result of the hype and hysteria, many governments are scrambling to produce national AI strategies,” observes Naudé.
“International organisations are rushing to be seen to take action, holding conferences and publishing flagship reports on the future of work.
“For example the United Nations University Centre for Policy Research claims that AI is ‘transforming the geopolitical order’ and, even more incredibly, that ‘a shift in the balance of power between intelligent machines and humans is already visible’”.
Naudé says that the tone of discussion about the current and near-future state of AI, much of it ‘unhinged’, threatens both ‘an AI arms race and stifling regulations’.
“This could lead to inappropriate controls and moreover loss of public trust in AI research. It could even hasten another AI-winter – as occurred in the 1980s – in which interest and funding disappear for years or even decades after a period of disappointment.
“All at a time when the world needs more, not less, technological innovation.”
He might have a point too. Much of the discussion around AI certainly does have some of the hallmarks of apocalyptic thinking. So where does the truth lie? And perhaps more importantly, what the hell do we do next?
One thing’s for sure: these are polarising times, and it would be a pity if the conversation around AI devolved into the same politically driven stalemate that currently characterises the climate change debate – with both sides of the argument committed to denying the concerns of the other.
When it comes to AI, surely a little regulation is a good thing – depending on the type, quality and intention behind it, of course. Deciding just what that looks like – and how much is too much – is certainly the question of the moment.