Is the Australian government doing enough to mitigate AI risks?

The Australian government has finally released a response to last year’s public consultation on the safe and responsible use of AI, writes UNSW Sydney's Toby Walsh

The federal Minister for Industry and Science, Ed Husic, has revealed the Australian government’s interim response on the safe and responsible use of artificial intelligence (AI). The Australian public, however, has real concerns about AI, and rightly so.

AI is a powerful technology that is entering our lives very quickly. By 2030, it is projected to grow the Australian economy by 40 per cent, adding $600 billion to our annual gross domestic product. A recent International Monetary Fund report estimates AI might also impact 40 per cent of jobs worldwide, and up to 60 per cent of jobs in developed nations like Australia.

The impacts will be positive in half of those jobs, lifting productivity and reducing drudgery. But in the other half, the impacts may be negative, taking away work and even eliminating some jobs. Just as lift attendants and secretaries in typing pools had to move on and find new vocations, so might truck drivers and law clerks.

Perhaps not surprisingly, in a recent survey of 31 countries by market researcher Ipsos, Australians were the most nervous about AI. Some 69 per cent of Australians, compared to just 23 per cent of Japanese respondents, were worried about the use of AI. And only 20 per cent of us thought it would improve the job market.

The Australian government’s new interim response is therefore to be welcomed. It’s a somewhat delayed reply to last year’s public consultation on AI, which received over 500 submissions from businesses, civil society and academia. I contributed to several of these submissions.

UNSW Sydney's Professor Toby Walsh explains that, like the EU, the Australian government’s interim response proposes a risk-based approach. Photo: supplied

What are the main points in the government’s response to AI?

Like any good plan, the government’s response has three legs. First, there’s a plan to work with industry to develop voluntary AI safety standards. Second, there’s a plan, again with industry, to develop options for voluntary labelling and watermarking of AI-generated materials. Finally, the government will set up an expert advisory body to “support the development of options for mandatory AI guardrails”.

These are all good ideas. The International Organization for Standardization (ISO) has been working on AI standards for several years. For example, Standards Australia recently helped launch a new international standard that supports the responsible development of AI management systems.

An industry group comprising Microsoft, Adobe, Nikon and Leica has developed open tools for labelling and watermarking digital content. Keep an eye out for the new “Content Credentials” logo that is starting to appear online.
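To give a flavour of what labelling might look like in practice, here is a minimal sketch in Python using the Pillow imaging library. It simply writes a plain-text provenance note into a PNG metadata chunk; the metadata keys, values and file names are hypothetical, and the real Content Credentials standard (C2PA) goes much further, attaching cryptographically signed manifests rather than bare tags.

```python
# A minimal, illustrative sketch of content labelling (PNG only),
# assuming the Pillow library is installed. Not the C2PA standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, embedding a simple 'AI-generated' provenance note."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # Both keys and values here are hypothetical, for illustration only.
    metadata.add_text("provenance", "AI-generated")
    metadata.add_text("generator", "example-model-v1")
    image.save(dst_path, pnginfo=metadata)

def read_labels(path: str) -> dict:
    """Return any text metadata found in a PNG, e.g. a provenance note."""
    return dict(Image.open(path).text)

# Hypothetical usage:
# label_as_ai_generated("photo.png", "photo_labelled.png")
# print(read_labels("photo_labelled.png"))  # {'provenance': 'AI-generated', ...}
```

A tag like this is trivial to strip or forge, which is why the real standard binds the label to the content with digital signatures; the sketch above only illustrates the basic idea of machine-readable provenance.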

And back in 2021, the New South Wales government set up an 11-member committee of experts to advise it on the appropriate use of artificial intelligence.


A little late?

It’s hard not to conclude that the federal government’s most recent response is a little light and a little late.

Over half the world’s democracies get to vote this year. Over 4 billion people will go to the polls. And we’re set to see AI transform those elections.

We’ve already seen deepfakes used in recent elections in Argentina and Slovakia. And the Republican Party in the US has put out a campaign advert that uses entirely AI-generated imagery.

Are we prepared for a world where everything you see or hear could be fake? And will voluntary guidelines be enough to protect the integrity of these elections? Sadly, many of the tech companies are reducing staff in this area, just at the time when they are needed the most.

The European Union has led the way in the regulation of AI: it started drafting regulations back in 2020, and the EU AI Act is still a year or so away from coming into force. This emphasises how far behind Australia is.

Diagram of impacts through the AI lifecycle, as summarised in the Australian government’s interim response. Image: Australian Government.

A risk-based approach

Like the EU, the Australian government’s interim response proposes a risk-based approach. There are plenty of harmless uses of AI that are of little concern. For example, you likely get a lot less spam email thanks to AI filters. And there’s little regulation needed to ensure those AI filters do an appropriate job.
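For a sense of how simple such low-risk AI can be, here is a minimal sketch of a text-based spam filter in Python, assuming the scikit-learn library is installed. The tiny training set and example emails are invented for illustration; production filters learn from vast labelled corpora.

```python
# A minimal, illustrative spam filter: bag-of-words features
# feeding a naive Bayes classifier, built with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set (real systems use far larger corpora).
emails = [
    "win a free prize now", "cheap pills limited offer",
    "meeting agenda attached", "lunch on thursday?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer"]))     # likely ['spam']
print(model.predict(["agenda for thursday"]))  # likely ['ham']
```

Trained on enough examples, even a simple model like this separates junk from genuine mail reasonably well, which is why few would argue such filters need heavy regulation.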

But there are other areas, such as the judiciary and policing, where the impact of AI could be more problematic. What if AI discriminates in deciding who gets interviewed for a job? Or what if bias in facial recognition technologies results in even more Indigenous people being wrongly incarcerated?

The interim response identifies such risks but takes few concrete steps to avoid them.

However, the biggest risk the report fails to address is the risk of missing out. AI is a great opportunity, as great as the internet, or greater.


When the United Kingdom government released a similar report on AI risks last year, it addressed this risk by announcing a further £1 billion (A$1.9 billion) of investment, on top of the more than £1 billion it had already committed.

The Australian government has so far announced less than A$200 million. Our economy and population are around a third of the size of the UK’s, yet our investment so far has been 20 times smaller. We risk missing the boat.

Toby Walsh is Chief Scientist of UNSW.AI, UNSW’s new AI Institute. He is a Fellow of the Australian Academy of Science. His most recent book is Machines Behaving Badly: The Morality of AI. Prof. Walsh is supported by the Australian Research Council through an ARC Laureate Fellowship exploring “trustworthy AI”. A version of this article was originally published on The Conversation.
