Australia’s cautious approach to AI laws
Are we navigating regulation fast enough?
AI is learning and evolving at breakneck speed, transforming industries across the world. However, regulation has not kept pace – and Australia is opting for the slow-and-steady method. Recently, the Albanese government released a discussion paper proposing mandatory guardrails for artificial intelligence, similar to what the European Union has in place. Another round of consultations ended in early October 2024, and we are hoping to see the government’s response, and any proposed amendments to the regulation, before 2025.
The proposal has continued the debate on how to best manage the opportunities and risks AI presents. According to Minister for Industry and Science Ed Husic, the government is determined to facilitate ‘safe and responsible’ use of AI, a sentiment echoed globally. However, critics are divided on whether this measured pace is necessary or if Australia should take decisive action sooner to avoid falling behind in the regulatory race.
The growing importance of keeping AI in check
The benefits of AI become more apparent by the day, as it reshapes everything from healthcare and finance to entertainment and manufacturing. But it also introduces a range of concerns, including ethical dilemmas, data privacy issues, and the potential for misuse that harms individuals and wider society.
Government responses to AI have been mixed. From the European Union’s stringent AI Act to the United States’ more fragmented approach, governments worldwide are seeking to strike a balance between fostering innovation and protecting the public. Australia’s strategy so far has been tentative, focusing on voluntary standards while we wait and see how other nations tackle their legislation.
What is the Voluntary AI Safety Standard?
One of the key initiatives in Australia’s current AI strategy is the introduction of the Voluntary AI Safety Standard, designed to help businesses and organisations use AI safely and responsibly. This standard encourages companies to adopt best practices when implementing AI, including ensuring the technology is explainable, unbiased, and secure.
While the introduction of a voluntary standard is a step in the right direction, some have argued that voluntary compliance might not be enough to prevent harmful AI practices. Without mandatory regulations, the onus is on businesses to take an honest approach and not prioritise profit over safety.
The proposed guardrails
Over the past year, the government’s consultation process has found, starkly, that our current regulatory system is not fit for purpose to address the risks of AI. The 10 proposed mandatory guardrails aim to address this, taking inspiration from comparable legislation in the EU and Canada.
It is proposed that organisations developing or deploying high-risk AI systems be required to:
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance
- Establish and implement a risk management process to identify and mitigate risks
- Protect AI systems, and implement data governance measures to manage data quality and provenance
- Test AI models and systems to evaluate model performance and monitor the system once deployed
- Enable human control or intervention in an AI system to achieve meaningful human oversight
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content
- Establish processes for people impacted by AI systems to challenge use or outcomes
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
- Keep and maintain records to allow third parties to assess compliance with guardrails
- Undertake conformity assessments to demonstrate and certify compliance with the guardrails
Progress or procrastination?
There’s no doubt AI regulation comes with significant challenges. On announcing the guardrails and voluntary standard, Mr Husic described the task as ‘one of the most complex policy challenges facing governments the world over’.
Australia’s cautious approach has both benefits and risks. On one hand, a slower regulatory timeline allows for thorough consultation with industry, ensuring that the resulting laws are balanced and effective. It also gives us the scope to observe international regulatory developments and choose the best elements to incorporate into our own framework.
On the other hand, waiting too long to impose mandatory rules may create a regulatory vacuum, where AI companies operate with minimal oversight.
Then there’s Intellectual Property
According to the government’s discussion paper, high-risk uses of AI should be designed and tested in a way that mitigates their risks, while maintaining an element of human control. Generative AI tools such as ChatGPT – and models that can produce video, audio, code and more – are deemed more likely to be high-risk, and there’s ambiguity about how intellectual property law applies to them.
Traditionally, only human creators or inventors can hold rights to works or patents. In copyright, AI-generated content raises debates over originality, and reforms may be necessary to clarify ownership and attribution of AI-created works going forward.
Ultimately, how we navigate AI regulation will likely need to evolve in tandem with AI technology itself. The timeline for transitioning to mandatory regulation is going to be crucial in determining the nation’s future in AI governance.
Does your business leverage AI? To learn how IP law, the voluntary standard or the proposed guardrails could impact you, contact our team today.