No
In September 2024, the Australian Government put forward a proposed set of "mandatory guardrails" for the use of AI in high-risk settings, but no legislation to give effect to those proposed guardrails has yet been made public.
Yes. At the same time as it released the proposed mandatory guardrails, the Australian Government published the "Voluntary AI Safety Standard", which largely mirrors the proposed mandatory guardrails and is focused on providing guidance to organisations looking to implement AI models and systems.
The Australian Government's "Policy for the Responsible Use of AI in Government" imposes requirements on federal government agencies and bodies, including a requirement for agencies to publish transparency statements that detail their AI usage and governance arrangements.
The US National Security Agency's Artificial Intelligence Security Center (NSA AISC), together with the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC), New Zealand's National Cyber Security Centre (NCSC-NZ) and the United Kingdom's National Cyber Security Centre (NCSC-UK), has jointly released new guidance on securing data throughout the AI lifecycle.
This guidance highlights key threats such as data poisoning, supply chain vulnerabilities and "data drift", a phenomenon where the statistical properties of input data change over time, causing the data a system receives in operation to diverge from the data on which it was originally developed.
To address these risks, the guidance outlines mitigation strategies focused on robust data management, data quality testing, and continuous monitoring of AI system inputs and outputs.
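By way of illustration only (the joint guidance is technology-neutral and does not prescribe any particular tooling), the short Python sketch below shows one common way such monitoring of inputs can be performed in practice: comparing a recent sample of model inputs against a baseline sample using a two-sample Kolmogorov-Smirnov test. The function name, synthetic data and threshold are assumptions made solely for this example.

```python
# Purely illustrative sketch (not drawn from the joint guidance): one simple way
# to monitor AI system inputs for "data drift" by comparing recent inputs against
# a baseline sample captured when the model was built. All names, data and the
# alpha threshold below are assumptions made for this example.
import numpy as np
from scipy.stats import ks_2samp


def input_has_drifted(baseline: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift in a single numeric feature using a two-sample Kolmogorov-Smirnov test.

    A p-value below `alpha` suggests the recent inputs no longer follow the
    distribution of the baseline data, i.e. the data has drifted from its
    original form and the model may warrant review or retraining.
    """
    result = ks_2samp(baseline, recent)
    return result.pvalue < alpha


# Example usage with synthetic data; the "recent" sample is deliberately shifted.
rng = np.random.default_rng(seed=0)
baseline_inputs = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent_inputs = rng.normal(loc=0.4, scale=1.0, size=1_000)

if input_has_drifted(baseline_inputs, recent_inputs):
    print("Data drift detected: escalate for review under the organisation's AI governance process.")
```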
The Senate Select Committee on Adopting Artificial Intelligence delivered its final report in November 2024. That report, together with the proposed mandatory guardrails, indicates that the Australian Government is continuing to move towards a mandatory regulatory framework for high-risk AI applications, although the Government has repeatedly stated that it wants to allow lower-risk AI usage to "flourish largely unimpeded".
On 16 April 2025, Australia's then Industry Minister, Ed Husic, affirmed the federal government's commitment to finalising a risk-based regulatory framework for artificial intelligence, despite recent shifts in approach in the US and EU. He stated that, if re-elected at the May 2025 federal election, a Labor government would prioritise the introduction of mandatory safeguards for the use of AI in high-risk contexts. We note that the Labor government was re-elected in May 2025.
At the AFR AI Summit on 3 June 2025, Australian Industry Minister Tim Ayres underscored the vital importance of digital technology and artificial intelligence in driving Australia's future economic growth, productivity and international competitiveness.
On 13 June 2025, Australian Treasurer Jim Chalmers pushed back on union demands to regulate AI at work. In an interview with the Australian Financial Review, he rejected union calls for immediate regulation of AI in Australian workplaces, saying that "[r]egulation will matter but we are overwhelmingly focused on capabilities and opportunities, not just guardrails".
No, Australian copyright and patent laws do not currently expressly address AI (including whether AI can be an author of works, who 'owns' AI-generated outputs, or the legality of using copyright works to train general purpose AI models).
In April 2022, the Full Court of the Federal Court of Australia held in Commissioner of Patents v Thaler that an AI system could not be an 'inventor' for the purposes of a patent application made under the Patents Act 1990.
In December 2024, the Australian Parliament passed the Privacy and Other Legislation Amendment Bill 2024 which, among other amendments, adds a requirement for privacy policies to contain information about substantially automated decisions that significantly affect individuals' rights or interests (including the kinds of decisions and the kinds of personal information used). These requirements come into effect in December 2026.
Otherwise, there are currently no AI-specific requirements in Australian data protection laws.
The Australian Government in October 2024 released guidance on: privacy and the use of commercially available AI products; and privacy and the development and training of generative AI models.
Australia does not have a single AI regulator; rather, multiple existing agencies oversee AI-related risks within their respective remits.
A dedicated AI regulatory framework is under consideration.
In October 2021, the OAIC determined that Clearview AI breached Australian privacy laws by collecting facial images without consent. This finding related to the scraping of biometric material (in the form of photographs of people's faces) from the internet, rather than any other aspects of the development, implementation or use of AI models.
In February 2025, the Australian government banned the use of DeepSeek on all federal devices due to national security concerns.
*Information is accurate up to 30 June 2025