Regulation
Current Regulatory Landscape
As of April 2024, Australia had no generally applicable law specifically regulating the use of artificial intelligence. The country has taken a gradual approach to AI governance, beginning with voluntary frameworks before moving toward mandatory requirements.
Voluntary Frameworks
Australia has established several voluntary frameworks:
AI Ethics Principles: Australia has 8 AI Ethics Principles designed to ensure AI is safe, secure and reliable. These principles are entirely voluntary and include human-centred values, fairness, privacy protection, reliability and safety, and transparency.
Voluntary AI Safety Standard (VAISS): Released in September 2024, the VAISS is a practical guide that helps organizations develop and deploy AI systems safely and reliably. It consists of 10 voluntary guardrails that apply to all organizations throughout the AI supply chain.
Movement Toward Mandatory Regulations
In September 2024, the Australian Government published a proposals paper introducing mandatory guardrails for AI in high-risk settings, which largely mirror the Voluntary AI Safety Standard. This indicates a shift toward more formal regulation.
Earlier, in January 2024, the Federal Government had unveiled plans for a new legislative framework to address the risks and potential harms posed by AI systems. This framework is expected to adopt a risk-based approach, with regulatory obligations proportional to the level of risk posed by specific AI applications.
International Collaboration
Australia is collaborating internationally as a founding member of the Global Partnership on AI (GPAI) and participated in the AI Safety Summit in the United Kingdom, signing the Bletchley Declaration along with 27 other countries.
Industry Standards
While not AI-specific, a number of established standards bodies and frameworks in Australia may apply to AI systems:
Standards such as ISO/IEC 27001 (Information Security Management), while not mandatory, are widely recognized across Australian industry and could be applied to AI systems.
For the AI Risk Architect role, an understanding of these evolving regulations and standards would be essential for developing appropriate governance frameworks and risk management approaches for AI systems deployed by Deloitte and its clients in Australia.