
How Australia's new AI 'guardrails' can clean up the messy market for artificial intelligence

Australia's federal government has today launched a proposed set of mandatory guardrails for high-risk AI alongside a voluntary safety standard for organizations using AI.

Each of these documents offers ten mutually reinforcing guardrails that set clear expectations for organizations across the AI supply chain. They apply to all organizations using AI, whether for internal systems aimed at boosting employee efficiency or for externally facing systems such as chatbots.

Most of the guardrails concern accountability, transparency, record-keeping and meaningful human oversight of AI systems. They align with emerging international standards, such as the ISO standard for AI management and the European Union's AI Act.

The proposals for mandatory requirements for high-risk AI, which are open to public submissions for the next month, recognize that AI systems pose distinctive challenges that limit the ability of existing laws to effectively prevent or mitigate a wide range of harms to Australians. While defining precisely what constitutes a high-risk setting is a core part of the consultation, the proposed principles-based approach would likely capture any system that has a legal effect. Examples might include AI recruitment systems, systems that may limit human rights (including some facial recognition systems), and systems that can cause physical harm, such as autonomous vehicles.

Well-designed guardrails will improve technology and make us all better off. To that end, the government should accelerate law reform efforts to clarify existing rules and to improve both transparency and accountability in the market. At the same time, we don't need to wait for the government to act, nor should we.
