Opinion: Australia has led the way in regulating gene technology for over 20 years. Here's how it should apply that experience to AI

Since 2019, the Australian Department of Industry, Science and Resources has been striving to make the nation a leader in "safe and responsible" artificial intelligence (AI). Key to this effort is a voluntary framework based on eight AI ethics principles, including "human-centered values," "fairness" and "transparency and explainability."

Every subsequent piece of national guidance on AI has built on these eight principles, imploring businesses, government and schools to put them into practice. But as voluntary principles, they have no real hold on the organizations that develop and deploy AI systems.

Last month, the Australian government started consulting on a proposal that struck a different tone. Acknowledging "voluntary compliance […] is no longer enough," it spoke of "mandatory guardrails for AI in high-risk settings."

But the core idea of self-regulation remains stubbornly baked in. For example, it is left to AI developers themselves to determine whether their AI system is high risk, by weighing a set of risks that can only be described as endemic to large-scale AI systems.

If this high hurdle is met, what mandatory guardrails kick in? For the most part, companies simply need to demonstrate they have internal processes gesturing at the AI ethics principles. The proposal is most notable, then, for what it does not include. There is no oversight, no consequences, no refusal, no redress.
