Press "Enter" to skip to content

EU AI Act: Australian IT Pros Need to Prepare for AI Regulation

The most recent (and likely final) version of the European Union’s impending AI Act was recently leaked. It is the world’s first comprehensive law designed to regulate the use and application of artificial intelligence, and history shows that, when the EU regulates something, the rest of the world tends to adopt it.

For example, companies doing business in Australia are typically GDPR-compliant, simply because European law requires them to be. The same will likely happen when the EU AI Act comes into force.

While the Australian government recommends compliance with the EU AI Act, there are no mandatory regulations governing AI use in Australia. That said, Australian businesses will need to start implementing the rules proposed by the EU law if they want to continue scaling and doing business with companies in the EU, or even with local businesses that have EU partnerships.

Australian government advice is to be compliant with the EU AI Act

Much as with GDPR, the advice from Australian officials is to comply. The Australian British Chamber of Commerce, for example, offers the following guidance to Australian organisations:

“The EU AI Act applies to all businesses that deploy AI systems in the EU market or make them available within the EU, regardless of location. Accordingly, Australian businesses conducting any of the following will have to comply with the legislation:

  • developing and marketing AI systems;
  • deploying AI systems; and
  • using AI systems in their products or services.”

With that in mind, Australian IT pros should watch the EU regulation around AI closely. It is likely to become standardised as best practice even in the absence of local regulation.

The benefit for Australian businesses of meeting the requirements of the EU AI regulations is, much as with GDPR, that once they have done so, they will essentially already be compliant when the Australian government introduces local regulations.

5 things Australians need to know about the EU AI Act

Currently, Australia does have legislation that covers components of AI, such as data protection, privacy and copyright laws. There is also an Australian AI Ethics Framework, which is voluntary but covers much of what the EU laws aim to legislate.


The eight principles at the heart of the AI Ethics Framework provide Australian organisations with a “best practices” way of thinking about how AI should be created and used, particularly with regard to human safety, benefit and fairness.

The EU approach is essentially taking these philosophical ideas and turning them into specific regulations for organisations to follow. For example, the five key areas that the EU laws will regulate are:

  • Risk classification system for AI systems: Classifications range from “minimal” to “unacceptable”; the higher the risk an AI application is considered to pose, the greater the level of regulation it is subject to (a rough inventory sketch follows this list).
  • Obligations for high-risk AI systems: Most of the regulation is focused on commitments related to ensuring data quality, transparency, human oversight and accountability.
  • Ban on certain uses of AI: Uses of AI that pose an unacceptable risk to human dignity, safety or fundamental rights, including social scoring, subliminal manipulation or indiscriminate surveillance, will be banned.
  • Notification and labelling system for AI systems: For the sake of transparency and accountability, there will also be a notification and labelling system for AI systems that interact with humans, generate content or categorise biometric data.
  • Governance structure: This will involve national authorities, a European AI Board and a European AI Office to oversee the implementation and enforcement of the AI Act.
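
As a purely illustrative exercise, the sketch below shows one way an organisation might start cataloguing its AI systems against the Act’s risk tiers. The tier names mirror the categories described above, but the AISystem record, the required_actions helper and the obligations listed are hypothetical internal-inventory examples under stated assumptions, not an official schema or legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers described in the EU AI Act, from lowest to highest."""
    MINIMAL = "minimal"
    LIMITED = "limited"            # transparency and labelling obligations
    HIGH = "high"                  # data quality, human oversight, accountability
    UNACCEPTABLE = "unacceptable"  # banned uses, e.g. social scoring


@dataclass
class AISystem:
    """Hypothetical internal inventory record for a single AI system."""
    name: str
    purpose: str
    tier: RiskTier
    deployed_in_eu: bool


def required_actions(system: AISystem) -> list[str]:
    """Return an illustrative to-do list based on the system's risk tier."""
    if system.tier is RiskTier.UNACCEPTABLE:
        return ["Discontinue: this use of AI is banned under the Act."]
    if not system.deployed_in_eu:
        return ["Monitor: obligations may still apply via EU partners or customers."]
    if system.tier is RiskTier.HIGH:
        return [
            "Document data quality and provenance.",
            "Establish human oversight and accountability processes.",
            "Prepare conformity documentation ahead of the compliance deadline.",
        ]
    if system.tier is RiskTier.LIMITED:
        return ["Label the system and notify users they are interacting with AI."]
    return ["No specific obligations beyond existing law; keep the inventory current."]


if __name__ == "__main__":
    chatbot = AISystem("Support chatbot", "Customer service", RiskTier.LIMITED, deployed_in_eu=True)
    for action in required_actions(chatbot):
        print(action)
```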

Outside of those “high risk” AI models, which will be a small percentage concentrated in specific verticals such as defence and law enforcement, most AI models used by consumer-facing businesses will face light regulatory requirements. Furthermore, Australian organisations that have embraced the full scope of the ethical guidelines proposed by the Australian government should, for the most part, find no difficulty in meeting the requirements of the EU laws.

Lack of mandatory regulation may leave Australian data and AI pros unprepared for compliance

However, because these obligations are voluntary and there is no cohesive regulatory agenda, not all Australian AI development has been carried out with the full scope of the ethical guidelines in mind.

SEE: Australia isn’t the only country that developed a voluntary AI code of conduct.

This could cause challenges later on if an organisation that decides to scale has already embedded AI processes into its business that breach EU regulation. For this reason, forward-thinking organisations will likely default to following the strictest set of guidelines.

Non-compliance will limit Australian businesses locally and globally

Once these EU laws come into force in June, anyone who has baked AI into their products and processes is going to need to move quickly to achieve compliance. According to Boston Consulting Group, the requirements for compliance will be staggered, with the highest-risk applications having the shortest deadline. Most organisations, however, will need to be compliant within six to 12 months.

Those that aren’t won’t be able to bring their AI models to Europe. Not only will this have a direct impact if they want to do business there, but it also means that partnerships around AI with other organisations that are doing business in Europe become complicated, if not impossible.

This is why it will be particularly important for Australian organisations to ensure the AI models they use are compliant with the EU AI Act, so they do not shut themselves out of business opportunities in Europe and locally in Australia.

Source: TechRepublic