


How to regulate AI risk without killing innovation

If Australia is going to develop a dynamic AI industry, with appropriate guardrails for its development, especially in education, it will need to take the best of current approaches and not just in the Anglo-Saxon world.

Zoe McKenzie, Federal MP

Regulators and legislators worldwide recognise the immense challenge AI poses. That will be underlined next week when global leaders gather at Bletchley Park – the spiritual home of computer science, where British Enigma code-breaking helped defeat Nazism in World War II – for the AI Safety Summit, to discuss how to regulate the risks and harness the revolutionary benefits of the new technology frontier.

At its simplest, AI is just machine learning. Put enough stuff in the machine, teach it some crunching and interpretive skills, and it can make better sense of it than we can. Put enough stuff in the machine, and AI can be a better doctor, a better accountant, a better lawyer, a better author, a better teacher, a better soldier.


The world is in a global race for AI supremacy.

From a development and investment perspective, the battle rages between the US and China.

In a regulatory sense the only jurisdiction giving it a red-hot go is the European Union.


While we have talked about AI for well over a decade, it only entered the everyday consciousness when ChatGPT was launched in November last year. Within months it had more than 100 million users. A proliferation of generative AI models has been released since then, not just relating to words and word-like content such as coding, but also in relation to image, voice and music.

There is no longer a way of knowing whether what you read, hear or see was created by a human, a machine, or AI blending the two.


The potential of AI is immense, nowhere more so than in education.

Imagine a bespoke AI teacher; let’s call him Terry. Terry is funny. He is wise. He is kind. Terry knows my 12-year-old son better than I ever will, and explains Pythagoras’ theorem and The Merchant of Venice in a way my son understands.

Terry is in the iPad, if that wasn’t already obvious. In fact, Terry is the iPad. Terry can tell when my son is tired, when he is a bit off colour, when he needs a couple of good jokes to get back into focus.


He may also know when my son needs exercise: “Come on, buddy, let’s go out for a run. I’ll play your favourite music as we go.”

Terry works out my son has an inner ear problem before the GP does.

This is not unrealistic. In fact, this type of technology and interface is already in development in the bowels of a university computer lab at UTS in Sydney.

Imagine further that my son and his mates spend 8am to 11am reading and studying Shakespeare with Terry the AI avatar teacher, and then turn up to the classroom, where a hologram of William Shakespeare appears before them to talk, present, act and answer questions in real time. It sounds like Star Wars, but it is just around the corner – if we want it and can afford it.

Earlier this year, the House of Representatives started an inquiry into generative AI and Australia’s education system, and for some four months, the Education Committee has been taking evidence on where the country is at. The kids are all over it – while many teachers are still trying to work out what a “prompt” is. The kids have been submitting ChatGPT-generated work since last December.

The forthcoming retirement of a significant proportion of Australian school teachers throws up new challenges but also opportunities. Education ministers decided last week to let AI into the classroom, so there is now a desperate need to train current and future teachers in the use of the technology, particularly large language models such as ChatGPT or Google’s Bard.


But we also need to be honest about what the technology may be able to teach better – phonics, for example.

Unlike the internet, or social media, which reached worldwide uptake without any meaningful guardrails to protect us from their perils, real attempts are being made to ensure AI will develop in a safe and responsible way.

It’s easy to see where this can go wrong. When image-generating AI reached schools in Spain last month, boys meshed the faces of girls they liked at school onto naked bodies and sent the realistic-looking, AI-generated photos around the classroom.

In Australia, we are at the start of the conversation about generative AI use and regulation. It is proposed to give Education Services Australia $1 million to vet AI tools in education, which is nowhere near enough.

Europe is much further advanced, taking an approach analogous to any other good or service wanting to enter the EU27 market. A draft EU AI Bill released mid-year indicates it will separate the technology into risk categories. “Unacceptable risk” AI will be banned, and “high risk” AI will be regulated.

“Unacceptable risk” will include AI that contains subliminal manipulative techniques. And, yes, non-AI internet algorithms already manipulate humans. But the Europeans already solved that in their Digital Services Act, which gives their 450 million residents the choice of algorithmically free social media feeds.


“High risk” AI will need to be regulated and approved for use in the EU, and “high risk” products will include education. As the AI bill is finalised this year, it will need to answer many questions, including: does the AI component get risk-assessed, or each individual application derived from it? The EU would need an army of assessors for the task.

The EU hopes to finalise its bill this year and, in the hope that its brave endeavour to regulate does not kill innovation, has set a target of spending €20 billion ($33.4 billion) on AI each year across the public and private sectors. But it is a long way from achieving that, while unconstrained development in Asia and the US continues apace.

But if Australia is going to develop a dynamic AI industry, with appropriate guardrails, especially in education, it will need to take the best approaches, not just from the Anglo-Saxon world, as soon as yesterday.

Zoe McKenzie is in Europe as a guest of the Konrad Adenauer Foundation, and spent two days in Brussels meeting with experts in AI regulation and education.

Zoe McKenzie is Federal Member for Flinders.
