Analysis

How government can safely embrace AI

The philosophy and practice of generative AI in government must be safe, smart and sensible

The advent of generative AI is probably the single biggest opportunity in a generation to transform government to improve productivity, efficiency and public value.

It would be hard to miss that generative AI hype and hope have taken centre stage. But it does feel like AI, with its ability to generate apparently confident responses to almost any problem, is a moment equivalent to the arrival of the desktop computer and the extraordinary productivity-lifting word processing, spreadsheet and PowerPoint applications that came with it.

It’s also undoubtedly risky. There are deep potholes and perils – maybe even existential risks – that will test the limits of governance and regulation. Getting the balance right between safety and innovation is proving challenging.

AI is already being used to help detect tax fraud and offers a suite of productivity gains for government. Reuters

The good news is that there are ways for governments to be safe, smart and sensible in their engagement with AI.

Federal Industry Minister Ed Husic is developing a risk-based approach. For leaders, this will require solid and ongoing analysis of the primary use cases to which generative AI is applied.


Banks and professional services organisations are already testing risk-management approaches, including monitoring usage and operating in sandboxes.

Meanwhile, the federal government has issued interim guidance, with practical recommendations, such as agencies avoiding the use of any classified information and registering all corporate generative AI accounts.

A force for good

Generative AI is a subset of big computing, and with it comes the power to cut through much of the grunt work of government. At a top level it offers government major transformational benefits.

These include:

  • Faster and better responses to complex policy questions such as privacy reform.
  • A superior and easier way to capture insights from government’s massive knowledge bank of cases, registries, libraries and transactions.
  • Breaking down the multi-jurisdictional data complexity that has so thwarted reforms such as digital health.
  • Rapid deployment of code to support digital service portals, apps and feature upgrades.

These productivity improving opportunities include faster community consultation, better customer service, and agile delivery of new products such as digital licences.

Microsoft has invested heavily in OpenAI, the creator of ChatGPT, and is looking to embed generative AI capabilities into its 365 desktop applications.

These applications are dominant in government, and Microsoft and the Tech Council have predicted that up to 70 per cent of the early gains from generative AI are likely to come from productivity uplifts.

Much of this gain comes from automating routine but labour-intensive tasks including synthesising documents and large text-based sources, reconciling data, or transcribing.

Government represents more than a quarter of the economy, so these productivity advances are substantial. They are part of the public sector’s response to budget debt, and the perennial task of doing better with less.

Early wins


Generative AI can be used to build customer-experience improvements in government, including pre-filling forms, supporting initial comprehension checks in applications, automating status updates, and reducing casework timelines and customer frustration.

Service chatbots or voice agents are beginning to provide conversational access and content in any language, 24/7. They can be “tuned” to answer questions personalised to your location and situation.

Policymakers too are looking to take advantage of new tools for faster synthesis, consultation and more rapid development of options and possible solutions. Pilot programs are showing how to synthesise vast amounts of consultation feedback.

Closer to the technology itself, generative AI brings the opportunity to program in human language, rather than having to learn languages such as Java, and automated code generation will help reduce human errors.

At a worker level generative AI promises the most significant change in white-collar work practices since Google deployed its search engine in the late 1990s.

Fast disappearing are the days of struggling to get started or to make sense of data. Generative AI can be consulted with prompts and will instantly generate outlines, possibilities and analysis. With care.


Potholes and pitfalls

Using new technology is never a straightforward story.

Risk-based approaches presume humans are in the loop for any consequential decisions around how the technology is used.

This can be challenging: generative outputs appear 100 per cent confident, but are rarely 100 per cent accurate, the so-called hallucination effect.

Humans will have to design appropriate prompts, provide the context, change the data inputs and validate machine-generated content. This should be easier for low-risk tasks such as autofilling suggested content in an email or synthesising non-confidential information.

But using generative AI for assessments and decision-making is, at this stage, considered an “unacceptable risk to government”, according to the latest guidance.


Examples where these risks have materialised include the Australian Research Council’s use of generative AI in grant assessments, and a US lawyer who presented case citations produced by generative AI that were found to be fictional.

Shining light on the black box

Understanding the data used within generative AI, and its “explainability”, is essential to ensure these systems are ethical and safe, serving the interests of people and communities. This is challenging, as many generative AI apps use highly mathematical probabilistic models that are difficult to interrogate to explain why and how a decision has been made.

But if there’s one powerful and compelling message in the Holmes robo-debt royal commission report, it’s that in the interaction of humans and machines, diluting or sidelining the human factor courts disaster.

Commissioner Holmes noted regulation around automated decision-making was patchy at best. She recommended that for any automated decision-making there should be a clear path for those affected by decisions to seek review.

Holmes also suggested departmental websites should contain information advising that automated decision-making was being used and explaining in plain language how the process worked. She also called for business rules and algorithms to be public, to enable independent expert scrutiny.


In the end, the question is as much moral and philosophical as it is practical and operational. It has to start with a basic question: what do we want generative AI to do, and why?

Threading that needle requires fine (and ongoing) judgments about the right balance between protection and possibility, between anticipating, preventing and responding to old and new harms without killing innovation.

Leadership takeaways

Much of the Australian public sector was late to the digital revolution, with the exception of NSW. The Thodey public sector review correctly observed the federal government had been a data laggard.

Generative AI is not going away. Agency leaders need to rapidly build confidence, capability and capacity among their teams. They have to create a clear sense of purpose and value that includes new risk-based governance approaches.

There is also a unique opportunity to rethink fundamentally the policy process itself. This AI moment should be as much a deep and creative reform moment for policy development, as it clearly is about new ways to design and deliver great public services.


There are seven practical things public sector leaders can do to make sure their use of generative AI is safe, smart and sensible.

First, invest in finding out what generative AI is already doing, the relevant guidance and how it might apply to your workplace and agency. You need to know enough to know what you’re dealing with. Avoid centralising knowledge and expertise around an “AI guru”, as this is hard to scale and exposes agencies to being blindsided.

Second, don’t be afraid of the technology. Go towards it, test it and learn more about it. Don’t delegate responsibility to others. This is core leadership territory, particularly for agency heads, most of whom inevitably set the risk appetite for their agencies.

Third, make sure your agency, and certainly your immediate team, has invested some time to consider how generative AI might affect your work. Think of all the different ways, good and bad, it can affect what you do, the way you work, and the programs and policies you are responsible for. Look at how you could test what’s possible. Set up a “lab” environment where you, your team and customers and partners too, can join together to test the boundaries of “what if”.

Fourth, think about what skills you and your teams need. Generative AI is now readily available, but learning how to safely apply it to your own data requires a mix of cultural and management skills as well as mathematical modelling skills most agencies don’t have.

Fifth, talk about it and engage with others in your team and agency. Join or start a community of practice that will rapidly show you who else is where you are, or perhaps slightly ahead, and from whom you can learn.


Sixth, invest time to develop a clear sense of purpose, value and impact that frames how you want to harness AI’s potential and manage its risks. Using the tool to better extract learnings from your agency’s archives, case files and projects is a good starting point.

This strategic focus enables the seventh and possibly the most important initiative, the development of an evolving governance system that can adapt as we learn.

A governance framework will include clear processes for generative AI use, as well as policies and standards that borrow from existing cyber, data sharing, information access, privacy and other risk frameworks, as well as the current AI risk framework and internal guidance.

NSW AI guidelines remain the gold standard for Australian governments, but rapid developments around generative AI will probably see these further refined.

Governance also requires mechanisms for oversight, a clear understanding of any legal requirements such as privacy and copyright, and a system that brings to account misuse and project failures.

In summary: think clearly and as openly and collaboratively as possible about the purpose and value you are trying to create by using AI (what’s it for, why it will help); make sure you and your teams understand the guardrails and assurance frameworks and know how to use them properly; above all, be prepared to adapt and change in the light of evidence and feedback from staff, clients, colleagues.


These are the hallmarks of emerging generative AI good practice, which increasingly needs to become a reflexive part of every public leader’s practice and performance.

Martin Stewart-Weeks is a former public servant, ministerial adviser and an independent adviser working at the intersection of policy, public management, innovation and technology. He is co-author of ‘Are We There Yet?’, a book on digital change. Connect with Martin on Twitter. Email Martin at martinsw@publicpurpose.com.au
Simon Cooper is a digital transformation in government partner at Deloitte, specialising in customer-focused strategy, experience design and digital delivery. He is co-author of ‘Are We There Yet?’, a book on digital change. Connect with Simon on Twitter. Email Simon at simcooper@deloitte.com.au
Tom Burton has held senior editorial and publishing roles with The Mandarin, The Sydney Morning Herald and as Canberra bureau chief for The Australian Financial Review. He has won three Walkley awards. Connect with Tom on Twitter. Email Tom at tom.burton@afr.com
