AI Governance in the AI Implementation Age: What Federal Government Leaders Actually Need to Do
The hype cycle is over. AI is being deployed across the Australian federal government right now. Procurement teams are buying it. Policy teams are experimenting with it. Developers are embedding it into workflows. And in most agencies, governance hasn’t kept pace.
That’s a problem you own.
If you’re an SES Band 1 or 2, an EL2, or a technical lead responsible for data and digital in a federal agency, AI governance isn’t an abstract policy question. It’s a live operational risk sitting on your watch. This article gives you practical steps — not principles, not frameworks to “consider” — to actually govern AI in your agency while implementation is happening around you.
—
The Gap That’s Eating You Alive
Most agencies have one of two problems.
The first: AI is being used and nobody in the senior leadership team has a clear picture of where, by whom, or on what data. Shadow AI is real. Staff are using commercial tools — some approved, some not — and organisational data is moving through systems that haven’t been risk-assessed.
The second: governance exists on paper. There’s a policy document, maybe a framework, possibly a committee. But implementation decisions are being made daily without reference to any of it, because the governance artefacts are too abstract to be operationally useful.
Both situations create the same outcome. When something goes wrong — a model produces a discriminatory output, sensitive data is exposed, a decision is challenged in review — you have no defensible position.
—
Start With an AI Register, Not a Policy
The single most useful thing you can do right now is establish an AI register.
Not a 40-page governance framework. A register.
Document every AI system in use or under active procurement in your agency. For each one, capture:
– What it does — the specific function, not the vendor marketing description
– Who owns it — a named person, not a business unit
– What data it touches — classification, sensitivity, whether it includes personal information
– What decisions it informs or makes — advisory only, or does it drive action?
– What human oversight exists — who reviews outputs and how
– What the risk assessment says — and whether one has actually been done
This is the foundation. Without it, you cannot govern. You cannot brief a minister. You cannot respond to an audit. You cannot make a credible claim about your risk position.
Assign someone to maintain it. Review it quarterly. This is not a project — it’s an ongoing function.
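To make that concrete, here is a minimal sketch of what a register entry could look like as a structured record, with a staleness check to support the quarterly review. The field names, the example values, and the 90-day threshold are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

# Illustrative only: field names and values are assumptions, not a mandated schema.

class DecisionRole(Enum):
    ADVISORY = "advisory"            # outputs inform a human decision
    DETERMINATIVE = "determinative"  # outputs directly drive action

@dataclass
class AIRegisterEntry:
    system_name: str
    function: str                  # what it does, in plain terms
    owner: str                     # a named person, not a business unit
    data_classification: str       # e.g. "OFFICIAL: Sensitive"
    contains_personal_info: bool
    decision_role: DecisionRole
    oversight: str                 # who reviews outputs, and how
    risk_assessment_done: bool
    last_reviewed: date

    def is_stale(self, max_age_days: int = 90) -> bool:
        """Flag entries overdue for the quarterly review cycle."""
        return date.today() - self.last_reviewed > timedelta(days=max_age_days)

# A hypothetical entry for a low-risk drafting assistant:
entry = AIRegisterEntry(
    system_name="Drafting Assistant",
    function="Summarises internal briefs for first-draft review",
    owner="Jane Citizen, EL2",
    data_classification="OFFICIAL",
    contains_personal_info=False,
    decision_role=DecisionRole.ADVISORY,
    oversight="All outputs reviewed by the drafting officer before use",
    risk_assessment_done=True,
    last_reviewed=date(2024, 11, 1),
)

overdue = [e.system_name for e in [entry] if e.is_stale()]
```

A spreadsheet will do just as well to begin with. The point is that every field is mandatory and every entry has a review date someone can be held to.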
—
Risk-Tier Your AI Uses
Not all AI is equal. Treating a document summarisation tool the same as a tool that informs welfare eligibility decisions is a governance failure.
Build a simple tiering approach. Three tiers is enough:
Tier 1 — Low risk. Productivity tools, drafting assistants, internal knowledge search. Human reviews all outputs before any action. Data used is not sensitive. No regulatory or rights implications.
Tier 2 — Moderate risk. Tools that analyse data to surface insights that inform decisions. Personal information may be involved. Outputs influence but don’t determine outcomes. Requires documented oversight and audit trail.
Tier 3 — High risk. Tools that directly inform or automate consequential decisions — about individuals, resource allocation, enforcement, or compliance. Requires independent assurance, bias testing, explainability standards, and senior accountable officer sign-off before deployment.
Most agencies are deploying Tier 2 and Tier 3 tools under Tier 1 governance. That’s not a gap — that’s a liability.
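If your tiering criteria are explicit enough, you can write them down as logic. The sketch below is one possible encoding of the three tiers above; the predicates are assumptions about how an agency might operationalise them, not a standard.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 1       # productivity tools, human reviews everything
    MODERATE = 2  # informs decisions, personal info may be involved
    HIGH = 3      # informs or automates consequential decisions

def assign_tier(
    informs_consequential_decisions: bool,
    automates_decisions: bool,
    uses_personal_info: bool,
    human_reviews_all_outputs: bool,
) -> RiskTier:
    """Assign a risk tier from a system's characteristics.

    Illustrative predicates only; real criteria would come from
    your agency's risk framework.
    """
    if informs_consequential_decisions or automates_decisions:
        return RiskTier.HIGH
    if uses_personal_info or not human_reviews_all_outputs:
        return RiskTier.MODERATE
    return RiskTier.LOW

# A drafting assistant with full human review of non-sensitive data:
assert assign_tier(False, False, False, True) == RiskTier.LOW
# A tool surfacing insights from personal information:
assert assign_tier(False, False, True, True) == RiskTier.MODERATE
```

The point is not the code. It is that criteria this explicit can be applied consistently by different teams and audited later.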
—
Define What “Human in the Loop” Actually Means
Every AI governance framework talks about human oversight. Almost none define what that means operationally.
“Human in the loop” is meaningless if the human is reviewing 300 AI outputs per day in a queue interface designed to push them through as fast as possible. That’s automation bias dressed up as oversight. The human becomes a rubber stamp.
Be specific about what genuine oversight requires in your context:
– What information does the reviewer see? Do they see the AI output only, or the underlying data and the model’s confidence level?
– What is the expected review time? If it’s two seconds per decision, the oversight is theatre.
– What does the reviewer do when they disagree? Is there a clear pathway, or does the system design make override difficult?
– Are decisions logged individually? Can you reconstruct who reviewed what and what they decided?
If you can’t answer these questions for a tool you’re running, you don’t have human oversight. You have the appearance of it.
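Logging each review individually is what makes these questions answerable after the fact. Below is a minimal sketch of what a per-decision review record could capture; the schema is an illustrative assumption, not a feature of any particular tool.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ReviewRecord:
    """One logged human review of one AI output (illustrative schema)."""
    case_id: str
    reviewer: str                  # a named individual, not a team
    ai_recommendation: str
    reviewer_decision: str         # "accept", "override", "escalate"
    override_reason: str | None    # required when the reviewer disagrees
    seconds_spent: float           # two-second reviews are theatre
    reviewed_at: str

def log_review(record: ReviewRecord) -> str:
    """Serialise a review record for an append-only audit log."""
    return json.dumps(asdict(record))

record = ReviewRecord(
    case_id="2024-00417",
    reviewer="John Citizen, APS6",
    ai_recommendation="flag for further assessment",
    reviewer_decision="override",
    override_reason="Supporting evidence contradicts the model output",
    seconds_spent=184.0,
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(log_review(record))
```

With records like this, the review-time and override-rate questions stop being rhetorical: you can pull the log and check.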
—
Procurement Is a Governance Moment — Treat It That Way
Most of the AI your agency will run in the next three years is being bought right now. Procurement is where governance has the most leverage, and most agencies are not using it.
When a vendor is pitching an AI product, your procurement and technical teams need to ask hard questions before contracts are signed:
– Where is the model hosted, and where does data go?
– What training data was used, and what are the known limitations?
– How is the model updated, and who controls that?
– What logging and auditability does the system provide?
– What happens to your data if the vendor is acquired or goes out of business?
– Does the vendor provide an AI Bill of Materials or equivalent documentation?
Vendors who can’t answer these questions clearly are vendors you don’t want running sensitive government functions.
Build these questions into your procurement templates now. Retrofit them to existing contracts at renewal. Work with your legal and procurement colleagues to make AI-specific clauses standard.
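One way to make this stick is to treat the questions as a structured checklist that a procurement cannot pass with blanks in it. The sketch below encodes the questions above; the structure and the pass condition are assumptions about how you might template it, not a prescribed procurement artefact.

```python
# Illustrative checklist: the questions come from the list above;
# the pass/fail logic is an assumed convention, not a mandated process.

VENDOR_AI_QUESTIONS = [
    "Where is the model hosted, and where does data go?",
    "What training data was used, and what are the known limitations?",
    "How is the model updated, and who controls that?",
    "What logging and auditability does the system provide?",
    "What happens to our data if the vendor is acquired or shuts down?",
    "Does the vendor provide an AI Bill of Materials or equivalent?",
]

def unanswered_questions(answers: dict[str, str]) -> list[str]:
    """Return the questions a vendor has not substantively answered."""
    return [
        q for q in VENDOR_AI_QUESTIONS
        if not answers.get(q, "").strip()
    ]

# A contract should not proceed while this list is non-empty:
outstanding = unanswered_questions(
    {VENDOR_AI_QUESTIONS[0]: "Hosted onshore; data does not leave Australia."}
)
```

Whether it lives in code, a template, or a tender schedule, the design principle is the same: unanswered questions are visible, and someone has to sign off on proceeding anyway.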
—
Your People Are Already Using AI — Address It Directly
The acceptable use gap is one of the most underestimated governance risks in federal agencies right now.
Staff are using AI tools. They’re using approved tools in unapproved ways. They’re using personal accounts to access commercial tools on work problems. Some of them are pasting official documents into public large language models because it’s faster than the approved pathway.
A policy document nobody has read doesn’t solve this.
What works:
Clear guidance, not long policy. A one-page reference card on what staff can and cannot do with AI tools — which tools are approved, what data can go into them, what to do when they’re unsure — is more effective than a 20-page policy.
Leadership modelling. If SES are using AI tools without considering the guidance, expect everyone else to follow. Be visible about how you use these tools and what questions you ask.
A simple escalation path. Staff need to know who to ask when they’re uncertain. Make it easy, not bureaucratic. If the answer to “can I use this tool for this task?” takes a week to come back, staff will stop asking.
No-blame reporting. If someone has used a tool in a way that created risk, you want to know about it. Create conditions where people report issues rather than hide them. Blame-heavy cultures create hidden AI risks.
—
Accountability Has to Be Named
Governance without named accountability is decoration.
In your agency, right now, there should be a named person — ideally at SES level — who is accountable for AI governance. Not “responsible for the team that does AI things.” Accountable. The person who briefs the Secretary. The person who would appear before a Senate Estimates committee if an AI system caused harm.
If that person doesn’t exist, you have a governance gap at the top.
Below that, every AI system in your register should have a named system owner — an individual, not a branch — who is responsible for that system’s ongoing management, risk review, and oversight.
This isn’t bureaucracy. It’s the minimum viable accountability structure for operating AI in a public sector context where decisions affect citizens.
—
What to Do Monday Morning
If you’ve read this far and recognise your agency in these gaps, here’s where to start:
1. Commission an AI register. Give someone two weeks to produce the first version. It won’t be complete. That’s fine. Start it.
2. Identify your Tier 3 uses. Find the AI that’s touching consequential decisions. Audit the oversight arrangements against a real standard, not an assumed one.
3. Review one AI procurement that’s in flight right now. Ask the vendor questions they haven’t been asked yet.
4. Publish a one-page acceptable use guide for staff. Plain English. This week.
5. Name the accountable person. If it’s you, say so clearly. If it should be someone else, have that conversation.
AI governance isn’t about slowing down AI adoption. It’s about making sure that when something goes wrong — and statistically, in a large enough portfolio, something will — your agency has defensible, documented, operationally real oversight.
That’s what leadership looks like in this space.
—
Data Mastery works with Australian government agencies on data governance, AI readiness, and building the internal capability to govern data and AI properly — not just on paper. If your agency is navigating these challenges, visit [datamastery.com.au](https://datamastery.com.au) or get in touch directly. We work with the people doing the real work.
The views expressed in this article are those of the author in a personal capacity and do not represent the views of any Australian Government agency, employer, or client. Data Mastery operates independently and is not affiliated with any government agency.