April 7, 2026

The Five Questions Your AI Impact Assessment Must Actually Answer

Most AI Impact Assessments I’ve seen are compliance theatre. Agencies complete them because they have to. They tick the boxes, file the document, and move on. Then something goes wrong — and the assessment offers no protection at all, because it never actually grappled with the real risks.

That’s the problem this piece is trying to solve.

Before 15 June 2026, every non-corporate Commonwealth entity must maintain a register of in-scope AI use cases, assign an accountable owner to each, and complete an impact assessment before deployment. The Policy for the Responsible Use of AI in Government v2.0 came into force in December 2025. This isn’t guidance. It’s mandatory.

But mandatory doesn’t mean meaningful. An assessment that runs to two pages of affirmed principles won’t protect your agency when the ANAO comes auditing, when a citizen challenges an automated decision in the Administrative Review Tribunal, or when a minister is asked on the floor of parliament why your system produced the outcome it did.

Here are the five questions a real assessment must answer. Not as a checklist — as a test of whether you’ve actually done the work.

1. What law actually applies to this specific use case?
This is where most assessments fall apart immediately. “Legal compliance” gets treated as a single checkbox. It isn’t.

If your AI touches personal information — and almost every APS AI does — you need a Privacy Impact Assessment under OAIC guidance, not a sentence saying privacy has been considered. APP 3 governs what you can collect. APP 11 covers how you secure it. APP 1 requires documented privacy management practices. “The vendor handles privacy” is not an answer. It’s not even close to an answer.

Then there’s administrative law. If your AI influences decisions that affect individual rights — benefit eligibility, compliance action, service allocation — the Administrative Decisions (Judicial Review) Act 1977 applies. The decision must be legally valid. Procedurally fair. Explainable. You cannot automate away the obligation to provide reasons. Natural justice doesn’t have an AI exemption.

The FOI Act 1982 means your decision logic may be subject to disclosure. Document accordingly — because what you don’t document will be inferred.

Generic statements about “meeting all relevant laws” aren’t assessments. They’re placeholders for the work you haven’t done yet.

2. Can you actually demonstrate this system works fairly?
Australia’s AI Ethics Framework has eight principles. Most agencies treat them as values to affirm. They’re not — they’re operational requirements to demonstrate. The distinction matters when things go wrong.

Fairness is where agencies get caught. Your assessment must document what training data was used, what populations might be underrepresented, and what testing has been done to identify discriminatory outcomes across protected characteristics. Not “the vendor confirms the model is fair.” Vendors have commercial incentives to understate bias risk. That’s not cynicism — it’s just how incentives work.

Explainability is the other one. If you cannot explain in plain English how the AI reaches a decision or recommendation, you have a problem. Not eventually. Now. Because a citizen affected by that decision has a right to understand it, a lawyer challenging it needs to examine it, and a Senate committee can ask for it at any time.

Contestability means a real pathway to challenge — not a theoretical one. What does that pathway actually look like? Is there human review? Does the person affected know they can ask for it?

3. Where does the data go — and are you sure?
This is the risk agencies most consistently underestimate, particularly with generative AI tools.

Microsoft Copilot, Google Gemini, and similar tools now in use across the APS involve offshore data processing. Your assessment must establish exactly where data is processed, where it’s stored, and whether it’s retained by the vendor. If you can’t answer that, you haven’t done the assessment — you’ve described the tool and called it governance.

For systems handling classified or sensitive government data, the PSPF applies. ASD’s Essential Eight and the ISM govern AI deployed on government infrastructure. New AI capabilities create new attack surfaces. Your assessment must identify them.

The question that matters: could government information be used to improve commercial models that serve other clients — including foreign governments or private sector competitors? If your vendor won’t give you a clear answer, that’s your answer.

4. Who is accountable — and do they actually have authority?
The June 2026 requirements mandate a named accountable owner for each AI use case. Not a committee. Not a governance framework. Not a reference to “the business area.” A person — typically SES — who is personally responsible for the system’s appropriate use and has the authority to pause or stop deployment if risks materialise.

Most assessments I’ve seen assign accountability to an organisational unit. That’s not accountability. That’s diffusion of responsibility dressed up as governance.

On vendor contracts: agencies cannot delegate accountability to a vendor. If the vendor’s system produces a discriminatory outcome, your agency bears the consequences. Your contract should reflect that — indemnities, audit rights, transparency obligations, termination clauses for material failures. If your contract doesn’t have these, you don’t have accountability. You have hope.

The APS Code of Conduct applies to employees using or overseeing AI systems. That’s not theoretical — it means individual conduct obligations attach to AI deployment decisions. Your assessment should confirm affected employees understand their responsibilities.

5. Who actually gets hurt if this goes wrong?
This is the question that gets answered least honestly. Every assessment I’ve seen spends three pages on governance and half a paragraph on harm.

Start with citizens. If AI influences decisions affecting people, what happens when it’s wrong? Not wrong occasionally — systematically wrong, at scale, for months before anyone notices. Can affected individuals identify that AI was involved in their decision? Can they contest it? Is there a human review pathway, and do people know it exists?

Robodebt is the reference point that should be in every APS AI assessment — not as a legal precedent, but as an operational reality check. Automated systems that produce legally invalid outcomes at scale create catastrophic harm and institutional crisis. The question isn’t “could this happen” — it’s “how would we know if it was happening, and how quickly could we stop it?”
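The "how would we know, and how quickly could we stop it" question implies live monitoring with an explicit stop condition, wired to the accountable owner's authority to pause. A minimal sketch, where the window size and threshold are assumptions rather than policy values:

```python
def should_pause(overturn_history: list[bool],
                 window: int = 500,
                 threshold: float = 0.05) -> bool:
    """Signal a pause if too many recent decisions were overturned on
    human review.

    `overturn_history` records, per decision, whether human review
    overturned the automated outcome. Window size and threshold are
    illustrative assumptions, not prescribed values.
    """
    recent = overturn_history[-window:]
    if not recent:
        return False
    overturn_rate = sum(recent) / len(recent)
    return overturn_rate > threshold
```

The specific metric matters less than the pattern: a measured error signal, a pre-agreed trigger, and a named person with the authority to act on it.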

Then consider your workforce. The Work Health and Safety Act 2011 creates obligations around AI-driven changes to work processes. Significant changes require consultation. Surveillance and monitoring capabilities require particular scrutiny.

And consider the political exposure. If this system made front-page news tomorrow, could you explain and defend it? If the Senate Select Committee on AI asked questions, would your documentation hold up? The ANAO has flagged digital and data governance as an emerging audit priority. Assume scrutiny is coming.

The standard
The assessment that protects your agency isn’t the one that passes the quickest review. It’s the one that honestly surfaces risks, documents your reasoning, and demonstrates that someone exercised genuine judgement — not just signed a form.

Proportionality matters. A low-risk internal productivity tool doesn’t need the same depth as a citizen-facing decision support system. But proportionate doesn’t mean perfunctory. And the determination of what’s low-risk is itself a judgement that needs to be documented and defensible.

If you can’t answer all five questions substantively, the assessment isn’t complete. Regardless of what the form says.

The views expressed in this article are those of the author in a personal capacity and do not represent the views of any Australian Government agency, employer, or client. Data Mastery operates independently and is not affiliated with any government agency. This article is for general informational purposes and does not constitute legal or professional advice.

This article is part of the Data Mastery series on practical AI governance for enterprise and regulated environments. If your organisation is working through AI impact assessment design, contact Data Mastery to discuss.