{"id":268,"date":"2026-04-07T20:54:50","date_gmt":"2026-04-07T10:54:50","guid":{"rendered":"https:\/\/datamastery.com.au\/?p=268"},"modified":"2026-04-07T20:55:48","modified_gmt":"2026-04-07T10:55:48","slug":"the-five-questions-your-ai-impact-assessment-must-actually-answer","status":"publish","type":"post","link":"https:\/\/datamastery.com.au\/?p=268","title":{"rendered":"The Five Questions Your AI Impact Assessment Must Actually Answer"},"content":{"rendered":"<p>Most AI Impact Assessments I&#8217;ve seen are compliance theatre. Agencies complete them because they have to. They tick the boxes, file the document, and move on. Then something goes wrong \u2014 and the assessment offers no protection at all, because it never actually grappled with the real risks.<\/p>\n<p>That&#8217;s the problem this piece is trying to solve.<\/p>\n<p><strong>Before 15 June 2026<\/strong>, every non-corporate Commonwealth entity must maintain a register of in-scope AI use cases, assign an accountable owner to each, and complete an impact assessment before deployment. The Policy for the Responsible Use of AI in Government v2.0 came into force December 2025. This isn&#8217;t guidance. It&#8217;s mandatory.<\/p>\n<p>But mandatory doesn&#8217;t mean meaningful. An assessment that runs to two pages of affirmed principles won&#8217;t protect your agency when the ANAO comes, when a citizen challenges an automated decision in the AAT, or when a minister is asked on the floor of parliament why your system produced the outcome it did.<\/p>\n<p>Here are the<strong> five questions a real assessment must answer<\/strong>. Not as a checklist \u2014 as a test of whether you&#8217;ve actually done the work.<\/p>\n<p><strong>1. What law actually applies to this specific use case?<\/strong><br \/>\nThis is where most assessments fall apart immediately. &#8220;Legal compliance&#8221; gets treated as a single checkbox. It isn&#8217;t.<\/p>\n<p>If your AI touches personal information \u2014 and almost every APS AI does \u2014 you need a Privacy Impact Assessment under OAIC guidance, not a sentence saying privacy has been considered. APP 3 governs what you can collect. APP 11 covers how you secure it. APP 1 requires documented privacy management practices. &#8220;The vendor handles privacy&#8221; is not an answer. It&#8217;s not even close to an answer.<\/p>\n<p>Then there&#8217;s administrative law. If your AI influences decisions that affect individual rights \u2014 benefit eligibility, compliance action, service allocation \u2014 the Administrative Decisions (Judicial Review) Act 1977 applies. The decision must be legally valid. Procedurally fair. Explainable. You cannot automate away the obligation to provide reasons. Natural justice doesn&#8217;t have an AI exemption.<\/p>\n<p>The FOI Act 1982 means your decision logic may be subject to disclosure. Document accordingly \u2014 because what you don&#8217;t document will be inferred.<\/p>\n<p>Generic statements about &#8220;meeting all relevant laws&#8221; aren&#8217;t assessments. They&#8217;re placeholders for the work you haven&#8217;t done yet.<\/p>\n<p><strong>2. Can you actually demonstrate this system works fairly?<\/strong><br \/>\nAustralia&#8217;s AI Ethics Framework has eight principles. Most agencies treat them as values to affirm. They&#8217;re not \u2014 they&#8217;re operational requirements to demonstrate. The distinction matters when things go wrong.<\/p>\n<p>Fairness is where agencies get caught. 
Your assessment must document what training data was used, what populations might be underrepresented, and what testing has been done to identify discriminatory outcomes across protected characteristics. Not &#8220;the vendor confirms the model is fair.&#8221; Vendors have commercial incentives to understate bias risk. That&#8217;s not cynicism \u2014 it&#8217;s just how incentives work.<\/p>\n<p>Explainability is the other one. If you cannot explain in plain English how the AI reaches a decision or recommendation, you have a problem. Not eventually. Now. Because a citizen affected by that decision has a right to understand it, a lawyer challenging it needs to examine it, and a Senate committee can ask for it at any time.<\/p>\n<p>Contestability means a real pathway to challenge \u2014 not a theoretical one. What does that pathway actually look like? Is there human review? Does the person affected know they can ask for it?<\/p>\n<p><strong>3. Where does the data go \u2014 and are you sure?<\/strong><br \/>\nThis is the risk agencies most consistently underestimate, particularly with generative AI tools.<\/p>\n<p>Microsoft Copilot, Google Gemini, and similar tools now in use across the APS involve offshore data processing. Your assessment must establish exactly where data is processed, where it&#8217;s stored, and whether it&#8217;s retained by the vendor. If you can&#8217;t answer that, you haven&#8217;t done the assessment \u2014 you&#8217;ve described the tool and called it governance.<\/p>\n<p>For systems handling classified or sensitive government data, the PSPF applies. ASD&#8217;s Essential Eight and the ISM govern AI deployed on government infrastructure. New AI capabilities create new attack surfaces. Your assessment must identify them.<\/p>\n<p>The question that matters: could government information be used to improve commercial models that serve other clients \u2014 including foreign governments or private sector competitors? If your vendor won&#8217;t give you a clear answer, that&#8217;s your answer.<\/p>\n<p><strong>4. Who is accountable \u2014 and do they actually have authority?<\/strong><br \/>\nThe June 2026 requirements mandate a named accountable owner for each AI use case. Not a committee. Not a governance framework. Not a reference to &#8220;the business area.&#8221; A person \u2014 typically SES \u2014 who is personally responsible for the system&#8217;s appropriate use and has the authority to pause or stop deployment if risks materialise.<\/p>\n<p>Most assessments I&#8217;ve seen assign accountability to an organisational unit. That&#8217;s not accountability. That&#8217;s diffusion of responsibility dressed up as governance.<\/p>\n<p>On vendor contracts: agencies cannot delegate accountability to a vendor. If the vendor&#8217;s system produces a discriminatory outcome, your agency bears the consequences. Your contract should reflect that \u2014 indemnities, audit rights, transparency obligations, termination clauses for material failures. If your contract doesn&#8217;t have these, you don&#8217;t have accountability. You have hope.<\/p>\n<p>The APS Code of Conduct applies to employees using or overseeing AI systems. That&#8217;s not theoretical \u2014 it means individual conduct obligations attach to AI deployment decisions. Your assessment should confirm affected employees understand their responsibilities.<\/p>\n<p><strong>5. Who actually gets hurt if this goes wrong?<\/strong><br \/>\nThis is the question that gets answered least honestly. 
Every assessment I&#8217;ve seen spends three pages on governance and half a paragraph on harm.<\/p>\n<p>Start with citizens. If AI influences decisions affecting people, what happens when it&#8217;s wrong? Not wrong occasionally \u2014 systematically wrong, at scale, for months before anyone notices. Can affected individuals identify that AI was involved in their decision? Can they contest it? Is there a human review pathway, and do people know it exists?<\/p>\n<p>Robodebt is the reference point that should be in every APS AI assessment \u2014 not as a legal precedent, but as an operational reality check. Automated systems that produce legally invalid outcomes at scale create catastrophic harm and institutional crisis. The question isn&#8217;t &#8220;could this happen&#8221; \u2014 it&#8217;s &#8220;how would we know if it was happening, and how quickly could we stop it?&#8221;<\/p>\n<p>Then consider your workforce. The Work Health and Safety Act 2011 creates obligations around AI-driven changes to work processes. Significant changes require consultation. Surveillance and monitoring capabilities require particular scrutiny.<\/p>\n<p>And consider the political exposure. If this system made front-page news tomorrow, could you explain and defend it? If the Senate Select Committee on AI asked questions, would your documentation hold up? The ANAO has flagged digital and data governance as an emerging audit priority. Assume scrutiny is coming.<\/p>\n<p><strong>The standard<\/strong><br \/>\nThe assessment that protects your agency isn&#8217;t the one that passes the quickest review. It&#8217;s the one that honestly surfaces risks, documents your reasoning, and demonstrates that someone exercised genuine judgement \u2014 not just signed a form.<\/p>\n<p>Proportionality matters. A low-risk internal productivity tool doesn&#8217;t need the same depth as a citizen-facing decision support system. But proportionate doesn&#8217;t mean perfunctory. And the determination of what&#8217;s low-risk is itself a judgement that needs to be documented and defensible.<\/p>\n<p>If you can&#8217;t answer all five questions substantively, the assessment isn&#8217;t complete. Regardless of what the form says.<\/p>\n<p><em>The views expressed in this article are those of the author in a personal capacity and do not represent the views of any Australian Government agency, employer, or client. Data Mastery operates independently and is not affiliated with any government agency. This article is for general informational purposes and does not constitute legal or professional advice.<\/em><\/p>\n<p><em>This article is part of the Data Mastery series on practical AI governance for enterprise and regulated environments. If your organisation is working through AI impact assessment design, contact Data Mastery to discuss.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Most AI Impact Assessments I&#8217;ve seen are compliance theatre. Agencies complete them because they have to. They tick the boxes, file the document, and move on. Then something goes wrong \u2014 and the assessment offers no protection at all, because it never actually grappled with the real risks. 
That&#8217;s the problem this piece is trying [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-268","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/posts\/268","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=268"}],"version-history":[{"count":2,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/posts\/268\/revisions"}],"predecessor-version":[{"id":270,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=\/wp\/v2\/posts\/268\/revisions\/270"}],"wp:attachment":[{"href":"https:\/\/datamastery.com.au\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=268"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=268"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/datamastery.com.au\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=268"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}