Organisations across Australia are investing real resources into responsible AI. Consultants are engaged, frameworks are written, policies are approved at board level. And then, six months later, staff aren't following them.
This isn't an edge case. It's the pattern.
The question worth asking isn't why the policy failed. The question is why organisations keep expecting policies alone to govern behaviour, when decades of compliance research, and now a growing body of AI governance guidance, have shown consistently that they don't.
The Documentation Trap
Most AI governance frameworks are built backwards. They start with the regulatory requirement or the risk register, work their way to a policy document, and assume that once approved, the work is done.
The document exists. The governance exists. Except it doesn't, not in any meaningful sense.
What exists is a record that governance was attempted. The actual governing (shaping decisions, influencing daily behaviour, managing risk at the point where it actually occurs) hasn't happened yet. And in most organisations, it never will, because the gap between the policy and the person reading it was never designed to be crossed.
This isn't a theoretical concern. Australia's own Guidance for AI Adoption, released by the Department of Industry, Science and Resources in October 2025, acknowledges this explicitly. The guidance was redesigned specifically because organisations told the government that earlier frameworks, while sound in principle, were not translating into day-to-day practice.[1] The shift from ten guardrails to six essential practices was driven by exactly this feedback: organisations needed clearer, simpler, and more actionable guidance.[2]
Simplicity is necessary, but it isn't sufficient. Even the clearest guidance doesn't govern anything if the people who need to act on it don't understand it, don't know how it applies to their specific role, and have no practical mechanism for raising questions when they're uncertain.
What the Evidence Tells Us
The 2025 Responsible AI Index, cited in the Guidance for AI Adoption's development documentation, found something striking: while 78% of surveyed organisations agreed with ethical AI performance statements, only 29% had implemented relevant responsible AI practices.[3] That gap, between stated commitment and actual practice, is the governance failure in data form.
It's not a gap caused by bad intentions. It's caused by the absence of the learning architecture needed to translate principle into behaviour. Organisations say they value responsible AI. They don't yet have the systems, the training, the decision frameworks, or the feedback loops to act on that value consistently.
The AI Governance Profession Report published by the IAPP confirms this picture at the organisational level. It found that a significant challenge identified by governance professionals was access to appropriate AI governance talent: specifically, professionals who can translate legislative requirements into actionable policies and who combine an understanding of AI with governance, risk, and compliance experience.[4] That combination is rare because it spans two disciplines that have historically sat apart.
The Enablement Gap
The gap I see most consistently in practice is between governance design and enablement design. The two are typically treated as separate workstreams; more commonly still, only the first one happens at all.
Governance without enablement produces shelf documents. Enablement without governance produces confident people doing the wrong things efficiently.
Australia's Guidance for AI Adoption implicitly acknowledges this. The six essential practices it outlines (assign accountability, understand impacts, manage risks, maintain transparency, test and monitor, maintain human control) are not just technical requirements.[5] Each one requires people to understand what they're doing and why. The Implementation Practices version specifically notes that organisations need to provide appropriate training to anyone overseeing or using AI systems, so that they understand each system's capabilities, limitations, and failure modes.[6]
Training is governance infrastructure. Without it, the framework is architecture without a building.
The AI Maturity Framework developed for generative AI adoption makes this point through a different lens: organisations in the early stages of maturity tend to treat AI governance as a compliance exercise, while more mature organisations integrate it into their operational culture.[7] The transition between those two states isn't achieved by writing better policies. It's achieved by building the human capability to act on them.
Three Things That Make the Difference
Organisations that close this gap design for the decision point, not the policy principle. Rather than explaining that AI use must be ethical and transparent, they map the specific moments where staff will make AI-related decisions and build guidance around those moments.[8] The principle won't answer what to do when a vendor proposes integrating an AI tool into your workflow. Context-specific guidance will.
They measure adoption, not completion. A 100% completion rate on an AI awareness module tells you almost nothing about whether your governance framework is working. The Sibenco Legal analysis of Australia's framework notes that governance obligations now require organisations to demonstrate traceability and accountability, not simply to document that training occurred.[9]
They treat governance as a living system, not a launch event. The AI landscape is changing faster than any governance framework can accommodate, which is precisely why the Department of Industry updated the Voluntary AI Safety Standard within twelve months of releasing it.[10]
What This Means for Your Organisation
If you have a framework that exists on paper but isn't driving the behaviour you need, the question to ask is: where is the enablement design?
Not the training module. The enablement design: the deliberate, structured plan for how the people in your organisation will develop the knowledge, skills, and practical understanding to apply your governance framework in their day-to-day work.
That's the work that turns a document into governance. And in most organisations, it's the work that hasn't been done yet.
References
1. Department of Industry, Science and Resources, Guidance for AI Adoption, October 2025. industry.gov.au
2. Department of Industry, Science and Resources, Supporting safer AI adoption, October 2025.
3. Department of Industry, Science and Resources, How we developed the guidance, October 2025.
4. IAPP and Credo AI, AI Governance Profession Report 2025. iapp.org
5. Department of Industry, Science and Resources, Guidance for AI Adoption: Foundations, v1.0, October 2025.
6. Department of Industry, Science and Resources, Guidance for AI Adoption: Implementation Practices, v1.0, October 2025.
7. Amazon Web Services, Maturity model for adopting generative AI on AWS.
8. Department of Industry, Science and Resources, Guidance for AI Adoption: Implementation Practices, October 2025.
9. Sibenco Legal & Advisory, Understanding Australia's AI Governance Risk and Assurance Framework, November 2025.
10. Department of Industry, Science and Resources, The 10 guardrails, Voluntary AI Safety Standard.