AI at Work in Australia: Shadow Use

The Shadow in Australian Workplaces

Many workers employ AI without approval or guidance. This “shadow use” is not a moral failing. Rather, it shows that the tools are productive and that policy, training and controls have not kept pace.

Microsoft's Work Trend Index found that 84% of Australian workers use generative AI, and 78% of those employees bring their own AI tools to do so. A large proportion of workers use GenAI secretly for fear of censure. At the same time, a study by KPMG and the University of Melbourne found that 59% of Australian workers reported making mistakes due to GenAI and 57% have relied on outputs without verifying their accuracy.

The precise percentages are hard to pin down, but the trend is unarguable.

As Microsoft's Sarah Carney put it, "It's imperative that business leaders engage more and bridge the disconnect by providing clarity to employees on how to use AI in responsible ways that adhere to their organisations' security and privacy requirements."

Productivity and Paradox

The driver is productivity. Used well, AI automates repetitive work and can lift the quality of knowledge work. Ernst & Young reports that a large share of daily users save four or more hours each week. PwC's Global AI Jobs Barometer finds that sectors more exposed to AI have seen productivity grow about 4.8 times faster than less exposed sectors.

Yet the gains are not automatic. A sizeable minority report no improvement or, worse, say workloads increase because time is needed to verify and correct AI outputs. Productivity is not inherent in the AI. It is an outcome of skilled use, and this demands training and governance.

When Shadow AI Causes Real Harm

Every time an employee pastes sensitive client data, a confidential report or proprietary code into a public AI tool, there are considerable risks. That information can be stored on overseas servers and used to train future models unless enterprise controls or data-use settings prevent it.

These are not theoretical risks. There is a growing number of examples of mistakes made with AI:

Endangering a Child: A child protection worker used ChatGPT to draft a Protection Application report. Sensitive personal details were pasted into a public tool and the generated report downplayed the severity of the actual or potential harm to the child.

Undermining Credibility: Deloitte was paid $439,142 for a government report on welfare compliance. It contained references to academic works that did not exist and an invented quote from a Federal Court case.

Compromising Justice: In August 2025, Victorian KC Rishi Nathwani apologised after filing submissions in a murder case that included AI-generated fake quotes and non-existent case citations.

These incidents demonstrate what can happen when individuals and organisations use AI without proper diligence. In addition to the human cost, there is very real financial and reputational risk.

John Lonsdale, Chair of the Australian Prudential Regulation Authority, warns, "With so much at stake, our tolerance for gaps or weaknesses in how these risks are being managed has never been lower."

Ignorance is Not an Excuse

There is no single AI Act in Australia yet, but that does not mean there’s a vacuum of relevant regulation. The Privacy Act 1988 and the Australian Privacy Principles apply whenever personal information is handled. Australian Consumer Law prohibits misleading or deceptive conduct, which includes AI-generated claims in marketing and communication.

Highly regulated industries, such as those overseen by APRA, must maintain strict information security, which shadow AI use can undermine. Public sector agencies must comply with the Policy for the Responsible Use of AI in Government, including AI assurance frameworks and strict records obligations.

As Privacy Commissioner Carly Kind has said, "Robust privacy governance and safeguards are essential for businesses to gain advantage from AI and build trust and confidence in the community."

OCNUS Consulting’s Recommendations for Leaders

The answer is not to ban AI. It is to use it properly. Below is the practical pathway that OCNUS recommends to boards and executives:

Acknowledge and Assess: Move from turning a blind eye to active discovery. Conduct a shadow AI audit to understand which tools are being used and why. This is valuable data on unmet business needs.

Publish Clear and Enabling Guardrails: Create an AI usage policy that enables staff and ensures safe use. Be explicit about what must never be entered into public tools. Classify data (e.g. Public, Internal, Confidential) and provide clear rules for each class.

Provide Sanctioned, Secure Tools: Employees use shadow AI when they lack effective and approved alternatives. Invest in enterprise‑grade AI platforms that operate within security perimeters, giving workers the tools they need in a controlled environment.

Invest in AI Literacy: The data shows a huge demand for training. Invest in practical, role-based education on prompt engineering, data analysis, ethical use and, critically, how to verify AI outputs. This builds the human firewall needed to use AI safely.

Lead from the Front: Senior leaders must visibly and openly use the sanctioned AI tools, discussing both their successes and their limitations. Articulate a clear vision for how AI will augment, not replace, the enterprise’s workforce. Trust is required to transform shadow AI from a hidden liability into a strategic advantage.

Technology companies should shoulder greater responsibility for preventing online harms. However, this does not absolve leaders of responsibility for their own workplaces. What is needed are relevant AI policies and practices, together with active monitoring of compliance.

The Bottom Line

Ignoring shadow AI use will inevitably lead to harm: privacy breaches, poor decisions, reputational damage and financial liability. Bringing it into the light means productivity gains can be realised while meeting legal and ethical responsibilities. Leaders must lead here: fund a safe path, set clear rules and hold everyone accountable. Do that and AI becomes a competitive asset rather than an unmanaged risk. For this to happen, AI must be brought out of the shadows.

Next

AI at Work in Australia: The Need for a National AI Summit