
Is Your Team Using ChatGPT at Work? Why You Need an AI Policy

Most Perth businesses have staff using ChatGPT, Claude, and other AI tools without approval. Here's why shadow AI is a risk and how to create a practical AI policy.

A Dark Cloud Creative · 23 March 2026 · 6 min read

Here's something most Perth business owners don't know: your staff are almost certainly using AI tools at work right now.

They're pasting client emails into ChatGPT to draft responses. They're uploading financial spreadsheets to Claude to analyse numbers. They're using free AI tools to summarise documents, write proposals, and generate content.

And they're doing it without your knowledge, without your approval, and without any governance.

This is called shadow AI, and it's happening in virtually every business.

What is shadow AI?

Shadow AI is the use of AI tools by employees without organisational approval or oversight. Just like "shadow IT" described employees using unauthorised software a decade ago, shadow AI describes employees using AI tools that your business hasn't vetted, approved, or secured.

The most common examples we see in Perth businesses:

  • ChatGPT (free tier): Staff paste in client emails, internal documents, and financial data to get help drafting responses
  • Claude: Similar usage — document summarisation, analysis, writing assistance
  • Gemini: Used through personal Google accounts, often for research and content creation
  • AI-powered browser extensions: Grammar tools, email aids, and summarisation plugins that process your data through third-party AI
  • Image generators: Staff using Midjourney or DALL-E for presentations and marketing materials with no brand governance

Why this is a problem

"But they're just being productive!" you might think. And you're right — AI tools are genuinely useful. The problem isn't that your team is using AI. The problem is that they're using it without controls.

Data leakage

When your staff paste confidential information into free-tier AI tools, that data may be used to train the model. This means:

  • Client data could be exposed to the AI provider — and potentially surfaced to other users
  • Financial information leaves your controlled environment
  • Intellectual property — proposals, strategies, pricing — gets uploaded to servers you don't control
  • Personal information about employees or clients could violate the Australian Privacy Act

The free versions of ChatGPT and Claude have different data handling policies than their paid business versions. Most staff don't know the difference — and don't check.

No audit trail

When your team uses AI to make decisions, draft documents, or analyse data, there's no record of it. If a mistake is made, you can't trace it back. If a client asks how their data was handled, you can't answer.

In regulated industries (finance, healthcare, legal), this could be a compliance violation.

Quality control

AI tools generate plausible-sounding content that's sometimes wrong. If your team is using AI to draft client advice, proposals, or reports without proper review, you're risking your professional reputation.

What Perth businesses should do

The answer isn't to ban AI. That would be like banning the internet in 2005 — it just pushes usage underground. The answer is to govern it.

Step 1: Discover what's already happening

Before you can govern AI, you need to know what tools your team is using and how. A shadow AI audit typically reveals:

  • 60-80% of staff are using at least one AI tool at work
  • Multiple free-tier accounts with no enterprise-grade data protection
  • Client-facing work being processed through ungoverned AI tools
  • No one thinks it's a problem — staff see it as a productivity tool, not a risk

Step 2: Create an acceptable use policy

An AI acceptable use policy doesn't need to be complicated. It should cover:

  • Which AI tools are approved for business use (and which are not)
  • What data can and cannot be entered into AI tools (never client data, financial data, or IP)
  • Who is responsible for reviewing AI-generated content before it goes to a client
  • How to request new tools — a simple process for staff to propose new AI tools for evaluation
  • Consequences — what happens if the policy is breached

Keep it short, practical, and written in plain English. A 20-page policy that nobody reads is worse than no policy at all.

Step 3: Provide approved alternatives

If you tell your team "stop using ChatGPT" without giving them an alternative, they'll keep using it in secret. Instead:

  • Deploy Microsoft Copilot if you're on M365 — it's governed, logged, and respects your permissions
  • Set up a business ChatGPT or Claude account with enterprise data handling
  • Provide training on how to use approved tools effectively
  • Create prompt templates for common tasks so staff get good results quickly
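A prompt template can be as simple as a reusable string with a slot for the task at hand. The template below is purely illustrative (the wording and the 150-word limit are assumptions, not a recommendation from any vendor) — the point is that staff shouldn't have to invent prompts, or guess what data to leave out, every time:

```python
# Hypothetical prompt template for a common task (drafting an email reply).
# The wording, word limit, and data-exclusion rule are illustrative only;
# adapt them to your own approved tool and policy.
EMAIL_REPLY_TEMPLATE = (
    "Draft a polite reply to the email below. Keep it under 150 words, "
    "use Australian English, and do not include any client names or "
    "account numbers.\n\nEMAIL:\n{email_text}"
)

def build_prompt(email_text: str) -> str:
    """Fill the template with the (already de-identified) email text."""
    return EMAIL_REPLY_TEMPLATE.format(email_text=email_text)
```

Store a handful of these in a shared document or internal tool, and staff get consistent, policy-compliant results without needing to be prompt-engineering experts.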

Step 4: Implement technical controls

For businesses that need stronger governance:

  • Conditional access policies to block access to unapproved AI websites from company devices
  • DLP (Data Loss Prevention) policies that flag when sensitive data is being copied to external services
  • Regular monitoring of AI tool usage across the organisation
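Real DLP products (such as those built into Microsoft 365) use far more sophisticated classifiers, but the core idea behind flagging sensitive data is simple pattern matching. Here's a minimal sketch, assuming illustrative patterns for email addresses, card-like numbers, and TFN-like numbers — not production-grade detection:

```python
import re

# Illustrative patterns only — a real DLP policy uses trained classifiers
# and exact-match lists, not just regexes like these.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax file number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

flags = flag_sensitive("Please email jane@example.com her TFN 123 456 789")
# flags both the email address and the TFN-like number
```

Even a rough check like this, run before text leaves your environment, catches the most obvious mistakes — which is exactly what a DLP policy does at scale.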

Step 5: Review regularly

AI is moving fast. New tools launch weekly. Your policy and approved tools list should be reviewed quarterly at minimum.

A practical framework for Perth SMBs

Here's a simple framework you can implement this week:

Level | Description | Actions
Green | Approved and governed | Copilot, business ChatGPT/Claude accounts — use freely
Amber | Requires approval | New AI tools — submit to IT/management for review before use
Red | Not permitted | Free-tier ChatGPT, unvetted browser extensions — blocked or prohibited
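If you want to make the framework operational rather than just a table in a policy document, it boils down to a lookup: every tool is green, red, or — by default — amber until someone has reviewed it. A minimal sketch, with a hypothetical tool register (the entries are examples, not recommendations):

```python
# Hypothetical approved-tools register implementing the traffic-light
# framework. Tool names and levels are illustrative; maintain your own list.
TOOL_REGISTER = {
    "microsoft copilot": "green",
    "chatgpt business": "green",
    "claude business": "green",
    "chatgpt free tier": "red",
}

def check_tool(name: str) -> str:
    """Green: use freely. Amber: needs approval. Red: prohibited."""
    # Anything not yet reviewed defaults to amber (requires approval).
    return TOOL_REGISTER.get(name.lower(), "amber")

print(check_tool("Microsoft Copilot"))  # green
print(check_tool("SomeNewAITool"))      # amber — not yet reviewed
```

The key design choice is the default: unknown tools fall to amber, not green, so new AI tools are reviewed before use instead of after an incident.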

Add this to your next team meeting. Most staff will appreciate the clarity — they want to use AI, they just don't know what's allowed.

The bottom line

AI tools are here to stay. Your staff are already using them. The question isn't whether to allow AI — it's whether to govern it or ignore it.

Ignoring it means accepting that your client data, financial information, and intellectual property are being uploaded to tools you don't control, by staff who don't know the risks, with no audit trail and no accountability.

Governing it means your team can use AI to be more productive, more creative, and more efficient — without putting your business at risk.

Book a shadow AI discovery audit →



Need help with this?

We're Perth's specialist IT consultancy for small businesses. If anything in this article resonated, let's talk.

Get in Touch