Employee AI policy for SMEs: What a practical AI policy must cover


Almost three-quarters of UK employees use unauthorised AI tools at work, and more than half do so every week. For organisations, this “shadow” use of AI is a risk.

At best, employees are simply making unproductive use of their time and, if they’re paying for a subscription, of their budget. Unsupervised use of AI, even when it causes no harm, often yields no real benefit: according to McKinsey, 80% of companies using generative AI such as ChatGPT do not see a return on investment.

At worst, it may put your data, customers, and reputation at risk. Without proper guardrails and the right choice of tool, AI use can undermine data governance and even cyber security.

According to a study by Moody’s, for instance, weak AI governance combined with widespread AI usage is already increasing the risk of data breaches, and almost a quarter of companies have no policy on what data employees can upload to public AI tools. At the same time, another recent study found that UK small businesses risk falling behind because they are not adopting AI fast enough.

In this article, we explain how you can minimise and mitigate these risks while getting the most benefit from AI, with an AI policy tailored specifically to your organisation.

 

Table of contents
  1. What an employee AI policy actually does

  2. When an employee AI policy needs to be formalised

  3. Key risks without an AI policy

  4. AI policy template: what should be included

  5. Maintaining a living AI policy

 

What an employee AI policy actually does

With a tightly drafted AI policy, tailored to your organisation’s operating model, an employee should never be in any doubt about whether they can use AI to help them.

By comparing the task, the data and the required outputs with the policy, workers should easily be able to see:

  • If they may use AI

  • Which AI tools are permitted

  • How to achieve the best results

A well-drafted AI policy protects your company, client data, and reputation from AI-related risks. It enables your colleagues to use AI productively, and it reduces legal and compliance risk.

 

When an employee AI policy needs to be formalised

Given the rates of unauthorised AI use, any staff involved in “knowledge work” (those who handle company or customer data of any kind and who produce documents or data) should have a well-drafted AI policy they can refer to.

In some situations, however, the need for an AI policy is non-negotiable. For instance:

  • In regulated industries (finance, healthcare, etc.), where audit trails must be maintained for all data processing.

  • In any context in which employees handle or process sensitive information or personally identifiable information (PII).

  • When the business is experiencing rapid, unsupervised AI adoption across multiple AI tools and teams.

  • In organisations with substantial remote, mobile, or hybrid workforces, where oversight is harder to maintain.

 

Key risks without an AI policy

Unapproved, free AI tools often retain user inputs to train their models. This means your company data becomes part of the AI’s knowledge base and could surface in responses to other users.

And that is only the most eye-catching of the risks of failing to properly regulate, supervise, and secure employee use of AI.

Other significant risks include:

  • Accidental disclosure of client data: uploading client data to public LLMs may cause that data to surface in other users’ outputs from that LLM.

  • Use of poor-quality AI outputs: without training and oversight, users may mistake fluency for accuracy and use low-quality outputs in client-facing work.

  • Data theft via plugins: unapproved browser extensions, prompt libraries, and other AI plugins may steal data or security credentials.

 

AI policy template: what should be included

A good employee AI policy will define acceptable use, specify which AI tools are permitted, explain how employees can work with sensitive data and, crucially, set out how to handle possible breaches of the policy.

Here is a guide to what your AI policy should include:

Purpose & scope
  • Purpose: define who the policy applies to and why.
  • Example: applies to all employees, contractors, and third-party partners; defines safe AI use for business purposes.

Approved and unapproved tools
  • Purpose: list the enterprise-grade AI tools staff may use.
  • Example: Microsoft 365 Copilot, ChatGPT Enterprise, Azure OpenAI Service; public/free tools are prohibited for company or client data.

Acceptable use
  • Purpose: set out what staff can do safely.
  • Examples: drafting internal communications; summarising meetings or reports; creating generic templates; brainstorming or ideation.

Prohibited use
  • Purpose: set out what staff must not do.
  • Examples: entering client PII, financial records, contracts, or source code; using public/free AI tools for company data; treating AI output as final without review.

Data handling guidance
  • Purpose: protect sensitive data.
  • Example: keep prompts anonymised and treat every AI request as a potential data disclosure (see the sketch after this list).

Ownership & review
  • Purpose: define who maintains and updates the policy.
  • Example: IT/Security approves tools; HR ensures staff awareness; the policy is reviewed annually.
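
To make the data handling guidance concrete, here is a minimal, purely illustrative sketch in Python of the kind of redaction step a team might apply before a prompt reaches an approved AI tool. The patterns and the anonymise_prompt helper are hypothetical examples of the anonymisation principle, not a complete data loss prevention solution.

import re

# Illustrative patterns only: a real deployment would rely on a proper
# DLP / PII-detection service chosen and approved by IT and Security.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_MOBILE": re.compile(r"(?:\+44\s?7|07)\d{3}\s?\d{6}"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def anonymise_prompt(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt is sent to an AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Example: the client's email address never leaves the organisation.
print(anonymise_prompt("Summarise the complaint from jane.doe@example.com"))
# -> Summarise the complaint from [EMAIL REDACTED]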

 

Approved vs unapproved AI tools

The right AI tools for the job will depend on what that job is. Creative or analytical roles, for instance, will benefit from the flexibility and conversational depth of ChatGPT, while clerical or frontline workers will be better off with the more guided experience of Microsoft Copilot.

Whatever the right tool is, employees should only ever access it through an approved, protected work subscription. No employee should ever upload or enter company or client data into an unvetted, unapproved AI tool or even a private account on an approved tool.
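
One way to make that rule easier to enforce, sketched here purely as an illustration, is for IT to keep a machine-readable copy of the allow list alongside the written policy, so onboarding checks and helpdesk scripts consult the same source of truth. In the Python example below, the tool names, data classes and is_use_permitted helper are hypothetical assumptions rather than recommendations.

# Hypothetical machine-readable companion to the written policy.
APPROVED_AI_TOOLS = {
    "Microsoft 365 Copilot": {"company_data": True, "client_pii": False},
    "ChatGPT Enterprise": {"company_data": True, "client_pii": False},
    "Personal ChatGPT account": {"company_data": False, "client_pii": False},
}

def is_use_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed tool/data combination against the policy allow list."""
    permissions = APPROVED_AI_TOOLS.get(tool)
    if permissions is None:
        # Tools not listed are unapproved by default ("deny by default").
        return False
    return permissions.get(data_class, False)

print(is_use_permitted("ChatGPT Enterprise", "company_data"))        # True
print(is_use_permitted("Personal ChatGPT account", "company_data"))  # False

Note that unknown tools fail closed: anything not explicitly approved is treated as unapproved, which mirrors the wording of the policy itself.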

 

Maintaining a living AI policy

To ensure that users accept, adhere to and keep using your AI policy, it must remain a living document. As new types of AI tools and use cases emerge, your AI policy should address them. If it doesn’t, users will eventually start to work around it, and at that point the security problems return.

To ensure your AI policy remains a living document, you should:

  • Periodically review which AI tools are available and update your allow- and disallow lists to reflect the current market landscape.

  • Take new use cases and features into account as they are released, so that teams do not feel hampered by a policy that disallows innovation.

  • Refer to the policy in staff meetings and project briefings so that everyone understands why it exists and how to apply it.

  • Regularly review incident logs and staff feedback, then use the insights gained to improve and refine the AI policy.

How often you review your AI policy will depend on how you use AI. If you’re using AI such as Microsoft Copilot mainly for knowledge work tasks, a review every six months is probably enough. If you are using a wider range of tools in more sensitive workflows, such as software development, it’s worth reviewing your AI policy more frequently.

 

If you don’t already have an AI policy, now is the time to create one. Use the principles and building blocks above to write a policy that’s tailored to your organisation’s goals, needs and ways of working. Doing so will forestall the risk of employees using unvetted AI tools that compromise your reputation, data security or customer privacy.

Texaport is one of the UK’s leading IT consultants and Microsoft Modern Workplace specialists, with extensive experience in safely integrating AI into business workflows. Contact us if you need help tailoring an AI policy to your business model and risk profile.
