Using generative AI at work
What every public sector employee should know.
Generative AI tools like Microsoft Copilot and ChatGPT are becoming part of daily working life across the public sector. Understanding what you can do with them, what you must avoid, and what your organisation expects from you is now a core professional responsibility.
This article draws on the UK government’s AI Playbook for Government, the Data and AI Ethics Framework, and the government’s guidance on information asset ownership. It gives you a practical starting point.
What are these tools, exactly?
Generative AI tools produce new content in response to your instructions. You type a question or a request, and the tool generates a response. That response might be a draft email, a summary of a document, a list of ideas, or an answer to a question.
Tools like ChatGPT and Microsoft Copilot are the most widely used examples. ChatGPT is a public service from OpenAI. Copilot is Microsoft’s AI assistant, which is integrated into Microsoft 365 products like Word, Outlook, and Teams. Your organisation may have licensed Copilot for work use, or it may not. You should check before you start.
These tools are impressive. They are also imperfect. They can produce plausible-sounding responses that are factually wrong. This happens without warning, so you always need to review what they produce before you use it.
What can you use generative AI for?
Generative AI can support a wide range of everyday tasks. Used well, it can save time and help you work more clearly.
Tasks where it tends to add genuine value include drafting and editing written content, summarising long documents, generating ideas, reformatting or restructuring text, and creating first drafts of routine communications. You can also ask it questions to help you work through a problem, or use it to check the clarity of something you have written.
The key word is “support.” These tools are most useful when you treat them as a capable assistant rather than a decision-maker. You bring the judgement. The tool helps with some of the heavy lifting.
The AI Playbook for Government is clear that human oversight is not optional. You remain responsible for every output you use, share, or act on. If the content is wrong, incomplete, or biased, that is your responsibility to catch.
What can’t you put into these tools?
This is the most important section to understand. The type of information involved determines whether you can use a given AI tool for that task at all.
Public AI services like the free version of ChatGPT are not approved for government information. The information you type into these services may be used to train future versions of the model. It leaves your control the moment you submit it.
You must not enter any of the following into a public AI service: names, addresses, or any details that identify a real person; information classified as OFFICIAL-SENSITIVE or above; details about ongoing cases, investigations, or legal proceedings; commercially sensitive information; anything your organisation has told you to keep confidential.
The government’s Data and AI Ethics Framework sets out clear expectations around privacy and data minimisation. This means only using the data you genuinely need for a task, and making sure it is protected appropriately throughout.
Microsoft Copilot, when accessed through a licensed Microsoft 365 account, operates within your organisation’s security boundary. It does not use your data to train its models. This makes it more suitable for work tasks. But you still need to check your organisation’s acceptable use policy before you start, because policies vary.
When in doubt, use fictional or anonymised examples instead of real data. This protects people and keeps you on the right side of your obligations.
Who owns the information you work with?
Every piece of government information has an information asset owner. This is a named individual — usually a senior civil servant — who is responsible for how that information is handled, protected, and used.
Before you use generative AI with any significant piece of information, check how that information is classified and whether your intended use is consistent with how it should be handled. If you are unsure, your information asset owner or your line manager can advise.
This is not a bureaucratic hurdle. It is the governance structure that protects you, your colleagues, and the people your work affects. The government’s guidance on information asset owners is clear that handling information well is everyone’s responsibility, not just the owner’s.
What happens if the AI gets it wrong?
Generative AI tools make mistakes. They can produce content that sounds authoritative but contains errors, misrepresents sources, or reflects historical biases in the data they were trained on.
The Data and AI Ethics Framework uses the term “human in the loop” to describe the principle that a person should review and take responsibility for AI-assisted outputs before they are acted on. In practice, this means checking every draft the tool produces, verifying any facts or statistics it cites, and not forwarding or publishing AI-generated content without reviewing it first.
You should also be transparent when AI has contributed to your work, if your organisation requires this. Some departments have specific disclosure requirements. Check what applies to you.
If an AI tool helps you draft a briefing that contains a factual error, and that briefing goes to a minister or a member of the public, the error is yours. The tool does not take accountability. You do.
What does your organisation expect from you?
Most public sector organisations are currently developing or updating their AI acceptable use policies. Whether or not your organisation has published one, the following expectations apply.
You are expected to check before using a tool for work purposes, especially if you are using a public service like ChatGPT rather than an employer-licensed product. You are expected to protect personal and sensitive information at all times. You are expected to review and take responsibility for everything you produce with AI assistance. And you are expected to be honest about how you have used AI tools if asked.
The AI Playbook for Government describes responsible AI use as requiring transparency, accountability, and continuous evaluation. These are not abstract principles. They translate directly into how you handle AI tools day to day.
What skills do you need?
You do not need to be a data scientist or a software engineer to use generative AI responsibly. But you do need a small set of practical capabilities.
The most important is critical evaluation. This means reading AI outputs with the same scepticism you would apply to any unverified source. Check the facts. Notice when something sounds vague or implausible. Ask yourself whether the response actually answers your question.
You also need to understand the basics of prompting. The quality of what a tool produces depends heavily on how you ask. Clear, specific instructions tend to produce better results than vague ones. Providing context, specifying format, and asking the tool to focus on a particular aspect of a task all help. For example, a request like “Summarise this consultation response in three bullet points for a non-specialist audience” will usually produce something far more useful than “Summarise this.”
Finally, you need to understand your own organisation’s boundaries. This means knowing your acceptable use policy, understanding data classification well enough to recognise when something should not go into a public tool, and knowing who to ask when you are unsure.
How do you get started safely?
Start with low-stakes tasks that involve no personal or sensitive information. Drafting a summary of a publicly available document, generating agenda ideas for a team meeting, or restructuring something you have already written are all reasonable starting points.
Read your organisation’s acceptable use policy before you go further. If your organisation has licensed Microsoft Copilot, find out whether you have access and what guidance is available for using it.
Talk to colleagues who are already using these tools. Learning from real experience in your own organisation is more useful than reading about AI in general. And if your team is considering using AI for a process that involves personal data or sensitive information, involve your information governance team early.
A practical rule: if you would not be comfortable explaining to your line manager exactly what information you entered and which tool you used, do not enter it.