AI in Practice

AI Use Checker

This assessment helps you decide whether AI is appropriate for a specific task. It covers enterprise secured tools (Tier 2) and publicly available tools (Tier 3). If you're considering a bespoke AI system (Tier 1), refer instead to the specific guidance provided for that tool.

1. Which tier of AI tool are you considering using?

2. Does this task involve sensitive, confidential, personal, or otherwise restricted information?

3. Can you verify whether the AI output is correct?

4. Does this task require professional or specialist knowledge to get right?

5. Could an error in this task significantly affect people, policy, or public trust?

How the checker works

The checker asks you five questions about your task. It then works through a fixed sequence of conditions and stops at the first one that matches your answers. Every result, whether green, amber, or red, comes from that sequence. Nothing is random, and the same answers will always produce the same result.

Why it works in a fixed order

Some risks matter more than others, so the checker addresses them first. Data safety comes before everything else. If you are considering putting sensitive, confidential, or personal information into a publicly available tool, that is a hard stop regardless of how straightforward the task might be. No other factor outweighs it.

Verification comes next. If you cannot check whether the output is correct, using AI for that task creates a risk you cannot manage. The checker stops there too.

Only once those two conditions are clear does the checker consider the broader picture — whether the task requires specialist knowledge, how serious the consequences of an error might be, and whether any uncertainty remains in your answers.
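The fixed-order, first-match logic described above can be sketched as a short function. This is an illustration only, not the checker's actual code: the function name, answer values, and result messages are assumptions made for the example.

```python
# Illustrative sketch of a fixed-order, first-match rule sequence.
# NOT the checker's real implementation: names, answer values ("yes",
# "no", "unsure") and messages are assumptions for this example.

def check(tier, sensitive, verifiable, specialist, high_impact):
    """Return (result, reason) for the first condition that matches.

    tier is 1, 2, or 3; the other answers are "yes", "no", or "unsure".
    """
    # 1. Data safety first: sensitive data in a public tool is a hard stop,
    #    regardless of any other answer.
    if tier == 3 and sensitive == "yes":
        return ("red", "Do not put restricted information into a public tool.")

    # 2. Verification next: output you cannot check is a risk you cannot manage.
    if verifiable == "no":
        return ("red", "The output cannot be verified, so the risk cannot be managed.")

    # 3. Only then the broader picture: policy checks, specialist input,
    #    consequences of error.
    if tier == 2 and sensitive == "yes":
        return ("amber", "Check your organisation's policy before proceeding.")
    if specialist == "yes" or high_impact == "yes":
        return ("amber", "Involve a specialist and review the output carefully.")

    # 4. Any remaining uncertainty in the answers also pauses the task.
    if "unsure" in (sensitive, verifiable, specialist, high_impact):
        return ("amber", "One or more answers was uncertain; pause and confirm.")

    # 5. Nothing matched: the task appears low-risk, but the output
    #    still needs checking.
    return ("green", "Appears low-risk. You must still check the output.")
```

Because the conditions are evaluated in a fixed order and the function returns at the first match, the same answers always produce the same result, which is what makes the checker deterministic.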

What the results mean

A green result means the task appears low-risk based on your answers. It is not a guarantee that the output will be correct. AI tools make mistakes. You are still responsible for checking what the tool produces before you use it.

An amber result means the checker has identified something worth pausing on. It might be that you are using an enterprise tool with sensitive data and need to check your organisation's policy first. It might be that the task is complex enough to need a specialist involved. Or it might simply be that one or more of your answers was uncertain. The message tells you which concern applies and what to consider next.

A red result means the checker has identified a condition that makes AI use inappropriate for this task as described. This is not a judgement about AI in general. It reflects a specific combination of answers — either sensitive data going into an unsuitable tool, or output that cannot be verified — where the risk of harm outweighs the benefit.

What it does not do

The checker is a prompt for good judgement, not a substitute for it. It cannot account for every organisational policy, every data classification edge case, or every context that might make a task more or less sensitive than it appears. If your result is amber and you are still unsure, the right step is to ask your line manager or information governance team before proceeding.

The checker also does not log or store your answers; nothing you enter is retained.