Ollinton helps public sector organisations use generative AI responsibly. This policy sets out how we use AI in our own work. Public sector clients are accountable for the tools and advice they rely on. We should be too.
What AI tools we use
Ollinton currently uses two AI tools.
Claude by Anthropic, for research, drafting, editing, and analysis.
Midjourney, for generating images used in Ollinton's content and materials.
We do not use AI tools that have not been reviewed for data handling and security. We do not experiment with untested tools in client work.
How we use AI
Research and analysis
We use Claude to help research topics, summarise documents, and identify relevant policy developments. A human verifies all research against primary sources before we rely on it.
Drafting and editing
We use Claude to draft and refine written content. Every piece of content published under the Ollinton name is reviewed, edited, and approved by a human. We do not publish AI-generated content without review.
Images
We use Midjourney to create images for our website and materials. We do not use AI to create images of real people or to misrepresent real situations.
What we do not do
These are firm boundaries, not just defaults.
We do not input confidential client data into AI tools without explicit agreement.
We do not use AI to make decisions on behalf of clients.
We do not publish AI-generated content without human review and approval.
We do not use AI to create misleading or fabricated information.
We do not use AI tools that process data in ways that conflict with UK data protection law.
Human oversight
AI supports our work. It does not replace human judgement.
Terri Hart reviews all outputs before they are shared with clients or published. She checks for accuracy, tone, and alignment with Ollinton's standards. Where AI has contributed to a piece of work, she takes responsibility for the final output.
We treat AI as a capable tool with real limitations. It can make mistakes. It can miss context. It can generate plausible-sounding information that is wrong. We check.
Data and privacy
We do not input personal data or confidential client information into AI tools unless we have a clear legal basis and explicit agreement from the client.
When we use Claude, conversations are processed by Anthropic. We have reviewed and actively manage our privacy settings to limit data exposure.
We have disabled the 'Help improve Claude' setting, so our chats are not used to train Anthropic's models.
We have disabled location metadata sharing.
Where a conversation involves sensitive topics or information, we use Claude's Incognito mode. Incognito chats are not used for model improvement.
One exception applies regardless of settings: Anthropic may use conversations that are flagged for safety review. This is outside our control and is standard across all Claude accounts. We manage this by not inputting confidential information into Claude sessions.
Transparency with clients
We tell clients when AI has played a significant role in work we deliver.
If a client asks how a piece of work was produced, we answer honestly. If they have concerns about AI use, we discuss them. If they prefer we do not use AI in their engagement, we respect that.
Reviewing this policy
This policy reflects how Ollinton works now. AI tools, and how we use them, will change. We will update this policy when our approach changes in a material way.
Get in touch
Questions about this policy? Email info@ollinton.com.