
Executive summary
- Data Protection Day falls on January 28th, and it serves as a timely prompt for UK businesses to tackle the growing security risks associated with public AI tools.
- The biggest emerging threat to privacy in 2026 is so-called “shadow AI,” where staff inadvertently leak sensitive company data into free AI tools.
- The safest way to innovate without oversharing is to move staff away from public tools and onto enterprise-grade platforms like Microsoft 365 Copilot.
Introduction
January 28th probably isn’t what you’d describe as a red-letter day.
It’s unlikely that anyone has “Data Protection Day” circled on their kitchen calendar. But, in the world of IT and business security, it’s a day that’s more significant than it first appears.
It’s a time of year that serves as a key checkpoint for UK businesses. If nothing else, it’s a reminder that the biggest risk to your company’s confidential data is no longer a hacker breaking in – but may actually be an employee voluntarily pasting trade secrets into a website they found on Google.
So, in the spirit of the season, we’re looking at the “elephant in the room” of modern privacy: AI oversharing.
What is Data Protection Day, anyway?
Before we get into the sci-fi-sounding stuff, let’s do a quick history lesson.
Data Protection Day (known as Data Privacy Day outside of Europe) is held on January 28th every year. It commemorates the signing of the “Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data” (or “Convention 108” if you don’t have all day) by the Council of Europe back in 1981. This was the first legally binding international treaty focused on privacy and data protection.
Think of it as the grandfather of the GDPR.
The goal of the day now is to raise awareness and promote best practices for privacy and data protection. For years, this meant reminding people to shred documents and lock filing cabinets. But, in the age of generative AI, “data protection” has taken on a whole new meaning.
The problem with shadow AI
The reality of the workplace in 2026 is pretty simple: most employees want to use AI.
They know it makes them faster. They know it can write emails, summarise reports, or clean up code in seconds. And if you don’t give them a tool to do it, they’ll go out and find one themselves.
This problem even has a name: shadow AI.
The issue arises when staff use free, consumer-grade versions of tools like ChatGPT, Gemini, or Claude. These tools are fantastic, but they often come with a catch tucked away in the T&Cs: the data you feed them can (and, unless you opt out, often will) be used to train the model.
So, if your Head of Sales pastes your entire Q1 client strategy into the free version of ChatGPT and asks for a summary, there is a non-zero chance that information is being absorbed into the public model. It’s a bit like having a sensitive business conversation in the middle of a crowded pub. You might get away with it, but do you really want to take the risk?
The dangers of oversharing
So, what actually happens when we overshare with these tools?
It’s not necessarily that a human at an AI company sits down and reads your diary. The risk is a bit more nuanced than that:
- Model training. As we mentioned, public data helps the AI get smarter. If enough people paste proprietary code or legal clauses into the public maw, the AI learns from it. There have already been cases where AI models have regurgitated sensitive code snippets to other users because they “learned” them from a previous input.
- Lack of control. Once data leaves your secure network and enters a public web interface, you’ve lost control of it. You don’t know where it’s stored, who sees it, or how long it’s kept. For a business trying to comply with UK GDPR, that’s a straight-up nightmare.
- The “Confused Deputy”. As we discussed in our article on prompt injection, public AI models can be manipulated. If your staff are using unverified tools to process business data, they’re opening the door to security exploits that simply don’t exist in a secure environment.
How to stop the leak
So, whether you’re reading this on January 28th or a few weeks later, here are four practical steps you can take to stop the oversharing and build something of an AI firewall around your organisation’s privacy:
- Create a (clear) AI policy. Before you start blindly blocking AI tools, you need to set the ground rules for your organisation. If you don’t have an Acceptable Use Policy (AUP) that specifically covers generative AI, now is the time to write one. It needs to clearly state which tools are approved and what types of data can (and can’t) be shared with public chatbots.
- Audit your “shadow AI” usage. You can’t fix what you can’t see. Ask your IT team to look at your network traffic. Are 50% of your staff visiting chatgpt.com every day? Are they using obscure PDF summarisers they found on Google? Understanding the scale of the problem is the first step to fixing it – there’s a short log-analysis sketch after this list to get you started.
- Block the risky inputs. Once you have an authorised tool in place (like Microsoft 365 Copilot), you can justify blocking access to the risky ones. You can use your firewall or browser policies to stop staff from reaching unauthorised AI sites in the first place. It might seem strict, but for many organisations it’s standard data protection practice in 2026 – the second sketch after this list shows one way to do it with a browser policy.
- Train your team on the “why”. Most employees aren’t malicious – they’re just trying to be efficient. If you explain why pasting client data into a free chatbot is dangerous, they’ll usually understand. Perhaps use Data Protection Day as a hook to hold a quick 10-minute briefing or send out an email explaining the difference between public AI and private AI.
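To make the audit step concrete, here’s a minimal sketch of the sort of check your IT team could run. It assumes you can export a plain-text log with one requested hostname per line from your firewall or DNS filter – the file name, log format, and domain list here are purely illustrative, so adapt them to whatever your own gateway produces:

```python
# Minimal sketch: count requests to well-known public AI tools in a
# hostname log. Assumes a hypothetical export format with one requested
# hostname per line, in a file called "dns-requests.log".
from collections import Counter
from pathlib import Path

# Illustrative list of public AI domains -- extend it to match your AUP.
AI_DOMAINS = ("chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai")

hits = Counter()
for line in Path("dns-requests.log").read_text().splitlines():
    host = line.strip().lower()
    for domain in AI_DOMAINS:
        # Match the domain itself and any of its subdomains.
        if host == domain or host.endswith("." + domain):
            hits[domain] += 1

for domain, count in hits.most_common():
    print(f"{domain}: {count} requests")
```

Even a rough tally like this tells you whether shadow AI is one person’s habit or an organisation-wide norm.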
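And for the blocking step, here’s one hedged example of enforcing a blocklist at the browser level rather than the firewall. It’s a sketch that assumes a Linux fleet running Google Chrome, which reads managed policies from /etc/opt/chrome/policies/managed/, and it uses Chrome’s built-in URLBlocklist policy; on Windows you’d deliver the same policy via Group Policy or Intune instead:

```python
# Minimal sketch: write a Chrome managed-policy file that blocklists
# public AI chatbot domains. Assumes Linux, where Chrome reads JSON
# policies from /etc/opt/chrome/policies/managed/ (run with root
# privileges). The domain list is illustrative -- mirror your own AUP.
import json
from pathlib import Path

BLOCKED_AI_DOMAINS = [
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
]

# "URLBlocklist" is Chrome's enterprise policy for blocking navigation
# to matching URLs; a bare domain also covers its subdomains.
policy = {"URLBlocklist": BLOCKED_AI_DOMAINS}

policy_dir = Path("/etc/opt/chrome/policies/managed")
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "block-public-ai.json").write_text(json.dumps(policy, indent=2))
print("Wrote", policy_dir / "block-public-ai.json")
```

The same idea works with most managed browsers and DNS filters: keep one approved-tools list, and generate your block rules from it so the policy and the enforcement never drift apart.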
We can help you stay secure
Data privacy doesn’t have to be boring, and it certainly doesn’t mean you have to say ‘no’ to absolutely everything. There’s a middle ground.
By moving over to Microsoft 365 Copilot, you can empower your team with the technology everyone’s talking about, while resting easy knowing your data stays behind Microsoft’s enterprise-grade security.
If you’re worried about what your staff might be sharing, or if you want to get Copilot set up to mitigate the risk, we’re here to help. Speak to your Get Support Customer Success Manager or call our friendly team on 01865 594 000.