
Executive Summary
- AI tools like ChatGPT and Microsoft Copilot can boost productivity, but they come with privacy and data protection risks.
- Many businesses don’t realise that entering client or internal data into public AI tools could result in data leaks.
- There are two main threats: internal misuse by staff and external handling by AI providers.
- The blog offers five practical steps to help UK SMEs use AI tools safely at work.
- Get Support and Databasix have teamed up to provide expert, jargon-free guidance on AI security and GDPR compliance.
Introduction
Artificial intelligence tools like ChatGPT and Microsoft Copilot are quickly becoming part of everyday working life. Whether it’s writing emails, analysing documents, or creating quick content drafts, AI is proving to be a huge time-saver for many teams.
But here’s the catch: if used without care, AI tools can put your business at serious risk, especially when it comes to privacy and data protection.
So how do you make the most of AI without accidentally leaking sensitive business or client data?
At Get Support, we’re tackling this head-on, and with a little help from our data protection friends at Databasix, we’re sharing some simple, practical tips to help your business use AI safely.
Why AI Security Matters More Than Ever
There’s a reason “AI security risks” has become one of the most searched terms for businesses recently.
Many AI tools, especially free or consumer versions, rely on cloud-based large language models. And when you (or your team) paste information into these tools, it’s not always clear where that data is going… or how it might be used.
Take ChatGPT, for example. In 2023, employees at a major smartphone manufacturer accidentally leaked confidential source code while using ChatGPT to troubleshoot a problem. Because prompts entered into the consumer version of the tool can be retained and used for training, that data couldn’t simply be recalled, turning a quick fix into a lasting risk.
And it’s not just the obvious stuff. Something as simple as copying a client’s name, a contract summary, or an internal company process into a chatbot could expose more than you think.
Real-World Risks: Internal and External
There are two types of data risks to watch out for with AI: internal misuse and external mishandling.
- Internal risks come from employees using AI tools without understanding the dangers, like pasting sensitive information into ChatGPT, or letting AI surface answers drawn from emails or files whose access permissions are wider than they should be.
- External risks relate to how the AI tool itself stores, processes, or shares that data. Some tools save your prompts. Others might use your data to train their models. The tools may also put individuals at risk of discrimination if bias is present in the data used to train the model.
And with the explosion of workplace AI, from Microsoft 365 Copilot to Google Gemini, it’s easy for things to slip through the cracks.
Five Practical Steps to Keep AI Use Safe at Work
So how can your business stay safe while still embracing the productivity perks of AI?
Here are five steps we recommend, with input from our friends at Databasix.
- Create an AI Acceptable Use Policy: This is your safety net. A clear, written policy helps staff know what’s OK to use AI for, and what’s off-limits. If you don’t already have one, we can help you create one.
- Limit Access (and Monitor Usage): Start with a whitelist of approved tools that have been assessed for privacy and data protection risks. Control the settings in line with your AI policy. Be sure to review user permissions as part of the implementation process, as a file stored in the wrong location now may lead to a data breach in the future. Block public-facing AI tools if needed. And keep an eye on who’s using what, especially if they’re handling personal or financial data.
- Train Your Team: Many data breaches happen because people simply don’t know what’s risky. A short training session can make a big difference. Show examples, set boundaries, and remind staff: if you wouldn’t email it to a stranger, don’t paste it into an AI tool. Best practice is to set up an AI user group to review the effectiveness and quality of the AI tools in use across the business.
- Choose Enterprise-Grade AI Tools: Where possible, use tools designed for business use, like Microsoft Copilot, which come with better data protection and user controls.
- Review Data Sharing Settings: This one’s easily overlooked. Some tools auto-save every prompt unless you opt out. Regularly check the settings in your AI apps to avoid accidental data retention or sharing. Review your user permissions and how sensitive information is stored on a regular basis to reduce the risk of employees surfacing information they shouldn’t through simple queries.
Databasix Tip: “Transparency is key. Make sure staff understand where AI tools are sourcing their answers from, and where their own data might end up.”
With this in mind, take the time to update your Privacy Notices to reflect the use of AI, particularly if it’s involved in decision-making about individuals.
What About GDPR and Legal Obligations?
It’s a common misconception that AI falls into a legal grey area. The truth? GDPR still applies.
If you’re using AI to process personal data, you need to:
- Be clear with individuals (e.g., employees, clients) about what you’re doing with their data
- Be clear about the impact your use of AI will have on any decision-making about the individuals
- Make sure you have a legal basis for processing their data in this way
- Keep personal data secure and limit how long you store it
- Ensure that personal data is accurate
- Consider and comply with individuals’ rights around automated decision-making.
Databasix can help you carry out a Data Protection Impact Assessment (DPIA) for AI projects, and make sure you stay on the right side of UK GDPR.
Final Thoughts
You don’t need to ban AI tools to protect your business, but you do need to be proactive about, and in control of, how they’re used.
At Get Support, we help SMEs roll out AI tools like Copilot safely, with the right security and access controls. And when it comes to keeping things GDPR-compliant, Databasix are right beside us.
Together, we help you use AI responsibly, productively, and securely.
FAQs
Is it safe to use AI at work?
Only if it’s used with care, and ideally with a clear acceptable use policy in place. Never share personal, financial, or confidential data in public AI tools.
What kind of information shouldn’t be shared with AI tools?
Avoid sharing client names, personal data (like email addresses or phone numbers), financial figures, passwords, commercially sensitive information (e.g. company IP), or anything contractually confidential.
Do we need an AI acceptable use policy?
Yes. A simple AI acceptable use policy helps avoid misunderstandings, sets boundaries, and keeps your business legally protected.
Does GDPR apply to AI use?
Absolutely. If personal data is involved, you must follow GDPR rules, including lawful processing, transparency, accuracy and data minimisation.
What if our team is already using AI tools?
Start by reviewing who’s using what, and whether sensitive data has been shared. We can help with an audit, or just a quick chat to get started.
Need help making AI work safely for your business?
Get in touch with Get Support and Databasix today for expert, jargon-free advice.