
Cardonet IT Support for Business

Cardonet are a consultative business partner who will work closely with you to provide a transparent, vendor-neutral approach to your IT Services.

+44 203 034 2244
7 Stean Street, London, E8 4ED

+1 323 984 8908
750 N. San Vicente Blvd, Los Angeles, CA 90069



The Triple AI Threat: Data Leaks, Hallucinations, and the Trust Crisis

by Sagi Saltoun / Wednesday, 29 October 2025 / Published in IT Consultancy

Research shows 77% of employees share sensitive company data through ChatGPT and similar tools. Most don’t understand what UK data protection regulations require. They see a helpful assistant. They don’t see that every query creates a potential GDPR violation.

This pattern is repeated time and again:

  • Marketing teams upload client proposals for editing suggestions
  • Finance directors paste forecasts into ChatGPT for analysis
  • HR managers ask AI to analyse salary data and identify pay gaps
  • Product developers share proprietary code for debugging help
  • Solicitors feed case details into public AI for research assistance

Each interaction seems harmless.

Collectively, they represent systematic data exfiltration that bypasses every security control you’ve implemented. Each one just created a potential data breach.

In 26 years of running Cardonet, I’ve never seen a technology create so many security risks for UK businesses. Twenty percent of UK companies have already exposed corporate data through public AI tools. That’s one in five businesses where sensitive information entered systems they don’t control, stored on servers they haven’t audited, governed by policies they haven’t verified.

But data leakage is only the first problem.

Then there are AI hallucinations – confidently wrong information – which raise risks of legal liability, compliance failures, and financial losses. They are not only potentially dangerous; they destroy data credibility. Research suggests 76% of enterprises now require human verification before deploying AI outputs because of the risk of false information.

When your team discovers AI has fabricated information and wasted their time, they stop trusting it entirely. The productivity tool becomes an abandoned experiment and the potential productivity gains evaporate.

We need to talk about how UK businesses can capture AI’s benefits without these three catastrophic risks.

Real AI Failures That Cost UK Businesses Money

Want to see what unmanaged AI costs?

Air Canada learned this the expensive way. Their chatbot told a grieving passenger he could claim bereavement fares retroactively. The airline’s own policy said otherwise. Air Canada argued the chatbot was “a separate legal entity responsible for its own actions.”

The court didn’t buy it.

They ruled the airline liable for negligent misrepresentation and ordered them to pay damages. This set a precedent: you’re responsible for what your AI tells customers, even when the AI gets it wrong.

A New York lawyer also faced sanctions after citing six completely fabricated cases generated by ChatGPT in federal court. The judge called it “an unprecedented circumstance.” When the lawyer asked ChatGPT if the cases were real, it confidently assured him they were. They weren’t. They didn’t exist. The reputational damage was immediate. The legal consequences? Severe.

Sports Illustrated published articles under fake author names with AI-generated profile photos. One “author” was available for purchase on a website selling AI-generated headshots, described as “a neutral white young-adult male with short brown hair and blue eyes.”

The credibility damage was immediate and severe.

These aren’t edge cases: rare situations that push a system beyond its normal operating parameters. They’re warnings about what happens when businesses deploy AI without proper governance frameworks.

Can You Capture AI Benefits Without Catastrophic Risks?

Yes. You don’t need to ban AI. I believe that would be counterproductive and ultimately impossible to enforce.

You need to implement it properly with governance frameworks that protect your business while capturing genuine productivity benefits.

The businesses succeeding with AI aren’t using different technology. They’re using different governance approaches. They’ve documented acceptable use policies, specified approved tools, and established mandatory verification procedures before deploying AI across their organisation.

Identifying the Risks

Threat One: Shadow AI Bleeds Your Data

What happens when your team uses public AI tools with company information?

The data leaves your network. It sits on servers you don’t control. It may train models you haven’t approved. It potentially becomes accessible under terms of service that can change without notice.

Why public AI tools create unacceptable risk for business AI policy:

They lack enterprise controls. WhatsApp AI, free ChatGPT accounts, Siri, Google Gemini. These consumer tools weren’t built for corporate data protection. Data you enter may be used to improve models. “No training” promises are difficult to verify and can change with updated terms of service at any time.

The problem compounds when that data contains customer information subject to GDPR, financial records requiring regulatory protection, intellectual property defining your competitive advantage, or strategic plans your competitors would pay to see.

Here’s what it costs

The average data breach costs businesses millions, with financial sector breaches costing even more after accounting for investigation, notification, remediation, regulatory fines, and lost business.

A single employee sharing the wrong information can trigger ICO investigations, resulting in fines running into the tens of thousands, breach notifications that damage client trust, and contractual violations with clients who trusted you with their data.

I’ve helped businesses implement proper IT security services after discovering employees had been using public AI tools with proprietary information for months. The retrospective risk assessment alone costs thousands. The potential regulatory exposure? Far more.

The ICO has been increasingly assertive in 2025. They’ve shifted focus from telemarketing violations to UK GDPR security breaches, with two thirds of recent fines targeting data protection failures rather than marketing compliance.

Threat Two: Hallucinations Destroy Your Credibility

AI doesn’t just make mistakes. It makes things up with complete confidence. Then it delivers those fabrications in perfectly formatted outputs that look authoritative.

This is what makes AI hallucinations so dangerous for businesses.

Studies show AI hallucinations affect business decisions at alarming rates. These aren’t minor typos you can correct. They’re confident falsehoods that make you look stupid at best, dangerously incompetent at worst. They damage reputations, create costly errors, and expose you to legal liability.

What does this look like in your business?

This takes a variety of forms:

  • AI confidently cites a regulation that doesn’t exist, leading your compliance officer to implement incorrect procedures
  • It invents statistics that make their way into a board presentation, damaging credibility when stakeholders fact-check
  • It fabricates customer quotes in a case study, creating reputational exposure when published under your company name
  • It generates financial calculations with plausible but incorrect logic, triggering bad decisions that cost real money

Here’s the reality: 76% of enterprises now require human verification before deploying AI outputs because false information caused legal liability, compliance failures, and financial losses they couldn’t afford.

The efficiency paradox hits hard

Knowledge workers now spend hours each week fact-checking AI outputs. That’s productive time devoted to verifying what AI told them instead of doing actual work. The verification tax eliminates the time saved by using AI in the first place.

Deadlines slip. Budgets overrun. Trust evaporates.

Why do hallucinations happen? Current AI systems generate “the most probable next token,” not “the truth.” When AI systems lack information, they guess. When training data contains errors, they perpetuate them. When asked about topics outside their knowledge, they fabricate rather than admit uncertainty.

This architectural limitation means no amount of training eliminates hallucinations completely.
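The “most probable next token” point can be illustrated with a toy sketch. Everything below is invented for illustration – the vocabulary and probabilities come from no real model – but it shows the core limitation: the generator only ever asks “what usually comes next?”, never “is this true?”.

```python
import random

# Toy next-token table: probabilities stand in for what a model learns
# from text. The entries are illustrative assumptions, not real model data.
# Note there is no "truth" column anywhere -- only likelihood.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.2, "Mars": 0.3},
}

def generate(context, steps):
    """Repeatedly pick the next token by probability alone."""
    tokens = list(context)
    for _ in range(steps):
        probs = next_token_probs.get(tuple(tokens[-2:]))
        if probs is None:
            break  # nothing learned for this context: a real model guesses instead
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(generate(["the", "capital"], 2))
# "Atlantis" or "Mars" can appear here: fluent, confident, and wrong.
```

A real model is vastly larger, but the failure mode is the same: when the probable continuation is false, the output is still delivered with the same fluency as a true one.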

The GDPR and ICO implications are serious

Your cyber security audit should now include AI usage assessment because hallucinations create compliance exposure your traditional security audits don’t capture.

If AI generates incorrect information about data subjects and you publish it? That’s a GDPR violation. If AI fabricates legal justifications for your data processing activities? That’s a compliance failure the ICO will investigate.

The average breach lifecycle in the UK is 210 days. That’s seven months where hallucinated information could be propagating through your systems, damaging your credibility, and creating regulatory exposure.

Threat Three: The Trust Crisis Kills Productivity

Here’s the paradox I see destroying UK business efficiency: AI promises massive productivity gains but unmanaged AI adoption destroys the efficiency you’re chasing.

Why do the benefits evaporate so quickly?

Employees waste time fact-checking every output. When you can’t trust AI to be accurate, you must verify everything it produces. That verification takes longer than doing the work manually would have taken in many cases.

Teams lose trust after discovering fabrications. Once someone has wasted hours on hallucinated information, they may stop using AI entirely. The productivity tool becomes the abandoned application nobody opens.

Workflow chaos emerges from inconsistent use across your organisation. Some team members rely on AI heavily. Others avoid it completely. Nobody knows which outputs are reliable. Which work needs verification. Which decisions were based on accurate information.

Collaboration breaks down because trust has evaporated.

Developer surveys show 84% use or plan to use AI tools, but only 29% trust AI outputs to be accurate. This adoption-trust gap creates massive inefficiency across organisations. People use tools they don’t trust, then spend additional time verifying results they suspect are wrong.

The enterprise numbers tell a stark story

  • 80% of AI projects fail, twice the failure rate of traditional IT initiatives
  • Nearly 70% of enterprises report that 30% or fewer of their generative AI pilots made it to production
  • 58% of organisations propagate AI-generated errors through multiple systems

I see businesses making the same mistake repeatedly. They hear AI drives productivity. They give employees access to tools. Then they wonder why efficiency hasn’t improved and costs haven’t decreased.

The missing element? Governance

Unmanaged AI creates information quality problems that compound over time. One person’s hallucination becomes another person’s input, cascading through workflows and creating systemic unreliability that destroys the productivity gains you implemented AI to achieve.

Are You at Risk? Five Warning Signs

How do you know if your business has a shadow AI problem?

Check for these five signs:

  1. Your team mention using ChatGPT or similar tools for work tasks without IT department approval
  2. You lack a documented AI acceptable use policy specifying which tools are approved and which data types are prohibited
  3. Nobody tracks which AI tools your team are using or what information they’re sharing
  4. Your data loss prevention systems don’t monitor AI tool usage or block sensitive data from entering public AI platforms
  5. You haven’t trained employees on AI risks, GDPR implications, or proper verification procedures

If you recognised three or more of these signs, you have a shadow AI problem creating data protection risks right now.

What does effective AI governance look like in practice?

Document an Acceptable Use Policy

We help businesses create AI acceptable use policies through our IT strategy and advice services. This isn’t a theoretical document. It’s a practical framework your team can actually follow.

Specify which AI tools are approved for business use. Consumer tools like free ChatGPT, WhatsApp AI, and Siri should be prohibited for any information that’s not already public.

Define prohibited data types explicitly in language employees understand:

  • Customer information subject to GDPR
  • Financial data requiring regulatory protection
  • Trade secrets and intellectual property
  • Strategic plans and forecasts
  • Employee records
  • Any contractual confidential information

Make it clear what happens if someone violates the policy. Not to punish people, but to ensure they understand the seriousness of data protection compliance.
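A policy like this can be backed by an automated pre-submission check. The sketch below is a minimal, hypothetical example of the idea: the pattern list and category labels are assumptions for illustration, and a production data loss prevention rule set would be far broader and tuned to your own data.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not a standard.
PROHIBITED_PATTERNS = {
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
    ),
    "Email address (possible customer data)": re.compile(
        r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"
    ),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text):
    """Return the policy categories a prompt appears to violate."""
    return [label for label, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

violations = check_prompt("Analyse pay for jane.doe@example.com, NI AB 12 34 56 C")
if violations:
    print("Blocked before reaching the AI tool:", violations)
```

Pattern matching alone will never catch everything – trade secrets and strategic plans have no regex – which is why the written policy and training still carry most of the weight.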

Understand Enterprise vs Public AI Tools

This distinction matters more than most businesses realise.

Enterprise systems offer data residency controls that keep information in a specific region, contractual liability provisions that make the vendor responsible for breaches, audit logs that track who accessed what information, and contractual commitments not to train models on your data.

Public consumer tools offer none of these protections.

When we implement AI governance frameworks for clients, we make sure they understand this difference. The cost difference between consumer and enterprise AI tools is minimal compared to the risk reduction you gain.

Establish Mandatory Verification Procedures

Define which outputs require human review. Who performs that review. How verification is documented. What happens when errors are discovered.

This human-in-the-loop approach dramatically reduces error rates while maintaining efficiency benefits. Seventy-six percent of enterprises now require human verification before deploying AI outputs. This isn’t optional anymore. It’s essential for maintaining credibility and avoiding costly mistakes.

We make sure verification procedures are proportionate. High-risk outputs like legal advice, financial calculations, or customer-facing communications get thorough review. Low-risk outputs like draft emails get lighter verification.
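The proportionate-verification idea can be expressed as a simple routing rule. This is a minimal sketch under assumed categories – the risk tiers and output types below are illustrative, and each business would define its own high-risk list.

```python
from enum import Enum

class Review(Enum):
    THOROUGH = "thorough human review with documented sign-off"
    LIGHT = "quick sanity check by the author"

# Illustrative high-risk categories -- an assumption for this sketch,
# mirroring the examples above (legal, financial, customer-facing).
HIGH_RISK = {"legal advice", "financial calculation", "customer communication"}

def required_review(output_type: str) -> Review:
    """Route an AI output to a verification tier based on its risk."""
    return Review.THOROUGH if output_type in HIGH_RISK else Review.LIGHT

print(required_review("financial calculation"))  # Review.THOROUGH
print(required_review("draft internal email"))   # Review.LIGHT
```

Encoding the tiers this way keeps the verification burden where the risk is, instead of applying the same heavyweight review to every draft email.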

Train Your Team on AI Risks

People can’t follow policies they don’t understand.

Your team needs practical training on why shadow AI creates data breaches, how hallucinations happen, what verification procedures apply to their specific role, and which tools are approved for business use.

We deliver this training in plain language without technical jargon. Your team needs to understand the GDPR implications of sharing customer data with public AI tools. They need to recognise hallucinations when they see them. They need to know what to do when AI generates suspicious outputs.

Training isn’t a one-time event. AI technology changes rapidly. Threats evolve. Your training needs to evolve with them.

Integrate AI Governance with Your Security Framework

Your AI strategy should integrate with existing security frameworks, not exist as a separate initiative managed by different people with different priorities.

Data protection controls, access governance, audit capabilities, and incident response procedures all need to account for AI usage across your organisation.

The UK’s cybersecurity landscape is challenging enough without adding new vulnerabilities. Over 40% of UK businesses experienced cyber breaches in 2024. Shadow AI adds another attack surface to an already difficult threat environment.

We help businesses like yours integrate AI governance into their existing security programmes. This ensures consistency, reduces complexity, and improves compliance across all your technology initiatives.

What Happens Next 

AI offers genuine productivity benefits when implemented with proper governance frameworks.

The businesses capturing those benefits are the ones who addressed security, accuracy, and trust systematically. Not as afterthoughts. Not after discovering problems. Before deploying AI across their organisation.

You can’t afford to ignore AI. Your competitors are using it. Your team are already experimenting with it. The question isn’t whether to adopt AI.

The question is how to do it safely while protecting your data, your credibility, and your efficiency.

We’ve helped businesses build AI governance frameworks that actually work in practice. The common thread across successful implementations: treating AI as a risk management challenge requiring proper governance, not just a technology opportunity promising productivity gains.

Here’s what I recommend you do this week

Contact us for a security audit that includes AI usage assessment. We’ll identify which tools your team are using, what data they’re sharing, and what risks you’re facing right now.

Or let’s discuss building an AI acceptable use policy that protects your business while enabling the productivity gains AI promises. We make sure policies are practical, enforceable, and integrated with your existing security frameworks.

The triple threat of data leakage, hallucinations, and trust collapse isn’t inevitable.

It’s preventable with the right approach.

About Sagi Saltoun